Author manuscript; available in PMC: 2023 Dec 15.
Published in final edited form as: J Mach Learn Res. 2022 Jan-Dec;23:108.

Bayesian subset selection and variable importance for interpretable prediction and classification

Daniel R Kowal 1
PMCID: PMC10723825  NIHMSID: NIHMS1905654  PMID: 38105917

Abstract

Subset selection is a valuable tool for interpretable learning, scientific discovery, and data compression. However, classical subset selection is often avoided due to selection instability, lack of regularization, and difficulties with post-selection inference. We address these challenges from a Bayesian perspective. Given any Bayesian predictive model ℳ, we extract a family of near-optimal subsets of variables for linear prediction or classification. This strategy deemphasizes the role of a single “best” subset and instead advances the broader perspective that often many subsets are highly competitive. The acceptable family of subsets offers a new pathway for model interpretation and is neatly summarized by key members such as the smallest acceptable subset, along with new (co-) variable importance metrics based on whether variables (co-) appear in all, some, or no acceptable subsets. More broadly, we apply Bayesian decision analysis to derive the optimal linear coefficients for any subset of variables. These coefficients inherit both regularization and predictive uncertainty quantification via ℳ. For both simulated and real data, the proposed approach exhibits better prediction, interval estimation, and variable selection than competing Bayesian and frequentist selection methods. These tools are applied to a large education dataset with highly correlated covariates. Our analysis provides unique insights into the combination of environmental, socioeconomic, and demographic factors that predict educational outcomes, and identifies over 200 distinct subsets of variables that offer near-optimal out-of-sample predictive accuracy.

Keywords: education, linear regression, logistic regression, model selection, penalized regression

1. Introduction

Subset and variable selection are essential components of regression analysis, prediction, and classification. By identifying subsets of important covariates, the analyst can acquire simpler and more interpretable summaries of the data, improved prediction or classification, reduced estimation variability among the selected covariates, lower storage requirements, and insights into the factors that determine predictive accuracy (Miller, 1984). Classical subset selection is expressed as the solution to the constrained least squares problem

$$\min_{\beta} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k \qquad (1)$$

where y is an n-dimensional response, X is an n×p matrix of covariates, and β is the p-dimensional vector of unknown coefficients. Traditionally, the goal is to determine (i) the best subset of each size k, (ii) an estimate of the accompanying nonzero linear coefficients, and (iii) the best subset of any size. Subset selection has garnered additional attention recently, in part due to the algorithmic advancements from Bertsimas et al. (2016) and the detailed comparisons of Hastie et al. (2020). More broadly, variable selection has been deployed as a tool for interpretable machine learning, including for highly complex and nonlinear models (e.g., Ribeiro et al., 2016; Afrabandpey et al., 2020).

Although often considered the “gold standard” of variable selection, subset selection remains underutilized due to several critical limitations. First, subset selection is inherently unstable: it is common to obtain entirely distinct subsets under perturbations or resampling of the data. This instability undermines the interpretability of a single “best” subset, and is a significant motivating factor for the proposed methods. Second, the solutions to (1) are unregularized. While it is advantageous to avoid overshrinkage, Hastie et al. (2020) showed that the lack of any regularization in (1) leads to deteriorating performance relative to penalized regression in low-signal settings. Third, inference about β requires careful adjustment for selection bias, which limits the available options for uncertainty quantification. Finally, solving (1) is computationally demanding even for moderate p, which has spawned many algorithmic advancements spanning multiple decades (e.g., Furnival and Wilson, 2000; Gatu and Kontoghiorghes, 2006; Bertsimas et al., 2016). For these reasons, penalized regression techniques that replace the ℓ0-penalty with convex or nonconvex yet computationally feasible alternatives (Fan and Lv, 2010) are often preferred.

From a Bayesian perspective, (1) is usually translated into a predictive model via a Gaussian (log-) likelihood and a sparsity (log-) prior. Indeed, substantial research efforts have been devoted to both sparsity (e.g., Ishwaran and Rao, 2005) and shrinkage (e.g., Polson and Scott, 2010) priors for β. Yet the prior alone cannot select subsets: the prior is the component of the data-generating process that incorporates prior beliefs, information, or regularization, while selection is ultimately a decision problem (Lindley, 1968). For instance, Barbieri and Berger (2004) and Liang et al. (2013) applied sparsity priors to obtain posterior inclusion probabilities, which were then used for marginal selection and screening, respectively. Jin and Goh (2020) selected subsets using marginal likelihoods, but required conjugate priors for Gaussian linear models. Hahn and Carvalho (2015) more fully embraced the decision analysis approach for variable selection, and augmented a squared error loss with an ℓ1-penalty for linear variable selection. Alternative loss functions have been proposed for seemingly unrelated regressions (Puelz et al., 2017), graphical models (Bashir et al., 2019), nonlinear regressions (Woody et al., 2020), functional regression (Kowal and Bourgeois, 2020), time-varying parameter models (Huber et al., 2020), and a variety of Kullback-Leibler approximations to the likelihood (Goutis and Robert, 1998; Nott and Leng, 2010; Tran et al., 2012; Piironen et al., 2020).

Despite the appropriate deployment of decision analysis for selection, each of these Bayesian methods relies on ℓ1-penalization or forward search. As such, they are restricted to limited search paths that cannot fully solve the (exhaustive) subset selection problem. More critically, these Bayesian approaches—as well as most classical ones—are unified in their emphasis on selecting a single “best” subset. However, in practice it is common for many subsets (or models) to achieve near-optimal predictive performance, known as the Rashomon effect (Breiman, 2001). This effect is particularly pronounced for correlated covariates, weak signals, or small sample sizes. Under these conditions, it is empirically and theoretically possible for the “true” covariates to be predictively inferior to a proper subset (Wu et al., 2007). As a result, the “best” subset is not only less valuable but also less interpretable. Reporting a single subset—or a small number of subsets along a highly restricted search path—obscures the likely presence of many distinct yet equally-predictive subsets of variables.

We advance an agenda that instead curates and summarizes a family of optimal or near-optimal subsets. This broader analysis alleviates the instability issues of a single “best” subset and provides a more complete predictive picture. The proposed approach operates within a decision analysis framework and is compatible with any Bayesian model ℳ for prediction or classification. Naturally, ℳ should represent the modeler’s beliefs about the data-generating process and describe the salient features in the data. Several key developments are required:

  1. We derive optimal (in a decision analysis sense) linear coefficients for any subset of variables.

    Crucially, these coefficients inherit regularization and uncertainty quantification via ℳ, but avoid the overshrinkage induced by ℓ1-penalization. As such, these point estimators resolve multiple limitations of classical subset selection and (ℓ1-penalized) Bayesian decision-analytic variable selection, and further are adapted to both regression and classification problems. Next,

  2. We design a modified branch-and-bound algorithm for efficient exploration over the space of candidate subsets.

    The search process is a vital component of subset selection, and our modular framework is broadly compatible with other state-of-the-art search algorithms (e.g., Bertsimas et al., 2016). Until now, these algorithms have not been fully deployed or adapted for Bayesian subset selection. Additionally,

  3. We leverage the predictive distribution under ℳ to collect the acceptable family of near-optimal subsets.

    A core feature of the acceptable family is that it is defined using out-of-sample metrics and predictive uncertainty quantification, yet is computed using in-sample posterior functionals from a single model fit of ℳ. Hence, we maintain computational scalability and coherent uncertainty quantification that avoids data reuse. Lastly,

  4. We summarize the acceptable family with key member subsets, such as the “best” (in terms of cross-validation error) predictor and the smallest acceptable subset, along with new (co-) variable importance metrics that measure the frequency with which variables (co-) appear in all, some, or no acceptable subsets.

    Unlike variable importances based on effect size, this inclusion-based metric effectively measures how many “predictively plausible” explanations (i.e., near-optimal subsets) contain each (pair of) covariate(s) as a member. Notably, each of these developments is presented for both prediction and classification.

The importance of curating and exploring a collection of subsets has been acknowledged previously. Existing approaches are predominantly frequentist, including fence methods (Jiang et al., 2008), Rashomon sets (Semenova and Rudin, 2019), bootstrapped confidence sets (Lei, 2019), and subsampling-based forward selection (Kissel and Mentch, 2021). Although the acceptable family has appeared previously for Bayesian decision analysis (Kowal, 2021; Kowal et al., 2021), it was applied only along the ℓ1-path, which does not enumerate a sufficiently rich collection of competitive subsets. Further, previous applications of the acceptable family did not address points 1., 2., and 4. above.

The paper is outlined as follows. Section 2 contains the Bayesian subset search procedures, the construction of acceptable families, the (co-) variable importance metrics, and the predictive uncertainty quantification. Section 3 details the simulation study. The methodology is applied to a large education dataset in Section 4. We conclude in Section 5. The Appendix provides additional algorithmic details, simulation studies, and results from the application. An R package is available online at https://github.com/drkowal/BayesSubsets. Although the education dataset (Children’s Environmental Health Initiative, 2020) cannot be released due to privacy protections, access to the dataset can occur through establishing affiliation with the Children’s Environmental Health Initiative (contact cehi@nd.edu).

2. Methods

2.1. Predictive decision analysis

Decision analysis establishes the framework for extracting actions, estimators, and predictions from a Bayesian model (e.g., Bernardo and Smith, 2009). These tools translate probabilistic models into practical decision-making and can be deployed to summarize or interpret complex models. However, additional methodology is needed to convert subset selection into a decision problem, and further to evaluate and collect many near-optimal subsets.

Let ℳ denote any Bayesian model with a proper posterior distribution p(θ ∣ y) and posterior predictive distribution

$$p(\tilde{y} \mid y) \equiv \int p(\tilde{y} \mid \theta, y)\, p(\theta \mid y)\, d\theta,$$

where θ denotes the parameters of the model ℳ and y is the observed data. Informally, p(y˜ ∣ y) defines the distribution of future or unobserved data y˜ conditional on the observed data y and according to the model ℳ. Decision analysis evaluates each action δ based on a loss function ℓ(y˜, δ) that enumerates the cost of each action when y˜ is realized. Examples include point prediction (e.g., squared error loss) or classification (e.g., cross-entropy loss), interval estimation (e.g., minimum length subject to 1−α coverage), and selection among a set of hypotheses (e.g., 0–1 loss). Since y˜ is unknown yet modeled probabilistically under ℳ, an optimal action minimizes the posterior predictive expected loss

$$\hat{\delta} \in \arg\min_{\delta}\; E_{\tilde{y}\mid y}\, \ell(\tilde{y}, \delta) \qquad (2)$$

with the expectation taken under the Bayesian model ℳ. The operation in (2) averages the predictive loss over the possible realizations of y˜ according to the posterior probability under ℳ and then minimizes the resulting quantity over the action space.

Yet without careful specification of the loss function ℓ, (2) does not provide a clear pathway for subset selection. To see this, let y˜(x) denote the predictive variable at covariate x. For point prediction of y˜(x) under squared error loss, the optimal action is

$$\hat{\delta}(x) \in \arg\min_{\delta}\; E_{\tilde{y}\mid y}\, \|\tilde{y}(x) - \delta(x)\|_2^2 = E_{\tilde{y}\mid y}\{\tilde{y}(x)\},$$

i.e., the posterior predictive expectation at x. Similarly, for classification of y˜(x) ∈ {0, 1} under cross-entropy loss (see (10)), the optimal action is the posterior predictive probability δˆ(x) = p{y˜(x) = 1 ∣ y}. For a generic model ℳ, there is not necessarily a closed form for δˆ(x): these actions are computed separately for each x with no clear mechanism for inducing sparsity or specifying distinct subsets. Hence, additional techniques are needed to supply actions that are not only optimal but also selective and interpretable.

Note that we use observation-driven rather than parameter-driven loss functions. Unlike the parameters θ, which are unobservable and model-specific, the predictive variables y˜ are observable and directly comparable across distinct Bayesian models. A decision analysis based on y˜ operates on the same scale and in the same units as the data that have been—and will be—observed, which improves interpretability. Perhaps most important, an observation-driven decision analysis enables empirical evaluation of the selected actions.

2.2. Subset search for linear prediction

The subset search procedure is built within the decision framework of (2). For any Bayesian model ℳ, we consider linear actions δ(x) = x′δ with δ ∈ ℝp, which offer both interpretability and the capacity for selection. Let δ𝒮 denote the linear action with zero coefficients for all j ∉ 𝒮, where 𝒮 ⊆ {1,…,p} is a subset of active variables. For prediction, we assemble the aggregate and weighted squared error loss

$$\ell\big(\{\tilde{y}_i\}_{i=1}^{\tilde{n}}, \delta_{\mathcal{S}}\big) = \sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\, \big\|\tilde{y}_i - \tilde{x}_i'\delta_{\mathcal{S}}\big\|_2^2 \qquad (3)$$

where y˜i ≡ y˜(x˜i) is the predictive variable at x˜i for each i = 1,…,n˜ and ω(x˜i) > 0 is a weighting function. The covariate values {x˜i}i=1n˜ can be distinct from the observed covariate values {xi}i=1n, for example to evaluate the action for a different population.

The loss in (3) evaluates linear coefficients δ𝒮 for any given subset 𝒮 by accumulating the squared error loss over the covariate values {x˜i}i=1n˜. Since (3) depends on {y˜i}i=1n˜, the loss inherits a joint predictive distribution p(y˜1,…,y˜n˜ ∣ y) from ℳ. The loss is decoupled from the Bayesian model ℳ: the linear action does not require a linearity assumption for ℳ. The weights ω(x˜i) can be used to target actions locally, which provides a sparse and local linear approximation to ℳ. For example, we might parametrize the weighting function as ω(x˜i) ∝ exp(−‖x˜i − x*‖²/ℓ) with range parameter ℓ > 0 in order to weight based on proximity to some particular x* of interest (Ribeiro et al., 2016). Alternatively, ω can be specified via a probability model for the likelihood of observing each covariate value, including ω(x˜i) = n˜−1 as a simple yet useful example, especially when using the observed covariate values {x˜i}i=1n˜ = {xi}i=1n; this is our default choice.

Crucially, the optimal action can be solved directly for any subset 𝒮:

Lemma 1 Suppose $E_{\tilde{y}\mid y}\|\tilde{y}(\tilde{x}_i)\|_2^2 < \infty$ for $i = 1,\ldots,\tilde{n}$. The optimal action (2) for the loss (3) is given by the nonzero entries

$$\hat{\delta}_{\mathcal{S}} = \arg\min_{\delta_{\mathcal{S}}} \sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\, \big\|\hat{y}_i - \tilde{x}_i'\delta_{\mathcal{S}}\big\|_2^2 \qquad (4)$$
$$= \big(\tilde{X}_{\mathcal{S}}'\Omega\tilde{X}_{\mathcal{S}}\big)^{-1}\tilde{X}_{\mathcal{S}}'\Omega\hat{y}, \qquad (5)$$

with zeros for indices $j \notin \mathcal{S}$, where $\hat{y}_i \equiv E_{\tilde{y}\mid y}\{\tilde{y}(\tilde{x}_i)\}$, $\hat{y} \equiv (\hat{y}_1,\ldots,\hat{y}_{\tilde{n}})'$, $\Omega \equiv \mathrm{diag}\{\omega(\tilde{x}_i)\}_{i=1}^{\tilde{n}}$, and $\tilde{X}_{\mathcal{S}}$ is the $\tilde{n}\times|\mathcal{S}|$ matrix of the active covariates in $\{\tilde{x}_i\}_{i=1}^{\tilde{n}}$ for subset $\mathcal{S}$.

Proof It is sufficient to observe that $E_{\tilde{y}\mid y}\|\tilde{y}(\tilde{x}_i) - \tilde{x}_i'\delta_{\mathcal{S}}\|_2^2 = E_{\tilde{y}\mid y}\|\{\tilde{y}(\tilde{x}_i) - \hat{y}_i\} + (\hat{y}_i - \tilde{x}_i'\delta_{\mathcal{S}})\|_2^2 = E_{\tilde{y}\mid y}\|\tilde{y}(\tilde{x}_i) - \hat{y}_i\|_2^2 + \|\hat{y}_i - \tilde{x}_i'\delta_{\mathcal{S}}\|_2^2$, where the first term is a (finite) constant that does not depend on $\delta_{\mathcal{S}}$. The remaining steps constitute a weighted least squares solution. ■

The consequence of Lemma 1 is that the optimal action for each subset 𝒮 is simply the (weighted) least squares solution based on pseudo-data (X˜𝒮, yˆ)—i.e., a “fit to the fit” from ℳ. The advantages of ℳ can be substantial: the Bayesian model ℳ propagates regularization (e.g., shrinkage, sparsity, or smoothness) to the point predictions yˆ, which typically offers sizable improvements in estimation and prediction relative to ordinary (weighted) least squares. This effect is especially pronounced in the presence of high-dimensional (large p) or correlated covariates. The optimal action may be non-unique if X˜𝒮′ΩX˜𝒮 is noninvertible, in which case the inverse in (5) can be replaced by a generalized inverse.
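For concreteness, the following R sketch illustrates the “fit to the fit” in Lemma 1. It is a minimal illustration rather than the BayesSubsets implementation, and the input names (Xtilde, yhat, w, S) are ours.

```r
# Minimal sketch of Lemma 1 (not the BayesSubsets implementation).
# Assumed inputs (illustrative names):
#   Xtilde : n_tilde x p matrix of target covariates (possibly the observed X)
#   yhat   : posterior predictive expectations E(y_tilde | y) at each row of Xtilde
#   w      : positive weights omega(x_tilde_i)
#   S      : integer indices of the active covariates in the subset
optimal_action <- function(Xtilde, yhat, w, S) {
  delta <- numeric(ncol(Xtilde))                    # zeros for excluded covariates
  fit <- lm.wfit(x = Xtilde[, S, drop = FALSE], y = yhat, w = w)
  delta[S] <- coef(fit)                             # weighted least squares solution (5)
  delta
}

# Example: optimal coefficients for one subset, using the observed covariates
# delta_S <- optimal_action(X, yhat = colMeans(ypred_draws),
#                           w = rep(1/nrow(X), nrow(X)), S = c(1, 3, 7))
```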

At this stage, the Bayesian model ℳ is only needed to supply the pseudo-response variable yˆi; different choices of ℳ will result in distinct values of yˆi and therefore distinct actions δˆ𝒮. An illuminating special case occurs for linear regression:

Corollary 2 For the linear regression model ℳ with $E_{\tilde{y}\mid\theta}\{\tilde{y}(x)\} = x'\beta$ and any set of covariate values $\{\tilde{x}_i\}_{i=1}^{\tilde{n}}$ and weights $\omega(\tilde{x}_i) > 0$, the optimal action (2) under (3) for the full set of covariates is $\hat{\delta}_{\{1,\ldots,p\}} = \hat{\beta}$, where $\hat{\beta} \equiv E_{\theta\mid y}\beta$.

Depending on the choice of $\{\tilde{x}_i\}_{i=1}^{\tilde{n}}$ and $\omega$, $\hat{\delta}_{\{1,\ldots,p\}}$ may be non-unique. Corollary 2 links the optimal action to the model parameters: the posterior expectation βˆ is also the optimal action under the parameter-driven squared error loss $\ell(\beta, \delta) = \|\beta - \delta\|_2^2$. Similarly, a linear model for ℳ implies that $\hat{y}_i = \tilde{x}_i'\hat{\beta}$, so the optimal action (5) for any subset 𝒮 is intrinsically connected to the (regression) model parameters. This persists for other regression models as well. By contrast, these restrictions also illustrate the generality of (3)-(5): the optimal linear actions are derived explicitly under any model ℳ (with $E_{\tilde{y}\mid y}\|\tilde{y}(\tilde{x}_i)\|_2^2 < \infty$) and using any set of covariate values $\{\tilde{x}_i\}_{i=1}^{\tilde{n}}$, active covariates $\mathcal{S} \subseteq \{1,\ldots,p\}$, and weighting functions $\omega(\tilde{x}) > 0$.

The critical remaining challenge is optimization—or at least evaluation and comparison—among the possible subsets 𝒮. Our strategy emerges from the observation that there may be many subsets that achieve near-optimal predictive performance, often referred to as the Rashomon effect (Breiman, 2001). The goal is to collect, characterize, and compare these near-optimal subsets of linear predictors. Hence, there are two core tasks: (i) identify candidate subsets and (ii) filter to include only those subsets that achieve near-optimal predictive performance. These tasks must overcome both computational and methodological challenges—similar to classical (non-Bayesian) subset selection—which we resolve in the subsequent sections.

An exhaustive enumeration of all possible subsets presents an enormous computational burden, even for moderate p. Although tempting, it is misguided to consider direct optimization over all possible subsets of {1,…,p},

$$\hat{\delta}_{\hat{\mathcal{S}}} \in \arg\min_{\mathcal{S},\, \delta_{\mathcal{S}}} E_{\tilde{y}\mid y}\, \ell\big(\{\tilde{y}_i\}_{i=1}^{\tilde{n}}, \delta_{\mathcal{S}}\big), \qquad (6)$$

for the aggregate squared error loss (3). To see this—and find suitable alternatives—consider the following result:

Lemma 3 Let $\mathrm{RSS}(y, \mu) \equiv \|y - \mu\|_2^2$ and $\hat{y} \equiv E_{\tilde{y}\mid y}\tilde{y}$, and suppose $E_{\tilde{y}\mid y}\|\tilde{y}\|_2^2 < \infty$. For any point predictors $\mu_1$ and $\mu_2$, we have

$$E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \mu_1)\} \le E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \mu_2)\} \iff \mathrm{RSS}(\hat{y}, \mu_1) \le \mathrm{RSS}(\hat{y}, \mu_2). \qquad (7)$$

Proof Since $E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \mu)\} - \mathrm{RSS}(\hat{y}, \mu) = E_{\tilde{y}\mid y}\|\tilde{y}\|_2^2 - \|\hat{y}\|_2^2$ is finite and does not depend on $\mu$, the ordering of $\mu_1$ and $\mu_2$ will be identical whether using $E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \mu)\}$ or $\mathrm{RSS}(\hat{y}, \mu)$. ■

Notably, Ey˜y{RSS(y˜,μ)} is the key constituent in optimizing the predictive squared error loss (3), while RSS(yˆ,μ) is simply the usual residual sum-of-squares (RSS) with yˆ replacing y.

Now recall the optimization of (6). For any subset 𝒮, the optimal action is the least squares solution (5) with pseudo-data yˆ. However, RSS in linear regression is ordered by nested subsets: $\mathrm{RSS}(\hat{y}, \tilde{X}\hat{\delta}_{\mathcal{S}_1}) \le \mathrm{RSS}(\hat{y}, \tilde{X}\hat{\delta}_{\mathcal{S}_2})$ whenever $\mathcal{S}_2 \subseteq \mathcal{S}_1$. By Lemma 3, it follows that the solution of (6) is simply

$$\hat{\delta}_{\hat{\mathcal{S}}} = \big(\tilde{X}'\Omega\tilde{X}\big)^{-1}\tilde{X}'\Omega\hat{y}, \qquad \hat{\mathcal{S}} = \{1,\ldots,p\}$$

for $\tilde{X} \equiv (\tilde{x}_1,\ldots,\tilde{x}_{\tilde{n}})'$. As with (5), a generalized inverse can be substituted if necessary. The main consequence is that the optimal actions in (4) and (6) alone cannot select variables or subsets: (4) provides the optimal action for a given subset 𝒮, while (6) trivially returns the full set of covariates. Despite the posterior predictive expectation in (6), this optimality is only valid in-sample and is unlikely to persist for out-of-sample prediction. Hence, this optimality is unsatisfying.

Yet Lemma 3 provides a path forward. Rather than fixing a subset 𝒮 in (4) or optimizing over all subsets in (6), suppose we compare among all subsets up to size k ≤ p. Equivalently, this constraint can be represented as an ℓ0-penalty augmentation to the loss function (3), i.e.,

$$\hat{\delta}_k \in \arg\min_{\mathcal{S},\, \delta_{\mathcal{S}}} E_{\tilde{y}\mid y}\, \ell\big(\{\tilde{y}_i\}_{i=1}^{\tilde{n}}, \delta_{\mathcal{S}}\big) \quad \text{subject to} \quad \|\delta_{\mathcal{S}}\|_0 \le k. \qquad (8)$$

In direct contrast with previous approaches for Bayesian variable selection via decision analysis (e.g., Hahn and Carvalho, 2015; Woody et al., 2020; Kowal et al., 2021), we do not use convex relaxations to ℓ1-penalties, which create unnecessarily restrictive search paths and introduce additional bias in the coefficient estimates.

Under the loss (3), the solution to (8) reduces to

$$\hat{\delta}_k = \arg\min_{\mathcal{S},\, \delta_{\mathcal{S}}} \sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\, \big\|\hat{y}_i - \tilde{x}_i'\delta_{\mathcal{S}}\big\|_2^2 \quad \text{subject to} \quad \|\delta_{\mathcal{S}}\|_0 = k \qquad (9)$$

using the “fit-to-the-fit” arguments from Lemma 1 and the RSS ordering results from Lemma 3. In particular, the solution δˆk resembles classical subset selection (1), but uses the fitted values yˆi from ℳ instead of the data y and further generalizes to include possibly distinct covariates {x˜i} and weights {ω(x˜i)}.

Because of the representation in (9), we can effectively solve (8) by leveraging existing algorithms for subset selection. However, our broader interest in the collection of near-optimal subsets places greater emphasis on the search and filtering process. For any two subsets 𝒮1 and 𝒮2 of equal size k = |𝒮1| = |𝒮2|, Lemma 3 implies that $E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \tilde{X}\delta_{\mathcal{S}_1})\} \le E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \tilde{X}\delta_{\mathcal{S}_2})\}$ if and only if $\mathrm{RSS}(\hat{y}, \tilde{X}\delta_{\mathcal{S}_1}) \le \mathrm{RSS}(\hat{y}, \tilde{X}\delta_{\mathcal{S}_2})$. Therefore, we can order the linear actions from (4) among all equally-sized subsets simply by ordering the values of $\mathrm{RSS}(\hat{y}, \tilde{X}\delta_{\mathcal{S}})$. This result resembles the analogous scenario in classical linear regression on {(xi, yi)}i=1n: subsets of fixed size maintain the same ordering whether using RSS or information criteria such as AIC, BIC, or Mallows’ Cp. Here, the criterion of interest is the posterior predictive expected RSS, $E_{\tilde{y}\mid y}\{\mathrm{RSS}(\tilde{y}, \tilde{X}\delta_{\mathcal{S}})\}$, and the RSS reduction occurs with the model fitted values $\hat{y}_i = E_{\tilde{y}\mid y}\tilde{y}(\tilde{x}_i)$ serving as pseudo-data.

Because of this RSS-based ordering among equally-sized linear actions, we can leverage the computational advantages of the branch-and-bound algorithm (BBA) for efficient subset exploration (Furnival and Wilson, 2000). Using a tree-based enumeration of all possible subsets, BBA avoids an exhaustive subset search by carefully eliminating non-competitive subsets (or branches) according to RSS. BBA is particularly advantageous for (i) selecting smax ≤ p covariates, (ii) filtering to $m_k \le \binom{p}{k}$ subsets for each size k, and (iii) exploiting the presence of covariates that are almost always present for low-RSS models (Miller, 1984). In the present setting, the key inputs to the algorithm are the covariates {x˜i}, the pseudo-data {yˆi}, and the weights {ω(x˜i)}. In addition, we specify the maximum number of subsets $m_k \le \binom{p}{k}$ to return for each size k. As mk increases, BBA returns more subsets—with higher RSS—yet computational efficiency deteriorates. We consider default values of mk = 100 and mk = 15 and compare the results in Section 4. An efficient implementation of BBA is available in the leaps package in R. Note that our framework is also compatible with many other subset search algorithms (e.g., Bertsimas et al., 2016).
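A minimal R sketch of this search step is given below; it assumes the leaps package, and the input names (Xtilde, yhat, w) and the helper name bba_search are illustrative rather than taken from BayesSubsets.

```r
# Sketch of the BBA subset search on the pseudo-data (a "fit to the fit").
library(leaps)

bba_search <- function(Xtilde, yhat, w, m_k = 15, s_max = min(35, ncol(Xtilde))) {
  # regsubsets() implements branch-and-bound ("exhaustive") search;
  # nbest retains the m_k lowest-RSS subsets of each size, up to size nvmax.
  fit <- regsubsets(x = Xtilde, y = yhat, weights = w,
                    nbest = m_k, nvmax = s_max,
                    method = "exhaustive", really.big = TRUE)
  summary(fit)$which        # logical matrix: one row per candidate subset
}

# candidates <- bba_search(X, yhat = colMeans(ypred_draws), w = rep(1, nrow(X)), m_k = 15)
```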

In the case of moderate to large p, we screen to smax ≤ p covariates using the original model ℳ. Specifically, we select the smax covariates with the largest absolute (standardized) linear regression coefficients. When ℳ is a nonlinear model, we use the optimal linear coefficients δˆ𝒮 on the full subset 𝒮 = {1,…,p} of (standardized) covariates. The use of marginal screening is common in both frequentist (Fan and Lv, 2008) and Bayesian (Bondell and Reich, 2012) high-dimensional linear regression models, with accompanying consistency results in each case. Here, screening is motivated by computational scalability and interpretability: BBA is quite efficient for p ≤ 35 and mk ≤ 100, while the interpretation of a subset of covariates—acting jointly to predict accurately—is muddled as the subset size increases. By default, we fix smax = 35. We emphasize that although this screening procedure relies on marginal criteria, it is based on a joint model that incorporates all p covariates. In that sense, our screening procedure resembles the most popular Bayesian variable selection strategies based on posterior inclusion probabilities (Barbieri and Berger, 2004) or hard-thresholding (Datta and Ghosh, 2013).

2.3. Subset search for logistic classifiers

Classification and binary regression operate on {0, 1}, rendering the squared error loss (3) unsuitable. Consider a binary predictive functional h(y˜) ∈ {0, 1} where y˜ ~ p(y˜ ∣ y). In this framework, binarization can come from one of two sources. Most common, the data are binary yi ∈ {0, 1}, paired with the identity functional h(y˜) = y˜, and ℳ is a Bayesian classification (e.g., probit or logistic regression) model. Less common, non-binary data can be modeled via ℳ and paired with a functional h that maps to {0, 1}. For example, we may be interested in selecting variables to predict exceedance of a threshold, h(y˜) = I{y˜ ≥ τ}, for some τ based on real-valued data y. The latter case is an example of targeted prediction (Kowal, 2021), which customizes posterior summaries or decisions for any functional h. This approach is distinct from fitting separate models to each empirical functional {h(yi)}i=1n—which is still compatible with the first setting above—and instead requires only a single Bayesian reference model ℳ for all target functionals h.

For classification or binary prediction of h(y˜){0,1}, we replace the squared error loss (3) with the aggregate and weighted cross-entropy loss,

$$\ell\big[\{h(\tilde{y}_i)\}_{i=1}^{\tilde{n}}, \delta_{\mathcal{S}}\big] = -\sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\Big[h(\tilde{y}_i)\log\{\pi_{\mathcal{S}}(\tilde{x}_i)\} + \{1 - h(\tilde{y}_i)\}\log\{1 - \pi_{\mathcal{S}}(\tilde{x}_i)\}\Big], \qquad (10)$$

where h(y˜i) ∈ {0, 1} is the predictive variable at x˜i under ℳ and $\pi_{\mathcal{S}}(\tilde{x}_i) \equiv \{1 + \exp(-\tilde{x}_i'\delta_{\mathcal{S}})\}^{-1}$. The cross-entropy is also the deviance or negative log-likelihood of a series of independent Bernoulli random variables h(y˜i) each with success probability π𝒮(x˜i) for i = 1,…,n˜. However, (10) does not imply a distributional assumption for the decision analysis; all distributional assumptions are encapsulated within ℳ, including the posterior predictive distribution of h(y˜i). As before, δ𝒮 is the linear action with zero coefficients for all j ∉ 𝒮, where 𝒮 ⊆ {1,…,p} is a subset of active variables.

The optimal action (2) is obtained for each subset 𝒮 by computing expectations with respect to the posterior predictive distribution under ℳ and minimizing the ensuing quantity. As in the case of squared error loss, key simplifications are available:

$$\hat{\delta}_{\mathcal{S}} = \arg\min_{\delta_{\mathcal{S}}} \Big\{-\sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\big[\hat{h}_i\log\{\pi_{\mathcal{S}}(\tilde{x}_i)\} + (1 - \hat{h}_i)\log\{1 - \pi_{\mathcal{S}}(\tilde{x}_i)\}\big]\Big\} \qquad (11)$$

where $\hat{h}_i \equiv E_{\tilde{y}\mid y}\, h(\tilde{y}_i)$ is the posterior predictive expectation of h(y˜i) under ℳ. The representation in (11) is quite useful: it is the negative log-likelihood for a logistic regression model with pseudo-data {(x˜i, hˆi)}i=1n˜. Standard algorithms and software, such as iteratively reweighted least squares (IRLS) in the glm function in R, can be applied to solve (11) for any subset 𝒮.

Instead of fitting a logistic regression to the observed binary variables h(yi) ∈ {0, 1}, the optimal action under cross-entropy (10) fits to the posterior predictive probabilities hˆi = p{h(y˜i) = 1 ∣ y} ∈ [0, 1] under ℳ. For a well-specified model ℳ, these posterior probabilities hˆi can be more informative than the binary empirical functionals h(yi): the former lie on a continuum between the endpoints zero and one. Furthermore, for non-degenerate models with hˆi ∈ (0, 1), the optimal action (11) resolves the issue of separability, which is a persistent challenge in classical logistic regression.
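As a minimal illustration (not the BayesSubsets implementation), the optimal action (11) can be computed in R by a weighted logistic-type fit to the posterior predictive probabilities. The quasibinomial family is used here only because it accepts the non-integer pseudo-responses hˆi without warnings while returning the same point estimates as the binomial family; all input names are ours.

```r
# Sketch of the optimal classification action (11): a logistic-regression-style
# fit to the posterior predictive probabilities hhat (pseudo-data) with weights w.
optimal_action_class <- function(Xtilde, hhat, w, S) {
  fit <- glm.fit(x = Xtilde[, S, drop = FALSE], y = hhat,
                 weights = w, family = quasibinomial())
  delta <- numeric(ncol(Xtilde))          # zeros for excluded covariates
  delta[S] <- coef(fit)
  delta
}
```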

Despite the efficiency of IRLS for a fixed subset 𝒮, the computational savings of BBA for subset search rely on the RSS from a linear model. As such, solving (11) for all or many subsets incurs a much greater computational cost. Yet IRLS is intrinsically linked with RSS. At convergence, IRLS obtains a weighted least squares solution

$$\hat{\delta}_{\mathcal{S}} = \big(\tilde{X}_{\mathcal{S}}'W\tilde{X}_{\mathcal{S}}\big)^{-1}\tilde{X}_{\mathcal{S}}'Wz \qquad (12)$$

where $W \equiv \mathrm{diag}\{w_i\}_{i=1}^{\tilde{n}}$ for weights $w_i \equiv \omega(\tilde{x}_i)\hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\{1 - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\}$, fitted probabilities $\hat{\pi}_{\mathcal{S}}(\tilde{x}_i) \equiv \{1 + \exp(-\tilde{x}_i'\hat{\delta}_{\mathcal{S}})\}^{-1}$, and pseudo-data

$$z_i \equiv \log\frac{\hat{\pi}_{\mathcal{S}}(\tilde{x}_i)}{1 - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)} + \frac{\hat{h}_i - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)}{\hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\{1 - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\}} \qquad (13)$$

with $z \equiv (z_1,\ldots,z_{\tilde{n}})'$. By design, the weighted least squares objective associated with (12) is a second-order Taylor approximation to the predictive expected cross-entropy loss:

$$E_{\tilde{y}\mid y}\,\ell\big[\{h(\tilde{y}_i)\}_{i=1}^{\tilde{n}}, \hat{\delta}_{\mathcal{S}}\big] \approx \sum_{i=1}^{\tilde{n}} \omega(\tilde{x}_i)\,\frac{\{\hat{h}_i - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\}^2}{\hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\{1 - \hat{\pi}_{\mathcal{S}}(\tilde{x}_i)\}} = \sum_{i=1}^{\tilde{n}} w_i\big(z_i - \tilde{x}_i'\hat{\delta}_{\mathcal{S}}\big)^2. \qquad (14)$$

The weighted least squares approximation in (14) summons BBA for subset search. Hosmer et al. (1989) adopted this strategy for subset selection in classical logistic regression on yi ∈ {0, 1}. Here, both the goals and the optimization criterion are distinct: we are interested in curating a collection of near-optimal subsets—rather than selecting a single “best” subset—and the weighted least squares objective (14) inherits the fitted probabilities hˆi from the Bayesian model ℳ along with the weights ω(x˜i).

Ideally, we might apply BBA directly based on the covariates {x˜i}, the pseudo-data {zi}, and the weights {wi}. However, both zi and wi depend on πˆ𝒮(x˜i) and therefore are subset-specific. As a result, BBA cannot be applied without significant modifications. A suitable alternative is to construct subset-invariant pseudo-data and weights by replacing πˆ𝒮(x˜i) with the corresponding estimate from the full model, hˆi. Specifically, let

$$\hat{z}_i \equiv \log\{\hat{h}_i/(1 - \hat{h}_i)\}, \qquad \hat{w}_i \equiv \omega(\tilde{x}_i)\hat{h}_i(1 - \hat{h}_i), \qquad (15)$$

both of which depend on ℳ rather than the individual subsets 𝒮. The pseudo-data zˆi is defined similarly to zi in (13), where the second term now cancels. Finally, BBA subset search can be applied using the covariates {x˜i}, the pseudo-data {zˆi}, and the weights {wˆi}. As for squared error, we retain at most mk = 100 or mk = 15 subsets of each size k = 1,…,p. Despite the critical role of the weighted least squares approximation in (14) for subset search, all further evaluations and comparisons rely on the exact cross-entropy loss (10).
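The construction in (15) is a one-line transformation of the model-based probabilities; a minimal R sketch (illustrative names, reusing the bba_search helper sketched above) is:

```r
# Subset-invariant pseudo-data (15) for the classification subset search.
# hhat are the posterior predictive probabilities; eps guards against values
# numerically equal to 0 or 1 (an implementation detail we assume here).
make_class_pseudodata <- function(hhat, w, eps = 1e-6) {
  h <- pmin(pmax(hhat, eps), 1 - eps)
  list(z = log(h / (1 - h)),          # zhat_i: logit of hhat_i
       w = w * h * (1 - h))           # what_i: omega(x_i) * hhat_i * (1 - hhat_i)
}

# pd <- make_class_pseudodata(hhat, w = rep(1, length(hhat)))
# candidates <- bba_search(X, yhat = pd$z, w = pd$w, m_k = 15)
```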

2.4. Predictive evaluations for identifying near-optimal subsets

The BBA subset search filters the $2^p$ possible subsets to the mk best subsets for each size k = 1,…,smax according to posterior predictive expected loss using the weighted squared error loss for prediction (Section 2.2) or the cross-entropy loss for classification (Section 2.3). However, as noted below Lemma 3, further comparisons based on these expected losses are trivial: the full set 𝒮 = {1,…,p} always obtains the minimum, which precludes variable selection. Selection of a single “best” subset of each size k invites additional difficulties: if multiple subsets perform similarly—which is common for correlated covariates—then selecting mk = 1 subset will not be robust or stable against perturbations of the data. Equally important, restricting to mk = 1 subset is blind to competing subsets that offer similar predictive performance yet may differ substantially in the composition of covariates; see Sections 3–4. These challenges persist for both classical and Bayesian approaches.

We instead curate and explore a collection of near-optimal subsets. The notion of “near-optimal” derives from the acceptable family of Kowal (2021). Informally, the acceptable family uses out-of-sample predictive metrics to gather those predictors that match or nearly match the performance of the best out-of-sample predictor with nonnegligible posterior predictive probability under ℳ. The out-of-sample evaluation uses a modified K-fold cross-validation procedure. Let $\mathcal{I}_k \subset \{1,\ldots,n\}$ denote the kth validation set, where each data point appears in (at least) one validation set, $\cup_{k=1}^K \mathcal{I}_k = \{1,\ldots,n\}$. By default, we use K = 10 validation sets that are equally-sized, mutually exclusive, and selected randomly from {1,…,n}. Define an evaluative loss function $L(y, \tilde{x}'\hat{\delta}_{\mathcal{S}})$ for the optimal linear coefficients of subset 𝒮, and let S denote the collection of subsets obtained from the BBA filtering process. Typically, the evaluative loss L is identical to the loss ℓ used for optimization, but this restriction is not required. For each data split k and each subset 𝒮 ∈ S, the out-of-sample empirical and predictive losses are

$$L_{\mathcal{S}}^{\text{out}}(k) \equiv \frac{1}{|\mathcal{I}_k|}\sum_{i\in\mathcal{I}_k} L\big(y_i, x_i'\hat{\delta}_{\mathcal{S}}^{-k}\big), \qquad \tilde{L}_{\mathcal{S}}^{\text{out}}(k) \equiv \frac{1}{|\mathcal{I}_k|}\sum_{i\in\mathcal{I}_k} L\big(\tilde{y}_i^{-k}, x_i'\hat{\delta}_{\mathcal{S}}^{-k}\big) \qquad (16)$$

respectively, where $\hat{\delta}_{\mathcal{S}}^{-k} \in \arg\min_{\delta_{\mathcal{S}}} E_{\tilde{y}\mid y^{-k}}\,\ell\big(\{\tilde{y}_i\}_{i\notin\mathcal{I}_k}, \delta_{\mathcal{S}}\big)$ is estimated using only the training data $y^{-k} \equiv \{y_i\}_{i\notin\mathcal{I}_k}$ and $\tilde{y}_i^{-k} \sim p(\tilde{y}_i \mid y^{-k})$ is the posterior predictive distribution at $x_i$ conditional only on the training data. Averaging across all data splits, we obtain $L_{\mathcal{S}}^{\text{out}} \equiv K^{-1}\sum_{k=1}^K L_{\mathcal{S}}^{\text{out}}(k)$ and $\tilde{L}_{\mathcal{S}}^{\text{out}} \equiv K^{-1}\sum_{k=1}^K \tilde{L}_{\mathcal{S}}^{\text{out}}(k)$.

The distinction between the empirical loss L𝒮out and the predictive loss L˜𝒮out is important. The empirical loss is a point estimate of the risk under predictions from δ𝒮 based on the data y. By comparison, the predictive loss provides a distribution of the out-of-sample loss based on the model ℳ. Both are valuable: L𝒮out is entirely empirical and captures the classical notion of K-fold cross-validation, while L˜𝒮out leverages the Bayesian model ℳ to propagate the uncertainty from the model-based data-generating process.

For any two subsets 𝒮1 and 𝒮2, consider the percent increase in out-of-sample predictive loss from δˆ𝒮1 to δˆ𝒮2:

$$\tilde{D}_{\mathcal{S}_1,\mathcal{S}_2}^{\text{out}} \equiv 100 \times \big(\tilde{L}_{\mathcal{S}_2}^{\text{out}} - \tilde{L}_{\mathcal{S}_1}^{\text{out}}\big)\big/\tilde{L}_{\mathcal{S}_1}^{\text{out}}. \qquad (17)$$

Since D˜𝒮1,𝒮2out inherits a predictive distribution under ℳ, we can leverage the accompanying uncertainty quantification to determine whether the predictive performances of δˆ𝒮1 and δˆ𝒮2 are sufficiently distinguishable. In particular, we are interested in comparisons to the best subset for out-of-sample prediction,

$$\mathcal{S}_{\min} \equiv \arg\min_{\mathcal{S}\in S} L_{\mathcal{S}}^{\text{out}}, \qquad (18)$$

so that δˆ𝒮min is the optimal linear action associated with the subset 𝒮min that minimizes the empirical K-fold cross-validated loss. Unlike the RSS-based in-sample evaluations from Section 2.2, the subset 𝒮min can—and usually will—differ from the full set {1,…,p}, which enables variable selection driven by out-of-sample predictive performance.

Using 𝒮min as an anchor, the acceptable family broadens to include near-optimal subsets:

$$A_{\eta,\varepsilon} \equiv \Big\{\mathcal{S} \in S : \mathbb{P}\big(\tilde{D}_{\mathcal{S}_{\min},\mathcal{S}}^{\text{out}} < \eta\big) \ge \varepsilon\Big\}, \qquad \eta \ge 0,\ \varepsilon \in [0,1]. \qquad (19)$$

With (19), we collect all subsets 𝒮 that perform within a margin η ≥ 0% of the best subset, D˜𝒮min,𝒮out < η, with probability at least ε ∈ [0,1]. Equivalently, a subset 𝒮 is acceptable if and only if there exists a lower (1 − ε) posterior prediction interval for D˜𝒮min,𝒮out that includes η. Hence, unacceptable subsets are those for which there is insufficient predictive probability (under ℳ) that the out-of-sample accuracy of 𝒮 is within a predetermined margin of the best subset. The acceptable family is nonempty, since 𝒮min ∈ Aη,ε, and is expanded by increasing η or decreasing ε. By default, we select η = 0% and ε = 0.10 and assess robustness in the simulation study (see also Kowal, 2021; Kowal et al., 2021).
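Given posterior predictive draws of the out-of-sample losses, membership in (19) reduces to a simple probability check. The R sketch below is illustrative (our names: loss_draws, loss_emp) and assumes those draws have already been computed, for example via the importance sampling strategy described later in this section.

```r
# Sketch of the acceptable family (19).
#   loss_draws : (n_draws x n_subsets) matrix of predictive losses L_tilde_S^out
#   loss_emp   : vector of empirical K-fold losses L_S^out (one per subset)
acceptable_family <- function(loss_draws, loss_emp, eta = 0, eps = 0.10) {
  s_min <- which.min(loss_emp)                    # anchor: best cross-validated subset (18)
  # Draw-by-draw percent increase in predictive loss relative to S_min, as in (17)
  D <- 100 * sweep(loss_draws, 1, loss_draws[, s_min], "-") / loss_draws[, s_min]
  keep <- which(colMeans(D < eta) >= eps)         # P(D < eta) >= eps
  sort(union(s_min, keep))                        # S_min is acceptable by construction
}
```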

Within the acceptable family, we isolate two subsets of particular interest: the best subset 𝒮min from (18) and the smallest acceptable subset,

$$\mathcal{S}_{\text{small}} \equiv \arg\min_{\mathcal{S}\in A_{\eta,\varepsilon}} |\mathcal{S}|. \qquad (20)$$

When 𝒮small is nonunique, so that multiple subsets of the same minimal size are acceptable, we select from those subsets the one with the smallest empirical loss L𝒮out. By definition, 𝒮small is the smallest set of covariates that satisfies the near-optimality condition in (19). As noted by Hastie et al. (2009) and others, selection based on minimum cross-validated error, such as 𝒮min, often produces models or subsets that are more complex than needed for adequate predictive accuracy. The acceptable family Aη,ε, and in particular 𝒮small, exploits this observation to provide alternative—and often much smaller—subsets of variables.

To ease the computational burden, we adapt the importance sampling algorithm from Kowal (2021) to compute (19) and all constituent quantities (see Appendix A). Crucially, this algorithm is based entirely on the in-sample posterior distribution under model ℳ, which avoids both (i) the intensive process of re-fitting ℳ for each data split k and (ii) data reuse that adversely affects downstream uncertainty quantification. Briefly, the algorithm uses the complete data posterior p(θ ∣ y) as a proposal for the training data posterior $p(\theta \mid y^{-k})$. The importance weights are then computed using the likelihood of the held-out data, $p(\{y_i\}_{i\in\mathcal{I}_k} \mid \theta)$, under model ℳ. Similar algorithms have been deployed for Bayesian model selection (Gelfand et al., 1992), evaluating prediction distributions (Vehtari and Ojanen, 2012), and ℓ1-based Bayesian variable selection (Kowal et al., 2021).
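A minimal sketch of this reweighting is given below, under the assumption that a matrix of pointwise log-likelihoods evaluated at the full-data posterior draws is available; the names are ours, and the algorithm in Appendix A may include additional stabilization steps not shown here.

```r
# Importance weights that approximate the fold-k training posterior using
# draws from the complete-data posterior.
#   loglik      : (n_draws x n) matrix of log p(y_i | theta_s)
#   holdout_idx : indices of the fold-k validation set I_k
fold_weights <- function(loglik, holdout_idx) {
  logw <- -rowSums(loglik[, holdout_idx, drop = FALSE])  # proportional to 1 / p(y_{I_k} | theta)
  w <- exp(logw - max(logw))                             # stabilize before normalizing
  w / sum(w)
}

# Downstream, posterior functionals conditional on the training data are
# approximated by weighted averages over the complete-data posterior draws.
```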

In the case of new (out-of-sample) covariate values $\{\tilde{x}_i\}_{i=1}^{\tilde{n}} \ne \{x_i\}_{i=1}^n$, the predictive loss may be defined without cross-validation: $\tilde{L}_{\mathcal{S}}^{\text{out}} \equiv \tilde{n}^{-1}\sum_{i=1}^{\tilde{n}} L(\tilde{y}_i, \tilde{x}_i'\hat{\delta}_{\mathcal{S}})$, where $\tilde{y}_i \equiv \tilde{y}(\tilde{x}_i)$ is the predictive variable at x˜i for each i = 1,…,n˜. The empirical loss is undefined without a corresponding observation of the response variable for each x˜i. Hence, we modify the acceptable family (19) to instead reference the full subset 𝒮 = {1,…,p} in place of 𝒮min, which is no longer defined. When ℳ is a linear model, Corollary 2 implies that the corresponding reference is simply the posterior predictive expectation under ℳ.

2.5. Co-variable importance

Although it is common to focus on a single subset for selection, the acceptable family Aη,ε provides a broad assortment of competing explanations: linear actions with distinct sets of covariates that all provide near-optimal predictive accuracy. Hence, we seek to further summarize Aη,ε beyond 𝒮min and 𝒮small to identify (i) which covariates appear in any acceptable subset, (ii) which covariates appear in all acceptable subsets, and (iii) which covariates appear together in the same acceptable subsets.

For covariates j and ℓ, the sample proportion of joint inclusion in Aη,ε achieves each of these goals:

$$\text{VI}_{\text{incl}}(j, \ell) \equiv |A_{\eta,\varepsilon}|^{-1}\sum_{\mathcal{S}\in A_{\eta,\varepsilon}} \mathbb{I}\{j, \ell \in \mathcal{S}\} \qquad (21)$$

and measures (co-) variable importance. Naturally, (21) is generalizable to more than two covariates, but is particularly interesting for a single covariate: VIincl(j) ≡ VIincl(j, j) is the proportion of acceptable subsets to which covariate j belongs. When VIincl(j) > 0, covariate j belongs to at least one acceptable subset. Such a result does not imply that covariate j is necessary for accurate prediction, but rather that covariate j is a component of at least one near-optimal linear subset. When VIincl(j) = 1, we refer to covariate j as a keystone covariate: it belongs to all acceptable subsets and therefore is deemed essential. For j ≠ ℓ, VIincl(j, ℓ) highlights not only the covariates that co-occur, but also the covariates that rarely appear together. It is particularly informative to identify covariates j and ℓ such that VIincl(j) and VIincl(ℓ) are both large yet VIincl(j, ℓ) is small. In that case, covariates j and ℓ are both important yet also redundant, as might be expected for highly correlated variables.
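For reference, (21) is a simple cross-product over the membership indicators of the acceptable family; a minimal R sketch (assuming a logical membership matrix with our illustrative name members) is:

```r
# (Co-)variable importance (21) from the acceptable family.
#   members : logical matrix, one row per acceptable subset and one column per
#             covariate; TRUE if the covariate belongs to that subset.
co_vi_incl <- function(members) {
  crossprod(members) / nrow(members)   # entry (j, l): proportion of acceptable subsets containing both j and l
}

# vi <- diag(co_vi_incl(members))      # marginal importance VI_incl(j)
# keystones <- which(vi == 1)          # covariates appearing in every acceptable subset
```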

Although VIincl  is defined based on linear predictors for each subset 𝒮, the variable importance metric applies broadly to (possibly nonlinear) Bayesian models and a variety of evaluative loss functions L, and can be targeted locally via the weights ω(x˜i). The inclusion-based metric VIincl  can be extended to incorporate effect size, which is a more common strategy for variable importance. Related, Dong and Rudin (2019) aggregated model-specific variable importances across many “good models”. Alternative approaches use leave-one-covariate-out predictive metrics (e.g., Lei et al., 2018), but are less appealing in the presence of correlated covariates.

2.6. Posterior predictive uncertainty quantification

A persistent challenge in classical subset selection is the lack of accompanying uncertainty quantification. Given a subset 𝒮ˆ selected using the data (X,y), familiar frequentist and Bayesian inferential procedures applied to (X𝒮ˆ,y) are in general no longer valid. In particular, we cannot simply proceed as if only the selected covariates 𝒮ˆ were supplied from the onset. Such analyses are subject to selection bias (Miller, 1984).

A crucial feature of our subset filtering (Section 2.2 and Section 2.3) and predictive evaluation (Section 2.4) techniques is that, despite the broad searching and the out-of-sample targets, these quantities all remain in-sample posterior functionals under ℳ. There is no data re-use or model re-fitting: every requisite term is a functional of the complete data posterior p(θ ∣ y) or p(y˜ ∣ y) from a single Bayesian model ℳ. Hence, the posterior distribution under ℳ remains a valid facilitator of uncertainty quantification.

We elicit a posterior predictive distribution for the action by removing the expectation in (2):

$$\tilde{\delta} \in \arg\min_{\delta}\, \ell(\tilde{y}, \delta) \qquad (22)$$

which no longer integrates over p(y˜ ∣ y) and hence propagates the posterior predictive uncertainty. For the squared error loss (3), the predictive action is

$$\tilde{\delta}_{\mathcal{S}} = \big(\tilde{X}_{\mathcal{S}}'\Omega\tilde{X}_{\mathcal{S}}\big)^{-1}\tilde{X}_{\mathcal{S}}'\Omega\tilde{y} \qquad (23)$$

akin to (5), where $\tilde{y} = (\tilde{y}_1,\ldots,\tilde{y}_{\tilde{n}})' \sim p(\tilde{y} \mid y)$. The linear coefficients δ˜𝒮 inherit a posterior predictive distribution from y˜ and can be computed for any subset 𝒮. Similar computations are available for the cross-entropy loss (10). In both cases, draws $\{\tilde{y}^s\}_{s=1}^S \sim p(\tilde{y} \mid y)$ from the posterior predictive distribution are sufficient for uncertainty quantification of δ˜𝒮. Under the usual assumption that $p(\tilde{y} \mid \theta, y) = p(\tilde{y} \mid \theta)$, these draws are easily obtained by repeatedly sampling $\theta^s \sim p(\theta \mid y)$ from the posterior and $\tilde{y}^s \sim p(\tilde{y} \mid \theta = \theta^s)$ from the likelihood.
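A minimal R sketch of (23), reusing the optimal_action helper sketched earlier and assuming a matrix of posterior predictive draws (our name ypred_draws), is:

```r
# Predictive action (23): propagate posterior predictive uncertainty by
# recomputing the weighted least squares solution for each draw of y_tilde.
#   ypred_draws : (n_draws x n_tilde) matrix of draws from p(y_tilde | y)
predictive_action <- function(Xtilde, ypred_draws, w, S) {
  t(apply(ypred_draws, 1, function(ytilde) optimal_action(Xtilde, ytilde, w, S)))
}

# delta_draws <- predictive_action(X, ypred_draws, w = rep(1, nrow(X)), S = S_small)
# apply(delta_draws, 2, quantile, probs = c(0.05, 0.95))   # 90% posterior predictive intervals
```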

The predictive actions are computable for any subset 𝒮, including those selected based on the predictive evaluations in Section 2.4. Let 𝒮ˆ denote a subset identified based on the posterior (predictive) distribution under ℳ, such as 𝒮min or 𝒮small. The predictive action δ˜𝒮ˆ using this subset in (23) is a posterior predictive functional. Unlike for a generic subset 𝒮, the predictive action δ˜𝒮ˆ is a functional of both y˜ and 𝒮ˆ, which factors into the interpretation.

A similar projection-based approach is developed by Woody et al. (2020) for Gaussian regression. In place of ℓ(y˜, δ) in (22), Woody et al. (2020) suggest $E_{\tilde{y}\mid\theta}\,\ell(\tilde{y}, \delta)$, which is a middle ground between (2) and (22): it integrates over the uncertainty of y˜ given model parameters θ, but preserves uncertainty due to θ. For example, under the linear regression model $E_{\tilde{y}\mid\theta}\{\tilde{y}(\tilde{x})\} = \tilde{x}'\beta$, the analogous result in (23) is $(\tilde{X}_{\mathcal{S}}'\Omega\tilde{X}_{\mathcal{S}})^{-1}\tilde{X}_{\mathcal{S}}'\Omega\tilde{X}\beta$; when $\mathcal{S} = \{1,\ldots,p\}$, this simplifies to the regression coefficient β. Both approaches have their merits, but we prefer (22) because of the connection to the observable random variables y˜ rather than the model-specific parameters θ.

3. Simulation study

We conduct an extensive simulation study to evaluate competing subset selection techniques for prediction accuracy, uncertainty quantification, and variable selection. Prediction for real-valued data is discussed here; classification for binary data is presented in Appendix B. Although we do evaluate individual subsets from the proposed framework—namely, 𝒮min and 𝒮small—we are more broadly interested in the performance of the acceptable family of subsets. In particular, the acceptable family is designed to collect many subsets that offer near-optimal prediction; both cardinality and aggregate predictive accuracy are critical.

The simulation designs feature varying signal-to-noise ratios (SNRs) and dimensions (n, p), including p ≥ n, with correlated covariates and sparse linear signals. Covariates $x_{i,j}$ are generated from marginal standard normal distributions with $\mathrm{Cor}(x_{i,j}, x_{i,j'}) = (0.75)^{|j - j'|}$ for i = 1,…,n and j = 1,…,p. The p columns are randomly permuted and augmented with an intercept. The true linear coefficients β* are constructed by setting β0* = 1 and fixing p* = 5 nonzero coefficients, with p*/2 equal to 1 and p*/2 equal to −1, and the rest at zero. The data are generated independently as $y_i \sim N(y_i^*, \sigma_*^2)$ with $y_i^* \equiv x_i'\beta^*$ and $\sigma_* \equiv \mathrm{sd}(y^*)/\mathrm{SNR}$. We consider (n, p) ∈ {(50, 50), (200, 400), (500, 50)} and SNR ∈ {0.25, 1} for low and high SNR. Evaluations are conducted over 100 simulated datasets.
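For reproducibility, the following R sketch generates one synthetic dataset matching this design (up to the rounding of the ±1 coefficient split, which we assume here); it is our own illustration rather than the code used for the reported results.

```r
# One synthetic dataset from the simulation design described above.
simulate_data <- function(n = 200, p = 400, snr = 1, p_star = 5) {
  Sigma <- 0.75^abs(outer(1:p, 1:p, "-"))       # Cor(x_j, x_j') = 0.75^|j - j'|
  X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)
  X <- X[, sample(p)]                           # random column permutation
  beta <- c(1,                                  # intercept beta_0^* = 1
            rep(c(1, -1), length.out = p_star), # p^* nonzero coefficients (+/- 1)
            rep(0, p - p_star))
  X_int <- cbind(1, X)                          # augment with an intercept
  y_star <- drop(X_int %*% beta)
  sigma <- sd(y_star) / snr                     # sigma^* = sd(y^*) / SNR
  list(X = X_int, y = y_star + rnorm(n, sd = sigma), beta = beta, y_star = y_star)
}

# dat <- simulate_data(n = 50, p = 50, snr = 0.25)
```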

First, we evaluate point prediction accuracy across competing collections of near-optimal subsets. Each collection is built from the same Gaussian linear regression model with horseshoe priors (Carvalho et al., 2010) and estimated using bayeslm (Hahn et al., 2019). The collections are generated from different candidate sets S based on distinct search methods: the proposed BBA method (bbound(bayes)), the adaptive lasso search (lasso (bayes)) proposed by Hahn and Carvalho (2015) and Kowal et al. (2021), and both forward (forward(bayes)) and backward (backward(bayes)) search. For each collection of candidate subsets, we compute the acceptable families for η=0%,ε=0.10,mk=15, and the observed covariate values {x˜i}i=1n˜={xi}i=1n; variations of these specifications are presented in Appendix C. These methods differ only in the search process: if a particular subset 𝒮 is identified by more than one of these competing methods, the corresponding point predictions—based on the optimal action δˆ𝒮 in (5)—will be identical.

The competing collections of near-optimal subsets are evaluated in aggregate. At each simulation, we compute the qth quantile of the root mean squared errors (RMSEs) for the true mean {yi*}i=1n across all subsets within that acceptable family, and then average that quantity across all simulations. For example, q = 1 describes the predictive performance if we were to use the worst acceptable subset at each simulation, as determined by an oracle. These quantities summarize the distribution of RMSEs within each acceptable family; we report the values for q ∈ {0, 0.25, 0.5, 0.75, 1} and present the results as boxplots in Figure 1. Most notably, the proposed bbound(bayes) strategy produces up to 10 times the number of acceptable subsets as the other methods, yet crucially maintains equivalent predictive performance. Naturally, this expanded collection of subsets also produces a minimum RMSE (q = 0) that outperforms the remaining methods. Clearly, the proposed approach is far superior at collecting more subsets that provide near-optimal predictive accuracy.

Figure 1:


Root mean squared errors (RMSEs) for predicting y* across (n,p,SNR) configurations. The boxplots summarize the RMSE quantiles for the subsets within each acceptable family, while the vertical lines denote RMSEs of competing methods. The average size of each acceptable family is annotated. The proposed BBA search returns vastly more subsets that remain highly competitive, while 𝒮small performs very well and substantially outperforms classical subset selection.

Figure 1 also summarizes the point prediction accuracy for several competing methods. First, we include the usual point estimate under ℳ given by the posterior expectation of the regression coefficients β (post.mean), which does not include any variable or subset selection. Next, we compute point predictions for the key acceptable subsets 𝒮min and 𝒮small using the proposed bbound(bayes) approach; the competing search strategies produced similar results and are omitted. Among frequentist estimators, we use the adaptive lasso (lasso; Zou, 2006) with λ chosen by 10-fold cross-validation and the one-standard-error rule (Hastie et al., 2009). In addition, we include classical subset selection (subset) implemented using the leaps package in R with the final subset determined by AIC. When p > 35, we screen to the first 35 predictors that enter the model in the aforementioned adaptive lasso. Forward (forward) and backward (backward) selection were also considered, but were either noncompetitive or performed similarly to the adaptive lasso and are omitted from this plot.

Most notably, the predictive performance of 𝒮small is excellent, especially for high SNRs, and substantially outperforms the frequentist methods in all settings. Because 𝒮min is overly conservative—i.e., it selects too many variables (see Table 1)—it performs nearly identically to post.mean. Although 𝒮min and post.mean outperform 𝒮small when the SNR is low and the sample size is small, note that 𝒮small is not targeted exclusively for optimal predictive performance; rather, it represents the smallest subset that obtains near-optimal performance. Lastly, the regularization induced by ℳ is greatly beneficial: every member of each acceptable family decisively outperforms classical subset selection across all scenarios.

Table 1:

True positive rates (TPR) and true negative rates (TNR) for Gaussian synthetic data with p*+1 = 6 active covariates including the intercept. The selection performance of 𝒮small is excellent and similar to the adaptive lasso. Classical subset, forward, and backward selection are too aggressive, while the posterior interval-based selection is too conservative.

(n, p, SNR)         Metric   lasso   forward   backward   subset   posterior HPD   𝒮min   𝒮small
(50, 50, 0.25)      TPR      0.22    0.96      0.93       0.53     0.06            0.51   0.22
                    TNR      0.98    0.06      0.10       0.63     1.00            0.81   0.98
(50, 50, 1)         TPR      0.54    0.95      0.89       0.72     0.16            0.77   0.34
                    TNR      0.92    0.06      0.11       0.64     1.00            0.78   0.99
(200, 400, 0.25)    TPR      0.36    0.79      0.78       0.67     0.17            0.75   0.34
                    TNR      1.00    0.52      0.54       0.96     1.00            0.95   1.00
(200, 400, 1)       TPR      0.97    0.96      0.95       0.95     0.75            0.98   0.93
                    TNR      0.99    0.57      0.60       0.96     1.00            0.95   1.00
(500, 50, 0.25)     TPR      0.84    0.96      0.95       0.94     0.59            0.99   0.86
                    TNR      0.98    0.83      0.83       0.83     1.00            0.70   0.98
(500, 50, 1)        TPR      1.00    1.00      1.00       1.00     1.00            1.00   1.00
                    TNR      1.00    0.84      0.83       0.83     1.00            0.67   0.99

Next, we compare uncertainty quantification for the regression coefficients β*. We compute 90% intervals using the predictive action δ˜𝒮 from (23) for 𝒮small and 𝒮min based on the proposed bbound(bayes) procedure. In addition, we compute 90% highest posterior density (HPD) credible intervals for β under ℳ (post.HPD) and 90% frequentist confidence intervals using Zhao et al. (2021), which computes confidence intervals from a linear regression model that includes only the variables selected by the (adaptive) lasso (lasso+lm). The 90% interval estimates are evaluated and compared in Figure 2 using interval widths and empirical coverage. The intervals from 𝒮small using (22) are uniformly better (i.e., narrower) than the usual posterior HPD intervals under ℳ. In addition, the intervals from Zhao et al. (2021) are highly competitive, despite ignoring selection bias. In all cases, the intervals are conservative and achieve at least the nominal empirical coverage.

Figure 2:


Mean interval widths (boxplots) with empirical coverage (annotations) for β*. Non-overlapping notches indicate significant differences between medians. The proposed intervals based on 𝒮small are significantly narrower than the usual HPD intervals under ℳ yet maintain the empirical nominal 90% coverage.

Lastly, marginal variable selection is evaluated using true positive rates and true negative rates in Table 1. In addition to 𝒮min and 𝒮small, we include the common Bayesian selection technique that includes a variable j when the 95% HPD interval for βj excludes zero (posterior HPD). The selection performance of 𝒮small is excellent and similar to the adaptive lasso. Classical subset, forward, and backward selection are too aggressive, while the posterior interval-based selection is too conservative. Hence, despite the primary focus on the collection of near-optimal subsets, the smallest acceptable subset 𝒮small is a key member with excellent prediction, uncertainty quantification, and marginal selection performance.

The appendix contains simulation studies for classification (Appendix B) and variations for ε ∈ {0.01, 0.10, 0.20}, for {x˜i}i=1n˜, and for the distributions of {xi}i=1n and {x˜i}i=1n˜ (Appendix C). All results are qualitatively similar to those presented here.

4. Subset selection for predicting educational outcomes

Childhood educational outcomes are affected by adverse environmental exposures, such as poor air quality and lead, as well as social stressors, such as poverty, racial residential isolation, and neighborhood deprivation. We study childhood educational development using end-of-grade reading test scores for a large cohort of fourth grade children in North Carolina (Children’s Environmental Health Initiative, 2020). The reading scores are accompanied by a collection of student-level information detailed in Figure 3, which includes air quality exposures, birth information, blood lead measurements, and social and economic factors (see also Kowal et al., 2021). The goal is to identify subsets of these variables and interactions that offer near-optimal prediction of reading scores as well as accurate classification of at-risk students.

Figure 3:


Variables in the NC education dataset. Data are restricted to individuals with 37–42 weeks gestation, mother’s age 15–44, Blood_lead ≤80, birth order ≤4, no current limited English proficiency, and residence in NC at time of birth and time of 4th grade test.

A prominent feature of the data is the correlation among the covariates. After centering and scaling the continuous covariates to mean zero and standard deviation 0.5, we augment the variables in Figure 3 (excluding the test scores) with interactions between race and the social and economic factors. The resulting dimensions are n = 16806 and p = 32. Figure E.1 displays the pairwise correlations among the covariates and the response variable. There are strong associations among the air quality exposures as well as among race and the social and economic factors. Due to the dependence among variables, it is likely that distinct subsets of similar predictive ability can be obtained by interchanging among these correlated covariates. Hence, it is advantageous to collect and study the near-optimal subsets.

We compute acceptable families for (i) the reading scores yi under squared error loss and (ii) the indicator h(y˜i) = I{y˜i ≤ τ0.1} under cross-entropy loss, where τ0.1 is the 0.1-quantile of the reading scores (see Appendix D). While task (i) broadly considers the spectrum of educational outcomes via reading scores, task (ii) targets at-risk students in the bottom 10% of reading ability. Acceptable families and accompanying quantities for both tasks can be computed using the same Bayesian model ℳ: we focus on Gaussian linear regression with horseshoe priors. The acceptable families are computed using the proposed BBA search with η = 0%, ε = 0.10, mk = 100 and {x˜i}i=1n˜ = {xi}i=1n; results for other η values, ε = 0.20, and mk = 15 are noted, while alternative target covariates {x˜i}i=1n˜ are in Section 4.2.

4.1. Subset selection for predicting reading scores

First, we predict reading scores using a linear model for ℳ and squared error loss for L. Since the acceptable family is defined based on D˜𝒮min,𝒮out in (19), we summarize its distribution in Figure 4. For each 𝒮 ∈ S, we display 80% intervals, expectations, and the empirical analog $D_{\mathcal{S}_{\min},\mathcal{S}}^{\text{out}} \equiv 100 \times (L_{\mathcal{S}}^{\text{out}} - L_{\mathcal{S}_{\min}}^{\text{out}})/L_{\mathcal{S}_{\min}}^{\text{out}}$. The smaller subsets of sizes four to six demonstrate clear separation for certain subsets. Along with the intercept and the race indicators, these subsets include mEdu (college diploma), EconDisadv, and mEdu (completed high school) in sequence. However, larger subsets are needed to procure near-optimal predictions for smaller margins, such as η < 2%. While the best subset, with |𝒮min| = 29, includes nearly all of the covariates, many subsets with |𝒮| > 10 achieve within η = 1% of the accuracy of 𝒮min.

Figure 4:


The 80% intervals (bars) and expected values (circles) for D˜𝒮min,𝒮out, along with the empirical analog D𝒮min,𝒮out (x-marks), under squared error loss for each subset size |𝒮| with 𝒮 ∈ S. We annotate 𝒮min (dashed gray line) and 𝒮small (solid gray line) for ε = 0.10 and η = 0 and jitter the subset sizes for clarity of presentation.

Among the |S| = 2761 candidate subsets identified from the BBA search, there are |A0,0.1| = 1183 acceptable subsets. We summarize Aη,ε via the co-variable importance metrics VIincl(j) and VIincl(j, ℓ) in Figures 5 and E.2, respectively. Unlike many variable importance metrics that measure effect sizes, VIincl(j) instead quantifies whether each covariate j is a component of all, some, or no competitive subsets. There are many keystone covariates that appear in (nearly) all acceptable subsets, including environmental exposures (prenatal air quality and blood lead levels), economic and social factors (EconDisadv, mother’s education level, neighborhood deprivation at time of test), and demographic information (race, gender), among others.

Figure 5:

Variable importance VIincl (j) for prediction. There are several tiers: variables appear in (nearly) all, many (> 70%), some (> 30%), or no acceptable subsets.

Interestingly, chronic and acute PM2.5 exposure each belong to nearly 50% of acceptable subsets (Figure 5), yet rarely appear in the same acceptable subset (Figure E.2). The pairwise correlations (Figure E.1) offer a reasonable explanation: these variables are weakly correlated with reading scores but highly correlated with one another. Similar results persist for neighborhood deprivation and racial residential isolation both at birth and time of test. Moreover, this analysis was conducted after removing one acute PM2.5 outlier (50% larger than all other values). When that outlier is kept in the data, acute PM2.5 no longer belongs to any acceptable subset, while VIincl (j) for chronic PM2.5 increases. These results are encouraging: the acceptable family identifies redundant yet distinct predictive explanations, but prefers the more stable covariate in the presence of outliers.

Next, we analyze the smallest acceptable subset 𝒮small and incorporate uncertainty quantification for the accompanying linear coefficients. The |𝒮small|=19 selected covariates are displayed in Figure 6 alongside point estimates and 90% intervals based on the proposed approach, the Bayesian model ℳ, and the adaptive lasso. The estimates and intervals for covariates excluded from 𝒮small are identically zero for the proposed approach (and, in this case, for the adaptive lasso as well), while the estimates and HPD intervals from ℳ are dense for all covariates. Despite the encouraging simulation results for the Zhao et al. (2021) frequentist intervals, these intervals often exclude the adaptive lasso point estimates, which undermines interpretability.

Figure 6:

Estimated linear effects and 90% intervals for the variables in 𝒮small based on the proposed approach, model ℳ, and the adaptive lasso.

The estimates from 𝒮small and ℳ are similar, with the anticipated directionality: higher mother’s education levels, lower blood lead levels in the child, less neighborhood deprivation, and absence of economic disadvantage predict higher reading scores. Prenatal air quality (PM2.5) exposure is significant: due to seasonal effects, the 1st and 3rd trimester exposures have negative coefficients, while the 2nd trimester has a positive effect. Naturally, these effects can only be interpreted jointly. Among the interaction terms, the negative effect of NH Black × RI_birth suggests that the lower reading scores among non-Hispanic Black students are accentuated by racial residential isolation in the neighborhood of the child’s birth. Since we do not force all main effects into each subset, 𝒮small does not contain an estimated effect of RI_birth for the other race groups. Other interactions, such as the positive effects of Hisp × EconDisadv, NH Black × EconDisadv, and Hisp × Blood_lead, must also be interpreted carefully: the vast majority of Hispanic and non-Hispanic Black students belong to the EconDisadv group and have much higher blood lead levels on average, while each of NH Black, EconDisadv, and Blood_lead has a strong negative main effect.

The results are not particularly sensitive to mk or ε. When mk=15, there are |S|=436 candidate subsets and |A0,0.1|=197 acceptable subsets. The variable importance metrics broadly agree with Figures 5 and E.2, while 𝒮small (and therefore Figure 6) is unchanged. When ε=0.20 (and mk=100 as before), the acceptable family reduces slightly to |A0,0.1|=977 members, and 𝒮small differs only in the addition of Smoker and Hisp × NDI_test.

4.2. Out-of-sample prediction

We evaluate the predictive capabilities of the proposed approach for 20 training/testing splits of the NC education data. The same competing methods are adopted from Section 3, including the distinct search strategies for collecting near-optimal subsets. Since 𝒮small and 𝒮min are reasonably robust to mk, we select mk=15 for computational efficiency. In addition, we include the acceptable family defined by setting {x˜i}i=1n˜ to be the testing data covariate values (bbound(Xtilde)), which is otherwise identical to bbound(bayes). Root mean squared prediction errors (RMSPEs) are used for evaluation.
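The evaluation metric itself is standard; the sketch below (with a hypothetical `predict_fn` that returns a subset's point predictions, e.g., from its optimal linear action) computes the RMSPE for every subset in an acceptable family on one training/testing split.

```python
import numpy as np

def rmspe(y_test, y_pred):
    """Root mean squared prediction error on a held-out test split."""
    y_test, y_pred = np.asarray(y_test), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_test - y_pred) ** 2)))

def rmspe_per_subset(subsets, predict_fn, X_test, y_test):
    """RMSPE of each subset's point predictions; these are the values summarized by boxplots."""
    return np.array([rmspe(y_test, predict_fn(S, X_test)) for S in subsets])
```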

The results are presented in Figure 7. Among single-subset methods, 𝒮small outperforms all competitors, including the classical forward and backward estimators and the smallest acceptable subsets from lasso(bayes), forward(bayes), and backward(bayes) discussed in Section 3 (not shown). The adaptive lasso selects fewer variables and is not competitive. Among search methods, Figure 7 confirms the results from the simulation study: the proposed BBA strategy (bbound(bayes)) identifies 10–25 times as many subsets as the other search strategies, yet does not sacrifice any predictive accuracy within this expanded collection. Clearly, bbound(bayes) provides a more complete predictive picture, as the competing search strategies omit a massive number of subsets that do offer near-optimal prediction. The acceptable family based on the out-of-sample covariates, bbound(Xtilde), is much larger, which is reasonable: these subsets are computed and evaluated on covariates for which the accompanying response variables are unavailable. This collection of subsets sacrifices some predictive accuracy relative to the other search strategies, yet still outperforms the adaptive lasso, which is yet another testament to the importance of the regularization induced by ℳ.

Figure 7:

Root mean squared prediction errors (RMSPEs) across 20 training/testing splits of the NC education data. The boxplots summarize the RMSPE quantiles for the subsets within each acceptable family, while the vertical lines denote RMSPEs of competing methods. The average size of each acceptable family is annotated. The proposed BBA search returns vastly more subsets that remain highly competitive, while the accompanying 𝒮small subset performs best overall.

Although the RMSPE differences appear small, minor improvements are practically relevant: even a single point on a standardized test score can be the difference between progression to the next grade level (on the low end) and eligibility for intellectually gifted programs (on the high end). More generally, education data are prone to weak signals and small effect sizes, conditions under which many methods may offer similar predictive performance. Nonetheless, our primary goal is not to substantially improve prediction, but rather to identify and analyze a large collection of near-optimal subsets.

The importance of the broader BBA search strategy is further highlighted in comparison with lasso(bayes), which uses a lasso search path for decision-analytic Bayesian variable selection (Hahn and Carvalho, 2015; Kowal et al., 2021). In particular, lasso(bayes) generates only |S|=25 candidate subsets and |A0,0.1|=9 acceptable subsets. By comparison, bbound(bayes) returns more than 100 times as many candidate and acceptable subsets. Figure 7 shows that the subsets omitted by lasso(bayes) yet discovered by bbound(bayes) are indeed near-optimal. Further, the restrictive search path of lasso(bayes) does not guarantee greater stability: the smallest acceptable subsets for bbound(bayes) and lasso(bayes) are nearly identical, yet the interquartile range of |𝒮small| across the 20 training/testing splits is 0 under bbound(bayes) and 2 under lasso(bayes). Indeed, by this metric, bbound(bayes) provides the most stable smallest acceptable subset among all search strategies considered.

5. Discussion

We developed decision analysis tools for Bayesian subset search, selection, and (co-) variable importance. The proposed strategy is outlined in Algorithm 1. Building from a Bayesian predictive model ℳ, we derived optimal linear actions for any subset of covariates. We explored the space of subsets using an adaptation of the branch-and-bound algorithm. After filtering to a manageable collection of promising subsets, we identified the acceptable family of near-optimal subsets for linear prediction or classification. The acceptable family was summarized by a new (co-) variable importance metric, namely the frequency with which variables (co-) appear in all, some, or no acceptable subsets, and by individual member subsets, including the “best” and smallest subsets. Using the posterior predictive distribution from ℳ, we developed point and interval estimates for the linear coefficients of any subset. Simulation studies demonstrated better prediction, interval estimation, and variable selection for the proposed approach compared to existing Bayesian and frequentist selection methods, even for high-dimensional data with p>n.

We applied these tools to a large education dataset to study the factors that predict educational outcomes. The analysis identified several keystone covariates that appeared in (almost) every near-optimal subset, including environmental exposures, economic and social factors, and demographic information. The co-variable importance metrics highlighted an interesting phenomenon where certain pairs of covariates belonged to many acceptable subsets, yet rarely appeared in the same acceptable subset. Hence, these variables are effectively interchangeable for prediction, which provides valuable context for interpreting their respective effects. We showed that the smallest acceptable subset offers excellent prediction of end-of-grade reading scores and classification of at-risk students using substantially fewer covariates. The corresponding linear coefficients described new and important effects, for example that greater racial residential isolation among non-Hispanic Black students is predictive of lower reading scores. However, our results also caution against overreliance on any particular subset: we identified over 200 distinct subsets of variables that offer near-optimal out-of-sample predictive accuracy.

Future work will generalize these tools along two directions: the loss functions and the actions. Alternatives to squared error and cross-entropy loss can be incorporated with an IRLS approximation strategy similar to that of Section 2.3, which would maintain the methodology and algorithmic infrastructure of the proposed approach. Similarly, the class of parametrized actions can be expanded to include nonlinear predictors, such as trees or additive models, with acceptable families constructed in the same way.

[Graphic: nihms-1905654-f0001.jpg]

Acknowledgments

Research was sponsored by the Army Research Office (W911NF-20-1-0184) and the National Institute of Environmental Health Sciences of the National Institutes of Health (R01ES028819). The content, views, and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, the North Carolina Department of Health and Human Services, Division of Public Health, the National Institutes of Health, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Appendix A. Approximations for out-of-sample quantities

The acceptable family Aη,ε in (19) is based on the out-of-sample predictive discrepancy metric D˜𝒮1,𝒮2out and the best subset δˆ𝒮min in (18). These quantities both depend on the out-of-sample empirical and predictive losses L𝒮out(k) and L˜𝒮out(k), respectively, in (16). Hence, it is required to compute (i) the optimal action on the training data for each split k, δˆ𝒮k ≔ arg minδ𝒮 Ey˜∣yk L({y˜i}i, δ𝒮), with the loss evaluated at the training indices, and (ii) samples from the out-of-sample predictive distribution y˜ik ~ p(y˜i ∣ yk). Because of the simplifications afforded by (4)-(5) and (11), computing δˆ𝒮k only requires the out-of-sample expectations yˆik ≔ Ey˜∣yk{y˜(x˜i)} or, in the classification setting, hˆik ≔ Ey˜∣yk h(y˜i). For many models ℳ, there is a further simplification: Ey˜∣yk{y˜(x˜i)} = Eθ∣yk[Ey˜∣θ{y˜(x˜i)}], where Ey˜∣θ{y˜(x˜i)} often has an explicit form (such as in regression models). Absent such simplifications, the expectations can be computed by averaging the draws of y˜ik ~ p(y˜i ∣ yk).
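Under squared error loss, the optimal action for a subset reduces to a least-squares regression of the predictive expectations onto the selected columns of the training design. A minimal sketch of this step, assuming `y_hat_train` holds the predictive expectations ŷ described above:

```python
import numpy as np

def optimal_linear_action(X_train, y_hat_train, S):
    """Least-squares projection of the predictive expectations onto the columns in S,
    which yields the optimal linear coefficients under squared error loss."""
    XS = X_train[:, list(S)]
    beta, *_ = np.linalg.lstsq(XS, y_hat_train, rcond=None)
    return beta
```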

Although these terms can be computed by repeatedly re-fitting the Bayesian model ℳ for each training/validation split k=1,…,K, such an approach is computationally demanding. Instead, we apply a sampling-importance resampling (SIR) algorithm to approximate these out-of-sample quantities. Notably, the SIR algorithm requires only a single fit of model ℳ to the complete data, which is already needed for posterior inference, and therefore contributes minimally to the computational cost of the aggregate analysis.

The details are provided in Algorithm 2. By construction, the samples {y˜is˜}s˜=s˜1,…,s˜S˜ are from the out-of-sample predictive distribution p(y˜i ∣ yk). Based on Algorithm 2, it is straightforward to compute draws of D˜𝒮1,𝒮2out for any 𝒮1, 𝒮2 ∈ S, as well as the best subset δˆ𝒮min in (18). Therefore, the acceptable family Aη,ε is readily computable for any η and ε. By default, we use S˜=S/2 SIR samples.
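To illustrate the idea behind the SIR approximation (a hedged sketch under a Gaussian likelihood, not the authors' Algorithm 2): posterior draws from the single full-data fit are reweighted by the reciprocal of the validation-fold likelihood, which approximates the posterior conditioned only on the training fold, and then resampled.

```python
import numpy as np
from scipy.stats import norm

def sir_indices(mu_draws, sigma_draws, y, val_idx, n_resample, rng):
    """mu_draws: (S, n) posterior draws of the fitted means from the full-data fit;
    sigma_draws: (S,) posterior draws of the noise standard deviation.
    Returns indices of resampled posterior draws approximating p(theta | y^(k))."""
    loglik = norm.logpdf(y[val_idx],
                         loc=mu_draws[:, val_idx],
                         scale=sigma_draws[:, None]).sum(axis=1)
    logw = -loglik                      # importance weights ~ 1 / p(y_valid | theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return rng.choice(len(w), size=n_resample, replace=False, p=w)
```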

[Graphic: nihms-1905654-f0002.jpg (Algorithm 2)]

Algorithm 2 recycles computations: steps 1–2(d) are shared among all subsets 𝒮 and loss functions L. Hence, the algorithm is efficient even when the number of candidate subsets |S| is large, and notably it does not require re-fitting the Bayesian model ℳ. Modifications for classification (Section 2.3) are straightforward: steps (c), (d), and (e) are replaced by (c) draw y˜is˜ ~ p(y˜i ∣ θs˜) for s˜=s˜1,…,s˜S˜ and i=1,…,n; (d) compute hˆjk ≔ S˜−1 Σs˜=s˜1,…,s˜S˜ h(y˜js˜) for each training index j; and (e) compute δˆ𝒮k by solving (11) using the training data covariates {xi}, the weights {ω(xi)}, and the pseudo-data {hˆjk}, each restricted to the training indices for split k.

Appendix B. Simulation study for classification

The synthetic data-generating process for classification mimics that for prediction: the only difference is that the data are generated as yi ~indep Bernoulli(πi*) with πi* = 1/{1+exp(−yi*)} and yi* = xi⊤β* as before. For the Bayesian model ℳ, we use a logistic regression model with horseshoe priors, estimated using rstanarm (Goodrich et al., 2018). The competing estimators are constructed as before, now using cross-entropy loss (10) for the proposed approach and the logistic likelihood for the adaptive lasso.
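A sketch of this data-generating process, with hypothetical choices for n, p, and the sparse coefficient vector β*, and with the covariate correlation structure omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
beta_star = np.zeros(p)
beta_star[:5] = 1.0                      # hypothetical sparse signal

X = rng.standard_normal((n, p))          # covariates (correlation structure omitted)
y_star = X @ beta_star                   # linear predictor y_i^*
pi_star = 1.0 / (1.0 + np.exp(-y_star))  # logistic link: pi_i^* = 1/{1 + exp(-y_i^*)}
y = rng.binomial(1, pi_star)             # y_i ~ Bernoulli(pi_i^*)
```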

The classification performance is evaluated using cross-entropy loss for πi* in Figure B.1 (top), using the same competing search methods as in Figure 1. As in the regression case, the proposed bbound(bayes) search procedure returns vastly more subsets in the acceptable family, yet maintains excellent classification performance within this collection. In addition, classification based on 𝒮min and 𝒮small substantially outperforms the adaptive lasso. We also include the 90% interval comparisons in Figure B.1 (bottom), which confirm the results from the prediction scenario: 𝒮small and Zhao et al. (2021) (modified for the logistic case) achieve the nominal coverage with the narrowest intervals.

Figure B.1:

Top: cross-entropy loss for πi*. The boxplots summarize the cross-entropy quantiles for the subsets within each acceptable family, while the vertical lines denote cross-entropy of competing methods. The average size of each acceptable family is annotated. Bottom: Mean interval widths (boxplots) with empirical coverage (annotations) for β*.

Appendix C. Simulation study for sensitivity analysis

We revisit the comparisons in Figure 1 to consider sensitivities to ε and {x˜i}i=1n˜. In Figure C.1, we report results for the acceptable families with ε=0.01 and ε=0.20; the frequentist estimators and post.mean are unchanged from Figure 1. Compared also to the analogous case in Figure 1 with ε=0.10, we see the expected ordering in the cardinalities of the acceptable families: larger values of ε produce a more stringent criterion and therefore fewer acceptable subsets. Notably, the main results are not sensitive to the choice of ε ∈ {0.01, 0.10, 0.20}, and 𝒮small performs exceptionally well in all cases.

Figure C.1:

Root mean squared errors (RMSEs) for predicting y*. The boxplots summarize the RMSE quantiles for the subsets within each acceptable family; here, the acceptable families use ε=0.01 (left) and ε=0.20 (right). The vertical lines denote RMSEs of competing methods and the average size of each acceptable family is annotated. The results suggest only minor sensitivity to the choice of ε.

Next, we again modify the comparisons in Figure 1 to evaluate prediction at a new collection of covariates {x˜i}i=1n˜. We allow variations in the covariate data-generating process (either correlated standard normals from Section 3 or iid Uniform(0,1)) and in whether the observed covariates {xi}i=1n follow the same data-generating process as the target covariates {x˜i}i=1n˜. These configurations generate four scenarios, all with {x˜i}i=1n˜ ≠ {xi}i=1n. The competing acceptable families are each generated using this choice of {x˜i}i=1n˜, and the point predictions are evaluated for y*(x˜i)=x˜i⊤β* for i=1,…,n˜=1000. The results are in Figure C.2. Naturally, performance worsens across the board when the observed and target covariates differ in either their values ({xi}i=1n ≠ {x˜i}i=1n˜) or their distributions. When the target covariates are iid uniform (top right, bottom left), all methods actually perform better, likely due to the light-tailed (bounded) and independent covariate values. Yet perhaps most remarkably, the relative performance among the methods remains consistent: 𝒮small outperforms all competitors in all but one scenario and decisively outperforms the frequentist methods in all cases.

Appendix D. Subset selection for classifying at-risk students

We now focus the analysis on at-risk students via the functional h(y˜i)=I{y˜i ≤ τ0.1} under cross-entropy loss, where τ0.1 is the 0.1-quantile of the reading scores. The posterior predictive variables y˜i derive from the same model ℳ as in Section 4, which eliminates the need for additional model specification, fitting, and diagnostics. However, it is also possible to fit a separate logistic regression model to the empirical functionals h(yi), which we do for the logistic adaptive lasso competitor.
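A minimal sketch of the targeted functional and its evaluation, assuming the at-risk indicator is formed from the empirical 0.1-quantile of the observed reading scores and that predicted probabilities are assessed with cross-entropy loss:

```python
import numpy as np

def h_at_risk(y, q=0.10):
    """h(y) = I{y <= tau_q}, the indicator of falling at or below the q-quantile."""
    tau = np.quantile(y, q)
    return (np.asarray(y) <= tau).astype(float)

def cross_entropy(h, p_hat, eps=1e-12):
    """Average cross-entropy between binary targets h and predicted probabilities."""
    p_hat = np.clip(np.asarray(p_hat), eps, 1 - eps)
    return float(-np.mean(h * np.log(p_hat) + (1 - h) * np.log(1 - p_hat)))
```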

Figure C.2:

Root mean squared errors (RMSEs) for predicting y*(x˜i)=x˜i⊤β* at covariates {x˜i}i=1n˜ with n˜=1000, when the observed covariates {xi}i=1n are correlated standard normals (top) or iid uniforms (bottom) and when the target covariates {x˜i}i=1n˜ have the same distribution (left) or a different distribution (right) from {xi}i=1n. The boxplots summarize the RMSE quantiles for the subsets within each acceptable family. The vertical lines denote RMSEs of competing methods, and the average size of each acceptable family is annotated.

It is most informative to study how the acceptable families A0,0.1 differ for targeted classification relative to prediction. The comparative distributions of D˜𝒮min,𝒮out (Figures 4 and D.1) clearly show that classification admits smaller subsets capable of matching the performance of 𝒮min within 1%.

The variable importance metrics (Figures D.2 and E.3) identify the same keystone covariates as in the prediction setting. However, although there are many more acceptable subsets for classification (|A0,0.1|=1547) than for prediction (|A0,0.1|=1183), each VIincl (j) is generally much smaller for classification. Indeed, the smallest acceptable subset for classification is smaller, |𝒮small|=16, and is accompanied by 21 acceptable subsets of this size. In aggregate, these results suggest that near-optimal linear classification of at-risk students is achievable for a broader variety of smaller subsets of covariates.

Lastly, the selected covariates from 𝒮small are plotted with 90% intervals in Figure D.3. Compared to the prediction version, 𝒮small for classification replaces 1st trimester PM2.5 exposure with acute PM2.5 exposure and omits the interactions Hisp × Blood_lead and Hisp × EconDisadv, but is otherwise the same. The posterior expectations under ℳ are excluded because they are not targeted for prediction of h(y˜i) and therefore are not directly comparable. However, there is some disagreement between 𝒮small and the adaptive lasso, the latter of which excludes several keystone covariates and again suffers from inconsistency between the point and interval estimates.

Figure D.1:

The 80% intervals (bars) and expected values (circles) for D˜𝒮min,𝒮out with D𝒮,𝒮minout (x-marks) under cross-entropy loss for each subset size |𝒮| with 𝒮 ∈ S. We annotate 𝒮min (dashed gray line) and 𝒮small (solid gray line) and jitter the subset sizes for clarity of presentation.

Figure D.2:

Variable importance VIincl(j) for classification of the 0.1-quantile (mk=100).

Figure D.3:

Estimated linear effects and 90% intervals for the variables in 𝒮small based on the proposed approach and the adaptive lasso.

Appendix E. Supporting figures

These figures include the pairwise correlations among the covariates (Figure E.1) and the co-variable importance for prediction (Figure E.2) and classification (Figure E.3).

Figure E.1:

Pairwise correlations among the covariates in the NC education data.

Figure E.2:

Co-variable importance VIincl(j,ℓ) for prediction. For certain pairs of variables (chronic and acute PM2.5 exposure; neighborhood deprivation and racial residential isolation, both at birth and at time of test), it is common for one, but not both, to appear in an acceptable subset.

Figure E.3:

Co-variable importance VIincl(j,ℓ) for classification of the 0.1-quantile.

References

1. Afrabandpey Homayun, Peltola Tomi, Piironen Juho, Vehtari Aki, and Kaski Samuel. A decision-theoretic approach for model interpretability in Bayesian framework. Machine Learning, 109(9):1855–1876, 2020.
2. Barbieri Maria Maddalena and Berger James O. Optimal predictive model selection. Annals of Statistics, 32(3):870–897, 2004.
3. Bashir Amir, Carvalho Carlos M., Hahn P. Richard, and Jones M. Beatrix. Post-processing posteriors over precision matrices to produce sparse graph estimates. Bayesian Analysis, 14(4):1075–1090, 2019.
4. Bernardo José M and Smith Adrian F M. Bayesian Theory, volume 405. John Wiley and Sons, Inc., 2009.
5. Bertsimas Dimitris, King Angela, and Mazumder Rahul. Best subset selection via a modern optimization lens. Annals of Statistics, 44(2):813–852, 2016.
6. Bondell Howard D. and Reich Brian J. Consistent high-dimensional Bayesian variable selection via penalized credible regions. Journal of the American Statistical Association, 107(500):1610–1624, 2012.
7. Breiman Leo. Statistical modeling: the two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3):199–231, 2001.
8. Carvalho Carlos M., Polson Nicholas G., and Scott James G. The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010.
9. Children’s Environmental Health Initiative. Linked Births, Lead Surveillance, Grade 4 End-of-Grade (EoG) Scores [Data set], 2020. doi: 10.25614/COHORT_2000.
10. Datta Jyotishka and Ghosh Jayanta K. Asymptotic properties of Bayes risk for the horseshoe prior. Bayesian Analysis, 8(1):111–132, 2013.
11. Dong Jiayun and Rudin Cynthia. Variable importance clouds: a way to explore variable importance for the set of good models. arXiv preprint arXiv:1901.03209, 2019.
12. Fan Jianqing and Lv Jinchi. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849–911, 2008.
13. Fan Jianqing and Lv Jinchi. A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20(1):101–148, 2010.
14. Furnival George M and Wilson Robert W. Regressions by leaps and bounds. Technometrics, 42(1):69–79, 2000.
15. Gatu Cristian and Kontoghiorghes Erricos John. Branch-and-bound algorithms for computing the best-subset regression models. Journal of Computational and Graphical Statistics, 15(1):139–156, 2006.
16. Gelfand AE, Dey DK, and Chang H. Model determination using predictive distributions, with implementation via sampling-based methods (with discussion). Bayesian Statistics 4, 4:147–167, 1992.
17. Goodrich Ben, Gabry Jonah, Ali Imad, and Brilleman Sam. rstanarm: Bayesian applied regression modeling via Stan, 2018. URL http://mc-stan.org/.
18. Goutis Constantinos and Robert Christian P. Model choice in generalised linear models: a Bayesian approach via Kullback-Leibler projections. Biometrika, 85(1):29–37, 1998.
19. Hahn P. Richard and Carvalho Carlos M. Decoupling shrinkage and selection in Bayesian linear models: a posterior summary perspective. Journal of the American Statistical Association, 110(509):435–448, 2015.
20. Hahn P. Richard, He Jingyu, and Lopes Hedibert F. Efficient sampling for Gaussian linear regression with arbitrary priors. Journal of Computational and Graphical Statistics, 28(1):142–154, 2019.
21. Hastie Trevor, Tibshirani Robert, and Friedman Jerome. The Elements of Statistical Learning, volume 2. Springer, 2009.
22. Hastie Trevor, Tibshirani Robert, and Tibshirani Ryan. Best subset, forward stepwise or lasso? Analysis and recommendations based on extensive comparisons. Statistical Science, 35(4):579–592, 2020.
23. Hosmer David W, Jovanovic Borko, and Lemeshow Stanley. Best subsets logistic regression. Biometrics, pages 1265–1270, 1989.
24. Huber Florian, Koop Gary, and Onorante Luca. Inducing sparsity and shrinkage in time-varying parameter models. Journal of Business and Economic Statistics, pages 1–48, 2020.
25. Ishwaran Hemant and Rao J. Sunil. Spike and slab variable selection: frequentist and Bayesian strategies. Annals of Statistics, 33(2):730–773, 2005.
26. Jiang Jiming, Rao J Sunil, Gu Zhonghua, and Nguyen Thuan. Fence methods for mixed model selection. The Annals of Statistics, 36(4):1669–1692, 2008.
27. Jin Shiqiang and Goh Gyuhyeong. Bayesian selection of best subsets via hybrid search. Computational Statistics, pages 1–17, 2020.
28. Kissel Nicholas and Mentch Lucas. Forward stability and model path selection. arXiv preprint arXiv:2103.03462, 2021.
29. Kowal Daniel R. Fast, optimal, and targeted predictions using parametrized decision analysis. Journal of the American Statistical Association, 2021.
30. Kowal Daniel R and Bourgeois Daniel C. Bayesian function-on-scalars regression for high-dimensional data. Journal of Computational and Graphical Statistics, 29(3):1–10, 2020.
31. Kowal Daniel R., Bravo Mercedes, Leong Henry, Griffin Robert J., Ensor Katherine B., and Miranda Marie Lynn. Bayesian variable selection for understanding mixtures in environmental exposures. Statistics in Medicine, 40(22):4850–4871, 2021.
32. Lei Jing. Cross-validation with confidence. Journal of the American Statistical Association, pages 1–53, 2019.
33. Lei Jing, G’Sell Max, Rinaldo Alessandro, Tibshirani Ryan J, and Wasserman Larry. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, 2018.
34. Liang Faming, Song Qifan, and Yu Kai. Bayesian subset modeling for high-dimensional generalized linear models. Journal of the American Statistical Association, 108(502):589–606, 2013.
35. Lindley DV. The choice of variables in multiple regression. Journal of the Royal Statistical Society: Series B (Methodological), 30(1):31–53, 1968.
36. Miller Alan J. Selection of subsets of regression variables. Journal of the Royal Statistical Society: Series A (General), 147(3):389–410, 1984.
37. Nott David J. and Leng Chenlei. Bayesian projection approaches to variable selection in generalized linear models. Computational Statistics and Data Analysis, 54(12):3227–3241, 2010.
38. Piironen Juho, Paasiniemi Markus, and Vehtari Aki. Projective inference in high-dimensional problems: prediction and feature selection. Electronic Journal of Statistics, 14(1):2155–2197, 2020.
39. Polson Nicholas G. and Scott James G. Shrink globally, act locally: sparse Bayesian regularization and prediction. Bayesian Statistics 9, pages 501–538, 2010.
40. Puelz David, Hahn P. Richard, and Carvalho Carlos M. Variable selection in seemingly unrelated regressions with random predictors. Bayesian Analysis, 12(4):969–989, 2017.
41. Ribeiro Marco Tulio, Singh Sameer, and Guestrin Carlos. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, 2016.
42. Semenova Lesia and Rudin Cynthia. A study in Rashomon curves and volumes: a new perspective on generalization and model simplicity in machine learning. arXiv preprint arXiv:1908.01755, 2019.
43. Tran Minh Ngoc, Nott David J., and Leng Chenlei. The predictive lasso. Statistics and Computing, 22(5):1069–1084, 2012.
44. Vehtari Aki and Ojanen Janne. A survey of Bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys, 6(1):142–228, 2012.
45. Woody Spencer, Carvalho Carlos M., and Murray Jared S. Model interpretation through lower-dimensional posterior summarization. Journal of Computational and Graphical Statistics, pages 1–9, 2020.
46. Wu Shaohua, Harris TJ, and McAuley KB. The use of simplified or misspecified models: linear case. The Canadian Journal of Chemical Engineering, 85(4):386–398, 2007.
47. Zhao Sen, Witten Daniela, and Shojaie Ali. In defense of the indefensible: a very naive approach to high-dimensional inference. Statistical Science, 36(4):562–577, 2021.
48. Zou Hui. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006.
