Abstract
This paper investigates the appropriateness of integrating flexible propensity score modeling (nonparametric or machine learning approaches) into semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted Minimum Loss-based Estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted Minimum Loss-based Estimation (TMLE) and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for estimating the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
Keywords: C-TMLE, IPTW, variable reduction
1 Introduction
In the causal inference and censored data literature, the propensity score (Rosenbaum and Rubin, 1983) is defined as the conditional probability of treatment or of remaining uncensored given a set of measured covariates. Marginal treatment effects such as the average treatment effect (ATE) can be estimated by weighting outcomes by the inverse of the estimated propensity score (Robins, Rotnitzky, and Zhao, 1995, Robins, Hernán, and Brumback, 2000). This estimation method is called Inverse Probability of Treatment Weighting (IPTW). While this and other propensity score estimation techniques have been gaining in usage in the medical and scientific literature (Luo, Gardiner, and Bradley, 2010, Thoemmes and Kim, 2011), confusion and a lack of guidelines remain as to how variable selection for the propensity score model should and should not be carried out (Vansteelandt, Bekaert, and Claeskens, 2012).
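As a minimal illustration of the weighting (our sketch, not from the paper: simulated data with a single binary confounder, and propensity scores estimated nonparametrically by stratum means), a Hájek-style IPTW estimator of the treatment-specific means can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Single binary confounder W affecting both treatment A and outcome Y.
W = rng.binomial(1, 0.5, n)
A = rng.binomial(1, np.where(W == 1, 0.7, 0.3))
Y = 1.0 + 0.5 * A + 1.0 * W + rng.normal(0.0, 1.0, n)

# Nonparametric propensity score: P(A = 1 | W) estimated per stratum of W.
ps = np.where(W == 1, A[W == 1].mean(), A[W == 0].mean())

# Hajek-style IPTW estimates of the treatment-specific means E(Y^1), E(Y^0).
w1, w0 = A / ps, (1 - A) / (1 - ps)
mu1 = np.sum(w1 * Y) / np.sum(w1)
mu0 = np.sum(w0 * Y) / np.sum(w0)

naive = Y[A == 1].mean() - Y[A == 0].mean()   # confounded comparison
print("IPTW ATE:", mu1 - mu0, " naive difference:", naive)  # true ATE is 0.5
```

The unweighted difference in means is biased away from the true effect of 0.5 because treated subjects are disproportionately drawn from the high-outcome stratum, while the inverse weighting recovers it.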
In general, unbiased or consistent estimation of the parameter of interest requires full or partial knowledge of the underlying causal structure and time-ordering of the variables (Robins, 2001, Hernán, Hernández-Diaz, Werler, and Mitchell, 2002) in order to preselect all potential confounders and avoid post-treatment variables. Some experts have claimed that controlling for the widest possible set of pre-treatment variables protects against unobserved confounding (Rubin, 2009, Schneeweiss, Rassen, Glynn, Avorn, Mogun, and Brookhart, 2009). However, VanderWeele and Shpitser (2011) show that controlling for all pre-treatment variables may not lead to correct confounder control even if a sufficient confounder set has been observed. They show that even with partial ignorance of the causal structure of the pre-treatment variables, adjusting for all observed variables that cause treatment or outcome (including common causes) is sufficient to adjust for confounding bias (if any such sufficient set exists among the observed variables). Both of these selection schemes may potentially lead to an excessively large set of potential confounders, possibly resulting in inflated variance or an inability to fit the propensity score model using standard techniques due to the curse of dimensionality. In the applied literature, variable reduction methods are often used within the propensity score model (Judkins, Morganstein, Zador, Piesse, Barrett, and Mukhopadhyay, 2007, Li, Evans, and Hser, 2010). Many authors proposed machine learning methods to optimize the fit of the propensity score model (e.g. Westreich, Lessler, and Funk 2010, Lee, Lessler, and Stuart 2009, Setoguchi, Schneeweiss, Brookhart, Glynn, and Cook 2008, Austin, Tu, Ho, Levy, and Lee 2013). 
However, as we show theoretically and through simulation, flexible modeling methods for estimation of the propensity score can perform poorly, as they will primarily adjust for the variables most strongly correlated with the treatment.
For the estimation of the marginal treatment-specific mean, double robust estimators (Rotnitzky and Robins, 1995a, Robins et al., 1995) are a class of methods that require fitting both the propensity score model and a model for the expectation of the outcome conditional on treatment and covariates. These estimators are called double robust because if either of these two models is correctly specified, the estimator will be consistent for the parameter of interest. One example of a double robust method (or category of methods) that is also semiparametric efficient is Targeted Minimum Loss-based Estimation (TMLE; van der Laan and Rubin 2006). As with the other double robust methods, TMLE requires estimation of both the conditional mean outcome and the propensity score. Many papers on TMLE encourage implementation with flexible methods for these models in order to avoid model misspecification (van der Laan and Rubin, 2006, Zheng and van der Laan, 2011, van der Laan and Rose, 2011). Under model uncertainty, TMLE implemented with the ensemble-learning method known as Super Learner (Polley and Van der Laan, 2010) has been shown to produce superior results compared with IPTW and with TMLE implemented with generalized linear models (Porter, Gruber, van der Laan, and Sekhon, 2011, Schnitzer, van der Laan, Moodie, and Platt, 2014). However, confounder uncertainty and selection have not been fully evaluated in the data-adaptive TMLE context.
Asymptotically, IPTW and TMLE are both unbiased when the propensity score model, conditional on a sufficient covariate set to control for confounding, is correctly specified. TMLE is also consistent when the model for the outcome conditional on a sufficient covariate set is correctly specified. TMLE is asymptotically efficient when both models are correctly specified. It is also known that the minimal variance bound will vary depending on the exclusion restrictions placed on the covariate space (Hahn, 1998, Rotnitzky, Li, and Li, 2010). For consistent inference using double-robust estimators, the propensity score and outcome models need to be specified in such a way that the combined models collaboratively adjust for a sufficient confounder set (van der Laan and Gruber, 2010). In particular, collaborative theory suggests that when these models jointly contain a sufficient confounder set, the double-robust estimator might be consistent (even if neither model contains a sufficient confounder set on its own). This collaborative property is exploited by the procedure described as Collaborative Targeted Minimum Loss-based Estimation (C-TMLE; van der Laan and Gruber 2010, Gruber and van der Laan 2010a). C-TMLE is a stagewise variable selection procedure using TMLE updates to produce a list of candidate estimates. It then uses cross-validated estimates of a loss function to select the optimal estimate from the list.
Several authors have proposed new data-driven procedures to better target the causal quantity of interest. De Luna, Waernbaum, and Richardson (2011) described the necessary assumptions for the existence and identification of minimal sufficient adjustment sets of confounders in the nonparametric setting. They proposed two generic variable selection algorithms to obtain such a minimal confounder set by iteratively testing the conditional independence of covariates (a version of which was implemented by Persson, Häggström, Waernbaum, and de Luna 2013). Vansteelandt et al. (2012) proposed a stochastic variable selection procedure that targets the minimization of the mean squared error (MSE), which is approximated through cross-validation as in Brookhart and van der Laan (2006). Other confounder selection procedures for similar contexts have been recently proposed by Crainiceanu, Dominici, and Parmigiani (2008), Wang, Parmigiani, and Dominici (2012) (also see critical commentary), and Cefalu, Dominici, and Parmigiani (2014). While the High-Dimensional Propensity Score methodology of Schneeweiss et al. (2009) is primarily intended to reduce residual confounding bias by searching for additional potential confounders amongst medical codes in administrative databases, their approach could potentially be used as a covariate selection strategy when the number of adjusted-for binary covariates needs to be reduced. VanderWeele and Shpitser (2011) propose to reduce a non-minimal but sufficient confounding set using backwards selection by sequentially discarding variables that are independent of the outcome conditional on the remaining set of covariates.
In this article, we evaluate the usage of data-adaptive estimation for the nuisance models of IPTW and TMLE in addition to the performance of C-TMLE. In Section 2 we describe the goals and framework of variable selection procedures in causal inference. In order to demonstrate the consequences of certain variable selection approaches, in Section 3 we illustrate the asymptotic variance inflation of IPTW under the inclusion of an “instrumental variable” (i.e. a pure cause of treatment). In Section 4, we provide descriptions of collaborative double robustness and the TMLE and C-TMLE procedures for the marginal treatment-specific mean. In Section 5, we simulate several challenging scenarios and evaluate the empirical performance of C-TMLE versus other methods with an emphasis on the usage of data-adaptive methods. We review our results in the Discussion (Section 6).
2 Causal variable selection framework
2.1 Notation and assumptions
Suppose we have n independently and identically distributed observations O = (X,A,Y) (with subscripts i added to denote an individual’s particular value). Let A be an indicator of whether a subject received a treatment of interest. A is therefore binary and takes on realizations in {0,1}. Let Y be the univariate outcome of interest. Let X be the possibly multidimensional set of variables that might confound an estimate of the effect of interest. In the Neyman-Rubin counterfactual framework (Rubin, 1974), for a given individual let Ya be defined as the outcome that would have been observed if the individual had been treated according to A = a. For simplicity, the target of inference (or “target parameter”) is the marginal population mean of the outcome under a given treatment, E(Ya), defined as the mean of Y in the population had every individual been treated according to A = a. The ATE is defined as E(Y1) – E(Y0), the difference in population mean had the entire population been treated with option 1 compared to 0.
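As a purely illustrative sketch of this notation (our example, not the paper's), one can simulate both potential outcomes for every subject and verify that the ATE is the difference of the counterfactual means, while the observed data reveal only one of the two outcomes per subject:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Potential outcomes for every individual; the individual effect is set to 2.
W = rng.normal(0.0, 1.0, n)
Y0 = W + rng.normal(0.0, 1.0, n)
Y1 = Y0 + 2.0

ate = Y1.mean() - Y0.mean()        # E(Y^1) - E(Y^0) = 2 by construction

# The observed data reveal only one potential outcome per subject:
A = rng.binomial(1, 0.5, n)        # randomized treatment assignment
Y = np.where(A == 1, Y1, Y0)
print(ate)
```

Under randomization the observed difference in means Y[A == 1].mean() - Y[A == 0].mean() also approximates the ATE; with confounded treatment assignment it generally would not.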
In order to consistently estimate the marginal population mean, a set of pre-treatment variables must be measured that is sufficient to control for confounding. As described in the Neyman-Rubin counterfactual framework, unbiased (or consistent, depending on the estimator) estimation requires the existence of a measured variable set X (or a summary of X) that satisfies the ignorability requirement: conditioning on X results in independence between the treatment-specific potential outcome and the treatment, i.e. Ya∐A | X (Rosenbaum and Rubin, 1983). In the directed acyclic graph (DAG) framework, identifiability of a causal quantity (i.e. ignorability) is satisfied when the set X blocks every path between A and Y that contains an arrow into A (Pearl, 2009a, Def 3.3.1). In this paper, we assume that ignorability holds on the full set of variables X but are interested in the situation in which a subset W ⊂ X also satisfies this requirement so that Ya∐A | W as well. We will describe any set of variables that leads to ignorability of the treatment-outcome relationship as “sufficient”, as in sufficient to adjust for confounding (Greenland, Pearl, and Robins, 1999). Throughout we assume that the Stable Unit Treatment Value Assumption (Rubin, 1980) holds.
In order to be able to coherently define the parameter of interest, we must also assume positivity, meaning that every subject in the population could have hypothetically been assigned either level of treatment (Holland, 1986). This corresponds with the assumption that the probability of receiving treatment a must be strictly greater than zero for every level of the covariates in a sufficient adjustment set, i.e. P(A = a | W) > 0. Even if positivity holds, practical positivity violations may occur if for some values of the covariates, no or very few units (relative to the sample size) treated with a have been observed. This leads to estimated propensity score values of approximately zero (Westreich and Cole, 2010, Bembom, Fessel, Shafer, and van der Laan, 2008).
In this paper, we assume that the initial set X is sufficient. (If ignorability does not hold for X nor for any subset of X, then the estimation will inevitably be inconsistent.) We follow Crainiceanu et al. (2008) in assuming that, for all subsets W ⊂ X, any superset of W is also sufficient to control for confounding bias. For instance, we assume that there are no colliders of two unmeasured variables exclusively causing the treatment and outcome, respectively (see Section 4.4 where we briefly discuss M-bias). However, we allow situations where non-supersets of a given variable subset may be sufficient to correct for bias, because controlling for different non-nested sets of covariates can be sufficient for confounding control (Hernán et al., 2002).
2.2 The motivation for variable reduction
When using a propensity score approach, the first step in causal variable selection must begin with an expert selection of all pre-treatment causes of the outcome and treatment (VanderWeele and Shpitser, 2011). The possibility of unbiased inference relies on the assumption that experts have identified a (possibly non-minimal) set that will sufficiently control for confounding bias. This set might be large and in particular, contain instruments and pure causes of the outcome. Attempting to control for a high-dimensional variable set can become problematic for three reasons: 1) Inability to fit the propensity score model due to the “curse of dimensionality”, 2) Artificial positivity violations caused by strong predictors of the treatment that do not reduce confounding bias and 3) Variance inflation caused by the inclusion of instruments or weak confounders that strongly predict treatment. The first issue relates to the inability to fit a given parametric model when the size of the variable set is large relative to the sample size. The second issue relates to the positivity assumption. Suppose that positivity and practical positivity both hold conditional on the set W. Now consider an additional variable I that is strongly predictive of the treatment A. Suppose that I is so predictive of treatment that within some stratum of I the probability of receiving treatment is nearly zero. In finite samples, we might estimate that the probability of receiving treatment in that stratum is zero and therefore determine that we have a positivity violation and cannot proceed. We refer to this as an “artificial” positivity violation because it arises due to the unneeded additional variable I but does not occur with the sufficient set W. The third issue describes the inflation of the variance when instruments are included as covariates. 
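The second problem can be made concrete with a short simulation of our own (the variable names and probabilities are illustrative): a strong pure cause of treatment I yields near-zero estimated propensity scores in one of its strata, and therefore extreme inverse weights, even though I contributes nothing to confounding control:

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 1_000

# I strongly predicts treatment but plays no role in confounding.
I = rng.binomial(1, 0.5, n)
A = rng.binomial(1, np.where(I == 1, 0.02, 0.5))

# Nonparametric propensity score within each stratum of I:
ps = np.where(I == 1, A[I == 1].mean(), A[I == 0].mean())

print("smallest estimated propensity:", ps.min())
print("largest IPTW weight among treated:", (A / ps)[A == 1].max())
```

In a smaller sample, the I = 1 stratum may contain no treated subjects at all, giving an estimated propensity of exactly zero, the "artificial" violation described above.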
Including instruments in the estimation of the propensity score can increase the variance in finite-sample analysis (Brookhart, Schneeweiss, Rothman, Glynn, Avorn, and Stürmer, 2006, Lefebvre, Delaney, and Platt, 2008), asymptotically (Rotnitzky and Robins, 1995b), and even increase bias when there is residual unmeasured confounding (Wooldridge, 2009, Pearl, 2009b). Conversely, including pure predictors of the outcome can improve both finite-sample (Brookhart et al., 2006, Shrier, Platt, and Steele, 2007) and asymptotic precision (Rotnitzky and Robins, 1995b) of the IPTW estimator.
2.3 Inferential objectives and variable selection criteria
Given a variable set X, there may be different finite-sample objectives for a statistical variable selection algorithm. For example, one might favor targeting the smallest possible finite-sample MSE. Alternatively, some causal inference experts would rather target a minimization of finite-sample bias, achievable by including the largest possible adjustment set with the goal of fully controlling for confounding. If one or more sufficient sets Wr ⊂ X, r = 1, …, R, exist (so that no confounding bias remains after adjustment), most analysts would then want to select a sufficient subset that minimizes the finite-sample variance.
Procedures that involve exclusively optimizing the fit of the propensity score would not be expected to fulfill these criteria because
1) variable selection on the propensity score model may remove true confounders of the treatment and outcome if the confounders are relatively weak predictors of treatment and the sample size is too small;
2) they will be more likely to select instruments into the propensity score model, which might increase the variance without reducing bias; and
3) they will not select pure predictors of the outcome that might reduce the variance in the estimation of the parameter of interest.
Since one is primarily interested in selecting variables that are predictors of the outcome, one might propose a selection scheme based uniquely on the conditional outcome model E(Y | A,X) (for instance, selecting variables by running a linear regression on Y). This is also suboptimal because such a method might remove confounders that are not strongly associated with the outcome whose inclusion might still reduce bias and not increase the variance (Vansteelandt et al., 2012).
We agree with the assertion that in the estimation of a causal quantity it is preferable to target the estimation of the parameter of interest rather than a specific model fit. One criterion for causal variable selection is the MSE of the quantity of interest (Brookhart and van der Laan, 2006, Vansteelandt et al., 2012). Brookhart and van der Laan (2006) use a cross-validation procedure to estimate this quantity and use it to guide variable selection.
Using locally efficient semiparametric models may aid in the goal of minimizing finite-sample variance. By reducing the extent of parametric assumptions, we might also be able to limit the asymptotic and finite-sample bias caused by model misspecification. Both TMLE and C-TMLE target the efficient influence function (see van der Laan and Rubin (2006), van der Laan and Gruber (2010) and Section 4.1 of this article). This is done by choosing models for E(Y | A,X) and the propensity score that converge to quantities that solve the efficient influence function equation. This leads to consistency of the estimator for the parameter of interest. C-TMLE sequentially adds variables to the propensity score model, and it is assumed that the complete set is sufficient to produce consistency of a TMLE. C-TMLE balances consistency, which is ensured with sufficient covariate additions, with low finite-sample variance. It does so using a loss function for E(Y | A,X), the relevant part of the likelihood for estimating E(Ya). The expectation of this loss function is minimized at the true E(Y | A,X). C-TMLE selects the (possibly lower-dimensional) propensity score model that minimizes the cross-validated loss function.
3 Characterizing asymptotic variance inflation from the inclusion of an instrumental variable
3.1 Large-sample variance calculations
Our goal is to demonstrate the large-sample variance inflation obtained from conditioning on an instrumental variable by characterizing the results using a simple case (full mathematical details available in the Supplementary Materials). In general, it is known that including a pure predictor of treatment in the modeling procedure increases the minimal variance bound (Rotnitzky et al., 2010) and the IPTW asymptotic variance (Rotnitzky and Robins, 1995b) and therefore may result in less efficient estimation.
For this section alone, suppose that we observe data (L,A,Y) where L is a single binary baseline covariate, A is binary treatment, and Y is the (unrestricted) outcome of interest. We are interested in estimating μ = E(Y1) where the superscript indicates the counterfactual under A = 1. Suppose also that Y1∐A without any conditioning. This means that μ can be estimated without adjusting for L. We are interested in comparing the asymptotic variance of causal estimators including and excluding L as a covariate under different independence assumptions. Let q = P(L = 1). Let g(1) = P(A = 1|L = 1) > 0, and g(0) = P(A = 1|L = 0) > 0. Let g = P(A = 1) = qg(1)+(1 – q)g(0) > 0. Let σ2 = Var(Y1). Let PnV indicate the empirical average of a variable V over all observations. We write μ̂n for the estimate of the population mean under treatment.
If L is not included in estimation, g can be estimated nonparametrically using gn = PnA. The IPTW estimating equation without including L as a covariate is Pn(AY/gn) – μ = 0. Therefore, μ̂n = PnAY/PnA = h1(PnAY, PnA), where we define the function h1(x1,x2) = x1/x2.
Suppose we use IPTW without adjusting for L. We can use the Delta method with function h1 to derive the asymptotic variance. Notice that because of the Central Limit Theorem, √n{(PnAY, PnA) – (E(AY), E(A))} converges in distribution to a bivariate normal with mean zero and variance-covariance matrix Σ, so that the Delta method leads to √n(μ̂n – μ) = √n[h1(PnAY, PnA) – h1{E(AY), E(A)}] converging in distribution to a mean-zero normal with variance equal to the matrix inner product of Σ by the gradient of h1. Noting that E(AY) = gμ because Y1∐A, the gradient evaluated at (gμ, g) is ∇h1 = (1/g, –μ/g). Taking the matrix inner product of Σ by the gradient ∇h1, we get that the asymptotic variance of √n(μ̂n – μ) is σ2/g = Var(Y1)/P(A = 1).
Alternatively, consider the IPTW estimator when the baseline variable L is included as a covariate in the propensity score model. Define gn(1) = PnAL/PnL and gn(0) = PnA(1 – L)/Pn(1 – L), nonparametric estimates of P(A = 1 | L = 1) and P(A = 1 | L = 0), respectively. If L is included in estimation, IPTW is defined by the estimating equation Pn{AY/gn(L)} – μ = 0, which can be rewritten to express the estimator as μ̂nL = PnL(PnALY/PnAL) + (1 – PnL){PnA(1 – L)Y/PnA(1 – L)}. Define the function h2 as h2(x1, x2, x3, x4, x5) = x5x1/x3 + (1 – x5)x2/(x4 – x3). Note that this IPTW estimator can be expressed as μ̂nL = h2(PnALY, PnA(1 – L)Y, PnAL, PnA, PnL). The resulting asymptotic inference will depend on the causal relationship between L and the variables A and Y1.
3.2 Characterizing the variance inflation
Now suppose that L is an instrument, so that it influences A, but not Y1. It is easy to see that if we know that the usual no unmeasured confounding assumption holds with L, that is, A∐Y1|L, and that Y1∐L, then we also have that Y1∐A. Therefore, we do not need to include L in the inverse probability of treatment weights, but we could if we were not sure about the independence between Y1 and A.
Including L as a covariate in the propensity score leads to consistent and asymptotically normal inference as long as positivity is not violated, that is, if g(1) ≠ 0 and g(0) ≠ 0. However, it leads to suboptimal inference by inflating the large-sample variance relative to that of the estimator that excludes L.
By the Central Limit Theorem, √n{(PnALY, PnA(1 – L)Y, PnAL, PnA, PnL) – (E(ALY), E{A(1 – L)Y}, E(AL), E(A), E(L))} converges in distribution to a multivariate normal with mean zero and variance-covariance matrix T, where T is the 5×5 variance-covariance matrix for (ALY, A(1 – L)Y, AL, A, L). By taking the matrix inner product of T by the gradient ∇h2, we get that the asymptotic variance of √n(μ̂nL – μ) is σ2{q/g(1) + (1 – q)/g(0)}.
The large-sample variance inflation obtained by including L (i.e. the asymptotic relative efficiency) is then σ2{q/g(1) + (1 – q)/g(0)} divided by σ2/g, which simplifies to the product {qg(1) + (1 – q)g(0)} × {q/g(1) + (1 – q)/g(0)}.
This inflation is independent of the distribution of Y (beyond the initial independence assumptions). For 0 < q < 1, this product is equal to 1 only if g(0) = g(1), i.e. in the case where A is equivalently distributed in both strata of L. For any fixed q, the expression is never less than 1; to see this, one can reparametrize by δ = g(1)/g(0), which gives 1 + q(1 – q)(δ + 1/δ – 2), and use basic calculus to check the minimum and second derivative. This indicates that including L in the propensity score will never decrease the asymptotic variance. For example, setting q = 0.5, the variance inflation in terms of g(0) and g(1) can be visualized as in Figure 1(a). From this plot, we see that when g(0) and g(1) are close, the variance inflation is minimized, but as they become increasingly different, the inflation increases unboundedly.
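The inflation factor can be computed directly. The helper below (our sketch) implements {qg(1) + (1 – q)g(0)}{q/g(1) + (1 – q)/g(0)}, which equals 1 + q(1 – q)(δ + 1/δ – 2) under the reparametrization δ = g(1)/g(0):

```python
import numpy as np

def variance_inflation(q, g0, g1):
    """Asymptotic variance inflation of IPTW from adjusting for a binary
    instrument L, with q = P(L=1), g0 = P(A=1|L=0), g1 = P(A=1|L=1)."""
    return (q * g1 + (1 - q) * g0) * (q / g1 + (1 - q) / g0)

print(variance_inflation(0.5, 0.4, 0.4))   # g0 == g1: no inflation (value 1)
print(variance_inflation(0.5, 0.4, 0.04))  # strong instrument: about 3.02
```

Evaluating the function on a grid of (q, g0, g1) values reproduces the qualitative behavior of Figure 1: no inflation at δ = 1, and unbounded growth as δ approaches 0 or infinity.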
Figure 1.

Large-sample variance inflation (VI) from including a binary instrument in the IPTW model (a) when varying the probabilities of treatment g(0) and g(1) while setting q = 0.5 and (b) when varying the ratio of treatment probabilities δ = g(1)/g(0) and the probability q of having the instrument characteristic.
We reparametrized the variance inflation using δ = g(1)/g(0), a measure of instrument strength (with corresponding results for the reciprocal 1/δ). In Figure 1(b), we plot the inflation while allowing δ and q to vary. From this graph, one can observe that when δ is close to 1, corresponding to no effect (or correlation) of L on A, there is no inflation. The variance inflation escalates unboundedly as δ goes to zero and, likewise, as δ approaches infinity. Note that both very small and very large values of δ > 0 correspond to a strong instrument. For fixed δ, there is a local maximum at q = 1/2, meaning that an evenly distributed baseline instrument maximizes the variance inflation. For extremely low or high prevalence of L (i.e. q close to 0 or 1), there is little to no variance inflation.
The above derivation uses asymptotic approximations of the variance. To see whether the results apply in finite samples, we undertook a simulation study to estimate the variance inflation at various sample sizes n and instrumental variable strengths δ. For each size of n and each value of δ, we generated 5,000 datasets with binary variables O = (L,A,Y). The instrument L was generated according to probability q = P(L = 1) = 0.5. The treatment variable A was generated conditional on L with P(A = 1 | L) = expit{log(0.5) + log(β)L}. The binary outcome Y was generated according to the probability P(Y = 1 | A) = 0.4 + 0.2A. The values of β were chosen to correspond with the values δ = 0.1, 0.2, 0.3, 0.5, so that the instrumental variables were generated with decreasing strength. Figure 2 depicts the results of this simulation study. This graph shows that even for finite samples, the expected variance inflation is close to the asymptotic inflation. Therefore, the penalty from including an instrumental variable in this example may be adequately represented using these asymptotic results.
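A compact version of this Monte Carlo check can be written as follows (our code, not the paper's; we interpret β as the treatment odds ratio associated with L, which is an assumption about the parameterization, and illustrate with a single strong instrument):

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def mc_variance_inflation(n, beta, reps=2000, q=0.5, seed=3):
    """Monte Carlo variance ratio of IPTW estimates of E(Y^1) computed with
    and without the instrument L in the propensity score."""
    rng = np.random.default_rng(seed)
    est_adj, est_unadj = [], []
    for _ in range(reps):
        L = rng.binomial(1, q, n)
        A = rng.binomial(1, expit(np.log(0.5) + np.log(beta) * L))
        Y = rng.binomial(1, 0.4 + 0.2 * A)
        est_unadj.append(np.mean(A * Y) / np.mean(A))      # marginal weights
        gL = np.where(L == 1, A[L == 1].mean(), A[L == 0].mean())
        if np.any(gL == 0):
            continue   # artificial positivity violation: drop the replicate
        est_adj.append(np.mean(A * Y / gL))                # L-adjusted weights
    return np.var(est_adj) / np.var(est_unadj)

print(mc_variance_inflation(500, 0.1))   # well above 1 for a strong instrument
```

Repeating this over a grid of n and β values reproduces the pattern summarized in Figure 2: the finite-sample ratio tracks the asymptotic inflation, and replicates with empty treatment cells must be dropped for the strongest instruments.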
Figure 2.

Simulation: True large-sample and Monte-Carlo-estimated finite-sample variance inflation from including the instrumental variable. Each point on the graph was estimated using 5,000 generated datasets and is an estimate of the true variance inflation at sample size n. The dotted lines represent the asymptotic variance inflation at each value of δ. Due to practical positivity violations in at least one generated dataset, no values were plotted for smaller sample sizes of the strongest instruments.
In the Supplementary Materials, we follow the same procedure while instead assuming that L is a pure cause of the outcome. We show that adjusting for L under these assumptions leads to increased large-sample efficiency.
4 Collaborative adjustment and methods
4.1 Targeted Minimum Loss-based Estimation
Targeted Minimum Loss-based Estimation (TMLE) is a general framework used to produce semiparametric efficient and double robust plug-in estimators (van der Laan and Rubin, 2006). TMLE begins with the estimation of the relevant component of the underlying likelihood. This estimate is then updated in order to set the empirical mean of the efficient influence function for the target parameter equal to zero. When this occurs, the resulting estimator inherits the properties of semiparametric efficiency (in the class of regular, asymptotically linear estimators) and double robustness (van der Laan and Robins, 2003).
TMLE requires that the target parameter of interest ψ can be expressed as a smooth function of a component of the underlying likelihood. For instance, the parameter of interest may be the marginal mean outcome under treatment a, so that ψ = E(Ya). Let the true conditional mean outcome for Y given X and A = a be denoted Q̄(a,X) = E(Y | X, A = a). A plug-in estimator for the mean can be defined by specifying a model for Q̄(a,X), and then taking an empirical mean over the predicted values for all subjects (a simple case of G-Computation, Robins 1986). However, if the model for Q̄(a,X) is misspecified, this method will be biased. TMLE begins with an estimate of Q̄(a,X) and updates the fit to reduce bias in the estimation of E(Ya).
The TMLE algorithm is designed to set the empirical mean of the efficient influence function (Hampel, 1974, Tsiatis, 2006) of ψ equal to zero. Let g(a | X) = P(A = a | X) denote the true propensity score for treatment A = a. For the marginal mean ψ = E(Ya), the efficient influence function D is a function of Q̄(a,X) and g(a | X). Specifically, it is equal to D(O) = I(A = a){Y – Q̄(a,X)}/g(a | X) + Q̄(a,X) – ψ (van der Laan and Robins, 2003, Section 1.6). The TMLE procedure begins with an estimate Q̄n(a,X) of Q̄(a,X), and then updates this estimate to produce Q̄n*(a,X). The individual updated outcome predictions are denoted Q̄n*(a,Xi). We then set ψn = PnQ̄n*(a,X), the TMLE estimate of the parameter of interest. This estimate then solves the equation PnD = 0, with D evaluated at Q̄n*, the estimated propensity score, and ψn. An estimation procedure that satisfies PnD = 0 results in asymptotically unbiased, locally efficient and double robust estimation of ψ (van der Laan and Robins, 2003).
Focusing on the estimation of ψ = E(Ya) with a continuous outcome Y, without loss of generality assume the outcome variable Y is bounded in [0,1]. If this is not the case, scale a bounded continuous outcome variable by subtracting off the lower bound and dividing by the difference in the bounds. This scaling is needed to improve the stability of the estimator (Gruber and van der Laan, 2010b). If the outcome is unbounded, one might use values somewhat below the observed minimum and above the observed maximum (the default in the TMLE package being to widen the observed bounds by 10%; Gruber and van der Laan 2012). At the end of the procedure, scale the final parameter estimate back to the original scale.
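The scaling and back-transformation can be sketched as follows (our illustration; the bounds are passed explicitly, since the exact widening convention is implementation-specific):

```python
import numpy as np

def scale_unit(y, lower, upper):
    """Map a bounded continuous outcome onto [0, 1] for the logistic fluctuation."""
    return (y - lower) / (upper - lower)

def unscale(psi_star, lower, upper):
    """Return a parameter estimate on the [0, 1] scale to the original scale."""
    return psi_star * (upper - lower) + lower

y = np.array([12.0, 15.5, 19.0])
ystar = scale_unit(y, 10.0, 20.0)
print(unscale(ystar.mean(), 10.0, 20.0))   # back-transformed mean
```

Because the map is affine, a mean estimated on the [0, 1] scale transforms back exactly to the mean on the original scale.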
A TMLE procedure for this setting is as follows. A model for the propensity score is fit, and from this model a prediction gn(a | Xi) of the probability of receiving treatment a given covariates X is calculated for each subject. The estimate of Q̄(a,X) is updated through the fluctuation function logit Q̄n*(a,X) = logit Q̄n(a,X) + ɛH, with fluctuation covariate H(A,X) = I(A = a)/gn(a | X). The fluctuation covariate is constructed to ensure that fitting ɛ by maximum likelihood estimation solves the efficient influence curve estimating equation (van der Laan and Rose, 2011, Section 5.2). Thus, the estimated value ɛn of ɛ is determined by fitting the logistic regression of Y on the single variable H with no intercept and offset equal to logit Q̄n(a,X), using all subjects with A = a. The TMLE update step is carried out by setting Q̄n*(a,Xi) = expit{logit Q̄n(a,Xi) + ɛn/gn(a | Xi)} for all subjects. The targeted estimator for E(Ya) is then ψn = PnQ̄n*(a,X).
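The update step can be sketched numerically. The toy implementation below is ours (not the paper's software): it uses a single binary confounder, a deliberately misspecified initial outcome fit that ignores the confounder, and a correctly specified nonparametric propensity score; the one-dimensional fluctuation ɛ is fit by Newton-Raphson:

```python
import numpy as np

def expit(x): return 1.0 / (1.0 + np.exp(-x))
def logit(p): return np.log(p / (1.0 - p))

rng = np.random.default_rng(42)
n = 200_000
W = rng.binomial(1, 0.5, n)                          # binary confounder
A = rng.binomial(1, 0.2 + 0.5 * W)                   # treatment depends on W
Y = rng.binomial(1, np.where(A == 1, 0.3 + 0.4 * W, 0.2 + 0.3 * W))

# Misspecified initial outcome estimate: ignores W entirely.
Qbar = np.full(n, Y[A == 1].mean())

# Correct nonparametric propensity score g_n(1 | W).
g = np.where(W == 1, A[W == 1].mean(), A[W == 0].mean())

# Fluctuation: logistic regression of Y on H = A/g with offset logit(Qbar)
# and no intercept, fit by Newton-Raphson on the single parameter epsilon.
H_obs, H_pred = A / g, 1.0 / g
off = logit(Qbar)
eps = 0.0
for _ in range(50):
    p = expit(off + eps * H_obs)
    eps += np.sum(H_obs * (Y - p)) / np.sum(H_obs**2 * p * (1 - p))

Qstar = expit(off + eps * H_pred)        # updated predictions at A = 1
psi_tmle = Qstar.mean()                  # TMLE of E(Y^1)
print(round(psi_tmle, 3))
```

Here the initial plug-in estimate is biased upward (about 0.61) because it ignores the confounder, while the single fluctuation step, combined with a consistent propensity score, moves the estimate to approximately the true value of 0.5, illustrating double robustness.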
TMLE is double robust, meaning that it is consistent if either of the models for Q̄(a,X) or g(a | X) is correctly specified. If the subset W ⊂ X is a sufficient confounding set, then correct specification of either model for E(Y | W, A = a) or for P(A = a | W) will also yield consistent estimates for the TMLE. The TMLE algorithm does not impose a specific model specification on Q̄ or g, and the general semiparametric TMLE philosophy is to data-adaptively estimate these quantities in order to avoid bias arising from model misspecification. Cross-validation may be needed to avoid overfitting (van der Laan and Rubin, 2006, Zheng and van der Laan, 2011).
4.2 Collaborative adjustment
The collaborative double robustness result (van der Laan and Gruber, 2010) states that for double robust estimators, the propensity score model need only condition on the error of a misspecified outcome model in order to obtain unbiased estimation. Specifically, suppose Q̄n is an estimate obtained using a correctly specified model for Q̄(a,X). Then, the resulting estimator solves the efficient influence function equation for any (possibly misspecified) propensity score estimate gn. Now consider estimates Q̄n′ from a misspecified model, consistent for some other quantity Q̄′(a,X) ≠ Q̄(a,X). Such an estimate will not share the above property of solving the efficient influence function equation regardless of the form of the propensity score model. A double robust estimator (such as TMLE) using Q̄n′ as the initial outcome estimate will be unbiased when also using propensity score estimates consistent for the probability of treatment conditional on the error Q̄(a,X) – Q̄′(a,X).
This has interesting consequences for variable selection. Suppose that X is a minimal sufficient confounding set that can be partitioned into subsets (U,V). X is minimal sufficient in the sense that removing any variable from the set will result in a set that does not adjust for confounding bias. Now suppose V is not empty and that the chosen initial outcome model, with estimates Q̄n, is a consistent estimator for E(Y | U,A = a). Let the propensity score estimates correspond to a correctly specified treatment model conditional on the error of the outcome model, so that they estimate the conditional probability of treatment given that error. Then a double robust estimator with these initial outcome and propensity score estimates will produce consistent estimation of the target parameter. If the error can be expressed as a function of V and a alone, then this propensity score depends only on the “complementary” set V (van der Laan and Gruber, 2010). Therefore, the models for treatment and outcome can potentially adjust for complementary components of the full adjustment set. Whether this is possible will depend on the true structure of Q̄(a,X).
As a simple example, suppose that the true conditional outcome expectation is equal to E(Y | X,A = a) = β0 + β1a + β2 exp(V) + β3U + β4V2a. Suppose that the initial outcome model conditions only on U and a, and is correctly specified for E(Y | U,A = a) = {β0 + β2E exp(V)} + {β1 + β4E(V2)}a + β3U. Then, because the error E(Y | X,A = a) − E(Y | U,A = a) depends only on V and a, a TMLE will be consistent if it uses a correctly specified treatment model conditional on only functions of V, i.e. correctly specified for P(A = a | V).
4.3 Collaborative Targeted Maximum Likelihood Estimation (C-TMLE)
C-TMLE (van der Laan and Gruber, 2010) is founded on two principles: 1) variable selection for the propensity score model can be performed conditional on the residual of the estimated conditional outcome model, and 2) cross-validation can be used to select an optimal estimator from a convergent sequence of estimators. The cross-validation selects the estimator with the minimal value of a given loss function applied to the TMLE-updated estimate of the conditional expectation of the outcome. Examples of loss functions include the negative log-likelihood for a binomial outcome and the residual sum of squares for a continuous outcome. A convergent sequence of estimators, indexed by k = 1,2,3,…,K, can be constructed in many ways, but we consider the sequence indexed by a forward selection of covariates in the propensity score model. This particular C-TMLE procedure was first described and used in Gruber and van der Laan (2010a) and explained in greater detail in the book chapter by Gruber and van der Laan (2011).
Below, we describe this C-TMLE procedure for ψ = E(Ya). As an overview, the procedure starts with an estimate of the conditional expectation of the outcome, which serves as the initial “current” estimate. Given an estimate of the propensity score, the TMLE step modifies the current estimate to produce an updated estimate of the conditional expectation of the outcome. The goodness-of-fit of this updated estimate is assessed through a chosen loss function. The C-TMLE procedure starts with the intercept-only model for the propensity score, then chooses one variable to add, the choice of which is determined by the greatest improvement in the loss function. Covariates are added sequentially in this manner. Once no new covariate addition can be found that improves the loss, a TMLE update step is carried out using the propensity score model with the current set of covariates, resulting in a new current estimate. The procedure is then repeated: new variables are sequentially added to the propensity score model using the same loss function criterion, but with the new current expected outcome estimate. The current estimate is updated each time no additional covariate improves the loss function. This combination of updates creates a sequence of candidate TMLE estimators in which both the fit of the propensity score model and the loss function improve monotonically. Using cross-validation, the procedure then selects the number of variable selection steps (i.e. the number of covariates included) that minimizes the cross-validated loss.
First, we describe some details of the steps and decision-making criteria used in the C-TMLE procedure; we then explain the procedure itself. Note that in this section and the next, we often drop the notation for the dependence of the outcome and propensity score estimates on the treatment and covariate set, both for simplicity and because the covariate set varies as part of the variable selection. All quantities, however, are estimates of the counterfactual quantities under treatment a. A bracketed superscript (k) is used as an index enumerating the sequence of estimators created by the C-TMLE procedure.
The TMLE update step
Given a current expected outcome estimate Q̄n and an estimate gn of the propensity score model, the TMLE update step is defined as in Section 4.1 by setting logit Q̄n* = logit Q̄n + ɛnH, where ɛn is estimated by taking the subjects with A = a and fitting an intercept-free logistic regression of Y with sole covariate H = 1/gn(a | X) and offset logit Q̄n. We will denote this step in the following procedure as performing a TMLE update with gn.
The loss function criterion
The C-TMLE procedure uses a loss function to determine whether a variable selected into the propensity score model improves estimation. For example, the loss function can be taken to be the negative log-likelihood of a logistic regression, i.e. L(Q̄) = −Σi {Yi log Q̄(a,Xi) + (1 − Yi) log(1 − Q̄(a,Xi))}, with the sum taken over all n observations. Given two candidate estimates, indexed by k1 and k2, of the conditional expectation of the outcome, their respective losses can be used to select which model is a better fit. The candidate indexed by k1 will be selected when L(Q̄(k1)) < L(Q̄(k2)).
The forward variable selection step
The forward variable selection step starts with a current estimate of the conditional expected outcome and a propensity score model that may already adjust for a set of covariates U ⊂ X. The procedure selects the next covariate to add to the propensity score model from the candidate variables remaining in the set X \ U. For each candidate w ∈ X \ U, w is tentatively added to the propensity score model, the TMLE update step is performed with this augmented propensity score model, and the estimated loss of the resulting updated expected outcome estimate is calculated. The candidate variable resulting in the smallest loss is selected.
C-TMLE Procedure
- Fit an initial estimate Q̄n of E(Y | A = a,X). Set this as the “current” estimate of the conditional expected outcome.
- Estimate the propensity score model with only an intercept term. Define this propensity score model as gn(0), and define Q̄n*(0) as the result of the TMLE update on the current estimate with gn(0).
- Let K be the size of the variable set X. For k = 1,…,K:
- Add an additional term to the propensity score model using the forward variable selection step. Let gn(k) be the model with the additional covariate selected by this procedure, and let the “candidate” Q̄n*(k) denote the result of the TMLE update on the current estimate with gn(k).
- If the estimated loss of the candidate is no greater than that of Q̄n*(k−1), use of the new propensity score model improves the estimated loss. Define Q̄n*(k) as the next member of the sequence.
- Otherwise, since the estimated loss of the candidate exceeds that of Q̄n*(k−1), no new propensity score model can offer an improvement. Set Q̄n*(k−1) as the new current estimate, and use the forward variable selection step to add an additional term to the propensity score model starting with this new current estimate. Define gn(k) as the resulting propensity score estimate and set Q̄n*(k) as the result of the TMLE update on the new current estimate with gn(k).
The result of the above procedure is a list of candidate updated estimates Q̄n*(k) of the expected outcome, where k = 0,…,K indexes the number of covariates included in the propensity score model.
The optimal number kn of covariates to adjust for is chosen amongst k = 0,…,K by selecting the candidate Q̄n*(k) with the lowest cross-validated estimate of the loss. Once the optimal number kn is chosen, the C-TMLE estimate is the mean of the selected updated predictions, ψn = (1/n) Σi Q̄n*(kn)(a,Xi).
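The greedy selection above can be sketched in simplified form. The sketch below is ours and deliberately stripped down: it targets E(Y1) for a continuous outcome using a linear (rather than logistic) fluctuation, scores candidates by the residual sum of squares among treated subjects, and returns the whole candidate path, leaving the final cross-validated choice of k (which would evaluate each candidate on held-out folds) to the caller. All function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_fit(X, A, cols):
    """P(A = 1 | X[:, cols]); the marginal treatment rate when cols is empty."""
    if not cols:
        return np.full(len(A), A.mean())
    m = LogisticRegression(C=1e6).fit(X[:, cols], A)
    return np.clip(m.predict_proba(X[:, cols])[:, 1], 0.01, 0.99)

def tmle_update(Q1, g, A, Y):
    """Linear fluctuation Q1* = Q1 + eps/g; eps fit by least squares
    using the clever covariate H = A/g (zero for the untreated)."""
    H = A / g
    den = np.sum(H * H)
    eps = np.sum(H * (Y - Q1)) / den if den > 0 else 0.0
    return Q1 + eps / g

def ctmle_path(Y, A, X, Q1_init):
    """Greedy forward selection of propensity-score covariates, scored by
    the loss of the TMLE-updated outcome fit. Returns candidate estimates
    of E(Y^1) for k = 0, ..., p selected covariates."""
    p = X.shape[1]
    loss = lambda Qs: np.sum(A * (Y - Qs) ** 2)  # RSS among the treated
    sel, Q_cur = [], Q1_init.copy()
    Q_star = tmle_update(Q_cur, ps_fit(X, A, sel), A, Y)
    path, cur_loss = [Q_star.mean()], loss(Q_star)
    while len(sel) < p:
        cand = [j for j in range(p) if j not in sel]
        scores = [loss(tmle_update(Q_cur, ps_fit(X, A, sel + [j]), A, Y))
                  for j in cand]
        if min(scores) > cur_loss:
            # No candidate improves the loss: make the current update the
            # new "current" estimate and re-score the candidates from there.
            Q_cur = tmle_update(Q_cur, ps_fit(X, A, sel), A, Y)
            scores = [loss(tmle_update(Q_cur, ps_fit(X, A, sel + [j]), A, Y))
                      for j in cand]
        sel.append(cand[int(np.argmin(scores))])
        Q_star = tmle_update(Q_cur, ps_fit(X, A, sel), A, Y)
        path.append(Q_star.mean())
        cur_loss = loss(Q_star)
    return path
```

With an uninformative initial estimate (e.g. the treated-group mean of Y), the path moves from the unadjusted estimate toward a fully adjusted one as covariates enter the propensity score model.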
4.4 Assumptions and convergence of C-TMLE
Here we provide a short discussion of the convergence of this estimator in the context of variable selection. Full technical details of C-TMLE convergence can be found in van der Laan and Gruber (2010). Let the initial outcome estimate converge to some conditional expectation (not necessarily equal to the true one). Then, in order for the C-TMLE procedure to produce a consistent estimate, there must exist a k in 0,…,K such that the corresponding propensity score estimate converges to a limit that adjusts for the confounding left unaccounted for by the limit of the outcome estimate.
To guarantee the consistency of C-TMLE, at any stage in the variable selection process, the variables remaining in X must be sufficient to adjust for the residual confounding at that stage (if any exists). For an example of where this requirement is violated, suppose that the DAG in Figure 3 holds. We observe X = M but we do not observe the variables (U1,U2). Suppose the initial outcome estimate does not adjust for any variables (and might therefore be the mean of Y among subjects taking treatment a). For a large sample size, the C-TMLE algorithm might be likely to select the covariate M because it appears to improve the fit of the (biased) outcome model. However, once M is selected, there does not exist a measured superset of M that would adjust for confounding. This is the classic example of M-bias (Greenland et al., 1999, Shrier et al., 2007), and as in other data-driven variable selection schemes that do not assume an a priori DAG, it must be assumed that this structure does not occur or that enough variables are measured so that there exists a sufficient superset, such as (M,U1,U2) ⊃ M, that can be selected. Alternatively, M-bias will not be an issue if sufficient information is known about the DAG to allow the investigator to limit X to only direct causes of the treatment, the outcome, or both (shown to be a sufficient selection criterion in VanderWeele and Shpitser, 2011).
Figure 3. The example of M-bias when interest lies in estimating the marginal mean of Y under treatment A = a.
Since the propensity score model fit is always improved (or at worst maintained) by the addition of a covariate, the sequence of propensity score models produced by C-TMLE is monotonically decreasing in error. Due to the nature of the TMLE update step, the sequence is also monotonically decreasing in terms of the negative log-likelihood for the conditional outcome model. Therefore, there will exist a k at which the propensity score model is rich enough to adjust for the residual confounding (by the assumption that X is sufficient) and the TMLE estimator will have minimal bias. The cross-validation in the final step will select the step kn at which the estimated loss function of the outcome model fit is minimized. However, for small sample sizes, this selection may choose an earlier step that does not adjust for a (technically) sufficient set if doing so comes at a gain in this particular loss function. Consider the squared error loss function and the bias-variance tradeoff: in small samples, a small improvement in bias may come at the cost of a great increase in variance, and the C-TMLE procedure will preferentially select a slightly biased estimate with a lower squared error. The penalization for finite-sample variance or bias can be adjusted by choosing a different loss function for the conditional outcome model fit. Since the expectation of a valid loss function is minimized at the truth, and the finite-sample variance penalty vanishes as the sample size increases, for large samples a sufficient confounding set (if it exists in X) will always be selected.
The semiparametric efficiency bound is defined conditional on a set of covariates X (Hahn, 1998). In van der Laan and Gruber (2010), the authors explain how C-TMLE is an irregular estimator and can be superefficient (the asymptotic variance of the estimator is smaller than the minimal variance suggested by the theoretical bound for regular asymptotically linear estimators). This is because, conditional on an initial fit, the cross-validation procedure will generally select a propensity score model that depends only on the confounding variables unadjusted for in the initial fit. C-TMLE will generally not select an instrument, for instance, and can therefore attain the efficiency bound for the covariate space that excludes the instrument, provided the instrument is also not included in the model for the initial outcome estimate.
4.5 Recommendations for flexible implementation of C-TMLE
Due to the iterative nature of the procedure and the need to repeatedly fit the propensity score model, integrating data-adaptive estimation into the C-TMLE procedure is computationally challenging. We suggest fitting the initial outcome model fully adaptively; fitting each candidate propensity score model with computationally intensive methods, however, can be impractical.
In the simulation study of Section 5, we used a procedure that fits the propensity score using logistic regressions. It is also straightforward to include non-linear functions of X (such as squared terms and interactions) separately as candidate covariates and allow these to be selected into the propensity score model. As an extension of this idea, it may also be beneficial to include data-adaptive estimates of the propensity score itself as covariates to be selected into the propensity score model. This approach is valid because if treatment assignment is ignorable given X, then it is also ignorable given a correctly (nonparametrically) specified propensity score (Rosenbaum and Rubin, 1983). Including data-adaptive estimates of the propensity score can adjust for nonlinearities present in the data-generating function for treatment (regardless of whether corresponding nonlinearities are also present in the generation of the outcome variable and therefore confound the analysis). However, extreme predictions of the propensity score (close to 0 or 1) can cause overfitting of the propensity score model. Therefore, truncation of the estimated propensity score when used as a candidate covariate is recommended. Since the optimal level of truncation is not predetermined, many different levels of truncation can be used. Each truncation level results in a new candidate covariate, and all can be included in the C-TMLE variable selection procedure (Porter et al., 2011).
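The construction of several truncation-level candidates might look as follows (a sketch; the function name and the specific truncation levels are illustrative, not prescribed by the method):

```python
import numpy as np

def truncated_ps_candidates(g_hat, levels=(0.01, 0.025, 0.05)):
    """Turn a data-adaptive propensity score estimate g_hat into several
    candidate covariates for the C-TMLE forward selection, one column
    per symmetric truncation level."""
    return np.column_stack([np.clip(g_hat, t, 1.0 - t) for t in levels])
```

The selection procedure can then treat each truncated version as an ordinary candidate covariate, letting the data decide which (if any) truncation level to use.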
5 Simulation Study
In this section, we present two simulation studies representing situations where statistical variable selection, following an initial knowledge-based selection, is desirable or necessary. Both involve settings where the analyst has chosen a set X of potential confounders such that a sufficient proper subset W ⊂ X exists. For each simulation study, we divide the estimation into two categories: 1) where the analyst has access to the list of confounders in the minimal adjustment set (which one might consider the “gold standard”) and 2) where the analyst is assumed to be a priori ignorant of the minimal adjustment set. Some of the methods described below use a powerful and flexible prediction method called Super Learner (Polley and van der Laan, 2010). Super Learner is an ensemble-learning method that optimally combines the predictive power of a user-defined library of prediction methods.
For each simulation, we compare IPTW and TMLE where main-terms logistic regressions are used for the estimation of the propensity score and the conditional outcome model (for TMLE). When the true confounders are considered known, these logistic regressions are fit conditional on W (with the resulting estimators called “IPTW-W” and “TMLE-W”, respectively). When the true confounders are not known, the logistic regressions are fit on the entire variable set X where possible (“IPTW-all” and “TMLE-all”). We also evaluate IPTW when the propensity score model is estimated with Super Learner (“IPTW-SL”) and TMLE when the propensity score and outcome models are estimated with Super Learner (“TMLE-SL”). We apply C-TMLE in two different ways: 1) including no information about X in the outcome model, so that the initial outcome estimate is the mean of Y in the group A = a (“CTMLE-all-noQ”), and 2) using Super Learner to optimize the prediction of the outcome (“CTMLE-SL”). In both implementations, logistic regressions are used for the propensity score models and all variables in X are allowed to be selected as main terms. We also compare a one-step confounder-selection method that fits full conditional models for the treatment and the outcome and then fits IPTW with a main-terms logistic regression using just those variables that were significant at α = 0.05 in both models (“IPTW-select”). Finally, to verify the extent of confounding, we include the (unadjusted) difference in treatment-specific means, weighted by the prevalence of treatment (“No adjust”).
Because of the challenging nature of the generated data, we present median statistics, which we believe better summarize the typical performance of the estimators (i.e. they are not affected by outlying estimates arising from instability). The mean statistics and other measures of performance are included in the Supplementary Materials. For larger n, the medians and means can be observed to converge as expected.
5.1 Estimation in the presence of strong instruments
In this situation, we generated datasets with five confounders, two instruments, and two variables that only influence the outcome. The instruments were generated to be strong predictors of the treatment. The outcome was Gaussian with mean generated nonlinearly in the confounder variables and linearly in the pure causes of the outcome. The treatment probabilities were generated linearly in the confounders and instruments (with logit link). See the Supplementary Materials for the data generation code.
Conditional on just the confounders, the minimal probabilities of treatment and no-treatment are 0.2 and 0.06, respectively. Conditioning on both confounders and instruments, the minimal probabilities of treatment and no-treatment are 0.004 and 0.001. Therefore, including the instruments in the estimation of the propensity score would be expected to create apparent practical positivity violations in smaller samples, even though the data are generated without theoretical positivity violations. When the instruments are omitted, practical positivity violations are less likely to occur. All of the investigated methods utilize inverse probability of treatment weights and are therefore susceptible, to varying degrees, to instability caused by the large weights produced by near practical positivity violations. Statistical variable selection is therefore very useful in this situation, both to obtain an estimate of the causal effect with low estimation bias and to reduce the estimation variance.
The data were generated in such a way that the median squared error was about 6.3 across sample sizes in the unadjusted analysis (Table 1). When W (the set of true confounders) was considered known, TMLE performed far better than IPTW, with the ratio of median squared errors roughly constant across sample sizes. When conditioning on all covariates, TMLE again substantially outperformed IPTW, although the two were similarly unbiased for larger sample sizes. Without fitted models for the conditional mean of the outcome, C-TMLE can be roughly thought of as IPTW with a variable selection procedure for the propensity score model. It performed better than IPTW in terms of median squared error, but worse in terms of bias for larger n. IPTW with regression-based covariate selection did better than IPTW in terms of median squared error but maintained a higher bias even for large values of n.
Table 1.
Median squared error and median bias for Simulation 1. Estimates taken over 1,000 generated datasets for different sample sizes, n. True value of the ATE is E(Y1)−E(Y0) = 2.
| Method | n = 200 | n = 1,000 | n = 5,000 | |||
|---|---|---|---|---|---|---|
| Med Sq E | Med bias | Med Sq E | Med bias | Med Sq E | Med bias | |
| W known | ||||||
| IPTW-W | 0.449 | 0.106 | 0.071 | 0.026 | 0.014 | 0.031 |
| TMLE-W | 0.063 | −0.004 | 0.012 | 0.003 | 0.002 | 0.002 |
| W unknown | ||||||
| No adjust | 6.258 | 2.502 | 6.324 | 2.515 | 6.272 | 2.504 |
| IPTW-all | 1.858 | 0.160 | 0.317 | 0.006 | 0.064 | −0.004 |
| TMLE-all | 0.070 | 0.012 | 0.014 | 0.010 | 0.002 | 0.005 |
| CTMLE-all-noQ | 0.509 | −0.131 | 0.069 | −0.060 | 0.007 | −0.018 |
| IPTW-select | 1.151 | −0.766 | 0.220 | −0.414 | 0.018 | −0.048 |
| W unknown, using Super Learner | ||||||
| IPTW-SL | 2.559 | 1.284 | 0.355 | 0.198 | 0.085 | 0.165 |
| TMLE-SL | 0.048 | −0.007 | 0.005 | 0.016 | 0.001 | 0.006 |
| CTMLE-SL | 0.019 | −0.027 | 0.003 | −0.005 | 0.001 | 0.001 |
When IPTW was fit with Super Learner, its performance deteriorated across all measures. However, the performance of TMLE improved in terms of median squared error (and was comparable in terms of bias). The results of C-TMLE drastically improved with the addition of Super Learner to estimate the initial outcome model and it outperformed all other methods.
IPTW is known to be sensitive to large weights, which infrequently occurred in this data generation. For n = 200 (where large weights might be the most detrimental), the maximum weight observed in an individual data set had a mean of 21 when all covariates were included in the logistic regression, and a mean of 11 when only W was included. When Super Learner was used, the mean maximum weight in a given data set was 13. We also ran IPTW with weight truncation at the 95th percentile (i.e. the weights greater than the 95th percentile were imputed with the 95th percentile weight) but this reduced the performance across statistics when either logistic regression or Super Learner was used.
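The weight-truncated IPTW variant used here can be sketched as follows (our notation; `g1` holds an estimated P(A = 1 | X), and we use the normalized, Hajek-style form of the estimator):

```python
import numpy as np

def iptw_truncated(Y, A, g1, a=1, q=0.95):
    """Normalized IPTW estimate of E(Y^a); weights above the q-th
    percentile (computed among subjects with A = a) are set to that
    percentile, i.e. truncated rather than dropped."""
    g_a = g1 if a == 1 else 1.0 - g1
    w = (A == a) / g_a          # zero weight for subjects with A != a
    cap = np.quantile(w[A == a], q)
    w = np.minimum(w, cap)
    return float(np.sum(w * Y) / np.sum(w))
```

Truncation caps the influence of subjects with very small estimated treatment probabilities, trading a little bias for less weight-driven variance; in these simulations that trade-off did not pay off.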
5.2 Estimation in a high-dimensional covariate space
To represent a high-dimensional potential-confounder space in an epidemiological study, we generated datasets with 90 baseline variables: 20 confounders, 10 highly correlated instruments, 10 pure causes of the outcome, 20 noise variables, and 30 proxies of the observed confounders (generated using means that were linear combinations of the realizations of the true confounders). Once again, the outcome means were generated non-linearly in the confounders (but not in the pure causes of the outcome) and the treatment probabilities were generated linearly with logit link. When conditioning on the true confounders alone, the true minimal probabilities of treatment and non-treatment were both approximately 0.13. When conditioning on both confounders and instruments, the true minimal probability of treatment and non-treatment were both 0.005. However, unlike in the previous scenario, the particularities of the data generation (see the Supplementary Materials) made practical positivity violations much less likely (for very large sample sizes, we estimated minimal probabilities of treatment and non-treatment at 0.07). Therefore, even with the inclusion of instruments in the analysis, practical positivity violations were not expected to occur for large-enough n. The challenge in this scenario is how to fit the propensity score models with a high-dimensional covariate space with no a priori knowledge of which set of variables should be included.
For small sample sizes, the propensity score could not be estimated using logistic regressions with all covariates included as main terms, due to the resulting data sparsity. In addition, fitting full conditional models for the outcome and treatment was not reasonable for smaller n, so the IPTW-select procedure was not implemented here. We therefore examine the ability of each data-adaptive estimator to correctly estimate the ATE (the difference of the marginal treatment-specific means under the two levels of treatment). Due to computational limitations of the data-adaptive methods in a high-dimensional covariate space, we chose to limit the investigation to n = 200 and 1,000.
Without adjustment for covariates, the median squared error obtained was above 9 for both sample sizes (Table 2). When W was known, TMLE once again outperformed IPTW in terms of median squared error and bias. C-TMLE without an outcome model had poor median squared error but better bias than TMLE with Super Learner. It also had large outlying results for n = 200 not reflected in the median statistics (see complete results in the Supplementary Materials). IPTW with Super Learner on the full set of covariates performed very poorly. TMLE with Super Learner on the full covariate set performed better in terms of median squared error than IPTW with the true confounders W, but not as well as TMLE with known W. Finally, C-TMLE with Super Learner for the outcome model on the full covariate set performed better than TMLE on the reduced confounder set W in terms of median squared error. However, for n = 200, it produced a much higher MSE (not shown) due to some extreme outlying estimates.
Table 2.
Median squared error and median bias for Simulation 2. Estimates taken over 1,000 generated datasets for different sample sizes, n. True value of the ATE is E(Y1)−E(Y0) = 2.
| Method | n = 200 | n = 1,000 | ||
|---|---|---|---|---|
| Med Sq E | Med bias | Med Sq E | Med bias | |
| W known | ||||
| IPTW-W | 0.482 | 0.027 | 0.030 | 0.005 |
| TMLE-W | 0.067 | 0.013 | 0.013 | 0.008 |
| W unknown | ||||
| No adjust | 9.839 | 3.137 | 9.321 | 3.053 |
| CTMLE-all-noQ | 1.182 | 0.303 | 0.068 | −0.099 |
| W unknown, using Super Learner | ||||
| IPTW-SL | 3.130 | 1.753 | 1.237 | 1.112 |
| TMLE-SL | 0.285 | 0.519 | 0.008 | 0.071 |
| CTMLE-SL | 0.080 | 0.139 | 0.003 | 0.013 |
We varied the number of partitions of the data used in the cross-validation procedure for C-TMLE, but this did not change the results. We also tried using the penalized MSE for the loss function in C-TMLE and it did not improve the results either. We tried truncating the weights at the 95th percentile for IPTW, but this resulted in somewhat worse performance (and large weights were not a problem in this scenario).
5.3 Variance estimation and coverage
In both of the above scenarios, we used an estimate of the influence function to estimate the standard error of IPTW, TMLE and C-TMLE. This corresponded to taking the standard deviation of the empirical estimates of the influence function (calculated for each individual), divided by the square root of the total number of subjects (van der Laan and Rose, 2011). In order to summarize the validity of the standard error estimation in the simulation study, Table 3 contains the standard errors and 95% confidence interval coverage for TMLE (with and without Super Learner) and C-TMLE (using Super Learner for the outcome model) using the standard error calculation based on the influence function. In Simulation 1, the coverage for TMLE using logistic regression for the propensity score and outcome models conditional on W was close to nominal. When Super Learner was instead used, the coverage dropped substantially for n = 200. C-TMLE with Super Learner also had low coverage for n = 200 but was close to nominal for n = 1,000 and n = 5,000. In Simulation 2, TMLE with logistic regression again had close to nominal coverage. However, both TMLE and C-TMLE with Super Learner had much lower coverage despite having the lowest bias in the simulation study (see Table 2 for bias results). Preliminary simulations did not show improvement when correcting for the estimation of the conditional probability of treatment (van der Laan and Rose, 2011). IPTW showed similar patterns of overly liberal standard error estimates; these results are available in the Supplementary Materials.
Table 3.
Median standard error estimates and empirical coverage for TMLE and C-TMLE. Estimates taken over 1,000 generated datasets for different sample sizes, n. True value of the ATE is E(Y1)−E(Y0) = 2.
| Method | n = 200 | n = 1,000 | n = 5,000 |
|---|---|---|---|
| SE (Cov%) | SE (Cov%) | SE (Cov%) | |
| Simulation 1 | |||
| TMLE-W | 0.326 (87.6) | 0.152 (94.8) | 0.068 (95.5) |
| TMLE-SL | 0.212 (80.0) | 0.091 (92.4) | 0.043 (93.9) |
| CTMLE-SL | 0.150 (84.1) | 0.074 (93.3) | 0.033 (92.9) |
| Simulation 2 | |||
| TMLE-W | 0.356 (91.7) | 0.157 (94.1) | – |
| TMLE-SL | 0.310 (57.6) | 0.083 (77.7) | – |
| CTMLE-SL | 0.178 (61.2) | 0.067 (88.6) | – |
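The influence-function standard error described above can be sketched as follows (our notation; `Q_star` holds the targeted predictions of E(Y | A = a, X) and `g_a` the estimated P(A = a | X)):

```python
import numpy as np

def tmle_if_se(Y, A, Q_star, g_a, a=1):
    """Influence-function-based SE for the TMLE of psi = E(Y^a):
    IC_i = 1{A_i = a}/g_a(X_i) * (Y_i - Q_star_i) + Q_star_i - psi,
    SE = sd(IC) / sqrt(n)  (cf. van der Laan and Rose, 2011)."""
    psi = Q_star.mean()
    ic = (A == a) / g_a * (Y - Q_star) + Q_star - psi
    return float(ic.std(ddof=1) / np.sqrt(len(Y)))
```

This plug-in estimate treats the fitted nuisance models as fixed, which is one reason it can be too small, and the coverage too low, when the nuisance models are estimated data-adaptively in finite samples.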
6 Discussion
In the absence of full knowledge of the true underlying DAG, a sufficient causal variable selection approach advises the selection of all variables that are direct causes of either the treatment or the outcome (or both) (VanderWeele and Shpitser, 2011). This may result in a high-dimensional covariate space that can unknowingly include pure causes of the treatment, which are also called instruments. It is therefore often necessary or desirable to perform a secondary variable selection in order to reduce the set. In both low and high-dimensional covariate spaces, variable selection can be complicated by the presence of instruments and strong predictors of the treatment.
In the simple example of Section 3, we demonstrated that the large-sample variance of IPTW is consistently increased by the inclusion of a binary instrument in the propensity score model. This variance inflation is maximized by an instrument that has probability 0.5 and increases unboundedly with the strength of association with treatment. Intuitively, the variance increase makes sense as the inclusion of the instrument moves the probability of treatment closer to 0 or 1, while a randomized treatment assignment probability of 1/2 leads to optimal efficiency in estimating the average treatment effect.
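This variance inflation is easy to reproduce numerically. The following Monte Carlo sketch (ours; the data have no confounding, so no adjustment is needed) compares the sampling variance of a Horvitz-Thompson IPTW ATE estimate when the propensity score conditions on a strong binary instrument Z versus an intercept-only model:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def iptw_ate(Y, A, g1):
    """Horvitz-Thompson IPTW estimate of E(Y^1) - E(Y^0)."""
    g1 = np.clip(g1, 0.01, 0.99)
    return np.mean(A * Y / g1) - np.mean((1 - A) * Y / (1 - g1))

def simulate(reps=500, n=300, seed=0):
    rng = np.random.default_rng(seed)
    est_marg, est_inst = [], []
    for _ in range(reps):
        Z = rng.binomial(1, 0.5, size=n)            # pure cause of treatment
        A = rng.binomial(1, expit(2.5 * (2 * Z - 1)))
        Y = A + rng.normal(size=n)                  # true ATE = 1
        # Intercept-only propensity score (sufficient: Z is not a confounder).
        est_marg.append(iptw_ate(Y, A, np.full(n, A.mean())))
        # Propensity score conditioning on the instrument Z (cell means).
        g_z = np.where(Z == 1, A[Z == 1].mean(), A[Z == 0].mean())
        est_inst.append(iptw_ate(Y, A, g_z))
    return np.var(est_marg), np.var(est_inst)
```

Both estimators are (nearly) unbiased here, but conditioning on the instrument pushes the estimated treatment probabilities toward 0 and 1 and inflates the sampling variance, matching the analytic result of Section 3.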
We presented TMLE and C-TMLE as alternatives that may be more robust to the inclusion of strong predictors of the treatment. Both methods can incorporate flexible prediction (e.g. machine learning methods) to become more robust to model misspecification. In particular, parametric modeling assumptions, when incorrect, will bias the estimation of the target parameter. Flexible prediction methods can be seen as a generalization of data-adaptive variable selection procedures, as they select both the variables and the structural components to be included in the model. C-TMLE additionally performs a forward selection of the variables in the propensity score model conditional on the fit of the outcome model. None of the methods presented is robust to the presence of colliders of unmeasured causes of the outcome and treatment. This is because such colliders appear to be related to both the outcome and the exposure, and will therefore be preferentially selected into the TMLE and C-TMLE models, causing M-bias. A high-dimensional potential-confounder space may protect against this danger by increasing the likelihood that all relevant variables are included in the model (Schneeweiss et al., 2009), or, contrastingly, increase the danger by allowing for more model uncertainty.
Through the simulation study of Section 5, we have seen that flexible modeling of the propensity score model in IPTW can lead to higher squared errors and estimation bias. This is because flexible modeling of the propensity score results in the selection of strong predictors of treatment which may or may not be true confounders. TMLE with flexible modeling was more robust to this problem. In this simulation, C-TMLE did not perform as well when the initial outcome model was poorly estimated. In practice, there is no reason not to use fully flexible methods for the initial outcome model, and we observed the improved performance of this method when implemented with Super Learner for the initial outcome model. In the simulation study, we occasionally saw outlying estimates for the high-dimensional setting with a small sample size, so caution may be necessary when using C-TMLE in this most challenging scenario.
Some problems associated with automated flexible learning of the propensity score model can be avoided by pre-screening for strong instruments (variables that have no effect on the outcome) and by using TMLE instead of IPTW. Note also that overfitting the initial estimate of the outcome model in the TMLE procedure will prevent an appropriate update step (ɛ will be estimated as zero); cross-validation can be used to avoid overfitting the initial outcome model.
The simulation study also revealed that influence function-based estimators (Gruber and van der Laan, 2012) for the standard error of TMLE and C-TMLE with Super Learner can be overly liberal for finite samples, resulting in less than 95% coverage (although they performed well for TMLE with logistic regression for the propensity score and outcome models). The full results of the simulation study are reported in the Supplementary Materials. In response to the need for improved asymptotic inference, van der Laan (2014) recently developed the theoretical groundwork for modified TMLE and IPTW procedures that produce valid asymptotic inference when data-adaptive procedures are used for the outcome and propensity score models. At the time of writing, these procedures have yet to go through intensive empirical evaluation, and so future work will involve further investigation into these rapidly developing methods.
Supplementary Material
Contributor Information
Mireille E. Schnitzer, Email: mireille.schnitzer@umontreal.ca.
Judith J. Lok, Email: jlok@hsph.harvard.edu.
Susan Gruber, Email: sgruber@reaganudall.org.
References
- Austin PC, Tu JV, Ho JE, Levy D, Lee DS. Using methods from the data-mining and machine-learning literature for disease classification and prediction: a case study examining classification of heart failure subtypes. Journal of Clinical Epidemiology. 2013;66:398–407. doi: 10.1016/j.jclinepi.2012.11.008.
- Bembom O, Fessel JF, Shafer RW, van der Laan MJ. Data-adaptive selection of the adjustment set in variable importance estimation. UC Berkeley Division of Biostatistics Working Paper Series; 2008.
- Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Stürmer T. Variable selection for propensity score models. American Journal of Epidemiology. 2006;163:1149–1156. doi: 10.1093/aje/kwj149.
- Brookhart MA, van der Laan MJ. A semiparametric model selection criterion with applications to the marginal structural model. Computational Statistics & Data Analysis. 2006;50:475–498. doi: 10.1016/j.csda.2010.02.002.
- Cefalu M, Dominici F, Parmigiani G. A model averaged double robust estimator. Technical report, Department of Biostatistics, Harvard School of Public Health; 2014.
- Crainiceanu CM, Dominici F, Parmigiani G. Adjustment uncertainty in effect estimation. Biometrika. 2008;95:635–651.
- De Luna X, Waernbaum I, Richardson TS. Covariate selection for the nonparametric estimation of an average treatment effect. Biometrika. 2011;98:861–875.
- Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999;10:37–48.
- Gruber S, van der Laan MJ. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics. The International Journal of Biostatistics. 2010a;6, Article 18. doi: 10.2202/1557-4679.1182.
- Gruber S, van der Laan MJ. A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome. The International Journal of Biostatistics. 2010b;6, Article 26. doi: 10.2202/1557-4679.1260.
- Gruber S, van der Laan MJ. C-TMLE of an additive point treatment effect. In: van der Laan MJ, Rose S, editors. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011. pp. 301–322. (Springer Series in Statistics).
- Gruber S, van der Laan MJ. tmle: An R package for targeted maximum likelihood estimation. Journal of Statistical Software. 2012;51:1–35. URL http://www.jstatsoft.org/v51/i13/.
- Hahn J. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica. 1998;66:315–331.
- Hampel FR. The influence curve and its role in robust estimation. Journal of the American Statistical Association. 1974;69:383–393.
- Hernán MA, Hernández-Diaz S, Werler MM, Mitchell AA. Causal knowledge as a prerequisite for confounding evaluation: An application to birth defects epidemiology. American Journal of Epidemiology. 2002;155:176–184. doi: 10.1093/aje/155.2.176.
- Holland PW. Statistics and causal inference. Journal of the American Statistical Association. 1986;81:945–960.
- Judkins DR, Morganstein D, Zador P, Piesse A, Barrett B, Mukhopadhyay P. Variable selection and raking in propensity scoring. Statistics in Medicine. 2007;26:1022–1033. doi: 10.1002/sim.2591.
- Lee BK, Lessler J, Stuart EA. Improving propensity score weighting using machine learning. Statistics in Medicine. 2009;29:337–346. doi: 10.1002/sim.3782.
- Lefebvre G, Delaney JAC, Platt RW. Impact of mis-specification of the treatment model on estimates from a marginal structural model. Statistics in Medicine. 2008;27:3629–3642. doi: 10.1002/sim.3200.
- Li L, Evans E, Hser Y. A marginal structural modeling approach to assess the cumulative effect of drug treatment on later drug use abstinence. The Journal of Drug Issues. 2010;40:221–240. doi: 10.1177/002204261004000112.
- Luo Z, Gardiner JC, Bradley CJ. Applying propensity score methods in medical research: Pitfalls and prospects. Medical Care Research and Review. 2010;67:528–554. doi: 10.1177/1077558710361486.
- Pearl J. Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press; 2009a.
- Pearl J. On a class of bias-amplifying covariates that endanger effect estimates. Technical report, University of California, Los Angeles; 2009b.
- Persson E, Häggström J, Waernbaum I, de Luna X. Data-driven algorithms for dimension reduction in causal inference: analyzing the effect of school achievements on acute complications of type 1 diabetes mellitus. arXiv; 2013.
- Polley EC, van der Laan MJ. Super learner in prediction. UC Berkeley Division of Biostatistics Working Paper Series; 2010.
- Porter KE, Gruber S, van der Laan MJ, Sekhon JS. The relative performance of targeted maximum likelihood estimators. The International Journal of Biostatistics. 2011;7:1–34. doi: 10.2202/1557-4679.1308.
- Robins JM. A new approach to causal inference in mortality studies with a sustained exposure period – application to control of the healthy worker survivor effect. Mathematical Modelling. 1986;7:1393–1512.
- Robins JM. Data, design, and background knowledge in etiologic inference. Epidemiology. 2001;12:313–320. doi: 10.1097/00001648-200105000-00011.
- Robins JM, Hernán MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11:550–560. doi: 10.1097/00001648-200009000-00011.
- Robins JM, Rotnitzky A, Zhao LP. Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association. 1995;90:106–121.
- Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55.
- Rotnitzky A, Li L, Li X. A note on overadjustment in inverse probability weighted estimation. Biometrika. 2010;97:997–1001. doi: 10.1093/biomet/asq049.
- Rotnitzky A, Robins JM. Semi-parametric estimation of models for means and covariances in the presence of missing data. The Scandinavian Journal of Statistics. 1995a;22:323–333.
- Rotnitzky A, Robins JM. Semiparametric regression estimation in the presence of dependent censoring. Biometrika. 1995b;82:805–820.
- Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology. 1974;66:688–701.
- Rubin DB. Randomization analysis of experimental data: The Fisher randomization test comment. Journal of the American Statistical Association. 1980;75:591–593.
- Rubin DB. Should observational studies be designed to allow lack of balance in covariate distributions across treatment groups? Statistics in Medicine. 2009;28:1420–1423.
- Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology. 2009;20:512–522. doi: 10.1097/EDE.0b013e3181a663cc.
- Schnitzer ME, van der Laan MJ, Moodie EEM, Platt RW. Effect of breastfeeding on gastrointestinal infection in infants: A targeted maximum likelihood approach for clustered longitudinal data. Annals of Applied Statistics. 2014, in press. doi: 10.1214/14-aoas727.
- Setoguchi S, Schneeweiss S, Brookhart MA, Glynn RJ, Cook EF. Evaluating uses of data mining techniques in propensity score estimation: a simulation study. Pharmacoepidemiology and Drug Safety. 2008;17:546–555. doi: 10.1002/pds.1555.
- Shrier I, Platt RW, Steele RJ. Re: variable selection for propensity score models. American Journal of Epidemiology. 2007;166:238–239. doi: 10.1093/aje/kwm164.
- Thoemmes FJ, Kim ES. A systematic review of propensity score methods in the social sciences. Multivariate Behavioral Research. 2011;46:90–118. doi: 10.1080/00273171.2011.540475.
- Tsiatis AA. Semiparametric Theory and Missing Data. Springer; 2006. (Springer Series in Statistics).
- van der Laan MJ. Targeted estimation of nuisance parameters to obtain valid statistical inference. The International Journal of Biostatistics. 2014;10:29–57. doi: 10.1515/ijb-2012-0038.
- van der Laan MJ, Gruber S. Collaborative double robust targeted maximum likelihood estimation. The International Journal of Biostatistics. 2010;6, Article 17. doi: 10.2202/1557-4679.1181.
- van der Laan MJ, Robins JM. Unified Methods for Censored Longitudinal Data and Causality. New York: Springer Verlag; 2003. (Springer Series in Statistics).
- van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011. (Springer Series in Statistics).
- van der Laan MJ, Rubin D. Targeted maximum likelihood learning. The International Journal of Biostatistics. 2006;2, Article 11. doi: 10.2202/1557-4679.1211.
- VanderWeele TJ, Shpitser I. A new criterion for confounder selection. Biometrics. 2011;67:1406–1413. doi: 10.1111/j.1541-0420.2011.01619.x.
- Vansteelandt S, Bekaert M, Claeskens G. On model selection and model misspecification in causal inference. Statistical Methods in Medical Research. 2012;21:7–30. doi: 10.1177/0962280210387717.
- Wang C, Parmigiani G, Dominici F. Bayesian effect estimation accounting for adjustment uncertainty. Biometrics. 2012;68:661–671. doi: 10.1111/j.1541-0420.2011.01731.x.
- Westreich D, Cole SR. Invited commentary: Positivity in practice. American Journal of Epidemiology. 2010;171:674–677. doi: 10.1093/aje/kwp436.
- Westreich D, Lessler J, Funk MJ. Propensity score estimation: neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. Journal of Clinical Epidemiology. 2010;63:826–833. doi: 10.1016/j.jclinepi.2009.11.020.
- Wooldridge J. Should instrumental variables be used as matching variables? Technical report, Michigan State University, MI; 2009.
- Zheng W, van der Laan MJ. Asymptotic theory for cross-validated targeted maximum likelihood estimation. In: van der Laan MJ, Rose S, editors. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011. pp. 459–474. (Springer Series in Statistics).