Abstract
In many contemporary applications, large amounts of unlabeled data are readily available while labeled examples are limited. There has been substantial interest in semi-supervised learning (SSL) which aims to leverage unlabeled data to improve estimation or prediction. However, current SSL literature focuses primarily on settings where labeled data is selected uniformly at random from the population of interest. Stratified sampling, while posing additional analytical challenges, is highly applicable to many real-world problems. Moreover, no SSL methods currently exist for estimating the prediction performance of a fitted model when the labeled data is not selected uniformly at random. In this paper, we propose a two-step SSL procedure for evaluating a prediction rule derived from a working binary regression model based on the Brier score and overall misclassification rate under stratified sampling. In step I, we impute the missing labels via weighted regression with nonlinear basis functions to account for stratified sampling and to improve efficiency. In step II, we augment the initial imputations to ensure the consistency of the resulting estimators regardless of the specification of the prediction model or the imputation model. The final estimator is then obtained with the augmented imputations. We provide asymptotic theory and numerical studies illustrating that our proposals outperform their supervised counterparts in terms of efficiency. Our methods are motivated by electronic health record (EHR) research and validated with a real data analysis of an EHR-based study of diabetic neuropathy.
Keywords: Semi-Supervised Learning, Stratified Sampling, Model Evaluation, Risk Prediction
1. Introduction
Semi-supervised learning (SSL) has emerged as a powerful learning paradigm to address big data problems where the outcome is cumbersome to obtain and the predictors are readily available [Chapelle et al., 2009]. Formally, the SSL problem is characterized by two sources of data: (i) a relatively small sized labeled dataset with n observations on the outcome y and the predictors x and (ii) a much larger unlabeled dataset with observations on only x. A promising application of SSL and the motivation for this work is in electronic health record (EHR) research. EHRs have immense potential to serve as a major data source for biomedical research as they have generated extensive information repositories on representative patient populations [Murphy et al., 2009, Kohane, 2011, Wilke et al., 2011]. Nonetheless, a primary bottleneck in leveraging EHR data for secondary use is to accurately and efficiently extract patient level disease phenotype information [Sinnott et al., 2014, Liao et al., 2015]. Frequently, true phenotype status is not well characterized by disease-specific billing codes. For example, at Partners HealthCare, only 56% of patients with at least 3 International Classification of Diseases, Ninth Revision (ICD9) codes for rheumatoid arthritis (RA) have confirmed RA after manual chart review [Liao et al., 2010]. More accurate EHR phenotyping has been achieved by training a prediction model based on a number of features including billing codes, lab results and mentions of clinical terms in narrative notes extracted via natural language processing (NLP) [e.g., Liao et al., 2013, Xia et al., 2013, Ananthakrishnan et al., 2013]. The model is traditionally trained and evaluated using a small amount of labeled data obtained from manual medical chart review by domain experts. SSL methods are particularly attractive for developing such models as they leverage unlabeled data to achieve higher estimation efficiency than their supervised counterparts.
In practice, this increase in efficiency can be directly translated into requiring fewer chart reviews without a loss in estimation precision.
In the EHR phenotyping setting, it is often infeasible to select the labeled examples uniformly at random, either due to practical constraints or due to the nature of the application. For example, it may be necessary to oversample individuals for labeling with a particular billing code or diagnostic procedure for a rare disease to ensure that an adequate number of cases are available for model estimation. In some settings, training data may consist of a subset of individuals selected uniformly at random together with a set of registry patients whose phenotype status is confirmed through routine collection. Stratified sampling is also an effective strategy when interest lies in simultaneously characterizing multiple phenotypes. One may oversample patients with at least one billing code for several rare phenotypes and then perform chart review on the selected patients for all phenotypes of interest. Though it is critical to account for the aforementioned sampling mechanisms to make valid statistical inference, it is non-trivial in the context of SSL since the fraction of subjects being sampled for labeling is near zero. The challenge is further amplified when the fitted prediction models are potentially misspecified.
Existing SSL literature primarily concerns the setting in which the labeled data is a uniform random sample from the underlying pool of data and thus the missing labels are missing completely at random (MCAR) [Wasserman and Lafferty, 2008]. In this setting, a variety of methods for classification have been proposed including generative modeling [Castelli and Cover, 1996, Jaakkola et al., 1999], manifold regularization [Belkin et al., 2006, Niyogi, 2013] and graph-based regularization [Belkin and Niyogi, 2004]. While making use of both the labeled and unlabeled data can improve estimation, in many cases SSL is outperformed by supervised learning (SL) using only the labeled data when the assumed models are incorrectly specified [Castelli and Cover, 1996, Jaakkola et al., 1999, Corduneanu, 2002, Cozman et al., 2002, 2003]. As model misspecification is nearly inevitable in practice, recent work has called for ‘safe’ SSL methods that are always at least as efficient as their SL counterparts. For example, several authors have considered safe SSL methods for discriminative models based on density ratio weighted maximum likelihood [Sokolovska et al., 2008, Kawakita and Kanamori, 2013, Kawakita and Takeuchi, 2014]. Though the true density ratio is 1 in the MCAR setting, the efficiency gain is achieved through estimation of the density ratio weights, a statistical paradox previously observed in the missing data literature [Robins et al., 1992, 1994]. More recently, Krijthe and Loog [2016] introduced a SSL method for least squares classification that is guaranteed to outperform SL. Chakrabortty and Cai [2018] proposed an adaptive imputation-based SSL approach for linear regression that also outperforms supervised least squares estimation. It is unclear, however, whether these methods can be extended to accommodate additional loss functions. Moreover, none of the aforementioned methods are applicable to settings where the labeled data is not a uniform random sample from the underlying data, such as the stratified sampling design.
Additionally, the focus of existing work has been on the estimation of prediction models, rather than the estimation of model performance metrics. Gronsbell and Cai [2018] recently proposed a semi-supervised procedure for estimating the receiver operating characteristic parameters, but this method is similarly limited to the standard MCAR setting.
This paper addresses these limitations through the development of an efficient SS estimation method for model performance metrics that is robust to model misspecification in the presence of stratified sampling. Specifically, we develop an imputation-based procedure to evaluate the prediction performance of a potentially misspecified binary regression model. To the best of our knowledge, the proposed method is the first SSL procedure that provides efficient and robust estimation of prediction performance measures under stratified sampling. We focus on two commonly used error measures, the overall misclassification rate (OMR) and the Brier score. The proposed method involves two steps of estimation. In step I, the missing labels are imputed with a weighted regression with nonlinear basis functions to account for stratified sampling and to improve efficiency. In step II, the initial imputations are augmented to ensure the consistency of the resulting estimators regardless of the specification of the prediction model or the imputation model. Through theoretical results and numerical studies, we demonstrate that the SS estimators of prediction performance are (i) robust to the misspecification of the prediction or imputation model and (ii) substantially more efficient than their SL counterparts. We also develop an ensemble cross-validation (CV) procedure to adjust for overfitting and a perturbation resampling procedure for variance estimation.
The remainder of this paper is organized as follows. In Section 2, we specify the data structure and problem set-up. We then develop the estimation and bias correction procedure for the accuracy measures in Sections 3 and 4. Section 5 outlines the asymptotic properties of the estimators and section 6 introduces the perturbation resampling procedure for making inference. Our proposals are then validated through a simulation study in Section 7 and a real data analysis of an EHR-based study of diabetic neuropathy is presented in Section 8. We conclude with additional discussions in Section 9.
2. Preliminaries
2.1. Data Structure
Our interest lies in evaluating a prediction model for a binary phenotype y based on a predictor vector x of some fixed dimension p. The underlying full data consists of independent and identically distributed random vectors
where S is a discrete stratification variable that defines a fixed number of strata for sampling, and Ns is the sample size of stratum s. Throughout, we let F0 be a future realization of F.
Due to the difficulty in ascertaining y, a small uniform random sample is obtained from each stratum and labeled with outcome information. The observable data therefore consists of
where indicates whether yi is ascertained. We let
Without loss of generality, we suppose that the first n subjects are labeled and that the stratum-specific label counts ns are specified by design. We assume that
as n → ∞ and N → ∞, respectively. This ensures that
as n → ∞ for any pair of s and t [Mirakhmedov et al., 2014]. As in the standard SS setting, we further assume maxs ns/Ns → 0 as N → ∞ [Chakrabortty and Cai, 2018, Zhang et al., 2019]. This assumption distinguishes the current setting from (i) the familiar missing data setting, where the labeling probability is bounded away from 0, and (ii) standard SSL under uniform random sampling, as y is MCAR only conditional on S under stratified sampling.
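As a concrete illustration of the sampling scheme above, the following sketch draws a uniform random labeling subsample within each stratum. The function name and interface are hypothetical, not from the paper's implementation:

```python
import numpy as np

def stratified_label_sample(strata, n_per_stratum, rng=None):
    """Select a uniform random subsample for labeling within each stratum.

    strata: length-N integer array of stratum memberships S_i.
    n_per_stratum: dict mapping stratum -> number of labels n_s.
    Returns a boolean mask R with R_i = True if subject i is labeled.
    """
    rng = np.random.default_rng(rng)
    R = np.zeros(len(strata), dtype=bool)
    for s, n_s in n_per_stratum.items():
        idx = np.flatnonzero(strata == s)   # subjects in stratum s
        R[rng.choice(idx, size=n_s, replace=False)] = True
    return R
```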
2.2. Problem Set-Up
To predict y0 based on x0, we fit a working regression model
| (1) |
where is an unknown vector of regression parameters and is a specified, smooth monotone function such as the expit function. The target model parameter, , is the solution to the estimating equation
We let the predicted value for y0 be for some function . In this paper, we aim to obtain SS estimators of the prediction performance of quantified by the Brier score
and the overall misclassification rate (OMR) with for some specified constant c.
We focus on these two metrics as they convey distinct information about the performance of the prediction model. The OMR summarizes the overall discrimination capacity of the model while the Brier score summarizes the calibration of the model. More complete discussions regarding assessment of model performance can be found in Hand [1997, 2001], Gneiting and Raftery [2007] and Gerds et al. [2008].
To simplify presentation, we generically write
where for Brier score, and for the OMR. We will construct a SS estimator of to improve the statistical efficiency of its SL counterpart, , where
and the weights account for the stratified sampling with . Since for those with , standard M-estimation theory cannot be directly applied to establish the asymptotic behavior of the SL estimators. We show in Appendix C that is a root-n consistent estimator for and derive the asymptotic properties of in Appendix D. We also note that throughout the article we use the subscripts n or N to index estimating equations to clarify if they are computed with the labeled or full data, respectively.
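To make the weighting concrete, the sketch below computes supervised IPW estimates of the Brier score and OMR, assuming weights proportional to Ns/ns for labeled subjects in stratum s (a standard inverse-probability choice; the paper's exact normalization may differ):

```python
import numpy as np

def weighted_accuracy(y, p_hat, strata, R, c=0.5):
    """Supervised IPW estimates of the Brier score and OMR under
    stratified labeling. Weights proportional to N_s/n_s are assumed;
    this is an illustrative sketch, not the paper's exact estimator."""
    w = np.zeros(len(y), float)
    for s in np.unique(strata):
        in_s = strata == s
        N_s, n_s = in_s.sum(), (in_s & R).sum()
        w[in_s & R] = N_s / n_s          # inverse sampling probability
    w = w / w.sum()                       # normalize over labeled set
    brier = np.sum(w[R] * (y[R] - p_hat[R]) ** 2)
    omr = np.sum(w[R] * (y[R] != (p_hat[R] >= c)))
    return brier, omr
```

With a single stratum that is fully labeled, the weights are uniform and the estimates reduce to the usual empirical Brier score and misclassification rate.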
3. Estimation Procedure
Our approach to obtaining a SS estimator of proceeds in two steps. First, the missing outcomes are imputed with a flexible model to improve statistical efficiency. Next, the imputations are augmented so that the resulting estimators are consistent for regardless of the specification of the prediction model or the imputation model. The final estimator of the accuracy measure is then computed using the full data and the augmented imputations. This estimation procedure is detailed in the subsequent sections.
We comment here that the initial imputation step allows for construction of a simple and efficient SS estimator of . As efficient estimation of may not be of practical utility in the prediction setting, we keep our focus on accuracy parameter estimation and defer some of the technical details of model parameter estimation to the Appendix. However, we do note that the estimator of inherently has two sources of estimation variability. The dominating source of variation is from estimating the accuracy measure itself while the second source is from the estimation of the regression parameter. Therefore, by leveraging a SS estimator of in estimating , we may further improve the efficiency of our SS estimator of the accuracy measure. These statements are elucidated by the influence function expansions of the SS and SL estimators of presented in Section 5.
3.1. Step 1: Flexible imputation
We propose to impute the missing y with an estimate of m(u). The purpose of the imputation step is to make use of the unlabeled data as it essentially characterizes the covariate distribution due to its size. The accuracy metrics provide measures of agreement between the true and predicted outcomes and therefore depend on the covariate distribution. We thus expect to improve estimation precision by incorporating the unlabeled data into estimation. In taking an imputation-based approach, we rely on our estimate of m(u) to capture the dependency of y on u in order to glean information from the unlabeled data. While a fully nonparametric method such as kernel smoothing allows for complete flexibility in estimating m(u), smoothing generally does not perform well with moderate p due to the curse of dimensionality [Kpotufe, 2010]. To overcome this challenge and allow for a rich model for m(u), we incorporate some parametric structure into the imputation step via basis function regression.
Let be a finite set of basis functions with fixed dimension that includes x. We fit a working model
| (2) |
to and impute y as where is the solution to
| (3) |
and is a tuning parameter to ensure stable fitting. Under Conditions 1–3 given in Section 5, we argue in Appendix D that is a regular root-n consistent estimator for the unique solution, , to . We take our initial imputations as .
With y imputed as , we may also obtain a simple SS estimator for , , as the solution to
The asymptotic behavior of is presented and compared with in the Appendix. When the working regression model (1) is correctly specified, it is shown that is fully efficient and is asymptotically equivalent to . When the outcome model in (1) is not correctly specified, but the imputation model in (2) is correctly specified, we show in Appendix E that is more efficient than . When the imputation model is also misspecified, tends to be more efficient than , but the efficiency gain is not theoretically guaranteed. We therefore obtain the final SS estimator, denoted as , as a linear combination of and to minimize the asymptotic variance. Details are provided in Appendices A and B.
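A minimal sketch of the Step I imputation, using plain gradient ascent for a ridge-penalized, weighted logistic basis regression in place of the exact estimating equation (3); all names are illustrative:

```python
import numpy as np

def impute_labels(Phi, y, R, w, lam=1e-2, n_iter=1000, lr=0.5):
    """Step I sketch: ridge-penalized, weighted logistic regression of y
    on the basis Phi(u), fit on the labeled subjects (R == True), then
    used to impute every subject. Plain gradient ascent stands in for
    the paper's estimating equation (3)."""
    gamma = np.zeros(Phi.shape[1])
    Xl, yl, wl = Phi[R], y[R], w[R]
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xl @ gamma))
        grad = Xl.T @ (wl * (yl - p)) - lam * gamma  # penalized score
        gamma += lr * grad
    return 1.0 / (1.0 + np.exp(-Phi @ gamma))  # imputations over all subjects
```

In practice the basis Phi would include nonlinear terms such as natural splines of x, as in the simulation studies of Section 7.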
3.2. Step 2: Robustness augmentation
To obtain an efficient SS estimator for , we note that
| (4) |
is linear in y when . With a given estimate of , denoted by , a SS estimate of can be obtained as
However, needs to be carefully constructed to ensure that the resulting estimator is consistent for under possible misspecification of the imputation model. Using the expression in (4), a sufficient condition to guarantee consistency for is that
| (5) |
This condition implies that . Unfortunately, does not satisfy (5) when (2) is misspecified. To ensure that (5) holds regardless of the adequacy of the imputation model used for estimating the regression parameters, we augment the initial imputation as
where and is the solution to the IPW estimating equation
| (6) |
We let be the limiting value of which solves the limiting estimation equation
| (7) |
This estimating equation is monotone in zθ for any θ and thus exists under mild regularity conditions. It also follows from (7) that (i) and (ii) which ensure that the sufficient condition in (5) is satisfied. We thus construct a SS estimator of D(θ) as
In Section 5.2, we present the asymptotic properties of and and compare with its supervised counterpart, . Similar to the SS estimation of , it is shown in Appendix F that the unlabeled data helps to reduce the asymptotic variance of . Specifically, is shown to be asymptotically more efficient than when the imputation model is correct. In practice, however, we may want to use instead of to achieve improved finite sample performance.
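The following sketch illustrates the spirit of the augmentation: the initial imputations are recalibrated on the labeled set so that the weighted residuals are orthogonal to an intercept and the imputations themselves, a simple analogue of the sufficient condition (5). It is an illustrative calibration, not the paper's exact equation (6):

```python
import numpy as np

def augment_imputations(m_hat, y, R, w):
    """Step II sketch: weighted least squares of y on (1, m_hat) over
    the labeled set, so that weighted labeled-set residuals are
    orthogonal to the intercept and to m_hat. This mimics, in a simple
    form, the calibration that guarantees consistency in (5)."""
    Z = np.column_stack([np.ones(R.sum()), m_hat[R]])
    W = np.diag(w[R])
    beta = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y[R])
    Z_all = np.column_stack([np.ones(len(m_hat)), m_hat])
    return Z_all @ beta  # augmented imputations for all subjects
```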
4. Bias Correction via Ensemble Cross-Validation
Similar to the supervised estimators of the prediction performance measures, the proposed plug-in estimator uses the labeled data for both constructing and evaluating the prediction model and is therefore prone to overfitting bias [Efron, 1986]. K-fold cross-validation (CV) is a commonly used method to correct for such bias. However, it has been observed that CV tends to result in overly pessimistic estimates of accuracy measures, particularly when n is not very large relative to p [Jiang and Simon, 2007]. Bias correction methods such as the 0.632 bootstrap have been proposed to address this behavior [Efron, 1983, Efron and Tibshirani, 1997, Fu et al., 2005, Molinaro et al., 2005]. Here, we propose an alternative ensemble CV procedure that takes a weighted sum of the apparent and K-fold CV estimators.
We first construct a K-fold CV estimator by randomly partitioning into K disjoint folds of roughly equal size, denoted by . Since N is assumed to be sufficiently large, no CV is necessary for projecting to the full data. For a given k, we use to estimate and , denoted as and , respectively. The observations in are used in the augmentation step to obtain , the solution to . For the kth fold, we estimate the accuracy measure as
where , and take the final CV estimator as . In practice, we suggest averaging over several replications of CV to remove the variation due to the CV partition. We then obtain the weighted CV estimator with
We may similarly obtain a CV-based supervised estimator, denoted by , as well as the corresponding weighted estimator, . Note that the fraction of observations from stratum s in the kth fold, , deviates from in the order of . Although this deviation is asymptotically negligible, it may be desirable to perform the K-fold partition within each stratum to ensure that when ns is small or moderate.
Using similar arguments as those given in Tian et al. [2007], it is not difficult to show that and are first-order asymptotically equivalent. Thus, the ensemble CV estimator reduces the higher order bias of and , but has the same asymptotic distribution. Although the empirical performance is promising, it is difficult to rigorously study the bias properties of as the regression parameter does not necessarily minimize the loss function, . We provide a heuristic justification of the ensemble CV method in Appendix H which assumes minimizes .
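The within-stratum K-fold partition suggested above can be sketched as follows (a hypothetical helper; it keeps each fold's stratum fractions aligned with those of the full labeled set):

```python
import numpy as np

def stratified_kfold(strata_labeled, K, rng=None):
    """Partition labeled indices into K folds within each stratum, so
    that fold-level stratum fractions match the labeled set overall,
    as recommended when n_s is small or moderate."""
    rng = np.random.default_rng(rng)
    folds = [[] for _ in range(K)]
    for s in np.unique(strata_labeled):
        idx = rng.permutation(np.flatnonzero(strata_labeled == s))
        for k, chunk in enumerate(np.array_split(idx, K)):
            folds[k].extend(chunk.tolist())
    return [np.array(f) for f in folds]
```

The ensemble estimator itself is then simply a convex combination of the apparent and K-fold CV estimates of the accuracy measure.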
5. Asymptotic Analysis
We next present the asymptotic properties of our proposed SS estimator of . To facilitate our presentation, we first discuss the properties of as the accuracy parameter estimates inherently depend on the variability in estimating . We then present our main result highlighting the efficiency gain of our proposed SS approach for accuracy parameter estimation. We conclude our theoretical analysis with two practical discussions of (i) intrinsic efficient estimation in the SS setting and (ii) optimal allocation in stratified sampling.
For our asymptotic analysis, we let if is positive definite and if is positive semi-definite for any two symmetric matrices Σ1 and Σ2. For any matrix M and vectors v1 and v2, Mj· represents the jth row vector and (v1′, v2′)′ is the vector concatenating v1 and v2. To establish our theoretical results, we recall that ns/n and Ns/N converge to some fixed values ρ1s and ρs in probability, as assumed in Section 2.1, and introduce the following three conditions.
Condition 1.
The basis Φ(u) contains x, has compact support, and is of fixed dimension. The density function for x, denoted by p(x), and P(y = 1 | u) are continuously differentiable in the continuous components of x and u, respectively. There is at least one continuous component of x with corresponding non-zero component in .
Condition 2.
The link function g(·) is continuously differentiable with derivative .
Condition 3.
(A) There is no vector γ such that and . (B) There is a small neighborhood of for some δ > 0, such that for any θ ∈ Θ, there is no vector r such that and . (C) .
Remark 1.
Conditions 1–3 are commonly used regularity conditions in M-estimation theory and are satisfied in broad applications. Similar conditions can be found in Tian et al. [2007] and Section 5.3 of Van der Vaart [2000]. Conditions 3(A) and 3(B) assume that there are no vectors γ and ν such that the corresponding linear combinations perfectly separate the samples based on y. In our application of EHR data analysis, these conditions are typically satisfied as the outcomes of interest (i.e. disease status) do not depend perfectly on covariates such as billing codes, lab values, procedure codes, and other features extracted from free-text. Similar to Tian et al. [2007], Condition 3(A) ensures the existence and uniqueness of the limiting parameters and and Condition 3(B) ensures the existence and uniqueness of .
5.1. Asymptotic Properties of
The asymptotic properties of are summarized in Theorem 1 and the justification is provided in Appendix E.
Theorem 1.
Under Conditions 1–3, , and
which weakly converges to where
Remark 2.
To contrast with the supervised estimator , we show in Appendix C that
which weakly converges to where
It follows that when the imputation model is correctly specified, . When , we have that .
5.2. Asymptotic Properties of and
The asymptotic properties of are summarized in Theorem 2 and the justification is provided in Appendix F.
Theorem 2.
Under Conditions 1–3, , and is asymptotically Gaussian with mean zero and variance given in Appendix F. Also, is asymptotically equivalent to
where , is the imputation model based approximation to and .
Remark 3.
We also show that is asymptotically equivalent to
which is also asymptotically Gaussian with mean zero where W is a diagonal matrix defined in Appendix B.
Remark 4.
As shown in Appendix D, is asymptotically Gaussian with mean zero and variance defined in Appendix D. It is equivalent to
We verify in Appendix F that when the imputation model is correctly specified, the asymptotic variance of is smaller than that of regardless of the specification of the working regression model in (1). This is because the accuracy measures always depend on the marginal distribution of x and the proposed SS approach leverages . Therefore is asymptotically more efficient than even when model (1) is correctly specified and is fully efficient.
While we cannot theoretically guarantee that the SS estimator is more efficient than the supervised estimator under misspecification of the imputation model, the first and dominating term in the influence function expansion corresponds to the variability from estimating the accuracy measure. Even when the imputation model is misspecified, it may still provide a close approximation to P(y = 1 | u) and therefore result in reduced variability relative to the supervised approach. The second term of the influence function corresponds to the variability from estimation of the regression parameter. As the SS estimator of the regression parameter is more efficient than its supervised counterpart under model misspecification, we also expect this term to have smaller variation than its supervised counterpart. In our simulation studies, we evaluate the performance of our proposals under various model misspecifications to assess whether this heuristic justification holds up empirically. We also further study this limitation from a theoretical viewpoint in the next section where we introduce a SS estimator with the intrinsic efficiency property from the semiparametric inference literature for comparison.
5.3. Intrinsic Efficient SS Estimation
For simplicity, we begin our discussion of intrinsic efficient estimation focusing on estimation of the regression parameter. Recall that the idea in Section 3.1 is to (i) solve to obtain estimated coefficients for imputation and then (ii) solve to obtain the SS estimator, . By Theorem 1, for any , the asymptotic variance of can be expressed as:
| (8) |
where for each . When the imputation model is misspecified, an alternative estimating equation for γ may be used to directly reduce the asymptotic variance of the resulting SS estimator. Specifically, for a fixed Φ, we may find the estimating equation for γ that leads to the lowest asymptotic variance of the estimator for , a property referred to as “intrinsic efficiency” in the semiparametric inference literature [Tan, 2010]. We briefly propose an estimation procedure achieving this property, which has the potential to improve upon our original proposal under misspecification of the imputation model.
To directly minimize the asymptotic variance of the SS estimator of given by (8), we obtain the estimated coefficients for the imputation model with
| (9) |
where , are the empirical estimates of and A, respectively, and is again a tuning parameter for stable fitting. We then solve to obtain , and return as the intrinsic efficient estimator for . The moment condition in (9) is used for calibrating the potential bias from a misspecified imputation model and ensuring the consistency of . This condition is explicitly imposed when constructing our original proposal.
To study the asymptotic properties of and compare it with our original proposal, , we let
be the limit of , where . The proof of Theorem 3 is provided in Appendix G.2.
Theorem 3.
Under condition 1, and conditions A1 and A2 introduced in Appendix G.2, converges weakly to a mean zero normal distribution, and is asymptotically equivalent to where
In addition: (i) when the imputation model is correctly specified, is asymptotically equivalent to and (ii) the asymptotic variance of is minimized among estimators with . Consequently, the variance of the intrinsic efficient estimator is always less than or equal to the asymptotic variance of both and .
The details and theoretical analysis of the intrinsic efficient estimation procedure for the accuracy measure are presented in Appendices G.1 and G.2. Similar to Theorem 3, we show that is asymptotically equivalent to our proposal, , when the imputation model is correctly specified and has smaller asymptotic variance than when the imputation model is misspecified. However, it is important to note that estimation based on intrinsic efficiency is a non-convex problem and one may encounter numerical optimization issues which may limit its use in practice. We provide simulation studies comparing the intrinsic efficient estimator to the proposed approach in Section S4 of the Supplement.
5.4. Optimal Allocation in Stratified Sampling
Another important practical issue is how to select the strata and the corresponding selection probabilities. Here we provide a detailed assessment of the optimal (or Neyman) allocation of the labeled data across the strata. Specifically, the general form of the influence function for our estimators is
for a function f with and . The asymptotic variance can then be expressed as
by the Cauchy-Schwarz inequality, and equality holds if and only if
| (10) |
The optimal sampling probabilities are therefore proportional to (i) the relative stratum size and (ii) the variability within the stratum, with greater weight placed on large strata with high variability. Consequently, stratified sampling leads to a more efficient estimator than uniform random sampling when the allocation in (10) is used.
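The allocation rule in (10) can be sketched as follows, with sample sizes proportional to stratum size times within-stratum standard deviation (a standard statement of Neyman allocation; `neyman_allocation` is an illustrative helper):

```python
import numpy as np

def neyman_allocation(N_s, sd_s, n_total):
    """Optimal (Neyman) allocation sketch: labeled sample sizes
    proportional to stratum size times within-stratum standard
    deviation. Rounding may leave the total off by one or two."""
    N_s, sd_s = np.asarray(N_s, float), np.asarray(sd_s, float)
    alloc = N_s * sd_s
    return np.round(n_total * alloc / alloc.sum()).astype(int)
```

For example, with two equally sized strata whose within-stratum standard deviations are 1 and 3, three quarters of the labeling budget goes to the more variable stratum.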
Remark 5.
There is a rich body of survey sampling literature concerning model-assisted approaches that address the practically important question of how to select the strata and the corresponding selection probabilities [Neyman, 1934, Särndal et al., 2003, Nedyalkova and Tillé, 2008]. The optimal allocation given by (10) is in a similar spirit to the sampling schemes used in Cai and Zheng [2012], Liu et al. [2012]. It is particularly useful for EHR-based phenotyping studies such as the diabetic neuropathy example in Section 8 as it is often straightforward for domain experts to define one or more filter variables that yield a relatively large stratum with increased prevalence of y (e.g. patients with notes containing terms related to the disease, relevant lab values, or specialist visits) and thus increased variability.
We provide additional numerical studies to illustrate Remark 5 in Section S5 of the Supplement.
6. Perturbation Resampling Procedure for Inference
We next propose a perturbation resampling procedure to construct standard error (SE) and confidence interval (CI) estimates in finite samples. Resampling procedures are particularly attractive for making inference about the OMR since the misclassification indicator is not differentiable in θ. To this end, we generate a set of independent and identically distributed (i.i.d.) non-negative random variables, independent of the observed data, from a known distribution with mean one and unit variance.
For each set of , we first obtain a perturbed version of as
where . We use CV to correct for variance underestimation due to overfitting. Next, we find the solution to the perturbed objective function
| (11) |
and the solution that solves
to obtain perturbed counterparts of and , respectively. We then compute and obtain the perturbed estimator of as
Following arguments such as those in Tian et al. [2007], one may verify that converges to the same limiting distribution as . Additionally, it may be shown that and hence converge to the limiting distribution of . We utilize these results to approximate the distribution of with the empirical distribution of a large number of perturbed estimates obtained with the above resampling procedure, so that inference is based on the proposed bias-corrected estimator. The variance of can correspondingly be estimated with the sample variance and confidence intervals may be constructed accordingly.
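A generic sketch of the resampling loop: the user supplies a closure that maps one draw of mean-one, variance-one perturbation weights (here Exp(1)) to a perturbed estimate, standing in for the full re-estimation in (11):

```python
import numpy as np

def perturbation_se(est_fn, n_labeled, B=500, rng=None):
    """Perturbation-resampling SE sketch. est_fn takes a vector of
    i.i.d. mean-one, unit-variance weights (Exp(1) here) and returns a
    perturbed estimate; est_fn is a user-supplied closure standing in
    for re-solving the perturbed estimating equations. The SE estimate
    is the sample SD over B perturbed realizations."""
    rng = np.random.default_rng(rng)
    ests = [est_fn(rng.exponential(1.0, size=n_labeled)) for _ in range(B)]
    return np.std(ests, ddof=1)
```

For a weighted-mean estimator, this SE closely tracks the usual standard error of the mean, which is the behavior the asymptotic equivalence above guarantees.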
7. Simulation Studies
We conducted extensive simulation studies to evaluate the performance of the proposed SSL procedures and to compare to existing methods. Throughout, we generated p = 10 dimensional covariates x from N(0, C) with . Stratified sampling was performed according to generated from the following two mechanisms:
We let . For both settings, we sampled ns = 100 or 200 observations from each stratum. Throughout, we let v1 be the natural spline of x with 3 knots and v2 be the interaction terms , where and represent interaction terms of x1 with the remaining covariates and x2 with covariates excluding x1 and x2, respectively. With noise terms generated from the logistic and extreme value(−2, 0.3) distributions, respectively, we simulated y from the following models:
- with correct outcome model and correct imputation model:
- with incorrect outcome model and correct imputation model:
- with incorrect outcome model and incorrect imputation model:
While the outcome model is misspecified in both (ii) and (iii), the misspecification is more severe in (iii) due to the higher magnitude of nonlinear effects. These configurations are chosen to mimic EHR settings where the signals are typically sparse and S is small. The covariate effects of 1 represent the strong signals from the main billing codes and free-text mentions of the disease of interest. The two weaker signals 0.5 characterize features such as related medications, signs, symptoms and lab results relevant to the disease of interest.
Across all settings, we compare our SS estimators to both the SL estimator and the alternative density ratio (DR) method [Kawakita and Kanamori, 2013, Kawakita and Takeuchi, 2014]. The basis function required in the DR method was chosen to be the same as in our method for all settings. The details and theoretical properties of the DR method are further discussed in the Supplement. We employed the ensemble CV strategy to construct a bias-corrected DR estimator for , denoted as , to ensure a fair comparison to our approach. The three settings of outcome and imputation models under (i), (ii), and (iii) allow us to verify the asymptotic efficiency of the proposed SSL procedures relative to the SL and DR methods under various scenarios of misspecification.
For each configuration, results are summarized over 500 independent data sets. The size of the unlabeled data was chosen to be 20,000 across all settings. For all our numerical studies including the real data application, CV was performed with either K = 3 or K = 6 and averaged over 20 replications. The estimated SEs were based on 500 perturbed realizations and the OMR was evaluated with c = 0.5. We let when fitting the ridge-penalized logistic regression. We focus primarily on results for S = 2 and K = 6, but include results for S = 4 and K = 3 in Section S3 of the Supplement as they show similar patterns. Additionally, our analyses concentrate on the performance of the accuracy metrics. Results for the regression parameter estimates can be found in Section S2 of the Supplement. The code to implement the proposed methods and run the simulation studies can be found at https://github.com/jlgrons/Stratified-SSL.
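As a rough illustration of this simulation design, the following sketch generates the covariates, a sparse logistic outcome, and a stratified labeled set. The exact covariance C, coefficient values, and stratification mechanism are not reproduced above, so the quantities below are placeholders.

```python
# Illustrative data generation mimicking the simulation design: p = 10
# covariates from N(0, C), a logistic outcome with sparse signals, and
# stratified sampling of n_s labels per stratum. C, beta, and the
# stratification variable S below are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)
N, p, n_s = 20_000, 10, 100                 # unlabeled pool size, dim, per-stratum n
C = 0.2 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1)-type cov
X = rng.multivariate_normal(np.zeros(p), C, size=N)

beta = np.r_[1.0, 1.0, 0.5, 0.5, np.zeros(p - 4)]   # two strong, two weak signals
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

S = (X[:, 0] > 0).astype(int)               # placeholder stratification variable
labeled = np.concatenate([rng.choice(np.where(S == s)[0], n_s, replace=False)
                          for s in (0, 1)])  # n_s labels drawn within each stratum
```

Only `y[labeled]` would be observed in the SSL setting; the remaining outcomes serve as the unlabeled pool.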
In Figure 1, we present the percent biases of the apparent, CV, and ensemble CV estimators of the accuracy parameters. Although all three estimators have negligible biases, the SSL estimator exhibits slightly less bias than its supervised counterpart and the DR estimator under . The ensemble CV method is effective in bias correction, while the apparent estimators are optimistic and the standard CV estimator is pessimistic. For example, under , ns = 100 and S = 2, the percent biases of the SSL estimators for the OMR are −8.2%, 8.8% and −0.5% when we use the plug-in, 6-fold CV, and ensemble CV methods, respectively. The efficiency of and relative to for both the Brier score and OMR is presented in Figure 2. Again, is substantially more efficient than and , with efficiency gains of approximately 15%–30% under , 40% under , and 40%–80% under . Results for K = 3 show similar patterns and are presented in Figures S1 and S2 of the Supplement. In Table 1, we present the results for interval estimation obtained from the perturbation resampling procedure for the SS estimator . The SEs are well approximated and the empirical coverage of the 95% CIs is close to the nominal level across all settings.
Figure 1:
Percent biases of the apparent (AP), CV, and ensemble cross-validation (eCV) estimators of the Brier score (BS) and overall misclassification rate (OMR) for SL, SSL and DR under (i) , (ii) , and (iii) . Shown are the results obtained with K = 6 folds for CV.
Figure 2:
Relative efficiency (RE) of (SSL) and compared to for the Brier score (BS) and OMR under (i) , (ii) , and (iii) . Shown are the results obtained with K = 6 folds for CV.
Table 1:
The 100×ESE of , and under (i) ; (ii) , and (iii) . For , we also show the average of the 100 × ASE as well as the empirical coverage probability (CP) of the 95% confidence intervals constructed based on the resampling procedure.
| (i) | Brier score | | | | | OMR | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) |
| S = 2, ns = 100 | 1.27 | 1.19 | 1.25 | 0.94 | 1.31 | 2.29 | 2.10 | 2.23 | 0.97 | 2.35 |
| S = 2, ns = 200 | 0.97 | 0.87 | 0.86 | 0.93 | 0.95 | 1.67 | 1.50 | 1.60 | 0.97 | 1.67 |
| S = 4, ns = 100 | 0.97 | 0.89 | 0.85 | 0.93 | 0.95 | 1.70 | 1.54 | 1.58 | 0.95 | 1.64 |
| S = 4, ns = 200 | 0.67 | 0.58 | 0.60 | 0.95 | 0.66 | 1.16 | 1.02 | 1.11 | 0.97 | 1.16 |

| (ii) | Brier score | | | | | OMR | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) |
| S = 2, ns = 100 | 1.72 | 1.47 | 1.39 | 0.93 | 1.85 | 3.09 | 2.56 | 2.65 | 0.94 | 3.42 |
| S = 2, ns = 200 | 1.13 | 1.01 | 0.94 | 0.92 | 1.16 | 2.14 | 1.85 | 1.85 | 0.95 | 2.21 |
| S = 4, ns = 100 | 1.21 | 1.07 | 0.96 | 0.92 | 1.24 | 2.13 | 1.87 | 1.88 | 0.95 | 2.18 |
| S = 4, ns = 200 | 0.86 | 0.73 | 0.67 | 0.93 | 0.87 | 1.62 | 1.37 | 1.34 | 0.94 | 1.62 |

| (iii) | Brier score | | | | | OMR | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) | ESE (SL) | ESE (SSL) | ASE (SSL) | CP | ESE (DR) |
| S = 2, ns = 100 | 1.57 | 1.31 | 1.29 | 0.94 | 1.53 | 2.68 | 2.15 | 2.31 | 0.96 | 2.61 |
| S = 2, ns = 200 | 1.05 | 0.86 | 0.85 | 0.95 | 1.01 | 1.87 | 1.45 | 1.54 | 0.97 | 1.83 |
| S = 4, ns = 100 | 1.11 | 0.87 | 0.88 | 0.96 | 0.99 | 1.97 | 1.46 | 1.57 | 0.96 | 1.87 |
| S = 4, ns = 200 | 0.78 | 0.61 | 0.61 | 0.95 | 0.73 | 1.37 | 1.02 | 1.08 | 0.97 | 1.30 |
Remark 6.
While our simulation studies focus on the SSL estimators proposed in Section 3, we also investigated the finite sample performance of and and compared them with our original proposals. The numerical studies are described in Section S4 of the Supplement and demonstrate that when the estimated coefficients for the imputation model of the original estimators are equal or close to those of the intrinsic efficient estimator, the two methods perform equivalently with respect to mean squared error (MSE). In contrast, under a setting where the coefficients for the imputation model differ across the two methods, and have about 30% smaller MSE than and , on average. The detailed results are presented in Table S6 of the Supplement.
Remark 7.
To illustrate the benefit of stratified sampling in both the supervised and SS settings, we provide numerical studies of the optimal allocation in Section S5 of the Supplement. Mimicking our example in Section 8, we let the risk of y differ significantly across the S = 2 sampling groups. The stratification variable is picked so that is much lower than . We consider two sampling strategies: (i) uniform random sampling of n subjects, and (ii) stratified sampling of n/2 subjects from each stratum. Since is low and close to 0, the variability of this stratum is smaller than that of = 2 with not near 0 and 1. Connecting this with (10), the stratified sampling strategy oversamples within so that its allocation of n2 is closer to the optimal choice. Consistent with this observation, our simulation results indicate that stratified sampling is more efficient than uniform random sampling in both the supervised and SS settings, with an average relative efficiency > 1.45 across different setups. We further inspect the supervised estimator of under setup (I) in Section S5, for which the optimal allocation is n1 = 0.47n and n2 = 0.53n and nearly coincides with our equal allocation of n across the two strata.
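The allocation intuition in this remark can be illustrated with a Neyman-type rule, under the standard assumption that stratum sample sizes should be proportional to stratum size times within-stratum standard deviation. The stratum sizes and prevalences below are illustrative, not the paper's values.

```python
# Neyman-type allocation sketch: stratum sample sizes proportional to stratum
# size times within-stratum standard deviation. All numbers are illustrative.
import numpy as np

def neyman_allocation(N_strata, sigmas, n_total):
    w = np.asarray(N_strata, dtype=float) * np.asarray(sigmas, dtype=float)
    return n_total * w / w.sum()

# For a binary outcome, the within-stratum sd is sqrt(p(1-p)); a rare-outcome
# stratum (p ~ 0.02) therefore warrants a smaller share than one with p ~ 0.5.
alloc = neyman_allocation([10_000, 10_000],
                          [np.sqrt(0.02 * 0.98), np.sqrt(0.5 * 0.5)],
                          n_total=200)
```

With two equal-sized strata, the low-prevalence stratum receives roughly a fifth of the labels under this rule, which is the sense in which oversampling the mixed-prevalence stratum moves the design toward the optimum.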
8. Example: EHR Study of Diabetic Neuropathy
We applied the proposed SSL procedures to develop and evaluate an EHR phenotyping algorithm for classifying diabetic neuropathy (DN), a common and serious complication of diabetes resulting in nerve damage. The full study cohort consists of N = 16,826 patients over age 18 with one or more of 12 ICD9 codes relating to DN identified from Partners HealthCare EHR. An initial assessment of 100 charts by physicians revealed the prevalence of DN in the study cohort was approximately 17%. To obtain a labeled set with sufficient DN cases for model training and improve efficiency, the investigators decided to employ a stratified sampling scheme. To do so, a binary filter variable indicating whether a patient had a neurological exam and a neurology note with at least 1,000 characters was created. The prevalence of DN in the “enriched” stratum with was expected to be higher than that in the stratum with . As demonstrated in our theoretical analysis in Section 5.4 and our numerical studies in Section S5, oversampling within the enriched set can improve estimation efficiency relative to taking a uniform random sample and is a common approach taken in EHR-based analyses. For this study, the investigators sampled n0 = 70 and n1 = 538 patients from the N0 = 13,608 patients with and the N1 = 3,218 patients with , respectively, for developing the phenotyping algorithm.
To train the model for classifying DN, a set of 11 codified and NLP features related to DN were selected from an original list of 75 via an unsupervised screening as described in Yu et al. [2015]. The codified features included , diagnostic codes for diabetes, type 2 diabetes mellitus, diabetic neuropathy, other idiopathic peripheral autonomic neuropathy, and diabetes mellitus with neurological manifestation as well as normal glucose lab values and prescriptions for anti-diabetic medications. The NLP features included mentions of DN-related terms in the patient record, such as glycosylated hemoglobin (HgA1c), diabetic, and neuropathy. As all these features (with the exception of ) are count variables and tend to be highly skewed, we used the transformation to stabilize model fitting.
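The skew-stabilizing step can be illustrated as follows. The paper's exact transformation is not reproduced above; log(1 + x) is a common choice for count data and is used below purely as an example.

```python
# Illustrative preprocessing for skewed EHR count features. The paper's exact
# transformation is not reproduced here; log(1 + x) is a common choice for
# count data and serves only as an example.
import numpy as np

counts = np.array([0, 1, 3, 12, 250])       # e.g. code/term counts per patient
stabilized = np.log1p(counts)               # compresses the long right tail
```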
We developed DN classification models by fitting a logistic regression with the above features based on and obtained from density ratio weighted estimation. Since the proportion of observations with is relatively low in the labeled data, we implemented 100 replications of the 6-fold CV procedure, splitting the data randomly within each stratum to improve the stability of CV. To construct the basis for and , we used a natural spline with 3 knots on all covariates except . To improve training stability, we set the ridge tuning parameter λn = n−1 when fitting the imputation model.
As shown in Table 2(a), the point estimates are reasonably similar, which confirms the consistency and stability of the SS estimator in a real data setting. As expected, we find that the two most influential predictors are the diagnostic code for diabetic neuropathy and anti-diabetic medications. Importantly, we note substantial efficiency gains of compared to . The SSL estimates are > 50% more efficient than the SL estimates for several features including six diagnostic code features and one NLP feature for DN. Additionally, is the most efficient estimator among all three approaches for nearly all variables.
Table 2:
Results from the diabetic neuropathy EHR study: (a) estimates (Est.) of the regression parameters for both codified (COD) and NLP features based on , and along with their estimated SEs and the coordinate-wise relative efficiencies (RE) of compared to and (b) , and along with their estimated SEs and relative efficiencies (RE) of and compared to .
| (a) Estimates of the Regression Coefficients |

| | | SL Est. | SL SE | SSL Est. | SSL SE | RE | DR Est. | DR SE | RE |
|---|---|---|---|---|---|---|---|---|---|
| | Intercept | −3.89 | 0.80 | −3.85 | 0.73 | 1.21 | −3.26 | 0.66 | 1.45 |
| COD | Neurological exam & note | −1.09 | 0.81 | −0.90 | 0.61 | 1.77 | −2.38 | 1.16 | 0.48 |
| | Diabetes | −0.24 | 0.57 | 0.02 | 0.43 | 1.79 | −0.09 | 0.48 | 1.42 |
| | Type 2 Diabetes Mellitus | −1.05 | 0.73 | −0.52 | 0.60 | 1.62 | −0.62 | 0.86 | 0.73 |
| | Diabetic Neuropathy | 1.99 | 0.86 | 1.79 | 0.68 | 1.58 | 1.66 | 1.08 | 0.63 |
| | Anti-diabetic Meds | 1.70 | 0.59 | 1.12 | 0.44 | 1.79 | 1.50 | 0.51 | 1.32 |
| | Diabetes Mellitus with Neuro Manifestation | 0.36 | 1.17 | 0.60 | 0.98 | 1.42 | 0.59 | 0.90 | 1.68 |
| | Other Idiopathic Peripheral Autonomic Neuropathy | 0.86 | 0.71 | 0.93 | 0.68 | 1.09 | 1.01 | 0.76 | 0.89 |
| | Normal Glucose | −0.56 | 0.65 | −0.20 | 0.51 | 1.64 | −1.27 | 0.80 | 0.66 |
| NLP | Diabetic | 0.30 | 0.58 | −0.57 | 0.49 | 1.39 | 0.14 | 0.65 | 0.80 |
| | HgA1c | −0.52 | 0.75 | −0.64 | 0.70 | 1.16 | −0.67 | 0.85 | 0.79 |
| | Neuropathy | 0.27 | 0.58 | 0.30 | 0.47 | 1.55 | 0.37 | 0.54 | 1.15 |

| (b) Estimates of the Accuracy Parameters (×100) |

| | SL Est. | SL SE | SSL Est. | SSL SE | RE | DR Est. | DR SE | RE |
|---|---|---|---|---|---|---|---|---|
| Brier score | 8.97 | 2.09 | 9.59 | 1.68 | 1.55 | 9.60 | 1.87 | 1.26 |
| OMR | 12.87 | 3.25 | 14.01 | 2.55 | 1.63 | 12.29 | 3.04 | 1.14 |
In Table 2(b), we compare and for the Brier score and OMR with c = 0.5. While the point estimates for the accuracy measures based on these different approaches are relatively similar, is 55% more efficient than for the Brier score and 63% more efficient for the OMR. Again, is substantially more efficient than the DR estimator . These results support the potential value of our method for EHR-based research as these gains in efficiency may be directly translated into requiring fewer labeled examples for model evaluation.
9. Discussion
In this paper, we focused on the evaluation of a classification rule derived from a working regression model under stratified sampling in the SS setting. In particular, we introduced a two-step imputation-based method for estimation of the Brier score and OMR that makes use of unlabeled data. Additionally, as a by-product of our procedure, we obtained an efficient SS estimator of the regression parameter. Through theoretical and numerical studies, we demonstrated the advantage of the SS estimator over the SL estimator with respect to efficiency. We also developed a weighted CV procedure to adjust for overfitting and a resampling procedure for making inference. Our numerical studies indicate that our proposed method outperforms the existing DR method for SSL in the finite sample studies and we provide further discussion of this finding in Section S6 of the Supplement. Importantly, this article is one of the first theoretical studies of labeling based on stratified sampling within the SSL literature. We focus on the stratified sampling scheme due to its direct application to a variety of EHR-based analyses, including the development of a phenotyping algorithm for diabetic neuropathy presented in the previous section.
In our numerical studies, we used spline functions with 3 or 4 knots and interaction terms for the imputation model. It would be possible to use more knots or add more features to the basis function for settings with a larger n. However, care must be taken to avoid overfitting and potential loss in the efficiency gain of the SS estimator in finite samples. Alternatively, other basis functions can be utilized provided that Φ(u) contains x in its components to ensure consistency of the regression parameter. In settings where nonlinear effects of x on y are present, it may be desirable to impose a more complex outcome model to improve the prediction performance. A potential approach is to explicitly include nonlinear basis functions in the outcome model. In Section S7 of the Supplementary Materials, we consider using the leading principal components (PCs) of x and Ψ(x) where Ψ(·) is a vector of nonlinear transformation functions under a variety of settings. This approach performs similarly to, or better than, the commonly used random forest model with respect to predictive accuracy, suggesting the utility of our proposed methods in the presence of nonlinear effects. Our numerical results also illustrate the efficiency gain of the SS estimators of the Brier score and OMR relative to the SL and DR methods. It is important to note, however, that the dimensions of both x and Φ were assumed to be fixed in our asymptotic analysis. Accommodating more complex modeling with p not small relative to n requires extending the proposed SSL approach to settings where x and Φ are high dimensional.
For accuracy estimation, we proposed an ensemble CV estimator that eliminates first-order bias when the estimated regression parameter is the minimizer of the empirical performance measure. Though this condition may not hold when the outcome model is misspecified, we have found that the suggested weights perform well in our numerical studies. Such ensemble methods that accommodate the more general case in both the supervised and SS settings warrant further research. Additionally, an important setting where our proposed SSL procedure would be of great use is in drawing inferences about two competing regression models. As it is likely that at least one model is misspecified, we would expect to observe efficiency gains in estimating the difference in prediction error with the proposed method.
Lastly, while the present work focuses on the binary outcome along with the Brier score and OMR, the proposed SSL framework can potentially be extended to more general settings with continuous y and/or other accuracy parameters. In particular, for binary y and corresponding classification rule , it would be of interest to consider estimation of the sensitivity, specificity, and weighted OMR with different threshold values to analyze the costs associated with the false positive and false negative errors.
Supplementary Material
Acknowledgments
This research was supported by grants F31-GM119263, T32-NS048005, and R01HL089778 from the National Institutes of Health.
Appendix
Here we provide justifications for our main theoretical results. The following lemma confirming the existence and uniqueness of the limiting parameters , and will be used in our subsequent derivations.
Lemma A1.
Under conditions 1–3, unique and exist. In addition, there exists δ > 0 such that a unique exists for any θ satisfying .
Proof.
Conditions 3 (A) and (B) imply that there is no θ and γ such that with non-trivial probability,
and
It follows directly from Appendix I of Tian et al. [2007] that there exist finite and that solve and , respectively, and that and are unique. For , there exists no ν such that
or
which similarly implies there exists a finite that is the solution to . The solution is also unique as for any . □
A. Estimation Procedure for
We propose to obtain a simple SS estimator for , as the solution to
| (A.1) |
Note that when u includes the stratum information and the imputation model (2) is correctly specified, there is no need to use the inverse probability weighted (IPW) estimating equation with as in (3). However, under the general scenario where the imputation model may be misspecified, the unweighted estimating equation is not guaranteed to provide an asymptotically unbiased estimate of and necessitates the use of the IPW approach.
As detailed in the subsequent sections, when the imputation and outcome models are misspecified, the efficiency gain of relative to , is not theoretically guaranteed. We therefore obtain the final SS estimator, denoted as , as a linear combination of and to minimize the asymptotic variance. For simplicity we consider here the component-wise optimal combination of the two estimators. That is, the jth component of , is estimated with
where is the first component of the vector and is a consistent estimator for . To estimate the variance of , one may rely on estimates of the influence functions of and . To avoid under-estimation in a finite sample, we obtain bias-corrected estimates of the influence functions via K-fold CV. Details on the K-fold CV procedure, as well as the computations for the aforementioned estimation of and , are given in Appendix B.
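The component-wise optimal combination can be sketched as follows, assuming estimated influence-function values are available for both estimators. The function name, data, and estimator values are illustrative, not the paper's implementation.

```python
# Sketch of the component-wise optimal linear combination of two estimators:
# given estimated influence-function values if1, if2 for estimators est1, est2
# of the same scalar parameter, choose the weight minimizing the variance of
# w*est1 + (1-w)*est2. Names and data are illustrative placeholders.
import numpy as np

def optimal_combination(est1, est2, if1, if2):
    v1, v2 = if1.var(ddof=1), if2.var(ddof=1)
    c = np.cov(if1, if2, ddof=1)[0, 1]
    w = (v2 - c) / (v1 + v2 - 2 * c)     # minimizer of the combined variance
    return w * est1 + (1 - w) * est2, w

rng = np.random.default_rng(2)
base = rng.normal(size=500)
if1 = base + rng.normal(scale=0.5, size=500)   # noisier estimator
if2 = base                                     # more efficient estimator
combo, w = optimal_combination(0.10, 0.12, if1, if2)
```

By construction, the combined estimator's (estimated) asymptotic variance is no larger than that of either input estimator, which is the sense in which the efficiency gain is guaranteed regardless of model misspecification.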
B. Cross-validation Based Inference for
Here we provide the details of the procedure to obtain as well as an estimate of its variance. We employ K-fold CV in the proposed procedure to adjust for overfitting in finite samples and denote each fold of as for . First, we estimate with
where is the estimator of based on , and is the supervised estimator of based on ,
In practice, may be unstable due to the high correlation of and . One may use a regularized estimator with some to stabilize the estimation and obtain accordingly. The covariance of may then be consistently estimated with
| (B.1) |
| (B.2) |
where is an estimate of , is the first component of , and is the first component of . The confidence intervals for the regression parameters can be constructed with the proposed variance estimates and the asymptotic normal distribution of the SS estimator.
C. Asymptotic Properties of
The main complication in deriving the asymptotic properties of the SL estimators arises from the fact that as and hence is an ill-behaved random variable tending to infinity in the limit for those with Vi = 1. This substantially distinguishes the SS setting from the standard missing data literature. To overcome this complication, we note that for subjects in the labeled set, and , are independent and identically distributed (i.i.d.) random variables since the labeled observations are drawn randomly within each stratum. Also note that as assumed in Section 2.1. Hence for any function f with ,
| (C.1) |
| (C.2) |
We begin by verifying that is consistent for . It suffices to show that (i) and (ii) for any [Newey and McFadden, 1994, Lemma 2.8]. To this end, we write
Under Conditions 1–3, x belongs to a compact set and is continuous and uniformly bounded for . From the uniform law of large numbers (ULLN) [Pollard, 1990, Theorem 8.2], converges to in probability uniformly as and . Furthermore, (ii) follows directly from Lemma A1 and consequently as .
Next we consider the asymptotic normality of . Noting that and , we apply Theorem 5.21 of Van der Vaart [2000] to obtain the Taylor expansion
where and . It then follows by the classical Central Limit Theorem that in distribution and
where .
D. Asymptotic Properties of
We begin by showing that as . We first note that since ,
| (D.1) |
It follows by the ULLN that since is continuously differentiable in θ and uniformly bounded. The consistency of for then follows from the fact that converges in probability to as .
To establish the asymptotic distribution of , we first consider . We verify that there exists δ > 0, such that the classes of functions indexed by θ:
are Donsker classes. For , we note that is a Vapnik-Chervonenkis class [Van der Vaart, 2000, Page 275] and thus
is a Donsker class. For is continuously differentiable in θ and uniformly bounded by a constant. It follows that is a Donsker class [Van der Vaart, 2000, Example 19.7]. By Theorem 19.5 of Van der Vaart [2000] we then have
which converges weakly to a mean-zero Gaussian process indexed by θ. Thus, is stochastically equicontinuous at . In addition, note that D(θ) is continuously differentiable at and . It then follows that
which converges in distribution to where
E. Asymptotic Properties of
We first consider the asymptotic properties of . Under Conditions 1–3 and using that and , we can adapt the same procedure as in Appendix C to show that as . We obtain the following Taylor series expansion
where . We then have that converges to a zero-mean Gaussian distribution. To verify that is consistent for , it suffices to show that (i) and (ii) , for any ϵ > 0 [Newey and McFadden, 1994, Lemma 2.8], where
For (i), we first note that since is continuous and Φ is bounded,
| (E.1) |
Note that is bounded and continuously differentiable in θ. We then apply the ULLN to have that . For (ii), we note
Therefore, (ii) holds by Lemma A1 and is consistent for .
Now we consider the weak convergence of . Under Conditions 1–3, we have the Taylor expansion
where . This expansion coupled with the fact that imply that
Letting , we note that for ,
Since xij is a component of the vector , the minimizer β can be chosen such that for which implies that . Thus
where . It then follows from the classical Central Limit Theorem that in distribution where
We then see that
Therefore, when the imputation model is correctly specified, it follows that
F. Asymptotic Properties of and
First note that by Lemma A1, there exists δ > 0 such that for all θ satisfying , so that is unique. Then, similar to the derivations in Appendices C and E, we may show that is consistent for and is asymptotically Gaussian with mean zero.
Let . For the consistency of for , we note that the uniform consistency of for and for , together with the ULLN and under regularity Conditions 1–3 and , imply . It then follows from the consistency of and for that and as .
To derive the asymptotic distribution for and , we first consider
Under Conditions 1–3 and by Taylor series expansion about and and the ULLN,
where and . From the previous section we have
Similar arguments can be used to verify that
where and . These results, together with the fact that converges weakly to zero-mean Gaussian process in θ, imply that
We may simplify the above expression by noting that is a linear combination of ziθ and hence . Additionally, which implies that . Thus,
This, combined with the fact that D(θ) is continuously differentiable at , that is consistent for its limiting value W introduced in Appendix B, and Conditions 1–3, then gives that
Note that the existence of is implied by Condition 1, namely, that the density function of is continuously differentiable in and is continuously differentiable in the continuous components of u. We then have that converges to a zero-mean normal random variable by the classical Central Limit Theorem. Using arguments similar to those for , we have that can be expanded as
which also converges to a zero-mean normal random variable.
Comparing the asymptotic variance of with , first note that
Letting and
we note that and are functions of Φi and do not depend on yi. Thus, when , we have and
while the asymptotic variance of is
Therefore, when , it follows that . Additionally, when model (1) is correct and , we have which is not equal to 0 with probability 1, so that again .
G. Intrinsic Efficient Estimation
G.1. Intrinsic Efficient Estimator for
We first introduce the intrinsic efficient estimator of the accuracy measures. Without loss of generality, we set the imputation basis for both θ and D(θ) as , where θ is plugged in with some preliminary estimator for θ, denoted as . In practice, one may take as either the simple SL estimator or the SSL estimator obtained following Section 3.1. We include in the imputation basis to simplify the notation and presentation in this section. Although this distinguishes the following discussion from the proposal in Section 3.2, it is straightforward to extend our results to the original proposal.
Recall that for the original SSL estimator of the regression parameter, one first obtains as the solution to
and then solves to obtain the estimator of . Despite the change in basis, we still denote this estimator as with a slight abuse in the notation. Adapting the augmentation procedure in Section 3.2, we then find as the solution to
and estimate with where . Extending Theorem 2, the asymptotic variance of may be expressed as
| (G.1) |
where represents the limits of , and . Analogous to the construction of , we consider minimizing the asymptotic variance given by (G.1) to estimate . Specifically, we first solve for with
| (G.2) |
where is an estimate of and the tuning parameter . Similar to (3) and (9), the moment constraints in (G.2) calibrate the potential bias of the estimators for and .
Next, we present the construction of for the Brier score and OMR separately. For the Brier score, , we take
For the OMR, , recall that a simple estimator is given by the empirical average
Since is not a differentiable function of θ, we first smooth each as , where K(·) represents the Gaussian kernel function, and with some bandwidth h > 0. Then, is estimated with
With , we then solve
to obtain for estimation of and employ the augmentation procedure in Section 3.2.
That is, we solve from
and estimate by where .
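The kernel-smoothing step above can be illustrated as follows: the non-differentiable classification indicator is replaced by the Gaussian kernel's CDF with a small bandwidth. The predicted probabilities, threshold c, and bandwidth h below are placeholders.

```python
# Sketch of Gaussian-kernel smoothing of the classification indicator used in
# the OMR: replace I(prob >= c) by the Gaussian CDF Phi((prob - c)/h) with a
# small bandwidth h, giving a differentiable surrogate. Values are placeholders.
import numpy as np
from scipy.stats import norm

def smoothed_omr(prob, y, c=0.5, h=0.05):
    soft = norm.cdf((prob - c) / h)          # smooth surrogate for I(prob >= c)
    # misclassification: predict 1 when soft ~ 1 but y = 0, and vice versa
    return np.mean(y * (1 - soft) + (1 - y) * soft)

prob = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0, 1, 1, 1])
hard = np.mean((prob >= 0.5).astype(int) != y)   # exact (non-smoothed) OMR
```

As h shrinks, the smoothed value approaches the exact OMR while remaining differentiable in the underlying regression parameter.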
To present the asymptotic properties of , we define
Theorem A1 provides the asymptotic expansion of and its proof, together with the proof of Theorem 3 from the main text, is detailed in Appendix G.2.
Theorem A1.
Under Conditions 1, Conditions A1 and A2 from Appendix G.2, and with the bandwidth , weakly converges to a Gaussian distribution with mean zero, and is asymptotically equivalent to where
This implies that (i) is asymptotically equivalent to when the imputation model is correctly specified and (ii) the asymptotic variance of is minimized among . Consequently, the asymptotic variance of the intrinsic efficient estimator is always less than or equal to the asymptotic variance of and .
G.2. Asymptotic Properties of and
We first introduce the smoothness condition on the link function g(·), which is stronger than Condition 2, but still holds for the most commonly used link functions such as the logit and probit functions.
Condition A1.
The link function is continuously twice differentiable with derivative and the second order derivative .
Given Condition A1, we let and define
and . We next present the regularity condition on the covariates and regression coefficients required by Theorem 3.
Condition A2.
There exists for some δ′ > 0, such that for any , there is no γ such that or . It is also the case that and .
Remark A1.
Condition A2 is analogous to Condition 3. It assumes that there is no linear combination of or perfectly separating the samples based on y, and that the Hessian matrices of the constrained least squares problems for and are positive definite. Again, these assumptions are mild and common in the M-estimation literature [Van der Vaart, 2000].
Under these regularity conditions, we present the proofs of Theorems 3 and A1. In our development, we take as the SSL estimator for θ introduced in Section 3.1, but note that the proof remains essentially unchanged when taking as the SL estimator. We first derive the consistency (error rates) of and . For the Brier score, let denote the derivative of evaluated at . We then use , Theorem 1, Conditions 1 and A1, and the classical Central Limit Theorem to derive that
For the estimator of the derivative for the OMR, let be the limiting value of . We then have
where represents the density function of evaluated at c. This follows from the fact that . Since the Gaussian kernel K(·) is continuously differentiable and by Theorem 1, Conditions 1 and A1, we have
For Δ2, Condition 1 and the classical Central Limit Theorem imply that
and from Condition 1,
where , and represent the density of x given that . By Condition 1, there exists C > 0 such that for any . Thus, we have and with , we obtain . It then follows that for both the Brier score and the OMR, .
Leveraging these results, we establish the asymptotic normality of and . Similar to Appendices C and E, we apply the ULLN [Pollard, 1990], together with Conditions 1, A1, and A2, and the facts that , and that and are consistent for their respective limits, to obtain
where and are two compact sets containing and , respectively. This implies that and . We then expand (9) and (G.2) to derive that
and , are two fixed loading matrices of the order O(1). By Condition 1 and the classical Central Limit Theorem, converges to a Gaussian distribution with mean 0. By Theorem also converges to a mean-zero Gaussian distribution. Analogous to the proof of Theorem 5.21 of Van der Vaart [2000], we then obtain
| (G.3) |
By Conditions 1, A1 and A2, the consistency of for its limit, and the asymptotic expansion of derived above, we can use the argument of Appendix E to show that and obtain the expansion
The second equality follows from the fact that
Thus, the asymptotic variance of , which is minimized among those of . From Theorem is asymptotically equivalent to . Therefore, when the imputation model is correctly specified, that is, there exists γ0 such that , it follows that is asymptotically equivalent to . This completes the proof of Theorem 3.
Using our previous arguments, we next establish Theorem A1. Similar to (G.3), we expand
The third equality follows from the fact that and
Using this result, and applying arguments similar to those used for , we have that
We then follow the same procedure as in Appendix F (specifically, since corresponds to plugged into , the derivation for the augmentation approach in Section 3.2 can be used directly) to derive that and
By the definition of , the asymptotic variance of is minimized among those of . Additionally, we may use a similar procedure as that in Appendix F to derive that
Thus, when the imputation model for estimating D, i.e., , is correct, we have and is asymptotically equivalent to . These arguments establish Theorem A1.
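The augmentation step argued above can be illustrated with a minimal numpy sketch for estimating a simple mean functional: an initial imputation over the full sample is corrected by an inverse-probability-weighted residual on the labeled, stratified subsample, so the combined estimator stays consistent even when the imputation model is misspecified. The function name and the mean target are illustrative, not the paper's exact estimator.

```python
import numpy as np

def augmented_estimate(y_lab, m_lab, m_all, w_lab):
    """Augmented SSL estimate of a mean E[y] (doubly-robust-style sketch).

    y_lab : observed labels on the labeled subsample
    m_lab : imputed values on the labeled subsample
    m_all : imputed values on the full (labeled + unlabeled) sample
    w_lab : inverse sampling weights accounting for stratified sampling

    The first term is the plug-in imputation average over everyone; the
    second is the weighted residual correction, which removes the bias
    of a misspecified imputation model.
    """
    return np.mean(m_all) + np.average(y_lab - m_lab, weights=w_lab)
```

For example, if the imputation model is biased by a constant, the weighted residual term cancels the bias exactly, which is the qualitative content of the robustness claim.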
H. Justification for Weighted CV Procedure
To provide a heuristic justification for the weights in our ensemble CV method, consider an arbitrary smooth loss function d(·,·) and let . Let denote the unbiased empirical estimate of and suppose that minimizes (i.e., ). Suppose that in distribution. Then, by a Taylor series expansion of ,
where . For the K-fold CV estimator, , we note that since is independent of
where the second equality follows from the fact that when minimizes . Letting with , it follows that and thus
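As a concrete companion to this argument, the following is a minimal numpy sketch of a weighted K-fold CV estimate, using squared-error loss and the training-fold mean as the fitted minimizer. The fold weights here are user-supplied placeholders for the variance-minimizing weights w_k derived above.

```python
import numpy as np

def weighted_cv_loss(y, n_splits, weights, rng=None):
    """Weighted K-fold CV estimate of squared-error loss.

    The fitted rule on each training set is the sample mean (the
    minimizer of squared loss); per-fold test losses are combined with
    weights w_k normalized to sum to one. The weights are placeholders
    for the ensemble weights justified in Appendix H.
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_splits)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    losses = []
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        theta_hat = y[train].mean()  # minimizer of squared loss on the training folds
        losses.append(np.mean((y[test] - theta_hat) ** 2))
    return float(np.dot(w, losses))
```

Because each fold's test set is independent of its training fit, the weighted combination remains a (first-order) unbiased estimate of the loss for any choice of normalized weights; the choice of w_k only affects the variance.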
REFERENCES
- Ananthakrishnan A, Cai T, Savova G, Cheng S, Chen P, Perez R, Gainer V, Murphy S, Szolovits P, Xia Z, et al. Improving case definition of Crohn's disease and ulcerative colitis in electronic medical records using natural language processing: a novel informatics approach. Inflammatory Bowel Diseases, 19(7):1411–1420, 2013.
- Belkin M and Niyogi P. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1–3):209–239, 2004.
- Belkin M, Niyogi P, and Sindhwani V. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7:2399–2434, 2006.
- Cai T and Zheng Y. Evaluating prognostic accuracy of biomarkers in nested case–control studies. Biostatistics, 13(1):89–100, 2012.
- Castelli V and Cover TM. The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Transactions on Information Theory, 42(6):2102–2117, 1996.
- Chakrabortty A and Cai T. Efficient and adaptive linear regression in semi-supervised settings. The Annals of Statistics, 46(4):1541–1572, 2018.
- Chapelle O, Scholkopf B, and Zien A (eds.). Semi-supervised learning [book review]. IEEE Transactions on Neural Networks, 20(3):542, 2009.
- Corduneanu AAD. Stable mixing of complete and incomplete information. PhD thesis, Massachusetts Institute of Technology, 2002.
- Cozman FG, Cohen I, and Cirelo M. Unlabeled data can degrade classification performance of generative classifiers. In FLAIRS Conference, pages 327–331, 2002.
- Cozman FG, Cohen I, and Cirelo MC. Semi-supervised learning of mixture models. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 99–106, 2003.
- Efron B. Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association, 78(382):316–331, 1983.
- Efron B. How biased is the apparent error rate of a prediction rule? Journal of the American Statistical Association, 81(394):461–470, 1986.
- Efron B and Tibshirani R. Improvements on cross-validation: the .632+ bootstrap method. Journal of the American Statistical Association, 92(438):548–560, 1997.
- Fu WJ, Carroll RJ, and Wang S. Estimating misclassification error with small samples via bootstrap cross-validation. Bioinformatics, 21(9):1979–1986, 2005.
- Gerds TA, Cai T, and Schumacher M. The performance of risk prediction models. Biometrical Journal, 50(4):457–479, 2008.
- Gneiting T and Raftery AE. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
- Gronsbell JL and Cai T. Semi-supervised approaches to efficient evaluation of model prediction performance. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3):579–594, 2018.
- Hand DJ. Construction and assessment of classification rules. Wiley, 1997.
- Hand DJ. Measuring diagnostic accuracy of statistical prediction rules. Statistica Neerlandica, 55(1):3–16, 2001.
- Jaakkola T, Haussler D, et al. Exploiting generative models in discriminative classifiers. Advances in Neural Information Processing Systems, pages 487–493, 1999.
- Jiang W and Simon R. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Statistics in Medicine, 26(29):5320–5334, 2007.
- Kawakita M and Kanamori T. Semi-supervised learning with density-ratio estimation. Machine Learning, 91(2):189–209, 2013.
- Kawakita M and Takeuchi J. Safe semi-supervised learning based on weighted likelihood. Neural Networks, 53:146–164, 2014.
- Kohane IS. Using electronic health records to drive discovery in disease genomics. Nature Reviews Genetics, 12(6):417–428, 2011.
- Kpotufe S. The curse of dimension in nonparametric regression. PhD thesis, UC San Diego, 2010.
- Krijthe J and Loog M. Projected estimators for robust semi-supervised classification. arXiv preprint arXiv:1602.07865, 2016.
- Liao KP, Cai T, Gainer V, Goryachev S, Zeng-Treitler Q, Raychaudhuri S, Szolovits P, Churchill S, Murphy S, Kohane I, et al. Electronic medical records for discovery research in rheumatoid arthritis. Arthritis Care & Research, 62(8):1120–1127, 2010.
- Liao KP, Kurreeman F, Li G, Duclos G, Murphy S, Guzman PR, Cai T, Gupta N, Gainer V, Schur P, et al. Autoantibodies, autoimmune risk alleles and clinical associations in rheumatoid arthritis cases and non-RA controls in the electronic medical records. Arthritis and Rheumatism, 65(3):571, 2013.
- Liao KP, Cai T, Savova GK, Murphy SN, Karlson EW, Ananthakrishnan AN, Gainer VS, Shaw SY, Xia Z, Szolovits P, et al. Development of phenotype algorithms using electronic medical records and incorporating natural language processing. BMJ, 350:h1885, 2015.
- Liu D, Cai T, and Zheng Y. Evaluating the predictive value of biomarkers with stratified case-cohort design. Biometrics, 68(4):1219–1227, 2012.
- Mirakhmedov SM, Jammalamadaka SR, and Mohamed IB. On Edgeworth expansions in generalized urn models. Journal of Theoretical Probability, 27(3):725–753, 2014.
- Molinaro AM, Simon R, and Pfeiffer RM. Prediction error estimation: a comparison of resampling methods. Bioinformatics, 21(15):3301–3307, 2005.
- Murphy S, Churchill S, Bry L, Chueh H, Weiss S, Lazarus R, Zeng Q, Dubey A, Gainer V, Mendis M, et al. Instrumenting the health care enterprise for discovery research in the genomic era. Genome Research, 19(9):1675–1681, 2009.
- Nedyalkova D and Tillé Y. Optimal sampling and estimation strategies under the linear model. Biometrika, 95(3):521–537, 2008.
- Newey WK and McFadden D. Large sample estimation and hypothesis testing. Handbook of Econometrics, 4:2111–2245, 1994.
- Neyman J. On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection. Journal of the Royal Statistical Society, 97(4):558–606, 1934.
- Niyogi P. Manifold regularization and semi-supervised learning: Some theoretical analyses. The Journal of Machine Learning Research, 14(1):1229–1250, 2013.
- Pollard D. Empirical processes: theory and applications. NSF-CBMS Regional Conference Series in Probability and Statistics, pages i–86, 1990.
- Robins JM, Mark SD, and Newey WK. Estimating exposure effects by modelling the expectation of exposure conditional on confounders. Biometrics, pages 479–495, 1992.
- Robins JM, Rotnitzky A, and Zhao LP. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.
- Särndal C-E, Swensson B, and Wretman J. Model assisted survey sampling. Springer Science & Business Media, 2003.
- Sinnott JA, Dai W, Liao KP, Shaw SY, Ananthakrishnan AN, Gainer VS, Karlson EW, Churchill S, Szolovits P, Murphy S, et al. Improving the power of genetic association tests with imperfect phenotype derived from electronic medical records. Human Genetics, 133(11):1369–1382, 2014.
- Sokolovska N, Cappé O, and Yvon F. The asymptotics of semi-supervised learning in discriminative probabilistic models. In Proceedings of the 25th International Conference on Machine Learning, pages 984–991. ACM, 2008.
- Tan Z. Bounded, efficient and doubly robust estimation with inverse weighting. Biometrika, 97(3):661–682, 2010.
- Tian L, Cai T, Goetghebeur E, and Wei L. Model evaluation based on the sampling distribution of estimated absolute prediction error. Biometrika, 94(2):297–311, 2007.
- Van der Vaart AW. Asymptotic statistics, volume 3. Cambridge University Press, 2000.
- Wasserman L and Lafferty JD. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems, pages 801–808, 2008.
- Wilke R, Xu H, Denny J, Roden D, Krauss R, McCarty C, Davis R, Skaar T, Lamba J, and Savova G. The emerging role of electronic medical records in pharmacogenomics. Clinical Pharmacology & Therapeutics, 89(3):379–386, 2011.
- Xia Z, Secor E, Chibnik LB, Bove RM, Cheng S, Chitnis T, Cagan A, Gainer VS, Chen PJ, Liao KP, et al. Modeling disease severity in multiple sclerosis using electronic health records. PLoS ONE, 8(11):e78927, 2013.
- Yu S, Liao KP, Shaw SY, Gainer VS, Churchill SE, Szolovits P, Murphy SN, Kohane IS, and Cai T. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources. Journal of the American Medical Informatics Association, 22(5):993–1000, 2015.
- Zhang A, Brown LD, Cai TT, et al. Semi-supervised inference: General theory and estimation of means. Annals of Statistics, 47(5):2538–2566, 2019.