Author manuscript; available in PMC 2023 Sep 1.
Published in final edited form as: J R Stat Soc Series B Stat Methodol. 2022 Apr 26;84(4):1353–1391. doi: 10.1111/rssb.12502

Efficient Evaluation of Prediction Rules in Semi-Supervised Settings under Stratified Sampling

Jessica Gronsbell 1,2, Molei Liu 1,2, Lu Tian 1, Tianxi Cai 1
PMCID: PMC9586151  NIHMSID: NIHMS1779685  PMID: 36275859

Abstract

In many contemporary applications, large amounts of unlabeled data are readily available while labeled examples are limited. There has been substantial interest in semi-supervised learning (SSL) which aims to leverage unlabeled data to improve estimation or prediction. However, current SSL literature focuses primarily on settings where labeled data is selected uniformly at random from the population of interest. Stratified sampling, while posing additional analytical challenges, is highly applicable to many real world problems. Moreover, no SSL methods currently exist for estimating the prediction performance of a fitted model when the labeled data is not selected uniformly at random. In this paper, we propose a two-step SSL procedure for evaluating a prediction rule derived from a working binary regression model based on the Brier score and overall misclassification rate under stratified sampling. In step I, we impute the missing labels via weighted regression with nonlinear basis functions to account for stratified sampling and to improve efficiency. In step II, we augment the initial imputations to ensure the consistency of the resulting estimators regardless of the specification of the prediction model or the imputation model. The final estimator is then obtained with the augmented imputations. We provide asymptotic theory and numerical studies illustrating that our proposals outperform their supervised counterparts in terms of efficiency gain. Our methods are motivated by electronic health record (EHR) research and validated with a real data analysis of an EHR-based study of diabetic neuropathy.

Keywords: Semi-Supervised Learning, Stratified Sampling, Model Evaluation, Risk Prediction

1. Introduction

Semi-supervised learning (SSL) has emerged as a powerful learning paradigm to address big data problems where the outcome is cumbersome to obtain and the predictors are readily available [Chapelle et al., 2009]. Formally, the SSL problem is characterized by two sources of data: (i) a relatively small labeled dataset $\mathcal{L}$ with $n$ observations on the outcome $y$ and the predictors $x$ and (ii) a much larger unlabeled dataset $\mathcal{U}$ with $N \gg n$ observations on only $x$. A promising application of SSL and the motivation for this work is in electronic health record (EHR) research. EHRs have immense potential to serve as a major data source for biomedical research as they have generated extensive information repositories on representative patient populations [Murphy et al., 2009, Kohane, 2011, Wilke et al., 2011]. Nonetheless, a primary bottleneck in recycling EHR data for secondary use is to accurately and efficiently extract patient-level disease phenotype information [Sinnott et al., 2014, Liao et al., 2015]. Frequently, true phenotype status is not well characterized by disease-specific billing codes. For example, at Partners HealthCare, only 56% of patients with at least 3 International Classification of Diseases, Ninth Revision (ICD9) codes for rheumatoid arthritis (RA) have confirmed RA after manual chart review [Liao et al., 2010]. More accurate EHR phenotyping has been achieved by training a prediction model based on a number of features including billing codes, lab results and mentions of clinical terms in narrative notes extracted via natural language processing (NLP) [Liao et al., 2013, Xia et al., 2013, Ananthakrishnan et al., 2013, e.g.]. The model is traditionally trained and evaluated using a small amount of labeled data obtained from manual medical chart review by domain experts. SSL methods are particularly attractive for developing such models as they leverage unlabeled data to achieve higher estimation efficiency than their supervised counterparts. In practice, this increase in efficiency can be directly translated into requiring fewer chart reviews without a loss in estimation precision.

In the EHR phenotyping setting, it is often infeasible to select the labeled examples uniformly at random, either due to practical constraints or due to the nature of the application. For example, it may be necessary to oversample individuals for labeling with a particular billing code or diagnostic procedure for a rare disease to ensure that an adequate number of cases are available for model estimation. In some settings, training data may consist of a subset of individuals selected uniformly at random together with a set of registry patients whose phenotype status is confirmed through routine collection. Stratified sampling is also an effective strategy when interest lies in simultaneously characterizing multiple phenotypes. One may oversample patients with at least one billing code for several rare phenotypes and then perform chart review on the selected patients for all phenotypes of interest. Though it is critical to account for the aforementioned sampling mechanisms to make valid statistical inference, it is non-trivial in the context of SSL since the fraction of subjects being sampled for labeling is near zero. The challenge is further amplified when the fitted prediction models are potentially misspecified.

Existing SSL literature primarily concerns the setting in which $\mathcal{L}$ is a uniform random sample from the underlying pool of data and thus the missing labels in $\mathcal{U}$ are missing completely at random (MCAR) [Wasserman and Lafferty, 2008]. In this setting, a variety of methods for classification have been proposed including generative modeling [Castelli and Cover, 1996, Jaakkola et al., 1999], manifold regularization [Belkin et al., 2006, Niyogi, 2013] and graph-based regularization [Belkin and Niyogi, 2004]. While making use of both $\mathcal{U}$ and $\mathcal{L}$ can improve estimation, in many cases SSL is outperformed by supervised learning (SL) using only $\mathcal{L}$ when the assumed models are incorrectly specified [Castelli and Cover, 1996, Jaakkola et al., 1999, Corduneanu, 2002, Cozman et al., 2002, 2003]. As model misspecification is nearly inevitable in practice, recent work has called for 'safe' SSL methods that are always at least as efficient as their SL counterparts. For example, several authors have considered safe SSL methods for discriminative models based on density ratio weighted maximum likelihood [Sokolovska et al., 2008, Kawakita and Kanamori, 2013, Kawakita and Takeuchi, 2014]. Though the true density ratio is 1 in the MCAR setting, the efficiency gain is achieved through estimation of the density ratio weights, a statistical paradox previously observed in the missing data literature [Robins et al., 1992, 1994]. More recently, Krijthe and Loog [2016] introduced an SSL method for least squares classification that is guaranteed to outperform SL. Chakrabortty and Cai [2018] proposed an adaptive imputation-based SSL approach for linear regression that also outperforms supervised least squares estimation. It is unclear, however, whether these methods can be extended to accommodate additional loss functions. Moreover, none of the aforementioned methods are applicable to settings where the labeled data is not a uniform random sample from the underlying data, such as the stratified sampling design. Additionally, the focus of existing work has been on the estimation of prediction models, rather than the estimation of model performance metrics. Gronsbell and Cai [2018] recently proposed a semi-supervised procedure for estimating the receiver operating characteristic parameters, but this method is similarly limited to the standard MCAR setting.

This paper addresses these limitations through the development of an efficient SS estimation method for model performance metrics that is robust to model misspecification in the presence of stratified sampling. Specifically, we develop an imputation based procedure to evaluate the prediction performance of a potentially misspecified binary regression model. To the best of our knowledge, the proposed method is the first SSL procedure that provides efficient and robust estimation of prediction performance measures under stratified sampling. We focus on two commonly used error measurements, the overall misclassification rate (OMR) and the Brier score. The proposed method involves two steps of estimation. In step I, the missing labels are imputed with a weighted regression with nonlinear basis functions to account for stratified sampling and to improve efficiency. In step II, the initial imputations are augmented to ensure the consistency of the resulting estimators regardless of the specification of the prediction model or the imputation model. Through theoretical results and numerical studies, we demonstrate that the SS estimators of prediction performance are (i) robust to the misspecification of the prediction or imputation model and (ii) substantially more efficient than their SL counterparts. We also develop an ensemble cross-validation (CV) procedure to adjust for overfitting and a perturbation resampling procedure for variance estimation.

The remainder of this paper is organized as follows. In Section 2, we specify the data structure and problem set-up. We then develop the estimation and bias correction procedure for the accuracy measures in Sections 3 and 4. Section 5 outlines the asymptotic properties of the estimators and Section 6 introduces the perturbation resampling procedure for making inference. Our proposals are then validated through a simulation study in Section 7 and a real data analysis of an EHR-based study of diabetic neuropathy is presented in Section 8. We conclude with additional discussions in Section 9.

2. Preliminaries

2.1. Data Structure

Our interest lies in evaluating a prediction model for a binary phenotype $y$ based on a predictor vector $x = (1, x_1, \ldots, x_p)^\top$ for some fixed $p$. The underlying full data consists of $N = \sum_{s=1}^S N_s$ independent and identically distributed random vectors

$$\mathcal{F} = \{F_i = (y_i, u_i^\top)^\top\}_{i=1}^N$$

where $u_i = (x_i^\top, S_i)^\top$, $S_i \in \{1, 2, \ldots, S\}$ is a discrete stratification variable that defines a fixed number of strata $S$ for sampling, and $N_s = \sum_{i=1}^N I(S_i = s)$ is the sample size of stratum $s$. Throughout, we let $F_0 = (y_0, x_0^\top, S_0)^\top$ be a future realization of $F$.

Due to the difficulty in ascertaining y, a small uniform random sample is obtained from each stratum and labeled with outcome information. The observable data therefore consists of

$$\mathcal{D} = \{D_i = (y_i V_i, u_i^\top, V_i)^\top\}_{i=1}^N$$

where $V_i \in \{0, 1\}$ indicates whether $y_i$ is ascertained. We let

$$P(V_i = 1 \mid \mathcal{F}) = \hat{\pi}_{S_i}, \quad \hat{\pi}_s = n_s/N_s \quad \text{and} \quad n_s = \sum_{j=1}^N I(S_j = s) V_j.$$

Without loss of generality, we suppose that the first $n = \sum_{s=1}^S n_s$ subjects are labeled and $\{n_s, s = 1, \ldots, S\}$ are specified by design. We assume that

$$\hat{\rho}_{1s} = n_s/n \xrightarrow{p} \rho_{1s} \in (0, 1) \quad \text{and} \quad \hat{\rho}_s = N_s/N \xrightarrow{p} \rho_s \in (0, 1)$$

as $n \to \infty$ and $N \to \infty$, respectively. This ensures that

$$\hat{\pi}_s/\hat{\pi}_t \xrightarrow{p} (\rho_{1s}\rho_t)/(\rho_s\rho_{1t}) \in (0, \infty)$$

as $n \to \infty$ for any pair of $s$ and $t$ [Mirakhmedov et al., 2014]. As in the standard SS setting, we further assume $\max_s \hat{\pi}_s \xrightarrow{p} 0$ as $n \to \infty$ [Chakrabortty and Cai, 2018, Zhang et al., 2019]. This assumption distinguishes the current setting from (i) the familiar missing data setting where $\hat{\pi}_s$ is bounded above 0 and (ii) standard SSL under uniform random sampling, as $y$ is MCAR only conditional on $S$ (i.e. $V \perp (y, x^\top) \mid S$) under stratified sampling.
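To make the sampling scheme concrete, the following Python sketch (an illustration added here, not part of the original paper) draws a stratified labeled set and recovers $\hat{\pi}_s = n_s/N_s$; the stratum sizes in `n_per_stratum` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_labels(S, n_per_stratum):
    """Draw a uniform random sample of size n_s within each stratum s.

    S: (N,) array of stratum indicators in {1, ..., S}.
    n_per_stratum: dict mapping stratum -> n_s (fixed by design).
    Returns V: (N,) 0/1 labeling indicators with sum_s n_s ones.
    """
    V = np.zeros(len(S), dtype=int)
    for s, n_s in n_per_stratum.items():
        idx = np.flatnonzero(S == s)
        V[rng.choice(idx, size=n_s, replace=False)] = 1
    return V

# Example: label 100 subjects per stratum; pi_hat_s = n_s / N_s is near zero
# for each stratum when N is large, which is the regime studied in the paper.
S = rng.choice([1, 2], size=20000, p=[0.8, 0.2])
V = stratified_labels(S, {1: 100, 2: 100})
pi_hat = {s: V[S == s].mean() for s in (1, 2)}
```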

2.2. Problem Set-Up

To predict y0 based on x0, we fit a working regression model

$$P(y = 1 \mid x) = g(\theta^\top x) \quad (1)$$

where $\theta = (\theta_0, \theta_1, \ldots, \theta_p)^\top$ is an unknown vector of regression parameters and $g(\cdot): (-\infty, \infty) \to (0, 1)$ is a specified, smooth monotone function such as the expit function. The target model parameter, $\bar{\theta}$, is the solution to the estimating equation

$$U(\theta) = E[x\{y - g(\theta^\top x)\}] = 0.$$

We let the predicted value for $y_0$ be $\mathcal{Y}(\bar{\theta}^\top x_0)$ for some function $\mathcal{Y}$. In this paper, we aim to obtain SS estimators of the prediction performance of $\mathcal{Y}(\bar{\theta}^\top x_0)$ quantified by the Brier score

$$\bar{D}_1 = E[\{y_0 - \mathcal{Y}_1(\bar{\theta}^\top x_0)\}^2] \quad \text{with} \quad \mathcal{Y}_1(x) = g(x),$$

and the overall misclassification rate (OMR) $\bar{D}_2 = E[\{y_0 - \mathcal{Y}_2(\bar{\theta}^\top x_0)\}^2]$ with $\mathcal{Y}_2(x) = I\{g(x) > c\}$ for some specified constant $c$.

We focus on these two metrics as they convey distinct information about the performance of the prediction model. The OMR summarizes the overall discrimination capacity of the model while the Brier score summarizes the calibration of the model. More complete discussions regarding assessment of model performance can be found in Hand [1997, 2001], Gneiting and Raftery [2007] and Gerds et al. [2008].

To simplify presentation, we generically write

$$\bar{D} = D(\bar{\theta}) \quad \text{with} \quad D(\theta) = E[d\{y_0, \mathcal{Y}(\theta^\top x_0)\}],$$

where $d(y, z) = (y - z)^2$, $\mathcal{Y}(\cdot) = \mathcal{Y}_1(\cdot)$ for the Brier score, and $\mathcal{Y}(\cdot) = \mathcal{Y}_2(\cdot)$ for the OMR. We will construct a SS estimator of $D(\bar{\theta})$ to improve the statistical efficiency of its SL counterpart, $\hat{D}_{SL}(\hat{\theta}_{SL})$, where

$$\hat{\theta}_{SL} \text{ solves } U_n(\theta) = \frac{1}{N}\sum_{i=1}^N \hat{w}_i x_i\{y_i - g(\theta^\top x_i)\} = 0, \quad \hat{D}_{SL}(\theta) = \frac{1}{N}\sum_{i=1}^N \hat{w}_i d\{y_i, \mathcal{Y}(\theta^\top x_i)\},$$

and the weights $\hat{w}_i = V_i/\hat{\pi}_{S_i}$ account for the stratified sampling with $\sum_{i=1}^N \hat{w}_i = N$. Since $\hat{w}_i \xrightarrow{p} \infty$ for those with $V_i = 1$, standard M-estimation theory cannot be directly applied to establish the asymptotic behavior of the SL estimators. We show in Appendix C that $\hat{\theta}_{SL}$ is a root-$n$ consistent estimator for $\bar{\theta}$ and derive the asymptotic properties of $\hat{D}_{SL}(\hat{\theta}_{SL})$ in Appendix D. We also note that throughout the article we use the subscripts $n$ or $N$ to index estimating equations to clarify if they are computed with the labeled or full data, respectively.
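For reference, a minimal Python sketch of these supervised benchmarks is given below; it is only a schematic translation of the two displays (the authors' own implementation is the R code linked in Section 7), with `y` holding placeholder zeros for unlabeled subjects since their weights $\hat{w}_i$ are zero.

```python
import numpy as np
from scipy.optimize import root
from scipy.special import expit  # g(t) = expit(t)

def fit_theta_sl(X, y, w):
    """Solve U_n(theta) = N^{-1} sum_i w_i x_i {y_i - g(theta' x_i)} = 0,
    where w_i = V_i / pi_hat_{S_i} is zero for unlabeled subjects."""
    def U(theta):
        return X.T @ (w * (y - expit(X @ theta))) / len(y)
    return root(U, x0=np.zeros(X.shape[1])).x

def d_hat_sl(theta, X, y, w, metric="brier", c=0.5):
    """Plug-in IPW estimate D_hat_SL(theta) of the Brier score or OMR."""
    p = expit(X @ theta)
    pred = p if metric == "brier" else (p > c).astype(float)
    return np.sum(w * (y - pred) ** 2) / len(y)
```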

3. Estimation Procedure

Our approach to obtaining a SS estimator of $D(\bar{\theta})$ proceeds in two steps. First, the missing outcomes are imputed with a flexible model to improve statistical efficiency. Next, the imputations are augmented so that the resulting estimators are consistent for $D(\bar{\theta})$ regardless of the specification of the prediction model or the imputation model. The final estimator of the accuracy measure is then computed using the full data and the augmented imputations. This estimation procedure is detailed in the subsequent sections.

We comment here that the initial imputation step allows for construction of a simple and efficient SS estimator of θ¯. As efficient estimation of θ¯ may not be of practical utility in the prediction setting, we keep our focus on accuracy parameter estimation and defer some of the technical details of model parameter estimation to the Appendix. However, we do note that the estimator of D(θ¯) inherently has two sources of estimation variability. The dominating source of variation is from estimating the accuracy measure itself while the second source is from the estimation of the regression parameter. Therefore, by leveraging a SS estimator of θ¯ in estimating D(θ¯), we may further improve the efficiency of our SS estimator of the accuracy measure. These statements are elucidated by the influence function expansions of the SS and SL estimators of D(θ¯) presented in Section 5.

3.1. Step 1: Flexible imputation

We propose to impute the missing $y$ with an estimate of $m(u) = P(y = 1 \mid u)$. The purpose of the imputation step is to make use of $\mathcal{U}$ as it essentially characterizes the covariate distribution due to its size. The accuracy metrics provide measures of agreement between the true and predicted outcomes and therefore depend on the covariate distribution. We thus expect to increase estimation precision by incorporating $\mathcal{U}$ into estimation. In taking an imputation-based approach, we rely on our estimate of $m(u)$ to capture the dependency of $y$ on $u$ in order to glean information from $\mathcal{U}$. While a fully nonparametric method such as kernel smoothing allows for complete flexibility in estimating $m(u)$, smoothing generally does not perform well with moderate $p$ due to the curse of dimensionality [Kpotufe, 2010]. To overcome this challenge and allow for a rich model for $m(u)$, we incorporate some parametric structure into the imputation step via basis function regression.

Let Φ(u) be a finite set of basis functions with fixed dimension that includes x. We fit a working model

$$P(y = 1 \mid u) = g\{\gamma^\top \Phi(u)\} \quad (2)$$

to $\mathcal{L}$ and impute $y$ as $g\{\tilde{\gamma}^\top \Phi(u)\}$ where $\tilde{\gamma}$ is the solution to

$$\tilde{Q}_n(\gamma) = \frac{1}{N}\sum_{i=1}^N \hat{w}_i \Phi_i[y_i - g\{\gamma^\top \Phi_i\}] - \lambda_n \gamma = 0 \quad (3)$$

where $\Phi_i = \Phi(u_i)$ and $\lambda_n = o(n^{-1/2})$ is a tuning parameter to ensure stable fitting. Under Conditions 1–3 given in Section 5, we argue in Appendix D that $\tilde{\gamma}$ is a regular root-$n$ consistent estimator for the unique solution, $\bar{\gamma}$, to $Q(\gamma) = E\{\Phi(u)[y - g\{\gamma^\top \Phi(u)\}]\} = 0$. We take our initial imputations as $\tilde{m}_I(u) = g\{\tilde{\gamma}^\top \Phi(u)\}$.

With $y$ imputed as $\tilde{m}_I(u)$, we may also obtain a simple SS estimator for $\bar{\theta}$, $\check{\theta}_{SSL}$, as the solution to

$$\hat{U}_N(\theta) = \frac{1}{N}\sum_{i=1}^N x_i\{g(\tilde{\gamma}^\top \Phi_i) - g(\theta^\top x_i)\} = 0.$$

The asymptotic behaviour of $\check{\theta}_{SSL}$ is presented and compared with $\hat{\theta}_{SL}$ in the Appendix. When the working regression model (1) is correctly specified, it is shown that $\hat{\theta}_{SL}$ is fully efficient and $\check{\theta}_{SSL}$ is asymptotically equivalent to $\hat{\theta}_{SL}$. When the outcome model in (1) is not correctly specified, but the imputation model in (2) is correctly specified, we show in Appendix E that $\check{\theta}_{SSL}$ is more efficient than $\hat{\theta}_{SL}$. When the imputation model is also misspecified, $\check{\theta}_{SSL}$ tends to be more efficient than $\hat{\theta}_{SL}$, but the efficiency gain is not theoretically guaranteed. We therefore obtain the final SS estimator, denoted as $\hat{\theta}_{SSL} = (\hat{\theta}_{SSL,0}, \ldots, \hat{\theta}_{SSL,p})^\top$, as a linear combination of $\hat{\theta}_{SL}$ and $\check{\theta}_{SSL}$ to minimize the asymptotic variance. Details are provided in Appendices A and B.
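A minimal Python sketch of Step I, under the same conventions as the supervised sketch in Section 2.2 (again an illustration, not the authors' implementation), is:

```python
import numpy as np
from scipy.optimize import root
from scipy.special import expit

def fit_gamma_tilde(Phi, y, w, lam):
    """Step I: solve the weighted, ridge-penalized equation (3),
    N^{-1} sum_i w_i Phi_i [y_i - g(gamma' Phi_i)] - lam * gamma = 0."""
    def Q(gamma):
        return Phi.T @ (w * (y - expit(Phi @ gamma))) / len(y) - lam * gamma
    return root(Q, x0=np.zeros(Phi.shape[1])).x

def fit_theta_ssl_check(X, Phi, gamma_tilde):
    """Simple SS estimator: replace y by the initial imputation m_I(u) and
    solve N^{-1} sum_i x_i {g(gamma_tilde' Phi_i) - g(theta' x_i)} = 0
    over the full (labeled + unlabeled) data."""
    m_I = expit(Phi @ gamma_tilde)  # initial imputations
    def U(theta):
        return X.T @ (m_I - expit(X @ theta)) / X.shape[0]
    return root(U, x0=np.zeros(X.shape[1])).x
```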

3.2. Step 2: Robustness augmentation

To obtain an efficient SS estimator for $\bar{D} = D(\bar{\theta})$, we note that

$$d(y, \mathcal{Y}) = y(1 - 2\mathcal{Y}) + \mathcal{Y}^2 \quad (4)$$

is linear in $y$ when $y \in \{0, 1\}$. With a given estimate of $m(\cdot)$, denoted by $\tilde{m}(\cdot)$, a SS estimate of $\bar{D}$ can be obtained as

$$\frac{1}{N}\sum_{i=1}^N d\{\tilde{m}(u_i), \mathcal{Y}(\theta^\top x_i)\}.$$

However, $\tilde{m}(u)$ needs to be carefully constructed to ensure that the resulting estimator is consistent for $\bar{D}$ under possible misspecification of the imputation model. Using the expression in (4), a sufficient condition to guarantee consistency for $\bar{D}$ is that

$$E[\{y_0 - \tilde{m}(u_0)\}\{1 - 2\mathcal{Y}(\bar{\theta}^\top x_0)\} \mid \tilde{m}(\cdot)] \xrightarrow{p} 0 \quad \text{as } n \to \infty. \quad (5)$$

This condition implies that $E[d\{y_0, \mathcal{Y}(\bar{\theta}^\top x_0)\} - d\{\tilde{m}(u_0), \mathcal{Y}(\bar{\theta}^\top x_0)\}] \xrightarrow{p} 0$. Unfortunately, $\tilde{m}_I(u) = g\{\tilde{\gamma}^\top \Phi(u)\}$ does not satisfy (5) when (2) is misspecified. To ensure that (5) holds regardless of the adequacy of the imputation model used for estimating the regression parameters, we augment the initial imputation $\tilde{m}_I(u)$ as

$$\tilde{m}_{II}(u; \theta) = g\{\tilde{\gamma}^\top \Phi(u) + \tilde{\nu}_\theta^\top z_\theta\}$$

where $z_\theta = \{1, \mathcal{Y}(\theta^\top x)\}^\top$ and $\tilde{\nu}_\theta$ is the solution to the IPW estimating equation

$$P_n(\nu, \theta) = \frac{1}{N}\sum_{i=1}^N \hat{w}_i\{y_i - g(\tilde{\gamma}^\top \Phi_i + \nu^\top z_{i\theta})\}z_{i\theta} = 0 \quad \text{for any given } \theta. \quad (6)$$

We let $\bar{\nu}_\theta$ be the limiting value of $\tilde{\nu}_\theta$, which solves the limiting estimating equation

$$R(\nu \mid \theta) = E(z_\theta[y - g\{\bar{\gamma}^\top \Phi(u) + \nu^\top z_\theta\}]) = 0. \quad (7)$$

This estimating equation is monotone in $\nu$ for any $\theta$ and thus $\bar{\nu}_\theta$ exists under mild regularity conditions. It also follows from (7) that (i) $E[y - g\{\bar{\gamma}^\top \Phi(u) + \bar{\nu}_\theta^\top z_\theta\}] = 0$ and (ii) $E(\mathcal{Y}(\theta^\top x)[y - g\{\bar{\gamma}^\top \Phi(u) + \bar{\nu}_\theta^\top z_\theta\}]) = 0$, which ensure that the sufficiency condition in (5) is satisfied. We thus construct a SS estimator of $D(\theta)$ as

$$\hat{D}_{SSL}(\theta) = \frac{1}{N}\sum_{i=1}^N d\{\tilde{m}_{II}(u_i; \theta), \mathcal{Y}(\theta^\top x_i)\}.$$

In Section 5.2, we present the asymptotic properties of $\hat{D}_{SSL}(\hat{\theta}_{SSL})$ and $\hat{D}_{SSL}(\check{\theta}_{SSL})$ and compare $\hat{D}_{SSL}(\check{\theta}_{SSL})$ with its supervised counterpart, $\hat{D}_{SL}(\hat{\theta}_{SL})$. Similar to the SS estimation of $\bar{\theta}$, it is shown in Appendix F that the unlabeled data helps to reduce the asymptotic variance of $\hat{D}_{SSL}(\check{\theta}_{SSL})$. Specifically, $\hat{D}_{SSL}(\check{\theta}_{SSL})$ is shown to be asymptotically more efficient than $\hat{D}_{SL}(\hat{\theta}_{SL})$ when the imputation model is correct. In practice, however, we may want to use $\hat{D}_{SSL}(\hat{\theta}_{SSL})$ instead of $\hat{D}_{SSL}(\check{\theta}_{SSL})$ to achieve improved finite sample performance.
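A matching Python sketch of Step II, assuming the Step I output `gamma_tilde` from the previous sketch, is given below; note that the augmentation in (6) is refit for each value of $\theta$ since $z_\theta$ depends on $\theta$.

```python
import numpy as np
from scipy.optimize import root
from scipy.special import expit

def fit_nu_tilde(Phi, y, w, gamma_tilde, z):
    """Step II: solve the IPW augmentation equation (6) in nu, with
    gamma_tilde held fixed and z holding the rows z_{i,theta} = [1, Y(theta' x_i)]."""
    offset = Phi @ gamma_tilde
    def P(nu):
        return z.T @ (w * (y - expit(offset + z @ nu))) / len(y)
    return root(P, x0=np.zeros(z.shape[1])).x

def d_hat_ssl(theta, X, Phi, gamma_tilde, y, w, metric="brier", c=0.5):
    """SS estimator of D(theta) using the augmented imputations m_II."""
    p = expit(X @ theta)
    Y = p if metric == "brier" else (p > c).astype(float)
    z = np.column_stack([np.ones(len(Y)), Y])
    nu = fit_nu_tilde(Phi, y, w, gamma_tilde, z)
    m_II = expit(Phi @ gamma_tilde + z @ nu)
    # d(m, Y) = m(1 - 2Y) + Y^2 is the linear-in-y form of (4)
    return np.mean(m_II * (1 - 2 * Y) + Y ** 2)
```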

4. Bias Correction via Ensemble Cross-Validation

Similar to the supervised estimators of the prediction performance measures, the proposed plug-in estimator uses the labeled data for both constructing and evaluating the prediction model and is therefore prone to overfitting bias [Efron, 1986]. K-fold cross-validation (CV) is a commonly used method to correct for such bias. However, it has been observed that CV tends to result in overly pessimistic estimates of accuracy measures, particularly when n is not very large relative to p [Jiang and Simon, 2007]. Bias correction methods such as the 0.632 bootstrap have been proposed to address this behavior [Efron, 1983, Efron and Tibshirani, 1997, Fu et al., 2005, Molinaro et al., 2005]. Here, we propose an alternative ensemble CV procedure that takes a weighted sum of the apparent and K-fold CV estimators.

We first construct a K-fold CV estimator by randomly partitioning $\mathcal{L}$ into $K$ disjoint folds of roughly equal size, denoted by $\{\mathcal{L}_k, k = 1, \ldots, K\}$. Since $N$ is assumed to be sufficiently large, no CV is necessary for projecting to the full data. For a given $k$, we use $\mathcal{L}/\mathcal{L}_k$ to estimate $\bar{\gamma}$ and $\bar{\theta}$, denoted as $\tilde{\gamma}^{(k)}$ and $\hat{\theta}^{(k)}$, respectively. The $n_k$ observations in $\mathcal{L}_k$ are used in the augmentation step to obtain $\tilde{\nu}_{\hat{\theta}^{(k)},k}$, the solution to $P_{n_k}(\nu, \hat{\theta}^{(k)}) = 0$. For the $k$th fold, we estimate the accuracy measure as

$$\hat{D}_k(\hat{\theta}^{(k)}) = N^{-1}\sum_{i=1}^N d\{\tilde{m}_{II,k}(u_i), \mathcal{Y}(\hat{\theta}^{(k)\top} x_i)\},$$

where $\tilde{m}_{II,k}(u_i) = g(\tilde{\gamma}^{(k)\top}\Phi_i + \tilde{\nu}_{\hat{\theta}^{(k)},k}^\top z_{i\hat{\theta}^{(k)}})$, and take the final CV estimator as $\hat{D}_{SSL}^{cv} = K^{-1}\sum_{k=1}^K \hat{D}_k(\hat{\theta}^{(k)})$. In practice, we suggest averaging over several replications of CV to remove the variation due to the CV partition. We then obtain the weighted CV estimator with

$$\hat{D}_{SSL}^{\omega} = \omega\hat{D}_{SSL} + (1 - \omega)\hat{D}_{SSL}^{cv}, \quad \text{where } \omega = K/(2K - 1).$$

We may similarly obtain a CV-based supervised estimator, denoted by $\hat{D}_{SL}^{cv}$, as well as the corresponding weighted estimator, $\hat{D}_{SL}^{\omega}$. Note that the fraction of observations from stratum $s$ in the $k$th fold, $\hat{\rho}_{1s,k}$, deviates from $\hat{\rho}_{1s}$ in the order of $O(\sqrt{n_s}/n)$. Although this deviation is asymptotically negligible, it may be desirable to perform the K-fold partition within each stratum to ensure that $\hat{\rho}_{1s,k} = \hat{\rho}_{1s}$ when $n_s$ is small or moderate.

Using similar arguments as those given in Tian et al. [2007], it is not difficult to show that $n^{1/2}(\hat{D}_{SSL} - \bar{D})$ and $n^{1/2}(\hat{D}_{SSL}^{cv} - \bar{D})$ are first-order asymptotically equivalent. Thus, the ensemble CV estimator $\hat{D}_{SSL}^{\omega}$ reduces the higher order bias of $\hat{D}_{SSL}$ and $\hat{D}_{SSL}^{cv}$, but has the same asymptotic distribution. Although the empirical performance is promising, it is difficult to rigorously study the bias properties of $\hat{D}_{SSL}^{\omega}$ as the regression parameter does not necessarily minimize the loss function, $\hat{D}(\theta)$. We provide a heuristic justification of the ensemble CV method in Appendix H, which assumes $\hat{\theta}$ minimizes $\hat{D}(\theta)$.
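The weighting itself is a one-line computation; the sketch below also includes a within-stratum fold assignment (a hypothetical helper, added for illustration) reflecting the suggestion above.

```python
import numpy as np

def stratified_folds(S_labeled, K, rng):
    """Assign each labeled subject to one of K folds within its stratum so
    that the fold-specific fractions rho_hat_{1s,k} match rho_hat_{1s}."""
    folds = np.empty(len(S_labeled), dtype=int)
    for s in np.unique(S_labeled):
        idx = rng.permutation(np.flatnonzero(S_labeled == s))
        folds[idx] = np.arange(len(idx)) % K
    return folds

def ensemble_cv(d_apparent, d_cv, K):
    """Weighted estimator D^omega = omega * D + (1 - omega) * D^cv with
    omega = K / (2K - 1), e.g. omega = 6/11 for K = 6."""
    omega = K / (2 * K - 1)
    return omega * d_apparent + (1 - omega) * d_cv
```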

5. Asymptotic Analysis

We next present the asymptotic properties of our proposed SS estimator of $\bar{D}$. To facilitate our presentation, we first discuss the properties of $\check{\theta}_{SSL}$ as the accuracy parameter estimates inherently depend on the variability in estimating $\bar{\theta}$. We then present our main result highlighting the efficiency gain of our proposed SS approach for accuracy parameter estimation. We conclude our theoretical analysis with two practical discussions of (i) intrinsic efficient estimation in the SS setting and (ii) optimal allocation in stratified sampling.

For our asymptotic analysis, we let $\Sigma_1 \succ \Sigma_2$ if $\Sigma_1 - \Sigma_2$ is positive definite and $\Sigma_1 \succeq \Sigma_2$ if $\Sigma_1 - \Sigma_2$ is positive semi-definite for any two symmetric matrices $\Sigma_1$ and $\Sigma_2$. For any matrix $M$ and vectors $v_1$ and $v_2$, $M_{[j]}$ represents the $j$th row vector, $v_1^{\otimes 2} = v_1 v_1^\top$, and $\{v_1, v_2\} = (v_1^\top, v_2^\top)^\top$ is the vector concatenating $v_1$ and $v_2$. To establish our theoretical results, we recall that $\hat{\rho}_{1s}$ and $\hat{\rho}_s$ converge to some fixed values $\rho_{1s}$ and $\rho_s$ in probability, as assumed in Section 2.1, and introduce the following three conditions.

Condition 1.

The basis $\Phi(u)$ contains $x$, has compact support, and is of fixed dimension. The density function for $x$, denoted by $p(x)$, and $P(y = 1 \mid u)$ are continuously differentiable in the continuous components of $x$ and $u$, respectively. There is at least one continuous component of $x$ with corresponding non-zero component in $\bar{\theta}$.

Condition 2.

The link function $g(\cdot)$ is continuously differentiable with derivative $\dot{g}(\cdot)$.

Condition 3.

(A) There is no vector $\gamma$ such that $P(\gamma^\top\Phi_1 > \gamma^\top\Phi_2 \mid y_1 > y_2) = 1$, and $E[\Phi^{\otimes 2}\dot{g}\{\bar{\gamma}^\top\Phi\}] \succ 0$. (B) There is a small neighborhood of $\bar{\theta}$, $\Theta = \{\theta: \|\theta - \bar{\theta}\|_2 < \delta\}$ for some $\delta > 0$, such that for any $\theta \in \Theta$, there is no vector $r$ such that $P(r^\top\{\Phi_1, z_{1\theta}\} > r^\top\{\Phi_2, z_{2\theta}\} \mid y_1 > y_2) = 1$, and $E[z_\theta^{\otimes 2}\dot{g}\{\bar{\gamma}^\top\Phi + \bar{\nu}_\theta^\top z_\theta\}] \succ 0$. (C) $E[x^{\otimes 2}\dot{g}(\bar{\theta}^\top x)] \succ 0$.

Remark 1.

Conditions 1–3 are commonly used regularity conditions in M-estimation theory and are satisfied in broad applications. Similar conditions can be found in Tian et al. [2007] and Section 5.3 of Van der Vaart [2000]. Conditions 3(A) and 3(B) assume that there is no $\gamma$ and $\nu$ such that $\gamma^\top\Phi + \nu^\top z_\theta$ can perfectly separate the samples based on $y$. In our application of EHR data analysis, these conditions are typically satisfied as the outcomes of interest (i.e. disease status) do not perfectly depend on covariates such as billing codes, lab values, procedure codes, and other features extracted from free-text. Similar to Tian et al. [2007], Condition 3(A) ensures the existence and uniqueness of the limiting parameters $\bar{\theta}$ and $\bar{\gamma}$ and Condition 3(B) ensures the existence and uniqueness of $\bar{\nu}_\theta$.

5.1. Asymptotic Properties of $\check{\theta}_{SSL}$

The asymptotic properties of $\check{\theta}_{SSL}$ are summarized in Theorem 1 and the justification is provided in Appendix E.

Theorem 1.

Under Conditions 1–3, $\check{\theta}_{SSL} \xrightarrow{p} \bar{\theta}$, and

$$\hat{W}_{SSL} = n^{1/2}(\check{\theta}_{SSL} - \bar{\theta}) = n^{1/2}\sum_{s=1}^S \rho_s\left(n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)e_{SSL_i}\right) + o_p(1)$$

which weakly converges to $N(0, \Sigma_{SSL})$ where

$$\Sigma_{SSL} = \sum_{s=1}^S \rho_s^2\rho_{1s}^{-1}E\{e_{SSL_i}^{\otimes 2} \mid S_i = s\}, \quad e_{SSL_i} = A^{-1}x_i\{y_i - g(\bar{\gamma}^\top\Phi_i)\}, \quad \text{and} \quad A = E\{x_i^{\otimes 2}\dot{g}(\bar{\theta}^\top x_i)\}.$$

Remark 2.

To contrast with the supervised estimator $\hat{\theta}_{SL}$, we show in Appendix C that

$$\hat{W}_{SL} = n^{1/2}(\hat{\theta}_{SL} - \bar{\theta}) = n^{1/2}\sum_{s=1}^S \rho_s\left(n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)e_{SL_i}\right) + o_p(1),$$

which weakly converges to $N(0, \Sigma_{SL})$ where

$$\Sigma_{SL} = \sum_{s=1}^S \rho_s^2\rho_{1s}^{-1}E\{e_{SL_i}^{\otimes 2} \mid S_i = s\}, \quad \text{and} \quad e_{SL_i} = A^{-1}x_i\{y_i - g(\bar{\theta}^\top x_i)\}.$$

It follows that when the imputation model $P(y = 1 \mid u) = g\{\bar{\gamma}^\top\Phi(u)\}$ is correctly specified, $\Sigma_{SL} \succeq \Sigma_{SSL}$. When $P(\bar{\gamma}^\top\Phi(u) \neq \bar{\theta}^\top x) > 0$, we have that $\Sigma_{SL} \succ \Sigma_{SSL}$.

5.2. Asymptotic Properties of $\hat{D}_{SSL}(\check{\theta}_{SSL})$ and $\hat{D}_{SSL}(\hat{\theta}_{SSL})$

The asymptotic properties of $\hat{D}_{SSL}(\check{\theta}_{SSL})$ are summarized in Theorem 2 and the justification is provided in Appendix F.

Theorem 2.

Under Conditions 1–3, $\hat{D}_{SSL}(\check{\theta}_{SSL}) \xrightarrow{p} \bar{D}$, and $\check{T}_{SSL} = n^{1/2}\{\hat{D}_{SSL}(\check{\theta}_{SSL}) - D(\bar{\theta})\}$ is asymptotically Gaussian with mean zero and variance $\sigma_{SSL}^2$ given in Appendix F. Also, $\check{T}_{SSL}$ is asymptotically equivalent to

$$n^{1/2}\sum_{s=1}^S \rho_s\left(n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)\left[\{d(y_i, \bar{\mathcal{Y}}_i) - d(m_{II,i}, \bar{\mathcal{Y}}_i)\} + \dot{D}(\bar{\theta})^\top e_{SSL_i}\right]\right),$$

where $\bar{\mathcal{Y}}_i = \mathcal{Y}(\bar{\theta}^\top x_i)$, $m_{II,i} = g(\bar{\gamma}^\top\Phi_i + \bar{\nu}_{\bar{\theta}}^\top z_{i\bar{\theta}})$ is the imputation model based approximation to $P(y = 1 \mid u)$ and $\dot{D}(\theta) = \partial D(\theta)/\partial\theta$.

Remark 3.

We also show that $\hat{T}_{SSL} = n^{1/2}\{\hat{D}_{SSL}(\hat{\theta}_{SSL}) - D(\bar{\theta})\}$ is asymptotically equivalent to

$$n^{1/2}\sum_{s=1}^S \rho_s\left(n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)\left[\{d(y_i, \bar{\mathcal{Y}}_i) - d(m_{II,i}, \bar{\mathcal{Y}}_i)\} + \dot{D}(\bar{\theta})^\top\{W e_{SSL_i} + (I - W)e_{SL_i}\}\right]\right),$$

which is also asymptotically Gaussian with mean zero, where $W$ is a diagonal matrix defined in Appendix B.

Remark 4.

As shown in Appendix D, $\hat{T}_{SL} = n^{1/2}\{\hat{D}_{SL}(\hat{\theta}_{SL}) - D(\bar{\theta})\}$ is asymptotically Gaussian with mean zero and variance $\sigma_{SL}^2$ defined in Appendix D. It is equivalent to

$$n^{1/2}\sum_{s=1}^S \rho_s\left(n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)\left[\{d(y_i, \bar{\mathcal{Y}}_i) - D(\bar{\theta})\} + \dot{D}(\bar{\theta})^\top e_{SL_i}\right]\right).$$

We verify in Appendix F that when the imputation model is correctly specified, the asymptotic variance of $\hat{D}_{SSL}(\check{\theta}_{SSL})$ is smaller than that of $\hat{D}_{SL}(\hat{\theta}_{SL})$ regardless of the specification of the working regression model in (1). This is because the accuracy measures always depend on the marginal distribution of $x$ and the proposed SS approach leverages $\mathcal{U}$. Therefore $\hat{D}_{SSL}(\check{\theta}_{SSL})$ is asymptotically more efficient than $\hat{D}_{SL}(\hat{\theta}_{SL})$ even when model (1) is correctly specified and $\hat{\theta}_{SL}$ is fully efficient.

While we cannot theoretically guarantee that the SS estimator is more efficient than the supervised estimator under misspecification of the imputation model, the first and dominating term in the influence function expansion corresponds to the variability from estimating the accuracy measure. Even when the imputation model is misspecified, it may still provide a close approximation to P(y = 1 | u) and therefore result in reduced variability relative to the supervised approach. The second term of the influence function corresponds to the variability from estimation of the regression parameter. As the SS estimator of the regression parameter is more efficient than its supervised counterpart under model misspecification, we also expect this term to have smaller variation than its supervised counterpart. In our simulation studies, we evaluate the performance of our proposals under various model misspecifications to assess whether this heuristic justification holds up empirically. We also further study this limitation from a theoretical viewpoint in the next section where we introduce a SS estimator with the intrinsic efficiency property from the semiparametric inference literature for comparison.

5.3. Intrinsic Efficient SS Estimation

For simplicity, we begin our discussion of intrinsic efficient estimation focusing on estimation of the regression parameter. Recall that the idea in Section 3.1 is to (i) solve $N^{-1}\sum_{i=1}^N \hat{w}_i\Phi_i[y_i - g\{\gamma^\top\Phi_i\}] - \lambda_n\gamma = 0$ to obtain estimated coefficients $\tilde{\gamma}$ for imputation and then (ii) solve $N^{-1}\sum_{i=1}^N x_i\{g(\tilde{\gamma}^\top\Phi_i) - g(\theta^\top x_i)\} = 0$ to obtain the SS estimator, $\hat{\theta}_{SSL}$. By Theorem 1, for any $e \in \mathbb{R}^{p+1}\setminus\{0\}$, the asymptotic variance of $n^{1/2}(e^\top\hat{\theta}_{SSL} - e^\top\bar{\theta})$ can be expressed as:

$$\frac{1}{n}\sum_{i=1}^n \zeta_i(e^\top A^{-1}x_i)^2\{y_i - g(\bar{\gamma}^\top\Phi_i)\}^2, \quad (8)$$

where $\zeta_i = \sum_{s=1}^S \rho_s^2\rho_{1s}^{-2}V_i I(S_i = s)$ for each $i \in \{1, 2, \ldots, N\}$. When the imputation model $P(y = 1 \mid u) = g(\gamma^\top\Phi)$ is misspecified, an alternative estimating equation for $\gamma$ may be used to directly reduce the asymptotic variance of the resulting SS estimator. Specifically, for a fixed $\Phi$, we may find the estimating equation for $\gamma$ that leads to the lowest asymptotic variance of the estimator for $e^\top\bar{\theta}$, a property referred to as "intrinsic efficiency" in the semiparametric inference literature [Tan, 2010]. We briefly propose estimation procedures for an estimator achieving this property with potential to improve upon our original proposal under potential misspecification of the imputation model.

To directly minimize the asymptotic variance of the SS estimator of $e^\top\bar{\theta}$ given by (8), we obtain the estimated coefficients for the imputation model with

$$\tilde{\gamma}^{(1)} = \operatorname*{argmin}_\gamma \frac{1}{2n}\sum_{i=1}^n \hat{\zeta}_i(e^\top\hat{A}^{-1}x_i)^2\{y_i - g(\gamma^\top\Phi_i)\}^2 + \lambda_n^{(1)}\|\gamma\|_2^2, \quad \text{s.t.} \quad \frac{1}{N}\sum_{i=1}^N \hat{w}_i x_i\{y_i - g(\gamma^\top\Phi_i)\} = 0, \quad (9)$$

where $\hat{\zeta}_i = \sum_{s=1}^S \hat{\rho}_s^2\hat{\rho}_{1s}^{-2}V_i I(S_i = s)$ and $\hat{A} = N^{-1}\sum_{i=1}^N x_i^{\otimes 2}\dot{g}(\hat{\theta}_{SSL}^\top x_i)$ are the empirical estimates of $\zeta_i$ and $A$, respectively, and $\lambda_n^{(1)} = O(n^{-1/2})$ is again a tuning parameter for stable fitting. We then solve $N^{-1}\sum_{i=1}^N x_i\{g(\tilde{\gamma}^{(1)\top}\Phi_i) - g(\theta^\top x_i)\} = 0$ to obtain $\hat{\theta}_{intri}$, and return $e^\top\hat{\theta}_{intri}$ as the intrinsic efficient estimator for $e^\top\bar{\theta}$. The moment condition in (9) is used for calibrating the potential bias from a misspecified imputation model and ensuring the consistency of $\hat{\theta}_{intri}$. This condition is explicitly imposed when constructing our original proposal.

To study the asymptotic properties of $\hat{\theta}_{intri}$ and compare it with our original proposal, $\hat{\theta}_{SSL}$, we let

$$\bar{\gamma}^{(1)} = \operatorname*{argmin}_\gamma E[R(e^\top A^{-1}x)^2\{y - g(\gamma^\top\Phi)\}^2], \quad \text{s.t.} \quad E[x\{y - g(\gamma^\top\Phi)\}] = 0,$$

be the limit of $\tilde{\gamma}^{(1)}$, where $R = \sum_{s=1}^S I(S = s)\rho_s/\rho_{1s}$. The proof of Theorem 3 is provided in Appendix G.2.

Theorem 3.

Under Condition 1, and Conditions A1 and A2 introduced in Appendix G.2, $n^{1/2}(\hat{\theta}_{intri} - \bar{\theta})$ converges weakly to a mean zero normal distribution, and is asymptotically equivalent to $\hat{W}(\bar{\gamma}^{(1)})$ where

$$\hat{W}(\gamma) = n^{1/2}\sum_{s=1}^S \rho_s\left[n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)A^{-1}x_i\{y_i - g(\gamma^\top\Phi_i)\}\right].$$

In addition: (i) when the imputation model $P(y = 1 \mid u) = g(\gamma^\top\Phi)$ is correctly specified, $\hat{\theta}_{intri}$ is asymptotically equivalent to $\hat{\theta}_{SSL}$ and (ii) the asymptotic variance of $n^{1/2}(e^\top\hat{\theta}_{intri} - e^\top\bar{\theta})$ is minimized among estimators with influence functions in $\{e^\top\hat{W}(\gamma): E[x\{y - g(\gamma^\top\Phi)\}] = 0\}$. Consequently, the variance of the intrinsic efficient estimator is always less than or equal to the asymptotic variance of both $n^{1/2}(e^\top\hat{\theta}_{SL} - e^\top\bar{\theta})$ and $n^{1/2}(e^\top\hat{\theta}_{SSL} - e^\top\bar{\theta})$.

The details and theoretical analysis of the intrinsic efficient estimation procedure for the accuracy measure $\bar{D}$ are presented in Appendices G.1 and G.2. Similar to Theorem 3, we show that $\hat{D}_{intri}$ is asymptotically equivalent to our proposal, $\hat{D}_{SSL}$, when the imputation model is correctly specified and has smaller asymptotic variance than $\hat{D}_{SSL}$ when the imputation model is misspecified. However, it is important to note that estimation based on intrinsic efficiency is a non-convex problem and one may encounter numerical optimization issues which may limit its use in practice. We provide simulation studies comparing the intrinsic efficient estimator to the proposed approach in Section S4 of the Supplement.

5.4. Optimal Allocation in Stratified Sampling

Another important practical issue is how to select the strata and the corresponding selection probabilities. Here we provide a detailed assessment of the optimal (or Neyman) allocation of the labeled data across the strata. Specifically, the general form of the influence function for our estimators is

$$n^{1/2}\sum_{s=1}^S \rho_s\left\{n_s^{-1}\sum_{i=1}^N V_i I(S_i = s)f(F_i)\right\} + o_p(1)$$

for a function $f$ with $\sigma_s^2 = E\{f^2(F_i) \mid S_i = s\}$ and $E\{f(F_i)\} = 0$. The asymptotic variance can then be expressed as

$$\sum_{s=1}^S \rho_s^2\left\{n_s^{-2}\sum_{i=1}^N V_i I(S_i = s)E\{f^2(F_i) \mid S_i = s\}\right\} = \sum_{s=1}^S \frac{\rho_s^2\sigma_s^2}{n_s} = n^{-1}\sum_{s=1}^S n_s\sum_{s=1}^S \frac{(\rho_s\sigma_s)^2}{n_s} \geq n^{-1}\left(\sum_{s=1}^S \rho_s\sigma_s\right)^2,$$

by the Cauchy-Schwarz inequality, and equality holds if and only if

$$n_s = \frac{n\rho_s\sigma_s}{\sum_{s'=1}^S \rho_{s'}\sigma_{s'}} \quad \text{for } s = 1, \ldots, S. \quad (10)$$

The optimal allocation is therefore proportional to (i) the relative stratum size and (ii) the variability within the stratum, with greater weight placed on large strata with high variability. Consequently, stratified sampling leads to a more efficient estimator than uniform random sampling when the allocation in (10) is used.
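A small Python sketch of the allocation rule (10), with hypothetical inputs, is:

```python
import numpy as np

def neyman_allocation(n, rho, sigma):
    """Optimal stratum sizes from (10): n_s proportional to rho_s * sigma_s.

    rho: (S,) stratum proportions N_s / N; sigma: (S,) within-stratum SDs
    of f(F). Returns integer stratum sizes summing to n."""
    rho, sigma = np.asarray(rho), np.asarray(sigma)
    weights = rho * sigma / np.sum(rho * sigma)
    n_s = np.floor(n * weights).astype(int)
    n_s[np.argmax(weights)] += n - n_s.sum()  # assign the rounding remainder
    return n_s

# Example: a large, highly variable stratum receives most of the labels.
neyman_allocation(200, rho=[0.8, 0.2], sigma=[0.3, 0.1])  # -> array([185, 15])
```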

Remark 5.

There is a rich body of survey sampling literature concerning model-assisted approaches that address the practically important question of how to select the strata and the corresponding selection probabilities [Neyman, 1934, Särndal et al., 2003, Nedyalkova and Tillé, 2008]. The optimal allocation given by (10) is in a similar spirit to the sampling schemes used in Cai and Zheng [2012] and Liu et al. [2012]. It is particularly useful for EHR-based phenotyping studies such as the diabetic neuropathy example in Section 8, as it is often straightforward for domain experts to define filter variables that yield a relatively large stratum with increased prevalence of $y$ (e.g. patients with notes containing terms related to the disease, relevant lab values, or specialist visits) and thus increased variability.

We provide additional numerical studies to illustrate Remark 5 in Section S5 of the Supplement.

6. Perturbation Resampling Procedure for Inference

We next propose a perturbation resampling procedure to construct standard error (SE) and confidence interval (CI) estimates in finite samples. Resampling procedures are particularly attractive for making inference about $\bar{D}$ when $\mathcal{Y} = \mathcal{Y}_2$ since $\hat{D}_{SSL}(\theta)$ is not differentiable in $\theta$. To this end, we generate a set of independent and identically distributed (i.i.d.) non-negative random variables, $\mathcal{G} = (G_1, \ldots, G_n)^\top$, independent of $\mathcal{D}$, from a known distribution with mean one and unit variance.

For each set of $\mathcal{G}$, we first obtain a perturbed version of $\hat{\theta}_{SSL}$ as

$$\hat{\theta}^* = \hat{\theta}_{SSL} + \hat{A}^{-1}\sum_{k=1}^K\sum_{i\in\mathcal{L}_k}\frac{\hat{w}_i(G_i - 1)}{\sum_{j=1}^n \hat{w}_j}\left[x_iy_i - \hat{W}x_ig(\tilde{\gamma}^{(k)\top}\Phi_i) - (I - \hat{W})x_ig(\hat{\theta}^{(k)\top}x_i)\right],$$

where $\hat{A} = N^{-1}\sum_{i=1}^N x_i^{\otimes 2}\dot{g}(\hat{\theta}_{SSL}^\top x_i)$. We use CV to correct for variance underestimation due to overfitting. Next, we find the solution $\tilde{\gamma}^*$ to the perturbed objective function

$$\tilde{Q}_n^*(\gamma) = \frac{\sum_{i=1}^n \hat{w}_i\Phi_i[y_i - g(\gamma^\top\Phi_i)]G_i}{\sum_{i=1}^n \hat{w}_iG_i} - \lambda_n\gamma = 0 \quad (11)$$

and the solution $\tilde{\nu}^*$ that solves

$$P_n^*(\nu, \hat{\theta}^*) = \frac{\sum_{i=1}^n \hat{w}_i\{y_i - g(\tilde{\gamma}^{*\top}\Phi_i + \nu^\top z_{i\hat{\theta}^*})\}z_{i\hat{\theta}^*}G_i}{\sum_{i=1}^n \hat{w}_iG_i} = 0$$

to obtain perturbed counterparts of $\tilde{\gamma}$ and $\tilde{\nu}$, respectively. We then compute $\tilde{m}_{II}^*(u_i) = g(\tilde{\gamma}^{*\top}\Phi_i + \tilde{\nu}^{*\top}z_{i\hat{\theta}^*})$ and obtain the perturbed estimator of $\hat{D}_{SSL}(\hat{\theta}_{SSL})$ as

$$\hat{D}_{SSL}^*(\hat{\theta}^*) = N^{-1}\sum_{i=1}^N\left[\tilde{m}_{II}^*(u_i)\{1 - 2\mathcal{Y}(\hat{\theta}^{*\top}x_i)\} + \mathcal{Y}^2(\hat{\theta}^{*\top}x_i)\right].$$

Following arguments such as those in Tian et al. [2007], one may verify that $n^{1/2}\{\hat{D}_{SSL}^*(\hat{\theta}^*) - \hat{D}_{SSL}(\hat{\theta}_{SSL})\}$, conditional on $\mathcal{D}$, converges to the same limiting distribution as $\hat{T}_{SSL}$. Additionally, it may be shown that $\hat{T}_{SSL}^{cv} = n^{1/2}\{\hat{D}_{SSL}^{cv} - D(\bar{\theta})\}$ and hence $\hat{T}_{SSL}^{\omega} = n^{1/2}\{\hat{D}_{SSL}^{\omega} - D(\bar{\theta})\}$ converge to the limiting distribution of $\hat{T}_{SSL}$. We utilize these results to approximate the distribution of $\hat{T}_{SSL}^{\omega}$ with the empirical distribution of a large number of perturbed estimates using the above resampling procedure to base inference for $D(\bar{\theta})$ on the proposed bias-corrected estimator. The variance of $\hat{D}_{SSL}^{\omega}$ can correspondingly be estimated with the sample variance and confidence intervals may be constructed accordingly.
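Schematically, the resampling loop can be organized as below; `perturb_fn` stands for the full recomputation of $\hat{D}_{SSL}^*(\hat{\theta}^*)$ described above and is an assumed callable rather than part of the paper. Exp(1) weights satisfy the mean-one, unit-variance requirement.

```python
import numpy as np

def perturbation_se(d_point, perturb_fn, n_labeled, B=500, seed=0):
    """Approximate the SE and 95% CI of the bias-corrected estimator by
    perturbation resampling. perturb_fn(G) must recompute the perturbed
    estimate D*_SSL(theta*) from one draw of i.i.d. non-negative weights G
    with mean one and unit variance."""
    rng = np.random.default_rng(seed)
    stars = np.array([perturb_fn(rng.exponential(1.0, size=n_labeled))
                      for _ in range(B)])
    se = stars.std(ddof=1)
    return se, (d_point - 1.96 * se, d_point + 1.96 * se)
```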

7. Simulation Studies

We conducted extensive simulation studies to evaluate the performance of the proposed SSL procedures and to compare to existing methods. Throughout, we generated $p = 10$ dimensional covariates $x$ from $N(0, C)$ with $C_{kl} = 3(0.4)^{|k-l|}$. Stratified sampling was performed according to $S$ generated from the following two mechanisms:

  1. $S \in \{1, 2\}$ with $S = 1 + I(x_1 + \delta_1 \geq 0.5)$ and $\delta_1 \sim N(0, 1)$.

  2. $S \in \{1, 2, 3, 4\}$ with $S = 1 + I(x_1 + \delta_1 \geq 0.5) + 2I(x_3 + \delta_2 \geq 0.5)$, $\delta_1 \sim N(0, 1)$, $\delta_2 \sim N(0, 1)$, and $\delta_1 \perp \delta_2$.

We let $\mathbf{S} = (I(S = 1), \ldots, I(S = S - 1))^\top$. For both settings, we sampled $n_s$ = 100 or 200 observations from each stratum. Throughout, we let $v_1$ be the natural spline of $x$ with 3 knots and $v_2$ be the interaction terms $\{x_1 : x_{-1}, x_2 : x_{-(1,2)}\}$, where $x_1 : x_{-1}$ and $x_2 : x_{-(1,2)}$ represent interaction terms of $x_1$ with the remaining covariates and $x_2$ with the covariates excluding $x_1$ and $x_2$, respectively. With $\theta = \{0, 1, 1, 0.5, 0.5, 0_{(p-4)\times 1}^\top\}^\top$ and $\epsilon_{logistic}$ and $\epsilon_{extreme}$ denoting noise generated from the logistic and extreme value$(-2, 0.3)$ distributions, we simulated $y$ from the following models:

  1. $(\mathbb{M}_{correct}, \mathbb{I}_{correct})$ with correct outcome model and correct imputation model:
    $y = I(\theta^\top x + \epsilon_{logistic} > 2)$ and $\Phi = (1, x^\top, v_1^\top, \mathbf{S}^\top)^\top$;
  2. $(\mathbb{M}_{incorrect}, \mathbb{I}_{correct})$ with incorrect outcome model and correct imputation model:
    $y = I[\theta^\top x + 0.5\{x_1x_2 + x_1x_5 - x_2x_6I(S = 1)\} + \epsilon_{logistic} > 0]$ and $\Phi = (1, x^\top, v_2^\top, \mathbf{S}^\top)^\top$;
  3. $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$ with incorrect outcome model and incorrect imputation model:
    $y = I\{\theta^\top x + x_1^2 + x_3^2 + \exp(2 - 3x_4 - 3x_6)\epsilon_{extreme} > 2\}$ and $\Phi = (1, x^\top, v_1^\top, \mathbf{S}^\top)^\top$.

While the outcome model is misspecified in both (ii) and (iii), the misspecification is more severe in (iii) due to the higher magnitude of nonlinear effects. These configurations are chosen to mimic EHR settings where the signals are typically sparse and S is small. The covariate effects of 1 represent the strong signals from the main billing codes and free-text mentions of the disease of interest. The two weaker signals 0.5 characterize features such as related medications, signs, symptoms and lab results relevant to the disease of interest.
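For concreteness, a Python sketch of the data generation under sampling mechanism 1 and model (i) follows; the $\geq 0.5$ thresholds and parameter values are as reconstructed above and should be checked against the authors' repository.

```python
import numpy as np
from scipy.linalg import toeplitz

def simulate_setting_one(N=20000, seed=0):
    """Generate data under sampling mechanism 1 and model (i):
    x ~ N(0, C) with C_kl = 3 * 0.4^|k-l|, S = 1 + I(x_1 + delta_1 >= 0.5),
    y = I(theta' x + eps_logistic > 2)."""
    rng = np.random.default_rng(seed)
    p = 10
    C = 3 * toeplitz(0.4 ** np.arange(p))
    x = rng.multivariate_normal(np.zeros(p), C, size=N)
    S = 1 + (x[:, 0] + rng.standard_normal(N) >= 0.5).astype(int)
    theta = np.array([0, 1, 1, 0.5, 0.5] + [0] * (p - 4))
    X = np.column_stack([np.ones(N), x])  # prepend the intercept
    y = (X @ theta + rng.logistic(size=N) > 2).astype(int)
    return X, y, S
```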

Across all settings, we compare our SS estimators to both the SL estimator and the alternative density ratio (DR) method [Kawakita and Kanamori, 2013, Kawakita and Takeuchi, 2014]. The basis function φ(u) required in the DR method was chosen to be the same as Φ(u) in our method for all settings. The details and theoretical properties of the DR method are further discussed in the Supplement. We employed the ensemble CV strategy to construct a bias corrected DR estimator for D¯, denoted as D^DRω, to ensure a fair comparison to our approach. The three settings of outcome and imputation models under (i), (ii), and (iii) allow us to verify the asymptotic efficiency of the proposed SSL procedures relative to the SL and DR methods under various scenarios of misspecification.

For each configuration, results are summarized with 500 independent data sets. The size of the unlabeled data was chosen to be 20,000 across all settings. For all our numerical studies including the real data application, CV was performed with either $K$ = 3 or $K$ = 6 and averaged over 20 replications. The estimated SEs were based on 500 perturbed realizations and the OMR was evaluated with $c$ = 0.5. We let $\lambda_n = \log(2p)/n^{1.5}$ when fitting the ridge penalized logistic regression. We focus primarily on results for $S$ = 2 and $K$ = 6, but include results for $S$ = 4 and $K$ = 3 in Section S3 of the Supplement as they show similar patterns. Additionally, our analyses concentrate on the performance of the accuracy metrics. Results for the regression parameter estimates can be found in Section S2 of the Supplement. The code to implement the proposed methods and run the simulation studies can be found at https://github.com/jlgrons/Stratified-SSL.

In Figure 1, we present the percent biases of the apparent, CV, and ensemble CV estimators of the accuracy parameters. Although all three estimators have negligible biases, the SSL estimator exhibits slightly less bias than its supervised counterpart and the DR estimator under $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$. The ensemble CV method is effective in bias correction while the apparent estimators are optimistic and the standard CV estimator is pessimistic. For example, under $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$, $n_s$ = 100 and $S$ = 2, the percent biases of the SSL estimator of the OMR are −8.2%, 8.8% and −0.5% for the plug-in, 6-fold CV, and ensemble CV methods, respectively. The efficiency of $\hat{D}_{SSL}^{\omega}$ and $\hat{D}_{DR}^{\omega}$ relative to $\hat{D}_{SL}^{\omega}$ for both the Brier score and OMR is presented in Figure 2. Again, $\hat{D}_{SSL}^{\omega}$ is substantially more efficient than $\hat{D}_{SL}^{\omega}$ and $\hat{D}_{DR}^{\omega}$, with efficiency gains of approximately 15%–30% under $(\mathbb{M}_{correct}, \mathbb{I}_{correct})$, 40% under $(\mathbb{M}_{incorrect}, \mathbb{I}_{correct})$, and 40%–80% under $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$. Results for $K$ = 3 have similar patterns and are presented in Figures S1 and S2 of the Supplement. In Table 1, we present the results for the interval estimation obtained from the perturbation resampling procedure for the SS estimator $\hat{D}_{SSL}^{\omega}$. The SEs are well approximated and the empirical coverage for the 95% CIs is close to the nominal level across all settings.

Figure 1: Percent biases of the apparent (AP), CV, and ensemble cross-validation (eCV) estimators of the Brier score (BS) and overall misclassification rate (OMR) for SL, SSL and DR under (i) $(\mathbb{M}_{correct}, \mathbb{I}_{correct})$, (ii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{correct})$, and (iii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$. Shown are the results obtained with 6-fold CV.

Figure 2: Relative efficiency (RE) of $\hat{D}_{SSL}^{\omega}$ (SSL) and $\hat{D}_{DR}^{\omega}$ (DR) compared to $\hat{D}_{SL}^{\omega}$ for the Brier score (BS) and OMR under (i) $(\mathbb{M}_{correct}, \mathbb{I}_{correct})$, (ii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{correct})$, and (iii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$. Shown are the results obtained with 6-fold CV.

Table 1: The 100×ESE (empirical standard error) of $\hat{D}_{SL}^{\omega}$, $\hat{D}_{SSL}^{\omega}$ and $\hat{D}_{DR}^{\omega}$ under (i) $(\mathbb{M}_{correct}, \mathbb{I}_{correct})$, (ii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{correct})$, and (iii) $(\mathbb{M}_{incorrect}, \mathbb{I}_{incorrect})$. For $\hat{D}_{SSL}^{\omega}$, we also show the average of the 100×ASE (average estimated standard error) as well as the empirical coverage probability (CP) of the 95% confidence intervals constructed based on the resampling procedure.

(i) (M_correct, I_correct)

                        Brier score                               OMR
                 D_SL^ω  D_SSL^ω               D_DR^ω  D_SL^ω  D_SSL^ω               D_DR^ω
                 ESE     ESE    ASE    CP      ESE     ESE     ESE    ASE    CP      ESE
S = 2, ns = 100  1.27    1.19   1.25   0.94    1.31    2.29    2.10   2.23   0.97    2.35
S = 2, ns = 200  0.97    0.87   0.86   0.93    0.95    1.67    1.50   1.60   0.97    1.67
S = 4, ns = 100  0.97    0.89   0.85   0.93    0.95    1.70    1.54   1.58   0.95    1.64
S = 4, ns = 200  0.67    0.58   0.60   0.95    0.66    1.16    1.02   1.11   0.97    1.16

(ii) (M_incorrect, I_correct)

                        Brier score                               OMR
                 D_SL^ω  D_SSL^ω               D_DR^ω  D_SL^ω  D_SSL^ω               D_DR^ω
                 ESE     ESE    ASE    CP      ESE     ESE     ESE    ASE    CP      ESE
S = 2, ns = 100  1.72    1.47   1.39   0.93    1.85    3.09    2.56   2.65   0.94    3.42
S = 2, ns = 200  1.13    1.01   0.94   0.92    1.16    2.14    1.85   1.85   0.95    2.21
S = 4, ns = 100  1.21    1.07   0.96   0.92    1.24    2.13    1.87   1.88   0.95    2.18
S = 4, ns = 200  0.86    0.73   0.67   0.93    0.87    1.62    1.37   1.34   0.94    1.62

(iii) (M_incorrect, I_incorrect)

                        Brier score                               OMR
                 D_SL^ω  D_SSL^ω               D_DR^ω  D_SL^ω  D_SSL^ω               D_DR^ω
                 ESE     ESE    ASE    CP      ESE     ESE     ESE    ASE    CP      ESE
S = 2, ns = 100  1.57    1.31   1.29   0.94    1.53    2.68    2.15   2.31   0.96    2.61
S = 2, ns = 200  1.05    0.86   0.85   0.95    1.01    1.87    1.45   1.54   0.97    1.83
S = 4, ns = 100  1.11    0.87   0.88   0.96    0.99    1.97    1.46   1.57   0.96    1.87
S = 4, ns = 200  0.78    0.61   0.61   0.95    0.73    1.37    1.02   1.08   0.97    1.30

Remark 6.

While our simulation studies focus on the SSL estimators proposed in Section 3, we also investigated the finite sample performance of θ^intri and D^intri and compared them with our original proposals. The numerical studies are described in Section S4 of the Supplement and demonstrate that when the estimated coefficients for the imputation model of the original estimators are equal or close to those of the intrinsic efficient estimator, these two methods perform equivalently with respect to mean square errors (MSE). In contrast, under a setting where the coefficients for the imputation model differ across these two methods, θ^intri and D^intri have about 30% smaller MSE than θ^SSL and D^SSL, on average. The detailed results are presented in Table S6 of the Supplement.

Remark 7.

To illustrate the benefit of stratified sampling in both the supervised and SS settings, we provide numerical studies of the optimal allocation in Section S5 of the Supplement. Mimicking our example in Section 8, we let the risk of $y$ differ substantially across the $S$ = 2 sampling groups. The stratification variable is picked so that $P(y = 1 \mid S = 1)$ is much lower than $P(y = 1 \mid S = 2)$. We consider two sampling strategies: (i) uniform random sampling of $n$ subjects, and (ii) stratified sampling of $n/2$ subjects from each stratum. Since $P(y = 1 \mid S = 1)$ is low and close to 0, the variability of this stratum, $\sigma_1^2 = E\{f^2(F) \mid S = 1\}$, is smaller than that of $S$ = 2, where $P(y = 1 \mid S = 2)$ is not near 0 or 1. Connecting this with (10), the stratified sampling strategy oversamples within $S$ = 2 so that its allocation of $n_2$ is closer to the optimal choice. Consistent with this observation, our simulation results indicate that stratified sampling is more efficient than uniform random sampling in both the supervised and SS settings, with an average relative efficiency > 1.45 across different setups. We further inspect the supervised estimator of $D(\bar{\theta})$ under setup (I) in Section S5, for which the optimal allocation is $n_1$ = 0.47$n$ and $n_2$ = 0.53$n$ and nearly coincides with our equal allocation of $n$ across the two strata.

8. Example: EHR Study of Diabetic Neuropathy

We applied the proposed SSL procedures to develop and evaluate an EHR phenotyping algorithm for classifying diabetic neuropathy (DN), a common and serious complication of diabetes resulting in nerve damage. The full study cohort consists of $N$ = 16,826 patients over age 18 with one or more of 12 ICD9 codes relating to DN identified from the Partners HealthCare EHR. An initial assessment of 100 charts by physicians revealed that the prevalence of DN in the study cohort was approximately 17%. To obtain a labeled set with sufficient DN cases for model training and improve efficiency, the investigators decided to employ a stratified sampling scheme. To do so, a binary filter variable $S$ indicating whether a patient had a neurological exam and a neurology note with at least 1,000 characters was created. The prevalence of DN in the "enriched" stratum with $S$ = 1 was expected to be higher than that in the stratum with $S$ = 0. As demonstrated in our theoretical analysis in Section 5.4 and our numerical studies in Section S5, oversampling within the enriched set can improve estimation efficiency relative to taking a uniform random sample and is a common approach taken in EHR-based analyses. For this study, the investigators sampled $n_0$ = 70 and $n_1$ = 538 patients from the $N_0$ = 13,608 patients with $S$ = 0 and the $N_1$ = 3,218 patients with $S$ = 1, respectively, for developing the phenotyping algorithm.

To train the model for classifying DN, a set of 11 codified and NLP features related to DN were selected from an original list of 75 via an unsupervised screening as described in Yu et al. [2015]. The codified features included $S$, diagnostic codes for diabetes, type 2 diabetes mellitus, diabetic neuropathy, other idiopathic peripheral autonomic neuropathy, and diabetes mellitus with neurological manifestation, as well as normal glucose lab values and prescriptions for anti-diabetic medications. The NLP features included mentions of terms related to DN in the patient record including glycosylated hemoglobin (HgA1c), diabetic, and neuropathy. As all of these features (with the exception of $S$) are count variables and tend to be highly skewed, we used the transformation $x \mapsto \log(x + 1)$ to stabilize model fitting.

We developed DN classification models by fitting a logistic regression with the above features based on $\hat{\theta}_{SL}$, $\hat{\theta}_{SSL}$ and $\hat{\theta}_{DR}$ obtained from density ratio weighted estimation. Since the proportion of observations with $S$ = 0 is relatively low in the labeled data, we implemented 100 replications of the 6-fold CV procedure by splitting the data randomly within each stratum to improve the stability of the CV procedure. To construct the basis for $\hat{\theta}_{SSL}$ and $\hat{\theta}_{DR}$, we used a natural spline with 3 knots on all covariates except $S$. To improve training stability, we set the ridge tuning parameter $\lambda_n = n^{-1}$ when fitting the imputation model.

As shown in Table 2(a), the point estimates are reasonably similar, which confirms the consistency and stability of the SS estimator in a real data setting. As expected, we find that the two most influential predictors are the diagnostic code for diabetic neuropathy and anti-diabetic medications. Importantly, we note substantial efficiency gains of $\hat{\theta}_{SSL}$ compared to $\hat{\theta}_{SL}$. The SSL estimates are > 50% more efficient than the SL estimates for several features, including six diagnostic code features and one NLP feature for DN. Additionally, $\hat{\theta}_{SSL}$ is the most efficient estimator among all three approaches for nearly all variables.

Table 2:

Results from the diabetic neuropathy EHR study: (a) estimates (Est.) of the regression parameters for both codified (COD) and NLP features based on θ^SL, θ^SSL and θ^DR along with their estimated SEs and the coordinate-wise relative efficiencies (RE) of θ^SSL compared to θ^SL and (b) D^SLω, D^SSLω and D^DRω along with their estimated SEs and relative efficiencies (RE) of D^SSLω and D^DRω compared to D^SLω.

(a) Estimates of the Regression Coefficients
θ^SL θ^SSL θ^DR
Est. SE Est. SE RE Est. SE RE
Intercept −3.89 0.80 −3.85 0.73 1.21 −3.26 0.66 1.45

COD Neurological exam & note −1.09 0.81 −0.90 0.61 1.77 −2.38 1.16 0.48
Diabetes −0.24 0.57 0.02 0.43 1.79 −0.09 0.48 1.42
Type 2 Diabetes Mellitus −1.05 0.73 −0.52 0.60 1.62 −0.62 0.86 0.73
Diabetic Neuropathy 1.99 0.86 1.79 0.68 1.58 1.66 1.08 0.63
Anti-diabetic Meds 1.70 0.59 1.12 0.44 1.79 1.50 0.51 1.32
Diabetes Mellitus with
 Neuro Manifestation 0.36 1.17 0.60 0.98 1.42 0.59 0.90 1.68
Other Idiopathic Peripheral
 Autonomic Neuropathy 0.86 0.71 0.93 0.68 1.09 1.01 0.76 0.89
Normal Glucose −0.56 0.65 −0.20 0.51 1.64 −1.27 0.80 0.66

NLP Diabetic 0.30 0.58 −0.57 0.49 1.39 0.14 0.65 0.80
HgA1c −0.52 0.75 −0.64 0.70 1.16 −0.67 0.85 0.79
Neuropathy 0.27 0.58 0.30 0.47 1.55 0.37 0.54 1.15
(b) Estimates of the Accuracy Parameters (×100)
D^SLω D^SSLω D^DRω
Est. SE Est. SE RE Est. SE RE
Brier score 8.97 2.09 9.59 1.68 1.55 9.60 1.87 1.26
OMR 12.87 3.25 14.01 2.55 1.63 12.29 3.04 1.14

In Table 2(b), we compare $\hat{D}_{SL}^{\omega}$, $\hat{D}_{SSL}^{\omega}$ and $\hat{D}_{DR}^{\omega}$ for the Brier score and OMR with $c$ = 0.5. While the point estimates for the accuracy measures based on these different approaches are relatively similar, $\hat{D}_{SSL}^{\omega}$ is 55% more efficient than $\hat{D}_{SL}^{\omega}$ for the Brier score and 63% more efficient for the OMR. Again, $\hat{D}_{SSL}^{\omega}$ is substantially more efficient than the DR estimator, $\hat{D}_{DR}^{\omega}$. These results support the potential value of our method for EHR-based research as these gains in efficiency may be directly translated into requiring fewer labeled examples for model evaluation.

9. Discussion

In this paper, we focused on the evaluation of a classification rule derived from a working regression model under stratified sampling in the SS setting. In particular, we introduced a two-step imputation-based method for estimation of the Brier score and OMR that makes use of unlabeled data. Additionally, as a by-product of our procedure, we obtained an efficient SS estimator of the regression parameter. Through theoretical and numerical studies, we demonstrated the advantage of the SS estimator over the SL estimator with respect to efficiency. We also developed a weighted CV procedure to adjust for overfitting and a resampling procedure for making inference. Our numerical studies indicate that our proposed method outperforms the existing DR method for SSL in the finite sample studies and we provide further discussion of this finding in Section S6 of the Supplement. Importantly, this article is one of the first theoretical studies of labeling based on stratified sampling within the SSL literature. We focus on the stratified sampling scheme due to its direct application to a variety of EHR-based analyses, including the development of a phenotyping algorithm for diabetic neuropathy presented in the previous section.

In our numerical studies, we used spline functions with 3 or 4 knots and interaction terms for the imputation model. It would be possible to use more knots or add more features to the basis function for settings with a larger $n$. However, care must be taken to avoid overfitting and a potential loss in the efficiency gain of the SS estimator in finite samples. Alternatively, other basis functions can be utilized provided that $\Phi(u)$ contains $x$ in its components to ensure consistency of the regression parameter. In settings where nonlinear effects of $x$ on $y$ are present, it may be desirable to impose a more complex outcome model to improve the prediction performance. A potential approach is to explicitly include nonlinear basis functions in the outcome model. In Section S7 of the Supplementary Materials, we consider using the leading principal components (PCs) of $x$ and $\Psi(x)$, where $\Psi(\cdot)$ is a vector of nonlinear transformation functions, under a variety of settings. This approach performs similarly to or better than the commonly used random forest model with respect to predictive accuracy, suggesting the utility of our proposed methods in the presence of nonlinear effects. Our numerical results also illustrate the efficiency gain of the SS estimators of the Brier score and OMR relative to the SL and DR methods. It is important to note, however, that the dimensions of both $x$ and $\Phi$ were assumed to be fixed in our asymptotic analysis. Accommodating more complex modeling with $p$ not small relative to $n$ requires extending the proposed SSL approach to settings where $x$ and $\Phi$ are high dimensional.

For accuracy estimation, we proposed an ensemble CV estimator that eliminates first-order bias when the estimated regression parameter is the minimizer of the empirical performance measure. Though this condition may not hold when the outcome model is misspecified, we have found that the suggested weights perform well in our numerical studies. Such ensemble methods that accommodate the more general case in both the supervised and SS settings warrant further research. Additionally, an important setting where our proposed SSL procedure would be of great use is in drawing inferences about two competing regression models. As it is likely that at least one model is misspecified, we would expect to observe efficiency gains in estimating the difference in prediction error with the proposed method.

Lastly, while the present work focuses on the binary outcome along with the Brier score and OMR, the proposed SSL framework can potentially be extended to more general settings with continuous $y$ and/or other accuracy parameters. In particular, for binary $y$ and the corresponding classification rule $\mathcal{Y}_2 = I\{g(\theta^\top x) > c\}$, it would be of interest to consider estimation of the sensitivity, specificity, and weighted OMR with different threshold values to analyze the costs associated with the false positive and false negative errors.

Supplementary Material


Acknowledgments

This research was supported by grants F31-GM119263, T32-NS048005, and R01HL089778 from the National Institutes of Health.

Appendix

Here we provide justifications for our main theoretical results. The following lemma confirming the existence and uniqueness of the limiting parameters $\bar{\theta}$, $\bar{\gamma}$, and $\bar{\nu}_\theta$ will be used in our subsequent derivations.

Lemma A1.

Under Conditions 1–3, unique $\bar{\theta}$ and $\bar{\gamma}$ exist. In addition, there exists $\delta > 0$ such that a unique $\bar{\nu}_\theta$ exists for any $\theta$ satisfying $\|\theta - \bar{\theta}\|_2 < \delta$.

Proof.

Conditions 3(A) and (B) imply that there is no $\theta$ and $\gamma$ such that, with non-trivial probability,

$$I(y_1 > y_2) = I(\theta^\top x_1 > \theta^\top x_2),$$

and

$$I(y_1 > y_2) = I(\gamma^\top\Phi_1 > \gamma^\top\Phi_2).$$

It follows directly from Appendix I of Tian et al. [2007] that there exist finite $\bar{\theta}$ and $\bar{\gamma}$ that solve $U(\theta) = 0$ and $Q(\gamma) = 0$, respectively, and that $\bar{\theta}$ and $\bar{\gamma}$ are unique. For $\theta \in \Theta$, there exists no $\nu$ such that

$$I(y_1 > y_2) = I(\bar{\gamma}^\top\Phi_1 + \nu^\top z_{1\theta} > \bar{\gamma}^\top\Phi_2 + \nu^\top z_{2\theta}),$$

or

$$I(y_1 > y_2) = I(\bar{\gamma}^\top\Phi_1 + \nu^\top z_{1\theta} < \bar{\gamma}^\top\Phi_2 + \nu^\top z_{2\theta}),$$

which similarly implies there exists a finite $\bar{\nu}_\theta$ that is the solution to $R(\nu \mid \theta) = 0$. The solution is also unique as $E[z_\theta^{\otimes 2}\dot{g}\{\bar{\gamma}^\top\Phi + \nu^\top z_\theta\}] \succ 0$ for any $\nu$. □

A. Estimation Procedure for $\hat{\theta}_{SSL}$

We propose to obtain a simple SS estimator for $\bar{\theta}$, $\check{\theta}_{SSL}$, as the solution to

$$\hat{U}_N(\theta) = \frac{1}{N}\sum_{i=1}^N x_i\{g(\tilde{\gamma}^\top\Phi_i) - g(\theta^\top x_i)\} = 0. \quad (A.1)$$

Note that when $u$ includes the stratum information $S$ and the imputation model (2) is correctly specified, there is no need to use the inverse probability weighted (IPW) estimating equation with $\hat{w}_i$ as in (3). However, under the general scenario where the imputation model may be misspecified, the unweighted estimating equation is not guaranteed to provide an asymptotically unbiased estimate of $\bar{\gamma}$, which necessitates the use of the IPW approach.

As detailed in the subsequent sections, when the imputation and outcome models are misspecified, the efficiency gain of $\check{\theta}_{SSL}$ relative to $\hat{\theta}_{SL}$ is not theoretically guaranteed. We therefore obtain the final SS estimator, denoted as $\hat{\theta}_{SSL} = (\hat{\theta}_{SSL,0}, \ldots, \hat{\theta}_{SSL,p})^\top$, as a linear combination of $\hat{\theta}_{SL}$ and $\check{\theta}_{SSL}$ to minimize the asymptotic variance. For simplicity we consider here the component-wise optimal combination of the two estimators. That is, the $j$th component of $\bar{\theta}$, $\bar{\theta}_j$, is estimated with

$$\hat{\theta}_{SSL,j} = \hat{W}_{1j}\check{\theta}_{SSL,j} + (1 - \hat{W}_{1j})\hat{\theta}_{SL,j}$$

where $\hat{W}_{1j}$ is the first component of the vector $\hat{W}_j = 1^\top\hat{\Sigma}_j^{-1}/(1^\top\hat{\Sigma}_j^{-1}1)$ and $\hat{\Sigma}_j$ is a consistent estimator for $\text{cov}\{(\check{\theta}_{SSL,j}, \hat{\theta}_{SL,j})^\top\}$. To estimate the variance of $\hat{\theta}_{SSL}$, one may rely on estimates of the influence functions of $\check{\theta}_{SSL}$ and $\hat{\theta}_{SL}$. To avoid under-estimation in a finite sample, we obtain bias-corrected estimates of the influence functions via K-fold CV. Details on the K-fold CV procedure, as well as the computations for the aforementioned estimation of $\hat{\Sigma}_j$ and $\hat{W}_{1j}$, are given in Appendix B.
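A compact Python sketch of this component-wise combination, taking CV-corrected influence-function values as inputs (hypothetical arrays here) and mirroring the regularization $(\hat{\Sigma}_j + \delta_n I)^{-1}$ discussed in Appendix B, is:

```python
import numpy as np

def combine_componentwise(check_if, hat_if, ridge=1e-8):
    """Component-wise optimal weights for combining theta_check_SSL and
    theta_hat_SL. check_if, hat_if: (n, p+1) arrays of estimated influence-
    function values for the two estimators. Returns the weights W_1j."""
    n, q = check_if.shape
    ones = np.ones(2)
    W1 = np.empty(q)
    for j in range(q):
        Sigma_j = np.cov(np.column_stack([check_if[:, j], hat_if[:, j]]),
                         rowvar=False)
        w = np.linalg.solve(Sigma_j + ridge * np.eye(2), ones)  # Sigma^{-1} 1
        W1[j] = (w / (ones @ w))[0]  # first component of 1'S^{-1}/(1'S^{-1}1)
    return W1
```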

B. Cross-validation Based Inference for $\hat\theta_{SSL}$

Here we provide the details of the procedure to obtain $\hat\theta_{SSL}$ as well as an estimate of its variance. We employ $K$-fold CV in the proposed procedure to adjust for overfitting in finite samples and denote each fold of $\mathcal{L}$ as $\mathcal{L}_k$ for $k = 1, \ldots, K$. First, we estimate $\hat\Sigma_j$ with

$$n^{-1}\sum_{k=1}^{K}\sum_{i\in\mathcal{L}_k}\{W_j(\tilde\gamma_{(k)}, D_i), V_j(\hat\theta_{(k)}, D_i)\}^{\top}\{W_j(\tilde\gamma_{(k)}, D_i), V_j(\hat\theta_{(k)}, D_i)\}$$

where $\tilde\gamma_{(k)}$ is the estimator of $\bar\gamma$ based on $\mathcal{L}\setminus\mathcal{L}_k$, $\hat\theta_{(k)}$ is the supervised estimator of $\bar\theta$ based on $\mathcal{L}\setminus\mathcal{L}_k$,

$$W(\tilde\gamma_{(k)}, D_i) = \hat A^{-1}\big[\varpi_i x_i\{y_i - g(\tilde\gamma_{(k)}^{\top}\Phi_i)\}\big], \qquad V(\hat\theta_{(k)}, D_i) = \hat A^{-1}\big[\varpi_i x_i\{y_i - g(\hat\theta_{(k)}^{\top}x_i)\}\big],$$
$$\hat A = N^{-1}\sum_{i=1}^{N}x_i^{\otimes 2}\dot g(\check\theta_{SSL}^{\top}x_i), \quad\text{and}\quad \varpi_i = \hat w_i\, n/N.$$

In practice, $\hat\Sigma_j$ may be unstable due to the high correlation between $\check\theta_{SSL}$ and $\hat\theta_{SL}$. One may use a regularized estimator $(\hat\Sigma_j + \delta_n I)^{-1}$ with some $\delta_n = O(n^{-1/2})$ to stabilize the estimation and obtain $\hat\theta_{SSL}$ accordingly. The covariance of $\hat\theta_{SSL}$ may then be consistently estimated with

$$n^{-1}\sum_{k=1}^{K}\sum_{i\in\mathcal{L}_k}\{Z(\tilde\gamma_{(k)}, \hat\theta_{(k)}, D_i)\}^{\otimes 2}, \quad\text{where} \tag{B.1}$$
$$Z(\tilde\gamma_{(k)}, \hat\theta_{(k)}, D_i) = \hat{\mathbb{W}}\{W(\tilde\gamma_{(k)}, D_i)\} + (I - \hat{\mathbb{W}})\{V(\hat\theta_{(k)}, D_i)\}, \tag{B.2}$$

$\hat{\mathbb{W}} = \mathrm{diag}(\hat W_{10}, \ldots, \hat W_{1p})$ is an estimate of $\mathbb{W} = \mathrm{diag}(W_{10}, \ldots, W_{1p})$, $\hat W_{1j}$ is the first component of $\hat{\mathbf{W}}_j = \hat\Sigma_j^{-1}\mathbf{1}/(\mathbf{1}^{\top}\hat\Sigma_j^{-1}\mathbf{1})$, and $W_{1j}$ is the first component of $\mathbf{W}_j = \Sigma_j^{-1}\mathbf{1}/(\mathbf{1}^{\top}\Sigma_j^{-1}\mathbf{1})$. Confidence intervals for the regression parameters can be constructed with the proposed variance estimates and the asymptotic normal distribution of the SS estimator.
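The computation described in this appendix reduces to simple matrix algebra once the fold-specific influence-function values are available. The sketch below is schematic, under our own naming assumptions rather than the authors' code: it forms the regularized $\hat\Sigma_j$, the combination weights $\hat W_{1j}$, and the resulting variance estimates.

```python
import numpy as np

def combine_componentwise(W_inf, V_inf, delta_n=None):
    """W_inf, V_inf: (n, p+1) arrays of bias-corrected influence-function values
    for theta_check_SSL and theta_hat_SL, stacked over the K CV folds.
    Returns the weights W_hat_1j and variance estimates for theta_hat_SSL."""
    n, p1 = W_inf.shape
    if delta_n is None:
        delta_n = n ** -0.5                           # ridge term delta_n = O(n^{-1/2})
    one = np.ones(2)
    W1, var = np.empty(p1), np.empty(p1)
    for j in range(p1):
        Z = np.column_stack([W_inf[:, j], V_inf[:, j]])
        Sigma_j = Z.T @ Z / n + delta_n * np.eye(2)   # regularized cov{(check, SL)'}
        w_j = np.linalg.solve(Sigma_j, one)
        w_j /= one @ w_j                              # Sigma^{-1} 1 / (1' Sigma^{-1} 1)
        W1[j] = w_j[0]
        var[j] = np.mean((Z @ w_j) ** 2) / n          # variance of combined estimator
    return W1, var
```

A normal-approximation confidence interval for $\bar\theta_j$ would then be the combined estimate plus or minus, e.g., 1.96 times the square root of `var[j]`.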

C. Asymptotic Properties of $\hat\theta_{SL}$

The main complication in deriving the asymptotic properties of the SL estimators arises from the fact that $P(V_i = 1 \mid \mathcal{F}) = \hat\pi_{S_i} \to 0$ as $n \to \infty$, and hence $\hat w_i = V_i/\hat\pi_{S_i}$ is an ill-behaved random variable tending to infinity in the limit for those with $V_i = 1$. This substantially distinguishes the SS setting from the standard missing data literature. To overcome this complication, we note that for subjects in the labeled set, $N^{-1}\hat w_i = \sum_{s=1}^{S} I(S_i = s)\hat\rho_s n_s^{-1}V_i$, and that the $V_i \mid S_i = s$ are independent and identically distributed (i.i.d.) random variables since the labeled observations are drawn randomly within each stratum. Also note that $\hat\rho_s \overset{p}{\to} \rho_s$ as assumed in Section 2.1. Hence, for any function $f$ with $\mathrm{var}\{f(F_i) \mid S_i = s\} < \infty$,

$$N^{-1}\sum_{i=1}^{N}\hat w_i f(F_i) = \sum_{s=1}^{S}\hat\rho_s n_s^{-1}\sum_{i=1}^{N} V_i I(S_i = s) f(F_i) = \sum_{s=1}^{S}\{\rho_s + o_p(1)\}\Big\{n_s^{-1}\sum_{i=1}^{N} V_i I(S_i = s) f(F_i)\Big\} + o_p(1) \tag{C.1}$$
$$= \sum_{s=1}^{S}\rho_s E\{f(F_i) \mid S_i = s\} + o_p(1) = E\{f(F_i)\} + o_p(1). \tag{C.2}$$

We begin by verifying that $\hat\theta_{SL}$ is consistent for $\bar\theta$. It suffices to show that (i) $\sup_{\theta\in\Theta}\|U_n(\theta) - U(\theta)\|_2 = o_p(1)$ and (ii) $\inf_{\|\theta - \bar\theta\|_2 > \epsilon}\|U(\theta)\|_2 > 0$ for any $\epsilon > 0$ [Newey and McFadden, 1994, Lemma 2.8]. To this end, we write

$$U_n(\theta) = \frac{1}{N}\sum_{i=1}^{N}\hat w_i x_i\{y_i - g(\theta^{\top}x_i)\} = \sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{V_i = 1} I(S_i = s)x_i\{y_i - g(\theta^{\top}x_i)\}\Big] + o_p(1).$$

Under Conditions 1–3, $x$ belongs to a compact set and $\dot g(\theta^{\top}x)$ is continuous and uniformly bounded for $\theta\in\Theta$. From the uniform law of large numbers (ULLN) [Pollard, 1990, Theorem 8.2], $n_s^{-1}\sum_{V_i=1} I(S_i=s)x_i\{y_i - g(\theta^{\top}x_i)\}$ converges to $E[x_i\{y_i - g(\theta^{\top}x_i)\}\mid S_i = s]$ in probability, uniformly in $\theta$, as $n\to\infty$, so that $\sup_{\theta\in\Theta}\|U_n(\theta) - U(\theta)\|_2 = o_p(1)$. Furthermore, (ii) follows directly from Lemma A1, and consequently $\hat\theta_{SL}\overset{p}{\to}\bar\theta$ as $n\to\infty$.

Next we consider the asymptotic normality of $\hat{\mathcal{W}}_{SL} = n^{1/2}(\hat\theta_{SL} - \bar\theta)$. Noting that $\hat\theta_{SL}\overset{p}{\to}\bar\theta$ and $\hat\rho_s\overset{p}{\to}\rho_s$, we apply Theorem 5.21 of Van der Vaart [2000] to obtain the Taylor expansion

$$\hat{\mathcal{W}}_{SL} = n^{1/2}(\hat\theta_{SL} - \bar\theta) = n^{1/2}\frac{1}{N}\sum_{i=1}^{N}\hat w_i A^{-1}x_i\{y_i - g(\bar\theta^{\top}x_i)\} + o_p(1) = n^{1/2}\sum_{s=1}^{S}\{\rho_s + o_p(1)\}\Big\{n_s^{-1}\sum_{V_i=1} I(S_i = s)e_{SLi}\Big\} + o_p(1),$$

where $e_{SLi} = A^{-1}x_i\{y_i - g(\bar\theta^{\top}x_i)\}$ and $A = E\{x_i^{\otimes 2}\dot g(\bar\theta^{\top}x_i)\}$. It then follows from the classical Central Limit Theorem that $\hat{\mathcal{W}}_{SL}\to N(0, \Sigma_{SL})$ in distribution and

$$\hat{\mathcal{W}}_{SL} = n^{1/2}\sum_{s=1}^{S}\rho_s\Big\{n_s^{-1}\sum_{V_i=1} I(S_i = s)e_{SLi}\Big\} + o_p(1),$$

where $\Sigma_{SL} = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\{e_{SLi}^{\otimes 2}\mid S_i = s\}$.

D. Asymptotic Properties of $\hat D_{SL}(\hat\theta_{SL})$

We begin by showing that $\hat D_{SL}(\hat\theta_{SL})\overset{p}{\to}D(\bar\theta)$ as $n\to\infty$. We first note that, since $\hat\rho_s\overset{p}{\to}\rho_s$,

$$\hat D_{SL}(\theta) = \sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{V_i=1} I(S_i = s)\, d\{y_i, \mathcal{Y}(\theta^{\top}x_i)\}\Big] + o_p(1). \tag{D.1}$$

It follows from the ULLN that $\sup_{\theta\in\Theta}|\hat D_{SL}(\theta) - D(\theta)| = o_p(1)$, since $d\{y, \mathcal{Y}(\theta^{\top}x)\}$ is continuously differentiable in $\theta$ and uniformly bounded. The consistency of $\hat D_{SL}(\hat\theta_{SL})$ for $D(\bar\theta)$ then follows from the fact that $\hat\theta_{SL}$ converges in probability to $\bar\theta$ as $n\to\infty$.

To establish the asymptotic distribution of $\hat T_{SL} = n^{1/2}\{\hat D_{SL}(\hat\theta_{SL}) - D(\bar\theta)\}$, we first consider $\tilde T_{SL}(\theta) = n^{1/2}\{\hat D_{SL}(\theta) - D(\theta)\}$. We verify that there exists $\delta > 0$ such that the classes of functions indexed by $\theta$,

$$\mathcal{B}_1 = \big\{I(S = s)\,|y - I\{g(\theta^{\top}x) > c\}| : \|\theta - \bar\theta\|_2 < \delta\big\}$$
$$\text{and}\quad \mathcal{B}_2 = \big\{I(S = s)\,[y - g(\theta^{\top}x)]^2 : \|\theta - \bar\theta\|_2 < \delta\big\},$$

are Donsker classes. For $\mathcal{B}_1$, we note that $\{I\{g(\theta^{\top}x) > c\} : \|\theta - \bar\theta\|_2 < \delta\}$ is a Vapnik-Chervonenkis class [Van der Vaart, 2000, page 275] and thus

$$\mathcal{B}_1 = \big\{I(S = s)\big[I(y = 0)I\{g(\theta^{\top}x) > c\} + I(y = 1)I\{g(\theta^{\top}x) \le c\}\big] : \|\theta - \bar\theta\|_2 < \delta\big\}$$

is a Donsker class. For $\mathcal{B}_2$, $[y - g(\theta^{\top}x)]^2$ is continuously differentiable in $\theta$ and uniformly bounded by a constant. It follows that $\mathcal{B}_2$ is a Donsker class [Van der Vaart, 2000, Example 19.7]. By Theorem 19.5 of Van der Vaart [2000] we then have

$$n^{1/2}\{\hat D_{SL}(\theta) - D(\theta)\} = \sum_{s=1}^{S}\rho_s\rho_{1s}^{-1/2} n_s^{-1/2}\sum_{V_i=1} I(S_i = s)\big[d\{y_i, \mathcal{Y}(\theta^{\top}x_i)\} - D(\theta)\big] + o_p(1),$$

which converges weakly to a mean-zero Gaussian process indexed by $\theta$. Thus, $n^{1/2}\{\hat D_{SL}(\theta) - D(\theta)\}$ is stochastically equicontinuous at $\bar\theta$. In addition, note that $D(\theta)$ is continuously differentiable at $\bar\theta$ and $\hat\rho_s\overset{p}{\to}\rho_s$. It then follows that

$$\hat T_{SL} = n^{1/2}\{\hat D_{SL}(\hat\theta_{SL}) - D(\hat\theta_{SL})\} + n^{1/2}\{D(\hat\theta_{SL}) - D(\bar\theta)\} = n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{V_i=1} I(S_i = s)\big[d(y_i, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}e_{SLi}\big]\Big) + o_p(1),$$

which converges in distribution to $N(0, \sigma_{SL}^2)$, where

$$\sigma_{SL}^2 = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\big[\{d(y_i, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}e_{SLi}\}^2 \mid S_i = s\big].$$

E. Asymptotic Properties of $\check\theta_{SSL}$

We first consider the asymptotic properties of $\tilde\gamma$. Under Conditions 1–3, and using that $\lambda_n = o(n^{-1/2})$ and $\hat\rho_s\overset{p}{\to}\rho_s$, we can adapt the same arguments as in Appendix C to show that $\tilde\gamma\overset{p}{\to}\bar\gamma$ as $n\to\infty$. We obtain the following Taylor series expansion

$$n^{1/2}(\tilde\gamma - \bar\gamma) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n} I(S_i = s)C^{-1}\Phi_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\}\Big] + o_p(1)$$

where $C = E\{\Phi_i^{\otimes 2}\dot g(\bar\gamma^{\top}\Phi_i)\}$. We then have that $n^{1/2}(\tilde\gamma - \bar\gamma)$ converges to a zero-mean Gaussian distribution. To verify that $\check\theta_{SSL}$ is consistent for $\bar\theta$, it suffices to show that (i) $\sup_{\theta\in\Theta}\|\hat U_N(\theta) - U_0(\theta)\|_2 = o_p(1)$ and (ii) $\inf_{\|\theta - \bar\theta\|_2 \ge \epsilon}\|U_0(\theta)\|_2 > 0$ for any $\epsilon > 0$ [Newey and McFadden, 1994, Lemma 2.8], where

$$U_0(\theta) = E[x_i\{g(\bar\gamma^{\top}\Phi_i) - g(\theta^{\top}x_i)\}].$$

For (i), we first note that, since $\tilde\gamma\overset{p}{\to}\bar\gamma$, $\dot g(\cdot)$ is continuous, and $\Phi$ is bounded,

$$\hat U_N(\theta) = N^{-1}\sum_{i=1}^{N}x_i\{g(\tilde\gamma^{\top}\Phi_i) - g(\theta^{\top}x_i)\} = N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{\top}\Phi_i) - g(\theta^{\top}x_i)\} + o_p(1). \tag{E.1}$$

Note that $g(\bar\gamma^{\top}\Phi) - g(\theta^{\top}x)$ is bounded and continuously differentiable in $\theta$. We then apply the ULLN to obtain $\sup_{\theta\in\Theta}\|\hat U_N(\theta) - U_0(\theta)\|_2 = o_p(1)$. For (ii), we note that

$$U_0(\theta) = -E[x_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\}] + E[x_i\{y_i - g(\theta^{\top}x_i)\}] = E[x_i\{y_i - g(\theta^{\top}x_i)\}].$$

Therefore, (ii) holds by Lemma A1, and $\check\theta_{SSL}$ is consistent for $\bar\theta$.

Now we consider the weak convergence of $n^{1/2}(\check\theta_{SSL} - \bar\theta)$. Under Conditions 1–3, we have the Taylor expansion

$$n^{1/2}(\check\theta_{SSL} - \bar\theta) = n^{1/2}A^{-1}\Big[N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\} + B(\tilde\gamma - \bar\gamma)\Big] + o_p(1),$$

where $B = E\{x_i\Phi_i^{\top}\dot g(\bar\gamma^{\top}\Phi_i)\}$. This expansion, coupled with the fact that $N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\} = O_p(N^{-1/2}) = o_p(n^{-1/2})$, implies that

$$n^{1/2}(\check\theta_{SSL} - \bar\theta) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}A^{-1}BC^{-1}I(S_i = s)\Phi_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\}\Big] + o_p(1).$$

Letting $x_i = (x_{i1}, \ldots, x_{ip})^{\top}$, we note that for $j = 1, \ldots, p$,

$$[BC^{-1}]_j = E\{x_{ij}\Phi_i^{\top}\dot g(\bar\gamma^{\top}\Phi_i)\}\big[E\{\Phi_i^{\otimes 2}\dot g(\bar\gamma^{\top}\Phi_i)\}\big]^{-1} = \arg\min_{\beta} E\big\{\dot g(\bar\gamma^{\top}\Phi_i)(x_{ij} - \beta^{\top}\Phi_i)^2\big\}.$$

Since $x_{ij}$ is a component of the vector $\Phi(u)$, the minimizer $\beta$ can be chosen such that $x_{ij} - \beta^{\top}\Phi_i = 0$ for $i = 1, \ldots, N$, which implies that $x_i = BC^{-1}\Phi_i$. Thus

$$n^{1/2}(\check\theta_{SSL} - \bar\theta) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big\{n_s^{-1}\sum_{i=1}^{n}I(S_i = s)e_{SSLi}\Big\} + o_p(1)$$

where $e_{SSLi} = A^{-1}x_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\}$. It then follows from the classical Central Limit Theorem that $n^{1/2}(\check\theta_{SSL} - \bar\theta)\to N(0, \Sigma_{SSL})$ in distribution, where

$$\Sigma_{SSL} = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\{e_{SSLi}^{\otimes 2}\mid S_i = s\}.$$

We then see that

$$\Sigma_{SL} - \Sigma_{SSL} = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}\big[E\{e_{SLi}^{\otimes 2}\mid S_i = s\} - E\{e_{SSLi}^{\otimes 2}\mid S_i = s\}\big] = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}A^{-1}E\big[x_i^{\otimes 2}\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\}^2 + 2x_i^{\otimes 2}\{y_i - g(\bar\gamma^{\top}\Phi_i)\}\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\}\mid S_i = s\big]A^{-1}.$$

Therefore, when the imputation model is correctly specified, it follows that

$$\Sigma_{SL} - \Sigma_{SSL} = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}A^{-1}E\big[x_i^{\otimes 2}\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\}^2\mid S_i = s\big]A^{-1} \succeq 0.$$

F. Asymptotic Properties of $\hat D_{SSL}(\hat\theta_{SSL})$ and $\hat D_{SSL}(\check\theta_{SSL})$

First note that, by Lemma A1, there exists $\delta > 0$ such that $\bar\nu_\theta$ is unique for all $\theta$ satisfying $\|\theta - \bar\theta\|_2 < \delta$. Then, similar to the derivations in Appendices C and E, we may show that $\tilde\nu_\theta$ is consistent for $\bar\nu_\theta$ and that $n^{1/2}(\tilde\nu_\theta - \bar\nu_\theta)$ is asymptotically Gaussian with mean zero.

Let $\Theta_\delta = \Theta\cap\{\theta : \|\theta - \bar\theta\|_2 < \delta\}$. For the consistency of $\hat D_{SSL}(\hat\theta_{SSL})$ for $D(\bar\theta)$, we note that the uniform consistency of $\tilde\nu_\theta$ for $\bar\nu_\theta$ and of $\tilde\gamma$ for $\bar\gamma$, together with the ULLN, regularity Conditions 1–3, and $\hat\rho_s\overset{p}{\to}\rho_s$, implies $\sup_{\theta\in\Theta_\delta}|\hat D_{SSL}(\theta) - D(\theta)| = o_p(1)$. It then follows from the consistency of $\hat\theta_{SSL}$ and $\check\theta_{SSL}$ for $\bar\theta$ that $\hat D_{SSL}(\hat\theta_{SSL})\overset{p}{\to}D(\bar\theta)$ and $\hat D_{SSL}(\check\theta_{SSL})\overset{p}{\to}D(\bar\theta)$ as $n\to\infty$.

To derive the asymptotic distributions of $\hat T_{SSL} = n^{1/2}\{\hat D_{SSL}(\hat\theta_{SSL}) - D(\bar\theta)\}$ and $\check T_{SSL} = n^{1/2}\{\hat D_{SSL}(\check\theta_{SSL}) - D(\bar\theta)\}$, we first consider

$$\tilde T_{SSL}(\theta) = n^{1/2}\{\hat D_{SSL}(\theta) - D(\theta)\} = n^{1/2}\Big[N^{-1}\sum_{i=1}^{N}d\{g(\tilde\gamma^{\top}\Phi_i + \tilde\nu_\theta^{\top}z_{i\theta}), \mathcal{Y}(\theta^{\top}x_i)\} - D(\theta)\Big].$$

Under Conditions 1–3, by a Taylor series expansion about $\bar\gamma$ and $\bar\nu_\theta$ and the ULLN,

$$\tilde T_{SSL}(\theta) = n^{1/2}\Big[N^{-1}\sum_{i=1}^{N}d\{g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta}), \mathcal{Y}(\theta^{\top}x_i)\} - D(\theta) + G_\theta^{\top}(\tilde\gamma - \bar\gamma) + H_\theta^{\top}(\tilde\nu_\theta - \bar\nu_\theta)\Big]$$

where $G_\theta = E[\Phi_i\dot g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\{1 - 2\mathcal{Y}(\theta^{\top}x_i)\}]$ and $H_\theta = E[z_{i\theta}\dot g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\{1 - 2\mathcal{Y}(\theta^{\top}x_i)\}]$. From the previous section we have

$$n^{1/2}(\tilde\gamma - \bar\gamma) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}C^{-1}I(S_i = s)\Phi_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\}\Big] + o_p(1).$$

Similar arguments can be used to verify that

$$n^{1/2}(\tilde\nu_\theta - \bar\nu_\theta) = n^{1/2}J_\theta^{-1}\Big[\sum_{s=1}^{S}\rho_s n_s^{-1}\sum_{i=1}^{n}I(S_i = s)z_{i\theta}\{y_i - g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\} + K_\theta(\tilde\gamma - \bar\gamma)\Big] + o_p(1)$$

where $J_\theta = E\{z_{i\theta}^{\otimes 2}\dot g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\}$ and $K_\theta = -E\{z_{i\theta}\Phi_i^{\top}\dot g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\}$. These results, together with the fact that $N^{-1/2}\sum_{i=1}^{N}\big[d\{g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta}), \mathcal{Y}(\theta^{\top}x_i)\} - D(\theta)\big]$ converges weakly to a zero-mean Gaussian process in $\theta$, imply that

$$\tilde T_{SSL}(\theta) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[(G_\theta^{\top} + H_\theta^{\top}J_\theta^{-1}K_\theta)C^{-1}\Phi_i\{y_i - g(\bar\gamma^{\top}\Phi_i)\} + H_\theta^{\top}J_\theta^{-1}z_{i\theta}\{y_i - g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\}\big]\Big) + o_p(1).$$

We may simplify this expression by noting that $\{1 - 2\mathcal{Y}(\theta^{\top}x_i)\}$ is a linear combination of $z_{i\theta}$ and hence $H_\theta^{\top}J_\theta^{-1}z_{i\theta} = \{1 - 2\mathcal{Y}(\theta^{\top}x_i)\}$. Additionally, $H_\theta^{\top}J_\theta^{-1}K_\theta = -G_\theta^{\top}$, which implies that $(G_\theta^{\top} + H_\theta^{\top}J_\theta^{-1}K_\theta)C^{-1} = 0$. Thus,

$$\tilde T_{SSL}(\theta) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\{1 - 2\mathcal{Y}(\theta^{\top}x_i)\}\{y_i - g(\bar\gamma^{\top}\Phi_i + \bar\nu_\theta^{\top}z_{i\theta})\}\Big] + o_p(1).$$

This, combined with the fact that $D(\theta)$ is continuously differentiable at $\bar\theta$, that $\hat{\mathbb{W}}$ is consistent for its limiting value $\mathbb{W}$ introduced in Appendix B, and Conditions 1–3, then gives

$$\hat T_{SSL} = n^{1/2}\{\hat D_{SSL}(\hat\theta_{SSL}) - D(\hat\theta_{SSL})\} + n^{1/2}\{D(\hat\theta_{SSL}) - D(\bar\theta)\}$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[\{1 - 2\bar{\mathcal{Y}}_i\}\{y_i - g(\bar\gamma^{\top}\Phi_i + \bar\nu_{\bar\theta}^{\top}z_{i\bar\theta})\} + \dot D(\bar\theta)^{\top}\{\mathbb{W}e_{SSLi} + (I - \mathbb{W})e_{SLi}\}\big]\Big) + o_p(1)$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[\{d(y_i, \bar{\mathcal{Y}}_i) - d(m_{II,i}, \bar{\mathcal{Y}}_i)\} + \dot D(\bar\theta)^{\top}\{\mathbb{W}e_{SSLi} + (I - \mathbb{W})e_{SLi}\}\big]\Big) + o_p(1),$$

where $m_{II,i} = g(\bar\gamma^{\top}\Phi_i + \bar\nu_{\bar\theta}^{\top}z_{i\bar\theta})$ denotes the limiting augmented imputation. Note that the existence of $\dot D(\bar\theta)$ is implied by Condition 1, namely, that the density function of $\bar\theta^{\top}x$ is continuously differentiable in $\bar\theta^{\top}x$ and that $P(y = 1\mid u)$ is continuously differentiable in the continuous components of $u$. We then have that $n^{1/2}\{\hat D_{SSL}(\hat\theta_{SSL}) - D(\bar\theta)\}$ converges to a zero-mean normal random variable by the classical Central Limit Theorem. Using similar arguments as those for $\hat T_{SL}$, we have that $\check T_{SSL} = n^{1/2}\{\hat D_{SSL}(\check\theta_{SSL}) - D(\bar\theta)\}$ can be expanded as

$$n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[\{d(y_i, \bar{\mathcal{Y}}_i) - d(m_{II,i}, \bar{\mathcal{Y}}_i)\} + \dot D(\bar\theta)^{\top}e_{SSLi}\big]\Big) + o_p(1),$$

which also converges to a zero-mean normal random variable.

Comparing the asymptotic variance of $\check T_{SSL}$ with that of $\hat T_{SL}$, first note that

$$\hat T_{SL} = n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[d(y_i, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}e_{SLi}\big]\Big) + o_p(1)$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[d(y_i, \bar{\mathcal{Y}}_i) - d(m_{II,i}, \bar{\mathcal{Y}}_i) + d(m_{II,i}, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}A^{-1}x_i\{y_i - g(\bar\theta^{\top}x_i)\}\big]\Big) + o_p(1)$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big(n_s^{-1}\sum_{i=1}^{n}I(S_i = s)\big[(1 - 2\bar{\mathcal{Y}}_i)\{y_i - g(\bar\gamma^{\top}\Phi_i + \bar\nu_{\bar\theta}^{\top}z_{i\bar\theta})\} + d(m_{II,i}, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}A^{-1}x_i\{y_i - g(\bar\gamma^{\top}\Phi_i) + g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\}\big]\Big) + o_p(1).$$

Letting $h_1(\Phi_i) = 1 - 2\bar{\mathcal{Y}}_i + \dot D(\bar\theta)^{\top}A^{-1}x_i$ and

$$h_2(\Phi_i) = d(m_{II,i}, \bar{\mathcal{Y}}_i) - D(\bar\theta) + \dot D(\bar\theta)^{\top}A^{-1}x_i\{g(\bar\gamma^{\top}\Phi_i) - g(\bar\theta^{\top}x_i)\},$$

we note that $h_1(\Phi_i)$ and $h_2(\Phi_i)$ are functions of $\Phi_i$ and do not depend on $y_i$. Thus, when $P(y = 1\mid u) = g(\bar\gamma^{\top}\Phi)$, we have $\bar\nu = 0$ and

$$\sigma_{SL}^2 = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\big[h_1^2(\Phi_i)\{y_i - g(\bar\gamma^{\top}\Phi_i)\}^2 + 2h_1(\Phi_i)h_2(\Phi_i)\{y_i - g(\bar\gamma^{\top}\Phi_i)\} + h_2^2(\Phi_i)\mid S_i = s\big] = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\big[h_1^2(\Phi_i)\{y_i - g(\bar\gamma^{\top}\Phi_i)\}^2 + h_2^2(\Phi_i)\mid S_i = s\big],$$

while the asymptotic variance of $\check T_{SSL}$ is

$$\sigma_{SSL}^2 = \sum_{s=1}^{S}\rho_s^2\rho_{1s}^{-1}E\big[h_1^2(\Phi_i)\{y_i - g(\bar\gamma^{\top}\Phi_i)\}^2\mid S_i = s\big].$$

Therefore, when $P(y = 1\mid u) = g(\bar\gamma^{\top}\Phi)$, it follows that $\Delta_{\mathrm{aVar}} := \sigma_{SL}^2 - \sigma_{SSL}^2 > 0$. Additionally, when model (1) is correct and $P(y = 1\mid u) = g(\bar\gamma^{\top}\Phi) = g(\bar\theta^{\top}x)$, we have $h_2(\Phi_i) = d(m_{II,i}, \bar{\mathcal{Y}}_i) - D(\bar\theta)$, which is not equal to 0 with probability 1, so that again $\Delta_{\mathrm{aVar}} > 0$.

G. Intrinsic Efficient Estimation

G.1. Intrinsic Efficient Estimator for $\bar D$

We first introduce the intrinsic efficient estimator of the accuracy measures. Without loss of generality, we set the imputation basis for both $\theta$ and $D(\theta)$ as $\Psi_{\theta i} = [\Phi_i^{\top}, \mathcal{Y}(\theta^{\top}x_i)]^{\top}$, where $\theta$ is plugged in with some preliminary estimator, denoted $\tilde\theta$. In practice, one may take $\tilde\theta$ to be either the simple SL estimator or the SSL estimator obtained following Section 3.1. We include $\mathcal{Y}(\theta^{\top}x_i)$ in the imputation basis to simplify the notation and presentation in this section. Although this distinguishes the following discussion from the proposal in Section 3.2, it is straightforward to extend our results to the original proposal.

Recall that for the original SSL estimator of the regression parameter, one first obtains $\tilde\gamma_{\tilde\theta}$ as the solution to

$$N^{-1}\sum_{i=1}^{N}\hat w_i\Psi_{\tilde\theta i}\{y_i - g(\gamma^{\top}\Psi_{\tilde\theta i})\} - \lambda_n\gamma = 0$$

and then solves $N^{-1}\sum_{i=1}^{N}x_i\{g(\tilde\gamma_{\tilde\theta}^{\top}\Psi_{\tilde\theta i}) - g(\theta^{\top}x_i)\} = 0$ to obtain the estimator of $\bar\theta$. Despite the change in basis, we still denote this estimator by $\hat\theta_{SSL}$ with a slight abuse of notation. Adapting the augmentation procedure in Section 3.2, we then find $\tilde\gamma_{\hat\theta_{SSL}}$ as the solution to

$$N^{-1}\sum_{i=1}^{N}\hat w_i\Psi_{\hat\theta_{SSL}i}\{y_i - g(\gamma^{\top}\Psi_{\hat\theta_{SSL}i})\} - \lambda_n\gamma = 0,$$

and estimate $\bar D$ with $\hat D_{SSL} = \hat D_{SSL}(\hat\theta_{SSL})$, where $\hat D_{SSL}(\theta) = N^{-1}\sum_{i=1}^{N}d\{g(\tilde\gamma_\theta^{\top}\Psi_{\theta i}), \mathcal{Y}(\theta^{\top}x_i)\}$. Extending Theorem 2, the asymptotic variance of $n^{1/2}\{\hat D_{SSL}(\hat\theta_{SSL}) - \bar D\}$ may be expressed as

$$\frac{1}{n}\sum_{i=1}^{n}E\Big[\zeta_i\{1 - 2\bar{\mathcal{Y}}_i + \dot D(\bar\theta)^{\top}A^{-1}x_i\}^2\{y_i - g(\bar\gamma_{\bar\theta}^{\top}\bar\Psi_i)\}^2\Big], \tag{G.1}$$

where $\bar\gamma_{\bar\theta}$ represents the limit of $\tilde\gamma_{\hat\theta_{SSL}}$ (or $\tilde\gamma_{\tilde\theta}$) and $\bar\Psi_i = \Psi_{\bar\theta i} = [\Phi_i^{\top}, \mathcal{Y}(\bar\theta^{\top}x_i)]^{\top}$. Analogous to the construction of $e^{\top}\hat\theta_{intri}$, we consider minimizing the asymptotic variance given by (G.1) to estimate $\bar D$. Specifically, we first solve for $\tilde\gamma_{\tilde\theta}^{(2)}$ with

$$\arg\min_{\gamma}\ \frac{1}{2n}\sum_{i=1}^{n}\hat\zeta_i\{1 - 2\mathcal{Y}(\tilde\theta^{\top}x_i) + \hat{\dot D}{}^{\top}\hat A^{-1}x_i\}^2\{y_i - g(\gamma^{\top}\Psi_{\tilde\theta i})\}^2 + \lambda_n^{(2)}\|\gamma\|_2^2, \quad\text{s.t.}\ \frac{1}{N}\sum_{i=1}^{N}\hat w_i[x_i^{\top}, \mathcal{Y}(\tilde\theta^{\top}x_i)]^{\top}\{y_i - g(\gamma^{\top}\Psi_{\tilde\theta i})\} = 0, \tag{G.2}$$

where $\hat{\dot D}$ is an estimator of $\dot D(\bar\theta)$ and the tuning parameter $\lambda_n^{(2)} = o(n^{-1/2})$. Similar to (3) and (9), the moment constraints in (G.2) calibrate the potential bias of the estimators of $\bar\theta$ and $\bar D$.
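In practice, (G.2) is a weighted, ridge-penalized least squares problem with moment equality constraints, which can be handed to a generic constrained optimizer. The sketch below does exactly this with SLSQP; all input names are hypothetical placeholders and the logistic $g$ is assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

g = lambda t: 1.0 / (1.0 + np.exp(-t))  # assumed logistic link

def solve_G2(Psi, y, zeta, h1, C, w_hat, labeled, lam2):
    """Schematic solver for (G.2). Psi: (N, q) imputation basis Psi_{theta_tilde};
    h1: precomputed 1 - 2*Y(theta_tilde'x_i) + D_dot_hat' A_hat^{-1} x_i values;
    C: (N, r) constraint basis [x_i', Y(theta_tilde'x_i)]; labeled: boolean mask;
    w_hat: IPW weights (zero off the labeled set)."""
    n, N = labeled.sum(), Psi.shape[0]

    def objective(gamma):
        r = y[labeled] - g(Psi[labeled] @ gamma)
        return (zeta[labeled] * h1[labeled] ** 2 * r ** 2).sum() / (2 * n) \
               + lam2 * (gamma ** 2).sum()

    def moment(gamma):
        # constraint: N^{-1} sum_i w_hat_i C_i {y_i - g(gamma' Psi_i)} = 0
        return C.T @ (w_hat * (y - g(Psi @ gamma))) / N

    res = minimize(objective, x0=np.zeros(Psi.shape[1]), method="SLSQP",
                   constraints=[{"type": "eq", "fun": moment}])
    return res.x
```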

Next, we present the construction of $\hat{\dot D}$ for the Brier score and the OMR separately. For the Brier score, $\bar D_1$, we take

$$\hat{\dot D} = \hat{\dot D}_1 = -\frac{1}{N}\sum_{i=1}^{N}2\hat w_i\dot g(\tilde\theta^{\top}x_i)\{y_i - g(\tilde\theta^{\top}x_i)\}x_i.$$

For the OMR, $\bar D_2$, recall that a simple estimator is given by the empirical average

$$\frac{1}{N}\sum_{i=1}^{N}\hat w_i\big[y_i + (1 - 2y_i)I\{g(\tilde\theta^{\top}x_i) > c\}\big].$$

Since $I\{g(\theta^{\top}x_i) > c\}$ is not a differentiable function of $\theta$, we first smooth each $I\{g(\tilde\theta^{\top}x_i) > c\}$ as $\int_c^{+\infty}K_h\{g(\tilde\theta^{\top}x_i) - u\}\,du$, where $K(\cdot)$ represents the Gaussian kernel function and $K_h(a) := h^{-1}K(a/h)$ with some bandwidth $h > 0$. Then, $\dot D(\bar\theta)$ is estimated with

$$\hat{\dot D} = \hat{\dot D}_2 = \frac{1}{N}\sum_{i=1}^{N}\hat w_i(1 - 2y_i)\dot g(\tilde\theta^{\top}x_i)K_h\{g(\tilde\theta^{\top}x_i) - c\}x_i.$$
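A direct transcription of $\hat{\dot D}_2$ is shown below; the Gaussian kernel and the $n^{-1/4}$ default bandwidth follow the rate used in Theorem A1, while the logistic link and the array names are our own illustrative assumptions.

```python
import numpy as np

g = lambda t: 1.0 / (1.0 + np.exp(-t))      # assumed logistic link
g_dot = lambda t: g(t) * (1.0 - g(t))       # its derivative

def omr_derivative(X, y, w_hat, theta_tilde, c, h=None):
    """D_dot_hat_2 = N^{-1} sum_i w_hat_i (1 - 2 y_i) g_dot(theta' x_i)
                      * K_h{g(theta' x_i) - c} * x_i, with K the Gaussian kernel."""
    N = X.shape[0]
    if h is None:
        n = int((w_hat > 0).sum())          # number of labeled observations
        h = n ** -0.25                      # bandwidth of order n^{-1/4}
    lin = X @ theta_tilde
    K_h = np.exp(-0.5 * ((g(lin) - c) / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    return X.T @ (w_hat * (1 - 2 * y) * g_dot(lin) * K_h) / N
```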

With $\tilde\gamma_{\tilde\theta}^{(2)}$ in hand, we then solve

$$N^{-1}\sum_{i=1}^{N}x_i\{g(\tilde\gamma_{\tilde\theta}^{(2)\top}\Psi_{\tilde\theta i}) - g(\theta^{\top}x_i)\} = 0$$

to obtain $\hat\theta_{intri}^{D}$ for estimation of $\bar D$ and employ the augmentation procedure in Section 3.2.

That is, we solve for $\tilde\gamma_{\hat\theta_{intri}^{D}}^{(2)}$ from

$$N^{-1}\sum_{i=1}^{N}\hat w_i\Psi_{\hat\theta_{intri}^{D}i}\{y_i - g(\gamma^{\top}\Psi_{\hat\theta_{intri}^{D}i})\} - \lambda_n^{(2)}\gamma = 0,$$

and estimate $\bar D$ by $\hat D_{intri} = \hat D_{intri}(\hat\theta_{intri}^{D})$, where $\hat D_{intri}(\theta) = N^{-1}\sum_{i=1}^{N}d\{g(\tilde\gamma_\theta^{(2)\top}\Psi_{\theta i}), \mathcal{Y}(\theta^{\top}x_i)\}$.

To present the asymptotic properties of $\hat D_{intri}$, we define

$$\bar\gamma_{\bar\theta}^{(2)} = \arg\min_{\gamma}E\big[R\{1 - 2\bar{\mathcal{Y}} + \dot D(\bar\theta)^{\top}A^{-1}x\}^2\{y - g(\gamma^{\top}\bar\Psi)\}^2\big], \quad\text{s.t.}\ E\big[[x^{\top}, \mathcal{Y}(\bar\theta^{\top}x)]^{\top}\{y - g(\gamma^{\top}\bar\Psi)\}\big] = 0.$$

Theorem A1 provides the asymptotic expansion of $\hat D_{intri}$; its proof, together with that of Theorem 3 from the main text, is detailed in Appendix G.2.

Theorem A1.

Under Condition 1, Conditions A1 and A2 from Appendix G.2, and with bandwidth $h \asymp n^{-1/4}$, $n^{1/2}(\hat D_{intri} - \bar D)$ converges weakly to a Gaussian distribution with mean zero and is asymptotically equivalent to $\hat T(\bar\gamma_{\bar\theta}^{(2)})$, where

$$\hat T(\gamma) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{N}V_i I(S_i = s)\{1 - 2\bar{\mathcal{Y}}_i + \dot D(\bar\theta)^{\top}A^{-1}x_i\}\{y_i - g(\gamma^{\top}\bar\Psi_i)\}\Big].$$

This implies that (i) $\hat D_{intri}$ is asymptotically equivalent to $\hat D_{SSL}$ when the imputation model $P(y = 1\mid u) = g(\gamma^{\top}\bar\Psi)$ is correctly specified and (ii) the asymptotic variance of $n^{1/2}(\hat D_{intri} - \bar D)$ is minimized among those of $\{\hat T(\gamma) : E[[x^{\top}, \mathcal{Y}(\bar\theta^{\top}x)]^{\top}\{y - g(\gamma^{\top}\bar\Psi)\}] = 0\}$. Consequently, the asymptotic variance of the intrinsic efficient estimator is always less than or equal to the asymptotic variances of $n^{1/2}(\hat D_{SSL} - \bar D)$ and $n^{1/2}(\hat D_{SL} - \bar D)$.

G.2. Asymptotic Properties of $\hat\theta_{intri}$ and $\hat D_{intri}$

We first introduce a smoothness condition on the link function $g(\cdot)$ that is stronger than Condition 2 but still holds for the most commonly used link functions, such as the logit and probit functions.

Condition A1.

The link function $g(\cdot) \in (0, 1)$ is continuously twice differentiable with first derivative $\dot g(\cdot)$ and second derivative $\ddot g(\cdot)$.

Given Condition A1, we let $\bar\gamma^{(2)} = \bar\gamma_{\bar\theta}^{(2)}$ and define

$$\mathbb{A}_1 = E\big[R(e^{\top}A^{-1}x)^2\Phi^{\otimes 2}\big\{\dot g^2(\bar\gamma^{(1)\top}\Phi) + \ddot g(\bar\gamma^{(1)\top}\Phi)[y - g(\bar\gamma^{(1)\top}\Phi)]\big\}\big],$$
$$\mathbb{A}_2 = E\big[R\{1 - 2\bar{\mathcal{Y}} + \dot D(\bar\theta)^{\top}A^{-1}x\}^2\bar\Psi^{\otimes 2}\big\{\dot g^2(\bar\gamma^{(2)\top}\bar\Psi) + \ddot g(\bar\gamma^{(2)\top}\bar\Psi)[y - g(\bar\gamma^{(2)\top}\bar\Psi)]\big\}\big],$$

$\mathbb{B}_1 = E[\Phi x^{\top}\dot g(\bar\gamma^{(1)\top}\Phi)]$, and $\mathbb{B}_2 = E[\bar\Psi\{x^{\top}, \mathcal{Y}(\bar\theta^{\top}x)\}\dot g(\bar\gamma^{(2)\top}\bar\Psi)]$. We next present the regularity condition on the covariates and regression coefficients required by Theorem 3.

Condition A2.

There exists $\Theta^* = \{\theta : \|\theta - \bar\theta\|_2 < \delta\}$ for some $\delta > 0$ such that, for any $\theta\in\Theta^*$, there is no $\gamma$ for which $P(\gamma^{\top}\Phi_1 > \gamma^{\top}\Phi_2\mid y_1 > y_2) = 1$ or $P(\gamma^{\top}\Psi_{\theta 1} > \gamma^{\top}\Psi_{\theta 2}\mid y_1 > y_2) = 1$. It is also the case that $A \succ 0$, $\mathbb{A}_1 \succ 0$, $\mathbb{A}_2 \succ 0$, $\mathbb{B}_1^{\top}\mathbb{A}_1^{-1}\mathbb{B}_1 \succ 0$, and $\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\mathbb{B}_2 \succ 0$.

Remark A1.

Condition A2 is analogous to Condition 3. It assumes that no linear combination of $\Phi$ or $\bar\Psi$ perfectly separates the samples based on $y$, and that the Hessian matrices of the constrained least squares problems for $\bar\gamma^{(1)}$ and $\bar\gamma^{(2)}$ are positive definite. Again, these assumptions are mild and common in the M-estimation literature [Van der Vaart, 2000].

Under these regularity conditions, we present the proofs of Theorems 3 and A1. In our development, we take $\tilde\theta$ to be the SSL estimator of $\theta$ introduced in Section 3.1, but note that the proof remains essentially unchanged when $\tilde\theta$ is taken to be the SL estimator. We first derive the consistency (with error rates) of $\hat{\dot D}_1$ and $\hat{\dot D}_2$. For the Brier score, let $\bar{\dot D}_1$ denote the derivative of $D_1(\theta)$ evaluated at $\bar\theta$. We then use $\hat\rho_s\overset{p}{\to}\rho_s$, Theorem 1, Conditions 1 and A1, and the classical Central Limit Theorem to derive

$$\hat{\dot D}_1 - \bar{\dot D}_1 = -\frac{1}{N}\sum_{i=1}^{N}2\hat w_i\dot g(\tilde\theta^{\top}x_i)\{y_i - g(\tilde\theta^{\top}x_i)\}x_i + \frac{1}{N}\sum_{i=1}^{N}2w_i\dot g(\bar\theta^{\top}x_i)\{y_i - g(\bar\theta^{\top}x_i)\}x_i - \frac{1}{N}\sum_{i=1}^{N}2w_i\dot g(\bar\theta^{\top}x_i)\{y_i - g(\bar\theta^{\top}x_i)\}x_i + E\big[2\dot g(\bar\theta^{\top}x)\{y - g(\bar\theta^{\top}x)\}x\big] = O_p(\|\tilde\theta - \bar\theta\|_2) + O_p(n^{-1/2}) = O_p(n^{-1/2}).$$

For the estimator of the derivative of the OMR, let $\bar{\dot D}_2$ denote the limiting value of $\hat{\dot D}_2$. We then have

$$\hat{\dot D}_2 - \bar{\dot D}_2 = \frac{1}{N}\sum_{i=1}^{N}(1 - 2y_i)\big[\hat w_i\dot g(\tilde\theta^{\top}x_i)K_h\{g(\tilde\theta^{\top}x_i) - c\} - w_i\dot g(\bar\theta^{\top}x_i)K_h\{g(\bar\theta^{\top}x_i) - c\}\big]x_i + \frac{1}{N}\sum_{i=1}^{N}w_i(1 - 2y_i)\dot g(\bar\theta^{\top}x_i)K_h\{g(\bar\theta^{\top}x_i) - c\}x_i - E\big[(1 - 2y)\dot g(\bar\theta^{\top}x)x\mid g(\bar\theta^{\top}x) = c\big]f_g(c) =: \Delta_1 + \Delta_2,$$

where $f_g(c)$ represents the density function of $g(\bar\theta^{\top}x)$ evaluated at $c$. This decomposition uses the fact that $\bar{\dot D}_2 = E[(1 - 2y)\dot g(\bar\theta^{\top}x)x\mid g(\bar\theta^{\top}x) = c]f_g(c)$. Since the Gaussian kernel $K(\cdot)$ is continuously differentiable, and by Theorem 1 and Conditions 1 and A1, we have

$$\|\Delta_1\|_2 = h^{-1}O_p(\|\tilde\theta - \bar\theta\|_2) = O_p(n^{-1/2}h^{-1}).$$

For $\Delta_2$, Condition 1 and the classical Central Limit Theorem imply that

$$\Big\|\frac{1}{N}\sum_{i=1}^{N}w_i(1 - 2y_i)\dot g(\bar\theta^{\top}x_i)K_h\{g(\bar\theta^{\top}x_i) - c\}x_i - E\big[(1 - 2y)\dot g(\bar\theta^{\top}x)K_h\{g(\bar\theta^{\top}x) - c\}x\big]\Big\|_2 = O_p\{(nh)^{-1/2}\},$$

and from Condition 1,

$$E\big[(1 - 2y)\dot g(\bar\theta^{\top}x)K_h\{g(\bar\theta^{\top}x) - c\}x\big] - E\big[(1 - 2y)\dot g(\bar\theta^{\top}x)x\mid g(\bar\theta^{\top}x) = c\big]f_g(c) = \int_0^1\{r(u)f_g(u)K_h(u - c) - r(c)f_g(c)\}\,du = \int_{-c/h}^{(1-c)/h}\{r(c + hv)f_g(c + hv) - r(c)f_g(c)\}K(v)\,dv = O(h),$$

where $r(u) = \int_{\{x : g(\bar\theta^{\top}x) = u\}}\{1 - 2P(y = 1\mid x)\}\dot g(\bar\theta^{\top}x)x f_{x\mid g}(x\mid u)\,dx$ and $f_{x\mid g}(\cdot\mid u)$ represents the conditional density of $x$ given $g(\bar\theta^{\top}x) = u$. By Condition 1, there exists $C > 0$ such that $\|r(a) - r(b)\|_2 \le C|a - b|$ for any $a, b$. Thus, we have $\|\Delta_2\|_2 = O_p\{(nh)^{-1/2} + h\}$ and, with $h \asymp n^{-1/4}$, we obtain $\|\hat{\dot D}_2 - \bar{\dot D}_2\|_2 = O_p(n^{-1/4})$. It then follows that, for both the Brier score and the OMR, $\|\hat{\dot D} - \bar{\dot D}\|_2 = O_p(n^{-1/4}) = o_p(1)$.

Leveraging these results, we establish the asymptotic normality of $\tilde\gamma^{(1)}$ and $\tilde\gamma_{\tilde\theta}^{(2)}$. Similar to Appendices C and E, we apply the ULLN [Pollard, 1990], together with Conditions 1, A1, and A2, the facts that $\hat\rho_s\overset{p}{\to}\rho_s$ and $\hat\rho_{1s}\overset{p}{\to}\rho_{1s}$, and the consistency of $\tilde\theta$, $\hat A^{-1}$, and $\hat{\dot D}$ for their respective limits, to obtain

$$\sup_{\gamma\in\Gamma^{(1)}}\Big|\frac{1}{n}\sum_{i=1}^{n}\hat\zeta_i(e^{\top}\hat A^{-1}x_i)^2\{y_i - g(\gamma^{\top}\Phi_i)\}^2 - E\big[R(e^{\top}A^{-1}x)^2\{y - g(\gamma^{\top}\Phi)\}^2\big]\Big| = o_p(1);$$
$$\sup_{\gamma\in\Gamma^{(1)}}\Big\|\frac{1}{N}\sum_{i=1}^{N}\hat w_i x_i\{y_i - g(\gamma^{\top}\Phi_i)\} - E\big[x\{y - g(\gamma^{\top}\Phi)\}\big]\Big\|_2 = o_p(1);$$
$$\sup_{\gamma\in\Gamma^{(2)}}\Big|\frac{1}{n}\sum_{i=1}^{n}\hat\zeta_i\{1 - 2\mathcal{Y}(\tilde\theta^{\top}x_i) + \hat{\dot D}{}^{\top}\hat A^{-1}x_i\}^2\{y_i - g(\gamma^{\top}\Psi_{\tilde\theta i})\}^2 - E\big[R\{1 - 2\bar{\mathcal{Y}} + \dot D(\bar\theta)^{\top}A^{-1}x\}^2\{y - g(\gamma^{\top}\bar\Psi)\}^2\big]\Big| = o_p(1);$$
$$\sup_{\gamma\in\Gamma^{(2)}}\Big\|\frac{1}{N}\sum_{i=1}^{N}\hat w_i[x_i^{\top}, \mathcal{Y}(\tilde\theta^{\top}x_i)]^{\top}\{y_i - g(\gamma^{\top}\Psi_{\tilde\theta i})\} - E\big[[x^{\top}, \mathcal{Y}(\bar\theta^{\top}x)]^{\top}\{y - g(\gamma^{\top}\bar\Psi)\}\big]\Big\|_2 = o_p(1),$$

where $\Gamma^{(1)}$ and $\Gamma^{(2)}$ are two compact sets containing $\bar\gamma^{(1)}$ and $\bar\gamma^{(2)}$, respectively. This implies that $\|\tilde\gamma^{(1)} - \bar\gamma^{(1)}\|_2 = o_p(1)$ and $\|\tilde\gamma_{\tilde\theta}^{(2)} - \bar\gamma^{(2)}\|_2 = o_p(1)$. We then expand (9) and (G.2) to derive

$$\tilde\gamma^{(1)} = \arg\min_{\gamma}\ (\gamma - \bar\gamma^{(1)})^{\top}\big[\mathbb{A}_1(\gamma - \bar\gamma^{(1)}) + 2\{1 + o_p(1)\}\Xi_{11} + o_p(\|\tilde\gamma^{(1)} - \bar\gamma^{(1)}\|_2 + n^{-1/2})\big], \quad\text{s.t.}\ \mathbb{B}_1^{\top}(\gamma - \bar\gamma^{(1)}) - \{1 + o_p(1)\}\Xi_{12} + o_p(\|\tilde\gamma^{(1)} - \bar\gamma^{(1)}\|_2 + n^{-1/2}) = 0;$$
$$\tilde\gamma_{\tilde\theta}^{(2)} = \arg\min_{\gamma}\ (\gamma - \bar\gamma^{(2)})^{\top}\big[\mathbb{A}_2(\gamma - \bar\gamma^{(2)}) + 2\{1 + o_p(1)\}\Xi_{21} + o_p(\|\tilde\gamma^{(2)} - \bar\gamma^{(2)}\|_2 + n^{-1/2})\big], \quad\text{s.t.}\ \mathbb{B}_2^{\top}(\gamma - \bar\gamma^{(2)}) - \{1 + o_p(1)\}\Xi_{22} + o_p(\|\tilde\gamma^{(2)} - \bar\gamma^{(2)}\|_2 + n^{-1/2}) = 0,$$

where

$$\Xi_{11} = \frac{1}{n}\sum_{i=1}^{n}\zeta_i(e^{\top}A^{-1}x_i)^2\dot g(\bar\gamma^{(1)\top}\Phi_i)\Phi_i\{y_i - g(\bar\gamma^{(1)\top}\Phi_i)\};$$
$$\Xi_{12} = \frac{1}{N}\sum_{i=1}^{N}w_i x_i\{y_i - g(\bar\gamma^{(1)\top}\Phi_i)\};$$
$$\Xi_{21} = \frac{1}{n}\sum_{i=1}^{n}\zeta_i\{1 - 2\mathcal{Y}(\bar\theta^{\top}x_i) + \dot D(\bar\theta)^{\top}A^{-1}x_i\}^2\dot g(\bar\gamma^{(2)\top}\bar\Psi_i)\bar\Psi_i\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\} + \dot\Xi_{\theta,21}(\tilde\theta - \bar\theta);$$
$$\Xi_{22} = \frac{1}{N}\sum_{i=1}^{N}w_i[x_i^{\top}, \mathcal{Y}(\bar\theta^{\top}x_i)]^{\top}\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\} + \dot\Xi_{\theta,22}(\tilde\theta - \bar\theta),$$

and $\dot\Xi_{\theta,21}$ and $\dot\Xi_{\theta,22}$ are two fixed loading matrices of order $O(1)$. By Condition 1 and the classical Central Limit Theorem, $n^{1/2}(\Xi_{11}^{\top}, \Xi_{12}^{\top}, \Xi_{21}^{\top}, \Xi_{22}^{\top})^{\top}$ converges to a Gaussian distribution with mean 0. By Theorem 1, $n^{1/2}(\tilde\theta - \bar\theta)$ also converges to a mean-zero Gaussian distribution. Analogous to the proof of Theorem 5.21 of Van der Vaart [2000], we then obtain

$$\tilde\gamma^{(1)} - \bar\gamma^{(1)} = \big[\mathbb{A}_1^{-1} - \mathbb{A}_1^{-1}\mathbb{B}_1(\mathbb{B}_1^{\top}\mathbb{A}_1^{-1}\mathbb{B}_1)^{-1}\mathbb{B}_1^{\top}\mathbb{A}_1^{-1}\big]\Xi_{11} + \mathbb{A}_1^{-1}\mathbb{B}_1(\mathbb{B}_1^{\top}\mathbb{A}_1^{-1}\mathbb{B}_1)^{-1}\Xi_{12} = O_p(n^{-1/2});$$
$$\tilde\gamma_{\tilde\theta}^{(2)} - \bar\gamma^{(2)} = \big[\mathbb{A}_2^{-1} - \mathbb{A}_2^{-1}\mathbb{B}_2(\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\mathbb{B}_2)^{-1}\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\big]\Xi_{21} + \mathbb{A}_2^{-1}\mathbb{B}_2(\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\mathbb{B}_2)^{-1}\Xi_{22} = O_p(n^{-1/2}). \tag{G.3}$$

By Conditions 1, A1, and A2, the consistency of $\hat\rho_{1s}$ for its limit, and the asymptotic expansion of $\tilde\gamma^{(1)} - \bar\gamma^{(1)}$ derived above, we can use the argument of Appendix E to show that $\hat\theta_{intri}\overset{p}{\to}\bar\theta$ and obtain the expansion

$$n^{1/2}(\hat\theta_{intri} - \bar\theta) = n^{1/2}A^{-1}\Big[N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{(1)\top}\Phi_i) - g(\bar\theta^{\top}x_i)\} + \mathbb{B}_1^{\top}(\tilde\gamma^{(1)} - \bar\gamma^{(1)})\Big] + o_p(1) = n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}I(S_i = s)A^{-1}x_i\{y_i - g(\bar\gamma^{(1)\top}\Phi_i)\}\Big] + o_p(1) = \hat{\mathcal{W}}(\bar\gamma^{(1)}) + o_p(1).$$

The second equality follows from the fact that

$$\mathbb{B}_1^{\top}(\tilde\gamma^{(1)} - \bar\gamma^{(1)}) = 0 + \Xi_{12} = \Xi_{12}.$$

Thus, the asymptotic variance of $n^{1/2}(e^{\top}\hat\theta_{intri} - e^{\top}\bar\theta)$ is $E[R(e^{\top}A^{-1}x)^2\{y - g(\bar\gamma^{(1)\top}\Phi)\}^2]$, which is minimized among those of $\{e^{\top}\hat{\mathcal{W}}(\gamma) : E[x\{y - g(\gamma^{\top}\Phi)\}] = 0\}$. From Theorem 1, $n^{1/2}(\hat\theta_{SSL} - \bar\theta)$ is asymptotically equivalent to $\hat{\mathcal{W}}(\bar\gamma)$. Therefore, when the imputation model is correctly specified, that is, when there exists $\gamma_0$ such that $P(y = 1\mid u) = g(\gamma_0^{\top}\Phi)$ so that $\bar\gamma = \bar\gamma^{(1)} = \gamma_0$, it follows that $n^{1/2}(\hat\theta_{intri} - \bar\theta)$ is asymptotically equivalent to $n^{1/2}(\hat\theta_{SSL} - \bar\theta)$. This completes the proof of Theorem 3.

Using our previous arguments, we next establish Theorem A1. Similar to (G.3), we expand $n^{1/2}(\hat\theta_{intri}^{D} - \bar\theta)$ as

$$n^{1/2}(\hat\theta_{intri}^{D} - \bar\theta) = n^{1/2}A^{-1}\Big[N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{(2)\top}\Psi_{\tilde\theta i}) - g(\bar\theta^{\top}x_i)\} + \mathbb{B}_2^{\top}(\tilde\gamma_{\tilde\theta}^{(2)} - \bar\gamma^{(2)})\Big] + o_p(1)$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}I(S_i = s)A^{-1}x_i\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\}\Big] + n^{1/2}A^{-1}\Big[N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{(2)\top}\Psi_{\tilde\theta i}) - g(\bar\gamma^{(2)\top}\bar\Psi_i)\} + \dot\Xi_{\theta,22}(\tilde\theta - \bar\theta)\Big] + o_p(1)$$
$$= n^{1/2}\sum_{s=1}^{S}\rho_s\Big[n_s^{-1}\sum_{i=1}^{n}I(S_i = s)A^{-1}x_i\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\}\Big] + o_p(1).$$

The third equality follows from the facts that $n^{1/2}(\tilde\theta - \bar\theta) = O_p(1)$ and

$$\partial\Big(N^{-1}\sum_{i=1}^{N}x_i\{g(\bar\gamma^{(2)\top}\Psi_{\theta i}) - g(\bar\gamma^{(2)\top}\bar\Psi_i)\}\Big)\Big/\partial\theta + \dot\Xi_{\theta,22} = o_p(1).$$

Using this result, and applying similar arguments as those used for $\tilde\gamma_{\tilde\theta}^{(2)}$, we have that

$$\tilde\gamma_{\hat\theta_{intri}^{D}}^{(2)} - \bar\gamma^{(2)} = \big[\mathbb{A}_2^{-1} - \mathbb{A}_2^{-1}\mathbb{B}_2(\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\mathbb{B}_2)^{-1}\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\big]\Xi_{21} + \mathbb{A}_2^{-1}\mathbb{B}_2(\mathbb{B}_2^{\top}\mathbb{A}_2^{-1}\mathbb{B}_2)^{-1}\Xi_{22} = O_p(n^{-1/2}), \quad\text{where}$$
$$\Xi_{21} = \frac{1}{n}\sum_{i=1}^{n}\zeta_i\{1 - 2\mathcal{Y}(\bar\theta^{\top}x_i) + \dot D(\bar\theta)^{\top}A^{-1}x_i\}^2\dot g(\bar\gamma^{(2)\top}\bar\Psi_i)\bar\Psi_i\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\} + \dot\Xi_{\theta,21}(\hat\theta_{intri}^{D} - \bar\theta);$$
$$\Xi_{22} = \frac{1}{N}\sum_{i=1}^{N}w_i[x_i^{\top}, \mathcal{Y}(\bar\theta^{\top}x_i)]^{\top}\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\} + \dot\Xi_{\theta,22}(\hat\theta_{intri}^{D} - \bar\theta).$$

We then follow the same procedure as in Appendix F (specifically, noting that $\tilde\gamma_{\hat\theta_{intri}^{D}}^{(2)}$ corresponds to $\hat\theta_{intri}^{D}$ plugged into $\hat D_{intri}(\theta)$, the derivation for the augmentation approach in Section 3.2 can be used directly) to derive that $\hat D_{intri}\overset{p}{\to}\bar D$ and

$$n^{1/2}(\hat D_{intri} - \bar D) = n^{1/2}\Big[N^{-1}\sum_{i=1}^{N}\hat w_i\{1 - 2\bar{\mathcal{Y}}_i + \dot D(\bar\theta)^{\top}A^{-1}x_i\}\{y_i - g(\bar\gamma^{(2)\top}\bar\Psi_i)\}\Big] + o_p(1) = \hat T(\bar\gamma_{\bar\theta}^{(2)}) + o_p(1).$$

By the definition of $\bar\gamma_{\bar\theta}^{(2)}$, the asymptotic variance of $\hat T(\bar\gamma_{\bar\theta}^{(2)})$ is minimized among those of $\{\hat T(\gamma) : E[[x^{\top}, \mathcal{Y}(\bar\theta^{\top}x)]^{\top}\{y - g(\gamma^{\top}\bar\Psi)\}] = 0\}$. Additionally, we may use a similar procedure as that in Appendix F to derive that

$$n^{1/2}(\hat D_{SSL} - \bar D) = \hat T(\bar\gamma_{\bar\theta}) + o_p(1).$$

Thus, when the imputation model for estimating $D$, i.e., $P(y = 1\mid u) = g(\gamma^{\top}\bar\Psi)$, is correct, we have $\bar\gamma_{\bar\theta} = \bar\gamma_{\bar\theta}^{(2)}$, and $n^{1/2}(\hat D_{intri} - \bar D)$ is asymptotically equivalent to $n^{1/2}(\hat D_{SSL} - \bar D)$. These arguments establish Theorem A1.

H. Justification for Weighted CV Procedure

To provide a heuristic justification for the weights in our ensemble CV method, consider an arbitrary smooth loss function $d(\cdot,\cdot)$ and let $D(\theta) = E[d\{y_0, \mathcal{Y}(\theta^{\top}x_0)\}]$. Let $\hat D(\theta)$ denote the empirical unbiased estimate of $D(\theta)$ and suppose that $\hat\theta$ minimizes $\hat D(\theta)$ (i.e., $\dot{\hat D}(\hat\theta) = 0$). Suppose that $n^{1/2}(\hat\theta - \bar\theta)\to N(0, \Sigma)$ in distribution. Then, by a Taylor series expansion of $\hat D(\bar\theta)$ about $\hat\theta$,

$$\hat D(\hat\theta) = \hat D(\bar\theta) - \tfrac{1}{2}(\hat\theta - \bar\theta)^{\top}\ddot{\hat D}(\hat\theta)(\hat\theta - \bar\theta) + o_p(n^{-1}) \quad\text{and}$$
$$E\{\hat D(\hat\theta)\} = D(\bar\theta) - \tfrac{1}{2}n^{-1}\mathrm{Tr}\{\ddot D(\bar\theta)\Sigma\} + o(n^{-1}),$$

where $\ddot D(\bar\theta) = \partial^2 D(\theta)/\partial\theta\partial\theta^{\top}\big|_{\theta = \bar\theta}$. For the $K$-fold CV estimator, $\hat D_{cv} = K^{-1}\sum_{k=1}^{K}\hat D_k(\hat\theta_{(k)})$, we note that, since $\hat D_k(\theta)$ is independent of $\hat\theta_{(k)}$,

$$E(\hat D_{cv}) = D(\bar\theta) + K^{-1}\sum_{k=1}^{K}E\{\dot{\hat D}_k(\bar\theta)\}^{\top}E(\hat\theta_{(k)} - \bar\theta) + \frac{1}{2}\frac{K}{K - 1}n^{-1}\mathrm{Tr}\{\ddot D(\bar\theta)\Sigma\} + o(n^{-1}) = D(\bar\theta) + \frac{1}{2}\frac{K}{K - 1}n^{-1}\mathrm{Tr}\{\ddot D(\bar\theta)\Sigma\} + o(n^{-1}),$$

where the second equality follows from the fact that $E\{\dot{\hat D}_k(\bar\theta)\} = \dot D(\bar\theta) = 0$ when $\bar\theta$ minimizes $D(\theta)$. Letting $\hat D_\omega = \omega\hat D(\hat\theta) + (1 - \omega)\hat D_{cv}$ with $\omega = K/(2K - 1)$, it follows that $-\omega n^{-1} + (1 - \omega)Kn^{-1}/(K - 1) = 0$ and thus

$$E(\hat D_\omega) = D(\bar\theta) + o(n^{-1}).$$
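The bias cancellation above is easy to reproduce numerically. The toy example below is our own illustration, using plain logistic regression and the Brier score rather than the paper's estimators; it combines the apparent error and the $K$-fold CV error with $\omega = K/(2K - 1)$.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, K = 500, 5, 5
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))

def brier(model, X, y):
    return np.mean((y - model.predict_proba(X)[:, 1]) ** 2)

# Apparent error: optimistic, bias -Tr{..}/(2n) + o(1/n).
D_apparent = brier(LogisticRegression().fit(X, y), X, y)
# K-fold CV error: pessimistic, bias +K Tr{..}/{2(K-1)n} + o(1/n).
D_cv = np.mean([brier(LogisticRegression().fit(X[tr], y[tr]), X[te], y[te])
                for tr, te in KFold(K, shuffle=True, random_state=0).split(X)])
omega = K / (2 * K - 1)   # ensemble weight that zeroes the first-order bias
D_omega = omega * D_apparent + (1 - omega) * D_cv
print(D_apparent, D_cv, D_omega)
```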

REFERENCES

  1. Ananthakrishnan A, Cai T, Savova G, Cheng S, Chen P, Perez R, Gainer V, Murphy S, Szolovits P, Xia Z, et al. Improving case definition of Crohn's disease and ulcerative colitis in electronic medical records using natural language processing: a novel informatics approach. Inflammatory Bowel Diseases, 19(7):1411–1420, 2013.
  2. Belkin M and Niyogi P. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1–3):209–239, 2004.
  3. Belkin M, Niyogi P, and Sindhwani V. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7:2399–2434, 2006.
  4. Cai T and Zheng Y. Evaluating prognostic accuracy of biomarkers in nested case–control studies. Biostatistics, 13(1):89–100, 2012.
  5. Castelli V and Cover TM. The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Transactions on Information Theory, 42(6):2102–2117, 1996.
  6. Chakrabortty A and Cai T. Efficient and adaptive linear regression in semi-supervised settings. The Annals of Statistics, 46(4):1541–1572, 2018.
  7. Chapelle O, Scholkopf B, and Zien A. Semi-Supervised Learning (Chapelle O et al., eds.; 2006) [book review]. IEEE Transactions on Neural Networks, 20(3):542, 2009.
  8. Corduneanu AAD. Stable Mixing of Complete and Incomplete Information. PhD thesis, Massachusetts Institute of Technology, 2002.
  9. Cozman FG, Cohen I, and Cirelo M. Unlabeled data can degrade classification performance of generative classifiers. In FLAIRS Conference, pages 327–331, 2002.
  10. Cozman FG, Cohen I, and Cirelo MC. Semi-supervised learning of mixture models. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 99–106, 2003.
  11. Efron B. Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association, 78(382):316–331, 1983.
  12. Efron B. How biased is the apparent error rate of a prediction rule? Journal of the American Statistical Association, 81(394):461–470, 1986.
  13. Efron B and Tibshirani R. Improvements on cross-validation: the .632+ bootstrap method. Journal of the American Statistical Association, 92(438):548–560, 1997.
  14. Fu WJ, Carroll RJ, and Wang S. Estimating misclassification error with small samples via bootstrap cross-validation. Bioinformatics, 21(9):1979–1986, 2005.
  15. Gerds TA, Cai T, and Schumacher M. The performance of risk prediction models. Biometrical Journal, 50(4):457–479, 2008.
  16. Gneiting T and Raftery AE. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
  17. Gronsbell JL and Cai T. Semi-supervised approaches to efficient evaluation of model prediction performance. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3):579–594, 2018.
  18. Hand DJ. Construction and Assessment of Classification Rules. Wiley, 1997.
  19. Hand DJ. Measuring diagnostic accuracy of statistical prediction rules. Statistica Neerlandica, 55(1):3–16, 2001.
  20. Jaakkola T, Haussler D, et al. Exploiting generative models in discriminative classifiers. Advances in Neural Information Processing Systems, pages 487–493, 1999.
  21. Jiang W and Simon R. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Statistics in Medicine, 26(29):5320–5334, 2007.
  22. Kawakita M and Kanamori T. Semi-supervised learning with density-ratio estimation. Machine Learning, 91(2):189–209, 2013.
  23. Kawakita M and Takeuchi J. Safe semi-supervised learning based on weighted likelihood. Neural Networks, 53:146–164, 2014.
  24. Kohane IS. Using electronic health records to drive discovery in disease genomics. Nature Reviews Genetics, 12(6):417–428, 2011.
  25. Kpotufe S. The Curse of Dimension in Nonparametric Regression. PhD thesis, UC San Diego, 2010.
  26. Krijthe J and Loog M. Projected estimators for robust semi-supervised classification. arXiv preprint arXiv:1602.07865, 2016.
  27. Liao KP, Cai T, Gainer V, Goryachev S, Zeng-treitler Q, Raychaudhuri S, Szolovits P, Churchill S, Murphy S, Kohane I, et al. Electronic medical records for discovery research in rheumatoid arthritis. Arthritis Care & Research, 62(8):1120–1127, 2010.
  28. Liao KP, Kurreeman F, Li G, Duclos G, Murphy S, Guzman PR, Cai T, Gupta N, Gainer V, Schur P, et al. Autoantibodies, autoimmune risk alleles and clinical associations in rheumatoid arthritis cases and non-RA controls in the electronic medical records. Arthritis and Rheumatism, 65(3):571, 2013.
  29. Liao KP, Cai T, Savova GK, Murphy SN, Karlson EW, Ananthakrishnan AN, Gainer VS, Shaw SY, Xia Z, Szolovits P, et al. Development of phenotype algorithms using electronic medical records and incorporating natural language processing. BMJ, 350:h1885, 2015.
  30. Liu D, Cai T, and Zheng Y. Evaluating the predictive value of biomarkers with stratified case-cohort design. Biometrics, 68(4):1219–1227, 2012.
  31. Mirakhmedov SM, Jammalamadaka SR, and Mohamed IB. On Edgeworth expansions in generalized urn models. Journal of Theoretical Probability, 27(3):725–753, 2014.
  32. Molinaro AM, Simon R, and Pfeiffer RM. Prediction error estimation: a comparison of resampling methods. Bioinformatics, 21(15):3301–3307, 2005.
  33. Murphy S, Churchill S, Bry L, Chueh H, Weiss S, Lazarus R, Zeng Q, Dubey A, Gainer V, Mendis M, et al. Instrumenting the health care enterprise for discovery research in the genomic era. Genome Research, 19(9):1675–1681, 2009.
  34. Nedyalkova D and Tillé Y. Optimal sampling and estimation strategies under the linear model. Biometrika, 95(3):521–537, 2008.
  35. Newey WK and McFadden D. Large sample estimation and hypothesis testing. Handbook of Econometrics, 4:2111–2245, 1994.
  36. Neyman J. On the two different aspects of the representative method: the method of stratified sampling and the method of purposive selection. Journal of the Royal Statistical Society, 97(4):558–606, 1934.
  37. Niyogi P. Manifold regularization and semi-supervised learning: some theoretical analyses. The Journal of Machine Learning Research, 14(1):1229–1250, 2013.
  38. Pollard D. Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics, pages i–86, 1990.
  39. Robins JM, Mark SD, and Newey WK. Estimating exposure effects by modelling the expectation of exposure conditional on confounders. Biometrics, pages 479–495, 1992.
  40. Robins JM, Rotnitzky A, and Zhao LP. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.
  41. Särndal C-E, Swensson B, and Wretman J. Model Assisted Survey Sampling. Springer Science & Business Media, 2003.
  42. Sinnott JA, Dai W, Liao KP, Shaw SY, Ananthakrishnan AN, Gainer VS, Karlson EW, Churchill S, Szolovits P, Murphy S, et al. Improving the power of genetic association tests with imperfect phenotype derived from electronic medical records. Human Genetics, 133(11):1369–1382, 2014.
  43. Sokolovska N, Cappé O, and Yvon F. The asymptotics of semi-supervised learning in discriminative probabilistic models. In Proceedings of the 25th International Conference on Machine Learning, pages 984–991. ACM, 2008.
  44. Tan Z. Bounded, efficient and doubly robust estimation with inverse weighting. Biometrika, 97(3):661–682, 2010.
  45. Tian L, Cai T, Goetghebeur E, and Wei L. Model evaluation based on the sampling distribution of estimated absolute prediction error. Biometrika, 94(2):297–311, 2007.
  46. Van der Vaart AW. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.
  47. Wasserman L and Lafferty JD. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems, pages 801–808, 2008.
  48. Wilke R, Xu H, Denny J, Roden D, Krauss R, McCarty C, Davis R, Skaar T, Lamba J, and Savova G. The emerging role of electronic medical records in pharmacogenomics. Clinical Pharmacology & Therapeutics, 89(3):379–386, 2011.
  49. Xia Z, Secor E, Chibnik LB, Bove RM, Cheng S, Chitnis T, Cagan A, Gainer VS, Chen PJ, Liao KP, et al. Modeling disease severity in multiple sclerosis using electronic health records. PLoS ONE, 8(11):e78927, 2013.
  50. Yu S, Liao KP, Shaw SY, Gainer VS, Churchill SE, Szolovits P, Murphy SN, Kohane IS, and Cai T. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources. Journal of the American Medical Informatics Association, 22(5):993–1000, 2015.
  51. Zhang A, Brown LD, Cai TT, et al. Semi-supervised inference: general theory and estimation of means. Annals of Statistics, 47(5):2538–2566, 2019.
