Author manuscript; available in PMC 2022 Jan 1. Published in final edited form as: Commun Stat Theory Methods. 2019 Jul 15;50(1):216–236. doi: 10.1080/03610926.2019.1634208

Covariate adjustment via propensity scores for recurrent events in the presence of dependent censoring

Youngjoo Cho 1,*, Debashis Ghosh 2
PMCID: PMC7954136  NIHMSID: NIHMS1534764  PMID: 33716388

Abstract

Dependent censoring is common in many medical studies, especially when there are multiple occurrences of the event of interest. Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) proposed estimation procedures using an artificial censoring technique. However, if covariates are not bounded, then these methods can cause excessive artificial censoring. In this paper, we propose estimation procedures for the treatment effect based on a novel application of propensity scores. Simulation studies show that the proposed method provides good finite-sample properties. The techniques are illustrated with an application to an HIV dataset.

Keywords: Survival analysis, Accelerated failure time model, Resampling, Empirical process, Propensity score

1. Introduction

In many medical settings, it is common to have multiple disease recurrences in the presence of death, and the disease recurrences may be associated with the occurrence of death. A representative example is tumor recurrence in oncology studies: increased tumor recurrence may reflect a deterioration in health, which in turn may be associated with an increased risk of death. In this case, death clearly censors the tumor recurrence, which is the event of interest. One may view the time to death as a truncation time with respect to the recurrent events process (Ghosh, 2010). Because of the dependence between the failure of interest and death, standard survival analysis methods may be invalid.

Procedures for the analysis of recurrent events in the presence of a terminal event have been extensively studied in the past decade. Ghosh and Lin (2002) and Ghosh and Lin (2003) proposed methods of analysis for recurrent events subject to a terminal event using Cox-type and accelerated failure time (AFT) models, respectively. Moreover, Ghosh (2010) extended Peng and Fine (2006)'s method to model recurrent events in the presence of death using the AFT model, and Hsieh, Ding and Wang (2011) extended Peng and Fine (2006)'s method to general functions of the time to the event of interest and the terminal event.

The joint regression modeling approaches of Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) offer many advantages. First, they are based on the linear model, so the interpretation of the regression coefficients is straightforward. Second, for estimation of the treatment effect, their methods do not suffer from the bias that may occur in models based on observable quantities (Varadhan, Xue and Bandeen-Roche, 2014).

Since the correlation structure between the terminal event and the failure time is typically unknown, it is necessary to adjust for the terminal event in a model-free manner. Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) used artificial censoring. The idea behind artificial censoring (Lin, Robins and Wei, 1996) is to make observations comparable to each other by transforming uncensored observations into censored ones. Because the terminal event censors the nonterminal event, the distributions of the error terms for the nonterminal event given covariates differ across subjects (Ding et al., 2009), violating the i.i.d. assumption on the error terms in the AFT model. Applying the artificial censoring technique restores the random-sample assumption. Moreover, the technique yields unbiased estimating functions, leading to consistent estimators of the regression coefficients. However, the artificial censoring depends on the covariates and is in fact an increasing function of them. If the covariates are continuous and have a large variance, this leads to excessive artificial censoring and the consequent exclusion of information on the time to the event of interest for many subjects. Excessive artificial censoring is especially problematic in observational studies because of multiple confounders: including the entire vector of confounders in the artificial censoring may censor excessively, whereas excluding confounders biases the estimation of exposure effects.

Cho, Hu and Ghosh (2018) proposed a simple and effective technique using propensity scores to reduce artificial censoring in observational studies. This approach leads to consistent estimates of treatment effects along with valid inference approaches. While propensity scores are routinely used in causal inference problems, the main property exploited in Cho, Hu and Ghosh (2018) is the balancing property of propensity scores.

In this paper, we extend the work of Cho, Hu and Ghosh (2018) to accommodate recurrent events. Doing so requires new technical developments, as the martingale theory-based arguments of Cho, Hu and Ghosh (2018) no longer apply; we instead use empirical process theory to develop new theory for covariate adjustment in recurrent events data under dependent censoring. Moreover, Cho, Hu and Ghosh (2018) used a computationally intensive resampling approach to estimate the covariance matrix of the proposed estimator. Here we develop a new and much faster resampling procedure for covariance matrix estimation.

The organization of this paper is as follows. Section 2.1 introduces the data structure, and Section 2.2 discusses the propensity score. The new estimation procedure is given in Section 2.3, and the attendant statistical inference procedures in Section 2.4. Section 3 discusses a goodness-of-fit procedure. Sections 4.1 and 4.2 illustrate the methodology with simulation studies and a real-data analysis. Section 5 concludes with some discussion.

2. Methods

2.1. Data and model

In this paper, all times are log-transformed. Let $T_k$ be the time to the $k$th recurrent event for a given subject and let $N^*(t)$ be the number of recurrent events up to transformed time $t$ in the absence of censoring, i.e., $N^*(t) = \sum_{k=1}^{\infty} I\{T_k \le t\}$. Define $D$ and $C$ to be the times to a terminal event and to independent censoring (random loss to follow-up or end of study), respectively. Let $W = (Z, V^T)^T$ be a $q$-dimensional vector of variables, where $Z$ is a binary 0-1 treatment indicator and $V$ is a vector of $(q-1)$ confounders. The observable quantities are $(N(\cdot), \tilde D, \xi, W)$, where $N(t) = N^*(t \wedge D \wedge C)$, $\tilde D = D \wedge C$ and $\xi = I(D \le C)$. As in Hsieh, Ding and Wang (2011), there are $K = N(\tilde D)$ continuously observed event times $T_k = \min\{t : N(t) \ge k\}$ for $k = 1, \ldots, K$. Thus the data consist of independent and identically distributed replications of $(N(\cdot), \tilde D, \xi, W)$. By Ghosh (2010), an equivalent formulation of the dataset is i.i.d. copies of $(\tilde T_k, \delta_k, \tilde D, \xi, W)$, where $\tilde T_k = T_k \wedge \tilde D$ and $\delta_k = I(T_k \le \tilde D)$. As discussed in the Introduction, the correlation between the event of interest and the terminal event is unknown, and we do not specify a dependence structure between the two events.
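As a concrete illustration of this data structure, the following minimal Python sketch (ours, not from the paper; all names are hypothetical) constructs the observed quantities $(\tilde T_k, \delta_k, \tilde D, \xi)$ from latent event times:

```python
import numpy as np

def observed_data(T, D, C):
    """Build (T~_k, delta_k, D~, xi) from latent log-scale recurrent-event
    times T (sorted array), terminal-event time D, and independent
    censoring time C, following the definitions in Section 2.1."""
    D_tilde = min(D, C)                  # D~ = D ^ C
    xi = int(D <= C)                     # xi = I(D <= C)
    T_tilde = np.minimum(T, D_tilde)     # T~_k = T_k ^ D~
    delta = (T <= D_tilde).astype(int)   # delta_k = I(T_k <= D~)
    return T_tilde, delta, D_tilde, xi
```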

We consider the AFT model for recurrent events (Ghosh and Lin, 2003; Ghosh, 2010; Hsieh, Ding and Wang, 2011):

$$T_k = \theta_0^{tr} Z + (\theta_0^{cfd})^T V + \epsilon_k^R, \qquad D = \eta_0^{tr} Z + (\eta_0^{cfd})^T V + \epsilon^D, \tag{1}$$

where $(\theta_0^{tr}, \eta_0^{tr})$ and $\{(\theta_0^{cfd})^T, (\eta_0^{cfd})^T\}^T$ are the true regression coefficients corresponding to the treatment variable and the confounders, respectively, and $(\epsilon_k^R, \epsilon^D)$ are error terms. We assume that (1) $(\epsilon_k^R, \epsilon^D)$ are independent of $(Z, V)$ and (2) the joint model is only required to hold when $T_k \le D$. Our primary interest is to obtain consistent estimates of $(\theta_0^{tr}, \eta_0^{tr})$. When $k = 1$, model (1) reduces to the AFT model for a single failure of interest and a terminal event (Lin, Robins and Wei, 1996; Peng and Fine, 2006).

2.2. Propensity score

In the causal inference literature, the main interest is to establish a causal relationship between treatment and outcome. A useful tool for expressing this relationship is the potential outcomes framework (Rubin, 1974; Holland, 1986). Let $Y$ be a continuous outcome without censoring; $Z$ remains a binary treatment variable, as in the previous section. Denote by $\{Y(1), Y(0)\}$ the potential outcomes under treatment and control, respectively. The primary interest is to estimate $E\{Y(1) - Y(0)\}$, the average causal effect. In randomized studies, randomization ensures that treatment effects have a causal interpretation. In observational studies, however, the existence of confounders prevents establishing the causal relationship between treatment and outcome directly. Rosenbaum and Rubin (1983) proposed the propensity score to estimate the causal effect of treatment in observational studies. The propensity score, the probability of being in the treatment group given the confounders, is defined by

$$e(V) \equiv P(Z = 1 \mid V).$$

Given the same propensity score value, the distribution of the confounders is the same in the treatment and control groups. By this balancing property, we obtain conditional independence between treatment and confounders. Moreover, if treatment assignment is independent of the potential outcomes given the confounders (the so-called strongly ignorable treatment assignment assumption from causal inference), then treatment assignment and the potential outcomes are also independent given the propensity score. Thus the expected difference in observed responses between the treatment and control groups at a given propensity score value equals the mean treatment effect at that value (Rosenbaum and Rubin, 1983).

With survival data, this balancing property remains valid with possibly unobservable failure times. Under the modeling assumptions of the previous section, $(T_{ik} - \theta_0^T W_i, D_i - \eta_0^T W_i)$ is independent of $W_i$. To estimate the association between treatment and time to recurrent events without including the confounders directly, we use the balancing property of the propensity score; the details of the estimation procedure are explained in Section 2.3. We first establish, as in Cho, Hu and Ghosh (2018), the independence of the transformed vector $(T_{ik} - \theta_0^{tr} Z_i, D_i - \eta_0^{tr} Z_i)$ from treatment given the propensity score of the confounders.

Theorem 1. Given the propensity score $e(V_i)$, for $i = 1, \ldots, n$ and $k = 1, 2, \ldots, K$, $(T_{ik} - \theta_0^{tr} Z_i, D_i - \eta_0^{tr} Z_i)$ is independent of $Z_i$ and identically distributed.

The proof of Theorem 1 can be found in the Supplementary Materials. Theorem 1 allows us to estimate the association between failure times and treatment without asymptotic bias and with little information loss relative to employing the entire covariate vector, as the simulations will show.

2.3. Proposed methodology

As mentioned in the Introduction, regression analysis with dependent censoring typically requires adjustment, and the technique we focus on in this paper is artificial censoring. Artificial censoring was originally developed to obtain comparable observations between the treatment and control groups in randomized clinical trials when the failure of interest is subject to death. This simple form of artificial censoring was later extended to incorporate covariates through the AFT model. Ghosh and Lin (2003) proposed the following estimating function, extending Lin, Robins and Wei (1996):

$$U_n^{GL,full}(\beta) = n^{-\frac12} \sum_{i=1}^n \int \left[ W_i - \frac{\sum_{j=1}^n I\{\tilde D_j(\beta) \ge t\} W_j}{\sum_{j=1}^n I\{\tilde D_j(\beta) \ge t\}} \right] dN_{2i}(t; \beta),$$

where

$$\begin{aligned} d(\beta) &= \max_i \{0, (\theta - \eta)^T W_i\}, \\ \tilde D_i(\beta) &= \tilde D_i - \eta^T W_i - d(\beta), \\ N_{2i}(t; \beta) &= \sum_{k=1}^{\infty} I\{T_{ik} - \theta^T W_i \le t \wedge \tilde D_i(\beta)\}. \end{aligned}$$

As in Lin, Robins and Wei (1996), the artificial censoring term of Ghosh and Lin (2003), denoted by $d(\beta)$, is a single constant. Hsieh, Ding and Wang (2011) extended Peng and Fine (2006)'s approach to recurrent events with dependent censoring; their estimating function is

$$U_n^{H,full}(\beta) = 2 n^{-\frac12} \{n(n-1)\}^{-1} \sum_{i<j} \sum_k (W_i - W_j)\, \phi_{ijk}(\beta),$$

where

$$\begin{aligned} d_{ij}(\beta) &= \max\{0, (\theta - \eta)^T W_i, (\theta - \eta)^T W_j\}, \\ \tilde T_{i(j)k}(\beta) &= (T_{ik} - \theta^T W_i) \wedge \{\tilde D_i - \eta^T W_i - d_{ij}(\beta)\}, \\ \tilde \delta_{i(j)k}(\beta) &= I[(T_{ik} - \theta^T W_i) \le \{\tilde D_i - \eta^T W_i - d_{ij}(\beta)\}], \\ \phi_{ijk}(\beta) &= \tilde \delta_{i(j)k}(\beta) I\{\tilde T_{i(j)k}(\beta) \le \tilde T_{j(i)k}(\beta)\} - \tilde \delta_{j(i)k}(\beta) I\{\tilde T_{j(i)k}(\beta) \le \tilde T_{i(j)k}(\beta)\}. \end{aligned}$$

As in Peng and Fine (2006), the artificial censoring quantity $d_{ij}(\beta)$ of Hsieh, Ding and Wang (2011) is a maximum over linear combinations involving only the pair of observations $i$ and $j$, rather than over all subjects. Hence the estimators of Hsieh, Ding and Wang (2011) are more efficient than those of Ghosh and Lin (2003). However, as mentioned in the Introduction, $d(\beta)$ and $d_{ij}(\beta)$ are functions of the covariates and the parameters of interest, so covariates with large variation cause excessive artificial censoring in the estimation procedure.

In many medical studies, the primary focus is on the effect of a treatment or intervention of interest. When treatment assignment is randomized, the treatment variable is independent of the other variables; this does not hold in observational studies without strong assumptions. As discussed in Section 2.2, the distribution of the confounders in the treatment and control groups is the same conditional on the true propensity score. This suggests that, by using weights based on propensity scores, the dimension of the artificial censoring term can be reduced.

We assume that the propensity model, estimated by logistic regression with parameter $\alpha = (\alpha_1, \ldots, \alpha_q)^T$ for $Z_i$, is the true model. Let $H_i = (1, V_i^T)^T$, $i = 1, \ldots, n$. Note that the propensity score depends on the parameter $\alpha$; define it by

$$e_i(\alpha) = P(Z_i = 1 \mid H_i) = \frac{\exp(\alpha^T H_i)}{1 + \exp(\alpha^T H_i)}.$$

The weight is defined as

$$w_i(\alpha) = \frac{Z_i}{e_i(\alpha)} + \frac{1 - Z_i}{1 - e_i(\alpha)}.$$

This weight takes the value $1/e_i(\alpha)$ if $Z_i = 1$ and $1/\{1 - e_i(\alpha)\}$ otherwise. Let $G_n(\alpha)$ be the score function for $\alpha$, where

$$G_n(\alpha) = n^{-\frac12} \sum_{i=1}^n H_i \left\{ Z_i - \frac{\exp(\alpha^T H_i)}{1 + \exp(\alpha^T H_i)} \right\}.$$
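For concreteness, a short Python sketch of these three ingredients follows (our illustration, not the authors' code; `H` is the $n \times q$ design matrix with a leading column of ones and `Z` the 0-1 treatment vector):

```python
import numpy as np

def propensity(alpha, H):
    """Logistic propensity score e_i(alpha)."""
    return 1.0 / (1.0 + np.exp(-(H @ alpha)))

def ipw_weights(alpha, Z, H):
    """w_i(alpha) = Z_i / e_i(alpha) + (1 - Z_i) / {1 - e_i(alpha)}."""
    e = propensity(alpha, H)
    return Z / e + (1 - Z) / (1 - e)

def score(alpha, Z, H):
    """Logistic score G_n(alpha) = n^{-1/2} sum_i H_i {Z_i - e_i(alpha)}."""
    return H.T @ (Z - propensity(alpha, H)) / np.sqrt(len(Z))
```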

Let $\tilde D_i(\eta^{tr}) = \tilde D_i - \eta^{tr} Z_i$. Note that the 'residuals' used in the estimating functions still contain information about $V$. Then, as in Cho, Hu and Ghosh (2018), the estimating function for the dependent censoring is

$$S_n(\eta^{tr}, \alpha) = n^{-\frac12} \sum_{i=1}^n \xi_i w_i(\alpha) \left[ Z_i - \frac{\sum_{j=1}^n I\{\tilde D_j(\eta^{tr}) \ge \tilde D_i(\eta^{tr})\} w_j(\alpha) Z_j}{\sum_{j=1}^n I\{\tilde D_j(\eta^{tr}) \ge \tilde D_i(\eta^{tr})\} w_j(\alpha)} \right].$$

Let $\beta^{tr} = (\eta^{tr}, \theta^{tr})^T$. We proceed in a spirit similar to Cho, Hu and Ghosh (2018). The proposed estimating function for $\beta^{tr}$, extending Ghosh and Lin (2003), is

$$U_n^{GL}(\beta^{tr}, \alpha) = n^{-\frac12} \sum_{i=1}^n \int w_i(\alpha) \left[ Z_i - \frac{\sum_{j=1}^n I\{\tilde D_j(\beta^{tr}) \ge t\} w_j(\alpha) Z_j}{\sum_{j=1}^n I\{\tilde D_j(\beta^{tr}) \ge t\} w_j(\alpha)} \right] dN_{2i}(t; \beta^{tr}),$$

where

$$\begin{aligned} d(\beta^{tr}) &= \max_i \{0, (\theta^{tr} - \eta^{tr}) Z_i\}, \\ \tilde D_i(\beta^{tr}) &= \tilde D_i - \eta^{tr} Z_i - d(\beta^{tr}), \\ N_{2i}(t; \beta^{tr}) &= \sum_{k=1}^{\infty} I\{T_{ik} - \theta^{tr} Z_i \le t \wedge \tilde D_i(\beta^{tr})\}. \end{aligned}$$

We also propose a new estimating function extending the work of Hsieh, Ding and Wang (2011) and Cho, Hu and Ghosh (2018):

$$U_n^{H}(\beta^{tr}, \alpha) = 2 n^{-\frac12} \{n(n-1)\}^{-1} \sum_{i<j} \sum_k (Z_i - Z_j) w_i(\alpha) w_j(\alpha) \phi_{ijk}(\beta^{tr}),$$

where

$$\begin{aligned} d_{ij}(\beta^{tr}) &= \max\{0, (\theta^{tr} - \eta^{tr}) Z_i, (\theta^{tr} - \eta^{tr}) Z_j\}, \\ \tilde T_{i(j)k}(\beta^{tr}) &= (T_{ik} - \theta^{tr} Z_i) \wedge \{\tilde D_i - \eta^{tr} Z_i - d_{ij}(\beta^{tr})\}, \\ \tilde \delta_{i(j)k}(\beta^{tr}) &= I[(T_{ik} - \theta^{tr} Z_i) \le \{\tilde D_i - \eta^{tr} Z_i - d_{ij}(\beta^{tr})\}], \\ \phi_{ijk}(\beta^{tr}) &= \tilde \delta_{i(j)k}(\beta^{tr}) I\{\tilde T_{i(j)k}(\beta^{tr}) \le \tilde T_{j(i)k}(\beta^{tr})\} - \tilde \delta_{j(i)k}(\beta^{tr}) I\{\tilde T_{j(i)k}(\beta^{tr}) \le \tilde T_{i(j)k}(\beta^{tr})\}. \end{aligned}$$

Since the propensity score depends on $\alpha$, joint modeling of $\alpha$ and $\beta^{tr}$ is required. We propose two sets of estimating functions, $Q_n^{GL}(\gamma) = \{G_n^T(\alpha), S_n(\eta^{tr}, \alpha), U_n^{GL}(\beta^{tr}, \alpha)\}^T$ and $Q_n^{H}(\gamma) = \{G_n^T(\alpha), S_n(\eta^{tr}, \alpha), U_n^{H}(\beta^{tr}, \alpha)\}^T$, where $\gamma = (\alpha^T, \eta^{tr}, \theta^{tr})^T$, and solve the corresponding estimating equations in sequence. To solve $Q_n^{GL}(\gamma) = 0$, an estimator of $\alpha$, say $\hat\alpha$, is first obtained from $G_n(\alpha) = 0$. Next, we plug $\hat\alpha$ into $S_n(\eta^{tr}, \alpha)$ and solve $S_n(\eta^{tr}, \hat\alpha) = 0$; denote the solution by $\hat\eta_{ca}^{tr}$. Finally, the equation $U_n^{GL}(\theta^{tr}, \hat\eta_{ca}^{tr}, \hat\alpha) = 0$ is solved. Similarly, for $Q_n^{H}(\gamma) = 0$, the equation $U_n^{H}(\theta^{tr}, \hat\eta_{ca}^{tr}, \hat\alpha) = 0$ is solved using $\hat\eta_{ca}^{tr}$ and $\hat\alpha$. Denote the solutions of $U_n^{GL}(\theta^{tr}, \hat\eta_{ca}^{tr}, \hat\alpha) = 0$ and $U_n^{H}(\theta^{tr}, \hat\eta_{ca}^{tr}, \hat\alpha) = 0$ by $\hat\theta_{GL,ca}^{tr}$ and $\hat\theta_{H,ca}^{tr}$, respectively.

The solutions of $Q_n^{GL}(\gamma) = 0$ and $Q_n^{H}(\gamma) = 0$ are easily obtained using standard statistical software packages.
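To make the sequence concrete, here is a schematic in Python (ours; `S_n` and `U_n` stand for user-supplied implementations of the estimating functions above, and since both are step functions of their scalar arguments, we minimize the absolute value over a grid rather than root-finding):

```python
import numpy as np
import statsmodels.api as sm

def solve_sequential(Z, H, S_n, U_n, grid):
    # Step 1: G_n(alpha) = 0 is the logistic-regression score equation,
    # so alpha-hat is the ordinary maximum likelihood estimate.
    alpha_hat = sm.Logit(Z, H).fit(disp=0).params
    # Step 2: eta-hat minimizes |S_n(eta, alpha-hat)| over the grid.
    eta_hat = min(grid, key=lambda eta: abs(S_n(eta, alpha_hat)))
    # Step 3: theta-hat minimizes |U_n(theta, eta-hat, alpha-hat)|.
    theta_hat = min(grid, key=lambda th: abs(U_n(th, eta_hat, alpha_hat)))
    return alpha_hat, eta_hat, theta_hat
```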

2.4. Statistical Inference

In this section, we explore the theoretical properties of $\hat\gamma_{GL} = (\hat\alpha^T, \hat\eta_{ca}^{tr}, \hat\theta_{GL,ca}^{tr})^T$ and $\hat\gamma_{H} = (\hat\alpha^T, \hat\eta_{ca}^{tr}, \hat\theta_{H,ca}^{tr})^T$. The assumptions needed to prove the asymptotic properties of these estimators are listed in the Supplementary Materials.

First, we show that $E\{Q_n^{GL}(\gamma_0)\} = 0$ and $E\{Q_n^{H}(\gamma_0)\} = 0$, where $\gamma_0$ is the true value of $\gamma$; that is, $Q_n^{GL}(\gamma)$ and $Q_n^{H}(\gamma)$ are zero-crossing estimating functions. To show $E\{Q_n^{GL}(\gamma_0)\} = 0$, we use Theorem 1 along with an argument from Ghosh and Lin (2003): given the true propensity score, the expected increment of the counting process of interest is common across subjects still at risk, i.e., those with transformed time $\tilde D_i(\beta_0^{tr}) \ge t$, and this plays an important role in the proof. Details are given in the Supplementary Materials. The other important properties are consistency and asymptotic normality of $\hat\gamma_{GL}$ and $\hat\gamma_{H}$. To prove consistency, it is important to establish uniform convergence of the random estimating functions to deterministic limits under uniqueness of the root $\gamma_0$ (Peng and Fine, 2006). The proofs rest on empirical process theory (Pollard, 1990; Ying, 1993) and U-statistics theory (Honoré and Powell, 1994; Cho and Ghosh, 2017). Proofs of Theorems 2 and 3 can be found in the Supplementary Materials.

Theorem 2. Under the regularity conditions in Theorem 17 of Ferguson (1996), Ying (1993), Ghosh (2000) and Hsieh, Ding and Wang (2011), $\hat\gamma_{GL}$ and $\hat\gamma_{H}$ are strongly consistent.

Theorem 3. Under the regularity conditions in Theorem 17 of Ferguson (1996), Ying (1993), Ghosh (2000), Hsieh, Ding and Wang (2011) and given Theorem 2, $n^{\frac12}(\hat\gamma_{GL} - \gamma_0)$ has an asymptotic normal distribution with mean 0 and covariance matrix $(\Lambda_0^{GL})^{-1} \Omega_0^{GL} \{(\Lambda_0^{GL})^{-1}\}^T$, where $\Lambda_0^{GL}$ is a nonsingular matrix and $\Omega_0^{GL}$ is the covariance matrix of the limiting distribution of $Q_n^{GL}(\gamma_0)$. Similarly, $n^{\frac12}(\hat\gamma_{H} - \gamma_0)$ has an asymptotic normal distribution with mean 0 and covariance matrix $(\Lambda_0^{H})^{-1} \Omega_0^{H} \{(\Lambda_0^{H})^{-1}\}^T$, where $\Lambda_0^{H}$ is a nonsingular matrix and $\Omega_0^{H}$ is the covariance matrix of the limiting distribution of $Q_n^{H}(\gamma_0)$.

To estimate the variance of the proposed estimators, we extend the resampling method of Cho, Hu and Ghosh (2018) and also propose a new method based on Strawderman (2005) and Zeng and Lin (2008). In the sandwich variance expression of Theorem 3, estimation of $\Omega_0^{GL}$ and $\Omega_0^{H}$ is straightforward using the empirical influence functions of the estimating functions. Let $N_{1i}^w(t; \eta^{tr}, \alpha) = w_i(\alpha) I\{\tilde D_i(\eta^{tr}) \le t, \xi_i = 1\}$ and $N_{2i}^w(t; \beta^{tr}, \alpha) = w_i(\alpha) \sum_k I\{T_{ik} - \theta^{tr} Z_i \le t \wedge \tilde D_i(\beta^{tr})\}$, and let $\hat\beta_{GL,ca}^{tr} = (\hat\eta_{ca}^{tr}, \hat\theta_{GL,ca}^{tr})^T$ and $\hat\beta_{H,ca}^{tr} = (\hat\eta_{ca}^{tr}, \hat\theta_{H,ca}^{tr})^T$. Then we define

$$\begin{aligned} \hat M_{1i}^w(t; \hat\eta_{ca}^{tr}, \hat\alpha) &= N_{1i}^w(t; \hat\eta_{ca}^{tr}, \hat\alpha) - \int_{-\infty}^{t} w_i(\hat\alpha) I\{\tilde D_i(\hat\eta_{ca}^{tr}) \ge u\}\, d\hat R_{10}^w(u; \hat\eta_{ca}^{tr}, \hat\alpha), \\ \hat M_{2i}^w(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha) &= N_{2i}^w(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha) - \int_{-\infty}^{t} w_i(\hat\alpha) I\{\tilde D_i(\hat\beta_{GL,ca}^{tr}) \ge u\}\, d\hat R_{20}^w(u; \hat\beta_{GL,ca}^{tr}, \hat\alpha), \end{aligned}$$

where

$$\begin{aligned} \hat R_{10}^w(t; \hat\eta_{ca}^{tr}, \hat\alpha) &= \int_{-\infty}^{t} \frac{\sum_{i=1}^n dN_{1i}^w(u; \hat\eta_{ca}^{tr}, \hat\alpha)}{\sum_{j=1}^n w_j(\hat\alpha) I\{\tilde D_j(\hat\eta_{ca}^{tr}) \ge u\}}, \\ \hat R_{20}^w(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha) &= \int_{-\infty}^{t} \frac{\sum_{i=1}^n dN_{2i}^w(u; \hat\beta_{GL,ca}^{tr}, \hat\alpha)}{\sum_{j=1}^n w_j(\hat\alpha) I\{\tilde D_j(\hat\beta_{GL,ca}^{tr}) \ge u\}}. \end{aligned}$$

The empirical influence function components are

$$\begin{aligned} \hat v_{1i} &= H_i \left\{ Z_i - \frac{\exp(\hat\alpha^T H_i)}{1 + \exp(\hat\alpha^T H_i)} \right\}, \\ \hat v_{2i}^{(1)} &= \int \left[ Z_i - \bar Z^{(1)}(u; \hat\alpha, \hat\eta_{ca}^{tr}) \right] d\hat M_{1i}^w(u; \hat\eta_{ca}^{tr}, \hat\alpha), \\ \hat v_{2i}^{(2)} &= \int \left[ Z_i - \bar Z^{(2)}(u; \hat\alpha, \hat\beta_{GL,ca}^{tr}) \right] d\hat M_{2i}^w(u; \hat\beta_{GL,ca}^{tr}, \hat\alpha), \\ \hat v_{2i}^{(3)} &= 2 n^{-1} \sum_{j=1}^n \sum_k w_i(\hat\alpha) w_j(\hat\alpha) (Z_i - Z_j) \phi_{ijk}(\hat\beta_{H,ca}^{tr}), \end{aligned}$$

where $\bar Z^{(1)}$ and $\bar Z^{(2)}$ denote the weighted risk-set averages of $Z$ appearing in $S_n$ and $U_n^{GL}$.

Let $\hat v_i^{GL} = (\hat v_{1i}^T, \hat v_{2i}^{(1)}, \hat v_{2i}^{(2)})^T$ and $\hat v_i^{H} = (\hat v_{1i}^T, \hat v_{2i}^{(1)}, \hat v_{2i}^{(3)})^T$. Then $\Omega_0^{GL}$ and $\Omega_0^{H}$ are estimated by

$$\hat\Omega^{GL} = \frac{1}{n} \sum_{i=1}^n \hat v_i^{GL} (\hat v_i^{GL})^T, \qquad \hat\Omega^{H} = \frac{1}{n} \sum_{i=1}^n \hat v_i^{H} (\hat v_i^{H})^T.$$
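In code, these estimates are simple averages of outer products; a one-function sketch (ours), with the influence terms stacked as the rows of an $n \times p$ array:

```python
import numpy as np

def omega_hat(v):
    """Omega-hat = (1/n) sum_i v_i v_i^T for v of shape (n, p)."""
    return v.T @ v / v.shape[0]
```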

As discussed by many authors in the literature (Lin, Robins and Wei, 1996; Peng and Fine, 2006; Ghosh and Lin, 2003; Hsieh, Ding and Wang, 2011; Cho, Hu and Ghosh, 2018), it is very difficult to estimate $\Lambda_0^{GL}$ and $\Lambda_0^{H}$ directly. We can instead use the resampling approach of Parzen, Wei and Ying (1994), which estimates the covariance matrix by solving perturbed estimating equations repeatedly and does not require estimation of $\Lambda_0^{GL}$ or $\Lambda_0^{H}$. Let $A_1, \ldots, A_n$ be independent standard normal random variables, and let $\hat v_{2i}^{GL} = (\hat v_{2i}^{(1)}, \hat v_{2i}^{(2)})^T$ and $J_n^{GL}(\gamma) = [S_n^T(\eta^{tr}, \alpha), \{U_n^{GL}(\beta^{tr}, \alpha)\}^T]^T$. Consider

$$G_n(\alpha) = n^{-\frac12} \sum_{i=1}^n \hat v_{1i} A_i, \qquad J_n^{GL}(\gamma) = n^{-\frac12} \sum_{i=1}^n \hat v_{2i}^{GL} A_i. \tag{2}$$

We then solve equations (2) repeatedly, with fresh draws of $(A_1, \ldots, A_n)$ each time. Let the solution of equations (2) be $\gamma_{GL}^{*}$; the covariance matrix of $\hat\gamma_{GL}$ is estimated from the empirical distribution of the solutions, say $\gamma_{1,GL}^{*}, \ldots, \gamma_{M,GL}^{*}$. Calculation of the covariance matrix of $\hat\gamma_{H}$ is similar: let $\hat v_{2i}^{H} = (\hat v_{2i}^{(1)}, \hat v_{2i}^{(3)})^T$ and $J_n^{H}(\gamma) = [S_n^T(\eta^{tr}, \alpha), \{U_n^{H}(\beta^{tr}, \alpha)\}^T]^T$, and construct

$$G_n(\alpha) = n^{-\frac12} \sum_{i=1}^n \hat v_{1i} A_i, \qquad J_n^{H}(\gamma) = n^{-\frac12} \sum_{i=1}^n \hat v_{2i}^{H} A_i. \tag{3}$$

Let the solution of equations (3) be denoted $\gamma_{H}^{*}$. The covariance matrix of $\hat\gamma_{H}$ is then estimated in the same way as that of $\hat\gamma_{GL}$.
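A schematic of this resampling loop (our sketch; `solve_perturbed` stands for a routine that solves the perturbed equations (2) or (3) for one draw of the multipliers):

```python
import numpy as np

def pwy_covariance(solve_perturbed, n, M=200, seed=0):
    """Parzen-Wei-Ying covariance estimate: re-solve the perturbed
    estimating equations M times with fresh N(0, 1) multipliers and
    take the empirical covariance of the solutions."""
    rng = np.random.default_rng(seed)
    solutions = np.array([solve_perturbed(rng.standard_normal(n))
                          for _ in range(M)])
    return np.cov(solutions, rowvar=False)
```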

This method assumes that the asymptotic distributions of $n^{\frac12}(\hat\gamma_{GL} - \gamma_0)$ and $n^{\frac12}(\hat\gamma_{H} - \gamma_0)$ equal the asymptotic distributions of $n^{\frac12}(\gamma_{GL}^{*} - \hat\gamma_{GL})$ and $n^{\frac12}(\gamma_{H}^{*} - \hat\gamma_{H})$ given the observed data, respectively. The following theorem justifies this assumption; its proof can also be found in the Supplementary Materials.

Theorem 4. Let $\gamma^{*}$ be either $\gamma_{GL}^{*}$ or $\gamma_{H}^{*}$, and let $\hat\gamma$ be either $\hat\gamma_{GL}$ or $\hat\gamma_{H}$. Under the regularity conditions in Theorem 17 of Ferguson (1996), Parzen, Wei and Ying (1994), Ying (1993), Ghosh (2000) and Hsieh, Ding and Wang (2011), the conditional distribution of $n^{\frac12}(\gamma^{*} - \hat\gamma)$ given the observed data is asymptotically equivalent to the unconditional distribution of $n^{\frac12}(\hat\gamma - \gamma_0)$.

Strawderman (2005) and Zeng and Lin (2008) proposed a different resampling technique for nonsmooth estimating functions. Zeng and Lin (2008) argue that resampling approaches such as those of Parzen, Wei and Ying (1994) and Jin, Ying and Wei (2001), which require solving estimating equations a large number of times, are too computationally expensive, and they proposed least squares and sample variance methods for estimating a covariance matrix. The key step in their approach is to generate multivariate normal random variables and perturb the estimating functions with them. However, since $\hat\gamma_{GL}$ and $\hat\gamma_{H}$ share the common components $(\hat\alpha^T, \hat\eta_{ca}^{tr})^T$, generating these multivariate normal variables separately for the two systems would produce different standard errors for the common estimators. To overcome this difficulty, we consider

$$\hat\Omega = \frac{1}{n} \sum_{i=1}^n \hat v_i \hat v_i^T,$$

where $\hat v_i = (\hat v_{1i}^T, \hat v_{2i}^{(1)}, \hat v_{2i}^{(2)}, \hat v_{2i}^{(3)})^T$. With this choice, adapting the approach of Strawderman (2005) and Zeng and Lin (2008), we propose the following algorithm:

  • (a) Generate $B_1, \ldots, B_M$, where $B_j = (b_{j1}, \ldots, b_{jq}, b_{j(q+1)}, b_{j(q+2)}, b_{j(q+3)})^T$ has a multivariate normal distribution with zero mean vector and covariance matrix $\hat\Omega$ (equivalently, $B_j = \hat\Omega^{\frac12} \times$ a standard normal vector).

  • (b) Let $b_j^{(1)} = (b_{j1}, \ldots, b_{jq})^T$, $b_j^{(21)} = (b_{j(q+1)}, b_{j(q+2)})^T$ and $b_j^{(22)} = (b_{j(q+1)}, b_{j(q+3)})^T$, and recall that $\hat\beta_{GL,ca}^{tr} = (\hat\eta_{ca}^{tr}, \hat\theta_{GL,ca}^{tr})^T$ and $\hat\beta_{H,ca}^{tr} = (\hat\eta_{ca}^{tr}, \hat\theta_{H,ca}^{tr})^T$. Let $\hat\gamma_{all} = (\hat\alpha^T, \hat\eta_{ca}^{tr}, \hat\theta_{GL,ca}^{tr}, \hat\theta_{H,ca}^{tr})^T$ and compute

$$Q_n(\hat\gamma_{all} + n^{-\frac12} B_j) = \begin{pmatrix} G_n(\hat\alpha + n^{-\frac12} b_j^{(1)}) \\ S_n(\hat\eta_{ca}^{tr} + n^{-\frac12} b_{j(q+1)},\, \hat\alpha + n^{-\frac12} b_j^{(1)}) \\ U_n^{GL}(\hat\beta_{GL,ca}^{tr} + n^{-\frac12} b_j^{(21)},\, \hat\alpha + n^{-\frac12} b_j^{(1)}) \\ U_n^{H}(\hat\beta_{H,ca}^{tr} + n^{-\frac12} b_j^{(22)},\, \hat\alpha + n^{-\frac12} b_j^{(1)}) \end{pmatrix}.$$

  • (c) Let $\hat\Lambda^{GL}$ and $\hat\Lambda^{H}$ be $(q+2) \times (q+2)$ matrices. Regress $G_n(\hat\alpha + n^{-\frac12} b_j^{(1)})$ on $b_j^{(1)}$ over $j = 1, \ldots, M$, and store the least squares estimates in rows 1 through $q$ of both $\hat\Lambda^{GL}$ and $\hat\Lambda^{H}$.

  • (d) Regress $S_n(\hat\eta_{ca}^{tr} + n^{-\frac12} b_{j(q+1)}, \hat\alpha + n^{-\frac12} b_j^{(1)})$ on $[\{b_j^{(1)}\}^T, b_{j(q+1)}]^T$, and store the least squares estimates in the $(q+1)$th row of $\hat\Lambda^{GL}$ and $\hat\Lambda^{H}$.

  • (e) Regress $U_n^{GL}(\hat\beta_{GL,ca}^{tr} + n^{-\frac12} b_j^{(21)}, \hat\alpha + n^{-\frac12} b_j^{(1)})$ on $[\{b_j^{(1)}\}^T, \{b_j^{(21)}\}^T]^T$, and store the least squares estimates in the $(q+2)$th row of $\hat\Lambda^{GL}$.

  • (e′) Regress $U_n^{H}(\hat\beta_{H,ca}^{tr} + n^{-\frac12} b_j^{(22)}, \hat\alpha + n^{-\frac12} b_j^{(1)})$ on $[\{b_j^{(1)}\}^T, \{b_j^{(22)}\}^T]^T$, and store the least squares estimates in the $(q+2)$th row of $\hat\Lambda^{H}$.

  • (f) The $(q+2) \times (q+2)$ matrices $\hat\Lambda^{GL}$ and $\hat\Lambda^{H}$ computed in (a)–(e′) then have the lower block-triangular form

$$\hat\Lambda^{GL} = \begin{pmatrix} \times & \cdots & \times & 0 & 0 \\ \vdots & & \vdots & \vdots & \vdots \\ \times & \cdots & \times & 0 & 0 \\ \times & \cdots & \times & \times & 0 \\ \times & \cdots & \times & \times & \times \end{pmatrix} \begin{matrix} \text{row } 1 \\ \\ \text{row } q \\ \text{row } q+1 \\ \text{row } q+2 \end{matrix}$$

with the same pattern for $\hat\Lambda^{H}$.

  • (g) Estimate the covariance matrix of $n^{\frac12}(\hat\gamma_{GL} - \gamma_0)$ by $(\hat\Lambda^{GL})^{-1} \hat\Omega^{GL} \{(\hat\Lambda^{GL})^{-1}\}^T$ and that of $n^{\frac12}(\hat\gamma_{H} - \gamma_0)$ by $(\hat\Lambda^{H})^{-1} \hat\Omega^{H} \{(\hat\Lambda^{H})^{-1}\}^T$, respectively.

It is also possible to apply the entire algorithm with $\hat\Omega$. As Ghosh (2010) discussed, the approach of Strawderman (2005) and Zeng and Lin (2008) is useful for estimating the covariance matrix: it does not require solving the estimating equations many times to obtain a large number of realizations, and is therefore computationally less expensive than the approach of Parzen, Wei and Ying (1994).
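A condensed sketch of steps (a)–(g) for one of the two stacked systems (ours, not the authors' code; `Q_n` stands for a user-supplied stacked estimating function, and the row-by-row regressions are done in a single joint least squares step, which produces the same slope matrix):

```python
import numpy as np

def szl_sandwich(Q_n, gamma_hat, Omega_hat, n, M=200, seed=0):
    """Least-squares estimate of the slope matrix Lambda-hat and the
    resulting sandwich covariance (Strawderman 2005; Zeng and Lin 2008)."""
    rng = np.random.default_rng(seed)
    p = len(gamma_hat)
    B = rng.multivariate_normal(np.zeros(p), Omega_hat, size=M)  # step (a)
    Y = np.array([Q_n(gamma_hat + b / np.sqrt(n)) for b in B])   # step (b)
    coef, *_ = np.linalg.lstsq(B, Y, rcond=None)  # steps (c)-(e'): Y ~ B Lam^T
    Lam_inv = np.linalg.inv(coef.T)
    return Lam_inv @ Omega_hat @ Lam_inv.T        # step (g): sandwich form
```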

3. Goodness of fit

Goodness of fit is crucial to model evaluation. With the proposed estimation procedures, we can evaluate model (1) under the assumption that the propensity score model is true. Since our estimating functions are unbiased, we can define score processes similar to those of Lin, Robins and Wei (1996) and Cho and Ghosh (2017). Let

$$\begin{aligned} S_n(t; \hat\eta_{ca}^{tr}, \hat\alpha) &= n^{-\frac12} \sum_{i=1}^n Z_i \hat M_{1i}^w(t; \hat\eta_{ca}^{tr}, \hat\alpha), \\ U_n^{GL}(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha) &= n^{-\frac12} \sum_{i=1}^n Z_i \hat M_{2i}^w(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha), \\ U_n^{H}(t; \hat\beta_{H,ca}^{tr}, \hat\alpha) &= 2 n^{-\frac12} \{n(n-1)\}^{-1} \sum_{i \ne j} \sum_k w_i(\hat\alpha) w_j(\hat\alpha) (Z_i - Z_j) \phi_{ijk}(\hat\beta_{H,ca}^{tr})\, I\{\tilde T_{i(j)k}(\hat\beta_{H,ca}^{tr}) \wedge \tilde T_{j(i)k}(\hat\beta_{H,ca}^{tr}) \le t\}. \end{aligned}$$

Let $\{(\alpha^{*})^T, \eta_{ca}^{tr*}, \theta_{GL,ca}^{tr*}, \theta_{H,ca}^{tr*}\}$ be the solutions from (2) and (3). As an extension of Lin, Robins and Wei (1996) and Cho and Ghosh (2017), the null distribution of $\{S_n(s; \hat\eta_{ca}^{tr}, \hat\alpha), U_n^{GL}(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha), U_n^{H}(u; \hat\beta_{H,ca}^{tr}, \hat\alpha)\}^T$ can be approximated by

$$\begin{aligned} \hat S_n(s) &= n^{-\frac12} \sum_{i=1}^n \int_{-\infty}^{s} \left[ Z_i - \frac{\sum_{j=1}^n I\{\tilde D_j(\hat\eta_{ca}^{tr}) \ge u\} Z_j w_j(\hat\alpha)}{\sum_{j=1}^n I\{\tilde D_j(\hat\eta_{ca}^{tr}) \ge u\} w_j(\hat\alpha)} \right] d\hat M_{1i}^w(u; \hat\eta_{ca}^{tr}, \hat\alpha)\, A_i + S_n(s; \eta_{ca}^{tr*}, \alpha^{*}) - S_n(s; \hat\eta_{ca}^{tr}, \hat\alpha), \\ \hat U_n^{GL}(t) &= n^{-\frac12} \sum_{i=1}^n \int_{-\infty}^{t} \left[ Z_i - \frac{\sum_{j=1}^n I\{\tilde D_j(\hat\beta_{GL,ca}^{tr}) \ge u\} Z_j w_j(\hat\alpha)}{\sum_{j=1}^n I\{\tilde D_j(\hat\beta_{GL,ca}^{tr}) \ge u\} w_j(\hat\alpha)} \right] d\hat M_{2i}^w(u; \hat\beta_{GL,ca}^{tr}, \hat\alpha)\, A_i + U_n^{GL}(t; \beta_{GL,ca}^{tr*}, \alpha^{*}) - U_n^{GL}(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha), \\ \hat U_n^{H}(u) &= 2 n^{-\frac12} \{n(n-1)\}^{-1} \sum_{i<j} \sum_k w_i(\hat\alpha) w_j(\hat\alpha) (Z_i - Z_j) \phi_{ijk}(\hat\beta_{H,ca}^{tr})\, I\{\tilde T_{i(j)k}(\hat\beta_{H,ca}^{tr}) \wedge \tilde T_{j(i)k}(\hat\beta_{H,ca}^{tr}) \le u\}\, A_i + U_n^{H}(u; \beta_{H,ca}^{tr*}, \alpha^{*}) - U_n^{H}(u; \hat\beta_{H,ca}^{tr}, \hat\alpha), \end{aligned}$$

where $\beta_{GL,ca}^{tr*} = (\eta_{ca}^{tr*}, \theta_{GL,ca}^{tr*})$ and $\beta_{H,ca}^{tr*} = (\eta_{ca}^{tr*}, \theta_{H,ca}^{tr*})$. These are called bootstrapped processes. Based on them, as in Lin, Robins and Wei (1996), Peng and Fine (2006) and Cho and Ghosh (2017), we can compute the test statistics $\sup_s |S_n(s; \hat\eta_{ca}^{tr}, \hat\alpha)|$, $\sup_t |U_n^{GL}(t; \hat\beta_{GL,ca}^{tr}, \hat\alpha)|$ and $\sup_u |U_n^{H}(u; \hat\beta_{H,ca}^{tr}, \hat\alpha)|$ and obtain p-values as in Hsieh, Ding and Wang (2011) and Cho and Ghosh (2017) using $\{(\alpha^{*})^T, \eta_{ca}^{tr*}, \theta_{GL,ca}^{tr*}, \theta_{H,ca}^{tr*}\}$. Graphically, we can plot 20 or 30 bootstrapped processes against the observed processes.
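The sup statistics and p-values are straightforward to compute once the processes are evaluated on a grid; a sketch (ours; `obs` is the observed process on a time grid and `boot` an $M \times$ grid array of bootstrapped processes):

```python
import numpy as np

def sup_test(obs, boot):
    """Sup-norm goodness-of-fit statistic and bootstrap p-value."""
    stat = np.max(np.abs(obs))
    boot_stats = np.max(np.abs(boot), axis=1)
    return stat, float(np.mean(boot_stats >= stat))
```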

4. Results

4.1. Simulation Studies

We first present results from simulation studies. In the first simulation setting, we generate a confounder $V \sim N(0, 1)$, and the treatment variable is generated from the logistic regression model, i.e., $Z \sim \text{Bernoulli}(p)$, where

$$p = \frac{\exp(\alpha_0^T H)}{1 + \exp(\alpha_0^T H)},$$

$H = (1, V)^T$ and $\alpha_0 = (\alpha_1, \alpha_2)^T = (0, 0.5)^T$. The true parameter values are $\theta_0 = (0.5, 1)^T$ and $\eta_0 = (1, 0.5)^T$. To induce correlation between the time to recurrences and the time to dependent censoring, we create a random variable $\nu$ with a gamma distribution with mean 1 and variance 1. We set $\exp(\epsilon_k^R) = \sum_{j=1}^{k} J_{1j}$, where $J_{1j}$, the gap time between recurrent events on the original scale, is generated from an exponential distribution with rate $4\nu^{-1}$, and $\exp(\epsilon^D)$ follows an exponential distribution with rate $\nu^{-1}$. That is, $\exp(T_k) = \exp(\theta_0^T W) \sum_{j=1}^{k} J_{1j}$ and $\exp(D) = \exp(\eta_0^T W) \exp(\epsilon^D)$ (Hsieh, Ding and Wang, 2011), which is equivalent to our model (1). The independent censoring time on the original scale is uniform on $(0, 20)$. We generated 400 simulated datasets, and for each dataset $M = 200$ resampling runs are used for both the Parzen, Wei and Ying (1994) approach and the approach of Strawderman (2005) and Zeng and Lin (2008). Sample sizes $n = 100$ and $n = 200$ are considered.
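For reference, one replicate of this data-generating mechanism can be sketched as follows (our code, following the gamma frailty and exponential gap-time construction described above):

```python
import numpy as np

rng = np.random.default_rng(2019)
n = 200
V = rng.normal(0.0, 1.0, n)                        # confounder
H = np.column_stack([np.ones(n), V])
Z = rng.binomial(1, 1.0 / (1.0 + np.exp(-(H @ np.array([0.0, 0.5])))))
W = np.column_stack([Z, V])
theta0, eta0 = np.array([0.5, 1.0]), np.array([1.0, 0.5])
nu = rng.gamma(1.0, 1.0, n)                        # frailty: mean 1, var 1

data = []
for i in range(n):
    D = np.exp(W[i] @ eta0) * rng.exponential(nu[i])  # exp(eps^D): rate 1/nu
    C = rng.uniform(0.0, 20.0)                        # independent censoring
    d_tilde = min(D, C)
    scale = np.exp(W[i] @ theta0)
    events, total = [], 0.0
    while True:
        total += rng.exponential(nu[i] / 4.0)         # gap J_1j: rate 4/nu
        if scale * total > d_tilde:
            break
        events.append(scale * total)                  # observed exp(T_k)
    data.append((np.log(np.array(events)), np.log(d_tilde), int(D <= C)))
```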

In the simulation studies of previous papers on related problems (Lin, Robins and Wei, 1996; Peng and Fine, 2006; Ghosh and Lin, 2003; Hsieh, Ding and Wang, 2011), only bounded covariates were employed because of artificial censoring. In our simulation study, we instead employ theoretically unbounded covariates to demonstrate the validity of the proposed approach. We compute bias, empirical standard deviation (EMPSD), mean standard error (SEE) and 95% coverage (Cover). For the proposed approach, SEE is computed in three ways: the data bootstrap (Boot), the resampling approach of Parzen, Wei and Ying (1994) (Parzen), and the resampling approach of Strawderman (2005) and Zeng and Lin (2008) (SZL). We compare our approach with the one employing the entire covariate vector (henceforth the "full model approach"). Here $\hat\eta^F$, $\hat\theta_{GL}^F$ and $\hat\theta_{H}^F$ denote the full-model estimators for dependent censoring, for the event of interest by Ghosh and Lin (2003), and for the event of interest by Hsieh, Ding and Wang (2011), respectively.

Tables 1 and 2 show the results for the full model and the proposed approaches. For the full model approach, 7 simulation runs in the $n = 200$ case are omitted due to standard error values of zero. The Ghosh and Lin (2003) estimators are biased and do not have good coverage. The proposed approach has no simulation runs with zero standard errors; moreover, its estimators have correct coverage and lower bias than those from the full model approach.

Table 1:

Full model when n = 100 and n = 200, Simulation setting 1

| $n$ | Estimator | Covariate | Bias | EMPSD | SEE | Cover |
|---|---|---|---|---|---|---|
| 100 | $\hat\eta^F$ | Z | 0.009 | 0.335 | 0.355 | 0.958 |
| 100 | $\hat\eta^F$ | V | 0.010 | 0.178 | 0.178 | 0.930 |
| 100 | $\hat\theta_{GL}^F$ | Z | −0.11 | 0.444 | 0.330 | 0.832 |
| 100 | $\hat\theta_{GL}^F$ | V | 0.112 | 0.327 | 0.247 | 0.868 |
| 100 | $\hat\theta_{H}^F$ | Z | −0.009 | 0.329 | 0.360 | 0.945 |
| 100 | $\hat\theta_{H}^F$ | V | 0.011 | 0.165 | 0.203 | 0.930 |
| 200 | $\hat\eta^F$ | Z | −0.025 | 0.234 | 0.245 | 0.954 |
| 200 | $\hat\eta^F$ | V | −0.001 | 0.119 | 0.123 | 0.949 |
| 200 | $\hat\theta_{GL}^F$ | Z | −0.11 | 0.518 | 0.305 | 0.88 |
| 200 | $\hat\theta_{GL}^F$ | V | 0.097 | 0.377 | 0.220 | 0.906 |
| 200 | $\hat\theta_{H}^F$ | Z | −0.013 | 0.235 | 0.232 | 0.936 |
| 200 | $\hat\theta_{H}^F$ | V | 0.002 | 0.116 | 0.120 | 0.939 |

Table 2:

Proposed approach n = 100 and n = 200, Simulation setting 1

| $n$ | Estimator | Bias | EMPSD | SEE (Boot) | SEE (Parzen) | SEE (SZL) | Cover (Boot) | Cover (Parzen) | Cover (SZL) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | $\hat\eta_{ca}^{tr}$ | 0.027 | 0.340 | 0.358 | 0.361 | 0.343 | 0.95 | 0.945 | 0.938 |
| 100 | $\hat\theta_{GL,ca}^{tr}$ | −0.001 | 0.322 | 0.318 | 0.327 | 0.308 | 0.93 | 0.942 | 0.928 |
| 100 | $\hat\theta_{H,ca}^{tr}$ | −0.004 | 0.337 | 0.339 | 0.355 | 0.390 | 0.95 | 0.942 | 0.93 |
| 100 | $\hat\alpha_1$ | −0.012 | 0.206 | 0.216 | 0.216 | 0.213 | 0.955 | 0.95 | 0.973 |
| 100 | $\hat\alpha_2$ | 0.023 | 0.216 | 0.237 | 0.238 | 0.228 | 0.938 | 0.942 | 0.968 |
| 200 | $\hat\eta_{ca}^{tr}$ | −0.018 | 0.233 | 0.251 | 0.251 | 0.246 | 0.955 | 0.95 | 0.963 |
| 200 | $\hat\theta_{GL,ca}^{tr}$ | −0.009 | 0.214 | 0.219 | 0.221 | 0.216 | 0.96 | 0.955 | 0.963 |
| 200 | $\hat\theta_{H,ca}^{tr}$ | −0.009 | 0.236 | 0.235 | 0.237 | 0.283 | 0.935 | 0.925 | 0.935 |
| 200 | $\hat\alpha_1$ | 0.001 | 0.142 | 0.148 | 0.149 | 0.148 | 0.94 | 0.938 | 0.955 |
| 200 | $\hat\alpha_2$ | 0.014 | 0.154 | 0.160 | 0.161 | 0.157 | 0.95 | 0.942 | 0.96 |

Although the simulation study above shows the validity of our approach, we consider a second simulation setting. We generate confounders $V_1 \sim N(50, 10^2)$ and $V_2 \sim N(0, 1)$; real data usually contain covariates with large variability, and our second setting reflects this. As in the first setting, the treatment variable is generated as $Z \sim \text{Bernoulli}(p)$,

$$p = \frac{\exp(\alpha_0^T H)}{1 + \exp(\alpha_0^T H)},$$

where $\alpha_0 = (\alpha_1, \alpha_2, \alpha_3)^T = (2.5, -0.05, 0.05)^T$. The true values are $\theta_0 = (-0.759, 0.007, 0.5)^T$ and $\eta_0 = (-0.84, 0.03, 0.2)^T$. As in simulation setting 1, we generate a random variable $\nu$ with a gamma distribution, now with mean 50. Times to recurrent events and time to death are generated as in setting 1. The numbers of simulation runs, and of resampling runs within each simulation, are the same as in the first study. Sample sizes are $n = 200$ and $n = 400$.

In this setting, for calculations under the full model approach, 129 simulation runs are removed when $n = 200$ and 165 when $n = 400$, for the same reasons as in Simulation setting 1. Table 3 shows the numerical results from the full model approach for $n = 200$ and $n = 400$. Not only are the Ghosh and Lin (2003) estimators biased, but the Hsieh, Ding and Wang (2011) estimators also show strong finite-sample bias. The Hsieh, Ding and Wang (2011) approach gives a more efficient estimator than Ghosh and Lin (2003), but when the degree of artificial censoring is extremely large, the Hsieh, Ding and Wang (2011) estimator is also inefficient. Table 4 shows the proposed approach for this setting: the proposed estimators have correct coverage and less bias than the original Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) approaches.

Table 3:

Full model when n = 200 and n = 400, Simulation setting 2

| $n$ | Estimator | Covariate | Bias | EMPSD | SEE | Cover |
|---|---|---|---|---|---|---|
| 200 | $\hat\eta^F$ | Z | −0.032 | 0.338 | 0.149 | 0.62 |
| 200 | $\hat\eta^F$ | V1 | −0.001 | 0.017 | 0.017 | 0.93 |
| 200 | $\hat\eta^F$ | V2 | 0.013 | 0.156 | 0.150 | 0.93 |
| 200 | $\hat\theta_{GL}^F$ | Z | 0.1 | 0.198 | 0.029 | 0.166 |
| 200 | $\hat\theta_{GL}^F$ | V1 | −0.083 | 0.095 | 0.039 | 0.483 |
| 200 | $\hat\theta_{GL}^F$ | V2 | 0.063 | 0.131 | 0.032 | 0.280 |
| 200 | $\hat\theta_{H}^F$ | Z | 0.117 | 0.186 | 0.048 | 0.203 |
| 200 | $\hat\theta_{H}^F$ | V1 | −0.11 | 0.093 | 0.034 | 0.351 |
| 200 | $\hat\theta_{H}^F$ | V2 | 0.056 | 0.127 | 0.051 | 0.373 |
| 400 | $\hat\eta^F$ | Z | −0.03 | 0.238 | 0.150 | 0.749 |
| 400 | $\hat\eta^F$ | V1 | −0.0002 | 0.012 | 0.012 | 0.936 |
| 400 | $\hat\eta^F$ | V2 | 0.004 | 0.122 | 0.113 | 0.923 |
| 400 | $\hat\theta_{GL}^F$ | Z | 0.075 | 0.151 | 0.03 | 0.183 |
| 400 | $\hat\theta_{GL}^F$ | V1 | −0.072 | 0.092 | 0.039 | 0.579 |
| 400 | $\hat\theta_{GL}^F$ | V2 | 0.053 | 0.099 | 0.034 | 0.4 |
| 400 | $\hat\theta_{H}^F$ | Z | 0.101 | 0.156 | 0.054 | 0.285 |
| 400 | $\hat\theta_{H}^F$ | V1 | −0.104 | 0.105 | 0.037 | 0.434 |
| 400 | $\hat\theta_{H}^F$ | V2 | 0.046 | 0.102 | 0.062 | 0.455 |

Table 4:

Proposed approach n = 200 and n = 400, Simulation setting 2

| $n$ | Estimator | Bias | EMPSD | SEE (Boot) | SEE (Parzen) | SEE (SZL) | Cover (Boot) | Cover (Parzen) | Cover (SZL) |
|---|---|---|---|---|---|---|---|---|---|
| 200 | $\hat\eta_{ca}^{tr}$ | 0.007 | 0.368 | 0.355 | 0.359 | 0.339 | 0.908 | 0.912 | 0.91 |
| 200 | $\hat\theta_{GL,ca}^{tr}$ | 0.001 | 0.208 | 0.232 | 0.232 | 0.225 | 0.95 | 0.958 | 0.96 |
| 200 | $\hat\theta_{H,ca}^{tr}$ | 0.007 | 0.231 | 0.245 | 0.245 | 0.234 | 0.94 | 0.932 | 0.95 |
| 200 | $\hat\alpha_1$ | 0.036 | 0.813 | 0.829 | 0.822 | 0.805 | 0.93 | 0.932 | 0.945 |
| 200 | $\hat\alpha_2$ | −0.001 | 0.016 | 0.016 | 0.016 | 0.016 | 0.932 | 0.925 | 0.953 |
| 200 | $\hat\alpha_3$ | 0.013 | 0.143 | 0.154 | 0.153 | 0.152 | 0.952 | 0.938 | 0.96 |
| 400 | $\hat\eta_{ca}^{tr}$ | 0.006 | 0.248 | 0.245 | 0.245 | 0.24 | 0.952 | 0.932 | 0.95 |
| 400 | $\hat\theta_{GL,ca}^{tr}$ | −0.009 | 0.159 | 0.161 | 0.16 | 0.168 | 0.945 | 0.935 | 0.943 |
| 400 | $\hat\theta_{H,ca}^{tr}$ | −0.006 | 0.165 | 0.169 | 0.169 | 0.165 | 0.938 | 0.93 | 0.943 |
| 400 | $\hat\alpha_1$ | 0.011 | 0.538 | 0.567 | 0.569 | 0.562 | 0.952 | 0.95 | 0.965 |
| 400 | $\hat\alpha_2$ | −0.0001 | 0.011 | 0.011 | 0.011 | 0.011 | 0.955 | 0.955 | 0.963 |
| 400 | $\hat\alpha_3$ | 0.001 | 0.108 | 0.106 | 0.106 | 0.105 | 0.938 | 0.92 | 0.943 |

To evaluate $\hat\alpha$, we compare the mean standard errors from the three methods with those from glm. In Simulation setting 1, the mean standard errors of $\hat\alpha_1$ and $\hat\alpha_2$ from glm are 0.209 and 0.225 when $n = 100$; for $n = 200$, these values are 0.147 and 0.156. For Simulation setting 2 with $n = 200$, the mean glm standard errors of $\hat\alpha_1$, $\hat\alpha_2$ and $\hat\alpha_3$ are 0.797, 0.016 and 0.149; increasing the sample size to 400 gives 0.559, 0.011 and 0.104, respectively.

To quantify how many observations are artificially censored, we compute the artificial censoring rate. The formulas are

$$\begin{aligned} ACP_{Full}^{GL} &= 1 - \frac{\sum_{k=1}^{K} \sum_{i=1}^{n} \tilde\delta_{ik}(\hat\beta_{GL}^{F})}{\sum_{k=1}^{K} \sum_{i=1}^{n} \delta_{ik}}, & ACP_{Full}^{H} &= 1 - \frac{\sum_{k=1}^{K} \sum_{i=1}^{n} \sum_{j \ne i} \tilde\delta_{i(j)k}(\hat\beta_{H}^{F})}{(n-1) \sum_{k=1}^{K} \sum_{i=1}^{n} \delta_{ik}}, \\ ACP_{Pro}^{GL} &= 1 - \frac{\sum_{k=1}^{K} \sum_{i=1}^{n} \tilde\delta_{ik}(\hat\beta_{GL,ca}^{tr})}{\sum_{k=1}^{K} \sum_{i=1}^{n} \delta_{ik}}, & ACP_{Pro}^{H} &= 1 - \frac{\sum_{k=1}^{K} \sum_{i=1}^{n} \sum_{j \ne i} \tilde\delta_{i(j)k}(\hat\beta_{H,ca}^{tr})}{(n-1) \sum_{k=1}^{K} \sum_{i=1}^{n} \delta_{ik}}, \end{aligned}$$

where $\hat\beta_{GL}^{F} = (\hat\eta^{F}, \hat\theta_{GL}^{F})^T$ and $\hat\beta_{H}^{F} = (\hat\eta^{F}, \hat\theta_{H}^{F})^T$, and $\tilde\delta_{ik}$ is the new censoring indicator after applying the artificial censoring of Ghosh and Lin (2003) (Hsieh, Ding and Wang, 2011; Cho, Hu and Ghosh, 2018). Here $ACP_{Full}^{GL}$ and $ACP_{Full}^{H}$ are the artificial censoring rates for the Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) approaches, while $ACP_{Pro}^{GL}$ and $ACP_{Pro}^{H}$ are those for the proposed Ghosh and Lin (2003)-type and Hsieh, Ding and Wang (2011)-type approaches of this paper. Table 5 shows the artificial censoring rates for the simulation settings above. Our approach has less artificial censoring than the Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) approaches, which implies that our method uses more information in the estimation procedure; this is why our approach provides smaller bias and better coverage than the two original approaches.
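The rates themselves are simple proportions; for example, the Ghosh and Lin (2003)-type version can be computed as follows (our sketch; `delta` and `delta_tilde` are 0/1 arrays of the original and post-artificial-censoring indicators over all $(i, k)$ pairs):

```python
import numpy as np

def acp(delta, delta_tilde):
    """Artificial censoring rate: the fraction of originally uncensored
    events that the artificial censoring transformation censors."""
    return 1.0 - delta_tilde.sum() / delta.sum()
```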

Table 5:

Artificial censoring rate for Simulation settings 1 and 2

| Setting | $n$ | CenD | CenX | ACRFullGL | ACRFullH | ACRProGL | ACRProH |
|---|---|---|---|---|---|---|---|
| 1 | 100 | 0.102 | 0.598 | 0.708 | 0.299 | 0.157 | 0.157 |
| 1 | 200 | 0.099 | 0.597 | 0.741 | 0.29 | 0.149 | 0.148 |
| 2 | 200 | 0.567 | 0.436 | 0.807 | 0.837 | 0.058 | 0.036 |
| 2 | 400 | 0.566 | 0.434 | 0.799 | 0.832 | 0.039 | 0.024 |

CenD : censoring rate for dependent censoring

CenX : censoring rate for recurrent events

ACRFullGL : artificial censoring rate for original Ghosh and Lin (2003) method using full model

ACRFullH : artificial censoring rate for original Hsieh, Ding and Wang (2011) method using full model

ACRProGL : artificial censoring rate for proposed Ghosh and Lin (2003) type method

ACRProH : artificial censoring rate for proposed Hsieh, Ding and Wang (2011) type method

Next we perform simulations using setting 1, assuming a randomized study with sample sizes $n = 100$ and $n = 200$. In the full model results, one simulation run for $n = 100$ and 5 runs for $n = 200$ are removed due to standard error values of zero when fitting the full model. Table 6 shows the full-model results in this setting; they show a trend similar to the original Simulation setting 1. Table 7 shows the results for the proposed method, which again performs well numerically. Since this is a randomized study, using only $Z$ (without any weight) in the estimation procedure still provides reasonable finite-sample performance. Table 8 gives the results from fitting the model with $Z$ only and confirms this. However, comparing Tables 7 and 8, using the estimated propensity score yields more efficient results (i.e., smaller standard errors) than using $Z$ only. Table 9 shows the artificial censoring rates for this modified setting. As in the original Simulation setting 1, the proposed method makes better use of the information than the conventional full model, and its use of information is similar to modeling with $Z$ only.

Table 6:

Conventional approach n = 100 and n = 200, Simulation setting 1 with randomized study

| $n$ | Estimator | Covariate | Bias | EMPSD | SEE | Cover |
|---|---|---|---|---|---|---|
| 100 | $\hat\eta^F$ | Z | −0.008 | 0.352 | 0.345 | 0.930 |
| 100 | $\hat\eta^F$ | V | −0.018 | 0.163 | 0.172 | 0.952 |
| 100 | $\hat\theta_{GL}^F$ | Z | −0.078 | 0.395 | 0.320 | 0.855 |
| 100 | $\hat\theta_{GL}^F$ | V | 0.106 | 0.329 | 0.242 | 0.840 |
| 100 | $\hat\theta_{H}^F$ | Z | 0.006 | 0.311 | 0.357 | 0.945 |
| 100 | $\hat\theta_{H}^F$ | V | −0.014 | 0.163 | 0.197 | 0.940 |
| 200 | $\hat\eta^F$ | Z | 0.02 | 0.238 | 0.236 | 0.942 |
| 200 | $\hat\eta^F$ | V | 0.005 | 0.128 | 0.119 | 0.924 |
| 200 | $\hat\theta_{GL}^F$ | Z | −0.056 | 0.339 | 0.300 | 0.884 |
| 200 | $\hat\theta_{GL}^F$ | V | 0.089 | 0.267 | 0.220 | 0.873 |
| 200 | $\hat\theta_{H}^F$ | Z | 0.006 | 0.221 | 0.223 | 0.944 |
| 200 | $\hat\theta_{H}^F$ | V | 0.001 | 0.120 | 0.116 | 0.914 |

Table 7:

Proposed approach n = 100 and n = 200, Simulation setting 1 with randomized study

| $n$ | Estimator | Bias | EMPSD | SEE (Boot) | SEE (Parzen) | SEE (SZL) | Cover (Boot) | Cover (Parzen) | Cover (SZL) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | $\hat\eta_{ca}^{tr}$ | −0.005 | 0.360 | 0.344 | 0.348 | 0.332 | 0.928 | 0.932 | 0.925 |
| 100 | $\hat\theta_{GL,ca}^{tr}$ | 0.002 | 0.296 | 0.299 | 0.302 | 0.290 | 0.958 | 0.952 | 0.960 |
| 100 | $\hat\theta_{H,ca}^{tr}$ | 0.004 | 0.313 | 0.320 | 0.323 | 0.377 | 0.962 | 0.952 | 0.968 |
| 100 | $\hat\alpha_1$ | 0.006 | 0.203 | 0.208 | 0.208 | 0.208 | 0.942 | 0.935 | 0.960 |
| 100 | $\hat\alpha_2$ | −0.003 | 0.214 | 0.219 | 0.216 | 0.216 | 0.928 | 0.920 | 0.948 |
| 200 | $\hat\eta_{ca}^{tr}$ | −0.026 | 0.232 | 0.240 | 0.241 | 0.237 | 0.952 | 0.940 | 0.948 |
| 200 | $\hat\theta_{GL,ca}^{tr}$ | 0.005 | 0.208 | 0.209 | 0.209 | 0.207 | 0.958 | 0.945 | 0.960 |
| 200 | $\hat\theta_{H,ca}^{tr}$ | 0.005 | 0.224 | 0.222 | 0.222 | 0.218 | 0.952 | 0.942 | 0.940 |
| 200 | $\hat\alpha_1$ | −0.007 | 0.144 | 0.144 | 0.144 | 0.144 | 0.930 | 0.930 | 0.948 |
| 200 | $\hat\alpha_2$ | −0.012 | 0.149 | 0.147 | 0.147 | 0.147 | 0.928 | 0.932 | 0.940 |

Table 8:

n = 100 and n = 200, Simulation setting 1 with randomized study, fitting only using Z

| $n$ | Estimator | Bias | EMPSD | SEE (Parzen) | SEE (SZL) | Cover (Parzen) | Cover (SZL) |
|---|---|---|---|---|---|---|---|
| 100 | $\hat\eta^{tr}$ | −0.009 | 0.361 | 0.365 | 0.354 | 0.938 | 0.948 |
| 100 | $\hat\theta^{GL,tr}$ | −0.002 | 0.350 | 0.363 | 0.354 | 0.95 | 0.958 |
| 100 | $\hat\theta^{H,tr}$ | −0.002 | 0.364 | 0.392 | 0.372 | 0.948 | 0.960 |
| 200 | $\hat\eta^{tr}$ | 0.002 | 0.242 | 0.253 | 0.251 | 0.950 | 0.953 |
| 200 | $\hat\theta^{GL,tr}$ | −0.009 | 0.256 | 0.254 | 0.252 | 0.940 | 0.950 |
| 200 | $\hat\theta^{H,tr}$ | −0.009 | 0.272 | 0.267 | 0.263 | 0.933 | 0.945 |

Table 9:

Artificial censoring rate

| $n$ | CenD | CenX | ACRFullGL | ACRFullH | ACRProGL | ACRProH | ACRTrtGL | ACRTrtH |
|---|---|---|---|---|---|---|---|---|
| 100 | 0.097 | 0.594 | 0.743 | 0.315 | 0.156 | 0.154 | 0.158 | 0.156 |
| 200 | 0.096 | 0.596 | 0.769 | 0.315 | 0.162 | 0.161 | 0.164 | 0.162 |

CenD : censoring rate for dependent censoring

CenX : censoring rate for recurrent events

ACRFullGL : artificial censoring rate for original Ghosh and Lin (2003) method using full model

ACRFullH : artificial censoring rate for original Hsieh, Ding and Wang (2011) method using full model

ACRProGL : artificial censoring rate for proposed Ghosh and Lin (2003) type method

ACRProH : artificial censoring rate for proposed Hsieh, Ding and Wang (2011) type method

ACRTrtGL : artificial censoring rate for original Ghosh and Lin (2003) method only with fitting Z only

ACRTrtH : artificial censoring rate for original Hsieh, Ding and Wang (2011) method only with fitting Z only

4.2. Real data analysis

We now apply our methodology to data from an HIV study. Although this dataset comes from a randomized study, our method can still provide efficiency gains for estimating treatment effects, as was shown in the simulation study and in another HIV data analysis in Cho, Hu and Ghosh (2018). The rationale for this gain is that, by including confounders in the estimating function through the propensity score, the weights incorporate more information into the estimation procedure.

This dataset was previously analyzed by Ghosh (2010); details of the study can be found in Abrams et al. (1994) and Neaton et al. (1994). The study enrolled a total of 467 patients who were infected with HIV or had a CD4+ lymphocyte count less than 300 cells per mm³. The main interest is to compare the treatments (500 mg of ddI and 2.25 mg of ddC), to which patients were randomly assigned. The median CD4 count at the beginning of the study was 37 cells per mm³ (Neaton et al., 1994). Available covariates are treatment, history of previous infection with HIV, and Karnofsky score (a measure of functional status, with lower values indicating worse status). Treatment is coded 0/1 (0 = ddI, 1 = ddC), hivh denotes history of previous infection with HIV (1 = yes, 0 = no), and Karnofsky score is abbreviated ks.

With the model employing treatment and the other covariates, the estimated association between death and treatment is 0.208 with standard error 0.083. For recurrent events, the estimated treatment association from both the Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) approaches is 0.011 with a standard error of 0.012. All standard error calculations here are based on the resampling approach of Parzen, Wei and Ying (1994). Holding covariates fixed, treatment is associated with a 1.231-fold change in the percentiles of time to death (exp(0.208) ≈ 1.231). Similarly, treatment is associated with a small increase in the percentiles of time to recurrent events given the confounder values for both the Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) procedures.

Table 10 shows the results from the proposed approach. Since this is a randomized study, excluding the other variables does not affect estimation of the treatment effects. The standard errors of the treatment-effect estimates from fitting treatment only are 0.117 for death, 0.138 for the Ghosh and Lin (2003) approach, and 0.145 for the Hsieh, Ding and Wang (2011) approach. Thus there is a modest efficiency improvement for the estimated effect of treatment on death and recurrent events.

Table 10:

Results from the proposed method

| | Estimate | SE (Parzen resampling) | SE (bootstrap) | SE (SZL resampling) |
|---|---|---|---|---|
| Dependent censoring | 0.247 | 0.1 | 0.103 | 0.099 |
| Ghosh and Lin (2003) | 0.022 | 0.134 | 0.121 | 0.116 |
| Hsieh, Ding and Wang (2011) | −0.022 | 0.137 | 0.131 | 0.119 |

Logistic regression model

| | Estimate | SE (Parzen resampling) | SE (bootstrap) | SE (SZL resampling) |
|---|---|---|---|---|
| Intercept | 1.348 | 0.828 | 0.839 | 0.792 |
| hivh | −0.032 | 0.203 | 0.217 | 0.206 |
| ks | −0.015 | 0.009 | 0.009 | 0.009 |

We calculated the artificial censoring rate for the various regression approaches. When fitting a model with the full covariate vector, the artificial censoring rate for both Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) is 1, so all failures are artificially censored. Including all variables in the model thus causes a total loss of information, which makes variance estimation impossible. The artificial censoring rates for the proposed Ghosh and Lin (2003)-type and Hsieh, Ding and Wang (2011)-type methods are 0.11 and 0.127, respectively. We conclude that our proposed approach retains enough information to estimate an appropriate level of variability, unlike the original Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011) methods.

We apply the goodness of fit method described in Section 3. Figure 1 shows the goodness of fit plots using the observed processes with 30 bootstrapped processes. The p-values are 0.895 and 0.845 for the proposed approaches for dependent censoring and for recurrent events based on the Ghosh and Lin (2003) method, respectively. However, the p-value from the goodness of fit test based on the proposed Hsieh, Ding and Wang (2011) method is 0.05. At the 0.05 level, there may be a violation of the functional form of the covariates in model (1), or issues with the propensity score model with respect to the recurrent events.

Figure 1: Goodness of fit plots using observed processes with 30 bootstrapped processes (upper left: proposed method for dependent censoring, upper right: proposed Ghosh and Lin (2003) method, lower left: proposed Hsieh, Ding and Wang (2011) method).

5. Discussion

In this paper, we have proposed an approach to estimating the treatment effect by incorporating the propensity score. Using the propensity score provides correct coverage and smaller bias than the estimators of Ghosh and Lin (2003) and Hsieh, Ding and Wang (2011), and the procedure is computationally easy to implement.

Our method assumes that the true propensity score model is a logistic regression model. In reality, this assumption may not hold, and it is very difficult to verify or test. Recently, Zhu et al. (2014) proposed combining propensity scores from parametric and nonparametric approaches using data-based weights. Propensity scores from different approaches may balance the confounders differently, so finding the optimal propensity score weight is important here. This will be communicated in a separate report.

One may be interested in establishing a causal interpretation for recurrent events. However, as discussed in Cho, Hu and Ghosh (2018), we need the following assumptions:

  1. Strong ignorability assumption: treatment is independent of the potential failure times given the confounders. Since the two types of failures are correlated with each other, the potential outcomes for both must be jointly independent of treatment given the confounders; moreover, with recurrent events, potential outcomes must be considered for each recurrence. Let $T_k(1), T_k(0)$, $k = 1, \ldots, K$, be the potential times to the $k$th recurrent event and $D(1), D(0)$ the potential times to the dependent censoring. As in Bai et al. (2013) and Cho, Hu and Ghosh (2018), it is reasonable to assume that the potential censoring variable is the same in the treatment and control groups. Mathematically, the assumption is
$$\{T_1(1), \ldots, T_K(1), T_1(0), \ldots, T_K(0), D(1), D(0)\} \perp (Z, C) \mid V.$$
  2. Noninteraction assumption: $(\epsilon_k^R, \epsilon^D) \perp Z$.

  3. Rank preservation assumption: if the dependent censoring (or terminal event) occurs earlier for person $i$ than for person $j$ under treatment, then, hypothetically under control, person $i$ also experiences the dependent censoring earlier than person $j$.

Except for the noninteraction assumption, these assumptions are strong and not testable. Establishing a causal interpretation is an interesting direction for future work.

In practice, the number of confounders can be very large. Even when it does not exceed the number of observations, using the propensity score directly on all confounders may be inefficient. Variable selection techniques are widely used (Tibshirani, 1996; Fan and Li, 2001), and such techniques may avoid overfitting and provide efficient balancing of the confounders. This is also future work.

Supplementary Material

Supp 1

Acknowledgments

This work was supported by NIH grant R01CA129102.

References

  • [1] Ding AA, Shi G, Wang W and Hsieh JJ (2009). Marginal regression analysis for semicompeting risks data under dependent censoring. Scandinavian Journal of Statistics 36(3): 481–500.
  • [2] Abrams DI, Goldman AI, Launer C, Korvick JA, Neaton JD, Crane LR, Grodesky M, Wakefield S, Muth K, Kornegay S, Cohn DL, Harris A, Luskin-Hawk R, Markowitz N, Sampson JH, Thompson M, Deyton L and the Terry Beirn Community Programs for Clinical Research on AIDS (1994). A comparative trial of didanosine or zalcitabine after treatment with zidovudine in patients with human immunodeficiency virus infection. New England Journal of Medicine 330(10): 657–662. doi: 10.1056/NEJM199403103301001
  • [3] Bai X, Tsiatis AA and O'Brien SM (2013). Doubly-robust estimators of treatment-specific survival distributions in observational studies with stratified sampling. Biometrics 69(4): 830–839. doi: 10.1111/biom.12076
  • [4] Cho Y and Ghosh D (2017). A general approach to goodness of fit for U processes. Statistica Sinica 27(3): 1175–1192. doi: 10.5705/ss.202014.0141
  • [5] Cho Y, Hu C and Ghosh D (2018). Covariate adjustment using propensity scores for dependent censoring problems in the accelerated failure time model. Statistics in Medicine 37(3): 390–404. doi: 10.1002/sim.7513
  • [6] Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456): 1348–1360. doi: 10.1198/016214501753382273
  • [7] Ferguson TS (1996). A Course in Large Sample Theory. London: Chapman and Hall.
  • [8] Fine JP, Jiang H and Chappell R (2001). On semi-competing risks data. Biometrika 88(4): 907–919. doi: 10.1093/biomet/88.4.907
  • [9] Ghosh D and Lin DY (2003). Semiparametric analysis of recurrent events data in the presence of dependent censoring. Biometrics 59(4): 877–885. doi: 10.1111/j.0006-341X.2003.00102.x
  • [10] Ghosh D and Lin DY (2002). Marginal regression models for recurrent and terminal events. Statistica Sinica 12: 663–688.
  • [11] Ghosh D (2010). Semiparametric analysis of recurrent events: artificial censoring, truncation, pairwise estimation and inference. Lifetime Data Analysis 16(4): 509–524. doi: 10.1007/s10985-009-9150-4
  • [12] Holland PW (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396): 945–960.
  • [13] Honoré BE and Powell JL (1994). Pairwise difference estimators of censored and truncated regression models. Journal of Econometrics 64(1-2): 241–278. doi: 10.1016/0304-4076(94)90065-5
  • [14] Hsieh J-J, Ding AA and Wang W (2011). Regression analysis for recurrent events data under dependent censoring. Biometrics 67(3): 719–729. doi: 10.1111/j.1541-0420.2010.01497.x
  • [15] Jin Z, Ying Z and Wei LJ (2001). A simple resampling method by perturbing the minimand. Biometrika 88(2): 381–390. doi: 10.1093/biomet/88.2.381
  • [16] Lin DY, Robins JM and Wei LJ (1996). Comparing two failure time distributions in the presence of dependent censoring. Biometrika 83(2): 381–393. doi: 10.1093/biomet/83.2.381
  • [17] Neaton JD, Wentworth DN, Rhame F, Hogan C, Abrams DI and Deyton L (1994). Considerations in choice of a clinical endpoint for AIDS clinical trials. Statistics in Medicine 13(19-20): 2107–2125. doi: 10.1002/sim.4780131919
  • [18] Parzen MI, Wei LJ and Ying Z (1994). A resampling method based on pivotal estimating functions. Biometrika 81(2): 341–350. doi: 10.1093/biomet/81.2.341
  • [19] Peng L and Fine JP (2006). Rank estimation of accelerated lifetime models with dependent censoring. Journal of the American Statistical Association 101(475): 1085–1093. doi: 10.1198/016214506000000131
  • [20] Pollard D (1990). Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics, Institute of Mathematical Statistics and the American Statistical Association.
  • [21] Rosenbaum PR and Rubin DB (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1): 41–55. doi: 10.1093/biomet/70.1.41
  • [22] Rubin DB (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5): 688–701. doi: 10.1037/h0037350
  • [23] Tibshirani R (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological) 58(1): 267–288.
  • [24] Stefanski LA and Boos DD (2002). The calculus of M-estimation. The American Statistician 56(1): 29–38. doi: 10.1198/000313002753631330
  • [25] Strawderman RL (2005). The accelerated gap times model. Biometrika 92(3): 647–666. doi: 10.1093/biomet/92.3.647
  • [26] Varadhan R, Xue Q-L and Bandeen-Roche K (2014). Semicompeting risks in aging research: methods, issues and needs. Lifetime Data Analysis 20(4): 538–562. doi: 10.1007/s10985-014-9295-7
  • [27] Ying Z (1993). A large sample study of rank estimation for censored regression data. The Annals of Statistics 21(1): 76–99.
  • [28] Zeng D and Lin DY (2008). Efficient resampling methods for nonsmooth estimating functions. Biostatistics 9(2): 355–363. doi: 10.1093/biostatistics/kxm034
  • [29] Zhu Y, Ghosh D, Mitra N and Mukherjee B (2014). A data-adaptive strategy for inverse weighted estimation of causal effects. Health Services and Outcomes Research Methodology 14(3): 69–91. doi: 10.1007/s10742-014-0124-y
  • [30] Ghosh D (2000). Nonparametric and Semiparametric Analysis of Recurrent Events in the Presence of Terminal Events and Dependent Censoring. Ph.D. thesis, University of Washington, Seattle, WA.
