Lifetime Data Analysis. 2023 Jan 5;29(3):537–554. doi: 10.1007/s10985-022-09584-2

On a simple estimation of the proportional odds model under right truncation

Peng Liu 1, Kwun Chuen Gary Chan 2, Ying Qing Chen 3
PMCID: PMC10258175  PMID: 36602639

Abstract

Retrospective sampling can be useful in epidemiological research for its convenience in exploring an etiological association. One particular retrospective sampling scheme arises when disease outcomes of the time-to-event type are collected subject to right truncation, along with other covariates of interest. For regression analysis of right-truncated time-to-event data, the so-called proportional reverse-time hazards model has been proposed, but the interpretation of its regression parameters tends to be cumbersome, which has greatly hampered its application in practice. In this paper, we instead consider the proportional odds model, an appealing alternative to the popular proportional hazards model. Under the proportional odds model, there is an embedded relationship between the reverse-time hazard function and the usual hazard function. Building on this relationship, we provide a simple procedure to estimate the regression parameters in the proportional odds model for right-truncated data. Weighted estimation is also studied.

Keywords: Biased sampling, Odds ratio, Reverse-time hazard function

Introduction

Truncation is common in survival analysis, where the incompleteness of the observations is due to a systematic biased selection process originating in the study design. Right-truncated data arise naturally when an incubation period (i.e., the time between disease incidence and the onset of clinical symptoms) cannot be observed completely in a retrospective study. In survival analysis, right truncation leads to biased sampling in which shorter observations are oversampled (Gürler 1996). For example, in studies of AIDS caused by blood transfusion (Lagakos et al. 1988), the incubation period is the time from a contaminated blood transfusion to the time when symptoms and signs of AIDS first become apparent. However, in those studies the follow-up period is usually limited, so only subjects who developed AIDS before the end of the study can be identified.

Many authors have studied right-truncated data: Woodroofe (1985) and Wang et al. (1986) focused on the asymptotic properties of the product limit estimator under random truncation. Keiding and Gill (1990) studied the asymptotic properties of estimators under random left truncation by reparametrizing the left truncation model as a three-state Markov process. Lagakos et al. (1988) considered nonparametric estimation and inference for right-truncated data by treating the process in reverse time; they showed that λ_B(t) = λ(τ − t), where τ is the study duration and λ_B(t) and λ(t) are the reverse-time and forward-time hazards, respectively. The authors also discussed the implications and limitations of introducing the reverse-time hazard to analyze right-truncated data. Gross (1992) further explained the necessity of the reverse-time hazard in the Cox model setting.

However, most of the current literature studies right-truncated data in a nonparametric setting, and fairly few authors have studied semiparametric models; among them, Kalbfleisch and Lawless (1989) formulated the Cox model on the reverse-time hazard (or retro hazard; Lagakos et al. 1988; Keiding and Gill 1990). For other related work on the reverse-time hazard, see Gross (1992) and Chen et al. (2004), among others.

In this paper, we study right-truncated data under a semiparametric proportional odds model. Different from the proportional hazards model, under the proportional odds model the reverse-time hazard has a simple log-linear relationship with the forward-time hazard, which leads to an intuitive estimator. While Sundaram (2009)'s method can also be adapted to the proportional odds model for right-truncated data, she focused on applying a reversed-time argument to an estimator for left-truncated data. Our estimator, on the other hand, utilizes a direct relationship between the reverse-time hazard, the forward-time hazard and the baseline odds function, so that we obtain a simpler estimator. Weight functions are also inserted into the estimating equation to obtain more efficient estimates.

The rest of the paper is organized as follows. Section 2 describes the inference procedure as well as the asymptotic results, Sect. 3 presents simulation and real data results, and Sect. 4 provides some discussion. Proofs of the theorems are given in the Appendix.

Inference procedure

Assume that the failure time of interest T follows the semiparametric proportional odds model:

\log\frac{1-S(t\mid Z)}{S(t\mid Z)}=\alpha(t)+Z\beta, \qquad (1)

and the observed failure time is subject to a right truncation time variable R. The observed data are (T_i, R_i), i = 1, …, n, where T_i ≤ R_i. Let τ be the study duration, which is greater than max{T_1, T_2, …, T_n}. An (observed) reverse-time sample (T_i^*, R_i^*), i = 1, …, n, can be constructed, where T^* = τ − T and R^* = τ − R, so that T^* is left truncated by the variable R^*. Denote (T̃, R̃) as the reverse-time sample (potentially truncated). Then the hazard function of T̃ is a quantity that originates at τ and counts backward in time. The reverse hazard and the cumulative reverse hazard function of the backward recurrence time are defined as

\lambda_B(t\mid Z)=\lim_{\Delta t\to 0}\frac{\Pr\{\tilde T\in(t-\Delta t,t]\mid \tilde T\le t,Z\}}{\Delta t}=\frac{f(t\mid Z)}{F(t\mid Z)},\qquad \Lambda_B(t\mid Z)=\int_t^\tau\lambda_B(s\mid Z)\,ds.
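Because the reverse hazard is the derivative of log F in t, it determines the conditional distribution directly. Assuming F(τ∣Z) = 1 (a natural convention here, since τ exceeds the largest observable failure time), integrating the definition above gives

\log F(\tau\mid Z)-\log F(t\mid Z)=\int_t^\tau\lambda_B(s\mid Z)\,ds=\Lambda_B(t\mid Z),\qquad\text{so that}\qquad F(t\mid Z)=\exp\{-\Lambda_B(t\mid Z)\}.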

We would like to mention that a similar definition of the reverse hazard can also be found in Kalbfleisch and Lawless (1989) and Jiang (2011). Denote v(t) = exp(α(t)) and let λ(t) = f(t)/S(t) be the forward-time hazard; then

\log\lambda(t\mid Z)-\log\lambda_B(t\mid Z)=\alpha(t)+Z\beta,\qquad \lambda_B(t\mid Z)=\frac{1}{\{1+v(t)\exp(Z\beta)\}v(t)}\frac{dv(t)}{dt}.
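To see where this relationship comes from, note that model (1) specifies the conditional odds of failure by time t as F(t∣Z)/S(t∣Z) = exp{α(t) + Zβ} = v(t)exp(Zβ), so the conditional distribution and both hazards follow directly:

S(t\mid Z)=\frac{1}{1+v(t)e^{Z\beta}},\qquad F(t\mid Z)=\frac{v(t)e^{Z\beta}}{1+v(t)e^{Z\beta}},\qquad f(t\mid Z)=\frac{e^{Z\beta}v'(t)}{\{1+v(t)e^{Z\beta}\}^{2}},

\lambda(t\mid Z)=\frac{f(t\mid Z)}{S(t\mid Z)}=\frac{e^{Z\beta}v'(t)}{1+v(t)e^{Z\beta}},\qquad \lambda_B(t\mid Z)=\frac{f(t\mid Z)}{F(t\mid Z)}=\frac{v'(t)}{\{1+v(t)e^{Z\beta}\}v(t)},

and hence λ(t∣Z)/λ_B(t∣Z) = v(t)exp(Zβ) = exp{α(t) + Zβ}, which is exactly the log-linear relationship displayed above.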

Consider the counting process

N_i(t)=I(t\le T_i\le R_i),\qquad Y_i(t)=I(T_i\le t\le R_i),

and denote

M_i(t,\beta)=N_i(t)-\int_t^\tau Y_i(s)\,\frac{1}{\{\exp(Z_i\beta)v(s)+1\}v(s)}\,dv(s).

Then Mi(t,β) is a martingale with respect to the self-exciting (canonical) filtration (Keiding and Gill 1990; Stralkowska-Kominiak and Stute 2009) and

M_i(dt,\beta)=dN_i(t)+Y_i(t)\,\frac{1}{\{\exp(Z_i\beta)v(t)+1\}v(t)}\,dv(t). \qquad (2)

Multiplying both sides of (2) by {exp(Z_iβ)v(t)+1} and summing over the n observations, we obtain

\sum_{i=1}^n\{\exp(Z_i\beta)v(t)+1\}\,dN_i(t)+\sum_{i=1}^n Y_i(t)\frac{dv(t)}{v(t)}=\sum_{i=1}^n\{\exp(Z_i\beta)v(t)+1\}\,M_i(dt,\beta). \qquad (3)

Dividing both sides by Σ_{i=1}^n Y_i(t), we obtain:

\frac{\sum_{i=1}^n\{\exp(Z_i\beta)v(t)+1\}\,dN_i(t)}{\sum_{i=1}^n Y_i(t)}+\frac{dv(t)}{v(t)}=\frac{\sum_{i=1}^n\{\exp(Z_i\beta)v(t)+1\}\,M_i(dt,\beta)}{\sum_{i=1}^n Y_i(t)},

which is equivalent to:

v(t)\frac{\sum_{i=1}^n\exp(Z_i\beta)\,dN_i(t)}{\sum_{i=1}^n Y_i(t)}+\frac{\sum_{i=1}^n dN_i(t)}{\sum_{i=1}^n Y_i(t)}+\frac{dv(t)}{v(t)}=\sum_{i=1}^n\frac{\exp(Z_i\beta)v(t)+1}{\sum_{j=1}^n Y_j(t)}\,M_i(dt,\beta). \qquad (4)

Denote the left-hand side of (4) as:

U(\beta,dt)=\frac{dv(t)}{v(t)}+p_n(t)\,dt-q_n(t,\beta)v(t)\,dt,

where

p_n(t)\,dt=\frac{\sum_{i=1}^n dN_i(t)}{\sum_{j=1}^n Y_j(t)},\qquad q_n(t,\beta)\,dt=-\frac{\sum_{i=1}^n\exp(Z_i\beta)\,dN_i(t)}{\sum_{j=1}^n Y_j(t)}.

From standard counting process arguments (Andersen and Gill 1982; Aalen et al. 2010), we know that the stochastic integral with respect to the counting process martingale M_i(dt,β) is also a martingale. Motivated by the following equation,

E\left\{\frac{1}{n}U(\beta,dt)\right\}=E\left\{\frac{1}{n}\sum_{i=1}^n\frac{\exp(Z_i\beta)v(t)+1}{\sum_{j=1}^n Y_j(t)}\,M_i(dt,\beta)\right\}.

We construct the following estimating equation

\frac{1}{n}U(\beta,dt)=0. \qquad (5)

Only v(t) is unknown in (5); let the estimate of v(t) be v̂_n(t,β). Denote

P_n(t)=\exp\left\{\int_t^\tau\frac{\sum_{i=1}^n dN_i(s)}{\sum_{j=1}^n Y_j(s)}\right\},\qquad Q_n(t,\beta)=\int_t^\tau\frac{\sum_{i=1}^n\exp(Z_i\beta)\,dN_i(s)}{\sum_{j=1}^n Y_j(s)},

then

\hat v_n(t,\beta)=\frac{P_n(t)}{\int_t^\tau P_n(s)\,Q_n(ds,\beta)}. \qquad (6)
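To make the construction concrete, the following minimal sketch (ours, not the authors' code) evaluates the at-risk and counting quantities at the distinct failure times and then computes an empirical version of (6), reading the integrals as sums over the event times in [t, τ]. The reverse-time orientation of the increments of N_i (which makes the exponent defining P_n negative in forward time) and the inclusion of the endpoint t are conventions we have assumed.

```python
import numpy as np

def event_quantities(T, R, Z):
    """Risk-set and event quantities for right-truncated data (T_i <= R_i),
    evaluated at the distinct observed failure times.  In reverse time, subject i
    is at risk at t when T_i <= t <= R_i, and its counting process jumps at T_i."""
    T = np.asarray(T, float)
    R = np.asarray(R, float)
    Z = np.atleast_2d(np.asarray(Z, float))
    if Z.shape[0] != len(T):                      # accept (n, p) or (p, n) input
        Z = Z.T
    times = np.unique(T)                          # distinct failure times t_1 < ... < t_K
    Y = (T[:, None] <= times[None, :]) & (times[None, :] <= R[:, None])   # Y_i(t_k)
    risk = Y.sum(axis=0).astype(float)            # number at risk at t_k
    events = (T[:, None] == times[None, :])       # indicator of a jump of N_i at t_k
    return times, Y, risk, events, Z

def v_hat(beta, T, R, Z):
    """Empirical version of formula (6) for v_hat_n(t_k, beta) at the event times."""
    beta = np.asarray(beta, float)
    times, Y, risk, events, Zm = event_quantities(T, R, Z)
    d = events.sum(axis=0)                                    # failures at t_k
    dLam = d / risk                                           # sum_i dN_i / sum_j Y_j
    eZb = np.exp(Zm @ beta)                                   # exp(Z_i beta)
    dQ = (events * eZb[:, None]).sum(axis=0) / risk           # increment of Q_n at t_k
    # P_n(t_k) = exp{ -sum_{l >= k} dLam_l }, accumulated backwards from tau
    P = np.exp(-np.cumsum(dLam[::-1])[::-1])
    denom = np.cumsum((P * dQ)[::-1])[::-1]                   # sum_{l >= k} P_n(t_l) dQ_l
    return times, P / denom
```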

Multiplying (2) by Z_i{exp(Z_iβ)v(t)+1}/n and summing over the n observations, we obtain

\frac{1}{n}\sum_{i=1}^n Z_i\left[\{\exp(Z_i\beta)v(t)+1\}\,dN_i(t)+Y_i(t)\frac{dv(t)}{v(t)}\right]=\frac{1}{n}\sum_{i=1}^n Z_i\{\exp(Z_i\beta)v(t)+1\}\,M_i(dt,\beta). \qquad (7)

Following the same idea as in (5) and integrating both sides of (7), we can also construct another equation:

\frac{1}{n}\sum_{i=1}^n\int_0^\tau Z_i\left[\{\exp(Z_i\beta)v(t)+1\}\,dN_i(t)+Y_i(t)\frac{dv(t)}{v(t)}\right]=0. \qquad (8)

Substituting (6) into (8), we can obtain the estimate of β by solving the following equation:

\frac{1}{n}\sum_{i=1}^n\int_0^\tau Z_i\left[\{\exp(Z_i\beta)\hat v_n(t,\beta)+1\}\,dN_i(t)+Y_i(t)\frac{\hat v_n(dt,\beta)}{\hat v_n(t,\beta)}\right]=0.

Moreover, since

\frac{\hat v_n(dt,\beta)}{\hat v_n(t,\beta)}=-\frac{\sum_{k=1}^n dN_k(t)}{\sum_{l=1}^n Y_l(t)}-\frac{\sum_{k=1}^n\exp(Z_k\beta)\,dN_k(t)}{\sum_{l=1}^n Y_l(t)}\,\hat v_n(t,\beta),

then

\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta)\hat v_n(t,\beta)+1\}\,dN_i(t)=0,

where

\bar Z(t)=\frac{\sum_{i=1}^n Z_iY_i(t)}{\sum_{j=1}^n Y_j(t)}.

Finally, let

S_n(\beta)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta)\hat v_n(t,\beta)+1\}\,dN_i(t), \qquad (9)

and denote the solution of S_n(β) = 0 by β̂_n. We have the following theorem:

Theorem 1

Under assumptions A1–A4 in the Appendix, √n(β̂_n − β_0) converges weakly to a mean-zero normal distribution with covariance matrix U^{-1}V(U^{-1})', where V is the covariance matrix of √n S_n(β_0) and U = lim_{n→∞} {∂S_n(β)/∂β}|_{β=β_0}. The kth row of U is:

\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\times\left[Z_{ik}\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+\exp(Z_i\beta_0)\left.\frac{\partial\hat v_n(t,\beta)}{\partial\beta_k}\right|_{\beta=\beta_0}\right]dN_i(t).

Remark

For the proportional odds model with the usual logit link:

\log\frac{S(t\mid Z)}{1-S(t\mid Z)}=\alpha(t)+Z\beta. \qquad (10)

Define

\tilde M_i(t,\beta)=N_i(t)-\int_t^\tau Y_i(s)\frac{\exp(Z_i\beta)}{1+\exp(Z_i\beta)v(s)}\,dv(s),

we claim that M̃_i(t,β) is a martingale. Recall that v(t) = exp(α(t)); following (10), we have

S(t\mid Z)=\frac{\exp(\alpha(t)+Z\beta)}{1+\exp(Z\beta)v(t)},

as a result, we can obtain

f(t\mid Z)=\frac{\exp(Z\beta)v'(t)}{\{1+\exp(Z\beta)v(t)\}^{2}},\qquad F(t\mid Z)=\frac{1}{1+\exp(Z\beta)v(t)}.

Following the definition of reverse hazard in Sect.  2, we can write the reverse hazard as

\tilde\lambda_B(t\mid Z)=\frac{f(t\mid Z)}{F(t\mid Z)}=\frac{\exp(Z\beta)v'(t)}{1+\exp(Z\beta)v(t)}.

From the general definition of a martingale in Fleming and Harrington (1991, p. 25), we can easily show that M̃_i(t,β) is a martingale. For model (1), on the other hand,

\lambda_B(t\mid Z)=\frac{v'(t)}{\{1+\exp(Z\beta)v(t)\}v(t)},

and N_i(t)-\int_t^\tau Y_i(s)\lambda_B(s\mid Z)\,ds is the corresponding martingale.

The corresponding estimating equation under model (10) has the following form

S_n^{(1)}(\beta)=\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t,\beta)\}\{\exp(Z_i\beta)\hat v_n(t,\beta)+1\}\,dN_i(t), \qquad (11)

where

\bar Z(t,\beta)=\frac{\sum_{i=1}^n Z_iY_i(t)\exp(Z_i\beta)}{\sum_{j=1}^n Y_j(t)\exp(Z_j\beta)}.

Equation (11) can also be used to estimate β; however, compared with (9), (11) is more complicated and more computationally intensive, while the derivative of (9) with respect to β can be obtained easily. As a result, (9) can be solved easily by the Newton–Raphson algorithm. In the following simulations, we will use estimating equation (9).
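As a concrete illustration of this computation, the sketch below evaluates the estimating function (9) by reading the integral as one contribution per observed failure (the overall sign is irrelevant when solving S_n(β) = 0) and then applies Newton–Raphson with a finite-difference Jacobian. It reuses event_quantities() and v_hat() from the earlier sketch and is a minimal sketch of the idea, not the authors' implementation.

```python
import numpy as np
# reuses event_quantities() and v_hat() from the earlier sketch

def S_n(beta, T, R, Z):
    """Estimating function (9): each failure contributes
    {Z_i - Zbar(T_i)} {exp(Z_i beta) v_hat_n(T_i, beta) + 1}."""
    beta = np.asarray(beta, float)
    times, Y, risk, events, Zm = event_quantities(T, R, Z)
    zbar = (Y.T.astype(float) @ Zm) / risk[:, None]        # \bar Z(t_k)
    _, v = v_hat(beta, T, R, Z)                            # \hat v_n(t_k, beta)
    eZb = np.exp(Zm @ beta)
    k = np.searchsorted(times, np.asarray(T, float))       # event-time index of T_i
    terms = (Zm - zbar[k]) * (eZb * v[k] + 1.0)[:, None]
    return terms.mean(axis=0)

def fit_po_right_truncated(T, R, Z, beta0=None, tol=1e-8, max_iter=50):
    """Solve S_n(beta) = 0 by Newton-Raphson with a numerical Jacobian."""
    Zm = np.atleast_2d(np.asarray(Z, float))
    p = Zm.shape[1] if Zm.shape[0] == len(T) else Zm.shape[0]
    beta = np.zeros(p) if beta0 is None else np.asarray(beta0, float)
    for _ in range(max_iter):
        s = S_n(beta, T, R, Z)
        eps = 1e-5                                          # finite-difference step
        J = np.column_stack([(S_n(beta + eps * e, T, R, Z) - s) / eps
                             for e in np.eye(p)])
        step = np.linalg.solve(J, s)
        beta = beta - step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```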

In addition to the unweighted estimating function (9), weighted estimating functions can also be used to obtain a class of weighted estimators of β_0. This procedure is often used to reduce the sandwich variance estimate and to improve efficiency. The weighted version of the estimating function is

S_{n,W}(\beta)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta)\hat v_n(t,\beta)+1\}\,dN_i(t)=0, \qquad (12)

where W_n(t) is a predictable weight function with respect to the canonical filtration that converges to a non-random function w(t). A commonly used weight function is the Prentice–Wilcoxon type W_{n1}(t) = Ŝ_LB(t), where Ŝ_LB(·) is the Lynden-Bell estimate of the baseline survival function for right-truncated failure time data. Denote the corresponding estimate of β by β̂_{n,w}. Then we have the following theorem:

Theorem 2

Under the same assumptions as in Theorem 1, when n → ∞, for a prespecified weight function W_n(·) → w(·), √n(β̂_{n,w} − β_0) converges weakly to a mean-zero normal distribution with covariance matrix U_w^{-1}V_w(U_w^{-1})', where V_w is the covariance matrix of √n S_{n,w}(β_0) and U_w = lim_{n→∞} {∂S_{n,w}(β)/∂β}|_{β=β_0}. The kth row of U_w is:

\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}\left[Z_{ik}\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+\exp(Z_i\beta_0)\left.\frac{\partial\hat v_n(t,\beta)}{\partial\beta_k}\right|_{\beta=\beta_0}\right]dN_i(t).

Recently, many authors have considered the problem of finding the optimal weight in a weighted estimating equation, including Chen and Cheng (2005), Chen and Wang (2000) and Chen et al. (2012), among others. To achieve this goal, we only need to find the w(t) such that U_w(β_0)^{-1}V_w(β_0)U_w(β_0)^{-1} is minimized. Since both the empirical weight function W_n(t) and its limit w(t) do not rely on the unknown parameter β_0, it is reasonable to set β_0 = 0. Another justification for setting β_0 = 0 is that it represents the baseline distribution. Therefore, setting β_0 = 0, we have:

U_w(\beta_0)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}Z_i\exp(Z_i\beta_0)v(t)\,dN_i(t)
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}^{2}\exp(Z_i\beta_0)v(t)\,dN_i(t)
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\frac{\exp(Z_i\beta_0)v'(t)}{\exp(Z_i\beta_0)v(t)+1}\,dt
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\frac{v'(t)}{v(t)+1}\,dt, \qquad (13)

V_w(\beta_0)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)^{2}\{Z_i-\bar Z(t)\}^{2}\times\{\exp(Z_i\beta_0)v(t)+1\}^{2}\,Y_i(t)\frac{1}{\exp(Z_i\beta_0)v(t)+1}\frac{v'(t)}{v(t)}\,dt
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)^{2}\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\{\exp(Z_i\beta_0)v(t)+1\}\frac{v'(t)}{v(t)}\,dt
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau W_n(t)^{2}\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\{v(t)+1\}\frac{v'(t)}{v(t)}\,dt. \qquad (14)

Applying the Cauchy–Schwarz inequality to U_w(β_0)^{-1}V_w(β_0)U_w(β_0)^{-1} with β_0 = 0, it follows that the optimal weight is proportional to

w(t)=\frac{v(t)}{\{v(t)+1\}^{2}}=S(t)\{1-S(t)\}, \qquad (15)

which minimizes the variance of β̂_n (note that when β_0 = 0, S(t) = 1/{1+v(t)}, so S(t){1−S(t)} = v(t)/{v(t)+1}²). Indeed, when (15) holds, we have

U_w(\beta_0)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\frac{v(t)v'(t)}{\{v(t)+1\}^{3}}\,dt,\qquad V_w(\beta_0)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}^{2}\,Y_i(t)\frac{v(t)v'(t)}{\{v(t)+1\}^{3}}\,dt,

which means that when β_0 = 0 and w(t) = S(t){1−S(t)}, the sandwich matrix U_w(β_0)^{-1}V_w(β_0)U_w(β_0)^{-1} achieves its minimum value U_w(β_0)^{-1} (or, equivalently, V_w(β_0)^{-1}).

In the simulations, we let W_{n2}(t) = Ŝ_LB(t){1 − Ŝ_LB(t)}; the results are shown in Table 1, and it can be seen that the weight W_{n2}(t) achieves the smallest variance among the three estimators.
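Both weight functions can be computed directly from the Lynden-Bell (reverse-time product-limit) estimate of the baseline distribution. The sketch below is one such computation under conventions we have assumed (ties handled at the distinct failure times, weights evaluated at the failure times themselves); it reuses event_quantities() from the earlier sketch.

```python
import numpy as np
# reuses event_quantities() from the earlier sketch

def lynden_bell_weights(T, R):
    """Lynden-Bell estimate of the baseline distribution for right-truncated data
    and the two weights used in the paper:
        W_n1(t_k) = S_LB(t_k)                    (Prentice-Wilcoxon type)
        W_n2(t_k) = S_LB(t_k) * (1 - S_LB(t_k))
    """
    n = len(T)
    times, Y, risk, events, _ = event_quantities(T, R, np.zeros(n))
    d = events.sum(axis=0)
    # F_LB(t_k) = prod_{t_l > t_k} (1 - d_l / Y(t_l)),  S_LB = 1 - F_LB
    cum = np.cumprod((1.0 - d / risk)[::-1])[::-1]          # product over {t_l >= t_k}
    F_LB = np.concatenate([cum[1:], [1.0]])                 # shift to {t_l > t_k}
    S_LB = 1.0 - F_LB
    return times, S_LB, S_LB * (1.0 - S_LB)
```

A weighted analogue of the S_n(β) sketch above is then obtained by multiplying each failure's contribution by the chosen weight evaluated at T_i, as in (12).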

Simulation and real data

We perform simulation studies to evaluate the finite-sample properties of the proposed estimator. In the simulations, we let α(t) = 3 log t and β_0 = (1, 0.5); Z_1 is a continuous covariate following a uniform distribution on (0, 2), and Z_2 is a binary covariate following a Bernoulli distribution with success probability 0.5. The failure time is generated from model (1). The right truncation variable follows a uniform distribution on (0, 4), which makes the truncation rate equal to 20%. For each configuration, 1000 datasets are generated, each containing n observations, with n = 300, 400, 500, 600. W_{n1}(t) and W_{n2}(t) are chosen as the weight functions in the weighted estimating equations. As shown in Table 1, all three estimating equations yield unbiased estimates and the empirical coverage probabilities are close to the nominal level of 95%; when a weight function is incorporated into the estimating equation, the efficiency is greatly improved, and the variance is smallest for W_{n2}(t) among the three estimators.
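For reference, one way to generate a single right-truncated sample under this design (via inversion of the conditional distribution implied by model (1), with v(t) = t³) is sketched below; the function name and the accept–reject loop are ours.

```python
import numpy as np

def simulate_right_truncated(n, beta=(1.0, 0.5), seed=None):
    """One right-truncated sample of size n from model (1) with alpha(t) = 3 log t,
    Z1 ~ Uniform(0, 2), Z2 ~ Bernoulli(0.5), truncation time R ~ Uniform(0, 4);
    pairs with T > R are discarded (right truncation)."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta, float)
    T_obs, R_obs, Z_obs = [], [], []
    while len(T_obs) < n:
        z = np.array([rng.uniform(0.0, 2.0), float(rng.integers(0, 2))])
        u = rng.uniform()
        # F(t|Z) = v(t) e^{Z beta} / {1 + v(t) e^{Z beta}} with v(t) = t^3, so
        # solving F(T|Z) = u gives T = [ u / {(1 - u) e^{Z beta}} ]^{1/3}
        t = (u / ((1.0 - u) * np.exp(z @ beta))) ** (1.0 / 3.0)
        r = rng.uniform(0.0, 4.0)
        if t <= r:                       # observed only if not truncated
            T_obs.append(t); R_obs.append(r); Z_obs.append(z)
    return np.array(T_obs), np.array(R_obs), np.array(Z_obs)

# e.g. T, R, Z = simulate_right_truncated(300, seed=1)
#      beta_hat = fit_po_right_truncated(T, R, Z)   # from the earlier sketch
```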

Table 1.

Simulation results

        β̂_n^(1)                                    β̂_n^(2)
n    Method    Bias×10³    SSE×10³    SEE×10³    Cov(%)    Bias×10³    SSE×10³    SEE×10³    Cov(%)
300 Unweight 31 329 342 96 8 319 338 97
Prentice-Wilcoxon 30 249 277 95 18 268 278 95
Wn2(t) 23 227 258 93 8 254 261 93
Shen et al. (2017) -2 271 NA NA 39 274 NA NA
400 Unweight 20 278 289 96 7 271 288 96
Prentice-Wilcoxon 12 213 240 96 12 227 241 95
Wn2(t) 8 194 222 94 5 211 225 94
Shen et al. (2017) -9 210 NA NA 7 251 NA NA
500 Unweight 14 247 254 96 5 247 255 96
Prentice-Wilcoxon 8 188 214 96 7 205 215 95
Wn2(t) -1 172 198 94 -3 188 200 94
Shen et al. (2017) -21 187 NA NA 9 235 NA NA
600 Unweight 11 222 229 96 7 227 232 96
Prentice-Wilcoxon 5 173 195 96 5 186 196 95
Wn2(t) -4 155 180 95 -4 172 182 95
Shen et al. (2017) -20 164 NA NA -5 180 NA NA

SSE, the sampling standard deviation; SEE, the sampling standard error; Cov, the empirical coverage of approximate 95% confidence intervals

As pointed out by one of the referees and the associate editor, Shen et al. (2017) also studied right-truncated data under linear transformation models, and it is known that when the error term in the linear transformation model follows a logistic distribution (Fine et al. 1998), the model becomes the proportional odds model. Let

N_i(t)=I(\tau-T_i\le t)=I(T_i\ge\tau-t),\qquad Y_i(t)=I(\tau-R_i\le t\le\tau-T_i)=I(T_i\le\tau-t\le R_i),

then the estimating equations (3) and (4) in Shen et al. (2017) can be written as

U(\beta,\alpha(\tau-t))=\sum_{i=1}^n\int_{-\infty}^{\tau}Z_i\left[dN_i(t)-Y_i(t)\,d\log\frac{\exp(Z_i\beta+\alpha(\tau-t))}{1+\exp(Z_i\beta+\alpha(\tau-t))}\right]=0,\qquad \sum_{i=1}^n\left[dN_i(t)-Y_i(t)\,d\log\frac{\exp(Z_i\beta+\alpha(\tau-t))}{1+\exp(Z_i\beta+\alpha(\tau-t))}\right]=0.

We recognize that Shen et al. (2017)'s methodology is general and works for all linear transformation models, including the proportional odds model. However, our approach is more convenient than Shen et al. (2017)'s under the proportional odds model: it has a simpler form, and the estimation of the intercept α(t) can be done beforehand and plugged into the final estimating equation, while Shen et al. (2017) cannot achieve this and their estimation procedure involves a complicated iteration, which increases the risk of non-convergence. Besides, Shen et al. (2017) only deal with the reverse time but not the reverse hazard function, whereas we utilize the relationship between the reverse hazard function and the forward-time hazard function and thereby obtain a more intuitive estimator.

We conducted simulations for Shen et al. (2017)'s method and report the results in Table 1. The code was obtained from the authors via personal communication. However, one of the authors, Prof. Pao-Sheng Shen, mentioned that they were unable to calculate the asymptotic variance and coverage probabilities, that the corresponding results in their paper contain some errors, and that their current code only produces the bias and standard error. As a result, we only report the bias and standard error for Shen et al. (2017)'s method. All the simulations were conducted under the same model as ours. We also note that the computation is very slow for Shen et al. (2017)'s method: even though the asymptotic variance and coverage probability were not calculated, their method is still more than 3 times slower than ours under the same model setting and sample size. The SSE of Shen et al. (2017)'s method is smaller than that of our unweighted estimator, but larger than those of the two weighted estimators. For the second approach in their paper, i.e. the conditional maximum-likelihood approach, the bias is large, so we did not perform further comparisons here. We would like to mention that the large bias of the conditional maximum-likelihood approach is also confirmed in Vakulenko-Lagun et al. (2020).

As suggested by one of the reviewers, we also perform simulations without accounting for the truncation, and the results are shown in Table 2. We choose the truncation distribution as uniform on (0, 4), (0, 2) and (0, 1), respectively, which corresponds to a 20% truncation rate (mild truncation), a 40% truncation rate (moderate truncation) and a 70% truncation rate (heavy truncation). As we can see from Table 2, all the estimators are biased, and a larger truncation rate leads to a larger bias and variance, though for the same truncation rate the variances decrease as the sample size increases. These results also coincide with Table 2.1 (p. 20) of Rennert (2018) and Table 1 of Rennert and Xie (2018), although those two works deal with doubly truncated data under the Cox model.

Table 2.

Simulation results when ignoring truncation

        β̂_n^(1)                                    β̂_n^(2)
n    Truncation    Bias×10³    SSE×10³    SEE×10³    Cov(%)    Bias×10³    SSE×10³    SEE×10³    Cov(%)
300 Mild -39 284 309 95 -35 267 310 99
300 Moderate -134 307 305 86 -74 297 308 96
300 Heavy -306 555 629 91 -106 555 601 98
400 Mild -67 229 260 95 -46 225 263 98
400 Moderate -128 254 261 89 -71 265 264 96
400 Heavy -379 245 251 70 -157 236 261 93
500 Mild -66 200 229 96 -26 222 235 96
500 Moderate -125 215 230 91 -72 227 234 96
500 Heavy -398 215 222 55 -170 224 231 89
600 Mild -53 173 209 96 -20 206 214 98
600 Moderate -129 184 208 93 -72 213 212 93
600 Heavy -406 223 202 45 -165 202 210 87

SSE, the sampling standard deviation; SEE, the sampling standard error; Cov, the empirical coverage of approximate 95% confidence intervals

To better illustrate how to employ the proposed method in a real situation, we analyze the Centers for Disease Control's blood-transfusion data, which were also used by Kalbfleisch and Lawless (1989) and Wang (1989). The data include 494 cases reported to the Centers for Disease Control prior to January 1, 1987, and diagnosed before July 1, 1986. Only 295 of the 494 cases have consistent data and were infected by a single blood transfusion or a short series of transfusions; the analysis is restricted to this subset. We obtained the raw observation data via personal communication from Thomas Peterman, Centers for Disease Control and Prevention. The data contain three variables: T is the time from blood transfusion to the diagnosis of AIDS (in months), R is the time from blood transfusion to the end of the study (July 1986, in months), and Age is the age of the person at transfusion (in years). Comparing the data with Kalbfleisch and Lawless (1989)'s as well as Wang (1989)'s, the observation (X=16, T=33, Age=34) cannot be found in the raw data and is therefore deleted, giving a final sample size of 294; a small fraction of the entries are also corrected because they do not match the raw data.

We apply the proposed method to these data and treat Age as the covariate in the regression. In Wang (1989)'s paper, the data are categorized into three age groups because of their different patterns of survivorship: 'children' aged 1–4, 'adults' aged 5–59, and 'elderly patients' aged 60 and older. The survival behaviour of the 'adults' and 'elderly patients' groups is similar except in the right tail, while there is an evident distinction from the 'children' group. In the current analysis, we therefore delete the data from 'children' and focus on a combined sample of 'adults' and 'elderly patients' with sample size 260. Finally, the range of T is from 0 to 89, and the range of R is from 0 to 99. For all i ∈ {1, …, 260}, we have T_i ≤ R_i; as a result, our dataset does not have the identifiability issue mentioned in Seaman et al. (2022). We also applied Shen et al. (2017)'s method and the result is similar. All the results are shown in Table 3, where the weights are chosen as W_{n1}(t) and W_{n2}(t). The estimated parameter does not differ much between the unweighted and weighted estimating equations, but the variance is reduced when weights are used. In both situations mentioned above, Age has only a very weak effect on the odds, and the effect is not significant.
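For illustration only, an analysis along these lines could be carried out as sketched below, reusing fit_po_right_truncated() from the earlier sketch; the file name, column layout and the age cutoff used to drop the 'children' group are hypothetical stand-ins for the actual data handling.

```python
import numpy as np
# reuses fit_po_right_truncated() from the earlier sketch

# hypothetical CSV with columns T (months to AIDS diagnosis), R (months to end of
# study) and Age (years at transfusion), one row per patient
data = np.genfromtxt("cdc_transfusion.csv", delimiter=",", names=True)
adults = data[data["Age"] >= 5]                   # drop the 'children' group (ages 1-4)
T, R, Age = adults["T"], adults["R"], adults["Age"]

beta_hat = fit_po_right_truncated(T, R, Age.reshape(-1, 1))
print("unweighted estimate of the Age effect:", beta_hat)
```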

Table 3.

Age effect for blood transfusion data

Method    Age    SSE
unweighted -0.0128 0.0153
Prentice-Wilcoxon -0.0120 0.0143
Wn2(t) -0.0122 0.0122
Shen et al. (2017) -0.0125 0.0150

Discussion

Directly analyzing right-truncated data in the usual time order can fail because the 'at risk' process is not adapted to the history of the process (Gross 1992). The retro hazard solves this problem by transforming right-truncated data into left-truncated data in reverse time (Woodroofe 1985). Statistical modelling becomes even more flexible by exploiting the natural structure of the proportional odds model. The usual form of the proportional odds model can also be utilized, but the theoretical and computational burden of the resulting estimator is larger; employing (1) substantially improves the situation.

Acknowledgements

We thank Thomas Peterman, Centers for Disease Control and Prevention, for providing the CDC blood transfusion data. We also thank Pao-Sheng Shen, Tunghai University, for providing the code for Shen et al. (2017), and Bella Vakulenko-Lagun, University of Haifa, for discussing the simulation results of Vakulenko-Lagun et al. (2020).

Appendix 1

Assumptions:

A1: β_0 ∈ R^p is an interior point of a compact set B.

A2: Z is a bounded process.

A3: V(β0) is non-negative.

A4: f(t) is continuous.

Assumption A1 is also used by Chen et al. (2012). A2 is a standard assumption that ensures the martingale properties hold (Fleming and Harrington 1991). A3 is also a standard assumption made to avoid extra theoretical discussion; it is also used in Huang and Qin (2013). A4 is used in proving the martingale representation of v̂_n(t,β_0) − v(t). Besides these, we also need a condition to ensure that the truncated distribution can be correctly identified. Let F(·) and G(·) be the distribution functions of T and R, and define (a_F, b_F) and (a_G, b_G) to be the supports of F(·) and G(·) in the sense that a_W = inf{x: W(x) > 0} and b_W = sup{x: W(x) < 1}, where W is a distribution function. Under right truncation, only the conditional distributions P(T ≤ x | T ≤ b_G) and P(R ≥ x | R ≥ a_F) can actually be estimated; thus we assume a_F = a_G = 0 and b_G = ∞, so that the conditional distributions are the actual distributions of T and R. We also assume P(T ≤ R) = α > 0 to ensure that there exist observations satisfying our condition. Similar assumptions and discussions also appear in Woodroofe (1985), Wang (1989) and Sundaram (2009), among others.

Proof of Theorem 1

To prove Theorem 1, the first step is to derive the martingale representation of S_n(β_0). To do this, we need the martingale representation of v̂_n(t,β_0) − v_0(t). Notice that

\sum_{i=1}^n\{\exp(Z_i\beta_0)v_0(t)+1\}\,dN_i(t)+\sum_{i=1}^n Y_i(t)\frac{dv_0(t)}{v_0(t)}=\sum_{i=1}^n\{\exp(Z_i\beta_0)v_0(t)+1\}\,M_i(dt,\beta_0), \qquad (16)

\sum_{i=1}^n\{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1\}\,dN_i(t)+\sum_{i=1}^n Y_i(t)\frac{\hat v_n(dt,\beta_0)}{\hat v_n(t,\beta_0)}=0. \qquad (17)

Denote w_0(t) = 1/v_0(t) and ŵ_n(t,β) = 1/v̂_n(t,β); then (16) and (17) become:

\sum_{i=1}^n\{\exp(Z_i\beta_0)+w_0(t)\}\,dN_i(t)-\sum_{i=1}^n Y_i(t)\,dw_0(t)=\sum_{i=1}^n\{\exp(Z_i\beta_0)+w_0(t)\}\,M_i(dt,\beta_0), \qquad (18)

\sum_{i=1}^n\{\exp(Z_i\beta_0)+\hat w_n(t,\beta_0)\}\,dN_i(t)-\sum_{i=1}^n Y_i(t)\,\hat w_n(dt,\beta_0)=0. \qquad (19)

Subtracting (18) from (19) and dividing both sides by −Σ_{i=1}^n Y_i(t), we obtain:

d\{\hat w_n(t,\beta_0)-w_0(t)\}-p_n(t)\,dt\,\{\hat w_n(t,\beta_0)-w_0(t)\}=\frac{\sum_{i=1}^n\{\exp(Z_i\beta_0)+w_0(t)\}\,M_i(dt,\beta_0)}{\sum_{i=1}^n Y_i(t)}.

Then

\hat w_n(t,\beta_0)-w_0(t)=\frac{1}{P_n(t)}\sum_{i=1}^n\int_t^\tau P_n(s)\frac{\exp(Z_i\beta_0)+w_0(s)}{\sum_{j=1}^n Y_j(s)}\,M_i(ds,\beta_0).

In the interval (0, τ), since 0 < v_0(t) < ∞, by the delta method,

\hat v_n(t,\beta_0)-v_0(t)=-\frac{1}{w_0^{2}(t)}\{\hat w_n(t,\beta_0)-w_0(t)\}=-\frac{v_0^{2}(t)}{P_n(t)}\sum_{i=1}^n\int_t^\tau P_n(s)\frac{v_0(s)\exp(Z_i\beta_0)+1}{\sum_{j=1}^n Y_j(s)\,v_0(s)}\,M_i(ds,\beta_0). \qquad (20)

At the point 0, (20) holds without any condition because v̂_n(0,β) = v_0(0) = 0. At the point τ, if we adopt the convention 0 × ∞ = 0, then (20) also holds.

Using (20), we can write S_n(β_0) as:

S_n(\beta_0)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1\}\,dN_i(t)
=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1\}\,M_i(dt,\beta_0)
\quad-\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}Y_i(t)\frac{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1}{\exp(Z_i\beta_0)v_0^{2}(t)+v_0(t)}\,dv_0(t)
=\mathrm{I}+\mathrm{II}.

In the following, we show that the second part can also be represented as a sum of integrals with respect to the martingales.

\mathrm{II}=-\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}Y_i(t)\left[\frac{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1-\exp(Z_i\beta_0)v_0(t)-1}{\exp(Z_i\beta_0)v_0^{2}(t)+v_0(t)}+\frac{1}{v_0(t)}\right]dv_0(t)
=-\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}Y_i(t)\frac{\exp(Z_i\beta_0)}{\exp(Z_i\beta_0)v_0^{2}(t)+v_0(t)}\{\hat v_n(t,\beta_0)-v_0(t)\}\,dv_0(t). \qquad (21)

Substituting (20) into (21) and changing the order of integration, we obtain

\mathrm{II}=\frac{1}{n}\sum_{j=1}^n\int_0^\tau\frac{P_n(t)\{\exp(Z_j\beta_0)v_0(t)+1\}}{\sum_{k=1}^n Y_k(t)\,v_0(t)}\left[\sum_{i=1}^n\int_0^t\{Z_i-\bar Z(s)\}Y_i(s)\frac{\exp(Z_i\beta_0)v_0(s)}{\exp(Z_i\beta_0)v_0(s)+1}\frac{1}{P_n(s)}\,dv_0(s)\right]M_j(dt,\beta_0).

Denote

\xi_i(t,\beta_0)=\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1\}+\frac{P_n(t)\{\exp(Z_i\beta_0)v_0(t)+1\}}{\sum_{k=1}^n Y_k(t)\,v_0(t)}\times\sum_{j=1}^n\int_0^t\{Z_j-\bar Z(s)\}Y_j(s)\frac{\exp(Z_j\beta_0)v_0(s)}{\exp(Z_j\beta_0)v_0(s)+1}\frac{1}{P_n(s)}\,dv_0(s).

Then the martingale representation of Sn(β0) is

S_n(\beta_0)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\xi_i(t,\beta_0)\,M_i(dt,\beta_0). \qquad (22)

From (22), it is straightforward to show that S_n(β_0) converges to 0 in probability by the weak law of large numbers.

Let

\mu(t)=\lim_{n\to\infty}\frac{\sum_{i=1}^n Y_i(t)Z_i}{\sum_{j=1}^n Y_j(t)},\qquad v(t,\beta)=\lim_{n\to\infty}\hat v_n(t,\beta).

Denote

s_n(\beta)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\mu(t)\}\{\exp(Z_i\beta)\hat v_n(t,\beta)-\exp(Z_i\beta_0)\hat v_n(t,\beta_0)\}\,dN_i(t).

The derivatives of S_n(β) and s_n(β) are

S_n'(\beta)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\bar Z(t)\}\left[Z_i\exp(Z_i\beta)\hat v_n(t,\beta)+\exp(Z_i\beta)\{\partial\hat v_n(t,\beta)/\partial\beta\}\right]dN_i(t),\qquad s_n'(\beta)=\frac{1}{n}\sum_{i=1}^n\int_0^\tau\{Z_i-\mu(t)\}\left[Z_i\exp(Z_i\beta)\hat v_n(t,\beta)+\exp(Z_i\beta)\{\partial\hat v_n(t,\beta)/\partial\beta\}\right]dN_i(t).

Notice that s_n(β_0) = 0. Assume that there exists ε > 0 such that (A5): P{‖Z_i − μ(t)‖ > ε, i = 1, 2, …, n} > 0, which means the covariate cannot be identical for all individuals. Together with the assumption (A6):

E\big[\exp(Z\beta_0)Z\hat v_n(t,\beta_0)\big]+E\big[\exp(Z\beta_0)\{\partial\hat v_n(t,\beta)/\partial\beta\}\big|_{\beta=\beta_0}\big]>0,

we have lim_{n→∞} s_n'(β_0) ≠ 0. Without loss of generality, assume lim_{n→∞} s_n'(β_0) > 0; then there exists a neighborhood of β_0 in which s_n(β) is strictly increasing. Further notice that S_n(β) = s_n(β) + o_p(1) and S_n'(β) = s_n'(β) + o_p(1); then S_n(β) is strictly increasing in a neighborhood of β_0, which proves the consistency of β̂_n.

By the martingale central limit theorem, the variance of √n S_n(β_0) is

V(\beta_0)=\lim_{n\to\infty}V_n=\lim_{n\to\infty}\big\langle n^{1/2}S_n(\beta_0),\,n^{1/2}S_n(\beta_0)\big\rangle(\tau)
=\lim_{n\to\infty}\frac{1}{n}\int_0^\tau\sum_{i=1}^n\xi_i(t,\beta_0)^{2}\,d\left\{-\int_t^\tau Y_i(s)\{\exp(Z_i\beta_0)v^{2}(s)+v(s)\}^{-1}dv(s)\right\}
=\lim_{n\to\infty}\frac{1}{n}\int_0^\tau\sum_{i=1}^n\xi_i(t,\beta_0)^{2}\,Y_i(t)\{\exp(Z_i\beta_0)v^{2}(t)+v(t)\}^{-1}dv(t).

Further using the delta method will complete the proof of Theorem 1.

Proof of Theorem 2

Since Theorems 1 and 2 are quite similar, we omit the details of the proof here and only give the detailed expression of V_w:

V_w(\beta_0)=\lim_{n\to\infty}\frac{1}{n}\int_0^\tau\sum_{i=1}^n\xi_{i,w}(t,\beta_0)^{2}\,Y_i(t)\{\exp(Z_i\beta_0)v^{2}(t)+v(t)\}^{-1}dv(t),

where

\xi_{i,w}(t,\beta_0)=W_n(t)\{Z_i-\bar Z(t)\}\{\exp(Z_i\beta_0)\hat v_n(t,\beta_0)+1\}+\frac{P_n(t)\{\exp(Z_i\beta_0)v_0(t)+1\}}{\sum_{k=1}^n Y_k(t)\,v_0(t)}\times\sum_{j=1}^n\int_0^t W_n(s)\{Z_j-\bar Z(s)\}Y_j(s)\frac{\exp(Z_j\beta_0)v_0(s)}{\exp(Z_j\beta_0)v_0(s)+1}\frac{1}{P_n(s)}\,dv_0(s).

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Aalen OO, Andersen PK, Borgan Ø, Gill RD, Keiding N (2010) History of applications of martingales in survival analysis. arXiv preprint arXiv:1003.0188
2. Andersen PK, Gill RD. Cox's regression model for counting processes: a large sample study. Ann Statist. 1982;10:1100–20. doi: 10.1214/aos/1176345976
3. Chen YQ, Wang M-C. Analysis of accelerated hazards models. J Am Statist Assoc. 2000;95:608–18. doi: 10.1080/01621459.2000.10474236
4. Chen YQ, Wang M-C, Huang Y. Semiparametric regression analysis on longitudinal pattern of recurrent gap times. Biostatistics. 2004;5:277–90. doi: 10.1093/biostatistics/5.2.277
5. Chen YQ, Cheng S. Semiparametric regression analysis of mean residual life with censored survival data. Biometrika. 2005;92:19–29. doi: 10.1093/biomet/92.1.19
6. Chen YQ, Hu N, Musoke P, Zhao LP. Estimating regression parameters in an extended proportional odds model. J Am Statist Assoc. 2012;107:318–30. doi: 10.1080/01621459.2012.656021
7. Fine JP, Ying Z, Wei LJ. On the linear transformation model for censored data. Biometrika. 1998;85:980–6. doi: 10.1093/biomet/85.4.980
8. Fleming TR, Harrington DP. Counting processes and survival analysis. New York: John Wiley; 1991
9. Gross ST. Regression models for truncated survival data. Scand J Stat. 1992;19:193–213
10. Gürler Ü. Bivariate estimation with right-truncated data. J Am Statist Assoc. 1996;91:1152–65
11. Huang C-Y, Qin J. Semiparametric estimation for the additive hazards model with left-truncated and right-censored data. Biometrika. 2013;100:877–88. doi: 10.1093/biomet/ast039
12. Jiang Y. Estimation of hazard function for right truncated data. Thesis, Georgia State University; 2011
13. Kalbfleisch JD, Lawless JF. Inferences based on retrospective ascertainment: an analysis of the data on transfusion-related AIDS. J Am Statist Assoc. 1989;84:360–72. doi: 10.1080/01621459.1989.10478780
14. Keiding N, Gill R. Random truncation models and Markov processes. Ann Statist. 1990;18:582–602. doi: 10.1214/aos/1176347617
15. Lagakos SW, Barraj LM, De Gruttola V. Nonparametric analysis of truncated survival data, with application to AIDS. Biometrika. 1988;75:515–23. doi: 10.1093/biomet/75.3.515
16. Rennert L (2018) Statistical methods for truncated survival data. Doctoral dissertation, University of Pennsylvania
17. Rennert L, Xie SX. Cox regression model with doubly truncated data. Biometrics. 2018;74:725–33. doi: 10.1111/biom.12809
18. Seaman SR, Presanis A, Jackson C. Estimating a time-to-event distribution from right-truncated data in an epidemic: a review of methods. Stat Methods Med Res. 2022;31:1641–55. doi: 10.1177/09622802211023955
19. Shen PS, Liu Y, Maa DP, Ju Y. Analysis of transformation models with right-truncated data. Statistics. 2017;51:404–18. doi: 10.1080/02331888.2016.1268617
20. Stralkowska-Kominiak E, Stute W. Martingale representations of the Lynden-Bell estimator with applications. Stat Probabil Lett. 2009;79:814–20. doi: 10.1016/j.spl.2008.10.038
21. Sundaram R. Semiparametric inference of proportional odds model based on randomly truncated data. J Stat Plan Inference. 2009;139:1381–93. doi: 10.1016/j.jspi.2008.08.006
22. Vakulenko-Lagun B, Mandel M, Betensky RA. Inverse probability weighting methods for Cox regression with right-truncated data. Biometrics. 2020;76:484–95. doi: 10.1111/biom.13162
23. Wang M-C, Jewell NP, Tsai W-Y. Asymptotic properties of the product limit estimate under random truncation. Ann Statist. 1986;14:1597–605. doi: 10.1214/aos/1176350180
24. Wang M-C. A semiparametric model for randomly truncated data. J Am Statist Assoc. 1989;84:742–8. doi: 10.1080/01621459.1989.10478828
25. Woodroofe M. Estimating a distribution function with truncated data. Ann Statist. 1985;13:163–77. doi: 10.1214/aos/1176346584
