Statistical Methods in Medical Research. 2023 Nov 8; 32(12): 2405–2422. doi: 10.1177/09622802231210917

A support vector machine-based cure rate model for interval censored data

Suvra Pal,1 Yingwei Peng,2 Wisdom Aselisewine,1 Sandip Barui3
PMCID: PMC10710011  PMID: 37937365

Abstract

The mixture cure rate model is the most commonly used cure rate model in the literature. In the context of the mixture cure rate model, the standard approach to model the effect of covariates on the cured or uncured probability is to use a logistic function. This readily implies that the boundary classifying the cured and uncured subjects is linear. In this article, we propose a new mixture cure rate model based on interval censored data that uses the support vector machine to model the effect of covariates on the uncured or cured probability (i.e. on the incidence part of the model). Our proposed model inherits the features of the support vector machine and provides flexibility to capture classification boundaries that are nonlinear and more complex. The latency part is modeled by a proportional hazards structure with an unspecified baseline hazard function. We develop an estimation procedure based on the expectation maximization algorithm to estimate the cured/uncured probability and the latency model parameters. Our simulation study results show that the proposed model performs better in capturing complex classification boundaries than both logistic regression-based and spline regression-based mixture cure rate models. We also show that our model's ability to capture complex classification boundaries improves the estimation results for the latency part of the model. For illustrative purposes, we present an analysis of NASA's Hypobaric Decompression Sickness Database using the proposed methodology.

Keywords: Support vector machine, multiple imputation, sequential minimal optimization, mixture cure rate model, expectation–maximization algorithm

1. Introduction

Ordinary survival analysis techniques, such as the proportional hazards (PH) model, the proportional odds (PO) model, or the accelerated failure time (AFT) model, are concerned with modeling censored time-to-event data under the assumption that every subject in the study will encounter the primary event of interest (death, relapse, or recurrence of a disease, etc.). However, it is not appropriate to apply these techniques to situations where a portion of the study cohort does not experience the event, for example, clinical studies involving a low fatality rate with death as the event. It can be argued that if these subjects are followed up sufficiently beyond the study period, they may face the event due to some other risk factors; therefore, these subjects can be considered cured with respect to the event of interest. The survival model that incorporates the effects of such cured subjects is called the cure rate model. Remarkable progress in the medical sciences also necessitates further exploration of the cure rate model, where estimating the cure fraction precisely can be of great importance.

Introduced by Boag 1 and further studied by Berkson and Gage, 2 the mixture cure rate model is perhaps the most popular cure rate model. 3 If $T^*$ denotes the lifetime of a susceptible (not cured) subject, then the actual lifetime $T$ of any subject can be modeled by

$$T = JT^* + (1 - J)\infty \qquad (1)$$

where $J$ is a cure indicator with $J = 0$ if an individual is cured and $J = 1$ otherwise. Furthermore, considering $S_p(t) = P(T > t)$ and $S_u(t) = P(T^* > t)$ as the respective survival functions corresponding to $T$ and $T^*$, we can express

$$S_p(t) = (1 - \pi) + \pi S_u(t) \qquad (2)$$

where $\pi = P(J = 1)$. The latency part $S_u(t) = S_u(t|x)$ and the incidence part $\pi = \pi(z)$ are generally modeled to incorporate the effects of covariates $x = (x_1, \ldots, x_p)^T$ and $z = (z_1, \ldots, z_q)^T$ for any integers $p$ and $q$. Note here that $x$ and $z$ may share the same covariates.

The properties of the mixture cure rate model under various assumptions and extensions have been explored in detail by several authors. Modeling the lifetimes of the susceptible individuals has been studied extensively. For example, a fully parametric mixture cure rate model has been studied by assuming homogeneous Weibull lifetimes and a logit link for the cure rate.4,5 Semiparametric cure models with a PH structure for the latency have also been studied extensively.6–8 Generalizations to semiparametric PO,9,10 AFT,11–13 transformation class, 14 and additive hazards 15 mixture cure rate models were also investigated with various estimation techniques and model considerations.

On the other hand, the incidence part $\pi(z)$ is traditionally and extensively modeled by the sigmoid or logistic function

$$\pi(z) = \frac{\exp(z^{*T}\beta)}{1 + \exp(z^{*T}\beta)} \qquad (3)$$

where $\beta = (\beta_0, \beta_1, \ldots, \beta_q)^T$ and $z^* = (1, z^T)^T$.4,6,7,16–20 As in logistic regression, the logistic model works well when subjects are linearly separable into the cured or susceptible groups with respect to covariates. However, a problem arises when subjects cannot be separated by a linear boundary. Other options to model the incidence include a probit link function ($\Phi^{-1}(\pi(z)) = z^{*T}\beta$) or a complementary log-log link function ($\log[-\log(1 - \pi(z))] = z^{*T}\beta$), where $\Phi$ is the cumulative distribution function of the standard normal distribution.21–23 However, similar to the logit link (3), these link functions do not offer nonlinear separability and are not sufficient to capture more complex effects of $z$ on the incidence. Non-parametric strategies, for example, the generalized Kaplan–Meier estimator at the maximum uncensored failure time 24 for the incidence part $\pi(z)$ and the modified Beran-type estimator 25 for the latency part of a mixture cure model, have also been considered in the literature. However, applying these strategies with multiple covariates can be challenging. Non-parametric spline-based mixture cure models are capable of capturing complex patterns in the data, but they do not perform well when there are a large number of covariates with complicated interaction terms, which is a serious drawback.26,27 Therefore, there is a need to identify a class of classifiers that can model the incidence part more effectively by allowing nonlinear separating boundaries between the cured and non-cured subjects.

To this end, the support vector machine (SVM) is a reasonable choice. 28 Introduced by Cortes and Vapnik, 29 the SVM is a machine learning algorithm that finds a hyperplane in a multidimensional feature space that maximizes the separation (margin) between two classes. The main advantage of the SVM is that it can separate data that are not linearly separable by transforming them to a higher dimensional space using the kernel trick. Consequently, this classifier is more robust and flexible than the logit or probit link functions. Given the availability of different machine learning algorithms, 30 we propose to use SVM-based techniques in this article mainly because the SVM is based on the kernel trick, and hence it is possible to design or fuse kernels to improve performance. Furthermore, the SVM uses a subset of the training points in the decision function, which makes it memory efficient. Additionally, the execution time of the SVM is expected to be lower than that of other classifiers such as artificial neural networks (NNs). Recently, Li et al. 31 studied the effect of the covariates on the incidence $\pi(z)$ by implementing the SVM. Their mixture cure rate model was seen to outperform the existing logistic regression-based mixture cure rate model, especially in the estimation of the incidence, and to perform well for nonlinearly separable classes. However, the authors only considered data under a non-informative right censoring mechanism.

Unlike right-censored data,32,33 interval-censored data occur in studies where subjects are inspected at regular intervals rather than continuously.34–36 If a subject experiences the event of interest, the exact survival time is not observed; it is only known that the event occurred between two consecutive inspections. Interval-censored data marked by the prospect of cure are often observed in follow-up clinical studies (e.g. cancer biochemical recurrence or AIDS drug resistance) dealing with events having low fatality and with patients monitored at regular intervals.37,38 As with right-censored data, some subjects may never encounter the event of interest and can be considered cured. Mixture cure models for interval censored data have been studied and several estimation methods have been proposed for both semiparametric and non-parametric set-ups.39–43 Motivated by the work of Li et al., 31 we propose to employ SVM-based modeling to study the effects of covariates on the incidence part of the mixture cure rate model for survival data subject to interval censoring. In addition, we compare our model not only with the logistic regression-based mixture cure model but also with the spline regression-based mixture cure model, which is also capable of capturing complex effects of $z$ on $\pi(z)$. Note that we use the spline method only in the incidence part of the mixture cure model. To apply the spline model to a classification problem where the response variable is qualitative (as in this article), we approximate the log-odds with a smoothing function. We consider the thin plate spline as the smoothing function, which can accommodate multiple predictor variables and allows the degrees of freedom and the basis functions to be selected automatically from the mathematical statement of the smoothing problem.44,45 In particular, to capture the nonlinear effect of $z$ on $\pi(z)$, we have

$$\pi(z) = \frac{\exp(g(z))}{1 + \exp(g(z))} \qquad (4)$$

where g(z) is a smooth function which is estimated using a thin plate spline by

$$\hat{g}(z) = \sum_{i=1}^{n} \tau_i\, \eta_{mq}(\|z - z_i\|) + \sum_{j=1}^{M} \alpha_j \phi_j(z). \qquad (5)$$

In (5), $n$ is the total number of observations, $m$ is such that $2m > q$, $\tau$ and $\alpha$ are vectors of coefficients to be estimated, and $\tau$ is subject to the linear constraints $T^T\tau = 0$ with $T_{ij} = \phi_j(z_i)$. The $M = \binom{m+q-1}{q}$ functions $\phi_j$ are linearly independent polynomials spanning the space of polynomials in $\mathbb{R}^q$ of degree less than $m$. 44 Furthermore,

$$\eta_{mq}(r) = \begin{cases} \dfrac{(-1)^{m+1+q/2}}{2^{2m-1}\pi^{q/2}(m-1)!\,(m-q/2)!}\, r^{2m-q}\log(r), & \text{if } q \text{ is even} \\[1.5ex] \dfrac{\Gamma(q/2-m)}{2^{2m}\pi^{q/2}(m-1)!}\, r^{2m-q}, & \text{if } q \text{ is odd.} \end{cases}$$

In R, thin plate splines can be fitted using the gam() function in the mgcv package.
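For concreteness, a minimal R sketch of such a fit is given below. The data frame dat and the 0/1 labels J are illustrative placeholders, with J playing the role of the (imputed) cure indicator; bs = "tp" requests the thin plate basis in (5).

```r
# Minimal sketch: logistic incidence model with a thin plate spline via mgcv.
library(mgcv)
fit_tp <- gam(J ~ s(z1, z2, bs = "tp"), family = binomial, data = dat)
pi_hat <- predict(fit_tp, type = "response")  # estimated uncured probabilities
```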

The rest of the article is organized as follows. In Section 2, we discuss the mixture cure rate model framework for interval-censored data and develop an estimation procedure based on the expectation–maximization (EM) algorithm that employs the SVM to model the incidence part. In Section 3, a detailed simulation study is carried out to demonstrate the performance of our proposed model in terms of flexibility, accuracy, and robustness; there, we also compare our model with the logistic regression-based and spline regression-based mixture cure rate models in the presence of interval censored data. The model performance is further examined and illustrated in Section 4 through NASA's Hypobaric Decompression Sickness Database (HDSD). Finally, we end with some concluding remarks and possible future research directions in Section 5.

2. SVM-based mixture cure rate model with interval censoring

2.1. Censoring scheme and modeling lifetimes

The data we observe under interval censoring are of the form $(L_i, R_i, \delta_i, x_i, z_i)$ for $i = 1, \ldots, n$, where $n$ denotes the sample size. For the $i$-th subject, $L_i$ denotes the last inspection time before the event and $R_i$ denotes the first inspection time after the event. Note that $L_i < R_i$. The censoring indicator is denoted by $\delta_i = I(R_i < \infty)$, which takes the value 0 if $R_i = \infty$, meaning that the event is not observed for the subject before the last inspection time, and takes the value 1 if $R_i < \infty$, meaning that the event took place but its exact time is only known to belong to the interval $[L_i, R_i]$. To capture the effect of covariates on the latency part, we consider a proportional hazards structure for the lifetime distribution of the susceptible (non-cured) subjects. That is, for the susceptible subjects, we model the hazard function by

$$h_u(t_i|x_i) = h_0(t_i)\exp\{x_i^T\gamma\} \qquad (6)$$

where $\gamma = (\gamma_1, \ldots, \gamma_p)^T$ is the $p$-dimensional regression parameter vector measuring the effects of $x$ and $h_0(\cdot)$ is an unspecified baseline hazard function. Using (6), we can express (2) as

$$S_p(t_i|x_i,z_i) = 1 - \pi(z_i) + \pi(z_i)\{S_0(t_i)\}^{\exp(x_i^T\gamma)} \qquad (7)$$

where $S_0(\cdot)$ is the (unspecified) baseline survival function corresponding to $h_0(\cdot)$. In this article, we propose to estimate $S_0(\cdot)$ using the non-parametric Turnbull estimator, 46 thereby avoiding any parametric distributional assumption. 47 This estimator does not have a closed form and is computed by an iterative procedure, whose steps are described below (an R sketch follows the list):

  • a. Using all the $L_i$'s and $R_i$'s, $i = 1, 2, \ldots, n$, create a grid of time points $0 = \tau_0 < \tau_1 < \cdots < \tau_k$.

  • b. For each $i$, define a weight $U_{ij}$ that takes the value 1 if the interval $(\tau_{j-1}, \tau_j]$ is contained in the interval $(L_i, R_i]$, and takes the value 0 otherwise.

  • c. Make an initial guess of the survival probability at $\tau_j$ (say, $S_0^{(0)}(\tau_j)$) for each $j$.

  • d. Calculate $p_j = S_0^{(0)}(\tau_{j-1}) - S_0^{(0)}(\tau_j)$, $j = 1, 2, \ldots, k$, which denotes the probability of an event occurring at time $\tau_j$.

  • e. Estimate the number of events that occurred at time $\tau_j$ by $e_j = \sum_{i=1}^{n} \frac{U_{ij} p_j}{\sum_m U_{im} p_m}$, where the denominator is the total probability assigned to possible event times in the interval $(L_i, R_i]$.

  • f. Calculate the estimated number of subjects at risk at time $\tau_j$ by $Y_j = \sum_{l=j}^{k} e_l$.

  • g. Calculate the updated product-limit estimator of the survival function at $\tau_j$ using the data $(e_j, Y_j)$, say $S_0^{(1)}(\tau_j)$, $j = 1, 2, \ldots, k$.

  • h. If $|S_0^{(1)}(\tau_j) - S_0^{(0)}(\tau_j)| < \epsilon$ for all $j$, where $\epsilon$ is a tolerance, stop the iterative algorithm. Otherwise, repeat steps d through g with $S_0^{(0)}$ replaced by $S_0^{(1)}$.
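A minimal R sketch of steps a–h is given below, under stated assumptions: L and R are vectors of interval endpoints with R = Inf for right-censored subjects, and the function name turnbull_surv is illustrative. The self-consistency update used here (p_new = e/sum(e)) is equivalent to recomputing the product-limit estimate from (e_j, Y_j).

```r
# Sketch of the Turnbull self-consistency algorithm for interval censored data.
turnbull_surv <- function(L, R, eps = 1e-6, max_iter = 1000) {
  # (a) grid from all finite endpoints; append Inf so right-censored
  #     observations retain mass beyond the last inspection time
  tau <- c(sort(unique(c(0, L[is.finite(L)], R[is.finite(R)]))), Inf)
  k <- length(tau) - 1
  # (b) U[i, j] = 1 if (tau_{j-1}, tau_j] is contained in (L_i, R_i]
  U <- outer(seq_along(L), seq_len(k),
             function(i, j) as.numeric(L[i] <= tau[j] & tau[j + 1] <= R[i]))
  # (c)-(d) initial guess: equal event probability in each grid interval
  p <- rep(1 / k, k)
  for (iter in seq_len(max_iter)) {
    # (e) expected number of events in each grid interval
    num <- sweep(U, 2, p, "*")
    e <- colSums(num / rowSums(num))
    # (f)-(g) self-consistency update of the event probabilities
    p_new <- e / sum(e)
    # (h) stop when the survival estimates stabilize
    if (max(abs(cumsum(p_new) - cumsum(p))) < eps) { p <- p_new; break }
    p <- p_new
  }
  data.frame(time = tau[-1], surv = 1 - cumsum(p))  # last row is time = Inf
}
```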

2.2. Form of the likelihood function

As missing observations are inherent to the problem set-up and model framework, we propose to employ the EM algorithm to estimate the unknown parameters.7,8,48,49 Implementing the EM algorithm requires the form of the complete data likelihood function. Let us define $\Delta_0 = \{i: \delta_i = 0\}$ and $\Delta_1 = \{i: \delta_i = 1\}$. The missing observations in this context are the cure indicator variables $J$, where $J$ is as defined in (1). Note that the $J_i$'s are all known to take the value 1 if $i \in \Delta_1$. However, if $i \in \Delta_0$, $J_i$ can take either 0 or 1 and is thus unknown or missing. Treating these $J_i$'s as the missing data, we define the complete data as $(L_i, R_i, \delta_i, J_i, x_i, z_i)$, for $i = 1, \ldots, n$, which contain both observed and missing data. Under the interval censoring mechanism, we can now express the complete data likelihood function and log-likelihood function as:

$$L_c = \prod_{i \in \Delta_1}\left[\pi(z_i)\{S_u(L_i|x_i) - S_u(R_i|x_i)\}\right]^{J_i} \times \prod_{i \in \Delta_0}(1 - \pi(z_i))^{1-J_i}\{\pi(z_i) S_u(L_i|x_i)\}^{J_i} \qquad (8)$$

and

$$l_c = \sum_{i \in \Delta_1} J_i\left[\log \pi(z_i) + \log\{S_u(L_i|x_i) - S_u(R_i|x_i)\}\right] + \sum_{i \in \Delta_0}\left[(1 - J_i)\log(1 - \pi(z_i)) + J_i\{\log \pi(z_i) + \log S_u(L_i|x_i)\}\right] \qquad (9)$$

where $S_u(t_i|x_i) = \{S_0(t_i)\}^{\exp(x_i^T\gamma)}$. It can be further noted that

$$l_c = l_{c1} + l_{c2} \qquad (10)$$

where

$$l_{c1} = \sum_{i=1}^{n}\left[J_i \log \pi(z_i) + (1 - J_i)\log(1 - \pi(z_i))\right] \qquad (11)$$

is a function that depends on the incidence part only and

$$l_{c2} = \sum_{i=1}^{n}\left[\delta_i \log\{S_u(L_i|x_i) - S_u(R_i|x_i)\} + (1 - \delta_i) J_i \log S_u(L_i|x_i)\right] \qquad (12)$$

is a function that depends on the latency part only.

2.3. Modeling the incidence part with SVM

Let us assume for the moment that $J_i$ for $i \in \Delta_0$ are observed by some mechanism, to facilitate the development. The SVM algorithm maximizes the linear or nonlinear margin between the two closest points belonging to the opposite classification groups (cured and susceptible). That is, the SVM solves the following optimization problem for $d_i$, $i = 1, \ldots, n$:

$$\max_{d_1,\ldots,d_n}\left[-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} d_i d_j (2J_i - 1)(2J_j - 1)\Phi_k(z_i, z_j) + \sum_{i=1}^{n} d_i\right] \qquad (13)$$

subject to the constraints $\sum_{i=1}^{n}(2J_i - 1)d_i = 0$ and $0 \le d_i \le C$, for $i = 1, \ldots, n$, where $C$ is a parameter that trades off the margin width against the misclassification proportion. Smaller values of $C$ cause the optimizer to look for a larger margin width while allowing more misclassification. $\Phi_k(\cdot,\cdot)$ is a symmetric positive semi-definite kernel function, which we take to be the radial basis function (RBF) given by $\Phi_k(z_i, z_j) = \exp\left\{-\frac{(z_i - z_j)^T(z_i - z_j)}{2\sigma^2}\right\}$. The RBF is a popular choice of kernel owing to its robustness: it implements the idea that a linear classifier in a higher-dimensional space acts as a nonlinear classifier in the original space. The parameter $\sigma^2$ determines the kernel width. Both hyper-parameters $C$ and $\sigma^2$ are tuned to obtain the highest classification accuracy using cross-validation methods. 50 A grid search can be implemented to determine $C$ and $\sigma^2$. Low values of $\sigma^2$ result in overfitting and a jagged separator, while high values of $\sigma^2$ result in more linear and smoother decision boundaries. It is also recommended to standardize the covariate vector $z$.

Mapping $J_i$ to $2J_i - 1$ converts the 0 and 1 labels to $-1$ and $+1$, which aids the formulation of the optimization problem under the SVM framework. Once the $d_i$'s are obtained, we can derive a threshold $b$ as $b = \sum_{i=1}^{n}(2J_i - 1)d_i\Phi_k(z_i, z_j) - (2J_j - 1)$, for some $j$ with $d_j > 0$. For any new covariate vector $z_{\text{new}}$, the optimal decision or classification rule is given by

$$\psi(z_{\text{new}}) = \sum_{i=1}^{n} d_i(2J_i - 1)\Phi_k(z_i, z_{\text{new}}) - b. \qquad (14)$$

As suggested by Li et al., 31 the sequential minimal optimization (SMO) method 51 can be applied to solve (13). As opposed to solving one large quadratic optimization problem to train an SVM model, SMO solves a series of smallest possible quadratic problems. Thus, SMO is a relatively inexpensive algorithm in terms of computing time. Any subject with covariate $z_{\text{new}}$ is assigned to the susceptible group if $\psi(z_{\text{new}}) > 0$ and to the cured group if $\psi(z_{\text{new}}) < 0$.

In the given context, note that it is not enough to simply classify subjects as cured or susceptible. It is also of interest to obtain estimates of the uncured probabilities $\pi(z_i)$, or equivalently the cured probabilities $1 - \pi(z_i)$. For this purpose, we use the Platt scaling method to obtain an estimate of $\pi(z_i)$ from the classification rule $\psi(\cdot)$. 52 The Platt scaling estimate of $\pi(z_i)$ is given by

$$\hat\pi(z_i) = \frac{1}{1 + \exp\{A\psi(z_i) + B\}} \qquad (15)$$

where A and B are obtained by maximizing the following function:

$$\sum_{i=1}^{n}\left\{(1 - \zeta_i)\left[A\psi(z_i) + B\right] - \log\left[1 + \exp\{A\psi(z_i) + B\}\right]\right\}. \qquad (16)$$

Here

$$\zeta_i = \begin{cases} \dfrac{n_{(1)} + 1}{n_{(1)} + 2}, & \text{if } J_i = 1 \\[1.5ex] \dfrac{1}{n_{(0)} + 2}, & \text{if } J_i = 0 \end{cases} \qquad (17)$$

and $n_{(1)}$ and $n_{(0)}$ represent the numbers of subjects in the susceptible and cured groups, respectively.
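As a minimal R sketch of this step, an RBF-kernel SVM with Platt-scaled probabilities can be fitted as below. The e1071 package wraps LIBSVM, whose probability = TRUE option fits a sigmoid of the form (15) internally; its gamma argument corresponds to $1/(2\sigma^2)$. Z (a standardized covariate matrix), J (0/1 cure labels), and the function name are illustrative assumptions.

```r
# Sketch: incidence fit via an RBF-kernel SVM with Platt scaling.
library(e1071)
fit_incidence_svm <- function(Z, J, cost = 1, gamma = 1 / ncol(Z)) {
  dat <- data.frame(Z, J = factor(J, levels = c(0, 1)))
  fit <- svm(J ~ ., data = dat, kernel = "radial",
             cost = cost, gamma = gamma, probability = TRUE)
  prob <- attr(predict(fit, dat, probability = TRUE), "probabilities")
  list(fit = fit, pi_hat = prob[, "1"])  # P(J = 1 | z): uncured probability
}
```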

We started the discussion of SVM-based modeling of the incidence part above with the assumption that the $J_i$'s are observed and available for training. In practice, however, the cure status $J_i$ is not known for $i \in \Delta_0$. A multiple imputation-based approach can be applied here to obtain $\hat\pi(z_i)$ with imputed values of $J_i$ for $i = 1, \ldots, n$. Note that the proposed multiple imputation technique does not rely on naive assumptions such as the existence of a known threshold time beyond which all censored observations can be considered cured. 53 The steps of the multiple imputation are as follows (an R sketch follows the list):

  1. For a pre-defined integer $N^*$ and $n^* = 1, 2, \ldots, N^*$, generate $\{J_i^{(n^*)}: i = 1, \ldots, n\}$, where $J_i^{(n^*)}$ is a Bernoulli random variable with success probability $p_i^{(n^*)}$. The derivation of $p_i^{(n^*)}$ is discussed in Section 2.5.

  2. For the imputed data $\{J_i^{(n^*)}: i = 1, \ldots, n\}$, obtain $\hat\pi^{(n^*)}(z_i)$ as the estimate of $\pi(z_i)$ by employing the SVM followed by the Platt scaling method given in (15), for $n^* = 1, 2, \ldots, N^*$.

  3. Calculate $\hat\pi(z_i) = (1/N^*)\sum_{n^*=1}^{N^*}\hat\pi^{(n^*)}(z_i)$ as the final estimate of $\pi(z_i)$.
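A minimal R sketch of steps 1–3 follows, assuming the hypothetical fit_incidence_svm() from the previous sketch and a vector p holding the imputation probabilities $p_i^{(n^*)}$.

```r
# Sketch of the multiple imputation step for the uncured probabilities.
impute_pi <- function(Z, p, N_star = 5, ...) {
  pi_mat <- replicate(N_star, {
    J_imp <- rbinom(length(p), size = 1, prob = p)  # step 1: impute J_i
    fit_incidence_svm(Z, J_imp, ...)$pi_hat         # step 2: SVM + Platt
  })
  rowMeans(pi_mat)                                  # step 3: average over N*
}
```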

2.4. Tuning the SVM model

To address over- and under-fitting, we split the data into two sets: a training set and a testing set. The training set is used to obtain the optimal hyper-parameters of the SVM model, which are then used to train the final SVM model. The testing set is used to validate the final SVM model. We examine the two most critical hyper-parameters of the SVM, namely $C$ and $\sigma$, when training the SVM model. The parameter $C$ is a regularization ($l_2$) parameter that penalizes the model for misclassification. The value of $C$ is inversely proportional to the strength of the regularization: when $C$ is large, the penalty for misclassification is substantial and the strength of the regularization is small, and vice versa. The parameter $\sigma$ of the RBF kernel, on the other hand, controls the extent of the influence of a single training point, which affects the performance of the model. These hyper-parameters can be obtained using cross-validation techniques.50,54 In this article, we use a grid search cross-validation technique: we fit several models using different pairs of hyper-parameter values, evaluate the fitted models to find the optimal trained model and hyper-parameter values, and then use these optimal hyper-parameter values to fit the final model. Finally, we validate the performance of the final fitted SVM model by performing predictions on the testing set. Model performance criteria such as the receiver operating characteristic (ROC) curve and its area under the curve (AUC) are used to evaluate the final model.
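A minimal R sketch of this grid search is given below, assuming the data frame dat from the earlier sketches. e1071::tune.svm() performs 10-fold cross-validation by default; the grids shown are illustrative, and its gamma again corresponds to $1/(2\sigma^2)$.

```r
# Sketch: grid-search cross-validation for (C, sigma) with e1071::tune.svm().
library(e1071)
train_id <- sample(nrow(dat), floor(0.67 * nrow(dat)))  # 67/33 split
tuned <- tune.svm(J ~ ., data = dat[train_id, ], kernel = "radial",
                  gamma = 10^(-3:1), cost = 10^(-1:3))
best_fit <- tuned$best.model
# Validate best_fit on the held-out 33%, e.g. via an ROC curve and its AUC.
```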

2.5. Development of the EM algorithm

The E-step of the EM algorithm involves finding the conditional expectation of the complete data log-likelihood function in (9) given the current estimates (say, at the $(r+1)$-th iteration step) and the observed data, which is equivalent to finding the conditional expectation of $J_i$ given the observed data, $\pi(z_i)$, and $(S_0(\cdot), \gamma^T)^T$, as

$$w_i^{(r+1)} = \delta_i + (1 - \delta_i)\,\frac{\pi^{(r)}(z_i)\,S_u^{(r)}(L_i|x_i)}{1 - \pi^{(r)}(z_i) + \pi^{(r)}(z_i)\,S_u^{(r)}(L_i|x_i)}, \quad i = 1, \ldots, n \qquad (18)$$

where $S_u^{(r)}(L_i|x_i) = \{\hat S_0(L_i)\}^{\exp(x_i^T\gamma^{(r)})}$, with $\hat S_0(L_i)$ denoting a non-parametric estimator of the baseline survival function evaluated at $L_i$, $i = 1, 2, \ldots, n$. Note that (18) implies $w_i^{(r+1)} = 1$ for all $i \in \Delta_1$. We obtain the conditional expectation of $l_c$ by simply replacing the $J_i$'s with $w_i^{(r+1)}$ in (9). We denote this conditional expectation by

$$Q_c = Q_{c1} + Q_{c2} \qquad (19)$$

where

$$Q_{c1} = \sum_{i=1}^{n}\left[w_i^{(r+1)}\log \pi(z_i) + (1 - w_i^{(r+1)})\log(1 - \pi(z_i))\right] \qquad (20)$$

and

$$Q_{c2} = \sum_{i=1}^{n}\left[\delta_i \log\{S_u(L_i|x_i) - S_u(R_i|x_i)\} + (1 - \delta_i) w_i^{(r+1)}\log S_u(L_i|x_i)\right]. \qquad (21)$$
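Before detailing the M-step, a minimal R sketch of the E-step weight computation in (18) may help fix ideas. S0_hat() (returning the Turnbull baseline survival at given times), pi_hat, and the function name are assumptions for illustration.

```r
# Sketch of the E-step weights in (18).
e_step_weights <- function(delta, L, pi_hat, S0_hat, X, gamma) {
  Su_L <- S0_hat(L)^exp(drop(X %*% gamma))  # S_u(L_i | x_i) under the PH model
  delta + (1 - delta) * pi_hat * Su_L / (1 - pi_hat + pi_hat * Su_L)
}
```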

The M-step updates the parameters in $Q_{c1}$ and $Q_{c2}$. For $r = 0, 1, \ldots$, the procedure at the $(r+1)$-th iteration step of the EM algorithm is as follows.

  1. Carry out the multiple imputation technique described in Section 2.3, taking $p_i^{(n^*)} = w_i^{(r+1)}$ for $n^* = 1, \ldots, N^*$ and $i = 1, \ldots, n$. Obtain $\hat\pi^{(r+1)}(z_i) = (1/N^*)\sum_{n^*=1}^{N^*}\hat\pi^{(n^*)}(z_i)$ by applying the Platt scaling method with the classification rule $\psi(\cdot)$ defined in (14). Recall that the classification rule is built from the imputed data $\{J_i^{(n^*)}: i = 1, \ldots, n\}$, where $J_i^{(n^*)}$ is a Bernoulli random variable with success probability $p_i^{(n^*)}$.

  2. Obtain $\gamma^{(r+1)}$ by maximizing the function $Q_{c2}$, as defined in (21), with respect to $\gamma$. That is, find
    $$\gamma^{(r+1)} = \arg\max_{\gamma} Q_{c2}. \qquad (22)$$
    The maximization in (22) can be carried out with the optim() function in R by specifying the method as "Nelder-Mead" (a sketch of this step is given after the list). One may also consider newer optimization methods based on the nonlinear conjugate gradient algorithm with an efficient line search technique.55–57
  3. Check for convergence as follows:
    $$\|\theta^{(r+1)} - \theta^{(r)}\|_2^2 < \epsilon$$
    where $\theta^{(k)} = (\bar\pi^{(k)}(z), \gamma^{(k)T})^T$, with $\bar\pi^{(k)}(z) = \frac{1}{n}\sum_{i=1}^{n}\pi^{(k)}(z_i)$, $\epsilon > 0$ is a pre-determined, sufficiently small tolerance, and $\|\cdot\|_2$ is the $L_2$-norm. If this criterion is satisfied, stop the algorithm; in this case, $\hat\pi^{(r+1)}(z_i)$, for $i = 1, \ldots, n$, and $\gamma^{(r+1)}$ are the final pointwise estimates. Otherwise, continue to step 4.
  4. Update $w_i^{(r+1)}$ in (18) to
    $$w_i^{(r+2)} = \delta_i + (1 - \delta_i)\,\frac{\hat\pi^{(r+1)}(z_i)\,S_u^{(r+1)}(L_i|x_i)}{1 - \hat\pi^{(r+1)}(z_i) + \hat\pi^{(r+1)}(z_i)\,S_u^{(r+1)}(L_i|x_i)}. \qquad (23)$$
  5. Repeat steps 1 to 4 until convergence is achieved.
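The following is a minimal R sketch of the M-step maximization referenced in step 2, under stated assumptions: S0_hat() returns the Turnbull estimate of the baseline survival, w holds the current E-step weights from (18), gamma_cur is the current iterate, and delta, L, R, X are the observed data; all names are illustrative.

```r
# Sketch of the M-step in (22): minimize -Q_c2 over gamma with optim().
neg_Qc2 <- function(gamma, delta, L, R, w, X, S0_hat) {
  eta <- exp(drop(X %*% gamma))
  Su_L <- S0_hat(L)^eta
  Su_R <- numeric(length(R))
  fin <- is.finite(R)                        # R_i < Inf only when delta_i = 1
  Su_R[fin] <- S0_hat(R[fin])^eta[fin]
  -sum(delta * log(pmax(Su_L - Su_R, 1e-12)) +
       (1 - delta) * w * log(pmax(Su_L, 1e-12)))
}
gamma_new <- optim(gamma_cur, neg_Qc2, method = "Nelder-Mead", delta = delta,
                   L = L, R = R, w = w, X = X, S0_hat = S0_hat)$par
```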

Note that the maximization of $Q_{c2}$ with respect to $\gamma$ can be done only after estimating the baseline survival function $S_0(\cdot)$, which appears as a nuisance parameter in (21). As mentioned in Section 2.1, we estimate $S_0(\cdot)$ using the non-parametric Turnbull estimator.

2.6. Calculating the standard errors

The standard errors are estimated by non-parametric bootstrapping. For $b = 1, \ldots, B$, the $b$-th bootstrapped data set is obtained by resampling with replacement from the original data, with the same sample size as the original data. We then carry out steps 1 to 5 of the EM algorithm in Section 2.5 to obtain the estimates of the model parameters for each bootstrapped data set. This gives $B$ estimates of each model parameter, and for each parameter, the standard deviation of these $B$ estimates provides an estimate of its standard error.
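A minimal R sketch of this bootstrap follows, assuming a hypothetical fit_cure_em() that runs steps 1–5 of Section 2.5 and returns the latency estimates.

```r
# Sketch of non-parametric bootstrap standard errors for the latency parameters.
boot_se <- function(dat, B = 100) {
  est <- replicate(B, {
    bdat <- dat[sample(nrow(dat), replace = TRUE), ]  # resample with replacement
    fit_cure_em(bdat)$gamma_hat                       # hypothetical EM fit
  })
  apply(matrix(est, ncol = B), 1, sd)                 # SE of each parameter
}
```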

2.7. Initial values of model parameters

To start the iterative EM algorithm, we need initial values of $\pi(z_i)$, for $i = 1, \ldots, n$, and $\gamma$. For an initial guess of $\pi(z_i)$, we can use the censoring indicator $\delta_i$ as the cure indicator; that is, we set $J_i = 0$ if $\delta_i = 0$ and $J_i = 1$ if $\delta_i = 1$ for $i = 1, 2, \ldots, n$. Then, we employ the SVM to build the classification rule (as in (14)) and apply the Platt scaling method (as in (15)) to obtain $\pi(z_i)$. For an initial guess of the latency parameter $\gamma$, we can simply initialize each component of $\gamma$ at 0.5.

3. Simulation study

In this section, we assess the performance of the proposed SVM-based EM algorithm in estimating the parameters of the mixture cure rate model for interval censored data. We also compare the performance of the SVM-based EM algorithm with the logistic regression-based and spline regression-based EM algorithms. To fit the thin plate spline for the incidence part, we use the gam function in the mgcv package. We consider the following scenarios for generating the true uncured probabilities $\pi(z)$:

$$\text{Scenario 1:}\quad \pi(z) = \frac{e^{0.3 - 5z_1 - 3z_2}}{1 + e^{0.3 - 5z_1 - 3z_2}}$$

$$\text{Scenario 2:}\quad \pi(z) = \frac{e^{0.3 + 10z_1z_2 - 5z_1 - z_2}}{1 + e^{0.3 + 10z_1z_2 - 5z_1 - z_2}}$$

$$\text{Scenario 3:}\quad \pi(z) = \exp\Big(-\exp\big(0.8z_1z_2 + 1.1z_2z_4 + 0.5z_3 + 0.2z_7^2 - 1.3\sin(z_5z_6) + 1.9\cos(z_7z_8) - 1.5\exp(z_5z_6z_7) - 1.6z_7z_8z_9z_{10} + 0.8z_6z_7z_8^2z_9^2 + 1.8\cos(z_5z_6z_7z_8z_9) + 1.2(z_6z_7z_8z_9z_{10})^{0.5} - 2.4\big)\Big).$$

In Scenarios 1 and 2, $z_1$ and $z_2$ are generated from the standard normal distribution. In Scenario 3, $z_1, z_2, z_3$, and $z_4$ are generated from Bernoulli distributions with success probabilities 0.5, 0.3, 0.5, and 0.7, respectively, whereas $z_5, z_6, \ldots, z_{10}$ are generated from the standard normal distribution. In all scenarios, we take $z = x$, that is, we use the same set of covariates to model the incidence and latency parts. Note that Scenario 1 represents the standard logistic regression model, which yields a linear classification boundary, whereas Scenario 2 yields a nonlinear classification boundary (see Figure 1). Scenario 3 represents a more complex link function with a large number of covariates and complicated interaction terms. Corresponding to Scenarios 1 and 2, Figure 2 shows plots of the simulated uncured probabilities and how they vary with respect to the covariates $z_1$ and $z_2$. We consider two sample sizes, $n = 300$ and $n = 600$. We assume the lifetimes of the susceptible subjects follow the proportional hazards structure with hazard function

$$h_u(t) = \alpha t^{\alpha - 1}\exp\{x^T\gamma\}$$

where the true value of $\alpha$ is 1 for Scenarios 1 and 2, and 3.5 for Scenario 3. For Scenarios 1 and 2, we take the true value of $\gamma$ as $(5, 10)$, whereas for Scenario 3 we take $\gamma = (0.8, 1.5, 0.5, 1.3, 0.6, 1.4, 0.5, 0.8, 0.5, 1.8)$. The censoring time is generated from a uniform distribution on $(0, 20)$. Under these settings, the true cure probability and censoring proportion, denoted by (cure, censoring), for Scenarios 1, 2, and 3 are roughly (0.50, 0.65), (0.40, 0.60), and (0.60, 0.70), respectively. Thus, the three scenarios cover low, moderate, and high cure and censoring rates. To generate the interval censored lifetime data $(L_i, R_i, \delta_i)$, $i = 1, 2, \ldots, n$, we carry out the following steps (an R sketch follows the list):

  • Step 1: Generate a Uniform(0, 1) random variable $U_i$ and a censoring time $C_i$;

  • Step 2: If $U_i \le 1 - \pi(z_i)$, set $T_i = \infty$;

  • Step 3: If $U_i > 1 - \pi(z_i)$, generate $T_i$ from a Weibull distribution with shape parameter $\alpha$ and scale parameter $\{\exp(\gamma_1 x_{1i} + \gamma_2 x_{2i})\}^{-1/\alpha}$;

  • Step 4:
    • a. If $\min\{T_i, C_i\} = C_i$, set $L_i = C_i$, $R_i = \infty$, and $\delta_i = 0$;
    • b. If $\min\{T_i, C_i\} = T_i$, set $\delta_i = 1$, and generate $L_{1i}$ from a Uniform(0.2, 0.7) distribution and $L_{2i}$ from a Uniform(0, 1) distribution. Next, create the intervals $(0, L_{2i}], (L_{2i}, L_{2i} + L_{1i}], \ldots, (L_{2i} + (k-1)L_{1i}, L_{2i} + kL_{1i}], \ldots$, $k = 1, 2, \ldots$, and select the interval $(L_i, R_i]$ that satisfies $L_i < T_i \le R_i$.
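A minimal R sketch of steps 1–4 for a single subject is given below, assuming two latency covariates as in Scenarios 1 and 2; the function and argument names are illustrative.

```r
# Sketch: generate one interval censored observation under the Weibull latency.
gen_interval <- function(pi_i, x_i, gamma, alpha, cens_max = 20) {
  U <- runif(1); C <- runif(1, 0, cens_max)                 # step 1
  Ti <- if (U <= 1 - pi_i) Inf else                         # step 2: cured
    rweibull(1, shape = alpha,                              # step 3
             scale = exp(-sum(gamma * x_i) / alpha))
  if (C <= Ti) return(c(L = C, R = Inf, delta = 0))         # step 4a
  L1 <- runif(1, 0.2, 0.7); L2 <- runif(1, 0, 1)            # step 4b
  m <- max(0, ceiling((Ti - L2) / L1)) + 1                  # enough inspections
  breaks <- c(0, L2 + L1 * 0:m)                             # inspection grid
  j <- findInterval(Ti, breaks, left.open = TRUE)           # L_j < T_i <= R_j
  c(L = breaks[j], R = breaks[j + 1], delta = 1)
}
```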

Figure 1. Simulated cured and uncured observations for Scenarios 1 and 2.

Figure 2. Simulated uncured probabilities and their behavior with respect to the covariates for Scenarios 1 and 2.

All simulations are done using the R statistical software (version 4.0.4) and results are based on $M = 500$ Monte Carlo runs. The computational codes for data generation and the SVM-based EM algorithm are available in the Supplemental Material. In all cases, 67% of the data is used as the training set and the remaining 33% as the testing set. For our proposed methodology, we set the number of imputations in the multiple imputation technique to 5, in line with existing works.31,58 In Tables 1 and 2, we report the bias and mean squared error (MSE), respectively, of the estimated uncured probability $\hat\pi(z)$ and the susceptible survival probability $\hat S_u = \hat S_u(\cdot,\cdot;x)$. These are calculated as:

$$\text{Bias}(\hat\pi(z)) = \frac{1}{M}\sum_{k=1}^{M}\left[\frac{1}{n}\sum_{i=1}^{n}\left\{\hat\pi^{(k)}(z_i) - \pi^{(k)}(z_i)\right\}\right]$$

$$\text{Bias}(\hat S_u) = \frac{1}{M}\sum_{k=1}^{M}\left[\frac{1}{n}\sum_{i=1}^{n}\left\{\hat S_u^{(k)}(L_i, R_i; x_i) - S_u^{(k)}(L_i, R_i; x_i)\right\}\right]$$

$$\text{MSE}(\hat\pi(z)) = \frac{1}{M}\sum_{k=1}^{M}\left[\frac{1}{n}\sum_{i=1}^{n}\left\{\hat\pi^{(k)}(z_i) - \pi^{(k)}(z_i)\right\}^2\right]$$

$$\text{MSE}(\hat S_u) = \frac{1}{M}\sum_{k=1}^{M}\left[\frac{1}{n}\sum_{i=1}^{n}\left\{\hat S_u^{(k)}(L_i, R_i; x_i) - S_u^{(k)}(L_i, R_i; x_i)\right\}^2\right]$$

where $\pi^{(k)}(z_i)$ and $S_u^{(k)}(L_i, R_i; x_i)$ are the true uncured probability and susceptible survival probability, respectively, corresponding to the $i$-th subject and the $k$-th Monte Carlo run; $\hat\pi^{(k)}(z_i)$ and $\hat S_u^{(k)}(L_i, R_i; x_i)$ are the corresponding estimates. In the above expressions, note that $S_u^{(k)}(L_i, R_i; x_i) = S_u^{(k)}(T_i; x_i)$, where $T_i = (L_i + R_i)/2$ if $R_i < \infty$ and $T_i = L_i$ if $R_i = \infty$; $\hat S_u^{(k)}(L_i, R_i; x_i)$ is defined in the same way.
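As a minimal illustration of these summaries, assuming hypothetical $M \times n$ matrices pi_true and pi_est of true and estimated uncured probabilities (one row per Monte Carlo run):

```r
# Sketch: Monte Carlo bias and MSE of the estimated uncured probability.
bias_pi <- mean(rowMeans(pi_est - pi_true))
mse_pi  <- mean(rowMeans((pi_est - pi_true)^2))
```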

Table 1.

Comparison of bias of the uncured probability and susceptible survival probability for different models.

              Bias of uncured probability       Bias of susceptible survival probability
n   Scenario  SVM      Spline    Logistic       SVM      Spline    Logistic
300 1 −0.1425 −0.1632 0.0584 0.1079 0.1101 0.1060
2 −0.0684 0.0900 0.2322 0.0500 0.0505 0.0515
3 0.0544 0.1046 0.1786 0.1058 0.0651 0.1013
600 1 −0.1255 −0.1611 0.0474 0.1075 0.1089 0.1058
2 −0.0628 0.1009 0.2186 0.0492 0.0495 0.0511
3 0.0364 0.0957 0.1494 0.0828 0.0774 0.1034

SVM: support vector machine.

Table 2.

Comparison of MSE of the uncured probability and susceptible survival probability for different models.

              MSE of uncured probability        MSE of susceptible survival probability
n   Scenario  SVM      Spline    Logistic       SVM      Spline    Logistic
300 1 0.1132 0.1753 0.0618 0.1019 0.1085 0.1022
2 0.0827 0.1906 0.2184 0.0338 0.0363 0.0598
3 0.1052 0.1587 0.2111 0.0609 0.0793 0.1060
600 1 0.1128 0.1715 0.0614 0.0988 0.1001 0.1020
2 0.0809 0.1901 0.2185 0.0328 0.0340 0.0727
3 0.0956 0.1280 0.1696 0.0380 0.0468 0.0649

SVM: support vector machine; MSE: mean squared error.

From Table 1, it is clear that the biases of the estimated uncured probability and the susceptible survival probability obtained from the logistic regression-based EM algorithm are smaller than those obtained from the proposed SVM-based EM algorithm and the spline-based EM algorithm when logistic regression is the correct model (Scenario 1). However, when the true model for the uncured probability is not logistic regression (Scenarios 2 and 3), the proposed SVM-based EM algorithm produces smaller bias in the estimated uncured probability than both the logistic regression-based and spline-based EM algorithms. In this case, as far as the estimated susceptible survival probabilities are concerned, the SVM-based EM algorithm produces smaller bias only under Scenario 2. From Table 2, we note that when logistic regression is the true model for the uncured probability (Scenario 1), the MSE of the estimated uncured probability obtained from the logistic regression-based EM algorithm is smaller than those obtained from the SVM-based and spline-based EM algorithms. However, under Scenarios 2 and 3, that is, under non-logistic true models for the uncured probability, the proposed SVM-based EM algorithm produces smaller MSE of the estimated uncured probability than both the logistic regression-based and spline-based EM algorithms. When it comes to the estimation of the susceptible survival probability, our proposed SVM-based EM algorithm produces smaller MSEs in all considered scenarios. Overall, we conclude that the proposed SVM-based EM algorithm performs better than the standard logistic regression-based and spline-based EM algorithms when the true classification boundary is nonlinear and complex. This clearly demonstrates the ability of the proposed SVM-based mixture cure model to handle complex nonlinear classification boundaries.

Although the cure status is unobserved in real data, we do know which observations can be considered cured when we simulate data. Using this information for the simulated data, we can compare the proposed SVM-based mixture cure model with the logistic regression-based and spline regression-based mixture cure models using ROC curves and AUC values for the different scenarios considered. Note that the true label for calculating the AUC is the true cure index of each subject at data generation. Figure 3 presents the ROC curves under the different scenarios, and the corresponding AUC values are presented in Table 3; these results are based on 500 Monte Carlo runs. It is once again clear that under Scenarios 2 and 3 (i.e. when the true classification boundaries are nonlinear), the performance (accuracy) of the SVM-based mixture cure model is much better than that of the logistic regression-based and spline-based mixture cure models. However, under Scenario 1 (i.e. when the true classification boundary is linear), the logistic regression-based model performs slightly better than the SVM-based model. The similarity of the AUC values obtained from the training and testing data indicates no issue with over- or under-fitting.

Figure 3. Receiver operating characteristic (ROC) curves for different models and under different scenarios.

Table 3.

Comparison of AUC values for different models and scenarios.

              Training AUC                      Testing AUC
n   Scenario  SVM      Spline    Logistic      SVM      Spline    Logistic
300 1 0.8476 0.7461 0.9248 0.8437 0.7409 0.9225
2 0.8057 0.5756 0.5330 0.7990 0.5562 0.5445
3 0.8831 0.6964 0.5885 0.7312 0.5837 0.5507
600 1 0.8229 0.7421 0.9227 0.8218 0.7398 0.9215
2 0.7973 0.5659 0.5255 0.7956 0.5554 0.5432
3 0.9231 0.6721 0.5812 0.8706 0.6398 0.5615

SVM: support vector machine; AUC: area under the curve.

To further assess the robustness and generalizability of the proposed SVM model across different data settings, we study a scenario with 10 correlated covariates generated from a multivariate normal distribution $N_{10}(0, \Sigma)$, where $\Sigma$ denotes the variance–covariance matrix whose $(i,j)$-th element is $\sigma_{ij} = 0.9^{|i-j|}$, $1 \le i, j \le 10$. The choice of 0.9 as the base of the exponent determines how quickly the correlation increases with decreasing separation. In this scenario, we use all 10 covariates in the incidence part but only a subset of five covariates in the latency part; in this way, we ensure $z \neq x$. The PH model is fitted for the latency with the true value of $\gamma$ taken as $\gamma = (0.8, 1.5, 0.5, 1.3, 0.6)$. Table 4 presents the biases and MSEs of the uncured and susceptible survival probabilities, whereas Table 5 presents the AUC values for both training and testing sets. From Tables 4 and 5, it is clear that the SVM once again outperforms both the spline and logistic models, thereby demonstrating robustness and generalizability.

Table 4.

Comparison of different models through the biases and MSEs of different quantities of interest for $n = 300$ and in the presence of correlated covariates, where $z \neq x$.

           Uncured probability        Susceptible survival probability
Model      Bias       MSE             Bias       MSE
SVM 0.0611 0.0936 0.0966 0.0779
Logistic 0.1740 0.2335 0.0972 0.0936
Spline 0.0949 0.1433 0.0955 0.0825

SVM: support vector machine; MSE: mean squared error.

Table 5.

Comparison of different models through the AUC values for $n = 300$ and in the presence of correlated covariates, where $z \neq x$.

Model Training AUC Testing AUC
SVM 0.8024 0.7462
Logistic 0.5340 0.5431
Spline 0.6497 0.6087

SVM: support vector machine; AUC: area under the curve.

Following the suggestion of a reviewer, we also used a NN to model the incidence part, that is, $\pi(z)$, and then the EM algorithm to estimate all parameters. For this purpose, we fitted a fully connected NN with two hidden layers containing 12 and 24 neurons, respectively, using the sigmoid activation function. In Table 6, we present the biases and MSEs of the uncured and susceptible survival probabilities. Clearly, the performance of the SVM is better in estimating the uncured probability, which is our main parameter of interest; for the susceptible survival probability, the performances are comparable. In Table 7, we compare the AUCs and computation times of the SVM and NN models. The computation times represent the time (in seconds) to produce the incidence and latency estimates along with the standard errors (obtained using a bootstrap sample of size 100) for a generated data set of size 300. For other sample sizes, the observations are similar and hence not reported for the sake of brevity. Observe that the computing times of the SVM model are much lower than those of the NN model for all three scenarios. Moreover, the SVM results in higher AUC values, meaning improved predictive accuracy. These findings lead us to conclude that the proposed SVM model is preferable to the NN model.

Table 6.

Comparison of SVM and NN models through the biases and MSEs of different quantities of interest for n=300 .

                   Uncured probability        Susceptible survival probability
Scenario  Model    Bias       MSE             Bias       MSE
1 SVM −0.0954 0.1395 0.1075 0.1092
NN −0.2231 0.2265 0.1074 0.1108
2 SVM −0.0570 0.0877 0.0494 0.0338
NN −0.1846 0.2120 0.0492 0.0352
3 SVM 0.0698 0.1095 0.1172 0.0678
NN −0.0990 0.1544 −0.0385 0.0579

SVM: support vector machine; MSE: mean squared error; NN: neural network.

Table 7.

Comparison of SVM and NN models through the AUC values and computation times for n=300 .

Scenario Model Training AUC Testing AUC Computation time (in seconds)
1 SVM 0.8273 0.8150 86.13
NN 0.7521 0.7393 111.07
2 SVM 0.7922 0.7675 121.88
NN 0.7801 0.7094 143.27
3 SVM 0.8791 0.7216 192.35
NN 0.8558 0.6952 239.16

SVM: support vector machine; AUC: area under the curve; NN: neural network.

4. Illustrative example: Analysis of HDSD data

We further demonstrate our proposed methodology using a data set extracted from NASA's Hypobaric Decompression Sickness Data Bank, hereafter referred to as the HDSD data. 59 The data set has information on subjects who underwent denitrogenation test procedures before being exposed to a hypobaric environment. The event of interest is the onset of grade IV venous gas emboli (VGE). The time to onset of grade IV VGE, if it occurred, was not exactly observed but was contained within a time interval. The covariates of interest are age (in years), sex (1: male; 0: female), TR360 (a measure of decompression stress ranging from 1.04 to 1.89), and noadyn (an indicator of whether the subject was ambulatory (noadyn = 1) or lower body adynamic (noadyn = 0) during the test session). Information on 236 subjects, whose event times are either interval censored or right censored, is available for analysis. 41 In Figure 4, we present a plot of the non-parametric maximum likelihood estimate (NPMLE) of the survival function. The plot clearly levels off at a significant non-zero proportion, indicating a greater likelihood of the presence of a cured fraction in the data.

Figure 4. Plot of the NPMLE of the survival function for the HDSD data. HDSD: Hypobaric Decompression Sickness Database; NPMLE: non-parametric maximum likelihood estimate.

We fit the proposed SVM-based mixture cure model and, for comparison, we also fit the logistic regression-based and spline regression-based mixture cure models. Noting that the sample size for the HDSD data is small, and to avoid over-fitting or under-fitting, we adopt a 10-fold cross-validation technique that allows us to simultaneously fit and evaluate each model on the full data. This is consistent with Hastie et al. 54 First, we draw inference on the incidence part of the model. In Figure 5, we plot the estimates of the uncured probabilities against age and TR360 when stratified by sex and noadyn for all models. Clearly, under the proposed SVM-based model, the change in the estimates of the uncured probabilities is non-monotonic with respect to age and TR360. This non-monotonic relationship is not captured by the logistic regression-based and spline regression-based models. Note that for the spline regression-based model, the pattern is similar to the logistic regression-based model.

Figure 5. Estimates of uncured probabilities as a function of age and TR360 when stratified by noadyn and sex for the Hypobaric Decompression Sickness Database (HDSD) data.

Next, we verify whether our proposed model's ability to capture nonlinear patterns in the data results in improved predictive accuracy when prediction of the cured/uncured statuses is of interest. This can be verified using ROC curves and by comparing AUC values for the different models considered. Since the cure statuses are unknown for the right censored observations in real data, we first impute the missing cured/uncured statuses: for each right censored observation, the missing uncured status is imputed by generating a random number from a Bernoulli distribution whose success probability is the conditional probability of being uncured, as given in (18). With complete knowledge of the cured/uncured statuses of all subjects, the ROC curves can be drawn and the AUC values computed. However, since this method involves simulation (i.e. randomness), we make the ROC curves and AUC values more stable by repeating the procedure 500 times and reporting the averaged ROC curves and AUC values. Figure 6 presents the averaged ROC curves for the different models; the corresponding AUC values are 0.8795, 0.8627, and 0.7766 for the SVM-based, logistic regression-based, and spline regression-based models, respectively. Thus, the proposed SVM-based model indeed provides the highest predictive accuracy for the HDSD data.
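A minimal R sketch of the averaged-AUC computation is given below, assuming w holds the conditional uncured probabilities in (18) for all subjects (with $w_i = 1$ for the non-censored) and pi_hat the fitted incidence estimates; pROC is a standard R package for ROC/AUC computation.

```r
# Sketch: AUC averaged over repeated imputations of the missing cure statuses.
library(pROC)
auc_reps <- replicate(500, {
  J_imp <- rbinom(length(w), size = 1, prob = w)   # impute missing statuses
  as.numeric(auc(roc(J_imp, pi_hat, quiet = TRUE)))
})
mean(auc_reps)  # averaged AUC over the 500 imputations
```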

Figure 6. ROC curves under different models for the HDSD data. HDSD: Hypobaric Decompression Sickness Database; ROC: receiver operating characteristic.

Finally, we look at the results for the latency parts of the fitted models. Table 8 presents the estimates of the latency parameters, their standard deviations (SD), and the p-values. At the 5% level of significance, only noadyn turns out to be significant in all models as far as the time to onset of grade IV VGE for uncured patients is concerned, and its effect is in the same direction for all models. Since the estimate of $\gamma_4$ is positive, ambulatory subjects tend to experience grade IV VGE faster. This finding is consistent with Ma. 41

Table 8.

Estimation results corresponding to the latency parameters for the HDSD data.

                 Estimates                      SD                          p-value
Parameter   SVM      Spline    Logistic   SVM     Spline   Logistic   SVM          Spline        Logistic
Age         0.0294   −0.1798   −0.1947    0.1037  0.0967   0.115      0.7767       0.0631        0.1418
TR360       0.0628   −0.2697   −0.2522    0.1744  0.1741   0.145      0.7187       0.1214        0.1632
Sex         0.3449   −0.1232   −0.2908    0.4459  0.3431   0.107      0.4392       0.7196        0.5910
Noadyn      1.3252   1.5849    1.6868     0.4081  0.3072   0.107      1.17 × 10^−3  2.47 × 10^−7  8.00 × 10^−4

SVM: support vector machine; HDSD: Hypobaric decompression sickness database; SD: standard deviation.

5. Conclusion

The SVM has received a great amount of interest in the past two decades. It has been shown to perform well in a wide array of problems, including face detection, text categorization, and pedestrian detection. However, the use of the SVM in the context of cure rate models is new and not well explored. In this article, we have proposed a new cure rate model that uses the SVM to model the incidence part and a PH structure to model the latency part for survival data subject to interval censoring. The new cure rate model inherits the properties of the SVM and can capture more complex classification boundaries. For estimation, we have proposed an EM algorithm in which sequential minimal optimization together with the Platt scaling method is employed to estimate the uncured probabilities. In this regard, due to the unavailability of some cure statuses, we make use of a multiple imputation-based approach to generate the missing cure statuses. Due to the complexity of the proposed model and the estimation method, we approximate the standard errors of the estimated parameters using non-parametric bootstrapping. Through a simulation study, we have shown that when the true classification boundary is nonlinear, the proposed SVM-based mixture cure model overall performs better than the standard logistic regression-based and spline-based mixture cure models. As future research, it is of great interest to study the performance of the proposed model in the presence of high-dimensional covariates and to develop computationally efficient methods for covariate selection. It is also of interest to extend the proposed model to accommodate a competing risks scenario.18,60 Furthermore, it is possible to explore other machine learning algorithms (e.g. NNs or tree-based approaches) to study more complicated cure rate models, such as those that consider the elimination of risk factors.61–66 We are currently looking at some of these problems and hope to report the findings in upcoming manuscripts.

Supplemental Material

Supplemental material, sj-pdf-1-smm-10.1177_09622802231210917, for "A support vector machine-based cure rate model for interval censored data" by Suvra Pal, Yingwei Peng, Wisdom Aselisewine and Sandip Barui, in Statistical Methods in Medical Research.

Acknowledgements

The authors thank two anonymous reviewers for their careful review and comments which led to this improved version of the manuscript.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Suvra Pal’s work was supported by the National Institute Of General Medical Sciences of the National Institutes of Health under Award Number R15GM150091. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health. Yingwei Peng’s work was partially supported by a Discovery grant from the Natural Sciences and Engineering Research Council of Canada.

Supplemental material: Supplemental material for this article is available online.

References

1. Boag JW. Maximum likelihood estimates of the proportion of patients cured by cancer therapy. J R Stat Soc Ser B (Methodological) 1949; 11: 15–53.
2. Berkson J, Gage RP. Survival curve for cancer patients following treatment. J Am Stat Assoc 1952; 47: 501–515.
3. Pal S, Barui S, Davies K, et al. A stochastic version of the EM algorithm for mixture cure model with exponentiated Weibull family of lifetimes. J Stat Theory Pract 2022; 16: 48.
4. Farewell VT. The use of mixture models for the analysis of survival data with long-term survivors. Biometrics 1982; 38: 1041–1046.
5. Farewell VT. Mixture models in survival analysis: Are they worth the risk? Can J Stat 1986; 14: 257–262.
6. Kuk AY, Chen CH. A mixture model combining logistic regression with proportional hazards regression. Biometrika 1992; 79: 531–541.
7. Peng Y, Dear KB. A nonparametric mixture model for cure rate estimation. Biometrics 2000; 56: 237–243.
8. Sy JP, Taylor JM. Estimation in a Cox proportional hazards cure model. Biometrics 2000; 56: 227–236.
9. Gu Y, Sinha D, Banerjee S. Analysis of cure rate survival data under proportional odds model. Lifetime Data Anal 2011; 17: 123–134.
10. Mao M, Wang JL. Semiparametric efficient estimation for a class of generalized proportional odds cure models. J Am Stat Assoc 2010; 105: 302–311.
11. Li CS, Taylor JM. A semi-parametric accelerated failure time cure model. Stat Med 2002; 21: 3235–3247.
12. Zhang J, Peng Y. A new estimation method for the semiparametric accelerated failure time mixture cure model. Stat Med 2007; 26: 3157–3171.
13. Zhang J, Peng Y. Accelerated hazards mixture cure model. Lifetime Data Anal 2009; 15: 455–467.
14. Lu W, Ying Z. On semiparametric transformation cure models. Biometrika 2004; 91: 331–343.
15. Barui S, Yi YG. Semiparametric methods for survival data with measurement error under additive hazards cure rate models. Lifetime Data Anal 2020; 26: 421–450.
16. Balakrishnan N, Pal S. EM algorithm-based likelihood estimation for some cure rate models. J Stat Theory Pract 2012; 6: 698–724.
17. Balakrishnan N, Pal S. Lognormal lifetimes and likelihood-based inference for flexible cure rate models based on COM-Poisson family. Comput Stat Data Anal 2013; 67: 41–67.
18. Balakrishnan N, Pal S. An EM algorithm for the estimation of parameters of a flexible cure rate model with generalized gamma lifetime and model discrimination using likelihood- and information-based methods. Comput Stat 2015; 30: 151–189.
19. Balakrishnan N, Pal S. Likelihood inference for flexible cure rate models with gamma lifetimes. Commun Stat - Theory Method 2015; 44: 4007–4048.
20. Balakrishnan N, Pal S. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes. Stat Methods Med Res 2016; 25: 1535–1563.
21. Peng Y. Fitting semiparametric cure models. Comput Stat Data Anal 2003; 41: 481–490.
22. Cai C, Zou Y, Peng Y, et al. smcure: an R-package for estimating semiparametric mixture cure models. Comput Methods Programs Biomed 2012; 108: 1255–1260.
23. Tong EN, Mues C, Thomas LC. Mixture cure models in credit scoring: if and when borrowers default. Eur J Oper Res 2012; 218: 132–139.
24. Xu J, Peng Y. Nonparametric cure rate estimation with covariates. Can J Stat 2014; 42: 1–17.
25. López-Cheda A, Cao R, Jácome MA, et al. Nonparametric incidence estimation and bootstrap bandwidth selection in mixture cure models. Comput Stat Data Anal 2017; 105: 144–165.
26. Chen T, Du P. Mixture cure rate models with accelerated failures and nonparametric form of covariate effects. J Nonparametr Stat 2018; 30: 216–237.
27. Wang L, Du P, Liang H. Two-component mixture cure rate model with spline estimated nonparametric components. Biometrics 2012; 68: 726–735.
28. Pal S, Aselisewine W. A semi-parametric promotion time cure model with support vector machine. Ann Appl Stat 2023; 17(3): 2680–2699.
29. Cortes C, Vapnik V. Support-vector networks. Mach Learn 1995; 20: 273–297.
30. Aselisewine W, Pal S. On the integration of decision trees with mixture cure model. Stat Med 2023; 42(23): 4111–4127.
31. Li P, Peng Y, Jiang P, et al. A support vector machine based semiparametric mixture cure model. Comput Stat 2020; 35: 931–945.
32. Wang P, Pal S. A two-way flexible generalized gamma transformation cure rate model. Stat Med 2022; 41: 2427–2447.
33. Pal S, Balakrishnan N. Expectation maximization algorithm for Box–Cox transformation cure rate model and assessment of model misspecification under Weibull lifetimes. IEEE J Biomed Health Inform 2018; 22: 926–934.
34. Wiangnak P, Pal S. Gamma lifetimes and associated inference for interval-censored cure rate model with COM–Poisson competing cause. Commun Stat - Theory Method 2018; 47: 1491–1509.
35. Treszoks J, Pal S. A destructive shifted Poisson cure model for interval censored data and an efficient estimation algorithm. Commun Stat - Simul Comput 2022. DOI: 10.1080/03610918.2022.2067876.
36. Treszoks J, Pal S. On the estimation of interval censored destructive negative binomial cure model. Stat Med 2023. DOI: 10.1002/sim.9904.
37. Sun J. The statistical analysis of interval-censored failure time data. New York: Springer, 2007.
38. Lindsey JC, Ryan LM. Methods for interval-censored data. Stat Med 1998; 17: 219–238.
39. Kim YJ, Jhun M. Cure rate model with interval censored data. Stat Med 2008; 27: 3–14.
40. Ma S. Cure model with current status data. Stat Sin 2009; 19: 233–249.
41. Ma S. Mixed case interval censored data with a cured subgroup. Stat Sin 2010; 20: 1165–1181.
42. Xiang L, Ma X, Yau KK. Mixture cure model with random effects for clustered interval-censored survival data. Stat Med 2011; 30: 995–1006.
43. Aljawadi BA, Bakar MRA, Ibrahim NA. Nonparametric versus parametric estimation of the cure fraction using interval censored data. Commun Stat - Theory Method 2012; 41: 4251–4275.
44. Wood SN. Generalized additive models: an introduction with R. Boca Raton, FL: Chapman & Hall/CRC, 2017.
45. Hastie T, Tibshirani R. Generalized additive models. Boca Raton, FL: Chapman & Hall, 1990.
46. Turnbull BW. The empirical distribution function with arbitrarily grouped, censored and truncated data. J R Stat Soc Ser B 1976; 38: 290–295.
47. Pal S, Peng Y, Aselisewine W. A new approach to modeling the cure rate in the presence of interval censored data. Comput Stat 2023. DOI: 10.1007/s00180-023-01389-7.
48. McLachlan GJ, Krishnan T. The EM algorithm and extensions. Hoboken, NJ: John Wiley & Sons, 2007.
49. Pal S. A simplified stochastic EM algorithm for cure rate model with negative binomial competing risks: an application to breast cancer data. Stat Med 2021; 40: 6387–6409.
50. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST) 2011; 2: 1–27.
51. Platt J. Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B, Burges C and Smola A (eds) Advances in kernel methods – support vector learning. Cambridge, MA: MIT Press, 1999, pp. 185–208.
52. Platt J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv Large Margin Classifiers 1999; 10: 61–74.
53. Amico M, Van Keilegom I, Han B. Assessing cure status prediction from survival data using receiver operating characteristic curves. Biometrika 2021; 108: 727–740.
54. Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediction. New York: Springer, 2001.
55. Pal S, Roy S. On the estimation of destructive cure rate model: a new study with exponentially weighted Poisson competing risks. Stat Neerl 2021; 75: 324–342.
56. Pal S, Roy S. A new non-linear conjugate gradient algorithm for destructive cure rate model and a simulation study: illustration with negative binomial competing risks. Commun Stat - Simul Comput 2022; 51: 6866–6880.
57. Pal S, Roy S. On the parameter estimation of Box-Cox transformation cure model. Stat Med 2023; 42: 2600–2618.
58. Wu Y, Yin G. Cure rate quantile regression for censored data with a survival fraction. J Am Stat Assoc 2013; 108: 1517–1531.
59. Conkin J, Bedahl SR, Van Liew HD. A computerized databank of decompression sickness incidence in altitude chambers. Aviat Space Environ Med 1992; 63: 819–824.
60. Davies K, Pal S, Siddiqua JA. Stochastic EM algorithm for generalized exponential cure rate model and an empirical study. J Appl Stat 2021; 48: 2112–2135.
61. Pal S, Balakrishnan N. Destructive negative binomial cure rate model and EM-based likelihood inference under Weibull lifetime. Stat Probab Lett 2016; 116: 9–20.
62. Pal S, Balakrishnan N. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes. Stat Methods Med Res 2017; 26: 2093–2113.
63. Pal S, Balakrishnan N. Likelihood inference for the destructive exponentially weighted Poisson cure rate model with Weibull lifetime and an application to melanoma data. Comput Stat 2017; 32: 429–449.
64. Pal S, Balakrishnan N. Likelihood inference based on EM algorithm for the destructive length-biased Poisson cure rate model with Weibull lifetime. Commun Stat - Simul Comput 2018; 47: 644–660.
65. Pal S, Majakwara J, Balakrishnan N. An EM algorithm for the destructive COM-Poisson regression cure rate model. Metrika 2018; 81: 143–171.
66. Majakwara J, Pal S. On some inferential issues for the destructive COM-Poisson-generalized gamma regression cure rate model. Commun Stat - Simul Comput 2019; 48: 3118–3142.
