Abstract
The mixture cure rate model is the most commonly used cure rate model in the literature. In the context of the mixture cure rate model, the standard approach to model the effect of covariates on the cured or uncured probability is to use a logistic function. This readily implies that the boundary classifying the cured and uncured subjects is linear. In this article, we propose a new mixture cure rate model based on interval censored data that uses the support vector machine to model the effect of covariates on the uncured or the cured probability (i.e. on the incidence part of the model). Our proposed model inherits the features of the support vector machine and provides flexibility to capture classification boundaries that are nonlinear and more complex. The latency part is modeled by a proportional hazards structure with an unspecified baseline hazard function. We develop an estimation procedure based on the expectation maximization algorithm to estimate the cured/uncured probability and the latency model parameters. Our simulation study results show that the proposed model performs better in capturing complex classification boundaries when compared to both logistic regression-based and spline regression-based mixture cure rate models. We also show that our model's ability to capture complex classification boundaries improves the estimation results corresponding to the latency part of the model. For illustrative purposes, we present our analysis by applying the proposed methodology to the NASA's Hypobaric Decompression Sickness Database.
Keywords: Support vector machine, multiple imputation, sequential minimal optimization, mixture cure rate model, expectation–maximization algorithm
1. Introduction
Ordinary survival analysis techniques such as the proportional hazards (PH) model, the proportional odds (PO) model, or the accelerated failure time (AFT) model are concerned with modeling censored time-to-event data by assuming that every subject in the study will encounter the primary event of interest (death, relapse, or recurrence of a disease, etc.). However, it is not appropriate to apply these techniques to situations where a portion of the study cohort does not experience the event, for example, clinical studies involving a low fatality rate with death as the event. It can be argued that if these subjects are followed up sufficiently beyond the study period, they may face the event due to some other risk factors. Therefore, these subjects can be considered as cured with respect to the event of interest. The survival model that incorporates the effects of such cured subjects is called the cure rate model. Remarkable progress in medical sciences also necessitates further exploration of the cure rate model, where estimating the cure fraction precisely can be of great importance.
Introduced by Boag 1 and extensively studied by Berkson and Gage, 2 the mixture cure rate model is perhaps the most popular cure rate model. 3 If $Y^*$ denotes the lifetime of a susceptible (not cured) subject, then the actual lifetime $Y$ of any subject can be modeled by

$$Y = J\,Y^* + (1 - J)\,\infty, \qquad (1)$$

where $J$ is a cure indicator with $J = 0$ if an individual is cured and $J = 1$ otherwise. Furthermore, considering $S(t|\boldsymbol{x},\boldsymbol{z})$ and $S_u(t|\boldsymbol{x})$ as the respective survival functions corresponding to $Y$ and $Y^*$, we can express

$$S(t|\boldsymbol{x},\boldsymbol{z}) = \pi(\boldsymbol{z})\, S_u(t|\boldsymbol{x}) + 1 - \pi(\boldsymbol{z}), \qquad (2)$$

where $\pi(\boldsymbol{z}) = P(J = 1|\boldsymbol{z})$ is the uncured probability. The latency part $S_u(t|\boldsymbol{x})$ and the incidence part $\pi(\boldsymbol{z})$ are generally modeled to incorporate the effects of covariates $\boldsymbol{x} = (x_1, \ldots, x_p)^\top$ and $\boldsymbol{z} = (z_1, \ldots, z_q)^\top$ for integers $p$ and $q$. Note here that $\boldsymbol{x}$ and $\boldsymbol{z}$ may share the same covariates.
The properties of the mixture cure rate model under various assumptions and extensions have been explored in detail by several authors. Modeling the lifetimes of the susceptible individuals has been studied extensively. For example, a complete parametric mixture cure rate model has been studied by assuming homogeneous Weibull lifetimes and a logit link for the cure rate.4,5 Semiparametric cure models with a PH structure for the latency have also been studied extensively.6–8 Generalizations to the semiparametric PO,9,10 AFT,11–13 transformation class, 14 and additive hazards 15 mixture cure rate models were also investigated with various estimation techniques and model considerations.
On the other hand, the incidence part is traditionally and extensively modeled by the sigmoid or logistic function

$$\pi(\boldsymbol{z}) = \frac{\exp(\beta_0 + \boldsymbol{\beta}^\top \boldsymbol{z})}{1 + \exp(\beta_0 + \boldsymbol{\beta}^\top \boldsymbol{z})}, \qquad (3)$$

where $\beta_0$ is an intercept and $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_q)^\top$ is the vector of incidence regression coefficients.4,6,7,16–20 As observed in the case of logistic regression, the logistic model works well when subjects are linearly separable into the cured or susceptible groups with respect to the covariates. However, a problem arises when subjects cannot be separated using a linear boundary. Other options to model the incidence include assuming a probit link function ($\pi(\boldsymbol{z}) = \Phi(\beta_0 + \boldsymbol{\beta}^\top \boldsymbol{z})$) or a complementary log-log link function ($\pi(\boldsymbol{z}) = 1 - \exp\{-\exp(\beta_0 + \boldsymbol{\beta}^\top \boldsymbol{z})\}$), where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution.21–23 However, similar to the logit link (3), these link functions do not offer nonlinear separability and are not sufficient to capture more complex effects of $\boldsymbol{z}$ on the incidence. Non-parametric strategies, for example, the generalized Kaplan–Meier estimator at the maximum uncensored failure time 24 for the incidence part and the modified Beran-type estimator 25 for the latency part of a mixture cure model, have also been considered in the literature. Again, applying these strategies to multiple covariates can be challenging. Other non-parametric spline-based mixture cure models are capable of capturing complex patterns in the data, but they do not perform well when there are a large number of covariates with complicated interaction terms, which is a serious drawback.26,27 Therefore, there is a need to identify a class of classifiers that can model the incidence part more effectively by allowing nonlinear separating boundaries between the cured and non-cured subjects.
To this end, the support vector machine (SVM) is a reasonable choice. 28 Introduced by Cortes and Vapnik, 29 the SVM is a machine learning algorithm that finds a hyperplane in a multidimensional feature space that maximizes the separating space (margin) between two classes. The main advantage of the SVM is that it can separate data that are not linearly separable by transforming them to a higher dimensional space using the kernel trick. Consequently, this classifier is more robust and flexible than the logit or probit link functions. Given the availability of different machine learning algorithms, 30 we propose to use SVM-based techniques in this article mainly because the SVM is based on the kernel trick, and hence it is possible to design or fuse kernels to improve performance. Furthermore, the SVM uses a subset of training points in the decision function, which makes it memory efficient. Additionally, the execution time for the SVM is expected to be lower when compared to other classifiers such as artificial neural networks (NNs). Recently, Li et al. 31 studied the effect of the covariates on the incidence by implementing the SVM. Their mixture cure rate model was seen to outperform the existing logistic regression-based mixture cure rate model, especially in the estimation of the incidence, and to perform well for nonlinearly separable classes. However, the authors only considered data under a non-informative right censoring mechanism.
Unlike right-censored data,32,33 interval-censored data occur in studies where subjects are inspected at regular intervals rather than continuously.34–36 If a subject experiences the event of interest, the exact survival time is not observed; it is only known that the event occurred between two consecutive inspections. Interval-censored data marked by a cure prospect are often observed in follow-up clinical studies (e.g. cancer biochemical recurrence or AIDS drug resistance) dealing with events having low fatality and patients monitored at regular intervals.37,38 As in the case of right-censored data, some subjects may never encounter the event of interest and are considered cured. Mixture cure models for interval censored data have been studied, and several estimation methods have been proposed for both semiparametric and non-parametric set-ups.39–43 Motivated by the work of Li et al., 31 we propose to employ SVM-based modeling to study the effects of covariates on the incidence part of the mixture cure rate model for survival data subject to interval censoring. In addition, we compare our model not only with the logistic regression-based mixture cure model but also with the spline regression-based mixture cure model, which is also capable of capturing complex effects of $\boldsymbol{z}$ on $\pi(\boldsymbol{z})$. Note that we use the spline method only in the incidence part of the mixture cure model. To apply the spline model to a classification problem where the response variable is qualitative in nature (as in the case of this article), we approximate the log-odds with a smoothing function. In this article, we consider the thin plate spline as the smoothing function, which can accommodate multiple predictor variables and also allows the degrees of freedom and the basis functions to be selected automatically from the mathematical statement of the smoothing problem.44,45 In particular, to capture the nonlinear effect of $\boldsymbol{z}$ on $\pi(\boldsymbol{z})$, we have

$$\log\left\{\frac{\pi(\boldsymbol{z})}{1 - \pi(\boldsymbol{z})}\right\} = s(\boldsymbol{z}), \qquad (4)$$

where $s(\cdot)$ is a smooth function, which is estimated using a thin plate spline by

$$\hat{s}(\boldsymbol{z}) = \sum_{i=1}^{n} c_i\, \eta_{m,q}(\|\boldsymbol{z} - \boldsymbol{z}_i\|) + \sum_{j=1}^{M} \alpha_j\, \phi_j(\boldsymbol{z}). \qquad (5)$$

In (5), $n$ is the total number of observations, $m$ is such that $2m > q$, $\boldsymbol{c} = (c_1, \ldots, c_n)^\top$ and $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_M)^\top$ are vectors of coefficients to be estimated, and $\boldsymbol{c}$ is subject to the linear constraints $\boldsymbol{T}^\top \boldsymbol{c} = \boldsymbol{0}$ with $T_{ij} = \phi_j(\boldsymbol{z}_i)$. The functions $\phi_1(\cdot), \ldots, \phi_M(\cdot)$ are linearly independent polynomials spanning the space of polynomials in $\mathbb{R}^q$ of degree less than $m$. 44 Furthermore,

$$\eta_{m,q}(r) = \begin{cases} \dfrac{(-1)^{m+1+q/2}}{2^{2m-1}\pi^{q/2}(m-1)!\,(m-q/2)!}\, r^{2m-q}\log r, & q \text{ even}, \\[2ex] \dfrac{\Gamma(q/2 - m)}{2^{2m}\pi^{q/2}(m-1)!}\, r^{2m-q}, & q \text{ odd}. \end{cases}$$
The R software allows fitting of thin plate splines using the "gam" function in the package "mgcv."
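As a concrete illustration, the following is a minimal sketch of fitting (4) and (5) in R, assuming a data frame `dat` with a binary cure status column `status` (1 = susceptible) and covariates `z1` and `z2`; these names are ours, not from the original analysis.

```r
library(mgcv)

# Thin plate regression spline on the logit scale: family = binomial uses the
# logit link, so s(z1, z2) plays the role of s(z) in (4); bs = "tp" requests
# the thin plate basis.
fit <- gam(status ~ s(z1, z2, bs = "tp"), family = binomial, data = dat)

# Estimated uncured probabilities pi(z) on the probability scale
pi_hat <- predict(fit, newdata = dat, type = "response")
```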
The rest of the article is organized as follows. In Section 2, we discuss the mixture cure rate model framework for interval-censored data and develop an estimation procedure based on the expectation–maximization (EM) algorithm that employs the SVM to model the incidence part. In Section 3, a detailed simulation study is carried out to demonstrate the performance of our proposed model in terms of flexibility, accuracy, and robustness. Comparisons of our model with the logistic regression-based and spline regression-based mixture cure rate models in the presence of interval censored data are also made in this section. The model performance is further examined and illustrated in Section 4 through the NASA's Hypobaric Decompression Sickness Database (HDSD). Finally, we end our discussion with some concluding remarks and possible future research directions in Section 5.
2. SVM-based mixture cure rate model with interval censoring
2.1. Censoring scheme and modeling lifetimes
The data we observe in situations with interval censoring are of the form $(l_i, r_i, \delta_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ for $i = 1, \ldots, n$, where $n$ denotes the sample size. For the $i$-th subject, $l_i$ denotes the last inspection time before the event and $r_i$ denotes the first subsequent inspection time just after the event. Note that $l_i \le r_i$. The censoring indicator is denoted by $\delta_i$, which takes the value 0 if $r_i = \infty$, meaning that the event is not observed for a subject before the last inspection time, and takes the value 1 if $r_i < \infty$, meaning that the event took place but its exact time is not known and is only known to belong to the interval $(l_i, r_i]$. To demonstrate the effect of covariates on the latency part, we consider a proportional hazards structure to model the lifetime distribution of the susceptible or non-cured subjects. That is, for the susceptible subjects, we model the hazard function by

$$h(t|\boldsymbol{x}) = h_0(t)\exp(\boldsymbol{\gamma}^\top \boldsymbol{x}), \qquad (6)$$

where $\boldsymbol{\gamma}$ is the $p$-dimensional regression parameter vector measuring the effects of $\boldsymbol{x}$ and $h_0(t)$ is an unspecified baseline hazard function. Using (6), we can express (2) as

$$S(t|\boldsymbol{x}, \boldsymbol{z}) = \pi(\boldsymbol{z})\{S_0(t)\}^{\exp(\boldsymbol{\gamma}^\top \boldsymbol{x})} + 1 - \pi(\boldsymbol{z}), \qquad (7)$$

where $S_0(t)$ is the baseline survival function (unspecified) corresponding to $h_0(t)$. In this article, we propose to estimate $S_0(t)$ using the non-parametric Turnbull estimator, 46 thereby avoiding any parametric distributional assumption. 47 Such an estimator does not have a closed form and is developed as an iterative procedure. The steps involved can be described as follows (an R sketch is given after the steps):
a. Using all the $l_i$'s and $r_i$'s, $i = 1, \ldots, n$, create a grid of time points $0 = \tau_0 < \tau_1 < \cdots < \tau_K$.

b. For each $i$ and $k$, define a weight $w_{ik}$ that takes the value 1 if the interval $(\tau_{k-1}, \tau_k]$ is contained in the interval $(l_i, r_i]$, and takes the value 0 otherwise.

c. Make an initial guess of the survival probability at $\tau_k$ (say, $\hat{S}(\tau_k)$) for each $k$.

d. Calculate $p_k = \hat{S}(\tau_{k-1}) - \hat{S}(\tau_k)$, $k = 1, \ldots, K$, which denotes the probability of an event occurring at time $\tau_k$.

e. Estimate the number of events that occurred at time $\tau_k$ by $d_k = \sum_{i=1}^{n} \frac{w_{ik}\, p_k}{\sum_{k'=1}^{K} w_{ik'}\, p_{k'}}$, where the denominator in $d_k$ is the total probability assigned to the possible event times in the interval $(l_i, r_i]$.

f. Calculate the estimated number of subjects at risk at time $\tau_k$ by $Y_k = \sum_{k'=k}^{K} d_{k'}$.

g. Calculate the updated product-limit estimator of the survival function at $\tau_k$ using the data $\{(d_k, Y_k): k = 1, \ldots, K\}$, say $\hat{S}_{\text{new}}(\tau_k)$, $k = 1, \ldots, K$.

h. If $|\hat{S}_{\text{new}}(\tau_k) - \hat{S}(\tau_k)| < \epsilon$ for all $k$, where $\epsilon$ is a tolerance, stop the iterative algorithm. Otherwise, repeat step d through step g using $\hat{S}(\tau_k) = \hat{S}_{\text{new}}(\tau_k)$.
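The following is a self-contained R sketch of steps (a) through (h), written for clarity rather than speed; `L` and `R` are the vectors of interval endpoints, with `R = Inf` for right-censored subjects. Packaged implementations (e.g. `icfit()` in the R package "interval") may be preferred in practice.

```r
turnbull <- function(L, R, tol = 1e-6, max_iter = 1000) {
  tau <- sort(unique(c(0, L, R[is.finite(R)])))           # (a) grid of time points
  K <- length(tau)
  # (b) w[i, k] = 1 if (tau[k-1], tau[k]] is contained in (L[i], R[i]]
  w <- outer(seq_along(L), 2:K,
             function(i, k) as.numeric(tau[k - 1] >= L[i] & tau[k] <= R[i]))
  S <- seq(1, 0, length.out = K)                          # (c) initial guess of S(tau_k)
  for (iter in seq_len(max_iter)) {
    p <- -diff(S)                                         # (d) event probabilities
    num <- sweep(w, 2, p, "*")
    d <- colSums(num / pmax(rowSums(num), 1e-300))        # (e) expected event counts
    Y <- rev(cumsum(rev(d)))                              # (f) numbers at risk
    S_new <- c(1, cumprod(1 - d / pmax(Y, 1e-300)))       # (g) product-limit update
    if (max(abs(S_new - S)) < tol) break                  # (h) convergence check
    S <- S_new
  }
  data.frame(time = tau, surv = S_new)
}
```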
2.2. Form of the likelihood function
As missing observations are inherent to the problem set-up and model framework, we propose to employ the EM algorithm to estimate the unknown parameters.7,8,48,49 For implementing the EM algorithm, we need the form of the complete data likelihood function. Let us define the observed data for subject $i$ as $O_i = (l_i, r_i, \delta_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$ and write $\pi_i = \pi(\boldsymbol{z}_i)$. Missing observations that appear in this context are in terms of the cure indicator variable $J_i$, where $J$ is as defined in (1). Note that the $J_i$'s are all known to take the value 1 if $\delta_i = 1$. However, if $\delta_i = 0$, $J_i$ can take either 0 or 1, and is thus unknown or missing. Using these $J_i$'s as the missing data, we can define the complete data as $O_i^c = (l_i, r_i, \delta_i, J_i, \boldsymbol{x}_i, \boldsymbol{z}_i)$, for $i = 1, \ldots, n$, which contain both observed and missing data. Under the interval censoring mechanism, we can now express the complete data likelihood function and log-likelihood function as:

$$L_c = \prod_{i=1}^{n} \left[\pi_i\{S_u(l_i|\boldsymbol{x}_i) - S_u(r_i|\boldsymbol{x}_i)\}\right]^{\delta_i} \left[\pi_i\, S_u(l_i|\boldsymbol{x}_i)\right]^{(1-\delta_i)J_i} (1 - \pi_i)^{(1-\delta_i)(1-J_i)} \qquad (8)$$

and

$$\ell_c = \log L_c = \sum_{i=1}^{n} \Big[ \delta_i \log \pi_i + \delta_i \log\{S_u(l_i|\boldsymbol{x}_i) - S_u(r_i|\boldsymbol{x}_i)\} + (1 - \delta_i)J_i\{\log \pi_i + \log S_u(l_i|\boldsymbol{x}_i)\} + (1 - \delta_i)(1 - J_i)\log(1 - \pi_i) \Big], \qquad (9)$$

where $S_u(t|\boldsymbol{x}_i) = \{S_0(t)\}^{\exp(\boldsymbol{\gamma}^\top \boldsymbol{x}_i)}$. It can be further noted that

$$\ell_c = \ell_{c1} + \ell_{c2}, \qquad (10)$$

where

$$\ell_{c1} = \sum_{i=1}^{n} \left[ \{\delta_i + (1 - \delta_i)J_i\}\log \pi_i + (1 - \delta_i)(1 - J_i)\log(1 - \pi_i) \right] \qquad (11)$$

is a function that depends on the incidence part only and

$$\ell_{c2} = \sum_{i=1}^{n} \left[ \delta_i \log\{S_u(l_i|\boldsymbol{x}_i) - S_u(r_i|\boldsymbol{x}_i)\} + (1 - \delta_i)J_i \log S_u(l_i|\boldsymbol{x}_i) \right] \qquad (12)$$

is a function that depends on the latency part only.
2.3. Modeling the incidence part with SVM
Let us assume, to assist the development, that the $J_i$'s for $i = 1, \ldots, n$ are observed by some mechanism. The SVM algorithm maximizes the linear or nonlinear margin between the two closest points belonging to the opposite classification groups (cured and susceptible). That is, the SVM solves the following optimization problem for $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_n)^\top$:

$$\max_{\boldsymbol{\alpha}} \left\{ \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j \tilde{J}_i \tilde{J}_j K(\boldsymbol{z}_i, \boldsymbol{z}_j) \right\}, \qquad (13)$$

subject to the constraints $0 \le \alpha_i \le C$ and $\sum_{i=1}^{n} \alpha_i \tilde{J}_i = 0$, for $i = 1, \ldots, n$, where $C > 0$ is a parameter that trades off between the margin width and the misclassification proportion. Smaller values of $C$ cause the optimizer to look for a larger margin width while allowing higher misclassification. $K(\cdot, \cdot)$ is a symmetric positive semi-definite kernel function, which we consider to be the radial basis function (RBF) given by $K(\boldsymbol{u}, \boldsymbol{v}) = \exp\{-\|\boldsymbol{u} - \boldsymbol{v}\|^2 / (2\sigma^2)\}$. The RBF is a popular choice of kernel function owing to its robustness, implementing the idea that a linear classifier in a higher dimension can act as a nonlinear classifier in the lower original dimension. The parameter $\sigma$ determines the kernel width. Both hyper-parameters $C$ and $\sigma$ are to be tuned to obtain the highest classification accuracy using cross-validation methods. 50 A grid search can be implemented to determine $C$ and $\sigma$. Low values of $\sigma$ result in overfitting and a jagged separator, while high values of $\sigma$ result in more linear and smoother decision boundaries. Also, it is recommended to standardize the covariate vector $\boldsymbol{z}$.

The mapping $J_i \mapsto \tilde{J}_i = 2J_i - 1$ converts the respective 0s and 1s to −1s and 1s, which aids in the formulation of the optimization problem under the SVM framework. Once the estimates $\hat{\alpha}_i$ are obtained, we can derive the threshold as $\hat{b} = \tilde{J}_k - \sum_{i=1}^{n} \hat{\alpha}_i \tilde{J}_i K(\boldsymbol{z}_i, \boldsymbol{z}_k)$, for some $k$ with $0 < \hat{\alpha}_k < C$. For any new covariate vector $\boldsymbol{z}$, the optimal decision or classification rule is given by

$$\hat{D}(\boldsymbol{z}) = \operatorname{sign}\{f(\boldsymbol{z})\}, \quad \text{where } f(\boldsymbol{z}) = \sum_{i=1}^{n} \hat{\alpha}_i \tilde{J}_i K(\boldsymbol{z}_i, \boldsymbol{z}) + \hat{b}. \qquad (14)$$

As suggested by Li et al., 31 the sequential minimal optimization (SMO) method 51 can be applied to solve (13). As opposed to solving one large quadratic optimization problem to train an SVM model, SMO solves a series of the smallest possible quadratic sub-problems. Thus, SMO is a relatively inexpensive algorithm in terms of computing time. Any subject with covariate vector $\boldsymbol{z}$ is assigned to the susceptible group if $f(\boldsymbol{z}) > 0$ and to the cured group if $f(\boldsymbol{z}) \le 0$.
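As a sketch, an RBF-kernel SVM can be fitted in R with the "e1071" package (whose underlying libsvm trains via an SMO-type decomposition); here `Z` is an assumed standardized covariate matrix and `J` the (imputed) cure statuses. Note that e1071 parametrizes the RBF kernel as exp(−γ‖u−v‖²), so its `gamma` corresponds to 1/(2σ²) in our notation.

```r
library(e1071)

fit <- svm(x = Z, y = factor(J), kernel = "radial",
           cost = 1, gamma = 0.5,        # placeholders; obtain via tuning (Section 2.4)
           scale = FALSE)                # Z assumed already standardized

pred <- predict(fit, Z, decision.values = TRUE)
f <- attr(pred, "decision.values")      # decision values f(z) used in rule (14); the
                                        # sign convention depends on factor level order
```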
In the given context, note that it is not enough to just classify subjects as being cured or susceptible. It is also of interest to obtain the estimates of the uncured probabilities $\pi(\boldsymbol{z})$, or equivalently the cured probabilities $1 - \pi(\boldsymbol{z})$. For this purpose, we use the Platt scaling method to obtain an estimate of $\pi(\boldsymbol{z})$ from the decision function $f(\boldsymbol{z})$. 52 The estimate of $\pi(\boldsymbol{z})$ by the Platt scaling method is given by

$$\hat{\pi}(\boldsymbol{z}) = \frac{1}{1 + \exp\{A f(\boldsymbol{z}) + B\}}, \qquad (15)$$

where $A$ and $B$ are obtained by maximizing the following function:

$$\sum_{i=1}^{n} \left\{ t_i \log p_i + (1 - t_i)\log(1 - p_i) \right\}, \quad \text{with } p_i = \frac{1}{1 + \exp\{A f(\boldsymbol{z}_i) + B\}}. \qquad (16)$$

Here,

$$t_i = \begin{cases} \dfrac{N_1 + 1}{N_1 + 2}, & \text{if } \tilde{J}_i = 1, \\[1.5ex] \dfrac{1}{N_0 + 2}, & \text{if } \tilde{J}_i = -1, \end{cases} \qquad (17)$$

and $N_1$ and $N_0$ represent the numbers of subjects in the susceptible and cured groups, respectively.
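The following is a minimal sketch of the Platt scaling step per (15) to (17), with `f` the SVM decision values on the training data and `J` the (imputed) cure statuses; names are ours. (The `probability = TRUE` option of e1071's `svm()` performs an internal Platt-type calibration and can be used instead.)

```r
# Returns (A, B) maximizing (16), i.e. minimizing its negative
platt_scale <- function(f, J) {
  N1 <- sum(J == 1); N0 <- sum(J == 0)
  t  <- ifelse(J == 1, (N1 + 1)/(N1 + 2), 1/(N0 + 2))   # smoothed targets (17)
  negloglik <- function(par) {
    p <- 1/(1 + exp(par[1]*f + par[2]))
    p <- pmin(pmax(p, 1e-12), 1 - 1e-12)                # guard against log(0)
    -sum(t*log(p) + (1 - t)*log(1 - p))
  }
  optim(c(0, log((N0 + 1)/(N1 + 1))), negloglik)$par    # Platt's usual starting values
}

# Usage: AB <- platt_scale(f, J); pi_hat <- 1/(1 + exp(AB[1]*f_new + AB[2]))
```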
We started our discussion on the SVM-based modeling of the incidence part above with the assumption that the $J_i$'s are observed and available for training purposes. However, in practice, the cure status is not known for subjects with $\delta_i = 0$. A multiple imputation-based approach can be applied here to obtain $\hat{\pi}(\boldsymbol{z})$ with imputed values of $J_i$ for the censored subjects. Note that the proposed multiple imputation technique does not rely on naive assumptions such as the existence of a known threshold time beyond which all censored observations can be considered cured. 53 The steps of multiple imputation are as follows (a code sketch is given after the steps):

1. For a pre-defined integer $M$ and $m = 1, \ldots, M$, generate $J_i^{(m)}$ for each subject with $\delta_i = 0$, where $J_i^{(m)}$ is a Bernoulli random variable with success probability $w_i$. The discussion on deriving $w_i$ is provided in Section 2.5.

2. For the $m$-th imputed data set, obtain $\hat{\pi}^{(m)}(\boldsymbol{z})$ as the estimate of $\pi(\boldsymbol{z})$ by employing the SVM followed by the Platt scaling method given in (15), for $m = 1, \ldots, M$.

3. Calculate $\hat{\pi}(\boldsymbol{z}) = \frac{1}{M}\sum_{m=1}^{M} \hat{\pi}^{(m)}(\boldsymbol{z})$ as the final estimate of $\pi(\boldsymbol{z})$.
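A minimal sketch of the three imputation steps, assuming `w` holds the conditional uncured probabilities from Section 2.5, `delta` the censoring indicators, and `fit_svm_platt()` a hypothetical wrapper around the SVM and Platt scaling steps above:

```r
M <- 5                                      # number of imputations
pi_draws <- matrix(NA, nrow(Z), M)
for (m in 1:M) {
  # Step 1: impute J for censored subjects from Bernoulli(w); J = 1 when delta = 1
  J_imp <- ifelse(delta == 1, 1, rbinom(length(delta), 1, w))
  # Step 2: SVM + Platt scaling on the m-th imputed data set
  pi_draws[, m] <- fit_svm_platt(Z, J_imp)
}
pi_hat <- rowMeans(pi_draws)                # Step 3: average over imputations
```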
2.4. Tuning the SVM model
To address the issue of over/under fitting, we split the data into two sets, namely, the training set and the testing set. The training set is used to obtain the optimal hyper-parameters of the SVM model, and those optimal hyper-parameters are then used to train the final SVM model. The testing set, on the other hand, is used to test or validate the final SVM model. We examine the two most critical hyper-parameters of the SVM, namely, $C$ and $\sigma$, when training the SVM model. The parameter $C > 0$ is a regularization parameter that penalizes the model for any misclassification. The value of $C$ is inversely proportional to the strength of the regularization: when the value of $C$ is large, the penalty for misclassification is substantial and the strength of the regularization is small, and vice versa. The parameter $\sigma$ of the RBF kernel, on the other hand, controls how far the influence of a single training point reaches, which in turn influences the performance of the model. These hyper-parameters can be obtained using cross-validation techniques.54,50 In this article, we use the grid search cross-validation technique to obtain the optimal hyper-parameters of the SVM model. For the grid search cross-validation technique, we fit several models using different pairs of hyper-parameter values. These fitted models are then evaluated to obtain the best trained model and hyper-parameter values. The optimal hyper-parameter values from the selected model are then used to fit the final model. Finally, we validate the performance of the final fitted SVM model by performing predictions on the testing set. Model performance evaluation criteria such as the graphical receiver operating characteristic (ROC) curve and its area under the curve (AUC) are used to evaluate the performance of the final model.
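A sketch of this grid search cross-validation using `tune.svm()` from "e1071" and ROC/AUC validation with the "pROC" package; the grids, data objects, and 10-fold choice here are illustrative.

```r
library(e1071)
library(pROC)

tuned <- tune.svm(x = Z_train, y = factor(J_train), kernel = "radial",
                  cost = 2^(-2:6), gamma = 2^(-6:2),   # candidate (C, gamma) grid
                  probability = TRUE,
                  tunecontrol = tune.control(cross = 10))
best <- tuned$best.model                               # refit at the best pair

# Validate on the held-out testing set via the ROC curve and its AUC
p_test <- attr(predict(best, Z_test, probability = TRUE), "probabilities")[, "1"]
roc_test <- roc(J_test, p_test)
auc(roc_test)
```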
2.5. Development of the EM algorithm
The E-step in the EM algorithm involves finding the conditional expectation of the complete data log-likelihood function in (9) given the current parameter estimates (say, at the $r$-th iteration step) and the observed data. This is equivalent to finding the conditional expectation of $J_i$ given the observed data and the current estimates, as

$$w_i^{(r)} = E\big[J_i \mid O_i\big] = \delta_i + (1 - \delta_i)\, \frac{\hat{\pi}^{(r)}(\boldsymbol{z}_i)\, \hat{S}_u^{(r)}(l_i|\boldsymbol{x}_i)}{1 - \hat{\pi}^{(r)}(\boldsymbol{z}_i) + \hat{\pi}^{(r)}(\boldsymbol{z}_i)\, \hat{S}_u^{(r)}(l_i|\boldsymbol{x}_i)}, \qquad (18)$$

where $\hat{S}_u^{(r)}(l_i|\boldsymbol{x}_i) = \{\hat{S}_0(l_i)\}^{\exp(\boldsymbol{\gamma}^{(r)\top}\boldsymbol{x}_i)}$, with $\hat{S}_0(\cdot)$ denoting a non-parametric estimator of the baseline survival function evaluated at $l_i$, $i = 1, \ldots, n$. Note that (18) implies that $w_i^{(r)} = 1$ for all $i$ with $\delta_i = 1$. We obtain the conditional expectation of the complete data log-likelihood by simply replacing the $J_i$'s with the $w_i^{(r)}$'s in (9). We denote the aforementioned conditional expectation by

$$Q(\boldsymbol{\pi}, \boldsymbol{\gamma}; \boldsymbol{w}^{(r)}) = Q_1(\boldsymbol{\pi}; \boldsymbol{w}^{(r)}) + Q_2(\boldsymbol{\gamma}; \boldsymbol{w}^{(r)}), \qquad (19)$$

where

$$Q_1(\boldsymbol{\pi}; \boldsymbol{w}^{(r)}) = \sum_{i=1}^{n} \left\{ w_i^{(r)} \log \pi_i + (1 - w_i^{(r)}) \log(1 - \pi_i) \right\} \qquad (20)$$

and

$$Q_2(\boldsymbol{\gamma}; \boldsymbol{w}^{(r)}) = \sum_{i=1}^{n} \left[ \delta_i \log\{S_u(l_i|\boldsymbol{x}_i) - S_u(r_i|\boldsymbol{x}_i)\} + (1 - \delta_i)\, w_i^{(r)} \log S_u(l_i|\boldsymbol{x}_i) \right]. \qquad (21)$$
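A short sketch of the E-step quantity (18), assuming `S0_l` holds the Turnbull estimate of the baseline survival function evaluated at each $l_i$; all names are ours.

```r
e_step_w <- function(delta, pi_hat, S0_l, gamma_hat, X) {
  Su_l <- S0_l^drop(exp(X %*% gamma_hat))     # S_u(l_i | x_i) under the PH model
  w <- pi_hat * Su_l / (1 - pi_hat + pi_hat * Su_l)
  ifelse(delta == 1, 1, w)                    # w_i = 1 for observed events
}
```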
The M-step updates the parameters in $Q_1$ and $Q_2$. Starting from $r = 0$, the procedure for the $(r+1)$-th iteration step of the EM algorithm is given below.

Step 1: Carry out the multiple imputation technique, as described in Section 2.3, by considering $w_i^{(r)}$ as the Bernoulli success probability for each subject with $\delta_i = 0$, $i = 1, \ldots, n$. Obtain $\hat{\pi}^{(r+1)}(\boldsymbol{z}_i)$ by applying the Platt scaling method with the classification rule defined in (14). Recall that the classification rule is built based on the imputed data, where each missing $J_i$ is a Bernoulli random variable with success probability $w_i^{(r)}$.

Step 2: Obtain $\boldsymbol{\gamma}^{(r+1)}$ by maximizing the function $Q_2(\boldsymbol{\gamma}; \boldsymbol{w}^{(r)})$, as defined in (21), with respect to $\boldsymbol{\gamma}$. That is, find

$$\boldsymbol{\gamma}^{(r+1)} = \arg\max_{\boldsymbol{\gamma}} Q_2(\boldsymbol{\gamma}; \boldsymbol{w}^{(r)}). \qquad (22)$$

The maximization in (22) can be carried out by using the "optim()" function in the R software and by specifying the method as "Nelder-Mead." In this regard, one may also look at new optimization methods based on the nonlinear conjugate gradient algorithm with an efficient line search technique.55–57

Step 3: Check for convergence as follows: if

$$\|\boldsymbol{\Theta}^{(r+1)} - \boldsymbol{\Theta}^{(r)}\| < \epsilon,$$

where $\boldsymbol{\Theta}^{(r)} = (\hat{\pi}^{(r)}(\boldsymbol{z}_1), \ldots, \hat{\pi}^{(r)}(\boldsymbol{z}_n), \boldsymbol{\gamma}^{(r)\top})^\top$, $\epsilon$ is some pre-determined and sufficiently small tolerance, and $\|\cdot\|$ is the $L_2$-norm, then stop the algorithm. In this case, $\hat{\pi}(\boldsymbol{z}_i)$, for $i = 1, \ldots, n$, and $\hat{\boldsymbol{\gamma}}$ are the final pointwise estimates. On the other hand, if the above criterion is not met, update the conditional expectations in (18) and repeat Steps 1 to 3 until convergence is achieved.
Note that the maximization of $Q_2$ with respect to $\boldsymbol{\gamma}$ can be done only after estimating the baseline survival function $S_0(t)$, which appears as a nuisance parameter in (21). As mentioned in Section 2.1, we estimate $S_0(t)$ using the non-parametric Turnbull estimator.
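A minimal sketch of the maximization in (22), where `q2()` is a hypothetical function evaluating $Q_2$ in (21) at a candidate $\boldsymbol{\gamma}$, given the current E-step weights and Turnbull estimate:

```r
fit_m <- optim(par = gamma_current, fn = q2, method = "Nelder-Mead",
               control = list(fnscale = -1))   # fnscale = -1 makes optim maximize
gamma_new <- fit_m$par
```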
2.6. Calculating the standard errors
The standard errors are estimated by non-parametric bootstrapping. For $b = 1, \ldots, B$, the $b$-th bootstrapped data set is obtained by resampling with replacement from the original data. The sample size of the $b$-th bootstrapped data set is the same as that of the original data. Then, we carry out the steps of the EM algorithm as detailed in Section 2.5 to obtain the estimates of the model parameters for each bootstrapped data set. This gives us $B$ estimates for each model parameter. For each parameter, the standard deviation of these $B$ estimates provides an estimate of the standard error of the parameter.
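A sketch of the bootstrap procedure, with `fit_em()` a hypothetical wrapper that runs the full EM algorithm of Section 2.5 on a data set and returns the latency estimates:

```r
B <- 100
boot_est <- replicate(B, {
  idx <- sample(nrow(dat), replace = TRUE)   # resample subjects with replacement
  fit_em(dat[idx, ])$gamma
})
se_hat <- apply(boot_est, 1, sd)             # SD over the B bootstrap estimates
```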
2.7. Initial values of model parameters
To start the iterative EM algorithm, we need to come up with initial values of $\pi(\boldsymbol{z}_i)$, for $i = 1, \ldots, n$, and $\boldsymbol{\gamma}$. To provide an initial guess of $\pi(\boldsymbol{z}_i)$, we can consider the censoring indicator as the cure indicator. That is, we consider $J_i = 1$ if $\delta_i = 1$ and $J_i = 0$ if $\delta_i = 0$, for $i = 1, \ldots, n$. Then, we employ the SVM to come up with the classification rule (as in (14)) and, finally, apply the Platt scaling method (as in (15)) to obtain the initial $\hat{\pi}(\boldsymbol{z}_i)$. On the other hand, to provide an initial guess for the latency parameter $\boldsymbol{\gamma}$, we can simply initiate each component of $\boldsymbol{\gamma}$ at 0.5.
3. Simulation study
In this section, we assess the performance of the proposed SVM-based EM algorithm in estimating the model parameters of the mixture cure rate model for interval censored data. We also compare the performance of the SVM-based EM algorithm with the logistic regression-based and spline regression-based EM algorithms. To fit the thin plate spline for the incidence part, we use the "gam" function in the package "mgcv." We consider three scenarios for generating the true uncured probabilities $\pi(\boldsymbol{z})$, described below.

In Scenarios 1 and 2, the covariates $z_1$ and $z_2$ are generated from the standard normal distribution. In Scenario 3, four binary covariates are generated from Bernoulli distributions with success probabilities 0.5, 0.3, 0.5, and 0.7, respectively, whereas the remaining covariates are generated from the standard normal distribution. In all scenarios, we consider $\boldsymbol{x} = \boldsymbol{z}$, that is, we use the same set of covariates to model the incidence and latency parts. Note that Scenario 1 represents the standard logistic regression model, which captures a linear classification boundary, whereas Scenario 2 captures a nonlinear classification boundary (see Figure 1). On the other hand, Scenario 3 represents a more complex link function with a large number of covariates and complicated interaction terms. Corresponding to Scenarios 1 and 2, Figure 2 shows the plots of simulated uncured probabilities and how they vary with respect to the covariates $z_1$ and $z_2$. We consider different sample sizes, $n = 300$ and $n = 600$. We assume the lifetimes of the susceptible subjects follow the proportional hazards structure with the hazard function given in (6),

where the true value of the baseline hazard parameter is chosen as 1 for Scenarios 1 and 2, and 3.5 for Scenario 3. The true value of $\boldsymbol{\gamma}$ is fixed at one set of values for Scenarios 1 and 2, and at a different set for Scenario 3. The censoring time is generated from a uniform distribution. Under these settings, the true cure probability and censoring proportion, denoted by (cure, censoring), for Scenarios 1, 2, and 3 are roughly (0.50, 0.65), (0.40, 0.60), and (0.60, 0.70), respectively. Thus, the three scenarios cover low, moderate, and high cure and censoring rates. To generate the interval censored lifetime data, we carry out the following steps (a code sketch is given after the steps):
Step 1: Generate a Uniform(0,1) random variable $U$ and a censoring time $C$.

Step 2: If $U > \pi(\boldsymbol{z})$, the subject is cured; set $y = \infty$.

Step 3: If $U \le \pi(\boldsymbol{z})$, generate $y$ from a Weibull distribution with the chosen shape and scale parameters.

Step 4:

a. If $y > C$, set $l = C$, $r = \infty$, and $\delta = 0$;

b. If $y \le C$, set $\delta = 1$, generate interval widths from uniform distributions, create consecutive inspection intervals, and select the interval $(l, r]$ that satisfies $l < y \le r$.
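A sketch of Steps 1 to 4 for a single subject; `pi_true()`, the Weibull parameters, the censoring bound, and the inspection widths are placeholders, since the exact scenario values are not reproduced here.

```r
gen_one <- function(z, x, gamma, shape = 2, scale = 1, cmax = 8) {
  U <- runif(1); cens <- runif(1, 0, cmax)                      # Step 1
  if (U > pi_true(z)) {
    y <- Inf                                                    # Step 2: cured subject
  } else {
    # Step 3: PH lifetime; scaling the Weibull scale by exp(-x'gamma/shape)
    # yields hazard h0(t) * exp(x'gamma)
    y <- rweibull(1, shape, scale * exp(-sum(x * gamma) / shape))
  }
  if (y > cens) return(c(l = cens, r = Inf, delta = 0))         # Step 4a
  width <- runif(1, 0.2, 0.7)                                   # Step 4b (assumed widths)
  grid <- seq(0, y + width, by = width)                         # inspection times
  k <- findInterval(y, grid)
  c(l = grid[k], r = grid[k] + width, delta = 1)                # interval containing y
}
```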
Figure 1.
Simulated cured and uncured observations for Scenarios 1 and 2 considered.
Figure 2.
Simulated uncured probabilities and their behavior with respect to the covariates for Scenarios 1 and 2.
All simulations are done using the R statistical software (Version 4.0.4), and the results are based on multiple Monte Carlo runs. The computational codes for data generation and the SVM-based EM algorithm are available in the Supplemental Material. In all cases, 67% of the data is used as the training set and the remaining 33% as the testing set. To employ our proposed methodology, we consider the number of imputations in the multiple imputation technique to be 5, which is in line with existing work.31,58 In Tables 1 and 2, we report the bias and mean squared error (MSE), respectively, of the estimated uncured probability $\hat{\pi}(\boldsymbol{z})$ and the susceptible survival probability $\hat{S}_u(t|\boldsymbol{x})$. These are calculated as:

$$\text{Bias} = \frac{1}{nR}\sum_{r=1}^{R}\sum_{i=1}^{n}\left(\hat{\pi}_{ir} - \pi_{ir}\right), \qquad \text{MSE} = \frac{1}{nR}\sum_{r=1}^{R}\sum_{i=1}^{n}\left(\hat{\pi}_{ir} - \pi_{ir}\right)^2,$$

with analogous expressions for the susceptible survival probability, where $\pi_{ir}$ and $S_{u,ir}$ are the true uncured probability and susceptible survival probability, respectively, corresponding to the $i$-th subject and the $r$-th Monte Carlo run. Similarly, $\hat{\pi}_{ir}$ and $\hat{S}_{u,ir}$ are the estimated uncured probability and susceptible survival probability, respectively, corresponding to the $i$-th subject and the $r$-th Monte Carlo run. In the above expressions, the survival probabilities are evaluated at $t_i$, where $t_i = l_i$ if $\delta_i = 0$ and $t_i = r_i$ if $\delta_i = 1$; $\hat{S}_{u,ir}$ is defined in a similar way.
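As a sketch, with the true and estimated uncured probabilities stored as $n \times R$ matrices `pi_true_mat` and `pi_est_mat` over the $R$ Monte Carlo runs (names are ours), the two summaries reduce to:

```r
bias_pi <- mean(pi_est_mat - pi_true_mat)      # average of (pi_hat - pi) over i and r
mse_pi  <- mean((pi_est_mat - pi_true_mat)^2)  # average squared error over i and r
```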
Table 1.
Comparison of bias of the uncured probability and susceptible survival probability for different models.
| n | Scenario | SVM (uncured) | Spline (uncured) | Logistic (uncured) | SVM (surv.) | Spline (surv.) | Logistic (surv.) |
|---|---|---|---|---|---|---|---|
| 300 | 1 | −0.1425 | −0.1632 | 0.0584 | 0.1079 | 0.1101 | 0.1060 |
| | 2 | −0.0684 | 0.0900 | 0.2322 | 0.0500 | 0.0505 | 0.0515 |
| | 3 | 0.0544 | 0.1046 | 0.1786 | 0.1058 | 0.0651 | 0.1013 |
| 600 | 1 | −0.1255 | −0.1611 | 0.0474 | 0.1075 | 0.1089 | 0.1058 |
| | 2 | −0.0628 | 0.1009 | 0.2186 | 0.0492 | 0.0495 | 0.0511 |
| | 3 | 0.0364 | 0.0957 | 0.1494 | 0.0828 | 0.0774 | 0.1034 |

"uncured": bias of the estimated uncured probability; "surv.": bias of the estimated susceptible survival probability.
SVM: support vector machine.
Table 2.
Comparison of MSE of the uncured probability and susceptible survival probability for different models.
| n | Scenario | SVM (uncured) | Spline (uncured) | Logistic (uncured) | SVM (surv.) | Spline (surv.) | Logistic (surv.) |
|---|---|---|---|---|---|---|---|
| 300 | 1 | 0.1132 | 0.1753 | 0.0618 | 0.1019 | 0.1085 | 0.1022 |
| | 2 | 0.0827 | 0.1906 | 0.2184 | 0.0338 | 0.0363 | 0.0598 |
| | 3 | 0.1052 | 0.1587 | 0.2111 | 0.0609 | 0.0793 | 0.1060 |
| 600 | 1 | 0.1128 | 0.1715 | 0.0614 | 0.0988 | 0.1001 | 0.1020 |
| | 2 | 0.0809 | 0.1901 | 0.2185 | 0.0328 | 0.0340 | 0.0727 |
| | 3 | 0.0956 | 0.1280 | 0.1696 | 0.0380 | 0.0468 | 0.0649 |

"uncured": MSE of the estimated uncured probability; "surv.": MSE of the estimated susceptible survival probability.
SVM: support vector machine; MSE: mean squared error.
From Table 1, it is clear that the biases of the estimated uncured probability and the susceptible survival probability obtained from the logistic regression-based EM algorithm are smaller than those obtained from the proposed SVM-based EM algorithm as well as the spline-based EM algorithm when logistic regression is the correct model (Scenario 1). However, when the true model for the uncured probability is not the logistic regression, as in Scenarios 2 and 3, the proposed SVM-based EM algorithm produces smaller bias in the estimated uncured probability when compared to both the logistic regression-based and spline-based EM algorithms. In this case, as far as the estimated susceptible survival probabilities are concerned, the SVM-based EM algorithm produces smaller bias only under Scenario 2. From Table 2, we note that when the logistic regression is the true model for the uncured probability (i.e. under Scenario 1), the MSE of the estimated uncured probability obtained from the logistic regression-based EM algorithm is smaller than those obtained from the SVM-based and spline-based EM algorithms. However, under Scenarios 2 and 3, that is, under non-logistic true models for the uncured probability, the proposed SVM-based EM algorithm produces smaller MSE of the estimated uncured probability when compared to both the logistic regression-based and spline-based EM algorithms. When it comes to the estimation of the susceptible survival probability, our proposed SVM-based EM algorithm produces smaller MSEs in all considered scenarios. Overall, we can conclude that the proposed SVM-based EM algorithm performs better than the standard logistic regression-based and spline-based EM algorithms when the true classification boundary is nonlinear and complex. This clearly demonstrates the ability of the proposed SVM-based mixture cure model to handle complex nonlinear classification boundaries.
Although, in practice, the cured status is unobserved for real data, we do know which observations can be considered cured when we simulate data. Using this information on the cured status for simulated data, we can easily compare the proposed SVM-based mixture cure model with the logistic regression-based and spline regression-based mixture cure models using the ROC curves and the AUC values for the different scenarios we have considered. Note that the true label for calculating the AUC is the true cure index of each subject when the data are generated. Figure 3 presents the ROC curves under the different scenarios. The corresponding AUC values are presented in Table 3. These results are based on 500 Monte Carlo runs. It is once again clear that under Scenarios 2 and 3 (i.e. when the true classification boundaries are nonlinear), the performance (or the accuracy) of the SVM-based mixture cure model is much better than that of the logistic regression-based and spline-based mixture cure models. However, under Scenario 1 (i.e. when the true classification boundary is linear), the logistic regression-based model performs slightly better than the SVM-based model. The similarity in the AUC values obtained from the training data and testing data implies that there is no issue with over/under fitting.
Figure 3.
Receiver operating characteristic (ROC) curves for different models and under different scenarios.
Table 3.
Comparison of AUC values for different models and scenarios.
| n | Scenario | SVM (train) | Spline (train) | Logistic (train) | SVM (test) | Spline (test) | Logistic (test) |
|---|---|---|---|---|---|---|---|
| 300 | 1 | 0.8476 | 0.7461 | 0.9248 | 0.8437 | 0.7409 | 0.9225 |
| | 2 | 0.8057 | 0.5756 | 0.5330 | 0.7990 | 0.5562 | 0.5445 |
| | 3 | 0.8831 | 0.6964 | 0.5885 | 0.7312 | 0.5837 | 0.5507 |
| 600 | 1 | 0.8229 | 0.7421 | 0.9227 | 0.8218 | 0.7398 | 0.9215 |
| | 2 | 0.7973 | 0.5659 | 0.5255 | 0.7956 | 0.5554 | 0.5432 |
| | 3 | 0.9231 | 0.6721 | 0.5812 | 0.8706 | 0.6398 | 0.5615 |

"train": training AUC; "test": testing AUC.
SVM: support vector machine; AUC: area under the curve.
To further assess the robustness and generalizability of the proposed SVM model across different data settings, we study a scenario where we generate 10 correlated covariates from a multivariate normal distribution, $N_{10}(\boldsymbol{0}, \boldsymbol{\Sigma})$, where $\boldsymbol{\Sigma}$ denotes the variance–covariance matrix whose $(j,k)$-th element, denoted by $\Sigma_{jk}$, is defined as $\Sigma_{jk} = 0.9^{|j-k|}$, $j, k = 1, \ldots, 10$. The choice of 0.9 as the base for exponentiation determines how quickly the correlation increases with decreasing separation. In this scenario, we consider all 10 covariates for the incidence part but only a subset of five covariates for the latency part. In this way, we ensure $\boldsymbol{x} \ne \boldsymbol{z}$. The PH model is fitted for the latency with a fixed set of true values for $\boldsymbol{\gamma}$. Table 4 presents the biases and MSEs of the uncured and susceptible survival probabilities, whereas Table 5 presents the AUC values for both training and testing sets. From Tables 4 and 5, it is clear that the SVM once again outperforms both the spline and logistic models, thereby demonstrating robustness and generalizability.
Table 4.
Comparison of different models through the biases and MSEs of different quantities of interest in the presence of correlated covariates.
| Model | Bias (uncured) | MSE (uncured) | Bias (surv.) | MSE (surv.) |
|---|---|---|---|---|
| SVM | 0.0611 | 0.0936 | 0.0966 | 0.0779 |
| Logistic | 0.1740 | 0.2335 | 0.0972 | 0.0936 |
| Spline | 0.0949 | 0.1433 | 0.0955 | 0.0825 |

"uncured": estimated uncured probability; "surv.": estimated susceptible survival probability.
SVM: support vector machine; MSE: mean squared error.
Table 5.
Comparison of different models through the AUC values in the presence of correlated covariates.
| Model | Training AUC | Testing AUC |
|---|---|---|
| SVM | 0.8024 | 0.7462 |
| Logistic | 0.5340 | 0.5431 |
| Spline | 0.6497 | 0.6087 |
SVM: support vector machine; AUC: area under the curve.
As per the suggestion of a reviewer, we also used the NN to model the incidence part, that is, $\pi(\boldsymbol{z})$, and then the EM algorithm to estimate all parameters. For this purpose, we fitted a NN with two hidden layers containing 12 and 24 neurons in the first and second layers, respectively. The sigmoid activation function was used to fit the fully connected NN. In Table 6, we present the biases and MSEs of the uncured and susceptible survival probabilities. Clearly, the performance of the SVM is better in estimating the uncured probability, which is our main parameter of interest. Regarding the estimation of the susceptible survival probability, the performances are comparable. In Table 7, we compare the AUCs and computation times of the SVM and NN models. The computation times represent the time (in seconds) to produce the incidence and latency estimates along with the standard errors (obtained using a bootstrap sample of size 100) for a generated data set of size 300. For other sample sizes, the observations are similar and hence not reported for the sake of brevity. Observe that the computing time of the SVM model is much lower than that of the NN model for all three scenarios. Again, the SVM results in higher AUC values, implying improved predictive accuracy. These findings allow us to conclude that the proposed SVM model is preferable to the NN model.
Table 6.
Comparison of SVM and NN models through the biases and MSEs of different quantities of interest.
| Scenario | Model | Bias (uncured) | MSE (uncured) | Bias (surv.) | MSE (surv.) |
|---|---|---|---|---|---|
| 1 | SVM | −0.0954 | 0.1395 | 0.1075 | 0.1092 |
| | NN | −0.2231 | 0.2265 | 0.1074 | 0.1108 |
| 2 | SVM | −0.0570 | 0.0877 | 0.0494 | 0.0338 |
| | NN | −0.1846 | 0.2120 | 0.0492 | 0.0352 |
| 3 | SVM | 0.0698 | 0.1095 | 0.1172 | 0.0678 |
| | NN | −0.0990 | 0.1544 | −0.0385 | 0.0579 |

"uncured": estimated uncured probability; "surv.": estimated susceptible survival probability.
SVM: support vector machine; MSE: mean squared error; NN: neural network.
Table 7.
Comparison of SVM and NN models through the AUC values and computation times.
| Scenario | Model | Training AUC | Testing AUC | Computation time (s) |
|---|---|---|---|---|
| 1 | SVM | 0.8273 | 0.8150 | 86.13 |
| | NN | 0.7521 | 0.7393 | 111.07 |
| 2 | SVM | 0.7922 | 0.7675 | 121.88 |
| | NN | 0.7801 | 0.7094 | 143.27 |
| 3 | SVM | 0.8791 | 0.7216 | 192.35 |
| | NN | 0.8558 | 0.6952 | 239.16 |
SVM: support vector machine; AUC: area under the curve; NN: neural network.
4. Illustrative example: Analysis of HDSD data
We further demonstrate our proposed methodology using a data set that is extracted from the NASA's Hypobaric Decompression Sickness Data Bank, hereafter referred to as the HDSD data. 59 The data set has information on subjects who underwent denitrogenation test procedures before being exposed to a hypobaric environment. The event of interest is the onset of grade IV venous gas emboli (VGE). The time to onset of grade IV VGE, if it occurred, was not exactly observed but was contained within a time interval. The covariates of interest are age (in years), sex (1: male; 0: female), TR360, which is a measure of decompression stress ranging from 1.04 to 1.89, and noadyn, which indicates whether the subject was ambulatory (noadyn = 1) or lower body adynamic (noadyn = 0) during the test session. Information on 236 subjects, whose event times are either interval censored or right censored, is available for the analysis. 41 In Figure 4, we present a plot of the non-parametric maximum likelihood estimate (NPMLE) of the survival function. Clearly, we can see that the plot levels off at a significant non-zero proportion. This indicates a greater likelihood of the presence of a cured fraction in the data.
Figure 4.
Plot of the NPMLE of the survival function for the HDSD data.
HDSD: Hypobaric decompression sickness database; NPMLE: non-parametric maximum likelihood estimate.
We fit the proposed SVM-based mixture cure model and, for comparison, we also fit the logistic regression-based and spline regression-based mixture cure models. Noting that the sample size for the HDSD data is small, and to avoid over-fitting or under-fitting, we adopt a 10-fold cross-validation technique that allows us to simultaneously fit and evaluate each model on the full data. This is consistent with Hastie et al. 54 First, we draw inference on the incidence part of the model. In Figure 5, we plot the estimates of the uncured probabilities against age and TR360 when stratified by sex and noadyn for all models. Clearly, under the proposed SVM-based model, the change in the estimates of the uncured probabilities is non-monotonic with respect to age and TR360. This non-monotonic relationship is not captured by the logistic regression-based and spline regression-based models. Note that for the spline regression-based model, the pattern is similar to the logistic regression-based model.
Figure 5.
Estimates of uncured probabilities as a function of age and TR360 when stratified by noadyn and sex for the Hypobaric decompression sickness database (HDSD) data.
Next, we verify whether our proposed model's ability to capture nonlinear patterns in the data can result in improved predictive accuracy when prediction of the cured/uncured statuses is of interest. This can be verified using the ROC curves and by comparing the AUC values for the different models that we have considered. Noting that the cured statuses are unknown for the set of right censored observations in real data, we first impute the missing cured/uncured statuses. For each right censored observation, the missing uncured status can be imputed by generating a random number from a Bernoulli distribution whose success probability is the conditional probability of being uncured, as given in (18). With complete knowledge of the cured/uncured statuses for all subjects, the ROC curves can be drawn and the AUC values can be computed. However, since this method involves simulation (i.e. randomness), we make the ROC curves and the AUC values more stable by repeating the procedure 500 times and reporting the averaged ROC curves and the averaged AUC values. Figure 6 presents the averaged ROC curves for the different models; the corresponding AUC values turn out to be 0.8795, 0.8627, and 0.7766 for the SVM-based, logistic regression-based, and spline regression-based models, respectively. Thus, the proposed SVM-based model indeed provides the highest predictive accuracy for the considered HDSD data.
Figure 6.
ROC curves under different models for the HDSD data.
HDSD: Hypobaric decompression sickness database; ROC: receiver operating characteristic.
Finally, we look at the results related to the latency parts of the fitted models. Table 8 presents the estimates of the latency parameters, their standard deviations (SD), and the $p$-values. Clearly, at the 5% level of significance, only noadyn turns out to be significant for all models as far as the time to onset of grade IV VGE for uncured patients is concerned. Note that the direction of the effect of noadyn is the same for all models. Since the estimate of the noadyn coefficient is positive, ambulatory subjects tend to experience grade IV VGE faster. This finding is consistent with Ma. 41
Table 8.
Estimation results corresponding to the latency parameters for the HDSD data.
| Parameter | Est. (SVM) | Est. (Spline) | Est. (Logistic) | SD (SVM) | SD (Spline) | SD (Logistic) | p (SVM) | p (Spline) | p (Logistic) |
|---|---|---|---|---|---|---|---|---|---|
| Age | 0.0294 | −0.1798 | −0.1947 | 0.1037 | 0.0967 | 0.115 | 0.7767 | 0.0631 | 0.1418 |
| TR360 | 0.0628 | −0.2697 | −0.2522 | 0.1744 | 0.1741 | 0.145 | 0.7187 | 0.1214 | 0.1632 |
| Sex | 0.3449 | −0.1232 | −0.2908 | 0.4459 | 0.3431 | 0.107 | 0.4392 | 0.7196 | 0.5910 |
| Noadyn | 1.3252 | 1.5849 | 1.6868 | 0.4081 | 0.3072 | 0.107 | 1.17 | 2.47 | 8.00 |

Est.: estimate; p: $p$-value.
SVM: support vector machine; HDSD: Hypobaric decompression sickness database; SD: standard deviation.
5. Conclusion
The SVM has received a great amount of interest over the past two decades. It has been shown that the SVM performs well in a wide array of problems, including face detection, text categorization, and pedestrian detection. However, the use of the SVM in the context of cure rate models is new and not well explored. In this article, we have proposed a new cure rate model that uses the SVM to model the incidence part and a PH structure to model the latency part for survival data subject to interval censoring. The new cure rate model inherits the properties of the SVM and can capture more complex classification boundaries. For estimation purposes, we have proposed an EM algorithm where sequential minimal optimization together with the Platt scaling method is employed to estimate the uncured probabilities. In this regard, due to the unavailability of some cured statuses, we make use of a multiple imputation-based approach to generate the missing cured statuses. Due to the complexity of the proposed model and the estimation method, we approximate the standard errors of the estimated parameters using non-parametric bootstrapping. Through a simulation study, we have shown that when the true classification boundary is nonlinear, the proposed SVM-based mixture cure model overall performs better than the standard logistic regression-based and spline-based mixture cure models. As future research, it is of great interest to study the performance of the proposed model in the presence of high-dimensional covariates and to develop computationally efficient methods for covariate selection. It is also of great interest to extend the proposed model to accommodate a competing risks scenario.18,60 Furthermore, it is also possible to explore other machine learning algorithms (e.g. NNs or tree-based approaches) to study more complicated cure rate models, such as those that allow for the elimination of risk factors.61–66 We are currently looking at some of these problems and we hope to report the findings in upcoming manuscripts.
Supplemental Material
Supplemental material, sj-pdf-1-smm-10.1177_09622802231210917 for A support vector machine-based cure rate model for interval censored data by Suvra Pal, Yingwei Peng, Wisdom Aselisewine and Sandip Barui in Statistical Methods in Medical Research
Acknowledgements
The authors thank two anonymous reviewers for their careful review and comments which led to this improved version of the manuscript.
Footnotes
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Suvra Pal’s work was supported by the National Institute Of General Medical Sciences of the National Institutes of Health under Award Number R15GM150091. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health. Yingwei Peng’s work was partially supported by a Discovery grant from the Natural Sciences and Engineering Research Council of Canada.
ORCID iD: Suvra Pal https://orcid.org/0000-0001-9864-9489
Supplemental material: Supplemental material for this article is available online.
References
1. Boag JW. Maximum likelihood estimates of the proportion of patients cured by cancer therapy. J R Stat Soc Ser B (Methodological) 1949; 11: 15–53.
2. Berkson J, Gage RP. Survival curve for cancer patients following treatment. J Am Stat Assoc 1952; 47: 501–515.
3. Pal S, Barui S, Davies K, et al. A stochastic version of the EM algorithm for mixture cure model with exponentiated Weibull family of lifetimes. J Stat Theory Pract 2022; 16: 48.
4. Farewell VT. The use of mixture models for the analysis of survival data with long-term survivors. Biometrics 1982; 38: 1041–1046.
5. Farewell VT. Mixture models in survival analysis: Are they worth the risk? Can J Stat 1986; 14: 257–262.
6. Kuk AY, Chen CH. A mixture model combining logistic regression with proportional hazards regression. Biometrika 1992; 79: 531–541.
7. Peng Y, Dear KB. A nonparametric mixture model for cure rate estimation. Biometrics 2000; 56: 237–243.
8. Sy JP, Taylor JM. Estimation in a Cox proportional hazards cure model. Biometrics 2000; 56: 227–236.
9. Gu Y, Sinha D, Banerjee S. Analysis of cure rate survival data under proportional odds model. Lifetime Data Anal 2011; 17: 123–134.
10. Mao M, Wang JL. Semiparametric efficient estimation for a class of generalized proportional odds cure models. J Am Stat Assoc 2010; 105: 302–311.
11. Li CS, Taylor JM. A semi-parametric accelerated failure time cure model. Stat Med 2002; 21: 3235–3247.
12. Zhang J, Peng Y. A new estimation method for the semiparametric accelerated failure time mixture cure model. Stat Med 2007; 26: 3157–3171.
13. Zhang J, Peng Y. Accelerated hazards mixture cure model. Lifetime Data Anal 2009; 15: 455–467.
14. Lu W, Ying Z. On semiparametric transformation cure models. Biometrika 2004; 91: 331–343.
15. Barui S, Yi YG. Semiparametric methods for survival data with measurement error under additive hazards cure rate models. Lifetime Data Anal 2020; 26: 421–450.
16. Balakrishnan N, Pal S. EM algorithm-based likelihood estimation for some cure rate models. J Stat Theory Pract 2012; 6: 698–724.
17. Balakrishnan N, Pal S. Lognormal lifetimes and likelihood-based inference for flexible cure rate models based on COM-Poisson family. Comput Stat Data Anal 2013; 67: 41–67.
18. Balakrishnan N, Pal S. An EM algorithm for the estimation of parameters of a flexible cure rate model with generalized gamma lifetime and model discrimination using likelihood- and information-based methods. Comput Stat 2015; 30: 151–189.
19. Balakrishnan N, Pal S. Likelihood inference for flexible cure rate models with gamma lifetimes. Commun Stat - Theory Method 2015; 44: 4007–4048.
20. Balakrishnan N, Pal S. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes. Stat Methods Med Res 2016; 25: 1535–1563.
21. Peng Y. Fitting semiparametric cure models. Comput Stat Data Anal 2003; 41: 481–490.
22. Cai C, Zou Y, Peng Y, et al. smcure: an R-package for estimating semiparametric mixture cure models. Comput Methods Programs Biomed 2012; 108: 1255–1260.
23. Tong EN, Mues C, Thomas LC. Mixture cure models in credit scoring: if and when borrowers default. Eur J Oper Res 2012; 218: 132–139.
24. Xu J, Peng Y. Nonparametric cure rate estimation with covariates. Can J Stat 2014; 42: 1–17.
25. López-Cheda A, Cao R, Jácome MA, et al. Nonparametric incidence estimation and bootstrap bandwidth selection in mixture cure models. Comput Stat Data Anal 2017; 105: 144–165.
26. Chen T, Du P. Mixture cure rate models with accelerated failures and nonparametric form of covariate effects. J Nonparametr Stat 2018; 30: 216–237.
27. Wang L, Du P, Liang H. Two-component mixture cure rate model with spline estimated nonparametric components. Biometrics 2012; 68: 726–735.
28. Pal S, Aselisewine W. A semi-parametric promotion time cure model with support vector machine. Ann Appl Stat 2023; 17(3): 2680–2699.
29. Cortes C, Vapnik V. Support-vector networks. Mach Learn 1995; 20: 273–297.
30. Aselisewine W, Pal S. On the integration of decision trees with mixture cure model. Stat Med 2023; 42(23): 4111–4127.
31. Li P, Peng Y, Jiang P, et al. A support vector machine based semiparametric mixture cure model. Comput Stat 2020; 35: 931–945.
32. Wang P, Pal S. A two-way flexible generalized gamma transformation cure rate model. Stat Med 2022; 41: 2427–2447.
33. Pal S, Balakrishnan N. Expectation maximization algorithm for Box–Cox transformation cure rate model and assessment of model misspecification under Weibull lifetimes. IEEE J Biomed Health Inform 2018; 22: 926–934.
34. Wiangnak P, Pal S. Gamma lifetimes and associated inference for interval-censored cure rate model with COM–Poisson competing cause. Commun Stat - Theory Method 2018; 47: 1491–1509.
35. Treszoks J, Pal S. A destructive shifted Poisson cure model for interval censored data and an efficient estimation algorithm. Commun Stat - Simul Comput 2022. DOI: 10.1080/03610918.2022.2067876.
36. Treszoks J, Pal S. On the estimation of interval censored destructive negative binomial cure model. Stat Med 2023. DOI: 10.1002/sim.9904.
37. Sun J. The statistical analysis of interval-censored failure time data. New York: Springer, 2007.
38. Lindsey JC, Ryan LM. Methods for interval-censored data. Stat Med 1998; 17: 219–238.
39. Kim YJ, Jhun M. Cure rate model with interval censored data. Stat Med 2008; 27: 3–14.
40. Ma S. Cure model with current status data. Stat Sin 2009; 19: 233–249.
41. Ma S. Mixed case interval censored data with a cured subgroup. Stat Sin 2010; 20: 1165–1181.
42. Xiang L, Ma X, Yau KK. Mixture cure model with random effects for clustered interval-censored survival data. Stat Med 2011; 30: 995–1006.
43. Aljawadi BA, Bakar MRA, Ibrahim NA. Nonparametric versus parametric estimation of the cure fraction using interval censored data. Commun Stat - Theory Method 2012; 41: 4251–4275.
44. Wood SN. Generalized Additive Models: An Introduction With R. Boca Raton, FL: Chapman & Hall/CRC, 2017.
45. Hastie T, Tibshirani R. Generalized Additive Models. Boca Raton, FL: Chapman & Hall, 1990.
46. Turnbull BW. The empirical distribution function with arbitrarily grouped, censored and truncated data. J R Stat Soc Ser B 1976; 38: 290–295.
47. Pal S, Peng Y, Aselisewine W. A new approach to modeling the cure rate in the presence of interval censored data. Comput Stat 2023. DOI: 10.1007/s00180-023-01389-7.
48. McLachlan GJ, Krishnan T. The EM Algorithm and Extensions. Hoboken, NJ: John Wiley & Sons, 2007.
49. Pal S. A simplified stochastic EM algorithm for cure rate model with negative binomial competing risks: an application to breast cancer data. Stat Med 2021; 40: 6387–6409.
50. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST) 2011; 2: 1–27.
51. Platt J. Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B, Burges C and Smola A (eds) Advances in Kernel Methods – Support Vector Learning. Cambridge, MA: MIT Press, 1999, pp.185–208.
52. Platt J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv Large Margin Classifiers 1999; 10: 61–74.
53. Amico M, Van Keilegom I, Han B. Assessing cure status prediction from survival data using receiver operating characteristic curves. Biometrika 2021; 108: 727–740.
54. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer, 2001.
55. Pal S, Roy S. On the estimation of destructive cure rate model: a new study with exponentially weighted Poisson competing risks. Stat Neerl 2021; 75: 324–342.
56. Pal S, Roy S. A new non-linear conjugate gradient algorithm for destructive cure rate model and a simulation study: illustration with negative binomial competing risks. Commun Stat - Simul Comput 2022; 51: 6866–6880.
57. Pal S, Roy S. On the parameter estimation of Box-Cox transformation cure model. Stat Med 2023; 42: 2600–2618.
58. Wu Y, Yin G. Cure rate quantile regression for censored data with a survival fraction. J Am Stat Assoc 2013; 108: 1517–1531.
59. Conkin J, Bedahl SR, Van Liew HD. A computerized databank of decompression sickness incidence in altitude chambers. Aviat Space Environ Med 1992; 63: 819–824.
60. Davies K, Pal S, Siddiqua JA. Stochastic EM algorithm for generalized exponential cure rate model and an empirical study. J Appl Stat 2021; 48: 2112–2135.
61. Pal S, Balakrishnan N. Destructive negative binomial cure rate model and EM-based likelihood inference under Weibull lifetime. Stat Probab Lett 2016; 116: 9–20.
62. Pal S, Balakrishnan N. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes. Stat Methods Med Res 2017; 26: 2093–2113.
63. Pal S, Balakrishnan N. Likelihood inference for the destructive exponentially weighted Poisson cure rate model with Weibull lifetime and an application to melanoma data. Comput Stat 2017; 32: 429–449.
64. Pal S, Balakrishnan N. Likelihood inference based on EM algorithm for the destructive length-biased Poisson cure rate model with Weibull lifetime. Commun Stat - Simul Comput 2018; 47: 644–660.
65. Pal S, Majakwara J, Balakrishnan N. An EM algorithm for the destructive COM-Poisson regression cure rate model. Metrika 2018; 81: 143–171.
66. Majakwara J, Pal S. On some inferential issues for the destructive COM-Poisson-generalized gamma regression cure rate model. Commun Stat - Simul Comput 2019; 48: 3118–3142.