Summary
In many biomedical studies, patients may experience the same type of recurrent event repeatedly over time, such as bleeding, infections, and disease episodes. In this article, we propose a Bayesian design for a pivotal clinical trial in which lower risk myelodysplastic syndromes (MDS) patients are treated with MDS disease modifying therapies. One of the key study objectives is to demonstrate the effect of the investigational product (treatment) on the reduction of platelet transfusion and bleeding events while patients receive MDS therapies. In this context, we propose a new Bayesian approach for the design of superiority clinical trials using recurrent events frailty regression models. Historical recurrent events data from an already completed phase 2 trial are incorporated into the Bayesian design via the partial borrowing power prior of Ibrahim et al. (2012, Biometrics 68, 578–586). An efficient Gibbs sampling algorithm, a predictive data generation algorithm, and a simulation-based algorithm are developed for sampling from the fitting posterior distribution, generating the predictive recurrent events data, and computing various design quantities such as the type I error rate and power, respectively. An extensive simulation study is conducted to compare the proposed method to existing frequentist methods and to investigate various operating characteristics of the proposed design.
Keywords: Clinical trial design, Gibbs sampling, Myelodysplastic syndrome, Power prior, Recurrent events, Type I error rate and power
1. Introduction
Recurrent events data are increasingly playing a dominant role in the design of clinical trials in practice. One of the main reasons for this is that time-to-event data are now often collected on several endpoints, or the same endpoint at different points in time. There are many examples of recurrent events data that arise in practice, such as recurrent infections, disease episodes, bleeding, recurrences of tumors, etc. As a result of these types of settings, statistical clinical trials designs are now being formally considered in the recurrent events arena with the hope that such designs will more fully capture the disease process and hence yield greater efficiency and power than designs based on a univariate time-to-event endpoint. Simply put, having more time-to-event measurements for a subject generally yields greater power than having only one time-to-event endpoint for each subject.
There has been some literature on designing clinical trials in the recurrent events context, essentially all of it from a frequentist perspective. The design settings considered in Hughes (1997) and Bernardo and Harrington (2001) are based on a multiplicative intensity model and a marginal proportional hazards model, respectively. Using the test of Cook (1995), Lawless and Nadeau (1995) and Matsui (2005) consider designs using a nonhomogeneous Poisson process model. Their methods are parametric in that, conditional on a frailty, the intensity of a homogeneous Poisson process is needed as an input parameter for the sample size calculations. Song, Kosorok, and Cai (2008) propose a covariate-adjusted logrank test to improve the power of the tests and to adjust for random imbalances of the covariates at baseline using a semiparametric approach. They derive a sample size formula based on the limiting distribution of the robust log-rank statistic. Chen, Ibrahim, and Chu (2014) develop a sample size determination method for the shared frailty model to investigate the treatment effect on multivariate event times. They consider sample size determination for testing the treatment effect on one time-to-event while treating the other event times as nuisance, and compare the power from a multivariate frailty model to that of a univariate parametric and semi-parametric survival model. There has been essentially no literature on developing Bayesian design methods for recurrent events data. 
Bayesian methods are inherently attractive in this complex setting: they are computationally flexible, make no large-sample assumptions, are more amenable than frequentist methods to adaptive design settings with multiple endpoints, generally have excellent frequentist operating characteristics (e.g., Chen et al., 2011; Ibrahim et al., 2012), and allow for the natural incorporation of historical data and other prior information, which can lead to more desirable designs than their frequentist counterparts.
To motivate our methodology, we consider a case study of myelodysplastic syndromes (MDS), where the interest lies in evaluating the treatment effect of an investigational product (IP) for reducing recurrent bleeding or transfusion events among lower risk MDS patients receiving disease modifying therapy. General symptoms associated with MDS include fatigue, dizziness, weakness, bruising, and bleeding. The disease is characterized by progressive impairment of myelodysplastic stem cells that increases the risk of evolution into acute myeloid leukemia (AML) (Malcovati et al., 2005). In this article, we aim to design a pivotal clinical trial in which lower risk MDS patients are treated with MDS disease modifying therapies along with the IP. The primary study objective is to demonstrate the treatment effect on reduction of platelet transfusion utilization and bleeding event occurrences during the course of receiving MDS therapies. A superiority trial design is therefore considered, since the IP is expected to reduce the transfusion incidence. The most common side effect of MDS disease modifying therapies is thrombocytopenia (platelet count < 100 × 10^9/L). Thrombocytopenia prevents on-schedule administration of MDS therapies and increases the risk of bleeding, and thus could result in morbidity. Platelet transfusion interventions are often required therapeutically (given to patients who are actively bleeding) and prophylactically (to prevent future bleeding) during the therapy cycles. Therefore, the incidences of platelet transfusion interventions and bleeding events are highly correlated, and as a result, the composite endpoint of bleeding events and platelet transfusion events is currently being explored as a primary endpoint.
In this article, we propose a novel approach to Bayesian clinical trial design for superiority trials with recurrent events data. We postulate a semiparametric cumulative intensity model with covariates that is very flexible and allows for very general shapes of the baseline intensity. By taking the baseline intensity to be piecewise constant, our model can accommodate a wide variety of settings well beyond proportional intensities. In our Bayesian design development, we consider the sampling and fitting priors of Wang and Gelfand (2002), which lead to flexible and general classes of Bayesian designs allowing for the incorporation of historical data. Using the Wang and Gelfand (2002) framework, we are able to define a key quantity that determines the frequentist operating characteristics of the design, namely the type I error rate and power. The MDS trial is perfectly suited for this type of design since our goal is to design a study with a moderate number of subjects, and therefore large sample approximations are not at all desirable. Also, we have historical data available that will facilitate a more efficient design with desirable operating characteristics. We compare our Bayesian design to the naïve sample size calculation under a Poisson assumption as well as to that of Song et al. (2008), and show that our design has more desirable operating characteristics than either of these two methods.
The rest of this article is organized as follows. In Section 2, we present the cumulative intensity model and derive the likelihood function. In Section 3, we present the proposed Bayesian methodology for superiority design. We define the sampling and fitting priors along with the key Bayesian criterion for determining the operating characteristics of the design. We also provide details on the predictive data generation algorithm. In Section 4, we present a general model for historical data, propose the partial borrowing power prior as the fitting prior, and discuss the sampling prior that is used for generating the predictive data. In Section 5, we derive several attractive properties of the full conditional distributions, develop an efficient Gibbs sampling algorithm via elegant reparameterizations, and provide a simulation-based algorithm for computing the type I error rate and power. Section 6 presents a detailed design for the MDS clinical trial in the presence of historical data as well as a detailed simulation study examining the operating characteristics of our proposed design. We close the article with some discussion in Section 7.
2. The Gamma Frailty Regression Model
For single-type recurrent events data, we consider the cumulative intensity model:
(1)  Λi(t|zi, xi, ωi) = ωi Λ0(t) exp(γzi + x′iβ),
where Λ0(t) is the baseline cumulative intensity function, zi denotes the treatment indicator, xi = (xi1, …, xip)′ is a p-dimensional vector of baseline covariates associated with subject i, γ and β = (β1, …, βp)′ are the corresponding regression coefficients, and ωi is the subject-dependent frailty. We further assume that ωi ~ Gamma(1/τ, 1/τ), which is a gamma distribution with mean 1 and variance τ. The model defined by (1) is thus termed the recurrent events frailty model. We note that a large value of τ implies high dependence among recurrent event times. Let λ0(t) = dΛ0(t)/dt denote the baseline intensity function. We assume a piecewise constant form for λ0(t). Specifically, let s0 = 0 < s1 < s2 < ⋯ < sK = ∞ denote a partition of R+ = (0, ∞). We assume a constant intensity λk over the kth interval Ik = (sk−1, sk] for k = 1, 2, …, K. That is, λ0(t) = λk if t ∈ Ik for k = 1, …, K, and we write λ = (λ1, …, λK)′.
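To make the piecewise constant specification concrete, the following sketch evaluates λ0(t) and the implied cumulative intensity Λ0(t). The cut points and rates are the illustrative K = 4 design values from Section 6.2, not part of the model definition itself; the function names are ours.

```python
import numpy as np

# Illustrative K = 4 design values (Section 6.2); s_0 = 0 and s_4 = infinity.
cuts = np.array([14.0, 21.0, 49.0])        # s_1, s_2, s_3 (in days)
lam = np.array([0.14, 0.31, 0.13, 0.08])   # lambda_1, ..., lambda_4

def lambda0(t, cuts, lam):
    """Baseline intensity lambda_0(t): constant on each interval (s_{k-1}, s_k]."""
    k = int(np.searchsorted(cuts, t, side="left"))
    return lam[k]

def Lambda0(t, cuts, lam):
    """Cumulative baseline intensity: sum of rate x time spent in each interval."""
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    widths = np.clip(np.minimum(t, edges[1:]) - edges[:-1], 0.0, None)
    return float(np.sum(lam * widths))
```

For example, Λ0(14) = 0.14 × 14 = 1.96 and Λ0(21) = 1.96 + 0.31 × 7 = 4.13 under these design values.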
Let T denote the duration of the trial and let Ni(t) denote the number of recurrent events by time t for t ≤ T. We assume that the Ni(t)’s are independent across subjects. The observed data consists of the recurrent events Dn = {Ni(t ∧ ci), ci, zi, xi, i = 1, …, n}, where the censoring time ci ≤ T is conditionally independent of Ni(t) given zi and xi for i = 1, …, n. Under the model defined by (1), the observed-data likelihood function is given by
(2)  L(θ|Dn) = ∏_{i=1}^{n} ∫_0^∞ {∏_{j=1}^{Ni(ci)} ωi λ0(tij) exp(γzi + x′iβ)} exp{−ωi Λ0(ci) exp(γzi + x′iβ)} f(ωi|τ) dωi,
where tij denotes the jth observed recurrent event time for subject i, f(ωi|τ) is the density of a Gamma(1/τ, 1/τ) distribution, and θ = (γ, β, λ, τ). We note that when τ → 0, P(ωi = 1|τ) → 1 and, in this case, the model in (1) reduces to a non-homogeneous Poisson process model with the cumulative intensity function given by Λi(t|zi, xi) = Λ0(t) exp(γzi + x′iβ), where Λ0(t) is defined as in (1). The observed likelihood function for the Poisson process model is then L(γ, β, λ|Dn) = ∏_{i=1}^{n} {∏_{j=1}^{Ni(ci)} λ0(tij) exp(γzi + x′iβ)} exp{−Λ0(ci) exp(γzi + x′iβ)}.
3. Proposed Methodology
3.1. Bayesian Design
The hypotheses for “superiority” testing are given as follows: H0: exp(γ) ≥ δ versus H1: exp(γ) < δ, where δ is a prespecified design margin. For a superiority trial, δ < 1 and the trial is successful if H1 is accepted. Note that exp(γ) is the intensity ratio of the treatment compared to the control arm (placebo).
Following Wang and Gelfand (2002) and Chen et al. (2011), let π(s)(θ) denote the sampling prior, which is used to generate the data, and also let π(f)(θ) denote the fitting prior, which is used to fit the model once the data are generated. Under the fitting prior, the posterior distribution of θ given the data Dn takes the form π(f)(θ|Dn) ∝ L(θ|Dn)π(f)(θ), where L(θ|Dn) is defined by (2). We note that π(f)(θ) may be improper as long as the resulting posterior π(f)(θ|Dn) is proper. Further we let f(s)(Dn) denote the marginal distribution of Dn based on the sampling prior. The key design quantity is defined as follows:
(3)  β_s^{(n)} = E_s[1{P(exp(γ) < δ | Dn, π(f)) ≥ φ0}],
where the indicator function 1{A} is 1 if A is true and 0 otherwise, φ0 > 0 is a prespecified quantity, the probability P(·|Dn, π(f)) is computed with respect to the posterior distribution given the data Dn and the fitting prior π(f)(θ), and the expectation Es[·] is taken with respect to the marginal distribution of Dn under the sampling prior π(s)(θ).
Let Θ0 and Θ1 denote the parameter spaces corresponding to H0 and H1. We let Θ̅0 and Θ̅1 denote the closures of Θ0 and Θ1 and specify one sampling prior, π0(s)(θ), with support ΘB = Θ̅0 ∩ Θ̅1 and another sampling prior, π1(s)(θ), with support Θ1* ⊂ Θ1. Using (3), for given 0 < α0 < 1 and 0 < α1 < 1, we compute nα0 = min{n: β_{s0}^{(n)} ≤ α0} and nα1 = min{n: β_{s1}^{(n)} ≥ 1 − α1}, where β_{s0}^{(n)} and β_{s1}^{(n)} given in (3), corresponding to π0(s)(θ) and π1(s)(θ), are the type I error rate and power, respectively. Then, the sample size is given by nB = max{nα0, nα1}. For the design of a superiority trial, we choose φ0 ≥ 0.95 to ensure that the entire 95% Bayesian credible interval of exp(γ) is below δ. Common choices of α0 and α1 include α0 = 0.05 and α1 = 0.20, so that the sample size nB guarantees that the type I error rate is at most 0.05 and the power is at least 0.80.
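The sample size rule nB = max{nα0, nα1} can be sketched as follows; the per-n estimates of the type I error rate and power below are hypothetical placeholders, which in practice come from the simulation-based algorithm of Section 5.

```python
# A sketch of the Bayesian sample size rule n_B = max{n_alpha0, n_alpha1}.
# `estimates` maps each candidate n to Monte Carlo estimates of
# (type I error rate under pi_s0, power under pi_s1); the numbers below
# are invented for illustration.
def bayesian_sample_size(estimates, alpha0=0.05, alpha1=0.20):
    ns = sorted(estimates)
    n_alpha0 = min(n for n in ns if estimates[n][0] <= alpha0)        # type I control
    n_alpha1 = min(n for n in ns if estimates[n][1] >= 1.0 - alpha1)  # power target
    return max(n_alpha0, n_alpha1)

est = {250: (0.06, 0.72), 300: (0.05, 0.82), 350: (0.04, 0.88)}  # hypothetical
n_B = bayesian_sample_size(est)  # -> 300 for these placeholder estimates
```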
3.2. Predictive Data Generation Algorithm
We use the sampling prior, π(s)(θ), to generate the predictive data Dn. In other words, we view the distribution f(s)(Dn) as the prior predictive marginal distribution of the data. The prior predictive data generation algorithm is given as follows: Step 0. Set T, n, K, s0 = 0 < s1 < ⋯ < sK−1 < sK = ∞, and 0 < p < 1, which is the probability of treatment assignment; Step 1. Generate θ ~ π(s)(θ); Step 2. Generate a vector of covariates, x, from a known distribution f(x) and independently generate the treatment assignment indicator z ~ Bernoulli(p); Step 3. Generate ω ~ Gamma(1/τ, 1/τ); Step 4. Set t0 = 0 and sequentially generate the event times tj, j = 1, 2, …, from a piecewise exponential regression model with cumulative intensity ωΛ0(t) exp(γz + x′β), and compute the gap times yj = tj − tj−1 for j = 1, 2, …; Step 5. Generate c* ~ g(c|x) independently, where g(c|x) is a pre-specified distribution for the censoring time, and let c = c* ∧ T; Step 6. Compute N(t) for t ≤ c and record all jump times; and Step 7. Independently repeat Steps 2–6 n times to obtain Dn.
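As a sketch of Steps 2–6 for a single subject, the following uses the time-rescaling (inverse cumulative intensity) representation of the piecewise exponential generation, which is equivalent in distribution to the sequential gap-time generation in Step 4; all numerical inputs are supplied by the caller, and the function names are ours.

```python
import numpy as np

def Lambda0_inv(u, cuts, lam):
    """Inverse of the piecewise linear cumulative baseline intensity."""
    edges = np.concatenate(([0.0], cuts))
    acc = 0.0
    for k in range(len(lam)):
        width = (cuts[k] if k < len(cuts) else np.inf) - edges[k]
        if acc + lam[k] * width >= u:
            return edges[k] + (u - acc) / lam[k]
        acc += lam[k] * width
    return np.inf  # unreachable: the last interval is unbounded

def simulate_subject(rng, cuts, lam, gamma, beta, tau, p_trt, p_x, cens, T):
    """Steps 2-6: covariates, frailty, event times, and censoring for one subject."""
    z = rng.binomial(1, p_trt)                 # Step 2: treatment assignment
    x = rng.binomial(1, p_x)                   # Step 2: binary baseline covariate
    # Step 3: gamma frailty with mean 1 and variance tau (tau = 0: no frailty).
    w = rng.gamma(1.0 / tau, tau) if tau > 0 else 1.0
    eta = w * np.exp(gamma * z + beta * x)     # multiplicative intensity factor
    c = min(cens, T)                           # Step 5: censoring time c = c* ^ T
    # Step 4 (time-rescaling form): cumulative Exp(1) draws mapped through the
    # inverse cumulative intensity give the recurrent event times.
    times, s = [], 0.0
    while True:
        s += rng.exponential(1.0)
        t = Lambda0_inv(s / eta, cuts, lam)
        if t > c:
            break
        times.append(t)
    return z, x, c, times
```

With a constant baseline rate, no frailty, and null covariate effects, the counts over (0, c] reduce to a homogeneous Poisson process, which gives a quick sanity check of the generator.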
4. Specification of the Fitting and Sampling Priors
4.1. The Model for the Historical Data
Let T0 denote the duration of the historical trial and let N0i(t) denote the number of recurrent events by time t for t ≤ T0. We assume that the N0i(t)’s are independent across subjects. The observed data consists of the recurrent events D0 = {N0i(t ∧ c0i), c0i, z0i, x0i, i = 1, …, n0}, where c0i ≤ T0 is the noninformative censoring time, z0i denotes the treatment indicator, and x0i is a p-dimensional vector of the baseline covariates associated with subject i. Here, we assume that z0i and zi are indicators of the same treatment and x0i and xi are vectors of the same baseline covariates. Let Λh0(t) denote the cumulative baseline intensity function based on the historical data and λh0(t) = dΛh0(t)/dt the corresponding baseline intensity function. We consider a piecewise constant form for λh0(t), which may be different from λ0(t). Specifically, let s00 = 0 < s01 < ⋯ < s0K0 = ∞ denote a partition of R+ = (0, ∞). We assume a constant intensity λ0k over the kth interval I0k = (s0,k−1, s0k] for k = 1, 2, …, K0; that is, λh0(t) = λ0k if t ∈ I0k for k = 1, …, K0. We write λh0 = (λ01, …, λ0K0)′ and further assume that the cumulative intensity function for the recurrent events based on the historical data is given by
(4)  Λ0i(t|z0i, x0i, ω0i) = ω0i Λh0(t) exp(γz0i + x′0iβ),
where ω0i is the subject dependent frailty, which follows a Gamma(1/τ0, 1/τ0) distribution, and τ0 may be different than τ.
Comparing (4) to (1), we see that the models for the historical data and the current data share only two common parameters γ and β. This assumption is reasonable when the treatment and covariates in the current data are the same as those in the historical data. Thus, strength from the historical data is borrowed only through the common parameters γ and β. Having different parameters, λh0 and τ0, and a different partition, s00 = 0 < s01 ⋯ <s0K0 =∞, for λh0(t), for the historical data provides us with greater flexibility in accommodating different baseline intensity functions and different frailty distributions to capture the dependence among recurrent event times in the current and historical data. In addition, we use the partial borrowing power prior of Ibrahim et al. (2012) as the fitting prior to control the influence of the historical data on the current study when γ and β in the historical data are different than those in the current study.
4.2. Partial Borrowing Power Priors
Under the model defined by (4), the observed-data likelihood function based on D0 is given by
(5)  L(γ, β, λh0, τ0|D0) = ∏_{i=1}^{n0} ∫_0^∞ {∏_{j=1}^{N0i(c0i)} ω0i λh0(t0ij) exp(γz0i + x′0iβ)} exp{−ω0i Λh0(c0i) exp(γz0i + x′0iβ)} f(ω0i|τ0) dω0i,
where t0ij denotes the jth observed recurrent event time for subject i in the historical trial and f(ω0i|τ0) is the density of a Gamma(1/τ0, 1/τ0) distribution.
Following Ibrahim and Chen (2000) and Ibrahim et al. (2012), we propose the following partial borrowing power prior for θ,
(6)  π(f)(θ, λh0, τ0|D0, a0) ∝ [L(γ, β, λh0, τ0|D0)]^{a0} π0(f)(λh0, τ0) π0(f)(θ),
where 0 ≤ a0 ≤ 1, π0(f)(λh0, τ0) is an initial prior for (λh0, τ0), and π0(f)(θ) is an initial prior for θ. The parameter a0 can be interpreted as a relative precision parameter for D0. One of its main roles is to control the heaviness of the tails of the prior for γ and β: as a0 becomes smaller, the tails of (6) become heavier. If a0 = 1, (6) corresponds to the update of π0(f) using Bayes’ theorem, while a0 = 0 is equivalent to a prior specification with no incorporation of historical data. Thus, the parameter a0 controls the influence of the historical data on the current study. In (6), the initial prior π0(f)(θ) is further specified as
(7)  π0(f)(θ) = π0(f)(γ, β) {∏_{k=1}^{K} π0(f)(λk)} π0(f)(τ),
where π0(f)(γ, β) ∝ 1 (an improper uniform prior), λk ~ Gamma(af0k, bf0k) with af0k ≥ 0 and bf0k ≥ 0 for k = 1, …, K, and τ ~ IG(af0τ, bf0τ) (an inverse gamma (IG) distribution) with af0τ > 0 and bf0τ > 0. Similarly, we assume that π0(f)(λh0, τ0) = {∏_{k=1}^{K0} π0(f)(λ0k)} π0(f)(τ0), where λ0k ~ Gamma(ahf0k, bhf0k) with ahf0k > 0 and bhf0k > 0 for k = 1, …, K0, and τ0 ~ IG(ahf0τ, bhf0τ) with ahf0τ > 0 and bhf0τ > 0.
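To illustrate the borrowing mechanism of the power prior in isolation, the toy sketch below discounts a historical log-likelihood by a0. Poisson counts with a common rate stand in for the full frailty likelihood, and all data and names below are invented for illustration; they are not the prior actually used in the design.

```python
import numpy as np

# Toy power-prior mechanism: the historical log-likelihood enters the
# log-posterior (up to a constant, under a flat initial prior) discounted
# by a0. Poisson event counts over fixed exposures stand in for the full
# frailty likelihood; all data below are invented.
def log_post(rate, cur_counts, cur_expo, hist_counts, hist_expo, a0):
    ll = lambda y, e: np.sum(y * np.log(rate * e) - rate * e)  # Poisson log-lik (no y! term)
    return ll(cur_counts, cur_expo) + a0 * ll(hist_counts, hist_expo)

cur = (np.array([3, 1, 2]), np.array([10.0, 10.0, 10.0]))   # current data (invented)
hist = (np.array([5, 4]), np.array([10.0, 10.0]))           # historical data (invented)
# a0 = 0 recovers the current-data likelihood alone; a0 = 1 fully pools the
# two datasets; intermediate a0 interpolates linearly on the log scale.
```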
4.3. Sampling Priors
For the sampling prior, πℓ(s)(θ), ℓ = 0, 1, we take
(8)  πℓ(s)(θ) = πℓ(s)(γ) π(s)(β, λ, τ).
In (8), we first specify a point mass prior for γ: πℓ(s)(γ) = Δ{γ = γ0} for ℓ = 0, where γ0 = log(δ), and πℓ(s)(γ) = Δ{γ = γ1} for ℓ = 1, where γ1 < log(δ). In addition, we specify a point mass prior for π(s)(β, λ, τ).
5. Computational Development
In this section, we develop an efficient Gibbs sampling algorithm to sample from the posterior distribution under the fitting prior and a simulation-based algorithm for computing the type I error rate and power. Using (2) and (6), the augmented posterior distribution of (θ, λh0, τ0) is given by π(θ, λh0, τ0|Dn, D0, a0) ∝ L(θ|Dn) π(f)(θ, λh0, τ0|D0, a0). To develop an efficient Gibbs sampling algorithm, we consider the reparameterizations λ* = τλ and λ*h0 = τ0λh0. The Jacobians of these transformations are τ^{−K} and τ0^{−K0}, respectively. Letting θ* = (γ, β, λ*, τ, λ*h0, τ0), the posterior distribution of θ* is thus given by
(9)  π(θ*|Dn, D0, a0) ∝ L(γ, β, λ*/τ, τ|Dn) [L(γ, β, λ*h0/τ0, τ0|D0)]^{a0} π0(f)(λ*h0/τ0, τ0) π0(f)(γ, β, λ*/τ, τ) τ^{−K} τ0^{−K0}.
Let [A|B] denote the conditional distribution of A given B. The Gibbs sampling algorithm requires us to sample from the following distributions in turn: (i) [γ, β|λ*, τ, λ*h0, τ0, Dn, D0, a0]; (ii) [λ*|γ, β, τ, Dn]; (iii) [λ*h0|γ, β, τ0, D0, a0]; (iv) [τ|γ, β, λ*, Dn]; and (v) [τ0|γ, β, λ*h0, D0, a0]. For (i), we can show that the full conditional density π(γ, β|λ*, τ, λ*h0, τ0, Dn, D0, a0) is log-concave in γ and in each component of β. For (ii), we first let ξk = log(λ*k) for k = 1, …, K and ξ = (ξ1, …, ξK)′, and then we can show that the conditional density π(ξ|γ, β, τ, Dn) is log-concave in each component of ξ. For (iii), we use a transformation similar to that in (ii) to sample λ*0k for k = 1, …, K0. For (iv), we take the transformation τ* = 1/τ; then, the conditional density π(τ*|γ, β, λ*, Dn) is also log-concave. Finally, for (v), we use the same transformation as in (iv) to sample τ0. The technical details regarding the log-concavity of these full conditional densities are given in Appendix A. With the log-concavity property, all of the model parameters can be efficiently sampled from their respective full conditional distributions.
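Log-concave full conditionals are the natural target for adaptive rejection sampling; as a simpler generic stand-in that likewise needs only the log-density, a stepping-out slice sampler can draw from each univariate full conditional. The sketch below is a generic sampler under that assumption, not the implementation used in the paper.

```python
import math
import random

def slice_sample(logf, x0, n, w=1.0, seed=1):
    """Stepping-out slice sampler for a univariate (e.g., log-concave) log-density."""
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n):
        logy = logf(x) + math.log(rng.random())  # vertical slice level under logf
        lo = x - w * rng.random()                # randomly placed initial bracket
        hi = lo + w
        while logf(lo) > logy:                   # step out until outside the slice
            lo -= w
        while logf(hi) > logy:
            hi += w
        while True:                              # sample within [lo, hi], shrinking
            x1 = lo + (hi - lo) * rng.random()
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        xs.append(x)
    return xs
```

For instance, running it on the standard normal log-density, logf(x) = −x²/2, yields draws whose sample mean and variance are close to 0 and 1.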
The simulation-based computational algorithm for computing β_s^{(n)} in (3) is given as follows: Step 0. Specify n, δ, φ0, B (the number of simulations), and M (the Gibbs sample size); Step 1. Generate θ from π(s)(θ) given by (8); Step 2. Generate Dn using the predictive data generation algorithm; Step 3. Run the Gibbs sampler to generate a Gibbs sample {θ(m) = (γ(m), β(m), λ(m), τ(m)), m = 1, 2, …, M} of size M from the posterior distribution π(θ, λh0, τ0|Dn, D0, a0); Step 4. Compute P̂f = (1/M) ∑_{m=1}^{M} 1{exp(γ(m)) < δ}; Step 5. Check whether P̂f ≥ φ0; Step 6. Repeat Steps 1–5 B times; and Step 7. Compute the proportion of {P̂f ≥ φ0} in these B runs, which gives an estimate of β_s^{(n)}.
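To show the structure of this algorithm end to end, the toy version below replaces the frailty model and the Gibbs sampler with a conjugate gamma–Poisson model, so that Steps 1–7 run with closed-form posteriors; every numerical setting (baseline rate, exposure, prior hyperparameters) is invented for illustration and is not from the MDS design.

```python
import numpy as np

# Toy version of the simulation-based algorithm: two arms contribute Poisson
# event counts over a fixed exposure, Gamma(0.01, rate 0.01) priors on each
# arm's event rate give closed-form posteriors (replacing the Gibbs sampler),
# and the outer loop estimates E_s[1{P(rate ratio < delta | data) >= phi0}].
def design_quantity(gamma_true, n, delta, phi0, B=200, M=2000,
                    base=0.3, expo=10.0, seed=1):
    rng = np.random.default_rng(seed)
    a, b = 0.01, 0.01                        # invented prior hyperparameters
    hits = 0
    for _ in range(B):
        z = rng.binomial(1, 0.5, size=n)     # treatment assignment (Step 2)
        y = rng.poisson(base * np.exp(gamma_true * z) * expo)  # event counts
        post = {}
        for arm in (0, 1):                   # closed-form Gamma posteriors (Step 3)
            idx = z == arm
            post[arm] = rng.gamma(a + y[idx].sum(),
                                  1.0 / (b + expo * idx.sum()), size=M)
        p_f = np.mean(post[1] / post[0] < delta)   # Step 4
        hits += p_f >= phi0                        # Steps 5-6
    return hits / B                                # Step 7

# Power at a strong effect, and type I error at the boundary gamma = log(delta):
power = design_quantity(gamma_true=-0.4, n=100, delta=0.975, phi0=0.95)
type1 = design_quantity(gamma_true=np.log(0.975), n=100, delta=0.975, phi0=0.95)
```

Under these invented settings the estimated power at γ = −0.4 is much larger than the boundary type I error rate, mirroring the behavior of the full algorithm.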
6. Bayesian Design of the MDS Clinical Trial
6.1. The Historical MDS Data
A randomized, double-blind, placebo-controlled phase 2 study was conducted to evaluate the efficacy and safety of an investigational product (IP) in subjects with low or intermediate risk MDS receiving hypomethylating agents. MDS subjects receiving hypomethylating agents frequently develop thrombocytopenia (low platelet counts), which may result in death or disability caused by bleeding. Blood or platelet transfusions are used to prevent or treat subjects’ thrombocytopenia. The study assessed the effectiveness of the IP in increasing platelet counts during the MDS regimen. Overall, 69 subjects were enrolled in the study: 27 (39%) subjects received placebo (standard of care) and 42 (61%) subjects received the IP. Subjects were randomized and stratified by baseline platelet count (< 50 × 10^9/L vs. ≥ 50 × 10^9/L). Subjects were treated with IP weekly in combination with the hypomethylating agent for at least four cycles (around 4 months). Transfusion events were collected daily and bleeding events were collected weekly by diary. In order to properly account for the impact of platelet transfusions, any transfusion incidents that occurred within a 3-day interval are counted as one incident. Similarly, bleeding events occurring in the same organ system class on the same date are counted as one incident. Based on these phase 2 trial data, a summary of the composite endpoint by important baseline characteristics is shown in Table 1. The observed event rates with 95% confidence intervals (CIs) based on the Poisson model are presented. From Table 1, patients with baseline platelet count < 50 × 10^9/L on average experienced 68.7 events per 100 subject-weeks in the placebo cohort and 52.0 events per 100 subject-weeks in the IP cohort. Furthermore, lower event rates were observed in patients with baseline platelet count of at least 50 × 10^9/L, which was expected from a medical point of view: the higher one’s platelet count, the less likely one is to bleed.
Throughout the rest of the article, we use this dataset as the historical MDS data with a binary treatment variable (z0 = 1 if IP and z0 = 0 if placebo) and a baseline covariate (x0 = 1 if the platelet count is ≥ 50 × 10^9/L and x0 = 0 if the platelet count is < 50 × 10^9/L). We carry out a detailed Bayesian analysis of the historical MDS data in Appendix B. Under the best model according to DIC and LPML in Table A1, the posterior means (Est’s), standard deviations (SD’s), and 95% highest posterior density (HPD) intervals of γ, β, and λh0 are given in Table 2. These estimates of (γ, β, λh0, τ0) will be used as the guide values for the design of future trials.
Table 1. Observed composite event rates (platelet transfusion or bleeding) by treatment and baseline platelet count, with 95% CIs based on the Poisson model.

| Treatment | Baseline characteristics | Observed event rate (events per 100 subject-weeks) | 95% CI |
| --- | --- | --- | --- |
| Placebo | Platelet counts < 50 × 10^9/L | 68.7 | (58.9, 79.7) |
| Placebo | Platelet counts ≥ 50 × 10^9/L | 7.3 | (4.3, 11.7) |
| IP | Platelet counts < 50 × 10^9/L | 52.0 | (45.3, 59.5) |
| IP | Platelet counts ≥ 50 × 10^9/L | 8.8 | (5.9, 12.7) |
Table 2. Posterior means (Est), standard deviations (SD), and 95% HPD intervals of γ, β, τ0, and λh0 based on the historical MDS data.

| Parameter | Est | SD | 95% HPD | Parameter | Est | SD | 95% HPD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| γ | −0.17 | 0.40 | (−0.93, 0.61) | β | −1.92 | 0.40 | (−2.71, −1.14) |
| τ0 | 2.15 | 0.53 | (1.20, 3.18) | | | | |
| λ01 | 0.15 | 0.06 | (0.06, 0.28) | λ0,10 | 0.14 | 0.06 | (0.05, 0.24) |
| λ02 | 0.16 | 0.07 | (0.06, 0.29) | λ0,11 | 0.10 | 0.04 | (0.03, 0.18) |
| λ03 | 0.12 | 0.05 | (0.04, 0.21) | λ0,12 | 0.07 | 0.03 | (0.02, 0.13) |
| λ04 | 0.31 | 0.13 | (0.10, 0.56) | λ0,13 | 0.11 | 0.05 | (0.04, 0.20) |
| λ05 | 0.12 | 0.05 | (0.04, 0.21) | λ0,14 | 0.10 | 0.03 | (0.03, 0.19) |
| λ06 | 0.11 | 0.05 | (0.03, 0.19) | λ0,15 | 0.09 | 0.04 | (0.03, 0.17) |
| λ07 | 0.13 | 0.05 | (0.04, 0.23) | λ0,16 | 0.08 | 0.03 | (0.03, 0.15) |
| λ08 | 0.18 | 0.08 | (0.06, 0.33) | λ0,17 | 0.06 | 0.03 | (0.02, 0.12) |
| λ09 | 0.13 | 0.06 | (0.04, 0.24) | λ0,18 | 0.03 | 0.01 | (0.01, 0.05) |
6.2. The Design Setting
From Table 2, we see that many of the posterior estimates of the λ0k’s are similar. Based on this similarity, the 18 λ0k’s may be collapsed into eight groups, namely, (i) {λ01, λ02}, (ii) {λ03}, (iii) {λ04}, (iv) {λ05, λ06, λ07}, (v) {λ08}, (vi) {λ09, λ0,10}, (vii) {λ0,11, …, λ0,17}, and (viii) {λ0,18}. Then, by comparing the average values of the λ0k’s in these eight groups, we may further collapse them into four new groups, namely, (a) {(i), (ii)}, (b) {(iii)}, (c) {(iv), (v), (vi)}, and (d) {(vii), (viii)}. Thus, at the design stage, we collapse the partition with 18 pieces for the historical data into two smaller partitions with K = 4 and K = 8, and we then compare the designs with K = 8 to the designs with K = 4. We take the average values of the posterior estimates of the λ0k’s in these groups as the design values for the λk’s and, according to these groups, we collapse the respective s0k’s to form the sk’s, some of which are rounded to the nearest multiple of 7 days for scheduling convenience. Specifically, for K = 4, the design values for the sk’s are s1 = 14, s2 = 21, and s3 = 49, and the corresponding values of λ are λ1 = 0.14, λ2 = 0.31, λ3 = 0.13, and λ4 = 0.08. For K = 8, the design values for the sk’s are s1 = 9, s2 = 14, s3 = 16, s4 = 34, s5 = 38, s6 = 50, and s7 = 115, and the corresponding values of λ are λ1 = 0.16, λ2 = 0.12, λ3 = 0.31, λ4 = 0.12, λ5 = 0.18, λ6 = 0.13, λ7 = 0.09, and λ8 = 0.03. For both cases, the design value for β is β = −1.92, and various design values ranging from τ = 0 to τ = 3.18 are considered. These design values are then used to specify the point mass prior π(s)(β, λ, τ) in (8). In the predictive data generation algorithm, we further assume z ~ Bernoulli(0.5) and x ~ Bernoulli(0.44), where the proportion 0.44 was estimated from the historical MDS data. The superiority margin and the trial duration are set to δ = exp(−0.025) ≈ 0.975 and T = 225 days, respectively.
For the initial prior in (7), the hyperparameters are specified as af0k = bf0k = 0.001, k = 1, …, K, for λ, and af0τ = bf0τ = 0.001 for τ, so that a noninformative initial prior is specified for (γ, β, λ, τ). Similarly, we take (ahf0k, bhf0k) = (0.001, 0.01), k = 1, …, K0, and (ahf0τ, bhf0τ) = (0.001, 0.01) for the hyperparameters in the initial prior for λh0 and τ0. Furthermore, K0 = 8 in all computations of the partial borrowing power prior with a0 > 0. To ensure that the type I error rate is controlled under 0.05, we consider φ0 = 0.95 as well as φ0 = 0.96. We then investigate the powers and type I error rates under various sample sizes n, effect sizes γ, and values of τ to account for various degrees of dependency among the recurrent event times in the simulation study. In all computations of the Bayesian type I error rates and powers, we used B = 10,000 simulations and M = 2,500 Gibbs iterations with 200 burn-in iterations within each simulation.
6.3. The Simulation Results
Using a rate ratio of 1.2 for placebo versus IP for platelet transfusion or bleeding, the naïve sample size calculation under the Poisson assumption requires 88 and 116 subjects for a placebo-controlled randomized trial with 80% and 90% power, respectively, with a treatment duration of 225 days. We note that the naïve sample size calculation assumes a homogeneous Poisson process model with independent within-subject recurrent event times. These assumptions may lead to an under-estimation of the sample size.
According to Song et al. (2008), the sample size n for a two-sided alternative with type I error rate α0 and power 1 − α1 can be computed from a formula that depends on the effect size γ, the allocation proportions p1 and p2 with p1 + p2 = 1, the right-censoring times Cij, the estimated cumulative intensity functions Λ̂j(Cij|xi), and the total numbers of events Nij(Cij) over the intervals (0, Cij] for the ith subject in the jth treatment arm, i = 1, …, nj and j = 1, 2, where n = n1 + n2. As suggested by Song et al. (2008), we use the historical data to compute the required quantities D̂1a, D̂1g, and D̂2. Based on the historical MDS data, we obtain D̂1a = 5.88, D̂1g = 5.98, and D̂2 = 205.46. We note that D̂1a = 5.88 matches exactly the average number of events in the historical MDS data, and that D̂1a and D̂1g are quite close. We take p1 = p2 = 0.5 and α0 = 0.05. Then, the frequentist sample sizes are 179 and 240 to achieve powers of 80% and 90%, respectively, for γ = −0.17; and 130 and 173 to achieve powers of 80% and 90%, respectively, for γ = −0.20. Finally, we would like to mention that the frequentist sample size n is derived under a two-sided alternative hypothesis, which is different from the one-sided alternative used for superiority testing. We also note that this calculation assumes independence among the recurrent event times.
Using the proposed Bayesian design, we first compute the type I error rates and powers under various design values with no incorporation of the historical MDS data (a0 = 0). The results are reported in Table 3. Under a non-homogeneous Poisson process model (i.e., τ = 0), for the effect size γ = −0.2 and K = 4, the powers were 0.89 and 0.87 for φ0 = 0.95 and φ0 = 0.96, respectively, for n = 88; and 0.95 and 0.94 for φ0 = 0.95 and φ0 = 0.96, respectively, for n = 116. However, when K = 8, a sample size of n = 116 with effect size γ = −0.2 is needed to achieve 80% power. In this case (τ = 0), the Bayesian sample sizes are comparable to those calculated using the naïve sample size calculation under the Poisson assumption and are smaller than those using the approach of Song et al. (2008). We further observe from Table 3 that (i) the power increases when the sample size n increases or the effect size (namely, the absolute value of γ) becomes larger; and (ii) the power decreases when τ, φ0, or K increases. Specifically, when K = 4, τ = 1.5, and γ = −0.4, the powers are 0.79, 0.85, and 0.88 for n = 300, 350, and 400, respectively, for φ0 = 0.95, and 0.76 for n = 300 and φ0 = 0.96. Also, when τ = 1.5, γ = −0.4, n = 300, and φ0 = 0.95, the power is 0.78 for K = 8. In addition, when K = 4, n = 300, γ = −0.3, and φ0 = 0.95, the powers are 0.90 for τ = 0.5, 0.75 for τ = 1.0, and 0.57 for τ = 1.5. Thus, the design value of τ has the most impact on the power of the Bayesian design. Compared to τ, the differences in the powers are much smaller between φ0 = 0.95 and φ0 = 0.96 or between K = 4 and K = 8. We note that with no incorporation of the historical MDS data (a0 = 0), the type I error rates are controlled at 5% for φ0 = 0.95 and 4% for φ0 = 0.96 in all cases.
We also note that with no incorporation of historical data, n = 300 is sufficient to achieve 80% power when τ = 1.0 and γ = −0.35, whereas a larger sample size (n = 400 for τ = 2.15 and n = 450 for τ = 3.18) together with a larger effect size (γ = −0.4 for τ = 2.15 and γ = −0.45 for τ = 3.18) is needed to achieve 80% power.
Table 3. Bayesian powers under various design values with no incorporation of the historical MDS data (a0 = 0).

| K | Design value of τ | n | Effect size (γ) | Power (ϕ0 = 0.95) | Power (ϕ0 = 0.96) |
| --- | --- | --- | --- | --- | --- |
| 4 | 0 | 88 | −0.17 | 0.78 | 0.74 |
| | | | −0.2 | 0.89 | 0.87 |
| | | 116 | −0.17 | 0.86 | 0.83 |
| | | | −0.2 | 0.95 | 0.94 |
| | 0.5 | 250 | −0.3 | 0.85 | 0.82 |
| | | 300 | −0.3 | 0.90 | 0.88 |
| | 1.0 | 300 | −0.3 | 0.75 | 0.67 |
| | | 300 | −0.35 | 0.82 | 0.80 |
| | | 300 | −0.4 | 0.91 | 0.89 |
| | 1.5 | 300 | −0.3 | 0.57 | 0.53 |
| | | 300 | −0.35 | 0.69 | 0.66 |
| | | 300 | −0.4 | 0.79 | 0.76 |
| | | 350 | −0.3 | 0.62 | 0.60 |
| | | 350 | −0.35 | 0.75 | 0.72 |
| | | 350 | −0.4 | 0.85 | 0.83 |
| | | 400 | −0.3 | 0.68 | 0.64 |
| | | 400 | −0.35 | 0.80 | 0.76 |
| | | 400 | −0.4 | 0.88 | 0.86 |
| | 2.15 | 400 | −0.4 | 0.79 | 0.75 |
| | | 400 | −0.45 | 0.87 | 0.85 |
| | 3.18 | 400 | −0.4 | 0.65 | 0.62 |
| | | 400 | −0.45 | 0.75 | 0.72 |
| | | 400 | −0.50 | 0.82 | 0.78 |
| | | 450 | −0.4 | 0.69 | 0.66 |
| | | 450 | −0.45 | 0.79 | 0.76 |
| | | 450 | −0.50 | 0.85 | 0.83 |
| 8 | 0 | 88 | −0.17 | 0.65 | 0.61 |
| | | | −0.2 | 0.78 | 0.75 |
| | | 116 | −0.17 | 0.76 | 0.73 |
| | | | −0.2 | 0.88 | 0.86 |
| | 0.5 | 250 | −0.3 | 0.82 | 0.79 |
| | | 300 | −0.3 | 0.88 | 0.86 |
| | 1.0 | 300 | −0.3 | 0.69 | 0.65 |
| | | 300 | −0.35 | 0.82 | 0.77 |
| | | 300 | −0.4 | 0.89 | 0.87 |
| | 1.5 | 300 | −0.3 | 0.56 | 0.52 |
| | | 300 | −0.35 | 0.68 | 0.64 |
| | | 300 | −0.4 | 0.78 | 0.75 |
| | | 350 | −0.3 | 0.62 | 0.58 |
| | | 350 | −0.35 | 0.74 | 0.70 |
| | | 350 | −0.4 | 0.84 | 0.81 |
| | | 400 | −0.3 | 0.67 | 0.63 |
| | | 400 | −0.35 | 0.78 | 0.75 |
| | | 400 | −0.4 | 0.88 | 0.85 |
| | 2.15 | 400 | −0.4 | 0.78 | 0.74 |
| | | 400 | −0.45 | 0.86 | 0.83 |
| | 3.18 | 400 | −0.4 | 0.63 | 0.60 |
| | | 400 | −0.45 | 0.73 | 0.69 |
| | | 400 | −0.50 | 0.81 | 0.78 |
| | | 450 | −0.4 | 0.68 | 0.64 |
| | | 450 | −0.45 | 0.77 | 0.74 |
| | | 450 | −0.50 | 0.84 | 0.81 |
Type I error rates are around 5% for φ0 = 0.95 and around 4% for φ0 = 0.96 in all cases.
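The simulation-based computation of these design quantities can be sketched in miniature. The snippet below is a simplified illustration, not the paper's implementation: it replaces the piecewise frailty model and Gibbs sampler with a homogeneous Poisson model and a normal approximation to the posterior of γ, and the parameter values `base_rate` and `follow_up` are hypothetical. The logic mirrors the design criterion: generate B predictive trials under a point-mass sampling prior, declare success when the posterior probability P(γ < 0 | D) exceeds φ0, and report the rejection rate as the power (under the alternative) or type I error rate (under the null γ = 0).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def post_prob_gamma_neg(y_c, y_t):
    """P(gamma < 0 | data) under a normal approximation to the posterior
    of the log rate ratio gamma (a stand-in for the Gibbs sampler)."""
    gamma_hat = np.log(y_t.sum()) - np.log(y_c.sum())  # equal follow-up in both arms
    se = sqrt(1.0 / y_t.sum() + 1.0 / y_c.sum())
    return 0.5 * (1.0 + erf(-gamma_hat / (se * sqrt(2.0))))

def rejection_rate(gamma, n=300, base_rate=0.3, follow_up=8.0,
                   phi0=0.95, B=1000):
    """Simulate B predictive trials; reject when P(gamma < 0 | D) > phi0."""
    hits = 0
    for _ in range(B):
        # predictive data: Poisson event counts for control and treated arms
        y_c = rng.poisson(base_rate * follow_up, size=n // 2)
        y_t = rng.poisson(base_rate * np.exp(gamma) * follow_up, size=n // 2)
        if post_prob_gamma_neg(y_c, y_t) > phi0:
            hits += 1
    return hits / B

power = rejection_rate(gamma=-0.4)  # sampling prior mass at the alternative
type1 = rejection_rate(gamma=0.0)   # point mass at the null gamma = 0
```

With a frailty (τ > 0) the event counts are over-dispersed and the normal-approximation posterior here would be too narrow, which is one way to see why the powers in Table 3 fall as τ grows.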
Next, we compute the type I error rates and powers under various design values with incorporation of the historical MDS data using the partial borrowing power prior given in (6). The results are shown in Table 4. We see from Table 4 that for given τ, γ, n, and φ0, both the power and the type I error rate increase as a0 increases when K = 4. Compared to the results in Table 3, we see some gain in power when the historical MDS data are incorporated. For instance, when K = 4, τ = 1.5, γ = −0.4, n = 300, and φ0 = 0.96, the power and type I error rate are 0.79 and 0.05, respectively, for a0 = 0.15, while they are 0.76 and 0.04, respectively, without incorporation of the historical MDS data. When K = 4, τ = 2.15, γ = −0.4, and n = 400, the powers and type I error rates are 0.81 and 0.05, respectively, when φ0 = 0.95 and a0 = 0.1, and 0.79 and 0.05, respectively, when φ0 = 0.96 and a0 = 0.15. Under the same setting, the powers are 0.79 for φ0 = 0.95 and 0.75 for φ0 = 0.96 when a0 = 0. Thus, we see about a 2% increase in power when φ0 = 0.95 and a 4% increase in power when φ0 = 0.96.

We further see that when K = 8, τ = 1.5, γ = −0.4, and n = 300, the powers and type I error rates are 0.80 and 0.05, respectively, for φ0 = 0.95, and 0.76 and 0.04, respectively, for φ0 = 0.96 when a0 = 0.15; and 0.78 and 0.05, respectively, for φ0 = 0.95, and 0.75 and 0.04, respectively, for φ0 = 0.96 when a0 = 0. When the model becomes more complex, namely, K = 8, the historical data play a minimal role in improving power under the various design settings, but the type I error rates remain well controlled at 5%. Since the sample size (n0 = 69) of the historical MDS data is small, we do not expect the incorporation of the historical MDS data to lead to a substantial gain in power. As demonstrated in Chen et al. (2011) and Ibrahim et al. (2012), a larger gain in power can be obtained when a larger historical dataset is available.
The general strategy for selecting a0 is to find the largest value of a0 such that the type I error rate is still controlled at about 5%. For example, when K = 4, τ = 2.15, n = 400, and φ0 = 0.95, the largest value of a0 is about 0.1 since when a0 = 0.15, the type I error rate exceeds 5%.
Table 4. Powers and type I error rates of the proposed Bayesian design with incorporation of the historical MDS data via the partial borrowing power prior.

| K | Design value of τ | n | Effect size (γ) | a0 | Power (φ0 = 0.95) | Type I error (φ0 = 0.95) | Power (φ0 = 0.96) | Type I error (φ0 = 0.96) |
|---|---|---|---|---|---|---|---|---|
| 4 | 0.5 | 250 | −0.3 | 0.05 | 0.86 | 0.05 | 0.83 | 0.04 |
| 4 | 0.5 | 250 | −0.3 | 0.2 | 0.87 | 0.06 | 0.84 | 0.05 |
| 4 | 0.5 | 250 | −0.3 | 0.5 | 0.90 | 0.08 | 0.89 | 0.07 |
| 4 | 0.5 | 250 | −0.3 | 1.0 | 0.95 | 0.16 | 0.93 | 0.13 |
| 4 | 1.0 | 300 | −0.3 | 0.05 | 0.72 | 0.05 | 0.68 | 0.04 |
| 4 | 1.0 | 300 | −0.35 | 0.05 | 0.83 | | 0.80 | |
| 4 | 1.0 | 300 | −0.3 | 0.1 | 0.73 | 0.06 | 0.69 | 0.05 |
| 4 | 1.0 | 300 | −0.35 | 0.1 | 0.85 | | 0.82 | |
| 4 | 1.0 | 300 | −0.3 | 0.2 | 0.76 | 0.07 | 0.72 | 0.06 |
| 4 | 1.0 | 300 | −0.35 | 0.2 | 0.85 | | 0.83 | |
| 4 | 1.0 | 300 | −0.3 | 0.5 | 0.82 | 0.10 | 0.79 | 0.09 |
| 4 | 1.0 | 300 | −0.35 | 0.5 | 0.91 | | 0.89 | |
| 4 | 1.0 | 300 | −0.3 | 1.0 | 0.90 | 0.18 | 0.88 | 0.15 |
| 4 | 1.0 | 300 | −0.35 | 1.0 | 0.95 | | 0.92 | |
| 4 | 1.5 | 300 | −0.4 | 0.05 | 0.80 | 0.05 | 0.78 | 0.04 |
| 4 | 1.5 | 300 | −0.4 | 0.1 | 0.82 | 0.06 | 0.79 | 0.04 |
| 4 | 1.5 | 300 | −0.4 | 0.15 | 0.82 | 0.06 | 0.79 | 0.05 |
| 4 | 1.5 | 300 | −0.4 | 0.2 | 0.84 | 0.07 | 0.81 | 0.06 |
| 4 | 1.5 | 300 | −0.4 | 0.5 | 0.89 | 0.11 | 0.87 | 0.09 |
| 4 | 1.5 | 300 | −0.4 | 1.0 | 0.96 | 0.21 | 0.94 | 0.18 |
| 4 | 1.5 | 350 | −0.35 | 0.1 | 0.77 | 0.06 | 0.74 | 0.05 |
| 4 | 1.5 | 350 | −0.35 | 0.2 | 0.79 | 0.07 | 0.76 | 0.06 |
| 4 | 1.5 | 350 | −0.35 | 0.5 | 0.86 | 0.11 | 0.84 | 0.09 |
| 4 | 1.5 | 350 | −0.35 | 1.0 | 0.93 | 0.21 | 0.91 | 0.18 |
| 4 | 2.15 | 400 | −0.4 | 0.05 | 0.80 | 0.05 | 0.77 | 0.04 |
| 4 | 2.15 | 400 | −0.4 | 0.1 | 0.81 | 0.05 | 0.78 | 0.04 |
| 4 | 2.15 | 400 | −0.4 | 0.15 | 0.82 | 0.07 | 0.79 | 0.05 |
| 4 | 2.15 | 400 | −0.4 | 0.2 | 0.83 | 0.07 | 0.80 | 0.06 |
| 4 | 2.15 | 400 | −0.4 | 0.5 | 0.89 | 0.11 | 0.87 | 0.09 |
| 4 | 2.15 | 400 | −0.4 | 1.0 | 0.95 | 0.22 | 0.94 | 0.19 |
| 4 | 3.18 | 400 | −0.45 | 0.1 | 0.77 | 0.06 | 0.73 | 0.05 |
| 4 | 3.18 | 400 | −0.45 | 0.2 | 0.80 | 0.07 | 0.77 | 0.06 |
| 4 | 3.18 | 400 | −0.45 | 0.5 | 0.88 | 0.13 | 0.85 | 0.10 |
| 4 | 3.18 | 400 | −0.45 | 1.0 | 0.95 | 0.26 | 0.94 | 0.23 |
| 8 | 0.5 | 250 | −0.3 | 0.2 | 0.82 | 0.05 | 0.79 | 0.04 |
| 8 | 0.5 | 250 | −0.3 | 0.5 | 0.83 | 0.05 | 0.80 | 0.04 |
| 8 | 0.5 | 250 | −0.3 | 1.0 | 0.84 | 0.06 | 0.81 | 0.05 |
| 8 | 1.0 | 300 | −0.3 | 0.2 | 0.68 | 0.05 | 0.65 | 0.05 |
| 8 | 1.0 | 300 | −0.35 | 0.2 | 0.81 | | 0.78 | |
| 8 | 1.0 | 300 | −0.3 | 0.5 | 0.69 | 0.05 | 0.65 | 0.04 |
| 8 | 1.0 | 300 | −0.35 | 0.5 | 0.82 | | 0.78 | |
| 8 | 1.0 | 300 | −0.3 | 1.0 | 0.69 | 0.05 | 0.65 | 0.04 |
| 8 | 1.0 | 300 | −0.35 | 1.0 | 0.82 | | 0.78 | |
| 8 | 1.5 | 300 | −0.4 | 0.1 | 0.79 | 0.04 | 0.75 | 0.03 |
| 8 | 1.5 | 300 | −0.4 | 0.15 | 0.80 | 0.05 | 0.76 | 0.04 |
| 8 | 1.5 | 300 | −0.4 | 0.5 | 0.79 | 0.05 | 0.75 | 0.04 |
| 8 | 1.5 | 300 | −0.4 | 1.0 | 0.78 | 0.06 | 0.74 | 0.05 |
| 8 | 2.15 | 400 | −0.4 | 0.1 | 0.78 | 0.05 | 0.75 | 0.04 |
| 8 | 2.15 | 400 | −0.4 | 0.2 | 0.78 | 0.05 | 0.75 | 0.04 |
| 8 | 2.15 | 400 | −0.4 | 0.5 | 0.78 | 0.05 | 0.75 | 0.04 |
| 8 | 2.15 | 400 | −0.4 | 1.0 | 0.78 | 0.05 | 0.74 | 0.04 |
| 8 | 3.18 | 400 | −0.45 | 0.2 | 0.73 | 0.05 | 0.69 | 0.04 |
| 8 | 3.18 | 400 | −0.5 | 0.2 | 0.81 | | 0.78 | |
| 8 | 3.18 | 400 | −0.45 | 0.5 | 0.73 | 0.05 | 0.69 | 0.04 |
| 8 | 3.18 | 400 | −0.5 | 0.5 | 0.81 | | 0.78 | |
| 8 | 3.18 | 400 | −0.45 | 1.0 | 0.73 | 0.05 | 0.69 | 0.04 |
| 8 | 3.18 | 400 | −0.5 | 1.0 | 0.80 | | 0.76 | |
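The strategy for selecting a0 described above amounts to a one-dimensional grid search: take the largest a0 whose simulated type I error rate is still controlled at about 5%. A minimal sketch follows, with the simulator stubbed by the K = 4, τ = 2.15, n = 400, φ0 = 0.95 type I error rates reported in Table 4; in a real design each grid point would be estimated by simulation as in Section 5.

```python
def calibrate_a0(type1_for_a0, grid, alpha_max=0.05, tol=0.005):
    """Return the largest a0 on the grid whose simulated type I error
    rate is still controlled at about alpha_max (within tol).
    Assumes the type I error rate is nondecreasing in a0, as in Table 4."""
    best = 0.0
    for a0 in grid:
        if type1_for_a0(a0) <= alpha_max + tol:
            best = a0
        else:
            break  # larger a0 would only inflate the type I error further
    return best

# Stub simulator: type I error rates from Table 4 for K = 4, tau = 2.15,
# n = 400, phi0 = 0.95 (a0 = 0 value from Table 3).
table4_type1 = {0.0: 0.05, 0.05: 0.05, 0.1: 0.05, 0.15: 0.07,
                0.2: 0.07, 0.5: 0.11, 1.0: 0.22}
a0_star = calibrate_a0(table4_type1.get,
                       grid=(0.0, 0.05, 0.1, 0.15, 0.2, 0.5, 1.0))
# a0_star == 0.1, matching the choice discussed in the text
```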
We mention that under the design setting discussed in this section, the average numbers of events across the 10,000 simulated datasets range from 11.4 to 13.5 for K = 4 and from 8.1 to 9.6 for K = 8. The difference in the average number of events is one factor leading to higher power under the designs with K = 4. Another factor leading to lower power under the designs with K = 8 is that more λ0k parameters need to be estimated in the posterior inference once the predictive data Dn are generated. In estimating the baseline intensity for the historical data, the posterior estimate of λ04 is much higher than the other λ0k's, as shown in Table 2. Both the historical data and the clinical literature (e.g., Silverman et al., 2006) suggest that the hematology values tend to reach their nadir during the second or third week of the cycle under the study setting. We therefore set λ02 for K = 4 and λ03 for K = 8 to be higher than the other λ0k's to better characterize the simulated data.

We have also conducted additional simulations to examine whether the power and type I error rate are sensitive to the design value of λ02 under the designs with K = 4. Specifically, we use λ02 = 0.15 instead of λ02 = 0.31. The resulting powers and type I error rates for τ = 1.5 and n = 300 are shown in Table 5. The values for a0 = 0 in Table 5 are very similar to those in Table 3, and the values for a0 = 0.05, 0.1, 0.15, 0.2, 0.5, and 1.0 are very similar to those in Table 4. Thus, the power and type I error rate are quite robust to the design value of λ02. In addition, we carried out additional simulations to compute the powers and type I error rates under non-point mass sampling priors for the key parameters γ, τ, and β. From Table A2 in Appendix C, we see that the powers under non-point mass sampling priors are lower than those in Table 4.
Table 5. Powers and type I error rates for K = 4, τ = 1.5, and n = 300 with λ02 = 0.15.

| Effect size (γ) | a0 | Power (φ0 = 0.95) | Type I error (φ0 = 0.95) | Power (φ0 = 0.96) | Type I error (φ0 = 0.96) |
|---|---|---|---|---|---|
| −0.3 | 0 | 0.57 | 0.05 | 0.53 | 0.04 |
| −0.35 | 0 | 0.69 | | 0.66 | |
| −0.4 | 0 | 0.80 | | 0.76 | |
| −0.4 | 0.05 | 0.81 | 0.05 | 0.78 | 0.04 |
| −0.4 | 0.1 | 0.81 | 0.06 | 0.79 | 0.05 |
| −0.4 | 0.15 | 0.83 | 0.06 | 0.80 | 0.05 |
| −0.4 | 0.2 | 0.84 | 0.07 | 0.81 | 0.06 |
| −0.4 | 0.5 | 0.89 | 0.12 | 0.87 | 0.10 |
| −0.4 | 1.0 | 0.96 | 0.21 | 0.94 | 0.18 |
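The predictive data generation underlying these simulations rests on a piecewise-constant baseline intensity λ0k over K intervals, multiplied by a gamma frailty with variance τ and a treatment effect exp(γ·trt). A minimal per-subject sketch follows. It is an illustration, not the paper's generation algorithm: the interval cut points and all λ0k values except λ02 = 0.31 (the value cited in the text for K = 4) are hypothetical, and censoring and the transfusion process are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_subject(lam0, cuts, gamma_trt, trt, tau, rng):
    """Simulate one subject's recurrent-event times on [0, cuts[-1]] under
    a piecewise-constant baseline intensity lam0 (one value per interval
    ending at cuts[k]), a gamma frailty with mean 1 and variance tau, and
    a multiplicative treatment effect exp(gamma_trt * trt)."""
    # gamma frailty: shape 1/tau, scale tau -> mean 1, variance tau
    w = rng.gamma(1.0 / tau, tau) if tau > 0 else 1.0
    events = []
    prev = 0.0
    for k, lam in enumerate(lam0):
        rate = w * lam * np.exp(gamma_trt * trt)
        # within each interval the process is homogeneous Poisson, so the
        # count is Poisson and the times are uniform on the interval
        n_k = rng.poisson(rate * (cuts[k] - prev))
        events.extend(rng.uniform(prev, cuts[k], size=n_k))
        prev = cuts[k]
    return sorted(events)

# Illustrative values: K = 4 intervals over an 8-week cycle, with the
# second interval elevated (lam02 = 0.31, as in the K = 4 design);
# the other lam0k values and the cut points are hypothetical.
lam0 = [0.10, 0.31, 0.12, 0.10]
cuts = [2.0, 4.0, 6.0, 8.0]
times = simulate_subject(lam0, cuts, gamma_trt=-0.4, trt=1, tau=1.5, rng=rng)
```

Conditional on the frailty w, the interval counts are independent Poisson draws, which is what makes the Gibbs updates for the λ0k parameters conjugate under gamma priors.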
While the power, the type I error rate, and the sample size are the key design quantities, a detailed examination and discussion of the posterior estimates of the parameters in the recurrent events frailty model are given in Appendix D. From Table A3 in Appendix D, we see that the average of the posterior means of γ is very close to the corresponding design value, the coverage probability (CP) of γ is close to 95%, and the average of the posterior standard deviations (SD) and the simulation standard error (SE) are almost identical. We also see from the same table that the posterior estimates of β, τ, and λ all have similar frequentist properties. We have examined the posterior estimates under other design values of (γ, β, τ, λ) and other sample sizes as well, and similar results are obtained. These results empirically confirm that the simulation settings, such as the number (B) of simulations and the Gibbs sample size, are adequate and that the Gibbs sampling algorithm performs well.
7. Discussion
In this article, we only considered the partial borrowing power prior to incorporate the historical recurrent events data. This prior is attractive since strength from the historical data is borrowed only through certain common parameters, while the models for the current and historical data are allowed to have different parameters. The partial borrowing power prior therefore provides greater flexibility in accommodating heterogeneity between the current and historical data. The proposed methodology can be extended to allow for other types of priors discussed in Ibrahim and Chen (2000) and Hobbs, Sargent, and Carlin (2012); its empirical performance under these priors needs to be fully and thoroughly evaluated in future work. In addition, the proposed method can be generalized to handle more complex endpoints, including more general event dependence structures, multivariate recurrent events with informative censoring, and time-dependent covariates. Specifically, joint frailty models as proposed in Zeng et al. (2014) can be adopted to model multivariate recurrent events and informative censoring times simultaneously, where event-specific frailties and a shared frailty across different types of events can be built into the models to account for shared information within the same subject. The sampling algorithm would need to be modified accordingly to sample efficiently from the posterior distribution of the model parameters; however, a similar partial borrowing power prior can be developed to borrow information from historical trials in designing a future trial. Additional discussion and implementation details of the proposed methodology are given in Appendix E.
Acknowledgements
We would like to thank the Editor, the Associate Editor, and the two anonymous reviewers for their very helpful comments and suggestions, which have led to an improved version of the paper. Dr M.-H. Chen and Dr J. G. Ibrahim’s research was partially supported by NIH grants #GM 70335 and #CA 74015.
Supplementary Materials
Web Appendices A, B, C, D, and E along with Tables A1, A2, and A3 referenced in Sections 5, 6, and 7 and the FORTRAN 95 code for Table 3 in a zip file including a README file are available with this paper at the Biometrics website on Wiley Online Library.
References
- Bernardo MVP, Harrington DP. Sample size calculation for the two-sample problem using the multiplicative intensity model. Statistics in Medicine. 2001;20:557–579. doi:10.1002/sim.693
- Chen L, Ibrahim JG, Chu H. Sample size determination in shared frailty models for multivariate time-to-event data. Journal of Biopharmaceutical Statistics. 2014;24:817–833. doi:10.1080/10543406.2014.901346
- Chen M-H, Ibrahim JG, Lam P, Yu A, Zhang Y. Bayesian design of non-inferiority trials for medical devices using historical data. Biometrics. 2011;67:1163–1170. doi:10.1111/j.1541-0420.2011.01561.x
- Cook RJ. The design and analysis of randomized trials with recurrent events. Statistics in Medicine. 1995;14:2081–2098. doi:10.1002/sim.4780141903
- Hobbs BP, Sargent DJ, Carlin BP. Commensurate priors for incorporating historical information in clinical trials using general and generalized linear models. Bayesian Analysis. 2012;7:1–36. doi:10.1214/12-BA722
- Hughes MD. Power considerations for clinical trials using multivariate time-to-event data. Statistics in Medicine. 1997;16:865–882. doi:10.1002/(sici)1097-0258(19970430)16:8<865::aid-sim541>3.0.co;2-d
- Ibrahim JG, Chen M-H. Power prior distributions for regression models. Statistical Science. 2000;15:46–60.
- Ibrahim JG, Chen M-H, Xia HA, Liu T. Bayesian meta-experimental design: evaluating cardiovascular risk in new antidiabetic therapies to treat type 2 diabetes. Biometrics. 2012;68:578–586. doi:10.1111/j.1541-0420.2011.01679.x
- Lawless JF, Nadeau JC. Nonparametric estimation of cumulative mean functions for recurrent events. Technometrics. 1995;37:158–168.
- Malcovati L, Porta MG, Pascutto C, Invernizzi R, Boni M, Travaglino E, Passamonti F, Arcaini L, Maffioli M, Bernasconi P, Lazzarino M, Cazzola M. Prognostic factors and life expectancy in myelodysplastic syndromes classified according to WHO criteria: A basis for clinical decision making. Journal of Clinical Oncology. 2005;23:7594–7603. doi:10.1200/JCO.2005.01.7038
- Matsui S. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data. Statistics in Medicine. 2005;24:1339–1356. doi:10.1002/sim.2011
- Silverman LR, McKenzie DR, Peterson BL, Holland JF, Backstrom JT, Beach CL, Larson RA. Further analysis of trials with azacitidine in patients with myelodysplastic syndrome: studies 8421, 8921, and 9221 by the Cancer and Leukemia Group B. Journal of Clinical Oncology. 2006;24:3895–3903. doi:10.1200/JCO.2005.05.4346
- Song R, Kosorok MR, Cai J. Robust covariate-adjusted log-rank statistics and corresponding sample size formula for recurrent events data. Biometrics. 2008;64:741–750. doi:10.1111/j.1541-0420.2007.00948.x
- Wang F, Gelfand AE. A simulation-based approach to Bayesian sample size determination for performance under a given model and for separating models. Statistical Science. 2002;17:193–208.
- Zeng D, Ibrahim JG, Chen M-H, Hu K, Jia C. Multivariate recurrent events in the presence of multivariate informative censoring with applications to bleeding and transfusion events in myelodysplastic syndrome. Journal of Biopharmaceutical Statistics. 2014;24:429–442. doi:10.1080/10543406.2013.860159