ABSTRACT
Decentralized clinical trials (DCTs) extend trial activities beyond traditional sites, enhancing access, convenience, efficiency, and result generalizability. They are particularly promising for chronic conditions like diabetes and obesity, which require longer study durations to evaluate drug effects. However, decentralized data collection raises concerns about increased variability and potential biases. This paper presents a novel Bayesian integrated learning procedure to analyze dose‐response relationships using longitudinal data from a phase II DCT that combines centralized and decentralized data collection. We generalize a parametric exponential decay model to handle mixed data sources and apply Bayesian spike‐and‐slab priors to address biases and uncertainties from decentralized measurements. Our model enables data‐adaptive integration of information from both centralized and decentralized sources. Through simulations and sensitivity analyses, we show that the proposed approach achieves favorable performance across various scenarios. Notably, the method matches the efficiency of traditional trials when decentralized data collection introduces no additional variability or error. Even when such issues arise, it remains less biased and more efficient than naïve methods that rely solely on centralized data or simply pool data from both sources.
Keywords: data integration, decentralized clinical trial, dose‐response model, phase II trial, spike‐and‐slab prior
1. Introduction
Decentralized clinical trials (DCTs) refer to clinical trials that include decentralized elements where trial‐related activities occur at locations other than traditional clinical trial sites [1]. This is typically achieved by leveraging digital technologies and innovative methods, such as remote monitoring devices, electronic patient‐reported outcomes, telehealth consultations, and home‐based visits by healthcare professionals. Compared to traditional clinical trials, DCTs offer greater convenience for participants and caregivers and provide access to more diverse patient populations, particularly those in remote or underserved areas. DCTs also reduce the burden on clinicians, accelerate patient recruitment, and improve overall trial efficiency [2]. Additionally, the increased connection between patients and clinical investigators enables enriching datasets through more frequent or even continuous data collection in real‐world settings [3].
DCTs can be fully decentralized, with all activities conducted remotely, or follow a hybrid model, where some activities require in‐person visits to traditional trial sites while others occur at alternative locations, such as participants' homes, local clinics, local pharmacies, or mobile clinical units. The hybrid DCT model is particularly well‐suited for trials with longitudinal endpoints and extended follow‐up periods. Leveraging the advantages of DCTs, participants can complete the majority of visits remotely while attending in person for several key assessments, such as the first and last visits. This approach not only reduces the burden on both clinicians and participants but also improves patient adherence. Evaluating longitudinal response is critical in certain disease areas for assessing the relationship between treatment and the development of disease [4]. For example, diabetes and obesity treatments typically require several weeks or months to assess drug efficacy. Another example is chronic kidney disease, where proteinuria, a clinical endpoint that requires repeated measurement, is used in dose‐finding trials for early‐phase efficacy evaluation [5].
Despite these advantages, DCTs present several challenges. Ensuring participant safety is one of the most significant concerns, particularly when physical examinations and face‐to‐face interactions are limited [3]. Operationally, securing data storage and transmission poses additional risks. Decentralized data collection also introduces statistical concerns about data quality, including increased variability, potential biases or measurement errors, and issues related to the accuracy and validation of collected data [6]. Regulatory requirements may vary across countries or regions, complicating the implementation of DCTs. Challenges such as bridging the digital divide and achieving regulatory acceptance of digital endpoints persist [7]. To the best of our knowledge, statistical methods to address these emerging challenges, particularly increased variability and potential measurement errors in DCTs, are notably lacking.
In this paper, we focus on the hybrid DCT model [8, 9] in which the same patient may have both onsite and offsite measurements. Operationally, the distinction between DCTs and multicenter trials lies in trial conduct: DCTs allow certain patient activities, including data collection, to occur remotely, whereas multicenter trials rely entirely on in‐person site infrastructure across multiple centers. Statistically, the key difference is in the assumption and treatment of heterogeneity. In multicenter trials, heterogeneity arises primarily from regional differences, such as epidemiology, medical practices, or population‐specific drug metabolism. No single site's data are regarded as inherently more reliable, and the estimand is the average treatment effect across centers, typically modeled with center‐specific random effects. By contrast, DCTs introduce a different source of heterogeneity: decentralized measurements may carry additional uncertainty or bias because they lack centralized quality monitoring [1]. Consequently, hybrid DCTs typically regard centralized data as the primary basis for defining the estimand. The methodological challenge, however, is that each patient also provides decentralized measurements, raising the important question of whether incorporating these data can improve the efficiency of estimation.
In practice, analyses of hybrid DCTs often pool onsite and offsite data without distinction, a practice that can obscure source‐specific heterogeneity and compromise validity—an issue also supported by our simulation studies. As a real‐trial example, Figure 3 of a recent phase 1b/2a randomized trial in overweight or obese patients [10] demonstrated a “zig‐zag” pattern in the weight‐loss curve, largely attributable to differences between onsite and at‐home data collection. Similarly, a simulation study [6] showed that failing to account for such mixed data heterogeneity can lead to substantial bias. This paper aims to fill this gap by developing an innovative Bayesian integrated learning method for analyzing hybrid DCTs, integrating both traditional centralized and novel decentralized data measurements. Specifically, we consider a phase II dose‐ranging decentralized trial for a chronic condition, where a clinical efficacy endpoint is collected longitudinally through both centralized and decentralized methods. In such trials, fully evaluating the longitudinal dose‐response relationship and carefully selecting an appropriate dose are pivotal to the success of drug development [11]. Due to the relatively small or moderate sample sizes of phase II dose‐ranging trials, the Emax model is one of the most commonly used models to characterize dose‐response relationships [12, 13]. Additionally, dose‐response modeling can also be based on pharmacokinetic and pharmacodynamic characteristics to enhance precision [14], with the appropriate model form tailored to the specific drug. To evaluate the dose‐response relationship with repeated measurements, Fu and Manner [15] proposed the parametric integrated two‐component prediction (ITP) model (also referred to as the exponential decay model), which was further developed by Duan et al. [16]; Li and Fu [17]; Payne et al. [18]; Qu [19].
The family of ITP models has been applied in several real‐world phase II trials, for example, see Frias et al. [20]. While our paper focuses on the ITP model, we acknowledge the availability of other longitudinal dose‐response models [21, 22, 23, 24, 25] and believe that our proposed approach could also be applied to these models.
FIGURE 3. Average bias, average root mean square error (RMSE), and average coverage probability and length of pointwise 95% credible intervals for the estimate of the area under the time‐response curve (AUC) using the four integrated two‐component prediction (ITP) models under Scenarios 1 to 8.
In some clinical trials, certain measurements can be collected frequently at home or locally and infrequently on site. For example, body weight can be measured weekly at home via a standardized scale and monthly on site. An important question is how to utilize both types of measurements in estimating the dose‐response relationship in phase II studies. Research has shown that remote weight measurements can be subject to bias and increased variability [26]. This paper develops a novel statistical model for hybrid DCTs, combining longitudinal data evaluation with dose‐response analysis and providing flexibility to address biases and uncertainties arising from decentralized data collection. Using the parametric ITP model as an example, we extend it to account for the unique characteristics and challenges posed by decentralized measurements. Additionally, we propose a Bayesian integrated learning approach with spike‐and‐slab priors to adaptively adjust for biases and uncertainties associated with decentralized data.
The remainder of the paper is organized as follows: In Section 2, we review the ITP model and introduce the proposed approach for multidose longitudinal trials with outcomes collected under a hybrid data collection model. Section 3 applies the proposed method to reanalyze the motivating trial. In Section 4, we assess the operating characteristics of the proposed model through simulation studies and perform sensitivity analyses to evaluate the robustness of the design. Finally, we conclude with a discussion in Section 5. Additional simulation details and the R code for implementing the proposed method are provided in the Supporting Information.
2. Methods
2.1. The Integrated Two‐Component Prediction (ITP) Model
Consider a phase II multidose randomized trial with a longitudinal endpoint measured at preplanned (standardized) time points $t_1 < t_2 < \cdots < t_J$. Following the motivating trial for weight management, the endpoint $y_{ij}$ represents an observation (i.e., change from baseline) for subject $i$ at the $j$th time $t_{ij}$ ($i = 1, \ldots, N$; $j = 1, \ldots, J$), and a negative $y_{ij}$ is expected for an effective treatment. Typically, $t_{ij}$ does not exactly match $t_j$ and may fall within the interval $[t_j - \Delta, t_j + \Delta]$, where $\Delta$ denotes the visit tolerance, such as three days. If no observation is collected within this interval, $y_{ij}$ is treated as missing. In a Bayesian framework, ignorable missingness can be effectively handled using Bayesian data augmentation (BDA) (see further details in Section 2.4). Assume that each patient is randomized to one of the $K + 1$ prespecified dose arms $a \in \{0, 1, \ldots, K\}$, with $a = 0$ indicating the placebo and $a = 1, \ldots, K$ denoting active doses. Let $d_a$ indicate the corresponding dosage of arm $a$, where $d_0 = 0$. The standard ITP model comprises two components: the time‐effect (i.e., $f(t; \theta_a)$) and the terminal dose effect (i.e., $g(d_a)$), such that

$$y_{ij} = f(t_{ij}; \theta_{a_i})\, g(d_{a_i}) + b_i + \epsilon_{ij}, \quad (1)$$

where $b_i \sim N(0, \sigma_b^2)$ is the between‐subject error and $\epsilon_{ij} \sim N(0, \sigma_e^2)$ is the within‐subject error, and $b_i$ and $\epsilon_{ij}$ are mutually independent.
Based on empirical results from real‐world studies, Qu [19] further modifies model (1) by assuming that the between‐subject variance depends on the mean response, while the within‐subject variance is independent of it, which was found to provide a better fit to real data. The modified ITP model is expressed as

$$y_{ij} = \mu_{a_i}(t_{ij})\,(1 + b_i) + \epsilon_{ij}, \quad (2)$$

where $\mu_a(t) = f(t; \theta_a)\, g(d_a)$ indicates the mean response at dose $d_a$ at time $t$.
In the ITP models, the time‐effect $f(t; \theta_a)$ quantifies the percentage of the terminal response achieved at time $t$ and is modeled using a parametric approach:

$$f(t; \theta_a) = \frac{1 - \exp(-\theta_a t)}{1 - \exp(-\theta_a t_J)},$$

where $\theta_a$ is the rate parameter that determines how fast the response changes over time for dose $a$, and $f(t_J; \theta_a) = 1$. Generally, a positive $\theta_a$ reflects a drug's rapid initial effect, followed by a plateau over time. Therefore, a larger $\theta_a$ suggests that patients are more likely to experience a quick response shortly after treatment, with the effect reaching a plateau earlier. The function $g(d_a)$ quantifies the relationship between the dose and the maximum response, achieved at the terminal time $t_J$. Depending on the efficacy outcome and the characteristics of the investigational drug, $g(\cdot)$ can take various forms, such as the Emax model or a log‐linear model [18]. Based on recent findings on weight loss with novel treatments, the drug effect appears to reach a plateau at high doses [27]. Therefore, we adopt the three‐parameter Emax model for $g(d_a)$, formally,
$$g(d_a) = E_0 + \frac{E_{\max}\, d_a}{ED_{50} + d_a}, \quad (3)$$

where $E_0$ and $E_{\max}$ denote the placebo and maximum dose effects at time $t_J$, $ED_{50}$ is the dose that produces half of the maximum dose effect, and $g(0) = E_0$. When necessary, the three‐parameter Emax model can be extended to a four‐parameter Emax model by adding a Hill parameter to control the steepness of the dose–response curve. The meta‐analysis by Thomas et al. [28] demonstrated that the Emax model generally provides a good fit to real‐world dose‐response data across a wide range of pharmaceutical trials.
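To make the two components concrete, the following Python sketch evaluates the ITP time‐effect and the three‐parameter Emax curve. The parameter values ($\theta = 2$, $E_0 = -2.5$, $E_{\max} = -30$, $ED_{50} = 5$, terminal time 44) are illustrative, borrowed from the simulation settings later in the paper, and the function names are our own.

```python
import math

def time_effect(t, theta, t_terminal):
    """ITP time-effect f(t; theta): fraction of the terminal response reached by time t.
    By construction f(t_terminal) = 1, and a larger theta reaches the plateau earlier."""
    return (1.0 - math.exp(-theta * t)) / (1.0 - math.exp(-theta * t_terminal))

def emax_curve(d, e0, emax, ed50):
    """Three-parameter Emax dose-response g(d); g(0) = e0 and g(ed50) = e0 + emax / 2."""
    return e0 + emax * d / (ed50 + d)

T = 44.0                                  # terminal visit time (illustrative)
print(time_effect(T, 2.0, T))             # -> 1.0 (full response at the terminal visit)
print(emax_curve(0.0, -2.5, -30.0, 5.0))  # -> -2.5 (placebo effect E0)
print(emax_curve(5.0, -2.5, -30.0, 5.0))  # -> -17.5 (E0 plus half of Emax, since d = ED50)
```

Evaluating the curve at $d = ED_{50}$ returning $E_0 + E_{\max}/2$ is a quick sanity check of the half‐maximal interpretation of $ED_{50}$.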
2.2. The Proposed DCT‐ITP Model
Let $r_{ij} = 1$ indicate that a decentralized measurement is collected at the $j$th visit of subject $i$, and $r_{ij} = 0$ indicate a centralized measurement. Without confusion, we use the terms “centralized” and “onsite” interchangeably to refer to data collected through the traditional centralized approach with formal supervision, whereas “decentralized,” “offsite,” and “remote” are used interchangeably to denote data collected through decentralized approaches with less supervision. If $r_{ij} = 0$ for all visits, the trial reduces to a traditional clinical trial, with all prespecified visits conducted at the clinical center. We generalize the modified ITP model (2) by introducing additional parameters to account for the bias and uncertainty introduced by DCTs and to model the potential differences between centralized and decentralized measurements. Specifically, the DCT‐ITP model is proposed as follows:
$$y_{ij} = \mu_{a_i}(t_{ij}, r_{ij})\,(1 + b_i + r_{ij}\,\tilde{b}_i) + \epsilon_{ij} + r_{ij}\,\tilde{\epsilon}_{ij}, \quad (4)$$

$$\mu_a(t, r) = f(t; \theta_a, \gamma_a, r)\, g(d_a; r), \quad (5)$$

$$f(t; \theta_a, \gamma_a, r) = \frac{1 - \exp\{-(\theta_a + r\,\gamma_a)\, t\}}{1 - \exp\{-(\theta_a + r\,\gamma_a)\, t_J\}}, \quad (6)$$

$$g(d_a; r) = (E_0 + r\,\delta_0) + \frac{(E_{\max} + r\,\delta_{\max})\, d_a}{(ED_{50} + r\,\delta_{50}) + d_a}, \quad (7)$$

where $b_i \sim N(0, \sigma_b^2)$, $\tilde{b}_i \sim N(0, \tau_b^2)$, $\epsilon_{ij} \sim N(0, \sigma_e^2)$, and $\tilde{\epsilon}_{ij} \sim N(0, \tau_e^2)$ are mutually independent.
In the above model, three additional components are introduced to address potential discrepancies between centralized and decentralized data:

1. In the time‐effect model (6), the new parameter $\gamma_a$ quantifies the magnitude of decentralized shifts in the speed at which the drug takes effect for dose $a$. A positive value of $\gamma_a$ indicates that decentralized data suggest a “false” faster onset of the drug effect, that is, an overestimated treatment effect during the initial visits, and vice versa.

2. Similar to the time‐effect model, the inclusion of the shift parameters $(\delta_0, \delta_{\max}, \delta_{50})$ in (7) distinguishes the decentralized dose‐response model from the centralized model.

3. To account for potential inflation in data variability from decentralized data collection, we introduce two additional residual parameters in (4): $\tau_b$, representing the additional between‐patient uncertainty, where $\tilde{b}_i \sim N(0, \tau_b^2)$, and $\tau_e$, denoting the additional within‐patient uncertainty, where $\tilde{\epsilon}_{ij} \sim N(0, \tau_e^2)$.
Figure 1 visualizes these three components using a hypothetical example. In Panel (A), a shift in the time‐effect curve is shown in the decentralized measurements. In Panel (B), both a shift in the dose‐response curve and increased data variability are observed in the decentralized measurements. In general, when increased variability and measurement shifts are present in the decentralized data, our full model is flexible enough to capture these effects. Moreover, if no bias is introduced by the decentralized collection, the decentralized data can be seamlessly pooled with the centralized data by setting $\gamma_a = 0$, $\delta_0 = \delta_{\max} = \delta_{50} = 0$, and $\tau_b = \tau_e = 0$.
FIGURE 1. A hypothetical example visualizing increased variability and measurement shifts in the decentralized data. The proposed DCT‐ITP model, based on the settings of the real‐world application (Section 3 and Scenario 3 in Table 1), is used to generate the two panels. Panel (A) depicts the time‐response curve, while Panel (B) shows the dose–response curve at the final visit, with the shaded area representing the 95% confidence region.
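The mean structure of the DCT‐ITP model can be sketched in a few lines of Python. This is a minimal illustration, not our estimation code: the parameter values follow Scenario 3 of the simulation settings ($E_0 = -2.5$, $E_{\max} = -30$, $ED_{50} = 5$, $\delta_0 = 0.5$, $\delta_{\max} = 2$, $\theta = 2$, $\gamma = 0$), and the indicator `r` toggles the decentralized shifts on and off.

```python
import math

def f(t, rate, t_terminal):
    """Time-effect with any decentralized rate shift already folded into `rate`."""
    return (1.0 - math.exp(-rate * t)) / (1.0 - math.exp(-rate * t_terminal))

def g(d, r, e0=-2.5, emax=-30.0, ed50=5.0, d_e0=0.5, d_emax=2.0, d_ed50=0.0):
    """Dose-response with shift parameters (delta) active only when r = 1 (decentralized)."""
    return (e0 + r * d_e0) + (emax + r * d_emax) * d / ((ed50 + r * d_ed50) + d)

def mean_response(t, d, r, theta=2.0, gamma=0.0, t_terminal=44.0):
    """Mean of the DCT-ITP model: mu(t, r) = f(t; theta + r * gamma) * g(d; r)."""
    return f(t, theta + r * gamma, t_terminal) * g(d, r)

onsite = mean_response(44.0, 15.0, r=0)   # centralized terminal mean at the top dose
offsite = mean_response(44.0, 15.0, r=1)  # decentralized terminal mean at the top dose
print(round(offsite - onsite, 6))         # -> 2.0 (about 2 kg less weight loss off site)
```

The printed gap of 2 kg at the top dose reproduces the attenuated decentralized effect described in the motivating example.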
Remark 1
Although we implement the Emax model to quantify the dose‐response curves, the proposed method, building on the flexibility of the original ITP model, can be readily adapted to accommodate alternative dose‐response models by specifying a tailored function for $g(d_a; r)$ in Equation (5). Additional evaluations of different dose‐response models in DCT‐ITP are provided in the Supporting Information.
Remark 2
We posit that the centralized and decentralized dose‐response (or time‐effect) models share the same functional forms but differ in their coefficients. Nonetheless, as evaluated in the sensitivity analysis, this relatively restrictive assumption can still accommodate scenarios in which the centralized and decentralized models follow different functional forms. This robustness is largely attributable to the flexibility of the Emax model and the assumed time‐effect model. Furthermore, the assumptions regarding variance terms in the DCT‐ITP model can be adapted to different scenarios. For instance, if the extent of variability introduced by decentralization is unknown a priori, the DCT‐ITP model can be extended to allow independent variance terms, rather than an additive variance structure, for decentralized and centralized measurements.
2.3. Bayesian Integrated Estimation
We cast the estimation of the DCT‐ITP model (4) to (7) into the Bayesian framework. In the time‐effect function (6), the original ITP model treats the rate parameters $\theta_1$ to $\theta_K$ for the active doses as independent. However, it is reasonable to assume that different doses of the same treatment follow a similar or exchangeable action pattern. To capture this assumption and enhance the performance of the proposed model, we enable adaptive information sharing across doses by applying Bayesian hierarchical priors [29, 30, 31] for $\theta_1$ to $\theta_K$, while specifying an independent prior for the placebo‐specific parameter $\theta_0$. Specifically, we define the following hierarchical priors for $\theta_1$ to $\theta_K$:

$$\theta_a \mid \mu_\theta, \sigma_\theta \sim N(\mu_\theta, \sigma_\theta^2), \quad a = 1, \ldots, K, \qquad \mu_\theta \sim N(m_\theta, v_\theta^2), \qquad \sigma_\theta \sim \mathrm{HC}(s_\theta),$$

where $\mathrm{HC}(s)$ is the zero‐centered half‐Cauchy distribution with scale parameter $s$, and $m_\theta$, $v_\theta$, and $s_\theta$ are hyperparameters, with $v_\theta$ and $s_\theta$ set to be reasonably large. The variance parameter $\sigma_\theta^2$ determines the strength of information borrowing. Following Gelman [32], a half‐Cauchy distribution is assigned to $\sigma_\theta$ to facilitate data‐driven information sharing.
For the placebo‐specific parameter $\theta_0$, we assign a non‐informative prior instead of including it in the above hierarchy to avoid dilution or over‐borrowing between the placebo and active doses. More specifically, we assign a non‐informative normal prior distribution to $\theta_0$ with the prior variance set to be reasonably large. Furthermore, if external information is available for the placebo, an informative prior distribution for $\theta_0$ can be constructed, for example, using the power prior approach [33], the robust meta‐analytic‐predictive prior approach [34], or other appropriate techniques.
In the time‐effect function (6), we introduce the additional terms $\gamma_a$ to enhance model flexibility and account for biases introduced by decentralization. However, in real‐world scenarios, the presence of such biases is unknown a priori. If the location of clinical measurements does not significantly impact the speed of the drug's effect, the term $\gamma_a$ becomes redundant and may even undermine the precision of treatment effect estimation. To address this issue and enable adaptive information sharing across doses, we apply the Bayesian variable selection approach [35, 36, 37] by using hierarchical spike‐and‐slab priors on the $\gamma_a$'s. A discussion of alternative prior choices is provided in the final section of this paper. Specifically, the spike‐and‐slab priors are formulated as:

$$\gamma_a \mid z \sim z\, N(m_\gamma, v_\gamma^2) + (1 - z)\, \pi_0, \qquad z \sim \mathrm{Bernoulli}(w), \quad (8)$$

where $w$ is the weight of the slab part, $\pi_0$ denotes the degenerate prior distribution massed at 0, and $m_\gamma$ and $v_\gamma$ are hyperparameters, with $v_\gamma$ set to be reasonably large. Without specific prior information, we assign a neutral Bernoulli prior distribution in which $z$ takes the value 1 with probability 0.5. A small posterior probability of $z = 1$ indicates that decentralization introduces minimal bias, allowing for greater information sharing between centralized and decentralized measurements. Conversely, a large posterior probability of $z = 1$ suggests that the bias from decentralization is nonnegligible, with $z = 1$ indicating nonnegligible differences between the decentralized and centralized time‐effect functions. In addition, it is worth noting that we assume all doses share the same weight parameter $w$, implying that the presence of bias in the rate parameters is consistent across dose levels. This assumption reduces the number of parameters and simplifies Bayesian variable selection.
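The behavior of a spike‐and‐slab prior is easy to see by forward simulation. The sketch below draws from a mixture of a point mass at zero and a diffuse normal slab; the slab scale of 10 and the seed are arbitrary choices for illustration.

```python
import random

def draw_spike_slab(rng, w=0.5, slab_sd=10.0):
    """One prior draw from gamma ~ z * N(0, slab_sd^2) + (1 - z) * (point mass at 0),
    where the slab indicator z ~ Bernoulli(w)."""
    z = 1 if rng.random() < w else 0
    gamma = rng.gauss(0.0, slab_sd) if z == 1 else 0.0
    return z, gamma

rng = random.Random(2024)
draws = [draw_spike_slab(rng) for _ in range(20000)]
spike_frac = sum(1 for z, _ in draws if z == 0) / len(draws)
# Under the neutral Bernoulli(0.5) prior, roughly half of the prior mass sits exactly
# at zero, which is what lets the model "switch off" the decentralization bias terms.
print(spike_frac)
```

In the actual posterior, the data update the indicator, so the share of exact zeros shrinks or grows depending on the evidence for a decentralization shift.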
In the Emax model (7), we assign independent non‐informative normal priors for $E_0$ and $E_{\max}$. However, the model is sensitive to the prior specification of $ED_{50}$ due to the small sample size and the limited number of doses in a phase II trial. Furthermore, using a flat prior on certain parameters in nonlinear models can inadvertently induce an informative distribution on the dependent variable, potentially introducing bias [38]. To mitigate this issue, we apply the functional uniform prior on $ED_{50}$, a prior that generally provides more robust and theoretically sound performance for nonlinear models [39]. In summary, the priors for $E_0$, $E_{\max}$, and $ED_{50}$ are given as follows:

$$E_0 \sim N(m_{E_0}, v_{E_0}^2), \qquad E_{\max} \sim N(m_{E_{\max}}, v_{E_{\max}}^2), \qquad p(ED_{50}) \propto p_{\mathrm{FU}}(ED_{50})\, \mathbb{1}\{0 < ED_{50} \le U\},$$

where $m_{E_0}$, $v_{E_0}$, $m_{E_{\max}}$, $v_{E_{\max}}$, and $U$ are hyperparameters, $p_{\mathrm{FU}}(\cdot)$ denotes the functional uniform prior density [39], $\mathbb{1}\{\cdot\}$ denotes the indicator function, and the support for $ED_{50}$ is restricted to $(0, U]$. In general, the value of $U$ should be carefully chosen. Based on our experience, selecting a relatively large (but not excessively large) value ensures that the support of $ED_{50}$ is reasonably covered while avoiding extreme values. We choose the functional uniform prior for $ED_{50}$ because it provides a systematic framework for eliciting priors on nonlinear parameters, while also delivering robust estimation performance as demonstrated by Bornkamp [38]. However, based on our evaluation (results not shown), other reasonable priors for $ED_{50}$ can also be implemented without any deterioration in performance. Similar to the prior on $\gamma_a$, we specify independent spike‐and‐slab priors for $\delta_0$, $\delta_{\max}$, and $\delta_{50}$:

$$\delta_k \mid z_k \sim z_k\, N(0, v_\delta^2) + (1 - z_k)\, \pi_0, \qquad z_k \sim \mathrm{Bernoulli}(w_k), \qquad k \in \{0, \max, 50\},$$

where $v_\delta$ and $w_k$ ($k \in \{0, \max, 50\}$) are hyperparameters, with $v_\delta$ set to be reasonably large.
Lastly, to complete the prior specification of the DCT‐ITP model, we assign non‐informative half‐Cauchy priors on the residual standard deviation parameters $\sigma_b$, $\sigma_e$, $\tau_b$, and $\tau_e$ as

$$\sigma_b, \sigma_e \sim \mathrm{HC}(s_\sigma), \qquad \tau_b, \tau_e \sim \mathrm{HC}(s_\tau),$$

where the hyperparameters $s_\sigma$ and $s_\tau$ should take reasonably large values.
2.4. Handling Missing Data
Missing data is a common issue in longitudinal studies [40, 41]. Challenges related to longitudinal missing data arise from various sources, such as dropout (i.e., participants leaving the study before completion, resulting in missing observations at later time points), intermittent missingness (i.e., participants missing specific time points but continuing to participate in others), or measurement errors at certain time points. To address these challenges, we propose using BDA to handle longitudinal missing data under the mechanism of missing at random (MAR) [42, 43].
BDA is a Bayesian technique that treats the unobserved outcomes as unknown parameters to be estimated. This approach allows the missing data to be imputed at each step of the Markov chain Monte Carlo (MCMC) algorithm, facilitating more accurate inference by integrating over the uncertainty associated with the missing values. Assume that the patient outcomes are decomposed into two parts: the observed data $\mathbf{y}^{\mathrm{obs}}$ and the missing data $\mathbf{y}^{\mathrm{mis}}$. By treating $\mathbf{y}^{\mathrm{mis}}$ as unknown parameters, the BDA approach can be described as an iterative process consisting of the imputation step and the updating step:

- Imputation step: Treat $\mathbf{y}^{\mathrm{mis}}$ as unknown parameters and sample each missing outcome from its posterior predictive distribution conditioned on the observed data and the current posterior sample of the model parameters $\boldsymbol{\Theta}$, that is, $y_{ij}^{\mathrm{mis}} \sim p(y_{ij} \mid \mathbf{y}^{\mathrm{obs}}, \boldsymbol{\Theta})$ for each missing entry $(i, j)$.
- Updating step: Update the model parameters $\boldsymbol{\Theta}$ based on the DCT‐ITP model (4) to (7), the prior distributions in Section 2.3, and the completed dataset $\{\mathbf{y}^{\mathrm{obs}}, \mathbf{y}^{\mathrm{mis}}\}$.
In our paper, we adopt Just Another Gibbs Sampler (JAGS) [44] for efficient MCMC sampling and for implementing BDA. The JAGS framework can be conveniently executed using the R package r2jags [45].
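The imputation/updating cycle can be demonstrated on a deliberately tiny problem. The Python sketch below runs data augmentation for a normal mean with a conjugate normal prior, not the DCT‐ITP likelihood itself; all values (true mean 2, 150 observed and 50 missing outcomes, prior variance 100) are hypothetical.

```python
import random
import statistics

rng = random.Random(7)
mu_true, n_obs, n_mis = 2.0, 150, 50
y_obs = [rng.gauss(mu_true, 1.0) for _ in range(n_obs)]  # observed outcomes
prior_var = 100.0                                        # diffuse normal prior on mu

mu, draws = 0.0, []
for it in range(2000):
    # Imputation step: draw the missing outcomes from their predictive distribution given mu
    y_mis = [rng.gauss(mu, 1.0) for _ in range(n_mis)]
    # Updating step: conjugate normal posterior for mu given the completed dataset
    data = y_obs + y_mis
    post_var = 1.0 / (len(data) + 1.0 / prior_var)
    post_mean = post_var * sum(data)
    mu = rng.gauss(post_mean, post_var ** 0.5)
    if it >= 500:                      # discard burn-in draws
        draws.append(mu)

mu_hat = statistics.mean(draws)        # posterior mean estimate, close to mu_true
```

Because the imputed values are drawn from the model at each iteration, they add no spurious information: the chain's stationary distribution is the posterior given the observed data alone, which is the key property that makes BDA valid under ignorable missingness.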
3. Trial Application
3.1. Trial Configuration
We apply the proposed model to analyze a hypothetical weight‐loss trial, where evidence suggests that outcomes collected through decentralized methods exhibit a reduced treatment effect [26]. To better reflect reality, our hypothetical trial is derived from a real trial by adopting the same data collection design, endpoints, dose‐response estimates, and variability patterns observed in both onsite and offsite assessments. Specifically, we assume that the average weight change from baseline for the maximum dose in the decentralized data is around 2 kg lower than that reported in the onsite data. Additionally, we assume that decentralized measurements lead to an inflation of about 1 unit in both within‐patient and between‐patient standard deviations. Using this information, we generate individual‐level data for a phase II multidose randomized trial with four active doses (1, 5, 10, and 15 mg) and a placebo arm, with an equal number of patients in each arm. The Emax model is used to depict the dose‐response relationship by setting $E_0 = -2.5$, $E_{\max} = -30$, and $ED_{50} = 5$. The bias introduced by decentralized measurements is modeled with $\delta_0 = 0.5$, $\delta_{\max} = 2$, and $\delta_{50} = 0$. The rate parameters for both onsite and decentralized measurements are kept the same, with $\gamma_a = 0$ for all arms and $\theta_a = 2$ for the active doses. Besides, the variances are set as $\sigma_b^2 = 36$, $\sigma_e^2 = 25$, $\tau_b^2 = 13$, and $\tau_e^2 = 11$, yielding $\sigma_b^2 + \tau_b^2 = 49$ and $\sigma_e^2 + \tau_e^2 = 36$. For visualization, Figure 1 displays the time‐effect and dose‐response curves for both centralized and decentralized measurements. The trial includes 12 scheduled visits for each patient. We assume Visit 0 is the baseline visit (Day 0), with (post‐treatment) Visits 1 to 3 (inclusive) scheduled weekly, Visits 4 to 6 (inclusive) biweekly, and Visits 7 to 12 (inclusive) every four weeks. Due to individual circumstances, the actual timing of each visit may vary. For data generation, we assume an allowable visit window of [−3, 3] days from the scheduled visit day. This visit schedule is also illustrated in Figure S1 in the Supporting Information.
We generate patient dropouts and intermittent missing measurements, with an average missing rate of 20% across all visits, encompassing both decentralized and onsite measurements. From (post‐treatment) Visit 1 through the last Visit 12, we first generate patient dropouts under the mechanism of MAR by assuming a logistic model in which the probability of dropout at a given visit depends on the outcome of the previous visit and the visit time, with coefficients chosen to yield an accumulated dropout rate of 10% at the end of the trial. Similarly, if a patient does not drop out, we assume that the probability of intermittent missingness at a specific visit follows an analogous logistic model (9), with coefficients chosen to yield an average intermittent missing rate of 16% throughout the trial. The coefficient values are listed in Table S1 of the Supporting Information. In Figure S2 of the Supporting Information, we display the accumulated dropout rates and intermittent missingness rates over time.
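A MAR dropout process of this kind is straightforward to simulate. The sketch below uses a logistic hazard depending on the previously observed outcome and the visit day; all coefficients and the toy outcome trajectory are hypothetical stand‐ins (the trial's actual values are in its Table S1), so the resulting dropout rate is illustrative rather than the paper's 10%.

```python
import math
import random

def dropout_prob(y_prev, t, b0=-4.0, b1=0.05, b2=0.005):
    """Illustrative MAR dropout hazard: depends only on the *observed* previous
    outcome y_prev and the visit day t (all coefficients are hypothetical)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * y_prev + b2 * t)))

# Visit days matching the schedule: Visits 1-3 weekly, 4-6 biweekly, 7-12 every 4 weeks
visit_days = [7 * v for v in (1, 2, 3)] \
    + [21 + 14 * v for v in (1, 2, 3)] \
    + [63 + 28 * v for v in range(1, 7)]

rng = random.Random(3)
n_patients, n_dropped = 2000, 0
for _ in range(n_patients):
    y_prev = 0.0
    for t in visit_days:
        if rng.random() < dropout_prob(y_prev, t):
            n_dropped += 1
            break
        y_prev = rng.gauss(-0.05 * t, 2.0)  # toy weight-loss trajectory (hypothetical)
drop_rate = n_dropped / n_patients          # accumulated dropout rate over 12 visits
```

The mechanism is MAR because the hazard depends only on quantities observed before the dropout occurs, which is exactly what licenses the BDA treatment of Section 2.4.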
3.2. Model Evaluation
We compare the proposed DCT‐ITP model with the conventional ITP model across three alternative settings. In the first setting, we fit the ITP model using only centralized measurements (O‐ITP). In the second setting, we apply the ITP model without distinguishing between centralized and decentralized data (M‐ITP), thereby ignoring any potential biases and uncertainties introduced by decentralization. In the third setting, we consider a traditional trial where all 12 post‐treatment measurements are collected at traditional clinical trial sites (T‐ITP). Since the T‐ITP approach is expected to yield the best performance, we use it as a benchmark for comparison. To ensure comparable missing rates, we apply the same missing data generation mechanism described earlier to all four approaches. To implement the ITP models, we set the hyperparameters following Section 2.3, with the variance‐type hyperparameters taking reasonably large values.
In our analysis, we focus on the following four parameters of clinical interest:

- $\mathrm{AUC}_a$: the area under the time‐effect curve (AUC) over all scheduled post‐treatment visits, for $a = 0, 1, \ldots, K$.
- $\Delta\mathrm{AUC}_a$: the difference in AUC between an active dose and the placebo, $\Delta\mathrm{AUC}_a = \mathrm{AUC}_a - \mathrm{AUC}_0$, for $a = 1, \ldots, K$.
- $\mu_a(t_J)$: the mean response at the final visit $t_J$, for $a = 0, 1, \ldots, K$.
- $\Delta\mu_a(t_J)$: the difference in mean response between an active dose and the placebo, for $a = 1, \ldots, K$.
Figure 2 summarizes the posterior mean estimates and 95% credible intervals for the above four quantities, based on the four ITP approaches. Compared to the proposed DCT‐ITP model, the O‐ITP approach produces similar point estimates but with wider credible intervals, indicating lower efficiency. In contrast, the M‐ITP model yields narrower credible intervals than the DCT‐ITP model, but these intervals sometimes fail to cover the true values. Among the four approaches, T‐ITP performs best, with minimal bias and uncertainty. Nevertheless, the DCT‐ITP model performs very similarly to T‐ITP overall. This indicates that a decentralized trial using our DCT‐ITP approach can achieve a similar level of accuracy and efficiency as a traditional trial with the conventional ITP model.
FIGURE 2. Posterior mean estimates of the area under the time‐effect curve (AUC), the difference in AUC between an active dose and the placebo ($\Delta$AUC), the mean response at the final visit ($\mu_a(t_J)$), and the difference in mean response between an active dose and the placebo ($\Delta\mu_a(t_J)$), along with the associated 95% credible intervals based on the four different ITP models for the real‐world application in Section 3.
4. Simulation Studies
We conduct simulation studies to further assess the performance of the proposed DCT‐ITP model under eight scenarios. As shown in Table 1, these scenarios cover a wide range of settings relevant to phase II dose‐ranging trials and various impacts of decentralization. Scenarios 1‐6 use the true DCT‐ITP parameterization to generate the data, while Scenarios 7 and 8 represent cases where the dose‐response model is misspecified. Scenario 1 represents the case where onsite and decentralized measurements produce identical results. Scenario 2 represents unbiased decentralized measurements with increased data variability. Scenario 3 corresponds to the motivating example, where both biases and inflated variability are expected due to decentralization. Scenarios 4 to 6 illustrate various potential differences between onsite and decentralized measurements. Scenario 7 assumes both the onsite and decentralized dose‐response curves follow a quadratic model, but with different parameter values. Scenario 8 assumes the onsite curve follows a quadratic model, while the decentralized curve follows a linear pattern. In all scenarios, we assume that the magnitude of the weight change at the final visit ranges from 3 to 25 kg, consistent with clinical trial results reported for approved glucagon‐like peptide‐1 (GLP‐1) receptor agonists. Furthermore, based on findings from a recent weight‐loss study [10], which reported an average bias of 3 kg for off‐site measurements, we assume that decentralization could introduce an absolute bias ranging from 0 to 6 kg. The true values of the mean responses at the final visit, along with the AUC values for each dose under each scenario, are presented in Tables S2 and S3 of the Supporting Information, respectively. The missing data generation follows the procedure outlined in Section 3.1, maintaining an average missing rate of 20% across all visits for each scenario. The corresponding missing model parameters are detailed in Table S1 of the Supporting Information.
TABLE 1.
True parameter settings for Scenarios 1 to 8. Each cell reports the onsite value, with the decentralized deviation in parentheses.

| Scenario |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 1 | −2.5 (0) | −30 (0) | 5 (0) | 2 (0) | 36 (0) | 25 (0) |
| 2 | −2.5 (0) | −30 (0) | 5 (0) | 2 (0) | 36 (13) | 25 (11) |
| 3 | −2.5 (0.5) | −30 (2) | 5 (0) | 2 (0) | 36 (13) | 25 (11) |
| 4 | −2.5 (−0.5) | −30 (−2) | 5 (0) | 2 (0) | 36 (13) | 25 (11) |
| 5 | −2.5 (0) | −25.5 (5.5) | 2 (0.5) | 1 (−0.5) | 64 (36) | 36 (28) |
| 6 | −2.5 (−0.5) | −28 (−2) | 6 (−0.5) | 1 (02)ᵃ | 64 (36) | 36 (28) |
| 7 | Onsite: quadratic; Decentralized: quadratic |  |  | 2 (1) | 81 (40) | 64 (36) |
| 8 | Onsite: quadratic; Decentralized: linear |  |  | 2 (0) | 25 (11) | 36 (13) |

ᵃ In Scenario 6, this parameter varies by dose.
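To illustrate the data‐generating idea behind Scenario 3 (a sketch with hypothetical parameter values in the spirit of the table, not the authors' simulation code), decentralized measurements can be drawn by adding a constant bias and inflated noise to a shared exponential‐decay mean trajectory:

```python
import numpy as np

rng = np.random.default_rng(2024)

def itp_mean(t, theta=-10.0, k=0.3, t_max=52.0):
    # Exponential-decay (ITP-style) mean weight change (kg) at week t,
    # scaled so the trajectory reaches theta at the final visit t_max.
    return theta * (1 - np.exp(-k * t)) / (1 - np.exp(-k * t_max))

visits = np.arange(0, 56, 4)                 # visit weeks 0, 4, ..., 52
mu = itp_mean(visits)                        # shared mean trajectory
onsite = mu + rng.normal(0.0, 6.0, visits.size)
# Scenario-3-style decentralization: +3 kg average bias plus inflated variance
decentralized = mu + 3.0 + rng.normal(0.0, np.sqrt(36.0 + 13.0), visits.size)
```

The constants (maximum loss of 10 kg, decay rate 0.3, variances 36 and 36 + 13) are illustrative placeholders matching the order of magnitude of the table, not the paper's exact parameterization.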
We apply the four ITP models described in Section 3.2, using the same prior specifications, to each simulation scenario, with results obtained from 1000 replications. For each parameter of interest, we report the average bias (AB), the average root mean square error (ARMSE), the average coverage probability (ACP) of pointwise 95% credible intervals (CIs), and the average length (AL) of the 95% CIs, averaged across different doses:

$$
\begin{aligned}
\mathrm{AB} &= \frac{1}{|\mathcal{D}|R}\sum_{d\in\mathcal{D}}\sum_{r=1}^{R}\left(\hat{\theta}_{d}^{(r)}-\theta_{d}\right), &
\mathrm{ARMSE} &= \frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\sqrt{\frac{1}{R}\sum_{r=1}^{R}\left(\hat{\theta}_{d}^{(r)}-\theta_{d}\right)^{2}},\\
\mathrm{ACP} &= \frac{1}{|\mathcal{D}|R}\sum_{d\in\mathcal{D}}\sum_{r=1}^{R}\mathbb{1}\left\{\hat{\theta}_{d,\mathrm{L}}^{(r)}\le\theta_{d}\le\hat{\theta}_{d,\mathrm{U}}^{(r)}\right\}, &
\mathrm{AL} &= \frac{1}{|\mathcal{D}|R}\sum_{d\in\mathcal{D}}\sum_{r=1}^{R}\left(\hat{\theta}_{d,\mathrm{U}}^{(r)}-\hat{\theta}_{d,\mathrm{L}}^{(r)}\right),
\end{aligned} \tag{10}
$$

where $|\mathcal{D}|$ denotes the number of doses in the dose set $\mathcal{D}$, $\theta_{d}$ is the true parameter value at dose $d$, $R$ is the number of replications, and $\hat{\theta}_{d}^{(r)}$, $\hat{\theta}_{d,\mathrm{L}}^{(r)}$, and $\hat{\theta}_{d,\mathrm{U}}^{(r)}$ are the posterior mean estimate and the 2.5% and 97.5% posterior quantile estimates of $\theta_{d}$ in the $r$th replicated trial, respectively. In our simulation, the target dose set comprises 0, 1, 5, 10, and 15 mg for the AUC and, for the remaining three parameters, 1, 5, 10, and 15 mg; 0, 1, 2, …, 15 mg; and 1, 2, …, 15 mg, respectively.
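These four summary metrics are straightforward to compute from Monte Carlo output. The sketch below (assumed array shapes, not the authors' code) averages bias, RMSE, coverage, and interval length over doses and replications:

```python
import numpy as np

def operating_characteristics(post_mean, post_lo, post_hi, truth):
    """AB, ARMSE, ACP, and AL averaged over doses.

    post_mean, post_lo, post_hi: (R, D) arrays of posterior means and
    2.5%/97.5% quantiles over R replications and D doses; truth: length-D
    array of true parameter values.
    """
    err = post_mean - truth                       # broadcasts over doses
    ab = err.mean()                               # average bias
    armse = np.sqrt((err ** 2).mean(axis=0)).mean()
    acp = ((post_lo <= truth) & (truth <= post_hi)).mean()
    al = (post_hi - post_lo).mean()               # average CI length
    return ab, armse, acp, al
```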
We employ the R2jags package [45] with an initial burn‐in of 5000 iterations, followed by 10 000 sampling iterations with a thinning interval of 2 to perform MCMC sampling. On a MacBook Pro 14 (Apple M2 Pro chip with 12‐core CPU), each simulation replication requires approximately 12.9 s, yielding a total runtime of about 3.6 h for 1000 replications; this could be substantially reduced through parallel computing. To assess convergence, we apply Geweke's diagnostic, comparing the first 10% and the last 50% of each Markov chain. Across the 1000 replications, the diagnostic consistently produces p‐values above the 0.05 threshold, indicating no evidence of non‐convergence.
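Geweke's diagnostic compares the means of an early and a late chain segment via an asymptotically standard normal z‐score. A minimal sketch (using plain sample variances in place of the spectral density estimates of the full diagnostic, so it is only an approximation for correlated chains) is:

```python
import math
import numpy as np

def geweke_z(chain, first=0.10, last=0.50):
    """Compare means of the first `first` and last `last` fractions of a chain.

    Simplified: plain sample variances replace the spectral variance
    estimates used by the original Geweke diagnostic.
    """
    n = len(chain)
    a = np.asarray(chain[: int(first * n)], dtype=float)
    b = np.asarray(chain[int((1 - last) * n):], dtype=float)
    z = (a.mean() - b.mean()) / math.sqrt(a.var(ddof=1) / a.size
                                          + b.var(ddof=1) / b.size)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p
```

A stationary chain yields p‐values well above 0.05, whereas a trending (non‐converged) chain is flagged with a p‐value near zero.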
The results for two of the four parameters are shown in Figures 3 and 4, while those for the remaining two are presented in Figures S3 and S4 in the Supporting Information. In Scenarios 1 and 2, where decentralization does not introduce biases, the M‐ITP and proposed DCT‐ITP models perform comparably. However, the O‐ITP approach exhibits greater bias, higher ARMSE, and wider 95% CIs, as it relies solely on centralized measurements, which are taken at sparse time points. In Scenarios 3 to 6, where biases are present in decentralized measurements, the M‐ITP approach demonstrates significant shortcomings, leading to large biases, inflated ARMSE, and limited ACP. In contrast, our DCT‐ITP approach adaptively detects and corrects for decentralization bias, resulting in much better performance than M‐ITP. In Scenario 7, despite the misspecification of the dose‐response model, DCT‐ITP yields relatively robust results due to the similarities between the quadratic and Emax models. In Scenario 8, approximating the linear dose‐response relationship using the Emax model proves more challenging, resulting in limited performance across all methods. Nevertheless, the proposed model still outperforms both the M‐ITP and O‐ITP approaches. Overall, the O‐ITP approach performs relatively robustly across various scenarios, but its reliance on centralized measurements alone leads to information loss. The M‐ITP approach is highly sensitive to discrepancies between onsite and decentralized measurements, with larger differences leading to worse performance. Compared to these two naïve approaches, the proposed model provides a more effective method for analyzing longitudinal dose‐response data in DCTs. Additionally, the T‐ITP approach generally performs best across all scenarios, as expected; however, our DCT‐ITP approach offers similarly competitive performance.
FIGURE 4.
Average bias, average root mean square error (RMSE), and average coverage probability and length of pointwise 95% credible intervals for the estimate of the mean response at the final visit, using the four integrated two‐component prediction (ITP) models under Scenarios 1 to 8.
In Section S2 of the Supporting Information, we present additional simulation results to further illustrate the necessity of the complex modeling in DCT‐ITP. Specifically, we compare the proposed DCT‐ITP model with two simplified variations while keeping all other parameters and priors unchanged: (i) deactivating the Bayesian variable selection and (ii) removing the hierarchical borrowing across the time‐effect coefficients. Under the same eight simulation scenarios, the proposed DCT‐ITP consistently outperforms these simplified alternatives in terms of ARMSE and AL, highlighting the importance of the complex modeling components.
Furthermore, to assess the versatility of DCT‐ITP in accommodating different dose‐response models, Section S3 of the Supporting Information evaluates the DCT‐ITP model under both linear and quadratic dose‐response specifications and compares its performance with the O‐ITP, M‐ITP, and T‐ITP counterparts. Overall, the simulation results show that, across these dose‐response parameterizations, the proposed DCT‐ITP model consistently demonstrates estimation advantages over O‐ITP and M‐ITP.
5. Sensitivity Analyses
We investigate the impact of different hyperparameter values, trial settings, and model configurations on the performance of the proposed DCT‐ITP method through a series of sensitivity analyses, with detailed results provided in the Supporting Information. In each of the following sensitivity analyses, unless otherwise specified, the prior distributions are the same as those described in Section 3.2.
Prior weights: In addition to the neutral prior setting for the weights, we also consider smaller and larger prior weights to respectively discount or promote information sharing between centralized and decentralized measurements. As expected, Figure S7 shows that the prior weights have a relatively small but interpretable impact on estimation performance: with smaller prior weights, DCT‐ITP shrinks decentralized measurements more strongly toward centralized measurements, and vice versa. Consequently, DCT‐ITP with smaller prior weights performs better when there is little or no deviation between centralized and decentralized measurements (i.e., Scenarios 1 and 2), whereas larger prior weights perform better when some deviation exists. Overall, the neutral prior setting provides a balanced trade‐off between efficiency and robustness and can be considered a reasonable default when no information about decentralization bias is available a priori.
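The role of the prior weight can be illustrated with direct draws from a spike‐and‐slab prior (a sketch with fixed weights and an assumed slab standard deviation; the paper instead places prior distributions on the weights themselves):

```python
import numpy as np

rng = np.random.default_rng(7)

def spike_slab_draws(w, slab_sd=2.0, n=100_000):
    """Prior draws: exact zero (spike) w.p. 1 - w, N(0, slab_sd^2) w.p. w."""
    in_slab = rng.random(n) < w          # slab-membership indicator
    return np.where(in_slab, rng.normal(0.0, slab_sd, n), 0.0)

# Smaller prior weight on the slab => more prior mass at exactly zero,
# i.e., stronger shrinkage of decentralized effects toward the centralized model.
frac_zero_low = np.mean(spike_slab_draws(0.2) == 0.0)   # near 0.8
frac_zero_high = np.mean(spike_slab_draws(0.8) == 0.0)  # near 0.2
```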
Variance of the slab component: In the spike‐and‐slab prior, the slab variance governs the degree of information borrowing between centralized and decentralized measurements, with smaller slab variances leading to greater borrowing [46]. In Figure S8, we compare the performance of DCT‐ITP under three slab‐variance settings. The ARMSE, ACP, and AL remain visually identical across the three settings, although smaller slab variances yield larger estimation bias, particularly in Scenarios 3–8.
Prior scales on the heterogeneity parameters: These scale parameters control the degree of shrinkage across dose levels in the hierarchical prior for the time‐effect coefficients in Equation (6); one of them also determines the variance of the slab component in Equation (8). To assess their impact, we consider two alternative configurations of these scale parameters, (1, 0.5) and (3, 2). As shown in Figure S9, DCT‐ITP remains robust across all considered prior scale settings, with only minimal variation in performance.
Sample size: We also consider two additional sample sizes, 80 and 200. The results in Figure S10 indicate that increasing the sample size improves the performance of the proposed method.
Visit schema: We evaluate the performance of the proposed method across different measurement schemas. Specifically, we simulate a trial with (1) a total of 8 visits, including 5 decentralized visits; (2) 12 visits, including 4 decentralized visits; and (3) 15 visits, including 9 decentralized visits. Schemas (1) and (3) maintain a similar proportion of decentralized visits but differ in the total number of visits, while schema (2) represents a lower proportion of decentralized visits. The three additional schemas are illustrated in Figure S1B–D in the Supporting Information, respectively. The results in Figure S11 show that increasing the total number of visits and reducing the degree of decentralization lead to improved performance.
Missing data: We also examine three different missing data settings. In the first two settings, we adjust the missingness model parameters to achieve average missing rates of approximately 10% and 30% across all visits. In the third setting, we consider a combination of intermittent missing completely at random (MCAR), intermittent MAR, and random dropout, each with an average rate of approximately 10%, resulting in a total average missing rate of 30%. The missing data generation mechanism for the third setting is detailed in Section S1 of the Supporting Information, and the specifications for all three settings are summarized in Table S1. The results in Figure S12 indicate that the performance of the proposed model improves as the missing rate decreases.
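A mixed mechanism of this kind can be generated per subject roughly as follows (illustrative function and parameter values, not the paper's specification in Section S1):

```python
import numpy as np

rng = np.random.default_rng(11)

def missing_mask(y, p_mcar=0.10, mar_intercept=-3.0, mar_slope=0.15, p_drop=0.01):
    """Boolean missingness mask (True = missing) for one subject's visits,
    mixing intermittent MCAR, intermittent MAR (probability depends on the
    previous value via a logistic model), and monotone random dropout."""
    V = len(y)
    miss = rng.random(V) < p_mcar                              # MCAR
    for v in range(1, V):                                      # MAR
        p_mar = 1.0 / (1.0 + np.exp(-(mar_intercept + mar_slope * abs(y[v - 1]))))
        miss[v] |= rng.random() < p_mar
    for v in range(1, V):                                      # dropout
        if rng.random() < p_drop:
            miss[v:] = True                                    # missing from v on
            break
    return miss
```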
Model violation: The DCT‐ITP model assumes that the centralized and decentralized dose–response (as well as time‐effect) models share the same functional forms but may differ in their coefficients. To evaluate the performance of DCT‐ITP when this assumption is violated, we construct 18 additional scenarios in Section S4 of the Supporting Information by allowing the centralized and decentralized models to follow different functional forms. As shown in Figure S13, across these 18 scenarios, the DCT‐ITP model outperforms both the O‐ITP and M‐ITP models in terms of estimation efficiency and accuracy, consistent with the conclusions drawn in Section 4.
6. Concluding Remarks
We introduce a novel multilevel Bayesian modeling and learning approach for evaluating longitudinal data from decentralized multidose randomized trials. Our DCT‐ITP model builds upon the ITP model, enhancing it by integrating information across doses and between centralized and decentralized measurements. Given that the presence and magnitude of bias due to decentralization are typically unknown before the trial begins, we employ a Bayesian spike‐and‐slab prior technique on the relevant parameters to enable data‐adaptive information integration. We evaluate the performance of the proposed DCT‐ITP model using the endpoint of change in body weight, with parameters and settings derived from a real clinical trial where body weights are measured through both centralized and decentralized methods. Through extensive numerical studies, the proposed model demonstrates desirable precision and stability in estimation. Sensitivity analyses further confirm the robustness of the design under various trial and prior settings. Although patient covariates are not included in the DCT‐ITP model, incorporating them as an additive term would be straightforward.
In this paper, we choose the spike‐and‐slab prior to adaptively assess potential biases introduced by decentralization, given its wide adoption in clinical trial applications (e.g., historical borrowing), ease of hyperparameter elicitation, and straightforward interpretation. Nevertheless, with appropriate specification, other Bayesian variable selection priors, such as the horseshoe prior [47], could also be applied to model the bias parameters. As a preliminary investigation, we have additionally implemented the horseshoe prior in our simulation scenarios. Specifically, the horseshoe priors for the two bias parameters are given as follows:
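A standard horseshoe specification of this type (a generic sketch with $\delta$ denoting a bias parameter and $s_{\lambda}$, $s_{\tau}$ scale hyperparameters, all introduced here for illustration; $\mathrm{C}^{+}(0,s)$ denotes the half‐Cauchy distribution with scale $s$) is

$$
\delta \mid \lambda, \tau \sim \mathcal{N}\!\left(0, \lambda^{2}\tau^{2}\right), \qquad
\lambda \sim \mathrm{C}^{+}(0, s_{\lambda}), \qquad
\tau \sim \mathrm{C}^{+}(0, s_{\tau}),
$$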
where the scale hyperparameters are all set to 1 in this simulation, ensuring that approximately 50% of the prior density of each bias parameter is concentrated near zero (i.e., comparable to our specification of the spike‐and‐slab priors). As shown in Figure S14, the spike‐and‐slab and horseshoe priors overall produce very similar average RMSE values. However, the spike‐and‐slab prior generally yields slightly smaller estimation bias across most scenarios and narrower 95% credible intervals in some cases. This finding is consistent with a recent comparative study on historical borrowing [48], which concluded that the spike‐and‐slab prior achieved the best overall performance when borrowing from heterogeneous historical controls. Nonetheless, extensive evaluation of different Bayesian variable selection priors is warranted in the context of DCTs.
We focus on addressing the increased variability and potential bias introduced by decentralization. However, additional challenges may arise in decentralized trials. A significant challenge is participant engagement, as geographical isolation can affect adherence and increase missing data. At present, the proposed DCT‐ITP model can only accommodate MAR scenarios. Further methodological development is warranted to extend the approach to settings with missing‐not‐at‐random data. From a practical standpoint, mitigating the risk of missing data requires anticipating potential missingness scenarios and carefully planning the frequency and timing of decentralized activities before trial initiation. During the trial, providing adequate support and maintaining consistent communication with participants are crucial for promoting engagement and minimizing missing data. Practical challenges such as technology infrastructure, data integration, security, logistics, and regulatory compliance are also critical considerations. Addressing these challenges will be an important area for future research.
Another unresolved question concerns the appropriate definition of the estimand for DCTs. In this paper, we treat the estimand as the parameter of interest under the assumption that all visits are centralized, following the traditional clinical trial approach, which remains the gold standard for treatment evaluation. However, this assumption may not always hold. For example, decentralized measurement may offer a less biased evaluation in clinical trials for sleep disorders. To our knowledge, only one paper [49] has explored the estimand framework for decentralized trials, and achieving consensus on the appropriate estimand among the various stakeholders in decentralized trials remains a long‐term goal.
Funding
The authors have nothing to report.
Conflicts of Interest
The authors declare no conflicts of interest.
Supporting information
Data S1: sim70338‐sup‐0001‐Supplementary Materials.pdf.
Acknowledgments
The authors thank the reviewers and editors for their insightful comments and suggestions, which have greatly improved the clarity and quality of this paper.
Zhang J., Wang T., Qu Y., Yan F., Liu S., and Lin R., “Bayesian Integrated Learning of Longitudinal Dose‐Response Relationships via Decentralized Clinical Trials,” Statistics in Medicine 44, no. 28‐30 (2025): e70338, 10.1002/sim.70338.
Data Availability Statement
The authors have nothing to report.
References
- 1. U.S. Food and Drug Administration , “Conducting Clinical Trials With Decentralized Elements Guidance for Industry, Investigators, and Other Interested Parties,” 2024. Updated September 2023.
- 2. Underhill C., Freeman J., Dixon J., et al., “Decentralized Clinical Trials as a New Paradigm of Trial Delivery to Improve Equity of Access,” JAMA Oncology 10, no. 4 (2024): 526–530. [DOI] [PubMed] [Google Scholar]
- 3. de Jong A. J., van Rijssel T. I., Zuidgeest M. G., et al., “Opportunities and Challenges for Decentralized Clinical Trials: European Regulators' Perspective,” Clinical Pharmacology and Therapeutics 112, no. 2 (2022): 344–352. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4. Caruana E. J., Roman M., Hernández‐Sánchez J., and Solli P., “Longitudinal Studies,” Journal of Thoracic Disease 7, no. 11 (2015): E537. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Wellhagen G. J., Hamrén B., Kjellsson M. C., and Åstrand M., “Dose‐Response Mixed Models for Repeated Measures–a New Method for Assessment of Dose‐Response,” Pharmaceutical Research 37 (2020): 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Curtis A. and Qu Y., “Impact of Using A Mixed Data Collection Modality on Statistical Inferences in Decentralized Clinical Trials,” Therapeutic Innovation & Regulatory Science 56, no. 5 (2022): 744–752. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Dorsey E. R., Kluger B., and Lipset C. H., “The New Normal in Clinical Trials: Decentralized Studies,” Annals of Neurology 88, no. 5 (2020): 863–866. [DOI] [PubMed] [Google Scholar]
- 8. Clément K., van den Akker E., Argente J., et al., “Efficacy and Safety of Setmelanotide, an MC4R Agonist, in Individuals With Severe Obesity due to LEPR or POMC Deficiency: Single‐Arm, Open‐Label, Multicentre, Phase 3 Trials,” Lancet Diabetes and Endocrinology 8, no. 12 (2020): 960–970. [DOI] [PubMed] [Google Scholar]
- 9. Wilding J. P., Batterham R. L., Calanna S., et al., “Once‐Weekly Semaglutide in Adults With Overweight or Obesity,” New England Journal of Medicine 384, no. 11 (2021): 989–1002. [DOI] [PubMed] [Google Scholar]
- 10. Dahl K., Toubro S., Dey S., et al., “Amycretin, a Novel, Unimolecular GLP‐1 and Amylin Receptor Agonist Administered Subcutaneously: Results From a Phase 1b/2a Randomised Controlled Study,” Lancet 406 (2025): 149–162. [DOI] [PubMed] [Google Scholar]
- 11. Ye J., Tian H., Guo X. T., and Ting N., “Clinical Trial Design—What Is the Critical Question for Decision‐Making?,” Statistics in Biosciences 15, no. 2 (2023): 475–489. [Google Scholar]
- 12. Ting N., Dose Finding in Drug Development (Springer Science & Business Media, 2006). [Google Scholar]
- 13. Holford N. H. and Sheiner L. B., “Understanding the Dose‐Effect Relationship: Clinical Application of Pharmacokinetic‐Pharmacodynamic Models,” Clinical Pharmacokinetics 6, no. 6 (1981): 429–453. [DOI] [PubMed] [Google Scholar]
- 14. Emilien G., van Meurs W., and Maloteaux J. M., “The Dose‐Response Relationship in Phase I Clinical Trials and Beyond: Use, Meaning, and Assessment,” Pharmacology & Therapeutics 88, no. 1 (2000): 33–58. [DOI] [PubMed] [Google Scholar]
- 15. Fu H. and Manner D., “Bayesian Adaptive Dose‐Finding Studies With Delayed Responses,” Journal of Biopharmaceutical Statistics 20, no. 5 (2010): 1055–1070. [DOI] [PubMed] [Google Scholar]
- 16. Duan R., Chen K., Du Y., Kulkarni P. M., and Qu Y., “Non‐Monotone Exponential Time (NEXT) Model for the Longitudinal Trend of a Continuous Outcome in Clinical Trials,” Therapeutic Innovation & Regulatory Science 58, no. 1 (2024): 127–135. [DOI] [PubMed] [Google Scholar]
- 17. Li W. and Fu H., “Bayesian Optimal Adaptive Designs for Delayed‐Response Dose‐Finding Studies,” Journal of Biopharmaceutical Statistics 21, no. 5 (2011): 888–901. [DOI] [PubMed] [Google Scholar]
- 18. Payne R. D., Ray P., and Thomann M. A., “Bayesian Model Averaging of Longitudinal Dose‐Response Models,” Journal of Biopharmaceutical Statistics 34, no. 3 (2024): 349–365. [DOI] [PubMed] [Google Scholar]
- 19. Qu Y., “Can a Multiple Ascending Dose Study Serve as an Informative Proof‐Of‐Concept Study?,” Statistics in Medicine 38, no. 3 (2019): 354–362. [DOI] [PubMed] [Google Scholar]
- 20. Frias J. P., Nauck M. A., Van J., et al., “Efficacy and Safety of LY3298176, a Novel Dual GIP and GLP‐1 Receptor Agonist, in Patients With Type 2 Diabetes: A Randomised, Placebo‐Controlled and Active Comparator‐Controlled Phase 2 Trial,” Lancet 392, no. 10160 (2018): 2180–2193. [DOI] [PubMed] [Google Scholar]
- 21. Lange M. R. and Schmidli H., “Analysis of Clinical Trials With Biologics Using Dose–Time‐Response Models,” Statistics in Medicine 34, no. 22 (2015): 3017–3028. [DOI] [PubMed] [Google Scholar]
- 22. Gabrielsson J., Jusko W. J., and Alari L., “Modeling of Dose–Response–Time Data: Four Examples of Estimating the Turnover Parameters and Generating Kinetic Functions From Response Profiles,” Biopharmaceutics & Drug Disposition 21, no. 2 (2000): 41–52. [DOI] [PubMed] [Google Scholar]
- 23. Gabrielsson J. and Peletier L. A., “Dose–Response–Time Data Analysis Involving Nonlinear Dynamics, Feedback and Delay,” European Journal of Pharmaceutical Sciences 59 (2014): 36–48. [DOI] [PubMed] [Google Scholar]
- 24. Buatois S., Ueckert S., Frey N., Retout S., and Mentré F., “cLRT‐Mod: An Efficient Methodology for Pharmacometric Model‐Based Analysis of Longitudinal Phase II Dose Finding Studies Under Model Uncertainty,” Statistics in Medicine 40, no. 10 (2021): 2435–2451. [DOI] [PubMed] [Google Scholar]
- 25. Hartley B. F., Lunn D., and Mander A. P., “Efficient Study Design and Analysis of Longitudinal Dose–Response Data Using Fractional Polynomials,” Pharmaceutical Statistics 23 (2024): 1128–1143. [DOI] [PubMed] [Google Scholar]
- 26. Gorber S. C., Tremblay M., Moher D., and Gorber B., “A Comparison of Direct vs. Self‐Reported Measures for Assessing Height, Weight, and Body Mass Index: A Systematic Review,” Obesity Reviews 8 (2007): 307–326. [DOI] [PubMed] [Google Scholar]
- 27. Jastreboff A. M., Aronne L. J., Ahmad N. N., et al., “Tirzepatide Once Weekly for the Treatment of Obesity,” New England Journal of Medicine 387, no. 3 (2022): 205–216. [DOI] [PubMed] [Google Scholar]
- 28. Thomas N., Sweeney K., and Somayaji V., “Meta‐Analysis of Clinical Dose–Response in a Large Drug Development Portfolio,” Statistics in Biopharmaceutical Research 6, no. 4 (2014): 302–317. [Google Scholar]
- 29. Thall P. F., Wathen J. K., Bekele B. N., Champlin R. E., Baker L. H., and Benjamin R. S., “Hierarchical Bayesian Approaches to Phase II Trials in Diseases With Multiple Subtypes,” Statistics in Medicine 22, no. 5 (2003): 763–780, 10.1002/sim.1399. [DOI] [PubMed] [Google Scholar]
- 30. Thall P. F. and Wathen J. K., “Bayesian Designs to Account for Patient Heterogeneity in Phase II Clinical Trials,” Current Opinion in Oncology 20, no. 4 (2008): 407–411. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Berry S. M., Broglio K. R., Groshen S., and Berry D. A., “Bayesian Hierarchical Modeling of Patient Subpopulations: Efficient Designs of Phase II Oncology Clinical Trials,” Clinical Trials 10, no. 5 (2013): 720–734, PMID: 23983156, 10.1177/1740774513497539. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Gelman A., “Prior Distributions for Variance Parameters in Hierarchical Models (Comment on Article by Browne and Draper),” Bayesian Analysis 1, no. 3 (2006): 515–534, 10.1214/06-BA117A. [DOI] [Google Scholar]
- 33. Ibrahim J. G., Chen M. H., Gwon Y., and Chen F., “The Power Prior: Theory and Applications,” Statistics in Medicine 34, no. 28 (2015): 3724–3749. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Schmidli H., Gsteiger S., Roychoudhury S., O'Hagan A., Spiegelhalter D., and Neuenschwander B., “Robust Meta‐Analytic‐Predictive Priors in Clinical Trials With Historical Control Information,” Biometrics 70, no. 4 (2014): 1023–1032. [DOI] [PubMed] [Google Scholar]
- 35. Ishwaran H. and Rao J. S., “Spike and Slab Variable Selection: Frequentist and Bayesian Strategies,” Annals of Statistics 33 (2005): 730–773. [Google Scholar]
- 36. Tadesse M. G. and Vannucci M., Handbook of Bayesian Variable Selection (CRC Press, 2021). [Google Scholar]
- 37. Frühwirth‐Schnatter S. and Wagner H., “Bayesian Variable Selection for Random Intercept Modeling of Gaussian and Non‐Gaussian Data,” in Bayesian Statistics 9 (Oxford University Press, 2011). [Google Scholar]
- 38. Bornkamp B., “Functional Uniform Priors for Nonlinear Modeling,” Biometrics 68, no. 3 (2012): 893–901. [DOI] [PubMed] [Google Scholar]
- 39. Bornkamp B., “Practical Considerations for Using Functional Uniform Prior Distributions for Dose‐Response Estimation in Clinical Trials,” Biometrical Journal 56, no. 6 (2014): 947–962. [DOI] [PubMed] [Google Scholar]
- 40. Daniels M. J. and Hogan J. W., Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis (Chapman and Hall/CRC, 2008). [Google Scholar]
- 41. Fitzmaurice G. M., Laird N. M., and Ware J. H., Applied Longitudinal Analysis (John Wiley & Sons, 2012). [Google Scholar]
- 42. Rubin D. B., “Inference and Missing Data,” Biometrika 63, no. 3 (1976): 581–592. [Google Scholar]
- 43. Tanner M. A. and Wong W. H., “The Calculation of Posterior Distributions by Data Augmentation,” Journal of the American Statistical Association 82, no. 398 (1987): 528–540. [Google Scholar]
- 44. Depaoli S., Clifton J. P., and Cobb P. R., “Just Another Gibbs Sampler (JAGS) Flexible Software for MCMC Implementation,” Journal of Educational and Behavioral Statistics 41, no. 6 (2016): 628–649. [Google Scholar]
- 45. Su Y. S. and Yajima M., “Package R2jags,” R Package Version 0.03‐08 (2015), http://CRAN.R‐project.org/package=R2jags.
- 46. Malsiner‐Walli G. and Wagner H., “Comparing Spike and Slab Priors for Bayesian Variable Selection,” Austrian Journal of Statistics 40, no. 4 (2016): 241–264. [Google Scholar]
- 47. Carvalho C. M., Polson N. G., and Scott J. G., “Handling Sparsity via the Horseshoe,” in Artificial Intelligence and Statistics (PMLR, 2009), 73–80. [Google Scholar]
- 48. Ohigashi T., Maruo K., Sozu T., Sawamoto R., and Gosho M., “Potential Bias Models With Bayesian Shrinkage Priors for Dynamic Borrowing of Multiple Historical Control Data,” Pharmaceutical Statistics 24, no. 2 (2025): e2453. [DOI] [PubMed] [Google Scholar]
- 49. Izem R., Zuber E., Daizadeh N., et al., “Decentralized Clinical Trials: Scientific Considerations Through the Lens of the Estimand Framework,” Therapeutic Innovation & Regulatory Science 58 (2024): 1–504. [DOI] [PubMed] [Google Scholar]