Abstract
Background
Platform trials are innovative clinical trials governed by a master protocol that allows for the evaluation of multiple investigational treatments that enter and leave the trial over time. Interest in platform trials has been steadily increasing over the last decade. Due to their highly adaptive nature, platform trials provide sufficient flexibility to customize important trial design aspects to the requirements of both the specific disease under investigation and the different stakeholders. The flexibility of platform trials, however, comes with complexities when designing such trials. In the past, we reviewed existing software for simulating clinical trials and found that none of it was suitable for simulating platform trials, because existing tools do not accommodate the design features and flexibility inherent to platform trials, such as the staggered entry of treatments over time.
Results
We argued that simulation studies are crucial for the design of efficient platform trials. We developed and proposed an iterative, simulation-guided “vanilla and sprinkles” framework, i.e. moving from a basic to a more complex design, for designing platform trials. We addressed the functionality limitations of existing software, as well as the unavailability of the underlying code, by developing a suite of open-source software for simulating platform trials based on the R programming language. For example, the newly developed software supports simulating the staggered entry of treatments throughout the trial, choosing different options for control data sharing, and specifying different platform stopping rules and platform-level operating characteristics. The software we developed is available through open-source licensing to enable users to access and modify the code. Two of these software packages, used separately by independent teams to implement the same platform design, produced the same results.
Conclusion
We provide a framework, as well as open-source software for the design and simulation of platform trials. The software tools provide the flexibility necessary to capture the complexity of platform trials.
Keywords: Platform trials, Simulations, Software, Open-source, R
Introduction
The European Union Patient-Centric Clinical Trial Platforms (EU-PEARL) project, a collaboration of public and private interests, was created to accelerate the development of medicines through wider use of clinical platform trials [1, 2]. Such trials involve innovative statistical methodologies implemented under a master protocol for the ongoing evaluation of multiple investigational treatments, sequentially or concurrently, compared to a common control when targeting a common pathology [2–5].
Initially designed to accelerate testing of cancer treatments [6–10] and therapies for other pathologic conditions [11–14], a broader expansion of platform trials occurred during the global COVID-19 pandemic as the rapid evaluation of potential treatments was of urgent concern to public health [12, 15–17]. Despite increased application of platform trial designs, barriers to their uptake remain, including the inherent complexity of organizing, designing, implementing, managing, and analysing such trials [18–20]. To reduce barriers, EU-PEARL researchers have developed generic tools to guide the infrastructure, patient engagement, and workflow of such collaborative trials, as well as methods for meeting the complex ethical, regulatory, legal, statistical, and data requirements of platform trials [2, 21]. One specific EU-PEARL output to support cross-functional design and conduct of platform trials is the "Platform Best Practices Tool", which has been developed as a checklist to aid the various functions involved in platform trials [22]. Platform design discussions within EU-PEARL's disease-specific work packages were cross-functional and not based on statistical design properties alone. In particular, any potential statistical benefits need to be weighed against potential operational or clinical constraints [2]. Interestingly, some of those concerns may surface only during cross-functional discussions of simulation scenarios and results. The importance of balancing requirements from different perspectives has also been identified outside of EU-PEARL. While EU-PEARL was completing its final deliverables, the DIA working group on complex innovative designs developed a cross-functional scorecard to aid in the initial assessment of the suitability of master-protocol approaches in a generic and efficient manner [23].
A vital component of a platform trial is its statistical design and supporting statistical software, which is used to run a priori trial simulations to estimate the operating characteristics of the trial and to guide decision making for adaptive participant randomisation and determinations of success or futility of treatments as the trial progresses [24–26]. Two systematic reviews of the available software to support platform trials and other complex clinical trial designs identified a lack of software with sufficient design flexibility [27] and limited reporting and availability of the code used in the design [28].
To address these ongoing statistical software needs within the framework developed by the EU-PEARL project, we developed a suite of software tools. We used R programming language and open-source licensing to enable users to access and modify the code. Adhering to the aim of long-term sustainability of the software, each software tool is available through GitHub [29], with access available through GPL-3 [30] or MIT [31] licenses. In the sections that follow, we make the case why simulation is crucial at the design stage of a platform trial and describe each of the software tools we developed to support clinical platform trials.
Importance of simulations
Optimal clinical trial design depends on a set of design requirements (e.g., the type 1 error rate and the statistical power under a certain assumed treatment effect), design features (e.g., the number and timing of interim analyses, the time between entry of new treatment arms, the maximum number of treatment arms), as well as scenario-specific assumptions, such as the recruitment rate and prevalence of certain participant subgroups. For simple designs, the properties of the study design can be evaluated using closed formulae. Traditionally, sample size calculations for a randomized controlled clinical trial (RCT) with two parallel arms (experimental vs. control) have been mainly based on chi-squared tests and t-tests (approximations), requiring only a minimum set of specifications (such as the appropriate statistical test, significance level and allocation ratio) and assumptions (such as the effect size, variability, and targeted power) to derive the appropriate sample sizes (see Fig. 1A).
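For such a standard two-arm design, the closed-form calculations are readily available in base R. A minimal sketch is given below; the effect size, power, and significance level are illustrative placeholders, not values taken from any EU-PEARL design:

```r
# Continuous endpoint: sample size per arm for a two-sample t-test to detect
# a standardized effect of 0.5 with 90% power at two-sided alpha = 0.05
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.9)
# n is approximately 85 participants per arm

# Binary endpoint: 40% vs. 60% response rate (normal/chi-squared approximation)
power.prop.test(p1 = 0.4, p2 = 0.6, sig.level = 0.05, power = 0.9)
```

These one-line calls illustrate the point made above: for a simple parallel-group RCT, the full set of specifications and assumptions fits into a handful of arguments, and no simulation is needed.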
Fig. 1.
Comparison of traditional randomized controlled clinical trials (RCTs) to platform trials. In traditional RCTs (panel A) with a fixed sample size and a frequentist test at the end of the study, we usually only consider the trade-off between sample size, statistical test used, assumed treatment effect and power (and an analytical relationship exists among these quantities). In platform trials (panel B), we need to consider many more design choices and assumptions, such as adaptive design elements, allocation ratios over time, staggered entry of treatments over time, and recruitment rate. Design choices and assumptions found in platform trials, but not in traditional RCTs, are highlighted in red
However, the design elements of platform trials will affect the properties and operating characteristics of the trial (such as bias, time to success, or chance for participants to be assigned to the best treatment) in ways that are often not anticipated, and which may be difficult to summarize. For instance, investigational cohorts in intervention-specific appendices (ISAs) could be sized to reach a targeted power of 90% under a certain assumed effect, when assuming a certain randomization ratio. Within a platform trial, one would typically aim to share control data across the platform. Adding a new ISA or dropping an ISA from the platform might result in an adjustment to the randomization ratio, such that the final observed allocation ratio deviates from the originally planned assumption. As a result, the total sample size for an investigational cohort will change, which has implications within the ISA itself, but also affects other ISAs due to the planned data sharing across ISAs. As another example, the family-wise error rate will depend on the concurrency of the assessment of interventions, while the actual overlap will depend on the time of entry and possibly on the interim results of the tested interventions. This makes it challenging to assess the family-wise error rate and to define the least favourable configuration for this operating characteristic.
More complex designs, including (Bayesian) adaptive designs and platform designs, usually require the specification of a substantially larger number of design elements at the design stage than traditional RCTs (see Fig. 1B), and usually no closed formulae are available to calculate the operating characteristics. Simulations are required to derive the operating characteristics of complex trial designs. Such simulations can account for the design choices, as well as the underlying uncertainty in the assumptions and the randomness of the data. Simulation studies repeatedly emulate the conduct of the clinical trial within a computing environment, running the same design and scenario settings many times. By analysing all the replications, insights into the performance of the design can be gained, e.g., how likely a treatment with a certain effect size will be successful, or how likely a treatment will be stopped for futility if there was no effect at all. Simulations can be used to present, for example, either the results of a single hypothetical clinical trial, or the aggregated operating characteristics of a single scenario. Usually, operating characteristics are calculated over a range of scenarios to increase the robustness of the trial design, while also considering multiple stakeholders’ needs. The various stakeholders involved in a platform trial will have different study design needs. For example, the type 1 error rate for the evaluation of comparative effectiveness on the platform level or treatment level might be of interest to regulators, while the average and maximum sample sizes required might be of budgetary interest to the owners of the intervention, and the average likelihood of receiving a new investigational treatment as opposed to a placebo control might be of relevance to the trial participants.
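The Monte Carlo logic behind such simulation studies can be sketched in a few lines of base R. The example below is deliberately minimal, a two-arm comparison with illustrative settings, far simpler than a real platform design, but it shows how repeated replication turns individual trial outcomes into operating characteristics:

```r
# Estimate power and type 1 error of a two-arm trial by repeated simulation
set.seed(42)

simulate_trial <- function(n_per_arm, effect) {
  control   <- rnorm(n_per_arm, mean = 0,      sd = 1)
  treatment <- rnorm(n_per_arm, mean = effect, sd = 1)
  t.test(treatment, control)$p.value < 0.05   # TRUE if the trial is "successful"
}

n_sim <- 10000
power_est <- mean(replicate(n_sim, simulate_trial(85, effect = 0.5)))  # near 0.90
type1_est <- mean(replicate(n_sim, simulate_trial(85, effect = 0)))    # near 0.05
```

In a platform trial simulator, the body of `simulate_trial` is replaced by a far richer model (staggered treatment entry, shared controls, interim decision rules), and many more summaries than a single success indicator are collected per replication.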
Clinical trial simulations are indispensable for platform trials to motivate the choice of design (over a conventional trial design), to identify optimization needs, and to understand the design characteristics. Some design characteristics in platform trials may appear counter-intuitive at first sight. For example, due to competitive enrolment within a platform trial, the evaluation of interventions within the platform trial could take longer than in a stand-alone trial. In addition, due to unintended shifts in the timing of an interim analysis, interim analyses could be conducted too early, such that false futility decisions might result, thereby reducing the power. In other situations, the evaluation of interventions might take “forever” if the allocation probability is too low due to suboptimal early results. Understanding the potential sources of inefficiency in a platform trial helps to optimize the design. In some cases, it may support the decision to recommend a conventional trial over a platform trial. Such considerations are in line with typical questions asked by regulatory assessors when evaluating adaptive clinical trial proposals [32], and represent one of the reasons we developed a “vanilla and sprinkles” concept for platform trial designs.
In comparison with conventional clinical trial designs, platform trials are adaptive by nature. Any inclusion of an additional ISA is an adaptive change to the design, which will typically have implications for ongoing interventional cohorts (even if unintended). In adaptive clinical trial designs, the conditional power is one standard tool for evaluating the chance of study success at interim analyses, given the data observed so far. Based on this interim assessment (sometimes supported by simulations), an independent data monitoring committee will issue adaptation recommendations, such as changing the sample size [33, 34]. Ideally, during the implementation of platform trials, similar conditional evaluations should be considered whenever a decision to include a new intervention is undertaken. The inclusion of a new intervention might change the power or time to decision making for the ongoing treatments. This impact has typically been investigated during the initial development of the platform trial design. However, once more data on recruitment speed and possibly interim efficacy become available, an evaluation of the conditional operating characteristics given the observed data might further support decision making. Simulations provide a powerful and flexible tool for conducting these assessments and enable relevant discussions during platform trials.
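Conditional power itself is easily estimated by simulation: the interim data are held fixed, and only the not-yet-observed portion of the trial is simulated repeatedly. A minimal base-R sketch with illustrative settings (sample sizes, the interim effect, and the assumption used for future data are all placeholders):

```r
# Conditional power at an interim analysis, by simulating only the future data
set.seed(1)
n_total <- 170; n_interim <- 80          # per arm (illustrative)
n_remaining <- n_total - n_interim

# Interim data, fixed once; in practice these are the observed trial data
ctl_interim <- rnorm(n_interim, mean = 0)
trt_interim <- rnorm(n_interim, mean = 0.35)

assumed_effect <- 0.35   # future data simulated under the interim estimate;
                         # other choices (e.g. the originally planned effect) are common
cond_power <- mean(replicate(5000, {
  ctl <- c(ctl_interim, rnorm(n_remaining, mean = 0))
  trt <- c(trt_interim, rnorm(n_remaining, mean = assumed_effect))
  t.test(trt, ctl)$p.value < 0.05
}))
cond_power
```

The same scheme generalizes to platform trials: conditioning on everything observed so far (recruitment speed, interim effects, open ISAs) and simulating the remainder of the platform yields conditional operating characteristics for decisions such as adding a new intervention.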
Concept of a “vanilla and sprinkles” design – from a basic to a more complex design
Design options are numerous for platform trials, while uncertainty in the assumptions is typically extensive. During the design process, it may not be obvious how many interventions will be tested throughout the platform trial, when those interventions will be added, or how many participants will enroll, and by when. The high dimensionality of the design space, coupled with the large uncertainty in the assumptions, might present a seemingly insurmountable barrier to opening trial design discussions. Similar issues were faced in the ideation phase of the disease-specific platform trials for EU-PEARL. The concept of “vanilla and sprinkles” was conceived to advance design discussions in such situations. Instead of opening the discussion with a “fully developed” complex platform trial design, the approach starts from a simple platform study design (a “vanilla” platform trial design; sometimes also referred to as a “skeleton” or “strawman”), which addresses the minimum requirements for assessing individual interventions with a shared control [27]. The conception of a “fully developed” complex platform trial design would typically take significant time and involve assessing different design options over a range of scenarios. After discussing the operating characteristics within a cross-functional team that potentially includes external stakeholders, the final “fully developed design” would typically still require numerous adjustments and re-simulations. From this perspective, starting with a clean, simple design allows one to open the following discussions within the cross-functional team: a) the appropriateness of scenario assumptions; b) the undesired features of the design, i.e., in which cases the platform trial is worse than a stand-alone trial; and c) the minimum requirements for the platform design.
The discussions within the cross-functional team would typically consider aggregate simulation results, as well as individual simulation runs. Examples of aggregate simulation results are the average sample size of the platform trial, the average number of correctly established interventions, or the time to the first successful treatment. Individual simulation runs display the characteristics of a single simulated data set, such as the effect size observed for the different interventions at different decision time points, the decisions made for individual interventions, the duration of testing the interventions, as well as the individual participant responses. In some situations, the display of individual simulation runs helps the cross-functional team identify unfavourable trial design properties and express decision rules [35].
Based on the simple vanilla platform design, a range of sprinkles can then be added to the platform trial design to address undesired features or optimize already promising design elements. Sprinkles that might be added include: 1) the use of more advanced statistical analysis methodology (e.g., Bayesian modelling or use of non-current controls); 2) changes to the timing of interim analyses, or additional interim analyses, to identify success or futility; 3) modification of randomization ratios, e.g., by the use of response-adaptive randomization methods; 4) adjustments to sample sizes, including implementation of blinded/unblinded adaptive sample size re-estimation; and 5) a limit on the maximum number of concurrently evaluated interventions. The sprinkles may be evaluated in isolation or in combination, using the next round of simulations on an updated set of scenarios. The resulting design process is iterative. Promising sprinkles are maintained and further optimized, while complex and redundant design elements are subsequently removed from the candidate design. Figure 2 provides a schematic overview of this iterative process.
Fig. 2.
The vanilla and sprinkles concept in the design of a platform trial. This approach suggests starting with a simple platform trial design (the “vanilla” design shown on the far left) that addresses the minimal requirements of the trial. After an initial round of simulations and presentation of the operating characteristics to the cross-functional trial team and various stakeholders (pink hexagons), the potential shortcomings and gaps in the design (blue circles) will be identified and can then be addressed by adding more “sprinkles” to the design (differently colored ice cream scoops). This iterative process continues until a final, fully developed candidate design has been produced (a vanilla and sprinkles design; shown on the far right). HTA: health technology assessment
When multiple design choices and assumptions are evaluated simultaneously, challenges arise for understanding both their individual and joint effects on the operating characteristics. To better facilitate the fast exploration and visualization of large simulation studies that investigate several design choices and assumptions simultaneously, we developed an open-source R Shiny app called AIRSHIP [36, 37].
Practical example – designing a platform trial in non-alcoholic steatohepatitis (NASH)
Within EU-PEARL, four work packages focused on creating master protocols and platform trial designs for specific diseases with high unmet medical need (“disease-specific work packages”). For each of these work packages, the final design and master protocol were developed using clinical trial simulations based on one or more of the software packages described later in this paper – a summary of the different work packages can be found in Table 1. Further publicly available design materials (study protocols, simulation studies) exist for Major Depressive Disorder (MDD) [38, 39], Tuberculosis (TB) [40], Non-Alcoholic Steatohepatitis (NASH) [41–44] and Neurofibromatosis (NF) (Jacko P, Heimann G, Parke T: Designing Master Protocol Trials for Single-Arm Studies, Under review) [45, 46] – learnings from interactions with other stakeholders were also previously published [47–49]. In this section, as an example, we share in-depth insights into one of the disease-specific work packages in EU-PEARL, focusing on how the framework introduced in the previous section aided the design process.
Table 1.
Overview of the disease-specific working groups in EU-PEARL, along with a short description of the design, software used to simulate the design and current trial status
| Disease | Design Description | Software Used | Current Status of the Platform Trial |
|---|---|---|---|
| Major Depression | The phase 2 platform trial for major depressive disorder uses two-step randomization, shared placebo controls within treatment domains, and interim futility analyses to efficiently evaluate novel pharmacological interventions for treatment-resistant and partially responsive depression, utilizing the Montgomery-Åsberg Depression Rating Scale (MADRS) as the primary endpoint over a 6-week period | MDD | Generic master protocol template finalized; about to be launched |
| Tuberculosis | The Tuberculosis platform trial evaluates the efficacy and safety of multiple novel treatment regimens for drug-susceptible TB, using a shared control arm, response-adaptive randomization, and interim analyses to enable efficient resource allocation and dynamic decision-making based on predefined criteria for futility or efficacy | TBSimulator | Generic master protocol template finalized; not pursued |
| Non-Alcoholic Steatohepatitis | The NASH phase 2b platform trial uses a Bayesian analysis to evaluate multiple interventions, incorporating shared control arms, predefined success and futility criteria and multi-level decision rules, aiming to optimize efficiency and decision-making in a high-need therapeutic area | Cats, CohortPlat | Generic master protocol template finalized; not pursued |
| Neurofibromatosis | The platform trial for neurofibromatosis employs a Phase 1/2 adaptive design, where each investigational therapy is assessed independently using a binary (response/non-response) endpoint defined per manifestation, with interim analyses enabling early stopping for efficacy or futility, and response rates benchmarked against predefined historical or desired thresholds | SIMPLE | Generic master protocol template finalized; about to be launched. https://www.gcaresearch.org/research/neurofibromatosis-platform-trial-overview/ |
NASH is a progressive liver disease characterized by the accumulation of fat in the liver (steatosis) along with inflammation and liver cell damage, which can lead to fibrosis and cirrhosis. It is a severe form of non-alcoholic fatty liver disease (NAFLD) and is often associated with obesity, type 2 diabetes, and metabolic syndrome. During the lifetime of EU-PEARL, no therapy had yet been approved by the FDA or EMA, with the standard of care being diet and exercise – since then, the FDA has approved Rezdiffra [50]. Design efforts in a cross-functional team were initially focused on a platform trial in which combinations of two existing monotherapies would be evaluated. As a “vanilla” (or strawman/skeleton) design, we used a sequence of two-arm RCTs, where each of the combinations would be compared against a control independently. We then “built” the platform step-by-step by first including all individual trials in a common platform and studying the impact a common control arm, and the allocation thereto, would have on the operating characteristics. We realized that the definition of new “per-platform” operating characteristics alongside the existing “per-treatment” operating characteristics was necessary to accurately capture the platform’s performance. Next, we included the monotherapies and used decision rules in line with regulatory requirements. Finally, we included adaptive design elements (e.g. an interim analysis based on a surrogate short-term endpoint). We conducted simulations to answer specific design questions such as: a) Under our assumptions regarding the arrival of new combination therapies to be tested in the platform and uncertainty regarding treatment effects and decisions made at interim, how “big” (in the sense of how many combinations are evaluated in the platform trial) would the trials become? b) How “good” (in the sense of predicting the final endpoint) would our surrogate interim endpoint have to be for the interim analyses to be efficient?
c) Depending on different stakeholder views, should we optimize the trial design for per-treatment or per-platform power and type 1 error? Answers to these questions, as well as more information, can be found in a recent publication [42]. Throughout the project, the attention in the NASH design shifted towards the two correlated binary endpoints required by the FDA and EMA: resolution of NASH without worsening of fibrosis, and improvement of fibrosis without worsening of NASH. Initially, it was unclear whether the FDA and EMA would be consistent in whether they would require a new treatment to show superiority on one or both histological endpoints. To understand all the implications, we again conducted extensive simulations under various assumptions using different design options [41]. These simulations were used in the briefing book and formed the basis for in-person discussions with the FDA (CPIM meeting) and EMA (ITF meeting) [48].
EU-PEARL open-source R software
Although simulation is essential for designing platform trials, it is also a major challenge from a technical point of view. In our systematic software review [27], no single software solution could be identified that would cover all the needs of simulation studies for EU-PEARL. A platform trial simulation software development process could follow one of two trajectories: 1) develop many stand-alone packages that can be used to design some specific aspect of platform trials well, but which are limited in their flexibility for design customization; or 2) develop a common, more general software platform, which allows users to combine different required design elements in a simple manner. Within EU-PEARL, both trajectories were followed, i.e., stand-alone software was developed for specific tasks and research questions; however, an attempt was also made to develop a fully modular platform trial simulator. The latter benefited from the learning experiences gleaned from programming the former.
Throughout the EU-PEARL project, software packages were developed to support statistical evaluation of platform designs, to support methodological considerations, and to provide general tools required when simulating platform trials. Some of the code is non-generic: it was developed to assess specific methodological problems and platform trials and thus might not generalize to other situations.
Specifically, these packages needed to be developed because (a combination of) certain features essential to the intended simulation program did not previously exist in software. To give a few examples, the working group on non-concurrent controls required software that facilitates comparisons of different analysis methods related to (not) using non-concurrent controls, different ways to simulate time trends so that these analysis methods can be compared meaningfully, and the simulation of the staggered entry of treatments into the platform over time (NCC package). The Major Depression disease-specific working group needed a simulator that features adaptive allocation to the control group, multiple modes of drug administration, as well as staggered entry of treatments over time (MDD package). The NASH disease-specific working group needed a simulator that uses Bayesian decision rules, simulates the staggered entry of treatments over time, allows for different treatment effects to be sampled from a distribution, allows for flexible ways of sharing the common control data, investigates combination therapies and their respective monotherapies, and uses a surrogate endpoint at interim (Cats and CohortPlat packages).
The simulation code (CohortPlat) developed to conduct the initial simulations in NASH [42] (a platform trial design with combination therapies) was used as a backbone for the simulation software (MDD) in MDD [38] and the final simulation software in NASH [41] (Cats). Although these packages were developed with customizability in mind, we soon realized that any software written to satisfy ad-hoc project needs embeds too many implicit assumptions to be flexible enough for use as-is in different projects – in a matter of weeks, a design could change dramatically with respect to the number of treatments investigated, decision rules, adaptive design elements, or the operating characteristics of interest. Finally, having experienced first-hand the difficulty of carrying software code developed to address the issues of one particular project over into another project, we created a modular simulation architecture that allows the combination of all required functionalities and is designed with code sharing and re-usability in mind (SIMPLE and TBSimulator). Both software packages were used to independently validate the results obtained in the MDD simulations.
The section that follows provides a summary of each of the software packages developed through EU-PEARL, together with the location of the open-source packages for continued use and adaptation. Table 2 gives an overview of the software packages developed and their respective code repositories.
Table 2.
Overview of the open-source R software packages developed in EU-PEARL, with their license and code repository location
| Name | Description | License | Repository link |
|---|---|---|---|
| Cats | Package for simulating cohort platform trials with co-primary, binary endpoints | GPL-3 | https://github.com/el-meyer/cats |
| CohortPlat | Package for simulating cohort platform trials that evaluate combination therapies | GPL-3 | https://github.com/el-meyer/CohortPlat |
| MDD | Functions to simulate platform trials in the field of major depressive disorders | GPL-3 | https://github.com/dariozchl/MDD-platform-trials |
| gsLOND | Functions to implement a group-sequential platform trial, controlling the online false discovery rate | GPL-3 | https://github.com/SoniaZehetmayer/gsLOND |
| NCC | Package for simulating and analysing platform trials with non-concurrent controls | MIT | https://pavlakrotka.github.io/NCC/ |
| TBSimulator | Generic platform trial simulation code, including specific code to execute simulations for the EU-PEARL design discussions in tuberculosis | GPL-3 | https://github.com/EUPEARL/TBSimulator |
| SIMPLE | Architecture to build nearly infinitely complex (platform) trials in easily re-usable modules | GPL-3 | https://github.com/el-meyer/simple |
Cats
Cats simulates a cohort platform trial design whereby every cohort consists of two arms (control and experimental treatment). Co-primary binary endpoints are used, and decisions at the interim or final analysis are made using either Bayesian or frequentist decision rules. Several options for sharing control data across cohorts are implemented. Realistic trial trajectories are simulated, and the operating characteristics of the designs are calculated [41].
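The flavour of such Bayesian decision rules can be illustrated with a simplified sketch for a single binary endpoint (Cats itself handles co-primary endpoints; the priors, data, and threshold below are illustrative and not taken from the package):

```r
# Posterior probability that the treatment response rate exceeds control,
# using independent Beta(1, 1) priors on the two response rates
set.seed(7)
resp_trt <- 28; n_trt <- 50      # illustrative interim data
resp_ctl <- 18; n_ctl <- 50

post_trt <- rbeta(100000, 1 + resp_trt, 1 + n_trt - resp_trt)
post_ctl <- rbeta(100000, 1 + resp_ctl, 1 + n_ctl - resp_ctl)

prob_superior <- mean(post_trt > post_ctl)   # P(p_trt > p_ctl | data)
decision <- if (prob_superior > 0.975) "success" else "continue"
```

Rules of this form (posterior probability of superiority compared against a pre-specified threshold) are themselves tuned by simulation, since the thresholds determine the frequentist operating characteristics of the design.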
CohortPlat
CohortPlat [51] facilitates the simulation of cohort platform trials with binary endpoints, where each cohort consists of a combination treatment, the respective monotherapies, and control. In this design, the cohorts enter and leave the platform trial over time. One of the monotherapies is identical in each cohort (the “backbone monotherapy”). Bayesian decision rules are used at the interim analysis (early futility or efficacy based on a surrogate endpoint) and at the final analysis to declare the combination therapy successful or futile. Sharing of the control and the backbone monotherapy data across cohorts is possible. The package offers extensive flexibility with respect to both platform trial trajectories, as well as treatment effect scenarios, and decision rules [42].
MDD
MDD simulates a platform trial with two populations, each consisting of several treatments with different routes of administration (“domains”) and an arbitrary number of treatment arms per domain and population [38]. Each domain has a shared control arm, and separate analyses for the populations are conducted. The primary endpoint is the change in a continuous variable from baseline to follow-up. An analysis of covariance is used to test the null hypothesis of no treatment effect at the interim analysis and the final analysis.
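The primary analysis described above can be sketched in base R with simulated, purely illustrative data (the coefficients and sample size below are placeholders, not parameters of the MDD package):

```r
# ANCOVA: change from baseline modelled with baseline as a covariate
set.seed(3)
n <- 100
arm      <- factor(rep(c("control", "treatment"), each = n))
baseline <- rnorm(2 * n, mean = 30, sd = 6)     # e.g. baseline MADRS scores
change   <- -8 + 0.2 * baseline - 3 * (arm == "treatment") + rnorm(2 * n, sd = 5)

fit <- lm(change ~ baseline + arm)
summary(fit)$coefficients["armtreatment", ]     # adjusted treatment effect and test
```

Adjusting for baseline typically increases power relative to a plain two-sample comparison of change scores, which is why ANCOVA is the standard analysis for this type of endpoint.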
gsLOND
This software implements the group-sequential significance levels based on the number of discoveries (gsLOND) procedure for online false discovery rate control, which is described in more detail by Zehetmayer et al. [52]. The user enters an input vector of p-values, together with the specification of the order of the p-values, stage, and time, to calculate one of three gsLOND procedures. An output matrix containing the test decision, stopping stage, individual significance level, and the group-sequential boundaries for each hypothesis is returned.
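gsLOND builds on the (non-sequential) LOND rule, in which the significance level spent on the i-th hypothesis grows with the number of discoveries made so far. A minimal sketch of plain LOND for intuition (the 1/i² gamma sequence and its finite normalization are illustrative choices; the group-sequential extension of [52] is not shown):

```python
def lond(p_values, alpha=0.05):
    """Plain LOND: hypotheses are tested in arrival order, and the level
    spent on hypothesis i is alpha * gamma_i * (discoveries so far + 1).
    Here gamma_i is proportional to 1/i**2 and, for simplicity, is
    normalized over the hypotheses actually supplied."""
    m = len(p_values)
    c = sum(1 / i**2 for i in range(1, m + 1))
    decisions, discoveries = [], 0
    for i, p in enumerate(p_values, start=1):
        level = alpha * (1 / i**2 / c) * (discoveries + 1)
        reject = p <= level
        decisions.append(reject)
        discoveries += reject
    return decisions
```

Note how each rejection relaxes the thresholds for later hypotheses, which is what makes the procedure suitable for a platform trial in which hypotheses arrive over time.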
NCC
NCC [53] is an R package that allows users to simulate platform trials and perform treatment–control comparisons using non-concurrent control data. The package supports simulation of complex platform trial designs with continuous or binary endpoints and a flexible number of treatment arms that enter the trial at different time points. The software accommodates different treatment effects among the arms and includes several patterns for time trends. Analytic approaches currently implemented in the package cover frequentist models (e.g., a regression model adjusting for time as a fixed effect [54, 55], a mixed model adjusting for time as a random factor, and regression splines), the Bayesian time machine [56], a meta-analytic predictive prior [57], separate analysis, and pooled analysis.
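Which control participants count as concurrent or non-concurrent is determined by each arm's recruitment window: controls recruited while an arm was open are concurrent for that arm, all others are not. A small sketch of this bookkeeping (not the NCC package API; function and variable names are invented for illustration):

```python
def concurrent_controls(control_times, arm_windows):
    """For each experimental arm (name -> (open, close) recruitment window),
    return the indices of shared-control participants recruited while the
    arm was open; all other controls are non-concurrent for that arm."""
    return {
        name: [i for i, t in enumerate(control_times) if open_t <= t <= close_t]
        for name, (open_t, close_t) in arm_windows.items()
    }

control_times = [1, 3, 5, 8, 11, 14]        # calendar entry times of controls
arm_windows = {"A": (0, 6), "B": (7, 15)}   # arm B joins the platform later
cc = concurrent_controls(control_times, arm_windows)
# for arm B, the controls recruited at times 1, 3 and 5 are non-concurrent
```

The analysis methods listed above differ precisely in how they use (or adjust for) the non-concurrent part of this split.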
TBSimulator
TBSimulator comprises a generic platform trial simulator with specific customized functions to execute simulations for a flexible tuberculosis (TB) platform trial design. The TB-specific functions are not part of the core code but are integrated into the generic simulator via predefined entry points to the core code, thus serving as application programming interfaces (APIs). This allows large parts of the code to be re-used for other platform simulators. The generic simulator provides the architecture for executing many of the administrative parts of platform simulations, namely the recruitment of simulated participants, data creation, data management, monitoring of analysis triggers, and allocation between and within ISAs. Within the code, numerous APIs are predefined to enable “customized” project code. The APIs are used in particular to define specific functions for generating non-standard endpoint data, analysing those data, triggering the interim and final analyses, and simulating the decision processes in the analyses.
SIMPLE
Simulating platform trials efficiently (SIMPLE) [58] was developed to solve the problem of poor shareability and re-usability of existing software code in future projects. In the software architecture we proposed, different aspects of the simulation (e.g., participant recruitment, analysis strategies, and the inclusion and exclusion of further interventions into the platform) are governed by partially independent and re-usable “modules”. The main simulation wrapper, however, is completely general and makes no assumptions regarding the actual content of the modules—there are only minimal and transparent assumptions in the software code. Once implemented, the different aspects of platform trial simulation (e.g., checking whether any arms have reached analysis milestones or participant recruitment targets) can be re-used independently, and different interventions can have different analysis strategies within the same simulation. Due to its architecture, SIMPLE can be used as a backbone from which to create highly complex designs that are accessible to users with very limited R skills, while more advanced users can nearly infinitely tweak the designs.
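The architecture described above can be pictured as a generic wrapper that only sequences user-supplied modules and holds the shared state. The following toy example (not SIMPLE's actual interface; the module signatures and state fields are assumptions) illustrates the idea:

```python
def run_trial(recruit, analyse, stop, max_steps=100):
    """Generic simulation wrapper in the spirit of a modular design: it only
    sequences the modules and holds shared state; all trial-specific logic
    lives in the user-supplied callables."""
    state = {"n": 0, "step": 0, "decisions": []}
    while state["step"] < max_steps and not stop(state):
        state["n"] += recruit(state)          # recruitment module
        result = analyse(state)               # analysis module
        if result is not None:
            state["decisions"].append(result)
        state["step"] += 1
    return state

# Minimal hypothetical modules: constant recruitment, a milestone-triggered
# analysis, and a platform stopping rule based on decisions made so far.
state = run_trial(
    recruit=lambda s: 10,
    analyse=lambda s: "success" if s["n"] >= 100 else None,
    stop=lambda s: len(s["decisions"]) > 0,
)
```

Because the wrapper makes no assumptions about the modules' content, different interventions can plug in different analysis strategies without touching the core loop.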
Discussion and conclusion
Platform trials allow for various options of design features, e.g., adding or dropping treatment arms, changing randomization ratios, and population enrichment. When planning a platform trial, it is important to understand how the design settings impact the overall performance of the trial. Given the complexity of the broad range of design and analysis options at hand within a platform trial, it is difficult to provide simple formulae for power calculations like those available for traditional two-arm RCTs. Many more performance indicators need to be considered in a platform trial than in standard clinical trials. For example, one could look at the power overall, but also at the level of individual treatments. Questions to be considered include: How quickly are promising treatments detected? How quickly are ineffective treatment arms stopped for futility without increasing the risk of mistakenly dropping effective arms? What is the interplay between the sample size, recruitment rate, timing of interim analyses, and the inclusion of new treatments? Assessing and understanding the set of important operating characteristics of platform trials requires clinical trial simulations; doing so efficiently requires customized software tools.
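Such operating characteristics are estimated by simulating the trial many times under different assumed effect scenarios. A minimal sketch of arm-level power and type I error estimation for a multi-arm trial with a shared control (the one-sided two-proportion z-test and all design parameters below are invented for illustration, not taken from any of the tools described here):

```python
import math
import random
from statistics import NormalDist

def simulate_ocs(p_ctrl, p_arms, n_per_arm, n_sims=2000, alpha=0.025, seed=42):
    """Monte Carlo operating characteristics for a multi-arm trial with a
    shared control: per-arm probability of declaring success with a
    one-sided two-proportion z-test at level alpha."""
    crit = NormalDist().inv_cdf(1 - alpha)
    rng = random.Random(seed)
    wins = [0] * len(p_arms)
    for _ in range(n_sims):
        x_c = sum(rng.random() < p_ctrl for _ in range(n_per_arm))
        for j, p in enumerate(p_arms):
            x_t = sum(rng.random() < p for _ in range(n_per_arm))
            pooled = (x_c + x_t) / (2 * n_per_arm)
            se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
            z = (x_t - x_c) / n_per_arm / se if se > 0 else 0.0
            wins[j] += z > crit
    return [w / n_sims for w in wins]

ocs = simulate_ocs(p_ctrl=0.3, p_arms=[0.3, 0.5], n_per_arm=100)
# ocs[0] estimates the type I error of a null arm, ocs[1] the power
# of an arm with a true response-rate improvement of 0.2
```

A full platform trial simulator extends this loop with staggered arm entry, interim analyses, and stopping rules, which is exactly where the customized software tools described above come in.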
We have summarized the effort of EU-PEARL to create open-source software to be used for methodological research as well as drafting master protocols in disease-specific work packages. For the latter, the “vanilla and sprinkles” concept was conceived to guide the development of a platform trial in discussion with relevant stakeholders through an iterative process. Starting with a basic (vanilla) design and adding more features (sprinkles) step by step will allow for the evaluation of an added benefit. The set of different design options and assumptions can eventually result in the investigation of a large number of different scenarios.
We have provided a short description of each software package; however, more information can be found on the developer’s GitHub pages or in the software-specific or method-specific publications we have cited. All the software packages are licensed using open-source licenses that allow the software to be used and further developed by the public, while not giving any warranties nor accepting liability in any way. As platform trials can have very specific features, tailored software code had to be developed to run the simulations required for the master protocols of EU-PEARL’s disease-specific work packages, but also for the development of new methodology, e.g., use of non-concurrent data or online multiplicity adjustments.
Based on the learnings from the tailored software tools, a blueprint was developed for a more modular approach, which resulted in a modular platform-simulation platform called SIMPLE [58]. The main advantages of the modular approach are that different aspects of platform trial simulation, e.g., checking whether any arms have reached analysis milestones or participant recruitment targets, can be re-used independently, and different interventions can have different analysis strategies within the same simulation. The modular approach allowed for the validation of some results derived using the tailored approaches, such as the tailored simulation program that was developed to design a platform trial to evaluate non-alcoholic steatohepatitis [41].
Since we conducted our systematic review in 2021 [27], more progress has been made on simulating platform trials using commercial software. Of note, FACTS [59] is the first and, to our knowledge, only commercial software that has a fully validated platform trial engine for dichotomous and continuous endpoints. Ultimately, the choice of the optimal software tool for a specific project will depend on many factors, such as its intended use (e.g., research or an actual trial), the importance of freely available code and validation, simplicity of use, and the included features or the ease of extending the software tool to meet project-specific needs. In particular, (lack of) validation is usually a key topic when discussing the use of open-source software packages with key decision makers. As a sanity check, in EU-PEARL, two teams working on modular simulators (TBSimulator and SIMPLE) independently implemented the same platform design using their respective software architectures and obtained the same results. The openstatsware working group of the American Statistical Association is dedicated to propagating good software engineering practices into routine biostatistical programming [60], which in time will allow for more efficient exchange of open-source software code and increased confidence in its validity.
We provide this set of freely available software tools, which can be easily modified to plan new platform trials, but also to conduct further methodological research. We hope that as a result, there will be increased understanding of platform trials, support for the design of future platform trials in various indications, and broader acknowledgement of the importance of simulations as a means to better understand the behaviour and operating characteristics of non-trivial clinical trial designs.
Acknowledgements
The authors thank LeeAnn Chastain for providing writing and editorial support.
Authors’ contributions
ELM, TM and FK conceived the idea for the paper. ELM, TM, MBR, MMF, PJ, PK, PM, TP, SZ, DZ and FK (all authors) developed the R packages mentioned in this paper. ELM, TM, TP and FK wrote the initial and final manuscript. ELM, TM, MBR, MMF, PJ, PK, PM, TP, SZ, DZ and FK (all authors) corrected and agreed on the final manuscript.
Funding
The authors disclose receipt of the following financial support for the research, authorship, and/or publication of this article: the EU-PEARL project, which received funding from the Innovative Medicines Initiative 2 Joint Undertaking [grant agreement number 853966]. This Joint Undertaking received support from the European Union’s Horizon 2020 research and innovation programme, the European Paediatric Influenza Analysis project, Children’s Tumor Foundation, Global Alliance for TB Drug Development, and SpringWorks Therapeutics, Inc. This publication reflects the authors’ views. Neither the Innovative Medicines Initiative, nor the European Union, European Paediatric Influenza Analysis project, nor any of the associated partners are responsible for any use that may be made of the information contained herein.
Data availability
No datasets were generated or analysed during the current study.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
Authors ELM, PJ, and TP are employees of Berry Consultants, LLC, which developed and owns the commercial software FACTS, which is noted in this publication. The other authors declare no conflict of interest.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.EU-PEARL consortium. EU-PEARL: EU Patient-Centric Clinical Trial Platforms. 2023. https://eu-pearl.eu/. Accessed 11 Jan 2024.
- 2.Koenig F, Spiertz C, Millar D, et al. Current state-of-the-art and gaps in platform trials: 10 things you should know, insights from EU-PEARL. EClinicalMedicine. 2024;67: 102384. 10.1016/j.eclinm.2023.102384. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Meyer EL, Mesenbrink P, Dunger-Baldauf C, et al. The evolution of master protocol clinical trial designs: a systematic literature review. Clin Ther. 2020;42(7):1330–60. 10.1016/j.clinthera.2020.05.010. [DOI] [PubMed] [Google Scholar]
- 4.Woodcock J, LaVange LM. Master protocols to study multiple therapies, multiple diseases, or both. N Engl J Med. 2017;377(1):62–70. 10.1056/NEJMra1510062. [DOI] [PubMed] [Google Scholar]
- 5.Angus DC, Alexander BM, Berry S, et al. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019;18(10):797–807. 10.1038/s41573-019-0034-3. [DOI] [PubMed] [Google Scholar]
- 6.Barker A, Sigman C, Kelloff G, et al. I-SPY 2: an adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clin Pharmacol Ther. 2009;86(1):97–100. 10.1038/clpt.2009.68. [DOI] [PubMed] [Google Scholar]
- 7.Sydes MR, Parmar MK, Mason MD, et al. Flexible trial design in practice — stopping arms for lack-of-benefit and adding research arms mid-trial in STAMPEDE: a multi-arm multi-stage randomized controlled trial. Trials. 2012;13:168. 10.1186/1745-6215-13-168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Parmar MK, Sydes MR, Cafferty FH, et al. Testing many treatments within a single protocol over 10 years at MRC Clinical Trials Unit at UCL: multi-arm, multi-stage platform, umbrella and basket protocols. Clin Trials. 2017;14(5):451–61. 10.1177/1740774517725697. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Alexander BM, Ba S, Berger MS, et al. Adaptive global innovative learning environment for glioblastoma: GBM AGILE. Clin Cancer Res. 2018;24(4):737–43. 10.1158/1078-0432.CCR-17-0764. [DOI] [PubMed] [Google Scholar]
- 10.Symmans WF, Yau C, Chen YY, et al. Assessment of residual cancer burden and event-free survival in neoadjuvant treatment for high-risk breast cancer: an analysis of data from the I-SPY2 randomized clinical trial. JAMA Oncol. 2021;7(11):1654–63. 10.1001/jamaoncol.2021.3690. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Berry SM, Petzold EA, Dull P, et al. A response adaptive randomization platform trial for efficient evaluation of Ebola virus treatments: a model for pandemic response. Clin Trials. 2016;13(1):22–30. 10.1177/1740774515621721. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Angus DC, Berry S, Lewis RJ, et al. The REMAP-CAP (Randomized Embedded Multifactorial Adaptive Platform for Community-acquired Pneumonia) Study. Rationale and design. Ann Am Thorac Soc. 2020;17(7):879–91. 10.1513/AnnalsATS.202003-192SD. [DOI] [PMC free article] [PubMed]
- 13.Aisen P, Bateman R, Carrillo M, et al. Platform trials to expedite drug development in Alzheimer’s disease: a report from the EU/US CTAD Task Force. J Prev Alzheimers Dis. 2021;8(3):306–12. 10.14283/jpad.2021.21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Quintana M, Saville BR, Vestrucci M, et al. Design and statistical innovations in a platform trial for amyotrophic lateral sclerosis. Ann Neurol. 2023;94(3):547–60. 10.1002/ana.26714. [DOI] [PubMed] [Google Scholar]
- 15.Vanderbeek AM, Bliss JM, Yin Z, et al. Implementation of platform trials in the COVID-19 pandemic: a rapid review. Contemp Clin Trials. 2022;112: 106625. 10.1016/j.cct.2021.106625. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Longini IM, Yang Y, Fleming TR, et al. A platform trial design for preventive vaccines against Marburg virus and other emerging infectious disease threats. Clin Trials. 2022;19(6):647–54. 10.1177/17407745221110880. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.RECOVERY Collaborative Group. Tocilizumab in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial. Lancet. 2021;397(10285):1637–45. 10.1016/S0140-6736(21)00676-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Collignon O, Burman CF, Posch M, et al. Collaborative platform trials to fight COVID-19: methodological and regulatory considerations for a better societal outcome. Clin Pharmacol Ther. 2021;110(2):311–20. 10.1002/cpt.2183. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.European Medicines Agency. Complex clinical trials — Questions and answers. 2022. https://health.ec.europa.eu/system/files/2022-06/medicinal_qa_complex_clinical-trials_en.pdf. Accessed 8 Jan 2024.
- 20.Food and Drug Administration. Master protocols for drug and biological product development. Guidance for Industry. Draft Guidance. 2023. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/master-protocols-drug-and-biological-product-development. Accessed 11 Jan 2024.
- 21.Cash-Gibson L, Pericàs J, Spiertz C, et al. EU-PEARL: Changing the paradigm of clinical trials in Europe. Eur J Public Health. 2021;31(Suppl 3):ckab165.657. 10.1093/eurpub/ckab165.657.
- 22.EU-PEARL Consortium. D2.10 Final Report on Clinical Operations Best Practices. 2023. https://eu-pearl.eu/d2-10-final-report-on-clinical-operations-best-practices/. Accessed 18 Nov 2024.
- 23.Broglio K, Lu C, Kleoudis C, et al. To Master Protocol or Not To Master Protocol. 2024. https://www.psiweb.org/docs/default-source/conference/2024-conference-slides/wednesday-19-june-2024/5-to-master-protocol-or-not-to-master-protocol---elizabeth-pilling.pdf?sfvrsn=17feafdb_2. Accessed 25 Nov 2024.
- 24.Holford N, Ma SC, Ploeger BA. Clinical trial simulation: a review. Clin Pharmacol Ther. 2010;88(2):166–82. 10.1038/clpt.2010.114. [DOI] [PubMed] [Google Scholar]
- 25.Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA. 2015;313(16):1619–20. 10.1001/jama.2015.2316. [DOI] [PubMed] [Google Scholar]
- 26.Saville BR, Berry SM. Efficiencies of platform clinical trials: a vision of the future. Clin Trials. 2016;13(3):358–66. 10.1177/1740774515626362. [DOI] [PubMed] [Google Scholar]
- 27.Meyer EL, Mesenbrink P, Mielke T, et al. Systematic review of available software for multi-arm multi-stage and platform clinical trial design. Trials. 2021;22(1):183. 10.1186/s13063-021-05130-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Grayling MJ, Wheeler GM. A review of available software for adaptive clinical trial design. Clin Trials. 2020;17(3):323–31. 10.1177/1740774520906398. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.IMI EU-PEARL. Github software repositories. 2023. https://github.com/EUPEARL. Accessed 8 Jan 2024.
- 30.Free Software Foundation, Inc. GNU General Public License. 2007. https://www.gnu.org/licenses/gpl-3.0.html.en. Accessed 8 Jan 2024.
- 31.Massachusetts Institute of Technology. MIT License. 2007. https://opensource.org/license/mit/. Accessed 8 Jan 2024.
- 32.Elsäßer A, Regnstrom J, Vetter T, et al. Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European Medicines Agency. Trials. 2014;15:383. 10.1186/1745-6215-15-383. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Meinert CL. The evolution of data and safety monitoring boards. Clin Trials. 2023;20(1):81–3. 10.1177/17407745221119171. [DOI] [PubMed] [Google Scholar]
- 34.DeMets DL, Zarin DA, Rockhold F, et al. Bringing data monitoring committee charters into the sunlight. Clin Trials. 2023;20(4):441–51. 10.1177/17407745231169499. [DOI] [PubMed] [Google Scholar]
- 35.Schuckers M, Campbell S. Telling statistical stories. Significance. 2020;17(5):30–3. 10.1111/1740-9713.01447. [Google Scholar]
- 36.Meyer EL, Kumaus C, Majika M, et al. An interactive R-Shiny app for quickly visualizing a tidy, long dataset with multiple dimensions with an application in clinical trial simulations for platform trials. SoftwareX. 2023;22:101347. 10.1016/j.softx.2023.101347. [Google Scholar]
- 37.Meyer EL. Airship — an interactive R-Shiny app for visualizing tidy long data. 2023. https://github.com/el-meyer/airship. Accessed 8 Jan 2024.
- 38.Freitag MM, Zocholl D, Meyer EL, et al. Design considerations for a phase II platform trial in major depressive disorder. arXiv Open Access Repository. 2023. https://arxiv.org/abs/2310.02080. Accessed 8 Jan 2024.
- 39.EU-PEARL Consortium. D4.5 TRD and PRD Final Master Protocol for an Integrated Research Platform. 2023. https://eu-pearl.eu/wp-content/uploads/2023/05/D4.5-TRD-and-PRD-final-master-protocol-for-IRP.pdf. Accessed 18 Nov 2024.
- 40.EU-PEARL Consortium. D5.4 TB Final Master Protocol for an Integrated Research Platform. 2023. https://eu-pearl.eu/wp-content/uploads/2023/04/D5.4-TB-Final-Master-Protocol-for-IRP.pdf. Accessed 18 Nov 2024.
- 41.Meyer EL, Mesenbrink P, Di Prospero NA, et al. Designing an exploratory phase 2b platform trial in NASH with correlated, co-primary binary endpoints. PLoS ONE. 2023;18(3):e0281674. 10.1371/journal.pone.0281674. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Meyer EL, Mesenbrink P, Dunger-Baldauf C, et al. Decision rules for identifying combination therapies in open-entry, randomized controlled platform trials. Pharm Stat. 2022;21(3):671–90. 10.1002/pst.2194. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Pericàs JM, Tacke F, Anstee QM, et al. Platform trials to overcome major shortcomings of traditional clinical trials in non-alcoholic steatohepatitis? Pros and cons. J Hepatol. 2023;78(2):442–7. [DOI] [PubMed] [Google Scholar]
- 44.EU-PEARL Consortium. D6.3 NASH Final Master Protocol for an Integrated Research Platform. 2023. https://eu-pearl.eu/wp-content/uploads/2023/04/D6.3-NASH-Final-Master-Protocol-for-an-IRP.pdf. Accessed 18 Nov 2024.
- 45.Dhaenens BA, Heimann G, Bakker A, et al. Platform trial design for neurofibromatosis type 1, NF2-related schwannomatosis and non-NF2-related schwannomatosis: A potential model for rare diseases. Neuro-Oncology Practice. 2024;11(4):395–440. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.EU-PEARL Consortium. EU-PEARL D7.4 Neurofibromatosis Master Protocol for Integrated Research Platforms. 2023. https://eu-pearl.eu/wp-content/uploads/2023/05/EU-PEARL-D7.4-NF-Master-Protocol-for-IRPs.pdf. Accessed 18 Nov 2024.
- 47.Gidh-Jain M, Parke T, König F, et al. Developing generic templates to shape the future for conducting integrated research platform trials. Trials. 2024;25:204. 10.1186/s13063-024-08034-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Nguyen QL, Hees K, Hernandez Penna S, et al. Regulatory issues of platform trials: learnings from EU-PEARL. Clin Pharmacol Ther. 2024;116(1):52–63. 10.1002/cpt.3244. [DOI] [PubMed]
- 49.Dhaenens BA, Mahler F, Batchelor H, et al. Optimizing expert and patient input in pediatric trial design: Lessons learned and recommendations from a collaboration between conect4children and European Patient-CEntric ClinicAl TRial PLatforms. Clin Transl Sci. 2023;16(8):1458–68. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Harrison SA, Taub R, Neff GW, et al. Resmetirom for nonalcoholic fatty liver disease: a randomized, double-blind, placebo-controlled phase 3 trial. Nat Med. 2023;29(11):2919–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Meyer EL, Mesenbrink P, Dunger-Baldauf C, et al. CohortPlat: Simulation of cohort platform trials investigating combination therapies. arXiv Open Access Repository. 2022. https://ar5iv.labs.arxiv.org/html/2202.02182. Accessed 8 Jan 2024.
- 52.Zehetmayer S, Posch M, Koenig F. Online control of the false discovery rate in group-sequential platform trials. Stat Methods Med Res. 2022;31(12):2470–85. 10.1177/09622802221129051. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Krotka P, Hees K, Jacko P, et al. NCC: An R-package for analysis and simulation of platform trials with non-concurrent controls. SoftwareX. 2023;23:101437. 10.1016/j.softx.2023.101437. [Google Scholar]
- 54.Lee KM, Wason J. Including non-concurrent control patients in the analysis of platform trials: is it worth it? BMC Med Res Methodol. 2020;20(1):165. 10.1186/s12874-020-01043-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Bofill-Roig M, Krotka P, Burman CF, et al. On model-based time trend adjustments in platform trials with non-concurrent controls. BMC Med Res Methodol. 2022;22(1):228. 10.1186/s12874-022-01683-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Saville BR, Berry DA, Berry NS, et al. The Bayesian Time Machine: accounting for temporal drift in multi-arm platform trials. Clin Trials. 2022;19(5):490–501. 10.1177/17407745221112013. [DOI] [PubMed] [Google Scholar]
- 57.Schmidli H, Gsteiger S, Roychoudhury S, et al. Robust meta-analytic–predictive priors in clinical trials with historical control information. Biometrics. 2014;70(4):1023–32. 10.1111/biom.12242. [DOI] [PubMed] [Google Scholar]
- 58.Meyer EL, Mielke T, Parke T, et al. SIMPLE — a modular tool for simulating complex platform trials. SoftwareX. 2023;23:101515. 10.1016/j.softx.2023.101515. [Google Scholar]
- 59.Berry Consultants, LLC. FACTS: Fixed and Adaptive Clinical Trial Simulator. Computer software version 7.0. 2023. https://www.berryconsultants.com/software/facts/. Accessed 11 Jan 2024.
- 60.openstatsware working group of the American Statistical Association (ASA) Biopharmaceutical Section (BIOP). openstatsware. 2023. https://www.openstatsware.org/. Accessed 8 Jan 2024.