Clinical trial simulations
• A clinical trial simulation randomly generates artificial patient data from an underlying set of distributional assumptions to mimic the execution of a clinical trial design on the computer.
• To fully evaluate the characteristics of a trial design, taking into account the uncertainty about the underlying assumptions and the variability of the data, numerous simulations need to be run.
• For each scenario and design of interest, the conduct of the trial is repeated many times so that, by analysing all replications, insights into the performance of the design can be gained, e.g., how often a sub-study within a platform trial with a given treatment effect size will be successful, or how often a sub-study will be stopped for futility if there is no effect at all (a short simulation sketch follows this block).
• Planning a platform trial with clinical trial simulations requires more time than designing a classical randomised controlled clinical trial.
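The following minimal sketch illustrates the idea for a single two-arm sub-study with a normally distributed endpoint; the effect size, sample size, significance level, and function names are illustrative assumptions rather than values or software from the text.

# Minimal Monte Carlo sketch of a clinical trial simulation (illustrative
# assumptions: two-arm sub-study, normal endpoint, two-sided alpha = 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2024)

def simulate_substudy(effect_size, n_per_arm=100, alpha=0.05):
    """One replication: generate patient-level data and test the null hypothesis."""
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    return p_value < alpha  # True if the sub-study is declared successful

def success_proportion(effect_size, n_replications=10_000):
    """Repeat the sub-study many times and report how often it is successful."""
    successes = sum(simulate_substudy(effect_size) for _ in range(n_replications))
    return successes / n_replications

print("Proportion successful, effect size 0.3:", success_proportion(0.3))
print("Proportion successful, no effect:", success_proportion(0.0))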
Vanilla-sprinkle concept
• The Vanilla-sprinkle concept was developed to guide the development of a platform trial iteratively, in discussion with the relevant stakeholders.
• Start with a basic ("Vanilla") design and add additional features ("sprinkles") step by step, so that the added benefit, but also the risks, of each feature can be evaluated.
Platform simulation software
• Software to assess platform trials is still limited (see https://github.com/EUPEARL).
• There are two approaches to setting up simulation programs:
(i) tailored software to assess very specific designs and features, e.g., trial designs for non-alcoholic steatohepatitis (NASH) or the assessment of new methodology (non-concurrent control data, online multiplicity control);
(ii) a modular approach to allow more general platform trial designs to be simulated.
• The modular approach enables components of the platform trial software, such as modules to simulate recruitment patterns and patient-level data, to be re-used and combined in a flexible way (a short sketch of this idea follows this block). Tailored software for specific scenarios, on the other hand, can be quicker to set up and can have lower running times.
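As an illustration of the modular idea, the sketch below (with hypothetical module and function names, not taken from the EU-PEARL software) separates recruitment, patient-level data generation, and analysis into interchangeable functions that a platform-level driver combines.

# Hypothetical sketch of a modular platform-trial simulator: each module can be
# swapped (e.g., a different recruitment pattern or endpoint model) without
# touching the rest of the pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def recruit_uniform(n_patients, months=24):
    """Recruitment module: uniform accrual of entry times over the trial duration."""
    return np.sort(rng.uniform(0, months, size=n_patients))

def generate_continuous(n_patients, effect):
    """Patient-level data module: normally distributed endpoint."""
    return rng.normal(loc=effect, scale=1.0, size=n_patients)

def analyse_ttest(treatment_data, control_data, alpha=0.05):
    """Analysis module: two-sample t-test of treatment versus control."""
    _, p = stats.ttest_ind(treatment_data, control_data)
    return p < alpha

def run_platform(recruit, generate, analyse, effects, n_per_arm=150):
    """Driver: combine the modules for each sub-study of the platform."""
    results = {}
    for arm, effect in effects.items():
        entry_times = recruit(2 * n_per_arm)   # could feed a time-trend model
        treated = generate(n_per_arm, effect)
        control = generate(n_per_arm, 0.0)
        results[arm] = analyse(treated, control)
    return results

print(run_platform(recruit_uniform, generate_continuous, analyse_ttest,
                   effects={"arm A": 0.4, "arm B": 0.0}))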
Multiplicity
• Multiplicity arises in platform trials through multiple treatment-control comparisons, subgroups, endpoints, and interim analyses (a short calculation of the resulting error-rate inflation follows this block).
• One way of controlling the false positive rate at the sub-study level is for the sub-studies to be inferentially independent, such that no extrapolation between the respective decisions can be performed. For example, tests of treatments with different mechanisms of action can be considered inferentially independent, while tests of different doses of the same drug cannot.
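To illustrate why multiple treatment-control comparisons inflate the false positive rate, the short sketch below computes the family-wise error rate 1 - (1 - alpha)^k for k independent comparisons at level alpha, together with the Bonferroni-adjusted per-comparison level; the numbers are purely illustrative.

# Family-wise error rate (FWER) for k independent comparisons at level alpha,
# and the Bonferroni-adjusted per-comparison level that restores control.
alpha = 0.05
for k in (1, 2, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"k = {k:2d}   FWER = {fwer:.3f}   Bonferroni per-test alpha = {alpha / k:.4f}")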
Sharing of data
• Sharing clinical trial data in platform trials has statistical and legal aspects.
• Special caution is required so that publishing shared control data from completed comparisons does not call the integrity of ongoing comparisons into question.
• Model-based methods to adjust for time trends when non-concurrent controls are used require data from all active arms, which may raise concerns among the owners of commercially competing compounds.
Concurrent and non-concurrent control data
• The use of non-concurrent data from control groups in treatment-control comparisons is controversial because of potential time trends, which can arise, for example, due to changes in the patient population, the standard of care, the endpoint assessment, or the participating centres.
• In platform trials, time trends can be adjusted for with statistical models that make use of data from all patients allocated to the control and to all experimental treatment arms (a short sketch of such a model follows this block). However, these models rely on assumptions which need to be justified on a case-by-case basis.
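A minimal sketch of one such model-based adjustment, assuming a continuous endpoint: a linear regression of the outcome on treatment indicators plus a categorical calendar-time period, fitted on data from all arms. The simulated data, period structure, and effect sizes below are illustrative assumptions, not a recommendation of a specific model.

# Sketch of a regression adjustment for time trends when non-concurrent
# controls are used: outcome ~ treatment indicators + calendar-time period.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Illustrative data: arm 1 recruits in periods 1-2, arm 2 only in period 2,
# the control arm spans both periods; a time trend shifts period-2 outcomes.
rows = []
for arm, periods, effect in [("control", [1, 2], 0.0),
                             ("arm1", [1, 2], 0.3),
                             ("arm2", [2], 0.3)]:
    for period in periods:
        y = rng.normal(effect + 0.5 * (period == 2), 1.0, size=100)
        rows.append(pd.DataFrame({"arm": arm, "period": period, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Fitting on all arms with period as a covariate adjusts the comparison of
# arm2 against (partly non-concurrent) controls for the time trend.
model = smf.ols("y ~ C(arm, Treatment('control')) + C(period)", data=data).fit()
print(model.params)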
Allocation ratio
Adaptations
• Early stopping for efficacy or futility, as well as adapting the sample sizes based on interim data, can improve the operating characteristics of hypothesis testing procedures and reduce the required sample size (a short simulation sketch follows this block).
• Enrichment of promising subgroups may improve the power of hypothesis tests and can increase the likelihood that trial participants receive an effective treatment.
• Adaptive statistical testing and estimation procedures need to be applied to maintain the validity of statistical hypothesis tests and confidence intervals.
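As a simple illustration of early stopping, the sketch below simulates a two-stage design with an interim futility look (stop if the interim one-sided p-value exceeds 0.5); the thresholds and sample sizes are illustrative assumptions, and a real design would use boundaries and adjusted tests chosen to control the type I error.

# Sketch of early stopping for futility in a two-stage design: stop at the
# interim if the observed effect looks unpromising, otherwise continue.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def one_sided_p(treated, control):
    """One-sided p-value for treatment > control from a two-sample t-test."""
    stat, p_two_sided = stats.ttest_ind(treated, control)
    return p_two_sided / 2 if stat > 0 else 1 - p_two_sided / 2

def two_stage_trial(effect, n_stage=75, futility_p=0.5, alpha=0.025):
    """Return (success, sample size per arm) for one simulated replication."""
    t1 = rng.normal(effect, 1.0, n_stage)
    c1 = rng.normal(0.0, 1.0, n_stage)
    if one_sided_p(t1, c1) > futility_p:            # stop early for futility
        return False, n_stage
    t2 = np.concatenate([t1, rng.normal(effect, 1.0, n_stage)])
    c2 = np.concatenate([c1, rng.normal(0.0, 1.0, n_stage)])
    return one_sided_p(t2, c2) < alpha, 2 * n_stage

results = [two_stage_trial(effect=0.0) for _ in range(5000)]
print("False positive rate under no effect:", np.mean([r[0] for r in results]))
print("Average sample size per arm:", np.mean([r[1] for r in results]))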