Gates Open Res. 2019 Mar 18;3:780. Originally published 2019 Mar 8. [Version 2] doi: 10.12688/gatesopenres.12912.2

Highly Efficient Clinical Trials Simulator (HECT): Software application for planning and simulating platform adaptive trials

Kristian Thorlund 1,a, Shirin Golchi 1, Jonas Haggstrom 1, Edward Mills 1
PMCID: PMC6556760  PMID: 31259314

Version Changes

Revised. Amendments from Version 1

We have limited affiliations to our primary affiliations. We have updated some of the sentences in the  Validation and Use Cases section due to vague language in the first version. We have therefore now added more detail on the internal validation via early-stage portfolio management use cases as well as external independent beta-testing. We believe the text now accurately reflects the intersecting stages of validation and use cases that the software has gone through prior to its release.

Abstract

Background: Adaptive designs and platform designs are two common clinical trial innovations that are increasingly being used to manage medical intervention portfolios and attain faster regulatory approvals. Planning of adaptive and platform trials necessitates simulations to understand how a set of adaptation rules will likely affect the properties of the trial. Clinical trial simulations, however, remain a black box to many clinical trials researchers who are not statisticians.

Results: In this article we introduce a simple intuitive open-source browser-based clinical trial simulator for planning adaptive and platform trials. The software application is implemented in RShiny and features a graphical user interface that allows the user to set key clinical trial parameters and explore multiple scenarios such as varying treatment effects, control response and adherence, as well as number of interim looks and adaptation rules. The software provides simulation options for a number of designs such as dropping treatment arms for futility, adding a new treatment arm (i.e., platform design), and stopping a trial early based on superiority. All available adaptations are based on underlying Bayesian probabilities. The software comes with a number of graphical outputs to examine properties of individual simulated trials. The main output is a comparison of trial design performance across several simulations, graphically summarizing type I error (false positive risk), power, and expected cost/time to completion of the considered designs.

Conclusion: We have developed and validated an intuitive, highly efficient clinical trial simulator for the planning of clinical trials. The software is open-source and caters to clinical trial investigators who do not have statistical capacity for trial simulations available within their team. The software can be accessed in any web browser via the following link: https://mtek.shinyapps.io/hect/

Keywords: Platform trial, adaptive design, trial simulation, highly efficient clinical trials, open-source software.

Introduction

Over the past two decades randomized clinical trials have become increasingly innovative 1. The surge in innovative designs stems from an increasing need to reduce waste and improve efficiencies in time and cost. Adaptive designs and platform designs are two common clinical trial innovations that are increasingly being used by the pharmaceutical industry to manage their drug portfolios and obtain faster regulatory approvals of new treatments 2–4. These types of trials, when designed appropriately, also have ethical advantages, such as reducing the number of patients exposed to an inferior or harmful treatment.

Contrary to conventional randomized clinical trials, where all patients are followed up for a fixed period of time and the pre-planned trial protocol is adhered to without deviation, adaptive trials and platform designs allow for pre-planned (and occasionally post-initiation) modifications to the protocol in the event of strong early treatment response signals 4. At face value, this makes the properties of adaptive and platform trials difficult to understand, because trial investigators do not know up front whether or which pre-planned adaptations may take place over the course of a trial. Consequently, planning of adaptive and platform trials necessitates simulations to understand how a set of adaptation rules will likely affect the properties of the trial (e.g., the probability of detecting a true effect) 5.

Simulations, whether for clinical trials or other disciplines, are a complex branch of statistics and probability theory and thus, naturally, remain a black box to most clinical trial investigators. In the pharmaceutical industry, expensive comprehensive software packages (e.g., FACTS™ or ADD-PLAN®) 6, 7 are often used to run simulations, but the technical level required to use such packages is typically that of a master's degree in statistics (or similar) with years of experience. In other settings, such as academia, where funding for these software packages is typically not available, statisticians either need to hard-code simulations from scratch or code via available packages in statistical software (e.g., the ADCT: Adaptive Design in Clinical Trials package for R) 8. Arguably, both options contribute further to the black box perception and thus are not helpful in the education and promotion of innovative clinical trial designs.

In global health, the Bill & Melinda Gates Foundation is pushing for the use of Highly Efficient Clinical Trials (HECT) – trials where investigators are open to adaptation, where trial simulation is a natural part of the planning, and where the scope of the trial can be altered to sustain local infrastructure and leave a footprint 9. Platform adaptive trial designs fit well into this context, but the use of these designs may be hampered by limited access to methodologists with capabilities in trial simulations. To address these limitations, the Knowledge Integration (KI) trial services division at the Bill & Melinda Gates Foundation initiated the development of an open-source software application with a simple user interface for platform and adaptive trial simulations. The software was developed between May 2017 and October 2018 and caters to clinical trial investigators, clinical trial methodologists, and researchers who are not statisticians or do not have access to commercial trial simulator software. This article documents the implementation, methods and functions of the software application.

Implementation

The Highly Efficient Clinical Trials (HECT) simulator is a web application written with RShiny, a web application package for the statistical software R and RStudio 10, 11. The HECT simulator is built to run in any modern browser. The HECT simulator requires a few manual inputs in the input bar (see also next section) to run. The software allows simulation outputs to be saved and loaded. The simulation output can be saved in a temporary folder that accompanies the application for online exploration and comparisons. This folder is cleared on a daily basis to prevent accumulation of data. To save the simulation output permanently, the user needs to use the download button, which downloads the results in the form of a table with rows and columns specified by the user.
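As an orientation for readers unfamiliar with Shiny, the download mechanism typically follows the standard Shiny pattern below. This is a minimal sketch inside a Shiny server function, not the actual HECT source; `results_table()` is a hypothetical reactive holding the user-selected rows and columns.

```r
# Minimal sketch of a Shiny download back-end (hypothetical names);
# the HECT source code may implement this differently.
output$download_results <- downloadHandler(
  filename = function() "hect_simulation_results.csv",
  content = function(file) {
    # write the user-selected simulation output as a CSV table
    write.csv(results_table(), file, row.names = FALSE)
  }
)
```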

The clinical trial adaptation rules implemented in the software are based on the calculation of Bayesian posterior probabilities of superiority and inferiority 12. These are hard-coded in R, and further technical details can be found in the software manual's appendix.
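As a rough illustration of such a calculation (not the authors' exact back-end code), the posterior probability that one arm is superior to another for a binary outcome can be approximated by Monte Carlo sampling from the two beta posteriors; the Beta(1, 1) priors and data below are illustrative assumptions.

```r
# Hypothetical sketch: posterior probability of superiority for a
# binary outcome, assuming a Beta(1, 1) prior per arm so that the
# posterior is Beta(1 + events, 1 + non-events).
prob_superior <- function(events_a, n_a, events_b, n_b, n_draws = 1e5) {
  draws_a <- rbeta(n_draws, 1 + events_a, 1 + n_a - events_a)
  draws_b <- rbeta(n_draws, 1 + events_b, 1 + n_b - events_b)
  mean(draws_a > draws_b)  # Pr(response rate in arm A > arm B)
}

prob_superior(events_a = 45, n_a = 100, events_b = 30, n_b = 100)
```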

Overview of inputs and functions

The HECT software application comes with a set of functions and outputs for trial simulation and conventional sample size calculations. The software also provides a brief user manual. Figure 1 shows the opening window of the HECT software. The trial simulator inputs and outputs are found in the first tab to the left, the conventional sample size calculator is available in the middle tab, and the user manual is found in the third tab to the right. The manual outlines each of the input options individually and provides summaries of available outputs. The manual appendix includes a detailed account of the statistical methods used at the back-end of the software. For all functions, the input bar is found on the left of the browser window, and the outputs appear on the right side of the browser window.

Figure 1. Displays the start-up window of the HECT simulator software application.


Clinical trial designs implemented

Conventional trial design. For comparative purposes, every simulation run automatically includes a conventional 1:1 randomized clinical trial. The conventional trial will accumulate the maximum allowed number of patients in the simulation, which can be informed by a conventional clinical trial sample size calculation (see Conventional sample size calculation under Statistical methods implemented). Figure 2a displays an example of a standard multiple simulations output figure from the software, where key properties such as type I error, power, and cost are compared between a simulated HECT and a conventional trial.

Figure 2.

Displays example trial design performance outputs after 200 simulated trials: (a) comparing the type I error, power and overall cost between the highly efficient clinical trial (HECT) and the conventional clinical trial design; and (b) the distribution of sample size at trial termination with the highly efficient design (compared to 6,000 for all conventional designs).

Early stopping of trial for efficacy. The software includes an option to stop the trial early if a strong signal of superiority of one treatment over the other(s) is observed at an interim analysis. The signal is determined by a pre-set threshold for a Bayesian posterior probability of superiority. For example, if the threshold is set to 99%, a single simulated trial will be terminated at the first interim analysis where the probability of any treatment being superior to all other treatments exceeds 99%.
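A minimal sketch of this check at a single interim look, assuming beta posteriors for a binary outcome where a higher response rate is better; the threshold and interim data are illustrative, not software defaults.

```r
# Hypothetical efficacy stopping check across k arms at an interim look.
stop_for_efficacy <- function(events, n, threshold = 0.99, n_draws = 1e5) {
  k <- length(events)
  # n_draws x k matrix of posterior draws (Beta(1, 1) priors assumed)
  draws <- sapply(seq_len(k), function(j)
    rbeta(n_draws, 1 + events[j], 1 + n[j] - events[j]))
  # Pr(arm j is best) = share of draws in which arm j has the largest rate
  p_best <- tabulate(max.col(draws), nbins = k) / n_draws
  list(stop = any(p_best > threshold), p_best = p_best)
}

stop_for_efficacy(events = c(20, 38, 22), n = c(80, 80, 80))
```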

Early stopping or dropping of treatment arm for futility. The HECT software comes with two options for dropping a treatment arm for futility. For two-arm trials, early confirmation of futility may also lead to termination of the trial. First, a treatment arm may be dropped early for futility if the probability that it is superior to all other treatments falls below some pre-set threshold. For example, if the threshold is set to 1%, the treatment arm will be dropped at the first interim analysis where the probability of it being superior to all other treatments falls below 1%. Second, a treatment arm may be dropped with respect to some margin of observed treatment effect, typically an agreed-upon minimally clinically important effect. For example, if we consider a relative risk reduction of 20% to be the minimally important effect for some clinical outcome, we can drop a treatment arm if we are highly certain the treatment effect (compared to control) does not exceed 20%. Investigators can, for instance, insist on being 95% certain the effect is less than minimally important. In this case, the treatment arm will be dropped at the first interim analysis when there is a 95% probability or greater that the relative risk reduction is smaller than 20%.
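A sketch of the margin-based rule for a binary outcome, again assuming beta posteriors and treating events as harms so that a relative risk reduction is beneficial; all thresholds and counts are illustrative.

```r
# Hypothetical margin-based futility check: drop the arm if we are at
# least 95% certain its relative risk reduction vs control is below 20%.
drop_for_futility <- function(events_trt, n_trt, events_ctl, n_ctl,
                              margin = 0.20, certainty = 0.95,
                              n_draws = 1e5) {
  p_trt <- rbeta(n_draws, 1 + events_trt, 1 + n_trt - events_trt)
  p_ctl <- rbeta(n_draws, 1 + events_ctl, 1 + n_ctl - events_ctl)
  rrr <- 1 - p_trt / p_ctl            # relative risk reduction vs control
  mean(rrr < margin) >= certainty     # TRUE -> drop the treatment arm
}

drop_for_futility(events_trt = 28, n_trt = 100, events_ctl = 30, n_ctl = 100)
```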

Platform trial – adding new treatment arm. Similar to dropping a treatment arm, the HECT simulator also allows for the addition of a new treatment arm. This corresponds to using a platform design. The software allows the addition of a new arm to be triggered in two ways. First, if another treatment arm is dropped for futility (see above subsection), a new treatment arm will replace the dropped one. Second, if all arms are dropped except for one, a new arm can be added to compare against the winner of the previous stage. The latter occurs if a trial is stopped for superiority of one treatment; the platform trial design option then allows for a perpetual continuation of the trial as a two-arm comparison.
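Conceptually, the platform step amounts to swapping a dropped arm for the next candidate in a pipeline of treatments. A toy sketch of that bookkeeping (all names hypothetical, not the software's internals):

```r
# Hypothetical platform update: replace a dropped arm with the next
# candidate treatment, if one is waiting in the pipeline.
update_platform <- function(active_arms, dropped_arm, pipeline) {
  active_arms <- setdiff(active_arms, dropped_arm)
  if (length(pipeline) > 0) {
    active_arms <- c(active_arms, pipeline[1])  # enrol next candidate
    pipeline <- pipeline[-1]
  }
  list(active_arms = active_arms, pipeline = pipeline)
}

update_platform(c("control", "A", "B"), dropped_arm = "B",
                pipeline = c("C", "D"))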

Statistical methods implemented

Conventional sample size calculation. As a basis for comparison, the HECT simulator includes a sample size calculator for conventional randomized clinical trials (see appendix of User Manual). For multi-arm trials, the sample size is specified to detect the difference between the largest and second largest effect. The sample size calculator also allows for adjustment of multiplicity for multi-arm trials. The sample size calculator is found in the middle tab.
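For orientation, a binary outcome version of such a calculation can be sketched with base R's power.prop.test and a Bonferroni multiplicity adjustment; the effect sizes below are illustrative, and this is not necessarily the exact method used by the calculator.

```r
# Illustrative conventional sample size: distinguish the two largest
# expected response probabilities at a Bonferroni-adjusted alpha.
n_comparisons <- 2                    # e.g., two active arms vs control
alpha <- 0.05 / n_comparisons         # Bonferroni multiplicity adjustment
power.prop.test(p1 = 0.30, p2 = 0.24,
                sig.level = alpha, power = 0.80)  # solves for n per arm
```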

Adaptive design stopping rules. All stopping rules, whether for treatment arms or the whole trial, are based on the calculation of Bayesian posterior probabilities. Currently (Nov 2018), the software facilitates trial simulation for binary and continuous outcomes. As such, the Bayesian models comprise a conjugate Gaussian model for continuous outcomes and a beta-binomial model (i.e. a beta prior and a binomial likelihood) for binary outcomes. In both models a diffuse non-informative prior is specified.
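A minimal sketch of the two conjugate updates described above; the diffuse prior settings here are illustrative assumptions, and the software's exact priors are documented in the manual appendix.

```r
# Binary outcome: Beta(1, 1) prior + binomial likelihood -> beta posterior.
posterior_binary <- function(events, n) {
  c(shape1 = 1 + events, shape2 = 1 + n - events)
}

# Continuous outcome: Gaussian prior + Gaussian likelihood (known sigma)
# -> Gaussian posterior for the mean; a diffuse prior uses a large tau0.
posterior_mean <- function(y, sigma, mu0 = 0, tau0 = 1e3) {
  prec <- 1 / tau0^2 + length(y) / sigma^2          # posterior precision
  mu_n <- (mu0 / tau0^2 + sum(y) / sigma^2) / prec  # posterior mean
  c(mean = mu_n, sd = sqrt(1 / prec))
}
```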

Response adaptive allocation. Response adaptive allocation is an optional feature under all trial designs implemented in the HECT simulator. With response adaptive allocation, the allocation ratio between treatments is adapted based on which treatment appears most likely to be superior. In other words, on average more patients will be allocated to the treatment with the highest Bayesian probability of superiority 12. This method is particularly useful when there is a strong incentive to reduce the number of patients exposed to an inferior treatment. While there are many methods of adjusting the allocation ratio, the HECT simulator uses the ratio between square roots of the posterior probabilities of being superior for each treatment 13. When response adaptive allocation is selected, the allocation rate is set to equal for all arms (e.g., 1:1 for a 2-arm trial) until a pre-specified 'burn-in' sample size is reached and is subsequently altered for every accumulated patient.
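A sketch of the square-root rule with a burn-in, where `p_best` holds each arm's posterior probability of being superior; the burn-in size and probabilities are illustrative assumptions, not software defaults.

```r
# Hypothetical square-root allocation rule: after the burn-in, allocation
# probabilities are proportional to sqrt of each arm's probability of
# being superior; before the burn-in, allocation is equal across arms.
rar_weights <- function(p_best, n_enrolled, burn_in = 100,
                        k = length(p_best)) {
  if (n_enrolled < burn_in) return(rep(1 / k, k))
  w <- sqrt(p_best)
  w / sum(w)
}

rar_weights(p_best = c(0.10, 0.65, 0.25), n_enrolled = 250)
```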

Simulation analysis functions implemented

The HECT trial simulator allows the user to estimate trial design properties via simulation of a large number of trials, as well as to simulate individual clinical trials to gain a better understanding of within-trial variability.

Trial design properties (multiple trials simulation). To evaluate the overall trial design properties such as type I error, power, expected time to trial termination/average sample size, expected costs, etc., it is necessary to simulate multiple trials and average their performance metrics. The trial design properties function does just that. Figure 2 illustrates an example of the graphical outputs provided under the Trial Design Properties function. The user can decide how many trials to simulate or for how long the simulation may run (a maximum run time defined in seconds). The user can inspect the overall (or treatment vs control specific) power and type I error (reported numbers), the distribution of final sample sizes and cost across simulated trials (histogram display), as well as the comparison of power, type I error and expected costs between the HECT trial and a conventional design trial (bar plot display). Lastly, the user can save all or some of the recorded variables for the simulation that has just been run.
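To make the averaging concrete, here is a stripped-down sketch that estimates power by repeated simulation, reusing the hypothetical prob_superior() from the earlier sketch; set p_trt equal to p_ctl to estimate type I error instead. The HECT simulator additionally handles interim looks and adaptations.

```r
# Toy power estimate for a two-arm trial with a single final analysis;
# all parameters are illustrative assumptions.
simulate_power <- function(n_sims = 200, n_per_arm = 300,
                           p_ctl = 0.30, p_trt = 0.22, threshold = 0.99) {
  wins <- replicate(n_sims, {
    e_ctl <- rbinom(1, n_per_arm, p_ctl)
    e_trt <- rbinom(1, n_per_arm, p_trt)
    # treatment is superior when its event rate is below the control's
    prob_superior(e_ctl, n_per_arm, e_trt, n_per_arm) > threshold
  })
  mean(wins)  # proportion of simulated trials declaring superiority
}
```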

Single trial simulation. The single trial simulation function provides several graphical and numerical outputs to inspect within-trial trends and variability. Figure 3 illustrates some of the graphical outputs provided for single trial simulation. First, the single trial simulation function produces a diagram of the trial flow over time (accumulation of patients), which allows visual representation of when any treatment was dropped or added. Second, a visual representation of where data points fall over time is available (an extension of the trial flow diagram). Third, the single trial simulator provides visual inspection of the probabilities of superiority for each treatment over time, specifically at each planned interim look. This option is also available as a graphical representation of the posterior distributions for each treatment arm at selected interim looks. Finally, the treatment effect estimates at trial termination, both for the primary outcome (on which adaptations are based) and optionally some secondary outcome (only monitored), are available to inspect (a small plotting sketch follows Figure 3).

Figure 3.

Displays example single trial outputs for: (a) actualized (platform) trial design scheme; (b) the probabilities of superiority for each treatment by interim look; and (c) the probability densities for each treatment by interim look.
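For readers who want to reproduce a display in the spirit of Figure 3b outside the application, here is a small plotting sketch with made-up interim data (not output from the software):

```r
library(ggplot2)

# Illustrative posterior probabilities of superiority at five interim looks
looks <- data.frame(
  look  = rep(1:5, times = 3),
  arm   = rep(c("Control", "Treatment A", "Treatment B"), each = 5),
  p_sup = c(0.33, 0.25, 0.15, 0.10, 0.05,
            0.34, 0.45, 0.55, 0.65, 0.80,
            0.33, 0.30, 0.30, 0.25, 0.15)
)

ggplot(looks, aes(look, p_sup, colour = arm)) +
  geom_line() +
  geom_point() +
  labs(x = "Interim look", y = "Pr(arm is superior)")
```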

User interface

The user interface comprises three tabs: Trial Simulation, Sample Size Calculation, and User Manual. The latter is a brief account of all input options and general outputs. In addition to the user manual, individual explanations are available for several functions when the user hovers the mouse cursor over the function. Both the Trial Simulation and Sample Size Calculation tabs have their input field to the left and the output to the right. At start-up, default values are set to avoid empty values. Quality checks are automatically run, and error messages with accompanying instructions will be provided if the user attempts a computation based on input values that are either beyond the allowed range or of the wrong format.

Where radio buttons are used to define the input format (binary or continuous outcomes) or the trial design type (compare all arms or compare vs control), the input field will automatically change to match the selection.
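A hedged sketch of this pattern using Shiny's conditionalPanel inside a UI definition; the input IDs and labels are hypothetical, and the actual HECT UI code may be organized differently.

```r
# Hypothetical UI fragment: input fields swap with the selected outcome type
radioButtons("outcome_type", "Outcome", c("Binary", "Continuous"))
conditionalPanel(
  condition = "input.outcome_type == 'Binary'",
  numericInput("p_control", "Control response probability", 0.3, 0, 1)
)
conditionalPanel(
  condition = "input.outcome_type == 'Continuous'",
  numericInput("mean_control", "Control mean response", 0)
)
```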

The right side panels are for the outputs. For the Sample Size Calculations tab, the 'Calculate' button will initiate the conventional sample size calculation and, if all inputs are of the correct format, the resulting sample size will appear. For the Trial Simulation tab, the right side is split into two panels. The lower panel is for running single trial simulations (one at a time) and inspecting the within-trial behaviour. The upper panel is for summarizing the performance properties across multiple trial simulations. In both, the user is required to press the 'Run' button for a simulation (single or multiple) to run. A single trial simulation typically only requires a few seconds, whereas multiple trial simulations may require a few minutes. A progress bar will appear in the lower right corner so the user can follow the progress of the back-end computations. Whether single or multiple simulations are run, the available output graphs will update once the computations are complete.

Operation

The HECT simulator runs in any modern web browser and can be accessed via the following link: https://mtek.shinyapps.io/hect/. All computations are conducted remotely on an RShiny server. The graphical layout of the software is automatically determined by the size of the browser window and the screen resolution. For example, on a 13-inch laptop we recommend maximizing the browser window and using a high-resolution setting the first time the software is opened.

Validation and use cases

The functions of the HECT simulator software have gone through multiple rounds of validation. At the time of writing (January 14, 2019), the software (including earlier versions of the raw statistical methods code and user interface) has been used in conjunction with early-stage portfolio planning. To this end, the software has been used to examine the likely costs and probabilities of success for a large number of candidate designs under various scenarios for possible target countries. An earlier version of the software was beta-tested and independently used to further inform the design of a planned clinical trial. The software was developed alongside hard-coded simulations in R v3.5.1 11 for the trial designs and scenarios that were explored. The trial design simulation code incorporated in the HECT simulator has been validated against multiple trial designs, collectively covering over 1,000 scenarios. The raw code for simulations of the trial designs incorporated in the HECT simulator is available as part of the source code and can be found in the sim_funs.R file. Lastly, the software was beta-tested by internal and external colleagues.

Concluding remarks

We have developed and validated an intuitive highly efficient clinical trial simulator for planning of platform adaptive clinical trials. The software is open-source and can be accessed via any web browser. It therefore caters to clinical trial investigators who do not have the statistical capacity for trial simulations available in their team or who do not have the funds to invest in available commercial software.

Data availability

Underlying data

All data underlying the results are available as part of the article and no additional source data are required.

Software availability

The source code for the software is available via GitHub: https://github.com/MTEKSciencesInc/HECT

Archived source code at time of publication: http://doi.org/10.5281/zenodo.2552878 14

Licence: GNU General Public License v3.0

Funding Statement

This work was supported by the Bill and Melinda Gates Foundation [49294].

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 2; peer review: 2 approved, 2 approved with reservations]

References

  • 1. Bauer P, Bretz F, Dragalin V, et al.: Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med. 2016;35(3):325–347. doi: 10.1002/sim.6472
  • 2. Hatfield I, Allison A, Flight L, et al.: Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150. doi: 10.1186/s13063-016-1273-9
  • 3. Saville BR, Berry SM: Efficiencies of platform clinical trials: A vision of the future. Clin Trials. 2016;13(3):358–366. doi: 10.1177/1740774515626362
  • 4. Thorlund K, Haggstrom J, Park JJ, et al.: Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. 2018;360:k698. doi: 10.1136/bmj.k698
  • 5. Park JJ, Thorlund K, Mills EJ: Critical concepts in adaptive clinical trials. Clin Epidemiol. 2018;10:343–351. doi: 10.2147/CLEP.S156708
  • 6. FACTS: Fixed and Adaptive Clinical Trials Simulator. 2014.
  • 7. ADDPLAN Adaptive Design Software. 2018.
  • 8. ADCT: Adaptive Design in Clinical Trials, v0.1.0. 2016.
  • 9. BMGF: Leadership Update (Knowledge Integration (KI)). Bill & Melinda Gates Foundation, 2018.
  • 10. RStudio: Integrated Development for R. RStudio, Inc., 2017.
  • 11. R: A language and environment for statistical computing. 2010.
  • 12. Berry SM, Carlin BP, Lee JJ, et al.: Bayesian Adaptive Methods for Clinical Trials. 2011. doi: 10.1201/EBK1439825488
  • 13. Atkinson AC, Biswas A: Randomised Response-Adaptive Designs in Clinical Trials. 2013.
  • 14. sgolchi, MTEKSciences, grace: MTEKSciencesInc/HECT: First release (Version v1.0). Zenodo. 2019. doi: 10.5281/zenodo.2552878
Gates Open Res. 2019 Jun 14. doi: 10.21956/gatesopenres.14050.r27046

Reviewer response for version 2

Chris Cameron 1

Overall, HECT is an interesting and timely tool. The application of the tool on RShiny will broaden its applicability and use. Overall, the article is well written, and the sections in the publication are clearly labelled. The tool is intuitive and easy to use.

Major comments:

  • None. I don't have any major concerns with the indexing of this article.

Minor comments (optional to address):

  • The authors may want to consider a more interactive graphics interface such as Highcharter to improve the quality of some of the graphics.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Gates Open Res. 2019 Jun 3. doi: 10.21956/gatesopenres.14050.r27014

Reviewer response for version 2

Roger J Lewis 1,2,3

General comments:

  • The authors report the capabilities of an open-access, hosted software solution for the simulation of simple platform clinical trials, namely clinical trials that are intended to evaluate a sequence of treatments for a disease, or group of related diseases, when the treatments may not all be available or included initially. This is an area of intense interest and activity within the clinical trials community and, moreover, the lack of readily available software limits the ability of some to evaluate the potential utility of this approach in their research setting. One concern, both in general and specific to this effort, is the challenge of creating software with broad enough capabilities so that the appropriately configured platform trial can be evaluated for each research setting, without creating a system of such complexity that it is inaccessible to all but specialists in this area. Overly simplistic approaches can lead to the erroneous conclusion that a platform approach is not advantageous or, alternatively, has important weaknesses. For example, the performance of response-adaptive randomization (RAR) can be highly dependent on the particular algorithm chosen or on the choice of allocation to the control arm (e.g., it is often important to maintain a fixed or minimum allocation to the control throughout the trial) and thus, without these options, a particular choice of simulation software can lead to erroneous conclusions regarding the advantages and limitations of RAR.

  • My most significant concern regarding this effort is the flexibility and capability of the RAR function to capture current “best practices” for RAR, including other allocation rules and fixed or minimum allocation to the ongoing control arm.

Specific comments:

  • Abstract: The advantage of the adaptive approach is less that it leads to a faster result or approval and more that it ensures the trial is of the correct size to get the correct answer. Often this is smaller and faster but, sometimes, it is larger and slower. The goal is to avoid an indeterminate result that leaves the motivating research question unanswered. In my experience, many statisticians have misconceptions regarding the advantages and limitations of adaptive and platform trials. The implication that a lack of familiarity is limited to clinicians is inconsistent with my experience (this should be corrected both in the Abstract and throughout the manuscript). Please note the types of endpoints for which the software is applicable (e.g., continuous and dichotomous, but not time-to-event). What is meant by “validated?”

  • Introduction, 2nd paragraph: An adaptive platform trial is pre-planned and the adherence to the pre-specified statistical design is critically important to realize the designed operating characteristics. This section should be rewritten to emphasize that pre-specification and adherence to the design are important for all trials; however, in an adaptive platform trial the design includes the pre-specification of changes in trial features, e.g., randomization ratios.

  • Introduction, 3rd paragraph: As above, simulations are a “black box” to most academic statisticians as well, since simulation is not commonly taught in most PhD statistics programs.

  • Introduction, 4th paragraph: A concise definition of an adaptive platform design should be provided. Again, the phrase “researchers who are not statisticians” implies that statisticians are generally versed in these methods, which is not the case.

  • Implementation, 4th paragraph: It would be very helpful to include traditional group-sequential designs (e.g., a multi-arm O’Brien-Fleming with a Lan-DeMets alpha spending function) for comparison, since this is often the type of more traditional option that is being considered.

  • Response adaptive allocation, Page 6: Please see comments above regarding the flexibility of the available RAR options.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Gates Open Res. 2019 Apr 12. doi: 10.21956/gatesopenres.14050.r26985

Reviewer response for version 2

Howard Thom 1

General comments:

Thank you very much for the opportunity to review this paper. The authors present a commendable piece of open-source software that allows users to explore statistical properties (incl. type 1 error, power, average sample size, expected costs) of various adaptive trial designs and compare them with conventional randomized controlled trial (RCT) designs. This is well motivated by the expense of existing specialist software (e.g. FACTS or ADD-PLAN) and the high level of expertise required to perform calculations/simulations without specialist software. The software allows simulation of trials with many interim analyses (determined by cumulative sample size) at which adaptive randomization (changes in allocation ratio), platform adaptations (adding a treatment arm), and group sequential designs (early stopping for futility, efficacy) can be performed. The analyses are conducted in the Bayesian framework. I have experimented with the software and find it mostly easy to use and fast for the default number of simulations (100). There are a few elements of the software and paper that could be clarified and limitations that should be noted (lack of sample size re-estimation, no use of surrogate outcomes, no correlation between primary and secondary outcomes, too few simulations for Food and Drug Administration (FDA) purposes). I also have a doubt that the tool will be viewed as any less of a black box than existing software. However, I am confident that this is useful software and that it is described sufficiently by the paper.

Specific comments:

  1. It would be interesting to know how this tool relates to the recent update of the FDA guidelines ( https://www.gmp-compliance.org/gmp-news/revision-fda-guideline-on-adaptive-designs-for-clinical-trials). As they require validated software, perhaps comment on the likelihood that they would accept calculations from this tool?

  2. My comment on the FDA may be irrelevant if the primary user base for this tool are researchers in low-and-middle income countries (LMIC), which are the focus of the Gates Foundation. It is here that researchers are likely not to have access to expensive professional services or colleagues with statistical skill necessary to perform simulations/calculations themselves. If this is the primary user base for the tool, it would be good to be explicit in the introduction.

  3. Some important adaptive trial designs aren’t included in the tool, such as enriched recruitment designs, use of surrogate outcomes (could be implemented given the inclusion of secondary outcomes), sample size re-estimation. This limitation should be noted in the discussion and it should be stated whether they may be implemented in the future.

  4. A general comment from the authors on whether further development of the tool is planned would be interesting.

  5. On page 4 of 8, the authors describe a method by which treatment arms are dropped if effects fall below some minimally clinically important effect. I couldn’t find this implemented in the software. If it’s there, it likely needs better signposting.

  6. A disadvantage of the simulation is that the primary and secondary outcomes aren’t correlated. Have I missed it somewhere? If it isn’t implemented, please highlight as a limitation.

  7. The default number of simulations is 100 but the recent FDA guidelines ( https://www.gmp-compliance.org/gmp-news/revision-fda-guideline-on-adaptive-designs-for-clinical-trials) recommend 100,000, which appears to take an impractical amount of time in the software. There is even a possibility that the internet connection would be interrupted while waiting for completion. This limitation should be noted.

  8. The authors claim (particularly in the first sentence of the concluding remarks) that their software is “highly efficient”. Is there some way to justify this claim? Is the simulation design efficient? How does it compare to FACTS or ADD-PLAN?

  9. For the proportional/dichotomous outcomes, it would be good if the tool allowed users to input odds ratio estimates for non-control treatments, rather than having to provide absolute probabilities for all treatments. This could better match treatment effect estimates from earlier exploratory trials.

  10. For response adaptive allocation, only one re-allocation rule (ratio between square roots) is implemented. Are other rules used so rarely that their absence isn’t a limitation? If not, best to highlight the limitation.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Gates Open Res. 2019 Apr 10. doi: 10.21956/gatesopenres.14050.r26986

Reviewer response for version 2

Robert A Beckman 1, Valeriy Korostyshevskiy 2

Thorlund et al. have developed a simulator of adaptive and platform trials (collectively known as highly efficient clinical trials, HECT) which runs as a web application and has a very accessible, sensibly laid out graphical user interface. HECT are underutilized in part because of the need to perform complex simulations to assess their performance characteristics. These simulations often exceed the statistical resources and/or expertise of people or organizations that might otherwise use HECT to improve their cost efficiency of performing clinical trials.

The article meets all criteria for approval in our judgment.

Valeriy Korostyshevskiy: The article is very cleanly written; it is easy to follow the logic of the design and understand the principles of how the software operates. I have noticed some typos:

  • Page 4, 1st paragraph: "Figure 2a display[s]..."

  • Page 4, 3rd paragraph: "...if the probability that it is superior to all other treatments fall[s] below..."

  • Page 6, second column, 3rd paragraph: "...the input field will automatically change to match the select[ion]."

Thoughts on the tool itself:

  1. I played around with the tool and found it very user friendly, intuitive, and easy to use. 

  2. It would probably still require some learning period for someone using such a tool for the first time or not having the background, but I do not envision such a learning period to be long (see comment 1).

  3. The tool is self-contained - the manual and the explanations of how things work add value in this regard. This is a very important feature. 

  4. This software runs on R on a remote server, so the simulation time could be impacted by these two factors. (R is not the fastest language, although some of the routines may be hard-coded using standard C or Fortran routines; these details are not presented by the authors, nor do I think they should be presented in this article.) A researcher could eliminate the potential remote-server issue by downloading the package and using it locally; however, given current internet speeds, this is unlikely to result in a substantial decrease in computation time.

  5. The availability of the package for downloading and using locally is a big plus.

  6. The GUI is clean, right to the point, without unnecessary bells and whistles. One suggestion to the authors is to modify the HTML code so that the browser tab header reflects the tool's name. Currently it shows the name of the KI image used at the top of the page.

Robert A. Beckman: I think the article is excellent. 

  1. Type I and type II errors can be defined in various ways for a multi-arm study. Are they defined in this simulation by arm or total? What is the null hypothesis?

  2. It would be useful to see a comparison of power/cost ratio (efficiency) under the constraint of a comparable type I error. This would easily quantify the benefit of HECT. The example shows a higher type I error for HECT. Is this just a fluctuation that would go down for a sufficiently large number of trials? 

  3. Typo in the section on "response adaptive allocation", referring to it initially as "response adaptive adaptation."

We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

