PLoS One. 2024 May 23;19(5):e0304187. doi: 10.1371/journal.pone.0304187

Development of a conceptual framework for defining trial efficiency

Charis Xuan Xie 1,*, Anna De Simoni 1, Sandra Eldridge 1, Hilary Pinnock 2, Clare Relton 1
Editor: Germain Honvo
PMCID: PMC11115328  PMID: 38781167

Abstract

Background

Globally, there is a growing focus on efficient trials, yet numerous interpretations have emerged, suggesting significant heterogeneity in understanding “efficiency” within the trial context. Therefore, in this study we aimed to dissect the multifaceted nature of trial efficiency by establishing a comprehensive conceptual framework for its definition.

Objectives

To collate diverse perspectives regarding trial efficiency and to achieve consensus on a conceptual framework for defining trial efficiency.

Methods

From July 2022 to July 2023, we undertook a literature review to identify various terms that have been used to define trial efficiency. We then conducted a modified e-Delphi study, comprising an exploratory open round and a subsequent scoring round, to refine and validate the identified items. We recruited a wide range of experts in the global trial community, including trialists, funders, sponsors, journal editors and members of the public. Consensus was defined as items rated “without disagreement”, measured by the inter-percentile range adjusted for symmetry through the RAND/UCLA approach.

Results

Seventy-eight studies were identified from the literature review, from which we extracted nine terms related to trial efficiency. We then used the review findings as exemplars in the Delphi open round. Forty-nine international experts were recruited to the e-Delphi panel. Open round responses resulted in the refinement of the initial nine terms, which were consequently included in the scoring round. We obtained consensus on all nine items: 1) four constructs that collectively define trial efficiency, namely scientific efficiency, operational efficiency, statistical efficiency and economic efficiency; and 2) five essential building blocks of an efficient trial, namely trial design, trial process, infrastructure, superstructure, and stakeholders.

Conclusions

This is the first attempt to dissect the concept of trial efficiency into theoretical constructs. Having an agreed definition will allow better trial implementation and facilitate effective communication and decision-making across stakeholders. We also identified essential building blocks that are the cornerstones of an efficient trial. In this pursuit of understanding, we are not only unravelling the complexities of trial efficiency but also laying the groundwork for evaluating the efficiency of an individual trial or a trial system in the future.

Introduction

Worldwide, trial efficiency is a longstanding priority for the pharmaceutical industry [1], academia and funding bodies [2,3]. In 2004 in the US, the Clinical Trials Working Group of the National Cancer Advisory Board set the goal of improving operational efficiency to facilitate timely and cost-effective trial execution [4]. In the UK, the National Institute for Health and Care Research offers additional funding to support clinical trial units to advance the design and execution of efficient, innovative research, aiming to provide robust evidence to inform clinical practice and policy [5]. A recent article in The Lancet Global Health examined the challenges faced by current clinical trial research in low- and middle-income countries, and argued that efficient trials are needed to address research questions related to the increasing burden of non-communicable diseases in a timely and affordable way [6].

Currently, the concept of efficiency in healthcare trials has been used to refer to accelerated ethical approval [6], addressing multiple complex questions in a single trial [7] with a minimised sample size [6], trials conducted with shorter duration [7,8], lower costs [9], and reduced resource requirements [10]. In addition, existing literature has discussed trial efficiency in terms of operational efficiency [11–13], scientific efficiency [11], statistical efficiency [13,14], and economic efficiency [15]. There is significant heterogeneity as to what is meant by efficiency in the context of trials, which may hinder effective communication and decision-making between stakeholders and compromise the comparability of studies. Therefore, in this study we aimed to develop a conceptual framework for defining trial efficiency and to achieve expert consensus on the framework constructs.

Method

Study design

We undertook a literature review to identify items that define and comprise trial efficiency. We then conducted an e-Delphi study to refine and validate those items and to achieve consensus on the constructs and building blocks of trial efficiency. Ethics approval was obtained from the Queen Mary University of London Research Ethics Committee (QMERC22.316). This study follows the Guidance on Conducting and REporting DElphi Studies (CREDES) [16].

Literature review for generating items

Our goal in the literature review was to collate existing discussions on efficiency in the context of trials, including definitions and attributes described as constituting an efficient trial. As discussions specifically focused on this subject are scarce, we included a broad range of study types, such as full trial papers or protocols, editorials, and opinion pieces that discussed trial efficiency. We considered all types of human trials evaluating medical, surgical, or behavioural interventions, including efficacy trials, effectiveness trials, and implementation trials. The search was limited to English-language articles, and there was no restriction on publication dates. To carry out the review, we searched the MEDLINE (via Ovid) database for terms such as ’trial’ and ’efficien*’ in article titles and keywords. As ’efficiency’ is a common word in the literature, we searched for these two keywords only within article titles (rather than within abstracts) to ensure the results’ relevance to the discussion of trial efficiency. The detailed inclusion and exclusion criteria are listed in S1 Table.
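For illustration only (we searched MEDLINE via Ovid, and the detailed criteria are in S1 Table), a title-restricted search of this kind can also be reproduced programmatically. The sketch below uses PubMed through Biopython’s Entrez utilities rather than the Ovid interface, and the e-mail address and exact query string are assumed placeholders, not our actual search strategy.

```python
# Hypothetical sketch of a title-restricted search in the spirit of the strategy
# described above. Note: the review searched MEDLINE via Ovid; PubMed and Biopython
# are used here purely for illustration.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

# Restrict both keywords to titles so results stay relevant to trial efficiency.
query = "trial*[Title] AND efficien*[Title] AND english[Language]"

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
print(record["IdList"][:10])  # first ten PubMed IDs for title screening
```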

e-Delphi

Panel selection and recruitment

The aim was to recruit a diverse panel of experts from the trial community, encompassing a range of roles and perspectives. This included international researchers identified through the literature review, colleagues who are part of professional trial networks such as the UK Trial Managers’ Network, representatives from funding bodies, journal editors, and members of the public who have been involved in trials. Purposive sampling and snowball sampling methods were then used to identify additional participants. We approached participants with known contact details by individual emails generated through Clinvivo [17], while for colleagues within professional networks, where we did not have individual contact details, we sent a generic recruitment email to the network’s mailing list. Recruitment began in November 2022 and continued until March 2023. Written informed consent was obtained online through the Clinvivo Delphi system.

Data collection

We opted for two rounds of data collection because consensus was achieved by the end of the second round. These rounds were preceded by a pilot round to test the feasibility of the open round.

Pilot test. We pilot tested the feasibility of the open round questionnaire amongst colleagues with diverse experience in trial design and conduct at the Pragmatic Clinical Trial Unit of Queen Mary University of London. This provided valuable feedback on the clarity of the questions, the appropriateness of the response options, and the overall structure of the questionnaire. Based on the feedback received during the pilot testing, we made revisions and refinements to the questionnaire to enhance its usability.

Open round. In the open round, we invited panellists to share their thoughts on 1) their understanding of trial efficiency and 2) the most efficient or inefficient aspects they have encountered in the trials they have conducted or in which they have participated. These questions were designed as free-text to encourage detailed, narrative responses. To gain insights into the participants’ backgrounds, we collected information on countries of residence, and roles within the trials (see S1 File for the questionnaire). This open round allowed us to gather diverse viewpoints and experiences related to trial efficiency which contributed to the development of a comprehensive set of items for ranking in the subsequent round. The data collection for this round took place over four weeks, with reminder emails sent to participants after the second and third weeks.

Scoring round. Panel members from the open round were emailed a link to the second questionnaire. They were asked to rate the importance of the proposed items on a scale of 1 to 9 (1: not at all important; 9: critically important). At the end of each question, there was a free-text space for any comments they wished to share. The scoring round data collection spanned four weeks, with weekly reminders to participants.

Data analysis and consensus

Descriptive statistics were used to analyse quantitative demographics, and thematic analysis was used to summarise free-text responses from both Delphi rounds. To assess disagreement and appropriateness, we used the Research ANd Development (RAND)/University of California Los Angeles (UCLA) appropriateness method [18]. For each rated item, this involves calculating the median score, the inter-percentile range (IPR; 30th to 70th percentiles), and the inter-percentile range adjusted for symmetry (IPRAS). Consensus was defined as items rated “without disagreement”, as measured by the IPRAS.
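To make the consensus rule concrete, the sketch below shows how the median, IPR, and IPRAS can be computed for a single item. It is a minimal illustration based on our reading of the RAND/UCLA manual [18] (the constants 2.35 and 1.5 come from that manual); the ratings, function name, and use of NumPy’s default percentile interpolation are illustrative assumptions, not the study’s analysis code.

```python
# Minimal sketch of the RAND/UCLA disagreement assessment for one item's 1-9 ratings.
# Constants follow the RAND/UCLA appropriateness method manual [18]; the ratings
# below are hypothetical and do not reproduce the actual panel data.
import numpy as np

def rate_item(ratings, lower=30, upper=70):
    """Return median, IPR, IPRAS and a disagreement flag for a list of 1-9 ratings."""
    ratings = np.asarray(ratings, dtype=float)
    p_low, p_high = np.percentile(ratings, [lower, upper])
    ipr = p_high - p_low                        # inter-percentile range (30th-70th)
    ipr_central_point = (p_low + p_high) / 2.0  # midpoint of the IPR on the 1-9 scale
    asymmetry_index = abs(5.0 - ipr_central_point)
    ipras = 2.35 + 1.5 * asymmetry_index        # IPR adjusted for symmetry
    return {
        "median": float(np.median(ratings)),
        "IPR": ipr,
        "IPRAS": ipras,
        "disagreement": ipr > ipras,            # consensus = rated "without disagreement"
    }

# Hypothetical ratings from a 40-member panel for one item.
example_ratings = [9, 8, 9, 9, 7, 8, 9, 9, 8, 9] * 4
print(rate_item(example_ratings))
```

Disagreement is flagged only when the IPR exceeds the IPRAS, so every item marked “No” in Table 2 satisfies IPR ≤ IPRAS.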

Patient and public involvement

In this study, members of the public (n = 4), including two who had participated in trials, were invited to share their thoughts, to take part in the ranking process, and were provided with the outcomes of each round upon completion. They were considered experts due to their lived experience and were offered a £30 voucher as compensation for their time.

Results

Delphi participants

Of the 106 international experts approached individually and those reached via 4 e-mails sent to network mailing lists, 49 participants responded to the open round (United Kingdom (n = 37), United States (n = 7), Canada (n = 2), Australia (n = 1), Ireland (n = 1), and Kenya (n = 1)). The panel covered a diversity of roles, including statisticians (n = 17), trial managers (n = 12), principal investigators (n = 7), funders (n = 4), journal editors (n = 3), members of the public (n = 4), data managers (n = 3), site staff (n = 2), sponsors (n = 2), researchers (n = 2), monitors (n = 2), ethicist (n = 1), clinician (n = 1), CTU manager (n = 1), trial support officer (n = 1), and trial methodologist (n = 1). Many participants had more than one role. See Fig 1.

Fig 1. Delphi flowchart.


Literature review

We included a total of 78 studies for data analysis (see S1 Fig), including 6 (8%) reviews, 15 (19%) perspectives or commentaries, 1 (1%) interview, 2 (3%) case studies, 2 (3%) surveys, 3 (4%) randomised trials, and 49 (63%) methodological papers describing new trial designs. Only 8 (10%) studies explicitly defined or explained what ‘efficiency’ meant in the context of their trials (see S2 Table for details). We categorised discussions of efficiency from the literature into nine key items: 1) scientific efficiency [11,19,20], 2) operational efficiency [11,20,21], 3) statistical efficiency [14,22–24], 4) economic efficiency [15,25], 5) efficiency in trial designs [7,8,23,26–45], 6) trial conduct [11,20,21,46–66], and other aspects such as 7) improving efficiency using information technologies and mobile apps [53,67–70], 8) involving the public and stakeholders [20,71], and 9) efficient trial reviews and regulatory approvals [28,66,72–74] (see Table 1 for details). These results were included as exemplars in the Delphi open round questionnaire. The detailed description of the literature review has previously been made available [75] to ensure full transparency and to facilitate open scholarly dialogue.

Table 1. Key themes synthesised from literature review.

How efficiency had been discussed | Examples and references
Scientific efficiency | Scientific efficiency refers to the methodological rigour of the trial design. That is, a design that uses fewer resources and less infrastructure to maximise the outputs [11], addresses the right research questions, considers the implications of the design decision, and is relevant to the stakeholders [19,20].
Operational efficiency | Operational efficiency covers the full trial process, from concept development to protocol activation and from enrolment to closure [11]. Wu and colleagues [20] assessed operational efficiency in patient recruitment and trial duration; Hess and colleagues [21] increased operational efficiency through objective site selection and reduced site coordinator workload. In addition, the National Cancer Institute established the Operational Efficiency Working Group to identify barriers associated with trial operations, aiming to reduce trial activation time and complete activated studies in a timely manner [76].
Statistical efficiency | Statistical efficiency concerns the choice of estimators [24], experimental designs and hypothesis-testing procedures [22], type I error, power, and sample size [23], and the use of endpoint events, including the selection of an appropriately weighted test statistic [14].
Economic efficiency | Economic efficiency concerns the expenditure of research resources [15] and the cost of completing the trial [25].
Trial designs | Including adaptive designs [23,26–33], master protocol trial designs [34] such as basket trials [35,36] and platform trials [37,38], sequential trial designs [7,39], cluster designs [40–42], factorial trials [43,44] and registry-based trials [45].
Trial conduct | 1) patient identification and recruitment [20,46–53]; for example, an automated eligibility screening tool increased the efficiency of patient accrual.
2) data analysis [54–57]; for example, “an alternative analytical approach that can enhance the signal-to-noise ratio would open the path for more efficient and rigorous clinical trials of Parkinson’s Disease therapies”.
3) selection of endpoints or outcome measures [58–61]; for example, the use of ordinal outcomes and combining outcomes within a patient could improve trial efficiency.
4) data collection and management [21,62]; for example, collecting and processing routine health data from an existing registry would facilitate efficient trial conduct.
5) site selection and management [21,63–65]; for example, reduced site workload and improved site operation contributed to trial efficiency. The central argument in this group was to improve trial efficiency by enhancing its operational efficiency [11,66].
Other aspects | 1) using information technologies and mobile apps [53,67–70]
2) involving the public and stakeholders [20,71]
3) efficient trial reviews and regulatory approvals [28,66,72–74]

Open round

When asked to define trial efficiency, some participants referred to definitions from the literature review, while others cited similar definitions tailored to their trial context. When asked about the most efficient/inefficient facets of trial efficiency, the responses resonated closely with the findings from our literature review (Fig 2). Specifically, trial design emerged as the facet most frequently cited as enhancing efficiency, whereas data collection was often highlighted as the element that most impeded efficiency.

Fig 2. The efficient and inefficient aspects discussed in the open round.


The x-axis represents the frequency of responses.

By incorporating findings from this round, we further refined the nine items identified from the literature review and divided them into two groups: 1) theoretical and abstract constructs: scientific efficiency, operational efficiency, statistical efficiency, and economic efficiency; and 2) empirical and fundamental building blocks: trial design (including endpoint selection, statistical analysis plan, protocol development, etc.), trial process (including recruitment and retention, data collection and analysis, trial administration, etc.), superstructure (including regulatory approvals, funding applications, etc.), infrastructure (including financial and physical resources such as cost, information technologies, routine healthcare data, etc.), and stakeholders. This resulted in a total of nine items for rating in the scoring round (see Table 2).

Table 2. Scoring round items and results: appropriateness, disagreement, median item ratings, inter-percentile range, and inter-percentile range adjusted for symmetry.

Item | Disagreement | Median | P30 | P70 | IPR | IPRAS
1.1 Scientific efficiency: methodological rigour of the trial design | No | 9 | 8 | 9 | 1 | 7.6
1.2 Operational efficiency: optimal management, organization, and execution of trial processes and procedures | No | 9 | 8 | 9 | 1 | 7.6
1.3 Statistical efficiency: a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure | No | 8 | 7.5 | 9 | 1.5 | 7.225
1.4 Economic efficiency: optimal use of resources in the design, implementation, and analysis of clinical trials | No | 8 | 7 | 8.5 | 1.5 | 6.475
2.1 Trial design: planning and organisation of a trial | No | 9 | 9 | 9 | 0 | 8.35
2.2 Trial process: trial set up, conduct and closeout | No | 9 | 8 | 9 | 1 | 7.6
2.3 Stakeholders: individuals or groups who have an interest or concern in the design, execution, and outcomes of a trial | No | 8 | 7 | 9 | 2 | 6.85
2.4 Infrastructure: underlying framework, systems, and resources required to design, implement, manage, and analyse a trial | No | 8 | 8 | 9 | 1 | 7.6
2.5 Superstructure: overarching structure of a trial | No | 8 | 7 | 8 | 1 | 6.1

P30: 30th percentile.

P70: 70th percentile.

IPR: inter-percentile range.

IPRAS: inter-percentile range adjusted for symmetry.
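For readers unfamiliar with these quantities, the worked example below (our reconstruction from the RAND/UCLA manual [18], not a formula reported in the paper) shows how the table columns relate, using item 1.1 (P30 = 8, P70 = 9) as an illustration:

```latex
\mathrm{IPRAS} = 2.35 + 1.5\left|\,5 - \frac{P_{30} + P_{70}}{2}\,\right|
              = 2.35 + 1.5\left|\,5 - \frac{8 + 9}{2}\,\right|
              = 2.35 + 1.5 \times 3.5 = 7.6,
\qquad
\mathrm{IPR} = P_{70} - P_{30} = 1.
```

Disagreement would be recorded only if the IPR exceeded the IPRAS; since 1 < 7.6, item 1.1 is classed as rated “without disagreement”.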

Scoring round and consensus

Forty participants (82%) responded to the scoring round, and there was no disagreement on any item (Table 2). We also conducted sub-analyses by five role groups: (1) funders and sponsors (n = 6); (2) statisticians (n = 13); (3) trial managers (n = 10); (4) principal investigators (n = 6); and (5) patient and public involvement (PPI) members (n = 3). Group membership was not mutually exclusive. Stratified results showed widespread agreement that the items were appropriate, with the exception of one building block, superstructure, which the funders and sponsors group did not consider appropriate (S3 Table). As a result, no new items were added, but we slightly modified the explanation of each proposed item in line with free-text comments made by the participants.

Theoretical constructs of trial efficiency: Revised definitions incorporating Delphi comments

Scientific efficiency

Some participants were confused by the provided definition (Box 1, quote 1), while others suggested expanding the definition to include feasibility and implementation (Box 1, quotes 2–3). We therefore refined the definition as the balance of methodological rigour, relevance of the research question, and feasibility of trial design. It prioritises effective use of resources, including data, to minimise research waste, considers the alignment of design and statistical strategies, and underscores the importance of the study’s practical impact on stakeholders and of delivering value to end-users.

Box 1. Scoring round exemplar free-text comments related to the construct definitions
Scientific efficiency
  • Quote 1: “Not sure rigour equates to efficiency” (Participant n. 17, principal trial investigator)

  • Quote 2: “Feasibility of trial design needs to be included here. You could have the perfect trial design but no participants or high withdrawals and lack of site engagement.” (Participant n.2, trial manager)

  • Quote 3: “This may also need to include how important the findings will be to service users and the public and whether there are ways they are expected to be implemented in practice.” (Participant n.28, trial support officer)

Operational efficiency
  • Quote 4: “I’d make particular focus on the bureaucracy ‐ endless paperwork.” (Participant n.3, funder)

  • Quote 5: "Feasibility of operational efficiency. You may have participants and engaged sites but you need operational feasibility to align." (Participant n.2, trial manager)

  • Quote 6: “Would like to see reference to the ongoing assessment of a trial in the descriptor.” (Participant n.39, trial manager)

Statistical efficiency
  • Quote 7: “and accounting for missing data, and sources of bias or confounding” (Participant n.19, principal trial investigator)

  • Quote 8: “Also needs to encompass other aspects of analysis, e.g., health economics.” (Participant n.14, statistician)

Economic efficiency
  • Quote 9: “Allowing for the concept of data sharing beyond the life of the study” (Participant n.37, sponsor)

  • Quote 10: “Need to be clear that this is (I presume) related to the costs of delivering the trial and not the cost of the intervention (i.e. health economic analysis).” (Participant n.26, statistician)

Operational efficiency

Some comments suggested the definition should be expanded to consider operational feasibility, bureaucracy, and ongoing evaluation (Box 1, quotes 4–6). We therefore modified operational efficiency as the optimal management, organisation, execution, and continuous evaluation of trial processes and procedures. It emphasises operational feasibility (such as ensuring a sufficient workforce, managing delays, and working effectively with third-party providers), reducing unnecessary bureaucracy and duplication, and continuously assessing the trial for potential improvements.

Statistical efficiency

The initial definition (Table 1) was expanded, based on participants’ comments (Box 1, quotes 7–8), to the application of design and analytical methods that result in more accurate estimates of treatment effects or other parameters of interest. This includes minimising the amount of data to be collected, accounting for missing data, and managing sources of bias or confounding; its focus is specifically on maximising the accuracy and reliability of results given the data collected.

Economic efficiency

We increased the clarity of the initial definition according to scoring round feedback (Box 1, quotes 9–10): the optimal use of resources in trial design, implementation and analysis, to ensure the immediate and long-term cost-effectiveness of the trial. This focus on value ensures that resources are utilised to their fullest extent without compromising the quality of the research, and emphasises the cost-effectiveness of conducting the trial.

Essential building blocks comprising an efficient trial

Overall, there was a strong consensus on the building blocks; the free-text comments did not suggest significant alterations but recommended adding some details within each building block. Trial design concerns the planning and organisation of a trial, which may include the trial methodology, research questions, sample size, interventions, control group, endpoints and outcomes; document development, such as the funding application; as well as planning feasibility and pilot studies. The trial process involves the set-up, execution, and closeout phases of a trial (see S2 Fig for details). Stakeholders are the critical human factor: individuals or groups with an interest or concern in the design, execution, and outcomes of a trial. They could be trial participants (e.g. patients, practitioners, health system leaders, public health organisations), trialists (e.g. investigators, researchers, trial managers, statisticians), funders, sponsors, trial sites and their staff, regulatory authorities, healthcare and clinical practitioners, the scientific community (researchers, academics, and clinicians interested in the trial’s outcomes and its implications for future research) and the general public (the broader population who may ultimately benefit from the knowledge generated by the clinical trial). Infrastructure is the underlying framework, systems, and resources required to design, implement, manage, and analyse a trial, such as resources (human, financial, physical), information systems and technologies, and healthcare data. Superstructure serves as the overarching structure of a trial, including laws, policy, and governance.

With these, we developed a Trial Efficiency Pentagon (Fig 3) to place the five building blocks and to illustrate the multiple connections among them: improvements in one block may lead to trade-offs in one or more of the others.

Fig 3. Trial efficiency pentagon.


The final conceptual framework for defining trial efficiency

Fig 4 represents the finalised framework. The term trial efficiency is complex and multifaceted, encompassing four conceptual constructs with five essential building blocks.

Fig 4. The conceptual framework of trial efficiency.


The outer blue circle outlines theoretical constructs of trial efficiency: Scientific Efficiency, Statistical Efficiency, Operational Efficiency and Economic Efficiency. At its core, the inner pentagon outlines the empirical building blocks: Superstructure, Stakeholders, Infrastructure, Trial Process, and Trial Design. The cyclical arrows indicate the necessity for a balanced consideration of each building block within each construct to optimise trial efficiency.

Discussion

Main findings

Consensus was achieved on the four constructs that together define trial efficiency: scientific efficiency, operational efficiency, statistical efficiency and economic efficiency; and on the five essential building blocks of an efficient trial: trial design, trial process, infrastructure, superstructure, and stakeholders.

The conceptual constructs, empirical building blocks, and interrelationships

Overall, there was no disagreement over the constructs that conceptually define trial efficiency. However, some concerns were raised regarding potential overlaps between scientific efficiency and statistical efficiency, and between operational efficiency and economic efficiency (S4 Table). These four constructs share some common elements; however, they are conceptually distinct and each brings unique aspects to the concept of trial efficiency. Scientific efficiency, for instance, focuses primarily on the methodological rigour [77] and feasibility of trial design, while statistical efficiency is concerned with achieving the most accurate results possible with the smallest amount of data collected [78]. The overlap lies in the fact that both aim to optimise the quality and validity of the trial’s findings, yet their distinct focus underlines their separate roles within the overarching construct of trial efficiency. Similarly, while operational and economic efficiency both aim to make the best use of resources [11], they do so in different ways and in different contexts. Operational efficiency is about the effective management and organisation of trial processes and procedures [11,13], while economic efficiency involves optimising resource use in relation to the cost of delivering the trial. By maintaining these conceptually distinct constructs, we were able to capture the broad spectrum of abstract factors that define trial efficiency, offering a nuanced theoretical framework for its comprehension.

The proposed building blocks create a foundation for the formulation of an efficient trial. In the Delphi scoring round, there was strong consensus regarding the significance of these building blocks, with an average median score of 8.4 on a 1–9 scale. However, some participants perceived a hierarchy among the building blocks, suggesting that some (e.g., trial design and process) hold more importance than others. This was reflected in the literature review and in responses to the Delphi open round, where certain building blocks, such as trial design, were more frequently discussed as critical determinants of trial efficiency. Despite these observations, we propose that all five building blocks have equal importance and mutually contribute to the overall efficiency of the trial. These foundational elements are also interconnected; for instance, even the most rigorous and feasible trial design is contingent upon the availability of suitable infrastructure and requires input from stakeholders. Therefore, we advocate a balanced view in which no single building block takes precedence in the trial efficiency pentagon.

There is a layered connection between the constructs and the building blocks: the constructs were conceptualised to provide a broad, overarching view of efficiency within healthcare trials. In contrast, the building blocks were identified as the essential, practical components that operationalise efficiency in real-world settings. In addition to this relationship, we suggest that for a comprehensive understanding, each efficiency construct takes into account all five building blocks. For instance, while it may seem apparent that scientific efficiency is closely linked with trial design, focusing on how the study is conceptualised to ensure methodological soundness; it also intersects with stakeholder involvement, where patient and public engagement can improve the trial design and thus the trial outcomes’ relevance and applicability.

Implications

According to the results of the literature review, few studies explicitly defined efficiency in the context of trials, and no effort had been made to develop a unified, agreed definition of trial efficiency. Linguistically, ‘efficiency’ is defined as “the production of the desired effects or results with minimum waste of time, effort, or skill” [79]. This definition shares similarities with those from the literature (S2 Table), in which the salient characteristic is the balance between inputs (e.g. resources) and outputs (e.g. the objectives of the trial). Nevertheless, these interpretations are often narrowly tailored. In this study we hoped to offer a holistic view that captures the nuances and complex aspects of trial efficiency, which may benefit policymakers, funders, and researchers in making informed decisions, leading to improved trial implementation and patient care. Enhancing efficiency was emphasised in the UK Department of Health and Social Care’s 2022–2025 strategic plan for clinical research [80]. At the time of drafting this paper, the U.S. Food and Drug Administration was announcing updated good clinical practice recommendations that advocate greater efficiency in trials by modernising both design and conduct [81]. Our study is therefore timely, underscoring the urgency of comprehensively understanding trial efficiency.

Strengths and limitations

Drawing on both literature review and expert opinion, our study followed a rigorous approach to develop a conceptual framework of trial efficiency. We included a wide range of experts in trial communities including members of the public, enhancing the comprehensiveness and richness of our study. Nevertheless, nine participants did not respond to the scoring round, which could have introduced potential biases in reaching a consensus or perhaps missed subtle distinctions regarding the significance of certain trial elements. However, given the diverse range of participants who did engage, coupled with the triangulation with existing literature, this non-response is not expected to significantly impact the overall validity and comprehensiveness of our Delphi findings.

While we have sought to delineate each construct and building block distinctly, we acknowledge the potential for different interpretations of qualitative data. The interplay between the identified themes is likely to be more intricate, reflecting the complex nature of trial efficiency. Future research could delve deeper into this interplay to unravel the connections.

The ’trial efficiency pentagon’, which emerged as a novel concept from this study, is a promising tool for assessing trial efficiency (proactively and retrospectively). For example, it could be developed to support group discussions and/or calibrated as an evaluation instrument to measure the efficiency of a trial. However, it is limited by the lack of a robust theoretical foundation: while we have pieced together insights and perspectives to shape the pentagon, we have not rooted it in any established theory or conceptual model. This could mean that certain fundamental aspects of trial efficiency are overlooked or not holistically represented. In the future, we aspire to hone the pentagon into an evidence-based, theory-informed tool; we welcome insights from our readers and remain open to potential collaborations on its further development.

Conclusions

This is the first attempt to dissect the concept of trial efficiency into theoretical constructs. In this pursuit of understanding, we are not only unravelling the complexities of trial efficiency but also laying the groundwork for evaluating the efficiency of an individual trial or a trial system in the future.

Supporting information

S1 Fig. PRISMA flowchart.

(DOCX)

S2 Fig. Trial process in general.

(DOCX)

S1 Table. Literature review inclusion and exclusion criteria.

(DOCX)

S2 Table. Efficiency definitions/explanations in the literature.

(DOCX)

S3 Table. Scoring round stratified results.

(DOCX)

S4 Table. Scoring round exemplar quotes related to potential overlaps among the four constructs.

(DOCX)

S1 File. Open round questionnaire.

(DOCX)


Acknowledgments

We thank Prof. Shaun Treweek for his insightful discussion on trial efficiency, which has largely inspired this work. We thank Ann Thomson, Senior Trial Manager at Queen Mary University of London’s Pragmatic Clinical Trials Unit, for her valuable discussions and insights into the trial process. Our thanks also go to the Health Research Board ‐ Trials Methodology Research Network for their assistance in promoting our Delphi study through their email newsletter. We acknowledge the support of the UKCRC Registered CTU Network. The views expressed are those of the author(s) and not of the UKCRC or its members. We are immensely thankful to all participants of the Delphi study rounds for their invaluable contributions and willingness to share their expertise. We have received consent to acknowledge the following participants by name (in no particular order): Monica Taljaard, Lelia Duley, Sarah Markham, Deb Smith, Catey Bunce, Stephen Brealey, Steff Lewis, Laura Miller, Jacqueline French, Fiona Hogarth, Gail Holland, Nikki Totton, Nick Kisengese, Joanne Haviland, Matthew Burns, Richard Hooper, Claire Ayling, Catherine Arundel, Ines Rombach, Seonaidh Cotton, Paula Kareclas. Lastly, we appreciate the reviewers’ comments, which have been instrumental in enhancing the development of the conceptual framework.

Data Availability

All relevant data are within the manuscript and its supporting information files.

Funding Statement

CX is funded by the Wellcome Trust (224863/Z/21/Z). URL: https://wellcome.org/. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The funder does not play any role in the study design, data collection and analysis, decision to publish, and preparation of the manuscript.

References

  • 1.Schulz G. Increasing the Efficiency of Clinical Trials. Tanner Pharma Group; 2023 [Available from: https://tannerpharma.com/increasing-the-efficiency-of-clinical-trials/].
  • 2.Trial Forge. Trial Forge Centres [Available from: https://www.trialforge.org/trial-forge-centres/].
  • 3.Johns Hopkins Medicine. Improving the Efficiency of Clinical Trials [Available from: https://clinicalconnection.hopkinsmedicine.org/news/improving-the-efficiency-of-clinical-trials].
  • 4.Clinical Trials Working Group. Restructuring the National Cancer Clinical Trials Enterprise. National Cancer Institute; 2005.
  • 5.National Institute for Health and Care Research. Annual Efficient Studies funding calls for CTU projects. 2019 [Available from: https://www.nihr.ac.uk/documents/ad-hoc-funding-calls-for-ctu-projects/20141].
  • 6.Park JJH, Grais RF, Taljaard M, Nakimuli-Mpungu E, Jehan F, Nachega JB, et al. Urgently seeking efficiency and sustainability of clinical trials in global health. Lancet Glob Health. 2021;9(5):e681–e90. doi: 10.1016/S2214-109X(20)30539-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.van Eijk RPA, Nikolakopoulos S, Ferguson TA, Liu D, Eijkemans MJC, van den Berg LH. Increasing the efficiency of clinical trials in neurodegenerative disorders using group sequential trial designs. J Clin Epidemiol. 2018;98:80–8. doi: 10.1016/j.jclinepi.2018.02.013 [DOI] [PubMed] [Google Scholar]
  • 8.Sessler DI, Myles PS. Novel Clinical Trial Designs to Improve the Efficiency of Research. Anesthesiology. 2020;132(1):69–81. doi: 10.1097/ALN.0000000000002989 [DOI] [PubMed] [Google Scholar]
  • 9.Zannad F, Pfeffer MA, Bhatt DL, Bonds DE, Borer JS, Calvo-Rojas G, et al. Streamlining cardiovascular clinical trials to improve efficiency and generalisability. Heart. 2017;103(15):1156–62. doi: 10.1136/heartjnl-2017-311191 [DOI] [PubMed] [Google Scholar]
  • 10.Cornelius VR, McDermott L, Forster AS, Ashworth M, Wright AJ, Gulliford MC. Automated recruitment and randomisation for an efficient randomised controlled trial in primary care. Trials. 2018;19(1):341. doi: 10.1186/s13063-018-2723-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Kelly D, Spreafico A, Siu LL. Increasing operational and scientific efficiency in clinical trials. Br J Cancer. 2020;123(8):1207–8. doi: 10.1038/s41416-020-0990-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.England A, Wade K, Smith PB, Berezny K, Laughon M, Best Pharmaceuticals for Children Act ‐ Pediatric Trials Network Administrative Core C. Optimizing operational efficiencies in early phase trials: The Pediatric Trials Network experience. Contemp Clin Trials. 2016;47:376–82. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Park JJH, Sharif B, Harari O, Dron L, Heath A, Meade M, et al. Economic Evaluation of Cost and Time Required for a Platform Trial vs Conventional Trials. JAMA Netw Open. 2022;5(7):e2221140. doi: 10.1001/jamanetworkopen.2022.21140 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Prentice RL. Opportunities for enhancing efficiency and reducing cost in large scale disease prevention trials: a statistical perspective. Stat Med. 1990;9(1–2):161–70; discussion 70–2. doi: 10.1002/sim.4780090123 [DOI] [PubMed] [Google Scholar]
  • 15.Torgerson D, Campbell M. Unequal randomisation can improve the economic efficiency of clinical trials. J Health Serv Res Policy. 1997;2(2):81–5. doi: 10.1177/135581969700200205 [DOI] [PubMed] [Google Scholar]
  • 16.Junger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684–706. doi: 10.1177/0269216317690685 [DOI] [PubMed] [Google Scholar]
  • 17.CLINVIVO. Clinvivo Limited 2015.
  • 18.Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, van het Loo M, McDonnell J, Vader JP, Kahan JP. The RAND/UCLA appropriateness method user’s manual. Santa Monica, CA: RAND Corporation; 2000. [Google Scholar]
  • 19.Treweek S, Born A. Clinical trial design: increasing efficiency in evaluating new healthcare interventions. Journal of Comparative Effectiveness Research. 2014;3(3):233–6. doi: 10.2217/cer.14.13 [DOI] [PubMed] [Google Scholar]
  • 20.Wu K, Wu E, M DA, Chitale N, Lim M, Dabrowski M, et al. Machine Learning Prediction of Clinical Trial Operational Efficiency. AAPS Journal. 2022;24(3):57. doi: 10.1208/s12248-022-00703-3 [DOI] [PubMed] [Google Scholar]
  • 21.Hess CN, Rao SV, Kong DF, Aberle LH, Anstrom KJ, Gibson CM, et al. Embedding a randomized clinical trial into an ongoing registry infrastructure: unique opportunities for efficiency in design of the Study of Access site For Enhancement of Percutaneous Coronary Intervention for Women (SAFE-PCI for Women). American Heart Journal. 2013;166(3):421–8. doi: 10.1016/j.ahj.2013.06.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Eshima N. Efficiency of Statistical Hypothesis Test Procedures. Statistical Data Analysis and Entropy. Singapore: Springer Singapore; 2020. p. 141–65. [Google Scholar]
  • 23.Jiang Y, Zhao W, Durkalski-Mauldin V. Impact of adaptation algorithm, timing, and stopping boundaries on the performance of Bayesian response adaptive randomization in confirmative trials with a binary endpoint. Contemp Clin Trials. 2017;62:114–20. doi: 10.1016/j.cct.2017.08.019 [DOI] [PubMed] [Google Scholar]
  • 24.Zhang Z MS. Machine learning methods for leveraging baseline covariate information to improve the efficiency of clinical trials. Statistics in Medicine. 2019;38(10):1703–14. doi: 10.1002/sim.8054 [DOI] [PubMed] [Google Scholar]
  • 25.Saag KG, Mohr PE, Esmail L, Mudano AS, Wright N, Beukelman T, et al. Improving the efficiency and effectiveness of pragmatic clinical trials in older adults in the United States. Contemp Clin Trials. 2012;33(6):1211–6. doi: 10.1016/j.cct.2012.07.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Metcalfe A, Gemperle Mannion E, Parsons H, Brown J, Parsons N, Fox J, et al. Protocol for a randomised controlled trial of Subacromial spacer for Tears Affecting Rotator cuff Tendons: a Randomised, Efficient, Adaptive Clinical Trial in Surgery (START:REACTS). BMJ Open. 2020;10(5):e036829. doi: 10.1136/bmjopen-2020-036829 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Mukherjee A, Wason JMS, Grayling MJ. When is a two-stage single-arm trial efficient? An evaluation of the impact of outcome delay. European Journal of Cancer. 2022;166:270–8. doi: 10.1016/j.ejca.2022.02.010 [DOI] [PubMed] [Google Scholar]
  • 28.Berry SM, Petzold EA, Dull P, Thielman NM, Cunningham CK, Corey GR, et al. A response adaptive randomization platform trial for efficient evaluation of Ebola virus treatments: A model for pandemic response. Clinical Trials. 2016;13(1):22–30. doi: 10.1177/1740774515621721 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Golub HL. The need for more efficient trial designs. Statistics in Medicine. 2006;25(19):3231–5; discussion 313–4, 326–47. doi: 10.1002/sim.2629 [DOI] [PubMed] [Google Scholar]
  • 30.Shen J, Preskorn S, Dragalin V, Slomkowski M, Padmanabhan SK, Fardipour P, et al. How Adaptive Trial Designs can Increase Efficiency in Psychiatric Drug Development: A Case Study. Innovations in Clinical Neuroscience. 2011;8(7):26–34. [PMC free article] [PubMed] [Google Scholar]
  • 31.Levin GP, Emerson SC, Emerson SS. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation. Statistics in Medicine. 2013;32(8):1259–75; discussion 80–2. doi: 10.1002/sim.5662 [DOI] [PubMed] [Google Scholar]
  • 32.Lu M, Ownby DR, Zoratti E, Roblin D, Johnson D, Johnson CC, et al. Improving efficiency and reducing costs: Design of an adaptive, seamless, and enriched pragmatic efficacy trial of an online asthma management program. Contemporary Clinical Trials. 2014;38(1):19–27. doi: 10.1016/j.cct.2014.02.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Sverdlov O, Ryeznik Y, Wong WK. Opportunity for efficiency in clinical development: An overview of adaptive clinical trial designs and innovative machine learning tools, with examples from the cardiovascular field. Contemporary Clinical Trials. 2021;105:106397. doi: 10.1016/j.cct.2021.106397 [DOI] [PubMed] [Google Scholar]
  • 34.Bitterman DS, Cagney DN, Singer LL, Nguyen PL, Catalano PJ, Mak RH. Master Protocol Trial Design for Efficient and Rational Evaluation of Novel Therapeutic Oncology Devices. Journal of the National Cancer Institute. 2020;112(3):229–37. doi: 10.1093/jnci/djz167 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Cunanan KM, Iasonos A, Shen R, Begg CB, Gonen M. An efficient basket trial design. Statistics in Medicine. 2017;36(10):1568–79. doi: 10.1002/sim.7227 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.He L, Ren Y, Chen H, Guinn D, Parashar D, Chen C, et al. Efficiency of a randomized confirmatory basket trial design constrained to control the family wise error rate by indication. Stat Methods Med Res. 2022;31(7):1207–23. doi: 10.1177/09622802221091901 [DOI] [PubMed] [Google Scholar]
  • 37.Normington J, Zhu J, Mattiello F, Sarkar S, Carlin B. An efficient Bayesian platform trial design for borrowing adaptively from historical control data in lymphoma. Contemporary Clinical Trials. 2020;89:105890. doi: 10.1016/j.cct.2019.105890 [DOI] [PubMed] [Google Scholar]
  • 38.Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA. 2015;313(16):1619–20. doi: 10.1001/jama.2015.2316 [DOI] [PubMed] [Google Scholar]
  • 39.Boessen R, Knol MJ, Groenwold RH, Grobbee DE, Roes KC. Increasing trial efficiency by early reallocation of placebo nonresponders in sequential parallel comparison designs: application to antidepressant trials. Clinical Trials. 2012;9(5):578–87. doi: 10.1177/1740774512456454 [DOI] [PubMed] [Google Scholar]
  • 40.Connolly SJ, Philippon F, Longtin Y, Casanova A, Birnie DH, Exner DV, et al. Randomized cluster crossover trials for reliable, efficient, comparative effectiveness testing: design of the Prevention of Arrhythmia Device Infection Trial (PADIT). Canadian Journal of Cardiology. 2013;29(6):652–8. doi: 10.1016/j.cjca.2013.01.020 [DOI] [PubMed] [Google Scholar]
  • 41.Girling AJ. Relative efficiency of unequal cluster sizes in stepped wedge and other trial designs under longitudinal or cross-sectional sampling. Statistics in Medicine. 2018;37(30):4652–64. doi: 10.1002/sim.7943 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Matthews JNS. Highly efficient stepped wedge designs for clusters of unequal size. Biometrics. 2020;76(4):1167–76. doi: 10.1111/biom.13218 [DOI] [PubMed] [Google Scholar]
  • 43.Mdege ND, Brabyn S, Hewitt C, Richardson R, Torgerson DJ. The 2 x 2 cluster randomized controlled factorial trial design is mainly used for efficiency and to explore intervention interactions: a systematic review. Journal of Clinical Epidemiology. 2014;67(10):1083–92. [DOI] [PubMed] [Google Scholar]
  • 44.Piantadosi S. Highly efficient clinical trial designs for reliable screening of under-performing treatments: Application to the COVID-19 Pandemic. Clinical Trials. 2020;17(5):483–90. doi: 10.1177/1740774520940227 [DOI] [PubMed] [Google Scholar]
  • 45.Yndigegn T, Hofmann R, Jernberg T, Gale CP. Registry-based randomised clinical trial: efficient evaluation of generic pharmacotherapies in the contemporary era. Heart. 2018;104(19):1562–7. doi: 10.1136/heartjnl-2017-312322 [DOI] [PubMed] [Google Scholar]
  • 46.Ni Y, Kennebeck S, Dexheimer JW, McAneney CM, Tang H, Lingren T, et al. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department. J Am Med Inform Assoc. 2015;22(1):166–78. doi: 10.1136/amiajnl-2014-002887 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Cai T, Cai F, Dahal KP, Cremone G, Lam E, Golnik C, et al. Improving the Efficiency of Clinical Trial Recruitment Using an Ensemble Machine Learning to Assist With Eligibility Screening. ACR Open Rheumatology. 2021;3(9):593–600. doi: 10.1002/acr2.11289 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Ni Y, Wright J, Perentesis J, Lingren T, Deleger L, Kaiser M, et al. Increasing the efficiency of trial-patient matching: automated clinical trial eligibility pre-screening for pediatric oncology patients. BMC Medical Informatics & Decision Making. 2015;15:28. doi: 10.1186/s12911-015-0149-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Sampson R, Shapiro S, He W, Denmark S, Kirchoff K, Hutson K, et al. An integrated approach to improve clinical trial efficiency: Linking a clinical trial management system into the Research Integrated Network of Systems. Journal Of Clinical And Translational Science. 2022;6(1):e63. doi: 10.1017/cts.2022.382 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Schmickl CN, Li M, Li G, Wetzstein MM, Herasevich V, Gajic O, et al. The accuracy and efficiency of electronic screening for recruitment into a clinical trial on COPD. Respiratory Medicine. 2011;105(10):1501–6. doi: 10.1016/j.rmed.2011.04.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Smith KS, Eubanks D, Petrik A, Stevens VJ. Using web-based screening to enhance efficiency of HMO clinical trial recruitment in women aged forty and older. Clinical Trials. 2007;4(1):102–5. doi: 10.1177/1740774506075863 [DOI] [PubMed] [Google Scholar]
  • 52.Stewart RR, Dimmock AEF, Green MJ, Van Scoy LJ, Schubart JR, Yang C, et al. An Analysis of Recruitment Efficiency for an End-of-Life Advance Care Planning Randomized Controlled Trial. American Journal of Hospice & Palliative Medicine. 2019;36(1):50–4. doi: 10.1177/1049909118785158 [DOI] [PubMed] [Google Scholar]
  • 53.Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. Journal of the American Medical Informatics Association. 2009;16(6):869–73. doi: 10.1197/jamia.M3119 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Schuler A, Walsh D, Hall D, Walsh J, Fisher C, Critical Path for Alzheimer’s D, et al. Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score. The International Journal of Biostatistics. 2022;18(2):329–56. doi: 10.1515/ijb-2021-0072 [DOI] [PubMed] [Google Scholar]
  • 55.Gomeni R, Merlo-Pich E. Trial Simulation to estimate Type I error when a population window enrichment strategy is used to improve efficiency of clinical trials in depression. European Neuropsychopharmacology. 2012;22(1):44–52. doi: 10.1016/j.euroneuro.2011.05.002 [DOI] [PubMed] [Google Scholar]
  • 56.Sheng Y, Zhou X, Yang S, Ma P, Chen C. Modelling item scores of Unified Parkinson’s Disease Rating Scale Part III for greater trial efficiency. British Journal of Clinical Pharmacology. 2021;87(9):3608–18. doi: 10.1111/bcp.14777 [DOI] [PubMed] [Google Scholar]
  • 57.Zhang L, Zhang X, Shen L, Zhu D, Ma S, Cong L. Efficiency of Electronic Health Record Assessment of Patient-Reported Outcomes After Cancer Immunotherapy: A Randomized Clinical Trial. JAMA Network Open. 2022;5(3):e224427. doi: 10.1001/jamanetworkopen.2022.4427 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Anker SD, Schroeder S, Atar D, Bax JJ, Ceconi C, Cowie MR, et al. Traditional and new composite endpoints in heart failure clinical trials: facilitating comprehensive efficacy assessments and improving trial efficiency. European Journal of Heart Failure. 2016;18(5):482–9. [DOI] [PubMed] [Google Scholar]
  • 59.Boessen R, Heerspink HJ, De Zeeuw D, Grobbee DE, Groenwold RH, Roes KC. Improving clinical trial efficiency by biomarker-guided patient selection. Trials [Electronic Resource]. 2014;15:103. doi: 10.1186/1745-6215-15-103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Evans SR, Knutsson M, Amarenco P, Albers GW, Bath PM, Denison H, et al. Methodologies for pragmatic and efficient assessment of benefits and harms: Application to the SOCRATES trial. Clinical Trials. 2020;17(6):617–26. doi: 10.1177/1740774520941441 [DOI] [PubMed] [Google Scholar]
  • 61.Forsyth R, Thuy V, Salorio C, Christensen J, Holford N. Review: efficient rehabilitation trial designs using disease progress modeling: a pediatric traumatic brain injury example. Neurorehabilitation & Neural Repair. 2010;24(3):225–34. doi: 10.1177/1545968309354534 [DOI] [PubMed] [Google Scholar]
  • 62.Ellenberg SS. Discussion of papers on cost and efficiency of data collection in clinical trials. Stat Med. 1990;9(1–2):145–8; discussion 8–51. doi: 10.1002/sim.4780090121 [DOI] [PubMed] [Google Scholar]
  • 63.Bechtel J, Chuck T, Forrest A, Hildebrand C, Panhuis J, Pattee SR, et al. Improving the quality conduct and efficiency of clinical trials with training: Recommendations for preparedness and qualification of investigators and delegates. Contemp Clin Trials. 2020;89:105918. doi: 10.1016/j.cct.2019.105918 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Marks R, Bristol H, Conlon M, Pepine CJ. Enhancing clinical trials on the internet: lessons from INVEST. Clin Cardiol. 2001;24(11 Suppl):V17-23. doi: 10.1002/clc.4960241707 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Gomeni R, Merlo-Pich E. Trial Simulation to estimate Type I error when a population window enrichment strategy is used to improve efficiency of clinical trials in depression. Eur Neuropsychopharmacol. 2012;22(1):44–52. doi: 10.1016/j.euroneuro.2011.05.002 [DOI] [PubMed] [Google Scholar]
  • 66.Duley L, Gillman A, Duggan M, Belson S, Knox J, McDonald A, et al. What are the main inefficiencies in trial conduct: a survey of UKCRC registered clinical trials units in the UK. Trials [Electronic Resource]. 2018;19(1):15. doi: 10.1186/s13063-017-2378-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.French JA. Improving clinical trial efficiency: Is technology the answer? Epilepsia Open. 2017;2(2):121–2. doi: 10.1002/epi4.12042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Howat AP, Holloway PJ. The effect of diagnostic criteria on the efficiency of experimental clinical trials. J Dent Res. 1977;56 Spec No:C116-22. doi: 10.1177/002203457705600303011 [DOI] [PubMed] [Google Scholar]
  • 69.Lauer MS, Gordon D, Wei G, Pearson G. Efficient design of clinical trials and epidemiological research: is it possible? Nat Rev Cardiol. 2017;14(8):493–501. doi: 10.1038/nrcardio.2017.60 [DOI] [PubMed] [Google Scholar]
  • 70.Lokker C, Jezrawi R, Gabizon I, Varughese J, Brown M, Trottier D, et al. Feasibility of a Web-Based Platform (Trial My App) to Efficiently Conduct Randomized Controlled Trials of mHealth Apps For Patients With Cardiovascular Risk Factors: Protocol For Evaluating an mHealth App for Hypertension. JMIR Res Protoc. 2021;10(2):e26155. doi: 10.2196/26155 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Meier P. Polio trial: an early efficient clinical trial. Statistics in Medicine. 1990;9(1–2):13–6. doi: 10.1002/sim.4780090107 [DOI] [PubMed] [Google Scholar]
  • 72.Gale C, Hyde MJ, Modi N, group Wtd. Research ethics committee decision-making in relation to an efficient neonatal trial. Archives of Disease in Childhood Fetal & Neonatal Edition. 2017;102(4):F291–F8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Nicholas J. NCI’s clinical trial system: efficiencies grow, debate goes on. Journal of the National Cancer Institute. 2010;102(23):1750–1, 5. doi: 10.1093/jnci/djq478 [DOI] [PubMed] [Google Scholar]
  • 74.Hanna CR, Lynskey DM, Wadsley J, Appleyard SE, Anwar S, Miles E, et al. Radiotherapy Trial Set-up in the UK: Identifying Inefficiencies and Potential Solutions. Clinical Oncology (Royal College of Radiologists). 2020;32(4):266–75. doi: 10.1016/j.clon.2019.10.004 [DOI] [PubMed] [Google Scholar]
  • 75.Xie C. How have researchers defined and used the concept of ‘efficiency’ in the context of trials? A review of existing literature and a proposed conceptual framework: OSF Preprints; 2023. [Google Scholar]
  • 76.Operational Efficiency Working Group. Report of the Operational Efficiency Working Group of the Clinical Trials and Translational Research Advisory Committee. National Cancer Institute; 2010. [Google Scholar]
  • 77.Hofseth LJ. Getting rigorous with scientific rigor. Carcinogenesis. 2018;39(1):21–5. doi: 10.1093/carcin/bgx085 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Wikipedia. Efficiency (statistics) [Available from: https://en.wikipedia.org/wiki/Efficiency_(statistics)].
  • 79.Medical Dictionary for the Health Professions and Nursing. Efficiency. 2012.
  • 80.Department of Health and Social Care. The Future of Clinical Research Delivery: 2022 to 2025 implementation plan. 2022 [Available from: https://www.gov.uk/government/publications/the-future-of-uk-clinical-research-delivery-2022-to-2025-implementation-plan/the-future-of-clinical-research-delivery-2022-to-2025-implementation-plan].
  • 81.U.S. Food and Drug Administration. ICH Harmonised Guideline: Good Clinical Practice (GCP) E6(R3). 2023.

Decision Letter 0

Germain Honvo

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

14 Feb 2024

PONE-D-23-40114

Development of a conceptual framework for defining trial efficiency

PLOS ONE

Dear Dr. Xie,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 30 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Germain Honvo, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a fascinating and potentially very valuable study. By deploying an e-Delphi study, the authors achieved consensus on four theoretical constructs for defining trial efficiency, and on five empirical building blocks essential for considering trial efficiency. This research will contribute to better evaluation of trial efficiency.

Reviewer #2: Dear Authors,

Well done on the piece of work! The authors have provided a nice framework to define efficiency in the context of trial design and conduct. However, there are a few comments/suggestions from my end to make the paper accessible to a wider audience.

1. The most interesting question after reading the paper has been: as the authors identified four constructs and five building blocks of the trial efficiency concept, what is the interplay between them? That is, how does each block fall under the defined constructs? If a researcher is looking for a particular kind of efficiency, which building blocks does he/she need to consider? I would also be interested in the potential overlap between these constructs (if there is any, which I think there might be). Any diagrammatic summary would be nice.

2. Please include the e-Delphi questionnaire that has been used. It is unclear to me whether Figure 1 summarises all the questions asked in the survey. As mentioned there, “Question 1. How do you define efficiency within the context of trials to improve healthcare?” How was this recorded? It will give readers a clearer picture if the questionnaire is attached.

3. I think there might be some overlap between the themes identified from the literature review. For example, efficient trial designs (as mentioned in the description, e.g. adaptive designs, master protocols, etc.) are mostly more sophisticated statistical trial designs rather than a simple RCT (where the paradigm looks like design -> conduct -> analysis). In that case, how is this theme different from statistical efficiency or operational/economic efficiency (which these designs typically aim to optimise)? Why can it not fall under them?

4. Please include some description of the figures in the text/captions of the figures. I am still unsure of what Figure 2 describes. It is not clear whether it reports some sort of frequency, ranking, or percentage.

5. A quick clarification: what do the authors mean by 'members of the public'? Are they trial participants?

A small typo in the appendix: all the inter-percentile ranges in the IPRAS calculations are written as intercentile ranges.

Hope this helps! Good luck!

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr Sarah Markham

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 May 23;19(5):e0304187. doi: 10.1371/journal.pone.0304187.r002

Author response to Decision Letter 0


22 Mar 2024

Review Comments to the Author

Reviewer #1: This is a fascinating and potentially very valuable study. By deploying an e-Delphi study, the authors achieved consensus on four theoretical constructs for defining trial efficiency, and on five empirical building blocks essential for considering trial efficiency. This research will contribute to better evaluation of trial efficiency.

We appreciate your recognition of our study's value and its potential impact on the field.

Reviewer #2: Dear Authors,

Well done on the piece of work! The authors have provided a nice framework to define efficiency in the context of trial design and conduct. However, there are a few comments/suggestions from my end to make the paper accessible to a wider audience.

1. The most interesting question after reading the paper has been: as the authors identified four constructs and five building blocks of the trial efficiency concept, what is the interplay between them? That is, how does each block fall under the defined constructs? If a researcher is looking for a particular kind of efficiency, which building blocks does he/she need to consider? I would also be interested in the potential overlap between these constructs (if there is any, which I think there might be). Any diagrammatic summary would be nice.

Thank you for highlighting this crucial aspect of our study.

The potential overlaps between the constructs have been discussed in the original manuscript in the discussion section, on pages 21 and 22.

Regarding the relationships between the constructs and the building blocks, we initially distinguished between them to illustrate a layered understanding of trial efficiency. The four constructs were conceptualised to provide a broad, overarching view of efficiency within healthcare trials. In contrast, the five building blocks were identified as the essential, practical components that operationalise efficiency in real-world settings.

However, recognising the relationships among these identified themes is indeed essential in the theory-building process. As such, we have added the following descriptions in the revised discussion, on page 23.

“There is a layered connection between the constructs and the building blocks: the four constructs were conceptualised to provide a broad, overarching view of efficiency within healthcare trials. In contrast, the five building blocks were identified as the essential, practical components that operationalise efficiency in real-world settings. In addition to this relationship, we suggest that for a comprehensive understanding, each efficiency construct takes into account all five building blocks. For instance, while it may seem apparent that scientific efficiency is closely linked with trial design, focusing on how the study is conceptualised to ensure methodological soundness, it also intersects with stakeholder involvement, where patient and public engagement can improve the trial design and thus the relevance and applicability of the trial outcomes. Figure 5 illustrates the potential interplay between the theoretical constructs and the empirical building blocks.”

In light of your suggestion, we have also included an updated conceptual framework of trial efficiency (Figure 4 on page 21) to visually outline the theoretical constructs, the building blocks and their relationship. This diagrammatic representation is part of our ongoing effort to refine our conceptual framework and enhance its applicability and understanding:

Figure 4. The conceptual framework of trial efficiency.

Footnote: The outer blue circle outlines four theoretical constructs of trial efficiency, which are Scientific Efficiency, Statistical Efficiency, Operational Efficiency and Economic Efficiency. At its core, the inner pentagon outlines five empirical building blocks: Superstructure, Stakeholders, Infrastructure, Trial Process, and Trial Design. The cyclical arrows indicate the necessity for a balanced consideration of each building block within each construct to optimise trial efficiency.

2. Please include the e-Delphi questionnaire that has been used. It is unclear to me whether Figure 1 summarises all the questions asked in the survey. As mentioned there, “Question 1. How do you define efficiency within the context of trials to improve healthcare?” How was this recorded? It will give readers a clearer picture if the questionnaire is attached.

The e-Delphi questionnaire has been added to the supplementary file 1. Question one was recorded in free text. The following changes have been made to the manuscript on page 8, method section:

“In the open round, we invited panellists to share their thoughts on 1) their understanding of trial efficiency and 2) the most efficient or inefficient aspects they have encountered in the trials they have conducted or in which they have participated. These questions were designed as free text to encourage detailed, narrative responses (see S1 File for the questionnaire).”

3. I think there might be some overlap between the themes identified from the literature review. For example, efficient trial designs (as mentioned in the description, e.g. adaptive designs, master protocols, etc.) are mostly more sophisticated statistical trial designs rather than a simple RCT (where the paradigm looks like design -> conduct -> analysis). In that case, how is this theme different from statistical efficiency or operational/economic efficiency (which these designs typically aim to optimise)? Why can it not fall under them?

Thank you for raising the point about the overlaps among the four constructs. We provide the following rationale for categorising trial design into scientific efficiency:

The theme synthesis was partially based on the previous discussions about scientific efficiency and operational efficiency [1], in which the authors argued that innovative clinical trial designs (such as adaptive studies) are integral to scientific efficiency.

[1] Kelly D, Spreafico A, Siu LL. Increasing operational and scientific efficiency in clinical trials. Br J Cancer. 2020;123(8):1207-8.

While these designs inherently enhance statistical efficiency through sophisticated design features, they are considered distinct due to their broader impact on the scientific process. They extend beyond statistical considerations to influence the trial's methodological soundness, ethical conduct, and the potential for quicker patient benefit.

Additionally, operational efficiency, as addressed in our review, involves the effective management and organisation of trial processes and procedures. Sophisticated trial designs aim to improve these aspects but do not encapsulate the entirety of operational considerations.

Economic efficiency, distinct yet linked, focuses on the cost implications of conducting a trial. Innovative trial designs can influence costs but are one of many factors that contribute to the overall economic efficiency of a trial.

By delineating these designs as a separate entity, we aim to emphasise their unique contribution to trial efficiency. They are a part of the broader constructs but deserve individual recognition for their specific impacts. As such, we clarified this interplay in the discussion section of the original manuscript, on page 22:

“Overall there was no disagreement over the four constructs ……thus offering a nuanced theoretical framework for its comprehension.”

Nonetheless, we recognise that interpretations of qualitative data can vary, and to account for this, we have included the following statement in the limitations section:

“While we have sought to delineate each construct and building block distinctly, we acknowledge the potential for different interpretations of qualitative data. The interplay between the identified themes is likely to be more intricate, reflecting the complex nature of trial efficiency. Future research could delve deeper into this interplay to unravel the connections.”

4. Please include some description of the figures in the text/captions of the figures. I am still unsure of what Figure 2 describes. It is not clear whether it reports some sort of frequency, ranking, or percentage.

Thank you, we have added the following note to Figure 2:

“Figure footnote: The x-axis represents the frequency of responses.”

5. A quick clarification: what do the authors mean by 'members of the public'? Are they trial participants?

Thank you for seeking further clarification. In our study, 'trial participants' refers to individuals who have directly participated in clinical trials (e.g. receiving interventions or being part of control groups), offering invaluable insights from their lived experiences. This distinguishes them from 'members of the public', who may not have personally participated in clinical trials but may have helped with patient engagement and/or assisted in developing study designs.

In light of this comment, we have clarified it in the revised manuscript under the section of “Patient and public involvement” on page 9:

“In this study, members of the public (n=4) (including two who had participated in trials) were invited to share their thoughts and participate in the ranking process, and were provided with the outcomes of each round upon completion.”

6. A small typo in the appendix: all the inter-percentile ranges in the IPRAS calculations are written as intercentile ranges.

Thank you for spotting this typo. They have been corrected accordingly.

Attachment

Submitted filename: Delphi study_rebuttal letter.docx

pone.0304187.s008.docx (167.1KB, docx)

Decision Letter 1

Germain Honvo

8 May 2024

Development of a conceptual framework for defining trial efficiency

PONE-D-23-40114R1

Dear Dr. Xie,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the 'Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Germain Honvo, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I am happy with the revisions made by the authors. This conceptual paper will make a positive contribution to the literature.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Acceptance letter

Germain Honvo

14 May 2024

PONE-D-23-40114R1

PLOS ONE

Dear Dr. Xie,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Germain Honvo

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. PRISMA flowchart.

    (DOCX)

    pone.0304187.s001.docx (177.3KB, docx)
    S2 Fig. Trial process in general.

    (DOCX)

    pone.0304187.s002.docx (245.5KB, docx)
    S1 Table. Literature review inclusion and exclusion criteria.

    (DOCX)

    pone.0304187.s003.docx (14.8KB, docx)
    S2 Table. Efficiency definitions/explanations in the literature.

    (DOCX)

    pone.0304187.s004.docx (18.5KB, docx)
    S3 Table. Scoring round stratified results.

    (DOCX)

    pone.0304187.s005.docx (28.6KB, docx)
    S4 Table. Scoring round exemplar quotes related to potential overlaps among the four constructs.

    (DOCX)

    pone.0304187.s006.docx (17.2KB, docx)
    S1 File. Open round questionnaire.

    (DOCX)

    pone.0304187.s007.docx (123.3KB, docx)
    Attachment

    Submitted filename: Delphi study_rebuttal letter.docx

    pone.0304187.s008.docx (167.1KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its supporting information files.

