Abstract
Background
Globally, there is a growing focus on efficient trials, yet numerous interpretations have emerged, suggesting significant heterogeneity in the understanding of “efficiency” in the trial context. Therefore, in this study we aimed to dissect the multifaceted nature of trial efficiency by establishing a comprehensive conceptual framework for its definition.
Objectives
To collate diverse perspectives regarding trial efficiency and to achieve consensus on a conceptual framework for defining trial efficiency.
Methods
From July 2022 to July 2023, we undertook a literature review to identify the various terms that have been used to define trial efficiency. We then conducted a modified e-Delphi study, comprising an exploratory open round and a subsequent scoring round to refine and validate the identified items. We recruited a wide range of experts from the global trial community, including trialists, funders, sponsors, journal editors, and members of the public. Consensus was defined as items rated “without disagreement”, measured by the inter-percentile range adjusted for symmetry through the RAND/UCLA approach.
Results
Seventy-eight studies were identified from the literature review, from which we extracted nine terms related to trial efficiency. We then used the review findings as exemplars in the Delphi open round. Forty-nine international experts were recruited to the e-Delphi panel. Open round responses led to refinement of the initial nine terms, which were then included in the scoring round. We obtained consensus on all nine items: 1) four constructs that collectively define trial efficiency, comprising scientific efficiency, operational efficiency, statistical efficiency, and economic efficiency; and 2) five essential building blocks of an efficient trial, comprising trial design, trial process, infrastructure, superstructure, and stakeholders.
Conclusions
This is the first attempt to dissect the concept of trial efficiency into theoretical constructs. Having an agreed definition will allow better trial implementation and facilitate effective communication and decision-making across stakeholders. We also identified essential building blocks that are the cornerstones of an efficient trial. In this pursuit of understanding, we are not only unravelling the complexities of trial efficiency but also laying the groundwork for evaluating the efficiency of an individual trial or a trial system in the future.
Introduction
Worldwide, trial efficiency is a longstanding priority for the pharmaceutical industry [1], academia and funding bodies [2,3]. In 2004 in the US, the Clinical Trials Working Group of the National Cancer Advisory Board set the goal of improving operational efficiency to facilitate timely and cost-effective trial execution [4]. In the UK, the National Institute for Health and Care Research offers additional funding to support clinical trial units to advance the design and execution of efficient, innovative research, aiming to provide robust evidence to inform clinical practice and policy [5]. A recent article in The Lancet Global Health examined the challenges faced by current clinical trial research in low- and middle-income countries, and argued that efficient trials are needed to address research questions related to the increasing burden of non-communicable diseases in a timely and affordable way [6].
Currently, the concept of efficiency in healthcare trials has been used to refer to accelerated ethical approval [6], addressing multiple complex questions in a single trial [7] with a minimised sample size [6], trials of shorter duration [7,8], lower cost [9], and reduced resource requirements [10]. In addition, the existing literature has discussed trial efficiency in terms of operational efficiency [11–13], scientific efficiency [11], statistical efficiency [13,14], and economic efficiency [15]. There is significant heterogeneity in what is meant by efficiency in the context of trials, which may hinder effective communication and decision-making between stakeholders and compromise the comparability of studies. Therefore, in this study we aimed to develop a conceptual framework for defining trial efficiency and to achieve expert consensus on the framework constructs.
Method
Study design
We undertook a literature review to identify items that define and comprise trial efficiency. We then conducted an e-Delphi study to refine and validate those items and to achieve consensus on the constructs and building blocks of trial efficiency. Ethics approval was obtained from the Queen Mary University of London research ethics committee (QMERC22.316). This study follows the Guidance on Conducting and Reporting Delphi Studies (CREDES) [16].
Literature review for generating items
Our goal in the literature review was to collate existing discussions of efficiency in the context of trials, including definitions and attributes described as constituting an efficient trial. As discussions specifically focused on this subject are scarce, we included a broad range of study types, such as full trial papers or protocols, editorials, and opinion pieces that discussed trial efficiency. We considered all types of human trials evaluating medical, surgical, or behavioural interventions, including efficacy trials, effectiveness trials, and implementation trials. The search was limited to English-language articles, with no restriction on publication dates. To carry out the review, we searched the MEDLINE (via Ovid) database for terms such as ’trial’ and ’efficien*’ in article titles and keywords. As ’efficiency’ is a common word in the literature, we searched for these two keywords only within article titles (rather than within abstracts) to ensure the results’ relevance to the discussion of trial efficiency. The detailed inclusion and exclusion criteria are listed in S1 Table.
e-Delphi
Panel selection and recruitment
The aim was to recruit a diverse panel of experts from the trial community, encompassing a range of roles and perspectives. This included international researchers identified through the literature review, colleagues in professional trial networks such as the UK trial managers’ network, representatives from funding bodies, journal editors, and members of the public who have been involved in trials. Purposive and snowball sampling were then used to identify additional participants. We approached participants with known contact details by individual emails generated through Clinvivo [17], while for colleagues within professional networks, for whom we did not have individual contact details, we sent a generic recruitment email to the network’s mailing list. Recruitment began in November 2022 and continued until March 2023. Written informed consent was obtained online through the Clinvivo Delphi system.
Data collection
Data collection comprised two rounds, preceded by a pilot round to test the feasibility of the open round; consensus was achieved by the end of the second round, so no further rounds were required.
Pilot test. We pilot tested the feasibility of the open round questionnaire amongst colleagues with diverse experience in trial design and conduct at the Pragmatic Clinical Trial Unit of Queen Mary University of London. This provided valuable feedback on the clarity of the questions, the appropriateness of the response options, and the overall structure of the questionnaire. Based on the feedback received during the pilot testing, we made revisions and refinements to the questionnaire to enhance its usability.
Open round. In the open round, we invited panellists to share their thoughts on 1) their understanding of trial efficiency and 2) the most efficient or inefficient aspects they had encountered in trials they had conducted or in which they had participated. These questions were designed as free text to encourage detailed, narrative responses. To gain insight into the participants’ backgrounds, we collected information on their countries of residence and roles within trials (see S1 File for the questionnaire). This open round allowed us to gather diverse viewpoints and experiences related to trial efficiency, which contributed to the development of a comprehensive set of items for ranking in the subsequent round. Data collection for this round took place over four weeks, with reminder emails sent to participants after the second and third weeks.
Scoring round. Panel members from the open round were emailed a link to the second questionnaire. They were asked to rate the importance of the proposed items on a scale of 1 to 9 (1: not at all important; 9: critically important). At the end of each question there was a free-text space for any comments they wished to share. The scoring round data collection spanned four weeks, with weekly reminders to participants.
Data analysis and consensus
Descriptive statistics were used to analyse quantitative demographics, and thematic analysis was used to summarise free-text responses from both Delphi rounds. To assess disagreement and appropriateness, we used the Research ANd Development (RAND)/University of California Los Angeles (UCLA) appropriateness method [18]. This involves calculating, for each rated item, the median score, the inter-percentile range (IPR) (30th to 70th percentile), and the inter-percentile range adjusted for symmetry (IPRAS). Consensus was defined as items rated “without disagreement”, measured by the IPRAS.
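The disagreement rule can be sketched in a few lines of code. The constants 2.35 and 1.5 are those of the published RAND/UCLA method [18]; the function name and signature below are illustrative only.

```python
def ipras(p30: float, p70: float) -> tuple[float, float, bool]:
    """RAND/UCLA appropriateness check for one item rated on the 1-9 scale.

    Takes the 30th and 70th percentiles of the panel's ratings and
    returns (IPR, IPRAS, disagreement).
    """
    ipr = p70 - p30  # inter-percentile range (30th to 70th)
    # Asymmetry Index: distance of the IPR's central point from the
    # scale midpoint of 5; asymmetric rating distributions are allowed
    # a wider tolerance before disagreement is declared.
    ai = abs(5.0 - (p30 + p70) / 2.0)
    ipras_value = 2.35 + 1.5 * ai
    # Disagreement exists when the spread of ratings exceeds IPRAS.
    return ipr, ipras_value, ipr > ipras_value
```

For example, an item with P30 = 8 and P70 = 9 gives IPR = 1 and IPRAS = 2.35 + 1.5 × |5 − 8.5| = 7.6; since 1 < 7.6, the item is rated “without disagreement”.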
Patient and public involvement
In this study, members of the public (n = 4), including two who had participated in trials, were invited to share their thoughts, took part in the ranking process, and were provided with the outcomes of each round upon completion. They were considered experts due to their lived experience and were offered a £30 voucher as compensation for their time.
Results
Delphi participants
Out of 106 international experts approached individually, plus four emails sent to network mailing lists, 49 participants responded to the open round (United Kingdom (n = 37), United States (n = 7), Canada (n = 2), Australia (n = 1), Ireland (n = 1), and Kenya (n = 1)). The panel included a diversity of roles: statisticians (n = 17), trial managers (n = 12), principal investigators (n = 7), funders (n = 4), journal editors (n = 3), members of the public (n = 4), data managers (n = 3), site staff (n = 2), sponsors (n = 2), researchers (n = 2), monitors (n = 2), ethicist (n = 1), clinician (n = 1), CTU manager (n = 1), trial support officer (n = 1), and trial methodologist (n = 1). Many participants had more than one role. See Fig 1.
Fig 1. Delphi flowchart.
Literature review
We included a total of 78 studies for data analysis (see S1 Fig): 6 (8%) reviews, 15 (19%) perspectives or commentaries, 1 (1%) interview, 2 (3%) case studies, 2 (3%) surveys, 3 (4%) randomised trials, and 49 (63%) methodology papers describing new trial designs. Only 8 (10%) studies explicitly defined or explained what ‘efficiency’ meant in the context of their trials (see S2 Table for details). We categorised discussions of efficiency from the literature into nine key items: 1) scientific efficiency [11,19,20], 2) operational efficiency [11,20,21], 3) statistical efficiency [14,22–24], 4) economic efficiency [15,25], 5) efficiency in trial designs [7,8,23,26–45], 6) trial conduct [11,20,21,46–66], and other aspects, namely 7) improving efficiency using information technologies and mobile apps [53,67–70], 8) involving the public and stakeholders [20,71], and 9) efficient trial reviews and regulatory approvals [28,66,72–74] (see Table 1 for details). These results were included as exemplars in the Delphi open round questionnaire. The detailed description of the literature review has previously been made available [75] to ensure full transparency and to facilitate open scholarly dialogue.
Table 1. Key themes synthesised from literature review.
| How efficiency had been discussed | Examples and references |
|---|---|
| Scientific efficiency | Scientific efficiency refers to the methodological rigour of the trial design. That is, a design that uses fewer resources and less infrastructure to maximise the outputs [11], addresses the right research questions, considers the implications of the design decision, and is relevant to the stakeholders [19,20]. |
| Operational efficiency | Operational efficiency covers the full trial process, from concept development to protocol activation, and from enrolment to closure [11]. Wu and colleagues [20] assessed operational efficiency in patient recruitment and trial duration, while Hess and colleagues [21] increased operational efficiency through objective site selection and reduced site coordinator workload. In addition, the National Cancer Institute established the Operational Efficiency Working Group to identify barriers associated with trial operations, aiming to reduce trial activation time and to complete activated studies in a timely manner [76]. |
| Statistical efficiency | Statistical efficiency measures the quality of the choice of estimators [24], experimental designs and hypothesis-testing procedures [22]; type I error, power, and sample size [23]; and the use of endpoint events, including the selection of an appropriately weighted test statistic [14]. |
| Economic efficiency | Economic efficiency concerns the expenditure of research resources [15] and the cost for completing the trial [25]. |
| Trial designs | Including adaptive designs [23,26–33], master protocol designs [34] such as basket trials [35,36] and platform trials [37,38], sequential designs [7,39], cluster designs [40–42], factorial trials [43,44], and registry-based trials [45]. |
| Trial conduct | 1) patient identification and recruitment [20,46–53]; for example, an automated eligibility screening tool increased the efficiency of patient accrual. |
| | 2) data analysis [54–57]; for example, “an alternative analytical approach that can enhance the signal-to-noise ratio would open the path for more efficient and rigorous clinical trials of Parkinson’s Disease therapies”. |
| | 3) selection of endpoints or outcome measures [58–61]; for example, the use of ordinal outcomes and combining outcomes within a patient could improve trial efficiency. |
| | 4) data collection and management [21,62]; for example, collecting and processing routine health data from an existing registry would facilitate efficient trial conduct. |
| | 5) site selection and management [21,63–65]; for example, reduced site workload and improved site operation contributed to trial efficiency. The central argument in this group was to improve trial efficiency by enhancing operational efficiency [11,66]. |
| Other aspects | 1) using information technologies and mobile apps [53,67–70] |
| | 2) involving the public and stakeholders [20,71] |
| | 3) efficient trial reviews and regulatory approvals [28,66,72–74] |
Open round
When asked to define trial efficiency, some participants referred to definitions from the literature review, while others cited similar definitions tailored to their trial context. When asked about the most efficient/inefficient facets of trial efficiency, the responses resonated closely with the findings from our literature review (Fig 2). Specifically, trial design emerged as the facet most frequently cited as enhancing efficiency, whereas data collection was often highlighted as the element that most impeded efficiency.
Fig 2. The efficient and inefficient aspects discussed in the open round.

The x-axis represents the frequency of responses.
By incorporating findings from this round, we further refined the nine items identified from the literature review and divided them into two groups: 1) theoretical and abstract constructs: scientific efficiency, operational efficiency, statistical efficiency, and economic efficiency; and 2) empirical and fundamental building blocks: trial design (including endpoint selection, the statistical analysis plan, protocol development, etc.), trial process (including recruitment and retention, data collection and analysis, trial administration, etc.), superstructure (including regulatory approvals, funding applications, etc.), infrastructure (including financial and physical resources such as cost, information technologies, routine healthcare data, etc.), and stakeholders. This resulted in a total of nine items for rating in the scoring round (see Table 2).
Table 2. Scoring round items and results: Appropriateness, disagreement, median item ratings, inter-percentile range, and inter-percentile range adjusted for symmetry.
| Item | Disagreement | Median | P30 | P70 | IPR | IPRAS |
|---|---|---|---|---|---|---|
| 1.1 Scientific efficiency: methodological rigour of the trial design | No | 9 | 8 | 9 | 1 | 7.6 |
| 1.2 Operational efficiency: optimal management, organization, and execution of trial processes and procedures | No | 9 | 8 | 9 | 1 | 7.6 |
| 1.3 Statistical efficiency: a measure of the quality of an estimator, an experimental design, or a hypothesis-testing procedure | No | 8 | 7.5 | 9 | 1.5 | 7.225 |
| 1.4 Economic efficiency: optimal use of resources in the design, implementation, and analysis of clinical trials | No | 8 | 7 | 8.5 | 1.5 | 6.475 |
| 2.1 Trial design: planning and organisation of a trial | No | 9 | 9 | 9 | 0 | 8.35 |
| 2.2 Trial process: trial set up, conduct and closeout | No | 9 | 8 | 9 | 1 | 7.6 |
| 2.3 Stakeholders: individuals or groups who have an interest or concern in the design, execution, and outcomes of a trial | No | 8 | 7 | 9 | 2 | 6.85 |
| 2.4 Infrastructure: underlying framework, systems, and resources required to design, implement, manage, and analyse a trial | No | 8 | 8 | 9 | 1 | 7.6 |
| 2.5 Superstructure: overarching structure of a trial | No | 8 | 7 | 8 | 1 | 6.1 |
P30: 30th percentile.
P70: 70th percentile.
IPR: inter-percentile range.
IPRAS: inter-percentile range adjusted for symmetry.
Scoring round and consensus
Forty participants (82%) responded to the scoring round, and there was no disagreement on any item (Table 2). We also conducted sub-analyses by five role groups: (1) funders and sponsors (n = 6); (2) statisticians (n = 13); (3) trial managers (n = 10); (4) principal investigators (n = 6); and (5) PPI members (n = 3). Group membership was not mutually exclusive. Stratified results showed widespread agreement that the items were appropriate, with the exception of one building block, superstructure: the funders and sponsors group disagreed that this item was appropriate (S3 Table). As a result, no new items were added, but we slightly modified the explanation of each proposed item in line with free-text comments made by the participants.
Theoretical constructs of trial efficiency: Revised definitions incorporating Delphi comments
Scientific efficiency
Some participants were confused by the provided definition (Box 1, quote 1), while others suggested expanding the definition to include feasibility and implementation (Box 1, quotes 2–3). We therefore refined the definition as the balance of methodological rigour, relevance of the research question, and feasibility of the trial design. It prioritises effective use of resources, including data, to minimise research waste; considers the alignment of design and statistical strategies; and underscores the importance of the study’s practical impact on stakeholders and delivering value to end-users.
Box 1. Scoring round exemplar free-text comments related to the construct definitions
Scientific efficiency
Quote 1: “Not sure rigour equates to efficiency” (Participant n. 17, principal trial investigator)
Quote 2: “Feasibility of trial design needs to be included here. You could have the perfect trial design but no participants or high withdrawals and lack of site engagement.” (Participant n.2, trial manager)
Quote 3: “This may also need to include how important the findings will be to service users and the public and whether there are ways they are expected to be implemented in practice.” (Participant n.28, trial support officer)
Operational efficiency
Quote 4: “I’d make particular focus on the bureaucracy ‐ endless paperwork.” (Participant n.3, funder)
Quote 5: "Feasibility of operational efficiency. You may have participants and engaged sites but you need operational feasibility to align." (Participant n.2, trial manager)
Quote 6: “Would like to see reference to the ongoing assessment of a trial in the descriptor.” (Participant n.39, trial manager)
Statistical efficiency
Quote 7: “and accounting for missing data, and sources of bias or confounding” (Participant n.19, principal trial investigator)
Quote 8: “Also needs to encompass other aspects of analysis, e.g., health economics.” (Participant n.14, statistician)
Economic efficiency
Quote 9: “Allowing for the concept of data sharing beyond the life of the study” (Participant n.37, sponsor)
Quote 10: “Need to be clear that this is (I presume) related to the costs of delivering the trial and not the cost of the intervention (i.e. health economic analysis).” (Participant n.26, statistician)
Operational efficiency
Some comments suggested the definition should be expanded to consider operational feasibility, bureaucracy, and ongoing evaluation (Box 1, quotes 4–6). We therefore redefined operational efficiency as the optimal management, organisation, execution, and continuous evaluation of trial processes and procedures. It emphasises operational feasibility (such as ensuring a sufficient workforce, managing delays, and working effectively with third-party providers), reducing unnecessary bureaucracy and duplication, and continuously assessing the trial for potential improvements.
Statistical efficiency
The initial definition (Table 1) was expanded based on participants’ comments (Box 1, quotes 7–8) to the application of design and analytical methods that yield more accurate estimates of treatment effects or other parameters of interest. This includes minimising the amount of data to be collected, accounting for missing data, and managing sources of bias or confounding; its focus is specifically on maximising the accuracy and reliability of results given the data collected.
Economic efficiency
We clarified the initial definition according to scoring round feedback (Box 1, quotes 9–10): the optimal use of resources in trial design, implementation, and analysis, to ensure the immediate and long-term cost-effectiveness of the trial. This focus on value ensures that resources are utilised to their fullest extent without compromising the quality of the research. It emphasises the cost-effectiveness of conducting the trial.
Essential building blocks comprising an efficient trial
Overall, there was strong consensus on the building blocks; the free-text comments did not suggest significant alterations but recommended adding detail within each building block. Trial design concerns the planning and organisation of a trial, which may include the trial methodology, research questions, sample size, interventions, control group, endpoints and outcomes; document development, such as the funding application; and the planning of feasibility and pilot studies. The trial process involves the setup, execution, and closeout phases of a trial (see S2 Fig for details). Stakeholders are the critical human factor: individuals or groups with an interest or concern in the design, execution, and outcomes of a trial. They may be trial participants (e.g. patients, practitioners, health system leaders, public health organisations), trialists (e.g. investigators, researchers, trial managers, statisticians), funders, sponsors, trial sites and their staff, regulatory authorities, healthcare and clinical practitioners, the scientific community (researchers, academics, and clinicians interested in the trial’s outcomes and their implications for future research), and the general public (the broader population who may ultimately benefit from the knowledge generated by the clinical trial). Infrastructure is the underlying framework, systems, and resources required to design, implement, manage, and analyse a trial, such as resources (human, financial, physical), information systems and technologies, and healthcare data. Superstructure serves as the overarching structure of a trial, including laws, policy, and governance.
With these, we developed a Trial Efficiency Pentagon (Fig 3) to situate the five building blocks and to illustrate the multiple connections among them: improvements in one block may lead to trade-offs in one or more of the others.
Fig 3. Trial efficiency pentagon.
The final conceptual framework for defining trial efficiency
Fig 4 represents the finalised framework. The term trial efficiency is complex and multifaceted, encompassing four conceptual constructs with five essential building blocks.
Fig 4. The conceptual framework of trial efficiency.

The outer blue circle outlines theoretical constructs of trial efficiency: Scientific Efficiency, Statistical Efficiency, Operational Efficiency and Economic Efficiency. At its core, the inner pentagon outlines the empirical building blocks: Superstructure, Stakeholders, Infrastructure, Trial Process, and Trial Design. The cyclical arrows indicate the necessity for a balanced consideration of each building block within each construct to optimise trial efficiency.
Discussion
Main findings
Consensus was achieved on the four constructs that together define trial efficiency (scientific efficiency, operational efficiency, statistical efficiency, and economic efficiency) and on the five essential building blocks of an efficient trial (trial design, trial process, infrastructure, superstructure, and stakeholders).
The conceptual constructs, empirical building blocks, and interrelationships
Overall, there was no disagreement over the constructs that conceptually define trial efficiency. However, some concerns were raised regarding potential overlaps between scientific efficiency and statistical efficiency, and between operational efficiency and economic efficiency (S4 Table). These four constructs share some common elements, yet they are conceptually distinct and each brings unique aspects to the concept of trial efficiency. Scientific efficiency, for instance, focuses primarily on the methodological rigour [77] and feasibility of the trial design, while statistical efficiency is concerned with achieving the most accurate results possible with the smallest amount of data collected [78]. The overlap lies in the fact that both aim to optimise the quality and validity of the trial’s findings, yet their distinct foci underline their separate roles within the overarching construct of trial efficiency. Similarly, while operational and economic efficiency both aim to make the best use of resources [11], they do so in different ways and in different contexts. Operational efficiency is about the effective management and organisation of trial processes and procedures [11,13], while economic efficiency involves optimising resource use in relation to the cost of delivering the trial. By maintaining these conceptually distinct constructs, we were able to capture the broad spectrum of abstract factors that define trial efficiency, thus offering a nuanced theoretical framework for its comprehension.
The proposed building blocks create a foundation for the formulation of an efficient trial. In the Delphi scoring round, there was strong consensus regarding the significance of these building blocks, with an average median score of 8.4 on a 1–9 scale. However, some participants perceived a hierarchy among the building blocks, suggesting that some (e.g., trial design and trial process) hold more importance than others. This was reflected in the literature review and in the Delphi open round responses, where certain building blocks, such as trial design, were more frequently discussed as critical determinants of trial efficiency. Despite these observations, we propose that all five building blocks are of equal importance and mutually contribute to the overall efficiency of a trial. These foundational elements are also interconnected: for instance, even the most rigorous and feasible trial design is contingent on the availability of suitable infrastructure and requires input from stakeholders. Therefore, we advocate a balanced view in which no single building block takes precedence in the trial efficiency pentagon.
There is a layered connection between the constructs and the building blocks: the constructs were conceptualised to provide a broad, overarching view of efficiency within healthcare trials, whereas the building blocks were identified as the essential, practical components that operationalise efficiency in real-world settings. In addition to this relationship, we suggest that, for a comprehensive understanding, each efficiency construct should take into account all five building blocks. For instance, while it may seem apparent that scientific efficiency is closely linked with trial design, focusing on how the study is conceptualised to ensure methodological soundness, it also intersects with stakeholder involvement, where patient and public engagement can improve the trial design and thus the relevance and applicability of the trial’s outcomes.
Implications
According to the results of the literature review, few studies explicitly defined efficiency in the context of trials, and no effort had been made to develop a unified, agreed definition of trial efficiency. Linguistically, ‘efficiency’ is defined as “the production of the desired effects or results with minimum waste of time, effort, or skill” [79]. This definition shares similarities with those from the literature (S2 Table), in which the defining characteristic is the balance between inputs (e.g. resources) and outputs (e.g. the objectives of the trial). Nevertheless, these interpretations are often narrowly tailored. In this study we sought to offer a holistic view that captures the nuances and complex aspects of trial efficiency, which may help policymakers, funders, and researchers make informed decisions, leading to improved trial implementation and patient care. Enhancing efficiency was emphasised in the UK Department of Health and Social Care’s 2022–2025 strategic plan for clinical research [80]. At the time of drafting this paper, the U.S. Food and Drug Administration announced updated recommendations for good clinical practice, advocating greater efficiency in trials by modernising both design and conduct [81]. Our study is therefore timely, underscoring the urgency of comprehensively understanding trial efficiency.
Strengths and limitations
Drawing on both a literature review and expert opinion, our study followed a rigorous approach to developing a conceptual framework of trial efficiency. We included a wide range of experts from the trial community, including members of the public, enhancing the comprehensiveness and richness of our study. Nevertheless, nine participants did not respond to the scoring round, which could have introduced bias into the consensus or missed subtle distinctions regarding the significance of certain trial elements. However, given the diverse range of participants who did engage, coupled with triangulation against the existing literature, this non-response is not expected to significantly affect the overall validity and comprehensiveness of our Delphi findings.
While we have sought to delineate each construct and building block distinctly, we acknowledge the potential for different interpretations of qualitative data. The interplay between the identified themes is likely to be more intricate, reflecting the complex nature of trial efficiency. Future research could delve deeper into this interplay to unravel the connections.
The ‘trial efficiency pentagon’, a novel concept emerging from this study, is a promising tool for assessing trial efficiency both proactively and retrospectively. For example, it could be developed to support group discussions and/or calibrated as an evaluation instrument to measure the efficiency of a trial. However, it currently lacks a robust theoretical foundation: while we have pieced together insights and perspectives to shape the pentagon, we have not rooted it in any established theory or conceptual model. As a result, certain fundamental aspects of trial efficiency might be overlooked or not holistically represented. In future work, we aspire to hone the pentagon into an evidence-based, theory-informed tool; we welcome insights from our readers and remain open to collaborations on its further development.
Conclusions
This is the first attempt to dissect the concept of trial efficiency into theoretical constructs. In this pursuit of understanding, we are not only unravelling the complexities of trial efficiency but also laying the groundwork for evaluating the efficiency of an individual trial or a trial system in the future.
Supporting information
Seven supporting information files (DOCX).
Acknowledgments
We thank Prof. Shaun Treweek for his insightful discussion on trial efficiency, which largely inspired this work. We thank Ann Thomson, Senior Trial Manager at Queen Mary University of London’s Pragmatic Clinical Trials Unit, for her valuable discussions and insights into the trial process. Our thanks also go to the Health Research Board-Trials Methodology Research Network for promoting our Delphi study through their email newsletter. We acknowledge the support of the UKCRC Registered CTU Network; the views expressed are those of the author(s) and not of the UKCRC or its members. We are immensely grateful to all participants of the Delphi study rounds for their invaluable contributions and willingness to share their expertise. We have received consent to acknowledge the following participants by name (in no particular order): Monica Taljaard, Lelia Duley, Sarah Markham, Deb Smith, Catey Bunce, Stephen Brealey, Steff Lewis, Laura Miller, Jacqueline French, Fiona Hogarth, Gail Holland, Nikki Totton, Nick Kisengese, Joanne Haviland, Matthew Burns, Richard Hooper, Claire Ayling, Catherine Arundel, Ines Rombach, Seonaidh Cotton, Paula Kareclas. Lastly, we appreciate the reviewer’s comments, which have been instrumental in enhancing the development of the conceptual framework.
Data Availability
All relevant data are within the manuscript and its supporting information files.
Funding Statement
CX is funded by the Wellcome Trust (224863/Z/21/Z). URL: https://wellcome.org/. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The funder played no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
References
- 1.Schulz G. Increasing the Efficiency of Clinical Trials. Tanner Pharma Group; 2023 [Available from: https://tannerpharma.com/increasing-the-efficiency-of-clinical-trials/].
- 2.Trial Forge. Trial Forge Centres [Available from: https://www.trialforge.org/trial-forge-centres/].
- 3.Johns Hopkins Medicine. Improving the Efficiency of Clinical Trials [Available from: https://clinicalconnection.hopkinsmedicine.org/news/improving-the-efficiency-of-clinical-trials].
- 4.Clinical Trials Working Group. Restructuring the National Cancer Clinical Trials Enterprise. National Cancer Institute; 2005.
- 5.National Institute for Health and Care Research. Annual Efficient Studies funding calls for CTU projects. 2019 [Available from: https://www.nihr.ac.uk/documents/ad-hoc-funding-calls-for-ctu-projects/20141].
- 6.Park JJH, Grais RF, Taljaard M, Nakimuli-Mpungu E, Jehan F, Nachega JB, et al. Urgently seeking efficiency and sustainability of clinical trials in global health. Lancet Glob Health. 2021;9(5):e681–e90. doi: 10.1016/S2214-109X(20)30539-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.van Eijk RPA, Nikolakopoulos S, Ferguson TA, Liu D, Eijkemans MJC, van den Berg LH. Increasing the efficiency of clinical trials in neurodegenerative disorders using group sequential trial designs. J Clin Epidemiol. 2018;98:80–8. doi: 10.1016/j.jclinepi.2018.02.013 [DOI] [PubMed] [Google Scholar]
- 8.Sessler DI, Myles PS. Novel Clinical Trial Designs to Improve the Efficiency of Research. Anesthesiology. 2020;132(1):69–81. doi: 10.1097/ALN.0000000000002989 [DOI] [PubMed] [Google Scholar]
- 9.Zannad F, Pfeffer MA, Bhatt DL, Bonds DE, Borer JS, Calvo-Rojas G, et al. Streamlining cardiovascular clinical trials to improve efficiency and generalisability. Heart. 2017;103(15):1156–62. doi: 10.1136/heartjnl-2017-311191 [DOI] [PubMed] [Google Scholar]
- 10.Cornelius VR, McDermott L, Forster AS, Ashworth M, Wright AJ, Gulliford MC. Automated recruitment and randomisation for an efficient randomised controlled trial in primary care. Trials. 2018;19(1):341. doi: 10.1186/s13063-018-2723-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Kelly D, Spreafico A, Siu LL. Increasing operational and scientific efficiency in clinical trials. Br J Cancer. 2020;123(8):1207–8. doi: 10.1038/s41416-020-0990-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.England A, Wade K, Smith PB, Berezny K, Laughon M, on behalf of the Best Pharmaceuticals for Children Act-Pediatric Trials Network Administrative Core Committee. Optimizing operational efficiencies in early phase trials: The Pediatric Trials Network experience. Contemp Clin Trials. 2016;47:376–82. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Park JJH, Sharif B, Harari O, Dron L, Heath A, Meade M, et al. Economic Evaluation of Cost and Time Required for a Platform Trial vs Conventional Trials. JAMA Netw Open. 2022;5(7):e2221140. doi: 10.1001/jamanetworkopen.2022.21140 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Prentice RL. Opportunities for enhancing efficiency and reducing cost in large scale disease prevention trials: a statistical perspective. Stat Med. 1990;9(1–2):161–70; discussion 70–2. doi: 10.1002/sim.4780090123 [DOI] [PubMed] [Google Scholar]
- 15.Torgerson D, Campbell M. Unequal randomisation can improve the economic efficiency of clinical trials. J Health Serv Res Policy. 1997;2(2):81–5. doi: 10.1177/135581969700200205 [DOI] [PubMed] [Google Scholar]
- 16.Junger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684–706. doi: 10.1177/0269216317690685 [DOI] [PubMed] [Google Scholar]
- 17.Clinvivo Limited; 2015.
- 18.Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, van het Loo M, McDonnell J, Vader JP, Kahan JP. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND Corporation; 2000. [Google Scholar]
- 19.Treweek S, Born A. Clinical trial design: increasing efficiency in evaluating new healthcare interventions. Journal of Comparative Effectiveness Research. 2014;3(3):233–6. doi: 10.2217/cer.14.13 [DOI] [PubMed] [Google Scholar]
- 20.Wu K, Wu E, D’Andrea M, Chitale N, Lim M, Dabrowski M, et al. Machine Learning Prediction of Clinical Trial Operational Efficiency. AAPS Journal. 2022;24(3):57. doi: 10.1208/s12248-022-00703-3 [DOI] [PubMed] [Google Scholar]
- 21.Hess CN, Rao SV, Kong DF, Aberle LH, Anstrom KJ, Gibson CM, et al. Embedding a randomized clinical trial into an ongoing registry infrastructure: unique opportunities for efficiency in design of the Study of Access site For Enhancement of Percutaneous Coronary Intervention for Women (SAFE-PCI for Women). American Heart Journal. 2013;166(3):421–8. doi: 10.1016/j.ahj.2013.06.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Eshima N. Efficiency of Statistical Hypothesis Test Procedures. Statistical Data Analysis and Entropy. Singapore: Springer Singapore; 2020. p. 141–65. [Google Scholar]
- 23.Jiang Y, Zhao W, Durkalski-Mauldin V. Impact of adaptation algorithm, timing, and stopping boundaries on the performance of Bayesian response adaptive randomization in confirmative trials with a binary endpoint. Contemp Clin Trials. 2017;62:114–20. doi: 10.1016/j.cct.2017.08.019 [DOI] [PubMed] [Google Scholar]
- 24.Zhang Z, Ma S. Machine learning methods for leveraging baseline covariate information to improve the efficiency of clinical trials. Statistics in Medicine. 2019;38(10):1703–14. doi: 10.1002/sim.8054 [DOI] [PubMed] [Google Scholar]
- 25.Saag KG, Mohr PE, Esmail L, Mudano AS, Wright N, Beukelman T, et al. Improving the efficiency and effectiveness of pragmatic clinical trials in older adults in the United States. Contemp Clin Trials. 2012;33(6):1211–6. doi: 10.1016/j.cct.2012.07.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Metcalfe A, Gemperle Mannion E, Parsons H, Brown J, Parsons N, Fox J, et al. Protocol for a randomised controlled trial of Subacromial spacer for Tears Affecting Rotator cuff Tendons: a Randomised, Efficient, Adaptive Clinical Trial in Surgery (START:REACTS). BMJ Open. 2020;10(5):e036829. doi: 10.1136/bmjopen-2020-036829 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Mukherjee A, Wason JMS, Grayling MJ. When is a two-stage single-arm trial efficient? An evaluation of the impact of outcome delay. European Journal of Cancer. 2022;166:270–8. doi: 10.1016/j.ejca.2022.02.010 [DOI] [PubMed] [Google Scholar]
- 28.Berry SM, Petzold EA, Dull P, Thielman NM, Cunningham CK, Corey GR, et al. A response adaptive randomization platform trial for efficient evaluation of Ebola virus treatments: A model for pandemic response. Clinical Trials. 2016;13(1):22–30. doi: 10.1177/1740774515621721 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Golub HL. The need for more efficient trial designs. Statistics in Medicine. 2006;25(19):3231–5; discussion 313–4, 326–47. doi: 10.1002/sim.2629 [DOI] [PubMed] [Google Scholar]
- 30.Shen J, Preskorn S, Dragalin V, Slomkowski M, Padmanabhan SK, Fardipour P, et al. How Adaptive Trial Designs can Increase Efficiency in Psychiatric Drug Development: A Case Study. Innovations in Clinical Neuroscience. 2011;8(7):26–34. [PMC free article] [PubMed] [Google Scholar]
- 31.Levin GP, Emerson SC, Emerson SS. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation. Statistics in Medicine. 2013;32(8):1259–75; discussion 80–2. doi: 10.1002/sim.5662 [DOI] [PubMed] [Google Scholar]
- 32.Lu M, Ownby DR, Zoratti E, Roblin D, Johnson D, Johnson CC, et al. Improving efficiency and reducing costs: Design of an adaptive, seamless, and enriched pragmatic efficacy trial of an online asthma management program. Contemporary Clinical Trials. 2014;38(1):19–27. doi: 10.1016/j.cct.2014.02.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Sverdlov O, Ryeznik Y, Wong WK. Opportunity for efficiency in clinical development: An overview of adaptive clinical trial designs and innovative machine learning tools, with examples from the cardiovascular field. Contemporary Clinical Trials. 2021;105:106397. doi: 10.1016/j.cct.2021.106397 [DOI] [PubMed] [Google Scholar]
- 34.Bitterman DS, Cagney DN, Singer LL, Nguyen PL, Catalano PJ, Mak RH. Master Protocol Trial Design for Efficient and Rational Evaluation of Novel Therapeutic Oncology Devices. Journal of the National Cancer Institute. 2020;112(3):229–37. doi: 10.1093/jnci/djz167 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Cunanan KM, Iasonos A, Shen R, Begg CB, Gonen M. An efficient basket trial design. Statistics in Medicine. 2017;36(10):1568–79. doi: 10.1002/sim.7227 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.He L, Ren Y, Chen H, Guinn D, Parashar D, Chen C, et al. Efficiency of a randomized confirmatory basket trial design constrained to control the family wise error rate by indication. Stat Methods Med Res. 2022;31(7):1207–23. doi: 10.1177/09622802221091901 [DOI] [PubMed] [Google Scholar]
- 37.Normington J, Zhu J, Mattiello F, Sarkar S, Carlin B. An efficient Bayesian platform trial design for borrowing adaptively from historical control data in lymphoma. Contemporary Clinical Trials. 2020;89:105890. doi: 10.1016/j.cct.2019.105890 [DOI] [PubMed] [Google Scholar]
- 38.Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA. 2015;313(16):1619–20. doi: 10.1001/jama.2015.2316 [DOI] [PubMed] [Google Scholar]
- 39.Boessen R, Knol MJ, Groenwold RH, Grobbee DE, Roes KC. Increasing trial efficiency by early reallocation of placebo nonresponders in sequential parallel comparison designs: application to antidepressant trials. Clinical Trials. 2012;9(5):578–87. doi: 10.1177/1740774512456454 [DOI] [PubMed] [Google Scholar]
- 40.Connolly SJ, Philippon F, Longtin Y, Casanova A, Birnie DH, Exner DV, et al. Randomized cluster crossover trials for reliable, efficient, comparative effectiveness testing: design of the Prevention of Arrhythmia Device Infection Trial (PADIT). Canadian Journal of Cardiology. 2013;29(6):652–8. doi: 10.1016/j.cjca.2013.01.020 [DOI] [PubMed] [Google Scholar]
- 41.Girling AJ. Relative efficiency of unequal cluster sizes in stepped wedge and other trial designs under longitudinal or cross-sectional sampling. Statistics in Medicine. 2018;37(30):4652–64. doi: 10.1002/sim.7943 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Matthews JNS. Highly efficient stepped wedge designs for clusters of unequal size. Biometrics. 2020;76(4):1167–76. doi: 10.1111/biom.13218 [DOI] [PubMed] [Google Scholar]
- 43.Mdege ND, Brabyn S, Hewitt C, Richardson R, Torgerson DJ. The 2 x 2 cluster randomized controlled factorial trial design is mainly used for efficiency and to explore intervention interactions: a systematic review. Journal of Clinical Epidemiology. 2014;67(10):1083–92. [DOI] [PubMed] [Google Scholar]
- 44.Piantadosi S. Highly efficient clinical trial designs for reliable screening of under-performing treatments: Application to the COVID-19 Pandemic. Clinical Trials. 2020;17(5):483–90. doi: 10.1177/1740774520940227 [DOI] [PubMed] [Google Scholar]
- 45.Yndigegn T, Hofmann R, Jernberg T, Gale CP. Registry-based randomised clinical trial: efficient evaluation of generic pharmacotherapies in the contemporary era. Heart. 2018;104(19):1562–7. doi: 10.1136/heartjnl-2017-312322 [DOI] [PubMed] [Google Scholar]
- 46.Ni Y, Kennebeck S, Dexheimer JW, McAneney CM, Tang H, Lingren T, et al. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department. J Am Med Inform Assoc. 2015;22(1):166–78. doi: 10.1136/amiajnl-2014-002887 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Cai T, Cai F, Dahal KP, Cremone G, Lam E, Golnik C, et al. Improving the Efficiency of Clinical Trial Recruitment Using an Ensemble Machine Learning to Assist With Eligibility Screening. ACR Open Rheumatology. 2021;3(9):593–600. doi: 10.1002/acr2.11289 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Ni Y, Wright J, Perentesis J, Lingren T, Deleger L, Kaiser M, et al. Increasing the efficiency of trial-patient matching: automated clinical trial eligibility pre-screening for pediatric oncology patients. BMC Medical Informatics & Decision Making. 2015;15:28. doi: 10.1186/s12911-015-0149-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Sampson R, Shapiro S, He W, Denmark S, Kirchoff K, Hutson K, et al. An integrated approach to improve clinical trial efficiency: Linking a clinical trial management system into the Research Integrated Network of Systems. Journal Of Clinical And Translational Science. 2022;6(1):e63. doi: 10.1017/cts.2022.382 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Schmickl CN, Li M, Li G, Wetzstein MM, Herasevich V, Gajic O, et al. The accuracy and efficiency of electronic screening for recruitment into a clinical trial on COPD. Respiratory Medicine. 2011;105(10):1501–6. doi: 10.1016/j.rmed.2011.04.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Smith KS, Eubanks D, Petrik A, Stevens VJ. Using web-based screening to enhance efficiency of HMO clinical trial recruitment in women aged forty and older. Clinical Trials. 2007;4(1):102–5. doi: 10.1177/1740774506075863 [DOI] [PubMed] [Google Scholar]
- 52.Stewart RR, Dimmock AEF, Green MJ, Van Scoy LJ, Schubart JR, Yang C, et al. An Analysis of Recruitment Efficiency for an End-of-Life Advance Care Planning Randomized Controlled Trial. American Journal of Hospice & Palliative Medicine. 2019;36(1):50–4. doi: 10.1177/1049909118785158 [DOI] [PubMed] [Google Scholar]
- 53.Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. Journal of the American Medical Informatics Association. 2009;16(6):869–73. doi: 10.1197/jamia.M3119 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Schuler A, Walsh D, Hall D, Walsh J, Fisher C, Critical Path for Alzheimer’s Disease, et al. Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score. The International Journal of Biostatistics. 2022;18(2):329–56. doi: 10.1515/ijb-2021-0072 [DOI] [PubMed] [Google Scholar]
- 55.Gomeni R, Merlo-Pich E. Trial Simulation to estimate Type I error when a population window enrichment strategy is used to improve efficiency of clinical trials in depression. European Neuropsychopharmacology. 2012;22(1):44–52. doi: 10.1016/j.euroneuro.2011.05.002 [DOI] [PubMed] [Google Scholar]
- 56.Sheng Y, Zhou X, Yang S, Ma P, Chen C. Modelling item scores of Unified Parkinson’s Disease Rating Scale Part III for greater trial efficiency. British Journal of Clinical Pharmacology. 2021;87(9):3608–18. doi: 10.1111/bcp.14777 [DOI] [PubMed] [Google Scholar]
- 57.Zhang L, Zhang X, Shen L, Zhu D, Ma S, Cong L. Efficiency of Electronic Health Record Assessment of Patient-Reported Outcomes After Cancer Immunotherapy: A Randomized Clinical Trial. JAMA Network Open. 2022;5(3):e224427. doi: 10.1001/jamanetworkopen.2022.4427 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Anker SD, Schroeder S, Atar D, Bax JJ, Ceconi C, Cowie MR, et al. Traditional and new composite endpoints in heart failure clinical trials: facilitating comprehensive efficacy assessments and improving trial efficiency. European Journal of Heart Failure. 2016;18(5):482–9. [DOI] [PubMed] [Google Scholar]
- 59.Boessen R, Heerspink HJ, De Zeeuw D, Grobbee DE, Groenwold RH, Roes KC. Improving clinical trial efficiency by biomarker-guided patient selection. Trials. 2014;15:103. doi: 10.1186/1745-6215-15-103 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Evans SR, Knutsson M, Amarenco P, Albers GW, Bath PM, Denison H, et al. Methodologies for pragmatic and efficient assessment of benefits and harms: Application to the SOCRATES trial. Clinical Trials. 2020;17(6):617–26. doi: 10.1177/1740774520941441 [DOI] [PubMed] [Google Scholar]
- 61.Forsyth R, Thuy V, Salorio C, Christensen J, Holford N. Review: efficient rehabilitation trial designs using disease progress modeling: a pediatric traumatic brain injury example. Neurorehabilitation & Neural Repair. 2010;24(3):225–34. doi: 10.1177/1545968309354534 [DOI] [PubMed] [Google Scholar]
- 62.Ellenberg SS. Discussion of papers on cost and efficiency of data collection in clinical trials. Stat Med. 1990;9(1–2):145–8; discussion 8–51. doi: 10.1002/sim.4780090121 [DOI] [PubMed] [Google Scholar]
- 63.Bechtel J, Chuck T, Forrest A, Hildebrand C, Panhuis J, Pattee SR, et al. Improving the quality conduct and efficiency of clinical trials with training: Recommendations for preparedness and qualification of investigators and delegates. Contemp Clin Trials. 2020;89:105918. doi: 10.1016/j.cct.2019.105918 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Marks R, Bristol H, Conlon M, Pepine CJ. Enhancing clinical trials on the internet: lessons from INVEST. Clin Cardiol. 2001;24(11 Suppl):V17-23. doi: 10.1002/clc.4960241707 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Gomeni R, Merlo-Pich E. Trial Simulation to estimate Type I error when a population window enrichment strategy is used to improve efficiency of clinical trials in depression. Eur Neuropsychopharmacol. 2012;22(1):44–52. doi: 10.1016/j.euroneuro.2011.05.002 [DOI] [PubMed] [Google Scholar]
- 66.Duley L, Gillman A, Duggan M, Belson S, Knox J, McDonald A, et al. What are the main inefficiencies in trial conduct: a survey of UKCRC registered clinical trials units in the UK. Trials. 2018;19(1):15. doi: 10.1186/s13063-017-2378-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.French JA. Improving clinical trial efficiency: Is technology the answer? Epilepsia Open. 2017;2(2):121–2. doi: 10.1002/epi4.12042 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Howat AP, Holloway PJ. The effect of diagnostic criteria on the efficiency of experimental clinical trials. J Dent Res. 1977;56 Spec No:C116-22. doi: 10.1177/002203457705600303011 [DOI] [PubMed] [Google Scholar]
- 69.Lauer MS, Gordon D, Wei G, Pearson G. Efficient design of clinical trials and epidemiological research: is it possible? Nat Rev Cardiol. 2017;14(8):493–501. doi: 10.1038/nrcardio.2017.60 [DOI] [PubMed] [Google Scholar]
- 70.Lokker C, Jezrawi R, Gabizon I, Varughese J, Brown M, Trottier D, et al. Feasibility of a Web-Based Platform (Trial My App) to Efficiently Conduct Randomized Controlled Trials of mHealth Apps For Patients With Cardiovascular Risk Factors: Protocol For Evaluating an mHealth App for Hypertension. JMIR Res Protoc. 2021;10(2):e26155. doi: 10.2196/26155 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Meier P. Polio trial: an early efficient clinical trial. Statistics in Medicine. 1990;9(1–2):13–6. doi: 10.1002/sim.4780090107 [DOI] [PubMed] [Google Scholar]
- 72.Gale C, Hyde MJ, Modi N, WHEAT trial development group. Research ethics committee decision-making in relation to an efficient neonatal trial. Archives of Disease in Childhood Fetal & Neonatal Edition. 2017;102(4):F291–F8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Nicholas J. NCI’s clinical trial system: efficiencies grow, debate goes on. Journal of the National Cancer Institute. 2010;102(23):1750–1, 5. doi: 10.1093/jnci/djq478 [DOI] [PubMed] [Google Scholar]
- 74.Hanna CR, Lynskey DM, Wadsley J, Appleyard SE, Anwar S, Miles E, et al. Radiotherapy Trial Set-up in the UK: Identifying Inefficiencies and Potential Solutions. Clinical Oncology (Royal College of Radiologists). 2020;32(4):266–75. doi: 10.1016/j.clon.2019.10.004 [DOI] [PubMed] [Google Scholar]
- 75.Xie C. How have researchers defined and used the concept of ‘efficiency’ in the context of trials? A review of existing literature and a proposed conceptual framework: OSF Preprints; 2023. [Google Scholar]
- 76.Operational Efficiency Working Group. Report of the Operational Efficiency Working Group of the Clinical Trials and Translational Research Advisory Committee. National Cancer Institute; 2010. [Google Scholar]
- 77.Hofseth LJ. Getting rigorous with scientific rigor. Carcinogenesis. 2018;39(1):21–5. doi: 10.1093/carcin/bgx085 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Efficiency (statistics). Wikipedia [Available from: https://en.wikipedia.org/wiki/Efficiency_(statistics)].
- 79.Efficiency. In: Medical Dictionary for the Health Professions and Nursing. 2012.
- 80.Department of Health and Social Care. The Future of Clinical Research Delivery: 2022 to 2025 implementation plan. 2022 [Available from: https://www.gov.uk/government/publications/the-future-of-uk-clinical-research-delivery-2022-to-2025-implementation-plan/the-future-of-clinical-research-delivery-2022-to-2025-implementation-plan].
- 81.U.S. Food and Drug Administration. ICH Harmonised Guideline: Good Clinical Practice (GCP) E6(R3). 2023.


