Abstract
Objectives
To evaluate the feasibility of conducting a clinically integrated randomised comparative effectiveness trial using digital clinical trial infrastructure within an electronic patient record (EPR).
Design
A mixed-methods, unblinded feasibility study of a digital clinical trial system, incorporating testing of two designs of electronic point-of-care randomisation prompt.
Setting
The study was conducted at University College London Hospitals NHS Trust between March and November 2022. The study used a real clinical research question for context, comparing liberal vs restrictive strategies for magnesium supplementation to prevent new-onset atrial fibrillation in critical care.
Participants
Adult patients undergoing elective, non-cardiac surgical procedures expecting postoperative admission to critical care were recruited.
Interventions
A digital trial system screened participants continuously against eligibility criteria. Participants were automatically randomised (1:1) to (1) one of two magnesium supplementation strategies and (2) one of two electronic randomisation prompt designs (nudge or preference).
Electronic point-of-care randomisation prompts were displayed to clinicians at regular intervals, inviting them to follow a randomised magnesium supplementation suggestion.
Main outcome measures
The primary outcome measure was a composite determination of study design feasibility (including recruitment, technical performance and concordance between the randomised suggestion and the observed clinician action).
Results
23 patients were recruited and 11 successfully randomised. The implemented digital systems for automated eligibility screening, randomisation, data collection and follow-up demonstrated technical feasibility. 47 electronic point-of-care randomisation prompts were successfully deployed across the 11 randomised patients. Clinician actions were concordant with randomised suggestions for 32 prompts (68%).
Technical and implementational barriers to delivering the electronic point-of-care randomisation prompts were identified. Patients were followed up to 30 days following discharge from hospital, with no serious adverse events attributable to participation identified.
There were insufficient data to determine quantitatively whether either prompt design was superior. Clinician feedback suggested the simplified (nudge) design had greater utility.
Conclusions
This study demonstrates that digitally embedding clinical trial infrastructure into a site-level EPR and integrating conduct into clinical care is safe and feasible. Future work will focus on improving and expanding the integrated digital trial design across multiple centres.
Trial registration number
NCT05149820
Keywords: Clinical Decision-Making, Perioperative Medicine, Evidence-Based Practice, Critical Care
WHAT IS ALREADY KNOWN ON THIS TOPIC
Routinely learning from variations in clinical decision-making is a key aim for modern Learning Health Systems. Current mechanisms for evidence generation are limited to observational and quality improvement methods.
WHAT THIS STUDY ADDS
This study demonstrates that digital trial infrastructure may be replicated within comprehensive electronic patient records. Prospective randomisation by clinicians, at the point of care, may be achieved safely, with minimal disruption to clinical care delivery.
HOW THIS STUDY AFFECTS RESEARCH, PRACTICE OR POLICY
By demonstrating the technical and implementational feasibility of integrating randomisation into clinical decision-making, this study paves the way for future work on definitive digital trials. This has the potential to improve study efficiency and expand the scope of addressable treatment questions.
Introduction
Evidence-based medicine is the accepted paradigm for clinical decision-making in modern healthcare systems.1 However, due to the financial cost and logistical difficulty of conducting randomised controlled trials, many routinely administered treatments continue to lack evidence of their effectiveness in practice.2 3
When clinicians make treatment decisions in this context, variation in treatment application may be observed. Some variation will represent appropriate personalisation of care to the individual, but in all cases, the impact of this variation often goes unstudied and unlearned from.4 The scale of this problem is significant. Braithwaite and colleagues have described this as the ‘60-30-10 challenge’—where around 60% of administered treatments conform to evidence, 30% are ineffective and 10% result in harm.5
The increasing availability and complexity of site-level electronic patient records (EPRs) has created an opportunity to learn about the comparative effectiveness of treatments using routine clinical decision-making. Digitally embedding clinical trial infrastructure within EPRs may allow clinical trials to be conducted in parallel to normal care delivery.6 This offers distinct advantages regarding efficiency and cost but also affords opportunities to broaden trial participation with respect to questions asked and participants recruited.
Integrating research conduct into clinical care delivery aligns with the principles of creating learning health systems (LHS). LHS use patient data collected through routine clinical care to derive new knowledge, which is then returned to clinical teams to improve care.7
Existing work on developing LHS has focused on obtaining high-quality data infrastructure. Knowledge generation mechanisms have been limited to quality improvement techniques and observational methods. However, generating high-quality evidence that gives clinicians the confidence to change practice requires systematic bias to be addressed through randomisation.
The clinically integrated randomised trial design, originally described by Vickers et al represents a possible approach to introducing randomised experiments into a LHS construct.8 The main barriers to digitally integrating research conduct within clinical care have been described previously.4 Key challenges are the technical development and deployment of these systems, and, importantly, understanding how patients should be appropriately consented for low or minimal risk comparative effectiveness trials.9
The PROSPECTOR feasibility study investigated a novel LHS design, which parallels the work by Vickers et al by integrating the trial within a clinical workflow, adopting a highly pragmatic design, and by using EPR data for all study outcomes.
The study builds on existing work in three ways. First, by modifying existing clinical decision support infrastructure to allow randomisation by clinicians at the point of clinical decision-making. Second, by investigating a system designed to be responsive to changes in patient state, to allow multiple opportunities for randomisation to occur (eg, prompted by regular blood testing or vital sign measurement), thus mirroring normal clinical decision-making. Third, two designs of electronic point-of-care randomisation (ePOCR) prompt are evaluated. A preference design, which emulates a partially randomised preference trial approach,10 is compared against a nudge design, based on the behavioural science principle of the same name.11 The latter design encourages clinicians to follow a randomised treatment suggestion where they share clinical equipoise with the trial, with anticipated advantages in the ease of use and reduced alert fatigue.
The study design is evaluated in the context of an example comparative effectiveness research (CER) question—liberal vs restrictive magnesium supplementation strategies for the prevention of new-onset atrial fibrillation in critical care. Magnesium supplementation is a common practice in critical care but lacks evidence of effectiveness outside the cardiac surgery population. Previous work has demonstrated variation in supplementation practices attributable to individual clinicians.12
Objectives
The primary objective was to assess the feasibility of conducting a future definitive clinically integrated randomised comparative effectiveness trial, using digital clinical trial infrastructure within an EPR.
Feasibility was judged by combining the technical success of each element of the digital trial system, the ability of the ePOCR prompts to effectively randomise treatment and the acceptability of the system design principles to clinicians.
The secondary objective was to compare the two designs of ePOCR prompt to determine which was more effective and least disruptive to clinical care.
Methods
Design
The full details of PROSPECTOR-critical care have been previously published in the study protocol paper.13 Here, the study findings are presented in accordance with the CONSORT extension for pilot and feasibility studies.14
This was a single-centre, mixed methods, feasibility study of a digital clinically integrated randomised trial design. The study evaluated the implementation of digital trial infrastructure in the context of an example CER question. In particular, two designs of ePOCR prompt were evaluated (online supplemental materials (Ai)).
There were several modifications to the protocol between publication and implementation. These are described in table 1.
Table 1. Modifications to the published study protocol13 between publication and implementation.

| Protocol element | Modification | Explanation |
|---|---|---|
| Automated patient screening | First-stage eligibility screening (elective surgery, postoperative critical care booking) was conducted manually. | |
| Link ePOCR prompt deployment to individual clinician | Prompts were tethered to individual participants and linked to specific clinical groups (eg, critical care nurses). | |
| Two-stage, linked prompt for preference design | Incorporation of all relevant information in a single-stage prompt. | |
| Study outcome—compliance with randomisation prompt suggested action | Compliance changed to concordance. | |
The study protocol also described a parallel programme of semi-structured interviews to assess the acceptability of the study design and explore the digital workflow for magnesium supplementation.
Ethics and governance
The study was approved by the Riverside Research Ethics Committee (Ref. 21/LO/0785) and sponsored by University College London. The study was registered at ClinicalTrials.gov (NCT 05149820) and adopted into the National Institute for Health Research Clinical Trials Portfolio.
The study was overseen by a Trial Management Group, who determined whether the final criteria for feasibility had been met at an interim data analysis in November 2022.
Study data was extracted from the EPR and stored in a dedicated server accessible by members of the study team. Study data always remained within the hospital’s IT infrastructure, and no individual participant data was removed.
Population
The study was conducted across four critical care units within the University College London Hospitals NHS Trust (UCLH). These critical care units care for a mix of elective surgical specialties including thoracic, colorectal and urological surgery, but excluding cardiac and neurosurgery. UCLH has used the Epic (Epic Systems Corporation, Verona, WI) EPR since 2019.
Patients over 18 years of age, attending hospital for elective surgery and who were scheduled for postoperative admission to critical care were invited to participate in the study. Patient recruitment was mediated through the clinical team. Patients were contacted by a member of the research team to gauge interest in participation. They received written trial information either via email or post and were reviewed in person to provide written consent prior to participation.
Following enrolment in the study, participants proceeded to undergo surgery as scheduled. If they were subsequently admitted to the critical care unit postoperatively, the digital trial system screened participants against a second set of eligibility criteria for each new serum magnesium laboratory result received (trial enrolment, trial group allocation, serum magnesium result, no previous ePOCR prompt for given result, provider and location context, age and pregnancy status). If these criteria were satisfied, the ePOCR prompt would be displayed to the clinician.
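The second-stage screening logic amounts to a predicate re-evaluated on every new serum magnesium result. The sketch below is illustrative only: the class and field names are assumptions, since the actual implementation used Epic's Best Practice Advisory rules rather than application code.

```python
from dataclasses import dataclass

@dataclass
class ScreeningContext:
    """Illustrative snapshot of EPR state when a new serum magnesium result files."""
    enrolled: bool            # trial enrolment recorded
    allocated: bool           # trial group allocation exists
    magnesium_mmol_l: float   # the new serum magnesium result
    already_prompted: bool    # an ePOCR prompt was already shown for this result
    in_critical_care: bool    # provider and location context
    age_years: int
    pregnant: bool

def eligible_for_prompt(ctx: ScreeningContext) -> bool:
    """Re-run for each new serum magnesium result; all criteria must pass.
    Prompts were also suppressed outside 0.5-1.5 mmol/L (see below)."""
    return (
        ctx.enrolled
        and ctx.allocated
        and not ctx.already_prompted
        and ctx.in_critical_care
        and ctx.age_years >= 18
        and not ctx.pregnant
        and 0.5 <= ctx.magnesium_mmol_l <= 1.5
    )
```

Because the check is stateless apart from the `already_prompted` flag, repeating it per result naturally yields the once-per-result prompting behaviour described above.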
Interventions
ePOCR prompts
Two designs of ePOCR prompts were evaluated in the study. Both were designed to allow clinicians to follow, or not, the allocated magnesium supplementation action (‘ADMINISTER’ or ‘NOT ADMINISTER’). The design was flexible, presenting randomisation as a suggestion rather than a mandated course of action. After reviewing the prompt, clinicians were able to determine if they shared equipoise with the trial question, for the individual participant, at that specific moment in time.
Both designs (online supplemental materials (Ai)) introduced basic information to the clinician: that the patient was participating in a trial of magnesium supplementation, the value of the latest available serum magnesium result (a primary driver of the decision to supplement magnesium) and a suggested action. In the case of the nudge design, this was a simple instruction advising either administration of magnesium or not. In the case of the preference design, the clinician was invited to select their preferred course of action. If ‘no preference’ was selected, then the randomised action could be followed.
Both designs were hypothesised to have utility within the clinically integrated randomised trial design. The nudge design is simple and easy to understand and interpret quickly. The preference design is more complex, but enables the capturing of clinician preferences at the time of randomisation, in keeping with a partially randomised preference trial design.10
Both prompts were displayed to the clinician via the same pathway. The entire process from screening to prompt deployment is illustrated in online supplemental materials (Aii) alongside a detailed technical description of the system in online supplemental materials (B). The prompt displaying to a clinician marked the start of an observation window during which supplemental magnesium could be administered, and the patient might experience the clinical outcome of interest (new-onset atrial fibrillation). A new window was created following the receipt of a new serum magnesium result (mirroring the clinical decision-making process). With each new laboratory result received, the system repeated the eligibility screening process and activated the ePOCR prompt again. The process was repeated for the duration of the participant’s critical care admission, and the system was deactivated on discharge to the ward.
Magnesium supplementation strategies
The study used a real CER question to test the digital trial infrastructure and ePOCR components. Participants were randomly assigned to one of two magnesium supplementation strategies:
Liberal supplementation: supplement magnesium if serum magnesium <1.0 mmol/L.
Restrictive supplementation: supplement magnesium if serum magnesium <0.75 mmol/L.
These treatment arms were derived from a previous analysis of existing variation in practice for the study centre.12 The ePOCR prompts were suppressed for serum magnesium results of <0.5 mmol/L and >1.5 mmol/L, under the hypothesis that clinicians would lack equipoise to randomise at these extremes.
In keeping with a pragmatic trial design, the study made no alteration to the mechanism of magnesium supplementation. An ‘as required’ prescription is available for postoperative patients routinely as part of the critical care admission order set. Decisions regarding the frequency, route and dose of magnesium supplementation remained at the discretion of the clinical teams but were recorded in the EPR.
Co-interventions
During the study period, clinicians were invited to participate in a parallel programme of semi-structured interviews to assess their views on the study design and gain further information on the digital and clinical workflows for the study question. This doubled as an educational programme, as clinicians were able to observe the prompts in a simulated environment.
Participants were automatically randomised by the digital system in two stages. Participants were first randomised to either the nudge or preference ePOCR prompt design (1:1). Following ePOCR prompt allocation, they were then randomised to a liberal or restrictive magnesium supplementation strategy (1:1). Allocations remained consistent for each participant throughout the study period. The research team was blinded to prompt design and magnesium strategy allocation until analysis following deployment. Clinicians were unblinded to both randomisations, although the magnesium action displayed did not reveal the allocated strategy directly.
The randomisation and eligibility checking system was constructed using Epic Systems ‘Best Practice Advisory’ architecture. This enabled the required elements to be combined using a series of logical statements to facilitate all the required allocation permutations.
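The two-stage allocation is logically equivalent to two independent 1:1 draws. The sketch below is an illustration of that logic only; in the study it was realised through Best Practice Advisory logical statements, and the function name is an assumption.

```python
import secrets

def randomise_participant() -> dict:
    """Two sequential, independent 1:1 allocations, fixed for the
    participant for the remainder of the study period."""
    return {
        "prompt_design": secrets.choice(["nudge", "preference"]),
        "magnesium_strategy": secrets.choice(["liberal", "restrictive"]),
    }
```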
Prompts were displayed to clinicians, following eligibility screening, at the next login to the participant's electronic record.
The ePOCR prompts were designed to present the randomised action to the clinician (administer or not administer magnesium). This was based on the assigned magnesium strategy, in the context of the current serum magnesium laboratory result.
Outcome measures
Feasibility was judged by the Trial Management Group across four domains: (1) performance of the technical components of the digital trial system (including screening, eligibility criteria emulation, ePOCR and data collection), (2) safety, with respect to technical deployment into live clinical systems and monitoring for Serious Adverse Events, (3) qualitative feedback from clinicians and (4) implementation aspects of trial conduct, for example, recruitment.
To assess the effectiveness of the ePOCR prompts at delivering successful treatment randomisation, each prompt was associated with an observed action from the clinician within the defined observation window. Concordance was also evaluated across each magnesium supplementation arm. Concordance was defined as either:
The appropriate supplementation (‘ADMINISTER’) of magnesium following prompt deployment, where the measured serum magnesium is less than the randomised threshold
The appropriate withholding (‘NOT ADMINISTER’) of supplemental magnesium following prompt deployment, where the serum magnesium is greater than the randomised threshold.
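This definition collapses to a single comparison between the observed action and the threshold-implied action. A sketch with assumed parameter names:

```python
def concordant(strategy_threshold: float, magnesium_mmol_l: float,
               magnesium_given: bool) -> bool:
    """A clinician action is concordant when supplementation (or withholding)
    matches the randomised threshold for the measured serum magnesium."""
    should_administer = magnesium_mmol_l < strategy_threshold
    return magnesium_given == should_administer
```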
Participant demographics, magnesium supplementation and heart rhythm data were extracted from the EPR, alongside specific safety data as described in the study protocol.
Analysis
The study protocol called for a between-group comparison of prompt effectiveness (based on the proportion of concordant actions) using a χ2 test. Owing to the small number of prompts seen in each prompt/magnesium group, formal statistical testing was not carried out. Descriptive data are presented alongside illustrative examples of summarised data for individual study participants.
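For illustration only: applying the planned χ2 statistic to the observed 2×2 concordance table (nudge 20 concordant/13 discordant; preference 12/2, the latter derived from the reported 86% of 14 prompts) yields an expected cell count below 5, consistent with the decision to forgo formal testing. A self-contained sketch:

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic and expected counts for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row, col = (a + b, c + d), (a + c, b + d)
    expected = [[row[i] * col[j] / n for j in range(2)] for i in range(2)]
    obs = [[a, b], [c, d]]
    stat = sum((obs[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
    return stat, expected

# Rows: prompt design (nudge, preference); columns: concordant, discordant.
stat, expected = chi_squared_2x2([(20, 13), (12, 2)])
# The smallest expected count (preference/discordant) falls below the
# conventional minimum of 5 for a valid chi-squared test.
```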
Patient and public involvement
The study design was informed by two focus groups comprised of patients and members of the public recruited by the UCL Biomedical Research Centre Patient and Public Involvement and Engagement team. These sessions addressed themes including utilisation of routinely collected electronic patient data for research and the ePOCR system described. The study results will be disseminated via the University College London and UCLH websites and used to inform further patient and public engagement activities for future work in this area.
Results
Between March and November 2022, 206 patients were screened across the four critical care units. 173 were excluded, the majority (136, 79%) because site staff were unable to obtain written consent before the surgical procedure. The post-pandemic reduction in the number of face-to-face consultations required before surgery and the increased role of telephone pre-assessment, coupled with often short intervals between booking and surgery, limited the opportunities to meet patients.
Following initial contact and the provision of written study information, 33 patients were approached for consent. Of these, four were unable to be contacted for written consent following initial verbal agreement via telephone, five declined to participate and one patient had their operation cancelled before providing consent.
Figure 1 illustrates participant flow through the study, including reasons for exclusion and final participant numbers in each group. Of the 23 participants who agreed to participate in the study, 11 went on to be randomised. The main reason for not being randomised was not being admitted to the critical care unit postoperatively (seven participants, 58%). This was the result of the clinical decision that critical care admission was no longer required after completion of surgery. The digital trial eligibility screening system prevented one participant from being randomised on the basis of a positive pregnancy test.
Figure 1. Screening and recruitment of study participants.
A planned interim analysis was conducted 9 months after the start of the study. The Trial Management Group reviewed unblinded data for all randomised participants, against the described feasibility criteria. While the total number of recruited participants was below the target set out in the study protocol, it was felt that sufficient data had been collected to assess feasibility outcomes and that further participant recruitment would be unlikely to contribute further meaningful data with respect to addressing the secondary objective (quantitative comparison of nudge vs preference design), given the technical limitations identified in table 1.
Participant characteristics for those randomised are shown in table 2.
Table 2. Characteristics of randomised study participants (n=11).
| Characteristic | Summary |
|---|---|
| Age, years (mean, SD) | 58 (14) |
| Sex, female (n, %) | 6 (55) |
| Ethnicity (n, %): | |
| Black Caribbean | 1 (9) |
| Other Asian background | 2 (18) |
| White British | 5 (45) |
| Not stated/unknown | 3 (27) |
| Surgical specialty (per procedure, n, %): | |
| Colorectal | 1 (5) |
| Gynaecology | 1 (5) |
| Thoracic | 4 (20) |
| Upper gastrointestinal | 3 (15) |
| Urology | 11 (55) |
| Operative status (per procedure, n, %): | |
| Elective | 14 (70) |
| Immediate | 1 (5) |
| Non-elective | 1 (5) |
| Urgent | 4 (20) |
| Procedure length (hours, mean, SD) | 4.3 (2.5) |
| Critical care length of stay (days, mean, SD) | 4.6 (4.7) |
| Admission serum magnesium (mmol/L, mean, SD) | 0.93 (0.13) |
| Admission serum potassium (mmol/L, mean, SD) | 4.16 (0.63) |
The mean serum magnesium on admission to the critical care unit was 0.93 mmol/L. 54% (6/11) of participants received supplemental magnesium during the study period. When supplementation was administered and a post-supplementation serum magnesium was measured, the serum magnesium increased in 81% of cases (13/16). The mean increase was 0.22 mmol/L (range 0.03–0.57 mmol/L).
Withholding supplementation (regardless of randomised suggestion) resulted in a lower serum magnesium concentration in 76% of cases where a subsequent serum magnesium result was available (16/21), with a mean change of −0.22 mmol/L (range −0.56 to 0.04 mmol/L).
47 ePOCR prompts were deployed for the 11 participants: 33 of the nudge design and 14 of the preference design. Supplementary Material (C) shows the breakdown of prompts by type across the liberal and restrictive magnesium supplementation arms and across observed clinician action. All prompts deployed correctly, once per new serum magnesium result, as intended. The mean time between receipt of a serum magnesium result and prompt display was 86 min (SD 142 min). 85% of prompts were reviewed within 2 hours, and 55% within 1 hour, of the serum magnesium result registering on the EPR.
Figure 2 illustrates an example of a study participant’s data, including observed change in serum magnesium concentration across the duration of their critical care admission, the hard limits imposed on prompt deployment and the randomised threshold at which magnesium supplementation was suggested. ePOCR prompts deployed are shown underneath, with resultant clinician action highlighted as green or red (concordant or not) for the observation period.
Figure 2. Example of measured serum magnesium values relative to nudge prompt deployment and magnesium administration for participant 12, randomised to restrictive magnesium supplementation threshold.
Of the 47 ePOCR prompts deployed, the observed clinician action was concordant with the randomised suggestion in 61% (20/33) of nudge prompts and 86% of preference prompts, giving an overall concordance of 68% (32/47). Rates of concordance per group are summarised in figure 3.
Figure 3. Concordance with electronic point-of-care randomisation prompt recommended action, by prompt design and magnesium supplementation arm.
Where possible, feedback was sought from critical care nurses following prompt deployment, either through an in-person debrief or via the EPR instant messaging facility. This is presented in online supplemental materials (D).
Of the 11 participants randomised, one experienced new-onset atrial fibrillation (figure 2). This participant was randomised to the nudge:restrictive study arm and had generated seven ePOCR prompts over a 6-day critical care admission. Throughout all the observation windows, the serum magnesium remained >0.75 mmol/L, resulting in prompts suggesting no magnesium supplementation. The participant went on to receive treatment with beta blockade and anticoagulation but did not require haemodynamic support or cardioversion.
Participants were monitored for predetermined serious adverse events and followed up 30 days after discharge from hospital. 14 events were identified, none were judged attributable to study participation; these are described in online supplemental materials (E).
21 clinicians were approached to undertake a semi-structured interview during the study. No one declined to participate. The final cohort consisted of 13 nurses, seven doctors and one nursing assistant. Five themes were addressed within the interview. Online supplemental materials (F) report the findings of all the themes in detail.
Two themes evaluated clinicians' attitudes towards the Epic EPR system, including computerised clinical decision support systems, and sought feedback on the study and electronic randomisation prompt designs. Overall, participants had positive views about Epic, citing its comprehensive nature and features that improved communication, such as instant messaging.
The majority of participants had negative views on Best Practice Advisories. Recurring themes were the inappropriate timing of alert presentations (out of context with workflows), irrelevance or difficulty navigating prompts. Alert fatigue was mentioned, with one participant describing building up a ‘tolerance’ to prompts and ultimately ignoring them. Many participants described instantly dismissing alerts without reading them:
…are they the ones which come up in the middle of the screen? Yeah, we just exit them. Yeah, I don’t read them. [Participant 9, nurse]
…sometimes when we file data, they will say, ‘oh, you have a pending nursing care plan to do’, like, it will pop up. But usually we do it on the notes already. So we just disregard that nursing care plan. And we have our own notes in the ICU. [Participant 14, nurse]
Attitudes towards the study design and ePOCR prompts were mixed. Several participants highlighted difficulty in communicating and understanding the premises underpinning the study—particularly the ability to choose to randomise treatment if the clinician was uncertain about the optimal treatment decision. This was particularly the case for more junior clinicians (less than 5 years of clinical experience), who struggled to differentiate a randomised suggestion to consider from an instruction to carry out an action. Others expressed positive views that the study design would allow greater involvement in research conduct and in driving improvements in care.
The study is quite good and will help nurses [Participant 7, nurse]
It’s amazing. Yeah, I think it’s cool…and it’s simple, not time consuming. [Participant 14, nurse]
Discussion
Learning Health Systems represent a potential mechanism for efficiently integrating the conduct of important research into clinical care, with the intention of widening the scope of addressable questions, particularly pertaining to routine treatment effectiveness. To date, experimental designs for evidence generation have not featured prominently within LHS, in part due to challenges surrounding their technical implementation.
This feasibility study sought to address this challenge by evaluating the digital implementation of a clinically integrated randomised trial, designed to allow clinicians to randomise pre-specified, evidence-light treatments for future learning.
Technical feasibility
The primary study outcome measure was a determination of whether this design would be feasible for a future definitive trial, adequately powered to answer a clinical question. A further consideration is how well the design extends beyond the example question of magnesium supplementation used here to other treatment questions.
This was the first digitally embedded clinical trial conducted at UCLH. It successfully implemented a system of ePOCR within a clinical workflow. The system modified existing clinical decision support functionality, reproducible across other sites using the same EPR and potentially extendable to other comprehensive EPRs. Building a reproducible infrastructure that could be easily translated across multiple sites could enable the conduct of multicentre studies in the future, improving trial efficiency.
Multiple eligibility criteria were built into an automated screening process, conducted repeatedly for each new serum magnesium result prior to prompt deployment. This allows multiple treatment administrations to be eligible for randomisation, potentially improving study efficiency for frequently occurring questions. It also adds safety by acknowledging changes in patient state throughout trial participation. This is crucial in a system which relies heavily on remote monitoring by research teams.
The technical application of the randomisation system was successful, providing clinicians with randomised actions reflecting changing patient states and randomised treatment strategies. While this element was complicated to build, the decrease in friction experienced by the clinician viewing the prompt was important. Without predetermining required action (supplementation or not), clinicians would have to calculate whether to supplement given the serum magnesium result and the randomised allocation (liberal or restrictive) threshold. Finally, all participant data pertaining to magnesium supplementation, clinical and safety outcomes was extracted from the EPR.
There were three key technical limitations which affected the study design. In most cases, this was a result of hard limits on functionality present within the EPR. First, the inability to fully integrate the ePOCR prompt into the clinical workflow at a granular level. Prompt display was limited to being triggered by user log in, rather than task-specific triggers such as blood test result review. Distancing the alert from the clinical decision increases the probability that the clinician will dismiss the alert or forget the intended message. While the time between new serum magnesium results being received and the prompt firing does not represent adequate integration within a workflow, it was nonetheless reassuring to see the majority of alerts being reviewed within 1 to 2 hours of a new result registering.
The second technical limitation was the inability to link prompt deployment to an individual clinician. As a result, many alerts displayed to the wrong clinician (eg, a critical care nurse not directly assigned to the participant's care). This will have reduced the effectiveness of the prompt, especially as it was designed to display only once per new serum magnesium result, to reduce alert fatigue. The third was that it was not possible to manually re-trigger the prompt, which would have afforded the bedside nurse the opportunity to review the recommended action on multiple occasions at a convenient time.
Comparable studies have used different approaches to achieving randomisation. The clinically integrated randomised trials conducted by Ehdaie et al, Spaliviero et al and Touijer et al all used cluster randomisation methods facilitated through a biostatistics department or clinical trials unit.15–17 D’Avolio et al constructed a bespoke randomisation system using software incorporated into the Veterans Affairs EPR, an open-source initiative co-developed by clinicians, statisticians and informaticians, in contrast to the proprietary EPR used in this study.18
To achieve more complex randomisation (including support of adaptive randomisation), an approach similar to that used by Simon et al holds promise. The authors used a multilayered approach, combining front-end EPR functionality with an external data warehouse and randomisation delivered through custom-built separate applications.19
Trial implementation
Recruitment of participants proved challenging. As it was not possible to automate participant screening, this was done manually by a member of the research team. Automated screening against complex eligibility criteria using machine learning has been shown to be feasible in other settings.20
The model of written informed consent used required multiple patient contacts, including an in-person meeting with a member of the research team, and proved time-intensive. The majority of recruitment exclusions (136/173 participants screened, 79%) were due to lack of patient and researcher availability for obtaining consent prior to surgery. Furthermore, there was significant attrition of participants who could not be randomised because they bypassed critical care postoperatively. While it is standard practice for patients to be assessed at the end of their procedure for suitability for ward-level care, this attrition represents a significant waste of research resources.
The solution to both problems could lie in changing the model of consent used. Further research is required to establish whether alternative lightweight consent approaches for low-risk research questions are acceptable. Our centre has recently undertaken a similar study using clinician-mediated randomisation under a waiver of consent supplemented by provision of a participant information leaflet.21 Alternatives such as opt-out consent frameworks and verbal point-of-care consent may also be acceptable.22
Once patients were successfully contacted about the study, a high proportion (23/28, 82%) were happy to participate, suggesting the study premise was generally acceptable.
The results demonstrate that the study design was safe for the example question of magnesium supplementation: on review, no serious adverse events were attributable to study participation.
Flexible randomisation
While the small study numbers precluded a direct comparison of ePOCR designs, both successfully generated randomisation events within the study design. Although the limitations described mean clinician compliance with the prompt cannot be assessed directly, the degree of concordance observed suggests that the approach is feasible.
One of the clear limitations with this approach is the degree of treatment contamination inherent to the design, with expected high levels of non-compliance from clinicians declining randomisation in favour of their own preferences. This is countered by the potential for rapid accrual of large sample sizes within these types of trial, through high frequency of treatment decisions (ie, at least one serum magnesium result and supplementation decision per day, per patient in critical care). UCLH critical care units cared for 848 surgical patients during the study period. Assuming a high level of participation (acknowledging the difficulties with recruitment and consent already discussed), the trial could rapidly accrue data. Other clinically integrated trials using similar methodology have demonstrated efficient accrual within single centres.23
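A back-of-envelope calculation illustrates this accrual potential. The participation rate and mean length of critical care stay used below are assumptions for illustration, not study data; only the 848-patient figure comes from the study period.

```python
# Back-of-envelope accrual sketch using the 848 surgical patients cared for
# in UCLH critical care during the study period. Participation rate and
# mean critical care stay are illustrative assumptions only.

def expected_randomisation_events(n_patients: int, participation_rate: float,
                                  mean_stay_days: float,
                                  decisions_per_day: float = 1.0) -> int:
    """Each participating patient contributes at least one serum magnesium
    result and supplementation decision per critical care day."""
    return int(n_patients * participation_rate * mean_stay_days * decisions_per_day)
```

Under these assumptions, even modest participation yields randomisation events in the thousands per year at a single centre, which is the efficiency argument made above.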
The qualitative study elements also raised concerns regarding the complex nature of the ideas underpinning the study design. Evaluating equipoise around treatment administration is a learnt skill, and unless this is fostered within a supportive environment (akin to the learning culture espoused by Friedman7), it may be difficult for more inexperienced team members to effectively engage with this study design.
Acceptability and risk
The study was designed around a low-risk treatment question, chosen to align with the existing LHS literature, to minimise disruption to clinical care delivery, to add no additional burden to patients through participation and to allow safe evaluation of the implementation of the digital trial system.
The design of the digital trial system itself was risk averse. By nudging rather than mandating randomisation, participants always received the clinician’s preferred treatment choice. Additionally, a system that provides continuous screening against eligibility criteria can quickly suppress nudges if required, for example, if a participant develops an allergy to the magnesium preparation during the study. When implementing automated systems intended to seamlessly screen, randomise and nudge treatment actions with minimal human input, it is essential that they are rigorously tested prior to live deployment: they are potentially high-risk interventions because they use and alter information within live clinical systems. As such, in addition to extensive sandbox testing and a period of silent running (prompts active but not displayed to clinicians), we would advocate a phased approach to live deployment with regular manual checks that the system is performing as intended. Rapid feedback from clinicians on the ground to EHR teams and study coordinators also helped to detect and troubleshoot issues quickly during deployment.
Despite an increasing body of literature, it remains unclear whether stakeholders (patients, families, ethics committees) find alternative approaches to consent acceptable for low- or minimal-risk treatment questions. Further test cases are required. The recently completed THIRST study was a digital, clinically integrated randomised trial conducted under a waiver of consent owing to its low-risk status.21
Conclusion
In conclusion, a digitally embedded, clinically integrated randomised trial approach proved technically feasible for studying a commonly encountered, low-risk, evidence-light treatment in critical care. However, there were significant limitations to implementing the design that require further improvement and evaluation prior to use in a definitive trial, particularly better methods for integrating randomisation into clinical workflows.
Additional barriers to widespread implementation are the current mechanisms for recruitment and consent for these types of study. Work is ongoing to evaluate patient and public views on modified consent mechanisms for low or minimal risk clinical trials.
Notwithstanding the challenges, the arguments for continuing to innovate and improve digital clinical trial designs are powerful. By reducing costs and broadening participation, a greater number of research questions may be answered, and clinicians will gain ready access to evidence to guide clinical decision-making and improve patient care.
Supplementary material
Acknowledgements
The authors would like to thank the Clinical Research Informatics Unit and the Joint Research Office at University College London for supporting the study design and implementation. The authors also thank Dr Dan Stein, Dr Conor Foley and Dr Sigrún Clarke for their assistance with the qualitative aspects of the study.
Footnotes
Funding: This study is supported by the NIHR Central London Patient Safety Research Collaboration (NIHR 204297) and the NIHR University College London Hospitals Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
Provenance and peer review: Not commissioned; externally peer reviewed.
Patient consent for publication: Not applicable.
Ethics approval: This study involves human participants and was approved by the Riverside NHS Research Ethics Committee (reference 21/L0/0785). Participants gave informed consent to participate in the study before taking part.
Data availability free text: Data pertaining to the design and implementation of the interventions described in the manuscript can be made available on request.
Patient and public involvement: Patients and/or the public were involved in the design, conduct, reporting or dissemination plans of this research. Refer to the Methods section for further details.
Data availability statement
Data are available upon reasonable request.
References
- 1. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312:71–2. doi: 10.1136/bmj.312.7023.71.
- 2. Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. BMJ. 1998;317:273–6. doi: 10.1136/bmj.317.7153.273.
- 3. Vickers AJ. Clinical trials in crisis: Four simple methodologic fixes. Clin Trials. 2014;11:615–21. doi: 10.1177/1740774514553681.
- 4. Wilson MG, Palmer E, Asselbergs FW, et al. Integrated rapid-cycle comparative effectiveness trials using flexible point of care randomisation in electronic health record systems. J Biomed Inform. 2023;137:104273. doi: 10.1016/j.jbi.2022.104273.
- 5. Braithwaite J, Glasziou P, Westbrook J. The three numbers you need to know about healthcare: the 60-30-10 Challenge. BMC Med. 2020;18:102. doi: 10.1186/s12916-020-01563-4.
- 6. Harris S, Bonnici T, Keen T, et al. Clinical deployment environments: Five pillars of translational machine learning for health. Front Digit Health. 2022;4:939292. doi: 10.3389/fdgth.2022.939292.
- 7. Friedman CP, Allee NJ, Delaney BC, et al. The science of Learning Health Systems: Foundations for a new journal. Learn Health Syst. 2017;1:e10020. doi: 10.1002/lrh2.10020.
- 8. Vickers AJ, Scardino PT. The clinically-integrated randomized trial: proposed novel method for conducting large trials at low cost. Trials. 2009;10:14. doi: 10.1186/1745-6215-10-14.
- 9. Faden RR, Beauchamp TL, Kass NE. Informed consent, comparative effectiveness, and learning health care. N Engl J Med. 2014;370:766–8. doi: 10.1056/NEJMhle1313674.
- 10. Wasmann KA, Wijsman P, van Dieren S, et al. Partially randomised patient preference trials as an alternative design to randomised controlled trials: systematic review and meta-analyses. BMJ Open. 2019;9:e031151. doi: 10.1136/bmjopen-2019-031151.
- 11. Thaler RH, Sunstein CR. Nudge. Penguin Books; 2009.
- 12. Wilson MG, Rashan A, Klapaukh R, et al. Clinician preference instrumental variable analysis of the effectiveness of magnesium supplementation for atrial fibrillation prophylaxis in critical care. Sci Rep. 2022;12:17433. doi: 10.1038/s41598-022-21286-1.
- 13. Wilson MG, Asselbergs FW, Miguel R, et al. Embedded point of care randomisation for evaluating comparative effectiveness questions: PROSPECTOR-critical care feasibility study protocol. BMJ Open. 2022;12:e059995. doi: 10.1136/bmjopen-2021-059995.
- 14. Eldridge SM, Chan CL, Campbell MJ, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. BMJ. 2016;355:i5239. doi: 10.1136/bmj.i5239.
- 15. Ehdaie B, Jibara G, Sjoberg DD, et al. The Duration of Antibiotics Prophylaxis at the Time of Catheter Removal after Radical Prostatectomy: Clinically Integrated, Cluster, Randomized Trial. J Urol. 2021;206:662–8. doi: 10.1097/JU.0000000000001845.
- 16. Spaliviero M, Power NE, Murray KS, et al. Intravenous Mannitol Versus Placebo During Partial Nephrectomy in Patients with Normal Kidney Function: A Double-blind, Clinically-integrated, Randomized Trial. Eur Urol. 2018;73:53–9. doi: 10.1016/j.eururo.2017.07.038.
- 17. Touijer KA, Sjoberg DD, Benfante N, et al. Limited versus Extended Pelvic Lymph Node Dissection for Prostate Cancer: A Randomized Clinical Trial. Eur Urol Oncol. 2021;4:532–9. doi: 10.1016/j.euo.2021.03.006.
- 18. D’Avolio L, Ferguson R, Goryachev S, et al. Implementation of the Department of Veterans Affairs’ first point-of-care clinical trial. J Am Med Inform Assoc. 2018;7. doi: 10.1136/amiajnl-2011-000623.
- 19. Simon KC, Tideman S, Hillman L, et al. Design and implementation of pragmatic clinical trials using the electronic medical record and an adaptive design. JAMIA Open. 2018;1:99–106. doi: 10.1093/jamiaopen/ooy017.
- 20. Tissot HC, Shah AD, Brealey D, et al. Natural Language Processing for Mimicking Clinical Trial Recruitment in Critical Care: A Semi-Automated Simulation Based on the LeoPARDS Trial. IEEE J Biomed Health Inform. 2020;24:2950–9. doi: 10.1109/JBHI.2020.2977925.
- 21. Chen Y, Shah A, Jani Y, et al. Rationale and design of the THIRST Alert feasibility study: a pragmatic, single-centre, parallel-group randomised controlled trial of an interruptive alert for oral fluid restriction in patients treated with intravenous furosemide. BMJ Open. 2024;14:e080410. doi: 10.1136/bmjopen-2023-080410.
- 22. Whicher D, Kass N, Faden R. Stakeholders’ Views of Alternatives to Prospective Informed Consent for Minimal-Risk Pragmatic Comparative Effectiveness Trials. J Law Med Ethics. 2015;43:397–409. doi: 10.1111/jlme.12256.
- 23. Tokita HK, Assel M, Serafin J, et al. Optimizing accrual to a large-scale, clinically integrated randomized trial in anesthesiology: A 2-year analysis of recruitment. Clin Trials. 2025;22:57–65. doi: 10.1177/17407745241255087.