Abstract
Objective
HL7 SMART on FHIR apps have the potential to improve healthcare delivery and EHR usability, but providers must be aware of the apps and use them for these potential benefits to be realized. The HL7 CDS Hooks standard was developed in part for this purpose. The objective of this study was to determine if contextually relevant CDS Hooks prompts can increase utilization of a SMART on FHIR medical reference app (MDCalc for EHR).
Materials and Methods
We conducted a 7-month, provider-randomized trial with 70 providers in a single emergency department. The intervention was a collection of CDS Hooks prompts suggesting the use of 6 medical calculators in a SMART on FHIR medical reference app. The primary outcome was the percentage of provider–patient interactions in which the app was used to view a recommended calculator. Secondary outcomes were app usage stratified by individual calculators.
Results
Intervention group providers viewed a study calculator in the app in 6.0% of interactions compared to 2.6% in the control group (odds ratio = 2.45, 95% CI, 1.2–5.2, P value .02), an increase of 130%. App use was significantly greater for 2 of 6 calculators.
Discussion and Conclusion
Contextually relevant CDS Hooks prompts led to a significant increase in SMART on FHIR app utilization. This demonstrates the potential of using CDS Hooks to guide appropriate use of SMART on FHIR apps and was a primary motivation for the development of the standard. Future research may evaluate potential impacts on clinical care decisions and outcomes.
Keywords: SMART on FHIR, CDS Hooks, clinical decision support, electronic health records, standards
BACKGROUND AND SIGNIFICANCE
The Health Level Seven International (HL7) Substitutable Medical Applications, Reusable Technologies (SMART) on Fast Healthcare Interoperability Resources (FHIR) standard provides a standard method for launching external applications from an electronic health record (EHR) system.1 SMART on FHIR has seen a rapid rise in adoption,2 and the development of an extensive EHR app-based ecosystem appears to be on the horizon.
The proliferation of SMART on FHIR apps creates a need for suggesting relevant apps to end-users. One reason is that clinical decision support (CDS) is generally more effective when it is presented at the time and location of decision-making and within the clinician’s workflow.3,4 Thus, suggesting the use of add-on apps in the right clinical contexts could improve their use and impact. Another is that the proliferation of SMART on FHIR apps could result in app fatigue, where there are too many available apps for the end-user to choose from, leading to the apps being largely ignored.5 This is akin to the alert fatigue that has increasingly afflicted CDS alerts.6 Recommending a relevant SMART on FHIR app in real time and within the right clinical context could help address both of these issues.
Like SMART on FHIR, CDS Hooks is a FHIR-based HL7 standard for delivering CDS, and a primary motivation behind its development was to recommend relevant SMART on FHIR apps to end-users.7 In CDS Hooks, the CDS service is triggered by a “hook” and the CDS logic is executed using data exchanged via the FHIR standard. A CDS card is then returned to the EHR, where it provides care guidance by displaying a combination of text, alternative suggestions and/or links to apps or reference material to the end-user.8
CDS Hooks is a promising mechanism for recommending SMART on FHIR apps for several reasons. First, like SMART on FHIR, CDS Hooks uses FHIR as its data model and exchange standard. This reduces the burden of development, since the same logic can be shared between the CDS Hooks service and the app. Second, CDS Hooks services, like SMART on FHIR apps, can be shared across EHR platforms and health systems due to their interoperable, EHR-agnostic nature. Thus, CDS prompting logic that is too time consuming to reimplement at each health system could potentially be broadly shared via CDS Hooks services, just as the associated SMART on FHIR apps can be shared in this way. Third, the use of an external CDS engine by CDS Hooks services can enable the implementation of more complex decision logic than can be implemented using native EHR CDS approaches. Finally, CDS Hooks includes a feature to provide a hyperlink to a SMART on FHIR app (“app link card”), allowing the user to launch the app from the CDS Hooks prompt with a single click.
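To make the "app link card" mechanism concrete, the sketch below builds a CDS Hooks card response of the kind a service returns to the EHR. The card attributes (summary, indicator, source, links) follow the CDS Hooks specification; the summary text and launch URL are hypothetical examples, not the study's actual prompt content.

```python
# Sketch of a CDS Hooks service response containing an "app link card".
# A link with type "smart" tells the EHR to launch the referenced
# SMART on FHIR app when the user clicks it. Text and URL are hypothetical.
import json

card_response = {
    "cards": [
        {
            "summary": "Consider the HEART Score for this chest pain patient",
            "indicator": "info",  # noninterruptive, informational guidance
            "source": {"label": "Example CDS Service"},
            "links": [
                {
                    "label": "Open MDCalc for EHR",
                    "url": "https://example.org/mdcalc-smart/launch",  # hypothetical
                    "type": "smart",  # EHR performs a SMART on FHIR launch
                }
            ],
        }
    ]
}

print(json.dumps(card_response, indent=2))
```

A service returns this JSON in response to a hook invocation; the EHR is responsible for rendering the card and handling the SMART launch.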
Several studies have described how CDS Hooks can be integrated into an EHR to provide clinical care recommendations.9–13 In the majority of these studies, the CDS Hooks deployments were limited to development platforms.9–11 Two studies described the use of CDS Hooks-based CDS systems in production environments. Semenov et al12 described a CDS platform implemented in a production environment, but they did not evaluate the efficacy of the system. Rubin et al deployed a CDS Hooks-based CDS system in a production environment to recommend HIV screening in patients where the HIV status was unknown. They performed a before-and-after study and found an action was taken by providers on 1% of the CDS Hooks reminders.13
To our knowledge, there have been no randomized controlled trials (RCTs) studying the use of CDS Hooks to provide clinical care recommendations. In addition, we are not aware of any study evaluating the use of CDS Hooks to suggest a SMART on FHIR app. We believe this is a unique and important use case for CDS Hooks, because opening and using an application can take substantially more clicks and effort than other responses to EHR prompts, such as ordering HIV screening laboratory tests with a single click. Thus, we believe that identifying how best to facilitate the appropriate use of such apps is of specific importance, especially given the amount of time and resources required to develop these applications.
In this study, we conceived and conducted the first RCT to formally evaluate the use of CDS Hooks to provide clinical care recommendations. In addition, this represents the first formal evaluation of using CDS Hooks to suggest a SMART on FHIR app. We hypothesized that contextually relevant CDS Hooks prompts would increase appropriate use of a SMART on FHIR app.
OBJECTIVE
The objective of this study was to determine if contextually relevant CDS Hooks prompts can increase utilization of 6 medical calculators in a SMART on FHIR medical reference app (MDCalc for EHR). There were additional clinical objectives related to the CDS system, but these were outside the scope of this publication.
MATERIALS AND METHODS
Study design
This study was a cluster-randomized clinical trial involving resident physicians and advanced practice providers (APPs). We deployed the CDS Hooks service on February 17, 2021 and discontinued the service on November 18, 2021 because study enrollment goals were met. Since first-year and non-emergency medicine (“off-service”) resident physicians see on average fewer patients per shift,14 we randomized them separately. Randomization was performed twice, at the beginning of the study in February 2021 and in June 2021 when a new group of resident physicians was on-boarded at the institution. A provider-randomized design was chosen due to patient safety concerns with randomizing by patient, since in a patient-randomized design a provider would see the prompts for some patients who meet criteria but not others.
This study was approved by the University of Utah Institutional Review Board (IRB 00137874) and was registered on ClinicalTrials.gov (Identifier: NCT04702308).
Setting
This study was conducted at the University of Utah Health (UUH) Emergency Department (ED). UUH is an academic medical center in Salt Lake City, UT. The UUH ED has an annual volume of approximately 50 000 patients.
Study participants
Study participants included all emergency medicine (EM) resident physicians, any non-EM resident physicians who rotated through the ED during the study period and all full-time EM APPs working at the study site. Attending physicians were excluded from the study to avoid randomizing members of the same attending-resident team to different study groups.
Six study calculators and MDCalc for EHR app
We selected 6 clinical decision support rules (“study calculators”) from the calculators available on the MDCalc for EHR SMART on FHIR app. MDCalc is a popular CDS tool created by MD Aware. It contains hundreds of medical calculators and is used by more than 65% of US physicians every month.15 Previously, the MDCalc tool was only accessible via the Web or a smartphone app. The University of Utah partnered with MD Aware to create a SMART on FHIR version that is integrated in the EHR. The EHR-integrated app auto-fills calculator fields with relevant data from the EHR (Figure 1). The MDCalc for EHR app (previously called MDCalc Connect) was deployed at the study site in September 2018. It is available to all end-users at the study site and is pinned to the top of the toolbar for all providers (physicians and APPs) (Figure 2, purple box). Additional information on the MDCalc for EHR app can be found in a previous publication.5
Figure 1.
Heart Score Calculator in the MDCalc for EHR app.
Figure 2.
Two CDS Hooks prompts recommending the use of the MDCalc for EHR SMART on FHIR app and a sample hover-over message.
The 6 study calculators were selected based on a combination of perceived value and best practice recommendations from the American College of Emergency Physicians.16 The combined clinical goals of recommending the use of the study calculators were to decrease the number of unnecessary advanced imaging studies performed and to improve risk-stratification of associated medical conditions. For each study calculator, an algorithm for identifying potentially relevant patients for whom to suggest use of the calculator was developed by 2 EM attending physicians (KLM and JH). The app suggestion algorithms used a combination of patient location, chief complaint, patient age, and/or vital signs as data inputs. The algorithms were intentionally designed to prioritize sensitivity over specificity (Table 1).
Table 1.
App suggestion algorithms
| Calculator | Calculator goal | App suggestion criteria |
|---|---|---|
| Canadian CT head rule | Clear head injury without imaging | |
| Canadian C-spine rule | Clear cervical spine injury without imaging | |
| HEART score | Risk stratify patients with undifferentiated chest pain | |
| PERC rule | Rule out pulmonary embolism (PE) without imaging | |
| Wells’ criteria for PE | Risk stratify patients being evaluated for PE | |
| Wells’ criteria for deep venous thrombosis (DVT) | Risk stratify patients being evaluated for DVT | |
C-spine: cervical spine; CT: computed tomography; DVT: deep venous thrombosis; ED: emergency department; PE: pulmonary embolism; PERC: pulmonary embolism rule-out criteria; SpO2: peripheral oxygen saturation; yo: years old.
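The app suggestion algorithms described above can be sketched as simple predicate functions over the available data inputs (location, chief complaint, age, vital signs), with broad criteria favoring sensitivity over specificity. All thresholds and complaint lists below are illustrative assumptions, not the study's actual criteria.

```python
# Hypothetical sketch of an app suggestion algorithm in the style described
# in the text: broad, sensitive criteria over patient location, chief
# complaint, and age. These values are illustrative, not the study's rules.

CHEST_PAIN_COMPLAINTS = {"chest pain", "chest pressure", "chest tightness"}

def suggest_heart_score(location: str, chief_complaint: str, age: int) -> bool:
    """Return True if a HEART score prompt should be shown (illustrative)."""
    return (
        location == "ED"
        and chief_complaint.lower() in CHEST_PAIN_COMPLAINTS
        and age >= 18  # broad age cutoff favors sensitivity over specificity
    )

print(suggest_heart_score("ED", "Chest Pain", 54))    # True
print(suggest_heart_score("ED", "ankle sprain", 54))  # False
```

Prioritizing sensitivity means a function like this will fire for many patients who ultimately do not need the calculator, which is consistent with the low per-interaction use rates reported later.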
Intervention
The primary intervention was a collection of noninterruptive CDS Hooks prompts recommending the use of 6 medical calculators in the MDCalc for EHR app (Table 1). The CDS Hooks prompts were shown in a left-hand summary column of the patient chart that is always displayed when in the context of an individual patient, referred to as the “Storyboard” in the EHR utilized for this study (Epic).
Each CDS Hooks prompt recommended the use of a potentially relevant calculator, and more than 1 prompt could be displayed. The prompts consisted of a short title recommending the use of a single calculator (Figure 2, red box), along with a hover-over summary of how to access the calculator in the MDCalc for EHR app (Figure 2, green box). If the prompt was clicked, a pop-up version would display the same summary along with a clickable hyperlink that would launch the MDCalc for EHR app (Figure 3). Providers could also access the app directly through the MDCalc tab in the EHR (Figure 2, purple box).
Figure 3.
Sample message that displays when the provider clicks on the CDS Hooks prompt. Selecting the hyperlink opens the Suggested Calculators landing page in the MDCalc for EHR app (see Figure 4).
A new landing page was built for the MDCalc for EHR app (“Suggested Calculators,” Figure 4) which recommended the same calculators as the CDS Hooks prompts to allow easy access to the suggested calculators. Regardless of how they accessed the app, intervention providers accessing the app were always presented with the Suggested Calculators landing page if the patient met criteria for one or more prompts. Otherwise, the standard “My Favorites” landing page was displayed.
Figure 4.
Suggested Calculators landing page in MDCalc for EHR, which suggests the same calculators as the CDS Hooks prompts for intervention group providers.
All EM resident physicians received baseline education on the study at either a weekly education conference or during EHR on-boarding training. For non-EM resident physicians and APPs, a short (4-min) tutorial video was emailed to intervention group providers. The video reviewed the motivation for implementing the calculator prompts, demonstrated the appearance of the prompts and showed how to access the calculators in the MDCalc for EHR app.
All participants, including control group providers, had access to the MDCalc for EHR app. The app functioned identically between the control and intervention groups with the exception of the “Suggested Calculators” landing page, which was available only to providers in the intervention group. To facilitate impact analysis, the CDS Hooks system was run in silent mode for the control group, wherein each patient was evaluated in the same manner as the intervention group but generated prompts were suppressed.
System architecture
The system architecture is shown in Figure 5. The 8 steps involved in intervention group users’ interaction with CDS Hooks prompts and the SMART on FHIR app are shown in the figure and described here. (1) The EHR invokes the CDS Hooks service when a clinician opens a patient’s chart. (2) The CDS Hooks service retrieves patient data from the EHR through the EHR’s FHIR server. (3) The CDS Hooks service executes the app suggestion algorithms using the data retrieved from the EHR’s FHIR server. (4) If the patient meets criteria, a prompt to consider app use is displayed in the EHR (Figure 2, red box). Hovering over the prompt displays additional information (Figure 2, green box). If the prompt is clicked, a pop-up version displays (Figure 3). (5) The user triggers the launch of the app by clicking the hyperlink in the CDS Hooks pop-up prompt (Figure 3) or by opening the app directly from the EHR toolbar (Figure 2, purple box). (6) The app is launched by the EHR using the SMART on FHIR protocol. (7) The app retrieves data as needed via the EHR’s FHIR server. (8) The MDCalc for EHR app is displayed to the end-user, with the app suggesting the same calculators as recommended by the CDS Hooks prompts (Figure 4).
Figure 5.
CDS system architecture.
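Steps 1 through 4 of the architecture above can be sketched as a single handler: the EHR invokes the service on chart open, the service evaluates patient data, and zero or more cards are returned. This is a simplified, stdlib-only illustration; in the real system the data comes from the EHR's FHIR server (here it is passed in via prefetch), real FHIR Patient resources carry a birthDate rather than an age field, and the criteria shown are hypothetical.

```python
# Minimal sketch of the CDS Hooks service flow (Figure 5, steps 1-4): the
# EHR POSTs a patient-view hook invocation, the service evaluates the
# patient data, and a list of cards is returned. Resource shapes and
# criteria are simplified illustrations, not the study's implementation.

def handle_patient_view(invocation: dict) -> dict:
    prefetch = invocation["prefetch"]
    patient = prefetch["patient"]            # simplified FHIR Patient stand-in
    age = patient.get("age", 0)              # real FHIR uses birthDate instead
    complaint = prefetch.get("chiefComplaint", "").lower()

    cards = []
    if "chest" in complaint and age >= 18:   # illustrative criterion only
        cards.append({
            "summary": "Consider the HEART Score (MDCalc for EHR)",
            "indicator": "info",
            "source": {"label": "App suggestion service"},
        })
    return {"cards": cards}                  # empty list -> no prompt shown

sample = {"hook": "patient-view",
          "prefetch": {"patient": {"age": 61}, "chiefComplaint": "Chest pain"}}
print(handle_patient_view(sample)["cards"][0]["summary"])
```

Returning an empty card list is the standard way for a service to indicate that no guidance applies, which is how the silent-mode evaluation for the control group could suppress prompts while still running the same logic.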
Chart opening was chosen as the CDS trigger due to constraints in displaying noninterruptive prompts prior to order entry. The prompt priority (care guidance) did not meet the study institution’s typical standard for an interruptive alert. Thus, it was designed to be noninterruptive. Chart opening was the best trigger option for a noninterruptive prompt that ensured the provider would see the prompt prior to order entry.
CDS Hooks supports single-click app launching, but this was not implemented in our system architecture. This feature is available in the Epic EHR for certain prompt types (eg, pop-up alerts) but not for Storyboard prompts, which were used in this study. To launch the app, the user had to click on the Storyboard prompt, which would launch a pop-up version of the prompt with the embedded app hyperlink. Instructions on launching the app from the pop-up message were included in the Storyboard prompt (Figure 2, green box), since we felt most users would not know how to access the hyperlink intuitively. We have discussed this issue with colleagues at Epic and have requested that the single-click app launching feature be added to the Storyboard prompt.
The CDS Hooks specification also supports deep linking to a specific page of an app (eg, opening directly to a specific calculator in the MDCalc for EHR app). We were unable to successfully configure this feature in our local EHR environment prior to the start of this study, but we have since confirmed that it is an available feature in Epic. The pop-up prompt response options (Accept and Cancel, Figure 3) are configured by the EHR vendor. The options listed depend on what actions (if any) are included in the prompt, and they cannot be modified. Selecting Accept or Cancel results in the prompt closing. Selecting the app hyperlink is considered equivalent to Accept and also closes the prompt.
Study power
Power analysis was performed prior to study initiation using the following variables: alpha = 0.05, power = 0.8, intraclass correlation coefficient (ICC) = 0.3, proportion of times the control group used the MDCalc for EHR app = 0.1, proportion of times the intervention group used the app = 0.2, and number of study subjects = 4000. Based on these numbers, it was determined that at least 60 clusters (providers) would be needed to adequately power the study. An ICC of 0.3 was estimated based on recommendations by Campbell et al.17 Proportions of app use for the control group were estimated from retrospective data. The number of study subjects (4000) was chosen because this was the estimated number of patients that would meet app suggestion criteria in a 6-month period. All power calculations were performed using the R powerCluster package (version 0.7.0).
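As a back-of-envelope illustration of how clustering inflates the required sample size, the sketch below applies the standard two-proportion formula and the design effect 1 + (m − 1) × ICC to the study's inputs. The powerCluster package may use a different method (eg, cluster-level analysis), so this is a rough sketch rather than a reproduction of the study's calculation.

```python
# Rough cluster-RCT sample size arithmetic using the study's stated inputs
# (alpha=0.05, power=0.8, ICC=0.3, p_control=0.1, p_intervention=0.2).
# Illustrative only; the powerCluster package may differ in method.
import math

Z_ALPHA = 1.959964  # standard normal quantile for two-sided alpha = 0.05
Z_BETA = 0.841621   # standard normal quantile for power = 0.80

def n_per_arm_individual(p1: float, p2: float) -> float:
    """Unadjusted per-arm sample size for comparing two proportions."""
    return (Z_ALPHA + Z_BETA) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def design_effect(mean_cluster_size: float, icc: float) -> float:
    """Variance inflation from clustering: 1 + (m - 1) * ICC."""
    return 1 + (mean_cluster_size - 1) * icc

n_flat = n_per_arm_individual(0.1, 0.2)  # per arm, ignoring clustering
m = 4000 / 70                            # ~57 interactions per provider
de = design_effect(m, 0.3)               # variance inflation factor
print(round(n_flat), round(de, 1))       # 196 17.8
```

The large design effect from an ICC of 0.3 shows why the number of clusters, rather than the raw number of patient interactions, drives power in provider-randomized designs.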
Outcome measures
The primary outcome of this study was the percentage of unique provider–patient interactions (“unique interactions”) in which the MDCalc for EHR app was used to view a study calculator. A unique interaction was defined as a study provider opening the chart of a patient who met criteria for at least 1 CDS Hooks prompt during a given ED encounter. If a provider opened a patient’s chart multiple times, that would only be counted once. Unique interactions were chosen because more than 1 provider could be involved in the care of a single patient. Thus, it was important to capture each provider’s involvement in the care process. The unique interactions were then stratified by individual calculators and by EM provider type and included as secondary outcomes. This study was not adequately powered to definitively determine statistical significance for the secondary outcomes.
Statistical analysis
Logistic regression was used to estimate mean percentages and 95% confidence intervals (CIs) for the intervention and control groups, odds ratios (ORs), and P values. Generalized estimating equations were used to account for correlation between the interactions of the same provider using an exchangeable correlation matrix structure. Odds ratios were calculated by exponentiating the regression coefficient. Probabilities and confidence intervals were back-transformed from the logit scale. Fisher’s exact test was used for analysis of unadjusted data included in the Supplementary Material. A P value of <.05 was considered statistically significant. All analyses were performed using the R geepack package (version 1.3.2).
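The exponentiation and back-transformation steps described above can be illustrated with the headline numbers. This is arithmetic only; the study's actual estimates come from a GEE model, so the values below are approximations of the reported results (control 2.6%, OR 2.45).

```python
# Illustration of back-transformation from the logit scale: an odds ratio
# is exp(beta), and a probability is recovered from log-odds via the
# inverse logit. Numbers approximate the study's reported GEE estimates.
import math

def logit(p: float) -> float:
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def expit(x: float) -> float:
    """Inverse logit: probability from log-odds."""
    return 1 / (1 + math.exp(-x))

control_logit = logit(0.026)          # control group rate on the logit scale
beta = math.log(2.45)                 # regression coefficient = log odds ratio
intervention_p = expit(control_logit + beta)

print(round(math.exp(beta), 2))       # 2.45: OR recovered by exponentiation
print(round(intervention_p * 100, 1)) # 6.1: close to the reported 6.0%
```

The small gap between 6.1% and the reported 6.0% reflects rounding of the published inputs, not a different model.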
RESULTS
Provider participants
Seventy providers participated in the study. Study participants at time of enrollment included 18 postgraduate year (PGY)-1 EM resident physicians, 18 PGY-2 and PGY-3 EM resident physicians, 17 non-EM resident physicians (internal medicine, obstetrics-gynecology or psychiatry), and 17 full-time EM APPs. Thirty-five participants were randomized to the intervention group and 35 to the control group. The control group consisted of 17 junior and 18 senior EM providers, and the intervention group consisted of 18 junior and 17 senior EM providers (Table 2).
Table 2.
Study participants at time of enrollment
|  | Control group | Intervention group |
|---|---|---|
| Number of providers | 35 | 35 |
| Junior EM providers | 17 | 18 |
| Non-EM resident physicians | 9 | 8 |
| PGY-1 EM resident physicians | 8 | 10 |
| Senior EM Providers | 18 | 17 |
| APPs | 9 | 8 |
| PGY-2/PGY-3 EM resident physicians | 9 | 9 |
APPs: advanced practice providers; EM: emergency medicine; PGY: postgraduate year.
Patient population
Intervention group providers saw 3424 patients who received at least 1 CDS Hooks prompt while control group providers saw 3335 patients who met criteria for at least 1 CDS Hooks prompt. The number of unique interactions between the groups was similar (3978 in the intervention group vs 3776 in the control group). Further stratification by calculator type is shown in Table 3.
Table 3.
Study patient population
| Outcomes | Control group | Intervention group |
|---|---|---|
| Number of patients | 3335 | 3424 |
| Number of unique interactions | 3776 | 3978 |
| Canadian CT head rule | 1058 | 1069 |
| Canadian C-spine rule | 821 | 815 |
| HEART score | 1388 | 1536 |
| PERC rule | 857 | 875 |
| Wells’ criteria for PE | 1630 | 1788 |
| Wells’ criteria for DVT | 466 | 488 |
C-spine: cervical spine; CT: computed tomography; DVT: deep venous thrombosis; PE: pulmonary embolism; PERC: pulmonary embolism rule-out criteria.
Outcome measures
Primary outcome
Providers in the intervention group used the MDCalc for EHR app to view a study calculator in 6.0% of the unique interactions compared to 2.6% in the control group (OR, 2.45 [95% CI, 1.15–5.22], P value .02) (Table 4). This represents an increase in app utilization of 130%.
Table 4.
Percentage of unique interactions where a study calculator was viewed in the control and intervention groups
| MDCalc for EHR app use | Control group: % of unique interactions with a study calculator viewed (estimate and 95% CI) | Intervention group: % of unique interactions with a study calculator viewed (estimate and 95% CI) | Odds ratio | P value |
|---|---|---|---|---|
| Any study calculator | 2.6 (1.5–4.4) | 6.0 (3.7–9.6) | 2.45 (1.15–5.22) | .02* |
| Canadian CT head rule | 1.9 (1.0–3.7) | 3.6 (2.1–6.0) | 1.92 (0.79–4.64) | .15 |
| Canadian C-spine rule | 0.9 (0.4–2.0) | 2.9 (1.6–5.2) | 3.38 (1.21–9.42) | .02* |
| HEART score | 3.2 (1.7–5.6) | 6.5 (3.9–10.6) | 2.15 (0.94–4.89) | .069 |
| PERC rule | 4.8 (2.8–8.1) | 7.1 (4.2–11.7) | 1.52 (0.7–3.32) | .29 |
| Wells’ criteria for PE | 2.6 (1.4–4.8) | 6.2 (3.6–10.4) | 2.44 (1.05–5.64) | .038* |
| Wells’ criteria for DVT | 1.9 (0.7–5.4) | 4.0 (2.2–7.1) | 2.11 (0.62–7.26) | .24 |
| Senior EM providers, any study calculator | 3.5 (1.9–6.4) | 4.6 (2.4–8.6) | 1.31 (0.52–3.28) | .57 |
| Junior EM providers, any study calculator | 1.2 (0.4–3.5) | 8.2 (4.2–15.5) | 7.67 (2–29.42) | .003* |
| All EM residents, any study calculator | 1.6 (0.6–4.3) | 2.9 (1.2–6.7) | 2.11 (0.62–7.26) | .36 |
CI: confidence interval; C-spine: cervical spine; CT: computed tomography; DVT: deep venous thrombosis; EM: emergency medicine; PE: pulmonary embolism; PERC: pulmonary embolism rule-out criteria.
*P value <.05.
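The headline figures follow directly from the reported percentages: the relative increase and a crude (unadjusted) odds ratio can be checked by hand. Note that the published OR (2.45) is GEE-adjusted, so the crude value computed here differs slightly.

```python
# Arithmetic behind the headline figures: relative increase in app use and
# the crude odds ratio from the reported percentages. The published OR
# (2.45) is GEE-adjusted, so the crude value is slightly different.

p_control, p_intervention = 0.026, 0.060

relative_increase = (p_intervention - p_control) / p_control * 100
crude_or = (p_intervention / (1 - p_intervention)) / (p_control / (1 - p_control))

print(round(relative_increase))  # 131 (reported as 130%, from unrounded data)
print(round(crude_or, 2))        # 2.39 vs the GEE-adjusted 2.45
```

The proximity of the crude and adjusted values suggests the provider-level correlation shifted the point estimate only modestly, though it widened the confidence interval.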
Secondary outcomes
MDCalc for EHR app usage was significantly greater for the intervention group providers for 2 of the 6 calculators (Canadian C-spine rule and Wells’ criteria for pulmonary embolism). App usage was greater for the remaining 4 calculators, but these differences were not statistically significant. App use with any study calculator increased from 3.5% to 4.6% (P value .57) for senior EM providers and from 1.2% to 8.2% (P value .003) for junior EM providers (Table 4). In addition, the percentage of providers who viewed at least 1 study calculator in the app during a unique interaction increased from 37.1% in the control group to 80% in the intervention group (P value <.001). This represents an increase of 116%. The percentage of users increased for all study calculators, but the difference was only statistically significant for 2 (Supplementary Table S5). As previously noted, this study was not adequately powered to detect differences in app usage for individual calculators or participant subgroups. The raw app use data without adjusting for intraclass correlation is included in Supplementary Table S6.
DISCUSSION
The results from this study demonstrate that contextually relevant CDS Hooks prompts recommending the use of a SMART on FHIR app can significantly increase app utilization. In this study, it increased app utilization by 130% (Table 4) and the number of app users by 116% (Supplementary Table S5). To our knowledge, this is the first RCT to evaluate the use of CDS Hooks to provide clinical care recommendations and is the first study to evaluate using CDS Hooks to recommend a relevant SMART on FHIR app. This is an important finding since recommending relevant HL7 SMART on FHIR apps was a primary motivation for the development of the HL7 CDS Hooks standard,7 and it also demonstrates the potential for CDS Hooks to be used to mitigate usability issues that may arise with the development of an extensive EHR app-based ecosystem, such as app fatigue.
CDS Hooks offers many advantages over current, native CDS systems in commercial EHRs. Examples include improved scalability, decreased system development cost, and enhanced functionality. Unfortunately, CDS Hooks is not currently available across all EHR platforms, and this is a limitation of this study. To our knowledge, CDS Hooks is available at this time in production environments in Epic (all sites) and Cerner (limited deployment). Allscripts publicly announced support for the standard in February 2020,18 but it is not yet available in production systems.
A second limitation was that the intervention consisted of 3 components (EHR prompts, a new landing page on the MDCalc for EHR app, and provider education), and the observed impact cannot solely be attributed to the EHR prompts. To better understand the impact of each, we conducted a post hoc analysis that included only EM resident physicians, since this was the only subgroup where all providers (intervention and control) received the baseline education. This analysis showed that app use was higher in the intervention group compared to the control (2.9% vs 1.6%, respectively), but these results were not statistically significant (P value .36) (Table 4). This study was not powered to detect differences in this subgroup, and traditional education alone has consistently been shown to be ineffective in increasing compliance with clinical guidelines.19 However, it is still possible that at least part of the intervention effect was due to the education provided or the updates made to the landing page of the MDCalc for EHR app.
A third limitation was that the overall rate of app use was relatively low (6.0% of unique interactions in the intervention group). There are several possible explanations for this observation. First, we purposely designed the CDS logic to prioritize sensitivity over specificity, so a low use rate was expected. Second, it is not expected that a provider will view the calculator in the app for every patient with a prompt. In many cases, it is obvious a patient does not meet criteria without needing to view the calculator (eg, a significant trauma mechanism is an exclusion criterion for the Canadian CT head rule). In addition, a provider may be viewing the chart well past the point of making diagnostic decisions. Finally, providers could have accessed the calculators outside the SMART on FHIR app, such as by using the Web or smartphone versions of MDCalc. While we did not measure the utilization of calculators external to the app, a previous study conducted at the study site found that the PERC rule, a calculator included in our study, was not frequently applied in routine clinical care.20
Another limitation is that this study was performed using 1 commercial EHR platform (Epic) at a single academic medical center. Further evaluation in different EHR platforms and hospital systems is needed to confirm the generalizability of these results. In addition, implementation at a single center created the potential for crossover effects between the study groups, with providers in the intervention group potentially educating control group providers about the application. However, such contamination would have been expected to decrease the observed impact of the intervention.
This study was also limited to resident physicians and APPs, and these results may not be generalizable to experienced, board-certified emergency physicians, who may not find it necessary to use the app, or to be prompted to use it, because of real or perceived familiarity with the calculators. Our subgroup analysis by provider type supports this conclusion. Control group senior EM providers used the app more frequently than control group junior EM providers (3.5% vs 1.2%, respectively), and app use increased significantly between the control and intervention groups for junior EM providers (1.2%–8.2%, P value .003) but not for senior EM providers (3.5%–4.6%, P value .57). These results suggest that the prompts may be more useful to novice EM providers, who are less familiar with the calculators. A poststudy survey could have helped to further explain these usage patterns; this represents a limitation of the study and a potential area of further research.
Finally, this study was not powered to evaluate clinical outcomes. Thus, further research is needed to evaluate whether CDS Hooks can be used to improve clinical outcomes through context-appropriate suggestions of SMART on FHIR apps.
CONCLUSION
In this provider-randomized study, we demonstrated that using CDS Hooks to recommend a SMART on FHIR app can enable a significant increase in app usage. To our knowledge, this represents the first RCT to evaluate the use of CDS Hooks for providing clinical care recommendations and the first study to evaluate using CDS Hooks to recommend a SMART on FHIR app, which was one of the primary motivations behind developing the CDS Hooks standard. This study illustrates how CDS Hooks may help mitigate usability issues that may arise with the proliferation of SMART on FHIR EHR add-on apps. Future research may evaluate potential impacts on clinical care decisions and outcomes.
FUNDING
This project was supported by grant number T15LM007124 from the National Library of Medicine. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Library of Medicine or the National Institutes of Health.
AUTHOR CONTRIBUTIONS
Each author made substantial contributions to: (1) the design and implementation of the CDS system (all authors); (2) study design (KLM, PVK, KK, TM, DH, and JH); (3) the acquisition, analysis, or interpretation of data (PVK, KLM, and KK); (4) the creation of new software used in the work (KK, PBW, JW, MS, and JH); or (5) the drafting or substantial revision of the manuscript (KLM, PVK, and KK).
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
ACKNOWLEDGMENTS
We would like to thank Scott Goodell and Christopher Reed for the contributions they made in deploying the described intervention.
CONFLICT OF INTEREST STATEMENT
JH is cofounder, CEO, and part owner of MD Aware, LLC, which owns and operates MDCalc. JW and MS are employees of MD Aware. The University of Utah has codevelopment and licensing agreements with MD Aware to assist with the EHR integration of MDCalc. KK and PBW may benefit financially if MDCalc for EHR is commercially successful. The other authors have no competing interest to declare.
Contributor Information
Keaton L Morgan, Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, USA; Department of Emergency Medicine, University of Utah, Salt Lake City, Utah, USA.
Polina V Kukhareva, Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, USA.
Phillip B Warner, Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, USA.
Jonah Wilkof, MDCalc (MD Aware, LLC), New York, New York, USA.
Meir Snyder, MDCalc (MD Aware, LLC), New York, New York, USA.
Devin Horton, Department of Internal Medicine, University of Utah, Salt Lake City, Utah, USA.
Troy Madsen, Department of Emergency Medicine, University of Utah, Salt Lake City, Utah, USA.
Joseph Habboushe, MDCalc (MD Aware, LLC), New York, New York, USA.
Kensaku Kawamoto, Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, USA.
DATA AVAILABILITY STATEMENT
The data underlying this article cannot be shared publicly due to University of Utah Health (UUH) policies regarding the privacy of patients and the protection of sensitive patient health information.
REFERENCES
- 1. Mandel JC, Kreda DA, Mandl KD, Kohane IS, Ramoni RB. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J Am Med Inform Assoc 2016; 23(5): 899–908.
- 2. Taber P, Radloff C, Del Fiol G, Staes C, Kawamoto K. New standards for clinical decision support: a survey of the state of implementation. Yearb Med Inform 2021; 30(1): 159–71.
- 3. Osheroff JA, Teich JM, Levick D, et al. Improving Outcomes with Clinical Decision Support: An Implementer's Guide. Chicago, IL: HIMSS Publishing; 2012.
- 4. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. Br Med J 2005; 330(7494): 765–8.
- 5. Kawamoto K, Kukhareva PV, Weir C, et al. Establishing a multidisciplinary initiative for interoperable electronic health record innovations at an academic medical center. JAMIA Open 2021; 4(3): 1–15.
- 6. Van Der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13(2): 138–47.
- 7. Strasberg HR, Rhodes B, Del Fiol G, Jenders RA, Haug PJ, Kawamoto K. Contemporary clinical decision support standards using Health Level Seven International Fast Healthcare Interoperability Resources. J Am Med Inform Assoc 2021; 28(8): 1796–806.
- 8. CDS Hooks Overview. https://cds-hooks.org/. Accessed May 14, 2022.
- 9. Dolin RH, Boxwala A, Shalaby J. A pharmacogenomics clinical decision support service based on FHIR and CDS Hooks. Methods Inf Med 2018; 57(S 02): e115–23.
- 10. Nguyen BP, Reese T, Decker S, Malone D, Boyce RD, Beyan O. Implementation of clinical decision support services to detect potential drug-drug interaction using clinical quality language. Stud Health Technol Inform 2019; 264: 724–8.
- 11. Watkins M, Eilbeck K. FHIR lab reports: using SMART on FHIR and CDS Hooks to increase the clinical utility of pharmacogenomic laboratory test results. AMIA Jt Summits Transl Sci Proc 2020; 2020: 683–92.
- 12. Semenov I, Osenev R, Gerasimov S, Kopanitsa G, Denisov D, Andreychuk Y. Experience in developing an FHIR medical data management platform to provide clinical decision support. Int J Environ Res Public Health 2019; 17(1): 73.
- 13. Rubin L, López NP, Gaiera A, Campos F, Luna D, De Quirós FBG. Development, implementation and preliminary results of an electronic reminder for HIV screening using a service oriented architecture. Stud Health Technol Inform 2019; 264: 763–7.
- 14. Henning DJ, McGillicuddy DC, Sanchez LD. Evaluating the effect of emergency residency training on productivity in the emergency department. J Emerg Med 2013; 45(3): 414–8.
- 15. MDCalc: About. Published 2021. https://www.mdcalc.com/about-us. Accessed May 14, 2022.
- 16. Choosing Wisely. American College of Emergency Physicians: Five Things Physicians and Patients Should Question. Published 2014. https://www.choosingwisely.org/wp-content/uploads/2015/02/ACEP-Choosing-Wisely-List.pdf. Accessed May 14, 2022.
- 17. Campbell MK, Grimshaw JM, Steen N; Changing Professional Practice in Europe Group. Sample size calculations for cluster randomised trials. J Health Serv Res Policy 2000; 5(1): 12–6.
- 18. Allscripts TouchWorks® EHR 20.0 now generally available. https://investor.allscripts.com/news-releases/news-release-details/allscripts-touchworksr-ehr-200-now-generally-available. Accessed May 14, 2022.
- 19. Prior M, Guerin M, Grimmer-Somers K. The effectiveness of clinical guideline implementation strategies – a synthesis of systematic review findings. J Eval Clin Pract 2008; 14(5): 888–97.
- 20. Buchanan I, Teeples T, Carlson M, Steenblik J, Bledsoe J, Madsen T. Pulmonary embolism testing among emergency department patients who are pulmonary embolism rule-out criteria negative. Acad Emerg Med 2017; 24(11): 1369–76.