Journal of the American Medical Informatics Association (JAMIA)
. 2023 Mar 27;30(6):1205–1218. doi: 10.1093/jamia/ocad040

Evaluating the costs and consequences of computerized clinical decision support systems in hospitals: a scoping review and recommendations for future practice

Nicole M White 1, Hannah E Carter 2, Sanjeewa Kularatna 3, David N Borg 4, David C Brain 5, Amina Tariq 6, Bridget Abell 7, Robin Blythe 8, Steven M McPhail 9,10,
PMCID: PMC10198542  PMID: 36972263

Abstract

Objective

Sustainable investment in computerized clinical decision support systems (CDSS) requires robust evaluation of their economic impacts compared with current clinical workflows. We reviewed current approaches used to evaluate the costs and consequences of CDSS in hospital settings and presented recommendations to improve the generalizability of future evaluations.

Materials and Methods

A scoping review of peer-reviewed research articles published since 2010. Searches were completed in the PubMed, Ovid Medline, Embase, and Scopus databases (last searched February 14, 2023). All studies reported the costs and consequences of a CDSS-based intervention compared with current hospital workflows. Findings were summarized using narrative synthesis. Individual studies were further appraised against the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 checklist.

Results

Twenty-nine studies published since 2010 were included. Studies evaluated CDSS for adverse event surveillance (5 studies), antimicrobial stewardship (4 studies), blood product management (8 studies), laboratory testing (7 studies), and medication safety (5 studies). All studies evaluated costs from a hospital perspective but varied based on the valuation of resources affected by CDSS implementation, and the measurement of consequences. We recommend future studies follow guidance from the CHEERS checklist; use study designs that adjust for confounders; consider both the costs of CDSS implementation and adherence; evaluate consequences that are directly or indirectly affected by CDSS-initiated behavior change; and examine the impacts of uncertainty and of differences in outcomes across patient subgroups.

Discussion and Conclusion

Improving consistency in the conduct and reporting of evaluations will enable detailed comparisons between promising initiatives, and their subsequent uptake by decision-makers.

Keywords: clinical decision support system, electronic medical records, health economics, hospital-based care

BACKGROUND AND SIGNIFICANCE

Hospitals are complex adaptive systems that rely on frequent information sharing between healthcare providers to deliver continuous, efficient, and safe patient care.1 Hospital-wide adoption of electronic medical records has led to patient information becoming increasingly digitized, offering end-users centralized access to data for decision-making. Investment in electronic medical records and related technologies, such as computerized provider order entry, has primarily been driven by their potential to improve patient safety and process efficiency.2,3 Despite this potential, system usability has been cited as a critical barrier to adoption by healthcare providers.4,5 Efforts to encourage broader uptake have therefore prioritized decision support functionality to support the interpretation of stored information amenable to clinical decision-making.

Computerized clinical decision support systems (CDSS) represent a meaningful way in which electronic medical records can be used to enhance healthcare safety and quality.6 CDSS combine patient information with clinical knowledge to give tailored decision support at the point of care.7 Current CDSS are predominantly knowledge-based, where patient data are aligned with external information, including clinical guidelines and testing protocols, to promote adherence to evidence-based practices.8 Conversely, nonknowledge-based CDSS use patient data to estimate the likelihood of clinical outcomes, for example, via early warning scores and risk stratification tools based on statistical or machine learning approaches. Depending on the objectives of CDSS, decision support may be communicated as interruptive alerts, automated reminders, medication dosing support, or relevant information display to guide clinical workflows.9 In hospital environments, the generic functionality offered by CDSS has led to applications across emergency,10 admitted patient,11 and ambulatory services.12
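As a minimal sketch of the knowledge-based, rule-driven pattern described above, a CDSS rule compares stored patient data against an external threshold and returns decision support at the point of ordering. All field names, thresholds, and messages below are illustrative assumptions, not taken from any particular system or guideline.

```python
# Minimal sketch of a knowledge-based, rule-driven interruptive alert.
# All field names, thresholds, and messages are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderContext:
    order_type: str                         # hypothetical order identifier
    last_hemoglobin_g_dl: Optional[float]   # most recent stored lab value

def transfusion_alert(ctx: OrderContext, threshold: float = 7.0) -> Optional[str]:
    """Return an interruptive alert message if the order falls outside the
    rule, or None to let the order proceed without interruption."""
    if ctx.order_type != "rbc_transfusion":
        return None
    if ctx.last_hemoglobin_g_dl is None:
        return "No recent hemoglobin on file; confirm indication before ordering."
    if ctx.last_hemoglobin_g_dl >= threshold:
        return (f"Last hemoglobin {ctx.last_hemoglobin_g_dl:.1f} g/dL is at or above "
                f"the {threshold:.1f} g/dL threshold; a reason is required to proceed.")
    return None
```

In practice, rules of this kind are embedded in order entry systems and may be configured as hard-stops or as override-with-reason alerts, depending on the intended workflow.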

When implemented successfully, CDSS can reduce unwarranted variation in processes of care. By reducing unwarranted variation, CDSS may reduce the inappropriate use of healthcare resources without compromising patient outcomes.2 The full realization of these downstream consequences has been hampered by mixed reports of CDSS adherence, leading to uncertainty about their impacts on patient and health system outcomes.13,14 Randomized controlled and observational studies have shown considerable heterogeneity in adherence to CDSS-based interventions, with average improvements of 5%–10% in the proportion of patients receiving recommended care.15 Reported heterogeneity is likely influenced by several factors related to the design and integration of CDSS into clinical workflows, including alert fatigue, administrative workarounds, and over-reliance on recommendations.2,8,16 Contextual factors affecting uptake and engagement with CDSS,17–19 and the impact of decision support features on clinical effectiveness18,20,21 have also been examined. Organizational investment in the development, evaluation, and adaptation of CDSS is an important step towards addressing these barriers, to encourage a shift to business-as-usual activity.

Strategic healthcare investments are ideally informed by evidence about the safety, effectiveness, and cost-effectiveness of a new intervention compared with current practices.22 While published studies have indicated benefits from adopting CDSS, different evaluation study designs have limited comparisons of promising initiatives across studies. Reviews of in-hospital CDSS have primarily focused on the clinical consequences of implementing CDSS, using several different outcome measures.2,13,23 Similar reviews on the cost impacts of CDSS have highlighted inconsistencies in included costs, and subsequent methods used to value resources affected by the implementation.11,24 These inconsistencies highlight the need for consensus on how best to evaluate CDSS in ways that jointly consider proposed initiatives’ costs and consequences. Access to comparable evidence on the impacts of CDSS is essential to help decision-makers optimize investment to maximize health outcomes whilst representing value for money to healthcare payers.

OBJECTIVE

This review aimed to strengthen the existing evidence base by summarizing current approaches for the economic evaluation of CDSS in hospital settings. Our research question asked what approaches have been used to evaluate the economic impacts of CDSS in hospital settings. We defined an economic evaluation as any study that presented findings on the costs and consequences of CDSS, before and after being introduced into hospital workflows. We conducted a scoping review as our intention was to summarize a broad range of approaches reported in the literature and provide recommendations to inform future evaluations.

MATERIALS AND METHODS

Reporting was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR)25 (Supplementary File S1). A protocol was developed but not registered or published.

Search strategy and selection criteria

Our search strategy combined indexed terms with keywords for common types of clinical decision support, hospital-based care, and methods for the joint analysis of costs and consequences in healthcare settings. All coauthors developed and approved search strings (Supplementary File S2). Online searches were completed in the PubMed, Ovid Medline, Embase, and Scopus databases. Searches covered peer-reviewed studies published between January 1, 2010, and February 14, 2023 (date of last search). The review timeframe was designed to capture increases in CDSS uptake following the introduction of the Health Information Technology for Economic and Clinical Health (HITECH) Act in the United States,14 and the substantial uptake of electronic medical records after 2010 in the United States and internationally.

Studies were required to be peer-reviewed research articles written in English that reported findings on the costs and consequences associated with a CDSS-based intervention. Conference proceedings, study protocols, letters to the editor, commentaries, and systematic or scoping reviews were excluded. Articles that only described methodological developments were excluded, for example, the development of a new clinical risk prediction tool that was neither implemented nor evaluated.

Observational and experimental study designs were included. Studies that reported costs or consequences only were excluded. For screened abstracts where changes to costs and outcomes were reported but the study design was unclear, final eligibility was determined by full-text review. Studies were limited to evaluations of CDSS in hospital settings only, defined as care delivered as part of emergency, inpatient, or ambulatory services. Studies conducted in primary care, aged care, and community health services were excluded.

We defined CDSS as any feature that contextualized information available in digital hospital systems to guide clinical decision-making. Studies on the introduction of digital hospital infrastructure only without evaluating specific decision support were excluded. Examples of digital hospital infrastructure included electronic medical record systems and health information exchanges. Computer-based interventions designed to modify patient behavior only, such as text message reminders to improve appointment attendance, were excluded. Studies that evaluated medical technologies such as robot-assisted surgery were also excluded. We considered studies that evaluated CDSS as a single intervention or as part of a bundle, where CDSS was supported with other evidence-based interventions to address implementation barriers.

All studies that reported costs and consequences associated with CDSS and a usual care comparator were included. Eligible studies could report costs and outcomes per study phase, as a before and after change, or as a projected change in costs based on changes in resource use after CDSS was introduced. Changes in costs and outcomes could be reported separately or as a combined measure, for example, the cost per adverse event prevented. Studies could report on multiple outcomes aligned with the consequences of adhering to CDSS recommendations, associated with clinical effectiveness or processes of care. This broad scope aligned with our aim to identify how economic evaluations of CDSS are conducted.
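The projected-change and combined-measure reporting styles described above reduce to simple arithmetic. The following sketch uses hypothetical figures only, not values drawn from any included study.

```python
# Illustrative arithmetic for two reporting styles; all input figures
# are hypothetical and not drawn from any included study.
def projected_savings(units_pre: int, units_post: int, unit_cost: float) -> float:
    """Projected cost change from a change in resource use (e.g., tests avoided)."""
    return (units_pre - units_post) * unit_cost

def cost_per_event_prevented(net_cost: float, events_prevented: int) -> float:
    """Combined measure, e.g., cost per adverse event prevented."""
    if events_prevented <= 0:
        raise ValueError("requires at least one event prevented")
    return net_cost / events_prevented

# Example: 1200 duplicate tests pre-CDSS vs 900 post-CDSS at $25 per test
# gives a projected saving of $7500.
```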

Title and abstract screening was completed using Rayyan.26 At least 2 authors determined the eligibility of each record for full-text review. Disagreements from screening were resolved by group discussion. Full-text reviews were completed by 5 authors (NMW, HEC, SMK, DNB, and DCB). Reference lists of included articles at the full-text review stage were hand-searched to identify further studies for inclusion.

Charting the data

Data collection described the features of the CDSS-based intervention, study population, evaluation study design, outcome measurement, analysis, and reporting (Supplementary File S3). Data were entered into an Excel spreadsheet independently by 4 authors (NMW, HEC, SMK, and DCB) and checked by 3 authors (NMW, DNB, and SMM). Outcome measurement considered patient-centered and hospital-centered outcomes. Approaches to resource valuation before and after CDSS introduction were summarized by including cost drivers and data sources used to estimate costs.

Collating, summarizing, and reporting the results

Data were summarized in tabular form, indexed by primary study author and year of publication. Narrative synthesis was used to describe overall themes in evaluation approaches. We further considered how studies reported their chosen evaluation approach against methods items in the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 checklist.27 Individual items were classified for each study as fully reported, partially reported, not reported, or not applicable. This appraisal was not intended as a formal assessment of study quality; rather, it sought to identify gaps in current reporting to inform the consistency of future evaluations and enable comparisons across studies.

Results from narrative synthesis and reporting against the CHEERS checklist were combined to develop a set of recommendations to guide future evaluations.

RESULTS

Search strategy to identify relevant studies

Our search strategy found 3113 records after removing duplicates (Figure 1). Title and abstract screening identified 81 studies for full-text review. The main reasons for exclusion after full-text review were studies that did not meet our definition of CDSS (n = 29), used an ineligible study design (n = 20), or evaluated CDSS outside of the hospital setting (n = 8). Studies excluded based on study design evaluated changes to costs or consequences only (n = 6), did not have a comparator (n = 12), or were modeling studies on associations between patient- and system-level variables with electronic medical record or computerized provider order entry usage (n = 2). Five additional studies were included after searching reference lists. Data extraction was completed for 29 studies; 19 were from North America and 8 from Europe.

Figure 1.

Search strategy.

CDSS characteristics

CDSS were introduced as part of patient safety initiatives and for promoting the appropriate use of hospital resources (Table 1). Patient safety initiatives introduced CDSS for monitoring adverse event risk,28–32,57 medication safety,52–56 and antimicrobial stewardship.33–36 Applications aimed at improving hospital resource use targeted laboratory testing45–51 and blood product management.37–44

Table 1.

CDSS-based interventions evaluated by clinical area

Clinical application | Author (year of publication; country) | Process of care targeted | CDSS functionality
Adverse events
 Horton et al (2020; USA)28 Detection of sepsis decompensation Modified early warning scores presented on real-time EHR patient dashboard. Rule-based alerts paged to staff. Alerts linked to relevant order sets.
 Kharbanda et al (2021; USA)29 Diagnostic imaging for suspected appendicitis Risk calculator integrated into electronic health records. Best practice alert generated at time of ordering of CT or ultrasound for eligible patients.
 Lecumberri et al (2011; Spain)30 Thromboprophylaxis for the prevention of venous thromboembolism Automated daily screening for risk of thrombosis, based on data stored in hospital databases. Rule-based alert displayed on-screen to guide use of prophylaxis. Alerts linked to guidelines for prevention of venous thromboembolism.
 Majid et al (2019; USA)31a Sepsis management in patients with suspected infection Modified early warning system integrated into EHR; Sepsis order sets; Real-time physician feedback system.
 Schroeder et al (2021; UK)32 Monitoring of fetal heart rate during labor Decision-support software for computerized interpretation of cardiotocographs. Real-time information display combined with color-coded alerts based on severity of events detected.
Antimicrobial stewardship
 Bond et al (2017; Australia)33a Appropriate use of common antimicrobials based on hospital classification Intranet-based decision support with links to guidelines on appropriate use of antimicrobials. Automatic generation of antimicrobial approvals based on traffic-light system, including restriction of selected agents.
 Calloway et al (2013; USA)34 Pharmacy clinical interventions targeting the appropriate use of antimicrobials. Dosing support and contraindications for selected agents communicated using rule-based alerts. Tiered alerts to prioritize patient review, sent to pharmacists and treating clinicians.
 Chen et al (2018; USA)35 Targeted use of aztreonam Interruptive alert prompting treating clinician to order penicillin allergy testing for patients on aztreonam, recorded in their EHR.
 Nault et al (2017; Canada)36 Point-of-care antibiotic prescribing Dosing support and recommendations at the point of prescribing. Rule-based alerts for potentially inappropriate prescriptions. Automated severity scores provided to pharmacists to prioritize patient review.
Blood product management
 Goodnough et al (2014a; USA)37 Red blood cell transfusion Rule-based interruptive alert generated if order request does not meet guidelines, based on documented diagnoses and last recorded hemoglobin level. Reason required to proceed to order. Links provided to relevant literature within the system.
 Goodnough et al (2014b; USA)38 Red blood cell transfusion Rule-based interruptive alert generated if order request does not meet guidelines, based on documented diagnoses and last recorded hemoglobin level. Reason required to proceed to order. Links provided to relevant literature within the system.
 Ikoma et al (2021; USA)39 Red blood cell transfusion Rule-based interruptive alert generated if the request falls outside of recommended clinical guidelines, stored in CPOE. Relevant information display gives the patient's past hemoglobin levels and corresponding clinical guidance based on the most recent hemoglobin level recorded and other clinical indicators recorded in the EHR.
 Murphy et al (2021; USA)40 Platelet transfusion Rule-based interruptive alert generated at the time of platelet order request, if outside of clinical guidelines. Alert triggered based on patient’s most recent platelet count with relevant information display based on clinical guidelines.
 Razavi et al (2014; USA)41 Administration of packed red blood cells Rule-based interruptive alert generated based on orders for packed red blood cells via CPOE. Alert gives relevant information on recent patient hemoglobin level and vital signs. Reason required to proceed with order.
 Saag et al (2016; USA)42a Red blood cell transfusions Hard-stop alert at time of ordering within CPOE. Providers required to select the clinical indication for the order to proceed, in line with current transfusion guidelines.
 Swart et al (2020; UK)43a Red blood cell transfusions Rule-based interruptive alert generated based on orders for packed red blood cells at the time of request. Alerts combined information on the most recent blood count and clinical indications based on user inputs.
 Zuckerberg et al (2015; USA)44 Red blood cell transfusions Rule-based interruptive alert generated if order request was outside of hospital transfusion guidelines, based on timing of last hemoglobin measurement and most recent hemoglobin level recorded. Reason required to proceed to order.
Laboratory testing
 Algaze et al (2016; USA)45a Complete blood cell counts, chemistry, and coagulation panels Rule-based restrictions on test orders within a 24-h timeframe. Triggered at the time of order request within CPOE.
 Bellodi et al (2017; Italy)46 Common laboratory tests Interruptive alert at time of ordering via CPOE. Relevant information display included a patient's last test result and recommended testing intervals. Alerts include hard-stops and warnings with ability to override after a reason is provided.
 Bridges et al (2014; USA)47 Acute hepatitis profile testing Interruptive alert generated if a duplicate test was ordered within 15 days of the last test result stored in CPOE. Relevant information display included clinical evidence on testing timeframes and tests that were completed, had pending results, or were scheduled later.
 Jun et al (2019; USA)48 Thrombophilia testing for venous thromboembolism Interruptive alert embedded in EHR. Triggered at the time of order for selected thrombophilia tests. Relevant information provided recommendations for testing based on the Choosing Wisely guidelines.
 Nikolic et al (2017; USA)49 Stool cultures and parasitological studies Hard-stop alert generated at time of ordering if patient has been hospitalized for more than 3 days. Alert provided information on clinical evidence for the 3-day rule, stored in CPOE. Option for treating clinician to override conditional on discussion with laboratory staff.
 Strockbine et al (2020; USA)50 Phlebotomy testing Interruptive alert at time of blood type and screen test orders. Information provided with the alert includes the patient's blood type (if available), the date and time of current test expiry, and the date and time of the last test ordered.
 Tawadrous et al (2019; UK)51a Coagulation testing Uncoupling of international normalized ratio (INR) and activated partial thromboplastin time (aPTT) tests within CPOE, combined with relevant information display. Information was provided at the time of ordering as a pop-up reminder about clinical indications and costs of testing. Provider acknowledgment required to proceed with the testing order.
Medication safety
 Gallagher et al (2016; Ireland)52 Appropriate prescribing Automated review of patient medications to flag drug-related problems, drug appropriateness and drug-drug interactions. Output generated for pharmacist review and generation of pharmaceutical care plan sent to treating clinical team.
 Vermeulen et al (2014; The Netherlands)53 Appropriate prescribing Decision support based on CPOE. Alerts generated based on dosage and drug-drug interactions.
 Pregnall et al (2022; USA)54a Appropriate use of anesthesia EHR-based dashboard to monitor the use of high-dose sugammadex, for reversing the effects of neuromuscular blocking agents during surgery. The dashboard displayed information on a patient’s actual and adjusted body weight, coupled with real-time reporting on the administration of high-dose sugammadex. Dashboard reporting was accessible to all providers within the anesthesia department. Automated daily reporting of high-dose cases was sent to treating staff.
 Saad et al (2018; Argentina)55 Appropriate use of stress prophylaxis Interruptive alert triggered by ordering of acid-suppression therapy. Alert provided relevant information on therapeutic indicators and risk factors for stress ulcer prophylaxis.
 Touchard et al (2021; France)56a Appropriate drug administration for iron deficiency anemia Decision support integrated into CPOE, to guide appropriate prescribing of intravenous versus oral drug administration. Recommendations based on a standardized protocol, providing dosage and administration guidance.
a

CDSS evaluated as part of a multicomponent intervention or bundle.

Interruptive alerts and relevant information display were the most common CDSS functions evaluated (Table 1). Alert-based feedback displayed to end-users included patient-level risk scores,28–31,36 tiered alerts for prioritizing specialist review,32,34 clinical guidelines alongside relevant patient data,35,37–41,43,44,46,47,49,50,52,53,55 and clinical guidelines alone to inform further management.42,45,48,51,56

Information stored in computerized provider order entry systems was most commonly used to generate alerts based on documented care history. For example, laboratory testing studies evaluated CDSS that displayed the timing and outcome of a patient's previous test results at the time of request.37,39–41 Similarly, interruptive alerts for medication safety provided information on drug-drug interactions, drug appropriateness, and dosing for requested prescriptions based on medications already administered.52,53,55,56

Patient safety applications emphasized providing real-time feedback for guiding patient management and streamlining clinical workflows. Four studies used patient-level information from electronic medical records to calculate early warning scores for managing sepsis,28,31 appendicitis,29 and venous thromboembolism.30 Studies evaluating CDSS as part of antimicrobial stewardship programs implemented automated processes for antimicrobial review and approval33,34,36 and targeted testing to guide appropriate antimicrobial use.35
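Early warning scores of the kind described above can be sketched as threshold-based scoring over stored vital signs, with a rule-based alert triggered once the total crosses a threshold. The vital-sign bands, weights, and threshold below are hypothetical placeholders, not a validated scoring instrument.

```python
# Simplified sketch of an early-warning-score calculation. The bands,
# weights, and alert threshold are hypothetical placeholders only.
def early_warning_score(resp_rate: int, heart_rate: int, systolic_bp: int) -> int:
    """Sum per-parameter points; higher scores indicate greater deterioration risk."""
    score = 0
    # Respiratory rate (breaths/min)
    if resp_rate <= 8 or resp_rate >= 25:
        score += 3
    elif resp_rate >= 21:
        score += 2
    # Heart rate (beats/min)
    if heart_rate >= 130:
        score += 3
    elif heart_rate >= 110:
        score += 2
    elif heart_rate >= 100:
        score += 1
    # Systolic blood pressure (mmHg)
    if systolic_bp <= 90:
        score += 3
    elif systolic_bp <= 100:
        score += 2
    return score

def needs_alert(score: int, threshold: int = 4) -> bool:
    """Rule-based alerting: page staff once the score crosses the threshold."""
    return score >= threshold
```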

Study population

Clinical populations varied based on care type, patient characteristics, and healthcare behaviors targeted by CDSS. Studies reported evaluations among admitted inpatients (26 studies), surgical admissions (2 studies), ambulatory patients (1 study), and emergency department presentations (2 studies) (Table 2). Eleven studies evaluated CDSS within specific patient subgroups, based on patient age,52 confirmed or suspected conditions,31,45 or care already received during the same hospital admission.35–37,41,47,49,50,58 Target clinical populations were typically defined by care already received during the same hospital admission when evaluating the impacts of decision support on laboratory testing. For example, an intervention aimed at reducing unnecessary phlebotomy evaluated outcomes of patients who had already received blood type and screen testing within the past 3 days, to determine compatibility for blood transfusion.50 In another study, an interruptive alert recommending penicillin allergy testing was evaluated among patients treated with aztreonam, to improve the efficacy of broad-spectrum antibiotics.35

Table 2.

Characteristics of evaluation study designs

Author (year) | Target patient population | Hospital setting | Pre-CDSS timeframe | Post-CDSS timeframe
Observational/quasiexperimental studies
 Algaze et al (2016)45 Pediatric admissions with congenital or acquired heart disease, admitted to the cardiovascular intensive care unit Single hospital 12 months 7 months
 Bellodi et al (2017)46 Admissions to Cardiology and General Medicine wards Multiple hospitals (n = 3) 3 months 3 months
 Bond et al (2017)33 Adult inpatient admissions Multiple hospitals (n = 5) 2 years 2 years
 Bridges et al (2014)47 Adult admissions with an acute hepatitis profile test documented within the evaluation timeframe. Single hospital system 3 months 3 months
 Calloway et al (2013)34 All patient admissions Single hospital 12 months 12 months
 Chen et al (2018)35 Admissions with a documented penicillin allergy diagnosis and actively receiving aztreonam Single hospital 13 months 9 months
 Goodnough et al (2014a)37 Admissions to medical and surgical wards without a documented diagnosis of hemorrhage or bleeding at hospital discharge Single hospital 22 months 18 months
 Goodnough et al (2014b)38 Adult inpatient admissions; discharged from hospital Single hospital 28 months 36 months
 Horton et al (2020)28 Adult admissions with documented diagnosis of sepsis, selected acute wards Single hospital 12 months 12 months
 Ikoma et al (2021)39 Adult inpatient admissions with documented elective red blood cell transfusions Multiple hospitals (n = 2) 12 months 11 months
 Jun et al (2018)48 Inpatient and outpatient admission with a documented diagnosis of acute venous thromboembolism Single hospital 8 months 8 months
 Lecumberri et al (2011)30 Adult inpatient admissions Single hospital 6 months 2 years
 Majid et al (2019)31 Adult inpatient admissions with suspected infection Single hospital 18 months 18 months
 Nault et al (2017)36 Adult admissions receiving intravenous or oral antimicrobials, selected wards Single hospital 2 years 3 years
 Nikolic et al (2017)49 Inpatient admissions with documented stool cultures and parasitological stool studies Single hospital 11 months 11 months
 Pregnall et al (2022)54 Surgical admissions Single hospital 6 months 8 months
 Razavi et al (2014)41 Adult surgical admissions who underwent an isolated coronary artery bypass graft or surgical aortic valve replacement Multiple hospitals (n not stated) 12 months 12 months
 Saad et al (2018)55 Adult inpatients admitted for more than 24 h Single hospital 4 months 4 months
 Saag et al (2016)42 Adult inpatient admissions who received red blood cell transfusion Single hospital 12 months 9 months
 Strockbine et al (2020)50 Inpatient admissions Single hospital 3 months 3 months
 Swart et al (2020)43 Admissions to hospital hematology unit Single hospital 28 months 32 months
 Tawadrous et al (2019)51 Adult emergency department presentations Multiple hospitals (n = 2) 6 months 6 months
 Touchard et al (2021)56 Inpatient admissions Single hospital 8 months 8 months
 Vermeulen et al (2014)53 Inpatient admissions hospitalized for more than 24 h, selected wards Multiple hospitals (n = 2) 20 weeks 20 weeks
 Zuckerberg et al (2015)44 Surgical admissions Single hospital 51 months 8 months
Randomized controlled trials
 Gallagher et al (2016)52a Older patients (65+ years) admitted to medical and surgical wards from the emergency department Single hospital 12 months
 Kharbanda et al (2021)29b Pediatric emergency department presentations with abdominal pain Multiple hospitals (n = 17)
  • 6–9 months (pre-CDSS)

  • 12 months (post-CDSS)

 Murphy et al (2021)40a Adult inpatient admissions Single hospital 9 months
 Schroeder et al (2020)32a,c Maternity admissions receiving continuous electronic fetal monitoring during labor Multiple hospitals (n not stated) 42 months
a

Patient-level randomization to control (no-CDSS) and intervention (CDSS) trial arms.

b

Cluster randomized controlled trial.

c

Evaluated costs and consequences over a 2-year time horizon.

Study design

Observational or “quasiexperimental” designs were the most common study designs used for data collection (25 studies, Table 2). Among observational studies, 20 collected data from a single hospital28,30,31,34–38,42–45,47–50,54–56 and 5 collected data from multiple hospitals.33,39,41,46,51,53 Data collection for all observational studies was described as retrospective and extracted from existing hospital data sources. The remaining studies evaluated outcomes using data collected from randomized controlled trials; randomization to the intervention was applied at either the patient32,40,52 or hospital level.29

Evaluation timeframes were generally balanced between pre- and postimplementation phases. Post-CDSS timeframes were defined as 3 months,46,47,50 between 4 and 12 months,28,29,34,35,39–42,44,45,48,49,51–56 or more than 1 year.30–33,36–38,43 One study used results from a randomized controlled trial to estimate long-term costs and outcomes over a 2-year time horizon.32

Study designs were further categorized according to how costs and outcomes were reported (Table 3). Four studies were classified as cost-effectiveness analyses,30,31,52,53 where results were reported as an incremental cost-effectiveness ratio (ICER). The remaining studies were classified as cost-consequence analyses, where costs and outcomes were reported separately. Thirteen cost-consequence analyses summarized the impact of CDSS by reporting costs and outcomes by study phase.28,29,32–36,45–47,50,51,55 Twelve studies reported projected cost savings based on observed changes in the primary outcome.37–44,48,49,54,56
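For the cost-effectiveness analyses above, the ICER takes its standard form: the difference in mean costs divided by the difference in mean effects between the CDSS-based intervention and usual care,

```latex
\mathrm{ICER} \;=\; \frac{\bar{C}_{\text{CDSS}} - \bar{C}_{\text{usual care}}}{\bar{E}_{\text{CDSS}} - \bar{E}_{\text{usual care}}}
```

where $\bar{C}$ and $\bar{E}$ denote mean per-patient costs and effects (for example, nontrivial adverse drug reactions prevented). A cost-consequence analysis instead reports the numerator and denominator terms separately.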

Table 3.

Evaluation outcomes and methods

Author (year) | Selection and measurement of outcomes | Measurement and valuation of resources | Primary analysis methods | Characterization of uncertainty
Cost-effectiveness analysis
 Gallagher et al (2016)52 Nontrivial adverse drug reactions Unit costs for CDSS software and training, staff time and inpatient days. ICER (cost per nontrivial ADR prevented) Cost-effectiveness acceptability curve; 95% uncertainty intervals
 Lecumberri et al (2011)30 VTE incidence per 1000 hospitalized patients, prophylaxis use Costs of CDSS software development and annual maintenance; direct medical costs of treatment during initial hospitalization and follow-up (if applicable) ICER (cost per VTE per 1000 patients) 95% uncertainty intervals for changes in outcomes; 1-way sensitivity analysis (cost-effectiveness)
 Majid et al (2019)31 In-hospital death, 28-day hospital free days, time to first antibiotics Unit costs for staff time, CDSS establishment and maintenance; total per-patient hospital charges ICER (cost per % decrease in in-hospital death) 95% uncertainty intervals
 Vermeulen et al (2014)53 Adverse drug events, medication errors, length of stay Unit costs for staff, CDSS software/equipment and ongoing maintenance, patient bed days ICER (cost per medication error averted, cost per potential adverse drug event); interrupted time series 95% confidence intervals (outcomes); 1-way sensitivity analysis (cost-effectiveness)
Cost-consequence analysis: results reported by evaluation phase
 Algaze et al (2016)45 Laboratory utilization, in-hospital death, hospital length of stay, ICU length of stay Unit cost per test Group based hypothesis testing 95% confidence intervals (outcomes only)
 Bellodi et al (2017)46 Total tests performed, user satisfaction Laboratory testing fees based on regional reimbursement model Descriptive statistics Not reported
 Bond et al (2017)33 Mortality (standardized mortality ratio); healthcare-associated C. difficile rates (cases per 10 000 OBD); length of stay (days); antimicrobial use (DDD per 10 000 OBD) Total monthly antimicrobial costs Interrupted time series 95% uncertainty intervals for costs and outcomes (not reported for length of stay)
 Bridges et al (2014)47 Duplicated AHP tests Unit cost per duplicated test Group-based hypothesis testing Not reported
 Calloway et al (2013)34 Documented pharmacy interventions Time-based staffing costs by intervention type Descriptive statistics Not reported
 Chen et al (2018)35 Total patients tested; consultations; time from admission to testing; antibiotic use (per patient day, monthly hospital use) Per-patient antibiotic costs based on unit costs and number of inpatient days receiving antibiotic therapy Group-based hypothesis testing Not reported
 Horton et al (2020)28 Length of stay, in-hospital death Direct costs overall and by major cost component Interrupted time series 95% confidence intervals
 Kharbanda et al (2021)29 CT/ultrasound imaging; related complications (perforation, negative appendectomies, missed appendicitis); ED length of stay; disposition; 7-day intensive care admissions Total costs of care measured using billing codes, estimated using Total Care Relative Resource Value.a Group-based hypothesis testing 95% confidence intervals
 Nault et al (2017)36 Length of stay; antimicrobial use (DDD per 1000 inpatient days); nonconcordance with guidelines Monthly average prices of antimicrobials Interrupted time series Model-based standard errors
 Saad et al (2018)55 Use of stress ulcer prophylaxis, 30-day complications (GI bleeding, nosocomial pneumonia, C. difficile diarrhea) Per-protocol unit costs for treatment based on route of administration and length of therapy Group based hypothesis testing Not reported
 Schroeder et al (2021)32 Poor neonatal outcomes (composite measure); developmental assessment at 2 years of age; health-related quality of life Unit costs for birth-related and follow-up related care, covering consumables, staff time, hospital (inpatient/outpatient/emergency) and out-of-hospital (GP/community) service use. Trial based economic evaluation 95% confidence intervals
 Strockbine et al (2020)50 Appropriate/inappropriate tests ordered Time-driven activity-based costing model Descriptive statistics; group-based hypothesis testing Not reported
 Tawadrous et al (2019)51 Tests ordered, blood product transfusion rates Reimbursement costs for laboratory testing Descriptive statistics Median and interquartile range
Cost-consequence analysis: outcomes reported by evaluation phase with projected cost savings
 Goodnough et al (2014a)37 Blood transfusions (nonconcordance); Blood components transfused per 1000 discharges Unit costs for red blood cells Group based hypothesis testing Not reported
 Goodnough et al (2014b)38 RBC units per 100 days at risk; mortality (%); 30-day readmissions (%); length of stay (days); hemoglobin levels at admission and discharge; percentage of patients transfused Unit costs for red blood cells Group based hypothesis testing; interrupted time series Not reported
 Ikoma et al (2021)39 RBC orders in agreement with evidence-based guidelines (%); multiunit RBC orders (%) Activity-based costing model for red blood cell transfusion Group based hypothesis testing Not reported
 Jun et al (2019)48 Thrombophilia testing rates, PE/DVT cases (monthly) Unit costs for thrombophilia testing Group based hypothesis testing Median (IQR) by study phase
 Murphy et al (2021)40 Platelet orders, overall and based on CDSS-recommendations Unit costs for platelet transfusions Group based hypothesis testing Not reported
 Nikolic et al (2017)49 Tests ordered within and outside of CDSS-recommended timeframes; positive test results Internal costs for consumables and staff time Descriptive statistics Not reported
 Pregnall et al (2022)54 Percentage of cases administered high-dose sugammadex (weekly) Unit costs for sugammadex Interrupted time series Model-based confidence intervalsb
 Razavi et al (2014)41 Packed red blood cells transfused; length of stay (postoperative, ICU); pretransfusion hemoglobin levels; surgical site infection; mortality (in-hospital, 30-day) Unit costs for blood products and pretransfusion workup. Group based hypothesis testing Not reported
 Saag et al (2016)42 Red blood cell transfusion orders outside of CDSS-recommended guidelines; length of stay Unit cost of red blood cells Statistical process control; descriptive statistics Not reported
 Swart et al (2020)43 Total volumes of red blood cells and platelets used (monthly) Unit costs for staff time, red blood cells and platelets Interrupted time series Model-based confidence intervalsb
 Touchard et al (2021)56 FCM and ICS orders dispensed, biological iron deficiency check-ups, packed blood cell units dispensed; totals per 10 000 hospitalizations Unit costs for iron supplements dispensed Interrupted time series Mean (SD) by study phase; model-based confidence intervalsb
 Zuckerberg et al (2015)44 Red blood cell units transfused; percentage of patients transfused Unit costs for red blood cells; activity-based costing model for red blood cell transfusion Group based hypothesis testing Mean (SD) by study phase

Abbreviations: ICER: incremental cost-effectiveness ratio; ADR: adverse drug reaction; DDD: defined daily dose; OBD: occupied bed days.

b Reported for consequences only.

Outcome measurement

Outcome measures varied by clinical application and emphasized the direct impacts of decisions informed by CDSS (Table 3). Outcomes evaluated by laboratory testing and blood product management studies reflected targeted resource use in line with system-based feedback, such as numbers of tests ordered45–47,50,51 and red blood cell transfusions outside of evidence-based guidelines.39,41,42 In contrast, patient safety applications often evaluated patient outcomes alongside changes in targeted resource use, to describe improvements in care or to monitor unintended consequences of behavior changes prompted by CDSS. Patient outcomes included mortality,28,31,33,38,41,45 length of stay,28,29,33,36,38,41,42,45,53 complications,29,30,33,41,48,55 and drug-related adverse events.52,53 Six studies measured a single outcome.34,40,47,50,52,54

Measurement and valuation of resources

All studies stated or implied resource valuation from the hospital perspective; however, the resources included and the costing methods used were key sources of heterogeneity across studies (Table 3). Data sources used to value resources comprised unit costs,32,34,35,37,38,40,43–45,47–49,52–56 activity-based funding or reimbursement models,29,39,44,46,50,51 and total hospital charges per patient admission.28,30,31,33,36 Unit costs defined the value of staff time31,32,34,41,49,52,53 and the direct cost of medical consumables32,35,37,38,40–49,51,54–56 before and after CDSS was introduced. Eight studies estimated changes to costs based on medical consumables only.35,37,40,42,45,47,48,55

Four studies included the costs of implementing CDSS.30,31,52,53 Implementation costs were broadly described in terms of initial establishment and ongoing maintenance. Establishment costs described the initial setup of CDSS in existing digital systems, including the purchase of software and hardware. Three studies accounted for ongoing maintenance costs on a monthly31 or annual30,53 basis.
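A standard way to place one-off establishment costs on the same footing as recurring maintenance costs is to annuitize them over the expected useful life of the system using the equivalent annual cost formula (a general costing convention rather than a method reported by the included studies):

```latex
\mathrm{EAC} \;=\; \frac{K \cdot r}{1 - (1 + r)^{-n}}
```

where K is the upfront establishment cost, r is the annual discount rate, and n is the expected useful life of the CDSS in years.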

Analysis

Nine studies used an interrupted time series design to account for background trends when modeling outcomes.28,30,33,36,38,43,53,54,56 Other methods reported among cost-consequence studies were group-based hypothesis testing29,35,37,39–41,44,45,47,48,50,55 and descriptive statistics.34,42,46,49,51 Evaluations informed by descriptive statistics often reported results as percentages/proportions or using a standardized population-based denominator; for example, events per 1000 patients30,37 or per 10 000 occupied bed days.33,46
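As an illustration of the segmented regression that underlies a basic interrupted time series analysis, the sketch below fits pre/post level and slope changes to a simulated monthly cost series; the data, change point, and effect sizes are entirely hypothetical and not drawn from any included study.

```python
import numpy as np

def segmented_regression(y, change_point):
    """Fit a basic interrupted time series (segmented regression) model:
        y_t = b0 + b1*t + b2*post_t + b3*(t - change_point)*post_t + e_t
    where post_t indicates the post-implementation phase.
    Returns [b0 (baseline level), b1 (baseline trend),
             b2 (level change), b3 (trend change)]."""
    t = np.arange(len(y), dtype=float)
    post = (t >= change_point).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - change_point) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated monthly cost series: gentle upward trend, then a level drop
# of about 20 units after CDSS go-live at month 12 (illustrative values).
rng = np.random.default_rng(0)
months = np.arange(24)
y = 100 + 0.5 * months - 20 * (months >= 12) + rng.normal(0, 1, 24)

b0, b1, level_change, slope_change = segmented_regression(y, change_point=12)
```

Accounting for the background trend in this way avoids attributing the pre-existing slope to the intervention, which is the main weakness of a naive pre/post comparison of means.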

Characterization of uncertainty

Thirteen studies reported uncertainty associated with cost and outcome estimates. Reporting of uncertainty in cost-consequence studies took the form of medians with interquartile ranges or means with standard deviations,44,48,51 95% confidence intervals,28,29,32,33,45 and model-based standard errors.36 All cost-effectiveness studies considered the impact of uncertainty on expected costs and outcomes using 1-way sensitivity analysis30,31,53 or probabilistic sensitivity analysis.52
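The probabilistic approach can be sketched with a minimal Monte Carlo simulation that samples incremental costs and effects and traces a cost-effectiveness acceptability curve (CEAC); all distributions and values below are hypothetical, not taken from any included study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical joint uncertainty in incremental costs and effects of a CDSS
d_cost = rng.normal(50_000, 20_000, n)   # incremental cost per year ($)
d_effect = rng.normal(15.0, 5.0, n)      # incremental effect, e.g. adverse events averted

# Point-estimate ICER from the simulated distributions
icer = d_cost.mean() / d_effect.mean()

# CEAC: probability that net monetary benefit (WTP * effect - cost)
# is positive at each willingness-to-pay (WTP) threshold
wtp_grid = np.linspace(0, 10_000, 101)
ceac = np.array([(wtp * d_effect - d_cost > 0).mean() for wtp in wtp_grid])
```

Reporting the full CEAC, rather than a single ICER, shows decision-makers how the probability that the CDSS is cost-effective changes with the value they place on the outcome.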

Reporting

Studies consistently reported the context in which CDSS was implemented, comparator definitions, and methods for evaluating changes to costs and outcomes (Figure 2). All studies adequately described the study setting and location (CHEERS item 6), the interventions being compared (item 7), and the evaluation time horizon (item 9). Most studies described the target population (item 5, n = 28). The perspective adopted for evaluating costs was fully described by 8 studies (item 8); the remaining studies provided partial information about the costing perspective. Details on how resources were valued (item 14, n = 28) and methods to estimate costs and consequences (item 17, n = 27) were reported by most studies.

Figure 2.


Study reporting based on the Consolidated Health Economic Evaluation Reporting Standards 2022 checklist. For all methods items (Items 4–21), study reporting was classified as fully reported (Yes), partially reported (Partial), not reported (No/not stated), or not applicable.

Reporting on costing methods and the management of analysis-based uncertainty varied between studies. The dates of estimated resource quantities and unit costs, the currency, and the year of conversion were described in full by 10 studies (CHEERS item 15); 9 studies provided partial information, omitting the year of conversion. Six studies reported cost estimates for specific patient subgroups (item 18), and 4 studies reported distributional effects for characterizing uncertainty (item 19). Twelve of the included studies described methods to characterize analysis-based uncertainty (item 20). Only 2 studies undertook economic modeling, both describing the rationale for and structure of the model.

Recommendations

Drawing on the approaches summarized above, we propose 7 recommendations to support the conduct and reporting of economic evaluations of CDSS-based interventions (Table 4). Our recommendations represent actionable steps towards conducting more detailed evaluations of CDSS and may support more in-depth appraisals of future published evaluations.

Table 4.

Recommendations for future economic evaluations on CDSS

Recommendation Description
1 Report economic evaluations following the recommendations of the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement27
2 Study designs should consider evaluating CDSS implementation across multiple hospitals/sites where possible.
3 When randomization to CDSS-based intervention is not possible, consider quasiexperimental designs that account for appropriate confounders, adjust for secular trends and account for differences between hospitals (if applicable).
4 When valuing resources, consider the direct costs of CDSS implementation, the downstream hospital resource use implications of CDSS adherence, and related opportunity costs.
5 Measure relevant patient- and hospital-centered outcomes associated with clinical pathways targeted by CDSS. Selected measures should comprise health and nonhealth outcomes.
6 Evaluate costs and consequences over the full life cycle of CDSS, to monitor short- and long-term impacts across selected measures.
7 Where appropriate, consider heterogeneity and uncertainty related to measured costs and consequences, including data variability and impacts in specific patient subgroups.

DISCUSSION AND CONCLUSION

Health economic evidence has become increasingly valued as health systems seek to improve patient safety and quality within constrained budgets.22 Previous reviews have highlighted the need for robust evidence on the costs and consequences of CDSS-based interventions to inform investment decisions, including cost-effectiveness studies.2,11,24 Our review indicates that this gap has started to be addressed for selected hospital service areas, using a range of evaluation approaches. Notwithstanding, we identified several sources of heterogeneity related to study conduct and reporting that might affect the generalizability of published findings. Similar sources of variation have been identified among studies evaluating the implementation of electronic medical records,59 suggesting a lack of consensus on economic evaluation in digital health settings. Variations in evaluation methods and reporting have been cited as key barriers affecting the accessibility of economic evidence by health policy-makers.60

Assessment of study reporting against the CHEERS checklist indicated that elements related to study design and intervention/comparator characteristics were consistently reported. Several items specific to evaluation methods, including the characterization of heterogeneity and uncertainty were inconsistently reported. Appropriate management of heterogeneity and uncertainty is crucial to improve the generalizability of economic evaluations,61 and should be reflected by the appropriate choice of statistical methods. Given current inconsistencies in reporting, following CHEERS guidance is an easily implemented step that should be considered when publishing future evaluations.

Economic evaluations of healthcare interventions are resource-intensive, requiring analysts to consider trade-offs between the feasibility and limitations of different study designs. Randomized controlled trials are widely regarded as the gold standard for evaluating clinical effectiveness. In hospital settings, however, randomization may be impractical due to limitations such as inadequate resourcing, logistical constraints, and potential ethical implications.62 Quasiexperimental study designs offer a pragmatic alternative for evaluating the impact of interventions as part of everyday workflows. Well-designed quasiexperimental studies based on routinely collected data can provide valuable insights on changes related to an intervention and, in some cases, provide preliminary evidence to inform larger-scale trials.63 While not used widely by studies identified in our review, we recommend using interrupted time-series methods to appropriately account for common features of data collected under quasiexperimental designs.64 Multisite evaluations should also be considered to strengthen the external validity of results where feasible.65

Costing information emphasized accounting costs directly impacted by CDSS adherence. The use of accounting costs reflects direct expenditure incurred by hospitals; however, evaluations that rely on accounting costs alone are likely to underestimate the true cost of trialed interventions.66 To capture the broader economic implications of CDSS, the opportunity costs of implementing CDSS should also be considered. Irrespective of the clinical application, resources required to establish and maintain CDSS over its full life cycle should be valued, as investment is likely to redirect resources away from competing initiatives throughout that life cycle. Resources related to CDSS implementation should include both the costs incurred from acquiring necessary technology and implementation-related activities such as workforce education and training. For the downstream consequences of CDSS adherence, the economic value of hospital resources released should also be considered. For example, patient safety initiatives may reduce hospital length of stay due to a reduction in adverse events. The economic value of hospital bed days released for use by other patients should therefore be captured, from either an accounting or willingness-to-pay perspective.67

Outcomes measured were appropriate for monitoring behavior changes targeted by CDSS, with similar outcomes analyzed by studies within the same clinical area. This narrow scope fails to recognize CDSS as complex interventions, including their potential impact on downstream clinical, process, and patient-reported outcomes.68 When designing evaluations of CDSS, we recommend collecting data on outcomes directly impacted by decisions prompted by CDSS, as well as secondary outcomes that may be affected by the subsequent redistribution of healthcare resources. Adopting a broader scope when practicable will help to capture the full range of intended and unintended consequences of introducing CDSS into existing clinical workflows. For example, interruptive alerts may successfully draw attention to the target clinical concern, but this may carry an opportunity cost by diverting healthcare providers' attention away from other important tasks that influence patient care. Reporting on process measures would further strengthen the current evidence base, helping to explain contextual factors that may have influenced CDSS acceptance and fidelity, and subsequent impacts on intervention effectiveness.68 Reporting process outcomes may also help to reconcile mixed reports on CDSS adherence and inform the future development of effective implementation strategies. This is particularly important, as identifying barriers to behavior change and employing strategies to address them has been highlighted as key to successful CDSS uptake.17

Adapting to context and sustaining interventions is essential for successful implementation in complex adaptive systems. The ability to periodically monitor outcomes to identify challenges and changes to intervention effectiveness over the short and long term is a key facilitator of such sustainability.69 More than half of the studies in our review evaluated costs and consequences over short time horizons, up to 12 months after CDSS implementation. Short time horizons limit generalizability, as extrapolating from them implicitly assumes that estimated benefits are maintained over time.

As a counterexample, 1 study excluded from our review (due to lack of pre-CDSS period) evaluated changes in catheter-associated urinary tract infections over time after introducing clinical decision support as part of a bundled intervention.58 Their evaluation indicated that infection rates decreased during the first 2 years of the program, before steadily increasing to preintervention levels over the next 12 months. Study designs should incorporate longer timeframes that capture the full life cycle of proposed CDSS, with changes over time analyzed using appropriate statistical methods.14 Doing so will provide decision-makers with more detailed evidence about the return on investment for CDSS across different clinical applications.

Our search strategy was designed to broadly capture evaluation approaches reported in the literature, where both costs and consequences were measured in real-world implementation settings. A limitation of our search strategy was its restriction to peer-reviewed journal articles available in English. In a rapidly evolving area of research, these inclusion criteria may underestimate the number of published evaluations by excluding, for example, conference proceedings, commissioned reports, and preprints. Furthermore, our search strategy did not return any cohort modeling or microsimulation studies, which are commonly used in health economics to estimate changes in costs and consequences over long time horizons at the population level.70 A possible explanation is that relatively few studies were planned as formal economic evaluations, despite meeting our broad definition. Finally, all studies indicated benefits from adopting CDSS when compared with usual processes of care. While it was encouraging that findings were typically favorable, we were unable to draw overall conclusions due to several sources of heterogeneity in study designs, analysis, and reporting. The extent to which findings have been influenced by publication bias also remains uncertain, as unsuccessful implementations of CDSS are less likely to have been published.71

Our recommendations for future evaluations are intended to improve consistency in study conduct and reporting. Improving consistency across studies will enable more in-depth examination of findings, including major cost drivers and subsequent returns on investment. Whilst not all recommendations may be feasible due to constrained study resources, guidance provided by the CHEERS checklist will help others understand important details and assumptions underpinning study findings.


Contributor Information

Nicole M White, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Hannah E Carter, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Sanjeewa Kularatna, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

David N Borg, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

David C Brain, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Amina Tariq, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Bridget Abell, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Robin Blythe, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia.

Steven M McPhail, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, Queensland, Australia; Digital Health and Informatics Directorate, Metro South Health, Brisbane, Queensland, Australia.

FUNDING

The Digital Health Cooperative Research Centre (“DHCRC”); DHCRC is funded under the Commonwealth's Cooperative Research Centres (CRC) Program; an NHMRC administered fellowship grant number 1181138 (to SMM).

AUTHOR CONTRIBUTIONS

NMW led the review process. All authors contributed to search strategy development and title and abstract screening. Data extraction was completed by NMW, HEC, SMK, DNB, and DCB with oversight by SMM. The manuscript was drafted by NMW, DNB, and SMM. All authors provided feedback on the draft manuscript and approved the final version before submission.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

None declared.

DATA AVAILABILITY

The data underlying this article are available in the article and in its online Supplementary Material.

REFERENCES

  • 1. Pype P, Mertens F, Helewaut F, Krystallidou D.. Healthcare teams as complex adaptive systems: understanding team behaviour through team members’ perception of interpersonal interaction. BMC Health Serv Res 2018; 18 (1): 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Black AD, Car J, Pagliari C, et al. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med 2011; 8 (1): e1000387. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Subbe CP, Tellier G, Barach P.. Impact of electronic health records on predefined safety outcomes in patients admitted to hospital: a scoping review. BMJ Open 2021; 11 (1): e047446. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Bloom BM, Pott J, Thomas S, Gaunt DR, Hughes TC.. Usability of electronic health record systems in UK EDs. Emerg Med J 2021; 38 (6): 410–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Kruse CS, Kristof C, Jones B, Mitchell E, Martinez A.. Barriers to electronic health record adoption: a systematic literature review. J Med Syst 2016; 40 (12): 1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Keasberry J, Scott IA, Sullivan C, Staib A, Ashby R.. Going digital: a narrative overview of the clinical and organisational impacts of eHealth technologies in hospital practice. Aust Health Rev 2017; 41 (6): 646–64. [DOI] [PubMed] [Google Scholar]
  • 7. World Health Organization. Classification of digital health interventions. 2018. Contract No.: WHO/RHR/18.06. Licence: CC BY-NC-SA 3.0 IGO.
  • 8. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI.. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020; 3 (1): 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Wright A, Sittig DF, Ash JS, et al. Development and evaluation of a comprehensive clinical decision support taxonomy: comparison of front-end tools in commercial and internally developed electronic health record systems. J Am Med Inform Assoc 2011; 18 (3): 232–42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Bennett P, Hardiker NR.. The use of computerized clinical decision support systems in emergency care: a substantive review of the literature. J Am Med Inform Assoc 2017; 24 (3): 655–68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Fillmore CL, Bray BE, Kawamoto K.. Systematic review of clinical decision support interventions with potential for inpatient cost reduction. BMC Med Inform Decis Mak 2013; 13 (1): 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. O'Connor P, Sperl‐Hillen J, Fazio C, Averbeck B, Rank B, Margolis K.. Outpatient diabetes clinical decision support: current status and future directions. Diabet Med 2016; 33 (6): 734–41. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Blythe R, Parsons R, White NM, Cook D, McPhail S.. A scoping review of real-time automated clinical deterioration alerts and evidence of impacts on hospitalised patient outcomes. BMJ Qual Saf 2022; 31 (10): 725–34. [DOI] [PubMed] [Google Scholar]
  • 14. Murphy EV. Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success. Yale J Biol Med 2014; 87 (2): 187. [PMC free article] [PubMed] [Google Scholar]
  • 15. Kwan JL, Lo L, Ferguson J, et al. Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ 2020; 370: m3216. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Ranji SR, Rennke S, Wachter RM.. Computerised provider order entry combined with clinical decision support systems to improve medication safety: a narrative review. BMJ Qual Saf 2014; 23 (9): 773–80. [DOI] [PubMed] [Google Scholar]
  • 17. Kouri A, Yamada J, Lam Shin Cheung J, Van de Velde S, Gupta S.. Do providers use computerized clinical decision support systems? A systematic review and meta-regression of clinical decision support uptake. Implement Sci 2022; 17 (1): 1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Van de Velde S, Heselmans A, Delvaux N, et al. A systematic review of trials evaluating success factors of interventions with computerised clinical decision support. Implement Sci 2018; 13 (1): 1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Moxey A, Robertson J, Newby D, Hains I, Williamson M, Pearson S-A.. Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc 2010; 17 (1): 25–33. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ 2013; 346: f657. [DOI] [PubMed] [Google Scholar]
  • 21. Murphy ZR, Wang J, Boland MV. Association of electronic health record use above meaningful use thresholds with hospital quality and safety outcomes. JAMA Netw Open 2020; 3 (9): e2012529.
  • 22. Drummond MF, Sculpher MJ, Claxton K, Stoddart GL, Torrance GW. Methods for the Economic Evaluation of Health Care Programmes. Oxford: Oxford University Press; 2015.
  • 23. Curtis CE, Al Bahar F, Marriott JF. The effectiveness of computerised decision support on antibiotic use in hospitals: a systematic review. PLoS One 2017; 12 (8): e0183062.
  • 24. Lewkowicz D, Wohlbrandt A, Boettinger E. Economic impact of clinical decision support interventions based on electronic health records. BMC Health Serv Res 2020; 20 (1): 1–12.
  • 25. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018; 169 (7): 467–73.
  • 26. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev 2016; 5 (1): 1–10.
  • 27. Husereau D, Drummond M, Augustovski F, et al. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. Int J Technol Assess Health Care 2022; 38 (1).
  • 28. Horton DJ, Graves KK, Kukhareva PV, et al. Modified early warning score-based clinical decision support: cost impact and clinical outcomes in sepsis. JAMIA Open 2020; 3 (2): 261–8.
  • 29. Kharbanda AB, Vazquez-Benitez G, Ballard DW, et al.; Clinical Research on Emergency Services and Treatments Network (CREST) and the Critical Care Research Center, HealthPartners Institute. Effect of clinical decision support on diagnostic imaging for pediatric appendicitis: a cluster randomized trial. JAMA Netw Open 2021; 4 (2): e2036344.
  • 30. Lecumberri R, Panizo E, Gomez-Guiu A, et al. Economic impact of an electronic alert system to prevent venous thromboembolism in hospitalised patients. J Thromb Haemost 2011; 9 (6): 1108–15.
  • 31. Majid A, Arain E, Ye C, et al. Patient outcomes and cost-effectiveness of a sepsis care quality improvement program in a health system. Crit Care Med 2019; 47 (10): 1371.
  • 32. Schroeder E, Yang M, Brocklehurst P, Linsell L, Rivero-Arias O. Economic evaluation of computerised interpretation of fetal heart rate during labour: a cost-consequence analysis alongside the INFANT study. Arch Dis Child Fetal Neonatal Ed 2021; 106 (2): 143–8.
  • 33. Bond SE, Chubaty AJ, Adhikari S, et al. Outcomes of multisite antimicrobial stewardship programme implementation with a shared clinical decision support system. J Antimicrob Chemother 2017; 72 (7): 2110–8.
  • 34. Calloway S, Akilo HA, Bierman K. Impact of a clinical decision support system on pharmacy clinical interventions, documentation efforts, and costs. Hosp Pharm 2013; 48 (9): 744–52.
  • 35. Chen JR, Tarver SA, Alvarez KS, Wei W, Khan DA. Improving aztreonam stewardship and cost through a penicillin allergy testing clinical guideline. Open Forum Infect Dis 2018; 5 (6).
  • 36. Nault V, Pepin J, Beaudoin M, Perron J, Moutquin J-M, Valiquette L. Sustained impact of a computer-assisted antimicrobial stewardship intervention on antimicrobial use and length of stay. J Antimicrob Chemother 2017; 72 (3): 933–40.
  • 37. Goodnough LT, Shieh L, Hadhazy E, Cheng N, Khari P, Maggio P. Improved blood utilization using real-time clinical decision support. Transfusion 2014; 54 (5): 1358–65.
  • 38. Goodnough LT, Maggio P, Hadhazy E, et al. Restrictive blood transfusion practices are associated with improved patient outcomes. Transfusion 2014; 54 (10 Pt 2): 2753–9.
  • 39. Ikoma S, Furukawa M, Busuttil A, et al. Optimizing inpatient blood utilization using real-time clinical decision support. Appl Clin Inform 2021; 12 (1): 49–56.
  • 40. Murphy C, Mou E, Pang E, Shieh L, Hom J, Shah N. A randomized study of a best practice alert for platelet transfusions. Vox Sang 2022; 117 (1): 87–93.
  • 41. Razavi SA, Carter AB, Puskas JD, Gregg SR, Aziz IF, Buchman TG. Reduced red blood cell transfusion in cardiothoracic surgery after implementation of a novel clinical decision support tool. J Am Coll Surg 2014; 219 (5): 1028–36.
  • 42. Saag HS, Lajam CM, Jones S, et al. Reducing liberal red blood cell transfusions at an academic medical center. Transfusion 2017; 57 (4): 959–64.
  • 43. Swart N, Morris S, Murphy MF. Economic value of clinical decision support allied to direct data feedback to clinicians: blood usage in haematology. Vox Sang 2020; 115 (4): 293–302.
  • 44. Zuckerberg GS, Scott AV, Wasey JO, et al. Efficacy of education followed by computerized provider order entry with clinician decision support to reduce red blood cell utilization. Transfusion 2015; 55 (7): 1628–36.
  • 45. Algaze CA, Wood M, Pageler NM, Sharek PJ, Longhurst CA, Shin AY. Use of a checklist and clinical decision support tool reduces laboratory use and improves cost. Pediatrics 2016; 137 (1).
  • 46. Bellodi E, Vagnoni E, Bonvento B, Lamma E. Economic and organizational impact of a clinical decision support system on laboratory test ordering. BMC Med Inform Decis Mak 2017; 17 (1): 1–9.
  • 47. Bridges SA, Papa L, Norris AE, Chase SK. Duplicated laboratory tests: evaluation of a computerized alert intervention abstract. J Healthc Qual 2014; 36 (3): 46–53.
  • 48. Jun T, Kwang H, Mou E, et al. An electronic best practice alert based on choosing wisely guidelines reduces thrombophilia testing in the outpatient setting. J Gen Intern Med 2019; 34 (1): 29–30.
  • 49. Nikolic D, Richter S, Asamoto K, Wyllie R, Tuttle R, Procop G. Implementation of a clinical decision support tool for stool cultures and parasitological studies in hospitalized patients. J Clin Microbiol 2017; 55 (12): 3350–4.
  • 50. Strockbine VL, Gehrie EA, Zhou QP, Guzzetta CE. Reducing unnecessary phlebotomy testing using a clinical decision support system. J Healthc Qual 2020; 42 (2): 98–105.
  • 51. Tawadrous D, Detombe S, Thompson D, Columbus M, Van Aarsen K, Skoretz T. Reducing unnecessary testing in the emergency department: the case for INR and aPTT. Can J Emerg Med 2020; 22 (4): 534–41.
  • 52. Gallagher J, O'Sullivan D, McCarthy S, et al. Structured pharmacist review of medication in older hospitalised patients: a cost-effectiveness analysis. Drugs Aging 2016; 33 (4): 285–94.
  • 53. Vermeulen K, van Doormaal JE, Zaal RJ, et al. Cost-effectiveness of an electronic medication ordering system (CPOE/CDSS) in hospitalized patients. Int J Med Inform 2014; 83 (8): 572–80.
  • 54. Pregnall AM, Gupta RK, Clifton JC, Wanderer JP. Use of provider education, intra-operative decision support, and an email-feedback system in improving compliance with sugammadex dosage guideline and reducing drug expenditures. J Clin Anesth 2022; 77: 110627.
  • 55. Saad EJ, Bedini M, Becerra AF, et al. Benefit of an electronic medical record-based alarm in the optimization of stress ulcer prophylaxis. Gastroenterol Hepatol 2018; 41 (7): 432–9.
  • 56. Touchard J, Perrin G, Berdot S, Pouchot J, Loustalot MC, Sabatier B. Effects of a multifaceted intervention to promote the use of intravenous iron sucrose complex instead of ferric carboxymaltose in patients admitted for more than 24 h. Eur J Clin Pharmacol 2021; 77 (2): 189–95.
  • 57. Chong J, Curtain C, Gad F, et al. Development and implementation of venous thromboembolism stewardship across a hospital network. Int J Med Inform 2021; 155: 104575.
  • 58. Sutherland T, Beloff J, McGrath C, et al. A single-center multidisciplinary initiative to reduce catheter-associated urinary tract infection rates: quality and financial implications. Health Care Manag (Frederick) 2015; 34 (3): 218–24.
  • 59. Nguyen K-H, Wright C, Simpson D, Woods L, Comans T, Sullivan C. Economic evaluation and analyses of hospital-based electronic medical records (EMRs): a scoping review of international literature. NPJ Digit Med 2022; 5 (1): 1–16.
  • 60. Merlo G, Page K, Ratcliffe J, Halton K, Graves N. Bridging the gap: exploring the barriers to using economic evidence in healthcare decision making and strategies for improving uptake. Appl Health Econ Health Policy 2015; 13 (3): 303–9.
  • 61. Briggs A, Sculpher M, Buxton M. Uncertainty in the economic evaluation of health care technologies: the role of sensitivity analysis. Health Econ 1994; 3 (2): 95–104.
  • 62. Vedelø TW, Lomborg K. Reported challenges in nurse-led randomised controlled trials: an integrative review of the literature. Scand J Caring Sci 2011; 25 (1): 194–200.
  • 63. Bärnighausen T, Tugwell P, Røttingen J-A, et al. Quasi-experimental study designs series—paper 4: uses and value. J Clin Epidemiol 2017; 89: 21–9.
  • 64. Kontopantelis E, Doran T, Springate DA, Buchan I, Reeves D. Regression based quasi-experimental approach when randomisation is not an option: interrupted time series analysis. BMJ 2015; 350: h2750.
  • 65. Maciejewski ML. Quasi-experimental design. Biostat Epidemiol 2020; 4 (1): 38–47.
  • 66. Page K, Graves N, Halton K, Barnett A. Humans, 'things' and space: costing hospital infection control interventions. J Hosp Infect 2013; 84 (3): 200–5.
  • 67. Page K, Barnett AG, Graves N. What is a hospital bed day worth? A contingent valuation study of hospital Chief Executive Officers. BMC Health Serv Res 2017; 17 (1): 1–8.
  • 68. Skivington K, Matthews L, Simpson SA, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ 2021; 374: n2061.
  • 69. Cowie J, Nicoll A, Dimova ED, Campbell P, Duncan EA. The barriers and facilitators influencing the sustainability of hospital-based interventions: a systematic review. BMC Health Serv Res 2020; 20 (1): 1–27.
  • 70. Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ 2006; 15 (12): 1295–310.
  • 71. Bell CM, Urbach DR, Ray JG, et al. Bias in published cost effectiveness studies: systematic review. BMJ 2006; 332 (7543): 699–703.

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

ocad040_Supplementary_Data

Data Availability Statement

The data underlying this article are available in the article and in its online Supplementary Material.
