AMIA Annu Symp Proc. 2003;2003:361–365.

Clinical Decision Support Provided within Physician Order Entry Systems: A Systematic Review of Features Effective for Changing Clinician Behavior

Kensaku Kawamoto 1, David F Lobach 1
PMCID: PMC1480005  PMID: 14728195

Abstract

Computerized physician order entry (CPOE) systems represent an important tool for providing clinical decision support. In undertaking this systematic review, our objective was to identify the features of CPOE-based clinical decision support systems (CDSSs) most effective at modifying clinician behavior. For this review, two independent reviewers systematically identified randomized controlled trials that evaluated the effectiveness of CPOE-based CDSSs in changing clinician behavior. Furthermore, each included study was assessed for the presence of 14 CDSS features. We screened 10,023 citations and included 11 studies. Of the 10 studies comparing a CPOE-based CDSS intervention against a non-CDSS control group, 7 reported a significant desired change in professional practice. Moreover, meta-regression analysis revealed that automatic provision of the decision support was strongly associated with improved professional practice (adjusted odds ratio, 23.72; 95% confidence interval, 1.75-∞). Thus, we conclude that automatic provision of decision support is a critical feature of successful CPOE-based CDSS interventions.

INTRODUCTION

A significant gap exists between actual clinical practice and optimal patient care. For example, in a recent systematic review of the quality of health care in the United States, Schuster et al. found that only about 70% of patients received recommended acute care, and that only about 60% of patients received recommended care for chronic conditions.1 Moreover, the Institute of Medicine recently estimated that 44,000 to 98,000 patients die each year in hospitals as a result of preventable medical errors.2 Even when using the lower estimate, this statistic implies that more Americans die each year from medical errors than from motor vehicle accidents (43,458), breast cancer (42,297), or AIDS (16,516).3

Given this gap between actual clinical practice and ideal patient care, the Institute of Medicine and other key stakeholders have identified computerized physician order entry (CPOE) as an important strategy for improving professional practice and reducing medical errors.2 Indeed, by requiring clinicians to directly enter orders online, CPOE systems can virtually eliminate medical errors due to lost, incomplete, or illegible orders.4 Moreover, CPOE systems can significantly improve professional practice through the integration of clinical decision support systems (CDSSs). For example, a recent time series study found that a CPOE system with decision support features reduced the incidence of serious medication errors in a large hospital by 86%.5

While there is significant evidence for the effectiveness of CPOE-based CDSS interventions as a whole, little is known with regard to the specific CDSS elements that are most important in producing a desired change in clinician behavior. Thus, despite the insights gained from relevant qualitative studies,6 there remains a lack of evidence-based understanding as to why some CPOE-based CDSS interventions succeed, while others fail. Indeed, this is a problem that affects CDSS interventions in general,7 and we have conducted a systematic review of CDSSs provided both within and outside of CPOE systems in order to address this issue.8 The current systematic review represents a subset analysis of this larger work, in which special focus is placed on CDSSs provided in the context of CPOE systems. In undertaking this endeavor, our objective was to help guide the efforts of CPOE designers by providing a rigorous, evidence-based assessment of the CDSS features that are most effective at influencing clinician behavior in the context of computerized physician order entry.

METHODS

Inclusion and Exclusion Criteria.

We defined a computerized physician order entry (CPOE) system as a computer-based system that allows clinicians to enter orders directly. We also defined a clinical decision support system (CDSS) as any system designed to directly aid in clinical decision making, in which characteristics of individual patients are matched to a knowledge base for the purpose of generating patient-specific assessments or recommendations that are then presented to clinicians for consideration.9 In selecting studies, we included randomized controlled trials that evaluated the effectiveness of a CPOE-based CDSS or one of its features in changing an important clinician behavior in a real clinical setting. We considered physicians, physician assistants, and nurse practitioners to be valid clinician subjects. Moreover, we excluded studies with fewer than 7 units of randomization per study arm and studies scoring less than 5 points on the 10-point quality rating scale described below. In addition, we excluded studies not in English, studies in which compliance with the CDSS was mandatory, studies that did not describe the content of the decision support provided to clinicians, and studies that did not describe how clinicians interacted with the system.
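For concreteness, these screening criteria can be expressed as a simple predicate over study attributes, as in the hypothetical sketch below. This is an illustrative reconstruction only; the field names are invented and it is not the screening instrument actually used in this review.

```python
# Hypothetical encoding of the inclusion/exclusion criteria (field names
# are illustrative, not from the actual review protocol).

VALID_CLINICIANS = {"physician", "physician assistant", "nurse practitioner"}

def meets_inclusion_criteria(study: dict) -> bool:
    return (
        study["design"] == "randomized controlled trial"
        and study["cdss_within_cpoe"]                        # CDSS provided within a CPOE system
        and study["real_clinical_setting"]
        and bool(VALID_CLINICIANS & set(study["subjects"]))  # valid clinician subjects
        and study["units_per_arm"] >= 7                      # exclude < 7 units of randomization per arm
        and study["quality_score"] >= 5                      # 10-point scale described below
        and study["language"] == "English"
        and not study["compliance_mandatory"]                # exclude mandatory-compliance CDSSs
        and study["describes_support_content"]               # decision support content described
        and study["describes_interaction"]                   # clinician-system interaction described
    )
```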

Data Sources.

We searched MEDLINE (1966–December 2002), CINAHL (1982–October 2002), and the Cochrane Controlled Trials Register (Fourth Quarter, 2002) for relevant studies. We did not limit the search by study type, and we used combinations of multiple search terms, which included the following: decision support systems, clinical; decision making, computer-assisted; reminder systems; feedback; guideline adherence; medical informatics; communication; physician’s practice patterns; reminder$; feedback$; decision support$; and expert system. We also systematically searched the reference lists of included studies and of relevant reviews for potential studies.

Study Selection and Quality Evaluation.

We created a screening algorithm based on a subset of the inclusion criteria in order to facilitate the selection of papers. Using this algorithm, two independent reviewers examined all titles, along with index terms and abstracts when available, and rated each paper as “potentially relevant” or “not relevant.” A citation was rated “potentially relevant” whenever there was any uncertainty regarding its inclusion. The raw agreement at this stage was 99.8%, and the level of agreement beyond chance was 96.3% (κ = 96.3%; 95% confidence interval [CI], 94.8%–97.8%). Disagreements were resolved by discussion, and full-text articles were retrieved for all studies considered to be “potentially relevant.” Two reviewers then independently evaluated the full-text articles using the screening algorithm to determine whether the papers were “potentially relevant” or “not relevant,” and disagreements were resolved by discussion. The raw agreement at this stage was 98.8%, and the level of agreement beyond chance was 82.7% (κ = 82.7%; 95% CI, 65.8%–99.6%). Next, all studies still considered to be “potentially relevant” were assessed independently by two reviewers to determine their inclusion status using the full set of inclusion criteria. The raw agreement at this stage was 100%.
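For reference, the agreement statistics used throughout this section (raw agreement and Cohen’s κ) can be computed as in the sketch below. The ratings shown are hypothetical, and the κ confidence interval uses a large-sample approximation, so it will not exactly match intervals produced by dedicated statistical software.

```python
# Sketch of two-reviewer agreement statistics: raw agreement and Cohen's
# kappa with an approximate 95% CI (large-sample standard error).

from math import sqrt

def screening_agreement(r1, r2):
    """r1, r2: parallel lists of ratings, e.g. 'potentially relevant' / 'not relevant'."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n              # raw (observed) agreement
    categories = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)  # chance agreement
    kappa = (po - pe) / (1 - pe)                              # agreement beyond chance
    se = sqrt(po * (1 - po) / (n * (1 - pe) ** 2))            # approximate standard error
    return po, kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Hypothetical ratings for 200 citations:
reviewer_1 = ["relevant", "not relevant", "not relevant", "relevant"] * 50
reviewer_2 = ["relevant", "not relevant", "relevant", "relevant"] * 50
po, kappa, ci = screening_agreement(reviewer_1, reviewer_2)
print(f"raw agreement = {po:.1%}, kappa = {kappa:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```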

Finally, all remaining studies were assessed independently by two reviewers for methodological quality. For this purpose, we used a 10-point rating scale derived from the metric used by Hunt et al. in a previous systematic review of electronic CDSSs.9 This scale assessed 5 potential sources of bias: the method of allocation to study groups, the unit of allocation, the presence of baseline differences between groups potentially linked to the study outcome, the type of outcome measure, and the completeness of follow-up. Disagreements at this stage were resolved by discussion. There was 100% raw agreement as to whether a study possessed a total quality score of at least 5 points. Only studies meeting this minimum score of 5 points were included in the final analysis.
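As a purely hypothetical illustration of how such a scale might be operationalized, the sketch below assumes each of the 5 sources of bias is worth 0–2 points; the actual point allocation follows the metric of Hunt et al. and is not reproduced here.

```python
# Hypothetical operationalization of the 10-point quality scale.
# ASSUMPTION: each of the 5 bias sources is scored 0-2 points; the real
# allocation is described by Hunt et al. (reference 9).

BIAS_SOURCES = (
    "allocation_method",       # how subjects were assigned to study groups
    "unit_of_allocation",      # e.g., clinician vs. patient vs. practice
    "baseline_differences",    # outcome-linked baseline differences between groups
    "outcome_measure_type",    # objective vs. subjective outcome
    "follow_up_completeness",  # proportion of subjects with outcome data
)

INCLUSION_THRESHOLD = 5  # minimum total score required for inclusion

def quality_score(points: dict) -> int:
    """points maps each bias source to 0-2 points; the total is compared
    against INCLUSION_THRESHOLD."""
    assert set(points) == set(BIAS_SOURCES) and all(0 <= v <= 2 for v in points.values())
    return sum(points.values())
```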

Data Extraction.

For each included study, two reviewers independently assessed for the presence or absence of a statistically and clinically significant desired change in clinician behavior. Effect size was considered as an alternative outcome measure, but this metric was ruled out for two reasons. First, we anticipated that the use of effect size would have led to the exclusion of relevant trials, because studies often fail to report all of the statistical elements necessary for effect size reconstruction. Second, we felt that the use of effect size would be misleading given the significant heterogeneity among the outcome measures reported by the included studies.

In addition to the outcome, each comparison was assessed for general study characteristics, including the setting, subjects, and domain of decision support. Moreover, the two reviewers independently determined the presence or absence of 14 specific CDSS features, which included general system features, clinician-system interaction features, communication content features, and auxiliary features (Table 1). These 14 features were abstracted because they had been suggested as important in the primary or secondary literature, and because we felt that their presence or absence could be abstracted reliably from most studies. Any disagreements at this stage were resolved by discussion. The raw agreement for the outcome measure was 100%. For the assessment of the 14 intervention features, the raw agreement ranged from 81.8% to 100%, and the level of agreement beyond chance ranged from 62.1% to 100% (κ = 62.1%–100%).

Table 1.

Frequency of Specific CDSS Features Among the 10 Control-CDSS Comparisons

CDSS Feature (Frequency)
System features
  Local user involvement in development: 20%
Clinician-system interaction features
  Recommendations executed by simply noting agreement: 80%
  No need for additional clinician data entry*: 70%
  Automatic provision of decision support*: 70%
  Request documentation of reason for any non-compliance: 20%
Communication content features
  Provision of a recommendation: 80%
  Provision of unambiguous recommendations: 70%
  Promotion of action rather than inaction: 70%
  Justification via reasoning: 60%
  Justification via research evidence: 10%
  Justification via citation of authority: 10%
  Supplant need for calculator use: 0%
Auxiliary features
  Provision of conventional education: 10%
  Provision of periodic performance feedback: 0%

*: Variables considered for meta-regression analysis

Identification of Critical CDSS Features.

We sought to identify the CDSS features important in changing clinician behavior through the use of two approaches. As one approach, we conducted a meta-regression analysis, wherein the presence or absence of a statistically and clinically significant desired change in clinician behavior constituted the binary outcome variable, and the presence or absence of the intervention features constituted binary explanatory variables. For this analysis, we only included studies in which the CPOE-based CDSS was compared against a control group that did not receive decision support. Moreover, given the limited sample size, we restricted the set of candidate explanatory variables to the three features that were most strongly associated with a significant outcome in our larger systematic review of CDSSs provided both within and outside of CPOE systems (noted with asterisks in Table 1).8 The logistic regression analysis was conducted using LogXact-5,10 a commercial statistical program that uses the exact permutation distribution of sufficient statistics to generate parameter estimates.11 Variables were entered into the model using forward selection and a significance level of 0.05.
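Because the outcome and all candidate explanatory variables are binary, the univariable screening step of such an analysis reduces to an exact test on a 2×2 table. The sketch below illustrates that step using Fisher's exact test from scipy; it is a simplified illustration, not LogXact's algorithm, which additionally fits exact conditional logistic models with multiple covariates.

```python
# Simplified univariable exact screening for binary features vs. a binary
# outcome (illustration only; LogXact performs full exact logistic regression).

from scipy.stats import fisher_exact

def univariable_exact_screen(studies, feature_names, alpha=0.05):
    """studies: list of (features_dict, outcome) pairs, where features_dict
    maps feature name -> 0/1 and outcome is 0/1. Returns (name, p) pairs
    with exact two-sided p < alpha."""
    selected = []
    for name in feature_names:
        # Rows: feature absent/present; columns: outcome failure/success.
        table = [[0, 0], [0, 0]]
        for features, outcome in studies:
            table[features[name]][outcome] += 1
        _, p = fisher_exact(table)
        if p < alpha:
            selected.append((name, p))
    return selected
```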

As a second approach for identifying critical CDSS features, we looked for direct experimental evidence supporting the importance of specific CDSS features. This assessment was done through the systematic identification of studies in which a given CPOE-based CDSS was directly compared against the same CDSS with one or more additional features.

RESULTS

Summary of Identified Studies.

Of 10,023 potentially relevant citations screened, 12 papers describing 11 studies met our inclusion criteria.12–23 Of the 11 included studies, 10 trials compared a CPOE-based CDSS against a control group that did not receive decision support (control-CDSS comparisons). One study compared a CPOE-based CDSS against the same system with additional features (CDSS-CDSS comparison).

Table 1 summarizes the distribution of the 14 abstracted CDSS features among the 10 control-CDSS comparisons. Of note, 2 of the features were not present in any of the CDSS interventions.

Furthermore, Table 2 provides a more detailed description of the interventions and outcomes of the 10 control-CDSS comparisons. Among these studies, 70% reported a statistically and clinically significant desired change in clinician behavior (the first 7 studies listed in Table 2), whereas 30% failed to find a significant improvement (the last 3 studies in Table 2). Among these 10 studies, decision support was provided most frequently for laboratory test ordering (70%), pharmacotherapy (60%), and radiology requisition (40%). Overall, 70% of the CPOE-based CDSSs supported decision making related to acute medical conditions, and 70% supported decision making related to chronic medical conditions.

Table 2.

Randomized Controlled Trials that Evaluated a CPOE-based CDSS Against a Control Group without Decision Support

Christakis et al., 2001 [12]. 28 clinicians* / 851 visits* / 8 mo.
  Control: usual outpatient care, including use of a computerized prescription writer.
  Intervention: at the time of prescription, display of research evidence to adjust the duration of antibiotic treatment for acute otitis media.
  Outcomes (intervention vs. control):
    Change in % of treatments of duration < 10 d: 44.4% vs 10.5%, p < 0.001
    Change in % of acute otitis media treated without antibiotics: −4.3% vs −16.8%, p = 0.095

van Wijk et al., 2001 [13]. 60 clinicians* / 20,242 order forms* / 12 mo.
  Control: CPOE system for outpatient laboratory tests, with initial display of a restricted list of tests.
  Intervention: CPOE system for outpatient laboratory tests, with recommendation of tests based on user-selected national and regional guidelines.
  Outcome (intervention vs. control):
    Average number of tests per order form: 5.5 vs 6.9, p = 0.003

Bates et al., 1999 & 1995 [14, 15]. 1 hospital* / 939 tests* / 4 mo.
  Control: usual inpatient care, including use of a CPOE system.
  Intervention: within the CPOE system, display of reminders to cancel redundant laboratory tests.
  Outcome (intervention vs. control):
    % of redundant test orders that were performed: 27% vs 51%, p < 0.001

Kuperman et al., 1999 [16]. 1 hospital* / 178 patients* / 2 mo.
  Control: critical laboratory results telephoned to the patient floor, with the message then relayed to the responsible clinician.
  Intervention: responsible clinician automatically paged regarding results, and appropriate treatment options offered on computer terminals.
  Outcomes (intervention vs. control):
    Median time to ordering of appropriate treatment: 1.0 vs 1.6 hours, p = 0.003
    Median time to resolution of alerting condition: 8.4 vs 8.9 hours, p = 0.11
    Total adverse event rate per patient: 33% vs 28%, NS

Overhage et al., 1997 [17]. 89 clinicians* / 1,686 patients* / 7 mo.
  Control: usual inpatient care, including use of a CPOE system; written guidelines on corollary orders (orders needed to monitor or ameliorate the effects of other orders) made available.
  Intervention: within the CPOE system, reminders about corollary orders presented when ordering 1 of 87 tests or treatments; written guidelines made available as in the control.
  Outcome (intervention vs. control):
    % immediate compliance with relevant corollary orders: 46.3% vs 21.9%, p < 0.0001

Tierney et al., 1993 [18]. 68 teams* / 5,219 patients* / 17 mo.
  Control: usual inpatient care, including use of a computerized patient record system.
  Intervention: CPOE system that displayed problem-specific menus of cost-effective orders and recommended against certain expensive orders.
  Outcomes (intervention vs. control):
    Total charge per admission ($): 6,077 vs 6,964, p = 0.02
    Mean length of stay (days): 7.6 vs 8.5, p = 0.11

Tierney et al., 1988 [19]. 112 clinicians* / ~6,077 patients* / 6 mo.
  Control: usual outpatient care, including use of a CPOE system.
  Intervention: for 8 common outpatient tests, display of predicted probabilities of test abnormalities during order entry.
  Outcomes (intervention vs. control):
    Patient study test charges per scheduled visit ($): 11.18 vs 12.27, p < 0.05
    Estimated probability of abnormality among ordered tests: 24% vs 18%, p < 0.0001

Flanagan et al., 1999 [20]. 89 clinicians* / 817 patients* / 9 mo.
  Control: usual outpatient care, including optional use of a computer-based immunization charting and ordering system.
  Intervention: computer-generated reminders for tetanus, hepatitis B, influenza, pneumococcal, and MMR vaccines presented in the optional immunization charting and ordering system.
  Outcome (intervention vs. control):
    % of immunization charting system sessions resulting in the appropriate administration of tetanus, hepatitis B, influenza, pneumococcal, and MMR vaccines: NS

Rotman et al., 1996 [21]. 37 clinicians* / 2,645 prescriptions* / 3 mo.
  Control: usual outpatient care, including use of a computerized patient record system.
  Intervention: optional use of a new CPOE system that gave recommendations for cost-effective prescribing and alerts on drug interactions.
  Outcomes (intervention vs. control):
    Cost of prescribed medications: NS
    Rate of clinically relevant drug interactions: NS

Overhage et al., 1996 [22]. 78 clinicians* / 1,622 patients* / 6 mo.
  Control: usual inpatient care, including use of a computerized patient record system.
  Intervention: reminders for 22 preventive care measures, provided on rounds reports and made available within the CPOE system when the user elected to view the suggested orders for a given patient.
  Outcome (intervention vs. control):
    % overall clinician compliance with preventive care recommendations: 23% vs 24%, NS

*: The number of subjects for whom the primary outcome measure was reported

NS: No significant difference

Identification of Critical CDSS Features through Meta-Regression Analysis.

As one approach for identifying important features of successful CPOE-based CDSS interventions, we conducted a meta-regression analysis on the 10 control-CDSS comparisons included in our study. As shown in Table 3, the meta-regression analysis revealed a significant association between the automatic provision of decision support and the finding of a statistically and clinically significant desired change in clinician behavior (adjusted odds ratio, 23.72; 95% confidence interval, 1.75-∞). Indeed, of the 10 studies described in Table 2, all 7 of the successful studies provided the decision support automatically, without the need for clinician initiative. In contrast, this critical feature was absent from all 3 of the unsuccessful studies, in which delivery of the decision support depended on clinician initiative.

Table 3.

Results of Meta-Regression Analysis

CDSS Feature Adjusted Odds Ratio (95% CI) p-value
Automatic provision of decision support 23.72 (1.75-∞) 0.017
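Because the single selected covariate is binary, the data behind Table 3 reduce to a 2×2 table (from Table 2: all 7 successful interventions provided decision support automatically, and none of the 3 unsuccessful ones did). The sketch below approximates the exact analysis with scipy; it is an illustration, not LogXact's output. Note that the conditional maximum likelihood estimate is infinite for a table with a zero cell, so LogXact's finite point estimate of 23.72 is presumably a median unbiased estimate, and only the confidence bound and p-value are directly comparable.

```python
# Approximate reanalysis of Table 3 from the 2x2 table implied by Table 2
# (requires scipy >= 1.10 for scipy.stats.contingency.odds_ratio).

from scipy.stats import fisher_exact
from scipy.stats.contingency import odds_ratio

# Rows: automatic provision (present, absent); columns: (success, failure).
table = [[7, 0],
         [0, 3]]

res = odds_ratio(table, kind="conditional")          # exact conditional odds ratio
low, high = res.confidence_interval(confidence_level=0.95)
print(f"95% CI: ({low:.2f}, {high})")                # upper bound is inf, as in Table 3

# One common exact two-sided convention (doubling the one-sided p)
# yields a value in line with the 0.017 reported in Table 3:
_, p_one_sided = fisher_exact(table, alternative="greater")
print(f"p ≈ {2 * p_one_sided:.3f}")
```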

Identification of Critical CDSS Features through Survey of Direct Experimental Evidence.

As a second approach for identifying important CDSS features, we systematically searched for studies in which a CPOE-based CDSS intervention was compared directly against the same CDSS with additional features. As shown in Table 4, we identified one such study. In this study,23 the control clinicians received real-time critiques regarding the appropriateness of abdominal radiograph orders through a CPOE system, whereas the intervention clinicians received modified critiques that were more strongly worded and accompanied by institutional evidence supporting the critiques. However, this study failed to find a change in clinician behavior attributable to the provision of these additional CDSS features.

Table 4.

Randomized Controlled Trials that Directly Evaluated the Effectiveness of Specific CDSS Features

Harpole et al., 1997 [23]. 1 hospital* / 491 patients* / 5 mo.
  Control: CPOE system in which real-time critiques were presented regarding the appropriateness of abdominal radiograph (KUB) orders.
  Intervention: CPOE system as in the control, but with critiques altered to be more strongly worded and to include institutional evidence supporting the critiques.
  Outcomes (intervention vs. control):
    % compliance with recommendations to cancel KUBs unlikely to be of diagnostic value: NS
    % compliance with recommendations to order alternate views with greater diagnostic value: NS

*: The number of subjects for whom the primary outcome measure was reported

NS: No significant difference

DISCUSSION

Through a systematic review of 11 randomized controlled trials, we have identified automatic provision of decision support as a critical feature of successful CPOE-based CDSS interventions. Indeed, meta-regression analysis revealed that automatic provision of decision support was strongly associated with a significant desired change in clinician behavior (adjusted odds ratio, 23.72; 95% confidence interval, 1.75-∞). This finding is consistent with the results from our systematic review of electronic as well as non-electronic CDSSs; even in this larger data set of over 75 randomized controlled trials, we have not identified a single CDSS intervention that resulted in a significant desired change in clinician behavior in the absence of the automatic provision of decision support.8 We speculate that this factor is critical because the demands on clinicians preclude them from actively seeking out advice from a clinical decision support system.

In interpreting the findings of this review, several limitations should be kept in mind. First, we used a binary outcome measure, instead of a continuous measure such as effect size. Because of this approach, we were unable to account for variations in the magnitude of effects. Second, the scope and power of our meta-regression analysis were limited by the relatively small sample size. Thus, we cannot rule out the potential importance of the 13 other CDSS features that we abstracted for this study. Finally, our analysis was limited to features that could be abstracted reliably. Thus, we were unable to assess the significance of several other potentially important features, including system response time, consistency of the system interface, and commitment from top levels of management.4, 6

Despite these limitations, we feel that this systematic review makes a significant contribution to our understanding of the specific features important in the success of CPOE-based CDSS interventions. We speculate that our findings will be of use to the designers and implementers of CPOE-based decision support systems as they leverage this technology to influence clinician behavior and improve patient care.

Acknowledgments

We thank Caitlin Houlihan for assisting with the study selection process and Dr. Andrew Balas for assisting with the study design. This research was supported by NIH grants T32-GM07171, R01-HS10472, and R03-HS10814.

REFERENCES

1. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q. 1998;76(4):517–63, 509. doi: 10.1111/1468-0009.00105.
2. Kohn LT, Corrigan J, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.
3. Centers for Disease Control and Prevention (National Center for Health Statistics). Births and deaths: preliminary data for 1998. Natl Vital Stat Rep. 1999;47(25):6.
4. Sittig DF, Stead WW. Computer-based physician order entry: the state of the art. J Am Med Inform Assoc. 1994;1(2):108–23. doi: 10.1136/jamia.1994.95236142.
5. Bates DW, Teich JM, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc. 1999;6(4):313–21. doi: 10.1136/jamia.1999.00660313.
6. Ash JS, Gorman PN, Lavelle M, et al. A cross-site qualitative study of physician order entry. J Am Med Inform Assoc. 2003;10(2):188–200. doi: 10.1197/jamia.M770.
7. Kaplan B. Evaluating informatics applications—some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inf. 2001;64(1):39–56. doi: 10.1016/s1386-5056(01)00184-8.
8. Kawamoto K, Houlihan C, Balas EA, Lobach DF. Features of clinical decision support systems effective at modifying clinician behavior: a systematic review [in preparation].
9. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280(15):1339–46. doi: 10.1001/jama.280.15.1339.
10. LogXact [computer program]. Version 5.0. Cambridge (MA): Cytel Software Corporation; 2002.
11. Bull SB, Mak C, Greenwood CMT. A modified score function estimator for multinomial logistic regression in small samples. Computational Statistics and Data Analysis. 2002;39:57–74.
12. Christakis DA, Zimmerman FJ, Wright JA, Garrison MM, Rivara FP, Davis RL. A randomized controlled trial of point-of-care evidence to improve the antibiotic prescribing practices for otitis media in children. Pediatrics. 2001;107(2):E15. doi: 10.1542/peds.107.2.e15.
13. van Wijk MA, van der Lei J, Mosseveld M, Bohnen AM, van Bemmel JH. Assessment of decision support for blood test ordering in primary care: a randomized trial. Ann Intern Med. 2001;134(4):274–81. doi: 10.7326/0003-4819-134-4-200102200-00010.
14. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med. 1999;106(2):144–50. doi: 10.1016/s0002-9343(98)00410-0.
15. Bates DW, Kuperman GJ, Rittenberg E, et al. Reminders for redundant tests: results of a randomized controlled trial. Proc Ann Symp Comp Appl Med Care. 1995:935.
16. Kuperman GJ, Teich JM, Tanasijevic MJ, et al. Improving response to critical laboratory results with automation: results of a randomized controlled trial. J Am Med Inform Assoc. 1999;6(6):512–22. doi: 10.1136/jamia.1999.0060512.
17. Overhage JM, Tierney WM, Zhou XH, McDonald CJ. A randomized trial of “corollary orders” to prevent errors of omission. J Am Med Inform Assoc. 1997;4(5):364–75. doi: 10.1136/jamia.1997.0040364.
18. Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA. 1993;269(3):379–83.
19. Tierney WM, McDonald CJ, Hui SL, Martin DK. Computer predictions of abnormal test results. Effects on outpatient testing. JAMA. 1988;259(8):1194–8.
20. Flanagan JR, Doebbeling BN, Dawson J, Beekmann S. Randomized study of online vaccine reminders in adult primary care. Proc AMIA Symp. 1999:755–9.
21. Rotman BL, Sullivan AN, McDonald TW, et al. A randomized controlled trial of a computer-based physician workstation in an outpatient setting: implementation barriers to outcome evaluation. J Am Med Inform Assoc. 1996;3(5):340–8. doi: 10.1136/jamia.1996.97035025.
22. Overhage JM, Tierney WM, McDonald CJ. Computer reminders to implement preventive care guidelines for hospitalized patients. Arch Intern Med. 1996;156(14):1551–6.
23. Harpole LH, Khorasani R, Fiskio J, Kuperman GJ, Bates DW. Automated evidence-based critiquing of orders for abdominal radiographs: impact on utilization and appropriateness. J Am Med Inform Assoc. 1997;4(6):511–21. doi: 10.1136/jamia.1997.0040511.

