AMIA Annual Symposium Proceedings. 2006;2006:594–598.

Determinants of Success for Computerized Clinical Decision Support Systems Integrated into CPOE Systems: a Systematic Review

Julie Niès a,c, Isabelle Colombet a,b, Patrice Degoulet a,b, Pierre Durieux a,b
PMCID: PMC1839370  PMID: 17238410

Abstract

We carried out a systematic review of published trials to identify the methodological characteristics of studies and technical characteristics of computerized clinical decision support systems (CCDSSs) associated with efficacy for the main outcome of the study.

Four characteristics of the content of decision support and the way in which the user is provided with assistance seem to be associated with the success of CCDSSs: a) System-initiated interventions, b) Assistance without user control over output, c) Systems in which data are automatically retrieved from the electronic medical record and d) Systems providing corollary actions in CPOE.

Major differences in outcome reporting between studies could be reduced by the use of dedicated tools to standardize methodological reporting.

Introduction

Many efforts have been made to evaluate the effectiveness of computerized clinical decision support systems (CCDSSs) for improving medical practice [1, 2] and to help healthcare organizations to use such systems [3, 4]. In modern hospital information systems, complete computerized physician order entry (CPOE) (including laboratory tests, imaging, and drug prescription) is integrated into both the electronic medical record (EMR) and the other components of the system: the radiology, laboratory and pharmacy information subsystems [5]. In such environments, the next step is the design and development of CCDSSs integrated into the CPOE. Evaluation studies have highlighted difficulties in implementing such systems and making them accepted by physicians [6].

Several expert groups have formulated a broad definition of CCDSSs, including structured order forms, reactive alerts and reminders, and user-initiated guideline support. CCDSSs are usually developed to decrease the incidence of medication errors and adverse medical events, to tailor care more effectively to the individual, to encourage the appropriate and cost-effective use of drugs and tests and to increase compliance with regulations. This wide range of objectives is reflected in the large variety of outcomes considered in evaluation studies.

Two systematic reviews with slightly different objectives have recently been published [1, 2]. Garg et al. reviewed controlled trials evaluating the effectiveness of CCDSSs for improving physicians' performance and/or patient outcomes [1]. They also analyzed the study characteristics predictive of efficacy. They selected 100 studies, 65% of which described systems yielding a significant improvement in clinical practice. Two study characteristics were frequently found to be associated with improvement: the system's developers acting as investigators in the evaluation study, and the automatic prompting of users by the decision aid. However, this review included both studies in which recommendations were delivered electronically and studies in which computer-generated recommendations were printed out and attached to the paper record by a third party.

Kawamoto et al. focused on the features of clinical decision support systems predictive of their ability to improve clinical practice [2]. They selected 88 papers (relating to 70 studies), 32 of which were also analyzed by Garg et al. They highlighted four features significantly associated with success: the automatic provision of decision support as part of the clinician's workflow, the provision of decision support at the time and location of decision-making, the provision of recommendations rather than just assessments, and the computer-based generation of decision support. However, they included both computerized and non-computerized clinical decision support systems.

We focus here on clinical decision support systems that automatically provide the clinician with electronically formatted recommendations (i.e. computerized interventions). We carried out a systematic review based on the bibliography selected by Garg et al. [1], with the addition of further references up to July 2005, and restricted to studies describing computerized interventions (i.e. CCDSSs). The aim was to identify, from published data, the methodological characteristics of studies and the technical characteristics of CCDSSs associated with efficacy for the main outcome of the study, and thereby the features of such systems essential for their successful and durable use in practice and for care improvement.

Methods

Search strategy and selection

Garg et al. selected randomized and non-randomized trials with a contemporaneous control group that were published in English, compared patient care with and without a CCDSS, and evaluated clinical performance or patient outcomes. They defined a CCDSS as any system providing patient-specific information, recommendations or advice to any healthcare professional in clinical practice. We selected studies from the systematic review by Garg et al. and updated the list of studies to July 2005 by searching Medline with the same keywords: hospital information systems, computer-assisted decision making, computer-assisted diagnosis, computer-assisted therapy, clinical decision support systems, randomized controlled trial and cohort studies. Of the studies identified in this search, we selected those meeting the inclusion criteria listed above. We also assigned the CCDSSs to two groups, according to the type of intervention:

- Computerized intervention (the decision-making aid is targeted at the user of the CCDSS);

- Computer-generated paper reminder (the CCDSS is used by a third party, who forwards the printed decision-making aid to the targeted health professional).

The references of the selected papers were systematically checked to complete, if necessary, the description of the CCDSS. Only studies evaluating computerized interventions are described and analyzed in this review.
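To illustrate how the Medline search described above might be assembled, the following is a minimal Python sketch. The exact query syntax, term grouping and date window used by the authors are not reported, so everything below is an assumption for illustration only.

```python
# Hedged sketch: one possible way to combine the Medline keywords listed above
# into a single boolean query for the update search (September 2004 - July 2005).
# The grouping of terms and the date-field syntax are assumptions, not the
# authors' actual search string.

CONTENT_TERMS = [
    "hospital information systems",
    "computer-assisted decision making",
    "computer-assisted diagnosis",
    "computer-assisted therapy",
    "clinical decision support systems",
]

DESIGN_TERMS = [
    "randomized controlled trial",
    "cohort studies",
]


def build_query(start: str = "2004/09", end: str = "2005/07") -> str:
    """Join content terms with OR, design terms with OR, and AND the two
    groups together with an assumed publication-date window."""
    content = " OR ".join(f'"{term}"' for term in CONTENT_TERMS)
    design = " OR ".join(f'"{term}"' for term in DESIGN_TERMS)
    dates = f'("{start}"[Date - Publication] : "{end}"[Date - Publication])'
    return f"({content}) AND ({design}) AND {dates}"


if __name__ == "__main__":
    print(build_query())
```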

Data collection

Study description checklist

For each evaluation study, we noted the year of publication, number of included patients, number of participants, and the participants’ involvement in the choice, design, and implementation of the CCDSS or in the design of the study.

We assessed study quality, using the same 10-point scale as Garg et al. This scale takes into account the method of allocation to study groups, the unit of allocation, the presence of baseline differences between the groups potentially linked to study outcomes, the objectivity of the outcome measure, and the completeness of follow-up for the appropriate unit of analysis.

We assessed the following study outcomes [7]:

- Process of care, related to the health professional: compliance with guidelines, knowledge, attitudes, skill (e.g. time taken to respond to an alert), and satisfaction.

- Outcome of care, related to the patient: morbidity or mortality, quality of life, surrogate outcomes (e.g. time taken to achieve a stable therapeutic dose), and indicators of resource use (e.g. duration of hospital stay).

The overall outcome of a study was considered positive when statistical testing showed a significant improvement in the primary outcome. If the primary outcome was not explicitly defined by the authors, we selected the most relevant primary outcome based on other studies with similar objectives.
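A minimal sketch of this classification rule follows. The 0.05 significance threshold and the record fields are assumptions added for illustration; in the review itself, significance was taken from the statistical tests reported by each study's authors.

```python
# Hedged sketch of the study-level classification rule described above.
# The alpha threshold and the fields of PrimaryOutcome are illustrative
# assumptions, not values reported in the review.

from dataclasses import dataclass
from typing import Optional


@dataclass
class PrimaryOutcome:
    name: str                   # e.g. "compliance with guidelines"
    favors_intervention: bool   # effect in the intended direction?
    p_value: Optional[float]    # as reported by the study's authors


def classify_study(primary: PrimaryOutcome, alpha: float = 0.05) -> str:
    """A study is 'positive' when its primary outcome shows a statistically
    significant improvement; otherwise it is counted as 'negative'."""
    if (
        primary.p_value is not None
        and primary.p_value < alpha
        and primary.favors_intervention
    ):
        return "positive"
    return "negative"


# Example with invented numbers: a significant improvement in compliance.
print(classify_study(PrimaryOutcome("compliance with guidelines", True, 0.01)))
```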

CCDSS description checklist

We considered the detailed characteristics of CCDSSs, based on their content and the logistics of decision-making support (Table 1). We contacted the authors of all the original studies and asked them to confirm the abstracted information and to complete it if necessary. All studies were analyzed independently by two investigators (JN, and IC or PDx). Disagreements were resolved by consensus. A narrative summary of the results of this systematic review was then produced.

Table 1.

CCDSS description checklist

Clinical objective class (examples):
- Prevention reminders (to increase appropriate referrals for prevention and screening): reminders for annual flu shot, regular mammography referrals
- Diagnosis (to increase appropriate gathering of key patient history findings and/or to suggest diagnoses or organization of care): assessment of diagnosis in patients presenting with mental disorders
- Drug prescription (to optimize drug management): dose adjustment for anticoagulants
- Disease or risk factor management (to set up initial management orders, to optimize the treatment regimen for patients with specific clinical or disease conditions): interventions shown to decrease morbidity and mortality in patients with cardiovascular risk factors
- Utilization (to check orders, to monitor the effect of corollary orders or to reduce unnecessary healthcare utilization): serum creatinine determinations to monitor potential adverse effects of drugs, redundancy of laboratory tests, or drug-drug interactions

Detailed functions of the program (examples):
- For each type of order (drug, laboratory test, imaging, counseling, care, education, etc.): choice of an item, drug dosage adjustment, reminder to prescribe a test, attempt to limit testing

Content of the decision-making aid (items):
- Source of knowledge (used to provide assistance): pharmacokinetic model, guidelines, decision rules
- Access to knowledge: user access or no access
- Type of information output: simple or more complex pieces of medical knowledge

Logistics (interaction between the CCDSS and users) (items):
- Initiation of intervention: system- or user-initiated. If system-initiated:
  - Integration into workflow (at which step in the workflow is the intervention integrated?): at the time the patient's record is opened or at the time of prescription/order
  - User control (can the user modulate the interventions?): can the decision-making aid be activated and inactivated, or the assistance display switched off?
- Data input (how data are entered into the system): automatic retrieval from the electronic medical record or manual input by the user. If data are entered manually:
  - Timing of data request: before or during CCDSS execution
- Nature of the decision-making aid: simple display or provision by the system of corollary actions in the CPOE
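For readers wishing to reproduce a similar abstraction process, the checklist above can be represented as a simple structured record. The field names and enumerated values in the sketch below are illustrative assumptions that mirror Table 1; they are not an artifact released with the review.

```python
# Hedged sketch: one possible structured encoding of the Table 1 checklist
# used to abstract each CCDSS. All identifiers are illustrative only.

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class ObjectiveClass(Enum):
    PREVENTION_REMINDER = "prevention reminder"
    DIAGNOSIS = "diagnosis"
    DRUG_PRESCRIPTION = "drug prescription"
    DISEASE_RISK_FACTOR_MANAGEMENT = "disease or risk factor management"
    UTILIZATION = "utilization"


class Initiation(Enum):
    SYSTEM = "system-initiated"
    USER = "user-initiated"


class DataInput(Enum):
    AUTOMATIC_FROM_EMR = "automatic retrieval from the EMR"
    MANUAL = "manual input by the user"


@dataclass
class CCDSSDescription:
    objective_class: ObjectiveClass
    detailed_functions: List[str]        # e.g. ["drug dosage adjustment"]
    knowledge_sources: List[str]         # e.g. ["guidelines", "decision rules"]
    user_access_to_knowledge: bool
    complex_information_output: bool
    initiation: Initiation
    data_input: DataInput
    workflow_step: Optional[str] = None                      # only if system-initiated
    display_can_be_inactivated: Optional[bool] = None        # only if system-initiated
    data_requested_before_execution: Optional[bool] = None   # only if manual input
    provides_corollary_actions_in_cpoe: bool = False


# Example: a hypothetical system-initiated monitoring CCDSS.
example = CCDSSDescription(
    objective_class=ObjectiveClass.UTILIZATION,
    detailed_functions=["reminder to order a lab test"],
    knowledge_sources=["decision rules"],
    user_access_to_knowledge=False,
    complex_information_output=False,
    initiation=Initiation.SYSTEM,
    data_input=DataInput.AUTOMATIC_FROM_EMR,
    workflow_step="when the prescription/order is made",
    display_can_be_inactivated=False,
    provides_corollary_actions_in_cpoe=True,
)
print(example.objective_class.value)
```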

Results

Collection of studies

Our search retrieved 232 papers published between September 2004 and July 2005. We included six of these papers [8–13], in addition to the 100 papers identified in the review by Garg et al. [1]. These 106 papers described 59 studies evaluating computerized interventions. The authors of 17 (28.8%) of these studies confirmed the accuracy of the abstracted data or provided additional information.

Description of the studies

The number of papers on this subject has increased over time, with 22 (37%) of the studies published after 2000 (Table 2). However, the proportion of positive studies has remained stable. These 59 studies included 4 to 300 practices and 18 to 22,509 patients. Almost half of the studies scored 8/10 or more on the methodological grading scale. The reported outcome measures essentially addressed the physician's compliance with guidelines and surrogate patient outcomes. "Drug prescription" (n=22) and "Disease or risk factor management" (n=19) accounted for 70% of the studies, but for only 58% of the positive studies (n=18). Eleven of the 14 studies on "Prevention reminder" and "Utilization" gave positive results.

Table 2.

Methodological characteristics of studies

Study description checklist | Total (N = 59) | Positive (N = 31) | Negative (N = 28)
Year of publication >= 1995 | 38 (64.4%) | 21 (67.7%) | 17 (60.7%)
Number of patients*, total | 120,379 | 105,624 | 14,755
Number of patients*, median (IQR) | 254 (2,117) | 724 (5,128) | 164 (616)
Number of practitioners$, total | 2,375 | 1,514 | 861
Number of practitioners$, median (IQR) | 53 (103) | 38 (131) | 57 (61)
Developers' affiliation: academic | 43 (72.9%) | 22 (70.9%) | 21 (75%)
Developers' affiliation: private industry | 6 (10.2%) | 2 (6.5%) | 4 (14.3%)
Developers' affiliation: both | 8 (13.6%) | 5 (16.1%) | 3 (10.7%)
Developers' affiliation: other | 2 (3.4%) | 2 (6.5%) | 0
Participants' involvement: system choice | 1 (1.7%) | 1 (3.2%) | 0
Participants' involvement: system design/development | 12 (20.3%) | 7 (22.6%) | 5 (17.9%)
Participants' involvement: study design | 2 (3.4%) | 6 (19.4%) | 3 (1.7%)
Methodological score >= 8 | 26 (44.1%) | 12 (38.7%) | 14 (50%)
Process of care: guidelines compliance | 20 (33.9%) | 12 (38.7%) | 8 (28.6%)
Process of care: attitudes/skills/satisfaction | 8 (13.6%) | 6 (19.4%) | 2 (7.1%)
Outcome of care: morbidity | 5 (8.5%) | 3 (9.7%) | 2 (7.1%)
Outcome of care: surrogate outcomes | 22 (37.3%) | 6 (19.4%) | 16 (57.1%)
Outcome of care: resource use indicator | 4 (6.8%) | 4 (12.9%) | 0

Clinical objective class | Total (N = 59) | Positive (N = 31) | Negative (N = 28)
Prevention reminder | 8 (13.6%) | 6 (19.4%) | 2 (7.1%)
Diagnosis | 4 (6.8%) | 2 (6.5%) | 2 (7.1%)
Drug prescription | 22 (37.3%) | 8 (25.8%) | 14 (50%)
Disease and risk factor management | 19 (32.2%) | 10 (32.3%) | 9 (32.1%)
Utilization | 6 (10.2%) | 5 (16.1%) | 1 (3.6%)
* Estimated from 57 studies (30 positive and 27 negative).

$ Estimated from 27 studies (17 positive and 10 negative).

IQR: inter-quartile range.

Description of the CCDSSs

The main characteristics of the CCDSSs (detailed program goals, content of the decision-making aid, and logistics of decision-making support) are presented, according to clinical objective class, in Table 3. Drug dosage adjustment was less frequently observed in positive studies (29%) than in negative studies (71%). Conversely, reminders to order a laboratory test were more frequent in positive studies (16% versus 7%). The knowledge output of CCDSSs was frequently complex, but this did not seem to be associated with their success. Half the evaluated CCDSSs were system-initiated, a criterion more frequent in positive than in negative studies. Only six studies described the possibility of corollary actions targeted by the CCDSS, and five of these studies gave positive results. In 12 (20%) cases, the authors confirmed that use of the evaluated CCDSS continued in routine care after completion of the study.

Table 3.

Characteristics of the computerized systems by clinical objective class: prevention reminder (PR), diagnosis (D), drug prescription (DP), disease or risk factor management (DRFM), utilization (U) (criteria not described are considered absent)

Characteristic | PR (N = 8) | D (N = 4) | DP (N = 22) | DRFM (N = 19) | U (N = 6) | Total (N = 59) | Positive (N = 31) | Negative (N = 28)

Detailed functions of the programΔ
Drug order entry: drug dosage adjustment | 1 | 0 | 19$ | 9 | 0 | 29 (49.2%) | 9 (29%) | 20 (71.4%)
Drug order entry: reminder to order a drug | 6 | 0 | 0 | 8 | 1 | 15 (25.4%) | 9 (29%) | 6 (21.4%)
Diagnostic act (lab test, imaging, etc.): choice of date for the next lab test | 0 | 0 | 1 | 1 | 0 | 2 (3.4%) | 0 | 2 (7.1%)
Diagnostic act: reminder to order a lab test | 3 | 0 | 0 | 3 | 1 | 7 (11.9%) | 5 (16.1%) | 2 (7.1%)
Diagnostic act: reminder to order an act | 2 | 0 | 0 | 5 | 0 | 7 (11.9%) | 4 (12.9%) | 3 (10.7%)
Patient care, counseling, education | 2 | 3 | 0 | 9 | 0 | 14 (23.7%) | 9 (29%) | 5 (17.9%)
Care organization | 1 | 2 | 5 | 2 | 2 | 12 (20.3%) | 6 (19.4%) | 6 (21.4%)

Content of the decision-making aid
Source of knowledgeΔ: pharmacokinetic model | 0 | 0 | 11 | 0 | 0 | 11 (18.6%) | 5 (16.1%) | 6 (21.4%)
Source of knowledgeΔ: guidelines | 8 | 3 | 1 | 13 | 3 | 28 (47.5%) | 15 (48.4%) | 13 (46.4%)
Source of knowledgeΔ: decision rules | 2 | 1 | 3 | 3 | 4 | 13 (22%) | 10 (32.3%) | 3 (10.7%)
Access to knowledge | 6 | 1 | 0 | 8 | 1 | 16 (27.1%) | 8 (25.8%) | 8 (28.6%)
Type of information output: simple information | 7 | 4 | 1 | 6 | 4 | 18 (30.5%) | 13 (41.9%) | 5 (17.9%)
Type of information output: complex information | 1 | 2 | 20 | 13 | 2 | 40 (67.8%) | 17 (54.8%) | 23 (82.1%)

Logistics
Starter of intervention: user-initiated | 2 | 2 | 10 | 9 | 1 | 24 (40.7%) | 11 (35.5%) | 13 (46.4%)
Starter of intervention: system-initiated | 6 | 2 | 2 | 9 | 5 | 24 (40.7%) | 17 (54.8%) | 7 (25%)
Integration into workflow*: when the patient record is opened | 4 | 1 | 1 | 6 | 1 | 13 (54.2%) | 10 (58.8%) | 3 (42.9%)
Integration into workflow*: when the prescription/order is made | 1 | 0 | 1 | 2 | 4 | 8 (33.3%) | 6 (35.3%) | 2 (28.6%)
User control*: display cannot be inactivated by user | 4 | 2 | 2 | 6 | 5 | 19 (79.2%) | 14 (82.4%) | 5 (71.4%)
Data input: automatic retrieval from EMR databases | 7 | 1 | 4 | 13 | 4 | 29 (49.2%) | 19 (61.3%) | 10 (35.7%)
Data input: manually entered by user | 1 | 3 | 12 | 5 | 2 | 23 (38.9%) | 10 (32.3%) | 13 (46.4%)
Timing of data request*: before execution | 0 | 2 | 7 | 2 | 0 | 11 (47.8%) | 3 (30%) | 8 (61.5%)
Timing of data request*: during execution | 1 | 1 | 1 | 3 | 2 | 8 (34.8%) | 6 (60%) | 2 (15.4%)
Nature of the decision-making aid: simple display | 4 | 3 | 13 | 17 | 5 | 42 (71.2%) | 23 (74.2%) | 19 (67.9%)
Nature of the decision-making aid: corollary actions provided in the CPOE | 4 | 1 | 0 | 0 | 1 | 6 (10.2%) | 5 (16.1%) | 1 (3.6%)
Δ Multiple-choice data.

* Criteria evaluated only within the relevant subcategory (system-initiated interventions for workflow integration and user control; manually entered data for timing of data request).

$ Three studies in the "Drug prescription" class of objectives addressed drug interactions, choice of drug and parenteral nutrition, and had no drug dosage adjustment function.

Discussion

Of the 106 studies included in this review, 59 evaluated computerized interventions and 48 evaluated computer-generated paper reminders or decision-making aids. CCDSSs aiming to produce preventive reminders or to ensure the appropriate use of targeted healthcare resources gave positive results in a large proportion of studies. Conversely, CCDSSs designed to provide support for diagnosis, drug prescription and disease or risk factor management tended to be less successful. This finding for the drug prescription class of clinical objectives is not consistent with previous findings [1].

A few characteristics of the content of the decision-making aid and the logistics of decision support seem to be associated with the success of the CCDSS: system-initiated interventions, the provision of assistance without user control over output, systems in which data are automatically retrieved from the electronic medical record and systems providing corollary actions in the CPOE. Overall, these results are consistent with those of previous reviews [1, 2], despite several important differences in the methods of data selection and collection.
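As a purely illustrative sketch of how these four features might be combined in a CPOE integration, consider the following hypothetical rule; the trigger, drug and order codes are invented and do not correspond to any system evaluated in this review.

```python
# Hedged sketch of a hypothetical CPOE-integrated rule combining the four
# features associated with success: system-initiated, no user control over the
# output, data retrieved automatically from the EMR, and a corollary action
# added directly to the order set. All identifiers are invented.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Order:
    code: str
    label: str


def on_drug_order(order: Order, emr: Dict[str, int], order_set: List[Order]) -> List[str]:
    """System-initiated check fired by the CPOE when a drug order is placed.
    `emr` stands in for data retrieved automatically from the medical record."""
    messages: List[str] = []
    # Hypothetical rule: a nephrotoxic drug ordered without a recent creatinine
    # triggers a corollary monitoring order (cf. the 'Utilization' examples in
    # Table 1), rather than a dismissible advisory message.
    if order.code == "GENTAMICIN" and emr.get("days_since_last_creatinine", 999) > 7:
        order_set.append(Order("SERUM_CREATININE", "Serum creatinine determination"))
        messages.append("Serum creatinine order added to monitor renal function.")
    return messages


# Usage example with invented data.
pending = [Order("GENTAMICIN", "Gentamicin IV")]
print(on_drug_order(pending[0], {"days_since_last_creatinine": 12}, pending))
```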

Garg et al. dealt with systems in which decision support was delivered either directly by computer or by printouts attached to paper records by a third party (who used the system specifically for the purpose of the evaluation study). We chose to exclude this second type of intervention from our study, based on the hypothesis that complete automation and computerization of the intervention is a condition for durable success and transferability. Our results are therefore more specific to computerized interventions and may be more readily transferable to other contexts.

It was difficult to set up an appropriate description checklist for the standardization of system characteristics. Kawamoto et al. proposed a list of criteria describing the general features of the system, clinician-system interaction features, communication content features and auxiliary features, such as "local user involvement in the development process". These features were used to evaluate all types of clinical decision support systems integrated into the clinicians' routine workflow. Our checklist was largely inspired by the criteria proposed by Kawamoto et al., but its items are described across the various systems, categorized by clinical objective class. We also tried to differentiate between the consequences of the technical characteristics of the system itself (transferable to other contexts) and those of the technical and organizational context of system implementation. This made it possible to show that system-initiated interventions (e.g. most systems producing preventive reminders and utilization control) were more frequently successful than user-initiated interventions (e.g. systems for diagnosis support or drug prescription).

The studies included varied considerably in terms of the type and definition of outcome criteria. As sample size was not reported in an equivalent manner in all studies, Kawamoto et al. pooled results relating to “improvement in clinical practices that was both statistically and clinically significant”. Garg et al. considered a study positive if a statistically significant improvement was reported for at least 50% of the outcomes measured. Both these choices are rather conservative. We used a different classification of the type of outcome, to describe the results of the studies more accurately, allowing the reader to appreciate the clinical significance of the improvement in outcome.

This review of intervention studies evaluating CCDSSs is limited by methodological difficulties and by study heterogeneity. Indeed, studies differ in terms of methodological quality, completeness of the description of the systems and of their study settings or organizational contexts of implementation, combinations of the different types of system, intervention modes and types of outcome measured. Further studies should address two major research needs. Firstly, reports should provide as much detail as possible in descriptions of systems and their interactions with users, as recommended in a previous study [2]. Secondly, reports would gain from the use of tools like the Cochrane EPOC “Data Collection Checklist” [14], ensuring the standardization of methodological reporting in studies of this type, which would facilitate more instructive systematic reviews, perhaps even focusing on certain clinical objective classes.

Acknowledgments

This study was funded by a grant from the ANRT (convention number: 498/2004).

References

1. Garg AX, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293:1223–38. doi: 10.1001/jama.293.10.1223.
2. Kawamoto K, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765. doi: 10.1136/bmj.38398.500764.8F.
3. Jenders RA, et al. Improving Outcomes with Clinical Decision Support. Chicago: HIMSS; 2005.
4. Joint Clinical Decision Support Workgroup. Clinical Decision Support in Electronic Prescribing: Recommendations and an Action Plan. http://www.amia.org (last accessed March 6, 2006).
5. Degoulet P, et al. The HEGP component-based clinical information system. Int J Med Inform. 2003;69:115–26. doi: 10.1016/s1386-5056(02)00101-6.
6. Durieux P. Electronic medical alerts-so simple, so complex. N Engl J Med. 2005;352:1034–6. doi: 10.1056/NEJMe058016.
7. Haynes RB, et al. Clinical Epidemiology: How to Do Clinical Practice Research. Philadelphia: Lippincott Williams & Wilkins; 2005.
8. Thomas HV, et al. Computerised patient-specific guidelines for management of common mental disorders in primary care: a randomised controlled trial. Br J Gen Pract. 2004;54:832–7.
9. Javitt JC, et al. Using a claims data-based sentinel system to improve compliance with clinical guidelines: results of a randomized prospective study. Am J Manag Care. 2005;11:93–102.
10. Kucher N, et al. Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med. 2005;352:969–77. doi: 10.1056/NEJMoa041533.
11. Tierney WM, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res. 2005;40:477–97. doi: 10.1111/j.1475-6773.2005.00368.x.
12. Derose SF, et al. Point-of-service reminders for prescribing cardiovascular medications. Am J Manag Care. 2005;11:298–304.
13. Mitra R, et al. Efficacy of computer-aided dosing of warfarin among patients in a rehabilitation hospital. Am J Phys Med Rehabil. 2005;84:423–7. doi: 10.1097/01.phm.0000163716.00164.23.
14. Cochrane Effective Practice and Organisation of Care Review Group. The Data Collection Checklist. http://www.epoc.uottawa.ca (last accessed March 6, 2006).
