Journal of the American Medical Informatics Association (JAMIA). 1997 Sep-Oct;4(5):364–375. doi: 10.1136/jamia.1997.0040364

A Randomized Trial of “Corollary Orders” to Prevent Errors of Omission

J Marc Overhage 1, William M Tierney 1, Xiao-Hua (Andrew) Zhou 1, Clement J McDonald 1
PMCID: PMC61254  PMID: 9292842

Abstract

Objective: Errors of omission are a common cause of systems failures. Physicians often fail to order tests or treatments needed to monitor/ameliorate the effects of other tests or treatments. The authors hypothesized that automated, guideline-based reminders to physicians, provided as they wrote orders, could reduce these omissions.

Design: The study was performed on the inpatient general medicine ward of a public teaching hospital. Faculty and housestaff from the Indiana University School of Medicine, who used computer workstations to write orders, were randomized to intervention and control groups. As intervention physicians wrote orders for 1 of 87 selected tests or treatments, the computer suggested corollary orders needed to detect or ameliorate adverse reactions to the trigger orders. The physicians could accept or reject these suggestions.

Results: During the 6-month trial, reminders about corollary orders were presented to 48 intervention physicians and withheld from 41 control physicians. Intervention physicians ordered the suggested corollary orders in 46.3% of instances when they received a reminder, compared with 21.9% compliance by control physicians (p < 0.0001). Physicians discriminated in their acceptance of suggested orders, readily accepting some while rejecting others. Pharmacists initiated one third fewer interventions with physicians in the intervention group than in the control group.

Conclusion: This study demonstrates that physician workstations, linked to a comprehensive electronic medical record, can be an efficient means for decreasing errors of omission and improving adherence to practice guidelines.


Almost half of all industrial disasters have been reported to be errors of omission resulting from oversights and distractions.1,2 Physicians are also prone to such errors.3,4 Despite good intentions and adequate knowledge, they overlook new abnormalities,5,6,7 fail to perform preventive care,8 and do not appropriately monitor drug therapy.9 These errors are probably due to man's limitations as a data processor rather than to correctable human deficiencies.8

Certain medical decisions are simple and require primarily that the physician recognize that the decision needs to be made. Ordering gentamicin (the stimulus) should, with few exceptions, trigger a decision to order gentamicin levels. Many such drug-test and drug-drug decisions must be made: coumadin and prothrombin times; angiotensin converting enzyme (ACE) inhibitors and serum creatinine levels; intravenous theophylline and theophylline levels; and insulin and blood glucose monitoring. In each of these pairs of orders, the second follows from the first as a corollary follows from a proposition. Thus, we refer to the first as the trigger order and the second as the corollary order.

Although the decision to carry out the corollary order in the above case is simple, the need to make a decision may not be recognized.10 Physicians frequently fail to do pre-intervention testing (e.g., checking creatinine levels before ordering an intravenous pyelogram) or follow-up testing (e.g., ordering serum drug levels to monitor gentamicin treatments). Hospitals invest in drug utilization review programs, chart reviews, and educational efforts to reduce these types of mistakes, but with limited long-term success.

We and others have shown that computer-generated reminders can reduce mistakes in physicians' ordering practices; in particular, reminders reduce errors of omission in outpatient settings.11,12,13,14,15,16 These outpatient reminders were printed on paper reports and placed in the patient's chart before a clinic visit. Reminders delivered as the physician writes orders should be particularly effective, since informational interventions made at the decision point have greater influence than those delivered later.17 We hypothesized that reminding the physician to make the decision at the moment of inpatient order writing, with the suggestion presented as a fully formed order, would have an even greater effect on errors of omission regarding corollary orders.

At the time of this study, internal medicine physicians in our institution had been entering all of their patient orders directly into an electronic patient record system for more than 4 years.18 (At present, all physicians write all hospital orders through the computer.) The computer system can provide feedback to physicians as they enter orders. When a physician writes an order for certain drugs or tests, the system can suggest the orders that are the natural corollaries to the first. Such suggested orders are presented as fully formed orders that the physician can accept or reject with a single keystroke. These reminders reduce reliance on memory and provide standardization of care. Here, we report the results of a randomized, controlled clinical trial to determine whether suggesting corollary orders to physicians while they write orders could reduce errors of omission during inpatient stays.

Methods

Setting

We studied the inpatient general medicine wards of Wishard Memorial Hospital, an inner-city public teaching hospital. Patients are cared for by one of six independent services (Red service, Green service, and so on). A group of physicians consisting of a faculty internist (usually a generalist), a senior resident, and two interns (usually categorical medical housestaff) covers each service. A different set of physicians rotates onto the service every 6 weeks. We refer to a specific group of physicians who cover one service for one rotation as a team. During a year, eight different teams would have worked on one service. As described below, teams were randomly assigned to intervention or control services.

Patients were not formally randomized to services, but rather admitted to the services in sequence so that all six services received equal numbers of admissions over time. On average, a team admitted approximately 80-90 patients per rotation, and cared for an average of 16 patients at once. Prior analyses, however, have shown no significant difference in patient demographics, clinical characteristics, or severity of illness among the patients admitted to different services.18 Patients remained on the same service when the team of physicians staffing the service changed at the end of each 6-week rotation. When a patient had multiple admissions during the study, we only included data for the first admission.

The Electronic Patient Record and Order Entry Workstations

The Regenstrief Medical Record System (RMRS) provides a nearly complete electronic patient record that integrates inpatient and outpatient data.19 The patient's electronic record includes demographic information, diagnoses and problem lists, inpatient and outpatient visits, admitting history and physical examination reports, discharge summaries, vital signs, immunizations given, nearly all diagnostic test results (including serologies, cervical cytology, and mammograms), procedures, and outpatient prescriptions. Data from the record are available to physicians as printed flowsheets, via “online” data retrieval terminals, and through the order entry workstations located throughout the hospital and associated clinics.

When this study began, all medicine physicians had been entering all inpatient orders directly into physician workstations for 12 months.18 At that time, providers had access to more than 70 personal computer (PC) workstations distributed around the hospital, emergency room, and clinics. The workstations are linked via a network to a central file server and a cluster of Digital Equipment Corporation's VAX computers. Since orders no longer have to be written in the paper chart, 75% of orders are now written from sites other than the patient's ward. Once orders are entered, the system sends them electronically to the nurses' workstation on the patient's home ward, and requisitions are printed at appropriate locations (e.g., pharmacy, radiology, or heart station). Less than 5% of orders are entered by nursing staff as verbal orders from physicians.

Development of Rules to Automate Guidelines

We used standard reference texts20 and drug package inserts, supplemented by our knowledge of local practice, to identify 87 target orders (76 drugs and 11 tests; see Table 1) that could be paired with one or more corollary orders; for example, aminoglycosides paired with peak and trough aminoglycoside levels, or warfarin with prothrombin time. We chose target orders that are used frequently enough to produce usable data and for which there was some support for corollary orders. Three-quarters of these target order-corollary order pairs were already part of our hospital's armamentarium of drug utilization review criteria, which were developed independently by a hospital committee of staff physicians and clinical pharmacists. These criteria were always applied retrospectively, whereas the computer-based rules were designed to be prospective.

Table 1. Example Trigger and Corollary Orders

Trigger Order    Corollary Orders
Heparin infusion (1) Platelet count once before heparin started, then once in 24 hours
(2) APTT at start, again after 6 hours of a dosage change
(3) Protime once before heparin started
(4) Hemoglobin at start of therapy, then QAM
(5) Test stools for occult blood while on heparin
IV fluids (1) Place a saline lock when IV fluids are discontinued
Insulin (all kinds) (1) Capillary glucoses (four times a day)
(2) Glycosylated HGB (once if not done in preceding 180 days)
Oral hypoglycemic agents (1) Capillary glucose (twice per day)
(2) Glycosylated HGB (once if not done in preceding 180 days)
Narcotics (class II) (1) Docusate (stool softener) if not on any other form of stool softener or laxative
Nonsteroidals (1) Creatinine (if not done in previous 10 days: SMA12, BUN counted as equivalent)
Aminoglycosides (1) Peak and trough levels after dosage changes, and q week if no change
(2) Creatinine twice per week (q Monday and Thursday)
Vancomycin intravenously (1) Measures of serum levels pre and post 4th dose
(2) Audiometry
(3) Baseline creatinine for dose adjustment
Warfarin (1) Prothrombin time each morning
Amphotericin B (1) Creatinine twice per week (q Monday and Thursday)
(2) Magnesium level (twice per week while on therapy)
(3) Electrolytes (twice per week while on therapy)
(4) Acetaminophen (650 mg po 30 min before each amphotericin dose)
(5) Benadryl (50 mg 30 min before each amphotericin dose)
Angiotensin converting enzyme inhibitors (1) Creatinine at baseline then 2 weeks after dosage changes
(2) Potassium (q Monday and Thursday)
Chloramphenicol (1) CBC (twice per week)
(2) Retic count (twice per week)
Air contrast barium enema, IVP, UGI (1) Pregnancy test (if patient is female, in childbearing years, had no hysterectomy, and no pregnancy tests within 3 days)
Isoniazid (1) SGOT, SGPT (as baseline when drug started)
Potassium supplements (1) Electrolytes once each morning
Pulmonary artery catheter (1) Portable AP chest x-ray (when first placed to check for placement)
Ventilator orders (1) Arterial blood gas after changes
Vasopressin drip (1) Nitroglycerin drip or nitroglycerin paste (if patient having chest pain or known CAD)

Each row in Table 1 defines the corollary orders for a trigger drug or test, and a row may identify a class of drugs rather than a single agent. (Full details of the reminders are available from the authors.) The category “oral hypoglycemics,” for example, represents three different oral agents from our formulary. The first column identifies a trigger order; the second column identifies its corollary orders. The corollary order either prepares the patient to receive the item in the trigger order, prevents adverse effects of the trigger order, or monitors for adverse effects of the trigger order. The first row, for example, says that a heparin drip order requires a platelet count—the corollary order—before the heparin is started and again 24 hours later.

When suggesting orders, the computer took into account other factors, such as the status of the order (is it a new order or a revision of an old order?), the time elapsed since the last time the order being suggested was written, and whether any orders for a near equivalent item (e.g., a blood urea nitrogen level versus serum creatinine level) had already been written.
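
To make this logic concrete, the following is a minimal Python sketch of such a rule check. It is our own illustration rather than the RMRS implementation, and the rule table contents, intervals, near-equivalent sets, and function names are assumptions chosen only for the example.

```python
# Hypothetical rule table: trigger order -> list of (corollary order,
# minimum interval before re-suggesting, near-equivalent orders that also
# satisfy the rule). Contents are illustrative, not the study's rules.
from datetime import datetime, timedelta

RULES = {
    "gentamicin iv": [
        ("serum creatinine", timedelta(days=3), {"bun", "sma12"}),
        ("gentamicin peak and trough", timedelta(days=7), set()),
    ],
    "warfarin": [
        ("prothrombin time", timedelta(days=1), set()),
    ],
}

def suggest_corollaries(trigger, is_new_order, recent_orders, now):
    """Return corollary orders to suggest for a trigger order.

    recent_orders maps order names to the datetime they were last written.
    """
    if not is_new_order:  # a revision of an existing order does not re-trigger
        return []
    suggestions = []
    for corollary, min_interval, equivalents in RULES.get(trigger, []):
        candidates = {corollary} | equivalents
        last = max(
            (recent_orders[c] for c in candidates if c in recent_orders),
            default=None,
        )
        # Suggest only if neither the corollary nor an equivalent was
        # ordered recently enough to satisfy the rule.
        if last is None or now - last >= min_interval:
            suggestions.append(corollary)
    return suggestions

# Example: a new gentamicin order with a creatinine drawn yesterday would
# suggest only the peak-and-trough levels.
print(suggest_corollaries(
    "gentamicin iv", True,
    {"serum creatinine": datetime(2023, 1, 1)}, datetime(2023, 1, 2),
))
```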

Intervention

We made human-readable versions of the corollary order guidelines available to both study and control physicians. More than half of the guidelines were also being actively promoted through the hospital's drug utilization review (DUR) program. During the study, all medicine physicians wrote their orders using the computer order entry system. When a physician entered a trigger order (an order from the first column of Table 1) for a particular patient, a rule-based reminder program analyzed the data in that patient's electronic medical record. The program determined which, if any, of the corollary orders from Table 1 should be presented. For intervention physicians, the computer displayed the suggested corollary orders in a workstation window as shown in Figure 1. Notice that these orders are fully formed and that the physician can accept, reject, or modify them with a few keystrokes. When the computer suggested corollary orders to the physician, the physician was free to accept or reject them as he or she saw fit. For control physicians, the computer recorded the corollary orders for later analysis but did not inform the physician about them.

Figure 1. Order entry screen showing suggested orders generated when intravenous gentamicin is ordered.

Study Design

The study was a randomized, controlled trial conducted over 30 weeks, starting in October 1992. At the beginning of the study, three of the six services were randomly assigned to be intervention services, and the remaining three were assigned to be controls.

The Chief Medical Resident constructed teams of faculty, housestaff, and students based on scheduling issues, clinical skills, and personalities. The study biostatistician then randomly assigned the teams to services. Physicians assumed the study status of their assigned service throughout their rotation. Physicians on intervention services received reminders about suggested corollary orders. Those on control services did not.

The system assigned patients to intervention or control status based on the service to which they were admitted. Patients never changed study status during a hospital admission. If the patient's hospitalization crossed rotation periods, he or she remained on the same service and retained that service's study status, even though a different team of physicians was randomly assigned to provide the care.

Physicians care for patients from more than one service at night and on weekends. The Chief Medical Resident constructed the residents' evening coverage schedule to separate coverage for patients based on their services' study status so that, if there were no coverage switches, control physicians provided overnight and weekend coverage only for control patients, and intervention physicians cared only for intervention patients. To avoid contamination that could occur when scheduling conflicts put intervention physicians in charge of control physicians' patients and vice versa, the computer suggested corollary orders to intervention physicians only when they were writing orders for intervention physicians' patients. It suppressed the display when any physician wrote orders for control physicians' patients. Furthermore, the computer never displayed corollary orders to control physicians when they were writing orders. Nurses and pharmacists could enter verbal orders from physicians, but the computer never suggested corollary orders during verbal order writing sessions.
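
The gating just described reduces to a simple decision rule. The short sketch below is our own illustration with hypothetical function and argument names, not the study software.

```python
# Display suggested corollary orders only when an intervention physician is
# writing orders for an intervention patient, and never during verbal-order
# entry by nurses or pharmacists.
def show_corollary_reminders(physician_group: str,
                             patient_group: str,
                             verbal_order: bool) -> bool:
    if verbal_order:
        return False
    return physician_group == "intervention" and patient_group == "intervention"

assert show_corollary_reminders("intervention", "intervention", False)
assert not show_corollary_reminders("intervention", "control", False)
assert not show_corollary_reminders("control", "intervention", False)
assert not show_corollary_reminders("intervention", "intervention", True)
```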

Corollary orders were presented to medical students when they drafted orders for a physician's approval, but were not presented to that physician when he or she reviewed the drafted orders before electronically signing them.

Data Sources

The order entry system's databases provided us with information about the trigger orders and suggested corollary orders. We obtained information about the physician compliance with the corollary orders from the ordering system, which carried records of all orders, and the RMRS, which contains all test results and drug administration records.

We obtained information about length of stay and hospital charges from our hospital discharge records and billing system, respectively. Pharmacists' interventions with physicians were extracted from a database maintained by the pharmacy for administrative purposes. We examined serum creatinine as an outcome to see whether drug monitoring for renal failure had any effect on creatinine levels. We obtained information about creatinine levels during the hospital stay from the RMRS.

Analysis

We examined several outcome variables. The variable on which we expected the main effect was the per-physician “compliance” with the automated guidelines about corollary orders, i.e., the number of times a physician ordered the suggested corollary orders divided by the total number of suggested corollary orders. We computed three different compliance rates: (1) immediate compliance: the physician wrote orders for the suggested corollary orders during the same ordering session in which he or she wrote the triggering order; (2) 24-hour compliance: the physician ordered the suggested corollary order within 24 hours of the trigger order; and (3) hospital stay compliance: the physician ordered the suggested corollary order any time during the hospital stay after the trigger order was entered.

The denominator for all three per-physician measures was the number of corollary orders suggested to the physician by the computer. The numerator for immediate compliance was the number of corollary orders that the physician wrote during the same ordering session as the triggering order for that corollary. A single order could trigger suggestions about more than one corollary order; e.g., an order for intravenous gentamicin would trigger suggestions to order both serum creatinine and serum gentamicin levels (see Fig. 1). If the physician ordered only one of these two corollary orders, the immediate compliance score for that triggering order would be 50%. A suggestion for the same corollary order could occur more than once during the hospital stay, whether the physician responded on the first occasion or not. For example, each change in dose of intravenous heparin would trigger a suggestion for another measure of the activated partial thromboplastin time (APTT). Each such order for heparin would count as a separate triggering event and would be associated with a separate compliance score. The physician's overall immediate compliance score was the arithmetic mean of the immediate compliance scores for each of the trigger events. In computing immediate compliance we did not distinguish between a physician accepting the computer's suggested orders and the physician independently writing the order during that same ordering session.

We computed the physician's 24-hour compliance by the same method used for the immediate compliance, except that the ordering of a suggested corollary order any time within 24 hours after the triggering order counted as compliance with the suggestion. We averaged the 24-hour compliance scores across triggering events to obtain a physician's overall 24-hour compliance score. By definition, the 24-hour compliance was greater than or equal to the immediate compliance.

To calculate a physician's hospital stay compliance, we counted an order for the suggested item written any time after the triggering order, up to the end of the hospitalization, as a complying response. This is the most liberal definition of compliance, both because it ignores potential problems of timing (e.g., ordering a gentamicin level later than the fourth dose) and because a single order for an APTT written at discharge would count as compliance for all APTT orders that the computer suggested during the hospital stay.
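
The three compliance definitions can be summarized in a short sketch; this is our own illustration with assumed field names, not the study's analysis code.

```python
# Each suggestion record carries the trigger event it belongs to, the times of
# the trigger and of any matching corollary order, and whether the corollary
# was written in the same ordering session. Times are hours from admission.
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict
from typing import Optional

@dataclass
class Suggestion:
    trigger_event_id: str             # one trigger order may suggest several corollaries
    trigger_time: float
    corollary_time: Optional[float]   # None if the corollary was never ordered
    same_session: bool                # written during the same ordering session?

def satisfied(s: Suggestion, window: str) -> bool:
    if s.corollary_time is None:
        return False
    if window == "immediate":
        return s.same_session
    if window == "24h":
        return s.corollary_time - s.trigger_time <= 24
    return True  # "stay": any order after the trigger counts

def physician_compliance(suggestions, window: str) -> float:
    # Score each trigger event by the fraction of its suggested corollaries
    # satisfied, then take the arithmetic mean of the event scores, as in the text.
    by_event = defaultdict(list)
    for s in suggestions:
        by_event[s.trigger_event_id].append(s)
    event_scores = [
        mean(1.0 if satisfied(s, window) else 0.0 for s in group)
        for group in by_event.values()
    ]
    return mean(event_scores)

demo = [
    Suggestion("t1", 0.0, 0.5, True),    # creatinine ordered in the same session
    Suggestion("t1", 0.0, None, False),  # gentamicin level never ordered
    Suggestion("t2", 30.0, 40.0, False), # APTT ordered 10 hours after a heparin change
]
print(physician_compliance(demo, "immediate"))  # 0.25 = mean(0.5, 0.0)
print(physician_compliance(demo, "24h"))        # 0.75 = mean(0.5, 1.0)
```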

Housestaff physicians were the target of the intervention, so they were the unit of analysis. When physicians served more than one rotation and could not be assigned to the same study status for all rotations, we excluded the data from all rotations after the physician's original study status changed. We also excluded data about suggested orders that occurred when physicians' and patients' study status differed—as could occur if a physician traded his or her night call with a physician who had a different study status.

Faculty are proscribed from writing orders (other than “do not resuscitate” orders) except during emergencies. Therefore, the analysis was limited to housestaff physicians. Because the physicians practice within teams, they are not fully independent units. Interns write orders independently, but they still might be influenced by the resident or staff leaders of their teams. Further complicating the association, some physicians served with different residents and/or interns on different rotations during the study. To allow for this clustering of physicians within teams, we used generalized estimating equations (GEEs). This method can account for the hierarchical relationships in the data set without the need to discard repeated observations within clusters.21 We analyzed the immediate, 24-hour, and hospital stay compliance using GEEs.
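
As a rough illustration of this kind of clustered analysis (not the authors' actual code; the data frame columns and values below are invented), a GEE with an exchangeable working correlation for physicians nested within teams could be fit with statsmodels as follows.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented per-physician compliance scores, one row per physician.
df = pd.DataFrame({
    "team_id":    ["A", "A", "A", "B", "B", "C", "C", "D", "D", "D"],
    "group":      ["intervention"] * 5 + ["control"] * 5,
    "compliance": [0.52, 0.47, 0.58, 0.44, 0.50, 0.24, 0.19, 0.30, 0.22, 0.26],
})

# Exchangeable working correlation: physicians on the same team are treated
# as equally correlated; compliance is modeled here as a continuous score.
model = smf.gee(
    "compliance ~ group",
    groups="team_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```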

To complement the above analysis, with its complex hierarchical model, we also analyzed a subset of the data using a simpler approach. For this analysis, we considered only the physician's response to the first occurrence of a unique trigger-corollary order pair per patient. So, for example, if a patient had multiple changes in heparin drip rate and the computer suggested an APTT to follow up each of these dosage changes, we would count only the physician's response to the first suggestion. From these first occurrences we computed the per-physician immediate, 24-hour, and hospital stay compliance as above, and we compared the intervention and control physicians' mean compliance scores by Student's t test. In this simpler analysis, we ignored possible interactions among physicians within teams.
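
For illustration only, with invented per-physician scores rather than study data, the simpler comparison reduces to a two-sample t test.

```python
# Compare mean per-physician first-occurrence compliance between groups,
# ignoring team clustering. Scores are invented for the sketch.
from scipy import stats

intervention_scores = [0.52, 0.47, 0.58, 0.44, 0.50, 0.61, 0.39]
control_scores      = [0.24, 0.19, 0.30, 0.22, 0.26, 0.18, 0.28]

t_stat, p_value = stats.ttest_ind(intervention_scores, control_scores)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```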

We also examined several patient-specific “outcomes”: length of stay, hospital charges, number of pharmacist interventions, and average creatinine during the hospital stay (creatinine being a commonly suggested corollary order used to evaluate the potential nephrotoxicity of trigger drugs). The distributions of length of stay and charges were highly skewed to the right, so we applied log transformations to these two variables to produce more nearly normal distributions. For the few measures of patient status (creatinine levels), we compared intervention patients with control patients using Student's t test, ignoring the clustering within physicians or physician teams.
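
A brief sketch of the transformation step, again with invented values, shows how the log transform reduces right skew before the group comparison.

```python
import numpy as np
from scipy import stats

# Invented, right-skewed lengths of stay (days).
los_days = np.array([2, 3, 3, 4, 5, 6, 8, 12, 21, 35], dtype=float)

print("skew before log transform:", round(stats.skew(los_days), 2))
print("skew after log transform: ", round(stats.skew(np.log(los_days)), 2))
# The transformed values, being closer to normal, would then be compared
# between intervention and control patients with Student's t test as above.
```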

To be sure the two groups were comparable, we compared intervention and control patients on clinical and demographic variables using Student's t test and the χ2 test.

Results

The randomized, controlled trial ran for 30 weeks, beginning in October 1992. There were 6 different housestaff rotations during the 30-week period, with 6 teams of faculty and housestaff per rotation.

Six physicians were excluded from the study because they received fewer than five suggestions about corollary orders. This cutoff was chosen by inspection of the distribution of the number of suggested corollary orders. These were mostly off-service physicians who covered night calls for one or two nights but were not part of teams assigned to a service. A total of 86 housestaff physicians received more than five suggestions about corollary orders during the study: 45 intervention physicians and 41 control physicians. Nine physicians changed study status when they returned for a second rotation during the study. For these physicians, we included only data for the rotations before they changed study status.

During the study, the intervention and control physicians cared for 2,181 different patients during 2,955 different admissions. Table 2 shows the demographic and clinical characteristics of these patients. There were no significant differences between intervention and control patients for any of these variables.

Table 2. Demographic Characteristics of Study and Control Patient Groups

Characteristic    Study    Control
Caucasian (%) 50 49
Age (mean years/std dev) 54/18 53/18
Male (%) 45 51
Problem list (%):
Hypertension 5.2 5.6
Heart failure 3.4 3.2
Diabetes mellitus 3 3
Chest pain 3.5 2.9
Pneumonia 2.5 2.7
Urinary tract infection 2.4 2.2
Anemia 2.4 2.2
Gastrointestinal bleeding 1.9 1.7
Diabetic ketoacidosis 0.6 0.5

Of these 2,181 patients, 1,686 (77.3%; 814 intervention patients and 872 control patients) had at least 1 order written that would trigger a suggestion for a corollary order. In all, intervention and control physicians entered 7,394 trigger orders, which resulted in 11,404 suggestions for corollary orders. On average, a trigger order generated suggestions for 1.5 corollary orders. Trigger orders made up 9.6% of all orders written for the 2,181 patients. Patients with at least 1 suggested corollary order had an average of 6.8 such suggestions per admission.

The effect of the computer suggestions was very strong, whether measured as immediate, 24-hour, or hospital stay compliance. Intervention physicians ordered the corollary orders required by our guidelines twice as often as control physicians did, when measured by immediate compliance (46.3% versus 21.9%, p < 0.0001). Significant differences between study and control physicians also appeared in 24-hour compliance (50.4% vs 29.0%, p < 0.0001) and hospital-stay compliance (55.9% vs 37.1%, p < 0.0001). Because corollary orders for saline locks had such a large effect and are the least clinically significant, we repeated the simple analyses excluding saline lock orders and found that immediate compliance was 46.4% vs 27.6% (p < 0.0001), 24-hour compliance was 50.9% vs 35.3% (p < 0.0001), and hospital-stay compliance was 56.0% vs 43.5% (p < 0.0001). The effects were almost identical whether measured on all of the data using the more complex GEE model or measured as first-occurrence compliance using a simple Student's t test. The mean immediate compliance with the first occurrence of a suggestion was 48% among intervention physicians and 23% among control physicians (p < 0.001). The other first-occurrence compliance scores and significance levels were also very close to their GEE counterparts.

Figure 2 is a histogram comparing the 24 hour compliance of study and control physicians. There is little overlap between the study and control populations. Several control physicians had compliance rates below 20%, and no control physician reached a compliance rate greater than 50%. On the other hand, study physicians all maintained compliance rates of at least 30%, and some reached levels of 70%.

Figure 2. Histogram of individual physician 24-hour compliance.

There is very little difference between the immediate and 24-hour compliance scores, indicating that corollary orders that are not written at the same time as their trigger order are unlikely to be written later during the same day.

The difference in the compliance scores of intervention and control physicians shrinks by almost one fifth from immediate to hospital stay compliance, a result of a greater increase in control physicians' compliance. Nonetheless, a large difference (18 percentage points) separates the compliance scores of intervention and control physicians even when measured as hospital stay compliance.

Breakdowns of compliance by trigger and corollary order illustrate the kinds of items the intervention affected most extensively. Table 3 shows the 24-hour compliance scores for intervention and control physicians broken down by the 25 most common trigger orders. Table 4 shows comparable data broken down by the 25 most common corollary orders. In both cases, the top 25 orders account for more than 80% of the suggestions provided.

Table 3. 24-Hour Compliance Rate by Triggering Order for the 25 Most Common Triggering Orders

Order    Total Orders    Study Compliance (%) (n = 45)    Control Compliance (%) (n = 41)    Compliance Increase (%)
Heparin infusion 1476 77.42 40.24 37.18
IV fluid orders 1061 64.66 0.00 64.66
Cimetidine po 1055 12.66 5.18 7.48
Type & cross 542 22.90 14.64 8.26
Insulin lente humulin 518 40.00 31.01 8.99
Furosemide po 410 75.38 62.09 13.29
Ferrous sulfate 394 21.43 16.47 4.96
Furosemide IV 360 60.88 51.85 -0.98
Warfarin 303 68.18 35.09 33.09
Ventilator settings 242 80.14 21.78 58.36
Insulin NPH humulin 241 52.17 26.19 25.98
Vancomycin IV 224 60.44 44.36 16.08
Sustained release theophylline 215 73.33 45.46 27.88
Gentamicin IV 197 78.35 61.00 17.35
Insulin reg Humulin 197 53.33 35.87 17.46
Digoxin po 178 96.88 84.15 12.73
Glyburide po 177 51.28 43.43 7.85
Meperidine IM/IV 177 24.24 5.41 18.84
Captopril po 177 74.42 55.06 19.36
Enteral feeding 170 23.08 7.60 15.48
Enalapril po 161 73.68 70.59 3.10
Kayexalate suspension 161 26.09 18.48 18.48
Timentin IV 161 45.24 14.29 30.95
Spironolactone po 158 42.25 20.69 21.56
Glipizide po 147 47.22 36.00 11.22

Table 4. 24-Hour Compliance Rate by Corollary Order for the 25 Most Common Corollary Orders

Suggested Order    Total Orders    Study Compliance (%)    Control Compliance (%)    Compliance Increase (%)
Serum creatinine 1209 48.28 41.18 7.10
Saline lock 1065 64.73 0.00 64.73
Serum electrolytes 1034 87.03 70.86 16.18
Glycosylated Hgb A-1 821 23.71 7.39 16.32
Activated partial thromboplastin time 615 89.21 59.56 29.65
SGPT (ALT) 569 12.63 1.87 10.76
Sodium docusate 506 79.35 79.26 0.09
SGOT (AST) 467 7.14 0.00 7.14
Capillary glucose 446 30.77 4.41 26.36
Blood cell profile 382 80.46 51.44 29.02
Stool occult blood test 374 60.94 12.09 48.85
Prothrombin time 320 64.57 45.52 19.05
Theophylline level 270 75.89 46.51 29.38
Diphenhydramine 267 16.41 7.19 9.21
Platelet count 236 70.00 15.09 54.91
Acetaminophen 232 19.66 14.78 4.88
Reticulocyte count 205 19.66 11.36 8.29
NG feeding tube 170 23.08 7.60 15.48
Fe-TIBC 149 12.64 0.00 12.64
Vancomycin 143 90.74 65.17 25.57
Phenytoin level 140 73.13 38.36 34.78
Portable AP CXR 127 81.69 33.93 47.76
A-V blood gas 123 72.60 0.00 72.60
Simplate bleed time 123 26.23 0.00 26.23
Gentamicin level 118 90.00 75.86 14.14

The effect of the intervention varied by specific trigger-corollary order pair. Computer reminders increased adherence to guidelines concerning many important corollary orders. For example, they increased 24-hour compliance for monitoring serum levels of gentamicin, vancomycin (though the value of monitoring is debatable), and theophylline by 9, 26, and 24 percentage points respectively. Differences persisted when hospital compliance was assessed. We were surprised by these results because we had assumed that most physicians were already complying fully with guidelines about antibiotic and theophylline level monitoring.

The reminders also caused large improvements in compliance with suggestions to order prothrombin times after coumadin dosage changes, APTT after heparin dose changes, baseline creatinines before vancomycin and aminoglycoside antibiotics, and radiographs to check for line placement and lung status during mechanical ventilation. The difference between intervention and control compliance rates for these suggestions was as much as 25 percentage points. On the other hand, computer suggestions to order baseline creatinine measurements before starting administration of cimetidine or ranitidine had no effect. In retrospect, we considered this a possibly appropriate response to a guideline with only a theoretical basis.

Pharmacists made 105 interventions with intervention physicians and 156 with control physicians (two-tailed p = 0.003) for errors considered to be life threatening, severe, or significant.

There was no difference in maximum serum creatinine levels between the groups (1.51 ± 1.25 for intervention patients versus 1.42 ± 0.88 for controls; p = 0.28).

Length of stay and total inpatient charges were not different for intervention patients compared with control patients. The average length of stay was 7.62 days for intervention patients and 8.12 days for control patients, a difference of -0.5 days (95% confidence interval of the difference is -0.17 to 1.19; p = 0.94). Average hospital charges were $8,073.52 for intervention patients and $8,589.47 for control patients, a difference of -$515.95 (95% confidence interval of the difference is -$828.41 to $1,316.85; p = 0.68).

An increase in charges might have been expected, since the aim of all the reminders was to increase the utilization of the suggested order items. However, the variance and confidence intervals of charges and length of stay are too large to conclude anything from these results.

Discussion

Computer suggestions about corollary orders had large effects on the adherence to our guidelines about corollary orders, especially when measured in terms of immediate or 24-hour compliance. Thus, they reduced errors of omission. That we observed smaller differences in compliance when measured over the entire hospital stay is not surprising. With a larger time window, there is more time for providers to remember to order the item, for other physicians (and consultants) to write (or induce) the order, and for other indications to arise for the order. Further, active institutional controls, such as pharmacokinetics consulting services, may have had more time to influence the ordering process. The intervention increased adherence to many guidelines that were being promoted by our Pharmacy and Therapeutics Committee, such as the requirement for APTT measures after each heparin dosage change and the follow-up of aminoglycoside therapy with measurements of serum levels. The reminders also significantly reduced the number of adverse or potential adverse effects as measured by the pharmacy's intervention log. Pharmacists had to call physicians to ask about drug-related interventions one third less often for study patients than for control patients.

We did not see any effects on outcomes such as length of stay, serum creatinine, or charges. However, the guidelines touched only 9.6% of the orders written during this study, so we had not expected to see important outcome effects when the study affected such a small part of the overall care process.

The clinical importance of the suggestions about corollary orders varied. Some with low clinical significance, such as ordering a saline lock when IV fluids are discontinued, can have a large economic impact because the service often cannot be billed without an order. The intervention had large effects on some practices that were already part of the pharmacy review process, such as recommendations about ordering peak and trough gentamicin levels to monitor for adverse effects of intravenous gentamicin, and APTTs to monitor the efficacy of heparin therapy. The reminders had little or no effect on some corollary orders: e.g., suggestions to measure creatinine levels before using cimetidine. Indeed, the rate of response to this common suggestion was less than 20% in both intervention and control cases; when creatinine was ordered, it may have been for reasons other than the cimetidine order. The large difference in response rates by suggestion indicates that physicians did not blindly accept the suggested orders. In past studies, we have seen a similar phenomenon; computer reminders had their greatest effect on compliance when the physicians agreed with, and intended to comply with, the rules.12 Physicians may choose not to accept corollary orders for several reasons: (1) the orders were not appropriate (i.e., the rules are in error); (2) the physician did not agree with the basis for the reminder (disagreement with the guideline); or (3) the physician chose not to deal with the suggestion and so dismissed it (no time).

In previous studies, we have used the microcomputer workstations to discourage the ordering of unnecessary tests and treatments, with significant reductions in costs.22,23,24 Given that the current intervention only suggested ordering tests or drugs (it never discouraged testing or ordering a drug), we had expected an increase in resource use. However, there were no significant differences in the hospital charges for study and control patients. It is possible that, even though the guidelines suggested more resource use, the better care they promoted lowered costs by avoiding complications. Given the high cost of drug-induced complications,37 avoiding even a single complication could offset a substantial amount of additional resource use.

The study was done in a teaching hospital where only residents write orders. (Staff physicians guide the care process, but by policy they do not write orders.) However, in other studies both inside and outside of academic centers, reminders have significantly influenced residents, staff physicians, and family practice physicians in private practice.16,25 We believe that similar effects would be observed in most settings.

Physicians forget to do baseline testing (e.g., measuring creatinine levels before ordering an intravenous pyelogram) or follow-up testing (e.g., using serum drug levels to monitor gentamicin treatments). These errors of omission, one type of the errors Reason refers to as latent errors, are difficult to prevent because the omission itself is difficult to identify.2 One way to improve physician compliance with such guidelines is face-to-face “reverse detailing” (generally applied to errors of commission rather than omission), but these and other educational efforts are labor intensive and cannot always be scaled up to a large practice environment.

Errors in medical practice can have dire effects, yet errors commonly occur. Physicians have difficulty accepting that mistakes are inevitable, and they take responsibility for mistakes made by others caring for their patients.26,27,28 When errors are investigated, the immediate cause of the error is typically identified and corrected, but the root causes are not. The way to reduce errors is to design systems that prevent or detect them. Leape3 outlines four mechanisms for redesigning health care systems to significantly reduce the chance of error: (1) reduce reliance on memory; (2) improve access to information; (3) standardize; and (4) train. Computer order entry systems provide easy access to patient- and textbook-level information29; they provide standardization through preformed order sets; and they provide active “training” via patient-specific reminders.

We derived most of the guidelines about corollary orders from pharmacy quality assurance rules, rules about monitoring therapy for therapeutic effects, and rules about measuring renal function. Our Pharmacy and Therapeutics Committee has clear recommendations on these subjects. Pharmacy and Therapeutics Committees nationwide have long worked to improve compliance with such guidelines. They institute drug utilization review programs, chart reviews, and education efforts, but have had little long-term success, despite the money and effort invested.30,31,32 These programs are difficult to sustain because they require ongoing investment and continuous renewal. Even when physicians are aware of the appropriate monitoring guidelines, they fail to carry them out.10

The use of computers to remind physicians about corollary orders as they write trigger orders can be sustained without significant ongoing costs, assuming physicians are already writing orders with computer workstations. Furthermore, analysis at another institution suggests that computer interventions during order entry have the potential to reduce adverse effects by 25-49%,33 and many hospitals are now introducing such systems. There are costs associated with writing and maintaining the guidelines, but those costs are not large. It took one of the authors (JMO) about 2 weeks to write the rules used in this study. (These same rules have been running untended since this study ended.)

Presentation of fully formed suggested orders has another, though lesser, benefit. If the physician already intended to write the order, this approach makes computer order entry more time competitive with the alternative paper method.34 The execution of the rules and display of the suggested order screen took less than half a second on 33-MHz, Intel 80486-based microcomputers, and this saved the physician the 10 to 20 seconds it might have taken to order the same tests if he or she had to find the item on a menu and type in the order instructions. Physicians do not have to pause to think about ordering follow-up tests, find the test's name on a menu (or type it in), or enter the instructions related to the order; they accept the order with a single keystroke.

Many other opportunities exist to improve care and speed the order entry process using order feedback. For another study, we are now building computer guidelines that suggest orders for hypertension management according to the patient's blood pressure control, co-morbidities, age, gender, and race. Without impeding physicians' goals, the computer can remind them of the “preferred” approach and simplify the order entry process, while leaving the physician in ultimate control of the decision. As physician order entry systems become more common, this will be an efficient way to disseminate and implement guidelines.

These findings must be interpreted in light of limitations in the study. First, the data are from internal medicine housestaff at a single institution, and it may not be possible to generalize from them. Second, the design does not allow us to separate the effects of the intervention (corollary orders) and the guidelines on which they are based. Finally, the relatively small study size limits the ability to detect changes in patient outcomes.

Computer systems can definitely increase compliance with guidelines that reflect the current beliefs of the ordering physicians. While not universally available at present, such systems are available at leading academic centers and throughout the Veterans' Administration.35 By demonstrating how clinical decision support systems can decrease errors in physician practice, our results may stimulate more widespread implementation of these systems.36

Acknowledgments

We acknowledge the technical assistance provided by Burke Mamlin, Jeff Warvel, and Jill Warvel. We thank the physicians, students, nurses, and staff of Wishard Memorial Hospital, Indianapolis, Indiana, for their patience. In particular, we acknowledge the indispensable efforts of Terry Hogan, RN, Brenda Smith, RN, and Cheryl Wodniak, RN.

Financial support was provided by the Agency for Health Care Policy and Research (grants HS 05626 and HS 07719) and the National Library of Medicine (contract NO1-LM-3-3410).

References

1. Alluisi EA. Attention and vigilance as mechanisms of response. In: EA Bilodeau (ed). Acquisition of Skill. New York: Academic Press, 1966;201-13.
2. Reason J. Human Error. New York: Cambridge University Press, 1990;184.
3. Leape LL. Error in medicine. JAMA. 1994;272:1851-7.
4. Clark CM, Kinney ED. The potential role of diabetes guidelines in the reduction of medical injury and malpractice claims involving diabetes. Diabetes Care. 1994;17:155-9.
5. Brennan TA, Leape LL, Laird N, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-6.
6. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324:377-84.
7. Palmer RH, Strain R, Rothrock JK, Hsu LN. Evaluation of operational failures in clinical decision making. Med Decis Making. 1983;3:299-310.
8. McDonald CJ. Protocol-based computer reminders. The quality of care and the non-perfectibility of man. N Engl J Med. 1976;295:1351-5.
9. Hatoum HT, Hutchinson RA, Witte KW, Newby GP. Evaluation of the contribution of clinical pharmacists: inpatient care and cost reduction. Drug Intell Clin Pharm. 1988;22:252-9.
10. Norman D. Things That Make Us Smart. Reading, MA: Addison-Wesley, 1993;147-8.
11. McDonald CJ. Use of a computer to detect and respond to clinical events. Its effect on clinical behavior. Ann Intern Med. 1976;84:162-7.
12. McDonald CJ, Hui SL, Smith DM, Tierney WM, Cohen SJ, Weinberger M. Reminders to physicians from an introspective computer medical record. Ann Intern Med. 1984;100:130-8.
13. Litzelman DK, Dittus RS, Miller ME, Tierney WM. Requiring physicians to respond to computerized reminders improves their compliance with preventive care protocols. J Gen Intern Med. 1993;8:311-7.
14. Cohen DI, Littenberg B, Wetzel C, Neuhauser DB. Improving physician compliance with preventive care guidelines. Med Care. 1982;20:1040-5.
15. Barnett GO, Winickoff RM, Morgan MM, Zielstorff RD. A computer-based monitoring program for follow-up of elevated blood pressure. Med Care. 1983;21:400-9.
16. McPhee SJ, Bird JA, Fordham D, Rodnick JE, Osborn EH. Promoting cancer prevention activities by primary care physicians: results of a randomized, controlled trial. JAMA. 1991;267:538-44.
17. Tierney WM, Hui SL, McDonald CJ. Delayed feedback of physician performance vs. immediate reminders to perform preventive care: effects on physician compliance. Med Care. 1986;24:659-66.
18. Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA. 1993;269:379-83.
19. McDonald CJ, Tierney WM, Overhage JM, Martin DK, Wilson GA. The Regenstrief Medical Record System: 20 years of experience in hospitals, clinics, and neighborhood health centers. MD Comput. 1992;9:206-17.
20. Knoben J, Anderson P. Handbook of Clinical Drug Data, 6th edition. Hamilton, IL: Drug Intelligence Publications, 1988.
21. Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized estimating equations approach. Biometrics. 1988;44:1049-60.
22. Tierney WM, Miller ME, McDonald CJ. Informing physicians of test charges reduces outpatient test ordering. N Engl J Med. 1990;322:1499-1504.
23. Tierney WM, McDonald CJ, Hui SL, Martin DK. Computer predictions of abnormal test results. Effects on outpatient testing. JAMA. 1988;259:1194-8.
24. Tierney WM, McDonald CJ, Martin DK, Hui SL, Rogers MP. Computerized display of past test results. Effect on outpatient testing. Ann Intern Med. 1987;107:569-74.
25. Frame PS, Kowulich BA, Llewellyn AM. Improving physician compliance with a health maintenance protocol. J Fam Pract. 1984;19:341-4.
26. Hilfiker D. Facing our mistakes. N Engl J Med. 1984;310:118-22.
27. Christensen JF, Levinson W, Dunn PM. The heart of darkness: the impact of perceived mistakes on physicians. J Gen Intern Med. 1992;7:424-31.
28. Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? JAMA. 1991;265:2089-94.
29. Overhage JM, Tierney WM, McDonald CJ. Design and implementation of the Indianapolis Network for Patient Care and Research. Bull Med Libr Assoc. 1995;83:48-56.
30. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53-6.
31. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 70 randomized controlled trials. JAMA. 1992;268:1111-7.
32. Gray BH, Field MJ (eds). Controlling Costs and Changing Patient Care? The Role of Utilization Management. Washington, DC: National Academy Press, 1989.
33. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995;274:35-43.
34. Overhage JM, Tierney WM, McDonald CJ, Pickett KE. Computer assisted order entry: impact on intern time use. Clin Res. 1991;39:794A.
35. Sittig DF, Stead WW. Computer-based physician order entry: state of the art. J Am Med Inform Assoc. 1994;1:108-23.
36. McDonald CJ, Overhage JM. Guidelines you can follow and trust: an ideal and an example. JAMA. 1994;271:872-3.
37. Classen DC, Pestotnik SL, Evans RS, Lloyd JF, Burke JP. Adverse drug events in hospitalized patients: excess length of stay, extra costs, and attributable mortality. JAMA. 1997;277:301-6.
