Journal of the American Medical Informatics Association (JAMIA). 2007 Jul–Aug;14(4):451–458. doi: 10.1197/jamia.M2369

A Systematic Review of the Performance Characteristics of Clinical Event Monitor Signals Used to Detect Adverse Drug Events in the Hospital Setting

Steven M Handler a,b, Richard L Altman c, Subashan Perera a,d, Joseph T Hanlon a,e,g,h, Stephanie A Studenski a,g, James E Bost f, Melissa I Saul b, Douglas B Fridsma b
PMCID: PMC2244905  PMID: 17460130

Abstract

Objective

We conducted a systematic review of pharmacy and laboratory signals used by clinical event monitor systems to detect adverse drug events (ADEs) in adult hospitals.

Design and Measurements

We searched the MEDLINE, CINAHL, and EMBASE databases for the years 1985–2006, and found 12 studies describing 36 unique ADE signals (10 medication levels, 19 laboratory values, and 7 antidotes). We were able to calculate positive predictive values (PPVs) and 95% confidence intervals (CIs) for 15 signals.

Results

We found that PPVs ranged from 0.03 (95% CI, 0.03–0.03) for hypokalemia to 0.50 (95% CI, 0.39–0.61) for a supratherapeutic quinidine level. In general, antidotes (range = 0.09–0.11) had the lowest PPVs, followed by laboratory values (range = 0.03–0.27) and medication levels (range = 0.03–0.50).

Conclusion

Data from this study should help clinical information system and computerized decision support producers develop or improve existing clinical event monitor systems to detect ADEs in their own hospitals by prioritizing those signals with the highest PPVs.

Introduction and Background

Clinical decision support (CDS) systems have been shown to improve patient care and treatment outcomes by providing physicians and other health care providers with patient-specific information that is intelligently filtered and presented at appropriate times. 1 Clinical event monitors, one of the most common types of CDS systems, provide feedback through alerts and reminders to health care providers when triggered by certain information available in electronic format (i.e., by signals). 2 Clinical event monitors can be used to detect medication-related problems by processing pharmacy order signals 3, 4 and laboratory test result signals, 5 generated by systems with varying levels of automation and sophistication. 6
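To make the notion of a signal concrete, the sketch below shows how a single laboratory-result signal might be expressed as a machine-checkable rule in a clinical event monitor. This is a minimal illustration only; the threshold, field names, and Signal structure are our own assumptions and are not drawn from any of the systems reviewed here.

```python
# Minimal sketch of a laboratory-result ADE signal, as a clinical event monitor
# might evaluate it. The threshold, field names, and Signal structure are
# illustrative assumptions, not taken from any system reviewed in this paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    name: str
    category: str                    # "antidote", "medication level", or "laboratory result"
    fires: Callable[[dict], bool]    # rule applied to one electronic result or order

# Example: a hyperkalemia signal that fires on an abnormally high serum potassium.
hyperkalemia = Signal(
    name="hyperkalemia",
    category="laboratory result",
    fires=lambda result: result["test"] == "potassium" and result["value"] > 6.0,
)

lab_result = {"patient_id": "123", "test": "potassium", "value": 6.4}  # mmol/L
if hyperkalemia.fires(lab_result):
    print(f"ALERT: {hyperkalemia.name} signal fired for patient {lab_result['patient_id']}")
```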

The most clinically significant medication-related problems are adverse drug events (ADEs). Various definitions have been proposed and used throughout the literature to describe ADEs; for this paper, we use the Institute of Medicine definition of ADEs as “injuries resulting from a medical intervention related to a drug.” 7–9 ADEs are common, occurring in 2.4–5.2 per 100 hospitalized adult patients. 10–13 A meta-analysis of fatal ADEs suggests that these events are between the fourth and sixth leading causes of death in the United States. 14 Each ADE is estimated to increase the length of hospital stay by 2.2 days and the hospital cost by $3,244. 15

Compared with manual methods of ADE detection (e.g., chart review or voluntary reporting), clinical event monitors are less expensive and faster, and they often identify ADEs not normally detected by clinicians during the course of routine hospital care. 16–19 Through the early detection and prevention of ADEs, clinical event monitors can improve the quality of care while reducing health care costs by as much as $760,000 per year in a teaching hospital. 20–23 Despite the potential benefits of clinical event monitors and the fact that several prominent national organizations have recommended their use to detect ADEs, 24,25 few health care systems have implemented them. 26 Moreover, those that have implemented them have done so in non-standardized ways that make it difficult to compare and synthesize the results. 9,27 This lack of generalizability in turn contributes to the suboptimal performance of hospitals in the U.S. health care system. 1

To begin to address these concerns and to help clinical information system and CDS producers develop, select, or improve systems to detect ADEs, we conducted a systematic review of individual pharmacy and laboratory signals that are currently used by clinical event monitors to detect ADEs in the adult hospital setting. When possible, we calculated the positive predictive values (PPVs) of individual signals.

Methods

Study Identification and Eligibility

Before we implemented our literature search, we established criteria for inclusion and exclusion of studies. We included studies that met the following four criteria: their results were published between January 1, 1985, and July 1, 2006; they described a clinical event monitoring system to detect ADEs in an adult hospital setting; they described laboratory or pharmacy ADE signals; and they provided PPVs or information to allow the calculation of PPVs for individual ADE signals. We excluded studies if they focused on ADE prevention rather than detection (e.g., if they focused on computerized physician order entry systems), as this has recently been reviewed elsewhere. 28 We also excluded studies if they described non-laboratory or non-pharmacy ADE signals, including signals to monitor physiologic data (e.g., blood pressure or heart rate) or administrative data (e.g., diagnostic or procedural codes [ICD-9 or CPT]), or if they described free-text search strategies to detect potential ADEs. Because of concerns that non–peer-reviewed data might introduce bias into our systematic review, 29,30 we also excluded studies in which data were presented as an abstract, poster presentation, or editorial.

Information Sources and Search Strategy

We searched OVID MEDLINE, OVID CINAHL, and EMBASE for articles published in all languages between January 1, 1985, and July 1, 2006. In OVID, we searched for the following medical subject headings (MeSH), keywords, and text words: adverse drug event, adverse drug reaction, adverse drug reaction reporting systems, clinical event monitor, clinical decision support systems, clinical laboratory information systems, clinical pharmacy information system, computer generated signals, decision support system, drug monitoring, medication errors, and physiologic monitoring. In EMBASE, we searched for the above terms plus the following EMTREE keywords: computer assisted drug therapy and drug surveillance program. We supplemented the computerized search by reviewing the reference lists of all articles selected for inclusion.

Study Selection, Data Extraction, and Review Criteria

Two reviewers (SMH and RLA) independently assessed each article for eligibility criteria, with adjudication by a third reviewer (JTH) in cases of disagreement. While reviewing each study that met the eligibility criteria, the same two authors (SMH and RLA) used standardized forms to independently extract and record: hospital characteristics (e.g., teaching or community hospital, number of beds); patient characteristics (e.g., number of patients included); the signals monitored by the hospitals; and data necessary to record or calculate positive predictive values. To collect the data necessary to calculate a PPV, we reviewed each signal in the individual included studies. For every signal in an included study, we recorded the number of times that the signal fired and the number of times that a health professional determined that the signal represented an ADE. Study authors were contacted by e-mail for data clarification when necessary.

Signals from each of the studies that met eligibility criteria were included and combined if they measured the same parameter (e.g., digoxin level, serum potassium level, or use of vitamin K) independent of the reference interval or dosage used in the particular study. Signals were then grouped into one of three categories: antidote signals (triggered by administration of medications given to counteract the effects of a poison, toxin, or other agent with toxic effects), medication level signals (triggered by elevated or supratherapeutic drug levels), and laboratory result signals (triggered by abnormal values in blood tests).

Quantitative Data Synthesis and Statistical Analysis

To calculate a study-specific PPV for each signal, we divided the number of times that a signal fired and an ADE was confirmed (i.e., the number of true-positives), by the number of times the signal fired with or without an ADE being confirmed (i.e., the sum of true-positives and false-positives). PPVs were chosen as the performance characteristic of interest since the majority of studies conducted a targeted verification of signal firings and did not include a corollary gold-standard measure, such as an independently conducted chart review looking for the presence of ADEs. As a result, the sensitivity and specificity of individual signals used to detect ADEs could not be calculated.
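As a minimal sketch of this computation, the snippet below calculates a study-specific PPV from hypothetical counts; the normal-approximation confidence interval is our own assumption for illustration, since the paper does not state which interval method was used.

```python
# Sketch of the study-specific PPV computation described above: true positives
# divided by all firings, with an illustrative 95% confidence interval
# (normal approximation; an assumption, not necessarily the paper's method).
import math

def ppv_with_ci(true_positives: int, total_firings: int, z: float = 1.96):
    ppv = true_positives / total_firings
    se = math.sqrt(ppv * (1 - ppv) / total_firings)
    return ppv, (max(0.0, ppv - z * se), min(1.0, ppv + z * se))

# Example: a signal fired 120 times and 15 firings were confirmed as ADEs.
ppv, (lo, hi) = ppv_with_ci(15, 120)
print(f"PPV = {ppv:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```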

To determine the appropriateness of computing a pooled PPV, we compared the individual study-specific PPVs using the chi-square test for homogeneity of proportions. 31 For those signals for which there was no evidence of heterogeneity (p > 0.05), we calculated an overall estimate of pooled PPVs and corresponding 95% confidence intervals (CIs). We used a generalized estimating equations (GEE) model to combine the PPVs for signals reported in at least two studies. This model included an exchangeable correlation structure to account for within-study correlation, using the total number of signal firings in each study as the weighting factor. 32–34 We also examined the sensitivity of the overall PPV estimates using a fixed effects model recommended in the meta-analytic literature. 35
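The sketch below illustrates the pooling logic under stated assumptions: it uses a chi-square test of homogeneity on the per-study true-positive/false-positive counts and a firings-weighted pooled proportion as a simplified stand-in for the GEE model with exchangeable correlation that the authors actually fit. The study counts are hypothetical.

```python
# Simplified sketch of the heterogeneity test and pooling step. The weighted
# proportion below stands in for the GEE model the authors used; counts are
# hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# (true positives, total firings) for one signal reported in three studies.
studies = [(12, 150), (9, 90), (20, 210)]

tp = np.array([s[0] for s in studies])
n = np.array([s[1] for s in studies])
fp = n - tp

# Homogeneity of proportions: rows = studies, columns = (TP, FP).
chi2, p_value, _, _ = chi2_contingency(np.column_stack([tp, fp]))

if p_value > 0.05:                      # no evidence of heterogeneity -> pool
    pooled_ppv = tp.sum() / n.sum()     # firings-weighted pooled estimate
    print(f"pooled PPV = {pooled_ppv:.2f} (homogeneity p = {p_value:.2f})")
else:
    print(f"heterogeneous (p = {p_value:.2f}); PPVs not pooled")
```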

To determine whether certain studies were heavily influencing the overall PPV estimate for each signal, we performed an influence analysis in which we excluded studies, one at a time, and reestimated the overall PPVs. We also examined the cumulative effect on the overall PPV estimate by adding studies, one at a time, ordered by year of publication and by hospital bed size. If there were any publication bias, it would most likely be caused by the greater probability of publication of studies with a larger number of firings or of studies with a smaller number of firings but a greater PPV. We examined this possibility by visually inspecting a scatter plot of the PPV and the square root of the number of signals (which is proportional to the reciprocal of the standard error) and testing for a significant linear trend between them. If we found a lack of data points near the origin or a statistically significant negative linear trend, we would consider it to be evidence of publication bias. 36 We conducted all statistical analyses with either SAS version 8.2 for Windows (SAS Institute, Inc., Cary, NC) or Stata version 9.0 for Windows (StataCorp, LP, College Station, TX).
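A rough sketch of the influence analysis and the publication-bias check might look like the following; the leave-one-out estimate uses a simple firings-weighted pooled PPV rather than the full GEE refit, and the study counts are hypothetical.

```python
# Sketch of the leave-one-study-out influence analysis and the publication-bias
# check (linear trend between PPV and the square root of the number of firings).
# Study counts are hypothetical; the pooled estimate is a simplified stand-in.
import math
from scipy.stats import linregress

studies = [(12, 150), (9, 90), (20, 210), (4, 35)]   # (true positives, firings)

# Leave-one-out influence analysis on the firings-weighted pooled PPV.
for i in range(len(studies)):
    kept = [s for j, s in enumerate(studies) if j != i]
    pooled = sum(tp for tp, _ in kept) / sum(n for _, n in kept)
    print(f"pooled PPV without study {i + 1}: {pooled:.2f}")

# Publication-bias check: linear trend between PPV and sqrt(number of firings).
ppvs = [tp / n for tp, n in studies]
sqrt_n = [math.sqrt(n) for _, n in studies]
slope, intercept, r, p, se = linregress(sqrt_n, ppvs)
print(f"slope = {slope:.3f}, p = {p:.3f}")   # a significant negative slope would suggest bias
```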

Results

Of the 6,649 titles initially identified, 4,243 were from MEDLINE, 859 were from CINAHL, and 1,547 were from EMBASE. After removing duplicates and completing the screening process (Figure 1), we identified 12 observational studies that met our eligibility criteria. 18, 37–48 Table 1 lists the 12 studies and the characteristics of the study sites. All but two of the studies were conducted in teaching hospitals.

Figure 1. Flow diagram of included and excluded studies.

Table 1. Characteristics of Studies Included in the Systematic Review

Author/Year/Reference | Study Site
Evans et al., 1991 37 | 500-bed tertiary teaching hospital
Azaz-Livshits et al., 1998 38 | 34-bed medical ward in a teaching hospital
Jha et al., 1998 39 | 726-bed tertiary teaching hospital
Raschke et al., 1998 46 | 650-bed community teaching hospital
Levy et al., 1999 18 | 34-bed medical ward in a teaching hospital
Dormann et al., 2000 41 | 9-bed medical ward in a teaching hospital
Brown et al., 2000 48 | 238-bed Veterans Administration Medical Center
Jha et al., 2001 42 | 726-bed tertiary care teaching hospital
Thuermann et al., 2002 43 | 86-bed neurology department in a teaching hospital
Dormann et al., 2004 44 | 29-bed gastroenterology ward in a teaching hospital
Silverman et al., 2004 47 | 726-bed tertiary care teaching hospital
Hartis et al., 2005 45 | 1,952 beds in six community hospitals

Of the 36 signals that we identified in two or more publications and included in our analysis, 7 were administrations of antidotes, 10 were supratherapeutic medication levels, and 19 were abnormal laboratory test results. Fifteen signals (three antidotes, eight laboratory tests, and four medication levels) showed no evidence of heterogeneity (p > 0.05) and were pooled to calculate overall PPVs and 95% CIs. Naloxone was not included because, of the 12 studies that met eligibility criteria, only one provided sufficient information about naloxone to calculate a PPV, 37 and a pooled PPV (our primary unit of analysis) cannot be calculated from a single study.

Of the antidote signals (Table 2), sodium polystyrene administration had the lowest pooled PPV, 0.09 (95% CI, 0.06–0.13), and metronidazole or vancomycin administration had the highest, 0.11 (95% CI, 0.06–0.20). Of the laboratory test result signals (Table 3), hypokalemia had the lowest pooled PPV, 0.03 (95% CI, 0.03–0.03), and hypoglycemia had the highest, 0.27 (95% CI, 0.27–0.27). Of the medication level signals (Table 4), cyclosporine had the lowest pooled PPV, 0.03 (95% CI, 0.02–0.06), and quinidine had the highest, 0.50 (95% CI, 0.39–0.61). Among the pooled signals, the antidote category had the lowest PPVs (range = 0.09–0.11), followed by the laboratory test result category (range = 0.03–0.27) and the medication level category (range = 0.03–0.50).

Table 2. Signals Associated with Antidotes

Signal | Number of Studies | PPV Range | p-value (Heterogeneity Test) | Overall Estimate of PPV† (95% CI) | Overall Estimate of PPV‡ (95% CI)
Vitamin K given | 3 | 0.02–0.30 | <0.01 | — | —
Activated charcoal given | 2 | 0.08–0.45 | 0.03 | — | —
Antihistamine (e.g., diphenhydramine or hydroxyzine) given | 3 | 0.03–0.14 | <0.01 | — | —
Oral metronidazole or vancomycin given | 2 | 0.07–0.16 | 0.06 | 0.11 (0.06–0.20) | 0.10 (0.06–0.14)
Antidiarrheal (e.g., loperamide, diphenoxylate, bismuth) given | 3 | 0–0.11 | 0.06 | 0.09 (0.07–0.13) | 0.07 (0.00–0.15)
Sodium polystyrene (Kayexalate®) given | 3 | 0.06–0.12 | 0.44 | 0.09 (0.06–0.13) | 0.08 (0.05–0.12)
Oral or topical steroids (e.g., prednisone, prednisolone) given | 2 | 0.04–0.09 | <0.01 | — | —

Naloxone not included as data were available from only a single study.
† PPV calculated using the GEE pooled estimate and CI.
‡ PPV calculated using the fixed effects pooled estimate and CI.
PPV = positive predictive value.

Table 3. Signals Associated with Laboratory Test Results

Signal | Number of Studies | PPV Range | p-value (Heterogeneity Test) | Overall Estimate of PPV† (95% CI) | Overall Estimate of PPV‡ (95% CI)
Serum creatinine elevated or increasing | 5 | 0.08–0.39 | <0.01 | — | —
Hypoglycemia (as indicated by low or decreasing glucose) | 2 | 0–0.33 | 0.49 | 0.27 (0.27–0.27) | 0.10 (0.00–0.27)
Hyperbilirubinemia (as indicated by high or increasing bilirubin) | 4 | 0.05–0.39 | <0.01 | — | —
Hyponatremia (as indicated by low or decreasing sodium) | 2 | 0.24–0.33 | 0.72 | 0.25 (0.23–0.28) | 0.25 (0.09–0.41)
Blood urea nitrogen (BUN) elevated or increasing | 3 | 0–0.30 | 0.41 | 0.22 (0.14–0.32) | 0.17 (0.08–0.26)
Eosinophilia (as indicated by high or increasing eosinophils) | 5 | 0–0.62 | <0.01 | — | —
Hyperkalemia (as indicated by high or increasing potassium) | 5 | 0–0.67 | <0.01 | — | —
Alanine aminotransferase (ALT) elevated or increasing | 3 | 0.12–0.38 | <0.01 | — | —
Anemia (as indicated by a low or decreasing hemoglobin/hematocrit) | 5 | 0.12–0.30 | 0.14 | 0.19 (0.12–0.29) | 0.16 (0.11–0.22)
Partial thromboplastin time (PTT) elevated or increasing | 3 | 0.04–0.92 | <0.01 | — | —
Gamma-glutamyl transferase (GGTP) elevated or increasing | 4 | 0.03–0.19 | 0.03 | — | —
Alkaline phosphatase (ALP) level elevated or increasing | 5 | 0–0.31 | <0.01 | — | —
Aspartate aminotransferase (AST) elevated or increasing | 4 | 0.01–0.23 | <0.01 | — | —
Agranulocytosis or leukopenia (as indicated by low or decreasing white blood cells) | 4 | 0.09–0.5 | 0.15 | 0.11 (0.07–0.17) | 0.10 (0.04–0.15)
International normalized ratio (INR) elevated or increasing | 4 | 0.05–1.0 | <0.01 | — | —
Lactate dehydrogenase (LDH) elevated or increasing | 3 | 0.02–0.17 | 0.06 | 0.06 (0.02–0.14) | 0.03 (0.00–0.06)
Thrombocytopenia (as indicated by low or decreasing platelets) | 4 | 0.03–0.12 | 0.01 | — | —
Hypocalcemia (as indicated by low or decreasing calcium) | 2 | 0–0.11 | 0.25 | 0.06 (0.02–0.18) | 0.02 (0.00–0.08)
Hypokalemia (as indicated by low or decreasing potassium) | 2 | 0–0.03 | 0.86 | 0.03 (0.03–0.03) | 0.03 (0.01–0.04)

† PPV calculated using the GEE pooled estimate and CI.
‡ PPV calculated using the fixed effects pooled estimate and CI.
PPV = positive predictive value.

Table 4. Signals Associated with Supratherapeutic Medication Levels

Signal | Number of Studies | PPV Range | p-value (Heterogeneity Test) | Overall Estimate of PPV† (95% CI) | Overall Estimate of PPV‡ (95% CI)
Quinidine | 2 | 0.43–0.60 | 0.56 | 0.50 (0.39–0.61) | 0.50 (0.22–0.78)
Phenobarbital | 3 | 0–1.0 | <0.01 | — | —
Theophylline trough | 5 | 0.25–1.0 | 0.01 | — | —
Vancomycin peak or trough levels | 3 | 0.18–0.33 | 0.31 | 0.26 (0.22–0.32) | 0.26 (0.20–0.32)
Procainamide | 3 | 0–0.42 | <0.01 | — | —
Lidocaine | 3 | 0.17–0.50 | 0.51 | 0.19 (0.17–0.21) | 0.18 (0.09–0.28)
Aminoglycoside antibiotic | 3 | 0.04–1.0 | <0.01 | — | —
Digoxin | 8 | 0.08–1.0 | <0.01 | — | —
Phenytoin | 7 | 0.07–1.0 | <0.01 | — | —
Cyclosporine | 2 | 0–0.04 | 0.29 | 0.03 (0.02–0.06) | 0.03 (0.00–0.06)

† PPV calculated using the GEE pooled estimate and CI.
‡ PPV calculated using the fixed effects pooled estimate and CI.
PPV = positive predictive value.

There were no meaningful differences in overall PPV estimates calculated with GEE models or fixed effects models. The influence analysis suggested that the removal of certain studies affected the PPVs for particular signals. For example, when the Evans et al. study was removed from the analysis of the signal for agranulocytosis or leukopenia, the pooled PPV increased from 0.11 to 0.23. 37 Similarly, when the Thuermann et al. study was removed from the analysis of the anemia signal, the PPV increased from 0.19 to 0.26. 43 No effects on the overall PPV estimates were noted when studies were added in order of publication year or bed size.

Some evidence of publication bias was found for the agranulocytosis or leukopenia signal. Specifically, a significant negative association between the number of firings and the PPV (p < 0.05) suggested that smaller studies with lower PPVs may not have been published and may therefore have eluded our systematic review. For the remaining signals, we found no evidence of publication bias.

Discussion

This systematic review analyzed the performance characteristics of individual pharmacy and laboratory signals that are currently used by clinical event monitors to detect ADEs in the adult hospital setting. Our review of the PPVs of 36 signals from 12 studies published between 1985 and 2006 revealed two important findings.

First, there was evidence of significant between-study heterogeneity for the majority of signals, limiting our ability to pool the PPVs of signals across studies. Of the 36 signals identified in two or more publications, 21 showed evidence of heterogeneity and therefore could not be pooled to calculate overall PPVs. There are at least two plausible explanations for this heterogeneity: the use of different reference intervals for therapeutic medication levels and laboratory values across studies, and differences in hospital and/or patient characteristics that affect the underlying prevalence of ADEs. The latter is particularly important because PPVs are, by definition, affected by the underlying prevalence of the condition of interest.
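A small numerical illustration of this prevalence dependence is shown below; the sensitivity and specificity values are purely illustrative and are not taken from any of the included studies.

```python
# Illustration of the point above: for a signal with fixed sensitivity and
# specificity, the PPV rises and falls with the underlying ADE prevalence.
# The sensitivity/specificity values are purely illustrative assumptions.
def ppv_from_prevalence(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.02, 0.05, 0.10):   # hypothetical ADE prevalence among monitored patients
    print(f"prevalence {prev:.2f} -> PPV {ppv_from_prevalence(0.80, 0.90, prev):.2f}")
```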

The second important finding was that there was significant variability in the PPVs of individual signals, both across studies and within signal categories (i.e., antidotes, medication levels, and laboratory test results). The overall PPV estimates for the 15 pooled signals ranged from 0.03 for hypokalemia to 0.50 for a supratherapeutic quinidine level. Moreover, antidotes had the lowest PPVs, followed by laboratory test results and medication levels. It is not surprising that PPVs were highest for medication levels: for this category of signal, the prior odds of an ADE are increased, since the underlying assumption is that patients are already receiving the medication of interest and their prescribing clinicians are aware of the possibility of an ADE. 49 In contrast, the other two categories of signals would not necessarily be expected to be associated with an ADE. Laboratory values are often abnormal because of the onset or worsening of medical conditions unrelated to medication use. Likewise, the majority of antidotes analyzed in our study can be used to treat multiple medical conditions, only a fraction of which are related to the presence of an ADE.

Limitations and Strengths

Our systematic review has several limitations that deserve mention. First, systematic reviews of effect sizes often limit their selection of studies to those involving randomized controlled trials (RCTs). 50 However, analyzing RCTs is not always feasible or preferable for evaluating the performance characteristics of individual signals used to detect ADEs. 51,52 Because we did not limit our systematic review to RCTs, we were not able to apply instruments commonly used to assess the quality of RCTs. 53,54 Second, although we found 12 studies that could be included in the overall analysis, we found few studies that covered each ADE signal. This may have limited our ability to identify the dependence of overall PPVs on factors such as facility bed size and to detect publication bias, a problem to which all systematic reviews are susceptible. 55,56 Third, our analysis focused on data that are widely available in electronic format (such as laboratory and pharmacy information) and was thus biased against data that cannot readily be processed by computer. It also excluded some sources of electronic data available to enhance ADE detection, such as administrative data (e.g., ICD-9 and CPT codes), allergy rules, and free-text searching of clinician progress and discharge notes. 57

Despite these limitations, we believe that our results are important and represent the most comprehensive information available on the performance characteristics of ADE signals in the adult hospital setting. Our analysis employed the “best practice” methods recommended for conducting systematic reviews of the literature. 50 Moreover, in keeping with suggestions of the Roadmap for National Action on Clinical Decision Support, the study was designed to capture, organize, and assess studies available internationally. 1

Implications

While the benefits of health information technology are clear, at least in theory, adapting information systems to health care has proven difficult, partly because there are so many non-standardized and independent approaches to creating and representing clinical knowledge and CDS systems. 58,59 In this regard, our systematic review may provide a foundation for and influence the future design and implementation of computerized decision support systems used to detect ADEs in the hospital setting. Having comprehensive information on the performance characteristics of individual signals may help hospitals prioritize the signals to be included in their systems to maximize the detection of ADEs and to minimize the number of false-positive alerts (i.e., alert burden), which is a growing problem. 60,61 To further reduce false-positive alerts, investigators have also begun to integrate data from multiple sources, including pharmacy, laboratory, and demographic data. 62,63 Taking the false-positive rate into account is especially important when large-scale information systems are being developed, since as many as 30% of information system projects fail and a significantly larger number have cost overruns. 64

The fact that many of the signals to detect ADEs have relatively low PPVs should not impede the adoption of clinical event monitors. 65 In many respects, the monitors can be treated as a type of screening test that allows for early ADE identification and intervention, and thereby reduces morbidity and mortality rates. 66 Indeed, the monitors have been shown to detect ADEs not normally detected by clinicians during the course of routine care, and to decrease the length of time until diagnosis and treatment. 18,19,67 Screening tests such as fecal occult blood testing to detect colorectal cancer are recommended despite having PPVs that range from 0.02 to 0.18 in adults over 50 years old, and are thus similar to the ranges of some signals described in our study. 68

Recommendations for Future Work

Additional studies are needed to improve the performance characteristics of individual ADE signals and CDS systems, apply these systems to other clinical environments, develop interoperable systems, and perform economic analyses of these systems. Studies have suggested that ADE detection rates can be improved by combining multiple data sources and by better understanding the context of the data as they relate to patients’ underlying medical conditions. 69–72 Investigators have begun to use clinical decision support systems to detect ADEs in other clinical care settings, such as ambulatory care clinics and nursing homes. 57,73–75 These systems may be particularly useful in the nursing home setting, where patients are frail, have multiple comorbid medical conditions, and take more medications per patient than in any other clinical setting. 74,76,77 Since most systems lack standardized methods to export or share ADE algorithms, additional studies are required to develop interoperable systems. 78,79 Additional cost-benefit and cost-effectiveness studies are needed not only to determine the rational selection, optimal use, and potential success of systems used to detect ADEs, but also to determine the costs of developing and maintaining the systems and of responding to true-positive and false-positive alerts.

Conclusions

Our systematic review provides the PPVs of pharmacy and laboratory signals used to detect ADEs in the adult hospital setting, and suggests that the PPVs of individual signals vary widely. Our findings should help clinical information system and clinical decision support producers create and modify clinical decision support systems to detect ADEs in their own institutions. Future studies are needed to improve the performance characteristics of individual ADE signals and CDS systems, apply these systems to other clinical environments, develop interoperable systems, and perform economic analyses of the systems.

Footnotes

This study was supported in part by NIH grants K12 HD049109 (NIH Roadmap Multidisciplinary Clinical Research Career Development Award Grant), 5T32AG021885, P30AG024827, and R01AG027017, and by a Merck/AFAR Junior Investigator Award in Geriatric Clinical Pharmacology.

The authors thank Alice B. Kuller, MLS, for her help in conducting the literature search for this systematic review.

References

1. Osheroff J, Teich J, Middleton B, Steen E, Wright A, Detmer D. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007;14(2):141-145.
2. Hripcsak G, Clayton PD, Jenders RA, Cimino JJ, Johnson SB. Design of a clinical event monitor. Comp Biomed Res 1996;29(3):194-221.
3. Rind DM, Safran C, Phillips RS, et al. Effect of computer-based alerts on the treatment and outcomes of hospitalized patients. Arch Int Med 1994;154(13):1511-1517.
4. Haug PJ, Gardner RM, Tate KE, et al. Decision support in medicine: examples from the HELP system. Comp Biomed Res 1994;27(5):396-418.
5. Haug PJ, Rocha BH, Evans RS. Decision support in medicine: lessons from the HELP system. Int J Med Inform 2003;69(2–3):273-284.
6. Bates DW, Evans RS, Murff H, Stetson PD, Pizziferri L, Hripcsak G. Detecting adverse events using information technology. J Am Med Inform Assoc 2003;10(2):115-128.
7. Kohn L, Corrigan J, Donaldson M. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
8. Handler SM, Wright RM, Ruby CM, Hanlon JT. Epidemiology of medication-related adverse events in nursing homes. Am J Ger Pharma 2006;4(3):264-272.
9. Institute of Medicine. Preventing Medication Errors. Washington, DC: National Academy Press; 2006.
10. Classen DC, Pestotnik SL, Evans RS, Lloyd JF, Burke JP. Adverse drug events in hospitalized patients: excess length of stay, extra costs, and attributable mortality. JAMA 1997;277(4):301-306.
11. Senst BL, Achusim LE, Genest RP, et al. Practical approach to determining costs and frequency of adverse drug events in a health care network. Am J Health-Syst Pharm 2001;58(12):1126-1132.
12. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events: implications for prevention. ADE Prevention Study Group. JAMA 1995;274(1):29-34.
13. Nebeker JR, Hoffman JM, Weir CR, Bennett CL, Hurdle JF. High rates of adverse drug events in a highly computerized hospital. Arch Int Med 2005;165(10):1111-1116.
14. Lazarou J, Pomeranz BH, Corey PN. Incidence of adverse drug reactions in hospitalized patients: a meta-analysis of prospective studies. JAMA 1998;279(15):1200-1205.
15. Bates D, Spell N, Cullen DJ, et al. The costs of adverse drug events in hospitalized patients. JAMA 1997;277(4):307-311.
16. Cullen DJ, Bates DW, Small SD, Cooper JB, Nemeskal AR, Leape LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv 1995;21(10):541-548.
17. Gandhi TK, Seger DL, Bates DW. Identifying drug safety issues: from research to practice. Int J Qual Health Care 2000;12(1):69-76.
18. Levy M, Azaz-Livshits T, Sadan B, Shalit M, Geisslinger G, Brune K. Computerized surveillance of adverse drug reactions in hospital: implementation. Eur J Clin Pharm 1999;54(11):887-892.
19. Tegeder I, Levy M, Muth-Selbach U, et al. Retrospective analysis of the frequency and recognition of adverse drug reactions by means of automatically recorded laboratory signals. Br J Clin Pharm 1999;47(5):557-564.
20. Classen DC, Pestotnik SL, Evans RS, Burke JP. Description of a computerized adverse drug event monitor using a hospital information system. Hosp Pharm 1992;27(9):774, 776-9, 783.
21. Payne TH, Savarino J. Development of a clinical event monitor for use with the Veterans Affairs Computerized Patient Record System and other data sources. Proc AMIA Annu Symp 1998:145-149.
22. Evans RS, Pestotnik SL, Classen DC, Bass SB, Burke JP. Prevention of adverse drug events through computerized surveillance. Proc Annu Symp Comp Appl Med Care 1992:437-441.
23. Kaushal R, Jha AK, Franz C, et al. Return on investment for a computerized physician order entry system. J Am Med Inform Assoc 2006;13(3):261-266.
24. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; 2001.
25. Institute of Medicine. Patient Safety: Achieving a New Standard of Care. Washington, DC: The National Academy Press; 2004.
26. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Int Med 2006;144(10):742-752.
27. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
28. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14:29-40.
29. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999;282(11):1054-1060.
30. Huwiler-Muntener K, Juni P, Junker C, Egger M. Quality of reporting of randomized trials as a measure of methodologic quality. JAMA 2002;287(21):2801-2804.
31. Fleiss JL. The statistical basis of meta-analysis. Stat Meth Med Res 1993;2(2):121-145.
32. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika 1986;73:13-22.
33. Lipsitz SR, Fitzmaurice GM, Orav EJ, Laird NM. Performance of generalized estimating equations in practical situations. Biometrics 1994;50(1):270-278.
34. Diggle P, Heagerty P, Liang K, Zeger S. Analysis of Longitudinal Data. 2nd ed. New York, NY: Oxford University Press; 2002.
35. Sutton A, Abrams K, Jones D, Sheldon T, Song F. Methods for Meta-Analysis in Medical Research. Sussex, England: John Wiley and Sons; 2000.
36. Song F, Khan KS, Dinnes J, Sutton AJ. Asymmetric funnel plots and publication bias in meta-analyses of diagnostic accuracy. Int J Epidemiol 2002;31(1):88-95.
37. Evans RS, Pestotnik SL, Classen DC, et al. Development of a computerized adverse drug event monitor. Proc Annu Symp Comp Appl Med Care 1991:23-27.
38. Azaz-Livshits T, Levy M, Sadan B, Shalit M, Geisslinger G, Brune K. Computerized surveillance of adverse drug reactions in hospital: pilot study. Br J Clin Pharmacol 1998;45(3):309-314.
39. Jha AK, Kuperman GJ, Teich JM, et al. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc 1998;5(3):305-314.
40. Henz S. In: James B, editor. Improved ADR Detection without Using a Computer-Assisted Alert System. Salt Lake City, Utah: Institute of Healthcare Delivery Research; 2000.
41. Dormann H, Muth-Selbach U, Krebs S, et al. Incidence and costs of adverse drug reactions during hospitalisation: computerised monitoring versus stimulated spontaneous reporting. Drug Safety 2000;22(2):161-168.
42. Jha AK, Kuperman GJ, Rittenberg E, Teich JM, Bates DW. Identifying hospital admissions due to adverse drug events using a computer-based monitor. Pharmacoepidem Drug Safety 2001;10(2):113-119.
43. Thuermann PA, Windecker R, Steffen J, et al. Detection of adverse drug reactions in a neurological department: comparison between intensified surveillance and a computer-assisted approach. Drug Safety 2002;25(10):713-724.
44. Dormann H, Criegee-Rieck M, Neubert A, et al. Implementation of a computer-assisted monitoring system for the detection of adverse drug reactions in gastroenterology. Aliment Pharm Ther 2004;19(3):303-309.
45. Hartis CE, Gum MO, Lederer JW Jr. Use of specific indicators to detect warfarin-related adverse events. Am J Health-Syst Pharm 2005;62(16):1683-1688.
46. Raschke RA, Gollihare B, Wunderlich TA, et al. A computer alert system to prevent injury from adverse drug events: development and evaluation in a community teaching hospital. JAMA 1998;280(15):1317-1320.
47. Silverman JB, Stapinski CD, Huber C, Gandhi TK, Churchill WW. Computer-based system for preventing adverse drug events. Am J Health-Syst Pharm 2004;61(15):1599-1603.
48. Brown S, Black K, Mrochek S, et al. RADARx: Recognizing, Assessing, and Documenting Adverse Rx events. Proc AMIA Annu Symp 2000:101-105.
49. Goroll AH, Mulley AG. Primary Care Medicine: Office Evaluation and Management of the Adult Patient. 5th ed. Philadelphia, PA: Lippincott Williams and Wilkins; 2006.
50. Owens DK, Nease RF Jr. A normative analytic framework for development of practice guidelines for specific clinical populations. Med Decis Making 1997;17(4):409-426.
51. Burkle T, Ammenwerth E, Prokosch HU, Dudeck J. Evaluation of clinical information systems: what can be evaluated and what cannot? J Eval Clin Prac 2001;7(4):373-385.
52. Poissant L, Pereira J, Tamblyn R, Kawasumi Y. The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 2005;12(5):505-516.
53. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Contr Clin Trials 1996;17(1):1-12.
54. Cook DJ, Sackett DL, Spitzer WO. Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam Consultation on Meta-Analysis. J Clin Epidem 1995;48(1):167-171.
55. Irwig L, Tosteson AN, Gatsonis C, et al. Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med 1994;120(8):667-676.
56. Irwig L, Macaskill P, Glasziou P, Fahey M. Meta-analytic methods for diagnostic test accuracy. J Clin Epidem 1995;48(1):119-130; discussion 131-2.
57. Honigman B, Lee J, Rothschild J, et al. Using computerized data to identify adverse drug events in outpatients. J Am Med Inform Assoc 2001;8(3):254-266.
58. Ash JS, Gorman PN, Seshadri V, Hersh WR. Computerized physician order entry in U.S. hospitals: results of a 2002 survey. J Am Med Inform Assoc 2004;11(2):95-99.
59. Kaushal R, Bates DW, Poon EG, Jha AK, Blumenthal D, Harvard Interfaculty Program for Health Systems Improvement NWG. Functional gaps in attaining a national health information network: what will it take to get there in five years? Health Affairs 2005;24(5):1281-1289.
60. Judge J, Field TS, DeFlorio M, et al. Prescribers’ responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc 2006;13(4):385-390.
61. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006;13(2):138-147.
62. Kilbridge PM, Alexander L, Ahmad A. Implementation of a system for computerized adverse drug event surveillance and intervention at an academic medical center. J Clin Out Manag 2006;13(2):94-100.
63. Kilbridge PM, Campbell UC, Cozart HB, Mojarrad MG. Automated surveillance for adverse drug events at a community hospital and an academic medical center. J Am Med Inform Assoc 2006;13(4):372-377.
64. Southon FC, Sauer C, Grant CN. Information technology in complex health services: organizational impediments to successful technology transfer and diffusion. J Am Med Inform Assoc 1997;4(2):112-124.
65. Bates DW, Evans RS, Murff H, Stetson PD, Pizziferri L, Hripcsak G. Policy and the future of adverse event detection using information technology. J Am Med Inform Assoc 2003;10(2):226-228.
66. Aschengrau A, Seage GR. Essentials of Epidemiology in Public Health. Sudbury, MA: Jones and Bartlett Publishers; 2003.
67. Kuperman GJ, Teich JM, Tanasijevic MJ, et al. Improving response to critical laboratory results with automation: results of a randomized controlled trial. J Am Med Inform Assoc 1999;6(6):512-522.
68. Pignone M, Rich M, Teutsch SM, Berg AO, Lohr KN. Screening for colorectal cancer in adults at average risk: a summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med 2002;137(2):132-141.
69. Schiff GD, Klass D, Peterson J, Shah G, Bates DW. Linking laboratory and pharmacy: opportunities for reducing errors and improving care. Arch Intern Med 2003;163(8):893-900.
70. Wick JY, Zanni GR. Integrating pharmacy and laboratory systems: quality improvement implications. Cons Pharm 2000;15(10):1009-10, 1013-4, 1016-7, 1021-3.
71. Becich MJ, Gilbertson JR, Gupta D, Patel A, Grzybicki DM, Raab SS. Pathology and patient safety: the critical role of pathology informatics in error reduction and quality initiatives. Clin Lab Med 2004;24(4):913-943.
72. Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001;8(4):299-308.
73. Gurwitz JH, Field TS, Harrold LR, et al. Incidence and preventability of adverse drug events among older persons in the ambulatory setting. JAMA 2003;289(9):1107-1116.
74. Gurwitz JH, Field TS, Judge J, et al. The incidence of adverse drug events in two large academic long-term care facilities. Am J Med 2005;118(3):251-258.
75. Honigman B, Light P, Pulling RM, Bates DW. A computerized method for identifying incidents associated with adverse drug events in outpatients. Int J Med Inform 2001;61(1):21-32.
76. Doshi JA, Shaffer T, Briesacher BA. National estimates of medication use in nursing homes: findings from the 1997 Medicare Current Beneficiary Survey and the 1996 Medical Expenditure Survey. J Am Ger Soc 2005;53(3):438-443.
77. Guay DR, Artz MB, Hanlon JT, Schmader KE. The pharmacology of aging. In: Tallis RC, Fillit HM, editors. Brocklehurst’s Textbook of Geriatric Medicine and Gerontology. New York, NY: Churchill Livingstone; 2003. pp. 155-161.
78. Bakken S, Campbell KE, Cimino JJ, Huff SM, Hammond WE. Toward vocabulary domain specifications for health level 7-coded data elements. J Am Med Inform Assoc 2000;7(4):333-342.
79. Kawamoto K, Lobach DF. Proposal for fulfilling strategic objectives of the U.S. Roadmap for National Action on Decision Support through a service-oriented architecture leveraging HL7 services. J Am Med Inform Assoc 2007;14(2):146-155.
