Quality & Safety in Health Care
Editorial
2006 Jun;15(3):148–149. doi: 10.1136/qshc.2006.018218

Automated surveillance for adverse events in hospitalized patients: back to the future

P M Kilbridge 1,2, D C Classen 1,2
PMCID: PMC2464863  PMID: 16751458

Short abstract

Only by studying the true nature and frequency of adverse events through effective surveillance approaches can patient safety interventions be formulated, implemented, and properly evaluated for efficacy

Keywords: adverse events, automated surveillance, electronic triggers


The goal of patient safety efforts is to reduce the harm we do to our patients while providing them with the care they need, and recognizing the true nature and sources of harm is critical to this endeavor. The paper by Szekendi et al1 in this issue of QSHC describes a return to automated methods for detecting adverse events, and provides an opportunity to review the evolution of adverse event detection as well as the challenges associated with different models.

First, however, we must emphasize why some form of surveillance for detection of harm to patients is indispensable to modern patient safety practices: it allows us to overcome the serious defects associated with dependence upon spontaneous reporting as a method for detecting adverse events. While such reporting can play an important role in supporting a culture of safety—for example, encouraging the candid discussion of errors—it is by its nature anecdotal and superficial. In addition to the obvious barriers to reporting (time constraints, fear of retribution, liability concerns), we know that most events causing harm to patients are not even recognized as such by clinicians at the time they occur.2 Thus, voluntary reporting describes a small—and by no means representative—minority of the universe of harm to our patients. It is useless for the quantitative study of adverse events, and is not reliable either as an indicator of the principal sources of harm or as a measure by which to assess improvement.

Automated surveillance for adverse drug events was first demonstrated on a large scale in the early 1990s by Classen et al at LDS Hospital;3 this methodology was refined and extended by investigators at Harvard4 and Duke.5 These groups used rules based computer systems to identify combinations of clinical data (antidotes, toxic drug levels, drug‐laboratory combinations, etc) that suggest that a patient has suffered or is suffering an adverse drug event. Each computer alert is then evaluated within 24 hours by a medication safety pharmacist to determine whether it represents a true adverse drug event or an event in progress; the latter provides an opportunity to intervene and ameliorate harm to the patient. These investigators used proven scoring methodologies for determining event causality, and demonstrated the ability to detect adverse drug events reliably and reproducibly at rates 4–10 times that of voluntary reporting. In recent years others have applied the principles of automated surveillance to events beyond adverse drug events—for example, using various technologies to search text documents such as discharge summaries for key words suggestive of adverse events.6,7
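To make the rules based approach concrete, the sketch below shows, in simplified Python, how a surveillance engine of this kind might scan electronic pharmacy and laboratory data for combinations suggestive of an adverse drug event and queue alerts for pharmacist review. It is an illustrative sketch only: the rule names, thresholds, and data structures are assumptions made for the example, not the logic of the LDS Hospital, Harvard, or Duke systems.

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal, hypothetical illustration of rule-based adverse drug event (ADE)
# surveillance: each rule inspects electronically available pharmacy and
# laboratory data and, when a suggestive combination is found, emits an alert
# for review by a medication safety pharmacist. Drug names and thresholds are
# illustrative only, not clinical guidance.

@dataclass
class MedicationOrder:
    patient_id: str
    drug: str
    ordered_at: datetime

@dataclass
class LabResult:
    patient_id: str
    test: str
    value: float
    resulted_at: datetime

def antidote_rule(orders):
    """Flag administration of a reversal agent (e.g. naloxone) as a possible ADE."""
    antidotes = {"naloxone", "flumazenil", "protamine"}
    return [f"Possible ADE: {o.drug} given to patient {o.patient_id}"
            for o in orders if o.drug.lower() in antidotes]

def drug_lab_rule(orders, labs):
    """Flag a drug-laboratory combination: warfarin on profile with INR above 6."""
    on_warfarin = {o.patient_id for o in orders if o.drug.lower() == "warfarin"}
    return [f"Possible ADE: INR {r.value} in patient {r.patient_id} on warfarin"
            for r in labs
            if r.test == "INR" and r.value > 6.0 and r.patient_id in on_warfarin]

def run_surveillance(orders, labs):
    """Apply all rules and return alerts queued for pharmacist evaluation."""
    return antidote_rule(orders) + drug_lab_rule(orders, labs)
```

In a production system each alert would carry the underlying data and a timestamp so that the reviewing pharmacist can apply a causality score and, for events in progress, intervene.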

Automated surveillance using this model has three significant difficulties that have limited its usefulness and broad adoption. Firstly, many hospitals lack the technical knowledge and resources to build the sophisticated, rules based computer systems needed to operate comprehensive surveillance; as yet, these capabilities are not available in most commercial systems. Secondly, automated surveillance depends upon the availability in electronic form of data suggestive of an adverse event. The general availability of inpatient pharmacy and laboratory data in electronic form made possible the early work in surveillance of adverse drug events in hospitalized patients. While these systems detect certain types of adverse events very effectively, other event types for which electronic trigger data do not exist are not detected. Finally, perhaps the greatest limitation of comprehensive surveillance is the significant investment in resources required to evaluate the computer alerts. While the time requirements described by Szekendi et al1 (35–45 minutes each) are extraordinary (at Duke our investigators evaluate each alert in less than 5 minutes on average), there is no question that alert evaluation is time consuming and requires an ongoing resource commitment.

Recognizing these limitations as well as the value of the surveillance approach, a number of investigators have in recent years developed modified “trigger” methodologies based on the data types and methods of automated surveillance.8,9 These tools permit any hospital to conduct a focused, explicit chart review based evaluation of safety in a small sample of its patient population. While one loses the ability to perform comprehensive surveillance of all hospitalized patients and the opportunity to intervene to prevent harm outside the small population sampled, one gains the ability to use data types (such as handwritten progress notes) that are not easily adapted to computerized detection. These tools can therefore increase the sensitivity of event detection relative to automated systems. Szekendi et al1 have “reverse engineered” these trigger methodologies—automating the easily computerized manual triggers—showing once again that the use of electronically available flags suggestive of adverse events can effectively identify them.
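As a rough illustration of this “reverse engineering”, the sketch below partitions a hypothetical trigger list into triggers that can be fired automatically from electronic data and those that still require manual chart review. The trigger names and data sources are invented for the example; they are not the set used by Szekendi et al or by the trigger tool authors.

```python
# Hypothetical sketch: separating chart-review triggers into those computable
# from electronic data (and hence automatable) and those requiring manual
# review of the record. Trigger names and sources are illustrative assumptions.

TRIGGERS = [
    {"name": "naloxone administered",        "source": "pharmacy",   "automatable": True},
    {"name": "INR greater than 6",           "source": "laboratory", "automatable": True},
    {"name": "abrupt medication stop order",  "source": "pharmacy",   "automatable": True},
    {"name": "over-sedation noted in notes", "source": "handwritten progress note", "automatable": False},
    {"name": "rash attributed to a drug",    "source": "handwritten progress note", "automatable": False},
]

def automated_triggers(triggers):
    """Return the subset of triggers an electronic surveillance system can fire."""
    return [t for t in triggers if t["automatable"]]

def manual_triggers(triggers):
    """Return the subset still requiring explicit chart review of a patient sample."""
    return [t for t in triggers if not t["automatable"]]

if __name__ == "__main__":
    print("Automated:", [t["name"] for t in automated_triggers(TRIGGERS)])
    print("Chart review:", [t["name"] for t in manual_triggers(TRIGGERS)])
```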

Clearly, more study and innovation are required in this area, but we can speculate on what the future might hold. Certainly no one strategy will fit all environments. Some hospitals will be able to afford the investment needed to build and operate automated surveillance systems; some will restrict their efforts to chart based methods; others may apply hybrid strategies using automation for some event types or environments and chart review for others. In the area of manual methodologies, investigators with the Institute for Healthcare Improvement are building a series of chart review based trigger tools for detection of adverse events in various care settings including the intensive care unit, labor and delivery, the emergency room, and surgical environments. This work has culminated in the development of a more comprehensive method for detecting adverse events called the global trigger tool. Increasing computerization of care processes—for example, the growing use of systems for electronic clinical documentation, medication administration documentation, and others—should improve the yield of automated surveillance by offering new data sources for event detection. Vendors of electronic medical record systems are under pressure to build systems with better decision support mechanisms,10 which should lower the barriers to implementation of rules based detection systems. As hospitals learn more about the costs and risks associated with adverse events, and as regulators and other groups demand greater accountability for patient safety, we may see an increased willingness on the part of hospitals to invest in the resources needed to take full advantage of our increasingly sophisticated clinical information systems.

Indeed, implementing and maintaining adverse event surveillance systems is useful only if there exists an interested and motivated executive audience for the data, and many in healthcare delivery organizations are not eager to know their rates of adverse events unless they can immediately be offered a definitive strategy for reducing them. While this reluctance may be understandable, it is only by studying the nature and frequency of these events that effective improvement strategies can be formulated, implemented, and evaluated. Otherwise, hospitals will remain limited to generic improvement strategies aimed at what we can only guess are the most pressing problems, with no hope of ever really knowing whether the time and resources committed have made a difference to patient safety.

Footnotes

DCC is an employee of FCG, a technology services company, and has an interest in Theradoc, a medical software company.

References

1. Szekendi MK, Sullivan C, Bobb A, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care 2006;15:184–190.
2. Cullen DJ, Bates DW, Small SD, et al. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv 1995;21:541–548.
3. Classen DC, Pestotnik SL, Evans RS, et al. Computerized surveillance of adverse drug events in hospital patients. JAMA 1991;266:2847–2851.
4. Jha AK, Kuperman GJ, Teich JM, et al. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc 1998;5:305–314.
5. Kilbridge PM, Alexander L, Ahmad A. Implementation of a system for computerized adverse drug event surveillance and intervention at an academic medical center. J Clin Outcomes Manage 2006;13:94–100.
6. Forster AJ, Andrade J, van Walraven C. Validation of a discharge summary term search method to detect adverse events. J Am Med Inform Assoc 2005;12:200–206.
7. Melton GB, Hripcsak G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005;12:448–457.
8. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care 2003;12(Suppl II):ii39–ii45.
9. Rozich JD, Haraden CR, Resar RK. Adverse drug event trigger tool: a practical methodology for measuring medication related harm. Qual Saf Health Care 2003;12:194–200.
10. Kilbridge PM, Welebob EM, Classen DC. Development of the Leapfrog methodology for evaluating hospital implemented inpatient computerized physician order entry systems. Qual Saf Health Care 2006;15:81–84.
