Two outstanding articles in this issue of the Journal of the American Medical Informatics Association (JAMIA) provide exciting quantitative findings about how adverse drug events (ADEs) are detected and measured and might be prevented using computerized methods.1,2 The articles use different populations and methods yet are complementary in their findings and in their suggestions of methods for preventing ADEs.
In a 2003 Journal of the American Medical Association publication, Gurwitz et al.3 reported their measurement of the incidence and preventability of ADEs among older people in the ambulatory setting and found that ADEs in this population are common and often preventable. Then, in the current issue of JAMIA, Field et al. evaluate strategies to better detect ADEs among older people in the ambulatory setting.1 They used multiple signals to detect ADEs, including computer-generated signals, and found that computer-generated signals identified 31% of the ADEs detected and 37% of the preventable ADEs. They also found that voluntary reporting of ADEs by health care providers was inadequate and that multiple strategies for detecting and preventing ADEs are needed.
The paper by Hsieh et al.2 reports a study of the characteristics of drug-allergy alert overrides and how often these overrides lead to preventable ADEs. They found that clinicians overrode drug-allergy alerts approximately 80% of the time and that most of the overrides were clinically justifiable. Based on their findings, they made specific recommendations for increasing the specificity of their computerized drug-allergy alerting. They further noted that similar analyses should be performed to refine and improve the utility of other medication-related decision support, including drug–drug interaction and drug–laboratory checking.
It has been about three decades since similar computerized medication monitoring systems were initiated using the Health Evaluation Through Logical Processing (HELP) system with the inpatient pharmacy at LDS Hospital in Salt Lake City.4,5 The system at that time was based entirely on a “local” set of rules executed by the HELP system. Alert feedback was presented to pharmacists when they entered the physicians' medication orders into the HELP system. The alerts were classified as informational or action oriented. The action-oriented alerts were triggered based on drug–drug, drug–allergy, drug–laboratory, and other criteria. The pharmacist then screened the alerts and contacted the ordering physicians about the relevant alerts.4,5,6 Therapeutic changes were made in more than 75% of the situations in which action-oriented alerts were presented.5 More recently, Classen et al.6 developed methods for better detection of ADEs at LDS Hospital. Their work substantiated the burden of ADEs by documenting that they increased the length of hospital stay, increased economic costs, and resulted in an almost twofold increased risk of death.6 In addition, an assessment of physician acceptance of the HELP clinical expert system showed that alerting for important laboratory findings and medication monitoring were the two most helpful decision support applications.7
Since the early work at LDS Hospital, the complexity of knowledge about medication alerting has grown dramatically. Several commercial “knowledge vendors,” including First DataBank, Medi-span, and Multum, have come into the marketplace. These vendors have taken on the task of “knowledge engineering” for medication monitoring for many of the computerized systems now in operation. Although they provide an excellent source of “knowledge,” the vendors tend to be “totally inclusive,” presenting a warning or “alert” for every situation that has ever been reported or of which they are aware; we suspect they have taken this conservative position for medical and legal reasons. Our experience, and that presented in the two papers in this issue of JAMIA, suggests that such broad “alerting” produces a high number of false-positive alerts, which can lead clinicians to ignore potentially important alerts. To help minimize this overalerting problem, most clinical sites “customize” the alerts locally. This customization process is complex and time-consuming and requires obtaining consensus from the medical staff and the pharmacy. As a consequence, alerts are not standardized across the country or even across some health care enterprises.
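To make this discussion concrete, the kind of rule-based screening and local customization described above might look, in greatly simplified form, like the following sketch. The drugs, rules, severity categories, and suppression list are hypothetical illustrations, not the HELP system's or any vendor's actual content.

```python
# Minimal illustration of rule-based medication alerting with site-level
# "customization" (local suppression of low-value alerts).
# All drugs, rules, and categories below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Alert:
    rule_id: str
    message: str
    category: str  # "action" (contact prescriber) or "informational"

# A tiny "vendor-style" rule set: broadly inclusive drug-allergy and drug-drug checks.
VENDOR_RULES = [
    {"id": "DA-001", "type": "drug-allergy",
     "drug": "penicillin", "allergy": "penicillin",
     "category": "action",
     "message": "Documented penicillin allergy."},
    {"id": "DD-014", "type": "drug-drug",
     "drugs": {"warfarin", "aspirin"},
     "category": "action",
     "message": "Increased bleeding risk with warfarin + aspirin."},
    {"id": "DD-250", "type": "drug-drug",
     "drugs": {"lisinopril", "ibuprofen"},
     "category": "informational",
     "message": "Possible reduced antihypertensive effect."},
]

# Local customization: rules this site has agreed to suppress because they
# generate mostly clinically insignificant (false-positive) alerts.
SITE_SUPPRESSED_RULES = {"DD-250"}

def screen_order(new_drug: str, active_drugs: set[str],
                 allergies: set[str]) -> list[Alert]:
    """Return the alerts triggered by adding new_drug to a patient's profile."""
    alerts = []
    for rule in VENDOR_RULES:
        if rule["id"] in SITE_SUPPRESSED_RULES:
            continue  # locally customized out
        if rule["type"] == "drug-allergy":
            if new_drug == rule["drug"] and rule["allergy"] in allergies:
                alerts.append(Alert(rule["id"], rule["message"], rule["category"]))
        elif rule["type"] == "drug-drug":
            if new_drug in rule["drugs"] and rule["drugs"] & active_drugs:
                alerts.append(Alert(rule["id"], rule["message"], rule["category"]))
    return alerts

if __name__ == "__main__":
    # Example: ordering aspirin for a patient on warfarin with a penicillin allergy.
    for alert in screen_order("aspirin", {"warfarin"}, {"penicillin"}):
        print(alert.category, alert.rule_id, alert.message)
```

In this toy example, only the action-oriented warfarin–aspirin alert fires; the informational rule has been suppressed by the hypothetical site, which is exactly the kind of local, unstandardized decision the preceding paragraph describes.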
As the two papers presented in this issue of JAMIA have noted, the process of implementing, validating, measuring, and activating mechanisms to help prevent ADEs is complex. The situation is similar to that faced by those of us living in the beautiful mountainous area of the western United States. Here in Utah we love to hike, and we use topographic maps and global positioning systems to help us find our way. However, when we want a hike with the best route, the best scenery, and the shortest distance, we often seek additional guidance from books written by experts who have taken multiple trails at different times of the year.
It is our recommendation that we as a medical informatics community start sharing our “ADE Guide Books” in a free and open way. We believe that, by sharing “rule sets,” understanding the positive predictive value of each rule, and gaining a better understanding of which alerts are important versus merely informational, we will jointly improve our ability to detect and prevent ADEs. The complexity of the situation is pointed out by Nebeker et al.,8 who provide guidance to physicians to help clarify and classify ADEs. It is clear for both inpatient and outpatient situations that computerized methods can be improved and that investigators in our field must learn from each other.9
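As one concrete element of such shared “guide books,” sites could report per-rule performance. A minimal sketch follows, assuming each fired alert has already been adjudicated as clinically significant (true positive) or not; the rule names and counts are hypothetical.

```python
# Hypothetical adjudicated alert log: rule -> (true-positive alerts, total alerts fired).
# Positive predictive value (PPV) = true positives / total alerts fired for that rule.
adjudicated_alerts = {
    "DD-014 (warfarin + aspirin)": (42, 55),
    "DA-001 (penicillin allergy)": (12, 160),
    "DD-250 (lisinopril + ibuprofen)": (3, 210),
}

# Report rules from highest to lowest PPV, the kind of summary a site could share.
for rule, (true_pos, fired) in sorted(
        adjudicated_alerts.items(),
        key=lambda item: item[1][0] / item[1][1],
        reverse=True):
    ppv = true_pos / fired
    print(f"{rule}: PPV = {true_pos}/{fired} = {ppv:.2f}")
```

Shared reports of this kind would let other sites see, before turning a rule on, whether it tends to produce actionable alerts or mostly noise.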
We commend both teams of authors for their careful, insightful, and forthright presentation of the limitations that currently exist for detecting, measuring, and preventing ADEs. Let us take their work as a model of how to improve the care of our patients by better detecting, measuring, and preventing ADEs with computerized decision support systems.
References
1. Field TS, Gurwitz JH, Harrold LR, et al. Strategies for detecting adverse drug events among older persons in the ambulatory setting. J Am Med Inform Assoc. 2004;11:492–8.
2. Hsieh TC, Kuperman GJ, Jaggi T, et al. Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system. J Am Med Inform Assoc. 2004;11:482–91.
3. Gurwitz JH, Field TS, Harrold LR, et al. Incidence and preventability of adverse drug events among older persons in the ambulatory setting. JAMA. 2003;289:1107–16.
4. Hulse RK, Clark SJ, Jackson JC, Warner HR, Gardner RM. Computerized medication monitoring system. Am J Hosp Pharm. 1976;33:1061–4.
5. Gardner RM, Hulse RK, Larsen KG. Assessing the effectiveness of a computerized pharmacy system. Symposium on Computer Applications in Medical Care. 1990;14:668–72.
6. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847–51.
7. Gardner RM, Lundsgaarde HP. Evaluation of user acceptance of a clinical expert system. J Am Med Inform Assoc. 1994;1:428–38.
8. Nebeker JR, Barach P, Samore MH. Clarifying adverse drug events: a clinician's guide to terminology, documentation, and reporting. Ann Intern Med. 2004;140:795–801.
9. Classen D. Medication safety: moving from illusion to reality [editorial]. JAMA. 2003;289:1154–6.