Abstract
BACKGROUND
Although iatrogenic injury poses a significant risk to hospitalized patients, detection of adverse events (AEs) is costly and difficult.
METHODS
The authors developed a confidential reporting method for detecting AEs on a medicine unit of a teaching hospital. Adverse events were defined as patient injuries. Potential adverse events (PAEs) represented errors that could have resulted in harm but did not. Investigators interviewed house officers during morning rounds and by e-mail, asking them to identify obstacles to high quality care and iatrogenic injuries. They compared house officer reports with hospital incident reports and patients' medical records. A multivariate regression model identified correlates of reporting.
RESULTS
One hundred ten events occurred, affecting 84 patients. Queries by e-mail (incidence rate ratio [IRR] = 0.16; 95% confidence interval [95% CI], 0.05 to 0.49) and queries on days when house officers rotated to a new service (IRR = 0.12; 95% CI, 0.02 to 0.91) resulted in fewer reports. The most commonly reported process of care problems were inadequate evaluation of the patient (16.4%), failure to monitor or follow up (12.7%), and failure of the laboratory to perform a test (12.7%). Respondents identified 29 (26.4%) AEs, 52 (47.3%) PAEs, and 29 (26.4%) other house officer-identified quality problems. An AE occurred in 2.6% of admissions. The hospital incident reporting system detected only one house officer-reported event. Chart review corroborated 72.9% of events.
CONCLUSIONS
House officers detect many AEs among inpatients. Confidential peer interviews of front-line providers are a promising method for identifying medical errors and substandard quality.
Keywords: Adverse event, medical error, house officer
Errors and mishaps pose a substantial risk to hospitalized patients. Iatrogenic injuries affect as many as 18% of patients admitted to hospitals in the United States, at a cost estimated to exceed $100 billion per year.1–4
Managing this problem is difficult in part because adverse events (AEs) often go unrecognized. The complexity of medical care makes it difficult to attribute poor outcomes to treatment. In addition, clinicians may be reluctant to report errors because they may face legal and financial penalties, administrative sanction, and criticism by their colleagues. The intensive data collection efforts required in research studies of medical errors are expensive, time consuming, and probably unsuitable for ongoing use.1,3,5 Incident reports miss most AEs.6 Screening algorithms using administrative data are not yet validated.7,8 Information system-based screening methods and physician order entry are powerful but not widely available.9,10 As a result, and despite the good-faith efforts of peer review organizations and quality improvement professionals, only a minority of events comes to the attention of the department chairs, medical directors, and physician-administrators who are responsible for clinical care. Understanding the nature and magnitude of quality problems may direct improvement efforts and motivate the use of clinical pathways and the adoption of “best practices.” A cost-effective, timely method to identify AEs would represent a major innovation in quality improvement.
A promising approach to the problem of medical error reporting relies on clinicians to identify AEs. House officers are particularly well positioned to identify these problems.11 They are front-line workers, intimately involved in many aspects of patient care. They interact with and coordinate the delivery of services to hospitalized patients. Many are willing to report their own errors.12 House officers at a Colorado Veterans Affairs medical center reported as many AEs during morning report as the hospital incident reporting system.13 House officers at a Boston teaching hospital who were prompted by daily e-mail reminders identified adverse events as well as intensive chart review did.14 Neither project was designed for ongoing surveillance.
The purpose of this study was to develop a replicable and potentially sustainable reporting system that relies on house officers to identify AEs. The project was designed explicitly as a method for surveillance of AEs, integrating several elements that were thought to be essential to an ongoing system into the design. Using this approach, the authors sought to answer 2 questions. First, are house officers willing to identify AEs and quality of care problems among hospitalized patients? Second, do reports by house officers provide adequate information to understand the nature and magnitude of errors affecting hospitalized patients?
METHODS
Study Site
The study site was the general medicine service of a 371-bed, Boston-based teaching hospital. A single interviewer (SNW) met 1 to 3 times each week with postgraduate year 2 and 3 medicine residents whose patients were assigned to one of two 40-bed general medicine nursing units. House officers work on the unit in 3- to 8-week rotations. They admit all patients on the unit and provide 24-hour/day coverage.
The interviewer was a fellow (postgraduate year 4) in the Department of Medicine who completed an internal medicine residency at the hospital and was well known to most respondents. He notified house officers by e-mail at the start of the rotation that he would query them for a project designed to identify obstacles to high quality care. No written consent was obtained in advance as the project was conceived as a peer review activity under the auspices of the Medicine Department Quality Improvement Committee rather than as a research study. Investigators assured potential respondents that their participation was entirely voluntary, that reports would be handled confidentially, that patients' and respondents' identities would be protected, that analyses would be performed using aggregated data, and that the information collected would be used to improve the quality of care.
Study Protocol
Interviews took place during Care Management Rounds (CMR), a daily, ongoing interdisciplinary meeting on each nursing unit that included medical house officers and representatives from nursing, social work, physical therapy, and case management. At CMR, the resident physician from each house officer team reviewed the diagnostic and treatment plan of his or her patients. At that point, the interviewer asked the resident physician if he or she had “encountered any barriers to high-quality patient care”—a purposefully vague phrase, chosen to avoid connotations of guilt or fault. He asked respondents who required further clarification to identify problems that interfered with their ability to deliver excellent care. He also asked each respondent to identify patients who were injured or whose hospitalizations were extended as a result of their care.
The interviewer recorded an abbreviated narrative summary for each event, along with the patient's age, gender, and hospital identification number. The interviewer asked respondents questions to clarify the report, and CMR members familiar with the case occasionally added details. He accepted, but did not elicit, reports from nonphysician participants of CMR. Multiple unrelated events affecting a patient during a single admission or in a single report were recorded as separate events. Duplicate reports were recorded only once and attributed to the first respondent who made the report. Most interviews took less than 5 minutes.
The interviewer attended 28 CMR meetings over a 12-week period from August to November 1997, and performed 102 interviews with 28 different house officers. On average, 3.6 interviews took place at each meeting. In addition, he sent 48 e-mail queries to 8 house officers during one 3-week period. E-mails substituted for face-to-face CMR interviews on these days.
Event Classification and Verification
The interviewer classified each event by the problematic process of care that contributed to the event, the adverse consequence for the patient, the respondent-identified responsible party, and whether the event was probably, possibly, or unlikely preventable. A single most important process, outcome, and responsible party were identified for each event. Events that involved conflict with other clinicians or hospital personnel or that required a significant amount of time to resolve were classified as a “hassle.” This classification system was informed by Leape's work and sought to distinguish between clinicians' errors and patients' adverse outcomes.1,15 The investigators read event narratives closely and collected similar incidents into higher order categories. Sample reports and examples of the classification scheme are included in the Appendix.
In addition, injuries that resulted from medical care were classified by the interviewer as AEs. This definition was modified from the Medical Practice Study (MPS), which defined an AE as “an injury that was caused by medical management (rather than underlying disease) and that prolonged the hospitalization, produced a disability at the time of discharge, or both.”1 The study definition relaxed the MPS requirement for disability or delayed discharge among injured patients. This adaptation reflected the study's emphasis on error surveillance rather than attributions of negligence. A potential adverse event (PAE) represented an error that did not result in patient harm but could have. Quality of care problems that did not meet the criteria for AE or PAE were grouped together; most involved problems with patient comfort or convenience (i.e., service quality) rather than diagnosis and therapy.
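For illustration only, each report under this scheme could be captured as a structured record along the following lines; the field and category names below are hypothetical stand-ins, not the study's coding form.

```python
# Illustrative sketch of an event record under the classification scheme above.
# Field and category names are hypothetical, not the study's actual coding form.
from dataclasses import dataclass
from enum import Enum


class EventType(Enum):
    AE = "adverse event"             # injury resulting from medical care
    PAE = "potential adverse event"  # error that could have, but did not, cause harm
    OTHER = "other quality problem"  # e.g., patient comfort or convenience


class Preventability(Enum):
    PROBABLE = "probably preventable"
    POSSIBLE = "possibly preventable"
    UNLIKELY = "unlikely preventable"


@dataclass
class EventReport:
    narrative: str             # abbreviated narrative summary
    patient_id: str            # hospital identification number
    process_problem: str       # single most important process of care problem
    adverse_consequence: str   # single most important adverse outcome
    responsible_party: str     # respondent-identified responsible party
    event_type: EventType
    preventability: Preventability
    hassle: bool               # conflict with other personnel or time-consuming to resolve


# Example corresponding to Appendix Event 2 (lost cytology specimen).
example = EventReport(
    narrative="Paracentesis cytology sample never logged in by the laboratory.",
    patient_id="anonymized",
    process_problem="delay in performing a test",
    adverse_consequence="delayed diagnosis",
    responsible_party="laboratory",
    event_type=EventType.PAE,
    preventability=Preventability.PROBABLE,
    hassle=True,
)
```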
To assess the reliability of this scheme, a second board-certified internist (ANS) classified each event independently, using the abbreviated event narrative and a coding form that specified 24 process problems, 42 adverse outcomes, and 25 responsible parties. She also addressed whether the event was an AE or PAE, its preventability, and the presence of conflict among health care workers.
Event Verification
At the conclusion of the study, the investigators compared house officer-reported events with entries recorded in the hospital incident reporting system. This is an electronic system maintained by the hospital's Department of Healthcare Quality, in accordance with regulatory requirements of the state and of the Joint Commission on Accreditation of Healthcare Organizations.
In addition, 2 board-certified internists (SNW, ANS) reviewed the medical records of patients with house officer-identified events. To maximize the likelihood of confirming a report, each reviewer received a narrative of the event and the date it was reported. The reviewer indicated if evidence present in the record confirmed the narrative. Fifteen percent of events were selected at random and reviewed independently by both reviewers. Reviewers examined records in accordance with guidelines established by and with the approval of the hospital institutional review board.
Analyses
We evaluated inter-rater reliability using the κ statistic for categorical variables and weighted κ for ordinal variables. Because of the large number of items in the classification scheme, we collapsed the process, outcome, and responsible-party codes into the higher order categories listed as subheadings in Tables 1 through 3. We calculated AE rates using admission and patient-days data provided by the hospital admissions office. To assess correlates of event reporting, we constructed a multivariate Poisson regression model (Stata Corporation, College Station, Tex) of the number of events reported as a function of the following variables: study week, day of week, number of respondents per day, type of query (e-mail or CMR), respondent (house officer or nonphysician), whether the interns or residents changed their rotation assignment on the interview day, and number of days since the last query. The cost of the project was calculated by multiplying the interviewer's hourly salary ($35/hour) by the number of hours per week spent collecting and analyzing data. The cost of each respondent's time was not included, but no respondent spent more than 10 minutes per week. Results of the analysis were shared with hospital senior management, the Department of Healthcare Quality, the Medicine Department Quality Improvement Committee, and representatives of the study unit.
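A minimal sketch of the reporting-rate model follows, using Python's statsmodels in place of Stata and entirely hypothetical daily counts; variable names are illustrative, and day of week and respondent type are omitted for brevity.

```python
# Sketch of the Poisson regression of daily event counts on query characteristics.
# Data and variable names are hypothetical; the study used Stata on its own dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_days = 28  # one row per interview day, mirroring the 28 CMR sessions

df = pd.DataFrame({
    "study_week":    np.repeat(np.arange(1, 8), 4),  # toy schedule: 7 weeks, 4 sessions each
    "n_respondents": rng.integers(2, 5, n_days),
    "email_query":   rng.integers(0, 2, n_days),     # 1 = e-mail, 0 = face-to-face CMR
    "rotation_day":  rng.integers(0, 2, n_days),     # 1 = house officers changed rotation
    "days_since":    rng.integers(1, 4, n_days),     # days since the last query
})
# Simulated counts with fewer reports on e-mail and rotation-change days.
df["events"] = rng.poisson(np.exp(1.0 - 1.8 * df["email_query"] - 1.5 * df["rotation_day"]))

model = smf.glm(
    "events ~ study_week + n_respondents + email_query + rotation_day + days_since",
    data=df,
    family=sm.families.Poisson(),
).fit()

# Exponentiated coefficients are incidence rate ratios (IRRs); the paper reports
# IRR = 0.16 for e-mail queries and IRR = 0.12 for rotation-change days.
print(np.exp(model.params).round(2))
```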
Table 1. Process of Care Problems (N = 110 Events)

| Event | n | % |
|---|---|---|
| Diagnosis | | |
| Inadequate evaluation | 18 | 16.4 |
| Diagnostic error | 7 | 6.4 |
| Delayed consultation | 4 | 3.6 |
| Therapy | | |
| Medication-related | | |
| Drug reaction | 5 | 4.5 |
| Delay in providing medication | 5 | 4.5 |
| Failure to order medication | 1 | 0.9 |
| Inappropriate medication | 1 | 0.9 |
| Operative or procedure-related | | |
| Delayed procedure or operation | 2 | 1.8 |
| Postprocedure complication | 6 | 5.5 |
| Other treatment-related problem | | |
| Inappropriate treatment | 2 | 1.8 |
| Prevention | | |
| Failure to monitor or follow up | 14 | 12.7 |
| Inadequate supervision | 3 | 2.7 |
| Clinical services | | |
| Failure of laboratory or radiology to perform a test | 14 | 12.7 |
| Failure of laboratory or radiology to report abnormal results | 2 | 1.8 |
| Laboratory error | 2 | 1.8 |
| Support services | | |
| Failure to transport a patient | 4 | 3.6 |
| Failure to draw blood | 3 | 2.7 |
| Inadequate supplies | 2 | 1.8 |
| Failure to provide for patient comfort | 2 | 1.8 |
| Discharge planning and code status | | |
| Difficult or unsafe discharge | 6 | 5.5 |
| Problem with code status | 4 | 3.6 |
| Other events | | |
| Unavailable intensive care unit bed | 2 | 1.8 |
| Inadequate staffing | 1 | 0.9 |
| Total | 110 | 100.0 |
Table 3. Respondent-Identified Responsible Parties (N = 110 Events)

| Responsible party | n | % |
|---|---|---|
| Primary providers | | |
| House officers | 15 | 13.6 |
| Attending physicians | 10 | 9.1 |
| Nurses | 9 | 8.2 |
| Outside physicians | 3 | 2.7 |
| Emergency unit | 21 | 19.1 |
| Clinical services | | |
| Laboratory | 10 | 9.1 |
| Radiology | 6 | 5.5 |
| Pharmacy | 2 | 1.8 |
| Respiratory therapy | 1 | 0.9 |
| Subspecialty services | | |
| Cardiology | 3 | 2.7 |
| Gastroenterology | 3 | 2.7 |
| Interventional radiology | 2 | 1.8 |
| Neurosurgery | 2 | 1.8 |
| Anesthesia | 1 | 0.9 |
| General surgery | 1 | 0.9 |
| Obstetrics and gynecology | 1 | 0.9 |
| Ophthalmology | 1 | 0.9 |
| Orthopedics | 1 | 0.9 |
| Psychiatry | 1 | 0.9 |
| Support services | | |
| Transportation | 4 | 3.6 |
| Phlebotomy | 3 | 2.7 |
| Nutrition | 2 | 1.8 |
| Supplies | 2 | 1.8 |
| Intravenous team | 1 | 0.9 |
| Other | | |
| Admitting | 2 | 1.8 |
| Unit coordinator | 1 | 0.9 |
| Security | 1 | 0.9 |
| No identifiable party | 1 | 0.9 |
| Total | 110 | 100.0 |
RESULTS
Interrater Reliability
Reviewers coded each event narrative independently, using a common classification scheme. Inter-rater reliability was substantial for process problems (κ = 0.78), adverse outcomes (κ = 0.78), and responsible party (κ = 0.70). Agreement was only fair for judgments about prevention (κ = 0.36), perhaps reflecting a failure to calibrate reviewers' judgments in advance. Agreement was moderate for events that involved interpersonal conflict (κ = 0.56).
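For readers unfamiliar with the statistic, the following sketch shows how unweighted and weighted κ could be computed from two reviewers' independent codes; the labels are toy examples, not the study data.

```python
# Toy example of the inter-rater agreement statistics reported above.
from sklearn.metrics import cohen_kappa_score

# Two reviewers' codes for a nominal variable (e.g., higher-order process category).
rater1 = ["diagnosis", "prevention", "clinical services", "therapy", "diagnosis", "therapy"]
rater2 = ["diagnosis", "prevention", "therapy",           "therapy", "diagnosis", "therapy"]
print(cohen_kappa_score(rater1, rater2))  # unweighted kappa for categorical codes

# Ordinal preventability judgments (2 = probable, 1 = possible, 0 = unlikely)
# use a weighted kappa, as described in the Methods.
prevent1 = [2, 1, 0, 2, 1, 2]
prevent2 = [2, 2, 0, 1, 1, 2]
print(cohen_kappa_score(prevent1, prevent2, weights="linear"))
```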
Respondent Reporting
Of 150 respondent contacts at CMR or by e-mail, house officers reported 100 AEs, PAEs, and other quality of care problems involving 79 patients. Another 10 events involving 5 patients were reported by nonphysician participants at CMR. E-mail queries were less likely to result in the report of an event (6 events in 48 queries) compared with a face-to-face interview (94 events in 102 queries). In the multivariate analysis, we found no statistically significant association between the number of events reported and day of the week, number of respondents per day, week of the study, or number of days since the last e-mail or CMR contact. Fewer events were reported by e-mail compared to CMR (incidence rate ratio [IRR], 0.16; 95% confidence interval [CI], 0.05 to 0.49) and on days when house officers had rotated onto a new service (IRR, 0.12; 95% CI, 0.02 to 0.91).
Process Problems
Respondents identified a variety of problems with the processes of care (Table 1). Inadequate evaluation of the patient was the most frequently reported process problem (n = 18, 16.4% of total); most cases involved inadequate assessment in the emergency room. Problems involved more than triage misdiagnoses. Patients with presumed infection were admitted without blood tests or cultures. Patients with shortness of breath, poor oximetry, and presumptive diagnoses of congestive heart failure and pulmonary embolism had no chest radiograph performed. Clinicians diagnosed acute pancreatitis in a patient with normal amylase and liver function tests.
In 14 (12.7%) cases, providers failed to monitor or follow up on a patient. Examples of poor follow-up included several patients who developed fluid overload following intravenous hydration or transfusion; one delirious patient was found wandering outside the hospital by security officers. Another 14 (12.7%) cases involved failure of the laboratory to perform a test. Examples included lost or misplaced specimens, an episode in which a technician refused to run cell counts on a cerebrospinal fluid sample of a patient evaluated for meningitis, and a difficult interaction between house officers and a radiology technician who refused to perform a lateral chest radiograph in a patient whose portable chest film was normal. The 12 (10.8%) medication-related problems included failure to anticipate side effects (e.g., hypotension or aspiration associated with sedatives and narcotics), failure to flag an order for heparin in a patient diagnosed with unstable angina, and delays in the delivery or administration of antibiotics.
Adverse Consequences
A list of adverse consequences, grouped by the nature and severity of injury, is presented in Table 2. Delays were the most common problem, accounting for 41% of adverse consequences. Many delays involved phlebotomies, invasive procedures, and subspecialty consultations. Respondents reported 3 deaths: one following an aspiration pneumonia in an agitated patient who was sedated to permit a CT scan of the head, another with an unexplained cardiorespiratory arrest thought potentially related to a narcotic overdose, and a third in a patient who died unexpectedly with no working diagnosis and only a preliminary workup.
Table 2. Adverse Consequences (N = 110 Events)

| Adverse consequence | n | % |
|---|---|---|
| Death | 3 | 2.7 |
| Injury | | |
| Cardiovascular | | |
| Hypotension | 2 | 1.8 |
| Acute myocardial infarction | 1 | 0.9 |
| Congestive heart failure | 2 | 1.8 |
| Hypovolemia | 1 | 0.9 |
| Pulmonary | | |
| Pulmonary embolism or deep vein thrombosis | 1 | 0.9 |
| Respiratory failure | 3 | 2.7 |
| Aspiration | 1 | 0.9 |
| Gastrointestinal | | |
| Gastrointestinal bleeding | 1 | 0.9 |
| Pancreatitis | 1 | 0.9 |
| Renal | | |
| Acute renal failure | 1 | 0.9 |
| Infection | | |
| Fever | 2 | 1.8 |
| Bacteremia | 1 | 0.9 |
| Neurological | | |
| Neuroleptic malignant syndrome | 2 | 1.8 |
| Other injuries | | |
| Excessive sedation | 2 | 1.8 |
| Inadequate analgesia | 1 | 0.9 |
| Peripheral edema | 1 | 0.9 |
| Retained bone fragment | 1 | 0.9 |
| Postprocedure bleeding | 1 | 0.9 |
| Resuscitation of “do not resuscitate” patient | 1 | 0.9 |
| Delays | | |
| Delayed diagnosis | 43 | 39.1 |
| Delayed treatment | 10 | 9.1 |
| Problematic discharge | | |
| Unsafe discharge | 1 | 0.9 |
| Delayed discharge | 7 | 6.4 |
| None | 20 | 18.2 |
| Total | 110 | 100.0 |
Investigators classified all 29 cases with in-hospital injury or death as AEs. Among AEs, 24.1% were judged probably preventable and 44.8% possibly preventable. PAEs accounted for 48 of 53 delays, 1 unsafe discharge, and 3 cases with no adverse consequences for the patient. Among PAEs, 73.1% were judged probably preventable and 25.0% possibly preventable. The remaining events represented other house officer-identified quality problems. An AE occurred in 2.6% of admissions, a PAE in 4.7%, and other quality problems in 2.6%. Overall, AEs, PAEs, and other quality problems occurred in 1 of every 10 admissions, or 26.5 events per 1,000 patient-days.
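As a back-of-the-envelope check on these rates, the sketch below recomputes them from the reported counts; the admission and patient-day denominators are inferred from the reported rates rather than stated in the text.

```python
# Rough consistency check of the reported rates; the denominators are inferred,
# not figures taken from the hospital admissions office.
aes, paes, other = 29, 52, 29
events = aes + paes + other                 # 110 events in total

admissions = round(aes / 0.026)             # ~1,100 admissions implied by the 2.6% AE rate
patient_days = round(events / 26.5 * 1000)  # ~4,150 patient-days implied by 26.5 per 1,000

print(f"AE rate:        {aes / admissions:.1%} of admissions")
print(f"PAE rate:       {paes / admissions:.1%} of admissions")
print(f"Other problems: {other / admissions:.1%} of admissions")
print(f"Any event:      {events / admissions:.1%} of admissions (about 1 in 10)")
print(f"Event rate:     {events / patient_days * 1000:.1f} per 1,000 patient-days")
```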
Responsible Party
Respondents identified clinicians (including house officers) as the responsible party in 37 (33.6%) events (Table 3). Emergency department staff accounted for 21 events (19.1%); subspecialty consultants for 16 (14.5%); laboratory, radiology, and pharmacy for 18 (16.4%); and support services such as transportation, phlebotomy, and nutrition for 12 (10.9%). After emergency department staff, house officers were most likely to identify themselves as the responsible party (n = 15, 13.6%). Examples of house officer events included failure to flag a new order for heparin in a patient with unstable angina, treatment of a patient with a known penicillin allergy using piperacillin, and failure to identify an arrhythmia on an admission electrocardiogram. Respondents found interactions with attending physicians and with staff in the emergency department, the clinical laboratory, and support services particularly difficult and prone to conflict, characterizing 41% of events as a hassle.
Cost of Study
The interviewer spent 1.5 hours per day on interviews, data coding, and analysis. At $35/hour for 1.5 hours over each of 28 sessions, the total cost was $1,470, or $13.36 per event detected.
Incident Reports
During the study period, the hospital incident reporting system identified 58 incidents involving 51 patients on the study unit. Incidents included 32 slips and falls, 19 medication-related events (wrong dosage, wrong patient, etc.), 2 cases of missing or damaged property, 2 specimen collection or laboratory problems, and 3 events not otherwise classified. Thirty-eight events were classified as Level I (no injury to the patient); the remainder were Level II (minor injury). The incident reporting system included only one event (a Level II fall) identified by house officer report. Three patients, each with a slip or fall identified by incident report, had a different house officer-identified event during the same admission: one unsafe discharge (a PAE), one delayed diagnosis (a PAE), and one delayed discharge (an “other” quality problem).
Chart Review
Of the 110 events, we excluded 7 because no unique medical record number could be identified. Five additional cases involved problems that affected a group of patients rather than an individual (e.g., pneumatic compression boots were unavailable on a nursing unit for 36 hours). Of the remaining 98 events, we obtained medical records for all but 2 cases. In 14 of 15 randomly selected cases, reviewers agreed about the presence of a reported event (92.3% agreement, κ = 0.81). Reviewers confirmed 70 of 96 events (72.9%). Reviewers did not confirm 6 events due to inaccuracies in the identity of the patient, the nature of the event, or the consequences to the patient. There was inadequate documentation to confirm 19 other events; the medical record rarely provided sufficient detail to prove delays associated with requests for subspecialty consultation or laboratory evaluation or to document conflicts among caregivers.
DISCUSSION
This paper describes a simple and inexpensive method to identify quality problems and sources of iatrogenic injury among hospitalized patients. The method integrates confidential peer interviews of house officers into the workday. Employing this approach, the investigators identified an AE in 2.6% of medical admissions. Although the study definition of AEs may make direct comparisons difficult, the prevalence of AEs is similar in magnitude to that in previous work: 2.8% in O'Neil's study of house officer self-reporting14 and 3.7% in the MPS.1 An AE, PAE, or other house officer-identified quality problem occurred in 10% of admissions. Many events were preventable. A large number involved interpersonal conflict among front-line caregivers.
Although a majority of house officer-reported events were corroborated in the medical record (73%), only one such event was recorded in the hospital incident reporting system. Conversely, house officers failed to report 57 of 58 events recorded in the incident reporting system. In practice, nurses recorded the overwhelming majority of incident reports; it is possible that house officers perceive incident reports as a nursing responsibility. House officers may be unaware of slips and falls without injury, drug-dose discrepancies, or other events that do not require their intervention. As a result, incident reports and the confidential clinician-reported surveillance method described here are complementary approaches for detecting AEs and iatrogenic injury.
There are several reasons why the results must be interpreted cautiously. First, events are based on the report of individual clinicians. Although medical record review corroborated a substantial number of reports, prospective corroboration with the patient or other providers would enhance a report's validity. Second, clinicians' reports offer limited information about the systems problems that account for most medical errors. The study identified a series of process problems, but did not examine the ultimate or “root” causes that led to the event. Third, assessment of preventability may be subject to bias. Knowledge of adverse outcomes may cause reviewers to judge quality more harshly.16 Event narratives often contained information about adverse consequences, so investigators were not blinded to outcome. The fact that a larger proportion of PAEs than AEs was judged preventable suggests that hindsight bias played a relatively small role.
Fourth, the adverse event rate is almost certainly an underestimate. House officers may fail to report events because of their own perceived vulnerability to supervisors' disapproval, fear of developing a bad reputation, or a sense of powerlessness. A culture of fear in health care holds perfect performance as an ideal and imposes blame and shame on those who fail to meet the mark.17 Many take error for granted as a necessary part of the learning process, or as a necessary consequence of the complexity, toxicity, or heroics of modern medical care. They may be unaware of events that occur on evenings or weekends, when another house officer provides cross-coverage. They also may be ignorant of events that are intercepted by a nurse or pharmacist and never communicated verbally or in the medical record. House officers' reports may be enriched for events that were most recent or most annoying, or may reflect selective attention to a controversial service area. The sample includes events that clinicians found vexing, and which they may feel motivated to address.
Fifth, the generalizability of the approach requires further study. The interviewer was a general medicine fellow who knew most respondents well and was likely viewed as a peer. Respondents may be more likely to report mistakes to a trusted peer than to a supervisor or stranger. The approach needs to be tested with nurses and other front-line clinicians. Attending physicians may be less willing to participate because of their concerns regarding liability exposure and uncertainty about the durability and scope of peer review protections. Finally, the culture of the academic medical center under study may have offered a non-punitive environment that was conducive to self-disclosure.
Despite these caveats, the approach presented here has many attractive features. It is timely, inexpensive, and acceptable to clinicians. It identified more events and more serious events than those recorded in the hospital incident reporting system. Face-to-face peer interviews yielded more reports than e-mail prompts, and the approach may be adapted to a variety of settings. The authors used a similar approach with house officer respondents in the medical intensive care, oncology, and cardiac step-down units, and in an outpatient primary care practice. They also collected reports from nurse-respondents on inpatient general surgery and oncology units. In each setting, clinician interviewers were known to respondents and held informal interviews in staff meetings, work rounds, and other settings that were part of the regular workday.
While the model is replicable in a single hospital, it has not yet been implemented as an ongoing quality improvement activity. To create a sustainable model, participation should become part of a hospital quality improvement strategy with data collection and analysis assigned to a chief resident or other responsible staff physician. It must become integrated into the daily rhythm of the work and viewed as complementary to, rather than a distraction from, the mission of patient care. Creating a mechanism to act on clinicians' reports will enhance the credibility of the effort and reinforce respondents' willingness to participate. We recognize that organizations must take available resources into consideration and recommend that they solicit the views of house officers about potential interviewers.
CONCLUSION
Inviting house officers and other front-line clinicians to participate in abbreviated confidential interviews is a promising way to detect AEs and medical errors. House officers are particularly well suited to this activity, and potentially valuable partners in quality improvement.
Acknowledgments
This research was supported in part by the CareGroup Center for Quality and Value. Preliminary data were presented at the Second Annenberg Conference on Enhancing Patient Safety and Reducing Errors in Health Care, Rancho Mirage, Calif, November 8–10, 1998, and an extended abstract published in the conference proceedings.
Appendix: Sample Narratives and Classification
Event 1
Flash congestive heart failure in patient with similar history while given a transfusion of packed red blood cells and concurrent diuretic. Intubated and transferred to the intensive care unit.
Process problem: inadequate monitoring or follow-up
Adverse consequence: respiratory failure (adverse event)
Responsible parties: house officers
Preventable: possible
Hassle: no
Event 2
Paracentesis performed in patient with ascites, cytology sample sent to laboratory. Sample never logged in. After multiple calls and the assistance of the laboratory supervisor, sample discovered on desk at laboratory control the following day.
Process problem: delay in performing a test
Adverse consequence: delayed diagnosis (potential adverse event)
Responsible parties: laboratory
Preventable: probable
Hassle: yes
Event 3
Patient presented to the emergency room with confusion and dehydration. Head CT scan at 8 pm showed a plum-sized posterior fossa mass compressing the fourth ventricle, midline shift, and obstructive hydrocephalus, but was interpreted as an arachnoid cyst by the radiology resident. House staff notified urgently at 8 am the next day by the neuroradiology attending physician of a new mass lesion. Patient remained stable.
Process problem: diagnostic error
Adverse consequence: delayed diagnosis (potential adverse event)
Responsible parties: radiology
Preventable: possible
Hassle: no
Event 4
A patient was transferred from an outside hospital for treatment of a hip fracture. On admission, found to be in congestive heart failure. Anesthesia wrote preoperative orders on the night of admission, including intravenous fluids (normal saline at 80 cc/hr), while the patient received concurrent diuresis by the medical team. No improvement overnight in the patient's shortness of breath, examination, or oximetry.
Process problem: inappropriate therapy
Adverse consequence: congestive heart failure (adverse event)
Responsible parties: subspecialty service
Preventable: probable
Hassle: no
Event 5
An elderly woman was admitted from an outside clinic for treatment of pneumonia. No attending note in the chart or visit for the first 48 hours, prompting inquiries. Ultimately discovered that hospital admitting office had admitted patient to a physician without attending privileges at the hospital. The patient was assigned to a new attending physician.
Process problem: inadequate supervision
Adverse consequence: none
Responsible parties: admitting office
Preventable: possible
Hassle: no
Event 6
A middle-aged woman was transferred from a rehabilitation hospital for treatment of a diabetic leg ulcer. Seen by emergency room staff and cleared for admission. Admitting team found the patient to have room air oximetry of 84%; no vital signs had been performed. They requested a chest x-ray and electrocardiogram in the emergency department. The patient arrived on the medical floor with the EKG but no chest x-ray, and was promptly heparinized for a working diagnosis of pulmonary embolism.
Process problem: inadequate evaluation
Adverse consequence: delayed diagnosis (potential adverse event)
Responsible parties: emergency department staff
Preventable: probable
Hassle: yes
REFERENCES
1. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. N Engl J Med. 1991;324:370–6. doi:10.1056/NEJM199102073240604.
2. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. N Engl J Med. 1991;324:377–84. doi:10.1056/NEJM199102073240605.
3. Andrews LB, Stocking C, Krizek T, et al. An alternative strategy for studying adverse events in medical care. Lancet. 1997;349:309–13. doi:10.1016/S0140-6736(96)08268-2.
4. Bates DW, Spell N, Cullen DJ, et al. The costs of adverse drug events in hospitalized patients. JAMA. 1997;277:307–11.
5. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events. JAMA. 1995;274:29–34.
6. Cullen DJ, Bates DW, Small SD, et al. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541–52. doi:10.1016/s1070-3241(16)30180-8.
7. Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31:40–55.
8. Iezzoni LI, Daley J, Heeren T, et al. Identifying complications of care using administrative data. Med Care. 1994;32:700–15. doi:10.1097/00005650-199407000-00004.
9. Classen DC, Pestotnik SL, Evans RS, et al. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847–51.
10. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280:1311–5. doi:10.1001/jama.280.15.1311.
11. Weingart SN. House officer education and organizational obstacles to quality improvement. Jt Comm J Qual Improv. 1996;22:640–6. doi:10.1016/s1070-3241(16)30271-1.
12. Wu AW, Folkman S, McPhee SJ, et al. Do house officers learn from their mistakes? JAMA. 1991;265:2089–94.
13. Welsh CH, Pedot RP, Anderson RJ. Use of morning report to enhance adverse event detection. J Gen Intern Med. 1996;11:454–60. doi:10.1007/BF02599039.
14. O'Neil AC, Petersen LA, Cook F, et al. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med. 1993;119:370–6. doi:10.7326/0003-4819-119-5-199309010-00004.
15. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. JAMA. 1995;274:35–43.
16. Caplan RA, Posner KL, Cheney FW. Effect of outcome on physician judgments of appropriateness of care. JAMA. 1991;265:1957–60.
17. Leape LL. Error in medicine. JAMA. 1994;272:1851–7.