Journal of the American Medical Informatics Association (JAMIA). 2011 Jul-Aug;18(4):491–497. doi: 10.1136/amiajnl-2011-000187

Comparison of computerized surveillance and manual chart review for adverse events

Aldo Tinoco,1 R Scott Evans,1,2 Catherine J Staes,1 James F Lloyd,2 Jeffrey M Rothschild,3 Peter J Haug1,2
PMCID: PMC3128408  PMID: 21672911

Abstract

Objective

To understand how the source of information affects different adverse event (AE) surveillance methods.

Design

Retrospective analysis of inpatient adverse drug events (ADEs) and hospital-associated infections (HAIs) detected by either a computerized surveillance system (CSS) or manual chart review (MCR).

Measurement

Descriptive analysis of events detected using the two methods by type of AE, type of information about the AE, and sources of the information.

Results

CSS detected more HAIs than MCR (92% vs 34%); however, a similar number of ADEs was detected by both systems (52% vs 51%). The agreement between systems was greater for HAIs than ADEs (26% vs 3%). The CSS missed events that did not have information in coded format or that were described only in physician narratives. The MCR detected events missed by CSS using information in physician narratives. Discharge summaries were more likely to contain information about AEs than any other type of physician narrative, followed by emergency department reports for HAIs and general consult notes for ADEs. Some ADEs found by MCR were detected by CSS but not verified by a clinician.

Limitations

Inability to distinguish between CSS false positives and suspected AEs for cases in which the clinician did not document their assessment in the CSS.

Conclusion

The effect that information source has on different surveillance methods depends on the type of AE. Integrating information from physician narratives with CSS using natural language processing would improve the detection of ADEs more than HAIs.

Keywords: Electronic surveillance, adverse drug event surveillance, hospital-associated infection surveillance, healthcare quality

Background

Healthcare organizations are under increasing pressure to demonstrate the quality of care they deliver.1–6 This pressure originates from healthcare consumers, state and national legislation, healthcare accreditation and credentialing agencies, and quality improvement organizations. Healthcare organizations generally rely on manual chart review (MCR) to retrospectively measure quality and safety. Yet this ‘gold standard’ is too time-intensive and costly to be the sole means of routinely identifying patient events of interest.7 To improve the efficiency of quality monitoring, hospitals have implemented computerized surveillance systems (CSS).8–14 Through automation, some hospitals have replaced retrospective, passive monitoring with prospective, active surveillance that allows concurrent interventions and improvement in quality and safety.

Different surveillance methods use different types of information to detect events. A CSS relies on clinical data that are numeric or coded, such as pharmacy orders, laboratory results, and claims data. Yet prior studies have shown that physician narratives and nursing notes contain information about adverse events (AE) not found elsewhere in the patient record.15 A CSS that can access both coded and freetext data—such as that found in unstructured narratives—may improve surveillance without requiring the time and cost associated with MCR alone.

Prior studies on CSS that use information in physician narratives have focused on specific types of documents, such as discharge summaries.16–18 Discharge summaries are valuable for retrospective, post-discharge measurement of AEs. However, to provide timely notification to providers and patient safety personnel, prospective surveillance for AEs requires real-time access to information throughout the admission. Thus, we wanted to evaluate the utility of other types of physician narratives commonly found electronically in the inpatient record.

The CSS developers need to know what information about AEs is relevant, where to look for it, how it is represented, and how to extract it from documents. Since MCR utilizes information from both narrative and non-narrative sources, we assumed that MCR would detect some cases missed by CSS. To investigate this, we designed a study to: (a) compare the type of AEs detected by either MCR or CSS, (b) assess the features associated with events found only by MCR, and (c) identify opportunities for improving event detection by computerized surveillance. From a sample of inpatient admissions during a specific time period, we identified AEs that were detected only by MCR, only by CSS, or by both methods. To aid efforts to improve CSS, we collected actual phrases from all electronic physician narratives throughout a hospital admission that contained information about adverse drug events (ADE) and hospital-associated infections (HAI).

Methods

Setting

The study was performed at LDS Hospital, a major teaching hospital in Salt Lake City, Utah. The patient record at LDS Hospital has both electronic and paper-based components. The HELP (Health Evaluation through Logical Processing) system has been operational at LDS Hospital since 1972 and manages both clinical and financial patient information.19 In addition to billing and administrative codes for each hospital admission, this electronic system manages information from several clinical domains: admission, discharge, and transfer (ADT)/registration; pharmacy; laboratory; microbiology; nurse charting; and physician narratives. The following physician narratives are dictated, transcribed, and stored in the HELP system as freetext documents: emergency department report, admission history and physical report, consultant note (including bedside procedures), radiology report, surgical procedure note, and discharge summary. However, daily inpatient progress notes are handwritten, paper-based documents. Other paper-based parts of the patient record include intraoperative physician orders and anesthesiology notes. Printouts from the electronic record and the paper-based content of the patient record are stored together as a hardcopy chart.

Computerized surveillance system

The HELP system has an integrated CSS that prospectively screens electronic patient data for indicators of AEs, including HAIs and ADEs. The HAI detection criteria used by CSS were originally based on the guidelines from the Study of the Efficacy of Nosocomial Infection Control and the Centers for Disease Control and Prevention (CDC).8 20–22 Using these criteria, CSS evaluates each patient's ADT, microbiology, serology, radiology, and surgery data for evidence of an HAI. In addition to routine HAI surveillance, daily urine samples from all catheterized patients were obtained as part of an existing, hospital-wide urinary catheter surveillance program.23 The ADE detection criteria used by CSS include various clinical triggers such as medication discontinuation orders, dose decrease orders, antidote orders, laboratory test orders, abnormal laboratory test results and vital signs.9 24 Suspected cases are flagged by CSS and reported to surveillance personnel for validation. An infection preventionist (previously ‘infection control practitioner’) or a clinical pharmacist verifies each HAI or ADE, respectively, using information from the patient record, direct bedside observations, and interviews with patients and their providers. All suspected and verified cases are stored in the CSS database.
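As a rough illustration of this screening layer, the sketch below encodes a few of the trigger types named above (antidote orders, medication discontinuations, abnormal laboratory results, positive cultures) as simple rules over a simplified patient-day record. The data structures, field names, and thresholds are assumptions for illustration only; the actual HELP/CSS rule base is far more extensive.

```python
from dataclasses import dataclass, field

@dataclass
class PatientDay:
    """Hypothetical, simplified snapshot of one patient-day of electronic data."""
    patient_id: str
    hospital_day: int
    med_orders: list = field(default_factory=list)          # e.g. ["naloxone"]
    discontinued_meds: list = field(default_factory=list)
    lab_results: dict = field(default_factory=dict)          # e.g. {"INR": 7.2}
    positive_cultures: list = field(default_factory=list)    # e.g. [("urine", "E coli")]

ANTIDOTES = {"naloxone", "vitamin K", "flumazenil"}          # illustrative trigger list

def ade_triggers(day: PatientDay) -> list:
    """Return possible ADE signals; each flagged case still needs pharmacist review."""
    signals = []
    if ANTIDOTES & set(day.med_orders):
        signals.append("antidote ordered")
    if day.discontinued_meds:
        signals.append("medication discontinued")
    if day.lab_results.get("INR", 0) > 5.0:                  # illustrative threshold
        signals.append("abnormal laboratory result: INR")
    return signals

def hai_triggers(day: PatientDay) -> list:
    """Return possible HAI signals, eg, cultures positive after the admission window."""
    if day.positive_cultures and day.hospital_day > 2:       # crude 'not present at admission' proxy
        return [f"positive culture: {site}" for site, organism in day.positive_cultures]
    return []
```

In the system described here, every flagged case is then verified by an infection preventionist or clinical pharmacist; the sketch covers only the automated screening step.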

Study sample

A sample of inpatient admissions to LDS Hospital had been selected earlier for a comprehensive, multi-institutional research investigation of AEs (‘workload study’).25 The approach used in the workload study to prescreen hospital admissions for possible AEs was originally developed during the Harvard Medical Practice Study and refined in subsequent studies. This method for prescreening possible AEs used a set of diagnostic and procedure codes associated with a higher-than-average likelihood of an AE.26 27 In the workload study, admissions to the medical-surgical services of LDS Hospital between October 1, 2000 and December 31, 2001 were screened, which resulted in a sample of 2137 unique, prescreened patient admissions. Manual chart review by trained chart abstractors was used to identify AEs including HAIs and ADEs. In addition to inpatient events, we included events detected by MCR that spanned multiple admissions and events that led to new admissions to LDS Hospital (eg, a postoperative wound infection presenting several days or weeks after discharge for a surgical procedure).

We built a gold standard cohort of AEs by aggregating the AEs detected by either CSS or MCR from the study sample of admissions. We compared the AEs missed prospectively by CSS with those detected retrospectively by MCR from this study cohort, paying particular attention to the information about each event contained in physician narratives. We sought to understand what information CSS would need in order to detect events it missed, the type of physician narrative in which the information was found, and how it was represented in the narrative.

Case finding by method

Each detection method used formal case definitions of HAIs and ADEs. For HAIs, both MCR and CSS used the CDC's National Nosocomial Infections Surveillance System surveillance case definitions: infections acquired during hospital care that were not present or incubating at admission.28 29 Both methods used CDC detection criteria to describe and verify the following types of HAIs: bloodstream infections (BSIs), lower respiratory tract infections (LRTIs), surgical site infections (SSIs), and urinary tract infections (UTIs). For ADEs, MCR used the Institute of Medicine's case definition: ‘an injury to the patient resulting from medical intervention related to a drug.’27 30 Injuries resulting from use of fluids and blood products were also included. The CSS used the following ADE case definition: ‘a response to a drug that is noxious and unintended and occurs at doses normally used in man for the prophylaxis, diagnosis, or therapy of disease, or for modification of physiological function.’9 The definitions of ADEs used by MCR and CSS require a causal association between the drug and manifestations related to the action of the drug. During the workload study, information was abstracted from the patient chart that fulfilled surveillance criteria for each HAI and ADE.

We matched the HAIs and ADEs identified retrospectively by MCR from the workload study sample with those identified prospectively using CSS. For HAIs, an event was considered to be a match between methods when the event occurred at the same anatomic site during the same hospital admission. For ADEs, an event was considered a match between methods when the ADE was attributed to the same causative drug or fluid during the same hospital admission. Only the first occurrence of an HAI or ADE per admission was counted; thus, recurrences of the same AE for the same patient during a single inpatient admission were not included.
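A minimal sketch of this matching rule, assuming each detected event is represented as a dictionary carrying the admission identifier plus an anatomic site (for HAIs) or a causative drug or fluid (for ADEs); the field names are illustrative:

```python
def match_key(event: dict) -> tuple:
    """HAIs match on admission + anatomic site; ADEs match on admission + causative drug.
    Building a set of keys also collapses recurrences within a single admission."""
    if event["type"] == "HAI":
        return (event["admission_id"], "HAI", event["anatomic_site"])
    return (event["admission_id"], "ADE", event["causative_drug"])

def compare_methods(css_events: list, mcr_events: list) -> dict:
    css_keys = {match_key(e) for e in css_events}
    mcr_keys = {match_key(e) for e in mcr_events}
    return {
        "css_only": css_keys - mcr_keys,
        "both": css_keys & mcr_keys,
        "mcr_only": mcr_keys - css_keys,
    }
```

Counting the three sets returned by `compare_methods` yields the 'CSS only', 'both methods', and 'MCR only' columns reported in table 2.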

Review of events only identified using MCR

The phrases in physician narratives that were used to identify AEs were the primary outcome of interest. One investigator (AT) reviewed the information collected during the workload study about each HAI and ADE identified only by MCR. All electronic physician narratives for the corresponding inpatient admissions were reviewed. Handwritten, inpatient progress notes were not included in this study because their content is not currently stored in the electronic medical record and would not be accessible to the CSS. Each phrase related to an AE from an electronic physician narrative and the corresponding type of narrative was recorded in a study database. All collected phrases are provided in an appendix available as an online data supplement (www.jamia.org). Specific event attributes for HAIs and ADEs were used to classify each phrase (table 1). Investigators matched the data from each phrase about HAIs to CDC surveillance criteria, which were grouped into the following categories: manifestation, intervention, response to treatment, and assessment. For example, the phrase ‘The patient developed wound cellulitis of the right inguinal lymphadenectomy and was begun on Keflex and received 1 day of IV antibiotics of cefazolin and was improving at time of discharge’ was categorized in the following fashion:

Table 1.

Attributes used to categorize the information contained in physician narratives about ADE or HAI events

Causative drug
 Description: The actual or missed administration of a drug, fluid, or biological product that was stated to have a causal association with an ADE manifestation or is the target of an intervention to treat an ADE.
 ADE examples: Drug administered; Missed drug administration; Missed drug monitoring; Blood product administered; Fluid administered.
 HAI examples: Not applicable.

Manifestation
 Description: Patient signs, symptoms, and laboratory values that followed the actual or missed administration of a drug or blood product.
 ADE examples: Sign or symptom; Abnormal laboratory value; Abnormal drug level.
 HAI examples: Abnormal test result (radiology, pathology); Abnormal laboratory value; Abnormal sign or symptom.

Intervention
 Description: Actions taken to treat the manifestations of an ADE.
 ADE examples: Add new allergy; Change medication, dose, route; Escalate care (transfer to ICU, obtain specialty consult); Start fluid restriction; Start new medication, biological product, or fluid; Start new procedure; Stop medication; Stop procedure.
 HAI examples: Aggressive wound care; Anti-infective treatment; Drain abscess; Escalate care; Re-open surgical wound intentionally; Mechanical ventilation; Collect specimen for culture; Place drain in surgical wound; Return to operating room.

Response to treatment
 Description: Information about patient status or a specific manifestation documented in a physician note following treatment of the ADE.
 ADE examples: Improvement from prior observation; Improved manifestation; Resolved manifestation; Return to baseline.
 HAI examples: Improved manifestation; Resolved manifestation.

Assessments
 ADE examples: Physician-recognized drug reaction; Overdose.
 HAI example: Diagnosis of HAI.

ADE, adverse drug event; HAI, hospital-associated infection.

  • MANIFESTATION: Abnormal sign/symptom=wound cellulitis

  • INTERVENTION: Anti-infective treatment=cefalexin

  • INTERVENTION: Anti-infective treatment=cefazolin

  • RESPONSE: Improved manifestation=resolving cellulitis.

At the time of the study, no formal event criteria were available for ADEs. For each confirmed ADE case, investigators matched the data in each text phrase with one of the following general event attributes of an ADE: drug, manifestation, intervention, response to treatment, and assessment by a physician (eg, recognition of an ADE) (table 1). For example, the phrase ‘pt. developed significant angioedema to tongue felt due to the ACE inhibitor and this med had to be stopped’ was categorized in the following manner:

  • DRUG: Therapeutic category=ACE inhibitor

  • MANIFESTATION: Abnormal sign/symptom=angioedema

  • INTERVENTION: Stop medication=ACE inhibitor

  • ASSESSMENT: Present.
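One possible way to store these phrase-level annotations is shown below; the record layout and field names are assumptions for illustration and are not the schema of the study database.

```python
from dataclasses import dataclass

@dataclass
class PhraseAnnotation:
    admission_id: str      # de-identified study identifier
    event_type: str        # "ADE" or "HAI"
    narrative_type: str    # e.g. "discharge summary", "consult report"
    attribute: str         # "drug", "manifestation", "intervention", "response", "assessment"
    detail: str            # e.g. "Stop medication=ACE inhibitor"
    phrase: str            # verbatim text copied from the physician narrative

# The ACE-inhibitor angioedema example above, expressed in this structure
# (the admission identifier and narrative type are hypothetical):
angioedema_example = [
    PhraseAnnotation("A001", "ADE", "discharge summary", "manifestation",
                     "Abnormal sign/symptom=angioedema",
                     "pt. developed significant angioedema to tongue"),
    PhraseAnnotation("A001", "ADE", "discharge summary", "intervention",
                     "Stop medication=ACE inhibitor",
                     "this med had to be stopped"),
]
```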

Data analysis and management

We calculated the proportion of AEs in the sample detected by each surveillance method and classified each event by AE type, AE attributes, and sources of the information. Unique, meaningless identifiers were assigned to each AE, the corresponding patient, and the hospital admissions involved with each AE. Information about each case, its abstracted text phrases, and the source of information were stored in a password-protected database application built using Microsoft Access 2003 and Microsoft Visual Basic 6. This project was approved by the institutional review boards of the University of Utah and LDS Hospital.

Results

Type of AEs detected using CSS and MCR

The distribution of AEs detected by MCR and CSS differed between HAIs and ADEs. As shown in table 2, CSS detected more than twice as many HAIs from the same sample of admissions as MCR. For ADEs, similar numbers of events were detected by CSS and MCR. However, the overlap for ADEs detected by both methods was smaller than the overlap for HAIs (3% vs 26%).

Table 2.

Number of HAIs and ADEs in the study sample that were detected by either computerized surveillance system, by manual chart review, or by both

Type of event | CSS | MCR | CSS only | Both methods | MCR only
HAI (n=393) | 362 (92%) | 135 (34%) | 258 (66%) | 104 (26%) | 31 (8%)
 SSI (n=130) | 106 (82%) | 73 (56%) | 57 (44%) | 49 (38%) | 24 (18%)
 LRTI (n=66) | 60 (91%) | 19 (29%) | 47 (71%) | 13 (20%) | 6 (9%)
 UTI (n=134) | 133 (99%) | 37 (28%) | 97 (72%) | 36 (27%) | 1 (1%)
 BSI (n=63) | 63 (100%) | 6 (10%) | 57 (90%) | 6 (10%) | 0
ADE (n=195) | 102 (52%) | 99 (51%) | 96 (49%) | 6 (3%) | 93 (48%)

Values are number (%) of events: the CSS and MCR columns count all events detected by each method, and the last three columns partition events by whether they were detected by only one method or by both.

ADE, adverse drug event; BSI, bloodstream infection; CSS, computerized surveillance system; HAI, hospital-associated infection; LRTI, lower respiratory tract infection; MCR, manual chart review; SSI, surgical site infection; UTI, urinary tract infection.

We identified reasons why CSS had missed the cases that were only identified by MCR. All BSIs in the sample were detected by CSS, since CSS detected the microbiological evidence of each BSI. Several SSIs identified only by MCR had positive microbiology culture results that could have been used by CSS's logic. However, the specimen sources assigned to those culture results were freetext values that CSS had not been programmed to interpret. The remaining SSIs did not have microbiological evidence of an infection. The LRTIs and UTI identified only by MCR also did not have microbiological evidence. The MCR detected those HAIs using information contained only in the physician narratives, such as anti-infective treatments, radiographic findings, and physician diagnoses of an infection.
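One remediation suggested by the SSI misses is to normalize freetext specimen-source strings to the coded values the surveillance logic already understands. The sketch below shows the idea with an invented synonym list; a real mapping would be built from the laboratory system's actual freetext values.

```python
import re
from typing import Optional

# Illustrative freetext-to-code mapping; the patterns and codes are assumptions.
SOURCE_SYNONYMS = [
    (r"wound|incision|surgical site", "SURGICAL_WOUND"),
    (r"abscess",                      "ABSCESS_FLUID"),
    (r"sputum|tracheal aspirate",     "LOWER_RESPIRATORY"),
]

def normalize_specimen_source(freetext: str) -> Optional[str]:
    """Map a freetext specimen source to a coded source, or None if unrecognized."""
    text = freetext.lower()
    for pattern, coded_source in SOURCE_SYNONYMS:
        if re.search(pattern, text):
            return coded_source
    return None   # unmapped sources would still require manual review
```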

For ADEs, we identified multiple reasons why CSS had not detected events that were found using MCR. ADEs were not detected by CSS when their event triggers were not used in the CSS logic (eg, patient symptoms and physician assessments); for example, all ADEs caused by narcotic analgesics manifested as mental status changes. The CSS missed other ADEs when a trigger was not available in the expected coded form or data source. For instance, rescue medications administered to a patient during cardiopulmonary arrest or during a surgical procedure may have been recorded on a pre-printed paper form but not in the electronic medication administration record. In some cases, CSS did not generate an alert as expected even though the electronic signal was available, such as an electronic medication order for vitamin K. Finally, some ADEs were missed by CSS when a suspected case had been flagged by CSS but not verified by a clinical pharmacist at LDS Hospital.

Type of information about AEs detected by MCR but missed by CSS

Since all BSIs in the sample were detected by CSS, no information from electronic physician narratives was collected for this type of HAI. All other HAIs and ADEs detected only by MCR had at least one event attribute found in a physician narrative (table 3). For all event attributes, a greater percentage of HAIs than ADEs had information contained in an electronic physician narrative. The type of event attribute most likely to be found in an electronic physician narrative was an intervention for HAIs (97%) and a manifestation for ADEs (52%). The event attribute least likely to be found for both HAIs and ADEs was a response to the treatment of the event (32% and 17%, respectively). Only 58% of HAIs and 34% of ADEs had a physician assessment, such as a diagnosis of an infection or a recognized drug reaction, recorded in an electronic physician narrative.

Table 3.

Number of HAIs and ADEs found only by MCR that had at least one attribute found in an electronic physician narrative grouped by category of attribute

Attributes HAIs (n=31) ADEs (n=93)
Causative drugs NA 34 (37%)
Manifestations 28 (90%) 48 (52%)
Interventions 30 (97%) 39 (42%)
Response to treatment 10 (32%) 16 (17%)
Assessments 18 (58%) 32 (34%)

Information about a causative drug was included only when it was associated with another ADE attribute in the same phrase. We did not include instances in which the causative drug was mentioned outside of the context of an ADE manifestation, intervention, or response to treatment. Causative drugs were not applicable (NA) to HAI surveillance for the purposes of this study.

ADE, adverse drug event; HAI, hospital-associated infection; MCR, manual chart review.

Source of information

Information about each event attribute and the type of electronic physician narrative in which it was found is summarized in table 4. To detect differences between subtypes of infections, we grouped the HAI results into SSIs, LRTIs, and UTIs. Most types of narratives contained information about at least one SSI. For all types of event attributes, information about SSIs was found most often in discharge summaries; this was most pronounced for interventions to treat the SSI (92%). Emergency department reports and admission history and physical reports contained information about SSIs attributed to previous hospitalizations. Information about SSIs that required surgical intervention or drainage of an abscess was found in general surgery reports and procedural radiology reports, respectively. Information about SSI-related physician assessments (eg, diagnosis of infection) was found in discharge summaries (42%), emergency department reports (17%), admission history and physical reports (8%), and general consult notes (4%). SSIs were the only type of HAI with information contained in an emergency department report, an admission history and physical report, a general surgery report, or a report about a radiology-guided procedure. For manifestations of LRTIs, diagnostic radiology reports contained information about more events than discharge summaries (100% vs 67%). Interventions to treat the LRTI (67%), responses to this treatment (33%), and assessments by a physician (50%) were found most often in discharge summaries. General consult reports contained information about patients who were transferred to a critical care unit following cardiopulmonary arrest that had led to aspiration pneumonia (a specific type of LRTI). Information about one patient with an LRTI who died was found in a death summary report. All attributes of the one UTI missed by CSS were found in the patient's discharge summary.

Table 4.

Number of HAIs and ADEs found only by MCR with information in each type of electronic physician narrative grouped by attribute category

Attributes; Number (%) of events* by MCR data source (columns, in order): ED report, History and physical report, Diagnostic radiology report, Procedural radiology report, Surgery report, Consult report, Discharge summary, Death summary report
SSIs (n=24)
 Manifestation (n=21) 6 (25) 3 (12) 4 (17) 3 (12) 2 (8) 1 (4) 19 (79)
 Intervention (n=24) 5 (21) 3 (12) 1 (4) 4 (17) 1 (4) 22 (92)
 Response to treatment (n=8) 8 (33)
 Assessment (n=13) 4 (17) 2 (8) 1 (4) 10 (42)
LRTIs (n=6)
 Manifestation (n=6) 6 (100) 2 (33) 4 (67) 1 (17)
 Intervention (n=5) 4 (67) 1 (17)
 Response to treatment (n=2) 2 (33)
 Assessment (n=4) 1 (17) 3 (50) 1 (17)
UTIs (n=1)
 Manifestation (n=1) 1 (100)
 Intervention (n=1) 1 (100)
 Response to treatment (n=1)
 Assessment (n=1) 1 (100)
ADEs (n=93)
 Causative drug (n=34) 1 (1) 2 (2) 7 (8) 28 (30) 1 (1)
 Manifestation (n=48) 1 (1) 1 (1) 1 (1) 2 (2) 10 (11) 41 (44) 1 (1)
 Intervention (n=39) 1 (1) 1 (1) 2 (2) 8 (9) 33 (35) 1 (1)
 Response to treatment (n=16) 3 (3) 14 (15)
 Assessment (n=32) 1 (1) 1 (1) 1 (1) 8 (9) 25 (27) 1 (1)
*

Percent of the number of SSIs, LRTIs, UTIs, and ADEs, respectively.

One ADE had drug, manifestation, and intervention in a single endoscopy procedure report.

ADE, adverse drug event; ED, emergency department; HAI, hospital-associated infection; LRTI, lower respiratory tract infection; MCR, manual chart review; SSI, surgical site infection; UTI, urinary tract infection.

As with HAIs, discharge summaries contained information about more ADEs than any other type of electronic physician narrative. The most frequent type of attribute found was a manifestation of an ADE (64%), followed by an intervention to treat an ADE (35%). Emergency department reports and admission history and physical reports contained information about events that manifested prior to admission. Information about several ADEs (11%), such as analgesic-related oversedation or reactions to anesthesia, was found in general consult reports. Diagnostic radiology reports had the least information about ADEs. Physician recognition of an ADE was found most often in discharge summaries (27%) and general consult reports (9%).

Only 58% of HAIs and 34% of ADEs missed by CSS were explicitly acknowledged in at least one physician narrative. The information about the remaining AEs that were not explicitly mentioned in physician narratives was spread out across different phrases and/or documents.

Discussion

Healthcare organizations use different methods to detect and measure AEs: voluntary incident reports, random chart abstraction, and concurrent clinical surveillance. Prior studies attributed differences between surveillance methods to the data sources used by each method, to differences in subject matter expertise among human reviewers, and to cognitive challenges faced by the reviewers.15 17 31 32 Differences in the timing, scope, and workflow of surveillance may also contribute.33 We expected that MCR would detect AEs missed by CSS and that, by taking advantage of these differences, we could improve CSS identification of AEs. Because agreement between MCR and CSS was lower for ADEs than for HAIs, the potential for improving CSS is greater for ADEs than for HAIs. Integrating information from physician narratives with CSS would potentially capture a greater proportion of additional ADEs than HAIs.

Improving CSS with information from physician narratives

Bates et al suggest that integrating information from physician narratives with automated surveillance methods would increase the number of AEs detected.34 Based on our findings, adding data from physician narratives would have helped CSS detect some, but not all, missed cases. Review of the phrases we collected suggested that detection of LRTIs, SSIs, and ADEs would improve if patient signs, symptoms, interventions, and physician assessments from physician narratives were integrated with CSS.

Using microbiology culture results and the urinary catheter surveillance used at LDS Hospital, CSS detected all of the BSIs and all but one of the UTIs in the study. Thus, detection of these types of events would not benefit much from integration of data from physician narratives. We did not find microbiology culture results for the single UTI, the LRTIs, or several SSIs in either the laboratory system or the physician narratives. The CSS missed some deep incisional and organ space SSIs, because the specimen was entered into the laboratory information system as unstructured freetext as opposed to the expected coded format. In the absence of microbiology data, the signs, symptoms, radiographic evidence (for LRTIs and organ space SSIs), treatment, and diagnoses contained in physician narratives could serve as triggers for HAIs.

The CSS missed ADEs for the following reasons: (a) information needed to trigger an alert was not available to the system, (b) information was available to the system but no alert was triggered, and (c) an assessment of a suspected ADE was not documented by the clinical pharmacist in CSS. We encountered two instances where an intervention (eg, administration of vitamin K or naloxone) was mentioned in the physician narrative but not recorded in the pharmacy system as either an order or an administration event. Thus, physician narratives proved to be an alternate source of information for medication-related events that were not recorded electronically in the pharmacy information system. However, adding data from physician narratives would not improve CSS for cases in which no alert was generated or in which an alert was generated but not reviewed by the clinical pharmacist. For cases where no alert was generated, running the ADE logic on a timed schedule and scanning the data from all patients may be more effective than depending on data-driven triggers to activate the logic.
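The contrast drawn in the last sentence is between logic that waits for a specific trigger message and logic that is re-run on a schedule for every current inpatient. A hedged sketch of the two designs, with a placeholder `evaluate_ade_rules` function standing in for the actual rule set:

```python
import time

def evaluate_ade_rules(patient_id: str) -> list:
    """Placeholder for the ADE rule set; returns a list of suspected-ADE alerts."""
    return []

# Data-driven: evaluate the rules only when an incoming message contains a known trigger.
def on_new_message(message: dict) -> list:
    if message.get("event") in {"antidote_order", "abnormal_lab", "med_discontinued"}:
        return evaluate_ade_rules(message["patient_id"])
    return []   # a case whose trigger never arrives in coded form is never evaluated

# Time-driven: periodically re-evaluate every current inpatient, regardless of triggers.
def scheduled_scan(current_inpatients: list, interval_seconds: int = 24 * 3600) -> None:
    while True:
        for patient_id in current_inpatients:
            for alert in evaluate_ade_rules(patient_id):
                print(f"suspected ADE for {patient_id}: {alert}")
        time.sleep(interval_seconds)
```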

Content of physician narratives

Our analysis of physician narratives revealed challenges in using narrative text to support CSS. Physicians may respond to AEs as part of the routine course of care and not document observations and interventions with surveillance in mind.35 In this study, only 58% of HAIs and 34% of ADEs missed by CSS were explicitly documented in dictated reports. If natural language processing could detect these phrases, CSS would most likely be able to capture these additional AEs. The lack of explicit physician acknowledgment for the remaining AEs presents a challenge for automated surveillance methods.17 35 For many AEs, supporting evidence for an LRTI, SSI, or ADE was distributed across multiple physician narratives. In the absence of explicit recognition of the AE, CSS would need to combine information from multiple places in the same document or from multiple documents to identify ADEs and HAIs.
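As a rough illustration of these two situations, the sketch below first looks for an explicit acknowledgment of an AE in any single narrative and, failing that, combines weaker attribute-level cues gathered across all of an admission's documents. The phrase lists and the two-category threshold are assumptions for illustration, not a validated natural language processing method.

```python
EXPLICIT_PHRASES = ["wound infection", "surgical site infection", "drug reaction",
                    "adverse reaction", "aspiration pneumonia"]          # illustrative
ATTRIBUTE_CUES = {
    "manifestation": ["cellulitis", "angioedema", "infiltrate"],
    "intervention":  ["naloxone", "drained", "reopened", "started on iv antibiotics"],
    "assessment":    ["felt due to", "consistent with infection"],
}

def suspected_ae(narratives: list) -> bool:
    """Flag an admission when a narrative names the AE explicitly, or when cues from
    two or more attribute categories appear across the admission's documents."""
    texts = [n.lower() for n in narratives]
    if any(phrase in text for text in texts for phrase in EXPLICIT_PHRASES):
        return True
    categories_seen = {category
                       for category, cues in ATTRIBUTE_CUES.items()
                       for text in texts
                       if any(cue in text for cue in cues)}
    return len(categories_seen) >= 2      # illustrative threshold
```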

Sources of AE information

By examining the content of physician narratives, we identified those that were likely to contain information about each type of AE. Discharge summaries contained information about more HAIs and ADEs than any other electronic physician narrative. Discharge summaries would be a valuable source of information for retrospective measurement and for confirmation of AEs detected earlier in the admission by other methods. But their benefit to prospective surveillance would be limited, since discharge summaries are not available prior to discharge.

We had to examine the particular subtypes of HAIs to find specific opportunities to improve HAI surveillance by CSS. To improve surveillance of SSIs that required readmission, CSS would need access to the information found in emergency department reports and admission history and physical reports. These reports contained information about signs, symptoms, significant white blood cell counts, antimicrobial treatment, bedside interventions, diagnostic imaging, and physician impressions. To improve the surveillance of SSIs that occurred within the current admission, CSS would need access to information found in general surgery reports; these reports contained phrases that suggested the presence of a ‘post-operative wound infection.’ Improving the detection of LRTIs would require access to general consult reports, which contained signs, symptoms, antimicrobial treatment, and physician impressions. In addition to signs of pneumonia (important for LRTIs), diagnostic radiology reports contained evidence of intra-abdominal and retroperitoneal abscesses, key indicators of SSIs that required radiologically guided drainage. Information about one post-procedural LRTI was found in a death summary report.

Information about outpatient ADEs was found in emergency department reports and admission history and physical reports. Information about anticoagulation-related bleeding events was found in general surgery reports, radiology reports, and discharge summaries. For example, one patient with repeated bleeding episodes on Coumadin received an inferior vena cava filter, which was documented in a radiology report because it was placed under radiographic guidance. Another anticoagulation-related gastrointestinal bleeding event was recorded in an endoscopy report. ADEs severe enough to require transfer to the intensive care unit, including general anesthesia-related events, cardiac arrests secondary to cardiovascular medications, and opiate-related sedation, were mentioned in general consult reports. Almost all ADEs involving narcotic analgesics were mentioned in a general consult report, which contained signs (eg, mental status changes and decreased respiratory rate), the response to naloxone, and physician assessments.

Limitations

If the clinician did not document their assessment of a suspected case in the CSS, we were unable to distinguish between false positive cases and suspected AEs that were not reviewed. This is an important area for additional investigation, since it would affect the benefit obtained by the integration of additional data from physician narratives.

Recommendations for future work

Physician narratives must be available in electronic form, so that CSS can access their content. The ideal narrative for a concurrent system like CSS is the progress note, since it is typically created daily throughout a hospitalization. As hospitals implement electronic progress notes, we need to understand what information about AEs is more likely to be recorded in progress notes than other physician narratives.

Additional investigation of ADEs missed by CSS is needed to troubleshoot the system and the surveillance workflow. In these cases, improvements may be possible in reviewer cognitive burden, staffing, and the prioritization of patient safety activities.

Conclusion

As public reporting requirements increase, providers must consider the role of surveillance technologies. In this study, we identified and described differences between two such methods, including how each used information from different sources. CSS detection of LRTIs, SSIs, and ADEs would improve if patient signs, symptoms, interventions, and physician assessments from physician narratives were integrated using technologies such as natural language processing.

Supplementary Material

Supplementary Data
supp_18_4_491__index.html (20.2KB, html)

Acknowledgments

We would like to thank Vikrant Deshmukh, MSc, MS for his expert advice in the design of the database application used for this study.

Footnotes

Funding: This project was funded in part by an institutional medical informatics training grant from the National Library of Medicine (contract number 5T 15LM007124).

Competing interests: None.

Ethics approval: This study was approved by Intermountain Healthcare and the University of Utah.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Institute of Medicine. Performance Measurement: Accelerating Improvement. Washington, DC: National Academies Press, 2006:1–16.
2. Pennsylvania Health Care Cost Containment Council. Hospital-Acquired Infections in Pennsylvania. 2007. http://www.phc4.org/reports/hai/ (accessed 26 Nov 2010).
3. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007;356:486–96.
4. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008;148:111–23.
5. The Joint Commission. The Joint Commission Health Care Quality Data Download Website. 2010. http://www.healthcarequalitydata.org/ (accessed 25 Nov 2010).
6. Lagu T, Lindenauer PK. Putting the public back in public reporting of health care quality. JAMA 2010;304:1711–12.
7. Klompas M, Yokoe DS. Automated surveillance of health care-associated infections. Clin Infect Dis 2009;48:1268–75.
8. Evans RS, Larsen RA, Burke JP. Computer surveillance of hospital-acquired infections and antibiotic use. JAMA 1986;256:1007–11.
9. Classen DC, Pestotnik SL, Evans RS, et al. Computerized surveillance of adverse drug events in hospital patients. JAMA 1991;266:2847–51.
10. Bates DW, Pappius E, Kuperman GJ, et al. Using information systems to measure and improve quality. Int J Med Inform 1999;53:115–24.
11. Frank L, Galanos H, Penn S, et al. Using BPI and emerging technology to improve patient safety. J Healthc Inf Manag 2004;18:65–71.
12. Chen ES, Wajngurt D, Qureshi K, et al. Automated real-time detection and notification of positive infection cases. AMIA Annu Symp Proc 2006:883.
13. Kilbridge PM, Campbell UC, Cozart HB, et al. Automated surveillance for adverse drug events at a community hospital and an academic medical center. J Am Med Inform Assoc 2006;13:372–7.
14. Ferranti JM, Langman MK, Tanaka D, et al. Bridging the gap: leveraging business intelligence tools in support of patient safety and financial effectiveness. J Am Med Inform Assoc 2010;17:136–43.
15. Jha AK, Kuperman GJ, Teich JM, et al. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc 1998;5:305–14.
16. Fiszman M, Chapman WW, Aronsky D, et al. Automatic detection of acute bacterial pneumonia from chest x-ray reports. J Am Med Inform Assoc 2000;7:593–604.
17. Murff HJ, Forster AJ, Peterson JF, et al. Electronically screening discharge summaries for adverse medical events. J Am Med Inform Assoc 2003;10:339–50.
18. Melton GB, Hripcsak G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005;12:448–57.
19. Gardner RM, Pryor TA, Warner HR. The HELP hospital information system: update 1998. Int J Med Inf 1999;54:169–82.
20. Special issue: The SENIC Project. Am J Epidemiol 1980;111:465–653.
21. Evans RS, Gardner RM, Bush AR. Development of a computerized infectious disease monitor (CIDM). Comput Biomed Res 1985;18:103–13.
22. Evans RS, Pestotnik SL, Classen DC, et al. Development of a computerized adverse drug event monitor. Proc Annu Symp Comput Appl Med Care 1991:23–7.
23. Jacobson JA, Burke JP, Kasworm E. Effect of bacteriologic monitoring of urinary catheters on recognition and treatment of hospital-acquired urinary tract infections. Infect Control 1981;2:227–32.
24. Classen DC, Pestotnik SL, Evans RS, et al. Adverse drug events in hospitalized patients: excess length of stay, extra costs, and attributable mortality. JAMA 1997;277:301–6.
25. Weissman JS, Rothschild JM, Bendavid E, et al. Hospital workload and adverse events. Med Care 2007;45:448–55.
26. Iezzoni LI, Daley J, Heeren T, et al. Identifying complications of care using administrative data. Med Care 1994;32:700–15.
27. Bates DW, O'Neil AC, Petersen LA, et al. Evaluation of screening criteria for adverse events in medical patients. Med Care 1995;33:452–62.
28. Garner JS, Jarvis WR, Emori TG, et al. CDC definitions for nosocomial infections, 1988. Am J Infect Control 1988;16:128–40.
29. Gerberding J, Gaynes R, Horan T, et al. National Nosocomial Infections Surveillance (NNIS) system report, data summary from January 1990-May 1999, issued June 1999. Am J Infect Control 1999;27:520–32.
30. Kohn LT, ed. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press, 1999:33.
31. Kongkaew C, Noyce PR, Ashcroft DM. Hospital admissions associated with adverse drug reactions: a systematic review of prospective observational studies. Ann Pharmacother 2008;42:1017–25.
32. Phansalkar S, Hoffman JM, Hurdle JF, et al. Understanding pharmacist decision making for adverse drug event (ADE) detection. J Eval Clin Pract 2009;15:266–75.
33. Michel P, Quenon JL, de Sarasqueta AM. Comparison of three methods for estimating rates of adverse events and rates of preventable adverse events in acute care hospitals. BMJ 2004;328:199.
34. Bates DW, Evans RS, Murff H, et al. Detecting adverse events using information technology. J Am Med Inform Assoc 2003;10:115–28.
35. Nebeker JR, Barach P, Samore MH. Clarifying adverse drug events: a clinician's guide to terminology, documentation, and reporting. Ann Intern Med 2004;140:795–801.
