Journal of the American Medical Informatics Association (JAMIA). 2004 Nov-Dec;11(6):468–478. doi: 10.1197/jamia.M1317

Organization and Representation of Patient Safety Data: Current Status and Issues around Generalizability and Scalability

Aziz A Boxwala 1, Meghan Dierks 1, Maura Keenan 1, Susan Jackson 1, Robert Hanscom 1, David W Bates 1, Luke Sato 1
PMCID: PMC524625  PMID: 15298992

Abstract

Recent reports have identified medical errors as a significant cause of morbidity and mortality among patients. A variety of approaches have been implemented to identify errors and their causes. These approaches include retrospective reporting and investigation of errors and adverse events and prospective analyses for identifying hazardous situations. The above approaches, along with other sources, contribute to data that are used to analyze patient safety risks. A variety of data structures and terminologies have been created to represent the information contained in these sources of patient safety data. Whereas many representations may be well suited to the particular safety application for which they were developed, such application-specific and often organization-specific representations limit the sharability of patient safety data. The result is that aggregation and comparison of safety data across organizations, practice domains, and applications is difficult at best. A common reference data model and a broadly applicable terminology for patient safety data are needed to aggregate safety data at the regional and national level and conduct large-scale studies of patient safety risks and interventions.


Errors and adverse events occurring in the course of medical management have been identified as a significant cause of patient morbidity and mortality in a number of different countries. A report published by the Institute of Medicine (IOM) in 1999 estimated that more than one million preventable adverse events occur in the United States each year.1 A similar prevalence was reported in the United Kingdom, where one pilot study found that more than 10% of patients in a hospital setting experienced adverse events, about half of which were preventable.2 Data from other countries provide further evidence that preventable medical errors are a significant and widespread problem.3,4

To gain a better understanding of these phenomena, learn more about the incidence and types of medical errors, and identify the factors contributing to these errors, we first need to acquire a broad range of raw data relating to patient safety and quality of care and perform a comprehensive analysis of the data. Patient safety data* are available from many different sources for analysis. The most obvious sources are traditional incident reports5,6,7,8 that providers (typically nurses9) prepare in response to a clinical event or variation in a standard process of care. Traditional paper-based strategies prevail but increasingly are being replaced by computer-based incident reporting systems in an effort to make analysis more efficient.5,6,7,8 Departmental and institutional case reviews (e.g., root cause analyses,10,11 monthly morbidity and mortality reviews) performed for deaths or cases with unexpectedly poor outcomes are another source of data. Adverse event data are also generated through routine audits of clinical records and active surveillance of signals and alerts from computerized order-entry or electronic patient records.12 Some institutions collect data in a “prospective” manner, performing periodic quality assurance audits or rounds in an effort to identify safety issues and hazards before events occur.13 Less conventional sources of patient safety data include patient complaints, end-of-shift reports, multidisciplinary rounds in which patient management issues and coordination of care issues are discussed, executive walk rounds summaries, and malpractice allegation abstracts.

Individually, data from each of these sources tend to be used for a specific and somewhat narrow set of purposes including:

  1. Care Improvement: Identifying opportunities to implement solutions directly.

  2. Benchmarking: Determining the frequency and range of errors for purposes of benchmarking safety-related performance parameters.3,4

  3. Causal Modeling: Identifying or inferring the causes of a specific event, and developing an intervention to reduce the likelihood of recurrence.14,15

  4. Compliance with Regulatory Agencies: Mandatory and voluntary reporting to regulatory agencies of the government, and to accreditation and licensing bodies.16,17

Because of the narrow purpose for which the data are collected, they may, in fact, be underutilized. If the data from different sources could be combined and subjected to aggregate analysis, collectively they might lead to a better understanding of broad system-based safety issues such as cross-disciplinary coordination, communication, interactions between policy and process of care, and understanding the effects of safety initiatives. The aggregate data could be used for benchmarking, for quality improvement, and for scientific research on types and causes of errors and interventions to reduce errors. Quality assurance managers within departments and institutions then would use the data for operational improvements. Researchers would find aggregated data to be a valuable resource for identifying errors and analyzing their causes. Administrators would analyze the data to identify problems that need further attention and could direct research efforts or new interventions to address the identified problems.

There are several reasons why patient safety data are underutilized despite their potential value. First, data from many of these sources are formatted as free text, and we currently lack the necessary tools to analyze or process complex, unstructured forms of data in a reliable and efficient manner. Even when these data are structured and coded, such as in computerized incident reporting systems, variations in the terminologies (often hierarchically organized as taxonomies of errors and contributing factors) and data models (that describe entities, their attributes, and their relationships) between different reporting systems make aggregate analysis of data difficult. Some of the variation in terminologies and data models reflects a narrow application focus (e.g., pharmacy incident reporting systems). But even for general safety concepts, e.g., patient-based, provider-based, and system-based factors that contribute to or mitigate risk, adverse events, and unsafe acts, there are wide variations in terms, definitions, and their hierarchical organizations for describing similar concepts. As the President of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) noted: “It is no small irony that the progressively expanding national discussions on patient safety over the past several years are not based on a common language. This critical missing element has hindered our collective ability to collect patient safety data in a consistent fashion, analyze process failures, mine data (e.g., trends, pattern analysis), and disseminate new knowledge about patient safety.”18 In the absence of a common and principled design approach to patient safety terminologies and data models, reconciling even minor variations can be difficult.

A comprehensive and formal representation of adverse clinical events and concepts relating to them would significantly improve our ability to manage and analyze an increasingly large and heterogeneous store of patient safety data. One generalizable approach to integration of heterogeneous patient safety data would be to create a representation that consists of a reference data model and a logically modeled terminology. The reference data model would then be refined to create specialized data models that represent data from various sources. Health Level 7 (HL7) uses an analogous approach to create representations for its various messaging standards.19 The data models rely on a controlled terminology to provide the values for attributes of the entities in the model. To support comprehensive modeling, the representation must be logically sound and cover a wide range of concepts, including the human factors, organizational factors, system processes, information technology, and communication factors that contributed to or compensated for the severity of the event, as well as the sequelae of the event itself.

In the sections that follow, we discuss the sources and properties of patient safety data to which institutions have access, how such data are currently managed and analyzed, and how the specific properties of the data from various sources limit the extent to which such data can be utilized. We focus on data models and terminologies currently being used to encode errors, adverse events, risks, and outcomes. We conclude by discussing how a common reference data model and a well-designed, broad, structured and logically sound terminology are needed to provide scalable and generalizable methods for managing and extracting knowledge from increasingly large repositories of patient safety data. Of note, such a representation would not attempt to define “error” per se, but instead, provide a means for representing all of the features of an adverse event along as many dimensions as possible.

Sources of Patient Safety Data

From a functional standpoint, patient safety data sources can be divided into two large classes. One class consists of sources that contain data about specific incidents. The other class consists of sources that contain data describing general issues related to safety and hazards at a particular institution. Different sources of data from both these classes complement each other in the types of incidents and the content of the data. Thus, aggregating data from these sources can result in more comprehensive analyses than examining data from each source separately.

The representation of data varies considerably even within each of these sources. In some sources, the data are largely semistructured blocks of narrative text that are very difficult to analyze using standard database methods. In other sources, containing structured, coded data, there is variation and inconsistency in the data models and terminologies used within and between different reports. The heterogeneity in representation among the sources of data described in the following subsections raises challenges for data management and analysis.

Sources of Data on Incidents

Incident Reports

Incident reports are accounts by front-line personnel about specific errors, events, or system states that are perceived to have compromised patient safety, deviated from an organizational standard, or sometimes simply constrained the delivery of care.

These reports usually contain some or all of the following5,20:

  • time of occurrence

  • site of occurrence

  • roles of participants and reporters

  • patient demographics and clinical attributes

  • free-text description of the event or state

  • characterization, classification or typing including severity, preventability

  • identification of systems that failed

  • outcome

Reporting templates, whether paper based or electronically formatted, often contain fields for free text narrative and fixed categorical selection options. A typical electronically formatted incident reporting system7 can be seen at http://www.anaesthesie.ch/cirs/.

The final product of an incident report typically is a combination of structured data and unstructured free text, the content of which often is limited to the description of the discrete event.
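The common fields listed above could be captured in a simple structured record. The following is a minimal sketch, not taken from any actual reporting system; the class, field names, and severity scale are illustrative assumptions chosen to mirror the bulleted list:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical severity scale; real reporting systems vary widely.
SEVERITY_LEVELS = ("near_miss", "no_harm", "temporary_harm", "permanent_harm", "death")

@dataclass
class IncidentReport:
    """Minimal sketch of the common incident-report fields."""
    occurred_at: datetime                      # time of occurrence
    site: str                                  # site of occurrence (unit, room)
    reporter_role: str                         # role of the reporter
    participant_roles: list = field(default_factory=list)  # roles of participants
    patient_age: Optional[int] = None          # patient demographics (abbreviated)
    narrative: str = ""                        # free-text description of the event
    severity: str = "no_harm"                  # coded characterization
    preventable: Optional[bool] = None         # preventability judgment
    failed_systems: list = field(default_factory=list)     # systems that failed
    outcome: str = ""                          # clinical outcome

# Example: a near-miss report combining structured fields and free text.
report = IncidentReport(
    occurred_at=datetime(2004, 3, 1, 22, 15),
    site="medical ICU",
    reporter_role="nurse",
    narrative="Infusion pump programmed at 10x intended rate; caught at bedside check.",
    severity="near_miss",
    preventable=True,
    failed_systems=["pump programming", "independent double-check"],
)
```

Note how the record mixes coded fields (severity, failed systems) with an uncoded narrative, which is exactly the combination that makes aggregate analysis difficult.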

Although incident reporting can be used to identify a large number of incidents,21 studies have found that an even larger number of incidents are unreported.22,23,24 A number of reasons have been identified for the lack of reporting including fears of blame, high workload, and the belief by the staff that the incident did not warrant a report.9 Thus, analyses of incident reports may not yield reliable estimates of errors. However, their analyses still can provide valuable insight on common causes and types of errors.

Root Cause Analyses Reports

A root cause analysis (RCA) report is the product of a comprehensive investigation conducted by a hospital team to identify the causal or contributing factors associated with a specific incident.11 The RCA process usually is multidisciplinary and led by a risk manager, safety officer, or other designated personnel. Although the RCA process has been a safety tool in high-risk industrial domains for some time, it has only recently been adopted for use in the health care domain.25 JCAHO now requires institutions to conduct RCAs of major adverse events (so-called “sentinel events”).26 The final product of an RCA report is a description of the incident, detailed information on factors (human, environmental, equipment, and organizational) contributing to the incident and an action plan for reducing risk of recurrence.11 In addition, for certain types of events, JCAHO requires that institutions investigate a set of contributory factors that are potential causes.27 For example, for an incident involving surgery performed on the wrong anatomic site, the institution must conduct detailed inquiries into the following seven factors: staffing levels, patient identification process, physical assessment process, orientation and training of staff, communication with patient or family members, communication among staff members, and availability of information.

Departmental Case Reviews (Morbidity and Mortality Case Reviews)

To fulfill accreditation requirements or hospital bylaws, clinical departments and divisions perform regular (monthly or quarterly) reviews of cases whose clinical courses included an unexpected complication or whose outcomes deviated from a standard or expected course (e.g., death, prolonged length of stay, unexpected intensive care admission). The purpose of such reviews is to present the details of the case and the outcome in a peer-review forum and to encourage open discussion and critique.

Because most departments rely on self-reporting by the attending or admitting physician, the process is subject to bias and may not capture all cases that meet criteria for review. For the identified cases, a manual chart review is performed by either the principal clinician or an independent departmental auditor to acquire relevant data and prepare a report. Typically, case data are summarized in a combination of narrative text and structured tabular format. The narrative text includes some of the preadmission historical data (e.g., comorbidities, previous therapeutic interventions and outcomes, preadmission evolution of the illness or condition). The narrative text also tends to preserve the temporal sequence of events and may include a description of some of the decision-making processes involved. Structured data, if included, often are limited to patient attributes (e.g., medical record number, age, gender, admission diagnosis, discharge or postmortem diagnosis, procedures performed, and a classification of the unexpected outcomes or complications).

In addition to report preparation, cases also are discussed and critiqued in a peer forum. Additional information relating to decision making, constraints, and issues relating to planning and diagnostic or treatment uncertainty can emerge from these discussions. Occasionally, the discussion is recorded and transcribed as a permanent record.

Malpractice Claims

Malpractice claims arise when there are actual or perceived medical errors and hence can serve as a potentially useful source of patient safety data. In fact, claims data have been used to analyze adverse drug events (ADEs)28 and patient safety risks in anesthesia.29,30 Despite their strengths, claims data have important limitations. First, because claims represent the perceptions of the different parties involved, such data can contain conflicting details of the actual event. Second, malpractice claims do not accurately reflect the quality or safety of care for a specific institution or practitioner. Comparing multiple sources of data, there appears to be poor correlation between a claim and the actual quality of care (i.e., whether care was negligent or nonnegligent).31,32 These studies have found that the incidence of negligent care exceeded the incidence of malpractice cases and that malpractice claims frequently were made even in the absence of negligent care. Despite these limitations, claims data are valuable because the thorough investigations pursuant to a claim can reveal specific safety or risk issues that led to the event. Furthermore, claims data emerge from multiple different institutions and can reflect common systemic problems. Finally, malpractice claims often represent events that are associated with especially severe outcomes.

Electronic Patient Data Surveillance and Analysis

Data from electronic medical records and administrative databases can be used to identify the occurrence of adverse events. Computer-based surveillance programs can monitor these databases to identify the occurrence of safety-related events in “real-time.” In addition, retrospective review of these data can be used for identification of events for research and reporting.

Surveillance programs have been implemented for ADE detection in the hospital setting24,33 and successfully tested for use in the ambulatory setting.34 Similar programs have been used to detect nosocomial infections.35,36 The programs typically operate by monitoring the electronic patient record for patterns of data indicating a possible event. For ADEs, the signals suggesting an ADE include certain diagnostic codes, sudden medication stop orders, orders for antidotes, and certain abnormal laboratory values when on particular drugs (e.g., low serum potassium with digoxin). The monitoring programs typically have a fairly low positive predictive value for events. In one report, only about 10% of events screened by the monitor were determined to be ADEs.24 Hence, human intervention usually is required for following up each case that is identified by the program.
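The pattern-matching logic such surveillance programs apply can be sketched as a small set of rules over a patient record. This is an illustrative sketch only: the record structure, drug names, and potassium threshold are assumptions, not the interface of any actual ADE monitor:

```python
# Hypothetical antidote list; real monitors use institution-specific formularies.
ANTIDOTES = {"naloxone", "flumazenil", "protamine"}

def ade_signals(record):
    """Return the list of ADE signals fired for one patient record.

    `record` is a plain dict with optional keys 'orders', 'stopped_meds',
    'active_meds', and 'labs' -- an assumed structure, not a real EHR API.
    """
    signals = []
    # Signal 1: an order for a known antidote.
    if ANTIDOTES & set(record.get("orders", [])):
        signals.append("antidote ordered")
    # Signal 2: a sudden medication stop order.
    if record.get("stopped_meds"):
        signals.append("sudden medication stop order")
    # Signal 3: drug-lab pattern, e.g., low serum potassium while on digoxin.
    if ("digoxin" in record.get("active_meds", [])
            and record.get("labs", {}).get("serum_potassium", 99.0) < 3.5):
        signals.append("low potassium on digoxin")
    return signals
```

Because rules like these fire on many benign cases, each flagged record would still need human review, consistent with the low positive predictive value reported in the text.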

Computer-based surveillance complements incident reporting systems in identifying safety-related events. One of the limitations of the latter, as mentioned earlier, is that there are selection biases in the cases that are reported and, therefore, not all cases may be reported.23 Surveillance techniques identify some additional ADEs that otherwise are unreported.24,33 Jha et al.24 found that the computer monitor identified approximately 45% of ADEs, whereas only 4% of ADEs were voluntarily reported by providers. However, surveillance techniques are more effective for detecting actual events than identifying errors or near misses. Also, such techniques do not yet cover a broad range of events, although more comprehensive detection tools are being developed. This approach is inexpensive, and it should become possible to detect an increasingly large and representative proportion of adverse events as more clinical data become available electronically.

In addition to real-time monitoring, clinical data have been analyzed retrospectively to identify the occurrence of adverse events. One study used ICD-9-CM coded diagnoses and procedures and other clinical data to screen the records for the occurrence of events that required mandatory reporting in the state of New York.37 More than one third of the screened cases were found to be reportable events that had previously gone unreported. A study using ICD-10 codes had similar success with identifying cases that were not reported by the providers.38 These techniques depend on the reliability of the coding process to identify the occurrence of events. Similar screening for adverse events using paper-based charts has been found to be expensive compared with reporting by providers.39

Sources of Data on General Issues Related to Safety and Hazards

Failure Mode Effects and Analysis (FMEA) is a technique for proactive evaluation of vulnerabilities in a system or a process before near misses or adverse events occur.40 For each component of the system or process, the potential failure modes are listed. For each failure mode, possible causes are identified, the risk of failure is determined (risk is a function of the probability of occurrence and the severity of the outcome), and actions to control or eliminate the failure are listed. An FMEA report thus contains a wealth of information on patient safety presented in a predictive or anticipatory context. JCAHO requires accredited institutions to perform at least one FMEA every year.13
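The FMEA risk calculation described above, risk as a function of probability of occurrence and severity of outcome, can be illustrated with a toy worksheet. The failure modes and 1-10 scores below are invented for illustration and do not come from any actual FMEA:

```python
# Toy FMEA worksheet: (component, failure mode, probability 1-10, severity 1-10).
# All entries and scores are hypothetical.
failure_modes = [
    ("pharmacy", "look-alike drug name dispensed", 4, 8),
    ("order entry", "handwritten order misread", 7, 6),
    ("infusion pump", "decimal-point programming error", 3, 9),
]

def risk_score(probability, severity):
    """Risk as a function of likelihood of occurrence and severity of outcome."""
    return probability * severity

# Rank failure modes so corrective actions target the highest risks first.
ranked = sorted(failure_modes, key=lambda fm: risk_score(fm[2], fm[3]), reverse=True)
```

Ranking by this score is what lets an FMEA team prioritize control actions: a moderately severe but frequent failure mode can outrank a rarer, more severe one.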

Other kinds of prospective investigations also can provide patient safety related data. For example, the Risk Management Foundation (RMF) conducts an office practice evaluation regularly at its member institutions. The clinic being surveyed is evaluated for various quality-related parameters such as organization of medical record files, completeness of records, and follow-up system for laboratory results and other studies. These data can be useful to provide context for a causal analysis of an incident or related incidents.

A few other reliability and hazards analysis techniques currently in use in industrial domains are being adapted for use in medicine. Sophisticated tools including event fault trees and fault hazard analysis,41 human reliability models,42 and simulation models43 are being used on an investigational basis to help estimate the likelihood and consequences of medical adverse events.

Unconventional Sources of Data

Less conventional but valuable sources of patient safety data may include quality assurance field observations or video recordings of clinical activities (e.g., trauma resuscitations or patient simulator scenarios), patient satisfaction surveys or patient complaints, health services research data (e.g., volume outcome data), and third-party insurer audits of performance, diagnostic test utilization, and procedural-diagnostic codes assigned to claims submissions. These sources vary widely in terms of format, content, and structure. While these sources of data often reveal general safety issues, some, such as patient complaints, may also address a specific incident.

Issues Around Scalability and Generalizability of Representation

The sources cited above represent a very heterogeneous set of data from a variety of different institutions. To support aggregation and analysis under such circumstances, it is necessary to have a representational scheme for the data that is both scalable and generalizable. Scalability refers to the ability of the representational scheme to facilitate aggregation and analyses of large volumes of data. Generalizability refers to the ability of the representational scheme to reliably model concepts across organizations, applications, and patient safety domains and to be applied to different sources of data.

Volume of Data

The traditional focus of healthcare safety management has been outcome driven and retrospective, starting from an unexpected hospital- or specialty service–based poor outcome and investigating backwards. This limited the scope of what data were collected and analyzed. More recently, with the realization that many serious events do not result in measurable patient injury (i.e., near misses44) yet share many of the same features of cases with unexpected outcomes, a wider range of events are being reported and investigated.45,46 Because the number of so-called “near misses” is much larger than the number of serious adverse events (Figure 1), the potential size of the patient safety data set and the range of concepts that might be used to classify specific events begin to enlarge dramatically. The number of reportable events including near misses has been estimated to exceed 5 million per year in the United States.16 In comparison, the incident reporting system for the aviation industry in the United States, the Aviation Safety Reporting System, averages fewer than 35,000 incident reports every year.47 Thus, any system created for representing patient safety data must be capable of supporting the analyses of large volumes of data. For example, efficient index-based information retrieval on this large patient safety data set would require a terminology of sufficiently detailed granularity that the user would not be overwhelmed with a large number of irrelevant results.

Figure 1.

The patient safety iceberg. Reported adverse events form the tip of the iceberg. However, a large volume of adverse events and an even larger volume of unreported near-miss incidents lie hidden underneath the surface. Many other adverse events, not preventable given what we know today, also occur.

Aggregating Data Across Organizations

To conduct meaningful analyses on types of events that occur rarely, or to perform comparative analyses across classes of institutions, it is necessary to collect data at a regional or national level.48 The larger data sets generated through aggregation strengthen statistical analysis and also enable individual institutions to learn from the experience of others.16 These are some of the reasons why the Institute of Medicine1 and the Quality Interagency Coordination (QuIC) Task Force44 of the U.S. Federal Government recommend development of systems for mandatory and voluntary reporting of adverse events across all 50 states in the United States. A recent report from the Institute of Medicine also calls for the development of standard formats (or data models) and coding terminologies for patient safety data reporting for a broad range of applications.49 On a local level and at a smaller scale, efforts are being made to aggregate data across different institutions. The RMF has been leading a project to aggregate and analyze patient safety data from medical institutions affiliated with the Harvard Medical School. Even at this local level, aggregation and analysis are hampered because the multiple different proprietary incident reporting systems in use by member institutions have led to variations in coding.

Sharing Data Across Applications and Clinical Domains

As noted above, there are a number of different systems currently available or under development to collect and manually classify adverse event data. Even within a single institution, there may be different reporting systems across different departments with different classification schemes. Custom systems exist for reporting medication errors,20 intensive care unit (ICU) errors,8 and transfusion errors.6 The custom systems capture data that are at least partially idiosyncratic to the specific clinical practice domain or the safety application (e.g., medication error reporting). Medication error reporting systems, for example, feature tools that enable classification of the event in terms of the specific type of error (e.g., wrong drug, wrong dose), the step in which the error occurred (e.g., prescribing, administering), and the cause of the error (e.g., mislabeled medication container, handwritten prescription). While facilitating data collection and analysis for pharmacy settings, this approach restricts the usefulness of the data for more fundamental analyses of patient safety issues.

To illustrate the importance of this issue, consider a hypothetical scenario in a hospital in which the following errors occur:

  1. A patient is given an incorrect chemotherapy medication because the provider administering the medication misidentified the patient.

  2. A patient is administered blood of the wrong type. The blood administered was intended for another patient on the same floor.

  3. An operation is performed on the wrong anatomic site because the surgeon viewed another patient's radiographs during the surgery.

Analyzed in isolation, these cases might appear as individual lapses. Analyzed collectively, the cases suggest a systematic problem with patient identification, which, as Chassin and Becher50 have emphasized, is widespread and can have serious consequences (n.b., “Improve the accuracy of patient identification” was one of the six National Goals for Patient Safety51 established by JCAHO for the year 2003). An integrated approach to the analysis of patient safety data would improve the ability to identify general or systemic factors associated with a series of otherwise disparate incidents. Such an integrated system would allow global representation of high-level issues, support the representation of detailed information that is specific to different types of errors, and allow assessment using a variety of “lenses.”

Combining Data from Different Sources

In addition to analyzing different incidents or events from a common cause perspective, there also is value in looking at similar events from multiple different perspectives. So-called “sentinel” events, for example, often generate several different reports. Each report may describe different attributes or features of the event that collectively lead to a comprehensive understanding of what happened. An incident report generated shortly after the incident by one of the care team members involved might identify the type of the index event itself but might be relatively vague on the sequence of events leading up to or following the event. A root-cause analysis, which retrospectively describes the event, often provides detailed descriptions of causative and contributory factors, specific interventions prompted by the incident, and recommendations for preventing future incidents. Documents associated with a malpractice claim provide details about patient perceptions and expectations surrounding the event as well as long-term consequences of one or more adverse events. The claims documents also include multiple, different perspectives from the patient, family members, providers, lawyers, jury members, and the insurers.

Each of these sources contains different but potentially interrelated information that might be used in many different ways. As an example, consider a scenario in which a Director of Patient Safety is reviewing incident reports of a type of error that appears to have increased in frequency. To understand the underlying causes, she checks if any root cause analyses were performed and reviews those documents. She then would like to find what actions were taken and preventive measures were implemented to counter the causative factors at her hospital and at other institutions. She also might wish to check national databases to see if this error is common at a national level and whether there are recent recommendations for preventing the error. A unified representation of patient safety data that can integrate different sources of data is needed to support this type of use.

Content of Patient Safety Data and Existing Terminologies

Model of Patient Safety Data

The examples cited above illustrate the variation in detail and comprehensiveness of patient safety data. In spite of this apparent variation, there is a subset of data that is common to all sources and that might serve as a framework for a canonical or reference data model of clinical adverse events. Reports about a specific incident, such as from RCAs and incident reporting systems, tend to organize data around the following key categories or questions:

  • the type of the error

  • the context of the error
    • where it occurred
    • when it occurred
    • who was involved
    • the patient's state
    • the environmental conditions prior to the occurrence of the error
  • associated causative and contributing factors

  • the clinical outcome for the patient

  • the remedial and preventive actions taken

Similarly, data about general hazards identify factors that might lead to certain types of errors and adverse clinical events and possible preventive actions to mitigate the risk of an untoward incident from occurring.

We believe that a common reference model can be created for the representation of patient safety data. As mentioned earlier, the reference model can be refined to create concrete data models to represent data from each of the sources. HL7 has created a methodology, the HL7 Development Framework, that has among its objectives the refinement of the HL7 Reference Information Model to create data models for messaging standards. This refinement framework could be adapted for application to patient safety data representation.
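The key categories listed above could serve as the skeleton of such a reference model. The following sketch is illustrative only; the class and field names are hypothetical and are not drawn from any published standard:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative sketch only: class and field names are hypothetical,
# mirroring the key categories listed above.

@dataclass
class EventContext:
    location: str                      # where the error occurred
    occurred_at: datetime              # when it occurred
    personnel_involved: List[str]      # who was involved
    patient_state: str                 # the patient's state
    environmental_conditions: str      # conditions prior to the error

@dataclass
class SafetyEvent:
    error_type: str                    # coded type of the error
    context: EventContext
    causative_factors: List[str] = field(default_factory=list)
    contributing_factors: List[str] = field(default_factory=list)
    clinical_outcome: str = ""         # clinical outcome for the patient
    remedial_actions: List[str] = field(default_factory=list)
    preventive_actions: List[str] = field(default_factory=list)
```

In this sketch, an incident report might populate only `error_type` and `context`, whereas an RCA would also fill in the factor and action lists, so both sources refine the same reference structure.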

In addition to a reference data model, a common terminology is needed to enable us to associate data from different sources and systems with one another. The following sections discuss the issue of terminologies in patient safety data representation.

Terminologies for Patient Safety Data

Existing medical terminologies and indexing schemes, such as SNOMED, ICD-9-CM, and CPT,52,53,54 that are used widely to support other clinical applications do not contain terms or concepts relating to medical errors and their attributes.55 Most notably, these terminologies lack detailed representation of error typology (e.g., wrong-sided surgery) and of other attributes such as causative and contributing factors (e.g., patient misidentification, failure of communication).56 To address these deficiencies, several new but limited terminologies have been created to support institutional or commercial incident reporting applications (e.g., to enable annotation of free-text narrative reports).

We are not aware of, and could not find any reports in Medline of, a broadly applicable, nonproprietary patient safety terminology that could support the range of analytical tasks described earlier. Commercially available general incident reporting tools, such as DoctorQuality Inc.'s Risk Prevention and Management System, use proprietary but limited terminologies. Specific applications, such as medication error reporting20 and blood transfusion error reporting,46 and specific domains, such as primary care,57,58 have very narrowly focused terminologies. The Medical Dictionary for Regulatory Activities (MedDRA) is a terminology created by the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. MedDRA is intended for recording data during the pre- and postmarketing phases of drug development59 and does not contain terms for medical errors and their causes outside of its drug regulation perspective. The RMF has developed and uses a broad terminology related to errors; however, this proprietary terminology focuses on medicolegal claims. Table 1 contains a listing of some of these terminologies and representative categories and terms.

Table 1.

Partial Listing of Medical Error Terminologies with Representative Categories and Terms

NCC-MERP's Taxonomy of Medication Errors
    Purpose: Medication error reporting
    Selected categories: Patient information; Patient outcome; Product information (dosage form, packaging); Type (of error); Causes (name confusion); Contributing factors
    Sample terms: Written miscommunication-trailing zero; Dose omission; Floor stock; Adult day health care

DoctorQuality Inc.'s Risk Prevention and Management System's Terminology
    Purpose: Hospital incident reporting
    Selected categories: Error type; Event classification (Adverse Clinical Event: Fall); Level of impact; Contributing factor; Drug name
    Sample terms: Fall from bed, chemical restraints involved; Critical results not reported; Hemolytic reaction confirmed

Medical Event Reporting System for Transfusion Medicine's Terminology (developed by Columbia University)
    Purpose: Blood transfusion error reporting
    Selected categories: Product check-in; Order entry; Sample testing; Product storage; Latent errors; Active errors
    Sample terms: Data entry incomplete; Sample tubes mixed up; Outdated component in stock; Knowledge-based errors

International Taxonomy for Errors in General Practice
    Purpose: Primary care error classification
    Selected categories: Process errors (office administration, investigations); Knowledge and skill errors
    Sample terms: Records unavailable; Wrong test ordered; Wrong or missed diagnosis

Risk Management Foundation's Taxonomy
    Purpose: Claims coding
    Selected categories: Locations; Services; Allegations (medical treatment, provider behavior, hospital policy and procedure); Risk management issues (communication, clinical systems)
    Sample terms: Post mortem procedures; Sexual misconduct; Lack of any consent; Clinician did not receive results because report went to wrong clinician
Among the proprietary terminologies, no single system is broad enough in coverage yet sufficiently detailed to encode patient safety data from all clinical domains.60 It often is not possible to map terminologies to one another, because of differences in granularity (e.g., the NCC-MERP Taxonomy of Medication Errors20 has a very detailed classification of product labeling issues as a cause of error that is not matched by the codes in DoctorQuality's system) or asymmetries in classification (i.e., the assignment of terms under different categories in different terminologies).
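A minimal sketch illustrates why granularity differences defeat term-by-term mapping; all codes below are hypothetical and are not drawn from either terminology:

```python
# Hypothetical mapping from a fine-grained terminology to a coarser one.
# Several detailed labeling-related causes collapse to a single coarse
# code (losing detail), and some fine-grained terms have no target at all.
FINE_TO_COARSE = {
    "labeling: trailing zero": "labeling issue",
    "labeling: look-alike packaging": "labeling issue",
    "labeling: ambiguous abbreviation": "labeling issue",
    "floor stock storage error": None,  # no equivalent coarse code exists
}

def map_term(term):
    """Return the coarse code for a fine-grained term, or None when the
    coarser terminology has no corresponding concept."""
    return FINE_TO_COARSE.get(term)

def unmapped_terms():
    """Fine-grained terms that cannot be represented after mapping."""
    return [t for t, c in FINE_TO_COARSE.items() if c is None]
```

After such a mapping, counts aggregated at the coarse level can no longer distinguish the three labeling causes, and the unmapped terms drop out of cross-system analyses entirely.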

Additionally, many of the terminologies have been developed using ad hoc approaches, typically by organizing terms in simple type–subtype hierarchies; more complex semantic relationships among terms often are lacking. Table 2 illustrates how the lack of formal semantic representation may make analysis of the data unreliable. Ad hoc development also can lead to incomplete, inconsistent, or redundant terminological representations. An excerpt from the NCC-MERP Taxonomy of Medication Errors20 is shown in Table 3: five terms for different types of health care professionals are listed. In another section of the taxonomy (Section 60, Personnel Involved), 28 terms for health care professionals are listed; in fact, four of the five terms in Table 3 (all except “Other”) have a different semantic or lexical form in Section 60. For these and other reasons, existing patient safety terminologies lack the fundamental properties required for reliable and consistent data annotation, aggregation, and intelligent analysis.

Table 2.

Selected Terms Related to Misidentification of Patient, Substance, or Record from a Terminology for Medical Errors

Lab Issues
    Specimen mislabeled
    Incorrect patient ID used
Operative/invasive procedure
    Wrong patient
Radiology
    Mislabeled
    Wrong patient
Transfusion-related
    Incorrect patient received product

The patient misidentification terms are represented under several different categories. Selecting cases, such as those described in the “Sharing Data Across Applications and Clinical Domains” section, from an incident report database therefore relies on a human analyst knowing and recalling all of the related terms for patient misidentification.
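A small sketch shows how an explicit semantic annotation could replace that reliance on human recall. The term strings below come from Table 2; the concept tags are hypothetical, illustrating what a formal semantic relationship would add:

```python
# Terms from Table 2, each annotated with a hypothetical semantic
# concept set; one unrelated term is included for contrast.
TERMS = [
    {"category": "Lab Issues", "term": "Specimen mislabeled", "concepts": {"misidentification"}},
    {"category": "Lab Issues", "term": "Incorrect patient ID used", "concepts": {"misidentification"}},
    {"category": "Operative/invasive procedure", "term": "Wrong patient", "concepts": {"misidentification"}},
    {"category": "Radiology", "term": "Mislabeled", "concepts": {"misidentification"}},
    {"category": "Radiology", "term": "Wrong patient", "concepts": {"misidentification"}},
    {"category": "Transfusion-related", "term": "Incorrect patient received product", "concepts": {"misidentification"}},
    {"category": "Lab Issues", "term": "Specimen hemolyzed", "concepts": set()},  # unrelated
]

def by_concept(terms, concept):
    """Retrieve every term annotated with a semantic concept,
    regardless of the category under which it was filed."""
    return [t["term"] for t in terms if concept in t["concepts"]]

def by_keyword(terms, keyword):
    """Lexical search over term strings: misses variants such as
    'Mislabeled' or 'Incorrect patient ID used'."""
    return [t["term"] for t in terms if keyword.lower() in t["term"].lower()]
```

Here a keyword search for "wrong patient" retrieves only two of the six misidentification terms, whereas the concept-based query retrieves all six in one pass.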

Table 3.

Excerpt from the NCC-MERP Taxonomy of Medication Errors

90.6 Lack of availability of health care professional
    90.6.1 Medical
    90.6.2 Other Allied Health Care Professional
    90.6.3 Pharmacy
    90.6.4 Nursing
    90.6.5 Other

Five terms for different types of health care professional are listed.

There are several possible reasons for the lack of formal structure in current patient safety terminologies. First, many of the proprietary and application-specific terminologies were developed in response to an urgent demand for incident reporting systems, and their ad hoc properties likely reflect the urgency with which these systems were designed and deployed. Second, until very recently, developers of incident reporting systems did not plan for aggregation of data from various applications, systems, or institutions or the extension or evolution of terminologies over time. Finally, although terminologies that are developed using formal methods are far more useful, the development process requires considerable time, effort, and expertise.

General Models of Errors from Other Industries

In comparison with terminologies and taxonomies for medical errors, which are very domain specific in classifying or describing errors, industrial and transportation safety applications are based on generic models of human performance and organizational–operational aspects of incidents. These classification schemes and terminologies cover such domain-independent concepts as organizational structure, basic human cognition and task execution, process management, control theory, communications, regulatory controls, and policy science.

Donald Norman's categorization of action slips61 and Stages of Action Model62 and Hollnagel's classification,63 for example, were developed initially as high-level frameworks for characterizing human–computer interaction but have been applied more generically to classify cognition–action couplings. Rasmussen's Skills-Rules-Knowledge model64 and Reason's Generic Error-Modelling System (GEMS)65 have been used in a variety of applications. A variety of methods and taxonomies have also been developed for analyzing the causal factors of accident and incident data from the industrial and transportation domains. These include the TRIPOD Basic Risk Factors,66 PRISMA,67 and UK CIRAS rail reporting system classification schemes;68 the classification scheme associated with the Accident Evolution and Barrier Function Method;69 and the Taxonomy of Unsafe Operations.70 The Eindhoven Classification Model focuses specifically on organizational factors,71 and the Office of Aviation Medicine's HFACS system focuses on both human and organizational factors.72 Each of these models strives to classify or characterize latent (“hidden”) and system-based factors that contributed to an incident.

Several of these models have been applied to medical errors. Reason's GEMS framework has been used as the basis of a protocol for investigating medical accidents.73 Norman's Stages of Action Model has been proposed as the basis of a taxonomy of human errors in medicine.74 The Eindhoven classification model has been adopted for causal classification and analysis of blood transfusion errors.75 Because these models and taxonomies contribute to our understanding of errors, it would be important to incorporate their features in patient safety terminologies.

Development of a Common Patient Safety Terminology Using Formal Methods

As discussed earlier, a common patient safety terminology is needed to extend the use of data across organizations, applications, clinical domains, and proprietary information systems. However, no existing terminology can be applied across this wide range of domains; hence, a terminology for patient safety must be developed.

While commonly used clinical terminologies are inadequate for safety reporting, a new terminology effort might be able to build on existing clinical terminologies. First, in addition to terms describing error types and contributing factors, safety reporting also requires terms describing the clinical context, such as the roles of various actors in the incident, the location of an incident (e.g., intensive care unit, pharmacy, nursing floor), and the specific medication administered or procedure that was performed. Such terms usually are well covered56 in the clinical terminologies and would not have to be created de novo. Second, if a compositional approach76 is used to create the terminology, as suggested below, several of the atomic terms required to compose compound terms for error types or contributing factors can be found in clinical terminologies. Our analysis56 indicates that one or more of the atomic terms exist in the source vocabularies of the UMLS metathesaurus77 for more than 90% of safety-related terms.

To maximize scalability and generalizability, formal terminology modeling principles should be used to develop a standard patient safety terminology. Clinical terminologies have been modeled using conceptual graphs78 and description logics.79 These logic-based approaches can support the development of consistent, valid, and extensible terminologies. In addition to assertions of type–subtype relationships, other semantic relationships among terms can be defined, e.g., part–whole, temporal. On the basis of these definitions, terms can be classified more flexibly, consistently, and automatically into multiple hierarchies.

The terminology should be founded on one or more theoretical models of error and reliability to enable axiomatic analyses of the data. For example, terms could be classified according to concepts in Reason's GEMS framework as Unsafe Acts, Preconditions, and Latent Failures. The terminology should be broadly applicable across clinical domains and specific applications to enable aggregation of data from different sources. Clinical terminologies such as SNOMED-CT52 show the feasibility of creating a terminology that spans a broad range of clinical domains. In a broadly applicable terminology, domain- and application-specific terms can be derived from the more general terms, e.g., miscalculation of a medication dose can be derived from a calculation error term and can thus be associated with miscalculation of radiation dose and miscalculation of cardiac output. One recent effort to develop a terminology for communications errors based on Reason's model of errors and expressed in conceptual graphs exemplifies such a principled terminology modeling approach.80
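The miscalculation example can be sketched as a simple multiple-hierarchy classification with transitive subsumption queries; all term names below are hypothetical, and a real terminology would use description logics rather than this toy structure:

```python
# Hypothetical type-subtype structure: a term may have multiple parents,
# so "medication dose miscalculation" sits in both the calculation-error
# hierarchy and the medication-error hierarchy.
PARENTS = {
    "calculation error": {"error"},
    "medication error": {"error"},
    "radiation error": {"error"},
    "medication dose miscalculation": {"calculation error", "medication error"},
    "radiation dose miscalculation": {"calculation error", "radiation error"},
    "cardiac output miscalculation": {"calculation error"},
}

def is_a(term, ancestor):
    """True if `ancestor` is reachable from `term` via type-subtype links."""
    if term == ancestor:
        return True
    return any(is_a(p, ancestor) for p in PARENTS.get(term, ()))

def descendants(ancestor):
    """All terms classified, directly or transitively, under `ancestor`."""
    return sorted(t for t in PARENTS if t != ancestor and is_a(t, ancestor))
```

A subsumption query under "calculation error" then automatically associates the medication, radiation, and cardiac output miscalculation terms, even though each was authored in a different domain-specific branch.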

An issue with current patient safety terminologies, and with safety terminologies from other domains, is the lack of formal validation prior to use and of continuing evaluation thereafter. A standard patient safety terminology should be evaluated prior to large-scale use to assess its coverage, validity, and consistency. Furthermore, the terminology should be evaluated continuously for its value in identifying and analyzing patient safety problems.

Conclusion

Lack of standards for the representation of patient safety data has significant implications for the success of data-driven interventions, e.g., causal modeling, care improvement policies, benchmarking, and compliance. Without a common representation of the data, it becomes very difficult to analyze data across event types or event sites. Benchmarking, in particular, becomes worthwhile only when standard adverse event definitions are in place and similar surveillance is being conducted across different sites. To date, regulators have not insisted on the use of standard surveillance methodologies or data reporting formats, but that eventually may change.

A critical ingredient of large-scale efforts to collect, share, and analyze patient safety data is a common representation for the data. The representation for patient safety data would consist of a common reference data model and a standard terminology. The representation must be well designed,81 broad, extensible, and flexible to accommodate the needs of a variety of applications.

The authors thank Dr. Qing Zeng for reviewing a draft of this manuscript.

Footnotes

*

For the purpose of this discussion, the term patient safety data will refer to the broad and heterogeneous information that includes, but is not limited to, the descriptions of incidents with medical errors or near misses, their causes, the follow-up corrective actions, interventions that reduce future risk, and patient safety hazards.

†

While there are many definitions in the current medical literature, we have used the following definition of adverse event: “an undesirable event occurring in the course of medical care that produces a measurable change in patient status.” We have made an effort to uncouple the event from the outcome. Therefore, in our framework, the change in patient status produced by an adverse event may be transient and self-correcting, with no change in expected outcome. Alternatively, active intervention and additional care may be provided to prevent or limit the significance of an adverse outcome. Finally, an adverse event may produce an adverse outcome and temporary or permanent injury, even if intervention and additional care are provided.

‡

RMF is the administrative arm of the Controlled Risk Insurance Company (CRICO), the medical malpractice insurance captive of the Harvard medical institutions.

§

A situation in which a medical error could have resulted in an accident, injury, or illness, but did not, either by chance or through timely intervention.

References

  • 1. Kohn LT, Corrigan JM, Donaldson MS (eds). To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
  • 2. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001;322:517–9.
  • 3. Garcia-Martin M, Lardelli-Claret P, Bueno-Cavanillas A, Luna-del-Castillo JD, Espigares-Garcia M, Galvez-Vargas R. Proportion of hospital deaths associated with adverse events. J Clin Epidemiol. 1997;50:1319–26.
  • 4. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163:458–71.
  • 5. Britt H, Miller GC, Steven ID, Howarth GC, Nicholson PA, Bhasale AL, et al. Collecting data on potentially harmful events: a method for monitoring incidents in general practice. Fam Pract. 1997;14:101–6.
  • 6. Kaplan HS, Callum JL, Rabin Fastman B, Merkley LL. The Medical Event Reporting System for Transfusion Medicine: will it help get the right blood to the right patient? Transfus Med Rev. 2002;16:86–102.
  • 7. Staender S, Davies J, Helmreich B, Sexton B, Kaufmann M. The anaesthesia critical incident reporting system: an experience based database. Int J Med Inf. 1997;47(1-2):87–90.
  • 8. Wu AW, Pronovost P, Morlock L. ICU incident reporting systems. J Crit Care. 2002;17:86–94.
  • 9. Vincent C, Stanhope N, Crowley-Murphy M. Reasons for not reporting adverse incidents: an empirical study. J Eval Clin Pract. 1999;5:13–21.
  • 10. Williams L, Grayson D, Gosbee J. Patient safety: incorporating drawing software into root cause analysis software. J Am Med Inform Assoc. 2002;9(6 Suppl):S52–3.
  • 11. Bagian JP, Gosbee J, Lee CZ, Williams L, McKnight SD, Mannos DM. The Veterans Affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002;28:531–45.
  • 12. Bates DW, Evans RS, Murff H, Stetson PD, Pizziferri L, Hripcsak G. Detecting adverse events using information technology. J Am Med Inform Assoc. 2003;10:115–28.
  • 13. Joint Commission on Accreditation of Healthcare Organizations. Failure Mode and Effects Analysis in Health Care: Proactive Risk Reduction. Oakbrook Terrace, IL: Joint Commission Resources, Inc; 2002.
  • 14. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995;274:35–43.
  • 15. Bhasale A. The wrong diagnosis: identifying causes of potentially adverse events in general practice using incident monitoring. Fam Pract. 1998;15:308–18.
  • 16. Leape LL. Reporting of adverse events. N Engl J Med. 2002;347:1633–8.
  • 17. Rosenthal J, Booth M, Flowers L, Riley T. Current State Programs Addressing Medical Errors: An Analysis of Mandatory Reporting and Other Initiatives. Portland, ME: National Academy for State Health Policy; 2001. Report No. GNL35.
  • 18. Reducing Medical Errors: A Review of Innovative Strategies to Improve Patient Safety. Testimony of Dennis S. O'Leary, President, The Joint Commission on Accreditation of Healthcare Organizations. In: House Committee on Energy and Commerce Subcommittee on Health. Washington, DC: 2002.
  • 19. Russler DC, Schadow G, Mead C, Snyder T, Quade LM, McDonald CJ. Influences of the Unified Service Action Model on the HL7 Reference Information Model. Proc AMIA Symp. 1999:930–4.
  • 20. National Coordinating Council for Medication Error Reporting and Prevention. Taxonomy of Medication Errors. United States Pharmacopeia; 1998. Available at: http://www.nccmerp.org/taxo0731.pdf. Accessed Jan 3, 2003.
  • 21. Santell JP, Hicks RW, McMeekin J, Cousins DD. Medication errors: experience of the United States Pharmacopeia (USP) MEDMARX reporting system. J Clin Pharmacol. 2003;43:760–7.
  • 22. Stanhope N, Crowley-Murphy M, Vincent C, O'Connor AM, Taylor-Adams SE. An evaluation of adverse incident reporting. J Eval Clin Pract. 1999;5:5–12.
  • 23. Cullen DJ, Bates DW, Small SD, Cooper JB, Nemeskal AR, Leape LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541–8.
  • 24. Jha AK, Kuperman GJ, Teich JM, et al. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc. 1998;5:305–14.
  • 25. Carroll JS, Rudolph JW, Hatakenaka S. Lessons learned from non-medical industries: root cause analysis as culture change at a chemical plant. Qual Saf Health Care. 2002;11:266–9.
  • 26. Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Policy and Procedures (Revised: July 2002); 2002. Available at: http://www.jcaho.org/accredited+organizations/ambulatory+care/sentinel+events/se_pp.htm. Accessed Jan 3, 2003.
  • 27. Joint Commission on Accreditation of Healthcare Organizations. Root Cause Analysis in Health Care: Tools and Techniques. 2nd ed. Oakbrook Terrace, IL: Joint Commission Resources, Inc; 2002.
  • 28. Rothschild JM, Federico FA, Gandhi TK, Kaushal R, Williams DH, Bates DW. Analysis of medication-related malpractice claims: causes, preventability, and costs. Arch Intern Med. 2002;162:2414–20.
  • 29. Domino KB, Posner KL, Caplan RA, Cheney FW. Airway injury during anesthesia: a closed claims analysis. Anesthesiology. 1999;91:1703–11.
  • 30. Larson SL, Jordan L. Preventable adverse patient outcomes: a closed claims analysis of respiratory incidents. AANA J. 2001;69:386–92.
  • 31. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. Results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325:245–51.
  • 32. Studdert DM, Thomas EJ, Burstin HR, Zbar BI, Orav EJ, Brennan TA. Negligent care and malpractice claiming behavior in Utah and Colorado. Med Care. 2000;38:250–60.
  • 33. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847–51.
  • 34. Honigman B, Lee J, Rothschild J, et al. Using computerized data to identify adverse drug events in outpatients. J Am Med Inform Assoc. 2001;8:254–66.
  • 35. Evans RS, Burke JP, Classen DC, et al. Computerized identification of patients at high risk for hospital-acquired infection. Am J Infect Control. 1992;20:4–10.
  • 36. Rocha BH, Christenson JC, Pavia A, Evans RS, Gardner RM. Computerized detection of nosocomial infections in newborns. Proc Annu Symp Comput Appl Med Care. 1994:684–8.
  • 37. Tuttle D, Panzer RJ, Baird T. Using administrative data to improve compliance with mandatory state event reporting. Jt Comm J Qual Improv. 2002;28:349–58.
  • 38. Cox AR, Anton C, Goh CH, Easter M, Langford NJ, Ferner RE. Adverse drug reactions in patients admitted to hospital identified by discharge ICD-10 codes and by spontaneous reports. Br J Clin Pharmacol. 2001;52:337–9.
  • 39. O'Neil AC, Petersen LA, Cook EF, Bates DW, Lee TH, Brennan TA. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med. 1993;119:370–6.
  • 40. DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care Failure Mode and Effect Analysis: the VA National Center for Patient Safety's prospective risk analysis system. Jt Comm J Qual Improv. 2002;28:209, 248–67.
  • 41. Goetsch SJ. Risk analysis of Leksell Gamma Knife Model C with automatic positioning system. Int J Radiat Oncol Biol Phys. 2002;52:869–77.
  • 42. Busse DK, Wright DJ. Classification and analysis of incidents in complex medical environments. Topics in Health Information Management. 2000;20:1–11.
  • 43. Salb T, Burgert O, Gockel T, et al. Risk reduction in craniofacial surgery using computer-based modeling and intraoperative immersion. In: Medicine Meets Virtual Reality (MMVR); 2002 January 23-26; Newport Beach, CA: Aligned Management Associates, Inc; 2002.
  • 44. Quality Interagency Coordination Task Force. Doing What Counts for Patient Safety: Federal Actions to Reduce Medical Errors and Their Impact. Report of the Quality Interagency Coordination Task Force (QuIC) to the President. Washington, DC: 2000. Available at: http://www.quic.gov/report/toc.htm. Accessed Jan 3, 2003.
  • 45. Kivlahan C, Sangster W, Nelson K, Buddenbaum J, Lobenstein K. Developing a comprehensive electronic adverse event reporting system in an academic health center. Jt Comm J Qual Improv. 2002;28:583–94.
  • 46. Callum JL, Kaplan HS, Merkley LL, Pinkerton PH, Rabin Fastman B, Romans RA, et al. Reporting of near-miss events for transfusion medicine: improving transfusion safety. Transfusion. 2001;41:1204–11.
  • 47. National Aeronautics and Space Administration. Aviation Safety Reporting System Program Overview. Available at: http://asrs.arc.nasa.gov/briefing/program_briefing_nf.htm. Accessed Jan 3, 2003.
  • 48. Runciman WB, Edmonds MJ, Pradhan M. Setting priorities for patient safety. Qual Saf Health Care. 2002;11:224–9.
  • 49. Aspden P, Corrigan JM, Wolcott J, Erickson SM (eds). Patient Safety: Achieving a New Standard for Care. Washington, DC: National Academy Press; 2003.
  • 50. Chassin MR, Becher EC. The wrong patient. Ann Intern Med. 2002;136:826–33.
  • 51. Joint Commission on Accreditation of Healthcare Organizations. 2003 National Patient Safety Goals; 2002. Available at: http://www.jcaho.org/accredited+organizations/patient+safety/npsg/npsg_03.htm. Accessed Jan 3, 2003.
  • 52. Stearns MQ, Price C, Spackman KA, Wang AY. SNOMED clinical terms: overview of the development process and project status. Proc AMIA Symp. 2001:662–6.
  • 53. American Medical Association. CPT 2003, Professional Edition. Chicago, IL: AMA Press; 2003.
  • 54. International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). 6th ed. Baltimore, MD: The Centers for Medicare & Medicaid Services; 2002.
  • 55. Sangster W, Patrick T. Talking about medical errors: the void in existing controlled terminologies. Proc AMIA Symp. 2002:1152.
  • 56. Boxwala AA, Zeng QT, Chamberas A, Sato L, Dierks M. Coverage of patient safety terms in the UMLS Metathesaurus. In: Musen MA (ed). Proc AMIA Annu Fall Symp; 2003. Washington, DC: American Medical Informatics Association; 2003, p 110.
  • 57. Dovey SM, Meyers DS, Phillips RL Jr, et al. A preliminary taxonomy of medical errors in family practice. Qual Saf Health Care. 2002;11:233–8.
  • 58. Makeham MA, Dovey SM, County M, Kidd MR. An international taxonomy for errors in general practice: a pilot study. Med J Aust. 2002;177:68–72.
  • 59. Brown EG, Wood L, Wood S. The medical dictionary for regulatory activities (MedDRA). Drug Saf. 1999;20:109–17.
  • 60. Brixey J, Johnson TR, Zhang J. Evaluating a medical error taxonomy. Proc AMIA Symp. 2002:71–5.
  • 61. Norman DA. Categorization of action slips. Psychol Rev. 1981;88:1–15.
  • 62. Norman DA. The Psychology of Everyday Things. New York, NY: Basic Books; 1988.
  • 63. Hollnagel E. The phenotype of erroneous actions: implications for HCI design. In: Weir G, Alty J (eds). Human Computer Interaction and Complex Systems. London: Academic Press; 1991, 73–121.
  • 64. Rasmussen J. Skills, rules, knowledge: signals, signs and symbols and other distinctions in human performance models. IEEE Transactions: Systems, Man and Cybernetics. 1983;SMC-13:257–67.
  • 65. Reason J. Human Error. Cambridge, UK: Cambridge University Press; 1990.
  • 66. Groeneweg J. Controlling the Controllable: The Management of Safety. 2nd ed. Leiden, Netherlands: DSWO Press; 1994.
  • 67. van der Schaaf TW. PRISMA: a risk management tool based on incident analysis. In: Bier VM (ed). Proceedings of Workshop on Accident Sequence Precursors and Probabilistic Risk Analysis. College Park, MD: University of Maryland Press (Center for Reliability Engineering); 1998:31–41.
  • 68. Davies JB, Wright L, Courtney E, Reid H. Confidential incident reporting on the UK railways: the ‘CIRAS’ system. Cognition, Technology & Work. 2000;2:117–25.
  • 69. Svenson O. Accident and incident analysis based on the Accident Evolution and Barrier Function (AEB) model. Cognition, Technology & Work. 2001;3:42–52.
  • 70. Shappell S, Wiegmann D. A human error approach to accident investigation: the taxonomy of unsafe operations. International Journal of Aviation Psychology. 1998;7:269–91.
  • 71. van Vuuren W, van der Schaaf T. Modelling organizational factors of human reliability in complex man-machine systems. In: 6th IFAC Symposium on Analysis, Design and Evaluation of Man-Machine Systems; 1995.
  • 72. Shappell SA. Human Factors Analysis and Classification System (HFACS). Washington, DC: Office of Aviation Medicine; 2000. Report No. DOT/FAA/AM-00/7.
  • 73. Vincent C, Taylor-Adams S, Stanhope N. Framework for analysing risk and safety in clinical medicine. BMJ. 1998;316:1154–7.
  • 74. Zhang J, Patel VL, Johnson TR, Shortliffe EH. Toward an action-based taxonomy of human errors in medicine. In: Gray W, Schunn C (eds). Twenty-Fourth Annual Conference of the Cognitive Science Society; 2002. Mahwah, NJ: Lawrence Erlbaum Associates; 2002, 970–5.
  • 75. Kaplan HS, Battles JB, Van der Schaaf TW, Shea CE, Mercer SQ. Identification and classification of the causes of events in transfusion medicine. Transfusion. 1998;38(11-12):1071–81.
  • 76. Spackman KA, Campbell KE. Compositional concept representation using SNOMED: towards further convergence of clinical terminologies. Proc AMIA Symp. 1998:740–4.
  • 77. Lindberg DA, Humphreys BL, McCray AT. The Unified Medical Language System. Methods Inf Med. 1993;32:281–91.
  • 78. Sowa JF. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove, CA: PWS Publishing Co; 1999.
  • 79. Baader F, Calvanese D, McGuinness D, Nardi D, Patel-Schneider P (eds). The Description Logic Handbook: Theory, Implementation and Applications. Cambridge, UK: Cambridge University Press; 2003.
  • 80. Stetson PD, McKnight LK, Bakken S, Curran C, Kubose TT, Cimino JJ. Development of an ontology to model medical errors, information needs, and the clinical communication space. J Am Med Inform Assoc. 2002;9(11 Suppl):S86–91.
  • 81. Cimino JJ. Desiderata for controlled medical vocabularies in the twenty-first century. Methods Inf Med. 1998;37(4-5):394–403.

Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
