PLOS ONE. 2025 Jan 17;20(1):e0317128. doi: 10.1371/journal.pone.0317128

Exploring medical error taxonomies and human factors in simulation-based healthcare education

Tamara Skrisovska 1,2,#, Daniel Schwarz 1,*,#, Martina Kosinova 1,2, Petr Stourac 1,2
Editor: Mukhtiar Baig3
PMCID: PMC11741583  PMID: 39823397

Abstract

This study aims to provide an updated overview of medical error taxonomies by building on a robust review conducted in 2011. It seeks to identify the key characteristics of the most suitable taxonomy for use in high-fidelity simulation-based postgraduate courses in Critical Care. While many taxonomies are available, none seem to be explicitly designed for the unique context of healthcare simulation-based education, in which errors are regarded as essential learning opportunities. Rather than creating a new classification system, this study proposes integrating existing taxonomies to enhance their applicability in simulation training. Using survey data from participants and tutors in postgraduate simulation-based courses, this study provides an exploratory analysis of whether a generic or domain-specific taxonomy is more suitable for healthcare education. While a generic classification may cover a broad spectrum of errors, a domain-specific approach could be more relatable and practical for healthcare professionals in a given domain, potentially improving error-reporting rates. Six strong links were identified between the reviewed classification systems. These correlations allowed the authors to propose simulation training strategies to address the errors identified in both classification systems. This approach focuses on error management and fostering a safety culture, aiming to reduce communication-related errors by introducing the principles of Crisis Resource Management, effective communication methods, and overall teamwork improvement. The gathered data contribute to a better understanding and training of the most prevalent medical errors, with significant correlations found between different medical error taxonomies, suggesting that addressing one can positively impact others. The study highlights the importance of simulation-based education in healthcare for error management and analysis.

Introduction

Medical error is an act of omission or commission during planning or execution that contributes to or could contribute to unintended harm to a patient [1]. Data show that medical errors are among the top three causes of death worldwide, as highlighted by pivotal studies [2,3]. Recognizing the severity of the situation, the World Health Organization (WHO) has made patient safety a global health priority with its “Global Action on Patient Safety: A Decade of Patient Safety 2020–2030” initiative [4]. This underscores that unsafe patient care remains a critical global issue that places a heavy burden on healthcare systems. Given the complexity of healthcare systems and the involvement of human factors, achieving an error-free system is not feasible, and the focus is on minimizing the occurrence and impact of medical errors [5,6]. Key aspects for enhancing patient safety involve not only understanding and analyzing the causes of errors at both individual and systemic levels but also establishing resilient systems that can respond to, recover from, and adapt to these errors [7,8]. Achieving resilience requires robust error reporting, which fundamentally depends on standard terminology and classification systems. Such a system enables consistent communication, tracking, and comparison of incidents, thereby forming the foundation of a structured framework for medical error analysis [9,10]. A taxonomy, which is an organized, hierarchical classification system, is particularly valuable in this context as it allows for a detailed understanding of the relationships between error categories [11,12]. An effective medical error taxonomy should support documentation, prediction, and error reduction, all of which are integral to meaningful reporting [13]. Using consistent language and standard terminology enables the coding, filtering, sorting, and organizing of error data [14], making it suitable for both individual and systemic analyses and interventions. 
Different types of errors require tailored intervention strategies, and a well-structured taxonomy can guide these approaches, thereby strengthening the overall resilience of healthcare practices (Fig 1). The terms “error taxonomy” and “error classification systems” are often used interchangeably because they organize and categorize errors for better understanding and management. However, some error classification systems can overwhelm healthcare providers, leading to potential underreporting or incorrect categorization, which may complicate the analysis of such reports [15].

Fig 1. The process of medical error reporting and classification as a foundation for enhancing healthcare system resilience and safety.

Fig 1

Medical error taxonomies can be generic or domain-specific. Generic taxonomies, such as Rasmussen’s (1987) classification of errors into skill-, rule-, and knowledge-based types (SRK) [16], offer broad terminology applicable across various fields, including high-reliability organizations such as aviation and nuclear power [17]. Reason’s (1990) model builds upon Rasmussen’s work through the Generic Error-Modelling System (GEMS), which incorporates diverse error mechanisms such as slips, lapses, and mistakes within the SRK framework. Another prominent generic taxonomy, the Human Factors Analysis and Classification System (HFACS), is based on Reason’s concepts but was recently found to be inadequate for analyzing adverse medical events. A new framework has since been validated for root cause analysis and the recording of such events, tailored more closely to healthcare contexts. In addition, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) Patient Safety Event Taxonomy (2005) organizes reporting systems into five main classifications: impact, type, domain, cause, and prevention and mitigation [10]. Although these generic taxonomies cover a wide range of domains, their broad generalization may limit their effectiveness in supporting patient safety recommendations within specific fields [18,19].

Conversely, domain-specific medical error taxonomies, which contain terms unique to their respective domains, display enhanced reliability because of the consistent terminology used in taxonomies, incident reports, and the final classification of errors. These taxonomies focus on specific areas of medicine such as primary care [20] and surgery [21,22]. Additional examples can be found in Critical Care, which includes emergency care and intensive medicine [23–26].

These studies commonly analyze actual patient safety incidents, subsequently creating new taxonomies or modifying existing ones to meet the practical needs of patient-centered care in the fast-paced settings of Emergency Departments and Intensive Care Units. However, the categorizations in these domain-specific taxonomies may pose challenges for information sharing across healthcare fields, especially given the differences in patient case severity, the time-sensitive nature of decision-making, and the potential for serious or even fatal outcomes due to errors in Critical Care compared with other domains [24].

To improve patient safety and the quality of care in Critical Care settings, it is essential to employ a systematic approach for identifying the root causes of medical errors. Our study addresses this need by exploring effective error-management strategies within simulation-based education. We assessed various taxonomies to categorize medical errors in Critical Care. Although creating a new taxonomy is beyond the scope of our study, this exploratory research offers a starting point for future efforts aimed at establishing a taxonomy that can effectively support the understanding and reduction of medical errors in this context. Our study also builds on a comprehensive review published in 2011, providing an updated perspective on medical error taxonomies and determining the essential features of a taxonomy suitable for postgraduate training in Critical Care simulation settings.

Methods

This exploratory analysis was supported by a literature review, building on the comprehensive work of Taib [14], who investigated 26 medical error taxonomies from a human factors perspective up to 2008. Using the same methodology and keywords (patient safety, medical errors, taxonomy, and classification), we searched MEDLINE, Embase, and PubMed for relevant publications from 2009 to 2024. This review aimed to assess the evolution of these taxonomies, particularly their applicability to simulation-based healthcare education, in which medical errors are viewed as learning opportunities. Two independent researchers verified the findings to ensure consistency. Our analysis revealed a knowledge gap: no existing taxonomy adequately addresses the unique needs of simulation-based medical education, particularly in domains such as Critical Care. Such a taxonomy could enhance the analysis of prevalent errors, guide the design of simulation scenarios that reflect real-world complexities, and ultimately improve the effectiveness of simulation-based medical education.

We further explored this knowledge gap by comparing two established medical error taxonomies in the environment of our simulation center. The first was Reason's GEMS [25]. One potential shortcoming of this system is its limited direct applicability in healthcare: its categories are abstract and universally applicable, as evidenced by the scheme and error descriptions depicted in Fig 2.

Fig 2.

Fig 2

(A) Reason’s classification: Error categories. Adapted from [25]. (B) Reason’s classification: Definitions.

The second taxonomy, proposed by Round et al. for practical use in education sessions, is commonly referred to as the "deadly ten errors" [26]. This taxonomy, illustrated in Fig 3, offers clinical concepts for understanding the origins of medical errors, with potential applications in various medical education settings.

Fig 3. Round’s classification: Descriptions.

Fig 3

We conducted a survey with postgraduate participants attending high-fidelity simulation courses in Critical Care at Masaryk University's Simulation Centre (SIMU) in Brno, Czech Republic. The survey collected data from 124 participants and 13 tutors regarding the types and occurrence of medical errors observed or committed during the simulations, as well as their frequency in daily practice. Each simulation session was preceded by a standard briefing that included an introduction to the survey and an overview of the medical error types, with definitions provided in the questionnaire (S1 Appendix). After each simulation session, structured feedback was provided to learners in the form of a debriefing emphasizing technical and non-technical skills, with a focus on incidents that may compromise patient safety. The survey was conducted between March 1, 2021, and October 27, 2022. All participants provided informed consent at the start of the survey, and the data were anonymized. The study adhered to local laws and institutional standards and did not involve any interventions.

In our survey, we explored the occurrence and nature of medical errors in order to link observations from simulation-based courses with real-world clinical practice. Participants were asked whether they observed any type of medical error during the simulation course, with binary “yes” or “no” responses, and to specify the error types using predefined categories from both Reason’s and Round’s taxonomies. The participants also ranked the frequency of medical errors in their daily practice. They had the opportunity to describe the specific errors observed during the simulations and those they encountered in their own clinical experiences. This structured approach allowed for a deeper understanding of the connections between errors in simulated scenarios and those occurring in actual clinical settings, thus highlighting the educational potential of simulation-based training.

Statistics and results

Following data collection, we examined the relationships between Reason’s and Round’s medical error taxonomies to identify significant associations. Fig 4 outlines the data preprocessing and analysis processes, from initial data handling to the identification of key links between these classification systems.

Fig 4. Flowchart illustrating the methodology for examining medical error associations.

Fig 4

The methodology involved collecting data from 124 participants and 13 tutors, exploring the relationships between medical error classifications using Kendall's tau correlation coefficient, and identifying significant associations through thresholding. Data cleaning, imputation of missing values, and linear rescaling of the resulting data matrix (105 × 15) were performed, followed by analysis to identify the most common types of medical errors and their associations across the two classification systems.
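The preprocessing steps above can be sketched as follows. This is a minimal illustration only: the matrix values and shape are made up (the study's matrix was 105 × 15), and column-mean imputation is our assumption, since the paper does not specify the imputation rule used.

```python
import numpy as np

# Illustrative survey-response matrix (rows = respondents, columns =
# error categories); NaN marks a missing response. Values are made up.
X = np.array([
    [1.0, 4.0, 2.0],
    [3.0, np.nan, 5.0],
    [5.0, 2.0, np.nan],
    [2.0, 3.0, 4.0],
])

# Replace each missing value with its column mean (assumed strategy)
col_means = np.nanmean(X, axis=0)
rows, cols = np.where(np.isnan(X))
X[rows, cols] = col_means[cols]

# Linearly rescale each column so its values span [0, 1]
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```

After rescaling, every column has a minimum of 0 and a maximum of 1, which puts the differently scaled survey items on a common footing before correlation analysis.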

We further analyzed the observed occurrence matrix by determining the relative frequencies of medical errors. Fig 5 presents a stacked bar chart comparing the distribution of medical errors recorded during the simulation, as reported by both the participants and tutors.

Fig 5. Relative frequency of medical errors observed during the simulation sessions as reported by participants and tutors.

Fig 5

The bar charts distinguish and compare the two medical error classification systems: Reason's and Round's taxonomies. The x-axis denotes the categories of medical errors, revealing the most and least frequent types of errors within each taxonomy. RBM—Rule-based Mistakes; KBM—Knowledge-based Mistakes.

Fig 5 shows that according to Reason's taxonomy, knowledge-based mistakes and lapses were the most commonly observed errors. Under Round's taxonomy, communication breakdown and poor team performance were reported frequently. In contrast, violations (Reason; deliberate deviations from a rule or procedure) and sloth (Round; neglecting necessary actions in the belief that the effort required outweighs the perceived benefit or reward) were among the least frequent types of errors within each taxonomy, indicating their rarity compared with other types.

To further explore the error patterns, we analyzed the ranking matrices provided by the participants and tutors. The results (Fig 6) use boxplots to visualize the rankings of medical error types, highlighting the alignment between the tutor and participant perspectives.

Fig 6. Boxplot showing participants’ and tutors’ rankings of medical errors.

Fig 6

The y-axis represents the ranking, while the x-axis displays the categories of medical errors. The order of medical mistakes on the x-axis corresponds to the sorted frequencies, as shown in Fig 5. RBM—Rule-based Mistakes; KBM—Knowledge-based Mistakes.

Upon examining Fig 6, it is evident that the participants’ and tutors’ rankings were highly similar, suggesting a strong agreement in their assessment of medical error types.

We used Kendall's tau correlation coefficient to investigate the relationship between Reason's and Round's classification systems for medical errors. Kendall's tau is a nonparametric measure of association between two ranked variables, making it suitable for categorized data such as the ranked error frequencies in our study. It makes no assumptions about the data distribution and depends solely on the order of observations, not their magnitude; it can therefore detect associations even in nonlinear relationships and remains resistant to outliers.
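As a minimal sketch of this measure (using SciPy's `kendalltau`; the rankings below are invented for illustration and are not study data):

```python
from scipy.stats import kendalltau

# Hypothetical frequency rankings of five error categories by two
# rater groups (1 = most frequent). Values are illustrative only.
participant_ranks = [1, 2, 3, 4, 5]
tutor_ranks = [2, 1, 3, 5, 4]

# tau counts concordant vs. discordant pairs, so it depends only on
# the ordering of observations, not their magnitudes.
tau, p_value = kendalltau(participant_ranks, tutor_ranks)
# tau == 0.6 here: 8 of the 10 pairs are concordant, 2 discordant
```

Because only pair orderings matter, replacing rank 5 with rank 500 in either list would leave tau unchanged, which is the robustness to outliers noted above.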

We applied a thresholding step to the Kendall's tau matrix, using a p-value below 0.1 and an absolute correlation coefficient |τ| above 0.1 as cutoffs. The p-value threshold of 0.1 was selected to increase sensitivity to potentially meaningful associations; a more relaxed threshold is appropriate for exploratory analyses such as ours, where the aim is to identify trends that may warrant further investigation. The threshold of |τ| > 0.1 was chosen to capture associations with at least minimal strength, ensuring that identified links are practically relevant while accounting for the fact that strong correlations are less common in nonparametric rank data. This step helped us identify significant associations between the two medical error taxonomies according to the frequency rankings provided by the participants and tutors.
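The thresholding step can be sketched as follows. The tau and p-value matrices here are random placeholders standing in for the pairwise results between the two taxonomies' categories; the matrix shape and values are assumptions for illustration, not the study's results.

```python
import numpy as np

# Placeholder matrices of pairwise Kendall's tau values and p-values
# between categories of two taxonomies (shape and values illustrative).
rng = np.random.default_rng(42)
tau = rng.uniform(-0.6, 0.6, size=(5, 10))
p = rng.uniform(0.0, 0.5, size=(5, 10))

# Retain only associations with p < 0.1 and |tau| > 0.1, as in the study
mask = (p < 0.1) & (np.abs(tau) > 0.1)
significant = [(i, j, tau[i, j], p[i, j]) for i, j in zip(*np.where(mask))]
```

Every retained pair then satisfies both cutoffs simultaneously; pairs passing only one of the two thresholds are discarded.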

Our analysis identified six significant relationships between Reason's five error types and Round's ten error types. We found two associations among participants and four among tutors, as shown in Table 1.

Table 1. Detected correlations between the two presented medical error classification systems: Reason’s and Round’s taxonomies.

Data from participants/tutors Correlation coefficient p-value Reason's vs. Round's classification
Participants 0.157 0.044 * Lapses vs. Playing the odds
Participants -0.138 0.076 RBM vs. Ignorance
Tutors 0.493 0.047 * KBM vs. Lack of skill
Tutors 0.482 0.048 * Lapses vs. Sloth
Tutors -0.472 0.050 KBM vs. Competence
Tutors -0.468 0.074 Slips vs. Teamwork

RBM—Rule-based Mistakes; KBM—Knowledge-based Mistakes. An asterisk (*) next to a p-value indicates a statistically significant result.

Discussion

This study provides new insights into the relationships between generic and domain-specific medical error taxonomies, particularly in simulation-based medical education. By examining the correlations between Reason’s and Round’s classification systems, our findings underscore the significance of combining these approaches to address diverse educational and clinical needs. Using Kendall’s tau correlation analysis, we identified significant relationships that highlighted the value of integrating multiple classification systems to enhance error detection and training efficacy. The proposed correlations and their potential implications illustrated in Fig 7 suggest that relying solely on a single classification system could lead to missed opportunities for identifying and addressing specific medical errors during simulation scenarios.

Fig 7. Identified correlations between Reason’s and Round’s error taxonomies, with proposed interpretations and simulation training strategies.

Fig 7

Our results further demonstrated the critical role of communication and teamwork in error prevention. In line with existing literature, miscommunication remains the predominant cause of medical errors in both simulation and clinical settings, often linked to issues such as unclear medication orders or insufficient verification of patient details [27,28]. These findings reaffirm that addressing communication and teamwork deficiencies is paramount for improving patient safety and outcomes.

Simulation-based education offers a unique environment for understanding and mitigating medical errors, without jeopardizing patient safety. This setting allowed the participants to experience and analyze clinical events under controlled conditions, thereby fostering the development of error management strategies. Our study emphasizes the integration of principles such as Crisis Resource Management into simulation training, which has proven to be effective in enhancing communication, situational awareness, and role clarity among healthcare teams [28,29].

Given the identified correlations between the taxonomies, we propose the development of a future pilot simulation course to explore the practicality of combining Reason’s and Round’s taxonomies. Although this course was not a part of the current study, its implementation could generate additional data to validate our hypothesis that an optimal taxonomy for simulation-based medical education may involve a combination of these classification systems. This forward-looking approach is aligned with the broader goal of refining medical error taxonomies for enhanced applicability in simulation training.

Strengths of the study

The key strength of our study lies in its innovative focus on bridging generic and domain-specific medical error taxonomies. By leveraging data from high-fidelity simulation sessions, we identified meaningful associations with practical implications for healthcare education and training. The inclusion of both participant and tutor perspectives enriched the dataset, providing a comprehensive view of error occurrences and their implications for simulation-based learning. Moreover, this study highlighted the educational value of simulated environments, which serve as safe spaces for healthcare professionals to make mistakes. By fostering a non-punitive culture that views errors as learning opportunities, simulation training promotes the adoption of error management strategies and strengthens the overall safety culture in healthcare settings.

Limitations of the study

Although this study offers valuable insights, it has some limitations that must be acknowledged. First, the participants' and tutors' experiences reflected a specific subset of healthcare professionals involved in postgraduate simulation courses, which may not be representative of the broader medical community. Therefore, these findings warrant cautious generalization. Second, our reliance on survey data introduces the possibility of subjective bias, as responses may have been influenced by personal perceptions or by the safe learning environment of the simulation center and the particular simulation scenario. Additionally, the exploratory nature of the study precludes definitive conclusions on the suitability of the proposed taxonomy blend for simulation-based medical education.

Future research should aim to develop and validate a comprehensive medical error taxonomy specifically tailored to simulation-based medical education. This would involve the collection of new datasets to ensure the reliability and applicability of the taxonomy across diverse clinical and educational contexts. Such efforts will address the current gap and provide a standardized framework for analyzing and mitigating medical errors.

Conclusion

This study addresses a knowledge gap in medical error taxonomies for simulation-based training in medical fields related to Critical Care. By evaluating and identifying the correlations between generic and domain-specific taxonomies, we propose a blended framework that meets the unique needs of simulation education. Our findings emphasize the importance of simulation as a tool for understanding, managing, and preventing medical errors, thereby enhancing the overall healthcare performance regarding patient safety. Although further validation is required, the integration of Reason’s and Round’s medical error classification systems presents a promising direction for improving error management and fostering a culture of safety in healthcare.

Supporting information

S1 Appendix. Questionnaire employed for data collection in this study.

(DOCX)

pone.0317128.s001.docx (22.1KB, docx)

Acknowledgments

We would like to thank Editage (www.editage.com) for English language editing.

Data Availability

All relevant data for this study are publicly available from the Zenodo repository (https://doi.org/10.5281/zenodo.14591686).

Funding Statement

This research was partially supported by the Specific University Research grant provided to Masaryk University by the Ministry of Education of the Czech Republic (MUNI/A/1595/2023, MUNI/A/1551/2023) and also in part supported by the Ministry of Health of the Czech Republic (FNBr, 65269705).

References

  • 1.Raab SS, Grzybicki DM, Janosky JE, Zarbo RJ, Geisinger KR, Meier FA, et al. The Agency for Healthcare Research and Quality hospital labs studying patient safety consortium and error reduction. Pathol Patterns Rev. 2006;126(suppl_1):S12–S20. doi: 10.1309/7GWUCGFLYG2VFHJ1 [DOI] [Google Scholar]
  • 2.Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard medical practice Study I. N Engl J Med. 1991;324: 370–376. doi: 10.1056/NEJM199102073240604 [DOI] [PubMed] [Google Scholar]
  • 3.Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The quality in Australian health care study. Med J Aust. 1995;163: 458–471. doi: 10.5694/j.1326-5377.1995.tb124691.x [DOI] [PubMed] [Google Scholar]
  • 4.Dhingra DN. WHO flagship ‘A decade of patient safety 2020–2030’. Patient Saf.
  • 5.Lipsitz LA. Understanding health care as a complex system: the foundation for unintended consequences. JAMA. 2012;308: 243–244. doi: 10.1001/jama.2012.7551 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Carayon P. Human factors of complex sociotechnical systems. Appl Ergon. 2006;37: 525–535. doi: 10.1016/j.apergo.2006.04.011 [DOI] [PubMed] [Google Scholar]
  • 7.Alami H, Lehoux P, Denis JL, Motulsky A, Petitgand C, Savoldelli M, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag. 2020;35: 106–114. doi: 10.1108/JHOM-03-2020-0074 [DOI] [PubMed] [Google Scholar]
  • 8.Kim L, Lyder CH, McNeese-Smith D, Leach LS, Needleman J. Defining attributes of patient safety through a concept analysis. J Adv Nurs. 2015;71: 2490–2503. doi: 10.1111/jan.12715 [DOI] [PubMed] [Google Scholar]
  • 9.Kirwan B. Human error identification in human reliability assessment. Part 1: Overview of approaches. Appl Ergon. 1992;23: 299–318. doi: 10.1016/0003-6870(92)90292-4 [DOI] [PubMed] [Google Scholar]
  • 10.Chang A, Schyve PM, Croteau RJ, O’Leary DS, Loeb JM. The JCAHO patient safety event taxonomy: a standardized terminology and classification schema for near misses and adverse events. Int J Qual Health Care. 2005;17: 95–105. doi: 10.1093/intqhc/mzi021 [DOI] [PubMed] [Google Scholar]
  • 11.The difference between classification & taxonomy and why it matters. Bounteous [Internet]. [Cited 2024 Mar 1]. https://www.bounteous.com/insights/2020/11/18/difference-between-classification-taxonomy.
  • 12.Donaldson LJ, Fletcher MG. The WHO World Alliance for Patient Safety: towards the years of living less dangerously. Med J Aust. Internet. 2006. [Cited 2024 Mar 1];184(S 10). Available from: https://onlinelibrary.wiley.com/doi/abs/10.5694/j.1326-5377.2006.tb00367.x [DOI] [PubMed] [Google Scholar]
  • 13.WHO_IER_PSP_2010.2_eng.pdf [Internet]. [Cited 2023 Jun 19]. https://apps.who.int/iris/bitstream/handle/10665/70882/WHO_IER_PSP_2010.2_eng.pdf
  • 14.Taib IA, McIntosh AS, Caponecchia C, Baysari MT. A review of medical error taxonomies: A human factors perspective. Saf Sci. 2011;49: 607–615. doi: 10.1016/j.ssci.2010.12.014 [DOI] [Google Scholar]
  • 15.Fong A, Behzad S, Pruitt Z, Ratwani RM. A machine learning approach to reclassifying miscellaneous patient safety event reports. J Patient Saf. 2021;17: e829–e833. doi: 10.1097/PTS.0000000000000731 [DOI] [PubMed] [Google Scholar]
  • 16.Rasmussen J. Information processing and human-machine interaction. An approach to cognitive engineering. New York: North-Holland; 1987. [Google Scholar]
  • 17.Christianson MK, Sutcliffe KM, Miller MA, Iwashyna TJ. Becoming a high reliability organization. Crit Care. 2011;15: 314. doi: 10.1186/cc10360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Astion ML, Shojania KG, Hamill TR, Kim S, Ng VL. Classifying laboratory incident reports to identify problems that jeopardize patient safety. Am J Clin Pathol. 2003;120: 18–26. doi: 10.1309/8EXC-CM6Y-R1TH-UBAF [DOI] [PubMed] [Google Scholar]
  • 19.Shah RK, Kentala E, Healy GB, Roberson DW. Classification and consequences of errors in otolaryngology. Laryngoscope. 2004;114: 1322–1335. doi: 10.1097/00005537-200408000-00003 [DOI] [PubMed] [Google Scholar]
  • 20.Amalberti R, Brami J. ‘Tempos’ management in primary care: a key factor for classifying adverse events, and improving quality and safety. BMJ Qual Saf. 2012;21: 729–736. doi: 10.1136/bmjqs-2011-048710 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Suliburk JW, Buck QM, Pirko CJ, Massarweh NN, Barshes NR, Singh H, et al. Analysis of human performance deficiencies associated with surgical adverse events. JAMA Netw Open. 2019;2: e198067. doi: 10.1001/jamanetworkopen.2019.8067 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Stone A, Jiang ST, Stahl MC, Yang CJ, Smith RV, Mehta V. Development and interrater agreement of a novel classification system combining medical and surgical adverse event reporting. JAMA Otolaryngol Head Neck Surg. 2023;149: 424–429. doi: 10.1001/jamaoto.2023.0169 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Griffey RT, Schneider RM, Todorov AA, Yaeger L, Sharp BR, Vrablik MC, et al. Critical review, development, and testing of a taxonomy for adverse events and near misses in the Emergency Department. Acad Emerg Med. 2019;26: 670–679. doi: 10.1111/acem.13724 [DOI] [PubMed] [Google Scholar]
  • 24.Taib IA, McIntosh AS, Caponecchia C, Baysari MT. Comparing the usability and reliability of a generic and a domain-specific medical error taxonomy. Saf Sci. 2012;50: 1801–1805. doi: 10.1016/j.ssci.2012.03.021 [DOI] [Google Scholar]
  • 25.Reason J. Human error. 1st ed. Cambridge: Cambridge University Press; 1990. 320 p. ISBN: 978-0521314190. [Google Scholar]
  • 26.Vaughan S, Bate T, Round J. Must we get it wrong again? A simple intervention to reduce medical error. Trends Anaesth Crit Care. 2012;2: 104–108. doi: 10.1016/j.tacc.2012.01.007 [DOI] [Google Scholar]
  • 27.Rodziewicz TL, Houseman B, Hipskind JE. Medical error reduction and prevention. Florida: StatPearls Publishing; 2023. [PubMed] [Google Scholar]
  • 28.Street RL, Petrocelli JV, Amroze A, Bergelt C, Murphy M, Wieting JM, et al. How communication “failed” or “saved the day”: counterfactual accounts of medical errors. J Patient Exp. 2020;7: 1247–1254. doi: 10.1177/2374373520925270 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Østergaard D, Dieckmann P, Lippert A. Simulation and CRM. Best Pract Res Clin Anaesthesiol. 2011;25: 239–249. doi: 10.1016/j.bpa.2011.02.003 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Mukhtiar Baig

15 Jul 2024

PONE-D-24-13204: Exploring Medical Error Taxonomies and Human Factors in Simulation-based Healthcare Education (PLOS ONE)

Dear Dr. Schwarz,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Aug 29 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mukhtiar Baig, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript: 

"This research was partially supported by the Specific University Research grant provided to  Masaryk University by the Ministry of Education of the Czech Republic (MUNI/A/1595/2023, MUNI/A/1551/2023) and also in part supported by the Ministry of Health of the Czech Republic (FNBr, 65269705). We thank Mr. Radomír Beneš and the American Manuscript Editors for the English language editing that greatly improved the quality of this manuscript."

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. 

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: 

"This research was partially supported by the Specific University Research grant provided to Masaryk University by the Ministry of Education of the Czech Republic (MUNI/A/1595/2023, MUNI/A/1551/2023) and also in part supported by the Ministry of Health of the Czech Republic (FNBr, 65269705)."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. Thank you for stating the following financial disclosure: 

"This research was partially supported by the Specific University Research grant provided to Masaryk University by the Ministry of Education of the Czech Republic (MUNI/A/1595/2023, MUNI/A/1551/2023) and also in part supported by the Ministry of Health of the Czech Republic (FNBr, 65269705)."

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

5. Thank you for stating in your Funding Statement: 

"This research was partially supported by the Specific University Research grant provided to Masaryk University by the Ministry of Education of the Czech Republic (MUNI/A/1595/2023, MUNI/A/1551/2023) and also in part supported by the Ministry of Health of the Czech Republic (FNBr, 65269705)."

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.  Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement. 

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

6. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

7. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: i. The literature review is shallow.

ii. Future scope is missing.

iii. All abbreviations should only be used after their first definition.

iv. The section should present the methodology algorithmically.

v. The novelty of the proposed work should be established by comparing it with comparable work.

vi. Sections break directly into sub-sections, breaking the continuity of the manuscript.

vii. A sound literature review in tabulated form would be desirable to perform a meta-analysis of available work.

viii. There are areas that require further practical discussion and better linking of the results and discussion together. The research methodology is weak: why did you choose this methodology? The technical and practical discussion, as well as the comparison with recent previous work on this topic, should be thoroughly considered.

ix. The quality of the paper is weak in the technical discussion, and the explanatory results have been discussed before. Also, some parts of the comparison of figures do not make sense. The results are not sufficient, and the conclusion is weak. The references are not sufficient and need a comprehensive update, with the source list updated through 2024. Major revisions are recommended so that the authors address all of this adequately, through examination and validation, to meet the standards and strength of the journal.

x. Most sections appear to have been written with AI tools, which has obscured the sense of the information and its understanding.

xi. Both the discussion of practical application and the technical sense are missing. There is no discussion that conveys a true understanding.

xii. The introduction may provide a solid background on the importance of addressing medical errors and patient safety.

xiii. Consider expanding upfront on why existing taxonomies are inadequate for simulation-based settings to justify the need for your proposed approach.

xiv. It is clearly stated that a p-value under 0.1 and a correlation coefficient (tau) absolute value exceeding 0.1 were used as thresholds. However, the rationale behind choosing these thresholds could be elaborated. Provide a brief explanation for selecting these specific p-value and correlation coefficient thresholds. Example: “A p-value threshold of 0.1 was chosen to capture marginally significant associations, while a tau value exceeding 0.1 was used to identify moderately strong relationships.”

xv. Challenges related to error reporting are well articulated. However, it would be beneficial to propose some strategies for overcoming these barriers.

xvi. Suggest methods for mitigating gaps in the literature review and enhancing future taxonomy validation. Example: “Future research should include a systematic review of less widely known taxonomies and consider the application of natural language processing to identify relevant literature.”

xvii. The manuscript discusses classifications and their relationships, but there is insufficient analysis of the accuracy and reliability of these classifications. The methodology for evaluating the accuracy of the proposed or merged classifications is not explicitly detailed.

xviii. The accuracy of the proposed classifications needs to be rigorously analyzed. Include metrics such as precision, recall, and F1 score to evaluate the performance of the classifications. Benchmark these against existing classifications to demonstrate their relative effectiveness.

xix. The technical discussion is not currently high-level and lacks depth. Provide detailed descriptions of the methodologies and tools used to develop and implement the classification system in the simulation environment. Include discussions of any technical challenges encountered and how they were addressed.

xx. The proposed classification system needs stronger validation. Conduct validation studies in real-world simulation settings and present detailed results and analysis to support the efficacy of the modified taxonomy. Discuss how the findings from these validation studies enhance the credibility of your proposed approach.

xxi. Sections of the manuscript involving AI are not well integrated and lack relevance. Clearly explain how AI technologies were utilized in the study, specifying the AI methods used and their role in the classification process. Highlight the impact of AI on the accuracy, efficiency, or reliability of the classification system with specific examples.

xxii. The paper needs proofreading to correct any typographical, grammatical, or structural errors. Ensure that terms and abbreviations are defined upon first use and employed consistently throughout the manuscript. Address any incomplete or unclear phrases, such as “human limitatUnderstanding”.

xxiii. Elaborate on how the study’s findings will be practically applied in simulation training. Provide specific examples of scenarios or training modules where the insights will be implemented. Detail how “red flags” will be incorporated into debriefing sessions.

Reviewer #2: Please include tables and charts for the quantitative data collected through the questionnaire. Please clarify how the study could be replicated in both national and international contexts. Also, incorporate the suggested changes in the research manuscript, highlighted with the comments attached in the comment boxes.

Please refer to the reviewed manuscript for reference.

Reviewer #3: The methodology of the manuscript is robust, comprehensive, well-structured, and thorough in identifying critical gaps in the literature. The topic is extremely crucial to the progress of medical education.

However, the methodology can be improved by clearly stating the study design at the beginning of the section, as at first glance it appears to be a review.

It would be good to explain the various taxonomies and their usage in both types of medical errors clearly, instead of just in a table.

The number of participants should be stated in the methodology section instead of the statistical analysis.

The level of the learner should be clearly specified: is the study conducted on undergraduates, interns, or residents?

The discussion should be more detailed and comprehensive, comparing the study results with previous studies.

Much of the material elaborated in the limitations section could be part of the discussion.

The last paragraph of the limitations is a way forward; it should be separated from the limitations.

The literature is comprehensive and thoroughly cited; however, the manuscript can be improved by citing the latest research.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Ebrahim E. Elsayed

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PONE-D-24-13204.pdf

pone.0317128.s002.pdf (1.7MB, pdf)
PLoS One. 2025 Jan 17;20(1):e0317128. doi: 10.1371/journal.pone.0317128.r002

Author response to Decision Letter 0


13 Dec 2024

We have provided a rebuttal letter with detailed responses to all reviewers' comments.

Attachment

Submitted filename: ExpMedErr-Responses to comments of reviewers.pdf

pone.0317128.s003.pdf (263.5KB, pdf)

Decision Letter 1

Mukhtiar Baig

22 Dec 2024

Exploring Medical Error Taxonomies and Human Factors in Simulation-based Healthcare Education

PONE-D-24-13204R1

Dear Dr. Schwarz,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Mukhtiar Baig, Ph.D.

Academic Editor

PLOS ONE

Acceptance letter

Mukhtiar Baig

10 Jan 2025

PONE-D-24-13204R1

PLOS ONE

Dear Dr. Schwarz,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Mukhtiar Baig

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Questionnaire employed for data collection in this study.

    (DOCX)

    pone.0317128.s001.docx (22.1KB, docx)

    Data Availability Statement

    All relevant data for this study are publicly available from the Zenodo repository (https://doi.org/10.5281/zenodo.14591686).

