Abstract
Introduction:
Chart review is central to understanding adverse events (AEs) in medicine. In this paper, we describe the process and results of educating chart reviewers assigned to evaluate dental AEs.
Methods:
We developed a web-based training program, “Dental Patient Safety Training,” which uses both independent and consensus-based curricula to teach identification of AEs recorded in electronic health records (EHRs) in the dental setting. Training included 1) didactic education, 2) skills training using videos and guided walkthroughs, 3) quizzes with feedback, and 4) hands-on learning exercises. Additionally, novice reviewers were coached weekly during consensus review discussions. TeamExpert was composed of two experienced reviewers, and TeamNovice included two chart reviewers in training. McNemar’s test, interrater reliability, sensitivity, specificity, positive predictive value, and negative predictive value were calculated to compare the two teams’ accuracy in identifying charts containing AEs at the start of training and again after seven months of consensus-building discussions.
Results:
TeamNovice completed the independent and consensus-development training. Initial chart reviews were conducted on a shared set of charts (n=51), followed by additional training including consensus-building discussions. There was marked improvement between the two teams in overall percent agreement, prevalence- and bias-adjusted kappa (PABAK), and diagnostic measures (sensitivity, specificity, PPV, and NPV) of reviewed charts from the Phase I training program to Phase II consensus building.
Conclusion:
This study detailed the process of training new chart reviewers and evaluating their performance. Our results suggest that standardized training and continuous coaching improve calibration between experts and newly trained chart reviewers.
Keywords: Dentistry, Patient Safety, Adverse Events, Chart Reviewer Training, Abstractor Training, Triggers
INTRODUCTION
Medical adverse events (AEs) are one of the leading causes of death in the US,1, 2 and are widely studied.3–5 Dental AEs have, until recently, not received much attention except for the few sentinel events that make the news.6, 7 To understand and prevent AEs, various methods of collecting and analyzing data around their occurrence have been utilized, including voluntary reporting systems,8 retrospective chart reviews,9–11 mining of administrative or claims data,12, 13 natural language processing of discharge summaries,14 and patient interviews and surveys.15, 16 Retrospective chart review, in which the medical record is evaluated for the conditions and documentation of an event, is one of the most common and respected methods for identifying AEs.17 For example, 25% of all scientific investigation studies published in emergency medicine relied on abstracted data from medical charts.18 In the dental setting, chart reviews mostly unearth unexpected post-surgical pain, hard tissue injury (e.g., tooth perforation) and soft tissue injuries (e.g., lacerations) as AEs.19 Chart reviews are often combined with other tools, such as structured queries for antecedent or trigger events.20–23 Dental triggers rely mostly on structured data, such as Current Dental Terminology (CDT) procedure codes, dental diagnoses, and medications taken by the patient. We have found that pain, soft tissue, hard tissue and nerve injuries are the most common types of AEs.19
A typical two-stage medical chart review involves nurses initially screening the charts, followed by two reviewers performing independent reviews. In case of any discrepancy, the supervising reviewer independently reviews the chart.24 In a chart review (with or without a review tool or triggered discovery23), physicians use their implicit clinical judgment, including their knowledge, skills, experience and a continuous critical analysis of the information contained within patients’ charts, to determine the presence of AEs.25, 26 Since this is a subjective process, inter-rater reliability between reviewers can vary, with studies finding low to fair agreement (kappa correlation coefficients between 0.39 and 0.60) across individuals.27–31 Physicians with greater experience in reviewing charts have higher agreement.24, 32
Methods to improve reliability, such as operationalizing variables, use of standardized abstraction forms, blinding (masking), and periodic monitoring and meetings with abstractors,18, 33 have been used to increase the validity and quality of data collected from medical charts. However, there is very little description of the methods used to train abstractors, and few studies on best practices in training them. Gilbert et al.,18 in their systematic review of emergency medicine studies, found that although 18% of the studies mention “abstractor training,” the methodologies used were not described. Given the paucity of scientific literature describing the methodology of training chart reviewers, we conducted a study to determine the impact of structured training on chart review for novice abstractors.
We developed the “Dental Patient Safety” training program using a commercial learning management system (LMS) for independent training. This web-based training program was used in conjunction with consensus-building discussions to onboard a novice team of reviewers. Our objectives were to 1) develop and test the training for chart reviewers to consistently identify AEs in the electronic health record (EHR); and 2) assess the quality of the reviews by novice reviewers, using the reviews by a group of experts as the gold standard.
METHODS
The study was reviewed and approved by the Institutional Review Board (IRB). As part of the study, a Patient Safety Toolkit was developed by eight research team members who are clinicians with extensive clinical and chart review experience. Through an iterative process of independent coding and consensus building discussion, the research team developed the definitions and protocols for defining and categorizing AEs.34, 35 For the research, we narrowly defined a dental AE as “physical harm associated with dental treatment within a timeframe relevant to the clinical scenario.”
The methods used to identify and characterize dental adverse events have been described earlier.21, 34, 36, 37 In short, we developed and validated electronic health record (EHR)-based triggers to find potential adverse events (AEs) in dental patients’ EHRs. AEs were then classified into 12 categories (see Box 2). When identifying AEs, reviewers also characterized them using a modified IHI severity category indicating whether severe or mild, permanent or temporary harm had occurred. Concordance was calculated between the individual reviewers within their respective teams (TeamExpert vs TeamNovice).
Box 2: Adverse Event: Classification and AEs identified in the study.
| # | Adverse Event | Potential AEs, Phase II (40/233 charts), n (%) |
|---|---|---|
| 1 | Allergy/Toxicity/Foreign Body response | 0 (0.0%) |
| 2 | Aspiration/Ingestion of Foreign Body | 0 (0.0%) |
| 3 | Infection | 14 (35.0%) |
| 4 | Wrong-site, wrong-procedure, wrong-patient | 0 (0.0%) |
| 5 | Bleeding | 2 (5.0%) |
| 6 | Pain | 12 (30.0%) |
| 7 | Hard tissue injury | 1 (2.5%) |
| 8 | Soft tissue injury | 9 (22.5%) |
| 9 | Nerve injury | 2 (5.0%) |
| 10 | Other systemic harm | 0 (0.0%) |
| 11 | Other oro-facial harm | 0 (0.0%) |
| 12 | Other harm | 0 (0.0%) |
| | Severity of Event | |
| | E1 | 11 (27.5%) |
| | E2 | 28 (70.0%) |
| | G1 | 1 (2.5%) |
| | G2 | 0 (0.0%) |
Chart Reviewers
Four chart reviewers, all general dentists and dental public health professionals, were split into two teams based on their expertise: TeamExpert (>3 years of chart review experience) and TeamNovice (limited or no chart review experience). TeamExpert consisted of highly experienced dentists who were well-calibrated chart reviewers; TeamNovice members had no prior experience with chart reviews.
Chart Reviewer Training
The training program “Dental Patient Safety” was designed and deployed on a commercial LMS. The training program comprises eight modules [see Box 1] that cover definitions of patient safety, AEs, contributing factors, degree of harm, detection and error,38, 39 WHO’s International Conceptual Patient Safety Framework,40 and AHRQ’s Patient Safety Network.41 Videos and guided walkthroughs honed the skills of chart reviewers in detecting and documenting AEs. At the end of each training component, participants completed self-assessments and received concrete feedback with detailed explanations to help shape their understanding. Lastly, participants were asked to complete six standardized cases and received feedback on their performance for each of those cases. Supporting materials included a standard operating procedure manual covering the chart review process, information regarding the data collection form, coding instructions, and a quick reference guide on the AE definition, dental AE classification (Box 2) and the dental AE severity rating scale (Figure 1). In addition to the independent training, all four reviewers participated in weekly meetings, at which a select set of previously distributed AEs was discussed to further develop a shared mental model of what is considered an AE in dentistry.42
Box 1: Dental Patient Safety Training Modules.
Patient Safety – Overview of Patient Safety, Definition of Patient Safety, WHO International Conceptual Patient Safety, Key Concepts from the WHO Patient Safety Curriculum (e.g., definitions of “adverse events”, “contributing factor”, “degree of harm”, “detection” and “error”), AHRQ Patient Safety Network and Quizzes
Adverse Events (AEs) in Dentistry – Overview of AEs in Dentistry, Definition of Dental AEs, Example of AEs and Non-AEs, Quality of Care Issues vs. AEs, Trigger Definition, Trigger List, Trigger Description and Logic and Quiz
Training: Scoring for Severity – AE Classification, AE Classification Examples, AE Severity, AE Severity Examples and Quiz
Navigating REDCap for chart reviews – Orientation to REDCap and the EHR, Overview of the Chart Review Process and, lastly, Quizzes on EHR and REDCap
Training: Identifying AEs within Charts – Video Demonstration of Identifying AEs within Charts and Calibration Cases
Training: Bringing it All Together – Documents for the Standardized Calibration Case on REDCap, REDCap Reviewer Guide and Quizzes Related to the Calibration Case
Self-Assessment for Chart Review - Additional Standardized Calibration Cases and Manual of Standard Operating Procedure for AEs, Study Protocols, Key Areas of Data Collection Forms and Coding Instructions.
Calibration: Reliability to Gold Standard – Brief Assessment of Calibration Cases and Results Comparison with Gold Standard Review Results.
Figure 1:

Dental Adverse Event Severity Scale (modified from Institute for Healthcare Improvement)
E1: Temporary (reversible or transient) minimal/mild harm to the patient (healed or resolved without permanent defect or disability)
E2: Temporary moderate to severe harm to the patient
F: Patient transferred to emergency room and/or hospital
G1: Permanent minimal/mild patient harm (healed with permanent defect or disability)
G2: Permanent moderate to severe patient harm
H: Intervention required to sustain life
I: Patient death
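For teams implementing a similar review workflow, the severity scale above can be encoded as a simple lookup table. The following is an illustrative Python sketch of our own; the codes and wording follow the scale as given, but the table and helper function are not part of the study’s tooling:

```python
# Modified IHI dental AE severity scale (Figure 1), encoded as a lookup
# table. The helper below is a hypothetical convenience for downstream
# tallies, not part of the study's protocol.
SEVERITY_SCALE = {
    "E1": "Temporary minimal/mild harm, resolved without permanent defect",
    "E2": "Temporary moderate to severe harm",
    "F":  "Patient transferred to emergency room and/or hospital",
    "G1": "Permanent minimal/mild harm (permanent defect or disability)",
    "G2": "Permanent moderate to severe harm",
    "H":  "Intervention required to sustain life",
    "I":  "Patient death",
}

def is_temporary_harm(code: str) -> bool:
    """E-level codes denote temporary (reversible or transient) harm."""
    return code in {"E1", "E2"}
```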
Comparisons between the performance of TeamNovice and TeamExpert were carried out in two stages. The first phase of the assessment, Phase I, occurred after the independent training on the LMS. In this phase, TeamNovice independently reviewed and documented AEs from 51 randomly chosen charts, then discussed their findings among themselves to consolidate a final list of AEs. At the same time, TeamExpert followed the same process for these charts. Results from TeamExpert were defined as the gold standard, against which the results of TeamNovice were compared. Both teams had multiple face-to-face meetings where the final list of AEs was discussed; no modifications were made to the AE list following these meetings. The second stage of assessment, Phase II, was conducted after seven months of consensus-building discussions that followed the LMS training. Again, both teams independently reviewed 233 specific charts identified by the automated triggers and documented the resulting AEs. As with the earlier reviews, each team member first reviewed the charts independently and then again in consultation with their teammate. Here we compare the performance of the teams.
Analysis
A descriptive analysis was performed to determine the total number of charts reviewed and the total number of adverse events identified. Information on the type and number of adverse events is found in Box 2. Note that Box 2 lists all potential AEs identified by any reviewer, before consensus was reached on whether each identified event was in fact an AE. In addition, the frequency and percent agreement for the type and severity of each adverse event were calculated. To determine the agreement between the expert reviewers, considered the gold standard, and the novice reviewers, both diagnostic measures and a correlation coefficient were computed. The diagnostic measures were sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); the correlation coefficient was the prevalence and bias-adjusted kappa (PABAK) (κ).43 PABAK was selected because our prevalence index was very high compared with the bias index. Lastly, all analyses were performed using R (version 4.0.2) statistical software.44
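The analyses were performed in R; purely to make the measures concrete, a minimal Python sketch (our own illustration, not the study’s code) computes them from a 2x2 confusion matrix with the expert review as gold standard:

```python
def review_metrics(tp, fp, fn, tn):
    """Agreement and diagnostic measures for a 2x2 chart-review matrix.

    tp: charts flagged as containing an AE by both teams
    fp: charts flagged by the novice team only
    fn: charts flagged by the expert (gold-standard) team only
    tn: charts flagged by neither team
    """
    n = tp + fp + fn + tn
    po = (tp + tn) / n                  # overall percent agreement
    return {
        "agreement": po,
        "pabak": 2 * po - 1,            # prevalence- and bias-adjusted kappa
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Applied to the Phase I counts in Table 1 (tp=3, fp=7, fn=5, tn=36), this reproduces the reported values: 76.5% agreement, PABAK 0.529, sensitivity 37.5%, specificity 83.7%, PPV 30.0%, and NPV 87.8%.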
RESULTS
In Phase I, TeamExpert and TeamNovice independently reviewed the same 51 patient charts for adverse events. Table 1 shows the resulting confusion matrix detailing the areas of concordance and discordance. Of the 51 charts reviewed, concordance was found in 39 patient charts, for an overall percent agreement of 76.5%. There were 12 (23.5%) patient charts where there was discordance between the expert and novice reviewers. The PABAK correlation coefficient was 52.9% (PABAK = 52.9%, 95%CI = [25.0–74.4]), representing “moderate” agreement according to Landis and Koch.45 The proportion of charts with AEs identified by TeamExpert that were also identified by TeamNovice yields a true positive rate of 37.5% (sensitivity = 37.5%, 95%CI = [8.5–75.5]). The proportion of charts classified as containing no AE by TeamExpert that were also classified as containing no AE by TeamNovice yields a true negative rate of 83.7% (specificity = 83.7%, 95%CI = [69.3–93.2]). The positive predictive value was 30.0% (PPV = 30.0%, 95%CI = [6.7–65.2]) and the negative predictive value was 87.8% (NPV = 87.8%, 95%CI = [73.8–95.9]).
Table 1:
Matrix for concordance and discordance pilot chart review (N=51)
| TeamNovice (Trained) | TeamExpert (Experts) - Gold standard | ||
| Charts with AE | Charts without AE | ||
| Charts with AE | 3 | 7 | |
| Charts without AE | 5 | 36 | |
In Phase II, TeamNovice and TeamExpert each reviewed 233 total patient charts for adverse events. Table 2 shows the confusion matrix for the concordant and discordant reviews. The overall percent agreement between TeamNovice and TeamExpert was 80.7% and the PABAK correlation coefficient was 61.4% (PABAK = 63.9%, 95%CI = [50.0–71.1]), representing “substantial” agreement. The true positive rate was 71.4% (sensitivity = 73.2%, 95%CI = [57.7–82.7]), the true negative rate was 83.6% (specificity = 83.6%, 95%CI = [77.3–88.7]), the PPV was 50.6% (PPV = 50.6%, 95%CI = [39.1–62.1]) and the NPV was 90.2% (NPV = 90.2%, 95%CI = [84.6–94.3]).
Table 2:
Matrix for concordance and discordance phase II chart review (N=249)
| TeamNovice (Trained) | TeamExpert (Experts) - Gold standard | ||
| Charts with AE | Charts without AE | ||
| Charts with AE | 40 | 29 | |
| Charts without AE | 16 | 148 + 16(NA) = 164 | |
Tables 3 and 4 compare the agreement between TeamNovice and TeamExpert on the classification and severity ratings of adverse events. Among the 40 agreed-upon AEs (see Box 2), the reviewers in TeamNovice and TeamExpert were in full agreement on the AE classification 57.5% (23 charts) of the time, in at least partial agreement 77.5% of the time, and in full disagreement 22.5% of the time. Additionally, Table 4 shows that among the 40 agreed-upon AEs, there was full agreement between the reviewers in TeamNovice and TeamExpert on 29 (72.5%) of the dental patient charts with AEs, while 11 (27.5%) charts had disagreement.
Table 3:
Comparison of AE Category Classification between TeamExperts and TeamNovice
| Charts with AEs (n=40) [N, %] | |
|---|---|
| No match between TeamExperts and TeamNovice (DIFFERENT) | 9 (22.5%) |
| Both teams used Identical AE category (SAME) | 23 (57.5%) |
| Teams were in partial agreement (PARTIAL) | 8 (20.0%) |
Table 4:
Comparison of AE Severity Rating between TeamExperts and TeamNovice
| Charts with AEs (n=40) [N, %] | |
|---|---|
| Both teams rated AE severity identically | 29 (72.5%) |
| Both teams had different severity ratings | 11 (27.5%) |
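The percentages reported for Tables 3 and 4 follow directly from the counts. As a quick arithmetic check (counts taken from the tables above; the variable names are our own):

```python
# Agreement on AE category (Table 3) and severity rating (Table 4)
# among the 40 consensus AEs.
n_aes = 40
same_cat, partial_cat, diff_cat = 23, 8, 9          # Table 3 counts
assert same_cat + partial_cat + diff_cat == n_aes

full_agreement = same_cat / n_aes                    # 23/40 = 57.5%
at_least_partial = (same_cat + partial_cat) / n_aes  # 31/40 = 77.5%
full_disagreement = diff_cat / n_aes                 # 9/40  = 22.5%

same_sev, diff_sev = 29, 11                          # Table 4 counts
severity_agreement = same_sev / n_aes                # 29/40 = 72.5%
```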
There was a marked improvement in overall percent agreement, PABAK correlation, and diagnostic measures (sensitivity, specificity, PPV, and NPV) of reviewed charts between both teams from the Phase I training program to Phase II consensus building. There was no clear pattern in the areas of discordance among the categories of AEs. However, we noted that TeamNovice and TeamExpert sometimes had a different understanding of which adverse events could be expected after a treatment, such as denture sores, or of whether an event occurred within a relevant clinical timeframe.
DISCUSSION
For this project, we used two previously developed trigger tools to facilitate the finding of dental AEs. Specifically, we used the Institute for Healthcare Improvement (IHI) trigger tool, which was developed to help identify AEs in the clinical setting.46 Originally developed for the inpatient setting, it has since been adapted for the outpatient setting47 and, recently, for the dental clinical arena.21 Triggers do not in themselves represent AEs; rather, “triggered charts” are more likely to document an AE. Hence, using triggered charts has proven to be more efficient for detecting AEs than conducting random chart reviews.48 Using triggers to find AEs is the first step of quality improvement,6 as it identifies harm to patients. Only once we identify harm can we measure it, analyze it, and explore which underlying systems need to be addressed to prevent such harm from occurring again.
As we have noted in previous work, the perceptions of leadership around how well patient safety is managed may be quite different than the perceptions from dental clinic staff.7 Making effective changes in underlying systems to diminish patient harm has to start with understanding the clinic’s current culture around patient safety.7 Using triggers to conduct targeted systematic chart review on a regular basis to unearth AEs would be a significant sea change for dentistry. However, as our colleagues in medicine have also discovered, it is an effective beginning towards the development of learning organizations,49, 50 and we hope of a “learning profession”.
Published case reports provide a window into understanding the nature and extent of dental AEs. However, these siloed and incomplete contributions to dentistry’s understanding of AEs in the dental office are not enough to fully understand all threats to dental patients’ safety.51, 52 More complete data around patient harm will help inform individual providers, entire clinics, and the profession about underlying systemic issues that need reform. Patient records are valuable data sources that can help identify AEs. Traditionally, a random sample of health records was selected for audit. Classen et al., however, found that a focused chart review identifies more AEs than a random chart review,48 and detecting AEs automatically in EHRs greatly facilitates this work.53–55 However, it is important to realize that our chart review process does not allow us to discern the underlying cause of the harm. There are many reasons for AEs to occur, including diagnostic failure,52 inexperience, or case complexity, and there is indeed still a lot to learn in the dental arena about why and how AEs happen.6, 21, 35, 56 In future work we will conduct an analysis of the AEs and determine contributing factors.
As dentistry enters this realm of quality improvement/patient safety, we envision clinics will run a few specific triggers against their EHR to identify specific patient safety care issues. Dental clinics may encounter turnover in their chart reviewers just as they start feeling comfortable with the process. New chart reviewers will lack the historical knowledge and consensus experience of the original chart reviewers. Collecting accurate and consistent data from retrospective chart reviews is challenging for any chart reviewer, especially when multiple chart reviewers are involved despite having standardized protocols and data collection forms. We believe that our Dental Patient Safety Training will facilitate onboarding of new team members.
Our training yielded successful results, which can be attributed to the combination of reading materials, video demonstrations of how to detect AEs, hands-on learning exercises and a unique interactive approach. The reading material supplied TeamNovice with cognitive knowledge, while videos, hands-on learning exercises and interactive quizzes with feedback provided the skills needed to use this knowledge. Another explanation for the training’s effectiveness may be the practice participants received.57 Attending weekly conference calls also enriched team members by shifting their decision-making process from an individual to a collaborative, team-based approach through a shared mental model.42, 58
Limitations of this study include the fact that the Dental Patient Safety Training program was developed and tested at one academic institution, using one EHR. Hence, results may not be easily generalizable to non-academic dental practice sites and dental practices that use a different EHR. We only measured training around potential AEs as identified by two triggers. We have developed a number of other validated EHR-based triggers37, 59 and have also conducted unstructured reviews using a random sample of charts.36 Additionally, we are starting to understand the importance of the voice of the patient in patient safety measurement.60, 61 We acknowledge that the total sample size of reviewers is small and that training included both an online component and consensus-building meetings. As such, it is difficult to determine the impact of each component separately.
We conclude that it is critical to develop standardized training approaches for calibrating chart reviewers to increase the reliability, validity and quality of collected data as one of the first important steps towards improving patient safety in the dental setting.
CONCLUSION
We developed a web-based dental patient safety training program to train inexperienced chart reviewers. Standardized training with continuous coaching appears to be an effective way to reach calibration between experienced and new chart reviewers.
Acknowledgements
Thank you to Dr. V. F. Delattre and Dr. N. Dhillon who served as chart reviewers for the research project. Research reported in this publication was supported by the National Institute of Dental & Craniofacial Research of the National Institutes of Health under Award Number R01DE022628.
Conflict of Interest and Sources of Funding
Research reported in this publication was supported by the National Institute of Dental & Craniofacial Research of the National Institutes of Health under Award Number R01DE022628. The authors declare that there is no conflict of interest.
REFERENCES
- 1.Makary MA, Daniel M. Medical error-the third leading cause of death in the US. BMJ 2016;353:i2139.
- 2.Mazer BL, Nabhan C. Strengthening the Medical Error “Meme Pool”. J Gen Intern Med 2019;34(10):2264–67.
- 3.Kizer KW, Blum LN. Safe Practices for Better Health Care. In: Henriksen K, Battles JB, Marks ES, Lewin DI, editors. Advances in Patient Safety: From Research to Implementation (Volume 4). Rockville (MD); 2005.
- 4.Leape LL. Scope of problem and history of patient safety. Obstet Gynecol Clin North Am 2008;35(1):1–10, vii.
- 5.Leape LL. Errors in medicine. Clin Chim Acta 2009;404(1):2–5.
- 6.Ramoni RB, Walji MF, White J, et al. From good to better: toward a patient safety initiative in dentistry. J Am Dent Assoc 2012;143(9):956–60.
- 7.Ramoni R, Walji MF, Tavares A, et al. Open wide: looking into the safety culture of dental school clinics. J Dent Educ 2014;78(5):745–56.
- 8.Milch CE, Salem DN, Pauker SG, et al. Voluntary electronic reporting of medical errors and adverse events. J Gen Intern Med 2006;21(2):165–70.
- 9.Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991;324(6):377–84.
- 10.Gawande AA, Thomas EJ, Zinner MJ, Brennan TA. The incidence and nature of surgical adverse events in Colorado and Utah in 1992. Surgery 1999;126(1):66–75.
- 11.Thomas EJ, Brennan TA. Incidence and types of preventable adverse events in elderly patients: population based review of medical records. BMJ 2000;320(7237):741–4.
- 12.Zhan C, Miller M. Administrative data based patient safety research: a critical review. Qual Saf Health Care 2003;12(Suppl 2):ii58–ii63.
- 13.Kaafarani HM, Rosen AK. Using administrative data to identify surgical adverse events: an introduction to the Patient Safety Indicators. Am J Surg 2009;198(5 Suppl):S63–8.
- 14.Melton GB, Hripcsak G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005;12(4):448–57.
- 15.Southwick FS, Cranley NM, Hallisy JA. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families. BMJ Qual Saf 2015.
- 16.Fowler FJ Jr., Epstein A, Weingart SN, et al. Adverse events during hospitalization: results of a patient survey. Jt Comm J Qual Patient Saf 2008;34(10):583–90.
- 17.Rosen AK. Are We Getting Better at Measuring Patient Safety?
- 18.Gilbert EH, Lowenstein SR, Koziol-McLain J, Barta DC, Steiner J. Chart reviews in emergency medicine research: Where are the methods? Ann Emerg Med 1996;27(3):305–8.
- 19.Walji MF, Yansane A, Hebballi NB, et al. Finding Dental Harm to Patients through Electronic Health Record-Based Triggers. JDR Clin Trans Res 2019:2380084419892550.
- 20.Rosen AK, Mull HJ, Kaafarani H, et al. Applying trigger tools to detect adverse events associated with outpatient surgery. J Patient Saf 2011;7(1):45–59.
- 21.Kalenderian E, Walji MF, Tavares A, Ramoni RB. An adverse event trigger tool in dentistry: a new methodology for measuring harm in the dental office. J Am Dent Assoc 2013;144(7):808–14.
- 22.Griffin FA, Classen DC. Detection of adverse events in surgical patients using the Trigger Tool approach. Qual Saf Health Care 2008;17(4):253–8.
- 23.Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care 2003;12 Suppl 2:ii39–45.
- 24.Localio AR, Weaver SL, Landis JR, et al. Identifying adverse events caused by medical care: degree of physician agreement in a retrospective chart review. Ann Intern Med 1996;125(6):457–64.
- 25.Kienle GS, Kiene H. Clinical judgement and the medical profession. J Eval Clin Pract 2011;17(4):621–27.
- 26.Brennan TA, Localio RJ, Laird NL. Reliability and validity of judgments concerning adverse events suffered by hospitalized patients. Med Care 1989;27(12):1148–58.
- 27.Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322(7285):517–19.
- 28.Brennan T, Leape L, Laird N, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. Qual Saf Health Care 2004;13(2):145–52.
- 29.Baker GR, Norton PG, Flintoft V, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ 2004;170(11):1678–86.
- 30.Davis P, Lay-Yee R, Briant R, et al. Adverse events in New Zealand public hospitals I: occurrence and impact. N Z Med J 2002;115(1167):U271.
- 31.Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38(3):261–71.
- 32.Ashton CM, Kuykendall DH, Johnson ML, Wray NP. An empirical assessment of the validity of explicit and implicit process-of-care criteria for quality assessment. Med Care 1999;37(8):798–808.
- 33.Horwitz RI, Yu EC. Assessing the reliability of epidemiologic data obtained from medical records. J Chronic Dis 1984;37(11):825–31.
- 34.Kalenderian E, Obadan-Udoh E, Maramaldi P, et al. Classifying Adverse Events in the Dental Office. J Patient Saf 2017.
- 35.Maramaldi P, Walji MF, White J, et al. How dental team members describe adverse events. J Am Dent Assoc 2016;147(10):803–11.
- 36.Tokede O, Walji M, Ramoni R, et al. Quantifying Dental Office-Originating Adverse Events: The Dental Practice Study Methods. J Patient Saf 2017.
- 37.Walji MF, Yansane A, Hebballi NB, et al. Finding Dental Harm to Patients through Electronic Health Record-Based Triggers. JDR Clin Trans Res 2020;5(3):271–77.
- 38.Key Concepts from the WHO Patient Safety Curriculum Guide 2011. “http://www.who.int/patientsafety/education/curriculum/course1a_handout.pdf”. Accessed July 09 2016.
- 39.Emanuel L, Berwick D, Conway J, et al. What Exactly Is Patient Safety? In: Henriksen K, Battles JB, Keyes MA, Grady ML, editors. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 1: Assessment). Rockville (MD); 2008.
- 40.The World Alliance for Patient Safety Drafting Group, Sherman H, Castro G, et al. Towards an International Classification for Patient Safety: the conceptual framework. Int J Qual Health Care 2009;21(1):2–8.
- 41.Agency for Healthcare Research and Quality Patient Safety Network (PSNet). “https://psnet.ahrq.gov/”. Accessed Jul 09 2016.
- 42.Klimoski R, Mohammed S. Team Mental Model: Construct or Metaphor? J Manage 1994;20(2):403–37.
- 43.Cunningham M. More than just the kappa coefficient: a program to fully characterize inter-rater reliability between two raters. Paper presented at: SAS Global Forum, 2009; Gaylord National Resort and Convention Center, Washington, DC.
- 44.R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2020. “https://www.R-project.org/”. Accessed 04/10/2021.
- 45.Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33(1):159–74.
- 46.Griffin F, Resar R. IHI Global Trigger Tool for Measuring Adverse Events. Institute for Healthcare Improvement Innovation Series White Paper 2009.
- 47.Resar R. Outpatient Adverse Event Trigger Tool. Cambridge, MA: Institute for Healthcare Improvement (in association with Kaiser Permanente and Baylor Health Care System) 2006. “http://www.ihi.org/resources/Pages/Tools/OutpatientAdverseEventTriggerTool.aspx”. Accessed 4/11/2016.
- 48.Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood) 2011;30(4):581–9.
- 49.Friedman C, Rubin J, Brown J, et al. Toward a science of learning systems: a research agenda for the high-functioning Learning Health System. J Am Med Inform Assoc 2015;22(1):43–50.
- 50.Senge PM. The Fifth Discipline: The Art and Practice of the Learning Organization. Rev. and updated ed. New York: Doubleday/Currency; 2006.
- 51.Obadan E, Kalenderian E, Ramoni RB. Case reports hailed. J Am Dent Assoc 2014;145(9):912–14.
- 52.Obadan EM, Ramoni RB, Kalenderian E. Lessons learned from dental patient safety case reports. J Am Dent Assoc 2015;146(5):318–26.e2.
- 53.Landrigan CP, Parry GJ, Bones CB, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010;363(22):2124–34.
- 54.Hripcsak G, Bakken S, Stetson PD, Patel VL. Mining complex clinical data for patient safety research: a framework for event discovery. J Biomed Inform 2003;36(1–2):120–30.
- 55.Bates DW, Evans RS, Murff H, et al. Detecting adverse events using information technology. J Am Med Inform Assoc 2003;10(2):115–28.
- 56.Obadan EM, Ramoni RB, Kalenderian E. Lessons learned from dental patient safety case reports. J Am Dent Assoc 2015;146(5):318–26.e2.
- 57.Allison JJ, Wall TC, Spettell CM, et al. The art and science of chart review. Jt Comm J Qual Improv 2000;26(3):115–36.
- 58.Johnson-Laird PN. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press; 1983.
- 59.Kalenderian E, Obadan-Udoh E, Yansane A, et al. Feasibility of Electronic Health Record-Based Triggers in Detecting Dental Adverse Events. Appl Clin Inform 2018;9(3):646–53.
- 60.Obadan-Udoh E, Van der Berg S, Ramoni R, Kalenderian E, G. W. Patient-reported Dental Safety Events. J Dent Res 2017;96(0569).
- 61.Obadan-Udoh E, Nayudu A, Yansane A, et al. Engaging Patients as Vigilant Partners in Safety Reporting at the Dental Office. J Dent Res 2018;97(2860361).
