Journal of General Internal Medicine. 2007 Jun 6;22(8):1150–1154. doi: 10.1007/s11606-007-0247-8

Patient Safety Knowledge and Its Determinants in Medical Trainees

B Price Kerfoot 1,3, Paul R Conlin 1,2,3, Thomas Travison 4, Graham T McMahon 1,2,3
PMCID: PMC2305739  PMID: 17551796

Abstract

Background

Patient safety is a core educational topic for medical trainees.

Objectives

To determine the current level and determinants of patient safety knowledge in medical trainees.

Design

Multi-institutional cross-sectional assessment of patient safety knowledge.

Participants

Residents and medical students from seven Harvard-affiliated residencies and two Harvard Medical School courses.

Measurements

Participants were administered a validated 14-item test instrument developed from the patient safety curriculum of the Risk Management Foundation (Cambridge, MA). The primary outcome measure was the amount of patient safety knowledge demonstrated by trainees on the validated test instrument. The secondary outcome measure was their subjective perception of their baseline knowledge in this domain.

Results

Ninety-two percent (640/693) of residents and medical students completed the patient safety test. Participants correctly answered a mean 58.4% of test items (SD 15.5%). Univariate analyses showed that patient safety knowledge levels varied significantly by year of training (p = 0.001), degree program (p < 0.001), specialty (p < 0.001), country of medical school (p = 0.006), age (p < 0.001), and gender (p = 0.050); all but the latter two determinants remained statistically significant in multivariate models. In addition, trainees were unable to assess their own knowledge deficiencies in this domain.

Conclusions

Patient safety knowledge is limited among medical trainees across a broad range of training levels, degrees, and specialties. Effective educational interventions that target deficiencies in patient safety knowledge are greatly needed.

KEY WORDS: safety, medical errors, medical education

INTRODUCTION

The Accreditation Council for Graduate Medical Education now requires that all U.S. residency programs teach and assess their residents in each of six general competencies: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.1,2 Many medical schools are now adopting similar competency frameworks.3–7 The competency of systems-based practice is defined as the awareness of and responsiveness to the larger context and system of health care and the ability to effectively call on system resources to provide care that is of optimal value.1 Patient safety is one of the core topics that fall under this competency domain. The 1999 release of the Institute of Medicine report To Err is Human highlighted the patient safety challenges within the U.S. healthcare system and dramatically elevated the importance of patient safety as a core educational topic for physician trainees.8

To properly design effective educational programs on patient safety for medical students and residents, it is critical to understand the current knowledge level of medical trainees in this topic area. To this end, we developed and administered a validated test instrument of patient safety knowledge to medical students and residents across a broad range of training levels, specialties, and hospitals and investigated the determinants of increased trainee knowledge in this domain. We also assessed the relationship of trainees’ subjective perception of their knowledge level with their actual performance on the test instrument.

METHODS

Study Participants

Residents and students from seven Harvard-affiliated residencies and two Harvard Medical School courses were enrolled between August and October 2005 (Table 1). Each course director or residency program director agreed to have their students and residents complete the test. The institutional review board at Harvard Medical School approved the protocol. To protect the residents’ and students’ rights as potential research subjects, participants were free to withhold their test data from the research data set.

Table 1.

Univariate Analyses of Patient Safety Test Scores

Determinant Number of trainees Percentage of test items correct [mean (SD)] p value
All trainees 640 58.4 (15.5)
Year of training 0.001*†
 Medical school year 2 171 55.8 (15.2)
 Medical school year 3 97 55.1 (15.9)
 Postgraduate year 1 83 57.1 (15.6)
 Postgraduate year 2 109 63.0 (15.4)
 Postgraduate year 3 123 60.6 (14.5)
 Postgraduate year 4 38 58.8 (15.7)
 Postgraduate year 5 19 62.0 (15.4)
Gender 0.050‡
 Female 302 57.1 (15.6)
 Male 338 59.9 (15.4)
Specialty <0.001*†
 Emergency medicine (1 residency program) 29 64.0 (11.5)
 Medicine (1 program) 106 64.7 (14.9)
 Obstetrics & Gynecology (2 programs) 47 57.1 (13.7)
 Surgery (3 programs) 190 58.4 (15.8)
 Undeclared/medical students (2 classes) 268 58.4 (15.5)
Degree program <0.001*†
 MD 528 58.4 (15.4)
 MD/PhD 33 63.4 (16.6)
 MD/MPH 32 65.4 (13.9)
 DMD 35 48.2 (14.0)
 Other 10 57.1 (9.5)
 No response 2 57.1 (20.2)
Country of medical school (residents only) 0.006*†
 United States 345 61.1 (15.2)
 International 21 50.7 (13.2)
 No response 6 54.7 (15.2)

*Univariate statistical calculations performed by ANOVA

†Significant differences in this variable were retained under multivariate modeling, which controlled for covariates

‡Univariate statistical calculations performed by t test

Development of a Validated Test Instrument

A content-validated curriculum on patient safety focusing on error prevention and systems theory was adopted from the Risk Management Foundation (RMF, Cambridge, MA) on the basis of its relevance to the systems-based practice competency and its perceived educational value for students and residents. A provisional set of 16 multiple-choice questions was developed by two investigators (BPK, GTM) based on this curricular content, and content validity of the items was established by two RMF content experts. To determine the psychometric properties of the validated test questions, the 16 items were pilot tested online with a group of 18 medical students (years 2–4) and 16 medical residents [postgraduate years (PGY) 1–3]. Point-biserial correlation and Kuder–Richardson 20 calculations were performed for each test item (Integrity, Edmonton, Alberta, Canada). Two poorly performing items were eliminated to optimize the reliability of the instrument. The resulting validated test of patient safety contained 14 items. At the beginning of the test, trainees were asked to provide demographic information and to rate their knowledge level on patient safety using a five-point Likert-type scale (1 = poor, 5 = excellent).
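
For readers who want to see how these item statistics are computed, here is a minimal sketch in Python (the study itself used the Integrity software; the response matrix and sample sizes below are hypothetical):

```python
import numpy as np

def item_analysis(responses):
    """Point-biserial correlations and KR-20 for a 0/1 response matrix.

    responses: examinees x items array (1 = correct, 0 = incorrect).
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    total = responses.sum(axis=1)

    # Corrected item-total (point-biserial) correlation: each item
    # against the total score of the remaining items.
    pbis = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(n_items)
    ])

    # Kuder-Richardson 20: internal-consistency reliability for
    # dichotomous items (sample variance used by convention here).
    p = responses.mean(axis=0)  # proportion answering each item correctly
    kr20 = (n_items / (n_items - 1)) * (
        1.0 - (p * (1.0 - p)).sum() / total.var(ddof=1)
    )
    return pbis, kr20

# Hypothetical pilot sample (random answers, only to exercise the code):
# 34 examinees x 16 provisional items, mirroring the pilot group size.
rng = np.random.default_rng(0)
demo = (rng.random((34, 16)) > 0.45).astype(int)
pbis, kr20 = item_analysis(demo)
print(f"KR-20 = {kr20:.2f}")
print("point-biserials:", pbis.round(2))
```

Items with low or negative point-biserial correlations are the natural candidates for removal, which is how eliminating the two poorly performing items improves the reliability of the remaining 14.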

Administration of the Test Instrument

The patient safety test was administered to 693 residents and medical students (two medical school classes and seven residency programs in three hospitals). Hyperlinks to the online test were distributed to the residents and students via email, and the test responses were collected online using the SurveyMonkey™ platform (Portland, OR, USA).

Outcome Measures and Statistical Methods

The primary outcome measure was the amount of patient safety knowledge demonstrated by trainees on the validated test instrument. The secondary outcome measure was their subjective perception of their baseline knowledge in this domain.

Reliability of the validated test instrument was measured utilizing Cronbach’s alpha, a measure of internal consistency.9 In addition, test–retest reliability (stability of measurement over time) was calculated by Pearson correlation using the scores of a subset of 272 participants who repeated the test 4 weeks later in the absence of any education on patient safety.10
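
A brief sketch of these two reliability calculations (hypothetical data; note that for dichotomous items such as these, Cronbach's alpha is mathematically equivalent to the KR-20 shown above):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an examinees x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical 0/1 responses: 272 examinees x 14 items.
matrix = (rng.random((272, 14)) > 0.4).astype(int)
print(f"alpha = {cronbach_alpha(matrix):.2f}")

# Test-retest reliability: Pearson correlation between total scores on
# the first administration and a (simulated) repeat 4 weeks later.
first = matrix.sum(axis=1)
repeat = np.clip(first + rng.integers(-2, 3, size=first.shape), 0, 14)
print(f"test-retest r = {np.corrcoef(first, repeat)[0, 1]:.2f}")
```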

Two-tailed Student’s t tests and ANOVA were employed to test the statistical significance (univariate) of differences in test scores between groups. Potential associations between test scores and subject characteristics were examined via graphical and tabular exploration and formally assessed using multiple linear regression analyses. Subjects were classified according to age, gender, year of training, degree (or degree program), specialty, and country of their medical school. Because some subject descriptors were related by definition (for instance, medical students’ specialties were classified as “undecided,” and no students were attending international medical schools), multivariate models were restricted to combinations of covariates that yielded valid results. Because of the low number of participants who self-rated their patient safety knowledge as “excellent” (rating of 5), self-ratings of 4 and 5 were collapsed prior to statistical analysis. Results were considered statistically significant if null hypotheses could be rejected at the 0.05 level. Statistical calculations were performed with Stata 9.0 (College Station, TX, USA) and SPSS for Windows 13.0 (Chicago, IL, USA).
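
To make the analytic approach concrete, here is a minimal sketch of a comparable univariate-then-multivariate analysis on simulated data; the variable names, effect sizes, and the scipy/statsmodels toolchain are illustrative assumptions, not the study's actual Stata/SPSS code:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "train_year": rng.choice(["MS2", "MS3", "PGY1", "PGY2", "PGY3"], n),
    "gender": rng.choice(["F", "M"], n),
    "age": rng.integers(23, 35, n).astype(float),
})
# Simulated percent-correct scores: a bump for senior residents plus noise.
bump = np.where(df["train_year"].isin(["PGY2", "PGY3"]), 5.0, 0.0)
df["score"] = 55.0 + bump + rng.normal(0, 15, n)

# Univariate tests: two-tailed t test (gender), one-way ANOVA (year).
t, p_t = stats.ttest_ind(df.loc[df.gender == "M", "score"],
                         df.loc[df.gender == "F", "score"])
f, p_f = stats.f_oneway(*[g["score"] for _, g in df.groupby("train_year")])

# Multivariate: linear regression with categorical covariates and
# second-year medical students (MS2) as the referent level.
fit = smf.ols(
    "score ~ C(train_year, Treatment(reference='MS2')) + C(gender) + age",
    data=df,
).fit()
print(f"t test p = {p_t:.3f}; ANOVA p = {p_f:.3f}")
print(fit.summary())
```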

RESULTS

Ninety-two percent (640/693) of residents and medical students completed the patient safety test; this total excludes two residents who elected to have their test data removed from the research database. Cronbach's alpha reliability for the 14-item online test instrument was 0.43; four-week test–retest reliability was 0.55.

Participants correctly answered a mean 58.4% of test items (SD 15.5%). Test scores varied significantly with year of training (p = 0.001), with medical students scoring significantly lower than residents [mean 55.5% (SD 15.3%) test items correct vs. 60.5% (SD 15.4%), respectively; p < 0.001]. Test scores also varied significantly with degree (p < 0.001), with MD/MPH recipients/candidates scoring the highest [mean 65.4% (SD 13.9)] and DMD candidates scoring the lowest [mean 48.2% (SD 14.0), Table 1]. Males performed slightly better than females [mean 59.9% (SD 15.4%) vs. 57.1% (SD 15.6%), respectively; p = 0.050]. Age was significantly correlated with test scores (Pearson r = 0.143, p < 0.001) but accounted for only 2.0% of test score variance (r² = 0.143² ≈ 0.020). Significant specialty-related differences were demonstrated (p < 0.001, ANOVA), with Internal Medicine residents scoring the highest [mean 64.7% (SD 14.9%)] and Obstetrics–Gynecology residents scoring the lowest [mean 57.1% (SD 13.7%)]. Residents from international medical schools performed significantly worse than residents from U.S. medical schools [mean 50.7% (SD 13.2%) vs. 61.1% (SD 15.2%), respectively; p = 0.006].

In a multivariate regression model, year of training remained significantly associated with test scores [p = 0.014 for those residents PGY-2 or higher in comparison to the referent (second-year medical students)]. Conversely, age and gender no longer showed any significant association with test scores when the effect of training year was controlled. When medical students (undeclared specialty) were excluded from the model, specialty remained significantly associated with test scores: Obstetrics–Gynecology residents scored significantly lower than emergency medicine residents (referent) (p = 0.025). Residents from U.S. medical schools significantly outperformed graduates from international medical schools when controlling for covariates (p = 0.004).

There was no significant association or linear trend between trainees’ perception of their own knowledge level in patient safety and their subsequent scores on the test (p = 0.37, ANOVA; p = 0.09, linear trend analysis).
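
A minimal sketch of this kind of comparison (group sizes and scores are invented for illustration; the collapsed self-rating takes levels 1 through 4, as described in Methods):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical percent-correct scores grouped by collapsed self-rating
# (1 = poor ... 4 = combined 4/5), with no built-in group differences.
groups = [55 + rng.normal(0, 15, size) for size in (90, 230, 240, 80)]

# One-way ANOVA: do mean test scores differ across self-rating levels?
f_stat, p_anova = stats.f_oneway(*groups)

# Linear trend: correlate each trainee's self-rating with his or her score.
ratings = np.concatenate([np.full(len(g), i + 1) for i, g in enumerate(groups)])
scores = np.concatenate(groups)
r, p_trend = stats.pearsonr(ratings, scores)
print(f"ANOVA p = {p_anova:.2f}; trend r = {r:.2f}, p = {p_trend:.2f}")
```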

DISCUSSION

This multi-institutional assessment of patient safety knowledge among medical trainees demonstrates that knowledge levels are limited across a broad range of training levels, degrees, and specialties. Even so, significant determinants of patient safety knowledge among trainees were identified. Patient safety knowledge levels varied significantly by year of training, degree, specialty, and country of medical school in both univariate and multivariate models. In addition, trainees were unable to assess their own knowledge deficiencies in this domain.

The results of our study argue strongly that effective educational interventions on patient safety are greatly needed given trainees’ substantial knowledge deficits in this topic. To date, a wide variety of pedagogical approaches have been attempted to educate trainees in patient safety: independent study projects,11 monthly workshops,12 outcome cards,13 root cause analyses by interprofessional teams,14 web-based teaching,15 etc. Further research is needed to evaluate the relative effectiveness of these varied methodologies.16

While it appears that all groups of trainees can benefit substantially from patient safety education, our study identified specific determinants associated with different levels of patient safety knowledge. It is not unexpected that more-senior residents (PGY-2 or higher) would have greater patient safety knowledge than medical students. For these residents, patient safety is an active day-to-day concern with concrete consequences for the patients under their care. In contrast, medical schools traditionally have placed little emphasis on patient safety education, especially in the first 2 years of training.17 Previous research has documented specialty-level differences in attitudes regarding error disclosure and patient safety, with surgeons reporting the highest support for the disclosure of serious errors to patients.18 Of note, attitudes and knowledge may not necessarily align: in our study, residents in Surgery and Obstetrics–Gynecology had the lowest levels of patient safety knowledge. While it is unclear why residents who completed medical school outside of the U.S. have significantly less patient safety knowledge, this subgroup of residents appears to have the most to gain from educational interventions targeting this domain.

The finding that trainees were unable to assess their own knowledge deficits on patient safety issues (and, by extension, their learning needs) is not unexpected given the large number of psychological studies documenting that humans have difficulty recognizing their own knowledge deficiencies.19,20 These results highlight the need to utilize validated assessment instruments to assist in distinguishing between self-perceived learning needs and trainees’ true knowledge deficits.

Several factors should be considered when interpreting the results of the study. The test was designed for program improvement and is not appropriate for determining individual competency standards. Although the psychometric properties of the instrument were constrained by the homogeneity of knowledge across the sample, the test performed well when utilized for group-level comparisons. In addition, although the patient safety curriculum and test instrument were developed to address issues germane to all specialties, it is possible that the differences in test scores between specialties reflect some systematic biases regarding patient-safety practices in those specialties. Strengths of the study include the methodologic rigor of its test construction and the inclusion of large numbers of participants across a broad range of specialties and levels of training.

In conclusion, trainees’ knowledge about patient safety issues is quite limited across a wide range of specialties, institutions, and training levels. In addition, trainees are unable to appreciate their knowledge deficits and learning needs in topics related to patient safety. Effective educational interventions that target deficiencies in patient safety knowledge are greatly needed.

Acknowledgements

We thank the RMF (Cambridge, MA) for use of their web-based educational materials; Robert B. Hanscom and Elizabeth G. Armstrong for their support of the program; Lucian L. Leape and Saul N. Weingart for editing and content validation of the patient safety test items; Ronald A. Arky, Stanley W. Ashley, Christopher C. Baker, Eugene Beresin, Lori R. Berkowitz, Charlie M. Ferguson, Joel T. Katz, Hope A. Ricciotti, William Taylor, and Carrie D. Tibbles for including their programs/courses in the web-based program; Daniel D. Federman for support in the conception of the program and assistance in its financial administration; and Susan Herlihy, Jessica E. Hyde, and Colleen E. Graham for administrative support. The views expressed in this article are those of the authors and do not necessarily reflect the position and policy of the United States Federal Government or the Department of Veterans Affairs. No official endorsement should be inferred. This study was supported by a grant from the RMF, Cambridge, MA. Additional support was obtained from the Research Career Development Award Program and research grants TEL-02-100 and IIR-04-045 from the Veterans Affairs Health Services Research & Development Service, the American Urological Association Foundation (Linthicum, MD), Astellas Pharma U.S., the National Institutes of Health (K24 DK63214 and R01 HL77234), and the Academy at Harvard Medical School. The study protocol was reviewed and approved by the institutional review board at Harvard Medical School.

Conflict of interest None disclosed.

Author Contributions Dr. Kerfoot had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Conception and design: Kerfoot, Conlin, Travison, and McMahon. Acquisition of data: Kerfoot and McMahon. Analysis and interpretation of data: Kerfoot, Conlin, Travison, and McMahon. Drafting of the manuscript: Kerfoot and Travison. Critical revision of the manuscript for important intellectual content: Conlin and McMahon. Statistical analysis: Kerfoot and Travison. Obtaining funding: Kerfoot and Conlin. Administrative, technical, or material support: Conlin, Travison, and McMahon. Supervision: Conlin and McMahon.

Appendix

Items on the Validated Test of Patient Safety Knowledge

Test items were developed based on the content-validated patient safety curriculum (error prevention and systems theory) from the RMF (Cambridge, MA). Difficulty refers to the percentage of participants who answered an item correctly.

1. What is the reported frequency of serious adverse events (injuries that result from medical care) among hospitalized patients in the United States?

(A) <1 percent.

(B) 1–5 percent.

(C) 6–10 percent.

(D) >10 percent.

Answer: B

Difficulty: 55%

2. Two different paralytic agents, one with a long half-life and the other with a short half-life, are packaged in similar glass vials with yellow caps. This is an example of

(A) a forcing function.

(B) a latent error.

(C) a medication error.

(D) a description error.

Answer: B

Difficulty: 55%

3. If the process of ordering and administering a medication has 20 steps, each with 99% accuracy, what is the likelihood of a medication error occurring each time the medication is ordered and administered?

(A) 0.2 percent.

(B) 1 percent.

(C) 2 percent.

(D) 20 percent.

Answer: D

Difficulty: 61%
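
(An editorial aside on the arithmetic behind this item: if each of the 20 steps succeeds independently with probability 0.99, the probability of at least one error is 1 − 0.99²⁰ ≈ 1 − 0.818 ≈ 18%, which is closest to option D, 20%.)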

4. A 21-year-old college student with a documented penicillin allergy is given doxycycline for yet another episode of chlamydia. He develops a rash from the medication. This incident is best described as

(A) a potential adverse drug event.

(B) a preventable adverse drug event.

(C) a nonpreventable adverse drug event.

(D) a latent error.

Answer: C

Difficulty: 42%

5. A harried resident connects the oxygen tubing to the intravenous (IV) line of a pediatric patient who subsequently dies from a massive gas embolus. This tragedy is best described as

(A) a latent error.

(B) an active failure.

(C) a forcing function.

(D) a knowledge deficit.

Answer: B

Difficulty: 72%

6. In general, blaming the individual who makes an error does not help fix the problem or prevent it in the future. Even so, under the framework proposed by error theorist James Reason, two types of misconduct by practitioners should be punished. One is intentional injury to a patient (or anyone else, for that matter), and the other is...

(A) injury from willful disobedience of practice guidelines.

(B) injury from provider incompetence.

(C) injury caused by substance abuse.

(D) injury from violation of an unworkable rule.

Answer: C

Difficulty: 24%

7. Which one of the following is the best example of an active failure?

(A) Different chemotherapy medications with similar bottles and labeling.

(B) An infusion pump that requires complex dosage calculations.

(C) Scheduling residents to work more than 60 hours in a row to cover a “power weekend.”

(D) Overlooking a pneumothorax on a postcentral line chest film.

Answer: D

Difficulty: 73%

8. Which one of the following is the best example of a latent error?

(A) Ordering of a chest radiograph on the wrong patient.

(B) Using bar codes as patient-identifiers.

(C) Confirming a drug dose on a computerized directory.

(D) Understaffing an intensive care unit.

Answer: D

Difficulty: 67%

9. Anesthesia machines are designed so that the tube carrying the anesthetic gas physically cannot be attached to the oxygen port. What Human Factors Principle does this best exemplify?

(A) Constraint.

(B) Forcing function.

(C) Reduced reliance on memory.

(D) Elimination of look-alikes.

Answer: B

Difficulty: 58%

10. A computerized medication order-entry system has been implemented which presents a limited range of doses to the ordering practitioner. What Human Factors Principle does this best exemplify?

(A) Constraint.

(B) Forcing function.

(C) Simplification.

(D) Reduced reliance on vigilance.

Answer: A

Difficulty: 43%

11. Most preventable errors are caused by

(A) Factual deficiencies.

(B) Process deficiencies.

(C) Performance deficiencies.

(D) Defensive practices.

Answer: B

Difficulty: 83%

12. What are latent errors?

(A) The injuries caused by medical management rather than the underlying disease.

(B) The faulty interrelationships between humans, the tools they use, and the environment in which they live and work.

(C) The unsafe acts of front-line workers.

(D) The hidden properties of a system that permit individuals to make mistakes.

Answer: D

Difficulty: 81%

13. Which one of the following is the most frequent error of daily life?

(A) An arithmetic miscalculation.

(B) Misreading of a label.

(C) Forgetting to turn off a switch.

(D) Mixing drug dosages.

Answer: A

Difficulty: 24%

14. When describing how errors occur, the proximal cause refers to which one of the following?

(A) The unsafe acts of front-line workers

(B) The individual responsible for the error.

(C) The apparent reason the error was made.

(D) The pharmaco-physiological interactions that occurred in the affected patient.

Answer: C

Difficulty: 79%

References

1. Accreditation Council for Graduate Medical Education. Outcome project. http://www.acgme.org. Cited 22 November 2006.
2. Leach DC. A model for GME: shifting from process to outcomes. A progress report from the Accreditation Council for Graduate Medical Education. Med Educ. 2004;38(1):12–4.
3. Mercer University School of Medicine. Medical student competencies. http://medicine.mercer.edu/news?news_id=87. Cited 22 November 2006.
4. University of California, San Francisco (UCSF) School of Medicine. Competencies and outcome learning objectives for the doctor of medicine program. http://medschool.ucsf.edu/curriculum/outcome_objs.aspx. Cited 22 November 2006.
5. Wayne State University School of Medicine. Medical school competencies. http://www.med.wayne.edu/educational_programs/form.asp. Cited 22 November 2006.
6. Dartmouth Medical School. Essential standards for matriculation, promotion and graduation. http://dms.dartmouth.edu/admin/olads/esmpg.shtml. Cited 22 November 2006.
7. Indiana University School of Medicine. Competency curriculum. http://meded.iusm.iu.edu/Programs/ComptCurriculum.htm. Cited 22 November 2006.
8. Corrigan J, Kohn LT, Donaldson MS, eds. To Err is Human: Building a Safer Health System (Institute of Medicine). Washington, DC: National Academies Press; 2000 (report released 1999).
9. Cohen J. Statistical Power Analysis for the Behavioural Sciences (2nd ed.). Hillsdale, NJ: Erlbaum; 1988.
10. Aiken LR. Psychological Testing and Assessment (10th ed.). Boston: Allyn and Bacon; 2000.
11. Allen E, Zerzan J, Choo C, Shenson D, Saha S. Teaching systems-based practice to residents by using independent study projects. Acad Med. 2005;80(2):125–8.
12. David RA, Reich LM. The creation and evaluation of a systems-based practice/managed care curriculum in a primary care internal medicine residency program. Mt Sinai J Med. 2005;72(5):296–9.
13. Tomolo A, Caron A, Perz ML, Fultz T, Aron DC. The outcomes card: development of a systems-based practice educational tool. J Gen Intern Med. 2005;20(8):769–71.
14. Johnson AW, Potthoff SJ, Carranza L, Swenson HM, Platt CR, Rathbun JR. CLARION: a novel interprofessional approach to health care education. Acad Med. 2006;81(3):252–6.
15. Kerfoot BP, Conlin PR, Travison T, McMahon GT. Web-based education in systems-based practice: a randomized trial. Arch Intern Med. 2007;167(4):361–6.
16. Kerfoot BP, Conlin PR, McMahon GT. Comparison of delivery modes for online medical education. Med Educ. 2006;40(11):1137–8.
17. Meiris DC, Clarke JL, Nash DB. Culture change at the source: a medical school tackles patient safety. Am J Med Qual. 2006;21(1):9–12.
18. Gallagher TH, Waterman AD, Garbutt JM, et al. US and Canadian physicians’ attitudes and experiences regarding disclosing errors to patients. Arch Intern Med. 2006;166(15):1605–11.
19. Dunning D. Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself. New York, NY: Psychology Press; 2005.
20. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121–34.
