The article by Pearlman and colleagues,1 entitled “Program Director Perceptions of Proficiency in the Core Entrustable Professional Activities,” raises several interesting questions. Consistent with the concept of competency-based education, there is agreement in the medical education community that medical students should graduate and enter residency with a minimum level of competence in certain domains of knowledge and skills. The Association of American Medical Colleges has defined 13 entrustable professional activities (EPAs) that all medical students should be able to perform without direct supervision by the time of graduation.
The informative study by Pearlman et al1 suggests that the majority of program directors believed there are several EPAs that first-year residents cannot perform without direct supervision, even after 6 months of training. Similar findings were reported by Lindeman et al2 in the Journal of Surgical Education. This raises several important questions: Is there a gap in the medical school curriculum? Alternatively, are these EPAs not being attained because the expectation is not appropriate or feasible? Should there be an improved system of assessment in medical school? Should there be an improved system of assessment in residency?
Before addressing the question of whether there is a curricular gap in undergraduate medical education, it is incumbent upon the medical education community to determine whether the EPAs of concern are a reasonable expectation for medical school graduation. The following are the EPAs that program directors felt had not been achieved by medical school graduates1:
EPA 4: Enter and discuss patient orders/prescriptions;
EPA 7: Form clinical questions and retrieve evidence to advance patient care;
EPA 8: Give or receive a patient handover to transition care responsibility to another health care provider or team;
EPA 11: Obtain informed consent for tests and/or procedures that a day 1 intern is expected to perform or order without supervision; and
EPA 13: Identify system failures and contribute to a culture of safety and improvement.
With the implementation of the electronic health record and e-prescribing, students have limited opportunities to perform some of these activities in real-life situations and therefore lack the opportunity for deliberate practice. Regarding handoffs, medical students carry few patients on clerkship rotations, and some programs still do not have standardized handoff processes.3 Students may therefore have limited opportunity to practice handoffs and receive little guidance and feedback. Students have even more limited real-life experience with obtaining informed consent. Only a small number of the patients assigned to medical students will undergo procedures that require informed consent, so students are unlikely to gain significant experience with this skill during typical clinical clerkship experiences.
Even in the graduate medical education community, the identification of system failures and contribution to a culture of safety and improvement is not considered a beginner-level skill. Dr. Thomas Nasca, CEO of the Accreditation Council for Graduate Medical Education (ACGME), has presented data demonstrating that competency in systems-based practice is generally achieved in the later years of residency training, with gains typically made after those in the medical knowledge and patient care domains.4 While there is increasing emphasis on patient safety and quality improvement in the medical school curriculum, there is less focused education in real patient situations during the clinical years. Students are also rarely involved in quality activities, such as the review of specific cases or hospital performance improvement.5,6
If we accept the premise that all of the current EPAs are reasonable expectations for medical school graduation, we must consider where and how the curriculum fails to teach the required skills. There is currently no consensus across medical schools regarding teaching and assessment of these skills, and no evidence that graduates can consistently and successfully perform these EPAs. Additionally, in undergraduate medical education there is no generally accepted framework for the longitudinal observation of these skills. This contrasts with graduate medical education, where residents are observed and tracked to ensure that they are competent in their specialty-specific ACGME milestones. Creating structure and standardized guidelines for longitudinal observation and assessment would likely enhance the ability of preceptors to track student performance and competency in each of the 13 core EPAs.
If medical students are expected to achieve competency in all 13 EPAs, it may be useful to create a standardized assessment process that could be implemented across institutions several months prior to graduation, allowing time for remediation of those not meeting competency criteria. This would benefit both graduating students and the residency programs they will enter.
It is not surprising that Pearlman and colleagues1 did not find a significant correlation between EPA achievement and performance on the United States Medical Licensing Examination (USMLE). While the USMLE tests medical knowledge and some critical thinking skills, those are not the focus of the 13 core EPAs. Holistic review of residency applicants certainly has merit. At the same time, it is not a compensatory model (eg, successful performance in other competency domains important to the role of a physician does not obviate the need for assessment of medical knowledge). Ideally, the assessments would complement each other, with students demonstrating evidence of an adequate knowledge base through passing scores on the USMLE examinations and evidence of EPA competency through longitudinal teaching and assessment.
The findings of Pearlman et al1 should enrich the conversation among undergraduate and graduate medical educators about what should be included in core medical school graduation EPAs, methods to educate students so that they can achieve competency in the core EPAs, and how to assess that achievement. This suggests that further study and innovation are needed to move forward in the realm of evidence-based, clinically useful outcomes assessment.
References
1. Pearlman RE, Pawelczak M, Yacht AC, et al. Program director perceptions of proficiency in the core entrustable professional activities. J Grad Med Educ. 2017;9(5):xx-xx.
2. Lindeman BM, Sacks BC, Lipsett PA. Graduating students' and surgery program directors' views of the Association of American Medical Colleges Core Entrustable Professional Activities for entering residency: where are the gaps? J Surg Educ. 2015;72(6):e184-e192.
3. Wagner R, Koh NH, Patow C, et al. Detailed findings from the CLER national report of findings. J Grad Med Educ. 2016;8(2 suppl 1):35-54.
4. Nasca TJ. Graph presented at the 2012 and 2013 ACGME Annual Educational Conference. Concept described by Gonnella JS, et al. In: Assessment Measures in Medical Education, Residency and Practice. New York, NY: Springer; 1993:155-173.
5. Wong BM, Etchells EE, Kuper A, et al. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med. 2010;85(9):1425-1439.
6. Teigland CL, Blasiak RC, Wilson LA, et al. Patient safety and quality improvement education: a cross-sectional study of medical students' preferences and attitudes. BMC Med Educ. 2013;13:16.