Abstract
Background
Educational autopsy (EA) is an innovative technique designed to improve the quality of feedback provided to conference presenters. In response to survey fatigue and suboptimal feedback from online evaluations, this postlecture group debrief was adapted to emergency medicine residency didactics, with a goal of collecting timely, specific, and balanced feedback for presenters. Other aims include encouraging participants to think critically about educational methods and providing presenters with formal feedback for a portfolio or promotion packet. It was hypothesized that EA provides more specific and actionable feedback than traditional online evaluations deployed individually to conference attendees.
Methods
The authors analyzed 4 months of evaluations pre‐ and postimplementation of EA. Rate of completion, presence of comments, and types of comments were compared. Comments were coded as specific, nonspecific, or unrelated/unclear. Specific comments were further categorized as addressing audiovisual presentation design, speaker presentation style, or the educational methods of the session.
Results
A total of 46 of 65 (71%) preimplementation presentations eligible for evaluation received comments through traditional online evaluations. A total of 44 of 75 (59%) eligible postimplementation presentations generated comments via EA. Among presentations that received comments, none received nonspecific comments via EA, compared to 46% of lectures through traditional evaluations. EA generated specific comments for more presentations regarding presentation design (91% vs. 63%), presentation style (66% vs. 24%), and educational methods (48% vs. 28%). EA produced no unclear comments; traditional evaluations resulted in unclear comments for 15% of lectures.
Conclusions
EA generated more specific feedback for residency conference presenters, although a number of sessions were not evaluated by EA. While this limited analysis suggested that EA produced higher‐quality presenter feedback, it also showed a drop‐off in the proportion of didactic sessions that received narrative feedback.
Keywords: educational innovation, feedback, didactics, graduate medical education, assessment
NEED FOR INNOVATION
Feedback is essential for the development of health professions educators. Just as the master clinician does not achieve mastery overnight, the expert educator owes their expertise to years of practice and formative feedback.
The Accreditation Council for Graduate Medical Education (ACGME) requires structured didactics as part of the emergency medicine (EM) residency training experience. While there are no formal requirements for didactic feedback, the ACGME does require “ongoing feedback that can be used by residents to improve their learning in the context of provision of patient care or other educational opportunities.” 1
Historically, our residency conference audience received an online feedback form for each of several weekly presentations (Data Supplement S1, Table S1, available as supporting information in the online version of this paper, which is available at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10628/full). Because each weekly conference comprised 4 to 5 hours of clustered didactic sessions, this format often resulted in each faculty member and resident receiving six to eight evaluation requests at the conclusion of conference.
Both faculty and residents reflected that the evaluation forms often felt like busy work rather than useful feedback. Evaluations were often not completed. Those who did complete the forms frequently utilized only numerical scales but skipped opportunities for richer, qualitative, free‐text feedback. Free‐text commentary, when present, often contained short, nonspecific statements such as “good job.” Responses rarely suggested action plans for development as instructors. We felt that the lack of quality feedback diminished opportunities for presenter reflection and development.
BACKGROUND
Feedback, the “heart of medical education,” 2 is conceptualized as the communication of information on performance in comparison to a standard, in an attempt to create growth and engender progress. 3, 4 Burgess et al. 5 describe the features of quality feedback. These include essential elements such as timeliness, descriptiveness, and specificity. Generalizations, such as “well done,” are not informative and do not lead to evolution of skills. Quality feedback should also allow the recipient to react to and reflect upon the information they are receiving with the intent of forming an action plan for future development.
There is very little published on the subject of generating feedback for residency didactics; 6 most examples of such feedback were collected specifically to measure a new type of educational session. 7 When discussed, “survey fatigue” has been cited as an impediment. 8, 9 Our previous didactic evaluation system exemplified this problem, with poor response rates to large numbers of evaluation forms. Our experience that free‐text responses were often underutilized is also supported by the survey development literature. 10
OBJECTIVE OF INNOVATION
In response to challenges collecting feedback for didactics, we adapted a postconference debrief called educational autopsy (EA). The EA is facilitated by members of the education leadership team and involves all interested didactics participants. Its goal is to collect timely, specific, well‐rounded feedback for presenters to improve future iterations of their presentations and refine their presentation skills. It also encourages participants to think critically about educational methods and materials, so that they too will be equipped to deliver engaging and effective didactic presentations. Additionally, EA summary reports provide presenters with formal, written feedback which can contribute to an educator’s portfolio or promotion packet.
DEVELOPMENT PROCESS
We drew inspiration from a similar mechanism used within the University of Michigan Medical Education Scholars Program (MESP), where the EA model has been shown to create a higher quality and quantity of feedback for presenters and dually provide an educational experience for participants. 11 A faculty alumnus of the MESP worked with a group of residents and faculty to adapt guidelines for EA to the residency didactics setting. These guidelines have been periodically refined by faculty and residents to standardize EA approach and content (Appendix S1).
Educational autopsy is rooted in the framework of the experiential learning cycle. 12 Learners first experience the didactic session as it is delivered. Learners then reflect on that didactic, discussing elements that contribute to or detract from their learning. By externalizing this feedback, and hearing feedback given by other participants, learners are exposed to new concepts about what makes an effective educational session, which may be congruent or dissonant with their own habits as teachers. Finally, everyone involved with EA will give presentations in the future, with opportunities to apply these lessons learned.
IMPLEMENTATION PHASE
Our residency has utilized the EA since July 2016. It was gradually expanded as residents and faculty gained familiarity and gauged its feasibility, until it replaced evaluation surveys as the primary source for didactic feedback in August 2017. At the conclusion of each weekly conference, one assigned faculty facilitator and available residents convene to reflect on each presentation. Conference presenters dismiss themselves while the group discusses their sessions. Participants provide feedback on areas of effective instruction and areas for improvement. The faculty facilitator collates this feedback and enters it into one aggregate online survey for each presenter. The final narrative summary is visible only to program leadership and the presenter. Feedback surveys are stored and accessed through the same Web‐based residency management system that was previously used for evaluations.
Although organic discussion is encouraged, EA guidelines include a series of standardized questions to prompt feedback in the domains of presentation style, presentation design, and educational methods (Table S2). The discussion of presentation style addresses issues such as body language and speaking style as well as whether the presenter effectively engaged the audience. The presentation design component includes structure and chosen content as well as slide design compared to best practices (balance between text and images, color scheme, and quality of visual aids or multimedia). 13 Comments about educational methods refer to techniques used by the speaker with an emphasis on interactive learning, as espoused by Wolff et al. 14
OUTCOMES
To determine whether EA generates higher quality feedback for presenters, we compared 4 months of comments from traditional evaluations prior to EA (March 2016–June 2016) to 4 months of comments after full implementation of EA (March 2020–June 2020). Specific sessions, such as invited Grand Rounds speakers and residency town hall meetings, were intentionally not discussed during EA and were excluded. This project was given exempt status by the institutional review board.
During the preimplementation study period, one or more evaluations were completed for 62 of 65 eligible didactics (95%, mean ± SD = 11.29 ± 5.34 evaluations per presentation, range = 1–24). A total of 46 of 65 (71%) preimplementation presentations received one or more evaluations with comments (mean ± SD = 1.77 ± 1.85 comments per presentation, range = 0–7). A total of 44 of 75 (59%) eligible postimplementation presentations had evaluations generated by EA, all including comments. A qualitative comparison of narrative feedback from traditional evaluations and EA is summarized in Table 1. Two authors (MG, CB) coded comments into categories of nonspecific (NS) and specific (S). The authors’ initial coding showed 91% agreement for NS and S designations for traditional evaluations and 100% agreement for EA evaluations. Discrepancies were settled through discussion between coders. Specific comments were further sorted into comments about presentation design (PD), presentation style (PS), and educational methods (EM). A single comment could address multiple categories. Irrelevant comments were marked as unrelated/unclear meaning (U).
TABLE 1.
Percentage of presentations receiving each type of comment for traditional evaluations and EA
| Comment type | Traditional evaluations (n = 46) | EA (n = 44) | Postintervention change | Example |
|---|---|---|---|---|
| NS | 21 (46%) | 0 (0%) | –46% | “Great talk” |
| S (PD) | 29 (63%) | 40 (91%) | +28% | “Slides were simple and effective; one way to improve slide design would be to replace some of the text with photos” |
| S (PS) | 11 (24%) | 29 (66%) | +42% | “Concise, able to be very focused, calm. Didn't get flustered by occasionally slow responses. Supportive when calling on people with ECGs. Great gentle coaching” |
| S (EM) | 13 (28%) | 21 (48%) | +20% | “We liked the way you used teach back with visual aids” |
| U | 7 (15%) | 0 (0%) | –15% | “I was the speaker” |
Abbreviations: EA, educational autopsy; NS, nonspecific; S (EM), specific (educational methods); S (PD), specific (presentation design); S (PS), specific (presentation style); U, unclear meaning or unrelated to session content.
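To illustrate how such a comparison can be tabulated, the sketch below computes simple percent agreement between two coders and the per‐category proportions of presentations, mirroring the coding scheme described above. It is a minimal illustration only: the example codings, variable names, and helper functions are hypothetical, and the paper does not describe the authors' actual analysis tooling. A chance‐corrected statistic such as Cohen's kappa could be substituted for simple agreement if desired.

```python
# Illustrative sketch (not the authors' actual analysis code): percent agreement
# between two coders and per-category proportions, using the codes NS, S-PD,
# S-PS, S-EM, and U described above. All data below are hypothetical.

from collections import Counter

# Each element is the set of codes one coder assigned to a single presentation's
# comments; a presentation can receive multiple codes.
coder_1 = [{"S-PD"}, {"NS"}, {"S-PD", "S-PS"}, {"U"}, {"S-EM"}]
coder_2 = [{"S-PD"}, {"NS"}, {"S-PD"}, {"U"}, {"S-EM"}]

def percent_agreement(a, b):
    """Proportion of presentations on which the two coders assigned identical
    code sets (simple agreement, no correction for chance)."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def category_proportions(codings):
    """Share of presentations receiving each code at least once, analogous to
    the percentages reported in Table 1 (denominator = presentations with comments)."""
    counts = Counter(code for codes in codings for code in set(codes))
    return {code: n / len(codings) for code, n in counts.items()}

print(f"Agreement: {percent_agreement(coder_1, coder_2):.0%}")
print("Proportions:", category_proportions(coder_1))
```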
REFLECTIVE DISCUSSION
A comparison of narrative comments from traditional evaluation forms and EA shows that a greater proportion of EA feedback is specific and potentially actionable. Additionally, and by design, EA feedback is more balanced, addressing the domains of presentation style and educational methods, while traditional evaluations mostly focused on presentation design, when providing any specifics at all.
Our analysis shows that a larger proportion of presentations received no narrative comments through EA. This is likely predominantly due to administrative error (e.g., facilitators overlooking a presentation during EA, failure to upload a completed EA, or last‐minute scheduling changes resulting in an EA facilitator not receiving a prompt to evaluate a particular lecture). Of note, no EA was filed for 3 weeks of didactics for unclear reasons. While our study suggests EA provides higher‐quality feedback, attention should be dedicated to capturing narrative feedback consistently across lectures.
EA was initially implemented for in‐person didactics, but has more recently occurred virtually for conferences during the COVID pandemic. For this analysis, we selected the most recent EA data available. The data in this paper, therefore, reflect virtual EAs for virtual presentations. It is possible that this impacted EA audience participation, and the nature of their feedback, in ways which would be difficult to predict. However, the fundamental process of EA remained the same.
EA is most effective within certain size limitations. Too small of a group results in the same individuals repeatedly offering similar opinions, while too large of a gathering loses the intimacy of discussion and limits engagement from quieter participants. Anecdotally, we have found that groups of eight to 10 participants provide an ideal balance. Because EA adds time to didactics for busy residents and faculty, the goal is to keep sessions brief, generally 15 to 20 minutes.
EA as a novel tool for generating feedback is well suited for residency conference, but generalizable to any didactic session, including virtual teaching. While reception to EA has generally been favorable within our program, future research needs to focus on measuring its value to presenters and participants and its impact on use of best educational practices during conference.
CONFLICT OF INTEREST
The authors have no potential conflicts to disclose.
AUTHOR CONTRIBUTIONS
Max Griffith: study concept and design, acquisition of the data, analysis and interpretation of the data, drafting of the manuscript, critical revision of the manuscript. Charles Brown: study concept and design, analysis and interpretation of the data, drafting of the manuscript. Mary R. C. Haas: study concept and design, drafting of the manuscript, critical revision of the manuscript, study supervision. Robert D. Huang: study concept and design, drafting of the manuscript, critical revision of the manuscript, study supervision. Laura R. Hopson: study concept and design, acquisition of the data, drafting of the manuscript, critical revision of the manuscript, statistical expertise, study supervision.
Supporting information
Supplementary Material
ACKNOWLEDGMENTS
The authors thank Raven‐Olivia Kellum and Shemya Gilmore for their help obtaining and organizing evaluation data.
Griffith M, Brown C, Haas MRC, Huang RD, Hopson LR. Educational autopsy: An innovative structured debrief for residency didactic teaching. AEM Educ Train. 2021;5:e10628. 10.1002/aet2.10628
Supervising Editor: Jaime Jordan, MD, MAEd.
REFERENCES
1. ACGME Common Program Requirements (Residency). Accreditation Council for Graduate Medical Education website. 2020. Accessed May 10, 2020. https://www.ACGME.org/Portals/0/PFAssets/ProgramRequirements/CPRResidency2020.pdf
2. Branch WTJ, Paranjape A. Feedback and reflection: teaching methods for clinical settings. Acad Med. 2002;77(12 Pt 1):1185–1188.
3. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777–781.
4. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.
5. Burgess A, van Diggele C, Roberts C, Mellis C. Feedback in the clinical setting. BMC Med Educ. 2020;20(2):460.
6. Nelson AM, Traba CM. Redesigning evaluation forms improves didactic conference assessment. Med Educ. 2019;53(5):505–506.
7. Volerman A, Poeppelman RS. A pilot study of team‐based learning in one‐hour pediatrics residency conferences. BMC Med Educ. 2019;19(1):266.
8. Adams M. No Evaluation Left Behind: Nonresponse in Online Course Evaluations [dissertation]. Raleigh, NC: NC State University Libraries; 2010.
9. Porter S, Whitcomb M, Weitzer W. Multiple surveys of students and survey fatigue. New Dir Institutional Res. 2004;2004:63–73.
10. Boynton PM, Greenhalgh T. Selecting, designing, and developing your questionnaire. BMJ. 2004;328(7451):1312–1315.
11. Frohna AZ, Hamstra SJ, Mullan PB, Gruppen LD. Teaching medical education principles and methods to faculty using an active learning approach: the University of Michigan Medical Education Scholars Program. Acad Med. 2006;81(11):975–978.
12. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. 1st ed. Englewood Cliffs, NJ: Prentice Hall; 1984.
13. Alley M. Critical Error 7: Following the Defaults of PowerPoint. In: The Craft of Scientific Presentations. 2nd ed. New York, NY: Springer; 2013:129–170.
14. Wolff M, Wagner MJ, Poznanski S, Schiller J, Santen S. Not another boring lecture: engaging learners with active learning techniques. J Emerg Med. 2015;48(1):85–93.