AEM Education and Training. 2023 Feb 5;7(1):e10842. doi: 10.1002/aet2.10842

Implementation of the SIMPL (Society for Improving Medical Professional Learning) performance assessment tool in the emergency department: A pilot study

Mary R C Haas 1, Mallory G Davis 1, Carrie E Harvey 1, Rob Huang 1, Kirstin W Scott 2, Brian C George 3, Gregory M Wnuk 3, John Burkhardt 4
PMCID: PMC9899600  PMID: 36777102

Abstract

Background

Feedback and assessment are difficult to provide in the emergency department (ED) setting despite their critical importance for competency‐based education, and traditional end‐of‐shift evaluations (ESEs) alone may be inadequate. The SIMPL (Society for Improving Medical Professional Learning) mobile application has been successfully implemented and studied in the operative setting for surgical training programs as a point‐of‐care tool that incorporates three assessment scales in addition to dictated feedback. SIMPL may represent a viable tool for enhancing workplace‐based feedback and assessment in emergency medicine (EM).

Methods

We implemented SIMPL at a 4-year EM residency program during a pilot study from March to June 2021, targeting observable activities such as medical resuscitations and related procedures. Faculty and residents underwent formal rater training prior to launch and, at the end of the pilot, were asked to complete surveys regarding the SIMPL app's content, usability, and future directions.

Results

A total of 36 of 58 faculty (62%) completed at least one evaluation, for 190 evaluations overall and an average of three evaluations per faculty member. Faculty initiated 130/190 (68%) of evaluations and residents initiated 60/190 (32%). Ninety-one percent included dictated feedback. A total of 45 of 54 residents (83%) received at least one evaluation, with an average of 3.5 evaluations per resident. Residents generally agreed that SIMPL increased the quality of feedback received and reported valuing the dictated feedback, but they generally did not value the numerical feedback. Relative to the residents, faculty overall responded more positively toward SIMPL. The pilot generated several suggestions to inform optimization of the next version of SIMPL for EM training programs.

Conclusions

The SIMPL app, originally developed for use in surgical training programs, can be implemented in EM residency programs, garnered positive support from faculty, and may provide important adjunct information beyond current ESEs.

Keywords: assessment, competency‐based medical education, direct observation, emergency medicine education, feedback, graduate medical education, mobile applications, residency, smartphone, technology‐enhanced education

NEED FOR INNOVATION

Feedback and assessment are critically important components of clinical teaching and competency-based medical education (CBME). 1 , 2 However, features of the emergency department (ED) such as interruptions, time limitations, and high patient acuity are barriers to their provision. 3 Additionally, trainees report receiving less high-quality feedback than faculty report providing, suggesting that trainees may not always recognize feedback when it is given. 4 A point-of-care tool that efficiently generates and explicitly signals feedback in the ED could address this gap.

BACKGROUND

Assessment allows educators to make judgments of learning (summative assessment) and to provide information for learning (formative assessment), using feedback as a catalyst. 2 , 5 , 6 High-quality feedback is timely, specific, actionable, descriptive, and based on direct observation. 7 , 8 Learners most value feedback provided immediately after an observed activity (OA), which allows for real-time performance improvement. 4 , 5 , 6 , 9 , 10 Traditionally, emergency medicine (EM) trainees receive feedback and assessment via end-of-shift written evaluations (ESEs) that commonly incorporate a checklist based on the EM core competencies. 11 , 12 , 13 , 14 Written evaluations, if submitted at all, often provide limited or vague commentary. 15 , 16 Additionally, stand-alone use of milestone-based ESEs may overestimate resident proficiency. 12 An optimized feedback and assessment tool for the ED is needed.

Originally developed for use in surgical training, the Society for Improving Medical Professional Learning (SIMPL) smartphone application aims to enhance the frequency and timeliness of workplace-based feedback and assessment by providing a point-of-care tool for faculty to use immediately following direct observation during an operative case. 17 A SIMPL evaluation involves completion of three assessment scales and dictation of feedback. The feasibility and clinical applicability of SIMPL in the operative setting have been previously documented, but use of this tool in other settings has not yet been studied. 17 , 18 , 19 , 20 , 21 Implementation of the SIMPL application in the ED setting may enhance feedback and assessment.

OBJECTIVES OF THE INNOVATION

This pilot study aimed to implement the SIMPL feedback and assessment tool in the ED setting.

DEVELOPMENT PROCESS

The study team collaborated with SIMPL to adapt its application for ED use at a single academic institution. The SIMPL tool has substantial validity and reliability evidence in surgery and uses three assessment scales. 17 , 22 The first is a 4-point "Zwisch" scale, a framework for assessing faculty guidance (and its inverse, trainee autonomy, which is conceptually identical to retrospective entrustment; Figure S1). 22 Anchors on the scale range from "show and tell" to "supervision only." 18 These mirror the first four levels of other supervision scales commonly used for entrustable professional activities (EPAs). 23 , 24 The other two SIMPL scales assess overall resident performance (a prospective entrustment scale) and case complexity (Figures S2 and S3, respectively).
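For readers who think in code, the three scales can be pictured as simple ordinal enumerations. The Python sketch below is purely illustrative: the class and level names are our assumptions rather than SIMPL's actual schema, and only the Zwisch endpoints ("show and tell," "supervision only") are named in the text above.

    from enum import IntEnum

    class ZwischLevel(IntEnum):
        """4-point faculty-guidance scale; its inverse is trainee autonomy."""
        SHOW_AND_TELL = 1     # maximal guidance: faculty demonstrates key portions
        ACTIVE_HELP = 2       # assumed intermediate anchor
        PASSIVE_HELP = 3      # assumed intermediate anchor
        SUPERVISION_ONLY = 4  # minimal guidance: trainee performs, faculty observes

    class Performance(IntEnum):
        """Overall resident performance (prospective entrustment); labels assumed."""
        UNPREPARED = 1
        INEXPERIENCED = 2
        INTERMEDIATE = 3
        PRACTICE_READY = 4
        EXCEPTIONAL = 5

    class CaseComplexity(IntEnum):
        """Relative case complexity; labels assumed."""
        EASIEST = 1
        AVERAGE = 2
        HARDEST = 3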

In adapting the SIMPL application for the ED, we created a list of common OAs, such as medical resuscitations and related procedures, chosen as the closest analogues to the previously validated operative/procedural assessments in surgical training. Although not technically a procedure, a medical resuscitation similarly requires a systematic approach and involves direct faculty observation of key portions, lending itself well to workplace-based assessment. Following observation of the key components of an OA, faculty or trainees can initiate an evaluation, which triggers a notification to the corresponding individual's device. Both faculty and residents select scores on the same scales, with residents self-assessing and faculty assessing the resident. Faculty can then dictate an audio recording accessible to the resident, a feature valued by surgical trainees. 19 The faculty member must complete the evaluation within 72 h or it automatically expires; prior studies have noted a decline in the clarity and detail of feedback after this time. 20 The trainee can access the evaluation immediately upon completing the self-assessment or after 72 h.
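This evaluation lifecycle (either party initiates, the faculty member must complete within 72 h, and the trainee gains access after self-assessing or once the window closes) can be summarized in a minimal sketch, again in Python and again with invented names; the SIMPL app exposes no such API, and only the 72-h window and access rules are taken from the description above.

    from datetime import datetime, timedelta
    from typing import Optional

    COMPLETION_WINDOW = timedelta(hours=72)  # faculty completion window from the text

    class SimplEvaluation:
        """Illustrative model of one evaluation; not SIMPL's actual schema."""

        def __init__(self, initiated_at: datetime):
            self.initiated_at = initiated_at
            self.faculty_scores: Optional[dict] = None   # Zwisch, performance, complexity
            self.self_assessment: Optional[dict] = None  # resident rates the same scales
            self.dictation: Optional[bytes] = None       # optional audio feedback

        def is_expired(self, now: datetime) -> bool:
            # An evaluation the faculty member has not completed within 72 h expires.
            return self.faculty_scores is None and now - self.initiated_at > COMPLETION_WINDOW

        def resident_can_view(self, now: datetime) -> bool:
            # The trainee sees the evaluation immediately after self-assessing,
            # or once the 72-h window has elapsed.
            return self.self_assessment is not None or now - self.initiated_at > COMPLETION_WINDOW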

IMPLEMENTATION PHASE

Our pilot occurred in the University of Michigan adult ED with postgraduate year (PGY)-1 to -4 residents from March 1, 2021, to June 1, 2021. Thirty-minute virtual training sessions were held during a faculty meeting and an EM residency conference to highlight best practices and characteristics of high-quality feedback (presentation available upon request from the authors). To improve inter-rater reliability, the sessions also included videos of a faculty member and a trainee conducting a medical resuscitation corresponding to each Zwisch level. Of 58 faculty working clinical shifts with residents during the study period, 54 (93%) completed the training. Of 64 total residents, 62 (97%) completed the training.

Participation in the SIMPL pilot was voluntary and uncompensated and occurred in addition to existing incentivized feedback mechanisms. In our existing system, faculty may provide verbal feedback during or at the conclusion of a shift, although this feedback is not documented. For documented feedback, faculty are asked to complete ESEs automatically triggered after each shift via Medhub, an online residency management system. ESEs consist of a list of EM subcompetencies rated on a 5-point entrustment scale, plus an optional comment box. Faculty may also receive procedure-specific evaluations triggered when residents log procedures, often in a significantly delayed fashion. Residents consistently cite a desire for additional feedback. Faculty were therefore encouraged to use SIMPL specifically for medical resuscitations: faculty tend to be present and directly observing residents during key portions of these cases, and Medhub generates no resuscitation-specific evaluations that faculty might view SIMPL as duplicating.

To encourage participation, the study team sent biweekly reminders to faculty and resident listservs that displayed the top users for the week and highlighted examples of usage. Individual reminders via email and pages were also sent to faculty and residents on shift.

OUTCOMES

Usage data

A total of 36/58 (62%) faculty completed at least one evaluation, for 190 evaluations overall. Of these, faculty initiated 130/190 (68%) and residents initiated 60/190 (32%). In the first, second, and third months of the pilot, 82, 44, and 64 evaluations were completed, respectively. On average, each faculty member completed three evaluations. Of all SIMPL evaluations, 91% included dictated feedback.

Of the 64 total residents in the program, 54 rotated through the study site during the pilot and were eligible to receive SIMPL evaluations. A total of 45/54 (83%) residents received at least one SIMPL evaluation. On average, residents received 3.5 SIMPL evaluations. Most evaluations were for medical resuscitations (n = 124, 65%), trauma resuscitations (n = 17, 9%), and endotracheal intubation (n = 14, 7%). Other less commonly evaluated procedures included arterial line (n = 7, 4%), echocardiogram (n = 7, 4%), cardiac pacing (n = 6, 3%), central line (n = 3, 2%), lung ultrasound (n = 3, 2%), cardioversion (n = 2, 1%), difficult airway (n = 2, 1%), chest tube (n = 1, 1%), FAST exam (n = 1, 1%), pediatric trauma (n = 1, 1%), and procedural sedation (n = 1, 1%).

Survey data

The study team created and piloted individual faculty and resident surveys (available as supplemental material accompanying the online article) to assess content, usability, and future directions. These surveys included items from previous SIMPL-based studies. 19 The surveys were voluntary and anonymous. The local institutional review board deemed this study exempt (HUM00193638).

Survey response rates were 75% (48/64) for residents and 52% (30/58) for faculty. Resident mean responses to each attitudinal survey question, grouped by PGY cohort, are illustrated in Figure 1A. Residents generally agreed with the statements that "SIMPL increased the quality of feedback received" and that they "value the dictated feedback." Residents disagreed on average that they "value the numerical feedback" of the SIMPL tool. Figure 1B illustrates mean faculty responses grouped by years since residency graduation. Overall response rates and the distribution of responses for residents and faculty for each question are detailed in Table 1. Relative to the residents, the mean responses from faculty were more positive.

FIGURE 1. (A) Resident attitudes toward SIMPL following the pilot, by postgraduate year (PGY-1 to -4). (B) Faculty attitudes toward SIMPL following the pilot, by years since completing residency. Responses were on a 3-point scale (agree/neutral/disagree). Bars represent the mean sentiment value (e.g., 100% responding agree = 1). For questions with an average response of neutral (0 on the x-axis), a diamond symbol marks the mean, as the bar would otherwise have no visible value.

TABLE 1.

Overall response rates and distribution of responses for residents and faculty for each question

Responses (n) Agree Neutral Disagree
Resident questions
SIMPL has made it easier for me to ask for feedback.
43 10 (23.26%) 19 (44.19%) 14 (32.56%)
SIMPL has increased the frequency of feedback I receive.
43 9 (20.93%) 16 (37.21%) 18 (41.86%)
SIMPL has increased the quality of feedback I receive.
42 18 (42.86%) 14 (33.33%) 10 (23.81%)
I value the numerical feedback of SIMPL.
42 6 (14.29%) 18 (42.86%) 18 (42.86%)
I value the dictated feedback of SIMPL.
42 22 (52.38%) 10 (23.81%) 10 (23.81%)
I receive higher quality feedback through SIMPL than Medhub.
41 16 (39.02%) 13 (31.71%) 12 (29.27%)
Our residency program should continue to use SIMPL.
41 14 (34.15%) 19 (46.34%) 8 (19.51%)
Faculty questions
SIMPL has made it easier for me to provide feedback.
22 10 (45.45%) 10 (45.45%) 2 (9.09%)
SIMPL has increased the quality of feedback I provide.
22 10 (45.45%) 9 (40.91%) 3 (13.64%)
SIMPL has increased the frequency of feedback I provide.
22 6 (27.27%) 11 (50.00%) 5 (22.73%)
I like the ability to use the numerical rating scales in SIMPL.
22 13 (59.09%) 6 (27.27%) 3 (13.64%)
I like the ability to dictate feedback in SIMPL.
22 14 (63.64%) 5 (22.73%) 3 (13.64%)
I was more willing to provide feedback via SIMPL than Medhub.
22 10 (45.45%) 7 (31.82%) 5 (22.73%)
Providing feedback via SIMPL was easier than providing feedback via Medhub.
22 17 (77.27%) 3 (13.64%) 2 (9.09%)
SIMPL should be expanded for use with all procedures, not just medical resuscitations.
22 15 (68.18%) 6 (27.27%) 1 (4.55%)
SIMPL should replace the standard procedural evaluations in Medhub.
22 14 (63.64%) 7 (31.82%) 1 (4.55%)
Our residency program should continue to use SIMPL.
22 15 (68.18%) 6 (27.27%) 1 (4.55%)
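The bar heights in Figure 1 can be reproduced from Table 1 under the encoding the figure caption implies: agree = +1, neutral = 0, and, by symmetry (our assumption, since the caption names only the agree and neutral values), disagree = -1, averaged over respondents. A minimal sketch:

    def mean_sentiment(agree: int, neutral: int, disagree: int) -> float:
        """Mean of per-respondent scores with agree=+1, neutral=0, disagree=-1."""
        n = agree + neutral + disagree
        return (agree - disagree) / n

    # Worked example from Table 1, "I value the dictated feedback of SIMPL"
    # (residents: 22 agree, 10 neutral, 10 disagree):
    print(f"{mean_sentiment(22, 10, 10):+.2f}")  # +0.29, i.e., modest net agreement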

DISCUSSION

Our study demonstrates that the SIMPL app can be implemented in an EM residency program. The SIMPL app incorporates many aspects of CBME that inform best practices for both feedback and assessment, including an emphasis on formative assessment, direct observation in an environment that authentically represents the profession, and engagement of the learner in the process. 25 The ability for faculty to dictate real-time feedback on their smartphones is an innovative approach to enhancing the timeliness and specificity of feedback. Additionally, the SIMPL app can provide residents with multiple EPA assessments per shift, which can supplement traditional ESEs. Although the Zwisch scale was developed for the operative setting and has not been studied in EM, this pilot represents a first step toward its study in this context.

In interpreting the results, several additional considerations are pertinent. Because pilot participation was voluntary and uncompensated, some faculty may have viewed it as duplicative of existing requests to complete ESEs; enhanced incentives and/or replacement of our existing system might have generated more participation and more positive perceptions of SIMPL. To protect anonymity, we intentionally did not link survey responses to usage data, so we cannot know how respondents' attitudes aligned with their behavior. Additionally, the faculty survey response rate was lower than desired. Using an EPA entrustment scale more familiar to ED residents and faculty may have further enhanced SIMPL use. Lastly, frequent personal reminders to use SIMPL likely enhanced participation in this pilot but may not represent a sustainable practice without the adoption of automated methods.

Future directions include qualitative exploration of why faculty favored SIMPL more than residents; use of SIMPL to assess the eleven core EM EPAs; 26 comparison of the quality and quantity of assessment data and narrative feedback from SIMPL versus traditional feedback mechanisms; generation of additional validity evidence for its use in the EM context; assessment of how years in practice and year of training affect SIMPL usage patterns and assessment data; and study of SIMPL for other EM-based applications, such as assessment of medical students or of interns by supervising senior residents. The pilot also generated several suggestions to inform future development of SIMPL, including a feature allowing users to enter a brief description or title of the case, an option to type feedback in place of dictation, a web-based application for those who prefer not to download new software onto their personal devices, and enhanced integration with other residency management systems.

CONCLUSIONS

The SIMPL app, originally developed for use in surgical training programs, can be implemented in an emergency medicine residency, garnered positive support from faculty, and could serve as an adjunct to end-of-shift written evaluations.

AUTHOR CONTRIBUTIONS

Mary R. C. Haas—study concept and design; acquisition of the data; analysis and interpretation of the data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; statistical expertise; administrative, technical, or material support; and study supervision. Mallory G. Davis—study concept and design; acquisition of the data; analysis and interpretation of the data; drafting of the manuscript; and critical revision of the manuscript for important intellectual content. Carrie E. Harvey—study concept and design; acquisition of the data; analysis and interpretation of the data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; administrative, technical, or material support; and study supervision. Rob Huang—study concept and design; analysis and interpretation of the data; drafting of the manuscript; and critical revision of the manuscript for important intellectual content. Kirstin W. Scott—study concept and design; acquisition of the data; analysis and interpretation of the data; drafting of the manuscript; and critical revision of the manuscript for important intellectual content. Brian C. George—study concept and design; analysis and interpretation of the data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; and administrative, technical, or material support. Gregory M. Wnuk—study concept and design; acquisition of the data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; and administrative, technical, or material support. John Burkhardt—study concept and design; acquisition of the data; analysis and interpretation of the data; drafting of the manuscript; critical revision of the manuscript for important intellectual content; statistical expertise; administrative, technical, or material support; and study supervision.

CONFLICT OF INTEREST

The following authors declare no potential conflict of interest: RH, KWS, MGD. GMW is the Director of Operations of the Society for Improving Medical Professional Learning (SIMPL), a 501(c)(3) nonprofit research collaborative. He receives 50% of his salary from SIMPL, and his compensation is not directly tied to publication, membership, or any other operational metric of the collaborative. BCG serves as the Executive Director of SIMPL, a position for which he is not paid. This work was funded by a University of Michigan institutional Graduate Medical Education (GME) Innovations grant conceived and written by JB. JB, MRCH, and CEH received salary support from the grant to complete this work.

Supporting information

Appendix S1.

Figure S1.

Figure S2.

Figure S3.

ACKNOWLEDGMENTS

The authors wish to thank Brittany Holmes for her administrative assistance with the pilot, and Michael Hovenden, MD, for his participation in the training videos. We would also like to thank the University of Michigan Graduate Medical Education (GME) team for funding this work through an institutional Graduate Medical Education Innovations grant (U069806).

Haas MRC, Davis MG, Harvey CE, et al. Implementation of the SIMPL (Society for Improving Medical Professional Learning) performance assessment tool in the emergency department: A pilot study. AEM Educ Train. 2023;7:e10842. doi: 10.1002/aet2.10842

Funding information:

This work was supported by the University of Michigan Graduate Medical Education (GME) team through an institutional GME Innovations grant (U069806).

Supervising Editor: Dr. Sorabh Khandelwal

REFERENCES

1. Lockyer J, Carraccio C, Chan MK, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39(6):609-616.
2. van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ. 2005;39(3):309-317.
3. Buckley C, Natesan S, Breslin A, Gottlieb M. Finessing feedback: recommendations for effective feedback in the emergency department. Ann Emerg Med. 2020;75(3):445-451.
4. Yarris LM, Linden JA, Gene Hern H, et al. Attending and resident satisfaction with feedback in the emergency department: feedback in the ED. Acad Emerg Med. 2009;16:S76-S81.
5. Kelly E, Richards JB. Medical education: giving feedback to doctors in training. BMJ. 2019;366:l4523.
6. Weinstein DF. Optimizing GME by measuring its outcomes. N Engl J Med. 2017;377(21):2007-2009.
7. Richardson BK. Feedback. Acad Emerg Med. 2004;11(12):e1-e5.
8. Burgess A, van Diggele C, Roberts C, Mellis C. Feedback in the clinical setting. BMC Med Educ. 2020;20(S2):460.
9. Watling C, Driessen E, van der Vleuten CPM, Lingard L. Learning culture and feedback: an international study of medical athletes and musicians. Med Educ. 2014;48(7):713-723.
10. Duijn CCMA, Welink LS, Mandoki M, ten Cate OTJ, Kremer WDJ, Bok HGJ. Am I ready for it? Students' perceptions of meaningful feedback on entrustable professional activities. Perspect Med Educ. 2017;6(4):256-264.
11. Shayne P, Gallahue F, Rinnert S, Anderson CL, Hern G, Katz E. Reliability of a core competency checklist assessment in the emergency department: the standardized direct observation assessment tool. Acad Emerg Med. 2006;13(7):727-732.
12. Dehon E, Jones J, Puskarich M, Sandifer JP, Sikes K. Use of emergency medicine milestones as items on end-of-shift evaluations results in overestimates of residents' proficiency level. J Grad Med Educ. 2015;7(2):192-196.
13. Warrington S, Beeson M, Bradford A. Inter-rater agreement of end-of-shift evaluations based on a single encounter. West J Emerg Med. 2017;18(3):518-524.
14. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10(1):44-50.
15. Hahn B, Waring ED, Chacko J, Trovato G, Tice A, Greenstein J. Assessment of written feedback for emergency medicine residents. South Med J. 2020;113(9):451-456.
16. Jackson JL, Kay C, Jackson WC, Frank M. The quality of written feedback by attendings of internal medicine residents. J Gen Intern Med. 2015;30(7):973-978.
17. George BC, Bohnen JD, Schuller MC, Fryer JP. Using smartphones for trainee performance assessment: a SIMPL case study. Surgery. 2020;167(6):903-906.
18. Bohnen JD, George BC, Williams RG, et al. The feasibility of real-time intraoperative performance assessment with SIMPL (system for improving and measuring procedural learning): early experience from a multi-institutional trial. J Surg Educ. 2016;73(6):e118-e130.
19. Eaton M, Scully R, Schuller M, et al. Value and barriers to use of the SIMPL tool for resident feedback. J Surg Educ. 2019;76(3):620-627.
20. Gunderson K, Sullivan S, Warner-Hillard C, et al. Examining the impact of using the SIMPL application on feedback in surgical education. J Surg Educ. 2018;75(6):e246-e254.
21. Zendejas B, Toprak A, Harrington AW, Lillehei CW, Modi BP. Quality of dictated feedback associated with SIMPL operative assessments of pediatric surgical trainees. Am J Surg. 2021;221(2):303-308.
22. George BC, Teitelbaum EN, Meyerson SL, et al. Reliability, validity, and feasibility of the Zwisch scale for the assessment of intraoperative performance. J Surg Educ. 2014;71(6):e90-e96.
23. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE Guide No. 99. Med Teach. 2015;37(11):983-1002.
24. Ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013;5(1):157-158.
25. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: from Flexner to competencies. Acad Med. 2002;77(5):361-367.
26. Hart D, Franzen D, Beeson M, et al. Integration of entrustable professional activities with the milestones for emergency medicine residents. West J Emerg Med. 2018;20(1):35-42.
