AEM Education and Training. 2018 Mar 22;2(2):115–120. doi: 10.1002/aet2.10084

Scholarship by the Clinician‐Educator in Emergency Medicine

Douglas Franzen,1 Robert Cooney,2 Teresa Chan,3 Michael Brown,4 Deborah B. Diercks5
Editor: Douglas Franzen
PMCID: PMC6001503  PMID: 30051078

Abstract

Emergency medicine (EM) continues to grow as an academic specialty. Like most specialties, a large number of academic emergency physicians are focused on education of our graduate student learners. For promotion, clinician‐educators (CEs) are required to produce scholarly work and disseminate knowledge. Although promotion requirements may vary by institution, scholarly work is a consistent requirement. Due to the clinical constraints of working in the emergency department, the unique interactions emergency physicians have with their learners, and early adoption of alternative teaching methods, EM CEs’ scholarly work may not be adequately described in a traditional curriculum vitae. Using a rubric of established domains around the academic work of CEs, this article describes some of the ways EM educators address these domains. The aim of the article is to provide a guide for academic department leadership, CEs, and promotion committees about the unique ways EM has addressed the work of the CE.


Recognizing the contributions of medical educators is essential to maintaining and promoting the academic mission of medical schools, residency programs, and continuing professional development. This task has become more difficult as medical education has progressed from traditional time‐based training to an era of competency‐based medical education. Unfortunately, despite two decades of progress in defining the contributions of emergency medicine (EM) educators, there remains a need to more fully define scholarly activity as it applies to EM clinician‐educators (CEs). This article focuses on the CE, who typically serves a different role than an education scholar/researcher.

In 2006, the Group on Educational Affairs of the Association of American Medical Colleges (GEA‐AAMC) convened a consensus conference that outlined five domains that encompassed the work of CEs: teaching, curriculum, advising/mentoring, educational leadership/administration, and learner assessment.1

Notably absent from the GEA‐AAMC domains is the scholarship of teaching and learning, which shares the traditional metrics for academic promotion. Additionally, scholarly activities are evolving rapidly. The era of digital media, podcasts, blogs, and online modules has made capturing the teaching activities of CEs more difficult as the dissemination of scholarly activities may not adhere to the traditional format for publications.2, 3, 4, 5 The Internet, social media, and free, open‐access medical education (FOAM) have led to a rapid shift in an educator's role from one of content delivery to one of content curator and experienced guide. Likewise, the shift to competency‐based medical education has required the implementation of new assessment strategies. Typical promotion and tenure criteria have not kept up with the rapid pace of change and do not adequately credit the scholarly output of EM educators whose work does not adhere to the classic model. Using the five domains identified, this article aims to provide a guide for EM academic department leadership, CEs, and promotion committees.

Domain I: Teaching

Teaching has been defined as “Any activity, in any venue, in which the educator engages with learners to instruct and guide them in the development of knowledge, skills, or attitudes.”6 Teaching in EM can be delineated into clinical and didactic education. Clinical education in the emergency department (ED) is unique compared to many specialties in that federal law requires attending physicians to be present at all times, making it feasible to implement a program in which learners progress from being closely supervised to being indirectly supervised with immediate access to faculty within the ED. The mandate for the continuous presence of attending faculty and the nature of shift work has downsides in that attending physicians may have limited longitudinal contact with learners and may not be free to attend didactic education. Many programs have adopted a single day for the provision of didactic education to minimize interference from clinical responsibilities and maximize faculty and resident education.

This unique shift structure for both learners and faculty may explain why EM has been an early adopter of online education, as evidenced by the rapid expansion of FOAM.2 EM learners frequently use online media sources as learning tools.4, 5 Teaching and learning occur 24 hours a day in the ED, creating a need for concise, high‐yield content that is readily available at all times (just in time) for our learners. It is incumbent upon EM educators to demonstrate to promotion committees the value of such educational products and to work with them to develop metrics to include them in the CE framework for promotions. Although frameworks have been proposed for evaluating social media–based scholarship,7 the lack of established peer review systems and quality metrics makes this a critical arena for further development and validation.8, 9, 10 Traditional peer review is not without problems, and there is growing recognition that postpublication crowd‐sourced peer review may ultimately become an accepted model.11

With the increased utilization of contemporary teaching platforms in EM, evaluation of teaching contributions needs to go beyond traditional metrics such as number of learners, contact hours, ratings, awards, and evaluation by peers/experts (especially considering the frequency of off‐hours and just‐in‐time teaching). Other measures of quality and impact include the following:

Online Analytics and Quality Appraisal

For online educational content, educators could report metrics such as page views, number of learners, reach, sharing, and time spent interacting with content in their academic portfolios. Qualitative comments may also be collected.

  • Frameworks for the evaluation of social media–based teaching scholarship have been suggested.7 The GEA‐AAMC consensus statement described the following as markers of teaching scholarship:1

    • Originality;

    • Building on established theory, research, or best practices;

    • Archiving and dissemination;

    • A transparent comment and feedback system.

  • Use of an impact factor equivalent may be helpful for determining an online educator's reach. Measures such as these may be used to compare authors using social media in much the same way that the h‐index is used to assess the impact of traditional researchers. The social media index (SMi) was derived to help guide consumers of online educational materials, using an online community of practice's preferences to determine a relative ranking of blogs/podcasts in the online educational space. The SMi combines three indicators (Alexa rank, Twitter followers, and Facebook “likes”) to create a score that ranges from 0 to 10.12 Other measures that have been suggested include the following:

    • ALiEM AIR score;13, 14

    • The METRIQ scores;15, 16, 17

    • Quality checklist for health professions blogs and podcasts.18

Free, open‐access medical education and the metrics used to evaluate it are both evolving rapidly; the metrics listed here may be obsolete by the time this article is read. Any metric being considered for use should incorporate the markers of good scholarship listed above.

Teaching Impact

Clinician‐educators can provide evidence of their impact by measuring learner reactions (i.e., satisfaction or a change in knowledge, skills, or attitudes).

  • Measurement of teaching impact should account for geography and the extent of dissemination. In the era of FOAM, social media analytics and altmetrics can be used to measure the exposure of learners to educational material.19, 20

  • Educators may find it helpful to follow the portfolio format often used by CEs, which is a useful tool for EM CEs to share accomplishments that include FOAM. The Mayo Clinic has recently published its portfolio format for promotion, which includes social media–based scholarship.21 It is important to note that not all education portfolios include social media content; individuals therefore need to develop their portfolios to reflect their personal accomplishments and local practices.

Domain II: Assessment of Learners

When attempting to evaluate faculty contributions to assessment, the quantity, quality, and impact of the assessments should be considered.6 The broad range of pathology presenting to the ED provides myriad opportunities for assessing learners, yielding data that can be put to a variety of uses, most commonly informal formative assessment that can be used to guide further instruction. However, in many EM programs, end‐of‐shift clinical assessments (i.e., “shift cards,” whether on paper or electronic) are collected and used by the Clinical Competency Committee (CCC) to gauge residents’ development. These end‐of‐shift assessments differ from those of other specialties in that they reflect a single interaction rather than a summation of assessment over a period of time (e.g., a 4‐week rotation).

Metrics such as shift card completion rate are a quantitative measure of faculty participation, but as any program director knows, a few insightful comments are far more useful than rows of numbers from a stack of shift cards. Evaluating the quality of assessments performed by faculty is a subjective exercise and requires multiple input streams, such as resident evaluations of faculty members, input from the program director (PD), and the CCC. Such information often exists informally (e.g., comments from Dr. X are valued over comments from Dr. Y, who only ever writes “good job!”) and could be captured via survey or noted in a letter from the PD. Qualitative analysis of written comments may provide greater insight and rigor with less single‐person bias;22, 23, 24 however, even with advances in technology such as natural‐language processing software, this level of review is likely time‐ and cost‐prohibitive for most institutions. Much like instructional materials, different assessment types are better suited to assessing different content or various aspects of performance.25 Although it may be difficult to measure, the quality of a faculty member's contributions to assessment is critical to consider.

Domain III: Curriculum Development

Curriculum development is a recognized form of academic scholarship for educators. Although there are well‐described measures for assessment of curriculum development, the application of these rubrics to EM may be limited.26 EM curricula can be topic specific, such as the evaluation and management of chest pain, or general, such as the creation of a medical student EM clerkship curriculum. Unlike many other medical specialty curricula that are built around a specific diagnosis, EM curricula are often organized around presenting chief complaints or clinical findings and often build on or reiterate material covered in other curricula (e.g., surgical emergencies).

Current methods used to assess curriculum development include student evaluations, publications related to the curriculum, and the number of learners exposed to the curriculum. EM is not a required course at many medical schools, so the number of learners exposed to an EM curriculum may be limited by the number of students who select the elective or other factors such as the size of the residency program.27 The use of many lecturers and integration of bedside teaching performed by various levels of faculty and residents may make it difficult to differentiate evaluation of the curriculum from evaluation of the instructor or learning experience.

As with other domains, in addition to impact, evaluation of curriculum development should include measures of quality. A commonly used six‐step approach to curriculum development26 includes 1) problem identification and general needs assessment, 2) needs assessment for targeted learners, 3) goals and objectives, 4) educational strategies, 5) implementation, and 6) evaluation and modification of the curriculum. Providing a clear description of these steps in an educational portfolio (https://www.mededportal.org/publication/626) would demonstrate a scholarly approach to curriculum development.

Domain IV: Mentoring and Advising

Although department chairs primarily depend on subjective impressions of an individual faculty member's contribution to the mission of mentoring and advising learners, there are a few quantitative variables in EM that could be used as part of this assessment.28, 29 Because a mentor–mentee relationship is typically sustained over a long period of time (i.e., years), the impact of a faculty member's mentorship can often be more easily described than that of a student advisor.30, 31, 32, 33 The majority of EM trainees do not pursue a career in academics; alternative measures of an impactful mentoring relationship should therefore be considered, because traditional measures such as successful placement in academia, grants obtained, or articles published may not be relevant. For student advising, an objective measure such as the match rate does not capture the critical role an advisor may play in counseling students for whom EM may not be the optimal specialty choice. However, when feasible, a mentor portfolio should include the accomplishments of mentees. Participation in advising on a national scale (e.g., eAdvising) fills an important need for students at schools that lack a robust academic EM program (or any EM program at all) and should also be considered as part of the faculty promotion package.

Domain V: Educational Leadership and Administration

Serving in traditional medical education leadership roles such as residency PD or EM clerkship director is an easily recognizable form of educational administration. However, other service functions that fall under the same category may not be as readily apparent to a promotions committee, such as coordination of regional or national educational events. In EM, our academic society holds numerous regional meetings that are focused on learners and junior faculty. Therefore, faculty are encouraged to retain a record of such efforts in the form of program planning minutes or a letter from the organization's president highlighting the faculty member's administrative contributions. Regional and national organizational titles such as “chairperson” or “trustee” convey a high level of administrative leadership and should be given appropriate weight in the promotion deliberation process.

Performing educational administrative functions within the academic medical center or medical school should also be recognized as an essential contribution to the academic mission. In EM, it is common for faculty to take on administrative roles that span multiple facets of an academic institution and include an educational component, such as billing or quality improvement. Sharing knowledge and skill beyond the discipline of EM (e.g., director of simulation for the entire medical school) provides further evidence of educational expertise. Similarly, serving on a medical school's student performance committee or Liaison Committee on Medical Education (LCME) self‐study task force is an example of educational administration that may go unrecognized unless specifically called out in the faculty member's portfolio. Individual members of a promotions committee may weight contributions differently. For example, participation on an LCME self‐study task force may not affect a specific department but is critically important to the school of medicine. Thus, it is important that all contributions are considered in their proper context.

Conclusion

Work in any of the five domains must be based on a foundation of good scholarship; thus, the principles of good scholarship can be used to evaluate faculty work. Elements of good scholarship have been defined as: 1) the application of sound principles and systematic planning, 2) use of best practices from the literature or recognized experts, and/or 3) self‐analysis or reflective practice that is used to improve teaching or development as an educator.6 These elements should be considered when evaluating the work of any faculty who develop courses, curricula, teaching sessions, assessments, or other educational tools such as teaching modules, capstone courses, simulation sessions, online content, or book chapters.

Academic departments of emergency medicine are composed of a high proportion of clinician‐educators. The advancement of this group of faculty is essential to the continued growth of our specialty. Unique aspects in education exist in emergency medicine such as shift work, limited longitudinal interaction with learners, and high reliance on bedside learning. These factors likely contributed to the early adoption of asynchronous learning and online medical education by emergency medicine educators. As methods of education continue to evolve, it is essential that clinician‐educators document their efforts in the form of educational scholarship and academic portfolios that are structured around the established domains of teaching, curriculum, advising/mentoring, educational leadership/administration, and learner assessment.


This manuscript was approved by the Board of Directors of the Academy of Academic Emergency Medicine Chairs, Clerkship Directors in Emergency Medicine Academy of the Society for Academic Emergency Medicine, Council of Residency Directors, and Society for Academic Emergency Medicine.

The authors have no relevant financial information or potential conflicts to disclose.

References

1. Simpson D, Fincher RM, Hafler JP, et al. Advancing educators and education by defining the components and evidence associated with educational scholarship. Med Educ 2007;41:1002–9.

2. Cadogan M, Thoma B, Chan TM, Lin M. Free Open Access Meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002‐2013). Emerg Med J 2014;31:e76–7.

3. Kleynhans AC, Oosthuizen AH, van Hoving DJ. Emergency medicine educational resource use in Cape Town: modern or traditional? Postgrad Med J 2017;93:250–5.

4. Mallin M, Schlein S, Doctor S, Stroud S, Dawson M, Fix M. A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Acad Med 2014;89:598–601.

5. Purdy E, Thoma B, Bednarczyk J, Migneault D, Sherbino J. The use of free online educational resources by Canadian emergency medicine residents and program directors. CJEM 2015;17:101–6.

6. Baldwin C, Chandran L, Gusic M. Guidelines for evaluating the educational performance of medical school faculty: priming a national conversation. Teach Learn Med 2011;23:285–97.

7. Sherbino J, Arora VM, Van Melle E, Rogers R, Frank JR, Holmboe ES. Criteria for social media‐based scholarship in health professions education. Postgrad Med J 2015;91:551–5.

8. Thoma B, Chan T, Desouza N, Lin M. Implementing peer review at an emergency medicine blog: bridging the gap between educators and clinical experts. CJEM 2015;17:188–91.

9. Sidalak D, Purdy E, Luckett‐Gatopoulos S, Murray H, Thoma B, Chan TM. Coached peer review: developing the next generation of authors. Acad Med 2017;92:201–4.

10. Lin M, Thoma B, Trueger NS, Ankel F, Sherbino J, Chan T. Quality indicators for blogs and podcasts used in medical education: modified Delphi consensus recommendations by an international cohort of health professions educators. Postgrad Med J 2015;91:546–50.

11. Herron DM. Is expert peer review obsolete? A model suggests that post‐publication reader review may exceed the accuracy of traditional peer review. Surg Endosc 2012;26:2275–80.

12. Thoma B, Sanders JL, Lin M, Paterson QS, Steeg J, Chan TM. The social media index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med 2015;16:242–9.

13. Chan TM, Grock A, Paddock M, Kulasegaram K, Yarris LM, Lin M. Examining reliability and validity of an online score (ALiEM AIR) for rating free open access medical education resources. Ann Emerg Med 2016;68:729–35.

14. Lin M, Joshi N, Grock A, et al. Approved instructional resources series: a national initiative to identify quality emergency medicine blog and podcast content for resident education. J Grad Med Educ 2016;8:219–25.

15. Chan TM, Thoma B, Krishnan K, et al. Derivation of two critical appraisal scores for trainees to evaluate online educational resources: a METRIQ study. West J Emerg Med 2016;17:574–84.

16. Thoma B, Paddock M, Purdy E, et al. Leveraging a virtual community of practice to participate in a survey‐based study: a description of the METRIQ study methodology. AEM Educ Train 2017;1:110–3.

17. Thoma B, Sebok‐Syer SS, Krishnan K, et al. Individual gestalt is unreliable for the evaluation of quality in medical education blogs: a METRIQ study. Ann Emerg Med 2017;70:394–401.

18. Paterson QS, Thoma B, Milne WK, Lin M, Chan TM. A systematic review and qualitative analysis to determine quality indicators for health professions education blogs and podcasts. J Grad Med Educ 2015;7:549–54.

19. Simpson D, Sullivan GM. Knowledge translation for education journals in the digital age. J Grad Med Educ 2015;7:315–7.

20. Trost MJ, Webber EC, Wilson KM. Getting the word out: disseminating scholarly work in the technology age. Acad Pediatr 2017;17:223–4.

21. Cabrera D, Vartabedian BS, Spinner RJ, Jordan BL, Aase LA, Timimi FK. More than likes and tweets: creating social media portfolios for academic promotion and tenure. J Grad Med Educ 2017;9:421–5.

22. Ginsburg S, Eva K, Regehr G. Do in‐training evaluation reports deserve their bad reputations? A study of the reliability and predictive ability of ITER scores and narrative comments. Acad Med 2013;88:1539–44.

23. Ginsburg S, Regehr G, Lingard L, Eva KW. Reading between the lines: faculty interpretations of narrative evaluation comments. Med Educ 2015;49:296–306.

24. Hodges B. Assessment in the post‐psychometric era: learning to love the subjective and collective. Med Teach 2013;35:564–8.

25. Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387–96.

26. Kern DE, Thomas PA, Hughes MT. Curriculum Development for Medical Education: A Six‐step Approach. 2nd ed. Baltimore: Johns Hopkins University Press, 2009.

27. Khandelwal S, Way DP, Wald DA, et al. State of undergraduate education in emergency medicine: a national survey of clerkship directors. Acad Emerg Med 2014;21:92–5.

28. Garmel GM. Mentoring medical students in academic emergency medicine. Acad Emerg Med 2004;11:1351–7.

29. Yeung M, Nuth J, Stiell IG. Mentoring in emergency medicine: the art and the evidence. CJEM 2010;12:143–9.

30. Jackson VA, Palepu A, Szalacha L, Caswell C, Carr PL, Inui T. “Having the right chemistry”: a qualitative study of mentoring in academic medicine. Acad Med 2003;78:328–34.

31. Berk RA, Berg J, Mortimer R, Walton‐Moss B, Yeo TP. Measuring the effectiveness of faculty mentoring relationships. Acad Med 2005;80:66–71.

32. Mohr NM, Moreno‐Walton L, Mills AM, et al. Generational influences in academic emergency medicine: teaching and learning, mentoring, and technology (part I). Acad Emerg Med 2011;18:190–9.

33. Farrell SE, Digioia NM, Broderick KB, Coates WC. Mentoring for clinician‐educators. Acad Emerg Med 2004;11:1346–50.
