AEM Educ Train. 2025 Jan 11;9(1):e11057. doi: 10.1002/aet2.11057

Prevalence and characteristics of group standardized letters of evaluation in emergency medicine: A cross‐sectional observational study

Morgan Sehdev 1, Daniel J Egan 2, Sharon Bord 3, Cullen Hegarty 4, Eric Shappell 5
PMCID: PMC11724697  PMID: 39803611

Abstract

Background

The standardized letter of evaluation (SLOE) for emergency medicine (EM) is a well‐established tool for residency selection. While previous work characterizes the utility and outcomes related to SLOE use, less is known about SLOE authorship patterns and trends.

Objective

The objective was to measure the prevalence of group SLOEs in EM over time, characterize the role groups represented in group SLOEs, and compare the rating practices of groups of authors versus single authors.

Methods

SLOE data from 2016 through 2021 were obtained from the CORD database. An algorithm was developed to process SLOE author fields to accomplish three tasks: (1) determine whether the SLOE was written by an individual or a group, (2) determine the number of named letter writers on group SLOEs, and (3) identify roles of individuals listed on group SLOEs. A total of 150 SLOEs were randomly selected for review by the study team to serve as a criterion standard against which algorithm performance was compared. Mean ratings were compared for (1) individual versus group SLOEs and (2) individual SLOEs from clerkship directors (CDs) versus others.

Results

A total of 40,218 SLOEs met inclusion criteria. The algorithm performed well in detecting group SLOEs, authors, and titles. The proportion of institutions submitting only group-authored SLOEs increased from 31.4% to 54.5%. The mean number of authors per group SLOE increased from 3.4 in 2016 to 4.0 in 2021. Mean ratings were slightly higher in individual SLOEs than in group SLOEs. Individual SLOEs from non-CDs had higher ratings than those from CDs.

Conclusions

The proportion of SLOEs authored by groups increased over the study interval. Grading practices are similar between group SLOEs and individual SLOEs authored by CDs. Individual SLOEs from non‐CDs had slightly higher ratings compared to the other groups.

INTRODUCTION

Over the past three decades, standardized letters of evaluation (SLOEs) in emergency medicine (EM) have become a well-established and expected component of the residency application process. Since the SLOE's inception, the what, or the content it conveys to those reviewing applications, has been widely studied. However, little has been done to better understand the characteristics of who is writing the what on behalf of applicants. This paucity of information persists despite both anecdotal and, more recently, qualitative evidence suggesting that SLOE authorship is an important factor in how reviewers contextualize the trustworthiness and even competitiveness of SLOEs, including an increased weighting of SLOEs authored by groups. 1 A better understanding of SLOE authorship trends and characteristics is essential for both authors and reviewers to optimize the alignment of this assessment practice with the intended effects.

As the demand for fair and equitable residency application and evaluation processes increases, standardized letters have gained traction. Initially introduced in EM, similarly structured evaluation models have subsequently been adopted by many other specialties, including orthopedic surgery, otolaryngology, dermatology, pediatrics, ophthalmology, obstetrics and gynecology, internal medicine, and general surgery. 2 When compared to narrative letters, the standardized format enables program directors to better review, compare, and select applicants for interview through a structured assessment of clinical performance, interpersonal skills, knowledge base, professionalism, and other attributes essential for successful residency training. 3 , 4 , 5 , 6 Moreover, the SLOE accomplishes this with limited opportunity for ambiguity and an emphasis on providing specific examples and details. 6 , 7 Within EM, the SLOE has undergone multiple iterations to provide a validated assessment tool that creates a shared mental model for authoring and reviewing evaluators. 8 , 9 , 10 , 11 EM program directors now cite the SLOE as the most important piece of information used to determine both interview offers and rank lists. 12 Although a significant body of scholarly work characterizes the utility and review processes of SLOEs, less is known about authorship patterns and trends for these letters.

While we recognize the weight and utility of information that is conveyed in the SLOE to those reviewing them, 13 , 14 the interpretation of this information appears to be modified by who authors it. Anecdotally, there exists a general perception that those reviewing applications find single-author SLOEs, SLOEs from EM faculty not affiliated with a residency program, and non-SLOE narrative letters of recommendation less useful in the decision-making process than a SLOE compiled by multiple authors. 12 , 15 , 16 Previous data suggest that SLOE authorship, and particularly group authorship, establishes a degree of trustworthiness within the information about an applicant conveyed in the SLOE. 1 Messaging given to students around SLOE acquisition also demonstrates these authorship preferences; for example, two advising websites for medical students contain guidance such as: "SLOEs come in a couple of varieties, but the most critical is the Composite or Group SLOE … Group SLOEs will carry more weight for your application" 17 and "Most programs now solve this issue [of who writes each SLOE] by having a group write a composite letter for the entire department, which [then] goes out on behalf of everybody. Such departmental, composite letters are usually viewed more favorably by programs." 18 Furthermore, reviewer familiarity with a program and/or the author(s) adds significant value to a SLOE, as does the perceived discriminatory ability of the author(s) (i.e., experience in residency program leadership or application review, number of rotators that they encounter annually to enhance global assessment, prior authoring/reviewing of SLOEs). 19 , 20 Authorship, therefore, may alter the lens through which applicant qualifications and performance are interpreted.

Although the SLOE was originally designed to be written by a single author, 6 , 21 programmatic shifts in preference and practice have led to the aforementioned "composite" or "group" SLOE, which is authored by multiple EM faculty, often the program's educational team, to provide a departmental perspective on student rotators. 15 It has been estimated that upwards of 80%–90% of programs now submit at least one group SLOE. 12 One survey of 150 program directors demonstrated that 84.7% would prefer to receive a group SLOE rather than a single-author SLOE from outside EM residency programs, 12 as it incorporates multisource feedback 22 and may be less prone to individual bias. 2 , 3 However, a commonly accepted approach to developing a group SLOE does not yet exist, nor do measures of how group SLOE ratings compare to those from individual authors. In this study, we use 6 years of national EM SLOE data to help close this knowledge gap by measuring the prevalence of group SLOEs, characterizing the role groups represented in group SLOEs, and comparing the rating practices of groups of authors versus single authors.

METHODS

SLOE data from 2016 through 2021 (eSLOE 1.0) were obtained from the Council of Residency Directors in Emergency Medicine (CORD) database. This database is populated from the online portal where authors create and save SLOEs. Entries suspected to represent system testing as opposed to true evaluations were excluded (i.e., entries from the institution “CORD” or with the author name “test”). As a part of this study, only the quantitative anchors within the SLOE were included for analysis while the narrative commentary found at the end of each SLOE was excluded.

Given the large number of SLOEs in the database, manual abstraction of SLOE types and characteristics was not feasible. An algorithm was developed using Stata (StataCorp) to process SLOE author fields to accomplish three tasks: (1) to determine whether the SLOE was written by an individual or a group ("group" defined as more than one individual), (2) to determine the number of named letter writers on group SLOEs, and (3) to identify roles of individuals listed on group SLOEs. The algorithm attempted to detect unique authors by parsing on markers such as degrees commonly applied as suffixes (e.g., ", MD") and characters used to break text (e.g., ";"). The list of author roles that the algorithm would attempt to identify was determined a priori to include clerkship directors (CDs), assistant/associate CDs, program directors, assistant/associate program directors, chairs/vice-chairs, deans/vice-deans, fellows, and coordinators. For letters identified as individual SLOEs with no named authors (e.g., "clerkship director"), the number of authors was recorded as one. For letters identified as group SLOEs without any author names listed (e.g., "group SLOE by EM faculty"), the number of authors was recorded as four to approximate the post hoc mean number of authors per group SLOE.
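The original implementation was written in Stata and is not published, so the sketch below is only a rough Python re-creation of the three tasks as described above. The delimiter set, degree suffixes, role keyword patterns, and the parse_author_field helper are all hypothetical illustrations, not the study's actual rules.

```python
import re

# Illustrative role keywords; the study's actual matching rules are not published.
ROLE_PATTERNS = {
    "clerkship director": r"\bclerkship director\b",
    "assistant/associate clerkship director": r"\b(assistant|associate) clerkship director\b",
    "program director": r"\bprogram director\b",
    "assistant/associate program director": r"\b(assistant|associate) program director\b",
    "chair/vice-chair": r"\b(vice[- ])?chair\b",
    "dean": r"\b(vice |assistant |associate )?dean\b",
    "fellow": r"\bfellow\b",
    "coordinator": r"\bcoordinator\b",
}

GROUP_HINTS = re.compile(r"\b(group|committee|faculty)\b", re.IGNORECASE)

def parse_author_field(field: str) -> dict:
    """Classify a SLOE author field as individual vs. group, count named
    authors, and flag recognizable roles (a sketch of Tasks 1-3)."""
    # Split on common text breaks (e.g., ";", "and", "&") to find author chunks.
    chunks = [c.strip() for c in re.split(r";|\band\b|&|\n", field) if c.strip()]
    # Treat a chunk as a named author if it carries a degree suffix (e.g., ", MD").
    named = [c for c in chunks if re.search(r",?\s*(MD|DO|MBBS|PhD)\b", c)]
    n_named = len(named)

    # Task 1: more than one named individual, or group language, implies a group SLOE.
    is_group = n_named > 1 or bool(GROUP_HINTS.search(field))

    # Task 2: per the Methods, unnamed group SLOEs were imputed 4 authors (the
    # post hoc mean) and unnamed individual SLOEs were counted as one.
    n_authors = n_named if n_named else (4 if is_group else 1)

    # Task 3: detect any a priori roles listed anywhere in the field.
    low = field.lower()
    roles = [r for r, pat in ROLE_PATTERNS.items() if re.search(pat, low)]
    return {"group": is_group, "n_authors": n_authors, "roles": roles}

print(parse_author_field("Jane Doe, MD; John Roe, MD (Program Director); EM SLOE writing group"))
```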

Given the wide variety of formats and language used in the author field, the algorithm was not expected to perform perfectly. To understand and quantify the accuracy of the algorithm, 150 SLOEs were randomly selected for review by the study team, each of whom reviewed 30 SLOEs. Reviewers were not aware of the algorithm's interpretations at the time of review. Algorithm performance was then assessed using reviewer interpretations as the criterion standard. Performance was assessed using raw agreement, Cohen's kappa values, 23 and mean absolute value of discrepancy for the number of authors per group SLOE. Accuracy of author role identification was evaluated using sensitivity and specificity. Roles with fewer than three occurrences in the file review (less than 2% incidence) were excluded from analysis.
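For readers who want to see how these agreement statistics are computed, here is a minimal, self-contained sketch: raw agreement is the fraction of matching labels, Cohen's kappa corrects that fraction for chance agreement, and sensitivity/specificity follow from a 2 × 2 tally per role. The labels are hypothetical toy data, not the 150 reviewed SLOEs.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels: (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def sens_spec(truth, pred, positive):
    """Sensitivity and specificity of pred against truth for one positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(truth, pred))
    fn = sum(t == positive and p != positive for t, p in zip(truth, pred))
    tn = sum(t != positive and p != positive for t, p in zip(truth, pred))
    fp = sum(t != positive and p == positive for t, p in zip(truth, pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reviewer (criterion standard) vs. algorithm labels.
reviewer  = ["group", "group", "individual", "group", "individual", "individual"]
algorithm = ["group", "individual", "individual", "group", "individual", "individual"]
print(cohens_kappa(reviewer, algorithm))        # agreement beyond chance
print(sens_spec(reviewer, algorithm, "group"))  # (sensitivity, specificity) for "group"
```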

Mean ratings for the qualification for emergency medicine and ranking questions on the SLOE were compared for individual versus group SLOEs using t‐tests. For these calculations, ordinal ranks were assigned numerical values (e.g., for the item about anticipated ranking, 0 = unlikely to rank, 1 = lower one‐third, 2 = middle one‐third, 3 = top one‐third, 4 = top 10%). Given the unexpected finding of similar ratings in individual and group SLOEs, a post hoc analysis was completed to assess for differences in individual SLOEs submitted by authors identified as CDs versus those not identified as CDs. This study was approved by the institutional review board at Mass General Brigham.
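As a concrete illustration of this comparison, the sketch below codes the anticipated-ranking anchors numerically exactly as described above and runs a two-sample t-test. The paper does not specify the t-test variant; Welch's unequal-variance test via scipy.stats.ttest_ind is one reasonable choice, and the rating lists here are hypothetical stand-ins for the real data.

```python
from scipy import stats

# Numerical coding of the anticipated-ranking item, as described in the Methods.
RANK_CODES = {
    "unlikely to rank": 0,
    "lower one-third": 1,
    "middle one-third": 2,
    "top one-third": 3,
    "top 10%": 4,
}

# Hypothetical rating labels; the actual comparison used 40,218 SLOEs.
individual = [RANK_CODES[r] for r in ["top one-third", "top 10%", "middle one-third", "top one-third"]]
group      = [RANK_CODES[r] for r in ["middle one-third", "top one-third", "middle one-third", "lower one-third"]]

# Welch's t-test (an assumption; the paper says only "t-tests").
t, p = stats.ttest_ind(individual, group, equal_var=False)
diff = sum(individual) / len(individual) - sum(group) / len(group)
print(f"mean diff = {diff:.2f}, t = {t:.2f}, p = {p:.3f}")
```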

RESULTS

Of the CORD SLOE data from 2016 to 2021, a total of 40,218 data entries (SLOEs) met inclusion criteria for this study.

Algorithm performance

The algorithm agreed with reviewers' assignment of "individual" versus "group" SLOE in 93% of cases, with a Cohen's kappa of 0.84 (Table 1, "Task 1"). 23 When determining the number of authors per SLOE, the algorithm matched the team members' counts in 88% of cases; where counts diverged, the mean absolute value of the discrepancy was 1.4 (Table 1, "Task 2"). The algorithm also demonstrated high sensitivity and specificity in assigning roles to identified authors: most sensitivities were ≥92%, with notable lows of 62% for "assistant/associate clerkship director" and 83% for "assistant/associate program director." All specificities were ≥97% (Table 1, "Task 3").

TABLE 1. Algorithm performance in identifying SLOE characteristics.

Task 1: Identify individual vs. group SLOEs
| | Raw agreement | Kappa |
| Individual vs. group | 93% (140/150) | 0.84 |

Task 2: Identify number of named authors
| | Raw agreement | Mean absolute value of discrepancy |
| Number of authors | 88% (132/150) | 1.4 |

Task 3: Identify common named roles of authors
| Role | N | Sensitivity | Specificity |
| CD | 93 | 92% | 98% |
| Assistant/associate CD | 13 | 62% | 100% |
| Program director | 77 | 97% | 97% |
| Assistant/associate program director | 53 | 83% | 100% |
| Chair or vice-chair | 14 | 100% | 100% |
| Dean or vice/assistant/associate dean | 1 | 100% | 100% |
| Fellow | 0 | N/A | 100% |
| Coordinator | 0 | N/A | 99% |

Abbreviations: CD, clerkship director; SLOE, standardized letter of evaluation.

Authorship characteristics

Having verified strong performance of the algorithm, we applied it to all 40,218 entries (Table 2). Over the 6 years included in the data set, there was a mean of 6703 SLOEs annually, ranging from 4941 in 2020 to 8037 in 2019. These SLOEs represented a mean of 250 unique institutions/programs annually, ranging from 223 institutions in 2016 to 286 institutions in 2021.

TABLE 2.

Characteristics of SLOEs in EM (2016–2021).

| | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | Mean |
| Total SLOEs and institutions | | | | | | | |
| SLOEs | 6619 | 7182 | 7401 | 8037 | 4941 | 6036 | 6703 |
| Unique institutions | 223 | 236 | 233 | 248 | 272 | 286 | 250 |
| SLOE types at each institution | | | | | | | |
| Individual only | 23.8% | 28.4% | 25.8% | 20.2% | 24.3% | 21.7% | 24% |
| Group only | 31.4% | 34.7% | 39.9% | 44.0% | 53.3% | 54.5% | 43% |
| Individual and group | 44.8% | 36.9% | 34.3% | 35.9% | 22.4% | 23.8% | 33% |
| Authors per group SLOE (mean ± SD) | 3.4 ± 1.9 | 3.6 ± 1.7 | 3.8 ± 1.8 | 3.8 ± 1.9 | 4.0 ± 2.0 | 4.0 ± 1.9 | 3.7 ± 1.9 |
| Author titles in group SLOEs (%) | | | | | | | |
| CD | 72% | 74% | 73% | 72% | 72% | 72% | 72.5% |
| Program director | 69% | 68% | 65% | 64% | 68% | 65% | 66.5% |
| Assistant/associate program director | 37% | 42% | 42% | 47% | 47% | 47% | 44% |
| Chair or vice-chair | 14% | 17% | 17% | 20% | 17% | 18% | 17% |
| Assistant/associate CD | 8% | 6% | 11% | 10% | 11% | 13% | 10% |
| Fellow | 1% | <1% | 1% | 2% | 3% | 1% | 1% |
| Coordinator | <1% | 1% | 1% | 1% | 2% | 1% | 1% |
| Dean or vice/assistant/associate dean | <1% | <1% | 1% | 2% | 2% | 2% | 1% |
| Qualification for EM ratings (a) | | | | | | | |
| 1 (Commitment): individual | 2.5 | 2.5 | 2.5 | 2.5 | 2.6 | 2.6 | 2.5 |
| 1 (Commitment): group | 2.5 | 2.5 | 2.5 | 2.5 | 2.5 | 2.5 | 2.5 |
| 1 (Commitment): difference (f) | | | | | | | 0.06 |
| 2 (Work ethic): individual | 2.7 | 2.6 | 2.6 | 2.7 | 2.7 | 2.7 | 2.7 |
| 2 (Work ethic): group | 2.6 | 2.6 | 2.6 | 2.6 | 2.7 | 2.7 | 2.6 |
| 2 (Work ethic): difference (f) | | | | | | | 0.02 |
| 3 (Treatment): individual | 2.4 | 2.3 | 2.3 | 2.3 | 2.4 | 2.4 | 2.4 |
| 3 (Treatment): group | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 | 2.3 |
| 3 (Treatment): difference (f) | | | | | | | 0.06 |
| 4 (Team): individual | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 | 2.6 |
| 4 (Team): group | 2.6 | 2.5 | 2.5 | 2.6 | 2.6 | 2.6 | 2.6 |
| 4 (Team): difference (f) | | | | | | | 0.02 |
| 5 (Caring): individual | 2.6 | 2.5 | 2.5 | 2.5 | 2.6 | 2.6 | 2.6 |
| 5 (Caring): group | 2.5 | 2.5 | 2.5 | 2.5 | 2.6 | 2.6 | 2.5 |
| 5 (Caring): difference (f) | | | | | | | 0.03 |
| 6 (Guidance) (b): individual | 2.3 | 2.3 | 2.3 | 2.3 | 2.4 | 2.3 | 2.3 |
| 6 (Guidance) (b): group | 2.2 | 2.2 | 2.2 | 2.2 | 2.3 | 2.3 | 2.2 |
| 6 (Guidance) (b): difference (f) | | | | | | | 0.05 |
| 7 (Success) (c): individual | 2.4 | 2.3 | 2.3 | 2. | 2.4 | 2.4 | 2.4 |
| 7 (Success) (c): group | 2.5 | 2.3 | 2.3 | 2.3 | 2.4 | 2.4 | 2.3 |
| 7 (Success) (c): difference (f) | | | | | | | 0.03 |
| Global assessments (mean ± SD) | | | | | | | |
| 1 (Overall) (d): individual | 2.8 | 2.7 | 2.7 | 2.7 | 2.9 | 2.8 | 2.8 |
| 1 (Overall) (d): group | 2.6 | 2.5 | 2.5 | 2.5 | 2.6 | 2.6 | 2.6 |
| 1 (Overall) (d): difference (f) | | | | | | | 0.19 |
| 2 (Ranking) (e): individual | 2.6 | 2.6 | 2.7 | 2.6 | 2.7 | 2.8 | 2.7 |
| 2 (Ranking) (e): group | 2.5 | 2.5 | 2.5 | 2.5 | 2.6 | 2.7 | 2.6 |
| 2 (Ranking) (e): difference (f) | | | | | | | 0.11 |

Abbreviations: CD, clerkship director; SLOE, standardized letter of evaluation.

(a) 1 = below, 2 = at, 3 = above.

(b) 1 = more, 2 = same, 3 = less.

(c) 1 = good, 2 = excellent, 3 = outstanding.

(d) 1 = lower one-third, 2 = middle one-third, 3 = top one-third, 4 = top 10%.

(e) 0 = unlikely to rank, 1 = lower one-third, 2 = middle one-third, 3 = top one-third, 4 = top 10%.

(f) t-test p < 0.05.

Over the course of the 6 years studied in the sample, there were varying trends within institutional practice patterns. The percentage of institutions submitting only SLOEs written by individual authors remained relatively stable year to year (mean ± SD 24% ± 2.9%). The percentage of institutions submitting only SLOEs written by a group of authors steadily increased from 31.4% to 54.5% (mean ± SD 42.9% ± 9.5%). This trend was complemented by a decrease in the percentage of institutions submitting a mix of both individually and group authored SLOEs, from 44.8% to 23.8% (mean ± SD 33% ± 8.5%).

Authorship patterns were similar across all 6 years. When an institution submitted a group SLOE, a mean of 3.4 (2016) to 4.0 (2021) authors were listed on the entry. Of these authors, a consistent 72%–74% were clerkship directors, 64%–69% were program directors, and 37%–47% were assistant/associate program directors (Table 2, Figure 1). Chairs or vice-chairs were the next most highly represented authors, followed by assistant/associate CDs, then fellows, coordinators, and deans.

FIGURE 1. Frequency of author titles appearing in group SLOEs: 2016–2021. ACD, assistant/associate clerkship director; APD, assistant/associate program director; CD, clerkship director; PD, program director; SLOEs, standardized letters of evaluation.

Rating characteristics

For both individual and group SLOEs, mean ratings were quite stable across study years for "qualifications for emergency medicine" and "ranking" (Table 2). Differences in mean scores between individual and group SLOEs were small (Table 2, Figures 2 and 3); however, all differences reached statistical significance (p < 0.01). The post hoc analysis of individual SLOEs from CDs versus non-CDs also showed similar rating patterns with relatively small differences, though again most reached statistical significance (Table S1). When comparing individual SLOEs authored by identified clerkship directors to group SLOEs, differences were again small and fewer reached statistical significance, with some mean ratings higher among group SLOEs and some higher among individual CD SLOEs (Table S2).

FIGURE 2. Mean ratings and standard deviations of qualifications for EM ratings by SLOE type: 2016–2021. SLOEs, standardized letters of evaluation.

FIGURE 3. Rank ratings by SLOE type: 2016–2021. SLOEs, standardized letters of evaluation.

DISCUSSION

As residency programs seek to create an application process that promotes equitable practices while also portraying candidates’ anticipated success at a program, standardized evaluations are and will remain valuable. In tandem with ongoing inquiry and adaptation regarding what a SLOE conveys, we must also consider and optimize practices based on the importance of who conveys this information and how authorship impacts the way this information is interpreted.

To ascertain authorship practices across the 2016–2021 CORD SLOE data set of over 40,000 entries, it was necessary to create an algorithm to sort and categorize each entry. We found our algorithm to be highly accurate, with over 90% accuracy in determining individual versus group SLOEs and nearly 90% accuracy in determining the number of authors in group SLOEs. Additionally, the algorithm demonstrated 62%–100% sensitivity and 97%–100% specificity in detecting individual author roles.

This study ultimately advances our understanding of SLOE authorship, demonstrating that group SLOEs became more prevalent over the study period. Over the 6-year interval, the proportion of programs submitting only group-authored SLOEs, as opposed to individual or a combination of individually and group-authored SLOEs, increased from 31.4% to 54.5%. The proportion of institutions submitting only individually authored SLOEs remained relatively constant, ranging from 20.2% to 28.4%; the shift therefore appears to come from institutions opting to submit a single group SLOE instead of a mix of individual and group SLOEs. This trend seems appropriate, as two SLOEs (individual and group) from the same institution are unlikely to be interpreted much differently than a group SLOE alone: the two SLOEs will either (1) have consistent assessments and therefore be redundant or (2) have inconsistent assessments, which have been shown to leave negative impressions with reviewers and lead them to favor the assessment of the group SLOE, which has previously been found to have increased trustworthiness. 1

CDs were identified as authors approximately 70% of the time, with program directors and associate/assistant program directors the next most highly represented authors. Representation of each role group was relatively stable across years, suggesting consistency in the makeup of authorship groups over this interval. When the author field contains only language such as "SLOE writing group," the algorithm cannot detect author roles or numbers. Given the structure of many EM clerkships that lead to the acquisition of a SLOE, we suspect that CDs author more SLOEs than the algorithm or reviewers can detect. This study did not incorporate external data (e.g., web searches for individual writers' roles). A validation process that includes external data might temper the measured algorithm performance, but it would face substantial feasibility challenges: sources such as program websites may not accurately reflect roles from past years, so such validation would likely require individually contacting authors and/or departments to confirm their role(s) for each year.

Despite a broad preference for group SLOEs, these data surprisingly show that scoring patterns for individual and group SLOEs are quite similar. While mean scores were slightly higher in individual SLOEs and differences reached statistical significance for all nine items, these differences are small. The post hoc analysis helps refine our understanding of this unexpected finding by clarifying the degree to which ratings from CDs and non-CDs align with each other and with those from group SLOEs. Ratings in individual SLOEs from non-CDs were higher than those from CDs, and differences in seven of nine items reached statistical significance (Table S1). These differences are again small, but taken together with the similarity between individual SLOEs from CDs and group SLOEs (Table S2), they suggest that the overall similarity between individual and group SLOEs is likely driven by individual SLOEs from CDs.

Taken together, our results suggest that it may be reasonable to treat individual SLOEs from CDs as similar to group SLOEs from a rating perspective, assuming that current rating practices remain stable. Also of note, there appears to be slight grade inflation in individual SLOEs coming from non‐CDs that can be considered when interpreting SLOEs from this population.

LIMITATIONS

While the algorithm identified and categorized the data with a high degree of accuracy, several limitations apply to our algorithm and the CORD SLOE data set. As reflected in the sensitivity and specificity values for each role group, the algorithm was more likely to miss a title than to assign one inappropriately. This tended to occur when titles were signified in an atypical way (e.g., the algorithm missed "Director of UME" as an equivalent of "clerkship director"). In addition, SLOEs that did not include titles (e.g., those listing author names only or "SLOE writing group") may include authors holding one or more titles of interest, but these were not detected by the algorithm or in manual review. The role representation measures in Table 2 and Figure 1 therefore likely underrepresent true values; however, we believe these data remain valuable as the best measures available in this area to date. Ultimately, the algorithm's ability to discriminate groups from individuals, as well as roles within the department, is limited to the data presented by the author(s). Similarly, institutions were detected by unique entries in the "institution" field, and overcounting may have occurred if different language was used to describe the same institution.

While errors and variability inherent to human data entry represent a considerable limitation, we also found that the algorithm itself was not infallible. For example, the algorithm named authors of a group SLOE with 90% accuracy when compared to human tabulation of authors' names. Results may therefore have differed somewhat had authorship been characterized manually. However, given the size of the data set, manual review and recording of these fields would likely have taken hundreds of hours and was therefore not feasible. As such, we recognize the overall performance of the algorithm as imperfect, but we believe it is sufficient to characterize the data set in a meaningful, interpretable manner.

Our limited understanding of how authorship practices differ between group and individually authored SLOEs remains a further limitation of this study. While we have identified a pattern of increasing group SLOE submissions annually, we do not understand programs' rationales for this switch or their practice patterns once they have decided to include multiple authors. One multiauthor SLOE in the CORD data set may have been written quite differently from another, including in the degree to which each listed author contributed to the SLOE's contents versus participating in review, consensus, or another capacity.

Further investigation

Recognizing that groups of writers authored more than half of the SLOEs submitted in the study interval, it is imperative to further understand how this multiauthor assessment is implemented, how it may impact rotator assessment, and what best practices organizations transitioning to this approach should adopt. Our data suggest that groups of authors have scoring patterns similar to individual authors. However, it is possible that the narratives of group SLOEs contain less biased language than those written by individuals, as more perspectives are incorporated into the writing and editing processes. More qualitative work will be necessary to address this and other questions, including how groups of authors currently approach writing a SLOE. Available data indicate that current practices among groups authoring SLOEs vary across programs, which may introduce unintended bias, grade inflation, or variability. 2 , 24 By further understanding these approaches and outcomes, we may be able to elucidate best practices for group-authored SLOEs and further advance this type of assessment.

CONCLUSIONS

Group standardized letters of evaluation are becoming more prevalent. Rating practices are similar between group standardized letters of evaluation and individual standardized letters of evaluation authored by clerkship directors, suggesting that it is reasonable to interpret standardized letters of evaluation from these sources as similar from a ratings perspective.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

Supporting information

Figure S1. (A–B) Types of SLOEs submitted by year: 2016–2021.

Figure S2. Emergency Medicine Standard Letter of Evaluation (eSLOE 1.0, 2016–2021).


Table S1. Post‐hoc comparison of mean ratings on individual SLOEs authored by individuals identified as clerkship directors vs. others.

Table S2. Post‐hoc comparison of mean ratings on individual SLOEs authored by individuals identified as clerkship directors vs. group SLOEs.

Sehdev M, Egan DJ, Bord S, Hegarty C, Shappell E. Prevalence and characteristics of group standardized letters of evaluation in emergency medicine: A cross‐sectional observational study. AEM Educ Train. 2025;9:e11057. doi: 10.1002/aet2.11057

Supervising Editor: Holly Caretta‐Weyer

REFERENCES

1. Schrepel C, Sehdev M, Dubosh NM, et al. Decoding competitiveness: exploring how emergency medicine faculty interpret standardized letters of evaluation. AEM Educ Train. 2024;8(4):e11019. doi: 10.1002/aet2.11019
2. Love JN, Doty CI, Smith JL, et al. The emergency medicine group standardized letter of evaluation as a workplace-based assessment: the validity is in the detail. West J Emerg Med. 2020;21(3):600-609. doi: 10.5811/westjem.2020.3.45077
3. Garmel GM, Grover CA, Quinn A, et al. Letters of recommendation. J Emerg Med. 2019;57(3):405-410. doi: 10.1016/j.jemermed.2019.04.020
4. Pelletier-Bui AE, Schrepel C, Smith L, et al. Advising special population emergency medicine residency applicants: a survey of emergency medicine advisors and residency program leadership. BMC Med Educ. 2020;20(1):495. doi: 10.1186/s12909-020-02415-8
5. Ginsburg S, Regehr G, Lingard L, Eva KW. Reading between the lines: faculty interpretations of narrative evaluation comments. Med Educ. 2015;49(3):296-306. doi: 10.1111/medu.12637
6. Girzadas DV Jr, Harwood RC, Dearie J, Garrett S. A comparison of standardized and narrative letters of recommendation. Acad Emerg Med. 1998;5(11):1101-1104. doi: 10.1111/j.1553-2712.1998.tb02670.x
7. Girzadas DV, Harwood RC, Delis SN, et al. Emergency medicine standardized letter of recommendation: predictors of guaranteed match. Acad Emerg Med. 2001;8(6):648-653. doi: 10.1111/j.1553-2712.2001.tb00179.x
8. Kukulski P, Ahn J. Validity evidence for the emergency medicine standardized letter of evaluation. J Grad Med Educ. 2021;13(4):490-499. doi: 10.4300/JGME-D-20-01110.1
9. Hiller KM, Franzen D, Lawson L, et al. Clinical assessment of medical students in the emergency department, a National Consensus Conference. West J Emerg Med. 2017;18(1):82. doi: 10.5811/westjem.2016.11.32686
10. Sehdev M, Schnapp B, Dubosh NM, et al. Measuring and predicting faculty consensus rankings of standardized letters of evaluation. J Grad Med Educ. 2024;16(1):51-58. doi: 10.4300/JGME-D-22-00901.1
11. Schnapp B, Sehdev M, Schrepel C, et al. Faculty consensus on competitiveness for the new competency-based emergency medicine standardized letter of evaluation. AEM Educ Train. 2024;8(5):e11024. doi: 10.1002/aet2.11024
12. Love JN, Smith J, Weizberg M, et al. Council of Emergency Medicine Residency Directors' standardized letter of recommendation: the program director's perspective. Acad Emerg Med. 2014;21(6):680-687. doi: 10.1111/acem.12384
13. Miller DT, Krzyzaniak S, Mannix A, et al. The standardized letter of evaluation in emergency medicine: are the qualifications useful? AEM Educ Train. 2021;5(3):e10607. doi: 10.1002/aet2.10607
14. Pelletier-Bui A, Van Meter M, Pasirstein M, Jones C, Rimple D. Relationship between institutional standardized letter of evaluation global assessment ranking practices, interviewing practices, and medical student outcomes. AEM Educ Train. 2018;2(2):73-76. doi: 10.1002/aet2.10079
15. Katirji L, Smith L, Pelletier-Bui A, et al. Addressing challenges in obtaining emergency medicine away rotations and standardized letters of evaluation due to COVID-19 pandemic. West J Emerg Med. 2020;21(3):538-541. doi: 10.5811/westjem.2020.3.47444
16. Jarou Z, Hillman E, Kellogg A, Pelletier-Bui A, Shandro J, Emergency Medicine Residents' Association. EMRA and CORD student advising guide. 2019.
17. Standardized Letters of Evaluation. University of Wisconsin–Madison. Accessed March 13, 2024. https://emed.wisc.edu/education/medical-students/standard-letters-of-evaluation/
18. Overton D. Advice for Emergency Medicine Applicants. Western Michigan University Homer Stryker M.D. School of Medicine. 2017. https://med.wmich.edu/sites/default/files/Advice%20for%20Emergency%20Medicine%20Applicants%202017.pdf
19. Negaard M, Assimacopoulos E, Harland K, Van Heukelom J. Emergency medicine residency selection criteria: an update and comparison. AEM Educ Train. 2018;2(2):146-153. doi: 10.1002/aet2.10089
20. Love JN, Ronan-Bentle SE, Lane DR, Hegarty CB. The standardized letter of evaluation for postgraduate training: a concept whose time has come? Acad Med. 2016;91(11):1480-1482. doi: 10.1097/ACM.0000000000001352
21. Keim SM, Rein JA, Chisholm C, et al. A standardized letter of recommendation for residency application. Acad Emerg Med. 1999;6(11):1141-1146. doi: 10.1111/j.1553-2712.1999.tb00117.x
22. Love J, Ronan S, Deiorio N, et al. Characterization of the CORD standardized letter of recommendation in 2011 to 2012. Ann Emerg Med. 2013;62(5):S168-S169. doi: 10.1016/j.annemergmed.2013.06.030
23. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276-282.
24. Shappell E, Hegarty C, Bord S, Egan DJ. Hawks and doves in standardized letters of evaluation: 6 years of rating distributions and trends in emergency medicine. J Grad Med Educ. 2024;16(3):328-332. doi: 10.4300/JGME-D-23-00231.1


