Author manuscript; available in PMC: 2018 May 7.
Published in final edited form as: Acad Med. 2016 Nov;91(11):1540–1545. doi: 10.1097/ACM.0000000000001180

Ranking Practice Variability in the Medical Student Performance Evaluation: So Bad, It’s “Good”

Megan Boysen Osborn 1, James Mattson 2, Justin Yanuck 3, Craig Anderson 4, Ara Tekian 5, Christian John Fox 6, Ilene B Harris 7
PMCID: PMC5937982  NIHMSID: NIHMS961247  PMID: 27075499

Abstract

Purpose

To examine the variability among medical schools in ranking systems used in medical student performance evaluations (MSPEs).

Method

The authors reviewed MSPEs from U.S. MD-granting medical schools received by the University of California, Irvine emergency medicine and internal medicine residency programs during 2012–2013 and 2014–2015. They recorded whether the school used a ranking system, the type of ranking system used, the size and description of student categories, the location of the ranking statement and category legend, and whether nonranking schools used language suggestive of rank.

Results

Of the 134 medical schools in the study sample, the majority (n = 101; 75%) provided ranks for students in the MSPE. Most of the ranking schools (n = 63; 62%) placed students into named category groups, but the number and size of groups varied. The most common descriptors used for these 63 schools’ top, second, third, and lowest groups were “outstanding,” “excellent,” “very good,” and “good,” respectively, but each of these terms was used across a broad range of percentile ranks. Student ranks and school category legends were found in various locations. Many of the 33 schools that did not rank students included language suggestive of rank.

Conclusions

There is extensive variation in ranking systems used in MSPEs. Program directors may find it difficult to use MSPEs to compare applicants, which may diminish the MSPE’s value in the residency application process and negatively affect high-achieving students. A consistent approach to ranking students would benefit program directors, students, and student affairs officers.


The medical student performance evaluation (MSPE), formerly called the “dean’s letter,” is an important component in a medical student’s application for residency training. The Association of American Medical Colleges (AAMC) guidelines for preparing the document were revised in 2002, in an attempt to promote consistency among U.S. MD-granting medical schools.1 In a review of MSPEs submitted three years later, however, Shea and colleagues2 found that there was still great variability from institution to institution. The MSPE summary section is one such area of variability.

The 2002 MSPE guidelines state that the summary section should include “a summative assessment … of the student’s comparative performance in medical school, relative to his/her peers, including information about any school-specific categories used in differentiating among levels of student performance.”1 Notwithstanding this recommendation, analyses of MSPEs have demonstrated that ranking systems vary among schools.2–5 Some medical schools provide numerical ranks for their students, while others group their students into quartiles or quintiles. Many medical schools group their students into categories, with descriptors such as “outstanding” or “very good.” Some schools do not use any type of ranking system or do not clearly define one.

Schools’ categorical ranking systems vary widely in the terminology used and in the size of the category groups. For example, many schools use the term “excellent” to describe high-achieving students, while other schools use the same term to describe average students.5 On the other hand, schools typically use the term “good” to describe students in the bottom 50% of their graduating class.3 The AAMC guidelines recommend that Appendix D of the MSPE should provide a legend for the categorical ranking system,1 but this explanation of ranking categories is commonly found elsewhere in the MSPE.

Although some authors have described the variable use of certain descriptive ranking terms within the MSPE,3–5 to the best of our knowledge, no authors have quantified the variability in ranking practices among U.S. medical schools. Our purpose, based on a review of MSPEs received by our residency program, is to report the proportion of medical schools that use a defined ranking system, explore the variability among ranking systems and the language most commonly used, and identify statements suggestive of rank in MSPEs from schools without formal ranking systems.

Method

We extracted the MSPE from each application to the University of California, Irvine (UC Irvine) emergency medicine (EM) residency program during the 2012–2013 and 2014–2015 application cycles. (MSPEs from the 2013–2014 cycle were not available to us.) We included any applicant from a U.S. MD-granting medical school. We excluded MSPEs for students who did not graduate in 2013 or 2015.

We searched the UC Irvine internal medicine (IM) residency program applications for any schools for which we did not have a 2015 MSPE. (The 2013 IM MSPEs were not available electronically at the time of data collection.) For any schools still missing from our sample, we contacted the school’s associate dean of student affairs and requested a deidentified MSPE.

We selected one MSPE per school for review for each application year, from the student whose application was listed first alphabetically in the Electronic Residency Application Service (ERAS).

We answered the following questions for each school’s MSPE:

  1. Does the school use a defined ranking system?

  2. What type of ranking system is used?

  3. Into how many categories are the students divided? What are the most common category descriptors for each group? What percentage of students is in each category?

  4. Where is the student’s rank provided in the MSPE? Where is the legend for the ranking system?

  5. Do nonranking schools (i.e., schools that do not rank students) use similar language to schools that rank students?

One trained reviewer (J.M. or M.B.O.), who was not blinded to the study purpose, extracted and recorded the data on a data abstraction form. A second reviewer (J.M., M.B.O., or J.Y.) re-reviewed each MSPE and the corresponding data abstraction form for accuracy. The three reviewers (J.M., M.B.O., and J.Y.) held periodic meetings and resolved any inconsistencies between raters through discussions to reach consensus.

We calculated descriptive statistics for data related to each of the study questions. Specifically, we calculated the percentages of schools using ranking systems and described the different types of ranking system. We also described common terminology/descriptors and the average group sizes for medical schools that used a categorical ranking system. We excluded from our group size calculations schools that did not provide the sizes of their category groups. For schools that changed their ranking practice (e.g., changed from nonranking to ranking, changed the number or description of category groups) between 2013 and 2015, we used the 2015 practice for our data analysis. For schools that changed the percentages of students in each category group, we averaged the group size between years. We performed a Wilcoxon signed rank test to compare the percentages of students in the first (top), second, third, and last category between 2013 and 2015 to ensure that there was no statistically significant difference between the group sizes in the two years.
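To make the paired comparison concrete, the sketch below shows how such a Wilcoxon signed rank test could be run in Python with SciPy. The percentages are hypothetical placeholders, not study data, and the sketch is an illustration of the described test rather than the authors’ analysis code.

```python
# Hypothetical sketch of the paired comparison described above; these
# percentages are invented placeholders, not data from the study.
from scipy.stats import wilcoxon

# Percentage of each school's class placed in the top category group,
# paired by school, in 2013 and 2015.
top_group_2013 = [20, 22, 15, 25, 18, 30, 21, 24]
top_group_2015 = [21, 20, 16, 24, 19, 28, 22, 25]

statistic, p_value = wilcoxon(top_group_2013, top_group_2015)
print(f"Wilcoxon signed rank statistic = {statistic:.1f}, P = {p_value:.3f}")
# A large P value would indicate no detectable shift in group sizes between years.
```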

To address the fifth study question, we located additional MSPEs for each nonranking school from the 2014–2015 ERAS archives for our EM and IM programs. For these schools, we reviewed multiple students’ MSPE summary sections, searching for language that could be viewed as a ranking statement—for example, a particular sentence or phrase that appeared in multiple students’ summary sections, especially if there was a single adjective (descriptor) that varied among students. We analyzed these descriptors in relation to the students’ performance in clinical clerkships. Two reviewers (J.M. and J.Y.)—who were familiar with common adjectives used in MSPEs, but blinded to students’ academic performance—numerically ranked each descriptor using standard competition ranking. They based their ranking on whether, in their experience, the descriptor portrayed a high- or low-achieving student, with “1” as the highest-achieving student. Then, using a separate, blinded version of the document, they numerically ranked the student’s academic performance, based on the number of honors, high passes, passes, and fails in the six core clinical clerkships. A third reviewer (M.B.O.) linked each student’s descriptor rank with his or her academic performance rank and calculated the Spearman rank correlation for each reviewer for every school that used language that could be viewed as a ranking statement. (For a sample analysis of MSPEs received from one school, see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A343.)
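A minimal sketch of this correlation analysis is shown below, using invented students and descriptors; `rankdata(method="min")` implements standard competition ranking, and the number of honors grades stands in for the full clerkship-grade ordering. This is an illustration of the described procedure under those assumptions, not the authors’ code.

```python
# Hypothetical sketch: correlate descriptor strength with clerkship performance
# for students from one nonranking school. All data are invented placeholders.
from scipy.stats import rankdata, spearmanr

# A reviewer's ordering of the school's descriptors, 1 = strongest (assumed).
descriptor_order = {"outstanding": 1, "excellent": 2, "very good": 3}

# (summary descriptor, honors grades in the six core clerkships) per student.
students = [
    ("outstanding", 5), ("outstanding", 4), ("excellent", 2),
    ("excellent", 1), ("very good", 0), ("very good", 0),
]

# Standard competition ranking: tied values share a rank, and the next rank is skipped.
descriptor_ranks = rankdata([descriptor_order[d] for d, _ in students], method="min")
performance_ranks = rankdata([-honors for _, honors in students], method="min")

rho, p_value = spearmanr(descriptor_ranks, performance_ranks)
print(f"Spearman rank correlation = {rho:.2f}, P = {p_value:.3f}")
```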

The UC Irvine and University of Illinois, Chicago human subjects institutional review boards approved this study.

Results

In 2015, there were 136 U.S. MD-granting medical schools that had graduating classes, while in 2013 there were 132.6 In each of the 2012–2013 and 2014–2015 application seasons, our EM residency program received approximately 650 applications. Our IM residency program received approximately 2,000 applications for 2014–2015. We received two additional deidentified MSPEs by contacting medical schools. From this pool, we had at least one MSPE from 134 (99%) of the 136 U.S. MD-granting medical schools with graduates in 2015. We had data for both 2013 and 2015 for 114 (85%) of these 134 medical schools. For the remainder of the schools, we had 2015 data for 19 (14%) and 2013 data for 1 (1%).

Of the 134 schools, 101 (75%) provided ranks for their medical students in the MSPE, and 33 (25%) did not. Sixty-three (62%) of the ranking schools used named category groups, such as “outstanding” for their top group and “good” for their lowest group. Twenty-four (24%) of the ranking schools broke the class into segments such as tertiles (thirds), quartiles (fourths), or quintiles (fifths), without other category descriptors. The remaining ranking schools (n = 14; 14%) used a variety of descriptive methods, such as a numerical class rank or cumulative grade average compared with a mean (Table 1).

Table 1. Ranking Systems Used in the MSPEs of the 101 U.S. MD-Granting Medical Schools That Ranked Studentsᵃ

Ranking system No. (%) of schools
Three groups (categories or tertiles) 8 (8)
Four groups (categories or quartiles) 43 (43)
Five groups (categories or quintiles) 32 (32)
Six or seven groups (categories) 4 (4)
Numerical (e.g., 23/100) 6 (6)
Alternative methodᵇ 6 (6)
Not well definedᶜ 2 (2)

Abbreviations: MSPE indicates medical student performance evaluation; UC Irvine, University of California, Irvine.

ᵃ This study used MSPEs received by the UC Irvine emergency medicine residency program in the 2012–2013 and 2014–2015 application cycles. MSPEs received by the UC Irvine internal medicine program in 2014–2015 or obtained directly from schools were used to include schools from which the emergency medicine program did not receive applications. For schools that changed their ranking practice between study years, the 2014–2015 ranking practice was used in this analysis.

ᵇ Students were grouped by total number of honors grades, grade point average ranges, or a cumulative earned percentage compared with a class mean.

ᶜ These schools clearly had a ranking system, but it was not well defined in the MSPE. The authors examined multiple letters from these schools. All letters examined mentioned the student being in, for example, the “top 10%” or “lower one-third” and used terms such as “exceptional” or “good” to describe the student as a candidate for residency. However, these schools did not include a definition of the ranking system.

Students were most commonly divided into four groups (Table 1). Among the 63 schools that used named category groups, most commonly the top group of students was described as “outstanding” (n = 35; 56%), the second highest group as “excellent” (n = 32; 51%), the third group as “very good” (n = 28; 44%), and the lowest group as “good” (n = 28; 44%) (Chart 1). On average, the top group contained 21% of students (range: 3%–39%; interquartile range [IQR]: 15%–25%); the second group contained 30% of students (range: 10%–75%; IQR: 20%–35%); the third group contained 30% of students (range: 7%–70%; IQR: 23%–37%); and the lowest group contained 10% of students (range: 0%–33%; IQR: 2%–16%). Six (10%) schools did not provide the sizes for each of their groups, and another 6 (10%) provided size distribution for only their top group(s). When the negative terms “marginal,” “below average,” or “recommended with reservation” were used for the lowest-ranked students, these groups included on average 1% of students (range: 0%–2%). We found 44 unique terms or phrases to describe student category groups; examples of the less common descriptors are provided in Chart 1.

Chart 1 (see Supplementary Material)

The most common terms used by the 63 schools with named category groups to describe student performance, regardless of class position, were “excellent” (n = 53; 84%), “outstanding” (n = 52; 83%), “very good” (n = 51; 81%), and “good” (n = 42; 67%) (Table 2). Among these schools, “excellent” was used to describe students ranging from the 1st to the 95th percentiles; “outstanding” was used to describe students who were in the 33rd to 99th percentiles; “very good” was used to describe students who were in the 1st to 80th percentiles; and “good” was used to describe students who were in the 1st to 57th percentiles.

Table 2. Most Common Terms Used by the 63 Schools With Named Category Groups and the Range of Student Percentile Ranks Described by Each Termᵃ

Term No. (%) of schools using descriptor Range of student percentile ranks
Excellent 53 (84) 1st–95th
Outstanding 52 (83) 33rd–99th
Very good 51 (81)ᵇ 1st–80th
Good 42 (67)ᵇ 1st–57th

Abbreviations: MSPE indicates medical student performance evaluation; UC Irvine, University of California, Irvine.

ᵃ This study used MSPEs received by the UC Irvine emergency medicine residency program in the 2012–2013 and 2014–2015 application cycles. MSPEs received by the UC Irvine internal medicine program in 2014–2015 or obtained directly from schools were used to include schools from which the emergency medicine program did not receive applications.

ᵇ “Very good” and “good” were also commonly used for schools’ fourth and fifth groups when those groups were not the lowest group; therefore, the totals for “very good” and “good” in Table 2 do not match the totals for these terms in Chart 1.

Among the 101 schools with formal ranking systems, there was variability in where the reader could locate the student’s rank in the MSPE. The majority (n = 79; 78%) of the schools identified the individual student’s rank in the summary section. Other locations included the appendices (n = 14; 14%) or another section within the MSPE (n = 8; 8%). Many schools (n = 51; 50%) made an effort to highlight the rank by bolding, capitalizing, or underlining it.

The location for the ranking system legend also varied. Thirty-six (36%) of the 101 ranking schools described their system in Appendix D, as suggested by the AAMC guidelines.1 Forty-one (41%) described their ranking system in another appendix, including the medical school information page. Some included the legend in a cover letter (n = 6; 6%) or within the body of the MSPE (n = 5; 5%). Thirteen (13%) did not fully describe their ranking system anywhere, but the use of a ranking system was inferred by their giving a numerical or quantile rank to the student at some location in the MSPE.

Of the 33 schools that did not rank their students, 21 (64%) included a statement somewhere in the MSPE that they “do not rank” their students. We examined the summary sections of 330 additional MSPEs from the 33 nonranking schools (average = 10 MSPEs/school; range: 1–25; IQR: 5–14). For 9 (27%) of these schools, we did not find any language suggestive of rank. We found that 15 (45%) of the schools included statements in their summary paragraphs that were suggestive of rank. Specifically, each of these schools included a common phrase in its MSPE summary section that described the student’s performance using an adjective such as “outstanding.” For example, we examined 12 MSPEs from one nonranking school, and each contained the following statement in the summary section: “We recommend him/her as a [descriptor] candidate for graduate medical education.” The descriptors this school used were “outstanding,” “excellent,” and “very good.” The “outstanding” student had four honors grades in clinical clerkships, and the “very good” students had no honors grades. However, the school stated, “There is no ranking system for our students.”

When we compared students’ academic performance against the “descriptors” used by these 15 schools (independently ranked by our reviewers), we found that 8 of the schools had strong correlations between the descriptors and students’ academic performance, as measured by clinical clerkship grades (Spearman rank coefficient: rs = 0.71–1.0 for 8 schools, P < .05 for 6 of these schools). A detailed description of these phrases, descriptors, rank coefficients, and P values is available in Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A343.

In 5 (15%) of the 33 nonranking schools’ MSPEs, we identified occasional phrases suggesting rank, but these phrases were not consistently present or did not have a varying descriptor term. For 3 (9%) of the nonranking schools (2 with “we do not rank” statements), we found a statement suggesting rank for a few top students, but not for any other students. For example, the following sentence appeared in 3/23 (13%) of the MSPEs we examined for 1 of these 3 schools: “[Student] is recommended highly, a citation awarded to approximately one-third of the senior class.” This school’s remaining MSPEs had no language suggestive of rank. One additional nonranking school had ratings for students’ “housestaff potential” in the MSPE summary section, but there was no information comparing the student’s rating with that of other students.

We found that a higher percentage of nonranking schools (n = 9 of 33 [27%]; 95% confidence interval [CI], 12%–42%) were among the 2015 U.S. News and World Report top 20 medical schools7 compared with the ranking schools (n = 11 of 101 [11%]; 95% CI, 5%–17%; chi-square test for independence: P = .02).
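The sketch below reconstructs this comparison from the counts reported in the text, using a chi-square test for independence (without continuity correction) and a normal-approximation 95% CI for each proportion; it is illustrative only, and the authors’ exact computation may have differed.

```python
# Reconstruction of the reported comparison from the counts in the text;
# this is illustrative and not the authors' original analysis code.
from math import sqrt
from scipy.stats import chi2_contingency

#            in top 20, not in top 20
table = [[9, 33 - 9],     # nonranking schools (n = 33)
         [11, 101 - 11]]  # ranking schools (n = 101)

chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.3f}")

def wald_ci_95(successes, n):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half_width = 1.96 * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

print(wald_ci_95(9, 33))    # roughly 12%-42%
print(wald_ci_95(11, 101))  # roughly 5%-17%
```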

Seven (6%) of the 114 schools for which we had two years of data changed from nonranking schools in 2013 to ranking schools in 2015. None of the schools changed from ranking to nonranking. Five (4%) schools decreased the number of category groups they used. Two (2%) schools changed their descriptor terms. Many schools changed the size of their category groups (n = 36; 32%). A Wilcoxon signed rank test detected no statistically significant difference in the percentage of students in the first, second, third, or last category between 2013 and 2015 in these 36 schools.

Discussion

Despite the 2002 AAMC recommendations for standardizing MSPEs,1 there is still considerable variability in their structure and content.2,4 Our study clearly demonstrates inconsistency in the student ranking systems used in MSPEs among U.S. MD-granting medical schools.

Variability in the format of MSPEs may contribute to difficulty in interpreting them. On average, residency programs receive 856 applications per year.8 Although the recommended length of the MSPE is two to three pages, it is usually longer.1,2 For program directors, a student’s class rank is one of the most important components of the MSPE, as a higher rank may correlate with better performance during residency.8,9 Locating a student’s class rank can be a particularly cumbersome process, however. The program director must first identify the sentence in the MSPE that contains the student’s rank category and then locate the legend that describes the ranking system—both items with variable locations and presence. Therefore, it is not surprising that one-third of program directors do not use the MSPE to make rank-order decisions and that the MSPE is not a top factor in making selection decisions for most specialties.10,11

The lower value placed on the MSPE by program directors is concerning because the MSPE is the only comprehensive assessment of a student’s medical school performance. Medical schools put significant time and resources into the production of MSPEs each year.12,13 The effort involved in the registrar analyzing transcript data, the student summarizing his or her salient characteristics, and the writer meeting with each student and composing each MSPE should not be underestimated. Issues in the readability and usability of the MSPE undermine the efforts and resources invested by all parties involved and may ultimately take emphasis away from assessment of global medical school performance in a student’s application.

We found that each of the most common category descriptors used in MSPEs was associated with a broad range of student percentiles. For example, “good” was usually used to refer to students in the bottom half of their graduating class, but its use ranged above the median in at least one school. Such variability may lead to errors of interpretation. A program director may assume, for example, that an “outstanding” student is ranked in the school’s top category, when in reality the student is ranked in the bottom half of the class or the school has no ranking system.

High-achieving students have the most to lose from the variability in ranking systems. First, they are indirectly affected if a program director incorrectly assigns their lower-achieving counterparts to a higher quartile than is deserved—for example, a student in the 35th percentile of their class who is described as “outstanding.” Furthermore, program directors may not be able to identify the above-average students from nonranking schools because grade distribution among medical schools is also variable.14

Our study also demonstrates that, more than 10 years after the AAMC recommendations on the MSPE were released,1 one-quarter of medical schools do not provide transparent, comparative performance information for their students. About half of these nonranking schools use phrases that an MSPE reader could view as category ranks, and some of these phrases strongly correlated with a student’s academic performance. Other nonranking schools provided rank information for their top students only, despite explicitly stating that they “do not rank.”

It is not clear why some schools choose not to rank their students. A larger proportion of nonranking schools than ranking schools were among the U.S. News and World Report top 20 medical schools in 2015,7 which suggests that top medical schools may be of the mind-set that all of their students are exceptional and may not want to place them into categories. Inherently, there is a conflict between the needs of the MSPE writer and the MSPE reader.15 The writer may feel compelled to act as an advocate for each student,13 whereas the reader is attempting to select the best candidates for his or her residency program.

Although our study determined the most common ranking systems, it did not identify the best ranking practice. It is our opinion that the best ranking practice is a consistent ranking practice that highlights high achievers and identifies problematic students but does not punish lower achievers. We believe that the following categorical ranking system would achieve these goals: “outstanding” (80th–99th percentiles), “excellent” (50th–79th percentiles), “very good” (4th–49th percentiles), and “satisfactory” (3rd percentile and below), with the “satisfactory” group size adjusted to include only students with serious academic performance issues (e.g., course failures). In this proposed system, the top two categories are consistent with the most common current practices. A large third group avoids punishing lower-ranking students, while a small lowest group provides an opportunity for schools to identify very-low-achieving students. We believe the average lowest group size of 10% identified in our study unduly punishes students who fall into lower percentiles but have not had significant performance issues.
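Expressed as a simple rule, this proposal maps a student’s class percentile to a label as in the sketch below; the fixed cutoff for the “satisfactory” band is a simplification, since that group would actually be sized by the presence of serious academic performance issues rather than by a strict percentile boundary.

```python
# Sketch of the proposed categorical ranking scheme described above. The fixed
# cutoff for "satisfactory" is a simplification; the proposal ties that group to
# serious academic performance issues (e.g., course failures), not a strict percentile.
def proposed_category(class_percentile: float) -> str:
    """Map a class percentile (higher = better) to the proposed MSPE label."""
    if class_percentile >= 80:
        return "outstanding"   # 80th-99th percentiles
    if class_percentile >= 50:
        return "excellent"     # 50th-79th percentiles
    if class_percentile >= 4:
        return "very good"     # 4th-49th percentiles
    return "satisfactory"      # 3rd percentile and below

for percentile in (95, 65, 20, 2):
    print(f"{percentile}th percentile -> {proposed_category(percentile)}")
```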

Regardless of the ranking method chosen, student affairs officers, students, and program directors would benefit if MSPEs were written to demonstrate a consistent approach to ranking students, and in accordance with the 2002 AAMC guidelines.1 As the AAMC MSPE Taskforce convenes,16 we urge its members to set and communicate any future guidelines in a clear, consistent fashion, and to consider collaborating with the Liaison Committee on Medical Education to encourage schools to comply with them.

Limitations

There were a few notable limitations to this study. For ranking schools, we examined only one MSPE per application cycle and assumed that other MSPEs from the same school would have the same format; however, we minimized this potential issue by analyzing documents from two different years when available. Our data abstractors were not blinded to the study purpose. We excluded schools that did not give sizes for their category groups from the group size calculations. For our analysis of nonranking schools, we reviewed 1 to 25 MSPEs from each school. The nonranking schools for which we had fewer MSPEs may have had a “hidden” rank system that we did not discover secondary to insufficient variability in student performance distribution.

Conclusions

The majority of medical schools provide ranks for their students in the MSPE, but the types of ranking system they use vary widely. Further, nonranking medical schools may have undisclosed ranking systems. This variability in ranking practices limits the ability of program directors to interpret MSPEs and to compare applicants. Threats to the usability of the MSPE devalue the tremendous time and resources spent in its production and undermine its importance in residency applications. Moving forward, we recommend that medical schools adopt a unified, systematic, and transparent method for ranking students in the MSPE.

Supplementary Material

Appendix 1 and 2
Chart 1

Acknowledgments

The authors would like to thank Ms. Janet Settle and Ms. June Casey for providing tremendous administrative support for the study. They also thank the UC Irvine internal medicine residency program, particularly Dr. Bindu Swaroop and Ms. Susan Altmayer.

Funding/Support: None reported.

Footnotes

Editor’s Note: A Commentary by K.M. Andolsek appears on pages 1475–1479.

Other disclosures: None reported.

Ethical approval: This study was approved by the human subjects institutional review boards at the University of California, Irvine and University of Illinois, Chicago.

Disclaimers: M. Boysen Osborn is an emergency medicine residency program director, and her opinions reflect a program director’s point of view.

Previous presentations: Portions of this study were presented at the Western Group on Educational Affairs (WGEA) Regional Meeting, March 2014, Honolulu, Hawaii, and the Association of American Medical Colleges Annual Meeting, November 2014, Chicago, Illinois.

Contributor Information

Megan Boysen Osborn, Assistant professor and residency program director, Department of Emergency Medicine, University of California, Irvine, Orange, California.

James Mattson, Fourth-year medical student, University of California, Irvine, School of Medicine, Irvine, California.

Justin Yanuck, Fourth-year medical student, University of California, Irvine, School of Medicine, Irvine, California.

Craig Anderson, Research specialist, Department of Emergency Medicine, University of California, Irvine, Orange, California.

Ara Tekian, Professor, Department of Medical Education, University of Illinois, Chicago, Chicago, Illinois.

Christian John Fox, Professor and assistant dean of student affairs, Department of Emergency Medicine, University of California, Irvine, Orange, California.

Ilene B. Harris, Professor and head and director of graduate studies, Department of Medical Education, University of Illinois, Chicago, Chicago, Illinois.

References

1. Association of American Medical Colleges. A guide to the preparation of the medical student performance evaluation. 2002. https://www.aamc.org/download/64496/data/mspeguide.pdf. Accessed February 4, 2016.
2. Shea JA, O’Grady E, Morrison G, Wagner BR, Morris JB. Medical student performance evaluations in 2005: An improvement over the former dean’s letter? Acad Med. 2008;83:284–291. doi:10.1097/ACM.0b013e3181637bdd.
3. Kiefer CS, Colletti JE, Bellolio MF, et al. The “good” dean’s letter. Acad Med. 2010;85:1705–1708. doi:10.1097/ACM.0b013e3181f55a10.
4. Naidich JB, Grimaldi GM, Lombardi P, Davis LP, Naidich JJ. A program director’s guide to the medical student performance evaluation (former dean’s letter) with a database. J Am Coll Radiol. 2014;11:611–615. doi:10.1016/j.jacr.2013.11.012.
5. Naidich JB, Lee JY, Hansen EC, Smith LG. The meaning of excellence. Acad Radiol. 2007;14:1121–1126. doi:10.1016/j.acra.2007.05.022.
6. Liaison Committee on Medical Education. Medical school directory: Accredited MD programs in the United States. http://www.lcme.org/directory.htm. Accessed December 31, 2015.
7. U.S. News and World Report. Best medical schools: Research. Ranked in 2015. http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-medical-schools/research-rankings?int=af3309&int=b3b50a&int=b14409. Accessed February 4, 2016.
8. Swide C, Lasater K, Dillman D. Perceived predictive value of the medical student performance evaluation (MSPE) in anesthesiology resident selection. J Clin Anesth. 2009;21:38–43. doi:10.1016/j.jclinane.2008.06.019.
9. Lurie SJ, Lambert DR, Grady-Weliky TA. Relationship between dean’s letter rankings and later evaluations by residency program directors. Teach Learn Med. 2007;19:251–256. doi:10.1080/10401330701366523.
10. National Resident Matching Program, Data Release and Research Committee. Results of the 2014 NRMP program director survey. Washington, DC: National Resident Matching Program; 2014. http://www.nrmp.org/wp-content/uploads/2014/09/PD-Survey-Report-2014.pdf. Accessed February 4, 2016.
11. Green M, Jones P, Thomas JX Jr. Selection criteria for residency: Results of a national program directors survey. Acad Med. 2009;84:362–367. doi:10.1097/ACM.0b013e3181970c6b.
12. Green M, Zick A, Thomas JX. Commentary: Accurate medical student performance evaluations and professionalism assessment: “Yes, we can!” Acad Med. 2010;85:1105–1107. doi:10.1097/ACM.0b013e3181e208c5.
13. Green MM, Sanguino SM, Thomas JX Jr. Standardizing and improving the content of the dean’s letter. Virtual Mentor. 2012;14:1021–1026. doi:10.1001/virtualmentor.2012.14.12.oped1-1212.
14. Alexander EK, Osman NY, Walling JL, Mitchell VG. Variation and imprecision of clerkship grading in U.S. medical schools. Acad Med. 2012;87:1070–1076. doi:10.1097/ACM.0b013e31825d0a2a.
15. Hunt D. Student affairs officers should not oversee preparation of the medical student performance evaluation. Acad Med. 2011;86:1337. doi:10.1097/ACM.0b013e31823007e6.
16. Association of American Medical Colleges Group on Student Affairs. Medical student performance evaluation—MSPE. MSPE Review Resources. https://www.aamc.org/members/gsa/54686/gsa_mspeguide.html. Accessed February 17, 2016.
