INTRODUCTION
Accurate evaluation of students by faculty is a source of angst for many stakeholders in medical education. Recently, the objectivity of faculty assessment has been questioned, as age, gender, and length of contact time have been shown to influence grading.1 Even though data suggest faculty are trainable,2 most institutions struggle to obtain accurate student evaluations.3
Some have questioned the utility of faculty development sessions on evaluation.4 Despite these efforts, faculty assessments remain highly variable.2 It is unclear how often faculty are instructed on completing evaluations, how often they receive feedback on their evaluative behaviors, and how this feedback is communicated, as delivery methods shape how the information is processed.5
Our aim was to determine the methods currently used for faculty development on the evaluation of medical students during the internal medicine (IM) clerkship. To do so, we analyzed clerkship directors' (CDs') responses to the 2016 Clerkship Directors in Internal Medicine (CDIM) National Survey.
METHODS
In September 2016, CDIM electronically administered its annual, voluntary, confidential survey of its US membership. Six of the 94 questions addressed faculty development regarding student evaluation and were collaboratively developed by KEO, RL, JL, and AB. After blinded peer review by the CDIM Survey and Scholarship Committee (CDIMSSC), our items were selected for inclusion and were further refined by members of the CDIMSSC and CDIM Council. The survey was approved for exemption from full review by the University of Texas Medical Branch Institutional Review Board (study number 16-2091).
CDIM administered the survey on the web platform SurveyMonkey with Secure Sockets Layer (SSL) encryption and sent 5 e-mail reminders to non-respondents. Select CDIMSSC members also made follow-up phone calls to non-respondents. The survey closed on December 15, 2016.
Demographic characteristics were summarized using descriptive statistics. For subgroup analyses, we used the Kruskal-Wallis test. Analyses were conducted in SPSS v24 with a 5% significance threshold.
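Although the analyses were run in SPSS, the same steps translate directly to open-source tools. Below is a minimal sketch in Python using pandas and scipy; the column names, scores, and groupings are hypothetical placeholders, not the study dataset.

```python
import pandas as pd
from scipy.stats import kruskal

# Hypothetical survey extract: one row per responding clerkship director.
# "feedback_score" stands in for an ordinal outcome measure; these values
# are placeholders, not data from the study.
df = pd.DataFrame({
    "years_as_cd": ["0-5", "6-10", "0-5", ">10", "6-10", "0-5"],
    "feedback_score": [2, 4, 1, 3, 5, 2],
})

# Descriptive statistics: percentage of respondents per experience stratum.
print(df["years_as_cd"].value_counts(normalize=True).mul(100).round(1))

# Kruskal-Wallis H test: a nonparametric comparison across the subgroups,
# evaluated at the paper's 5% significance threshold.
groups = [g["feedback_score"].to_numpy()
          for _, g in df.groupby("years_as_cd")]
h_stat, p_value = kruskal(*groups)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}, significant = {p_value < 0.05}")
```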
RESULTS
The response rate was 74.2% (95/128). Forty-three respondents (45%) reported being CDs for 0–5 years, 16 (17%) for 6–10 years, and 25 (26%) for more than 10 years. Twenty-four (25%) clerkships used 1–2 clinical sites, 40 (42%) used 3–4, and 31 (32%) used 5 or more. Sixty-nine (73%) provided instruction to faculty on completing an evaluation and 48 (51%) on constructing an informative narrative. Table 1 outlines respondents' methods of providing feedback to faculty on their evaluative behaviors, and Table 2 describes the barriers to providing feedback.
Table 1.
Approaches to Providing Faculty Feedback on Student Evaluations
| Characteristic | No. (%)a |
|---|---|
| Mode of dissemination of faculty feedback | |
| None provided | 43 (45) |
| Workshop/faculty development | 24 (25) |
| Written feedback only | 16 (17) |
| Individual meetings, all faculty | 3 (3) |
| Individual meetings, only as needed^b | 8 (8) |
| Feedback distributed to site directors | 1 (1) |
^a Percentages are of the 95 responding clerkship directors
^b Feedback is provided only to select faculty members
Table 2.
Barriers to Providing Faculty Development on the Evaluation Process
| Barrier^b | No. (%)^a |
|---|---|
| Inadequate time | 31 (33) |
| Geographic dispersion of clinical sites | 14 (15) |
| Lack of training to perform | 9 (9) |
| Volume of faculty | 4 (4) |
| Lack of engagement when completed in the past | 3 (3) |
| Fear of isolating faculty | 2 (2) |
^a Percentages are of the 95 responding clerkship directors
^b Of the 43 respondents who provided no feedback. Respondents could select all barriers that applied; thus, the total number of barriers adds up to more than 43
We performed subgroup analyses to determine whether CD experience or clerkship size influenced the likelihood of giving feedback. Neither CD experience nor the number of geographic sites was significantly associated with the likelihood of giving feedback (p > 0.05 for both).
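As an illustration only, a subgroup comparison of this kind could be coded as below. The study's actual coding scheme is not reported, so the 0/1 feedback indicator and all values shown are assumptions.

```python
from scipy.stats import kruskal

# Feedback provision coded 0 = no, 1 = yes, grouped by hypothetical
# CD-experience strata (placeholder values, not the study data).
feedback_0_5    = [1, 0, 1, 1, 0, 0]   # CDs with 0-5 years' experience
feedback_6_10   = [0, 1, 1, 0, 1]      # 6-10 years
feedback_over10 = [1, 1, 0, 1, 0]      # >10 years

# Kruskal-Wallis test across the three strata, per the stated methods;
# the study reported no significant association (p > 0.05).
h_stat, p_value = kruskal(feedback_0_5, feedback_6_10, feedback_over10)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```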
DISCUSSION
Feedback to evaluators is inconsistent in IM clerkships across the USA, with 62% of CDs providing either no feedback or written feedback alone to faculty regarding their evaluative performance. To improve performance, feedback should be timely, specific, and focused on skills and behaviors.5 Likewise, it should be a shared dialogue between the giver and receiver of the corrective information.5
Expectations regarding effective student evaluation are being set, as 73% of CDs provide instruction on completing evaluations. However, only half train faculty on constructing an informative narrative, and nearly two-thirds do not close the loop by giving faculty effective feedback on how well they evaluate.
Inadequate time, the volume of faculty, and geographic dispersion of clinical sites were cited as barriers to providing feedback. To provide instruction and give faculty feedback on their evaluative behaviors, CDs need adequate support and time. Prior work suggests this can be done at modest cost across multiple institutions.6
Our study has several limitations. Not all 128 member institutions responded, although our response rate (74.2%) suggests these results reasonably represent LCME-accredited medical schools in the USA. We also performed subgroup analyses that were not defined a priori, prompted by the concern that CD experience and the number of participating sites might influence the likelihood of providing feedback; however, we found no such association.
In conclusion, our results suggest a lack of faculty development around accurate and meaningful medical student evaluation; where such development does exist, it often does not meet the standards of useful feedback needed to improve the evaluative behaviors of teaching faculty.
Acknowledgments
Contributors: None
Compliance with Ethical Standards
Conflict of Interest
The authors declare that they do not have a conflict of interest.
References
1. Riese A, Rappaport L, Alverson B, Park S, Rockney RM. Clinical performance evaluations of third-year medical students and association with student and evaluator gender. Acad Med. 2017;92(6):835–840. doi:10.1097/ACM.0000000000001565.
2. Hemmer PA, Dadekian GA, Terndrup C, et al. Regular formal evaluation sessions are effective as frame-of-reference training for faculty evaluators of clerkship medical students. J Gen Intern Med. 2015;30(9):1313–1318. doi:10.1007/s11606-015-3294-6.
3. Boyse TD, Patterson SK, Cohan RH, et al. Does medical school performance predict radiology resident performance? Acad Radiol. 2002;9(4):437–445. doi:10.1016/S1076-6332(03)80189-7.
4. Andolsek KM. Improving the medical student performance evaluation to facilitate resident selection. Acad Med. 2016;91(11):1475–1479. doi:10.1097/ACM.0000000000001386.
5. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777–781. doi:10.1001/jama.1983.03340060055026.
6. Houston TK, Clark JM, Levine RB, et al. Outcomes of a national faculty development program in teaching skills: prospective follow-up of 110 faculty development teams. J Gen Intern Med. 2004;19(12):1220–1227. doi:10.1111/j.1525-1497.2004.40130.x.
