ACR Open Rheumatology. 2023 Dec 28;6(3):139–144. doi: 10.1002/acr2.11638

Enhancing Faculty Development Through Compiled Verbal Feedback on Clinical Teaching From Trainees

Guy Katz 1, Eli M Miloslavsky 1, Ana D Fernandes 1, Marcy B Bolster 1
PMCID: PMC10933622  PMID: 38155482

Abstract

Objective

Feedback from fellows‐in‐training (FITs) is important for faculty development and to enrich clinical teaching. We sought to evaluate the effectiveness of traditional online evaluations and a novel compiled verbal feedback mechanism.

Methods

An annual feedback system was implemented in our rheumatology division in which FITs provided verbal feedback on all faculty to a facilitator who compiled, deidentified, and shared the feedback with individual faculty members. FITs also completed standard online annual evaluations of faculty. FITs and faculty completed surveys assessing the perceived effectiveness and confidentiality of each feedback mechanism.

Results

Thirteen of 15 eligible faculty and all 4 eligible FITs completed both surveys. Responses by FITs and faculty regarding the quality of online evaluations were generally unfavorable or neutral. Faculty responses regarding compiled verbal feedback were more favorable in all questions and significantly more favorable with respect to the feedback's ability to explain strengths (54% favorable for online evaluations vs 100% for compiled verbal feedback), the feedback's specificity (0% vs 54%), and the feedback's actionable nature (15% vs 62%). All FITs’ responses regarding quality of compiled verbal feedback were favorable. FITs had concerns regarding confidentiality with both online evaluations (0% favorable) and compiled verbal feedback (25% favorable), though FITs had less concern for future faculty interactions with compiled verbal feedback (100% favorable) than with online evaluations (0% favorable).

Conclusion

Compiled verbal feedback by FITs produced more actionable and effective feedback for faculty, with fewer concerns regarding future faculty interactions compared with traditional online evaluations. Further study of this method across different programs and institutions is warranted.

INTRODUCTION

Faculty teaching and provision of feedback are essential components of clinical education for fellows‐in‐training (FITs). Faculty members regularly provide feedback to learners to identify strengths and areas for improvement. Importantly, faculty similarly benefit from evaluations and feedback provided by trainees to enhance and hone their teaching skills; this process has been shown to improve quality of teaching.1,2,3,4 For this reason, the Accreditation Council for Graduate Medical Education (ACGME) requires that faculty be evaluated by trainees on an annual basis using confidential written evaluations.5

SIGNIFICANCE & INNOVATIONS.

  • The quality of feedback on faculty teaching provided through online evaluation forms is limited, as perceived by faculty and trainees.

  • Concerns regarding confidentiality of feedback and time burden on trainees may limit quality and specificity of feedback.

  • Compiled verbal feedback shows promise as a novel faculty feedback mechanism, resulting in higher‐quality feedback, as assessed by faculty and trainees.

Most clinical training programs, both in rheumatology and in other fields, use an electronic evaluation system containing multiple‐choice questions regarding faculty teaching and a free‐text narrative feedback section. However, there is a paucity of data on the effectiveness of this evaluation system. Electronic evaluations face several important barriers. There may be challenges with maintaining confidentiality or the perception of confidentiality by trainees, especially in small training programs, such as rheumatology. The inherently hierarchical nature of the relationship between faculty and FITs, particularly when combined with the small size of training programs, may deter trainees from providing honest and constructive feedback. Additionally, FITs work with many faculty members, and the time burden associated with the volume of evaluations may make it more challenging for FITs to share meaningful comments. Finally, online evaluations are completed by FITs individually, limiting the opportunity for collaboration and discussion among FITs that could enhance the quality of feedback provided.

To address some barriers to effective FIT evaluations of faculty, we introduced a novel feedback system in which FITs provided verbal feedback that was deidentified, collated, and shared with individual faculty members by the fellowship program director (PD). We assessed the effectiveness, value, and anonymity of traditional online evaluations and the novel feedback system, as perceived by both faculty and FITs.

MATERIALS AND METHODS

Participants

All clinical faculty and FITs of the Massachusetts General Hospital (MGH) Division of Rheumatology were invited to participate, and the verbal feedback system was introduced and evaluated in March through September 2022. Participation was voluntary, and all participants provided written informed consent at the time of enrollment. The study was approved by the Mass General Brigham Institutional Review Board. Study investigators (authors GK, EMM, and MBB) were excluded from participation.

Online evaluations

Our rheumatology fellowship training program administers an annual online survey via New Innovations (https://www.new-innov.com/pub/) in which all FITs provide feedback on each faculty member with whom they have worked (Supplementary Figure 1). The survey includes Likert scales and free‐text questions addressing the settings in which, and the extent to which, the FIT worked with each faculty member, as well as the faculty member's effectiveness in various teaching domains. In accordance with ACGME requirements, faculty receive summarized feedback from the division director annually, though for programs with fewer than six trainees, the online evaluations are batched and shared with faculty members every three years to increase the number of evaluations per summary and to maintain FIT anonymity.

Compiled verbal feedback

A compiled verbal evaluation system was implemented by having FITs meet to collectively provide verbal feedback regarding faculty. In advance of the meeting, the FITs unanimously decided to participate as a group and to have the discussion facilitated by the PD (MBB). The FITs met with the PD for three hours on one morning during which all FITs were relieved of clinical duties. During the meeting, they discussed verbal feedback for each faculty member with whom they had worked, one faculty member at a time, while the PD documented the discussion. The PD used a few topic prompts (Figure 1) and asked clarifying questions but did not contribute to the content of the feedback discussed. The PD compiled the comments for each faculty member and created a summary document of the feedback, which was later shared via email with the faculty member along with an offer to discuss it. For comments that could potentially be attributed to a particular FIT, the PD shared the written wording with the FITs as a group to ensure deidentification to the satisfaction of the FIT who had made them. FITs could approve the sharing of identifiable comments or, if they preferred, have those comments withheld and shared with faculty only after all FITs present during the discussion had graduated from the program. A similar 30‐minute meeting was held during which the FITs provided verbal feedback about the PD to another facilitator of their choosing (EMM), who compiled the feedback in the same manner. The compiled verbal feedback was shared via email with faculty members after they had received their annual online 2021–2022 feedback.

Figure 1. Prompts suggested by the program director during fellow discussion of faculty feedback.

Measures

Surveys were designed to assess the effectiveness and confidentiality of both online faculty evaluations and compiled verbal feedback, as perceived by faculty (Supplementary Figure 2) and FITs (Supplementary Figure 3). These surveys assessed key factors for effective feedback, as previously described.6,7 Surveys to faculty asked whether the evaluations defined strengths and areas for improvement, improved teaching skills, were specific and actionable, and matched the faculty member's own assessment of their strengths and weaknesses. Surveys to FITs asked whether FITs felt able to provide specific and meaningful feedback on strengths and areas for improvement in faculty teaching. Surveys to both FITs and faculty included questions on the perceived anonymity of each evaluation format. The surveys used Likert scales ranging from 1 (“strongly disagree”) to 5 (“strongly agree”), with free‐text boxes at the end of each survey for additional comments. Surveys used unique identifiers to enable pairing of responses within participants.

FITs received surveys regarding both evaluation formats after completing both. Faculty first received their online evaluations, followed by the corresponding survey, and then received their compiled verbal feedback, followed by its associated survey. Surveys and requests for voluntary participation in the study were sent to both FITs and faculty via email by the FIT member of the investigative team (GK). Reminder emails were sent periodically to individuals who had not completed the surveys.

Statistical analysis

Likert score responses were converted into two categorical variables: “favorable,” defined as “agree” or “strongly agree” (or, for negatively phrased questions, “disagree” or “strongly disagree”), and “unfavorable or neutral,” representing all other responses. Because survey responses were paired within responders, comparisons of the proportions of favorable versus unfavorable or neutral responses between the two feedback systems were made using McNemar tests.
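The article does not state which variant of the McNemar test was used, but the reported P values are consistent with the chi‐square form with continuity correction, which operates only on the discordant pairs (responders rated favorable under one feedback system but not the other). A minimal sketch of that computation follows; the function name and the illustrative discordant counts are assumptions for demonstration, not taken from the article:

```python
import math

def mcnemar_cc(b, c):
    """Continuity-corrected McNemar test on discordant pair counts.

    b = pairs favorable under system A only; c = favorable under system B only.
    Concordant pairs do not enter the statistic. Returns the chi-square
    statistic (1 df) and its two-sided p-value.
    """
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of a chi-square with 1 df via the complementary
    # error function: P(X >= stat) = erfc(sqrt(stat / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative: the "actionable" row of Table 1 (2 vs. 8 favorable of 13),
# assuming all 6 discordant pairs moved toward compiled verbal feedback.
stat, p = mcnemar_cc(0, 6)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # p = 0.041, matching Table 1
```

With paired samples this small, an exact binomial McNemar test would be a reasonable alternative; the continuity correction shown here is the approximation whose output matches the published P values.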

RESULTS

Participants

Fifteen faculty members and 4 FITs were eligible for study inclusion. One faculty member opted out, and one was excluded because of partial survey completion. Of the 13 faculty members included in the analysis, 7 (54%) were women. Duration of time on faculty at MGH was ≤3 years (n = 2, 15%), 4 to 7 years (n = 5, 38%), 8 to 11 years (n = 4, 31%), and ≥12 years (n = 2, 15%). All four eligible FITs completed both surveys.

Faculty responses

Faculty survey responses regarding online written evaluations and compiled verbal feedback are shown in Table 1. In all but two questions on online faculty evaluations, a majority (>50%) of faculty reported unfavorable or neutral experiences. The two questions on online evaluations that received >50% favorable responses were regarding the feedback's ability to explain strengths (54% favorable) and inability to identify evaluators (62% favorable). In comparison, the proportion of favorable responses to compiled verbal feedback was numerically higher for all questions and favorable in a majority (>50%) for all questions. Questions regarding explaining faculty strengths, feedback being actionable, feedback being specific, and feedback being consistent with faculty self‐assessment of their strengths reached statistical significance for being more positive with compiled verbal feedback. Eleven (85%) reported inability to identify FIT evaluators using the compiled verbal feedback system.

Table 1. Faculty members’ responses regarding online evaluation and compiled verbal feedback formats*

                                      Online evaluations          Compiled verbal feedback
                                      Favorable   Unfav./neutral  Favorable   Unfav./neutral  P value
Quality of feedback
  Explains strengths                  7 (54)      6 (46)          13 (100)    0 (0)           0.041
  Explains areas for improvement      4 (31)      9 (69)          9 (69)      4 (31)          0.131
  Improves teaching skills            2 (15)      11 (85)         7 (54)      6 (46)          0.131
  Specific                            0 (0)       13 (100)        7 (54)      6 (46)          0.023
  Actionable                          2 (15)      11 (85)         8 (62)      5 (38)          0.041
  Prompts reflection                  6 (46)      7 (54)          10 (77)     3 (23)          0.221
Consistency with self‐assessment
  Strengths                           5 (38)      8 (62)          12 (92)     1 (8)           0.023
  Areas for improvement               3 (23)      10 (77)         7 (54)      6 (46)          0.289
Confidentiality
  Unable to identify evaluator        8 (62)      5 (38)          11 (85)     2 (15)          0.450

Note: P values < 0.05 are statistically significant.

* All values are reported as n (%).

In free‐text comments, faculty noted that they received online faculty evaluations infrequently, did not find them helpful, and rarely received suggestions for improvement; four faculty members specifically raised concerns about how infrequently evaluations are shared. One faculty member who had been on faculty at MGH more than 12 years reported, “I cannot say I've ever found [written evaluations] to be helpful.” In contrast, compiled verbal feedback was met with more positive responses. One faculty member reported, “Appreciated [fellows’] feedback comments. It is useful to receive this summary to understand [both] what fellows appreciate and dislike when precepting.” Nevertheless, some faculty members commented that additional suggested areas for improvement would have been helpful. The overall number of free‐text responses submitted was low, particularly for compiled verbal feedback.

FIT responses

FITs’ survey responses are summarized in Table 2. Although most FITs reported that feedback through online evaluations could be specific and highlight strengths, only one (25%) reported these were perceived to improve faculty teaching and be reflective and meaningful. All FITs responded unfavorably or neutrally to the two questions on confidentiality regarding the online evaluation process. In addition, three (75%) reported that online evaluations are too time consuming. In contrast, all FITs (four, 100%) reported favorable experiences with compiled verbal feedback for every question except one, concerns about confidentiality. One (25%) FIT responded favorably in this question, whereas two (50%) responded unfavorably, and one (25%) responded neutrally. All FITs (100%) reported they did not have concerns regarding future interactions with faculty after compiled verbal feedback, in contrast to zero (0%) who reported this for online evaluations. Furthermore, all FITs (100%) reported that compiled verbal feedback allowed FITs to be interactive with their colleagues and improved their ability to provide valuable feedback. Individual responses by FITs to the two questions on confidentiality are shown in Supplementary Table 1.

Table 2. FITs’ responses regarding online evaluation and compiled verbal feedback systems*

                                               Online evaluations          Compiled verbal feedback
                                               Favorable   Unfav./neutral  Favorable   Unfav./neutral  P value
Quality of feedback
  Specific                                     3 (75)      1 (25)          4 (100)     0 (0)           1.000
  Provides positive feedback                   4 (100)     0 (0)           4 (100)     0 (0)           N/A
  Provides feedback on areas for improvement   2 (50)      2 (50)          4 (100)     0 (0)           0.480
  Reflective and meaningful                    1 (25)      3 (75)          4 (100)     0 (0)           0.248
  Improves faculty teaching                    1 (25)      3 (75)          4 (100)     0 (0)           0.248
Confidentiality
  Concerns about confidentiality               0 (0)       4 (100)         1 (25)      3 (75)          1.000
  Concerns about future faculty interactions   0 (0)       4 (100)         4 (100)     0 (0)           0.134

* All values are reported as n (%). FIT, fellow‐in‐training; N/A, not applicable.

DISCUSSION

In this single‐center interventional study, we found that faculty and FITs reported largely unfavorable experiences with traditional online evaluations. In contrast, a novel method of providing compiled verbal feedback, given by the FITs as a group in a deidentified way and distributed to individual faculty members by the PD, was rated more favorably by both FITs and faculty across all assessed domains. This feedback method shows great promise for faculty development in rheumatology fellowship training and is potentially transferrable to other specialty and subspecialty training programs.

Although many efforts in faculty development relate to strengthening faculty abilities to provide feedback to learners, there has been little focus on faculty development through obtaining feedback from learners.8,9,10 Wisener et al described the goals of learners' feedback to faculty, as well as the complexities of and barriers to delivering effective feedback.11 Barriers they identified included the burden of surveys, the importance of protecting self‐image, a desire to provide helpful and meaningful feedback to faculty, and the wish to avoid hurting faculty members' feelings.

Consistent with the study by Wisener et al,11 our study identified two likely contributing factors to the overall perception by faculty and FITs that online evaluations were a poor mechanism for faculty feedback. First, FITs reported substantial time burden of online evaluations, likely allowing for less reflection and thoughtfulness in completing the evaluations. Second, many FITs expressed concerns regarding confidentiality with online evaluations. The degree of concern regarding confidentiality in our data was striking, despite measures taken to preserve it, such as compiling feedback over three years before providing it to faculty. This concern likely led to an increased hesitancy to provide feedback on areas for improvement in faculty teaching. Although concerns about confidentiality were expressed by FITs with compiled verbal feedback, concerns regarding future interactions with faculty were much lower with this form of feedback.

The small‐group approach to providing verbal feedback, within dedicated time without other clinical responsibilities, was chosen to allow for a safe and collaborative environment, thus encouraging highly meaningful feedback. Potentially identifiable comments made by FITs were shared with faculty only if FITs specifically provided permission. All other identifiable comments were reserved by the PD to be shared with faculty once all FITs present for the discussion graduated from the program. These changes were reflected in the universally improved responses by faculty and FITs regarding quality of feedback using compiled verbal feedback compared with online evaluations. With this approach, all FITs expressed that they did not have concerns about future interactions with evaluated faculty. Responses to the question about concerns regarding confidentiality with compiled verbal feedback were highly heterogeneous; future studies with larger sample sizes should aim to clarify whether this form of feedback improves FITs’ confidentiality concerns.

Many faculty members noted that the number of online evaluations they received was low, likely reflecting the fact that evaluations are compiled and shared with faculty every three years, in accordance with our institutional Graduate Medical Education policies. This may be particularly limiting to the development of junior faculty, who could likely benefit from timely feedback as they develop their teaching style. Using compiled verbal feedback, all faculty were able to receive feedback in the same year because FITs’ comments were compiled in a manner that was believed (by the FITs and PD alike) to be adequately deidentified. Future iterations of this feedback mechanism could provide feedback at different times within the academic year, thereby fostering actionable feedback and benefiting the current group of trainees.

Although our findings demonstrated universal improvements with compiled verbal feedback compared to online evaluations, several faculty members responded neutrally or unfavorably to some questions related to quality of feedback, particularly with respect to actionable feedback and recommended areas for improvement. There are several possible reasons for this. Some faculty have little precepting contact with FITs; the feedback provided to them may have been less effective because of this reduced exposure, biasing our results toward the null. The meeting to discuss faculty feedback took place at the end of the academic year, and FITs might not have had recent enough experience working with certain faculty members to provide high‐quality feedback. In addition, we did not have data on the time spent completing online faculty evaluations, so this could not be compared directly between the two evaluation methods. Similarly, free‐text responses were few in number, so conclusions from them are limited. Finally, in this intervention, the PD did not actively query the FITs in a question‐and‐answer format; rather, she simply framed the discussion with several prompts to encourage discussion of each faculty member's precepting characteristics. Future iterations could include standardized guiding queries to elicit actionable feedback on faculty members' effectiveness in teaching. Furthermore, faculty could request specific topics for discussion, making feedback more specific and individualized.

Our intervention is likely generalizable to other rheumatology training programs and to other specialties. Furthermore, this feedback system can be tailored to meet the needs of the trainees and faculty based on program factors. For example, in our design of the compiled verbal feedback system, the FITs were involved in deciding the format of and facilitator for the feedback discussion. Use of this system in other programs should both prioritize FIT confidentiality and foster openness for discussion; the precise process for verbal feedback may need adjustment based on trainees’ preferences and levels of comfort with each other and the facilitator of the discussion. In addition, future studies should aim to address whether compiled verbal feedback is most effective when feedback is provided in a group setting or individually, and this could vary depending on the number of individuals and group dynamics involved.

Our study has several limitations. First, it is a single‐center study; therefore, findings at other institutions may differ. Nevertheless, our feedback mechanisms and surveys included elements that were neither specific to our institution nor specific to our specialty. Second, although faculty completed surveys regarding standard online evaluations prior to receiving their compiled verbal feedback, they were aware that a novel feedback mechanism was being implemented prior to completing these surveys, and this knowledge may have introduced bias. Third, our sample size was small. Though representative of many fellowship program sizes, the sample size limited our power to detect statistical differences, especially in FIT responses. Similarly, we did not perform a detailed analysis of the content of the feedback itself; future studies with larger sample sizes can incorporate such an analysis to identify strategies to optimize this form of feedback. Strengths of the study include the high survey completion rate by both faculty and FITs, a relatively large faculty sample size, and generalizability to other training programs.

Our study is the first, to our knowledge, to assess the effectiveness and confidentiality of online evaluations completed by rheumatology FITs as a feedback mechanism for teaching faculty and to report the experience with a novel compiled verbal feedback system. Future studies of this and other feedback interventions are needed to identify the optimal mechanism by which FITs can provide actionable and timely feedback on clinical teaching in training programs of different sizes and to assess the added value to the learning environment.

AUTHOR CONTRIBUTIONS

All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be published. Dr. Katz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study conception and design

Katz, Miloslavsky, Fernandes, Bolster.

Acquisition of data

Katz, Miloslavsky, Fernandes, Bolster.

Analysis and interpretation of data

Katz, Miloslavsky, Bolster.

Supporting information

Disclosure form:

ACR2-6-139-s003.pdf (695.6KB, pdf)

Supplementary Fig. 1. Traditional annual online evaluation form completed by FITs on clinical teaching for each faculty member.

ACR2-6-139-s005.png (60.3KB, png)

Supplementary Fig. 2. Surveys completed by faculty on online evaluations and compiled verbal feedback.

ACR2-6-139-s001.png (608.9KB, png)

Supplementary Fig. 3. Surveys completed by FITs on online evaluations and compiled verbal feedback.

ACR2-6-139-s002.png (569.5KB, png)

Supplementary Table 1. Individual responses to questions regarding confidentiality by FITs for online evaluations and compiled verbal feedback. Change represents the response for online evaluations minus that for compiled verbal feedback. In both questions, higher responses represent more unfavorable responses.

ACR2-6-139-s004.docx (22.6KB, docx)

Guy Katz, MD, Eli M. Miloslavsky, MD, Ana D. Fernandes, MS, Marcy B. Bolster, MD: Massachusetts General Hospital, Boston.

Dr. Katz's work was supported via the NIH T32 training grant AR‐007258. Dr. Bolster's work was supported by the Rheumatology Research Foundation (Scientist Development Award, ID 998547).

Drs. Katz and Miloslavsky contributed equally to this work.

Additional supplementary information cited in this article can be found online in the Supporting Information section (http://onlinelibrary.wiley.com/doi/10.1002/acr2.11638).

Author disclosures are available at https://onlinelibrary.wiley.com/doi/10.1002/acr2.11638.

REFERENCES

1. Baker K. Clinical teaching improves with resident evaluation and feedback. Anesthesiology 2010;113:693–703.
2. Maker VK, Curtis KD, Donnelly MB. Faculty evaluations: diagnostic and therapeutic. Curr Surg 2004;61:597–601.
3. Fluit CRMG, Feskens R, Bolhuis S, et al. Repeated evaluations of the quality of clinical teaching by residents. Perspect Med Educ 2013;2:87–94.
4. Cohan RH, Dunnick NR, Blane CE, et al. Improvement of faculty teaching performance: efficacy of resident evaluations. Acad Radiol 1996;3:63–67.
5. Accreditation Council for Graduate Medical Education. ACGME common program requirements (fellowship). 2022. Accessed December 3, 2022. https://www.acgme.org/globalassets/pfassets/programrequirements/cprfellowship_2023v2.pdf
6. Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach 2012;34:787–791.
7. Ende J. Feedback in clinical medical education. JAMA 1983;250:777–781.
8. Akins R. Narrative feedback in faculty development. J Reg Med Campuses 2019;2(2). doi:10.24926/jrmc.v2i2.1220
9. Kalynych C, Edwards L, West D, et al. Tuesday's teaching tips–evaluation and feedback: a spaced education strategy for faculty development. MedEdPORTAL 2022;18:11281.
10. Mitchell JD, Holak EJ, Tran HN, et al. Are we closing the gap in faculty development needs for feedback training? J Clin Anesth 2013;25:560–564.
11. Wisener K, Hart K, Driessen E, et al. Upward feedback: exploring learner perspectives on giving feedback to their teachers. Perspect Med Educ 2023;2(1):99–108. doi:10.5334/pme.818


Articles from ACR Open Rheumatology are provided here courtesy of Wiley
