Abstract
Background
Given that training is integral to providing constructive peer feedback, we examined the impact of a regularly reinforced, structured peer assessment method on student-reported feedback abilities throughout a two-year preclinical Communication Skills course.
Methods
Three consecutive 32-student medical school classes were introduced to the Observation-Reaction-Feedback method for providing verbal assessment during Year 1 Communication Skills orientation. In biweekly small-group sessions, students received worksheets reiterating the method and practiced giving verbal feedback to peers. Periodic questionnaires evaluated student perceptions of feedback delivery and the Observation-Reaction-Feedback method.
Results
Biweekly reinforcement of the Observation-Reaction-Feedback method encouraged its uptake, which correlated with reports of more constructive, specific feedback. Compared to non-users, students who used the method noted greater improvement in comfort with assessing peers in Year 1 and continued growth of feedback abilities in Year 2. Comfort with providing modifying feedback and verbal feedback increased over the two-year course, while comfort with providing reinforcing feedback and written feedback remained similarly high. Concurrently, student preference for feedback anonymity decreased.
Conclusions
Regular reinforcement of a peer assessment framework can increase student usage of the method, which promotes the expansion of self-reported peer feedback skills over time. These findings support investigation of analogous strategies in other medical education settings.
Supplementary Information
The online version contains supplementary material available at 10.1007/s40670-021-01242-w.
Keywords: Critique model, Collaborative learning, Peer feedback, Verbal appraisal
Introduction
Peer assessment has gained increasing traction in medical education. In addition to providing recommendations for peer improvement, the use of peer feedback can encourage the development of professional behavior, enhance critical thinking skills, and foster students’ responsibility for their own learning [1, 2]. Although physicians are expected to contribute to and undergo regular performance reviews as well as self-identify strategies for lifelong learning, they rarely receive formal training in how to deliver constructive feedback [3]. Early exposure to peer assessment during medical training may better prepare students for their professional careers by promoting reflective practice [4].
Most studies of peer feedback in medical education have focused on comparing peer assessments to faculty evaluations to assess validity, with many but not all studies indicating that peer feedback may yield accurate assessment of specific behaviors [2]. Several groups also report that medical students value the learning opportunity of giving peer feedback [5–7]. Nonetheless, little is known about how student views on assessing their peers change over successive years of medical school. Formal instruction in how to articulate feedback effectively is another important yet often overlooked aspect of incorporating peer assessment into medical school curricula. Indeed, a recent meta-analysis revealed assessor training to be the most critical factor affecting the positive impact of peer assessment on promoting student learning [8]. Poorly delivered feedback, particularly modifying (negative) feedback, may harm rather than help students by causing demotivation and performance deterioration [6]. However, in a systematic review of peer feedback utilization in undergraduate medical education, only 35% of analyzed studies explicitly stated that students underwent training in how to deliver appropriate, actionable feedback [2]. The standard practice in these studies has been to provide a one-time introduction to structured approaches for offering assessment, which students have perceived as a useful exercise [5, 6, 9]. Yet even after attending a training session, many students noted recurrent anxiety about giving modifying feedback to peers, suggesting that additional training may be required to boost student confidence, especially in providing negative feedback [5, 6].
We sought to address these knowledge gaps using data from a quality improvement project designed to enhance our institution’s Communication Skills (CS) curriculum. Notably, the first-year preclinical CS course is the first time many students are asked to provide real-time, verbal feedback to peers, a commonly encountered scenario in professional practice. Using evidence-based guidelines for structuring feedback [10, 11], we developed and implemented a formal educational method for teaching students to deliver peer feedback. Our Observation-Reaction-Feedback (ORF) method synthesizes current best practices for peer assessment, including Pendleton’s positive critique model [12] and Vickery & Lake’s recommendations, which assert that good feedback requires adequate time, direct learner observation, specific goals and outcomes, and delivery of both reinforcing and modifying criticism [13]. We aimed to 1) determine student-reported usefulness and effectiveness of a regularly reinforced, structured framework for peer assessment (ORF) during CS small-group sessions; and 2) assess longitudinal changes in student comfort with giving peer feedback over the same period.
Materials and Methods
Project Design
Following a pilot period to test feasibility of ORF implementation, longitudinal questionnaire follow-up assessed the effectiveness of ORF as a method for facilitating peer feedback and monitored changes in student confidence with giving feedback over time.
Participants and Setting
This project was conducted with three consecutive classes of first- and second-year medical students attending the Cleveland Clinic Lerner College of Medicine (CCLCM) of Case Western Reserve University between 2017 and 2020. CCLCM utilizes a small-group, problem-based learning curriculum with 32 students per class. Instead of traditional graded assessment, student performance is evaluated using a competency-based portfolio system that solicits and incorporates feedback from faculty, peers, and students themselves [1, 14]. Students overcome performance deficits by developing learning plans informed by written or verbal feedback from others. Consequently, high-quality peer feedback is exceedingly valuable for students at CCLCM, and students are frequently required to deliver feedback to their peers.
The CS course provides a setting for medical students to learn and practice effective patient communication, beginning with basic skills in the first year (e.g. acquiring medical histories) and progressing to advanced skills in the second year (e.g. sharing bad news). In every CS session, each student is observed during a short standardized patient encounter, after which the student shares a verbal self-reflection on performance before receiving verbal assessments from peers and faculty.
Intervention
The ORF method was developed following a needs assessment of 32 students, which revealed the feedback provision process as an area for improvement. This method provides a 3-step structured approach to giving verbal peer feedback: observation, reflexive reaction, and feedback (Table 1). The observation step consists of recording verbal and nonverbal behaviors during the standardized patient encounter. The reflexive reaction step involves noting how the observer initially felt while watching their peers’ interactions. The feedback step comprises formulating and delivering assessments by combining the previous steps to describe whether and why behaviors were effective. ORF was introduced to all first-year students with visual aids and demonstration at the CS course orientation. Students then received opportunities to practice providing verbal feedback every other week during CS small-group sessions.
Table 1.
The ORF method for verbal feedback
| Observation | Reflexive Reaction | Feedback |
|---|---|---|
| Document verbal (e.g. phrasing, jargon) and nonverbal (e.g. body language) actions. How was the learning goal addressed? | What stands out? How did you feel while watching the interaction? | Describe specific behavior (e.g. quote or action) and explain why it was effective or less effective. Provide specific suggestions for improvement. How did this interview build upon previous interviews? |
| *Example: Area of Strength.* Student used summary statement after obtaining HPI: ‘You’ve had a cough for 2 weeks. Lying down makes it worse and nothing helps, correct?’ | You skillfully used summary statements to confirm understanding | By summarizing key components of the HPI, you demonstrated reflective listening and allowed the patient to correct errors |
| *Example: Targeted Area for Improvement.* Student told patient that they ‘really need to lose weight.’ | Your language was patronizing, and the patient became defensive, stating, ‘I told you I tried!’ | I perceived your comment, ‘you really need to lose weight,’ as being more judgmental than collaborative. Perhaps you could apply summary statements as you did in previous sessions and try saying, ‘It sounds like you tried several times to lose weight. What do you think is preventing you from being more successful?’ |
HPI, history of present illness
In the pilot cohort of first-year medical students, 20 of 32 students were randomly selected to receive regular ORF reinforcement at each CS session in the form of 1-page worksheets reviewing the method and providing space for notes on observed standardized patient encounters (Supplementary Appendix 1). The other 12 students did not receive worksheets after the initial training (no reinforcement). Based on positive trends on mid-year assessments, the reinforcing ORF worksheets were provided to the entire first cohort during the latter half of the year, and to all students in the two subsequent cohorts. Usage of the ORF worksheets was encouraged but optional.
Data Collection
Periodic questionnaires were administered to assess student comfort with providing peer feedback; all three student classes received the same set of questionnaires (Fig. 1, Supplementary Appendix 2). In the baseline questionnaire, quantitative items on a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) were adapted from [6] and gauged prior experience and comfort with feedback provision, while qualitative items assessed preferences regarding feedback anonymity and topics on which students desired additional instruction. Subsequent questionnaires examined student perceptions of the ORF method, with usefulness and usage assessed quantitatively and descriptions of the method’s helpful and hindering aspects assessed qualitatively. Baseline questionnaire items regarding comfort with feedback provision and those eliciting topics for which students requested further instruction were replicated in subsequent questionnaires.
Fig. 1.
Project timeline. The CS course occurs during the first two years of medical school at CCLCM. Three 32-student classes participated in this project as a part of the course. Arrows indicate questionnaire time points. Y1S1 corresponds to the first (baseline) questionnaire during Year 1, and subsequent arrows are labelled similarly. At the time of data analysis, Class 3 had just concluded the Year 1 CS course and thus, Year 2 data for this class were not included in the study
Data Analysis
Qualitative analyses
Responses to individual qualitative questions were compiled across class cohorts for each questionnaire. Using a conventional content analysis approach, text data were coded into specific categories and quantified using descriptive statistics [15]. Two authors with qualitative data analysis experience (B.B. and P.D.) independently identified codes within the text responses to each question, then compared codes and resolved all discrepancies through discussion to consensus.
Statistical Analyses
All statistical analyses were performed using GraphPad Prism 8 (GraphPad Software Inc.). All comparisons were two-sided with statistical significance defined as α = 0.05. Categorical data were summarized as frequency counts and percentages, and continuous data as means and standard deviations. For dichotomized questionnaire analyses, Likert scale responses of 4 and 5 (Agree, Strongly Agree) were grouped together, as were responses of 1‒3 (Strongly Disagree, Disagree, Neutral) [6].
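As a small illustration, the dichotomized grouping can be sketched in a few lines of Python (the response values below are hypothetical, not study data):

```python
# Hypothetical 5-point Likert responses (1 = Strongly Disagree, 5 = Strongly
# Agree) for a single questionnaire item; illustrative values, not study data.
responses = [5, 4, 3, 2, 4, 5, 1, 4, 3, 5]

# Dichotomize: responses of 4-5 (Agree, Strongly Agree) form one group;
# responses of 1-3 (Strongly Disagree, Disagree, Neutral) form the other.
agreed = sum(1 for r in responses if r >= 4)
total = len(responses)
print(f"{agreed}/{total} agreed or strongly agreed ({agreed / total:.0%})")
```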
McNemar’s test was used to compare changes in proportions of students who were comfortable giving verbal feedback and proportions of students who reported possession of adequate skills to deliver feedback during the pilot period. Fisher’s exact test was used to compare preference for anonymous feedback and to compare questionnaire results between students who received versus did not receive ORF reinforcement and between students who had versus did not have prior verbal feedback experience.
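The analyses above were run in GraphPad Prism; as a hedged sketch, the same two tests could be reproduced in Python with SciPy and statsmodels, using hypothetical counts rather than the study's raw data:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

# McNemar's test on hypothetical paired responses: rows = comfortable at
# baseline (yes/no), columns = comfortable at mid-year (yes/no).
paired_table = [[15, 1],   # comfortable at baseline: still yes / now no
                [14, 2]]   # not comfortable at baseline: now yes / still no
mcnemar_result = mcnemar(paired_table, exact=True)
print(f"McNemar p = {mcnemar_result.pvalue:.4f}")

# Fisher's exact test on a hypothetical 2x2 table: reinforcement group (rows)
# versus finding ORF helpful (columns).
group_table = [[18, 2],    # reinforced: helpful / not helpful
               [7, 5]]     # not reinforced: helpful / not helpful
odds_ratio, fisher_p = fisher_exact(group_table)
print(f"Fisher's exact p = {fisher_p:.4f}")
```

Only the discordant cells (changed baseline-to-mid-year responses) drive McNemar's statistic, which is why it suits paired before/after proportions.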
Longitudinal changes in questionnaire responses were analyzed with linear mixed effects models. A mixed model was chosen because repeated-measures ANOVA cannot handle missing questionnaire values. Mixed model results can be interpreted similarly to repeated-measures ANOVA when missing values can be attributed to chance, which is the most likely case for this project. Planned post-hoc comparisons between the beginning of Year 1 and end of Year 2 were analyzed with Fisher’s LSD tests. Baseline comfort with giving positive versus negative feedback was compared using a paired t test.
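As a hedged illustration of this modeling choice (the study used GraphPad Prism, not Python), a random-intercept mixed model that tolerates missing observations might look like the following, on fully synthetic data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: 30 hypothetical students rated at 4 time
# points, with ~10% of responses missing at random (as with skipped
# questionnaires); none of these numbers are study data.
rng = np.random.default_rng(0)
rows = []
for student in range(30):
    baseline = rng.normal(3.2, 0.8)          # student-specific starting level
    for t in range(4):
        if rng.random() < 0.10:              # response missing at random
            continue
        score = baseline + 0.3 * t + rng.normal(0, 0.4)
        rows.append({"student": student, "time": t, "score": score})
df = pd.DataFrame(rows)

# Random intercept per student; fixed effect of time point. Unlike
# repeated-measures ANOVA, the model uses every non-missing row.
model = smf.mixedlm("score ~ time", df, groups=df["student"])
result = model.fit()
print(result.params["time"])  # estimated per-time-point change in comfort
```

Because each student contributes a random intercept, incomplete response sequences still inform the fixed-effect estimate rather than forcing listwise deletion.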
For ORF users versus non-users, changes in comfort giving verbal feedback were calculated as the differences in matched individual questionnaire responses within the same year; comparison of these differences between the two groups was performed using Welch’s t test. Chi-square tests were used to compare questionnaire results at single time points, and linear mixed models were used to compare questionnaire results longitudinally (at all available time points).
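For instance, with hypothetical within-year score changes (end-of-year minus start-of-year Likert ratings, not study data), Welch's t test in SciPy would be:

```python
from scipy.stats import ttest_ind

# Hypothetical within-year changes in Likert comfort scores for each student.
orf_users = [1, 0, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1]
non_users = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]

# Welch's t test: equal_var=False drops the equal-variance assumption,
# appropriate when the two groups differ in size and spread.
t_stat, p_value = ttest_ind(orf_users, non_users, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```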
Results
Pilot Period
In the pilot cohort, 32 (100%) and 31 (96.9%) first-year medical students responded to the baseline and mid-year questionnaires, respectively. Sixteen students (50%) reported feeling comfortable providing verbal feedback to peers at baseline. Following implementation of the ORF method, this number increased significantly: all 31 respondents (100%) reported comfort at the mid-year assessment (p = 0.0003). The number of students reporting that they possessed adequate skills to deliver peer feedback also increased from 22 (68.8%) at baseline to 27 (87.1%) at mid-year (p = 0.07). During the pilot period, the 20 students who received regular ORF reinforcement were more likely to find ORF helpful (94.7% versus 58.3%, p = 0.02) and to use the method (79% versus 41.7%, p = 0.06) than the 12 control students. ORF reinforcement was additionally associated with higher levels of confidence in judging peer performance and possessing adequate skills to deliver peer feedback at the mid-year questionnaire (95% with reinforcement versus 75% without reinforcement, p = 0.27). Although the latter finding was not statistically significant, the pilot period was intended to be exploratory, and we considered the absolute difference of 20% to be of sufficient magnitude to potentially improve educational practice. Based on these results, we provided reinforcing ORF worksheets to all students during subsequent CS sessions.
Patterns of Use and Effectiveness of the ORF Method
Since ORF usage was optional, we evaluated whether having verbal feedback experience prior to medical school affected student perceptions or adoption of the method. At baseline, 78% (49/63) of questionnaire respondents had provided feedback to peers in a professional or academic setting prior to medical school. Of those with previous feedback experience, 48% had given both written and verbal feedback, 40% had given written feedback only, and 12% had given verbal feedback only. Regardless of prior verbal feedback experience, similar proportions of students found ORF to be helpful and reported using the method, demonstrating its broad utility even for more experienced feedback providers; these proportions represented a majority of first-year and minority of second-year students (Table 2). The likelihood of applying ORF in the future was also consistent between first-year students with and without previous feedback experience. In contrast, second-year students without prior feedback experience reported being more likely to use ORF in the future than students with prior experience (p = 0.05 at the mid-year assessment).
Table 2.
Effect of prior verbal peer feedback experience on ORF-related student views and behaviors over time
| Views/behaviors related to ORF | Y1S2 | Y1S3 | Y1S4 | Y2S2 | Y2S3 | Y2S4 |
|---|---|---|---|---|---|---|
| (% of students with prior experience / % of students with no prior experience) | ||||||
| Found ORF helpful | 81.5% / 83.9% | 23.1% / 40% | ||||
| Used the ORF method | 77.8% / 58.1% | 65.5% / 67.7% | 65.4% / 61.3% | 30.8% / 46.7% | 36.4% / 50% | 33.3% / 50% |
| Likely to use ORF in the future | 60.7% / 63.3% | 65.4% / 67.7% | 27.3% / 68.8%* | 44.4% / 60% | ||
Blank boxes represent time points with questionnaires that did not include the question of interest
*p = 0.05 for students with versus without prior experience by Fisher’s exact test
We then investigated the impact of ORF usage on changes in comfort with delivering peer feedback over the course of each year. Students who used ORF in Year 1 trended towards greater improvement in comfort giving verbal peer feedback at the end of the year than those who did not use the method (0.53 ± 0.86 incremental increase in Likert score for ORF users versus 0.17 ± 0.51 for non-users, p = 0.06); this pattern was not observed for students in Year 2 (p = 0.27). However, students who used ORF during Year 2 were significantly more likely to endorse continued growth of their ability to provide both verbal and written feedback compared to those who did not use the method (Table 3).
Table 3.
Effect of ORF usage on self-reported growth of written and verbal feedback abilities during the second year
| Statement | Time points | Average difference in ratings by ORF users versus non-users^a (95% confidence interval) | p-value |
|---|---|---|---|
| I perceive no difference in my ability to give meaningful written peer feedback between now and the end of Year 1 | Y2S1 and Y2S2 | − 0.653 (− 1.152, − 0.154) | 0.0104 |
| I perceive no difference in my ability to give meaningful verbal peer feedback between now and the end of Year 1 | Y2S1 and Y2S2 | − 0.670 (− 1.149, − 0.192) | 0.0061 |
| Compared to the beginning of the year, I feel more comfortable providing written peer feedback | Y2S3 and Y2S4 | 0.621 (0.281, 0.960) | 0.0003 |
| Compared to the beginning of the year, I feel more comfortable providing verbal peer feedback | Y2S3 and Y2S4 | 0.442 (0.093, 0.791) | 0.0130 |
^a Mean absolute difference in 5-point Likert scale ratings by ORF users versus non-users in response to each statement, as computed by mixed model analysis
Longitudinal effects of ORF usage on specific aspects of peer feedback at each questionnaire time point are shown in Table 4. Interestingly, students who chose to use ORF were initially less comfortable giving verbal feedback (77.5% of ORF users comfortable versus 100% of non-users, p = 0.04). This gap closed by the middle of Year 1, suggesting that usage of ORF helped to quickly bring less confident verbal feedback providers up to the level of their peers. Similar to observations during the pilot period, at early time points, using ORF was associated with a nearly 20% absolute increase in the proportions of first-year students who reported feeling confident judging peer performance as well as those who reported having adequate skills to deliver peer feedback. During Year 2, effects of ORF usage were more ambiguous. On the one hand, second-year students who used versus did not use the method had comparably high levels of comfort giving verbal peer feedback (> 90% comfortable), and ORF users appeared to be more comfortable giving written peer feedback throughout the latter half of the year. On the other hand, significantly more students who used ORF endorsed the need for additional training in feedback delivery at the end of Year 2 (39.3% of ORF users versus 0% of non-users, p = 0.0007), despite the similarly high levels of reported comfort with providing written, verbal, positive, and negative peer feedback in both groups of students.
Table 4.
Longitudinal responses to statements about peer feedback from ORF users compared to non-users
| Statement | Y1S2 | Y1S3 | Y1S4 | Y2S2 | Y2S3 | Y2S4 |
|---|---|---|---|---|---|---|
| (% of ORF users who agreed or strongly agreed / % of non-users who agreed or strongly agreed) | ||||||
| I feel comfortable providing verbal feedback to peers | 77.5% / 100%* | 93.3% / 84.4% | 94.9% / 97.1% | 97.0% / 96.6% | 100% / 91.7% | 100% / 95.7% |
| I feel confident making a judgment on my peer’s performance | 60.0% / 42.9% | 80.0% / 71.9% | 79.7% / 82.4% | 93.9% / 96.6% | 97.1% / 95.8% | 96.4% / 95.7% |
| I possess adequate skills to deliver feedback to peers | 75.0% / 57.1% | 85.0% / 75.0% | 93.2% / 100% | 93.9% / 96.6% | 97.1% / 95.8% | 96.5% / 100% |
| I need training in how to deliver feedback to peers | 37.5% / 38.1% | 15% / 9.4% | 13.6% / 14.7% | 24.2% / 10.3% | 23.5% / 12.5% | 39.3% / 0%* |
| I feel confident providing positive feedback to my peers | 100% / 95.2% | 96.7% / 96.9% | 96.6% / 97.1% | 93.9% / 96.6% | 100% / 95.8% | 100% / 95.7% |
| I feel confident providing negative feedback to my peers | 77.5% / 66.7% | 83.3% / 75% | 82.8% / 88.2% | 84.8% / 89.7% | 85.3% / 95.8% | 92.9% / 82.6% |
| I feel comfortable providing written feedback to peers | 92.5% / 100% | 93.3% / 84.4% | 96.6% / 100% | 93.9% / 96.6% | 100% / 87.5%* | 100% / 91.3% |
*p < 0.05 for ORF users vs non-users by chi-square test
In open-ended qualitative comments, students consistently identified the clear structure for feedback provided by ORF as the method’s greatest strength (18/45, 40% at Year 1 start; 17/44, 39% at Year 2 end). Students also appreciated how ORF “forces you to separate your initial gut reaction from logical, well thought out, less emotional response” (student, Year 1 start; with similar sentiments expressed by 16% of respondents at the same time point). Furthermore, students commented that ORF improves feedback phrasing (11% at Year 1 start); in the words of one respondent, “it takes out the ‘I don’t want to come off the wrong way’ in giving feedback.” Students described feedback given through the ORF method as being more constructive (18% at Year 1 start) and specific (5/57, 9% at Year 1 middle), facilitated by the detailed worksheet that provided space for note-taking (18% at Year 2 start). As one student stated at the end of Year 2, “I love the fact that [ORF] allows you to give evidence-based and constructive feedback to peers.”
Nevertheless, some of these strengths were also perceived as weaknesses: while ORF provided a helpful structure, a few students felt it could be too rigid (5/35, 14% at Year 1 start; 5/47, 11% at Year 2 end). Students expressed concern about time constraints (29% at Year 1 start; 30% at Year 2 end); in the CS environment, it was sometimes “too difficult… to think about [the] feedback breakdown, fill out the form, and pay attention to [a peer’s] standardized patient interview all at the same time” (student, Year 2 start). ORF provided a “useful framework [when] not comfortable naturally giving feedback” (student, Year 2 end); for some, “it was really helpful early on, but [became] less necessary [over time]” (student, Year 2 start).
Longitudinal Changes in Comfort with Providing Feedback
Proficiency and confidence with delivering feedback are expected to improve naturally as students engage in more practice. While it is important to understand the magnitude of such improvements in order to contextualize the effect of any interventions, data on longitudinal changes in student comfort with providing feedback remain scarce. Thus, we examined trends in student-reported feedback comfort and preferences over the two preclinical years regardless of ORF utilization. Mean questionnaire response rates during the longitudinal monitoring period were 94 ± 6%.
Over the two-year CS course and consistent with trends reported in Table 4, students expressed increasing confidence with judging peer performance (Likert score 3.2 ± 0.8 at Year 1 start versus 4.5 ± 0.6 at Year 2 end; fixed effect: F(5.064, 293) = 24.09, p < 0.0001). At baseline, students endorsed greater comfort with providing positive versus negative feedback (Likert score 4.6 ± 0.6 versus 3.1 ± 0.8, p < 0.0001). While students became more comfortable providing negative feedback to peers during the two years (Likert score 3.1 ± 0.8 versus 4.3 ± 0.7, p < 0.0001; fixed effect: F(4.899, 283.5) = 21.87, p < 0.0001), comfort with positive feedback did not change (Likert score 4.6 ± 0.6 versus 4.5 ± 0.5, p = 0.95). Students likewise endorsed growing comfort providing verbal feedback to peers (Likert score 3.3 ± 1.0 at Year 1 start versus 4.2 ± 0.9 at Year 2 end, p = 0.0003; fixed effect: F(4.201, 244.3) = 20.76, p < 0.0001), though comfort with written feedback remained the same (Likert score 4.0 ± 0.8 versus 4.0 ± 0.9, p = 0.97). Concurrently, students reported decreasing desire for training in how to deliver feedback to peers (Likert score 3.7 ± 0.9 at Year 1 start versus 2.1 ± 1.4 at Year 2 end, p < 0.0001; fixed effect: F(4.646, 268.1) = 19.18, p < 0.0001).
Qualitative analysis of requests for further instruction in feedback skills mirrored students’ increasing confidence over time. At the start of Year 1, only 2% of students (1/54) commented that they did not need additional instruction in peer feedback provision, which increased to 94% (34/36) by the end of Year 2. Requests for further instruction at the start of Year 1 centered on providing modifying feedback (25/54, 46%). This proportion decreased throughout the two-year follow-up period, to 22% (11/49) at the start of Year 2 and 0% (0/36) at the end of Year 2. Reflecting on their overall experience at the conclusion of the course, one student commented, “I am very happy with the instruction I have received in providing feedback to my peers over Years 1 and 2. This has really allowed me to converse more effectively with my peers and learn from our interactions.”
Student attitudes toward written feedback anonymity also evolved over time. The proportion of students who preferred written feedback to be anonymous decreased from 44% in Year 1 to 18% in Year 2 (p = 0.003). Of the 27/57 respondents who preferred anonymous feedback at baseline and provided qualitative comments, 59% noted that anonymous feedback encouraged more candid assessments. A minority favored anonymous feedback due to discomfort delivering peer feedback; specifically, 19% cited discomfort providing modifying feedback and 7% cited general discomfort with feedback phrasing. In Year 1, 18% of students stated that anonymous feedback protects peer relationships (10/57), which decreased to 4% (2/54) in Year 2. By contrast, the majority of first-year students who preferred identifiable written feedback appreciated the opportunity to follow up with peers for clarification (63%), and a minority noted they could better understand intentions underlying peer feedback given the increased context (17%). Among second-year students who favored identifiable feedback, additional themes included an increased sense of responsibility and accountability (8/44, 18%). Interestingly, one perspective endorsed by 5% of respondents highlighted that identifiable feedback better reflects professional assessments outside the educational environment.
Discussion
Although peer assessment plays an expanding role in medical education and proper training is undoubtedly essential for teaching students to deliver effective feedback [2, 8], there is currently no consensus on the best approach to implement such training. Additionally, the longitudinal development of student comfort with providing peer feedback has not been well documented. We found that following an initial training session, provision of regular reinforcement in the form of biweekly worksheets boosted uptake of a structured framework for verbal peer assessment (ORF) in a small-group CS setting. ORF usage correlated with greater improvement in self-reported feedback delivery skills during the first year of medical school and with continued growth of related abilities during the second year. In qualitative remarks, students appreciated the ORF structure and commented that it produced more constructive, specific feedback. Over the two preclinical years, students reported increasing comfort with judging peer performance, with specific improvements in providing modifying feedback and verbal feedback compared to reinforcing or written feedback, respectively. Based on these results, we permanently incorporated ORF with regular reinforcement into our institution’s CS course as a quality improvement measure.
Our data highlight the importance of introducing training in feedback delivery methods early in the medical curriculum to provide a solid foundation on which to build feedback skills. Compared to a minority of second-year students, a majority of first-year students found ORF to be helpful and used it, regardless of previous verbal feedback experience. This trend corresponds with increased comfort with providing peer feedback in Year 2 versus Year 1 and may stem from students evolving and internalizing personalized peer assessment strategies given time and practice. Moreover, the use of ORF enabled first-year students who were initially less comfortable with giving verbal peer feedback to rapidly close the gap with their more confident counterparts. Early introduction of organized feedback training also hastens acceptance of peer feedback as a feasible form of assessment and facilitates the development of critical judgment skills, both of which are crucial in professional medical practice [4, 16].
While general confidence in peer feedback skills improved over time, gains were most prominent in domains for which students reported lower comfort at baseline. Prior to medical school, more of our students had previous experience delivering written compared to verbal feedback. First-year students were therefore initially less comfortable delivering verbal versus written feedback, whereas over 90% of second-year students expressed comfort with both peer feedback modalities throughout the year. Consistent with the literature [5, 6], we likewise found that at baseline, students were less comfortable with delivery of modifying compared to reinforcing feedback. Even after initial instruction, over 40% of first-year students requested further training in providing modifying comments to peers. Encouragingly, more than 85% of second-year students expressed confidence in giving modifying peer feedback, which could be a consequence of receiving regular ORF reinforcement, having an additional year of practice, or both. Building on prior work [17], we postulate that shared learning experiences and structured feedback frameworks could both be incorporated into curricula to synergistically boost student confidence with assessing their peers, particularly in the context of providing modifying feedback.
Furthermore, our findings expand on previous qualitative observations of student preferences for feedback anonymity [18, 19] by quantifying decreased support for anonymous feedback in Year 2 compared to Year 1. This trend may suggest that early on, students value how anonymous feedback enables candid evaluation without jeopardizing peer relationships, while with more experience and trust in peer assessment, they prefer how identifiable feedback promotes follow-up clarification and accountability, which may better prepare them for the professional workplace [11, 18]. Hence, for institutions that ask students to perform written peer assessments, it may be appropriate to implement anonymous feedback at the start of the program and transition to identifiable feedback only after students have established peer relationships and come to trust this mode of assessment.
Since our project occurred at a single institution with a small annual class size (albeit with high questionnaire response rates), some observed trends did not achieve statistical significance and require replication in larger studies. As in prior work [6], we focused on student-reported perceptions on the grounds that increased self-confidence with feedback provision may help students become more willing to provide candid, balanced assessments for others in future encounters [10]. Additional work is thus needed to examine correlations between self-reported and externally rated feedback skill levels and to objectively compare changes in peer feedback structure after training and reinforcement. Our results should also be considered in the context of a close-knit learning community that prioritizes and provides numerous opportunities for peer assessment, the impact of which is difficult to parse [1]. However, as medical schools increasingly incorporate collaborative small-group learning, opportunities to foster strong longitudinal peer relationships and integrate peer feedback have become more commonplace [2, 17]. We did not mandate usage of ORF, so students who used the method were a self-selecting group who provided observational rather than randomized controlled data. Finally, we noted an unexpectedly high percentage of ORF users who requested additional peer feedback training at the end of the two-year preclinical curriculum. Although this finding could indicate an unmet educational need, it most likely represents responder error: all five students who strongly agreed with the need for more training wrote "none" when asked to specify topics for further instruction. Moreover, students who used ORF reported comfort levels similar to those of ORF non-users on all other aspects of feedback, and for all other questionnaire items, quantitative and qualitative answers were concordant.
Conclusions
We have demonstrated that a simple improvement not present in previous related studies [5, 6, 9], namely the provision of regular training reinforcement, can increase student acceptance of a structured peer assessment method. Our results suggest that provision of the ORF method with reinforcement promotes the longitudinal development of self-reported feedback skills and may facilitate more constructive feedback. Analogous strategies for training reinforcement may prove beneficial in other medical education settings. Furthermore, our project’s three-year duration provides new insight into how student perceptions of peer assessment abilities evolve over successive years of medical school. Future educational strategies should retain the beneficial structure and clarity of the ORF method while introducing enhanced adaptability and time efficiency.
Acknowledgments
The authors thank Dr. Beth Bierer for her thoughtful comments on a previous draft of this manuscript. They also thank Josephine Volovetz, Catherine Ituarte, Kelly Shibuya, Blair Mitchell-Handley, and Brian Schroer for assistance in developing and implementing ORF.
Author contributions
All authors contributed to study conception and design. Material preparation and data collection were performed by Alice Tzeng, Bethany Bruno, Jessica Cooperrider, Perry B. Dinardo, Rachael Baird, Carol Swetlik, Brittany N. Goldstein, and Radhika Rastogi. Data analyses and initial manuscript drafting were performed by Alice Tzeng, Bethany Bruno, Jessica Cooperrider, and Perry B. Dinardo. All authors commented on previous versions of the manuscript, and read and approved the final manuscript.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Availability of data and material
Due to the IRB-exempt nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data are not available.
Declarations
Ethics approval
The project was categorized as quality improvement by the Cleveland Clinic Institutional Review Board and thus was granted an exemption. All work was conducted in accordance with the 1964 Helsinki Declaration.
Conflicts of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bethany Bruno, Jessica Cooperrider, and Perry B. Dinardo contributed equally to this work.
References
- 1. Dannefer EF, Prayson RA. Supporting students in self-regulation: use of formative feedback and portfolios in a problem-based learning setting. Med Teach. 2013;35(8):655–660. doi: 10.3109/0142159x.2013.785630.
- 2. Lerchenfeldt S, Mi M, Eng M. The utilization of peer feedback during collaborative learning in undergraduate medical education: a systematic review. BMC Med Educ. 2019;19(1):321. doi: 10.1186/s12909-019-1755-z.
- 3. Burt J, Abel G, Elliott MN, Elmore N, Newbould J, Davey A, et al. The evaluation of physicians' communication skills from multiple perspectives. Ann Fam Med. 2018;16(4):330–337. doi: 10.1370/afm.2241.
- 4. Hulsman RL, Peters JF, Fabriek M. Peer-assessment of medical communication skills: the impact of students' personality, academic and social reputation on behavioural assessment. Patient Educ Couns. 2013;92(3):346–354. doi: 10.1016/j.pec.2013.07.004.
- 5. Cushing A, Abbott S, Lothian D, Hall A, Westwood OMR. Peer feedback as an aid to learning – What do we want? Feedback. When do we want it? Now! Med Teach. 2011;33(2):e105–e112. doi: 10.3109/0142159x.2011.542522.
- 6. Burgess AW, Roberts C, Black KI, Mellis C. Senior medical student perceived ability and experience in giving peer feedback in formative long case examinations. BMC Med Educ. 2013;13(1):79. doi: 10.1186/1472-6920-13-79.
- 7. Ludwig AB, Raff AC, Lin J, Schoenbaum E. Group observed structured encounter (GOSCE) for third-year medical students improves self-assessment of clinical communication. Med Teach. 2017;39(9):931–935. doi: 10.1080/0142159x.2017.1332361.
- 8. Li H, Xiong Y, Hunter CV, Guo X, Tywoniw R. Does peer assessment promote student learning? A meta-analysis. Assess Eval High Educ. 2020;45(2):193–211. doi: 10.1080/02602938.2019.1620679.
- 9. Perera J, Mohamadou G, Kaur S. The use of objective structured self-assessment and peer-feedback (OSSP) for learning communication skills: evaluation using a controlled trial. Adv Health Sci Educ. 2010;15(2):185–193. doi: 10.1007/s10459-009-9191-1.
- 10. Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ. 2008;337:a1961. doi: 10.1136/bmj.a1961.
- 11. Finn GM, Garner J. Twelve tips for implementing a successful peer assessment. Med Teach. 2011;33(6):443–446. doi: 10.3109/0142159X.2010.546909.
- 12. Pendleton D. The Consultation: An Approach to Learning and Teaching. Vol 6. Oxford University Press; 1984.
- 13. Vickery AW, Lake FR. Teaching on the run tips 10: giving feedback. Med J Aust. 2005;183(5):267–268. doi: 10.5694/j.1326-5377.2005.tb07035.x.
- 14. Dannefer EF, Henson LC. The portfolio approach to competency-based assessment at the Cleveland Clinic Lerner College of Medicine. Acad Med. 2007;82(5).
- 15. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. doi: 10.1177/1049732305276687.
- 16. Rudy DW, Fejfar MC, Griffith CH, Wilson JF. Self- and peer assessment in a first-year communication and interviewing course. Eval Health Prof. 2001;24(4):436–445. doi: 10.1177/016327870102400405.
- 17. Chou CL, Masters DE, Chang A, Kruidering M, Hauer KE. Effects of longitudinal small-group learning on delivery and receipt of communication skills feedback. Med Educ. 2013;47(11):1073–1079. doi: 10.1111/medu.12246.
- 18. Arnold L, Shue CK, Kritt B, Ginsburg S, Stern DT. Medical students' views on peer assessment of professionalism. J Gen Intern Med. 2005;20(9):819–824. doi: 10.1111/j.1525-1497.2005.0162.x.
- 19. Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: a qualitative study. Adv Health Sci Educ Theory Pract. 2007;12(2):169–186. doi: 10.1007/s10459-005-5046-6.