International Journal of Applied and Basic Medical Research. 2016 Jul-Sep;6(3):220–225. doi: 10.4103/2229-516X.186968

Impact of structured verbal feedback module in medical education: A questionnaire- and test score-based analysis

Meenakshi Aggarwal*, Sonia Singh*, Anu Sharma*, Poonam Singh*, Priya Bansal1
PMCID: PMC4979308  PMID: 27563592

Abstract

Introduction:

Feedback is a divalent bond between the supplier (teacher) and the recipient (student). The strength of the bond depends on the instructional design of the feedback. Feedback is central to medical education in promoting self-directed learning in students. In the present study, a structured verbal feedback module was prepared, implemented, and evaluated.

Methods:

The study included 280 students from four consecutive batches (2011–2014) of 1st-year MBBS students, exposed to different types and modes of feedback. Students' perceptions of verbal feedback were analyzed using a student feedback questionnaire. Quantitative analysis of the impact of the type of feedback (verbal or written) and of the mode of verbal feedback (individual or group) on test score performance was done using post hoc tests and ANOVA.

Result:

In this study, ≥95% of the students indicated in the questionnaire that they preferred verbal feedback covering both positive and negative attributes. Verbal feedback sessions made a difference of up to 2–2.4 grade points in the mean score of a batch when compared with written feedback. The difference in the initial mean test score (T1) between the pooled 2011 + 2012 and 2013 + 2014 batches was not statistically significant (P = 0.113), but in all subsequent tests (T2, T3, and T4), there was a statistically significant difference in the mean test scores (P < 0.001).

Conclusion:

(1) Students prefer verbal one-to-one feedback over written feedback. (2) Verbal feedback changes the learning process and causes sustained improvement in learning strategies.

Keywords: Feedback, formative, learning process, performance, self-directed learning, test score

INTRODUCTION

The concept of feedback as "information that a system uses to make adjustments in reaching a goal" was first appreciated by rocket engineers in the 1940s. Feedback refers to information describing students' performance in a given activity that is intended to guide their future performance in that same or a related activity. It is a key step in the acquisition of knowledge, skills, and attitude, yet feedback is often omitted or handled improperly. If feedback is used merely to criticize the system, its regulatory role and purpose are defeated. However, if the information that flows back from the performance is able to change the general method and pattern of teaching, it will play a pivotal role in learning.[1]

The main purpose of feedback is to reduce discrepancies between current understanding and desired performance.[2] Feedback that is specific to the learner's performance is highly valued by learners, whereas nonspecific feedback is less valued. Strategically structured feedback modules can be targeted at students to bridge the gap between current performance and what is desired. Students can then increase their effort, particularly when that effort leads to higher-quality conceptual learning. However, if handled incorrectly, feedback may damage the student–teacher relationship and inhibit giving or receiving feedback in the future. Higher commitment from teachers will enhance students' clarity in establishing goals and learning objectives, eventually leading to a higher success rate in assessment and performance.[3]

Feedback can act as a powerful influence on learning and achievement. Teachers can assist in reducing the gap between actual performance and desired goal attainment by providing feedback that helps evolve effective learning strategies.[4] A mentoring relationship between teacher and learner is crucial in giving such feedback.[5] Effective feedback, according to Hattie and Timperley (2007), has three components – feed up, feedback, and feed forward – and focuses on three questions for the learner: Where am I going? (feed up), How am I going? (feedback), and Where to go next? (feed forward). Each of these questions works at three levels – the task level, the process level, and the self level. Feedback thus promotes comprehension of the task or goal, of the process or way to achieve the goal, and of the learner's own learning style and strategies. Most importantly, teachers can create a learning environment and influence students' ability to self-assess and self-monitor.[6] Students' self-assessment can lead to effective self-regulation and lifelong learning habits.

Effective learners create internal feedback and cognitive routines whenever they engage in academic tasks. Feedback acts as an inherent catalyst for all self-regulated activities of the learner.[7] On the teacher's part, it also conveys an attitude of concern for the progress and development of the person in a real sense, and not just as a function of grades or test scores.

Considering the importance of the teachers' role in student learning outcomes, we designed a verbal feedback module. We experimented with a greater degree of expression to strengthen the links between teacher and students by incorporating two modes – one-to-one individual feedback and feedback in groups. The module was based on an ask-tell-ask principle to start a dialog between student and teacher and prompt self-reflection. The whole effort was undertaken to create awareness among students about how they can improve their self-analysis and to encourage them toward better performance. This might help us as facilitators in motivating improved student task engagement and in building a good student–teacher relationship. In this study, therefore, a structured feedback module considering the aforementioned strengths and weaknesses was prepared in an attempt to improve the teaching and learning process through a participatory approach.

METHODS

The present investigation included four consecutive batches of 1st-year MBBS students (all students in each batch, n = 70). Informed written voluntary consent was taken from the participants. The Institutional Ethics Committee gave ethical clearance for the work. All research was carried out in compliance with the Declaration of Helsinki.

Each year, the students were taught neuroanatomy by the same team of faculty using didactic lectures, small group discussions, dissection demonstrations, and self-study time. Comprehension of the topics taught was evaluated through well-spaced written tests with questions such as drawing well-labeled diagrams, short-essay questions, and reasoning-based questions. A total of four formative tests were included in the study, and all four tests were followed by feedback from the teachers to the students.

In batches 2011 and 2012, no verbal interactive feedback was provided to the students regarding their performance. They were provided marked answer sheets with written comments alongside each answer.

In batch 2013, verbal feedback was given following all four tests (T1–T4). Students were first sensitized by a role-play by the faculty. The students were then divided into 7 groups of 10 students each (by roll number), and each group was asked to choose a leader, a reporter, a presenter, and a time-keeper. The group task was to formulate an ideal answer for one question by consulting books and lecture notes, describing the subtopic asked under headings and subheadings. The time allotted for the small group discussion was 10 min. The resulting answers were presented by each presenter to the larger group, with 7 min per presentation. A consensus was reached in the larger group on the minimum acceptable answer outline for each subtopic according to the marking scheme provided. The total duration of the group feedback session was 60 min. The teachers acted only as facilitators to direct meaningful discussions in both the small and large groups.

In batch 2014, verbal feedback for the four tests was given in two ways. After T1 and T3, feedback was given on a one-to-one basis, i.e., by interacting with each student individually. The one-to-one feedback session was an open-book re-evaluation of answers by the teacher–student team. The students were first sensitized by a role-play by the faculty and were then called individually for the one-to-one feedback session. A team of six faculty members gave individual feedback to each student; all six had received training in the basic course in medical education under the aegis of the Medical Council of India, which consists of a 3-day faculty development workshop that includes giving and receiving feedback. Of these six, four were resource faculty for the faculty development workshop that has trained more than 100 faculty in the Institute. The teaching experience of the faculty in Anatomy varied between 8 and 16 years. Authors MA, SS, and AS were part of this team. The duration of each feedback was 10 min, and the total duration of the one-to-one session was 120 min. The feedback gave information regarding the performance gap between the achieved result and the expected goal. Each question was discussed in detail to promote the student's self-analysis of strong and weak attributes. The questions posed by the teachers to the students were:

Ask

  • How well have you attempted the question?

  • Are you satisfied with the test score?

  • Can you now point out the various important concepts that you missed out?

Tell

Ask

  • What is your learning strategy?

  • Do you feel a need to improve or change your learning strategy?

Tell

Here, students were also provided with the following link to assess their learning styles.

http://www.educationplanner.org/students/self-assessments/improving-study-habits.shtml

Ask

  • Did this discussion add valuable information to your learning?

  • Can you apply this knowledge to other topics?

After T2, verbal feedback was given by dividing students into seven groups. The duration and setup of the group feedback session were identical to those used for batch 2013.

The evaluation of intervention comprised two components:

  1. Subjective evaluation by students: A structured student questionnaire with 22 items, scored on a 5-point Likert scale, was administered at the end of the feedback modules in the two batches, 2013 and 2014. The questionnaire elicited students' reactions to various aspects of the feedback process ([a] general perception or prenotion of feedback, [b] content, [c] assistance to self-directed learning, [d] preference of mode, and [e] enhancement of the student–teacher relationship). The results are shown in Table 1. For simplification of data presentation, the strongly agree and agree responses were added together

  2. Objective evaluation of student performance: The mean and standard deviation of test scores in each batch (2011–2014) were calculated [Table 2]. Post hoc tests (Fisher Least Significant Difference [LSD]) were done to analyze the test scores of batches 2013 and 2014, as shown in Tables 3 and 4. Statistical comparison of test scores between the two previous batches and the two new batches was done using one-way ANOVA and Levene's test for equality of variances, as shown in Tables 5a and b. Values of mean, standard error of the mean, F value, degrees of freedom, and significance (both one-tailed and two-tailed) were documented. A minimal illustrative sketch of this analysis workflow follows below.
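The sketch below (Python, using NumPy and SciPy) shows the general shape of such an analysis: per-test descriptive statistics, a one-way ANOVA across the four tests of a batch, and unadjusted pairwise comparisons as a simplified stand-in for Fisher's LSD. The score arrays are randomly generated placeholders, not the study data, and SciPy is only an assumed tool choice; the source does not state which statistics package was used.

```python
# Illustrative sketch only: hypothetical placeholder scores, not the study data.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical grade-point scores for one batch of 70 students on tests T1-T4.
batch = {
    "T1": rng.normal(10.0, 2.5, 70),
    "T2": rng.normal(12.2, 2.5, 70),
    "T3": rng.normal(12.6, 2.5, 70),
    "T4": rng.normal(12.4, 2.5, 70),
}

# Mean and standard deviation per test (as reported in Table 2).
for test, scores in batch.items():
    print(f"{test}: mean = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")

# One-way ANOVA across the four tests of the batch.
f_stat, p_anova = stats.f_oneway(*batch.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise intercomparison of the tests (as in Tables 3 and 4). Unadjusted
# t-tests after a significant ANOVA approximate Fisher's LSD; a full LSD
# would reuse the pooled ANOVA error term for every pair.
for a, b in combinations(batch, 2):
    t_stat, p = stats.ttest_ind(batch[a], batch[b])
    diff = batch[b].mean() - batch[a].mean()
    print(f"{a} vs {b}: mean difference = {diff:.2f}, p = {p:.4f}")
```

Run once per batch, the same routine corresponds to the intercomparisons reported for batches 2013 and 2014.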

Table 1.

Perceptions of students regarding feedback


Table 2.

Test scores of students (n=70/batch)


Table 3.

Post hoc tests (Fisher Least Significant Difference) showing intercomparison of the scores of batch 2013


Table 4.

Post hoc tests (Fisher Least Significant Difference) showing intercomparison of the scores of batch 2014


Table 5a.

Analysis of test score differences in various batches


Table 5b.

Levene's test for equality of variances


RESULTS

Analysis of perception of students

Subjective evaluation of verbal feedback was done for 140 students using a structured feedback questionnaire with the following components:

  1. General perception (prenotion about feedback in general, items 1, 2, 3, 4, 5, and 21)

  2. Content and attributes (items 6, 7, 8, 10, 11, 12, and 13)

  3. Encouraging self-analysis (enquiry into one's own learning process; items 9, 14, 17, 19, and 22)

  4. Orientation toward task completion or goal (items 16 and 18)

  5. Student–teacher interaction (items 15 and 20).

In both sets of students (2013 and 2014), more than 90% of students felt that assessment followed by verbal feedback is an important aid to learning, and ≥95% felt that good verbal feedback should discuss both the positive and negative aspects of task performance. The questionnaire also brought out the students' perception that such feedback can act as a guide for future improvement in performance [items 1, 2, 8, and 9 in Table 1].
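As a minimal sketch of how such agreement figures can be tabulated, the snippet below pools the "strongly agree" and "agree" responses of a 5-point Likert questionnaire into a single agreement percentage per item, mirroring the simplification described for Table 1. The response matrix is a randomly generated placeholder, not the actual questionnaire data.

```python
# Illustrative sketch only: random placeholder responses, not the study data.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_items = 140, 22

# Responses coded 1 = strongly disagree ... 5 = strongly agree.
responses = rng.integers(1, 6, size=(n_students, n_items))

# Percentage of students answering "agree" (4) or "strongly agree" (5) per item.
agreement = (responses >= 4).mean(axis=0) * 100
for item, pct in enumerate(agreement, start=1):
    print(f"Item {item:2d}: {pct:.1f}% agreement")
```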

Analysis of improvement in test scores

Objective evaluation of the improvement in test scores for 280 students was done using the following strategy:

  • The mean and standard deviation of test scores for all four batches are shown in Table 2. Batches 2013 and 2014 showed a sustained improvement in test scores

  • Intercomparison of scores in batch 2013 showed a significant (P < 0.001) improvement in T2, T3, and T4 over T1. The difference in grade points between T1 and T2 was 2.2. However, the further improvement in the scores of T3 and T4 was not significant (P > 0.05), as shown in Table 3

  • Intercomparison of test score improvement in batch 2014 [Table 4] depicted an interesting trend. T2, T3, and T4 showed statistically significant improvement over T1 (P < 0.001). The difference in performance between T2 and T3 was not significant (P = 0.352), whereas a statistically significant improvement was seen between T3 and T4 (P = 0.017)

  • The hypothesis that one-to-one verbal feedback may induce greater improvement in scores was tested using post hoc tests (Fisher LSD) in batches 2013 and 2014, as shown in Tables 3 and 4. Both sets of students underwent verbal feedback sessions, but the modality differed

The intercomparison of batch 2013 using post hoc tests (Fisher LSD) showed that verbal feedback in groups caused a significant improvement in the test scores of consecutive tests [Table 3]. A similar analysis of batch 2014 showed an even greater improvement in performance scores in T2 and T4 following one-to-one verbal feedback [Table 4].

  • One-way ANOVA, with Levene's test for equality of variances, was done to evaluate the difference in mean test scores between the batches given written feedback and those given verbal feedback. Batches 2011 and 2012 were given written feedback for all answers, batch 2013 underwent group feedback sessions, and batch 2014 was exposed to both modes of verbal feedback, in groups and one-to-one. Table 5 shows that the initial mean T1 scores of 2011 + 2012 and 2013 + 2014 were comparable, i.e., the difference was not statistically significant (P = 0.113). However, in all subsequent tests (T2, T3, and T4), there was a statistically significant difference in mean test scores (P < 0.001) between the batches with written feedback and the batches with verbal feedback. A minimal sketch of this pooled comparison follows.
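The sketch below illustrates this pooled between-group comparison: for each test, the two written-feedback batches (2011 + 2012) are pooled and compared with the two verbal-feedback batches (2013 + 2014) using Levene's test for equality of variances and a one-way ANOVA. The score arrays and means are hypothetical placeholders, and SciPy is an assumed tool choice not stated in the source.

```python
# Illustrative sketch only: hypothetical placeholder scores, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_batch(mean, n=70, sd=2.5):
    """One batch of n students with a given hypothetical mean grade point."""
    return rng.normal(mean, sd, n)

# Hypothetical mean grade points per test for the written- and verbal-feedback groups.
test_means = {
    "T1": (10.0, 10.3),  # comparable starting scores
    "T2": (10.2, 12.3),
    "T3": (10.4, 12.6),
    "T4": (10.5, 12.8),
}

for test, (m_written, m_verbal) in test_means.items():
    written = np.concatenate([simulate_batch(m_written), simulate_batch(m_written)])  # 2011 + 2012
    verbal = np.concatenate([simulate_batch(m_verbal), simulate_batch(m_verbal)])     # 2013 + 2014
    lev_stat, lev_p = stats.levene(written, verbal)
    f_stat, p = stats.f_oneway(written, verbal)
    print(f"{test}: Levene p = {lev_p:.3f}, ANOVA F = {f_stat:.2f}, p = {p:.4f}")
```

With only two pooled groups per test, the one-way ANOVA is equivalent to an unpaired t-test, which is why a single F value and P value per test suffice for this comparison.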

DISCUSSION

Feedback can increase the power of assessment for learning. Assessment, as such, does not aid the metacognitive processes needed for long-term change in learning strategies and motivation. Assessment that merely grades student knowledge does not educate the student about the actual gap between performance and expected goals; it only informs the student and teacher about the knowledge, or lack of it, in the topics evaluated. It neither brings further clarity about the goal to be attained nor leads to an adjustment of direction or strategy by the students.

Assessment followed by feedback from teachers, assisting in the adjustment of effort, direction, and strategy, enables students to break the ultimate goal into small achievable tasks. This creates a clear understanding of the gap between current learning and intended learning, motivating the majority of students to see a need to reduce it.

The role of a teacher in re-charting the best course of action according to each individual's lacunae of knowledge is immense. Students differ in their learning attitudes and in the diagnostic accuracy with which they identify learning gaps. They also differ in their perceptions of self-competence and capability of analysis, and thus in their internal feedback systems. Well-directed verbal feedback after each assessment can improve task confidence and self-analysis in students.

To investigate whether verbal feedback had an effect on the mean test scores of a batch, the test scores of the previous two batches (2011 and 2012) were compared with the test scores on the same topics in batches 2013 and 2014. It was observed that verbal feedback sessions made a difference of up to 2–2.4 grade points for the whole batch when compared with the batches that received written feedback after assessments [Table 2]. Although the initial difference in test scores between the two groups was not statistically significant, the final test scores were significantly better after verbal feedback (P < 0.001).

Post hoc tests (Fisher LSD) were done to compare test scores within batch 2013 and within batch 2014, to see whether the mode of verbal feedback had any effect on mean class test scores. In batch 2013, in which all assessments were followed by verbal feedback in groups, the mean difference between T1 and T4 was 2.36 grade points. In batch 2014, T1 and T3 were followed by one-to-one feedback, whereas T2 was followed by feedback in groups; here, the mean difference between T1 and T4 was 2.76 grade points. This suggests that one-to-one verbal feedback has an edge over verbal feedback in groups: one-to-one feedback may be even more helpful in eliciting the learning gaps of weaker students and motivating more accurate self-analysis by them.

Our results are at variance with a published prospective randomized controlled study on the effect of explicit feedback.[8] The authors investigated whether feedback following an interim assessment in pathology would have an effect on the score of the course exam, and whether the effect was influenced by the gender of the student. The intervention, in a bachelor course, consisted of immediate detailed oral feedback by the tutor on the content of the questions of the interim assessment, including the rationale for the correct and incorrect answers. They reported no statistically significant effect of feedback and no significant interaction of feedback with gender.

Our results are similar to those of a cohort study which found that detailed feedback on test-enhanced learning questions is an important online learning tool.[9] A series of online multiple choice tests was developed to test the knowledge of biomedical information that students were expected to know after each patient case. Following submission of the student answers, one cohort (n = 52) received the answers only, while in the following year a second cohort (n = 51) received the answers with detailed feedback explaining why each answer was correct or incorrect. Students in both groups progressed through the series of online tests with little assessor intervention. Students receiving the answers along with explanations in their feedback performed significantly better in the final biomedical information examination than students receiving the correct answers only.

Some other published literature is worth mentioning to understand the reasons for this disparity. Studies have evaluated trainees' perceptions of the usefulness of feedback received. One study explored trainees' perceptions of the educational value of case-based discussions (CBDs), focusing specifically on feedback.[10] An online questionnaire and interviews were used to obtain detailed descriptions of the views and experiences of pediatric trainees at UK specialist training levels 1 and 2. Qualitative data were analyzed using a thematic framework analysis. Opinions varied regarding how useful trainees found the feedback. Feedback was perceived as more valuable from assessors who had a positive attitude toward CBDs, understood the process, and had experience in leading them. Time constraints and assessments performed in less suitable environments had a negative impact on feedback. Trainees felt the choice of case played an important role, with challenging cases resulting in more beneficial feedback. Trainers being aware of the qualities of the discussions that result in successful feedback could significantly improve their educational value.

Engaging in formative assessment with a genuine impact on learning is complex and quite a challenge for both trainees and supervisors. This is emphasized in another study utilizing focus groups with postgraduate trainees and supervisors in obstetrics and gynecology.[11] It identified three higher-order themes: individual perspectives on feedback, supportiveness of the learning environment, and the credibility of the feedback and/or the feedback giver.

CONCLUSION

Students prefer verbal one-to-one feedback over written feedback. Structured verbal feedback leads to significant improvement in cognitive performance. Feedback is time-consuming, especially if it is aimed at students on a one-to-one basis after each assessment, but the present intervention suggests that students can be taught self-analysis and effective learning strategies. Exhaustive task feedback need not be a whole-year or whole-course process: the initial few formative assessments can be followed by one-to-one feedback, and a few subsequent assessments can be followed by small group feedback sessions. Once self-analysis and task confidence are established, the internal feedback and motivation system of the student will take over.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

Acknowledgment

We gratefully acknowledge the efforts of our dear colleague Mrs. Anshu Soni for her critical appraisal of the manuscript, resulting in meaningful revision of the same.

REFERENCES

1. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–81.
2. Hattie J. Teachers Make a Difference: What is the Research Evidence? 2003. Available from: https://www.cdn.auckland.acnz/assets/education/hattie/docs/teachers-make-a-difference-ACER-(2003).pdf. [Last accessed on 2015 Oct 20].
3. Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119:254–84.
4. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112.
5. Moorhead R, Maguire P, Thoo SL. Giving feedback to learners in the practice. Aust Fam Physician. 2004;33:691–5.
6. Sargeant J. Toward a common understanding of self-assessment. J Contin Educ Health Prof. 2008;28:1–4. doi: 10.1002/chp.148.
7. Butler DL, Winne PH. Feedback and self-regulated learning: A theoretical synthesis. Rev Educ Res. 1995;65:245–74.
8. Olde Bekkink M, Donders R, van Muijen GN, de Waal RM, Ruiter DJ. Explicit feedback to enhance the effect of an interim assessment: A cross-over study on learning effect and gender difference. Perspect Med Educ. 2012;1:180–91. doi: 10.1007/s40037-012-0027-y.
9. Wojcikowski K, Kirk L. Immediate detailed feedback to test-enhanced learning: An effective online educational tool. Med Teach. 2013;35:915–9. doi: 10.3109/0142159X.2013.826793.
10. Mehta F, Brown J, Shaw NJ. Do trainees value feedback in case-based discussion assessments? Med Teach. 2013;35:e1166–72. doi: 10.3109/0142159X.2012.731100.
11. Dijksterhuis MG, Schuwirth LW, Braat DD, Teunissen PW, Scheele F. A qualitative study on trainees' and supervisors' perceptions of assessment for learning in postgraduate medical education. Med Teach. 2013;35:e1396–402. doi: 10.3109/0142159X.2012.756576.
