Abstract
Introduction
Feedback cards are recommended as a feasible tool for delivering structured written feedback in clinical education, but the effectiveness of this tool on medical students’ performance is still questionable. The purpose of this study was to compare the effects of structured written feedback by cards combined with verbal feedback versus verbal feedback alone on the clinical performance of medical students in the Mini Clinical Evaluation Exercise (Mini-CEX) test in an outpatient clinic.
Methods
This was a quasi-experimental pre- and post-test study comprising four groups across two terms of medical students’ externship. The students’ performance was assessed with the Mini-Clinical Evaluation Exercise (Mini-CEX) as a clinical performance evaluation tool. Structured written feedback was given to the two experimental groups using designed feedback cards in addition to verbal feedback, while in the two control groups feedback was delivered only verbally, as is routine in clinical education.
Results
Using a consecutive sampling method, 62 externship students were enrolled in this study; eight students were excluded from the final analysis because of three days of absence. According to the ANOVA and Tukey post hoc test, no statistically significant difference was observed among the four groups at the pre-test, whereas a statistically significant difference was observed between the experimental and control groups at the post-test (F = 4.023, p = 0.012). The effect size of the structured written feedback on clinical performance was 0.19.
Conclusion
Structured written feedback delivered by cards improved the performance of medical students, although the difference was of statistical rather than clear clinical importance. Further studies should be conducted in other clinical courses and over longer durations.
Keywords: Feedback, Medical education, Ambulatory care, Outpatient clinics
Introduction
Clinical education is a major part of medical education that allows students to apply their theoretical knowledge in practice within a clinical setting. Providing effective feedback is a well-known educational strategy to enhance the effectiveness of the clinical teaching-learning process (1, 2). Clinical instructors routinely deliver feedback to students verbally. However, some experts consider verbal feedback a weak method of feedback delivery, because it is easily forgotten by students and instructors and reduces the likelihood of re-observation and reflection (3-5). Written feedback overcomes these limitations and has further advantages over verbal feedback: students are less likely to misinterpret or forget the feedback content, it establishes a shared understanding of the feedback content between instructors and students, and it enhances the students’ reflection on the delivered feedback (6-9).
In recent years, researchers have focused on developing practical tools for providing written and documented feedback to learners without interrupting the educational process in clinical settings. Among these tools, feedback cards are the most favorable means of delivering structured written feedback (5, 7). Feedback cards have numerous advantages owing to their small size and easy handling; furthermore, they can be applied in most educational settings without interrupting the educational process (10). Although some studies have reported learners’ satisfaction with this method of feedback provision, the effectiveness of these cards on the clinical performance of students is still questionable (11-14). Previous studies highlight the importance of experimental studies on the effectiveness of feedback tools on clinical performance (15, 16).
Clinical performance is complex to measure. The Mini Clinical Evaluation Exercise (Mini-CEX) is a highly recommended tool for evaluating the performance of undergraduate medical students in most clinical settings (17, 18). It has been adopted as a feasible, valid and reliable tool to evaluate six main clinical skills (19). This study investigated the effect of structured written feedback by cards on medical students’ performance in the Mini-CEX test in an outpatient clinic.
Methods
This was a quasi-experimental pre- and post-test study comprising four groups across two terms of medical students’ externship. The study was conducted in an educational hospital of Isfahan University of Medical Sciences in Iran.
The medical students of two educational terms (the first from February to April 2014 and the second from May to July 2014) participated in this study. A consecutive sampling method was adopted, and all externship students (30 in the first term and 32 in the second) were enrolled. All the students signed written consent to participate in the study. Participants were allocated into four groups (two from each term) based on the predetermined list of students’ names in the course schedule. In each term, the researchers entered the first group as a control group to decrease the probability of diffusion of the intervention during the study. After allocation, the first group (n = 15) and the third group (n = 17) served as the control groups, while the second group (n = 14) and the fourth group (n = 15) served as the experimental groups. During the study, 3 students in the first term and 5 in the second term were excluded because of three days of absence. Data from 54 students were finally analyzed. Two clinical instructors examined the students’ performance with the Mini-CEX test as the pre- and post-test. The intra-rater reliability of the Mini-CEX test was controlled by assigning the same instructor to the pre- and post-test in each group.
The instructors delivered verbal feedback to the two control groups during the outpatient clinical teaching sessions. The two experimental groups received both verbal and structured written feedback. The feedback card was designed to deliver the structured written feedback in the two experimental groups based on the format recommended in previous studies (10-12, 20, 21), and was adjusted for outpatient clinic performance by four medical education experts and clinical instructors. The cards were 10×13 cm in size. To deliver the structured written feedback, each card had a table with eight performance items and scoring boxes for the instructors to rate the student’s performance as good, satisfactory, or unsatisfactory, as presented in previous studies. The back of the card had sufficient space for additional feedback notes.
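For illustration only, the card structure described above could be represented as a simple data record. The sketch below is a hypothetical Python rendering; the item labels are placeholders and are not the wording used on the study’s cards.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional

class Rating(Enum):
    GOOD = "good"
    SATISFACTORY = "satisfactory"
    UNSATISFACTORY = "unsatisfactory"

# Hypothetical item labels; the study card listed eight outpatient performance
# items, but their exact wording is not reproduced here.
CARD_ITEMS: List[str] = [
    "history taking", "physical examination", "differential diagnosis",
    "patient management", "communication", "professionalism",
    "documentation", "time management",
]

@dataclass
class FeedbackCard:
    """One 10x13 cm structured feedback card: eight rated items plus a free-text note."""
    student_id: str
    instructor_id: str
    ratings: Dict[str, Rating] = field(default_factory=dict)  # item -> rating
    back_note: Optional[str] = None  # unstructured recommendation written on the back
```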
The Mini-Clinical Evaluation Exercise (Mini-CEX) is a widely adopted tool for measuring clinical performance in a variety of clinical settings. The test evaluates learners’ ability in six main skills related to the assessment and management of patients in real conditions (17-19). Offered on a single sheet, the Mini-CEX is feasible to carry out, easy to handle, and time-saving in a busy clinical setting. The Mini-CEX test sheet consists of three parts: the first part contains three questions on patient characteristics (age, gender, and previous visits to the clinic); the second part is a table for rating the complexity of the case (from low to high); and the third part scores the student’s performance in six domains: medical interview, physical examination, professionalism, clinical judgment, data organization/sufficiency, and counseling skills. Each of these six skills is scored on a scale of 0-9 (0: absence of the desired behavior; 1-3: below expectations for the level of training; 4-6: meets expectations; 7-9: above expectations for the level of training). The total score of the checklist, ranging from 0 to 54, is regarded as the student’s performance score (22). The content and face validity of this test have been confirmed in previous studies (19, 23, 24). The consistency of Mini-CEX scores increases over multiple measurements (25); in a single assessment of a student, the Mini-CEX has been reported to have an intraclass correlation coefficient (ICC) and G coefficient below 0.50 (18, 25).
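As a minimal illustration of the scoring scheme described above (not the authors’ implementation), the total Mini-CEX score is simply the sum of the six domain scores, each rated 0-9:

```python
from typing import Dict

MINI_CEX_DOMAINS = [
    "medical interview", "physical examination", "professionalism",
    "clinical judgment", "data organization/sufficiency", "counseling skills",
]

def mini_cex_total(domain_scores: Dict[str, int]) -> int:
    """Sum the six domain scores (each 0-9) into a total performance score (0-54)."""
    for domain in MINI_CEX_DOMAINS:
        score = domain_scores[domain]
        if not 0 <= score <= 9:
            raise ValueError(f"{domain} score must be between 0 and 9, got {score}")
    return sum(domain_scores[d] for d in MINI_CEX_DOMAINS)

# Example: a student rated "meets expectations" (4-6) in every domain
example = {d: 5 for d in MINI_CEX_DOMAINS}
print(mini_cex_total(example))  # 30
```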
During the pre- and post-tests, each student selected a volunteer patient in the outpatient clinic, took the patient’s medical history, performed a physical examination, and proposed possible differential diagnoses. Next, the student presented the selected patient to the instructor at the outpatient clinic in the presence of the other students. In the final step, the instructor completed the patient’s assessment and determined the treatment plan and discharge for the patient. The instructor completed the Mini-CEX checklist immediately, to prevent interference between the patient visit and the evaluation of the student’s performance. Furthermore, the Mini-CEX test results were not used in the students’ final course evaluation, and the confidentiality of the participants was respected in the collection and storage of the test sheets.
A self-report questionnaire was completed at the end of the course to assess the students’ opinions on the quantity and quality of the delivered feedback. This questionnaire consisted of seven items scored on a five-point scale from 1 (strongly disagree) to 5 (strongly agree). In the experimental groups, the self-report questionnaire contained five additional items on the students’ opinions about the use of the feedback cards; the results of these five items were analyzed separately. The face validity of the questionnaire was approved by four instructors from the Department of Medical Education, and its reliability was acceptable based on Cronbach’s alpha (α = 0.74).
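The reliability coefficient quoted above can, in principle, be reproduced with the standard Cronbach’s alpha formula. The sketch below is a generic Python implementation run on simulated responses, not the SPSS procedure used in the study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_students, n_items) matrix of Likert responses."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Example with simulated 5-point responses to a seven-item questionnaire
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(54, 7))
print(round(cronbach_alpha(simulated), 2))
```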
Pearson correlation, the chi-square test, and one-way analysis of variance (ANOVA) with the Tukey post hoc test were used to compare the data among the four groups. A p-value of less than 0.05 was considered statistically significant. All analyses were performed in SPSS, version 14.
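The analyses listed above were performed in SPSS. As a rough, non-authoritative equivalent for readers who prefer open tools, the sketch below shows how the same comparisons could be reproduced in Python with scipy and statsmodels; the data frame and column names are simulated placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated stand-in for the study data set: one row per student.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["G1", "G2", "G3", "G4"], 15),
    "post_test": rng.integers(20, 50, size=60),        # Mini-CEX totals (0-54)
    "gender": rng.choice(["female", "male"], size=60),
})

# One-way ANOVA on post-test Mini-CEX scores across the four groups
f_stat, p_anova = stats.f_oneway(*[g["post_test"] for _, g in df.groupby("group")])

# Tukey HSD post hoc test for pairwise group differences
tukey = pairwise_tukeyhsd(endog=df["post_test"], groups=df["group"], alpha=0.05)
print(tukey.summary())

# Chi-square test of the gender distribution across the groups
chi2, p_chi, dof, expected = stats.chi2_contingency(pd.crosstab(df["group"], df["gender"]))

# Pearson correlation, e.g. between the number of delivered cards and post-test scores
# r, p = stats.pearsonr(df["n_cards"], df["post_test"])
```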
Ethical consideration
This study was performed in accordance with the Declaration of Helsinki and subsequent revisions and approved by the Medical Ethics Committee of the Isfahan University of Medical Sciences.
Results
The data of 54 students were analyzed. ANOVA among the four groups showed no significant difference in student age (F=1.46, p=0.23), and chi-square test results revealed that the gender distribution of the students was similar (χ2=1.51, p=0.68). There was also no significant difference in the demographic variables of the patients among the four groups. At the pre-test, ANOVA revealed that the patients’ ages were similar among the four groups (F=0.27, p=0.84), and chi-square results revealed that the distributions of patients’ gender (χ2=1.35, p=0.71), previous visits to the clinic (χ2=5.33, p=0.14), and case complexity (χ2=9.76, p=0.13) were similar among the four groups. Likewise, at the post-test there were no significant differences in patients’ age (F=1.27, p=0.29), gender (χ2=6.33, p=0.09), previous visits to the clinic (χ2=4.66, p=0.19), or case complexity (χ2=0.35, p=0.94).
Analysis of Mini-CEX scores at the post-test revealed that the intraclass correlation (ICC) between instructors was low (ICC=0.14, p=0.52), indicating that the Mini-CEX test had low inter-rater reliability. To control for the effect of this low inter-rater reliability, each instructor was assigned to fixed groups, and the Mini-CEX results were analyzed and reported separately. There was a moderate correlation between pre- and post-test Mini-CEX results for each of the two instructors in the Pearson correlation test (first instructor, r=0.56, p=0.05; second instructor, r=0.62, p=0.04), indicating moderate intra-rater reliability for each instructor in this study.
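For readers who wish to run similar reliability checks outside SPSS, the sketch below illustrates one way to compute an intraclass correlation and a pre-/post-test Pearson correlation in Python; it assumes the third-party pingouin package and uses simulated data with illustrative column names, not the study data set.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(2)

# Simulated long-format ratings: each student scored by each of two instructors.
students = [f"s{i}" for i in range(20)]
ratings = pd.DataFrame({
    "student": students * 2,
    "instructor": ["instructor_1"] * 20 + ["instructor_2"] * 20,
    "mini_cex": rng.integers(20, 50, size=40),
})

# Intraclass correlation between instructors (inter-rater reliability)
icc = pg.intraclass_corr(data=ratings, targets="student",
                         raters="instructor", ratings="mini_cex")
print(icc)

# Pearson correlation between pre- and post-test scores for one instructor
pre = rng.integers(20, 50, size=20)
post = pre + rng.integers(-5, 10, size=20)
r, p = stats.pearsonr(pre, post)
print(round(r, 2), round(p, 3))
```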
According to the ANOVA results, there were no significant differences in the mean pre-test Mini-CEX scores among the four groups (F=0.965, p=0.417), while there were significant differences in the mean post-test scores (F=4.023, p=0.012).
Tukey Post Hoc test revealed a significant difference between the first and second groups, the first and fourth groups, the third and second groups, and the third and fourth groups’ post-test scores of the Mini-CEX, while there was no significant difference between the first and third groups and the second and fourth groups in the post-test scores of the Mini-CEX (Table 1).
Table 1. The mean difference of Mini-CEX scores among the four groups (ANOVA and Tukey HSD post hoc test)

| Compared groups | Pre-test: Mean difference ±SD | Pre-test: p | Post-test: Mean difference ±SD | Post-test: p |
|---|---|---|---|---|
| First group / Second group | 0.04±0.90 | 1.00 | 5.00±1.69 | 0.02 |
| First group / Third group | 0.85±0.88 | 0.76 | 0.21±1.66 | 0.99 |
| First group / Fourth group | 0.66±0.90 | 0.88 | 4.54±1.69 | 0.04 |
| Second group / Third group | 0.90±0.90 | 0.74 | 5.21±1.74 | 0.02 |
| Second group / Fourth group | 0.61±0.92 | 0.90 | 0.46±1.77 | 0.99 |
| Third group / Fourth group | 1.52±0.90 | 0.34 | 4.75±1.69 | 0.03 |
The Eta squared among the four groups was 0.19 in the ANOVA, while the Eta squared between the experimental and control groups was 0.17 in the independent t-test. This indicated a small effect size for structured written feedback by cards on the students’ performance in the Mini-CEX test, based on Cohen’s classification of effect size (26).
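For reference, eta squared is conventionally computed from the ANOVA sums of squares, and from the t statistic for a two-group comparison; the expressions below are the standard definitions rather than values recomputed from the study data:

\[
\eta^{2}_{\text{ANOVA}} = \frac{SS_{\text{between}}}{SS_{\text{total}}},
\qquad
\eta^{2}_{t\text{-test}} = \frac{t^{2}}{t^{2} + df}.
\]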
The mean number of delivered cards is an influential factor in the effect size of the study. In the experimental groups, 82 cards were delivered to the students, and the mean number of feedback cards delivered per student in the two experimental groups was 2.88±0.65. The instructors completed the structured feedback section of all cards. About 24.39% of the cards carried an unstructured written recommendation for improvement on the back, while the backs of the remaining 75.61% were left blank. Quantitative content analysis of these recommendations showed that 75% were classified as “need more practicing” and 25% as “encouraging recommendation”; these recommendations were not specific. Pearson correlation test results showed no significant relationship between the number of feedback cards delivered and the students’ post-test performance (r=0.13, p=0.52).
The students’ opinions on the quantity and quality of the delivered feedback were also analyzed in this study. Most of the students in the four groups were satisfied with the quantity and quality of the feedback they received. For six of the seven items of the self-report questionnaire, there was no statistically significant difference among the four groups in the ANOVA test; for the remaining item, the two experimental groups perceived the delivered feedback to be significantly more respectful than did the two control groups (F=5.27, p=0.003). The ANOVA and Tukey post hoc test results are shown in Table 2.
Table 2. Differences in students’ opinions about the respectfulness of feedback delivery among the four groups (ANOVA and Tukey HSD post hoc test)

| Compared groups | Mean difference ±SD | p |
|---|---|---|
| First group / Second group | 1.08±0.33 | 0.01 |
| First group / Third group | 0.20±0.32 | 0.92 |
| First group / Fourth group | 1.01±0.35 | 0.03 |
| Second group / Third group | 0.88±0.32 | 0.04 |
| Second group / Fourth group | 0.66±0.35 | 0.99 |
| Third group / Fourth group | 0.89±0.34 | 0.04 |
Analysis of the experimental groups’ opinions about the feedback cards indicated that most students agreed or strongly agreed with the size and shape of the cards, the number of cards received from the instructors, and the understandability of the comments on the back of the cards. They were eager to receive feedback cards from their instructors and welcomed specific comments on the back of the cards. Moreover, 72% of the students recommended applying this tool in subsequent educational courses.
Discussion
The findings of this study revealed that the feedback card improved the performance of medical students in the outpatient clinic. However, the effect size was small; that is, the difference between the groups was statistically significant but fell short of clear clinical importance. Similar results have been reported in other studies assessing the effectiveness of feedback cards on students’ performance (27, 28). Paukert and colleagues used feedback cards to train interns during a 12-week surgery internship and compared their performance with that of the previous year’s students (as a retrospective control). They reported a greater effect size for the feedback cards. The participants were also satisfied with this method and believed that the cards enhanced their skills in history taking, physical examination, and clinical decision-making (12).
The findings of this study also revealed a lack of specific recommendations on the backs of the cards, and most of them were left blank. This finding is in accordance with that of Nesbitt and colleagues (9). Training instructors on the types of effective feedback is essential for improving the delivery of corrective recommendations (27). Richards and colleagues (14) found that the number of delivered cards does not matter; our findings similarly revealed no significant relationship between the number of cards given to each student and the test score at the end of the course. Feedback content that is meaningful to students, together with a specific recommendation note on the card, could improve the effectiveness of feedback cards. Previous studies showed that instructors gradually provided more specific feedback as they gained experience with the cards (27, 28). It is noteworthy that completing feedback cards might at first be a time-consuming task for instructors, but their performance improves over time.
Based on Kirkpatrick’s model, assessment of students’ satisfaction is essential in evaluating the effectiveness of an educational program (29). The findings of this study revealed that the students in all four groups were satisfied with the quality and quantity of the delivered feedback and believed that the feedback improved their performance. It should be noted that verbal feedback with or without structured written feedback had the same effect on students’ satisfaction. This may be related to the content of the feedback, which was encouraging in all groups. Previous studies have shown that positive feedback content can increase students’ satisfaction with feedback in clinical education (7, 30).
The two experimental groups perceived the content of the structured written feedback delivered by cards as more respectful than the control groups perceived their verbal feedback. Previous studies have not mentioned this finding. It may be related to the protection of students’ confidentiality when feedback content is delivered on cards in a busy educational setting.
Conclusion
The findings indicated that structured written feedback improved the performance of medical students more than verbal feedback alone. The students in the two experimental groups believed that feedback delivery by cards is a more respectful way of providing feedback in outpatient clinic settings. However, further studies should be conducted in other clinical educational courses and on other aspects of clinical performance.
Acknowledgments
This study was supported by a grant from Isfahan University of Medical Sciences (research project number 293127). The authors are grateful to all the medical instructors and students who participated in this study.
Conflict of interests: The authors declare no conflict of interest.
References
1. Hattie J, Timperley H. The power of feedback. Review of Educational Research. 2007;77(1):81–112.
2. Wimmers PF, Schmidt HG, Splinter TA. Influence of clerkship experiences on clinical competence. Med Educ. 2006;40(5):450–8. doi: 10.1111/j.1365-2929.2006.02447.x.
3. Van de Ridder J, Stokking KM, McGaghie WC, Ten Cate OTJ. What is feedback in clinical education? Med Educ. 2008;42(2):189–97. doi: 10.1111/j.1365-2923.2007.02973.x.
4. Bienstock JL, Katz NT, Cox SM, Hueppchen N, Erickson S, Puscheck EE. To the point: medical education reviews—providing feedback. American Journal of Obstetrics and Gynecology. 2007;196(6):508–13. doi: 10.1016/j.ajog.2006.08.021.
5. Haghani F, Fakhari M. Feedback in Clinical Education: Concept, Barriers, and Strategies. Iranian Journal of Medical Education. 2014;13(10):869–85. Persian.
6. Newton PM, Wallace MJ, McKimm J. Improved quality and quantity of written feedback is associated with a structured feedback proforma. Journal of Educational Evaluation for Health Professions. 2012;9:10. doi: 10.3352/jeehp.2012.9.10.
7. Becker Y. What Do Clinician Encounter Cards Really Mean? The Journal of Surgical Research. 2008;146(1):1–2. doi: 10.1016/j.jss.2007.04.007.
8. Weaver MR. Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education. 2006;31(3):379–94.
9. Nesbitt A, Pitcher A, James L, Sturrock A, Griffin A. Written feedback on supervised learning events. The Clinical Teacher. 2014;11(4):279–83. doi: 10.1111/tct.12145.
10. Kogan JR, Shea JA. Implementing feedback cards in core clerkships. Med Educ. 2008;42(11):1071–9. doi: 10.1111/j.1365-2923.2008.03158.x.
11. Bennett A, Goldenhar L, Stanford K. Utilization of a formative evaluation card in a psychiatry clerkship. Academic Psychiatry. 2006;30(4):319–24. doi: 10.1176/appi.ap.30.4.319.
12. Paukert JL, Richards ML, Olney C. An encounter card system for increasing feedback to students. The American Journal of Surgery. 2002;183(3):300–4. doi: 10.1016/s0002-9610(02)00786-9.
13. Kim S, Kogan JR, Bellini LM, Shea JA. A Randomized-Controlled Study of Encounter Cards to Improve Oral Case Presentation Skills of Medical Students. Journal of General Internal Medicine. 2005;20(8):743–7. doi: 10.1111/j.1525-1497.2005.0140.x.
14. Richards ML, Paukert JL, Downing SM, Bordage G. Reliability and usefulness of clinical encounter cards for a third-year surgical clerkship. Journal of Surgical Research. 2007;140(1):139–48. doi: 10.1016/j.jss.2006.11.002.
15. Bazrafkan L, Ghassemi GH, Nabeiei P. Feedback is good or bad? Medical residents’ points of view on feedback in clinical education. Journal of Advances in Medical Education & Professionalism. 2013;1(2):51–4.
16. Norcini J. The power of feedback. Med Educ. 2010;44(1):7–16. doi: 10.1111/j.1365-2923.2009.03542.x.
17. Hill F, Kendall K. Adopting and adapting the mini-CEX as an undergraduate assessment and learning tool. The Clinical Teacher. 2007;4(4):244–8.
18. Nair BR, Alexander HG, McGrath BP, Parvathy MS, Kilsby EC, Wenzel J, et al. The mini clinical evaluation exercise (mini-CEX) for assessing clinical performance of international medical graduates. Med J Aust. 2008;189(3):159–61. doi: 10.5694/j.1326-5377.2008.tb01951.x.
19. Kogan JR, Bellini LM, Shea JA. Feasibility, reliability, and validity of the mini-clinical evaluation exercise (Mini-CEX) in a medicine core clerkship. Academic Medicine. 2003;78(10):S33–5. doi: 10.1097/00001888-200310001-00011.
20. Greenberg LW. Medical students’ perceptions of feedback in a busy ambulatory setting: a descriptive study using a clinical encounter card. Southern Medical Journal. 2004;97(12):1174–78. doi: 10.1097/01.SMJ.0000136228.20193.01.
21. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10(1):44. doi: 10.1017/s1481803500010009.
22. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29(9):855–71. doi: 10.1080/01421590701775453.
23. Holmboe ES, Huot S, Chung J, Norcini J, Hawkins RE. Construct validity of the mini-clinical evaluation exercise (Mini-CEX). Acad Med. 2003;78(8):826–30. doi: 10.1097/00001888-200308000-00018.
24. Hill F, Kendall K. Adopting and adapting the mini-CEX as an undergraduate assessment and learning tool. The Clinical Teacher. 2007;4(4):244–8.
25. Cook DA, Dupras DM, Beckman TJ, Thomas KG, Pankratz VS. Effect of rater training on reliability and accuracy of mini-CEX scores: a randomized, controlled trial. Journal of General Internal Medicine. 2009;24(1):74–9. doi: 10.1007/s11606-008-0842-3.
26. Cohen J. Statistical power analysis for the behavioral sciences. New Jersey: Lawrence Erlbaum; 1988.
27. Schum TR, Krippendorf RL, Biernat KA. Simple feedback notes enhance specificity of feedback to learners. Ambulatory Pediatrics. 2003;3(1):9–11. doi: 10.1367/1539-4409(2003)003<0009:sfneso>2.0.co;2.
28. Ozuah PO, Reznik M, Greenberg L. Improving medical student feedback with a clinical encounter card. Ambulatory Pediatrics. 2007;7(6):449–52. doi: 10.1016/j.ambp.2007.07.008.
29. Changiz T, Fakhari M, Omid A. Kirkpatrick’s Model: a Framework for Evaluating the Effectiveness of Short-term and In-service Training Programs. Iranian Journal of Medical Education. 2014;13(12):1058–72.
30. Van de Ridder JM, Peters CM, Stokking KM, De Ru JA, Ten Cate OTJ. Framing of feedback impacts student’s satisfaction, self-efficacy and performance. Advances in Health Sciences Education. 2015;20(3):1–14. doi: 10.1007/s10459-014-9567-8.