Canadian Journal of Dental Hygiene. 2021 Feb 15;55(1):39–47.

Effect of diagnostic score reporting following a structured clinical assessment of dental hygiene student performance

Alix Clarke, Hollis Lai, Alexandra DE Sheppard, Minn N Yoon
PMCID: PMC7906121  PMID: 33643416

Abstract

Background

Diagnostic score reporting is one method of providing feedback to all students following a structured clinical assessment but its effect on learning has not been studied. The objective of this study was to assess the impact of this feedback on student reflection and performance following a dental hygiene assessment.

Methods

In 2016, dental hygiene students at the University of Alberta participated in a mock structured clinical assessment during which they were randomly assigned to receive a diagnostic score report (intervention group) or an overall percentage grade of performance (control group). The students later reflected upon their performance and took their regularly scheduled structured clinical assessment. Reflections underwent content analysis by diagnostic domains (eliciting essential information, effective communication, client-centred care, and interpreting findings). Results were analysed for group differences.

Results

Students performed best on eliciting essential information (92%) and poorest on interpreting findings (42%). The intervention group was more likely to view interpreting findings as a weakness, p = 0.007, while the control group was more likely to view eliciting essential information as a weakness, p = 0.04. No differences were found on the actual assessment scores, p > 0.05.

Discussion

Students who received diagnostic score reporting appeared to reflect more accurately upon their weaknesses. However, this knowledge did not translate into improved performance. Modifications and enhancements to the report may be necessary before an effect on performance will be seen.

Conclusion

Diagnostic score reporting is a promising feedback method that may aid student reflection. More research is needed to determine if these reports can improve performance.

Keywords: dental education, dental hygiene, diagnostic score reporting, feedback (learning), formative feedback, objective structured clinical examination (OSCE)


PRACTICAL IMPLICATIONS OF THIS RESEARCH

  • The ability to accurately self-assess is an essential skill for practising dental hygienists.

  • Diagnostic score reporting may help develop this skill during dental hygiene education so it can carry over into professional practice.

INTRODUCTION

This article is 1 of 2 papers published in this issue that report on the development, implementation, and evaluation of a diagnostic score reporting framework for structured clinical assessments in dental hygiene. A literature review describes an evidence-based methodology for developing valid reports that summarize student performance (based on diagnostic domains) and for implementing that framework within a dental hygiene skills-based assessment.1 This article reports the evaluation component, specifically how students responded to the feedback, as feedback must be both valid and useful to be considered of quality.

Feedback is considered an essential pedagogical tool in higher education.2, 3 While assessments evaluate student performance, feedback provides students with detailed information to improve future performances.2, 4, 5 Many educators believe students must receive feedback throughout their education for effective learning to take place.

Kolb6 describes learning as a cycle. The process begins with experience, which leads to reflection, observations of others, analysis, conceptualization, and finally modified behaviours. Feedback can play an integral role in facilitating this cycle.3,7,8 Feedback encourages individuals to use their experiences to make improvements, by identifying when performance has deviated from expectations (i.e., starts the cycle).7,8 Feedback also guides students through the learning process by improving the accuracy of self-assessments and suggesting where reflection and analysis should be focused (i.e., accelerates the cycle).3,9,10 Reviews have found that externally provided feedback (such as written or verbal information on, or assessment of, a student’s performance or technique) generally leads to improved performance.11-13 However, not all feedback will have a positive impact, with guidelines recommending timely, clear and concise, task-specific information from an authoritative figure, given with the intent to help the student improve.4,5,11

Despite its established importance, feedback is frequently reported as inadequate across the health disciplines.2,14 Structured clinical assessments (SCAs) capture detailed information on students’ knowledge, skills, and abilities (i.e., their competence), presenting key opportunities to provide feedback.14-17 The most common SCA is the multistation objective structured clinical examination (OSCE) seen in medical education18,19; single-client assessments, which often focus on interpersonal communication, are also common in nursing and dental hygiene.20,21 These assessments are typically high stakes, and they attempt to mirror authentic clinical practice in a standardized manner, such as by using trained actors and predetermined grading checklists.18-21

Research suggests that the majority of students will review feedback offered following an SCA.22 However, feedback is often limited in these types of assessments. Traditional methods, such as providing the student with their results by test item, may be precluded by concerns over test security.23,24 Other methods, such as having the examiner provide immediate verbal or written comments within the context of the assessment, significantly increase the time demands placed on instructors and often lead to rushed, incomplete feedback.14,25 A lack of instructor time is frequently cited as a key issue precluding the provision of feedback.9,26,27

Diagnostic score reporting (DSR) is a possible means of providing feedback to students following an SCA. DSR provides test-takers with information on global performance, performance by domains, relative standings to professional standards and/or peers, and specific suggestions for improving performance.28-30 Domains reflect the specific areas of knowledge, skill or ability that the examination intends to capture, defined by professional standards and competencies, so that student strengths and weaknesses can be readily identified.28,29,31 Domain scores are determined from a subset of relevant test items and provide additional information to the student beyond a single summary score, including information on how to improve within those areas.28,32,33 DSR does not reveal the actual test items to students, and the content of the reports can be pre-established and delivered efficiently online, overcoming the major barriers to feedback for SCAs. DSR has been largely confined to nationwide testing programs, and much of the literature has focused on reporting features such as usability and interpretability.28,29,31-34 The effect of DSR on student outcomes has not been well studied.

The aim of this study was to assess the impact of DSR following an SCA on student-level outcomes, specifically reflection and performance.

METHODS

Ethics approval for this study was obtained from the University of Alberta Research Ethics Office (Pro00062297).

Our companion paper describes the literature-based development and validation process for implementing DSR within a dental hygiene SCA at the University of Alberta in 2016.1

The dental hygiene structured clinical assessment

The dental hygiene SCA was a comprehensive single-client assessment requiring students to establish a rapport, conduct a full health and dental history, and identify any risk factors contraindicating or requiring modifications to dental hygiene therapy.35 The client was portrayed by a standardized patient (a trained actor), who received prior information on the client’s demographics, medical health and background, and oral health status and beliefs. The SCA was graded by clinical instructors using a checklist of observable items, such as questions the student must ask and conclusions they should reach during the encounter (each marked yes or no), together with rating scales to assess global skills such as organization and communication. Five clinical instructors each marked an average of 8 students. No formal calibration was conducted, although instructors had previous SCA experience and met with the clinical professor before and after the assessment to familiarize themselves with the checklist and address any questions or concerns.

The diagnostic score report

DSR for the dental hygiene SCA included overall scores on the assessment and a breakdown of scores by 4 skill-based domains: 1) effective communication; 2) client-centred care; 3) eliciting essential information; and 4) interpreting findings. The 4 domains were established using the Canadian dental hygiene entry-to-practice competencies36 for guidance. Test items were then mapped to an appropriate domain so students could receive a domain score. The reports also contained peer comparisons and information on how to improve performance within each of the 4 skills. Careful consideration was given to the reporting features (e.g., esthetics and language) in keeping with best practice recommendations.1 Reports were provided to students through an online portal.

An assessment blueprint, which shows how test items and competencies were mapped to domains, as well as screenshots of the actual report, is presented in the literature review.1
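To make the blueprint idea concrete, the sketch below shows, in Python, one way domain scores could be derived from a mapping of checklist items to diagnostic domains. The item IDs, the mapping, and the graded responses are hypothetical illustrations only; the actual blueprint and scoring are described in the companion paper.1

```python
# Minimal sketch (not the study's actual blueprint): deriving domain scores
# from a mapping of checklist items to diagnostic domains. Item IDs, the
# mapping, and the graded responses below are hypothetical.

# Blueprint: each checklist item is assigned to a diagnostic domain.
blueprint = {
    "item_01": "eliciting essential information",
    "item_02": "effective communication",
    "item_03": "client-centred care",
    "item_04": "interpreting findings",
    "item_05": "interpreting findings",
}

# One student's graded checklist (1 = yes/credit awarded, 0 = no).
responses = {"item_01": 1, "item_02": 1, "item_03": 1, "item_04": 0, "item_05": 1}

def domain_scores(blueprint, responses):
    """Percentage of checklist items achieved within each domain."""
    totals, earned = {}, {}
    for item, domain in blueprint.items():
        totals[domain] = totals.get(domain, 0) + 1
        earned[domain] = earned.get(domain, 0) + responses.get(item, 0)
    return {d: round(100 * earned[d] / totals[d], 1) for d in totals}

print(domain_scores(blueprint, responses))
# e.g. {'eliciting essential information': 100.0, ..., 'interpreting findings': 50.0}
```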

Study design

An experimental design was used to evaluate the effect of DSR on student-level outcomes. Thirty-nine dental hygiene students scheduled to take the SCA in 2016 were invited to participate in a mock SCA, a practice assessment prior to their final SCA, where they were randomly assigned to either the intervention or control group. The intervention group received DSR after the mock SCA, while the control group received one overall percentage grade of performance. After receiving their results, all students were asked to reflect upon their mock SCA performance and later completed their regularly scheduled SCA. Thus the effect of DSR on reflection and performance could be ascertained. Figure 1 illustrates this design.

The mock structured clinical assessment

The mock SCA covered the same diagnostic domains as the actual SCA, but with different content. The client’s demographics, health history, and oral health perceptions and concerns were thus unique, but the skills required for the assessment were identical. Two content experts (AC & AS) developed the mock SCA using subject matter expertise and course resources such as the clinic manual and course textbook.37 An assessment blueprint was established linking the test items to the diagnostic domains, and these links were further validated by clinical instructors using a modified Delphi approach to reach expert consensus.38

The mock SCA ran identically to the regularly scheduled SCA (as described above), with standardized patients given their client history prior to the assessment, and 5 clinical instructors grading using a predetermined checklist (although standardized clients and instructors were not necessarily the same for each SCA).

Figure 1. Experimental design for evaluation of student-level outcomes

Experimental procedure

The mock SCA took place 12 days before the actual SCA. Both students and graders were blinded to group assignment (DSR or control), and verbal feedback from clinical instructors was prohibited. One day after the mock SCA, students received an email link to either their DSR (intervention group) or their overall percentage of performance (control group). Five days after the mock SCA, students were prompted to reflect upon their performance. Specifically, students were asked to describe what they did well during the mock SCA, recognizing their strengths, and what they could improve upon, identifying their weaknesses. Reflections were coded for their quality and content. The regularly scheduled SCA took place one week later and results were collected for analysis. All students received DSR for the actual SCA once the study data were collected.

Data coding and analysis

All data analysis was conducted using Stata 14.39 Descriptive statistics were reported as means or percentages, where appropriate. For inferential statistics, equivalent non-parametric tests were used in place of parametric tests when assumptions were violated.

Analysis of performance

The SCA results were analysed for group differences using linear regression controlling for mock SCA results, according to best practices in test-retest designs.40
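As an illustration only, the sketch below shows a baseline-adjusted comparison of this kind in Python/statsmodels (regressing actual SCA scores on group while controlling for mock SCA scores). The study itself used Stata 14; the variable names and values here are hypothetical.

```python
# Illustrative baseline-adjusted group comparison (ANCOVA-style linear
# regression) in Python/statsmodels; the study used Stata 14. Scores and
# column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "actual_score": [82.0, 79.5, 88.0, 84.5, 81.0, 86.0],  # actual SCA (%)
    "mock_score":   [74.0, 70.5, 80.0, 76.0, 73.0, 78.5],  # mock SCA (%)
    "dsr_group":    [1, 0, 1, 0, 1, 0],                    # 1 = DSR, 0 = control
})

# Regress actual SCA scores on group while controlling for mock SCA scores;
# the coefficient on dsr_group estimates the adjusted group difference.
model = smf.ols("actual_score ~ dsr_group + mock_score", data=df).fit()
print(model.summary())
```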

Analysis of reflection quality

To determine the quality of the students’ reflections, a grading rubric was developed using the University of Alberta Health Sciences Education and Research Commons (HSERC) Interprofessional Reflection Guide.41 Through an iterative process, the rubric was adjusted until 2 trained raters (AC & MY) could reliably evaluate the reflections; raters were blinded to group assignment. The final rubric dichotomized reflective statements as either low or high quality (Appendix A). Exact inter-rater agreement was 84% with a Cohen’s kappa of 0.64, indicating substantial agreement,42 and any remaining discrepancies were reviewed and coded via consensus. Results were analysed using independent t-tests.
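For readers unfamiliar with these steps, the sketch below illustrates them in Python: computing Cohen’s kappa for two raters’ dichotomous quality codes, then comparing mean reflection quality between groups with an independent t-test. The ratings shown are invented; the study’s analyses were run in Stata.

```python
# Illustrative sketch (Python, not the authors' Stata code): inter-rater
# agreement via Cohen's kappa, then an independent t-test comparing mean
# reflection quality between groups. All values are invented.
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Two raters' dichotomous quality codes (1 = high quality, 0 = low quality).
rater_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"Cohen's kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")

# Mean reflection-quality score per student, split by feedback condition.
control_quality = [0.50, 0.25, 0.60, 0.40, 0.45]
dsr_quality     = [0.35, 0.40, 0.30, 0.45, 0.30]
t, p = stats.ttest_ind(control_quality, dsr_quality)
print(f"t = {t:.2f}, p = {p:.3f}")
```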

Analysis of reflection content

Each reflection underwent content analysis.43 The units of analysis were the diagnostic domains (effective communication, client-centred care, eliciting essential information, and interpreting findings). The rubric for coding reflective statements into the appropriate domains was based on the assessment blueprint (Appendix B). One statement could represent multiple domains, and an “other” category captured reflections that did not fit within the domains. Two blinded researchers (AC & MY) independently coded each reflective statement. Exact inter-rater agreement was 70%, with a Cohen’s kappa of 0.64, again indicating substantial agreement. Discrepancies were reviewed and coded via consensus. Reflection content was analysed using Poisson regression.
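The Poisson model can be illustrated with a small Python/statsmodels sketch: the number of reflective comments a student made within one domain is regressed on group membership, and the exponentiated group coefficient gives the rate ratio reported in the Results. Counts and variable names below are hypothetical, and the study’s own analysis was run in Stata.

```python
# Illustrative Poisson regression (Python/statsmodels rather than the study's
# Stata workflow). All counts and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # Number of "weakness" comments each student made about one domain
    "n_comments": [0, 1, 0, 2, 1, 0, 1, 2, 3, 1],
    # 1 = received the diagnostic score report, 0 = control
    "dsr_group":  [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

# Model the comment count as a function of group membership.
model = smf.poisson("n_comments ~ dsr_group", data=df).fit()

# The exponentiated group coefficient is the rate ratio (DSR vs control),
# the quantity reported in the Results (e.g., 7.65 for interpreting findings).
print(np.exp(model.params["dsr_group"]))
```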

Table 1. Structured clinical assessment results: Mean % (SD)

                                    Control group    DSR group        Total
                                    (n = 20)         (n = 18)         (N = 38)
Mock SCA
  Total                             73.67 (7.69)     76.30 (7.89)     74.92 (7.80)
  Effective communication           75.28 (13.55)    79.01 (10.18)    77.05 (12.06)
  Client-centred care               76.00 (10.01)    76.67 (10.54)    76.32 (10.13)
  Eliciting essential information   91.67 (9.45)     92.59 (10.08)    92.11 (9.63)
  Interpreting findings             37.86 (10.65)    46.83 (22.35)    42.11 (17.56)
Actual SCA
  Total                             82.96 (6.60)     82.20 (7.69)     82.60 (7.05)
  Effective communication           88.16 (10.23)    82.16 (9.40)     85.32 (10.18)
  Client-centred care               83.00 (10.81)    83.33 (9.70)     83.16 (10.16)
  Eliciting essential information   92.31 (8.65)     93.59 (8.02)     92.91 (8.27)
  Interpreting findings             45.00 (26.72)    52.08 (25.36)    48.36 (25.36)

Note: No differences significant at p < 0.05.

RESULTS

Thirty-eight of the 39 students (97%) participated in the mock SCA, and 37 (95%) submitted a reflection.

The effect of diagnostic score reporting on performance

The results of the mock SCA and actual SCA are presented in Table 1. The average score on the mock SCA was 75%. Students scored best on the domain eliciting essential information, averaging 92%, and worst on interpreting findings, averaging 42%. No significant differences were found between the control group and the DSR group on the mock SCA, p > 0.05.

The average score on the actual SCA was 83%. Similar trends were found for the domain scores, with students performing best on eliciting essential information (93%) and worst on interpreting findings (48%). Controlling for mock SCA scores, there was no significant difference in overall SCA scores between the DSR group and the control group, p > 0.05. There was a non-significant trend for the DSR group to perform better than the control group on interpreting findings, by an average of 10%, p = 0.24. There was a borderline significant difference in effective communication scores, p = 0.05, where the DSR group performed worse than the control group by 7%. Overall, there was limited evidence that DSR improved student performance.

The effect of diagnostic score reporting on reflection

The quality of the students’ reflections is described in Table 2. The average score was 0.40, falling below the midpoint of the scoring rubric, which ranged from 0 to 1. No significant differences between the DSR and control groups were found for reflection quality overall or by question (i.e., strengths vs weaknesses), p > 0.05.

The results of the content analysis are presented in Table 3. The students’ reflections fit well within the 4 diagnostic domains, with only 7% of the comments allocated to the “other” category. The majority of the comments focused on eliciting essential information (33%) and effective communication (29%), with fewer reflective statements on client-centred care (18%) and interpreting findings (13%). The 2 prompts on strengths and weaknesses were analysed separately for group differences.

Table 2. Quality of student reflections: Mean (SD)

                Control          DSR              Total
                (n = 20)         (n = 17)         (N = 37)
Strengths       0.50 (0.43)      0.37 (0.45)      0.44 (0.44)
Improvements    0.38 (0.46)      0.34 (0.42)      0.36 (0.44)
Total           0.44 (0.34)      0.36 (0.27)      0.40 (0.31)

Note: No differences significant at p < 0.05.

Table 3. Content of student reflections: Frequency (%)

                 Effective        Client-centred   Eliciting essential   Interpreting    Other
                 communication    care             information           findings
Strengths
  Control        11 (22%)         8 (16%)          21 (40%)              6 (12%)         4 (8%)
  DSR            13 (35%)         7 (19%)          14 (38%)              1 (3%)          2 (5%)
  Combined       24 (28%)         15 (17%)         35 (40%)              7 (8%)          6 (7%)
Weaknesses
  Control        15 (35%)         8 (19%)          15 (35%)a             2 (5%)a         3 (7%)
  DSR            8 (24%)          7 (21%)          4 (12%)a              13 (38%)a       2 (6%)
  Combined       23 (30%)         15 (19%)         19 (25%)              15 (19%)        5 (6%)
Total
  Control        26 (28%)         16 (17%)         36 (39%)              8 (9%)          7 (8%)
  DSR            21 (30%)         14 (20%)         18 (25%)              14 (20%)        4 (6%)
  Combined       47 (29%)         30 (18%)         54 (33%)              22 (13%)        11 (7%)

a Significant at p < 0.05.

Percentages reflect row totals and may not sum to 100% due to rounding.

Significant differences were found in the identification of weaknesses. The DSR group made, on average, 7.65 times as many comments on interpreting findings as the control group, p = 0.007, and 0.31 times as many (i.e., 70% fewer) comments on eliciting essential information, p = 0.04. Figure 2 illustrates that students in both groups performed best on the domain eliciting essential information and worst on interpreting findings. The DSR group therefore appeared to reflect more accurately upon their weaknesses.

In regard to identifying strengths, the DSR group was more likely to reflect upon eliciting essential information as a strength compared to the control group (35% vs 22%), and only 1 student in the DSR group reflected upon interpreting findings as a strength compared to 6 students in the control group (3% vs 12%). While these differences were not significant, p > 0.05, these trends support the claim that DSR resulted in more accurate self-assessments.

Figure 2. Mock structured clinical assessment percentages by domain

DISCUSSION

DSR is a viable framework for providing feedback to all students following an SCA. DSR describes test results by the underlying domains the test intends to measure and includes resources for making individual-level improvements.28,29,32,33 Despite the potential of DSR, its effect on student-level outcomes has not been well investigated.

Reflection is a key component of learning.6,8,44 Through the process of experiencing, reflecting, thinking, and acting,6 each SCA provides a learning opportunity that could improve future clinical practice. Accurate self-assessment aids this process by helping students notice when their performance needs improvement, prompting the learning cycle to start,7,8 and may speed up learning by directing students to focus on key issues, eliminating uncertainty or misdirection.3 The ability to correctly identify problem areas is one important element of reflection-in-action, a lifelong skill essential for health care professionals.8 Feedback is believed to facilitate this type of reflection,9,45 yet feedback is often limited following an SCA because of time constraints and concerns over test security. DSR overcomes these issues, and in this study it appeared to steer students’ reflections in a more accurate direction.

Nevertheless, this study did not find that DSR facilitated improved performance. While students were better able to identify their strengths and weaknesses, this awareness did not translate into a behaviour change. These findings suggest a breakdown of the learning cycle between the stages of reflection and action. Ultimately, the goal of feedback is to improve performance,2,5,8 and reasons for DSR not achieving this goal should be examined.

One possibility is that DSR may need to be provided consistently over a longer time span before it will have any impact. A review of feedback and physician performance showed that studies with longer durations were more likely to find significant effects.11 Similarly, a review of audit and feedback found that feedback was more effective when provided more than once.12 The current study took place over only a few weeks and provided DSR a single time.

Another explanation could be that simply providing resources to students is insufficient to encourage meaningful interactions with that information. Even if students are aware of their weaknesses, they may lack the motivation to improve. Characteristics such as confidence, self-esteem, self-efficacy, personal beliefs, and intrinsic interests can all affect a student’s ability to self-regulate and become accountable for their own learning.46-48 The design of this study made it difficult to determine exactly how students interacted with their feedback, and more qualitative research in this area may help pinpoint key motivational factors.

Finally, performance may not have improved because of issues with the feedback itself. Quality feedback should be task specific,2-5,9,27 and the information provided through DSR may not have been specific enough to alter behaviour. Providing students with the actual test items would be more precise, but for high-stakes SCAs, where test security is a chief concern, this option is not feasible. When actual test items cannot be revealed, providing example test items is a practical alternative.32 Other options include providing web links to specific sections of textbooks or other online documents,28,49 or video-based feedback where students view their own performance or exemplars of professional dental hygienists interacting with clients.3,50,51 The degree of report individualization could also be improved.28,33,49 Different reporting components could be displayed based on different SCA checklist response patterns or different levels of achievement. Low achievers might benefit from information on how to interpret their scores, while advanced students might benefit from supplemental learning activities.49 Online DSR can be expanded and modified to better encourage student-level improvements.

A limitation of this study was its small sample size, which limited statistical power and precluded more in-depth analyses. Results may not be generalizable to other health professions: this research focused on a select dental hygiene student population at a single school within a single term. Furthermore, each dental hygiene student interacted with only a single standardized patient and was graded by one clinical instructor (as compared to an OSCE, which uses multiple stations and graders); while the reliability of examiner grading was outside the scope of this project, it may have influenced the results. This project attempted to mitigate this issue by randomizing students (and thus their clients and instructors) to the control and intervention groups. Finally, there may also have been contamination between the intervention and control groups, with score reports and information on how to improve shared between classmates. A next step in this area of research would be to replicate this study with a larger, more diverse group of students.

CONCLUSION

Providing DSR to students following a dental hygiene SCA resulted in more accurate self-assessments but did not improve performance. Online DSR offers a promising feedback framework and enhancing the reports may facilitate behaviour change. Suggestions include providing links to relevant references, incorporating video feedback, and developing more personalized/individualized reports.

CONFLICT OF INTEREST

The authors declare they have no financial, economic or professional interests that may have influenced the design, execution or presentation of this scholarly work.

APPENDIX

APPENDIX A: REFLECTION QUALITY GRADING RUBRIC
APPENDIX B: RUBRIC FOR CODING REFLECTION CONTENT

Acknowledgments

This project was supported by CIHR’s Frederick Banting and Charles Best Canada Graduate Scholarship, and the University of Alberta’s Educational Research and Scholarship Fund.

Footnotes

CDHA Research Agenda category: capacity building of the profession

REFERENCES

  • 1. Clarke A, Lai H, Sheppard ADE, Yoon MN Development of diagnostic score reporting for a dental hygiene structured clinical assessment Can J Dent Hyg 2021;55(1):48-56 [PMC free article] [PubMed] [Google Scholar]
  • 2. Branch WT, Paranjape A Feedback and reflection: Teaching methods for clinical settings Acad Med 2002;77(12):1185–1188 [DOI] [PubMed] [Google Scholar]
  • 3. Sadler DR Beyond feedback: Developing student capability in complex appraisal Assessment & Evaluation in Higher Education 2010;35(5):535–550 [Google Scholar]
  • 4. Ende J Feedback in clinical medical education JAMA 1983;250(6):777–781 [PubMed] [Google Scholar]
  • 5. Van de Ridder JMM, Stokking KM, McGaghie WC, ten Cate OTJ What is feedback in clinical education? Med Educ 2008;42(2):189–197 [DOI] [PubMed] [Google Scholar]
  • 6. Kolb D. The process of experiential learning. In: Experiential learning: Experience as the source of learning and development. 2nd ed. Upper Saddle River, NJ: Pearson Education; 2015. [Google Scholar]
  • 7. Torbert WR. The interplay of feedback, attention, and consciousness. In: Learning from experience: Toward consciousness. New York, NY: Columbia University Press; 1972. [Google Scholar]
  • 8. Sandars J The use of reflection in medical education: AMEE Guide No 44 Med Teach 2009;31(8):685–95 [DOI] [PubMed] [Google Scholar]
  • 9. Archer JC State of the science in health professional education: effective feedback Med Educ 2010;44(1):101–108 [DOI] [PubMed] [Google Scholar]
  • 10. Clark S, Duggins A.What research guides our beliefs about professional learning? In: Using quality feedback to guide professional learning: A framework for instructional leaders.Thousand Oaks, CA:Corwin;2016. [Google Scholar]
  • 11. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No 7 Med Teach 2006;28(2):117–128 [DOI] [PubMed] [Google Scholar]
  • 12. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: Effects on professional practice and healthcare outcomes Cochrane Database Syst Rev 2012;(6):CD000259 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Kluger AN, DeNisi A The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory Psychol Bull 1996;119(2):254–284 [Google Scholar]
  • 14. Harrison CJ, Molyneux AJ, Blackwell S, Wass VJ How we give personalised audio feedback after summative OSCEs Med Teach 2015;37(4):323–326 [DOI] [PubMed] [Google Scholar]
  • 15. Epstein RM Assessment in medical education N Engl J Med 2007;356(4):387–396 [DOI] [PubMed] [Google Scholar]
  • 16. Boursicot K, Etheridge L, Setna Z, et al. Performance in assessment: Consensus statement and recommendations from the Ottawa conference Med Teach 2011;33(5):370–383 [DOI] [PubMed] [Google Scholar]
  • 17. Wardman M, Yorke VC, Hallam JL Evaluation of a multi-methods approach to the collection and dissemination of feedback on OSCE performance in dental education Eur J Dent Educ 2018;22(2):e203–e211 [DOI] [PubMed] [Google Scholar]
  • 18. Harden R What is an OSCE? Med Teach 1988;10(1):19–22 [DOI] [PubMed] [Google Scholar]
  • 19. Newble D Techniques for measuring clinical competence: Objective structured clinical examinations Med Educ 2004;38(2):199–203 [DOI] [PubMed] [Google Scholar]
  • 20. Rushforth HE Objective structured clinical examination (OSCE): Review of literature and implications for nursing education Nurse Educ Today 2007;27(5):481–490 [DOI] [PubMed] [Google Scholar]
  • 21. Blue C Objective structured clinical exams (OSCE): A basis for evaluating dental hygiene students’ interpersonal communication skills Access 2006;20(7):27–31 [Google Scholar]
  • 22. Harrison CJ, Könings KD, Molyneux A, Schuwirth LW, Wass V, van der Vleuten CP Web‐based feedback after summative assessment: How do students engage? Med Educ 2013;47(7):734–744 [DOI] [PubMed] [Google Scholar]
  • 23. Gotzmann A, De Champlain A, Homayra F, et al. Cheating in OSCEs: The impact of simulated security breaches on OSCE performance Teach Learn Med 2017;29(1):52–58 [DOI] [PubMed] [Google Scholar]
  • 24. Bulut O, Cutumisu M, Aquilina AM, Singh D Effects of digital score reporting and feedback on students’ learning in higher education Front Educ 2019;4(65) [Google Scholar]
  • 25. Hollingsworth MA, Richards BF, Frye AW Description of observer feedback in an objective structured clinical examination and effects on examinees Teach Learn Med 1994;6(1):49–53 [Google Scholar]
  • 26. White CB, Ross PT, Gruppen LD Remediating students’ failed OSCE performances at one school: The effects of self-assessment, reflection, and feedback Acad Med 2009;84(5):651–654 [DOI] [PubMed] [Google Scholar]
  • 27. Henderson M, Ryan T, Phillips M The challenges of feedback in higher education Assessment & Evaluation in Higher Education 2019;44(8):1237–1252 [Google Scholar]
  • 28. Goodman DP, Hambleton RK Student test score reports and interpretive guides: Review of current practices and suggestions for future research Applied Measurement in Education 2004;17(2):145–220 [Google Scholar]
  • 29. Roberts MR, Gierl MJ Developing score reports for cognitive diagnostic assessments Educational Measurement: Issues and Practice 2010;29(3):25–38 [Google Scholar]
  • 30. Sinharay S, Puhan G, Haberman SJ Reporting diagnostic scores in educational testing: Temptations, pitfalls, and some solutions Multivariate Behavioral Research 2010;45(3):553–573 [DOI] [PubMed] [Google Scholar]
  • 31. Zapata-Rivera JD, Katz IR Keeping your audience in mind: Applying audience analysis to the design of interactive score reports Assessment in Education: Principles, Policy & Practice 2014;21(4):442–463 [Google Scholar]
  • 32. Zenisky AL, Hambleton RK Developing test score reports that work: The process and best practices for effective communication Educational Measurement: Issues and Practice 2012;31(2):21–26 [Google Scholar]
  • 33. Roberts MR, Gierl MJ.Development of a framework for diagnostic score reporting. Paper presented at the Annual meeting of the American Educational Research Association, San Diego, CA 2009. [Google Scholar]
  • 34. Gotch C, Roberts RR A review of recent research on individual-level score reports Educational Measurement: Issues and Practice 2018;37(3):46–54 [Google Scholar]
  • 35. Pickett FA.Personal, dental, and health histories. In: Darby ML, Walsh MM, eds. Dental hygiene theory and practice. 3rd ed.St. Louis, MO:Saunders Elsevier;2010: 149–79. [Google Scholar]
  • 36.Canadian Dental Hygienists Association (CDHA), Federation of Dental Hygiene Regulatory Authorities (FDHRA), Commission on Dental Accreditation of Canada (CDAC), National Dental Hygiene Certification Board (NDHCB), Dental Hygiene Educators of Canada (DHEC). Entry-to-practice competencies and standards for Canadian dental hygienists.Ottawa, ON:CDHA;2010. [Google Scholar]
  • 37. Darby ML, Walsh MM.Dental hygiene theory and practice. 3rd ed.St. Louis, MO:Saunders Elsevier;2010. [Google Scholar]
  • 38. Hsu C-C, Sandford BA The Delphi technique: Making sense of consensus Practical Assessment, Research & Evaluation 2007;12(10):1–8 [Google Scholar]
  • 39. StataCorp. Stata Statistical Software: Release 14. College Station, TX: StataCorp; 2015 [Google Scholar]
  • 40. Senn S Change from baseline and analysis of covariance revisited Stat Med 2006;25(24):4334–4344 [DOI] [PubMed] [Google Scholar]
  • 41. Health Sciences Education and Research Commons. Interprofessional Reflection Guide. 2016; Available from: http://www.hserc.ualberta.ca/Resources/CurricularResources/InterprofessionalReflectionGuide.aspx
  • 42. Landis JR, Koch GG The measurement of observer agreement for categorical data Biometrics 1977;33(1):159–174 [PubMed] [Google Scholar]
  • 43. Hanson EC Analysing qualitative data In: Successful qualitative health research: A practical introduction New York, NY:Open University Press;2006:137–60 [Google Scholar]
  • 44. Webster M, Remedios L Reflection and feedback during my OSCE: What have I learnt? Physiotherapy 2015;101:e1273 [Google Scholar]
  • 45. Quinton S, Smallbone T Feeding forward: Using feedback to promote student reflection and learning—a teaching model Innovations in Education and Teaching International 2010;47(1):125–135 [Google Scholar]
  • 46. Zimmerman BJ Self-regulated learning and academic achievement: An overview Educational Psychologist 1990;25(1):3–17 [Google Scholar]
  • 47. Nicol DJ, Macfarlane-Dick D Formative assessment and self-regulated learning: A model and seven principles of good feedback practice Studies in Higher Education 2006;31(2):199–218 [Google Scholar]
  • 48. Butler DL, Winne PH Feedback and self-regulated learning: A theoretical synthesis Rev Educ Res 1995;65(3):245–281 [Google Scholar]
  • 49. Trout DL, Hyde E.Developing score reports for statewide assessments that are valued and used: Feedback from K-12 stakeholders. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA 2006. [Google Scholar]
  • 50. Fukkink RG, Trienekens N, Kramer LJ Video feedback in education and training: Putting learning in the picture Educ Psychol Rev 2011;23(1):45–63 [Google Scholar]
  • 51. Massey D, Byrne J, Higgins N, Weeks B, Shuker M-A, Coyne E, et al. Enhancing OSCE preparedness with video exemplars in undergraduate nursing students: A mixed method study Nurse Educ Today 2017;54 [DOI] [PubMed] [Google Scholar]
