The Saudi Dental Journal. 2017 Aug 2;29(4):135–139. doi: 10.1016/j.sdentj.2017.07.001

Reliability of rubrics in the assessment of orthodontic oral presentation

Naif A Bindayel
PMCID: PMC5634801  PMID: 29033521

Abstract

Aims

The aim of this study was to evaluate the reliability of using rubrics in dental education, specifically for the assessment of undergraduate students’ orthodontic oral presentations.

Methods

A rubric-based case presentation assessment form was introduced to three contributing instructors. In each instructor’s group, the course director, along with the assigned instructor, assessed eight randomly selected fourth-year male dental students using the same assessment form (24 students in total). The two final scores given by the assigned instructor and the course director were then gathered for each student. The data of this prospective comparative study were then analyzed using a paired t-test to detect any significant differences between the scoring of the course director and each instructor in each group.

Results

No statistically significant differences in grading were detected between the instructors and the course director. Furthermore, the data showed no significant correlations between the students’ final course grades and the case presentation grades awarded by the instructors or the course director.

Conclusion

Despite the elaborate nature of the routine orthodontic case presentation, the use of rubrics was found to be a promising and reliable assessment element.

Keywords: Assessment, Education, Orthodontic, Oral presentation, Reliability, Rubrics

1. Introduction

Teaching and healthcare practice are interrelated, because the service delivery system requires the attendance of personnel with different levels of knowledge and experience. Teaching in the clinical environment is defined as teaching and learning focused on, and usually directly involving, patients and their problems (Spencer, 2003). It is interesting to note that the word ‘doctor’ derives from the Latin “docere”, which means “to teach” (Shapiro, 2001).

Whether in healthcare profession teaching or elsewhere, the process of learning and student comprehension is complex. Many frameworks have classically proposed categories of thinking and behavior believed to be important to the learning process. Bloom’s taxonomy was among the earliest and focused on the knowledge (cognitive) domain (Bloom and Krathwohl, 1956); later taxonomies addressed the attitude (affective) domain (Krathwohl and Bloom, 1964) and the skills (psychomotor) domain (Simpson, 1972). Curry’s Onion Model of learning further described the different aspects of the learner (i.e., as layers of an onion) and how they learn (Curry, 1983). Each style is characterized by specific features, including the ability to acquire knowledge, to sort and store information, and the learner’s interaction with peers and society. Any assessment designed to test the learner should consider these styles.

Based on the various domains incorporated into the learning system, an ideal process of student assessment should cover the attitude, skills, and knowledge domains. This can be a complex task; however, awareness of the importance of these aspects in the assessment process is essential.

Assessment can be formative or summative. Formative assessment is essential for monitoring performance during a program of study, while summative assessment is usually done at the end of a program, as in competency and licensing examinations. Whether formative or summative, methods of assessment vary and require careful planning: any chosen method must reflect the nature of the acquired knowledge being tested.

Many evaluation models have been proposed based on each learning domain. The objectives approach (Tyler, 1949) provides consistency between goals, experiences, and outcomes; it includes a pretest and posttest design from which students’ progress can be measured. The Goal-Free Assessment model (Scriven, 1991) advocates the use of an external evaluator who is unaware of the stated goals and objectives, so that the value of a program is determined by its outcomes and their quality. In contrast, the CIPP model (context, input, process, and product) gathers assessment information from a variety of sources to provide a basis for making better decisions (Stufflebeam, 2003). Other models have also been proposed, such as the Hierarchy of Evaluation model (Kirkpatrick, 1979) and the Naturalistic model (Guba, 1978). An additional assessment method found to be reliable in clinical settings for healthcare professionals is the RIME method (Pangaro, 1999), which defines four stages of student development, from reporter to interpreter, manager, and finally educator, leading to professionalism in medicine.

The assessment is the curriculum, as far as the students are concerned (Ramsden, 1992). Whether or not any of these assessment models is adopted, the assessment process must be undertaken properly to reflect students’ actual learning. One of the tools used in assessment nowadays is the rubric. A rubric can be defined as a scoring guide or scale consisting of a set of criteria that describe the expectations being assessed/evaluated, together with descriptions of levels of quality, used to evaluate student work or to guide students toward desired performance levels.

The use of rubrics has many advantages, such as enhancing the quality of direct instruction, saving the time spent explaining the assignment, increasing the efficiency of marking (Hancock and Brundage, 2010), and producing grading calibration (Turbow et al., 2016). Rubrics improve the quality of students’ project outcomes by providing clear guidelines regarding the expected criteria. They simply fulfill the need to shift assessment methods from subjective to fairly objective.

Rubrics are mainly of two types, analytic and holistic. The analytic type is a more detailed version that identifies and assesses the individual components of a completed project, while the holistic type assesses student work as a whole. There are also subtypes of rubrics, such as the weighting rubric: an analytic rubric in which certain concepts are weighted more heavily than others (Dong et al., 2011).

The process of formulating rubrics can be difficult at first; it therefore requires support, time, and practice. It consists of three major steps: choosing the evaluation criteria and the concept being taught, organizing these criteria, and finally developing a grid and inserting the criteria.
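To make these steps concrete, the following is a minimal Python sketch of such a grid, which also illustrates the weighting idea mentioned above. The criteria, weights, and level descriptions are hypothetical examples, not taken from any published rubric.

```python
# Minimal sketch of a weighted analytic rubric grid.
# Criteria, weights, and level descriptions are hypothetical examples.

RUBRIC = {
    # criterion: (weight, {performance level: description})
    "Completeness":  (1.0, {1: "major gaps", 2: "minor gaps", 3: "complete"}),
    "Accuracy":      (1.5, {1: "frequent errors", 2: "occasional errors", 3: "accurate"}),
    "Understanding": (2.0, {1: "superficial", 2: "sound", 3: "thorough, applied"}),
}

def weighted_score(levels: dict) -> float:
    """Weighted mean of the assigned levels, staying on the 1-3 scale."""
    total = sum(weight * levels[criterion]
                for criterion, (weight, _) in RUBRIC.items())
    return total / sum(weight for weight, _ in RUBRIC.values())

# Example: a student strong on understanding, weaker on accuracy.
print(round(weighted_score({"Completeness": 3, "Accuracy": 2, "Understanding": 3}), 2))  # 2.67
```

Assigning unequal weights, as here, is what distinguishes a weighting rubric from a plain analytic one, in which every criterion counts equally.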

In the last decade, rubrics have been incorporated into the teaching curricula of many fields. Recent literature shows their wide applicability and acceptance in the teaching of medicine (Baldwin et al., 2009, D'Antoni et al., 2009), nursing (Daggett, 2008), and pharmacy (Blommel and Abate, 2007). In dentistry, an assessment rubric was used with third-year dental students in developing a course toward mastering sound communication skills with patients (White et al., 2008). A scoring rubric was also implemented to evaluate dental student portfolios as a means of student competency assessment (Gadbury-Amyot et al., 2003).

Oral case presentation is typically included in most healthcare courses. The task initiates a self-learning process and assesses clinical reasoning competency (Wiese et al., 2002), and it therefore requires sound assessment tools that reflect the student’s comprehension. Peer assessment is widely used in this field as an effective formative assessment tool (Speyer et al., 2011); other methods include the use of rating scales (Lewin et al., 2013). Whatever assessment method is used, objective reliability remains an important requirement. Although proposals to control such variability were introduced earlier (Kroboth et al., 1992), continued effort is needed to ensure the consistency and reproducibility of this process in the teaching and assessment of each discipline.

Oral case presentation is a vital component of teaching in the discipline of orthodontics. Because of the multiple elements required in its case presentations, the ambiguous level of knowledge display expected, and the increasing number of students requiring multiple assessors, a form of rubric is needed to control the assessment process. The primary aim of this study was to evaluate the reliability of using such a method in dental education, specifically for the assessment of undergraduate students’ orthodontic oral case presentations. As a secondary aim, potential correlations between the instructors’/course director’s grading and the students’ final course grades were investigated.

2. Materials and methods

During a series of orthodontic case presentation sessions held over three weeks, a new rubric-based case presentation assessment form (Fig. 1) was designed and introduced to three contributing instructors (Instructors A, B, and C). The form included three major categories concerning the quality of records, the accuracy of data, and the display of understanding of the material being presented. Each category was subdivided into two items for ease of grading. A simple grading scale (grid) was displayed at the bottom of the page. Additionally, the form included grading guidelines containing a sample of questions that could be asked during the presentation.

Fig. 1. Rubric-based orthodontic case presentation form used to evaluate students. Note the category distribution, grading guidelines, and the grading scale.
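For illustration, the form’s structure described above can be sketched as follows. The item labels are placeholders, and the one-point-per-item scale (giving a 6.00 maximum) is an assumption consistent with the grade range reported in the Results, not a detail taken from Fig. 1.

```python
# Sketch of the assessment form's structure: three categories, each split
# into two items. Item labels are placeholders; the one-point-per-item scale
# (6.00 maximum) is an assumption, not a detail confirmed by Fig. 1.

FORM = {
    "Quality of records":       ["item 1", "item 2"],
    "Accuracy of data":         ["item 1", "item 2"],
    "Display of understanding": ["item 1", "item 2"],
}

def total_score(marks):
    """Sum the six item marks (each assumed 0-1), giving a grade out of 6.00."""
    return sum(sum(item_marks) for item_marks in marks.values())

example = {category: [1.0, 0.75] for category in FORM}
print(total_score(example))  # 5.25
```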

Prior to the beginning of the case presentation sessions, 5 min were spent with each instructor to introduce the form. The form was also presented and supplied to the studied fourth-year male dental students well before they started preparing their presentations. In each instructor’s group, the course director (C.D.), along with the assigned instructor, assessed the first eight students using the same assessment form (24 students in total). The two final scores given by the assigned instructor and the course director were then gathered for each student. The instructors were blinded to the fact that the course director was also taking part in the assessment process.

The data of this prospective comparative study were then analyzed using a paired t-test to detect any significant differences between the scoring of the course director and each instructor in each group. Furthermore, Pearson’s correlation was applied to test for significant correlations between the instructors’/course director’s grading and the students’ final course grades. Correlations between the grading discrepancies (instructor’s grade − C.D. grade) and the students’ final course grades were also investigated. All statistical analyses were performed using SPSS (Version 16, SPSS Inc., Chicago, IL) as part of an internal course quality assurance initiative.
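To make the analysis pipeline concrete, the following is a minimal Python sketch using scipy in place of SPSS. All score vectors are placeholders for illustration, not the study data.

```python
# Minimal sketch of the statistical comparisons, using scipy in place of
# SPSS. All score vectors below are placeholders, not the study data.
from scipy import stats

# Grades out of 6.00 for the eight students assessed by one instructor/C.D. pair.
cd_scores         = [5.50, 5.80, 5.60, 5.70, 5.40, 5.90, 5.60, 5.75]
instructor_scores = [5.30, 5.60, 5.50, 5.80, 5.20, 5.70, 5.40, 5.60]
final_course      = [88, 92, 85, 95, 80, 93, 86, 90]   # final course grades

# Paired t-test: the same students are scored twice, so the samples are dependent.
t_stat, p_pair = stats.ttest_rel(cd_scores, instructor_scores)

# Pearson correlation between one assessor's grading and the final course grade.
r_cd, p_cd = stats.pearsonr(cd_scores, final_course)

# Correlation between the grading discrepancy (instructor - C.D.) and final grade.
discrepancy = [i - c for i, c in zip(instructor_scores, cd_scores)]
r_disc, p_disc = stats.pearsonr(discrepancy, final_course)

print(f"paired t-test p = {p_pair:.3f}; "
      f"r(C.D., final) = {r_cd:.2f}; r(discrepancy, final) = {r_disc:.2f}")
```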

3. Results

Table 1 shows the mean grades scored by each pair of instructor and C.D. Some discrepancies were found; however, they were minimal and did not reach statistical significance when subjected to paired t-test analysis.

Table 1.

Means of the course director’s and Instructors A, B, and C’s scores, along with p-values for the paired t-test applied to each pair of assessors.

           Assessor         Mean   N  S.D.   Std. Error Mean  p-Value
Pair 1     Course Director  5.656  8  0.229  0.081            0.129
           Instructor A     5.34   8  0.640  0.226
Pair 2     Course Director  5.594  8  0.265  0.094            0.563
           Instructor B     5.656  8  0.265  0.094
Pair 3     Course Director  5.388  8  0.179  0.063            0.208
           Instructor C     5.200  8  0.330  0.117
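As a quick consistency check (illustrative only, not part of the study’s analysis), each standard error in Table 1 follows directly from S.D./√N, with N = 8 students per assessor:

```python
# Consistency check (illustrative only): each standard error in Table 1
# equals S.D. / sqrt(N), with N = 8 students per assessor.
import math

table1 = {  # assessor: (S.D., N), values copied from Table 1
    "Course Director, Pair 1": (0.229, 8),
    "Instructor A":            (0.640, 8),
    "Instructor C":            (0.330, 8),
}
for assessor, (sd, n) in table1.items():
    print(f"{assessor}: SEM = {sd / math.sqrt(n):.3f}")
# Course Director, Pair 1: SEM = 0.081  (matches the tabulated value)
```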

No significant correlations were detected between the instructors’/course director’s grading and the students’ final course grades. Furthermore, the discrepancies between the instructors’ and course director’s grading (instructor’s grade − C.D. grade) showed no significant correlation with the students’ final course grades (p = 0.585).

4. Discussion

The use of rubrics is well documented in the literature, not only for formal course assessment but also in online course evaluation and development (Blood-Siegfried et al., 2008). The strength of this tool has enabled its application in many aspects of teaching. The present study investigated the reliability of its use in assessing students’ orthodontic oral case presentations. Rubric-based methods have been applied in assessing other students’ case and research presentations (in the pharmacy and surgery fields, respectively) and were found to be successful tools (Musial et al., 2007, O'Brien et al., 2008).

The assessment form contained specific criteria listed in three categories, in order to serve both as a guide for instructors’ assessments and as a learning tool for students during their preparations. Therefore, a blank copy was introduced and supplied to the students well before they started preparing their presentations. Generally, students appreciate clear guidelines that help them complete procedures, and repeated use of the rubric provides opportunities for students to achieve competency (Dong et al., 2011).

The results showed no statistically significant differences between the scoring of each instructor and that of the course director, so the form produced appears to be reliable for assessing orthodontic case presentations. The grading process was consistent, which minimizes students’ complaints about inconsistent grading. Kruger and Dunning (1999) demonstrated that poor students tend to overestimate themselves, and vice versa. Using a rubric draws students’ attention to their deficiencies from their own perspective (Kruger and Dunning, 1999), thus minimizing the discomfort and criticism associated with the feedback delivery process (Wood, 2000).

It was hypothesized that better-performing students would display more consistent grading among different assessors. The present data, however, did not support this hypothesis: there was no significant correlation (p = 0.585) between the instructors’/course director’s grading discrepancies (instructor’s grade − C.D. grade) and the students’ final course grades. This is possibly because the criteria listed in the form control the grading process regardless of the student’s overall knowledge and course standing, as reflected by his/her final course grade.

For a rubric to be more efficient, students should sometimes participate in the formulation of their own assessment rubric. This results in a language shared by faculty members and students that clearly represents the expectations of each criterion and standard (Moni et al., 2005). Even though the results indicate that the rubric is reliable from the assessment side, feedback must always be sought from students to aid in upcoming revisions of the rubric. The presented rubric and its wording were found to be fairly well accepted by current students.

The mean grades of the randomly selected students for the oral presentation ranged from 5.20 to 5.66 out of 6.00, indicating that, on average, students performed well in their presentations. This can be attributed to the rubric guidelines being presented early, setting the students’ expectations and guiding them through the preparation process. The supplied rubric served as an instant student feedback tool, and providing effective feedback has been shown to enable students to develop a deeper approach to their learning and improve learning outcomes (Mohanna and Chambers, 2003).

The present study illustrated the applicability and value of using rubrics in orthodontic case presentation assessment, and it should stimulate further studies proposing rubric methods for the teaching of orthodontics. Given the limited sample size, further studies could re-evaluate the reliability of the presented form in different academic settings and with larger samples. Another limitation of the current study, which could be addressed when reproducing its method, is the inclusion of subjects of both genders. Furthermore, the current grading scale of the form (three categories) could be further expanded into four or five assessment rankings.

5. Conclusion

The use of rubrics was found to be a promising and reliable element to include in the assessment of orthodontic courses. This study should stimulate the development of a valid, well-defined orthodontic oral case presentation assessment form that can be tested on a larger group. Such a form could be developed not only for instructors’ use, but also for peer- and self-assessment applications.

Footnotes

Peer review under responsibility of King Saud University.

References

  1. Baldwin S.G., Harik P., Keller L.A., Clauser B.E., Baldwin P., Rebbecchi T.A. Assessing the impact of modifications to the documentation component's scoring rubric and rater training on USMLE integrated clinical encounter scores. Acad. Med. 2009;84(10 Suppl):S97–S100. doi: 10.1097/ACM.0b013e3181b361d4.
  2. Blommel M.L., Abate M.A. A rubric to assess critical literature evaluation skills. Am. J. Pharm. Educ. 2007;71(4):63. doi: 10.5688/aj710463.
  3. Blood-Siegfried J.E., Short N.M., Rapp C.G., Hill E., Talbert S., Skinner J., Campbell A., Goodwin L. A rubric for improving the quality of online courses. Int. J. Nurs. Educ. Scholarsh. 2008;5:34. doi: 10.2202/1548-923X.1648.
  4. Bloom B.S., Krathwohl D.R. Taxonomy of Educational Objectives: Handbook I, The Cognitive Domain. David McKay & Co; New York: 1956.
  5. Curry L. An organisation of learning styles theory and constructs. ERIC Document ED 235 185; 1983.
  6. D'Antoni A.V., Zipp G.P., Olson V.G. Interrater reliability of the mind map assessment rubric in a cohort of medical students. BMC Med. Educ. 2009;9:19. doi: 10.1186/1472-6920-9-19.
  7. Daggett L.M. A rubric for grading or editing student papers. Nurse Educ. 2008;33(2):55–56. doi: 10.1097/01.NNE.0000299506.63023.18.
  8. Dong C., Asadoorian J., Schönwetter D.J., Lavigne S.E. Rubric Development Tools: Dentistry and Dental Hygiene Applications. San Diego Faculty Development Workshops; 2011.
  9. Gadbury-Amyot C.C., Kim J., Palm R.L., Mills G.E., Noble E., Overman P.R. Validity and reliability of portfolio assessment of competency in a baccalaureate dental hygiene program. J. Dent. Educ. 2003;67(9):991–1002.
  10. Guba E.G. Toward a Methodology of Naturalistic Inquiry in Educational Evaluation. Center for the Study of Evaluation; Los Angeles: 1978.
  11. Hancock A.B., Brundage S.B. Formative feedback, rubrics, and assessment of professional competency through a speech-language pathology graduate program. J. Allied Health. 2010;39(2):110–119.
  12. Kirkpatrick D.L. Techniques for evaluating training programs. In: Ely D.P., Plomp T., editors. Classic Writings on Instructional Technology. Libraries Unlimited, Inc.; Englewood: 1979.
  13. Krathwohl D.R., Bloom B.S., Masia B.B. Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook II: Affective Domain. David McKay Company, Inc.; New York: 1964.
  14. Kroboth F.J., Hanusa B.H., Parker S., Coulehan J.L., Kapoor W.N., Brown F.H., Karpf M., Levey G.S. The inter-rater reliability and internal consistency of a clinical evaluation exercise. J. Gen. Intern. Med. 1992;7(2):174–179. doi: 10.1007/BF02598008.
  15. Kruger J., Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. J. Pers. Soc. Psychol. 1999;77(6):1121–1134. doi: 10.1037//0022-3514.77.6.1121.
  16. Lewin L.O., Beraho L., Dolan S., Millstein L., Bowman D. Interrater reliability of an oral case presentation rating tool in a pediatric clerkship. Teach. Learn. Med. 2013;25(1):31–38. doi: 10.1080/10401334.2012.741537.
  17. Mohanna K., Chambers R., Wall D. Teaching Made Easy. Radcliffe Publishing Ltd; Milton Keynes: 2003.
  18. Moni R.W., Beswick E., Moni K.B. Using student feedback to construct an assessment rubric for a concept map in physiology. Adv. Physiol. Educ. 2005;29(4):197–203. doi: 10.1152/advan.00066.2004.
  19. Musial J.L., Rubinfeld I.S., Parker A.O., Reickert C.A., Adams S.A., Rao S., Shepard A.D. Developing a scoring rubric for resident research presentations: a pilot study. J. Surg. Res. 2007;142(2):304–307. doi: 10.1016/j.jss.2007.03.060.
  20. O'Brien C.E., Franks A.M., Stowe C.D. Multiple rubric-based assessments of student case presentations. Am. J. Pharm. Educ. 2008;72(3):58. doi: 10.5688/aj720358.
  21. Pangaro L.A. New vocabulary and other innovations for improving descriptive in-training evaluations. Acad. Med. 1999;74:1203–1207. doi: 10.1097/00001888-199911000-00012.
  22. Ramsden P. Learning to Teach in Higher Education. Routledge; London: 1992.
  23. Scriven M. Evaluation Thesaurus. Sage; Newbury Park, CA: 1991.
  24. Shapiro I. Doctor means teacher. Acad. Med. 2001;76(7):711. doi: 10.1097/00001888-200107000-00013.
  25. Simpson E.J. The classification of educational objectives in the psychomotor domain. The Psychomotor Domain, vol. 3, pp. 43–56. Gryphon House, Inc.; Silver Spring: 1972.
  26. Spencer J. Learning and teaching in the clinical environment. BMJ. 2003;326(7389):591–594. doi: 10.1136/bmj.326.7389.591.
  27. Speyer R., Pilz W., Van Der Kruis J., Brunings J.W. Reliability and validity of student peer assessment in medical education: a systematic review. Med. Teach. 2011;33(11):e572–e585. doi: 10.3109/0142159X.2011.610835.
  28. Stufflebeam D.L. The CIPP model for evaluation. In: Kellaghan T., Stufflebeam D.L., editors. International Handbook of Educational Evaluation. 2003.
  29. Turbow D.J., Werner T.P., Lowe E., Vu H.Q. Norming a written communication rubric in a graduate health science course. J. Allied Health. 2016;45(3):e37–e42.
  30. Tyler R.W. Basic Principles of Curriculum and Instruction. The University of Chicago Press; Chicago: 1949.
  31. White J.G., Kruger C., Snyman W.D. Development and implementation of communication skills in dentistry: an example from South Africa. Eur. J. Dent. Educ. 2008;12(1):29–34. doi: 10.1111/j.1600-0579.2007.00488.x.
  32. Wiese J., Varosy P., Tierney L. Improving oral presentation skills with a clinical reasoning curriculum: a prospective controlled study. Am. J. Med. 2002;112(3):212–218. doi: 10.1016/s0002-9343(01)01085-3.
  33. Wood B.P. Feedback: a key feature of medical training. Radiology. 2000;215(1):17–19. doi: 10.1148/radiology.215.1.r00ap5917.
