Journal of Medical Education and Curricular Development. 2024 Mar 14;11:23821205241239496. doi: 10.1177/23821205241239496

Correlation Between Student Performances on Case-Based Constructed-Response Formative Assessment and Summative Assessment

Deborah Jones, Se-Lim Oh
PMCID: PMC10956135  PMID: 38516552

Abstract

OBJECTIVES

This study aimed to evaluate the impact of formative assessment with case-based constructed-response question (CRQ) formats on student performance on the final summative assessment in the second-year periodontics course.

METHODS

Classroom quizzes with case-based CRQs were implemented as the formative assessment during the course. Each student received feedback on their responses from the course director. After all students (N = 128) took the second-year final examination, the Friedman test was conducted to compare student performances in each assessment over time. The multiple linear regression (MLR) model was used to evaluate the association between the second-year final examination score and plausible predictors—student gender, the second-year formative and midterm examination scores, and time spent on the final examination.

RESULTS

The mean % scores on the formative assessment (51) and the midterm examination (84) were significantly lower than that on the final examination (87) (P < .01). The number of students who failed the final examination (6) was significantly lower than the number who failed the midterm examination (16) (P = .03). The midterm (P < .0001) and formative assessment (P = .0009) scores significantly affected the second-year final examination score, while student gender (P = .59) and time spent on the examination (P = .83) showed no correlation.

CONCLUSION

Within the limitations of the study, student performance on case-based CRQs was correlated with student performance on the summative assessment.

Keywords: formative assessment, summative assessment, constructed-response questions, preclinical education

Introduction

While summative assessment at the end of a course is critical for measuring student learning, 1 predicting student performance on summative assessment is challenging because various known and unknown factors may affect it.2-4 Identifying the deficiencies that significantly affect student performance on summative assessment would help improve academic achievement. Low-stakes formative assessment may therefore be useful for collecting information and feedback from students about the ongoing teaching and learning context 5 and for predicting student performance on high-stakes summative assessment.6,7 However, the validity of scores from such low-stakes assessments is susceptible to examinees' effort because students may not put serious work into ungraded tests. 8 In addition, providing high-quality feedback after formative assessment is challenging, especially for large classes. 9

Classroom quizzes with multiple-choice questions (MCQs) can be implemented during a course as formative assessment prior to the midterm and final examinations. While MCQ tests can be graded quickly and accurately, one of the most common criticisms of MCQs is their inability to assess in-depth comprehension of the subject matter. 10 In fact, most students performed well on the MCQ quizzes in the second-year periodontics course, and their quiz performance did not predict their performance on the summative assessment. In contrast, tests with constructed-response questions (CRQs) are considered superior for measuring critical reasoning skills. 10 While MCQs are a closed format, in which students select the correct answer, CRQs are open ended: students must construct their own answers. CRQs demand a range of skills, from factual recall for fill-in-the-blank or short-answer questions (SAQs) to integrating multiple facts into a rational context for essay-type questions. 11

The most attractive feature of CRQs may be their capacity to evaluate students' problem-solving skills by presenting, in case-based formats, real-life or similar situations that students may encounter in the clinic. 10 Case-based questions are built on condition and/or disease scenarios. 12 Since the Integrated National Board Dental Examination intends to test students' problem-solving skills on various cases, 13 case-based CRQs were incorporated into the preclinical second-year periodontics course to train students' problem-solving skills on simulated cases. However, students had shown poor performance on the case-based CRQs in the second-year periodontics examinations. 3 Their poor performance on the CRQs lowered their scores on final examinations that included MCQs, SAQs, and essay-type questions, and second-year final examination scores had been considerably lower than first-year final examination scores within the same classes. Therefore, two classroom quizzes with case-based CRQs were implemented as formative assessment in preparation for the second-year final examination.

The purpose of this interventional educational action research was to evaluate the impact of formative assessment with case-based CRQ formats on student performance on the final summative assessment in the second-year periodontics course. The null hypothesis was that there is no association between student performances in the implemented CRQ formative assessment and the final examination.

Methods

Participants and study outline

This study was conducted under a deemed exemption from the Institutional Review Board (IRB) at the University of Maryland, Baltimore (HP-00082574). The study was conducted with one cohort via a retrospective longitudinal assessment from September 2022 to May 2023. The second-year periodontics course is a year-long course spanning the fall and spring semesters. The fall semester included 10 lectures and the midterm examination; the spring semester included 13 lectures and the course final examination, which was cumulative and covered all 23 lectures. Faculty delivered the lectures in the classroom; the lectures were recorded and posted on the online learning management system 14 after each lecture. All formative and summative assessments were delivered via an online assessment platform 15 in the classroom under the supervision of a faculty proctor. A score below 70% on a summative assessment was recorded as a failure.

The first classroom quiz, with four case-based CRQs, was administered 4 weeks before the midterm examination; the second, with two case-based CRQs, was administered 6 weeks before the final examination. The number of CRQs and the timing of administration were determined by the lecture content and the course lecture and examination schedules; spring break extended the interval between the second formative assessment and the final examination to 6 weeks. A weight of 3% was assigned to the formative assessment results in calculating the final course grade.

Case-based CRQs covered the following topics: diagnosis and etiology for a gingivitis case, local contributing and risk factors, and stage and grade for a periodontitis case. These topics are essential for making a diagnosis and formulating an appropriate treatment plan. All case-based CRQs were written by the course director (SO) with another faculty member (DJ). Figure 1 presents examples of case-based CRQs. Three faculty members (SO, DJ, and GS) reviewed the questions and agreed on the correct answers. Grading of the quiz CRQs was divided among the three faculty to lessen the grading burden for the large class; using the scoring rubric for CRQs (Table 1), each faculty member graded the same assigned questions for all 128 students to minimize variation. After grading, each student received feedback on their answers from the course director via email. Table 2 presents examples of student responses and the course director's feedback. The quiz CRQs were also reviewed during subsequent classroom lectures using the best exemplars from the class.

Figure 1. Examples of case-based CRQs in formative assessment.

Table 1. A scoring rubric.

Score  Criteria
4      Correct answer listing all key supporting evidence, using proper terms
3      Correct answer missing some key supporting evidence, using proper terms
2      Correct answer mixing correct and incorrect supporting evidence, using proper terms
1      Correct answer listing only incorrect supporting evidence, not using proper terms
0      Incorrect answer to the question
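
A rubric like this can be encoded as data so that several graders apply identical criteria. The following Python sketch is purely illustrative: the paper does not describe how rubric points were converted to quiz percentages, so the 4-points-per-question mapping and all names here are assumptions.

```python
# Hypothetical encoding of the 0-4 CRQ rubric in Table 1.
RUBRIC = {
    4: "Correct answer listing all key supporting evidence, using proper terms",
    3: "Correct answer missing some key supporting evidence, using proper terms",
    2: "Correct answer mixing correct and incorrect evidence, using proper terms",
    1: "Correct answer listing only incorrect evidence, not using proper terms",
    0: "Incorrect answer to the question",
}

def crq_percent(rubric_scores: list[int]) -> float:
    """Convert per-question rubric scores to a quiz percentage.

    Assumes each CRQ is worth 4 points; the paper does not state the
    actual point-to-grade conversion, so this mapping is illustrative.
    """
    if any(s not in RUBRIC for s in rubric_scores):
        raise ValueError("rubric scores must be integers 0-4")
    return 100 * sum(rubric_scores) / (4 * len(rubric_scores))

print(crq_percent([4, 2, 3, 1]))  # 62.5
```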

Table 2. Examples of student answers and feedback from the course director.

Question: Etiology for a gingivitis case
Student answer: "Crohn's disease and smoking"
Feedback: The etiology of periodontal disease is bacterial plaque with the ensuing host responses. Smoking is a risk factor. There is no evidence that Crohn's disease directly causes gingival inflammation.

Question: Risk factor for periodontal disease
Student answer: "Risk factors with this patient are high with gingival recession and bone resorption."
Feedback: Gingival recession and bone resorption are results of periodontal disease. Risk factors for periodontal disease, such as smoking and diabetes, increase the probability of developing periodontal disease.

Question: Local contributing factor for periodontal disease
Student answer: "The local contributing factor is the factor which can contribute to the likelihood of periodontal disease occurring."
Feedback: The local contributing factors (LCFs) for periodontal disease maintain the disease by keeping plaque close to the gingiva. The LCFs do not directly induce gingival inflammation. Your statement describes risk factors for periodontal disease.

Question: Grade assignment for a periodontitis case
Student answer: "Grade B because her HbA1c level is 6%."
Feedback: The primary determinant for assigning a grade is % bone loss (BL)/age as indirect evidence of progression. The ratio of % BL/age was approximately 1.3; therefore, Grade C should be assigned regardless of the HbA1c level.
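
The grading logic in the last feedback entry can be written down explicitly. The sketch below assumes the 2018 AAP/EFP indirect-evidence thresholds for the % bone loss/age ratio (< 0.25 for Grade A, 0.25 to 1.0 for Grade B, > 1.0 for Grade C); it is an illustration of the reasoning taught in the course, not the course's grading tool, and the function name and example values are hypothetical.

```python
def periodontitis_grade(percent_bone_loss: float, age_years: float) -> str:
    """Assign a periodontitis grade from the % bone loss/age ratio.

    Thresholds follow the 2018 AAP/EFP indirect evidence of progression:
    ratio < 0.25 -> Grade A; 0.25-1.0 -> Grade B; > 1.0 -> Grade C.
    Modifiers such as smoking or HbA1c can raise, but never lower, the grade.
    """
    ratio = percent_bone_loss / age_years
    if ratio < 0.25:
        return "A"
    if ratio <= 1.0:
        return "B"
    return "C"

# The Table 2 case: a % BL/age ratio of about 1.3 yields Grade C
# regardless of the HbA1c level.
print(periodontitis_grade(percent_bone_loss=65, age_years=50))  # -> C
```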

The course director (SO) and the course co-director (GS) wrote all the midterm and final examination questions, reviewed them, and agreed on the correct answers. The midterm examination contained 24 MCQs and 12 SAQs. The final examination was cumulative, covering all 23 lectures, and contained 45 MCQs, 7 SAQs, and 6 essay-type CRQs. Students could write comments on the final examination after submitting their answers. Grading of the SAQs and CRQs on the midterm and final examinations was conducted solely by the course director (SO) using the same scoring rubric (Table 1). Item analyses were conducted for the midterm and final examinations to verify that the item difficulty index exceeded 0.6 and the discrimination power was ≥0.2. Two MCQs, one from the midterm and one from the final examination, were dropped because their item difficulty index was 0.2.
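
Item difficulty and discrimination can be computed from a students-by-items matrix of dichotomous scores. The sketch below uses one common definition of each index (difficulty as the proportion answering correctly; discrimination as the corrected item-total point-biserial correlation); the exact indices reported by the assessment platform are not specified in the paper, and the data here are simulated.

```python
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Difficulty index per item: the proportion of students answering
    correctly (higher = easier). `responses` is a students x items 0/1 matrix."""
    return responses.mean(axis=0)

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total point-biserial correlation per item: how well
    each item separates stronger from weaker examinees."""
    totals = responses.sum(axis=1)
    disc = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = totals - responses[:, j]  # total score excluding item j
        disc[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return disc

# Simulated answer matrix: 128 students x 45 MCQs, ~80% correct on average.
rng = np.random.default_rng(0)
sim = (rng.random((128, 45)) < 0.8).astype(int)
print(item_difficulty(sim)[:3], item_discrimination(sim)[:3])
```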

Statistical analysis

Given the nature of this coursework, we included the entire class. Because a sample size of 128 is robust for the analyses used, especially in educational settings, a formal sample size calculation was not performed.

The scores from the two quizzes were combined into a single formative assessment score for statistical analysis because the topics covered by the formative assessments came from the fall-semester lectures. Student performances on the assessments were compared with the Friedman test because the distributions of % scores failed the Kolmogorov-Smirnov and Shapiro-Wilk normality tests. Multiple comparisons were conducted with Dunn's test. The numbers of students who failed the second-year midterm and final examinations were compared with a chi-square test.
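
A minimal Python sketch of this pipeline, using simulated scores (the real data are not published) with means and standard deviations borrowed from the Results: normality checks, the Friedman test on matched repeated measures, and the chi-square comparison of failure counts. Dunn's post hoc test lives in the separate scikit-posthocs package.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 128
# Simulated matched % scores per student; illustrative only.
formative = rng.normal(51, 19, n).clip(0, 100)
midterm = rng.normal(84, 10, n).clip(0, 100)
final = rng.normal(87, 9, n).clip(0, 100)

# Normality checks that motivated the nonparametric approach.
print(stats.shapiro(final))                       # Shapiro-Wilk
print(stats.kstest(stats.zscore(final), "norm"))  # Kolmogorov-Smirnov

# Friedman test across the three repeated measures (matched by student).
print(stats.friedmanchisquare(formative, midterm, final))

# Dunn's multiple comparisons require the scikit-posthocs package:
# import scikit_posthocs as sp
# sp.posthoc_dunn([formative, midterm, final], p_adjust="bonferroni")

# Chi-square comparison of failure counts (16/128 midterm vs 6/128 final).
fail_table = np.array([[16, n - 16], [6, n - 6]])
print(stats.chi2_contingency(fail_table))
```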

Descriptive statistics were used to identify deficient areas in the formative assessment. Gender, the second-year midterm and formative assessment scores, and the time spent taking the second-year final examination were selected as predictors. A multiple linear regression (MLR) model was built to investigate the associations between the four predictors and the second-year final examination score. All analyses tracked individual students on all variables (matched groups) and were performed with GraphPad Prism (version 9.4.1; GraphPad Software, Inc., CA); P < .05 was considered significant.
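
The paper's MLR was fit in GraphPad Prism; an equivalent model can be sketched in Python with statsmodels, shown here on simulated data. All column names and the time units (minutes) are hypothetical stand-ins for the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 128
# Simulated stand-in for the matched dataset; the real data are not published.
df = pd.DataFrame({
    "final": rng.normal(87, 9, n),
    "midterm": rng.normal(84, 10, n),
    "formative": rng.normal(51, 19, n),
    "gender": rng.choice(["female", "male"], n),
    "minutes": rng.normal(90, 15, n),  # time spent on the final examination
})

# Least-squares MLR mirroring Table 4:
# final ~ midterm + formative + gender + time spent.
model = smf.ols("final ~ midterm + formative + C(gender) + minutes", data=df).fit()
print(model.summary())  # estimates, SEs, 95% CIs, t statistics, P values
```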

Results

The mean final examination score of this cohort in the first-year periodontics course was 86 ± 6. The study class included 74 (57.4%) female and 55 (42.6%) male students. The mean % examination scores with standard deviations were 84 ± 10 (second-year midterm examination), 51 ± 19 (formative assessment), and 87 ± 9 (second-year final examination), showing significant variation over time (Friedman test, P < .0001). In multiple comparisons with Dunn's test, the mean % scores of the second-year midterm examination (P = .0099) and the formative assessment (P < .0001) were significantly lower than that of the second-year final examination (Figure 2). However, no significant difference (P > .9999) was observed between the first- and second-year final examination scores (Figure 2).

Figure 2. Comparisons of student performances on each assessment. Dunn's test; ns, not significant; **P < .01; ****P < .0001.

Sixteen students (13%) failed the second-year midterm examination, while six students (5%) failed the second-year final examination, a significant reduction (chi-square test, P = .03). Among the six students who failed the final examination, two had failed both the midterm and final examinations, although their scores improved from 60 to 66. The other four achieved scores between 79 and 90 on the midterm examination, but their final examination scores ranged from 63 to 68.

Table 3 presents student performance on each question in the formative assessment. Fewer than 50% of the second-year students answered four of the six questions correctly: diagnosis of gingivitis (6.2%), defining and recognizing risk factors (32.8%) and local contributing factors (24.2%), and assigning a grade for a periodontitis case (6.2%). Seventy-eight students (61%) assigned the correct stage for a periodontitis case, and 99 students (77.3%) correctly answered the etiology for a gingivitis case. The correct-answer rates for assigning a stage (93%) and grade (73.6%) for a different periodontitis case improved on the final examination.

Table 3. Student performances on the formative assessment with CRQs.

Quiz    Question                                            Students with the correct answer (N = 128)
Quiz 1  Diagnosis for a gingivitis case                     8 (6.2%)
Quiz 1  Etiology for a gingivitis case                      99 (77.3%)
Quiz 1  Risk factor for periodontal disease                 42 (32.8%)
Quiz 1  Local contributing factor for periodontal disease   31 (24.2%)
Quiz 2  Stage assignment for a periodontitis case           78 (61%)
Quiz 2  Grade assignment for a periodontitis case           8 (6.2%)

Ten students left comments after the final examination. Students described the second-year final examination as "difficult" and "tricky," although a few of the students who left such comments scored above 90%. One student commented that it was very challenging to read radiographs and determine the exact % bone loss in an examination room with all the lights on. Another stated that the examination included too many questions related to prognosis; this student gave the prognosis terms "fair" or "questionable" as answers to classification questions whose correct answers were Stage II or Grade B.

The MLR model was used to evaluate the associations between the selected predictors and the final examination score (Table 4). Based on the model, the second-year midterm (P < .0001) and formative assessment (P = .0009) scores were significantly correlated with the second-year final examination score, whereas student gender (P = .59) and time spent taking the second-year final examination (P = .83) did not affect student performance.

Table 4. MLR model.

Outcome: second-year final examination score
Regression type: least squares
Model diagnostics: degrees of freedom = 4, F = 16.43, P < .0001
Goodness of fit: degrees of freedom = 123, R² = 0.35, adjusted R² = 0.33

Parameter estimates:
Parameter                             Estimate  Standard Error  95% Confidence Interval  |t|    P value
Intercept                             49.31     6.63            36.20 to 62.42           7.443  < .0001
Second-year midterm                   0.36      0.07            0.23 to 0.49             5.447  < .0001
Formative assessment                  0.12      0.04            0.05 to 0.19             3.399  .0009
Gender [male]                         0.71      1.31            −1.88 to 3.29            0.542  .59
Time spent on the final examination   0.00      0.05            −0.10 to 0.10            0.215  .83

Discussion

This study examined the correlation between student performances on the formative assessment and the final examination in the second-year preclinical periodontics course. The regression model (Table 4) demonstrated that the case-based CRQ formative assessment score was significantly associated with performance on the summative assessment (P = .0009), rejecting the null hypothesis.

Periodontics primarily deals with the prevention, diagnosis, and treatment of periodontal diseases. Dental students must attain sufficient knowledge in periodontics during preclinical education. While assessment in dental education embraces a wide range of concepts, preclinical assessment should include a diagnostic tool to identify learning deficiencies among students. 16 Therefore, formative assessments with case-based CRQs were administered as assessment for learning. 5

Attention to ungraded low-stakes assessment has increased in medical education. 17 However, students may not give their best effort on such tests, especially when no grade is assigned. 17 Therefore, a low weight (3% of the final course grade) was assigned to the formative assessment in this study as an incentive for students who performed well.

The results of the formative assessment revealed that the most difficult tasks for the students were diagnosing gingivitis and determining a grade for a periodontitis case (Table 3). This may reflect clinical reasoning errors in interpreting clinical data. 18 Most students seemed not to grasp the meaning of the probing pocket depth and therefore used pocket depths as the primary diagnostic parameter. This also indicates that students had difficulty evaluating radiographic images with respect to the normal crestal bone level. Students knew and could describe the basic information, but they could not apply their knowledge to solve clinical problems. Students paid more attention to the medical history than to the dental clinical data because they confused the etiologic, risk, and local contributing factors for periodontal disease.3,19 This confusion contributed to their poor performance in assigning a grade for a periodontitis case. After reviewing students' responses, the course director identified these deficiencies and tried to provide constructive feedback rather than simple answers (Table 2).

Reviewing the results of the formative assessments influenced subsequent lecture content and teaching focus. Lecturers modified subsequent lectures to review the concepts that students were struggling to understand, allowing a more tailored approach aimed at enhancing student comprehension.

Feedback is information provided by instructors regarding aspects of a student's understanding or performance. 20 While feedback can be either negative or positive, it is supposed to be constructive. 21 Constructive feedback should be clear, tailored, factual, and descriptive based on direct observation; provide a reasonable amount of information; and be delivered in a timely manner. 22 The course director provided feedback to individual students online (via email) and to the entire class offline (during classroom lectures). 23

While all 128 students received feedback, and the correct-answer rates for assigning stage and grade improved on the final examination, it was not clear how students used the feedback to enhance their performance on the summative assessment. Students' attitudes toward learning, as the recipients of feedback, affect the effectiveness of constructive feedback. 24 While deep learners are more motivated to learn and enhance their performance, strategic learners may adopt tactics aimed at academic success. 25 Surface learners, in turn, focus on the superficial features of a subject and may skip any content they think is unrelated to their final goal. 25 Surface learners are often motivated by extrinsic factors, and their engagement and commitment are not solid. Educational tactics to engage surface learners with feedback should be explored further.

Student performance on examinations is influenced by various factors, some controllable and some not. 26 Uncontrollable factors include students' health and personal issues, personal learning strategies, levels of test anxiety, and cultural or socioeconomic backgrounds. 26 This study maintained the same examination classroom environment and the same online assessment format. Student gender and time spent on the examination did not affect the final examination outcome (Table 4). Four students who passed the midterm examination with scores between 79 and 90 failed the final examination. Personal factors, such as family problems, overconfidence or lack of confidence, or use of feedback, may have affected their performance on the final examination, but this study could not evaluate those factors.

The external validity of the study findings is limited by several factors. First, a formal sample size calculation was not performed; the decision to include the entire class was based on practical constraints and the context of educational research. The absence of a predetermined sample size calculation may raise questions about the generalizability and statistical power of the findings, although including the entire class enabled comprehensive data collection. Second, all students received the same experience, with no control group. Although the MLR model tracked individual students to evaluate the correlation between the formative and summative assessments, we do not know whether the students would have performed more poorly without the CRQ formative assessments. Third, this study did not obtain students' perceptions of the case-based CRQ formative assessment or the course director's feedback. As students' perceptions may affect their motivation and learning behavior, 27 obtaining them may help explain how students use formative assessment and feedback. 28 Finally, preclinical periodontics courses focus heavily on delivering and testing explicit knowledge ("know what"), although there is a component of manual skill development via simulation-based learning. 29 Therefore, the formative assessment format used in this study may not be applicable to disciplines in which attaining manual dexterity and testing manual skills constitute a major part of the course.30,31

Conclusions

Implementing case-based CRQs as a low-stakes formative assessment helped the course director identify areas of deficiency in ongoing teaching. Within the limitations of the study, student performance on the case-based CRQs was correlated with performance on the summative assessment. Further study including more cohorts is needed to determine specific effects.

Acknowledgements

The authors would like to thank Dr Gary Swiec for his help in the second-year periodontics course as a co-director. The authors would like to thank Dr Man-Kyo Chung for his critical reading and insightful reviewing.

Footnotes

Authors’ contributions: DJ participated in formative assessment and in writing the manuscript. SO designed the study, conducted the study, collected data, performed statistical analysis, and participated in writing the manuscript. All authors have read and approved the manuscript, are aware of this submission, and agree with its publication.

Data availability statement: The datasets used and analyzed in this study are available from the corresponding author upon reasonable request.

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research and authorship. The Department of Advanced Oral Sciences and Therapeutics partially supported the publication of this article.

Ethics approval: This study was conducted under the deemed exemption from the Institutional Review Board (IRB) at the University of Maryland, Baltimore (HP-00082574). All methods were conducted in accordance with relevant guidelines and regulations.

References

1. Taras M. Assessment - summative and formative - some theoretical reflections. Br J Educ Stud. 2005;53(4):466-478. doi:10.1111/j.1467-8527.2005.00307.x
2. Abou Naaj M, Mehdi R, Mohamed EA, Nachouki M. Analysis of the factors affecting student performance using a neuro-fuzzy approach. Educ Sci. 2023;13(3):313. doi:10.3390/educsci13030313
3. Oh SL, Swiec G, Jones D, Chung T. Effectiveness of distance learning for preclinical periodontal education. Eur J Dent Educ. 2023;27:1060-1066. doi:10.1111/eje.12899
4. Oh SL, Mishler O, Yang JS, Barnes C. Effectiveness of remote simulation-based learning for periodontal instrumentation: a non-inferiority study. J Dent Educ. 2022;86:463-471. doi:10.1002/jdd.12820
5. Black P, Wiliam D. Developing the theory of formative assessment. Educ Assess Eval Account. 2009;21(1):5-31. doi:10.1007/s11092-008-9068-5
6. Oh SL, Kim YJ, Cho E, Han JY. Practical approaches for enhancing student performance in summative assessment. J Med Educ Train. 2020;4(1):46.
7. Schüttpelz-Brauns K, Kadmon M, Kiessling C, Karay Y, Gestmann M, Kämmer JE. Identifying low test-taking effort during low-stakes tests with the new test-taking effort short scale (TESS) - development and psychometrics. BMC Med Educ. 2018;18(1):1-10. doi:10.1186/s12909-018-1196-0
8. Wise SL, DeMars CE. Examinee noneffort and the validity of program assessment results. Educ Assess. 2010;15(1):27-41. doi:10.1080/10627191003673216
9. Broadbent J, Panadero E, Boud D. Implementing summative assessment with a formative flavour: a case study in a large class. Assess Eval High Educ. 2018;43(2):307-322. doi:10.1080/02602938.2017.1343455
10. Kuechler WL, Simkin MG. Why is performance on multiple-choice tests and constructed-response tests not more closely related? Theory and an empirical test. Decis Sci J Innov Educ. 2010;8(1):55-73. doi:10.1111/j.1540-4609.2009.00243.x
11. Song M, Van Der Bij H, Weggeman M. Determinants of the level of knowledge application: a knowledge-based and information-processing perspective. J Prod Innov Manag. 2005;22(5):430-444. doi:10.1111/j.1540-5885.2005.00139.x
12. McLean SF. Case-based learning and its application in medical and health-care fields: a review of worldwide literature. J Med Educ Curric Dev. 2016;3:JMECD.S20377. doi:10.4137/jmecd.s20377
13. Duong M-LT, Cothron AE, Lawson NC, Doherty EH. U.S. dental schools' preparation for the Integrated National Board Dental Examination. J Dent Educ. 2018;82(3):252-259. doi:10.21815/JDE.018.024
14. Blackboard. https://www.blackboard.com/
15. Questionmark. https://assessments.questionmark.com/
16. El-Kishawi M, Khalaf K, Al-Najjar D, Seraj Z, Al Kawas S. Rethinking assessment concepts in dental education. Int J Dent. 2020;2020:8672303. doi:10.1155/2020/8672303
17. Schüttpelz-Brauns K, Hecht M, Hardt K, Karay Y, Zupanic M, Kämmer JE. Institutional strategies related to test-taking behavior in low stakes assessment. Adv Health Sci Educ. 2020;25(2):321-335. doi:10.1007/s10459-019-09928-y
18. John V, Lee S-JJ, Prakasam S, Eckert GJ, Maupome G. Consensus training: an effective tool to minimize variations in periodontal diagnosis and treatment planning among dental faculty and students. J Dent Educ. 2013;77(8):1022-1032.
19. Oh SL, Yang JS, Kim YJ. Discrepancies in periodontitis classification among dental practitioners with different educational backgrounds. BMC Oral Health. 2021;21(1):39. doi:10.1186/s12903-020-01371-5
20. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81-112. doi:10.3102/003465430298487
21. Al-Hattami AA. The perception of students and faculty staff on the role of constructive feedback. Int J Instr. 2019;12(1):885-894.
22. Hamid Y, Mahmood S. Understanding constructive feedback: a commitment between teachers and students for academic and professional development. J Pak Med Assoc. 2010;60(3):224-227.
23. Veeraiyan DN, Varghese SS, Rajasekar A, et al. Comparison of interactive teaching in online and offline platforms among dental undergraduates. Int J Environ Res Public Health. 2022;19(6):3170. doi:10.3390/ijerph19063170
24. Freeman Z, Cairns A, Binnie V, McAndrew R, Ellis J. Understanding dental students' use of feedback. Eur J Dent Educ. 2020;24(3):465-475. doi:10.1111/eje.12524
25. Papinczak T. Are deep strategic learners better suited to PBL? A preliminary study. Adv Health Sci Educ. 2009;14(3):337-353. doi:10.1007/s10459-008-9115-5
26. Rasul S, Bukhsh Q. A study of factors affecting students' performance in examination at university level. Procedia Soc Behav Sci. 2011;15:2042-2047. doi:10.1016/j.sbspro.2011.04.050
27. Oh SL, Gourley B, Idzik-Starr C. Student perception of pilot interprofessional education and care clinical experiences at dental clinics. Int J Med Educ. 2020;11:261-262. doi:10.5116/ijme.5fc7.7b93
28. Luke AM, Mathew S, Kuriadom ST, et al. Effectiveness of problem-based learning versus traditional teaching methods in improving acquisition of radiographic interpretation skills among dental students - a systematic review and meta-analysis. Biomed Res Int. 2021;2021:9630285. doi:10.1155/2021/9630285
29. Mishler O, Barnes C, Shiau HJ, Oh S-L. Remote simulation-based learning for periodontal instrumentation in preclinical education. J Dent Educ. 2021;85(Suppl 3):2025-2027.
30. Sim MY, Tan LF, Adam L, Loch C. No one is born with it: Australasian dental students' perceptions of learning manual dexterity. J Dent Educ. 2022;87:60-69. doi:10.1002/jdd.13101
31. Lone MA, Iqbal UA, Lone MM, et al. Current trends in fixed prosthodontics education in undergraduate dental colleges. J Med Educ Curric Dev. 2023;10. doi:10.1177/23821205231193282
