Medical Science Educator. 2024 Dec 4;35(1):351–358. doi: 10.1007/s40670-024-02197-4

The Predictive Power of Short Answer Questions in Undergraduate Medical Education Progress Difficulty

Keyna Bracken 1, Amr Saleh 1, Jeremy Sandor 1, Matthew Sibbald 1, Micheal Lee-Poy 1, Quang Ngo 1
PMCID: PMC11933568  PMID: 40144084

Abstract

Background

To better understand the link between formative assessments and progress difficulty, we conducted an analysis in the undergraduate MD program of the Michael G. DeGroote School of Medicine by comparing formative assessment scores on Short Answer Questions (SAQ), called Concept Application Exercises (CAE), with subsequent progress difficulty. CAEs are designed to formatively assess knowledge translation. Their scores are not formally incorporated into the progress decision at the end of each curricular unit, which is holistic in nature. Students are referred to a student progress remediation committee if they fail to meet the curricular objectives. We sought to investigate the following research question: do short answer questions, in the form of CAEs, predict subsequent learner progress difficulty?

Methods

Data from the last four student cohorts (classes of 2022–2025) were included. To address the predictive power of CAE score characteristics, a binary logistic regression model was constructed with remediation committee referral as the dependent variable and CAE score characteristics as the independent variables.

Results

This study found that the average CAE score is the most powerful predictor of later progress difficulty: each one-point increase in average score was associated with a 37% decrease in the odds of referral to the remediation committee (odds ratio 0.63).

Conclusion

These findings illustrate the predictive value of SAQs for identifying later progress difficulty.

Keywords: Formative assessment, Undergraduate student progress

Introduction

The predictive value of assessments in medical education for progress difficulty holds significant implications for medical students and curriculum developers. Assessment of learning, which is commonly summative in nature and used to make learner progress decisions, and assessment for learning, which is typically formative and provides low-stakes, meaningful feedback that learners can use to modify performance, coexist within a program of assessment [1, 2]. Formative assessment, as opposed to summative assessment, provides an opportunity for students to gauge their learning process and evaluate their own understanding.

According to Norcini and colleagues, there are four key characteristics of effective formative assessment: the assessment is embedded in the normal instructional process, provides actionable feedback, is ongoing, and is timely [3]. In addition to these traits, formative assessments would ideally be predictive of subsequent summative progress challenges in the curriculum. While medical schools have invested heavily in academic remediation, there is still a lack of agreement in the medical education literature on both the best methods for identifying learners in difficulty and effective interventions for them [4]. Various studies have used different metrics, such as performance during the first year of medical school, negative academic comments, and pre-admission academic achievement, to try to predict medical school performance [5–7]. These studies have yielded mixed results: some factors, such as previous academic performance, were able to predict pre-clinical training performance, whereas others, such as possession of a prior degree, were not found to be predictive of medical school performance [5, 6]. This difficulty in identifying learners who will experience later difficulty hinders the capacity of undergraduate medical education (UGME) programs to intervene proactively in medical student education; medical schools are forced to be reactive in their response to academic struggles later in the medical school curriculum. Recent findings suggest that academic interventions, particularly when delivered early, can have a meaningful impact on struggling learners [4]. This prompts the question of how programmatic assessment for learning, with its many data points, may best support student learning [4].

This question is particularly important in the context of medical school curricula in which self-directed learning (SDL) is emphasized [8]. In self-directed learning curricula, students are responsible for reflecting on the learning process to optimize their own education [9]. Formative assessments are designed to enrich self-directed learning by facilitating learner engagement and meta-cognitive awareness of the learning process. Further, feedback from these assessments can be used as guidance to direct future learning, as students often have difficulty with self-assessment and may minimize knowledge deficits. This feedback and reflection are critical for students to adjust their learning goals and enhance their academic achievement [10].

Formative assessments can be delivered in many formats, most commonly short answer questions (SAQ) and multiple choice questions (MCQ). SAQs are a type of assessment in which students are required to generate a written response to a question arising from a clinical vignette. In MCQs, students select the most appropriate response from a list of potential answers [11]. At the Michael G. DeGroote School of Medicine, SAQs are used in the form of concept application exercises (CAE), an early formative assessment of knowledge translation administered over the course of preclinical undergraduate medical education (UGME). SAQs have persisted as a question format for summative assessment in medical education, which may be due to the belief that synthesizing an answer is more cognitively demanding than identifying the correct one [11]. In a survey conducted by Bird and colleagues, 74% of medical students reported thinking more critically when answering an SAQ as opposed to an MCQ, and 64% reported that it helps them to perform better in a clinical setting [12]. Regardless of the specific format of formative assessments, their role in the larger context of programmatic assessment is critical [13]. In the programmatic approach, a range of assessments of different formats are selected and aggregated to provide a holistic understanding of the student's level [2]. This information is then used to determine medical student progress in undergraduate medical education curricula.

This study focuses on the predictive value of SAQs, in the form of CAEs, for subsequent learner progress difficulty in a self-directed, problem-based learning (PBL) medical curriculum. CAEs were chosen because of their formative nature: unlike the other assessment tools used at McMaster, such as the Personal Progress Index (PPI) or the Objective Structured Clinical Examination (OSCE), their scores do not affect academic progress decisions. In particular, we investigated which metric is most predictive of subsequent progress concerns: an average of scores, achievement of a minimal threshold, or the lowest score. To assess progress difficulty, this study examines the relationship between CAE scores and other assessments: specifically, the Personal Progress Index (PPI), an MCQ progress test administered longitudinally at intervals across the UGME program, and the national licensing examination administered by the Medical Council of Canada to graduating medical students. We also investigated the relationship between CAE score metrics and subsequent referral to the Student Progress Committee (SPC) for academic difficulties. We hypothesized that short answer questions, in the form of CAEs, could be used to predict subsequent learner progress difficulty.

Methods

Organizational Context

This study is a retrospective review of all students' CAE performance over 4 years, with subsequent exploration of progress difficulties, in McMaster's accelerated 3-year MD program. The preclerkship component of the program is composed of five curricular units: Medical Foundations 1 (MF1) through 4 (MF4), which are based on biomedical systems, and an Integration Foundation unit (IF). During each unit, there are two to three CAEs corresponding, but not limited, to the respective subunits covered. Each CAE consists of five to six short answer questions requiring students to apply concepts learned during tutorials and large group sessions to clinically relevant vignettes. CAEs integrate content learned across the curriculum and require students to recall material from previous foundations. CAEs are anonymized and marked by the students' physician tutor with an answer key and a scoring rubric on a Likert scale from 1 to 5 (Table 5). On this scale, 3 represents a proficient or passing answer. Tutors are trained on the intent and role of CAEs, complete an online training module, and are provided with a scoring rubric. The aim of this training is to standardize marking across different tutors and ensure consistency in grading. Typically, a single tutor will mark all their students' CAEs across the foundation. An example of a complete CAE with marking rubric is provided in the Appendix.

Table 1.

The number of CAEs completed by cohort and campus

Cohort   Campus     Students   CAEs completed
                               Mean   Minimum   Maximum   SD
2022     Hamilton   148        14     3         14        1
         Niagara    27         13     3         14        2
         Waterloo   28         14     13        14        0
2023     Hamilton   149        12     1         14        1
         Niagara    29         13     10        14        1
         Waterloo   27         13     12        13        1
2024     Hamilton   152        13     2         14        1
         Niagara    29         14     13        14        0
         Waterloo   29         14     12        14        1
2025     Hamilton   153        11     1         12        1
         Niagara    28         11     8         11        1
         Waterloo   28         11     11        11        0

Assessment in the preclerkship program is programmatic, and progression in the program is determined at the end of each curricular unit. Final assessment is holistic, and medical students’ progress is rated as satisfactory, provisional satisfactory, or unsatisfactory. This final assessment reflects the student’s achievement of the objectives of the curricular unit. In preclerkship, tutors assess student contributions to the learning environment during tutorials and clinical skills assessments to determine if foundational objectives have been met satisfactorily. CAE scores, while considered, are not formally incorporated into the final assessment. As a result, a student’s progress decision is never based on CAE scores. The Student Progress Committee (SPC) is a remediation committee that is charged with reviewing and remediating learners in difficulty. Students are referred to the SPC if they do not achieve the objectives of the curricular unit and receive a provisional satisfactory or unsatisfactory assessment. The SPC determines the individualized student learning enhancement or remediation plan required and whether a curricular pause is necessary. Most academic referrals to the SPC occur during the clinical phase of UGME, commonly known as clerkship, due to failure of a core clinical clerkship exit examination.

To determine whether low scores on CAEs predict subsequent learner progress difficulty, logistic regression models were constructed using SPSS 26 with CAE scores as the independent variable and referral to SPC as the dependent variable. A logistic regression model allows the probability of this outcome to be modelled as a function of the independent variables, and it eases interpretation of the results by converting log-odds into probabilities. It also does not assume a linear relationship between independent and dependent variables, nor does it require normal distribution or equal variances of the independent variables. Secondary outcomes, such as failure of the Medical Council of Canada Qualifying Exam (MCCQE) and low scores on the PPI, a separate progress assessment tool, were also measured against CAE scores to triangulate progress difficulty.
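To make the log-odds-to-probability conversion concrete, a minimal sketch follows (in Python rather than SPSS; the intercept and coefficient are invented for illustration, not the study's fitted values):

```python
import numpy as np

def probability_from_logit(intercept, coef, x):
    # The model expresses the log-odds of SPC referral as a linear function
    # of a score characteristic x; the logistic (sigmoid) transform converts
    # the log-odds into a probability between 0 and 1.
    log_odds = intercept + coef * x
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical parameters: a negative coefficient encodes "higher CAE score,
# lower referral risk". These are not the fitted SPSS values.
print(probability_from_logit(intercept=1.2, coef=-0.9, x=3.6))
```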

Sampling

Data from the last four student cohorts, the classes of 2022–2025, were included. Each cohort is labelled by the year of its expected graduation. The classes of 2022 to 2025 provide complete datasets with the most comparable curricula, even taking the pandemic restrictions and required curricular pivots into consideration. Results from the class of 2021 or earlier were not used because a curriculum review had resulted in many changes to the medical foundations and assessment platforms.

Materials

CAE scores on each exercise were aggregated to produce three metrics:

  1. Average: average item score of the five to six questions in the CAE

  2. Minimum: lowest item score of the five to six questions in the CAE

  3. Unsatisfactory CAE item: presence of a score below 3 (satisfactory) on any individual item (0 = no, 1 = yes)

These metrics were chosen because they reflect different aspects of a student's performance. The average score represents an overview of their understanding, whereas the minimum score highlights their lowest level of proficiency. An unsatisfactory item represents a threshold after which future progress can be examined. Progress difficulty was determined by whether or not a student was referred to the SPC at any point in their journey through the program. Students who scored in the red zone on a PPI were automatically referred to the SPC. The PPI is a separate progress assessment tool designed to determine overall knowledge acquisition in the UGME program, with content blueprinted to the MD program competencies. A red zone indicates that the student's score is more than 2 standard deviations below the mean score of their cohort. SPC referral was scored as 0 (no referral) or 1 (referral).
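To illustrate how these three metrics can be derived from item-level marks, here is a brief sketch using pandas; the table layout, column names, and values are hypothetical, not the program's actual data structure:

```python
import pandas as pd

# Hypothetical item-level marks: one row per student, CAE, and item,
# scored on the 1-5 Likert scale (3 = proficient/passing).
items = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "B"],
    "cae":     [1, 1, 1, 1, 1, 1],
    "score":   [4, 3, 2, 5, 4, 3],
})

per_cae = items.groupby(["student", "cae"])["score"].agg(
    average="mean",                               # metric 1: average item score
    minimum="min",                                # metric 2: lowest item score
    unsatisfactory=lambda s: int((s < 3).any()),  # metric 3: any item below 3
)
print(per_cae)
```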

Statistics

Characteristics of CAE scores were treated as continuous variables. SPC referral was treated as a binary variable. During the analysis, all data points were anonymized. To address the predictive power of CAE score characteristics, we constructed a binary logistic regression model using SPSS 26 with SPC referral as the dependent variable and CAE score characteristics as independent variables.
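For readers who want to reproduce this style of analysis outside SPSS, the sketch below fits an analogous binary logistic regression with Python's statsmodels on simulated data; every variable name, sample size, and coefficient here is invented for illustration and unrelated to the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated per-CAE score characteristics (invented values).
df = pd.DataFrame({
    "average": rng.normal(3.6, 0.7, n).clip(1, 5),
    "minimum": rng.normal(2.8, 0.9, n).clip(1, 5),
    "unsatisfactory": rng.integers(0, 2, n),
})
# Simulate referrals so that lower average scores carry higher referral risk.
p = 1.0 / (1.0 + np.exp(-(3.0 - 1.2 * df["average"])))
df["spc_referral"] = rng.binomial(1, p)

# Binary logistic regression: SPC referral on the three score characteristics.
X = sm.add_constant(df[["average", "minimum", "unsatisfactory"]])
fit = sm.Logit(df["spc_referral"], X).fit(disp=False)
print(fit.summary())
print(np.exp(fit.params))  # exponentiated coefficients are odds ratios
```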

Ethics

This study was conducted in McMaster University and approved by the Hamilton Integrated Research Ethics Board (HiREB #16918) as part of ongoing quality assessment and improvement.

Results

Overall, the results from 10,392 CAEs were used. The average CAE score was 3.63 ± 0.67, the average minimum CAE score was 2.82 ± 0.86, and the proportion of CAEs with an unsatisfactory item was 0.35 ± 0.48. In total, 19% of students were referred to the progress committee (Table 1).

Table 2.

CAE performance across different medical foundations in the pre-clerkship curriculum

Metric           MF1           MF2           MF3           MF4           IF
Average          3.65a (0.73)  3.72b (0.63)  3.54c (0.70)  3.71b (0.61)  3.53c (0.62)
Minimum          2.87a (0.92)  2.83a (0.85)  2.81a (0.87)  2.87a (0.84)  2.66b (0.80)
Unsatisfactory   0.35a (0.48)  0.35a (0.48)  0.35a (0.48)  0.33a (0.47)  0.42b (0.49)

Values shown as mean (standard deviation).

Values in the same row and subtable not sharing the same letters are significantly different at p < 0.05 in the two-sided test of equality for column means. Cells with no letters are not included in the test. Tests assume equal variances. Tests are adjusted for all pairwise comparisons within a row of each innermost subtable using the Bonferroni correction

Performance varied across foundation subunits, with lower average scores, lower minimum scores, and a greater proportion of unsatisfactory scores in IF than in the other foundations (Table 2).

Table 3.

CAE performance by cohort

Metric           2022          2023            2024          2025
Average          3.56a (0.60)  3.65b (0.71)    3.68b (0.68)  3.65b (0.68)
Minimum          2.71a (0.82)  2.84b (0.90)    2.91c (0.87)  2.81b (0.86)
Unsatisfactory   0.40a (0.49)  0.37a,c (0.48)  0.30b (0.46)  0.36c (0.48)
SPC              0.27a (0.45)  0.26a (0.44)    0.19b (0.39)  0.00c (0.07)

Values shown as mean (standard deviation).

Values in the same row and subtable not sharing the same letters are significantly different at p < 0.05 in the two-sided test of equality for column means. Cells with no letters are not included in the test. Tests assume equal variances. Tests are adjusted for all pairwise comparisons within a row of each innermost subtable using the Bonferroni correction

Performance also varied by cohort. Average, minimum, and unsatisfactory performances were lower for the class of 2022. SPC referrals are lower for the classes of 2024 and 2025, as these cohorts had yet to complete the curriculum (Table 3).

Table 4.

Summarized thresholds of CAE results for 30 and 50% probabilities of SPC referral

Metric                             30% probability of SPC referral   50% probability of SPC referral
Any individual average CAE score   2.28                              0.78
Lowest average CAE score ever      2.09                              1.46
Cumulative average CAE score       3.30                              2.96

Modelling SPC referral with CAE characteristics revealed predictive power better than an intercept-only model (omnibus test chi-square = 192, p < 0.0001). Average scores were the most important predictor (Wald chi-square 53.1), and the presence of an unsatisfactory score less so (Wald chi-square 8.7). Average scores were lower for those referred to SPC (3.45 ± 0.67 versus 3.68 ± 0.66), with each one-point increase in average score associated with an odds ratio of 0.63 for SPC referral (95% CI 0.56–0.71, p < 0.001). Minimum scores tended to be lower for those referred to SPC (2.62 ± 0.84 versus 2.87 ± 0.86) but were not predictive of referral. The proportion of CAEs with an unsatisfactory score was higher among those referred (0.47 ± 0.50 versus 0.33 ± 0.47), with the presence of an unsatisfactory score associated with an odds ratio of 1.32 for referral (95% CI 1.10–1.58, p = 0.003). Rates of SPC referral were 0.17 (95% CI 0.16–0.18) among those without an unsatisfactory CAE question and 0.21 (95% CI 0.19–0.23) among those with one.

Using binary logistic regression modelling, curves were constructed in which the average score of a single CAE predicts SPC referral, a red zone on a PPI (a score more than 2 standard deviations below the cohort mean), and failure of the Medical Council of Canada Qualifying Examination (MCCQE). The MCCQE is the summative examination written at the end of medical school that assesses medical knowledge and clinical decision making (Medical Council of Canada). The average score of a single CAE was found to be predictive of SPC referral, a red zone on a PPI, and MCCQE failure (p < 0.0001) (Figs. 1 and 2; Table 4).

Fig. 1. Average CAE score compared to SPC referral rate

Fig. 2. Individual average CAE score compared to SPC referral, red zone on a PPI, and failure of the MCCQE

Using the data from the constructed curves, thresholds were identified which predict 30 and 50% probabilities of referral to SPC. This outcome, as opposed to a red zone on a PPI and failure of the MCCQE, was used because it is the most common among medical students.
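These thresholds follow from inverting the fitted logistic curve, that is, solving the predicted probability for the score at which it is reached. A minimal sketch, again with invented parameters rather than the study's fitted values:

```python
import numpy as np

def score_threshold(intercept, coef, p):
    # Solve p = 1 / (1 + exp(-(intercept + coef * score))) for score:
    # score = (logit(p) - intercept) / coef.
    logit = np.log(p / (1.0 - p))
    return (logit - intercept) / coef

# Hypothetical intercept/coefficient (coef < 0: higher score, lower risk).
for p in (0.30, 0.50):
    print(f"{p:.0%} referral probability at CAE score "
          f"{score_threshold(intercept=1.2, coef=-0.9, p=p):.2f}")
```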

Table 5.

Likert scale used by tutors to mark CAEs. A mark of 3 represents a proficient or passing response

Criterion: Understanding of concept

Accomplished (5)
- Student was able to describe a deep and complete understanding of the concepts/mechanisms and was able to explain new information or concepts related to topics discussed in previous subunits or foundations or encountered in other areas of the program
- Mastered the learning objectives of the MF

(4)
- Student demonstrated a comprehensive understanding of the concepts/mechanisms central to the MF
- Demonstrated excellent organization and integration of material
- Demonstrated superior achievement of the learning objectives of the MF

Proficient (3)
- Student was able to describe key concepts/mechanisms to a degree sufficient for the MF
- Demonstrated an understanding of the importance or relevance of the concepts/mechanisms
- Information was appropriately organized and prioritized
- Demonstrated acceptable achievement of the learning objectives

(2)
- Student was able to describe most but not all of the key concepts/mechanisms; understanding of some of the material was incomplete
- Student was in the early stages of achieving the learning objectives of the MF

Novice (1)
- Student was able to describe some but not all of the concepts/mechanisms
- Seemed unclear/uncomfortable with at least some of the material
- Understanding of some concepts was superficial
- Difficulty organizing and prioritizing information
- Achievement of the learning objectives of the MF was not met

Discussion

Prior studies have investigated the predictive value of various assessments in UGME including performance on MCQ exams and narrative comments on academic performance [6, 14]. These studies have found some factors, such as pre-admission academic achievement, to be effective in predicting the likelihood of progress difficulty in medical school [7]. In this study, we focus on how formative SAQ assessments can be used to identify medical students who may be at risk of progress difficulty.

The findings of this study illustrate the predictive ability of formative SAQs to identify later progress difficulty. In particular, three outcome metrics were analyzed: referral to SPC, performance on the PPI, and failure of the MCCQE. The average CAE score was found to be the most powerful predictor of referral to SPC: each one-point increase in average CAE score was associated with a 37% decrease in the odds of referral to SPC (odds ratio 0.63). Some CAE score measures, such as the average score on a single CAE, were also predictive of SPC referral, a red zone on a PPI, and failure of the MCCQE. The association of low CAE scores with other knowledge assessment tools supports the validity of the CAE as a formative assessment tool. This, in turn, substantiates its predictive power for subsequent progress difficulty.

Our findings reveal that CAE scores varied between Medical Foundations (MFs). The CAE results from the last Medical Foundation, the Integration Foundation (IF), showed lower average scores, lower minimum scores, and a greater proportion of unsatisfactory scores compared to the other MFs. This may relate to the intentional spiralling of complexity from previous concepts in this final unit prior to the transition to clinical clerkship [15]. We also found that CAE scores varied by cohort. The class of 2022 had lower average and minimum CAE scores. We hypothesize that this could be attributed to the curriculum changes from the COVID-19 pandemic, which required this cohort to complete much of their problem-based learning curriculum remotely. This abrupt switch to an online learning environment may have resulted in poorer CAE scores compared to cohorts who completed their learning in person. Further, the class of 2022 also had lower MCCQE scores compared to other cohorts, attesting to the impact of the pandemic on medical student learning. This strengthens the findings of this study since, despite changing contexts and performance levels, performance on SAQs for this cohort was still predictive of subsequent progress difficulty.

The predictive value of formative SAQs is critical given that students often struggle to determine their progress trajectory and are unaware of where their knowledge gaps lie. Formative assessments that hold predictive value for subsequent difficulty present an opportunity for medical students to adjust their learning strategies, seek out educational resources, and change their academic trajectory. This form of self-reflection is key for students to maximize their academic success [10]. Moreover, these findings provide an opportunity for intervention earlier in the medical school curriculum, while students are still in pre-clerkship. This aligns with the results of Landoll et al., in which earlier academic intervention was associated with a greater impact on a student's academic course [4]. From a medical student perspective, lower performance on formative SAQ items can be used to identify learning gaps and to gather the academic resources needed to avoid or mitigate difficulties on summative assessments later in the learner's medical education. This is especially important since most academic referrals to the SPC result from exam failure during clerkship. Overall, this study attests to the role of formative assessments in the broader context of programmatic assessment to further support medical student learning.

There are several strengths to this study: the dataset is large, with over 10,000 data points, and represents pragmatic data across multiple time points in the curriculum and different assessors; different metrics, namely referral to SPC, performance on the PPI, and subsequent failure of the MCCQE, were used to triangulate the validity of the findings; and the findings are internally consistent, in that lower CAE scores correlate with lower scores on the PPI and the MCCQE. There are also several limitations. First, data for the classes of 2024 and 2025 are incomplete given that these two classes have not yet graduated. Second, the results from this study may not be generalizable to other medical schools that use alternate programs of assessment to gauge student progress. Third, in order to measure progress difficulty, outcomes such as referral to SPC and PPI scores were used; these outcome measures, or comparable ones, may not exist at other medical schools. Fourth, PBL and SDL are key pedagogic principles of McMaster's medical school. Other medical schools may have more didactic preclerkship curricula, which may influence performance on SAQ assessments and, by extension, their predictive value for student progress.

Implications for Further Research

Further research is needed to assess the predictive value of all assessments in UGME and how these assessments, taken together, provide an understanding of a student's progress trajectory. Moreover, additional work is needed to assess how interventions earlier in the medical school curriculum triggered by low average SAQ scores, such as guided self-assessment with a learning director to identify knowledge gaps, can help prevent later progress challenges. We intend to analyze how interactions with a faculty-appointed learning director influence subsequent CAE scores and the other programmatic assessments and, from a qualitative perspective, student study strategies and techniques. We are also interested in examining whether and how the knowledge that the CAE is predictive of subsequent progress difficulty influences medical student and tutor attitudes towards this formative assessment tool. Lastly, this study sets the groundwork for future research on the optimal timing and nature of interventions to support medical students who may be at risk of progress difficulty. We intend to continue this work with increasing sample sizes as students matriculate, as we look to better understand how early signals, such as lower SAQ scores triggering learning enhancement, influence students' progress trajectory.

Conclusions

Formative assessments provide a self-directed educational opportunity for students to reflect on their learning process and progress. To maximize their effectiveness, these assessments should be predictive of subsequent progress in medical school curricula. This study illustrates that formative SAQ assessment items are indeed predictive of subsequent progress difficulty. These results hold strong implications for potential interventions based on the results of short answer question assessments.

Appendix

Example of One Question of a CAE Used During Integration Foundation (IF)

Jorge Batista is a 2.5-month-old baby who was diagnosed with a cleft palate at birth. He is feeding well with adaptive measures. Last week, Jorge received his routine immunizations for tetanus, diphtheria, polio, Haemophilus influenzae, pertussis, and pneumococcus, as well as the oral rotavirus vaccine. He now presents to the ER with intractable diarrhea, lethargy, and weight loss. Bloodwork shows a white blood cell count of 3.2 with lymphocytes of 1.1, normal-range sodium and potassium, but an ionized calcium of 1.02. Stool EM is positive for rotavirus. The pediatric resident explains that they believe Jorge has 22q11 deletion syndrome, a defect of embryonic development that affects midline structures.

  1. Based on your understanding, which arm of the immune system is implicated in Jorge’s presentation?

  2. Explain why Jorge has a rotavirus infection but no symptoms from his other vaccines.

  3. Explain why Jorge's calcium is low and how you would correct it.

Sample Answer

Midline structures include palate, parathyroid glands and thymus. Lack of thymic infrastructure renders patients with 22q11 deletion syndrome lymphopenic and with variable T cell function. Lack of T cell function has rendered this patient vulnerable to vaccine-strain infection after live viral vaccines. The other vaccines children receive at 2 months of age are not live and cannot confer infection. This patient is hypocalcemic as a result of parathyroid dysfunction. This typically can be corrected with oral supplementation.

Declarations

Conflict of Interest

The authors declare no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2018;53(1):76–85. 10.1111/medu.13645.
  2. Heeneman S, Oudkerk Pool A, Schuwirth LW, van der Vleuten CP, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49(5):487–98. 10.1111/medu.12645.
  3. Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, Hays R, Palacios Mackay MF, Roberts T, Swanson D. 2018 consensus framework for good assessment. Med Teach. 2018;40(11):1102–9. 10.1080/0142159x.2018.1500016.
  4. Landoll RR, Bennion LD, Maranich AM, Hemmer PA, Torre D, Schreiber-Gregory DN, Durning SJ, Dong T. Extending growth curves: a trajectory monitoring approach to identification and interventions in struggling medical student learners. Adv Health Sci Educ. 2022;27(3):645–58. 10.1007/s10459-022-10109-7.
  5. Stegers-Jager KM, Themmen AP, Cohen-Schotanus J, Steyerberg EW. Predicting performance: relative importance of students' background and past performance. Med Educ. 2015;49(9):933–45. 10.1111/medu.12779.
  6. Yates J, James D. Predicting the "strugglers": a case-control study of students at Nottingham University Medical School. BMJ. 2006;332(7548):1009–13. 10.1136/bmj.38730.678310.63.
  7. Li J, Thompson R, Shulruf B. Struggling with strugglers: using data from selection tools for early identification of medical students at risk of failure. BMC Med Educ. 2019;19:1–6. 10.1186/s12909-019-1860-z.
  8. Ricotta DN, Richards JB, Atkins KM, Hayes MM, McOwen K, Soffler MI, Tibbles CD, Whelan AJ, Schwartzstein RM. Self-directed learning in medical education: training for a lifetime of discovery. Teach Learn Med. 2021;34(5):530–40. 10.1080/10401334.2021.1938074.
  9. Lim YS. Students' perception of formative assessment as an instructional tool in medical education. Med Sci Educ. 2019;29(1):255–63. 10.1007/s40670-018-00687-w.
  10. Boud D. Sustainable assessment: rethinking assessment for the learning society. Stud Contin Educ. 2000;22(2):151–67. 10.1080/713695728.
  11. Hift RJ. Should essays and other "open-ended"-type questions retain a place in written summative assessment in clinical medicine? BMC Med Educ. 2014;14:1–8. 10.1186/s12909-014-0249-2.
  12. Bird JB, Olvet DM, Willey JM, Brenner J. Patients don't come with multiple choice options: essay-based assessment in UME. Med Educ Online. 2019;24(1):1649959. 10.1080/10872981.2019.1649959.
  13. Sein A, Rashid H, Meka J, Amiel J, Pluta W. Twelve tips for embedding assessment for and as learning practices in a programmatic assessment system. Med Teach. 2020;43(3):300–6. 10.1080/0142159x.2020.1789081.
  14. Cendán JC, Joledo O, Soborowicz MB, Marchand L, Selim BR. Using assessment point accumulation as a guide to identify students at risk for interrupted academic progress. Acad Med. 2018;93(11):1663–7. 10.1097/acm.0000000000002270.
  15. Bracken K, Levinson AJ, Mahmud M, Allice I, Vanstone M, Grierson L. Spiraling pre-clerkship concepts into the clinical phase: augmenting knowledge transfer using innovative technology-enhanced curriculum activities. Med Sci Educ. 2021;31(5):1607–20. 10.1007/s40670-021-01348-1.
