Abstract
Background
Comprehensive Basic Science Self-Assessments (CBSSAs) offered by the National Board of Medical Examiners (NBME) are used by students to gauge preparedness for the United States Medical Licensing Examination (USMLE) Step 1. Because residency programs value Step 1 scores, students expend considerable resources attempting to score highly on this exam. We sought to generate a predicted Step 1 score from a single CBSSA taken several days out from a planned exam date to inform student testing and study plans.
Methods
2016 and 2017 Step 1 test takers at one US medical school were surveyed. The average daily score improvement from CBSSA to Step 1 during the 2016 study period was calculated and used to generate a predicted Step 1 score as well as mean absolute prediction errors (MAPEs). The predictive model was validated on 2017 data.
Results
In total, 43 of 61 respondents totaling 141 CBSSAs in 2016 and 37 of 43 respondents totaling 122 CBSSAs in 2017 were included. The final prediction model was [Predicted Step 1 = 292 - (292 - CBSSA score) * 0.987527 ^ (number of days out)]. In 2016, the average difference between predicted and actual scores was -0.81 (10.2) and the MAPE was 7.8. In 2017, 88 (72.1%) and 118 (96.7%) of the 122 true Step 1 scores fell within one and two standard deviations, respectively, of a student’s predicted score, with a MAPE of 7.7. Practice form used (p = 0.19, 0.07) and how far out from the actual Step 1 it was taken (p = 0.82, 0.38) were not significant in either year of study.
Conclusion
This projection model is reasonable for students to use to gauge their readiness for Step 1 while it remains a scored exam and provides a framework for future predictive model generation as the landscape of standardized testing changes in medical education.
Keywords: Step 1, Testing, Projection modeling, USMLE
Introduction
For several years, the National Board of Medical Examiners (NBME) has offered Comprehensive Basic Science Self-Assessments (CBSSAs) to assist students preparing for the United States Medical Licensing Examination (USMLE) Step 1 examination. The self-assessments have been shown to predict students’ readiness to take the exam [1] when taken under test-like conditions [2]. In recent years, however, much to the dismay of some [3, 4], students’ Step 1 scores have been used increasingly to rank applicants for interviews and the match process [5–7], most likely because some studies have shown the examination to predict success in residency and on future board examinations [8, 9]. Despite the USMLE cautioning against such use of the exam as far back as 1993 [10], Schrock et al. [7] found that 83% of Orthopedic Surgery program directors responding to their survey used USMLE Step 1 score cut-offs to screen applications. Medical students are acutely aware of this phenomenon and seek to improve their performance on this high-stakes examination: they now begin preparing earlier and increasingly turn to third-party vendors for question banks and study tools [11, 12].
Given the high-stakes nature of the examination, students who do not achieve practice scores close to their goal will likely consider changing their examination date to allow more time to study, if their program permits. Among the validated literature, there are very few prediction models to assist students, especially ones that can show, a number of days out from the examination [13], whether they are on the right track. Given that rescheduling fees increase dramatically as test day draws closer—sometimes to hundreds of dollars [14]—the ability to project students’ exam scores weeks in advance would be highly practical (Table 1). Previously, the NBME stated in CBSSA score reports that CBSSA scores could predict a Step 1 score within 13 points 67% of the time if taken within 1 week of the actual test [15].
Table 1.
Cost to reschedule USMLE tests by time out from the exam
Appointment Change Fees 2020

| Date Changed | Test Region | Step 1 |
|---|---|---|
| 31+ days before test date | All | No fee |
| 6–30 days before test date | All | $50 |
| 5 or fewer days before test date | US and Canada | $121 |
| 5 or fewer days before test date | Africa, Asia, Australia, China, India, Indonesia, Latin America, Middle East, Thailand | $293 |
| 5 or fewer days before test date | Europe, Korea, Taiwan | $333 |
| 5 or fewer days before test date | Japan | $537 |
Obtained from https://www.usmle.org/apply/rescheduling-fees.html (Retrieved 5 February 2020)
We sought both to generate a model that could improve upon this accuracy and give prospective students a better grasp on where they stand further out from their planned test date. With this study, we generated a predictive model that can provide a projected range of scores on the USMLE Step 1 based on a single CBSSA score and the number of days prior to planned test day the self-assessment was taken.
Methods
Sample
The sample consisted of USMLE Step 1 first-time examinees during the 2016 (n = 200) and 2017 (n = 205) testing cycles at Case Western Reserve University School of Medicine, comprising the graduating classes of 2018 and 2019 (Table 2).
Table 2.
Demographic data
|  | 2016 | 2017 |
|---|---|---|
| Survey respondents (included) | 61 (43) | 43 (37) |
| Number of CBSSAs included | 141 | 122 |
| CBSSAs per respondent | 3.3 (1.2) | 2.8 (1.7) |
| Step 1 score | 250.5 (14.3) | 250.3 (11.8) |
| CBSSA score | 241.9 (15.1) | 240.9 (15.6) |
| Study days | 54.1 (11.7) | 56.8 (14.6) |
Data reported as mean (SD)
Survey Procedures
Our institution’s IRB approved this protocol and the online self-administered, confidential survey powered by Qualtrics (Provo, UT) was developed and distributed to Step 1 examinees during the 2016 and 2017 testing cycles. In the survey, students were asked to report their USMLE Step 1 test date, dates they completed CBSSA forms, and the scores they received on these practice examinations. Students were also asked to report if they completed the practice examinations timed, under test-like conditions, and after they had completed a “first pass” review of their study material. A “first pass” was defined in the survey by the following statement: “We consider a ‘First Pass’ equivalent to one of the following: a full review of First Aid (First Aid for the USMLE Step 1: McGraw-Hill, New York, NY), completing an entire Qbank (Question bank), initial thorough review of all subjects covered on the USMLE Step 1, etc—please use your judgement”.
Data Analysis
Only CBSSAs completed under test-like conditions and after a “first pass” were included in the analysis and only survey respondents who completed at least two CBSSAs in 2016 were included in model generation. CBSSA scores between forms were considered equivalent.
There were two notable differences between the surveys distributed in 2016 and 2017: the method of distribution and the handling of Step 1 score reporting. In 2016, the survey was distributed through the class listserv e-mailing system by two of the authors (SB, JS). In 2017, the survey was sent by a school administrator to an e-mail list containing only 2017 test cycle test-takers. This change in year 2 allowed us to obtain Step 1 scores from school records before the administrator blinded the data and sent it to our team for review. In the 2016 edition, by contrast, survey data were collected de-identified and Step 1 scores were self-reported.
Calculating the Average Daily Score Improvement Factor
A model was generated under the premise that as students continue to study between their CBSSAs and Step 1 exams, their score would improve by a certain factor. This method assumed all CBSSAs generate a score that is a reasonable surrogate for a student’s Step 1 score at the time the practice exam was taken. Thus, if a maximum possible score achievable for these exams was assumed, a factor could be generated—we called this the daily score improvement factor (DSI)—to calculate improvement per day of study. So, for each CBSSA and for Step 1, a maximum score of 292 was assumed as this is the highest observed score noted on CBSSAs (note: Step 1 score interpretation guidelines [16] report scores up to 300). A reverse score (RS) for each CBSSA and Step 1 was calculated and defined as 292 − CBSSA score = RS_CBSSA OR 292 − Step 1 score = RS_Step1. This was essentially the maximum score minus the achieved score. The number of days out (DO) from the student’s Step 1 that the CBSSA was taken was also included in the formula. The DSI was defined qualitatively as a multiplier representing the percentage of reverse score that remains unachieved after an additional day of studying. This factor was calculated between each included CBSSA score and the student’s Step 1 using the equation:

DSI = (RS_Step1 / RS_CBSSA) ^ (1 / DO)
To further explain this, consider that RS_Step1/RS_CBSSA represents the fraction of reverse score unachieved between the CBSSA form and the student’s Step 1 score. For example, if a student’s CBSSA score was 232 (RS_CBSSA = 60) and Step 1 score was 252 (RS_Step1 = 40), then two-thirds of the reverse score (40/60 = 2/3) was not achieved, or perhaps more simply, 1/3 of the reverse score was achieved between the test dates. This is then taken to the “n-th” root with n representing the number of days apart the exams were. So, if we assume our example student scored 232 on her CBSSA twenty days prior to scoring 252 on Step 1, the DSI would be the twentieth root of two-thirds (0.67^(1/20)) or 0.9802. Thus, with each day of study, the example student could expect her reverse score to be approximately 98% of the day prior, meaning she improved by 2% of the previous day’s reverse score.
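The DSI arithmetic above can be sketched in a few lines of Python (an illustrative sketch of our reading of the method, not the authors' code; variable names follow the paper's definitions):

```python
# Daily score improvement (DSI) factor, as defined in the text.
# The 292 ceiling is the paper's assumed maximum achievable score.
MAX_SCORE = 292

def dsi(cbssa_score: float, step1_score: float, days_out: int) -> float:
    """Fraction of the reverse score remaining after one day of study."""
    rs_cbssa = MAX_SCORE - cbssa_score  # reverse score at the practice exam
    rs_step1 = MAX_SCORE - step1_score  # reverse score at the real exam
    return (rs_step1 / rs_cbssa) ** (1 / days_out)

# Worked example from the text: CBSSA 232 taken 20 days before a 252 on Step 1.
# The unrounded value is (2/3)**(1/20) ≈ 0.9799; the text's 0.9802 reflects
# rounding 2/3 to 0.67 before taking the twentieth root.
print(round(dsi(232, 252, 20), 4))
```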
After a measure of an individual student’s improvement from one CBSSA to the USMLE Step 1 over time had been defined, the average improvement per day across our cohort was calculated. This was achieved by calculating the geometric mean of DSIs from each CBSSA to Step 1 for each individual student, and then the arithmetic mean of those geometric means across all students for the 2016 year. Plainly, each student generated one DSI—averaged across all CBSSA forms taken (geometric mean)—and then these DSIs were averaged across all respondents in 2016 (arithmetic mean) generating one average DSI (aDSI). This protected against any one individual who took multiple practice exams skewing the aDSI.
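The two-stage averaging can be written out as follows (a minimal sketch with made-up DSI values, not the study's data):

```python
import math

def geometric_mean(values):
    """n-th root of the product, computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def average_dsi(dsis_by_student):
    """One geometric mean per student, then an arithmetic mean across
    students, so a student with many CBSSAs cannot skew the cohort aDSI."""
    per_student = [geometric_mean(dsis) for dsis in dsis_by_student]
    return sum(per_student) / len(per_student)

# Illustrative cohort: three students with differing numbers of CBSSAs.
cohort = [[0.980, 0.991], [0.987], [0.985, 0.989, 0.990]]
print(round(average_dsi(cohort), 4))
```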
Projection Model—Training
Once the aDSI across the cohort had been calculated, the previously noted equation was rearranged and solved for a predicted Step 1 score:

Predicted Step 1 = 292 - (292 - CBSSA score) * aDSI ^ DO
A predicted Step 1 score was then generated for each CBSSA taken in the 2016 cohort and the difference of each predicted score from a student’s true Step 1 score was noted. Mean differences and standard deviations were calculated between predicted and actual Step 1 scores.
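As a sketch, the trained model reduces to a one-line function (using the fitted aDSI of 0.987527 reported in the Results; the example score is illustrative):

```python
# Trained projection model: Predicted Step 1 = 292 - (292 - CBSSA) * aDSI ** DO
ADSI = 0.987527   # fitted average daily score improvement factor (2016 cohort)
MAX_SCORE = 292   # assumed score ceiling

def predict_step1(cbssa_score: float, days_out: int) -> float:
    return MAX_SCORE - (MAX_SCORE - cbssa_score) * ADSI ** days_out

# A CBSSA of 240 taken 29 days out projects to roughly 256.
print(round(predict_step1(240, 29)))
```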
Projection Model—Validation
Once the model had been formulated on the data from 2016, it was validated using data from the 2017 survey. Once again, only CBSSA forms that were completed under “test-like” conditions were included in the analysis. This time around, all CBSSAs were included, even if they were the only practice exam taken by a student as no average calculations were being made. Every CBSSA exam taken generated a predicted Step 1 score as well as score intervals within 1 or 2 standard deviations of the average difference between predicted scores and actual scores, as generated from the 2016 training model. We evaluated how frequently each student’s true Step 1 score fell inside these intervals.
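The interval check can be sketched as follows, using the 2016 training statistics (mean difference -0.81, SD 10.2). Centering the intervals on the bias-adjusted prediction is our reading of the text, and the records below are illustrative, not the study's data:

```python
MEAN_DIFF, SD = -0.81, 10.2  # from the 2016 training model

def coverage(records):
    """records: (predicted, actual) pairs; returns the fractions of actual
    scores falling within 1 and 2 SDs of the adjusted prediction."""
    within1 = within2 = 0
    for predicted, actual in records:
        center = predicted + MEAN_DIFF  # adjust for the mean training error
        if abs(actual - center) <= SD:
            within1 += 1
        if abs(actual - center) <= 2 * SD:
            within2 += 1
    n = len(records)
    return within1 / n, within2 / n

sample = [(256, 249), (247, 249), (251, 249), (260, 240)]
print(coverage(sample))
```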
Additional Calculations
The prediction accuracy for each CBSSA form as well as the timing of CBSSA completion in relation to Step 1 were compared using mean absolute prediction errors (MAPEs) and ANOVAs. This was done to evaluate if there was any difference amongst different CBSSA forms when predicting Step 1 score, or if the timing of when a practice exam was taken affected predictive ability. For several CBSSA scores, we graphed predicted Step 1 score against days out from actual Step 1 exam (Fig. 1). The aDSI for 2017 was calculated and compared with the 2016 aDSI using Student’s t test.
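Both metrics can be sketched with only the standard library (group values here are illustrative, not the study's errors):

```python
def mape(predicted, actual):
    """Mean absolute prediction error, in raw score points."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative absolute errors grouped by CBSSA form.
errors_by_form = [[5, 8, 3, 9], [7, 10, 6, 8], [4, 6, 9, 5]]
print(round(mape([256, 247], [249, 249]), 1), round(one_way_f(errors_by_form), 2))
```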
Fig. 1.
Predicted scores as a function of days out from the Step 1 exam that a CBSSA was taken. Values in parentheses are points gained per day
Results
Survey Responses and Data
In the 2016 survey, 61 students responded with 43 (43/200; 21.5%) meeting inclusion criteria for analysis. This totaled 141 CBSSA forms with a mean of 3.3 (1.2) forms per respondent. The average USMLE Step 1 score from the 2016 cohort was 250.5 (14.3), average CBSSA score was 241.9 (15.1), and respondents reported an average study period of 54.1 (11.7) days.
In the 2017 survey, 43 students responded with 37 (37/205; 18.0%) having taken at least one CBSSA. The cohort completed 122 CBSSAs with a mean of 2.8 (1.7) forms per respondent. The average USMLE Step 1 score from the 2017 cohort was 250.3 (11.8) which was significantly different (p = 0.0003) from the average score at our institution that year—240 (17). The average CBSSA score in 2017 was 240.9 (15.6), and students reported an average study period of 56.8 (14.6) days. Students used similar study materials to previously reported cohorts: UWorld (98%), First Aid (95%), Pathoma (91%), and SketchyMicro (88%).
Prediction Model Training
The aDSI calculated from the 2016 data was 0.987527 (0.022525). This was used to create the predictive model: Predicted Step 1 = 292 - (292 - CBSSA) * 0.987527 ^ DO. Once generated, this model was applied to the same data it was created from to predict a Step 1 score from each CBSSA taken. The average difference between predicted and actual scores was -0.81 (10.2) and the MAPE was 7.8.
Prediction Model Validation
Of the 122 CBSSAs taken in 2017, 88 (72.1%) fell within 1 standard deviation of the predicted score and 118 (96.7%) fell within 2 standard deviations of the predicted score. There was a MAPE of 7.7. A sample of one subject’s data is shown in Table 3. Subsequently, CBSSA scores were divided by form and days out from the Step 1 examination and percentages included within one or two standard deviations are reported in Tables 4 and 5.
Table 3.
Sample of subject from 2017 validation data set
|  | Form | Score (scaled) | Days out (DO) | Reverse score (RS_CBSSA) | 2 SD low | 2 SD high | 1 SD low | 1 SD high | Projection |
|---|---|---|---|---|---|---|---|---|---|
| 1st CBSSA | 15 | 240 | 29 | 52 | 236 | 275 | 246 | 265 | 256 |
| 2nd CBSSA | 16 | 238 | 14 | 54 | 227 | 266 | 237 | 256 | 247 |
| 3rd CBSSA | 17 | 244 | 13 | 48 | 232 | 271 | 242 | 261 | 251 |
| 4th CBSSA | 19 | 257 | 6 | 35 | 240 | 279 | 250 | 269 | 260 |
| USMLE Step 1 | Score: 249 |  |  |  |  |  |  |  |  |
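As a consistency check (our own, not the authors'), applying the published model with aDSI 0.987527 to this subject's CBSSA scores and days out reproduces the Projection column in Table 3 to within rounding:

```python
ADSI, MAX_SCORE = 0.987527, 292

# (CBSSA score, days out, projection from Table 3)
rows = [(240, 29, 256), (238, 14, 247), (244, 13, 251), (257, 6, 260)]
for score, days_out, expected in rows:
    projection = MAX_SCORE - (MAX_SCORE - score) * ADSI ** days_out
    print(round(projection), expected)  # each pair should match
```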
Table 4.
CBSSA scores within one or two SD from actual Step 1—separated by form (2017 data)
| Form | Within 1 SD (%) | Within 2 SD (%) |
|---|---|---|
| NBME 19 | 14/22 (63.6) | 22/22 (100) |
| NBME 18 | 16/22 (72.7) | 21/22 (95.5) |
| NBME 17 | 22/27 (81.5) | 27/27 (100) |
| NBME 16 | 18/25 (72.0) | 25/25 (100) |
| NBME 15 | 13/16 (81.3) | 16/16 (100) |
| NBME 13 | 5/10 (50.0) | 7/10 (70.0) |
| Total | 88/122 (72.1) | 118/122 (96.7) |
Table 5.
CBSSA scores within one or two SD from Step 1—separated by days out (2017 data)
| Time frame | Within 1 SD (%) | Within 2 SD (%) |
|---|---|---|
| 1–7 days | 29/42 (69.0) | 42/42 (100) |
| 8–14 days | 29/38 (76.3) | 37/38 (97.4) |
| 15–21 days | 12/20 (60.0) | 17/20 (85.0) |
| 22+ days | 18/22 (81.8) | 22/22 (100) |
| Total | 88/122 (72.1) | 118/122 (96.7) |
Additional Calculations
Mean absolute prediction errors when CBSSAs were separated by form taken as well as by days out from Step 1 are reported in Tables 6 and 7. There were no significant differences in MAPEs between forms in either year (2016: F(5,135) = 1.499, p = 0.19; 2017: F(5,116) = 2.115, p = 0.07), nor was there a significant difference when separated by days out from Step 1 (2016: F(3,137) = 0.303, p = 0.82; 2017: F(3,118) = 1.039, p = 0.38). A graphic depiction of predicted Step 1 score as a function of days out from the exam was generated for CBSSA scores of 190, 210, 230, 250, and 270 (Fig. 1). Upon further review of the data, we also noted that of the 34 occasions where a true Step 1 score fell outside 1 standard deviation of the predicted score, the actual Step 1 score was higher than the predicted interval 30/34 (88.2%) times. Furthermore, of the 4 occasions where an actual Step 1 score fell more than 2 standard deviations from the predicted score, the actual score was higher than the predicted interval 3/4 (75%) times. The 2017 aDSI was calculated as 0.9794455 (0.02720648), which was not significantly different from the 2016 aDSI (p = 0.166).
Table 6.
Mean absolute prediction error—separated by form
| Form | MAPE 2016 | MAPE 2017 |
|---|---|---|
| NBME 19 | N/A | 7.8 |
| NBME 18 | 5.8 | 7.1 |
| NBME 17 | 8.8 | 7.5 |
| NBME 16 | 6.8 | 6.4 |
| NBME 15 | 8.2 | 7.3 |
| NBME 13 | 10.7 | 13.5 |
| NBME 12 | 9.6 | N/A |
| Total | 7.8 | 7.7 |
ANOVA: 2016: F(5,135) = 1.499, p = 0.19; Fcrit = 2.281; 2017: F(5,116) = 2.115, p = 0.07; Fcrit = 2.292
Table 7.
Mean absolute prediction error—separated by days out from exam
| Time frame | MAPE 2016 | MAPE 2017 |
|---|---|---|
| 1–7 days | 8.2 | 7.6 |
| 8–14 days | 7.7 | 7.3 |
| 15–21 days | 7.0 | 9.7 |
| 22+ days | 8.5 | 6.9 |
| Total | 7.8 | 7.7 |
ANOVA: 2016: F(3,137) = 0.303, p = 0.82; Fcrit = 2.67; 2017: F(3, 118) = 1.039, p = 0.38; Fcrit = 2.68
Discussion
This study presents a novel prediction model for the USMLE Step 1 examination. To our knowledge, this is the first validated model that predicts a range of USMLE scores from a single CBSSA taken under test-like conditions. We surmise that this information could be invaluable to students looking to gauge their readiness for a high-stakes examination in which high scores are becoming increasingly important in the residency match process [5–7]. The prediction model accounts for improvement over a dedicated study period, thus providing an important metric to students who take CBSSAs early in their exam preparation. The data above showed that ~70% of the time, a subject’s true Step 1 score fell within 1 standard deviation of the predicted score and ~95% of the time it fell within 2 standard deviations. While this mirrors the empirical (68–95–99.7%) rule, a true confidence interval cannot statistically be generated from these data. However, the validation data suggest it would be reasonable to conclude that ~70% and ~95% of all Step 1 scores would fall within 1 or 2 standard deviations, respectively, of the predicted score generated from our model.
This model was built using data from a cohort with an extremely competitive average Step 1 examination score from a single US medical school. While it could be argued this is a study limitation and limits the external validity of our study, it could also be viewed as appealing or more applicable to competitive students who are aspiring towards high board examination scores. Given the score profile of our cohorts, it may be inappropriate to apply this model—especially using the calculated aDSI—to students looking to predict passage of Step 1; this would require a larger study population with a wider range of scores.
Importantly, the predictive model performed no worse whether practice examinations were taken within 1 week of test day or more than 3 weeks out from the USMLE Step 1, suggesting the model can provide students valuable information several weeks prior to their exam. When MAPEs were compared between NBME forms, there was no difference in the performance of the prediction model, suggesting that no form was better than another at predicting USMLE Step 1 score.
Furthermore, post hoc analysis revealed that when actual Step 1 scores fell outside of the expected intervals, most of these students outscored their interval. This is especially important if the model is to be used to predict student readiness: while not statistically quantified, it suggests that the prediction intervals, if anything, trend toward underpredicting student performance. Such projections can help students decide earlier whether to reschedule an exam. A simple search of the USMLE website shows rescheduling fees for Step 1 based upon time out from the scheduled test date (Table 1) [14]. Within 30 days, students can no longer reschedule for free, and within 5 days of the exam, fees reach $121 in the USA and Canada and more than $500 outside them.
This study does have limitations. As mentioned earlier, this is a survey study of cohorts at one US medical school, and while the model has been internally validated, it may not be externally valid. Notably, our program allows a significant amount of dedicated study time for Step 1, during which almost all the CBSSA scores used in our analysis were taken. Furthermore, the students included in the cohort had an average Step 1 score higher than both the national average and our school’s average. Despite having more competitive scores, however, our study cohort reported using study materials similar to those found in the literature [3, 12], and it is unlikely that study habits differ significantly from program to program. Finally, while this study aims to give students a more accurate idea of the score they may achieve on test day so they can decide whether to reschedule their examination, it does not provide evidence that students would actually use the tool this way, nor that rescheduling would confer any benefit.
The USMLE recently announced that as a result of the Invitational Conference on USMLE Scoring (InCUS) meetings, the USMLE Step 1 examination will cease to report numerical scores starting no earlier than January 1, 2022 [17]. Given how recently this has been announced, it remains unclear how this will affect the landscape of medical student test-taking strategies. We expect that until the pass/fail reporting system goes into effect, study habits and test-taking strategies will remain largely unchanged and this prediction model will remain relevant. In the years following the switch, it will be interesting to see how students react to the changes, and how residency programs attempt to differentiate applicants with no numerical Step 1 score report. Perhaps the USMLE Step 2 Clinical Knowledge (Step 2 CK)—which will remain a scored exam—will rise in importance. If this occurs, we believe medical students would transfer practices traditionally used for Step 1 preparation to preparation for Step 2 CK, and thus this work could lay the foundation for Step 2 CK prediction models that prior to these recent changes would have seemed far less necessary. Only time will tell.
In conclusion, the novel predictive model generated and validated from these two data sets provides a viable option for predicting Step 1 score from a single CBSSA form. The ability to project an actual exam score as well as a reasonable range several days to weeks out from the exam may prove invaluable to medical students preparing for the USMLE Step 1. While we advise that students remain cautious when evaluating predictive models given the statistical variability associated with test scoring [16], this model may assist students when determining if they need to re-schedule their exam and ultimately save them from the higher price scale closer to their test date. Once pass/fail scoring is implemented, it remains to be seen how students and residency programs alike will respond, but this model could lay a foundation for further research directions once the dust settles.
Authors’ Contributions
All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Stephen Bigach, Robert Winkelman, and Jonathan Savakus. The first draft of the manuscript was written primarily by Stephen Bigach and all authors contributed significantly on previous versions of the manuscript. Dr. Klara Papp provided significant guidance throughout. All authors read and approved the final manuscript.
Data Availability
The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Compliance with Ethical Standards
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethics Approval
This study received approval from Case Western Reserve University IRB (IRB-2016-1616).
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Morrison CA, Ross LP, Fogle T, Butler A, Miller J, Dillon GF. Relationship between performance on the NBME comprehensive basic sciences self-assessment and USMLE step 1 for U.S. and Canadian medical school students. Acad Med. 2010;85(10 SUPPL). 10.1097/ACM.0b013e3181ed3f5c. [DOI] [PubMed]
- 2.Sawhill A, Butler A, Ripkey D, et al. Using the NBME self-assessments to project performance on USMLE step 1 and step 2: impact of test administration conditions. Acad Med. 2004;79:S55–S57. doi: 10.1097/00001888-200410001-00017. [DOI] [PubMed] [Google Scholar]
- 3.Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States medical licensing examination step 1 scores in residency selection. Acad Med. 2016;91(1):12–15. doi: 10.1097/ACM.0000000000000855. [DOI] [PubMed] [Google Scholar]
- 4.McGaghie WC, Cohen ER, Wayne DB. Are United States medical licensing exam step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Acad Med. 2011;86(1):48–52. doi: 10.1097/ACM.0b013e3181ffacdb. [DOI] [PubMed] [Google Scholar]
- 5.Makdisi G, Takeuchi T, Rodriguez J, Rucinski J, Wise L. How we select our residents–a survey of selection criteria in general surgery residents. J Surg Educ. 2011;68:67–72. doi: 10.1016/j.jsurg.2010.10.003. [DOI] [PubMed] [Google Scholar]
- 6.Alterman DM, Jones TM, Heidel RE, Daley BJ, Goldman MH. The predictive value of general surgery application data for future resident performance. J Surg Educ. 2011;68:513–518. doi: 10.1016/j.jsurg.2011.07.007. [DOI] [PubMed] [Google Scholar]
- 7.Schrock JB, Kraeutler MJ, Dayton MR, Mccarty EC. A cross-sectional analysis of minimum USMLE step 1 and 2 criteria used by orthopaedic surgery residency programs in screening residency applications. J Am Acad Orthop Surg. 2017;25(6):464–468. doi: 10.5435/JAAOS-D-16-00725. [DOI] [PubMed] [Google Scholar]
- 8.Swanson DB, Sawhill A, Holtzman KZ, et al. Relationship between performance on part I of the American Board of Orthopaedic Surgery certifying examination and scores on USMLE Steps 1 and 2. Acad Med. 2009;84(SUPPL. 10). 10.1097/ACM.0b013e3181b37fd2. [DOI] [PubMed]
- 9.Yousem IJ, Liu L, Aygun N, Yousem DM. United States Medical Licensing Examination Step 1 and 2 scores predict neuroradiology fellowship success. J Am Coll Radiol. 2016;13(4):438–44.e2. doi: 10.1016/j.jacr.2015.10.024. [DOI] [PubMed] [Google Scholar]
- 10.Williams RG. III: Use of NBME and USMLE examinations to evaluate medical education programs. Acad Med. 1993;68(10):748–752. doi: 10.1097/00001888-199310000-00004. [DOI] [PubMed] [Google Scholar]
- 11.Thadani RA, Swanson DB, Galbraith RM. A preliminary analysis of different approaches to preparing for the USMLE Step 1. Acad Med. 2000;75(10 SUPPL). 10.1097/00001888-200010001-00013. [DOI] [PubMed]
- 12.Burk-Rafel J, Santen SA, Purkiss J. Study behaviors and USMLE Step 1 performance: implications of a student self-directed parallel curriculum. Acad Med. 2017;92(11):S67–S74. doi: 10.1097/ACM.0000000000001916. [DOI] [PubMed] [Google Scholar]
- 13.Giordano C, Hutchinson D, Peppler R. A predictive model for USMLE Step 1 scores. Cureus. 2016. 10.7759/cureus.769. [DOI] [PMC free article] [PubMed]
- 14.United States Medical Licensing Examination | Rescheduling fees. https://www.usmle.org/apply/rescheduling-fees.html. Accessed January 27, 2020.
- 15.National Board of Medical Examiners ® NBME ® Comprehensive Basic Science Self-Assessment (CBSSA) score report sample. https://nbme.org/sites/default/files/2020-01/comp sample.pdf. Accessed January 27, 2020.
- 16.National Board of Medical Examiners. USMLE score interpretation guidelines.; 2019. https://www.usmle.org/pdfs/transcripts/USMLE_Step_Examination_Score_Interpretation_Guidelines.pdf. Accessed February 16, 2020.
- 17.United States Medical Licensing Examination | Invitational Conference on USMLE Scoring. https://www.usmle.org/inCus/#decision. Published 2020. Accessed February 16, 2020.