Abstract
The Medical Council of Canada Qualifying Exam (MCCQE) Part II aims to protect societal interests by examining recently graduated physicians using clinical scenarios with standardized patients. This position paper debates the role of the MCCQE Part II in the national licensing of physicians in Canada by focusing on the consequential validity evidence for this exam and considers future directions in light of contemporary developments in high stakes examinations. Specifically, this paper compares the MCCQE Part I and Part II in their ability to predict the future practice patterns of physicians and in their generalizability across specialties. In weighing the evidence, this paper considers commonly used counterarguments as well as the financial implications of this exam for both the candidates and the MCC. Finally, it concludes by providing recommendations for the future licensing of physicians in Canada. The available consequential validity evidence for the MCCQE Part II is limited. Though still limited, the evidence that the MCCQE Part I is the better predictor of future practice patterns is more robust than that for Part II. Combined with the lack of evidence that the absence of national licensing examinations leads to the graduation of substandard doctors, or that their introduction improves care, and with the shift away from assessment of learning towards assessment for learning, the maximum impact of the MCC on safeguarding the public's interests will lie in working closely with residency programs and specialty colleges to facilitate a robust assessment program of essential competencies and clinical skills during residency training and specialty certification.
Introduction
The Medical Council of Canada Qualifying Exam (MCCQE) Part II is an Objective Structured Clinical Examination (OSCE)-style exam taken by residents after a minimum of 12 months of postgraduate training and meant to assess the competence, knowledge, skills, and attitudes essential for entry into independent clinical practice. The Medical Council of Canada (MCC) does not issue licenses to successful candidates, and in the case of a failing grade a candidate may continue with their training; however, successful completion of both the MCCQE Part I and Part II exams is required for an unrestricted license by most provincial Colleges of Physicians and Surgeons as well as the College of Family Physicians of Canada. While the goal of the Part I exam is to test medical graduates' foundational knowledge at the end of medical school, Part II aims to evaluate, through direct observation of interactions with standardized patients, other competencies, such as professionalism, that cannot be captured during the Part I exam.
Many have questioned the additional value this exam provides over the current specialty certification exams, its limited applicability to different specialties, and the lack of evidence that it improves the quality of physicians in Canada.1-3 These critiques have frequently been met with counterarguments from the MCC.4,5 To date, however, this discussion has not weighed the existing evidence. At the time of writing this position paper, the MCC still administered the MCCQE Part II as a prerequisite for licensing, but it has since cancelled the exam for the foreseeable future given the issues with delivering it virtually during the pandemic. Despite the cancellation, the MCC may still replace this exam with new methods of assessment, pending the recommendations of the Assessment Innovation Task Force. Hence, lessons learned from decades of administering the MCCQE Part II may help shape the future of physician licensing in Canada through the potential development, implementation, and evaluation of new exam(s).
In this position paper, I would like to elevate the debate about the role of the MCCQE Part II in the national licensing of physicians in Canada by reviewing the available predictive validity evidence for this exam and by considering future directions in light of contemporary developments in high stakes examinations. Specifically, I will compare the MCCQE Part I and Part II in their ability to predict the future practice patterns of physicians and in their generalizability across specialties. In weighing the evidence, I will also consider commonly used counterarguments as well as the financial implications of this exam for both the candidates and the MCC. Finally, I will conclude by providing recommendations for the future licensing of physicians in Canada.
Historical context and the intended use for the exam
The MCC is bound by its duty to protect the safety of the public and to be responsive to the ever-changing roles of physicians. The 1912 "Roddick Bill" established the MCC and the first national standardized examination recognized across Canada.6 The actual licensing of physicians has remained with the individual provinces. Since 1912, the MCC national examinations have indeed evolved, yet they have been considered key to maintaining standards of practice where there has been large variation in training quality and few or no standards for training.
The stated goal of the MCCQE Part II is to assess the candidate's core abilities to apply medical knowledge, demonstrate clinical skills, and develop investigational and therapeutic clinical plans, as well as to demonstrate professional behaviours and attitudes at a level expected of a physician in independent practice in Canada.7 The MCCQE Part II is an OSCE-style exam in which residents, often in their 16th month of training (for the October sitting) or their 20th month (for the February sitting), must "pass a multiple-case standardized patient assessment, where patient and physician examiners observe and grade clinical and communication skills to predict a candidate's competence to practice".8 Completion of Part I and Part II of the MCCQE has become the basis for granting the full licentiate of the MCC.
Passing both examinations simplifies the road to full licensure for residents who eventually want to practice in their specialty in individual provinces. Again, while the MCC itself does not grant any licenses to practice, it plays a key role on the road to licensure for physicians in Canada. The MCC keeps a ledger of the candidates who have passed its national exams and provides this information to the provinces, which then certify physicians for an independent license once they have successfully completed all other requirements, as specified by each province. Gaining an independent license to practice in a specific province usually involves successful completion of a residency program and of a final certification exam in the physician's specialty, administered either by the Royal College of Physicians and Surgeons of Canada or the College of Family Physicians of Canada, in addition to successful completion of both MCC qualifying exams (MCCQE Part I and Part II).
Validity of the exam in the current context
Validity framework in high-stakes assessment
All high stakes examinations, such as the MCCQE Part II, should be held to a high standard of validity.9 Our contemporary understanding of "validity" has moved away from labelling a test or assessment as "valid" or "validated" toward combining multiple sources of validity evidence into a coherent validity argument.10 Validity evidence is important for public accountability, as a fundamental goal of high stakes examinations is to ensure that all practicing physicians meet pre-defined standards for clinical practice. In addition to Kane's argument-based validity framework, which has been widely accepted in medical education research,11 others have proposed that high stakes summative evaluations should also present evidence of validity-coherence, reproducibility-consistency, and equivalence.9 While all elements of validity for high-stakes assessment are important, they are not of equal weight for all stakeholders.9 From a patient or societal point of view, validity-coherence, meaning that "the results of an assessment are appropriate for a particular purpose as demonstrated by a coherent body of evidence",9 may be more important than the other components, because a reproducible and equivalent exam that does not serve the needs of the public, or has not been shown to be fit for purpose, will not be useful to society. As Bachman states, "tests are not developed and used in a value-free psychometric test-tube; they are virtually always intended to serve the needs of an educational system or society at large."12 Bachman argues that it is the test developers' "responsibility to provide as complete evidence as possible that the tests that are used are valid indicators of the abilities of interest and that these abilities are appropriate to the intended use, and then to insist that this evidence be used in the determination of test use."12 In the context of licensing examinations like the MCCQE Part II, the link between exam performance and "real world" outcomes, in other words, predictive validity, is therefore the most compelling evidence for the public.13,14
How does MCCQE Part II measure up?
The predictive validity evidence for the MCCQE Part II exam is limited to a handful of studies demonstrating an association between first-attempt failure on the MCCQE Part II and outcomes such as physicians' prescribing patterns for specific medications and patient complaints in practice.8,15,16 The MCC's own commissioned report, "The Quebec-Ontario Follow-up Study of the Association between Scores Achieved on the MCCQE Part II Examination and Performance in Clinical Practice," which followed graduates taking the MCCQE Part II between 1993 and 1996, found that communication subscores predicted the greatest number of practice outcomes, including college complaints and the clinical management and prevention of various chronic conditions (e.g., asthma, chronic hypertension).15 It is important to note that characteristics other than exam performance, such as not practicing as a locum or receiving a discipline flag, were stronger predictors of both inappropriate benzodiazepine and opioid prescribing and the number of complaints. Interestingly, the overall MCCQE Part I score was a better predictor of college complaints and unsatisfactory peer assessment than the overall MCCQE Part II score or the communication subscore.15 Moreover, the overall MCCQE Part I score predicted more outcome measures than the overall MCCQE Part II score or the communication subscore. In a related publication, Wenghofer et al. further found that MCCQE Part II results did not predict peer assessment results over and above the MCCQE Part I.17
Many residents in specialty programs would argue that the outcome measures chosen in this report are representative neither of the work they will be doing in practice nor of their current training. For example, an ophthalmologist will never order a screening mammogram, a psychiatrist will not examine a painful knee, nor will a radiologist prescribe opioids or benzodiazepines. With the exception of patient complaints or a negative peer review, it is almost impossible to find universal clinical skills that apply to all specialties and could therefore serve as an appropriate common standard for all practicing physicians. A recent content analysis of the MCC exams by Bordage et al. found that only 64% (59 of 92) of the physician-related practice indicators (PRINDs) that contribute to causing or preventing suboptimal care and adverse events were "behaviours or decisions expected of all physicians and suitable for assessment on a general medical examination."18 The MCCQE Part II tested on average 9.8 PRINDs, compared with an average of 32.2 and 18.4 PRINDs on the Part I knowledge and clinical decision-making components respectively, and only five percent of the Part II total test score was attributable to PRINDs.18 As shown previously, the evidence seems to favour the MCCQE Part I, rather than the MCCQE Part II, in its ability to evaluate aspects of a physician's future practice that are expected of all physicians.
One argument for a national licensing exam for all physicians in Canada has been that there is sufficient variability in the quality of training between programs in Canada, as well as internationally, to warrant such an exam.4 It can be argued that there may be greater variability between residents training in rural versus urban programs; however, evidence shows that urban and rural residents perform equally well on the MCCQE Part II.19 Rather than the location of medical training, evidence suggests that admissions and selection criteria may play a role in predicting both MCCQE Part I and Part II scores.20,21 Eva et al. showed that candidates accepted to medical schools across Canada using the Multiple Mini-Interview (MMI) scored higher on the MCCQE Part I and Part II.21 More importantly, the authors reported no significant differences in scores between medical schools on either exam, even though one argument for national licensing exams is the need for shared standards that all medical school graduates meet, given concerns about variability in the quality of medical training across institutions. Notably, the authors state that the vast majority (14 of 17) of medical schools in Canada currently use the MMI or a variation of it in their selection process. Other reported predictors of success on the MCCQE Part II include MCAT verbal reasoning scores and personal/professional characteristics.22 It remains to be seen whether the differences between admissions and selection processes, and subsequently the differences between exam scores, translate into meaningful practice differences and, importantly, into outcomes that are meaningful to patients and society.
In summary, there is limited evidence of the ability of the MCCQE Part II to predict future practice patterns. In fact, the evidence favours the MCCQE Part I in its generalizability across specialties and in the attribution of scores to future practice indicators and patterns. A further argument against the Part II exam is its substantial cost to candidates: the fee to complete the exam in 2021 was $2,780, and fees have risen continuously over time. The MCC also has a significant financial interest in the exam, which represents 30% of its total annual revenue.23 Without additional evidence linking physician performance on the MCCQE Part II to measures of care quality that are expected of, and attributable to, all practicing physicians in Canada regardless of specialty, it is reasonable to question whether the burdens of this examination outweigh any benefits to the public above and beyond those of the MCCQE Part I.
Performance on the MCCQE Part II
Based on the 2019 technical report, Canadian medical graduates (CMGs) training in a Canadian residency program had an average 89% pass rate on their first attempt and an average repeat pass rate of 69% to 85% in the 2018 and 2019 test groups.24 This gives an estimated overall pass rate for this group of 97% to 98%. In contrast, international medical graduates (IMGs), with or without Canadian postgraduate experience, had a much lower first-attempt pass rate of 55-62% over the same period, a repeat pass rate of 26-42%, and an overall pass rate of approximately 66% to 88%. This discrepancy points to an overwhelming majority of Canadian-trained candidates passing the exam, in contrast to internationally trained candidates. The lower IMG pass rates deserve closer attention. While there is some evidence that the performance of IMGs on the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge exam is associated with the quality of care they later provide, there is no Canadian evidence to support this.25 More troubling, however, is the potential for bias in the assessment of physicians who are racialized or identify as visible minorities, as evidenced in clerkship evaluations of undergraduate medical students26,27 as well as in the systemic barriers many IMGs face to practice in Canada.28 It should be noted that IMGs must pass the National Assessment Collaboration (NAC) Examination, a similar OSCE-style exam, in addition to the MCCQE Part I, before applying for residency positions in Canada. It is, therefore, uncertain whether the lower pass rates reflect a lower level of preparedness of IMGs for standardized examinations or a potential bias in their assessment.
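To illustrate how the overall estimate can be derived from the reported figures, consider a simplified calculation (my assumption for illustration, not the technical report's methodology) in which every candidate who fails the first attempt retakes the exam exactly once:

$$P_{\text{overall}} = P_{\text{first}} + (1 - P_{\text{first}}) \times P_{\text{repeat}}$$

For CMGs this gives $0.89 + 0.11 \times 0.69 \approx 0.97$ at the lower bound and $0.89 + 0.11 \times 0.85 \approx 0.98$ at the upper bound, consistent with the 97% to 98% estimate above.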
Current developments in medical education around national licensing of physicians
Not all countries require a national licensing examination for physician licensure. Some countries, including the Netherlands, grant full registration and licensure upon graduation from medical school without requiring additional training. Of six major countries (Australia, Canada, the United Kingdom, the Netherlands, Germany, and the United States), only three require centralized licensing examinations, and the US recently cancelled its USMLE Step 2 Clinical Skills OSCE-style exam altogether.29 Interestingly, there is no evidence that the absence of a national licensing examination leads to the graduation of substandard doctors, nor that the introduction of one leads to an improvement in care.30 In their systematic review, Archer et al. found that national licensing examinations overall lack validity evidence, particularly for the claims that licensure examinations improve patient safety or physicians' competence.31 Instead, national licensing examinations are more likely to reflect students' past performance than to predict future performance in practice.22,32 Schuwirth states that the value of national licensing examinations ultimately hinges on the favourable perceptions of the public: that society believes these exams protect public safety and ensure minimum competence of physicians.33
A question that remains worthy of deep consideration: are there alternatives to a national licensing exam for ensuring physician competence? On a conceptual level, Schuwirth has eloquently described the dilemmas that current conceptions of national licensing exams present in light of our current understanding of learning.33 Schuwirth describes three shifts in assessment in the context of Competency Based Medical Education (CBME): 1) a shift away from highly structured and standardized testing of knowledge and skills towards assessing professionalism, reflection, and communication; 2) a shift from assessment of learning (summative) to assessment for learning (formative), which emphasizes growth, reflection, and feedback; and 3) a shift from making decisions about competence based on single instruments towards programmatic assessment, which draws information about one's competence from a variety of assessment methods. These concepts have been crystallized in new assessment methods, such as Entrustable Professional Activities (EPAs), portfolios, and workplace-based evaluations, all of which involve multiple assessors and multiple assessments and aim to maximize physicians' ability to provide safe, effective, patient-centred care by capturing competencies applied in authentic work settings.34 Although high stakes point-in-time examinations, such as the MCCQE Part II, played an important role in the assessment-of-learning era prior to CBME, such exams will become less important in the new era of assessment for learning, where continuous feedback on one's performance drives learning and performance improvement. Additionally, Schuwirth argues that even formative assessment can be high stakes, recommending that national licensing bodies at least explore the option of using assessment for learning in their licensing examinations.33 Shifting the focus of licensing decisions towards assessment for learning and programmatic assessment will likely have a more positive impact on residents' reflection and learning than high stakes examinations. For example, the implementation of a formative progress test has been associated with national examination scores.35 The advantages of a progress test are that it can identify problems earlier and has more capacity for constructive feedback and for stimulating learning, in addition to cost savings and fewer concerns about standardization compared with an OSCE-style exam. In another example, the American Board of Internal Medicine and the American Board of Pediatrics have introduced longitudinal, low-stakes, flexible assessments as part of their maintenance of certification programs, which aim to provide regular feedback that allows test takers to improve their learning while improving performance. In sum, the use of formative assessment for national licensing offers several advantages over point-in-time high stakes assessments, such as the MCCQE Part II, for preparing reflective physicians who provide safer, high quality, patient-centred care.
Ways forward
Based on the discussion above, the following options could be considered for the national licensing examination process in Canada:
1. Based on the evidence that the MCCQE Part II does not predict future performance any better than the MCCQE Part I, it should be cancelled, leaving the MCCQE Part I as the sole MCC prerequisite for full licensure in Canada. Since the findings of this paper point to the MCCQE Part I as the better predictor of future practice patterns, and since all trainees must pass the MCCQE Part I to continue their residency training, dropping the MCCQE Part II as a prerequisite for full licensure would not compromise public safety.
2. If the MCCQE Part II is permanently cancelled, the MCC should work directly with the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada to ensure systematic assessment of essential clinical skills and professionalism on all specialty licensing exams, and with residency programs to ensure that high quality assessment of such skills takes place during residency training in addition to the summative assessments at the end of specialty training.
3. The MCC needs to develop strategies to help identify the relatively large number of IMGs, and the very small number of CMGs, who have difficulty passing the MCCQE Part II. For IMGs, one strategy could be to use feedback from the NAC examination to identify areas for improvement, additional support, or training. As with any high stakes OSCE, however, it is essential that the MCC ensure its assessments are equitable and unbiased towards visible minorities and candidates with non-native English accents.
4. If the MCCQE Part II is kept for both Canadian and international medical graduates, a modified version of the exam should be consistent with CBME models of assessment. As such, it should be formative and provide feedback directly to residency programs so that any deficient skills can be identified and addressed. In this case, the feedback provided by the exam should be relevant to the physician's future work regardless of specialty, which may mean extensively modifying the exam in close collaboration with key stakeholders and justifying, to candidates and the public, the high cost of administering such a national exam. If this approach is taken, the emphasis should be on the candidate to demonstrate that they have taken steps to reflect, learn, and close the gaps identified in their assessment before the end of their training.
Conclusion
The MCC's claim that the MCCQE Part II ensures public safety and physician competence is not supported by the available evidence that this exam, or national licensing exams in general, ensures either. As it stands, the evidence favours the MCCQE Part I as the better predictor of future practice patterns. The additional value of the MCCQE Part II may, however, lie in identifying candidates who could benefit from additional support or training. Furthermore, as the field of medical education and licensing shifts away from assessment of learning towards assessment for learning and programmatic assessment, the maximum value of the MCC will lie in working closely with residency programs, the Royal College of Physicians and Surgeons of Canada, and the College of Family Physicians of Canada to facilitate a robust assessment program of essential competencies and clinical skills during residency training and specialty certification.
Acknowledgements
The author would like to thank Dr. Allison Brown at the University of Calgary and Dr. Saad Chahine at Queen’s University for their insightful feedback on the position paper in preparation for its publication.
Conflicts of Interest
No conflicts of interest are declared.
Funding
There was no funding.
References
1. Kennedy B. The Part II examination: political exercise or national standard? CMAJ. 1995;152(8):1183-1184. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1337789/pdf/cmaj00068-0013.pdf
2. Lougheed T. Is it time to rethink the MCCQE Part II? Can Med Educ J. 2016;7(1):e87-88. https://doi.org/10.36834/cmej.36630
3. Benusic M. Should the MCCQE II exams go ahead? CMAJ Blogs; 2019.
4. Bowmer MI. Response to: Is it time to rethink the MCCQE Part II? Can Med Educ J. 2016;7(1):e89-91. https://doi.org/10.36834/cmej.36722
5. Topps M. Why the MCC qualifying examination part II still matters. CMAJ Blogs; 2019. https://cmajblogs.com/why-the-mcc-qualifying-examination-part-ii-still-matters/
6. Sir Thomas Roddick: his work in medicine and public life. JAMA. 1940;115(9):802-803. https://doi.org/10.1001/jama.1940.02810350146035
7. Reda J. The new LMCC. CMAJ. 1992;146(1):10-11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1488208/pdf/cmaj00290-0012a.pdf
8. Tamblyn R, Abrahamowicz M, Dauphinee D, et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA. 2007;298(9):993-1001. https://doi.org/10.1001/jama.298.9.993
9. Norcini J, Anderson MB, Bollela V, et al. 2018 consensus framework for good assessment. Med Teach. 2018;40(11):1102-1109. https://doi.org/10.1080/0142159X.2018.1500016
10. Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Med Educ. 2015;49(6):560-575. https://doi.org/10.1111/medu.12678
11. Kane M. Validating score interpretations and uses. Language Testing. 2012;29(1):3-17. https://doi.org/10.1177/0265532211417210
12. Bachman LF. Fundamental considerations in language testing. Oxford, England: Oxford University Press; 1990.
13. Boulet JR. Establishing the validity of licensing examination scores. J Grad Med Educ. 2019;11(5):527-529. https://doi.org/10.4300/JGME-D-19-00611.1
14. Burdick WP, Boulet JR, LeBlanc KE. Can we increase the value and decrease the cost of clinical skills assessment? Acad Med. 2018;93(5):690-692. https://doi.org/10.1097/ACM.0000000000001867
15. Tamblyn R, Abrahamowicz M, Bartlett G, Winslade N, et al. The Quebec-Ontario follow-up study of the association between scores achieved on the MCCQE Part II examination and performance in clinical practice. Medical Council of Canada; 2009. https://h5a9c8a9.stackpathcdn.com/media/Tamblyn_Score-Association_MCCQE-Part-II_Clinical-Practice-Performance_2009.pdf
16. De Champlain A, Tian F, Ashworth N, Kain N, Wiebe D. Do national licensing examination scores predict patient complaints as well as physician opioid and benzodiazepine prescribing patterns? Medical Council of Canada; 2018. https://h5a9c8a9.stackpathcdn.com/media/IAMRA-2018Poster-A.DeChamplain.pdf
17. Wenghofer E, Klass D, Abrahamowicz M, et al. Doctor scores on national qualifying examinations predict quality of care in future practice. Med Educ. 2009;43(12):1166-1173. https://doi.org/10.1111/j.1365-2923.2009.03534.x
18. Bordage G, Meguerditchian AN, Tamblyn R. Practice indicators of suboptimal care and avoidable adverse events: a content analysis of a national qualifying examination. Acad Med. 2013;88(10):1493-1498. https://doi.org/10.1097/acm.0b013e3182a356af
19. McKendry RJ, Busing N, Dauphinee DW, Brailovsky CA, Boulais AP. Does the site of postgraduate family medicine training predict performance on summative examinations? A comparison of urban and remote programs. CMAJ. 2000;163(6):708-711. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC80166/pdf/20000919s00013p708.pdf
20. Dore KL, Reiter HI, Kreuger S, Norman GR. CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores. Adv Health Sci Educ Theory Pract. 2017;22(2):327-336. https://doi.org/10.1007/s10459-016-9739-9
21. Eva KW, Reiter HI, Rosenfeld J, Trinh K, Wood TJ, Norman GR. Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA. 2012;308(21):2233-2240. https://doi.org/10.1001/jama.2012.36914
22. Kulatunga-Moruzi C, Norman GR. Validity of admissions measures in predicting performance outcomes: the contribution of cognitive and non-cognitive dimensions. Teach Learn Med. 2002;14(1):34-42. https://doi.org/10.1207/S15328015TLM1401_9
23. Medical Council of Canada. Annual report 2019-2020. Ottawa; 2020. https://h5a9c8a9.stackpathcdn.com/media/MCC-Annual-Report-2019-2020.pdf
24. Medical Council of Canada. MCCQE Part II annual technical report. Ottawa; 2019. https://h5a9c8a9.stackpathcdn.com/media/MCCQE-Part-II-Annual-Technical-Report-2019.pdf
25. Norcini JJ, Boulet JR, Opalek A, Dauphinee WD. The relationship between licensing examination performance and the outcomes of care by international medical school graduates. Acad Med. 2014;89(8):1157-1162. https://doi.org/10.1097/acm.0000000000000310
26. Tatem GB, Gardner-Gray J, Standifer B, Alexander K. While you don't see color, I see bias: identifying barriers in access to graduate medical education training. ATS Sch. 2021;2(4):544-555. https://doi.org/10.34197/ats-scholar.2020-0134PS
27. Low D, Pollack SW, Liao ZC, et al. Racial/ethnic disparities in clinical grading in medical school. Teach Learn Med. 2019;31(5):487-496. https://doi.org/10.1080/10401334.2019.1597724
28. MacFarlane MM. When a Canadian is not a Canadian: marginalization of IMGs in the CaRMS match. Can Med Educ J. 2021;12(4):132-140. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463232/pdf/CMEJ-12-132.pdf
29. Weggemans MM, van Dijk B, van Dooijeweert B, Veenendaal AG, Ten Cate O. The postgraduate medical education pathway: an international comparison. GMS J Med Educ. 2017;34(5):Doc63. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5704606/pdf/JME-34-63.pdf
30. Harden RM. Five myths and the case against a European or national licensing examination. Med Teach. 2009;31(3):217-220. https://doi.org/10.1080/01421590902741155
31. Archer J, Lynn N, Coombes L, et al. The impact of large scale licensing examinations in highly developed countries: a systematic review. BMC Med Educ. 2016;16(1):212. https://doi.org/10.1186/s12909-016-0729-7
32. Babla K, Crampton P, Kronfli M. National licensing examinations: what are they good for? Clin Teach. 2020;17(3):323-325. https://doi.org/10.1111/tct.13083
33. Schuwirth L. National licensing examinations, not without dilemmas. Med Educ. 2016;50(1):15-17. https://doi.org/10.1111/medu.12891
34. Lockyer J, Carraccio C, Chan MK, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39(6):609-616. https://doi.org/10.1080/0142159x.2017.1315082
35. Karay Y, Schauber SK. A validity argument for progress testing: examining the relation between growth trajectories obtained by progress tests and national licensing examinations using a latent growth curve approach. Med Teach. 2018;40(11):1123-1129. https://doi.org/10.1080/0142159x.2018.1472370
