Canadian Medical Education Journal. 2022 Aug 26;13(4):23–29. doi: 10.36834/cmej.73894

Licensing exams in Canada: a closer look at the validity of the MCCQE Part II

Examen d’aptitude du Conseil médical du Canada : un regard approfondi sur la validité de l’EACMC, partie II

Alina Smirnova 1,2
PMCID: PMC9441120  PMID: 36091734

Abstract

The Medical Council of Canada Qualifying Examination (MCCQE) Part II aims to protect societal interests by examining recently graduated physicians using clinical scenarios with standardized patients. This position paper debates the role of the MCCQE Part II in the national licensing of physicians in Canada by focusing on the consequential validity evidence for this exam and considering future directions through a discussion of contemporary developments in high stakes examinations. Specifically, this paper compares the MCCQE Part I and Part II in their ability to predict future practice patterns of physicians and in their generalizability across specialties. In weighing up the evidence, this paper considers commonly used counterarguments as well as the financial implications of this exam for both the candidates and the MCC. Finally, it concludes by providing recommendations for the future licensing of physicians in Canada. The available consequential validity evidence for the MCCQE Part II is limited. Though still limited itself, the evidence for the MCCQE Part I more robustly supports it as a predictor of future practice patterns compared with Part II. Combined with the lack of evidence that the absence of national licensing examinations leads to the graduation of substandard doctors, or that their introduction leads to an improvement of care, and the shift away from assessment of learning towards assessment for learning, the maximum impact of the MCC on safeguarding the public's interests will lie in working closely with residency programs and specialty colleges to facilitate a robust assessment program of essential competencies and clinical skills during residency training and specialty certification.

Introduction

The Medical Council of Canada Qualifying Examination (MCCQE) Part II is an Objective Structured Clinical Examination (OSCE)-style exam, taken by residents after a minimum of 12 months of postgraduate training, that is meant to assess the competence, knowledge, skills, and attitudes essential for entry into independent clinical practice. The Medical Council of Canada (MCC) does not issue licenses to successful candidates, and in the case of a failing grade a candidate may continue with their training; however, successful completion of both the MCCQE Part I and Part II exams is required for an unrestricted license by most provincial Colleges of Physicians and Surgeons as well as the College of Family Physicians of Canada. While the Part I exam's goal is to test medical graduates' foundational knowledge at the end of medical school, Part II aims to evaluate, through direct observation of interactions with standardized patients, competencies such as professionalism that cannot be captured during the Part I exam.

Many have questioned the additional value that this exam provides beyond the current specialty certification exams, its limited applicability to different specialties, and the lack of evidence that it improves the quality of physicians in Canada.1-3 These critiques have frequently been met with counterarguments from the MCC.4,5 To date, however, this discussion has not weighed up the existing evidence. At the time of writing this position paper, the MCC still administered the MCCQE Part II as a prerequisite for licensing, but it eventually cancelled the exam for the foreseeable future given the issues with its virtual delivery during the pandemic. Despite the cancellation, the MCC may still replace this exam with new methods of assessment, pending the recommendations of its Assessment Innovation Task Force. Hence, any lessons learned from decades of MCCQE Part II administration may help shape the future of physician licensing in Canada through the potential development, implementation, and evaluation of new exam(s).

In this position paper, I aim to elevate the debate about the role of the MCCQE Part II in the national licensing of physicians in Canada by reviewing the available predictive validity evidence for this exam and by considering future directions through a discussion of contemporary developments in high stakes examinations. Specifically, I will compare the MCCQE Part I and Part II in their ability to predict future practice patterns of physicians and in their generalizability across specialties. In weighing up the evidence, I will also consider commonly used counterarguments as well as the financial implications of this exam for both the candidates and the MCC. Finally, I will conclude by providing recommendations for the future licensing of physicians in Canada.

Historical context and the intended use for the exam

The MCC is bound by its mandate to protect the safety of the public and to be responsive to the ever-changing roles of physicians. The 1912 “Roddick Bill” established the MCC and the first national standardized examination to be recognized across Canada.6 The actual licensing of physicians has remained with the individual provinces. Since 1912, the MCC national examinations have evolved considerably, yet they have been considered key to maintaining standards of practice at times when there was large variation in training quality and few or no standards for training.

The stated goal of the MCCQE Part II is to assess the candidate’s core abilities to apply medical knowledge, demonstrate clinical skills, develop investigational and therapeutic clinical plans, and demonstrate professional behaviours and attitudes at the level expected of a physician in independent practice in Canada.7 The MCCQE Part II is an OSCE-style exam in which residents, often in their 16th month of training (for the October sitting) or their 20th month (for the February sitting), must “pass a multiple-case standardized patient assessment, where patient and physician examiners observe and grade clinical and communication skills to predict a candidate’s competence to practice”.8 Completion of Parts I and II of the MCCQE has become the basis for granting the full licentiate of the MCC.

Passing both examinations simplifies the road to full licensure for residents who eventually want to practice their specialty in individual provinces. Again, while the MCC itself does not issue any licenses to practice, it plays a key role on the road to licensure for physicians in Canada. The MCC keeps a ledger of the candidates who have passed its national exams and provides this information to the provinces, which then grant physicians an independent license once they have successfully completed all other requirements, as specified by each province. Gaining an independent license to practice in a specific province usually involves successful completion of a residency program and of a final certification exam in the physician’s specialty, administered by either the Royal College of Physicians and Surgeons of Canada or the College of Family Physicians of Canada, in addition to successful completion of both MCC qualifying exams (MCCQE Part I and Part II).

Validity of the exam in the current context

Validity framework in high-stakes assessment

All high stakes examinations, such as the MCCQE Part II, should be held to a high standard of validity.9 Our contemporary understanding of “validity” has evolved away from considering a test or assessment as “valid” or “validated” towards recognizing that multiple sources of validity evidence must be combined to form a coherent validity argument.10 Validity evidence is important for public accountability, as a fundamental goal of high stakes examinations is to ensure that all practicing physicians meet pre-defined standards for clinical practice. In addition to Kane’s argument-based validity framework, which has been widely accepted in medical education research,11 others have proposed that high stakes summative evaluations should also present evidence of validity-coherence, reproducibility-consistency, and equivalence.9 While all elements of validity for high-stakes assessment are important, they are not of equal weight for all stakeholders.9 From a patient or societal point of view, validity-coherence, meaning that “the results of an assessment are appropriate for a particular purpose as demonstrated by a coherent body of evidence”,9 may be more important than the other components, because a reproducible and equivalent exam that does not serve the needs of the public, or has not been shown to be fit for purpose, will not be useful to society. As Bachman states, “tests are not developed and used in a value-free psychometric test-tube; they are virtually always intended to serve the needs of an educational system or society at large.”12 Bachman argues that it is the test developers’ “responsibility to provide as complete evidence as possible that the tests that are used are valid indicators of the abilities of interest and that these abilities are appropriate to the intended use, and then to insist that this evidence be used in the determination of test use.”12 In the context of licensing examinations like the MCCQE Part II, the link between exam performance and “real world” outcomes, in other words predictive validity, is therefore the most compelling evidence for the public.13,14

How does MCCQE Part II measure up?

The predictive validity evidence for the MCCQE Part II is limited to only a handful of studies that demonstrate an association between first-attempt failure on the exam and outcomes such as physicians’ prescribing patterns for specific medications and patient complaints in practice.8,15,16 The MCC’s own commissioned report, “The Quebec-Ontario Follow-up Study of the Association between Scores Achieved on the MCCQE Part II Examination and Performance in Clinical Practice,” which followed graduates taking the MCCQE Part II between 1993 and 1996, found communication subscores to predict the most outcomes in practice, including college complaints and the clinical management and prevention of various chronic conditions (e.g., asthma, chronic hypertension).15 It is important to note that characteristics other than exam performance, such as not practicing as a locum or receiving a discipline flag, were found to be stronger predictors of both inappropriate benzodiazepine and opioid prescribing and the number of complaints. Interestingly, the overall MCCQE Part I score was a better predictor of college complaints and unsatisfactory peer assessment than the overall MCCQE Part II score or the communication subscore.15 Moreover, the overall MCCQE Part I score predicted more outcome measures than the overall MCCQE Part II score or the communication subscore. In a related publication, Wenghofer et al. further found that MCCQE Part II results did not predict peer assessment results any better than the MCCQE Part I.17

Many residents in specialty programs would argue that the outcome measures chosen in this report are representative neither of the work they will be doing in practice nor of their current training. For example, an ophthalmologist will never order a screening mammogram, a psychiatrist will not examine a painful knee, nor will a radiologist prescribe opioids or benzodiazepines. With the exception of patient complaints or a negative peer review, it is almost impossible to find universal clinical skills that apply to all specialties and could therefore serve as an appropriate common standard that all practicing physicians should be capable of meeting. A recent content analysis of the MCC exams by Bordage et al. found that only 64% (59 out of 92) of physician-related practice indicators (PRINDs), which contribute to causing or preventing suboptimal care and adverse events, were “behaviours or decisions expected of all physicians and suitable for assessment on a general medical examination.”18 The MCCQE Part II tested on average 9.8 PRINDs, compared with the average 32.2 and 18.4 PRINDs tested on the Part I knowledge and clinical decision-making components respectively, with only five percent of the Part II total test score being attributable to PRINDs.18 As shown previously, the evidence thus seems to favour the MCCQE Part I, rather than the MCCQE Part II, in its ability to evaluate aspects of future practice that are expected of all physicians.

One argument for the need for a national licensing exam for all physicians in Canada has been that there is sufficient variability in the quality of training between programs in Canada, as well as internationally, to warrant such an exam.4 It could be argued that there may be greater variability between residents training in rural versus urban programs; however, evidence shows that both urban and rural residents perform equally well on the MCCQE Part II.19 Rather than the location of medical training, evidence suggests that admissions and selection criteria may play a role in predicting both MCCQE Part I and Part II scores.20,21 Eva et al. showed that candidates accepted to medical schools across Canada using the Multiple Mini-Interview (MMI) scored higher on the MCCQE Part I and Part II.21 More importantly, although one argument for national licensing exams is the need for shared standards that all medical school graduates meet, given concerns about variability in the quality of medical training across institutions, the authors reported no significant differences in scores between medical schools on either exam. Notably, the authors state that the vast majority (14 of 17) of medical schools in Canada currently use the MMI or a variation of it in their selection process. Other reported predictors of success on the MCCQE Part II include MCAT verbal reasoning scores and personal/professional characteristics.22 It remains to be seen whether the differences between admissions and selection processes, and subsequently the differences between exam scores, translate into meaningful practice differences and, importantly, outcomes that are meaningful to patients and society.

In summary, there is limited evidence of the predictive ability of the MCCQE Part II for future practice patterns. In fact, the evidence favours the MCCQE Part I in its generalizability across specialties and in the attribution of its scores to future practice indicators and patterns. Further arguments against the Part II exam include its substantial cost to candidates: the fee to complete the exam in 2021 was $2,780, and fees have risen continuously over time. The exam also represents a significant financial interest for the MCC, accounting for 30% of its total annual revenue.23 Without additional evidence linking physician performance on the MCCQE Part II to measures of care quality that are expected of, and attributable to, all practicing physicians in Canada regardless of specialty, it is reasonable to question whether the burdens of this examination outweigh any additional benefits to the public above and beyond those of the MCCQE Part I.

Performance on the MCCQE Part II

Based on the 2019 technical report, Canadian medical graduates training in a Canadian residency program had an average 89% pass rate on their first attempt and an average 69% to 85% repeat pass rate across the 2018 and 2019 test groups.24 This would give an estimated overall pass rate for this group of 97% to 98%. By contrast, international medical graduates (IMGs), with or without Canadian postgraduate experience, had a much lower first-attempt pass rate of 55-62% in the same period, a 26-42% repeat pass rate, and an overall pass rate of approximately 66% to 88%. This discrepancy points to an overwhelming majority of Canadian-trained candidates passing the exam, in contrast to internationally trained candidates. The lower IMG pass rates deserve closer attention. While there is some evidence that the performance of IMGs on the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge exam is associated with the quality of care they provide in the future, there is no Canadian evidence to support this.25 More troubling, however, is the potential for bias in the assessment of physicians who are racialized or identify as visible minorities, as evidenced in clerkship evaluations of undergraduate medical students26,27 as well as in the systemic barriers many IMGs face to practicing in Canada.28 It should be noted that IMGs must pass the National Assessment Collaboration (NAC) Examination, a similar OSCE-style exam, in addition to the MCCQE Part I, before applying for residency positions in Canada. It is, therefore, uncertain whether the lower pass rates reflect a lower level of preparedness of IMGs for standardized examinations or a potential bias in their assessment.
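To make the arithmetic behind these overall estimates explicit, a simple back-of-the-envelope model (my own reconstruction, assuming a single repeat attempt; the technical report may aggregate attempts differently) combines the first-attempt and repeat pass rates:

\[
P_{\text{overall}} = P_{\text{first}} + (1 - P_{\text{first}}) \times P_{\text{repeat}}
\]

For Canadian graduates this yields \(0.89 + 0.11 \times 0.69 \approx 0.97\) and \(0.89 + 0.11 \times 0.85 \approx 0.98\), matching the 97% to 98% range above. For IMGs, a single repeat gives \(0.55 + 0.45 \times 0.26 \approx 0.67\) at the low end, while the 88% upper bound presumably reflects more than one repeat attempt (for instance, two repeats at the higher rates give \(0.62 + 0.38 \times 0.42 + 0.38 \times 0.58 \times 0.42 \approx 0.87\)).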

Current developments in medical education around national licensing of physicians

Not all countries require a national licensing examination for physician licensure. Some countries, including the Netherlands, grant full registration and licensure upon graduation from medical school without requiring additional training. Of six major countries (Australia, Canada, the United Kingdom, the Netherlands, Germany, and the United States), only three require centralized licensing examinations, with the US recently cancelling its USMLE Step 2 Clinical Skills OSCE-style exam altogether.29 Interestingly, there is no evidence that the absence of a national licensing examination leads to the graduation of substandard doctors, nor that the introduction of one leads to an improvement of care.30 In their systematic review, Archer et al. found that national licensing examinations generally lack validity evidence, particularly for the claims that licensure examinations improve patient safety or physicians’ competence.31 Instead, national licensing examinations are more likely to reflect students’ past performance than to predict future performance in practice.22,32 Schuwirth states that the value of national licensing examinations ultimately hinges on the favourable perceptions of the public: that society believes these exams protect public safety and ensure the minimum competence of physicians.33

A question that remains worthy of deep consideration is whether there are alternatives to a national licensing exam for ensuring physician competence. On a conceptual level, Schuwirth has eloquently described the dilemmas that current conceptions of national licensing exams present in the context of our current understanding of learning.33 Schuwirth describes three shifts in assessment in the context of Competency Based Medical Education (CBME): 1) a shift away from highly structured and standardized testing of knowledge and skills towards assessment of professionalism, reflection, and communication; 2) a shift from assessment of learning (summative) to assessment for learning (formative), which emphasizes growth, reflection, and feedback; and 3) a shift from making decisions about competence based on single instruments towards programmatic assessment, which draws information about one’s competence from a variety of assessment methods. These concepts have been crystallized in new assessment methods, such as Entrustable Professional Activities (EPAs), portfolios, and workplace-based evaluations, all of which involve multiple assessors and multiple assessments with the goal of maximizing physicians’ ability to provide safe, effective, patient-centred care by capturing competencies applied in authentic work settings.34 Although high stakes point-in-time examinations such as the MCCQE Part II played an important role in the assessment-of-learning era prior to CBME, such exams will become less important in the new era of assessment for learning, where continuous feedback on one’s performance drives learning and performance improvement. Additionally, Schuwirth argues that even formative assessment can be high stakes, recommending that national licensing bodies at least explore the option of using assessment for learning in their licensing examinations.33 Shifting the focus of licensing decisions towards assessment for learning and programmatic assessment will likely have a more positive impact on residents’ reflection and learning than high stakes examinations. For example, the implementation of a formative progress test has been associated with national examination scores.35 The advantages of a progress test are that it can identify problems earlier and has more capacity for constructive feedback and for stimulating learning, in addition to cost savings and fewer concerns about standardization compared with an OSCE-style exam. In another example, the American Board of Internal Medicine and the American Board of Pediatrics have introduced longitudinal, low-stakes, flexible assessments as part of their maintenance of certification programs, which aim to provide regular feedback that allows test takers to improve their learning while improving performance. In sum, the use of formative assessment for national licensing provides several advantages over point-in-time high stakes assessments, such as the MCCQE Part II, for preparing reflective physicians who provide safer, high quality, patient-centred care.

Ways forward

Based on the discussion above, the following options could be considered for the national licensing examination process in Canada:

  1. Based on the evidence that the MCCQE Part II does not predict future performance any better than the MCCQE Part I, it should be cancelled in favour of leaving the MCCQE Part I as the sole examination prerequisite for full licensure in Canada. Given that the findings of this paper point to the MCCQE Part I being the better predictor of future practice patterns, and since all trainees must pass the MCCQE Part I in order to continue their residency training, it would be safe to drop the MCCQE Part II entirely as a prerequisite for full licensure without compromising public safety.

  2. If the MCCQE Part II is permanently cancelled, the MCC should work directly with the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada to ensure systematic assessment of essential clinical skills and professionalism on all specialty licensing exams, and with residency programs to ensure that high quality assessment of these skills takes place during residency training in addition to the summative assessments at the end of specialty training.

  3. The MCC needs to develop strategies to help identify the relatively large number of IMGs, and the very small number of CMGs, who have difficulty passing the MCCQE Part II. For IMGs, one strategy could be to use feedback from the NAC examination to help identify areas for improvement, additional support, or training. As for any high stakes OSCE, however, it is essential that the MCC ensure its assessments are equitable and unbiased towards visible minorities and candidates with non-native English accents.

  4. If the MCCQE Part II is kept for both Canadian and international medical graduates, a modified version of the exam should be consistent with CBME models of assessment. As such, it should be formative and provide feedback directly to residency programs in order to identify and address any deficient skills. In this case, the feedback provided by the exam should be relevant to the physician’s future work regardless of specialty, which may mean extensively modifying the exam in close collaboration with key stakeholders and justifying the high cost of administering such a national exam to candidates and the public. If this approach is taken, the emphasis should be on candidates to demonstrate that they have taken steps to reflect, learn, and close the gaps identified in their assessment before the end of their training.

Conclusion

The MCC’s claim that the MCCQE Part II ensures public safety and physician competence is not supported by the available evidence for either this exam or national licensing exams in general. As it stands, the evidence favours the MCCQE Part I as the better predictor of future practice patterns. The additional value of the MCCQE Part II may, however, lie in identifying candidates who could benefit from additional support or training. Furthermore, as the field of medical education and licensing shifts away from assessment of learning towards assessment for learning and programmatic assessment, the maximum value of the MCC will lie in working closely with residency programs, the Royal College of Physicians and Surgeons of Canada, and the College of Family Physicians of Canada to facilitate a robust assessment program of essential competencies and clinical skills during residency and specialty certification.

Acknowledgements

The author would like to thank Dr. Allison Brown at the University of Calgary and Dr. Saad Chahine at Queen’s University for their insightful feedback on the position paper in preparation for its publication.

Conflicts of Interest

No conflicts of interest are declared.

Funding

There was no funding.

References


