A serious look at how educational innovations are disseminated may give the thoughtful observer cause to question the pervasiveness of the “scientific method.” Sometimes it seems that education moves from one fad to the next. In my now longish career, I have seen many educational methods come and go: patient-management problems, modified essay questions, behavioural objectives, learning styles and more. This decade's “flavours of the month” — reflective practice, e-learning and high-fidelity simulations — appear to be no more evidence-based than all those that have come (and gone) before. Sadly, although good evidence in support of a particular educational innovation may exist, it is rarely instrumental in decisions to adopt that innovation.
Nowhere is this pattern more evident than with the locally grown phenomenon, problem-based learning. In this radically different approach to medical education, learning is driven by challenging, open-ended problems; students work in small groups; learning is facilitated by a tutor; courses do not exist; and lectures are minimal. Problem-based learning originated at McMaster University in Hamilton, Ontario, in the late 1960s. The method's “founding fathers” were an iconoclastic group of physicians and basic scientists from the Toronto–Hamilton area who were recruited by the school of medicine's first dean, John Evans. They all shared a negative view of their undergraduate experiences and thought they could do better. Their goals were straightforward: in the words of Bill Walsh, the first associate dean of education, “All we want is for them to get an MD and have some fun doing it.” Within a very few years, we had evidence that this was true.
In the meantime, to everyone's surprise, the method caught on like wildfire. Within a few years there were problem-based learning curricula in the Netherlands, Australia, Israel and the United States. And the method has continued to spread: now, several hundred schools profess to offer some form of problem-based learning.
All this happened without any really convincing evidence that problem-based learning made much difference in terms of learning outcomes. As Koh and colleagues1 point out in this issue of CMAJ, 2 highly cited systematic reviews were done in the early 1990s, but these reviews found more similarities than differences in outcomes, particularly in licensing exam scores, among graduates from problem-based learning and traditional curricula. There were some indications that graduates of problem-based learning curricula were more caring and compassionate than graduates of traditional curricula;2 however, there was always a concern that these findings may have been related to selection bias during the admissions process, a consequence of a deliberate attempt to select students with specific personal characteristics.
The study by Koh and colleagues provides a significant contribution because the authors systematically reviewed all of the studies in medicine linking problem-based learning to outcomes. One critical inclusion criterion was that each study had to include a control group comprising graduates of a traditional curriculum. Although Koh and colleagues have not ruled out selection bias entirely, we can be confident that the differences in psychosocial outcomes were not a consequence of the factors that confounded earlier studies, such as different selection criteria at admissions or other institutional differences. A second critical methodological point of their review was the decision to analyze self- and observer assessments of outcomes separately. As the authors correctly point out, the literature on self-assessment so consistently demonstrates the lack of relation between self-assessed and observed abilities that it makes little sense to rely on such judgments. More's the pity that proponents of continuing professional development and maintenance of certification continue to place great stock in physicians' abilities to identify their own weaknesses.
Koh and colleagues did find differences in outcomes in just the place where we might have hoped. Compared with graduates of traditional curricula, graduates of problem-based learning curricula had better diagnostic and communication skills; had a greater appreciation for the cultural aspects of care as well as legal and ethical issues; demonstrated greater responsibility; and were better able to cope with uncertainty. Given current attention to cultural and ethical issues, as reflected in the CanMEDS Physician Competency Framework,3 the Medical Council of Canada's C2LEO (Cultural–Communication, Legal, Ethical and Organizational Aspects of Medicine) objectives and the usual concern about poor communication skills demonstrated in complaints to medical regulatory authorities, it bodes well for graduates of problem-based learning curricula that they are doing well in these high-priority areas.
One concern with the study by Koh and colleagues is that their designation of the strength of evidence was based on replication and study quality. For example, a small and possibly educationally nonsignificant effect of problem-based learning that was replicated across 2 good studies might have been considered strong evidence, whereas a single study showing a very large effect of problem-based learning would have been viewed as weaker evidence. Before we advocate for problem-based learning curricula to be implemented around the world, there should be quantitative evidence of how much difference such a change is likely to make. A second concern relates to the measures used to observe these effects. A review of the original articles can yield such information, but when the reviewers refer to “cultural sensitivity,” we must nevertheless take it on faith that this is what was actually measured.
Finally, the real conundrum is why these effects of problem-based learning were observed at all. We have ruled out selection of more compassionate students in the admission process of problem-based learning curricula, but where does this leave us? What is the active ingredient in the problem-based learning method — a mixed bag of nostrums if ever there was one — that is causing better outcomes for graduates of this type of curriculum? Does the process of working in small groups help problem-based learning graduates acquire better communication and interpersonal skills? Is it that problem-based learning curricula typically have more input from professionals, such as social workers and psychologists, who may be more concerned about physicians having a better appreciation of the cultural, legal and ethical aspects of care? Is the curriculum itself more likely to contain objectives that better prepare graduates to cope with uncertainty? Such questions need to be answered so that the potential benefits identified in the study by Koh and colleagues can be incorporated into the curricula of other medical schools.
For years we have endured debate about the relative merits and weaknesses of problem-based learning. Now there is good evidence that the method delivers on some very important issues. The next step is to determine why the method works.
See related article page 34
Key points of the article
• Although good evidence in support of a particular educational innovation may exist, it is rarely instrumental in decisions to adopt that innovation.
• Given the importance placed on cultural and ethical issues, it bodes well for graduates of problem-based learning curricula that they are doing well in these high-priority areas.
• Now that there is evidence in support of problem-based learning, the next step is to determine why the method works so that the potential benefits can be incorporated into the curricula of other medical schools.
Footnotes
Competing interests: None declared.
Correspondence to: Dr. Geoffrey Norman, Department of Clinical Epidemiology and Biostatistics, Faculty of Health Sciences, McMaster University, MDCL 3519, 1200 Main St. W, Hamilton ON L8N 3Z5; fax 905 572-7099; norman@mcmaster.ca
REFERENCES
1. Koh GCH, Khoo HE, Wong ML, et al. The effects of problem-based learning during medical school on physician competency: a systematic review. CMAJ 2008;178:34-41.
2. Woodward CA, Cohen M, Ferrier BM, et al. Correlates of certification in family medicine in the billing patterns of Ontario general practitioners. CMAJ 1989;141:897-904.
3. The Royal College of Physicians and Surgeons of Canada. The CanMEDS physician competency framework. Available: http://rcpsc.medical.org/canmeds (accessed 2007 Nov. 19).