Editorial

BMJ. 2002 Jan 19;324(7330):126–127. doi: 10.1136/bmj.324.7330.126

Researching the outcomes of educational interventions: a matter of design

RCTs have important limitations in evaluating educational interventions

David Prideaux

Problem based learning, an educational intervention characterised by small group and self directed learning, is one of medical education's more recent success stories, at least in terms of its ubiquity. From its beginnings at McMaster University in the 1960s it has been adopted in undergraduate medical courses worldwide. It is also being used in postgraduate and continuing medical education.

Problem based learning has been the subject of at least four much quoted reviews, three published in the early 1990s and one more recently.1–4 Such attention is not surprising. What might be surprising is that the effects of such a popular educational approach are seemingly small, except in the area of student satisfaction. According to the reviews, the extent of knowledge gained, as assessed by measures such as performance in licensing examinations, is at best unclear. Participants in problem based learning can, however, expect small gains in clinical reasoning.

The paper by Smits and colleagues in this issue provides a review of problem based learning in postgraduate and continuing medical education (p 153).5 It is, however, based on only six studies that met the authors' inclusion criteria for controlled study designs. The conclusions of the paper are similar to those of the major reviews. There is limited evidence that the use of problem based learning in postgraduate and continuing medical education improves knowledge, doctors' performance, or patients' outcomes. There is moderate evidence for increased satisfaction of participants.

The debate on systematic reviews of problem based learning was taken to a new level with the publication of two articles in Medical Education in September 2000.6,7 These focused on the potential effects of research design on the findings of reviews. Albanese concentrated on effect size, while Norman and Schmidt argued for a theory based approach to the study of educational interventions. Taking the debate to this level is timely, given the recent interest in the nature of evidence in medical education research, particularly through the work of the best evidence medical education movement. Smits and colleagues claim that controlled evaluation studies provide the best evidence of educational effectiveness. Contrary to what their paper suggests, this position is not necessarily endorsed by the advocates of best evidence medical education, who have moved away from grading studies against the gold standard of the randomised controlled trial to a scheme based on criteria such as quality, utility, and strength of evidence.8 Norman and Schmidt provide a critique of the randomised controlled trial approach to researching curriculum interventions, suggesting that such studies are doomed to fail. This will be familiar to educational researchers outside medicine, who some time ago abandoned the supremacy of randomised designs to embrace a range of quasi-experimental and qualitative designs.

The paper highlights three of the limitations of randomised controlled studies for studying educational interventions. The first is randomisation itself. While randomisation is theoretically possible in educational research, it is often neither feasible nor justifiable. Is it justifiable to enrol medical professionals in postgraduate and continuing education programmes in which they are given no choice over the learning methods they will engage in? Furthermore, as Norman and Schmidt point out, randomisation relies on the maintenance of blind allocation.7 Maintaining blinding is rarely possible in research on educational interventions.

The second issue is the control of variables. At the very least the intervention itself may vary: there are many variants of problem based learning. The process of education also depends on context. A myriad of factors, including facilities and resources, teacher and student motivation, individual expectations, and institutional ethos, affect the process. Again, it is theoretically possible to control for such variables, but in doing so the key factors that determine the success or failure of the intervention may be removed.

The third issue concerns the choice of appropriate outcome measures. There is much interest in defining clear outcomes for medical education and hence for medical education research.9,10 But the outcomes must be appropriate for the intervention. For example, is improved patient health an appropriate measure of educational effectiveness in continuing medical education? After all, patient health is influenced by a whole range of factors within and outside a doctor's control.

Education is a discipline rich in theory. One of the functions of educational theory is to make predictions about outcomes and their relationships that can be tested through empirical work. Yet much research in medical education proceeds devoid of theory. More, not less, theory based research is needed7 so that researchers focus on significant outcomes that are amenable to intervention.

There is a clear imperative to research the effects of educational interventions at all levels of medical education and training. The research, however, must be designed so that the findings can be truly ascribed to the intervention rather than being an artefact of the methods used.

Learning in practice p 153

References

1. Albanese MA, Mitchell S. Problem-based learning: a review of literature on its outcomes and implementation issues. Acad Med. 1993;68:52–81. doi: 10.1097/00001888-199301000-00012.
2. Berkson L. Problem-based learning: have expectations been met? Acad Med. 1993;68:S79–S88. doi: 10.1097/00001888-199310000-00053.
3. Vernon DTA, Blake RL. Does problem-based learning work? A meta-analysis of evaluative research. Acad Med. 1993;68:550–563. doi: 10.1097/00001888-199307000-00015.
4. Colliver J. Effectiveness of problem-based learning curricula. Acad Med. 2000;75:259–266. doi: 10.1097/00001888-200003000-00017.
5. Smits PBA, Verbeek JHAM, de Buisonjé CD. Problem-based learning in continuing medical education: a review of controlled evaluation studies. BMJ. 2002;324:153–156. doi: 10.1136/bmj.324.7330.153.
6. Albanese M. Problem-based learning: why curricula are likely to show little effect on knowledge and clinical skills. Med Educ. 2000;34:729–738. doi: 10.1046/j.1365-2923.2000.00753.x.
7. Norman GR, Schmidt HG. Effectiveness of problem-based learning: theory, practice and paper darts. Med Educ. 2000;34:721–728. doi: 10.1046/j.1365-2923.2000.00749.x.
8. Harden RM, Grant J, Buckley G, Hart IR. Best Evidence Medical Education. Guide No 1. Dundee: Association for Medical Education in Europe; 1999.
9. Harden RM, Crosby JR, Davis MH. An introduction to outcome-based education. Med Teach. 1999;21:7–14. doi: 10.1080/01421599978951.
10. Prideaux D. The emperor's new clothes: from objectives to outcomes. Med Educ. 2000;34:168–169. doi: 10.1046/j.1365-2923.2000.00636.x.
