Littlewood et al present the results of a systematic review of the evidence in the medical education literature on how early experience contributes to the basic education of health professionals.1 Increasing emphasis is being given to basing decisions about teaching practice on evidence, because the alternative is the PHOG approach: prejudices, hunches, opinions and guesses.2 The review was carried out under the auspices of the Best Evidence Medical Education collaboration (BEME, www.bemecollaboration.org), which aims to promote best evidence medical education through the dissemination of information, the production of systematic reviews and the creation of an evidence based culture. The review attempts to synthesise the available evidence in a format that curriculum planners and others involved in medical education can use to decide how to provide the best learning opportunities for students.
What are readers of the BMJ to make of this review? They are accustomed to a rather different kind of systematic review, one that predominantly evaluates the results of a number of randomised controlled trials. As Littlewood et al say, early experience is part of a complex curriculum intervention.1 It therefore does not lend itself to evaluation using simple experimental designs such as randomised controlled trials. BEME recognises that systematic reviews should not be restricted to randomised controlled trials, which may have high validity from the perspective of research methods but are expensive to undertake and may not be the most appropriate type of study to answer the questions raised.3
Norman and Schmidt go further and say that educational trials are ill founded, ill advised, and a waste of time and resources.4 They argue that in educational trials there is no such thing as a blinded intervention, a pure outcome, or a uniform intervention.
What is needed is for “multiple lenses to look at data from different perspectives,”3 but Harden and Lilley have described the challenge of identifying and evaluating the evidence as formidable.2 The evidence may not be available; the research method, the outcomes investigated, or the replication of the evidence may not be optimal; and the conclusions may not apply to an individual teacher in his or her particular setting. Of course, this is true of much clinical evidence. We do not know the answers to many clinical questions because the evidence is not available or not convincing, and research carried out on a population of highly selected patients often cannot be generalised to an individual patient.
The BEME collaboration endorses the principle that medical educators should base their methods and approaches to education on the best available evidence. Littlewood et al have identified and evaluated the evidence about early experience for us. They freely discuss the limitations of the review but point to the rigour of its methods. The evidence in this review is as good as it gets for medical educators but, as Harden and Lilley point out, it is still up to the individual teacher to evaluate the evidence and to arrive at the best approximation of the truth for his or her teaching practice.2
Competing interests: None declared.
References
1. Littlewood S, Ypinazar V, Margolis SA, Scherpbier A, Spencer J, Dornan T. Early practical experience and the social responsiveness of clinical education: systematic review. BMJ 2005;331:387-91.
2. Harden RM, Lilley PM. Best evidence medical education: the simple truth. Med Teach 2000;22:117-9.
3. Best Evidence Medical Education (BEME): report of meeting, 3-5 December 1999, London, UK. Med Teach 2000;22:242-5.
4. Norman GR, Schmidt HG. Effectiveness of problem-based learning curricula: theory, practice and paper darts. Med Educ 2000;34:721-8.