Editorial

BMJ 2006 Sep 9;333(7567):544-546. doi: 10.1136/bmj.38952.701875.94

Challenges for educationalists

Lambert W T Schuwirth 1, Cees P M van der Vleuten 1

Short abstract

Medical education has to change to meet the shifts in public and professional attitudes. Experts gathering at the annual meeting of the Association for Medical Education in Europe next week have plenty to discuss.


Medical education is currently a hot topic. More and more people want to be involved in developing new educational and assessment methods and in conducting research in medical education. These developments have an increasing influence on the work of everyone in health care. It is gratifying that our discipline is high on the agenda, but if the specialty wants to be taken seriously some major challenges need to be tackled, as we outline below.

Make practical training more effective

Much of the development and research in medical education has focused on the undergraduate curriculum, especially its theoretical parts. Clinical attachments and postgraduate training have received far less attention. This is unfortunate because not only are these practical phases largely unstructured, but they also waste too much time on non-educational activities and rely on learning by doing.1,2

Changes in society have made this approach inadequate. Firstly, patients' growing awareness of quality of care makes them understandably reluctant to act as learning tools for medical students or registrars. Secondly, the European Working Time Directive, which limits registrars to 58 working hours a week, has implications for training. The directive is good for individual patient care, as working long hours increases the risk of medical errors,3,4 but there is a downside: cognitive psychological research on expertise shows that becoming an expert requires many hours of practice.5

The challenges are thus to find ways for registrars to practise without using patients as learning tools and to optimise the educational effectiveness of learning in practice. Dummies and computer simulations are likely to be of only limited value, as most of what doctors have to learn cannot be taught on dummies or with simulations.6 Simulated patients can be helpful, but they too can train students in only certain areas. The main emphasis, therefore, should be on implementing what is already known about effective learning in practice.

Cognitive psychological research has shown that deliberate practice is a far better way to acquire expertise than simple unstructured practice.7 In simple terms, deliberate practice combines working to acquire expertise with activities that make learners more conscious of their own learning. Its key elements are:

  • Supervision and detailed feedback

  • Well defined tasks to improve certain aspects of performance

  • Ample opportunity to improve performance gradually by performing tasks repeatedly.

Top athletes and musicians apply a similar approach. It is not just practice that makes perfect; it is deliberate practice.

Figure 1: Simulation has limited applications (Credit: Keith Srakokic/AP)

Develop high quality assessment

The challenges in assessment are even bigger. To explain this, we have to summarise the recent history of assessment. Medical competence has long been considered a combination of constructs: psychological characteristics that cannot be observed directly but can be measured. Well known examples of constructs are intelligence and extraversion. Constructs are assumed to be stable, generic, and independent; someone's intelligence, for example, does not fluctuate from day to day and is independent of extraversion.

Typical constructs included in medical competence are knowledge, skills, problem solving, and attitudes. Many assessment instruments have been developed for each of these aspects, often with the aim of being a single definitive test of a construct. A typical example is the objective structured clinical examination, which was assumed to be the best instrument to measure skills.

Because the idea of stable and generic constructs proved untenable,8 assessment has moved towards competencies: tasks that a qualified medical professional should be able to handle successfully. As part of this development, new instruments such as portfolios, the mini-clinical evaluation exercise, and 360° feedback have become popular.9,10 In the mini-clinical evaluation exercise a consultation is observed and scored on a generic rating scale with items such as problem analysis, history taking, and organisation and efficiency. In 360° feedback the candidate asks colleagues and coworkers to complete a questionnaire on his or her performance that rates technical skills, interpersonal skills, team skills, education and research skills, and so on.

These instruments differ from previous methods in that they focus on observable behaviour. They help the supervisor or teacher to document and monitor performance and to provide feedback to the learner. They also imply that no single instrument suffices for every competency; the whole picture of someone's medical competence requires a combination of instruments.11 Although this may seem quite logical to the medical professional, who does this daily in patient care, it leaves medical educators with major challenges.

Firstly, we will have to learn more about how to build high quality assessment programmes: for example, what are the design criteria, and what are the best ways to combine information from qualitative and quantitative sources? Such questions are relevant not only in the educational setting but also for revalidation.12 Secondly, we need to extend our statistical and psychometric methods. Current methods focus on the measurement of constructs, and the evidence for the fairness of any assessment comes from reproducibility and construct validity. This approach does not really apply to the new assessment methods, so we must find other ways to justify and defend our assessment decisions to all stakeholders.

Improve research standards

We concur with the many authors who have claimed that the rigour of research in medical education has to improve.13-15 Several factors could explain the poor quality of much published research.

Medical education research is often equated with biomedical research, whereas it really has more in common with social sciences research. Researchers often try to apply methods that are successful in biomedical research to medical education, which either does not work or forces the researchers to adapt the research question to the method instead of the other way around.

Another problem is that medical education research is considered to be easy and something that can be done by any intelligent person, even without proper training. Clearly, this is not true. There are many training and PhD programmes in education research, and anyone who wants to conduct research should follow such a programme or be trained in another way.

Then there is a sort of ivory tower problem. Trained educationalists may conduct good research, but it is often so specialised and theoretical that it is of no interest to the practising doctor or teacher and thus may not be published in general medical journals or even general medical education journals. Rigorous and relevant research requires a combination of well trained educationalists and researchers with good practical knowledge of medicine and teaching. We also need to abandon dogmatic thinking. It is not the method that determines whether a study is scientifically rigorous but the strength of the research question, the value of the operational definitions, the extent to which the chosen method best fits the specific research question, and the care with which the study was done.16 These are elementary principles in any scientific training programme.

Collaborations between institutions, both nationally and internationally, are needed. The criticism that medical education research is too locally oriented, is too rarely multicentred, and yields few generalisable findings is well deserved, and the “not invented here” syndrome is all too prevalent.17

Finally, we think that medical education and general medical journals have too often accepted poorly performed or poorly reported research papers. Most papers do not go beyond “show and tell,” describing a locally developed method without thorough scientific study of its value. Journals have to raise their standards. We realise that they must take care not to lose the larger community and become journals in which only a small group of selected researchers can publish.18 But higher standards are needed as soon as possible.

Create positive attitude to assessment

Unfortunately, we have all been raised in a culture where assessment is synonymous with punitive examinations whose sole purpose is to pass or fail candidates. This has shaped us and has made us fear assessment. Few people see assessment as a way to improve professional activities in order to provide better patient care and conduct better research. This has made the implementation of nationwide professional assessment activities very difficult. Revalidation schemes in the United Kingdom and the Netherlands struggle in a political minefield, and only high profile medical disasters seem to make a difference.

In medical schools, students try to find out what the assessment is and prepare strategically instead of studying to become better doctors. This is no surprise, as many past approaches to assessment have been extremely reductionist, aiming only to pass or fail candidates. It is also unsurprising that many professionals choose continuing medical education programmes in subjects they are already good at, as good formal postgraduate assessment programmes do not exist and self assessment is apparently not adequate. The major challenge is to change the culture of assessment into one in which assessment is informative and helps people to improve their work, and in which the goal is not to be better than others but to be better today than you were yesterday.

Summary points

Medical education needs to adapt to society's changing attitudes

Work based training must be made more effective to counter reduced working hours

New methods of assessment are needed to reflect the focus on competencies

High quality, relevant research requires more interdisciplinary collaboration

Overcoming negative attitudes to assessment will involve a cultural shift

We have described four major challenges in medical education. One conclusion from all of them is that close collaboration between doctors and educationalists is indispensable for good medical education and for the development of better education. Any monodisciplinary endeavour will lead to a suboptimal result.

Contributors and sources: LWTS and CPMvdV have key responsibility for the development, implementation, and quality control of the assessment system of the medical school of Maastricht University and are consultants on assessment both nationally and internationally. This article reflects long and frequent discussions between the authors. LWTS wrote the first draft; CPMvdV commented and made the necessary changes and additions. Both authors approved the final version. LWTS is guarantor.

Competing interests: None declared.

References

1. Bogg J, Gibbs T, Bundred P. Training, job demands and mental health of pre-registration house officers. Med Educ 2001;35:590-5.
2. Prince KJAH, Van de Wiel MWJ, Van der Vleuten CPM, Boshuizen HPA, Scherpbier AJJA. Junior doctors' opinions about the transition from medical school to clinical practice: a change of environment. Educ Health 2004;17:323-31.
3. Gaba DM, Howard SK. Patient safety: fatigue among clinicians and the safety of patients. N Engl J Med 2002;347:1249-55.
4. Landrigan CP, Rothschild JM, Cronin JW, Kaushal R, Burdick E, Katz JT, et al. Effect of reducing interns' work hours on serious medical errors in intensive care units. N Engl J Med 2004;351:1838-48.
5. Ericsson KA, Charness N. Expert performance: its structure and acquisition. Am Psychol 1994;49:725-47.
6. Issenberg SB, McGaghie WC, Petrusa ER, Lee-Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10-28.
7. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79(10 suppl):S70-81.
8. Elstein AS, Shulman LS, Sprafka SA. Medical problem-solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press, 1978.
9. Norcini J, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med 1995;123:795-9.
10. Brett JF, Atwater LE. 360° feedback: accuracy, reactions, and perceptions of usefulness. J Appl Psychol 2001;86:930-42.
11. Van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ 2005;39:309-17.
12. Southgate L, Cox J, David T, Hatch D, Howes A, Johnson N, et al. The assessment of poorly performing doctors: the development of the assessment programmes for the General Medical Council's performance procedures. Med Educ 2001;35:2-8.
13. Davis MH, Ponnamperuma GG. Medical education research at the crossroads. Lancet 2006;367:377-8.
14. Torgerson CJ. Educational research and randomised trials. Med Educ 2002;36:1002-3.
15. Colliver JA. The research enterprise in medical education. Teach Learn Med 2003;15:154-5.
16. Norman G. RCT = results confounded and trivial: the perils of grand educational experiments. Med Educ 2003;37:582-4.
17. Reed DA, Kern DE, Levine RB, Wright SM. Costs and funding for published medical education research. JAMA 2005;294:1052-7.
18. McLachlan J. The new editor writes [editorial]. Med Educ 2006;40:2-3.
