Editorial
BMJ 2002 Mar 30;324(7340):748–749. doi: 10.1136/bmj.324.7340.748

The long case versus objective structured clinical examinations

The long case is a bit better, if time is equal

Geoff Norman

The examination of graduates of medicine to ensure competence has a long tradition predicated on the historical right of self regulation bestowed on the professions. While many may wish to replace such summative and frequently punitive assessment with softer assessment to facilitate learning, this amounts to a shirking of social responsibility. A consequence of the importance attached to such examinations is that considerable research has been devoted to establishing the reliability and validity of these examinations.

One truism in educational research is that few self evident truths are true. Historically, it has seemed self evidently true that an experienced physician could, by active questioning around a case, determine whether a candidate was or was not competent—the long case. Unfortunately this assertion was challenged by evidence showing that the reliability of the long case was insufficient to justify decisions about competence to practise.1 The replacement of the long case by objective structured clinical examinations was predicated on a second self evident truth—the promise of truly objective clinical assessment using checklists, which should self evidently lead to more objective or reliable assessment.

Now, horror of horrors, along comes a study showing that maybe the long case was not so bad after all.2 This study, published earlier this year in Medical Education, involved the assessment of 214 final year undergraduates, each of whom did two long cases with real patients and 20 objective structured clinical examination stations. After the numbers settled out, the reliability of the long case was 0.84 and that of the objective structured clinical examinations 0.72. The authors conclude that long cases are neither better nor worse than objective structured clinical examinations at assessing clinical competence reliably.

But perhaps all is not quite as it appears. The long case was observed and evaluated against a “previously used check list which itemised key features of history taking . . .” as well as with global ratings. The quoted reliabilities are for 200 minutes of testing, involving 10 long cases or 30 objective structured clinical examination stations. The problem is that no one I know was ever observed doing the long case, no examination ever prepared a detailed scoring sheet in advance, and no candidate ever sat an examination with 10 long cases. Typically the examination consists of one or two long cases lasting an hour or two, in which examiners ask their pet questions and then give an overall score.

These differences are critical. Performance on one problem is a poor predictor of performance on the next one.3 So the only solution is to sample many problems. That is the real strength of the objective structured clinical examinations and the real weakness of the traditional long case exam. Still, for equal testing time, the long case turns out a bit better than the objective structured clinical examinations.
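To see what equal testing time implies, a back-of-the-envelope sketch follows. It is not part of the editorial or of the study under discussion: it simply assumes that the classical Spearman-Brown formula can be applied to the reported figures of 0.84 for 10 observed long cases and 0.72 for 30 stations in 200 minutes, so the per-case, per-station, and one or two case projections it produces are illustrative estimates only.

# Illustrative only: Spearman-Brown applied to the reported reliabilities
# (0.84 for 10 long cases, 0.72 for 30 stations, 200 minutes in each format).

def spearman_brown(r_unit, k):
    """Projected reliability of an examination built from k parallel units,
    each with reliability r_unit."""
    return k * r_unit / (1 + (k - 1) * r_unit)

def unit_reliability(r_test, k):
    """Invert Spearman-Brown: reliability of a single unit, given the
    reliability r_test of a k-unit examination."""
    return r_test / (k - (k - 1) * r_test)

r_case = unit_reliability(0.84, 10)     # one observed long case, about 0.34
r_station = unit_reliability(0.72, 30)  # one station, about 0.08

# At equal testing time (20 minutes per long case against roughly 6.7 minutes
# per station) the long case comes out ahead, as the editorial notes.
print(f"one long case (about 20 min):  {r_case:.2f}")
print(f"three stations (about 20 min): {spearman_brown(r_station, 3):.2f}")

# A traditional, unobserved examination of one or two long cases falls well
# short of the 0.84 quoted for 10 cases; hence the case for sampling widely.
print(f"two long cases:                {spearman_brown(r_case, 2):.2f}")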

What happened to all the gains from the standardisation in the objective structured clinical examinations? Well, maybe they are an illusion after all. Since the variability of performance across problems is the major cause of poor reliability in assessment, efforts to standardise what happens within a case are likely to lead to only small gains.4 Further, in the long case, examiners used global ratings; in the objective examinations, they used checklists exclusively. And detailed objective checklists turn out to be less reliable than ratings.5 Perhaps the superiority of the long case in this study owes more to the use of rating scales than to the fact that the cases were not standardised.

A companion paper reports a further evaluation of the entire examination.6 This included four written examinations—a multiple choice, true or false test of basic factual knowledge; an extended matching test of problem solving skills; a short answer test of problem solving and data interpretation skills; and an essay to assess the ability to present written debate and communicate with professional colleagues. The question was how best to add up the subscores. The answer turned out to accord with best statistical principles: the optimum approach, in terms of maximising reliability, was to weight according to the number of items. But this obscures an underlying philosophical dilemma. Combining scores on subtests which are supposed to measure different things amounts to an admission that they are not so different after all. The fine print in the paper actually confirms this—all but two or three of the correlations between tests are in the mid range.
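The weighting question can be made concrete with the standard formula for the reliability of a weighted composite (Mosier's formula). The sketch below is purely hypothetical: the item counts, subtest reliabilities, and the uniform mid range correlation of 0.5 are invented for illustration and are not taken from the companion paper.

# Hypothetical illustration: composite reliability under two weighting schemes.
# None of these numbers come from the companion paper; they are invented to
# show why weighting by item count (a rough proxy for subtest precision) can
# raise the reliability of the combined score.

import numpy as np

items  = np.array([200.0, 100.0, 30.0, 1.0])   # hypothetical item counts
reliab = np.array([0.90, 0.85, 0.75, 0.60])    # hypothetical subtest reliabilities
corr   = np.full((4, 4), 0.5)                  # mid range correlations between tests
np.fill_diagonal(corr, 1.0)                    # standardised subscores assumed

def composite_reliability(weights):
    """Mosier's formula for a weighted sum of standardised subscores."""
    w = np.asarray(weights, dtype=float)
    total_var = w @ corr @ w                     # variance of the weighted composite
    error_var = np.sum(w ** 2 * (1.0 - reliab))  # summed weighted error variances
    return 1.0 - error_var / total_var

print(f"equal weights:         {composite_reliability([1, 1, 1, 1]):.2f}")  # ~0.91
print(f"weights by item count: {composite_reliability(items):.2f}")         # ~0.93

With these invented figures the item weighted composite is marginally more reliable than the equally weighted one, consistent with the companion paper's conclusion; the size of any real gain depends on how the subtest reliabilities and intercorrelations actually fall out, and it is the mid range intercorrelations that make combining the scores defensible at all.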

The explanation for this finding is twofold, and is a second example of self evident truths that are not. Firstly, despite the claims of their inventors, different testing formats do not necessarily measure different things.7 Secondly, the notion that problem solving skills or communication skills can ever be separated from the content of the problem and assessed separately is simply wrong.8,9 So differences in the correlations between tests probably reflect content differences more than different skills.

Current discussions about best evidence medical education are an indication that, just as in clinical medicine, intuitions will frequently be at variance with evidence.10 And since we will continue to be engaged in activities to ensure that our graduates are competent, those procedures should be based on evidence of their effectiveness. Ignorance of relevant evidence is no more pardonable in education than in clinical medicine.

References

1. Meskauskas JA. Cardiology and the American Board of Internal Medicine. Presentation at the conference on the oral examination. American Board of Medical Specialties; 1975.
2. Wass V, Jones R, van der Vleuten CPM. Standardized or real patients to test clinical competence? The long case revisited. Med Educ. 2001;35:321–325. doi: 10.1046/j.1365-2923.2001.00928.x
3. Norman GR, Tugwell P, Feightner JW, Muzzin LJ. Knowledge and clinical problem-solving ability. Med Educ. 1985;19:344–356. doi: 10.1111/j.1365-2923.1985.tb01336.x
4. Norcini JJ. The validity of long cases. Med Educ. 2001;35:720–721. doi: 10.1046/j.1365-2923.2001.01006.x
5. Regehr G, Freeman R, Robb A, Missiha N, Heisey R. OSCE performance evaluations made by standardized patients: comparing checklist and global rating scores. Acad Med. 1999;74:S135–S137. doi: 10.1097/00001888-199910000-00064
6. Wass V, McGibbon D, van der Vleuten C. Composite undergraduate clinical examinations: how should the components be combined to maximize reliability? Med Educ. 2001;35:326–330. doi: 10.1046/j.1365-2923.2001.00929.x
7. Frederiksen N. The real test bias: influences of testing on teaching and learning. Am Psychol. 1984;39:193–202.
8. Swanson DB, Norman GR, Linn RL. Performance-based assessment: lessons from the health professions. Educ Res. 1994;24:5–11.
9. Perkins DN, Salomon G. Are cognitive skills context-bound? Educ Res. 1989;18:16–25.
10. Hart IR. Editorial: best evidence medical education. Med Teach. 1999;21:453–454. doi: 10.1080/01421599978960
