Editor – We welcome John Cookson's interest in our new specialty certificate examinations (SCEs) (Clin Med April 2010 pp 141–4). However, his critique was based on a limited selection of the available information, so this correspondence provides a fuller update for readers. Four years on from the pilot examinations in 2006, after 11 diets in eight separate specialties and with almost 10,000 questions in the bank, there is much to report.1 The contribution from specialists throughout the UK to this effort has been superb.
Cookson is correct in saying that the pilot examinations were not mapped robustly to the curricula. Furthermore, progressive revision of the specialty curricula during the last two years has presented a moving target for the new examining boards. We have risen to this challenge. From 2009, each SCE blueprint has been mapped to the appropriate curriculum and every usable question has been related to the relevant curriculum domain. Question-writing groups are giving priority to the remaining gaps.
He criticises the curricula for differentiating between knowledge, skills and attitudes and expresses concern that the SCEs assess only knowledge. Although single best answer questions can evaluate problem-solving skills and clinical judgement, the SCEs were always intended as knowledge-based assessments. They were not designed to test skills or attitudes, which, we agree, are much better evaluated by direct observation and face-to-face discussion.
Cookson expresses disappointment that the indices of reliability in the pilots were inconsistent. Values of Cronbach's α obtained in examination diets of 200 questions, involving small cohorts with a narrow range of ability, are unlikely to reach 0.9. Indeed, recent research into the use of reliability as a measure of examination quality suggests that the standard error of measurement may be a more appropriate metric.2 Nevertheless, it is reassuring that in nine of the 11 SCE diets to date, reliability values have exceeded 0.8.
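For readers unfamiliar with the relationship between the two indices, the standard error of measurement is conventionally obtained from the standard deviation of observed scores and the reliability coefficient; the sketch below uses invented figures for illustration only, not data from any MRCP(UK) or SCE diet.

```latex
% Conventional relationship between the standard error of measurement (SEM),
% the standard deviation of observed scores (SD) and reliability (Cronbach's alpha).
% Hypothetical illustration: SD = 8 marks, alpha = 0.82.
\[
  \mathrm{SEM} \;=\; \mathrm{SD}\,\sqrt{1-\alpha}
  \;=\; 8\,\sqrt{1-0.82} \;\approx\; 3.4 \text{ marks}
\]
```

On this reasoning, a cohort with a narrow spread of ability can yield a modest α yet still have a small SEM, which is why the latter may better reflect the precision of the pass/fail decision.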
We appreciate the challenge of standard setting for new examinations. For information, the SCEs use the same criterion-referencing process (the Angoff method) that has been applied to the MRCP(UK) written examinations in recent years. Although many of those involved in the process had no previous experience, their task was made simpler by taking as a consistent yardstick the knowledge expected of a newly appointed specialist.
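As a brief illustration of the Angoff arithmetic (the figures are invented for exposition and do not describe any SCE diet): each judge estimates the probability that a borderline candidate, at the standard of a newly appointed specialist, would answer each question correctly; the estimates are averaged across judges and summed across questions to give the pass mark.

```latex
% Sketch of the Angoff cut-score calculation with hypothetical numbers.
% p_{jq}: judge j's estimated probability that a borderline candidate
%         answers question q correctly; J judges, Q questions.
\[
  \text{cut score} \;=\; \sum_{q=1}^{Q}\;\frac{1}{J}\sum_{j=1}^{J} p_{jq}
\]
% Example: a mean judged probability of 0.62 per question over a
% 200-question paper would give a pass mark of 0.62 x 200 = 124.
```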
References
- 1. Joint Committee on Higher Medical Training. Knowledge-based assessment: pilot project. London: Joint Committee on Higher Medical Training, 2006. www.jrcptb.org.uk/SiteCollectionDocuments/KBA%20Project%20Final%20Report.pdf
- 2. Tighe J, McManus IC, Dewhurst NG, Chis L, Mucklow J. The standard error of measurement is a more appropriate measure of quality for postgraduate medical assessments than is reliability: an analysis of MRCP(UK) written examinations, 2002–2008, and Specialty Certificate Examinations. BMC Medical Education (accepted for publication).
