Causal inferences about the effects of treatments must always depend on best judgments. Because the lives and wellbeing of patients will be influenced for better or worse by the validity of these judgments, however, it is important to be explicit about the logic as well as the empirical evidence on which the judgments are based. This issue of the BMJ is about one important aspect of that logic—the attempt to control bias through randomisation.
There is a growing acceptance that it is logical to try to control biases of various kinds when assessing the effects of treatments. Efforts by clinicians to control biases stretch back for at least three centuries,1 but only during the past 100 years have these become widespread. In particular, as we approach the end of the 20th century, there are now hundreds of thousands of reports of studies in which efforts have been made to control selection biases, the aim here being to distinguish differences attributable to treatments from differences that reflect the characteristics (known and unknown) of the people who have received treatment.
These studies are known as randomised trials because eligible patients are allocated at random to one of two or more alternative forms of care. This is their sole defining characteristic.2 Other measures sometimes used to control biases—for example, the use of placebos to minimise observer biases—are neither specific to nor necessary features of randomised trials.
Consensus is growing that the results of randomised trials provide the most secure basis for valid causal inferences about the effects of treatments.3 Not everyone subscribes to this view,4 however, and there are certainly aspects of the design and interpretation of randomised trials which continue to present real challenges.5,6 The results of randomised trials usually differ from those of studies in which the comparison groups have been assembled in other ways.7 Although the most likely explanation for these differences would seem to be uncontrolled biases, other explanations cannot be ruled out.8
Two studies stand out in the history of efforts to control selection biases in clinical research. In 1898 a Danish physician, Johannes Fibiger, allocated patients with diphtheria to comparison groups on the basis of the day on which they were admitted to hospital. He gave anti-diphtheria serum to patients admitted on alternate days and compared their progress with that of those admitted on the other days. Fibiger’s report is remarkable not only because it shows that he was conscious of the need to control selection biases but also because he described his methods and analyses so clearly.9,10
Whether the basis for allocating patients in an unselected series to comparison groups is alternation or random numbers, failure to adhere strictly to the allocation schedule may result in bias.11,12 Fifty years ago yesterday, the BMJ carried the report of another landmark study in the history of efforts to control selection biases—the UK Medical Research Council’s randomised trial of streptomycin for pulmonary tuberculosis.13–15 The report is especially important because it describes in detail the precautions taken by the researchers to conceal the allocation schedule from those entering patients into the trial.13
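To make the distinction concrete, the sketch below (in Python) shows one simple way a concealed allocation schedule might be generated: allocations within small blocks are shuffled at random, so the next assignment cannot be predicted by anyone entering patients, unlike alternation, where the sequence is obvious. The arm labels, block size, and helper function are purely illustrative assumptions and are not drawn from the MRC trial report.

```python
import random


def make_allocation_schedule(n_patients, arms=("streptomycin", "bed rest alone"),
                             block_size=4, seed=None):
    """Generate a blocked randomisation schedule (illustrative sketch only).

    Each block contains an equal number of allocations to every arm,
    shuffled at random, so group sizes stay roughly balanced while the
    order of allocations remains unpredictable.
    """
    if block_size % len(arms) != 0:
        raise ValueError("block size must be a multiple of the number of arms")
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # order within each block is unpredictable
        schedule.extend(block)
    return schedule[:n_patients]


if __name__ == "__main__":
    # Hypothetical usage: in practice the schedule would be held away from
    # those recruiting patients (for example in serially numbered sealed
    # envelopes or at a central office) and revealed only after a patient
    # had been irreversibly entered into the trial.
    for number, arm in enumerate(make_allocation_schedule(8, seed=1), start=1):
        print(f"patient {number:02d}: {arm}")
```

Keeping the schedule at a distance from recruitment in this way is the essence of the allocation concealment whose description makes the MRC report so important.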
Randomised trials conducted over the past half century have helped to bring about a situation in which health care has been credited with three of the seven years of increased life expectancy over that time and an average of five additional years of partial or complete relief from the poor quality of life associated with chronic disease.16 But we should not be complacent. Systematic reviews of some of the hundreds of thousands of reports of trials published since 1948 are beginning to make painfully clear that, in most of these studies, inadequate steps were taken to control biases, many questions and outcomes of interest to patients were ignored,17 and insufficient numbers of participants were studied to yield reliable estimates of treatment effects.18 In brief, a massive amount of research effort, the goodwill of hundreds of thousands of patients, and millions of pounds have been wasted.
Several developments could help to ensure that efforts over the next 50 years will be more effective in yielding unbiased, relevant, and reliable assessments of the effects of health care. Information derived from systematic reviews of past research19 and from registers of continuing trials20 will help to show where new trials are needed and how best to maximise the quality and relevance of the new information sought. Some of this information is likely to be in the form of qualitative data, and this implies the need for greater cooperation among clinical and social scientists in designing and running trials.21
Electronic publication will offer opportunities for improving the quality of research and of research reports22 through open peer review of protocols and reduction of publication bias, and by providing a mechanism through which the results of new studies can be set properly within the context of other relevant studies.23 Improvements in the infrastructure needed to support trials24 should mean that clinicians and patients faced with uncertainties about the relative merits of treatment options will more often be able to participate in the research needed to resolve these uncertainties.
The greatest potential for improving research may lie in greater public involvement. Partly because of perverse incentives to pursue particular research projects,25,26 researchers often seem to design trials that address questions of no interest to patients. Greater public involvement could help to reduce this mismatch and ensure that trials are designed to address questions that patients see as relevant. More generally, it will be important to assess whether the public understands and endorses the efforts being made to control biases in assessing the effects of health care.27,28 So far, the research community has made very little effort to involve the public in discussions about this. All in all, there is plenty of scope for building on the undoubted progress made during the past century.
Acknowledgments
I thank Doug Altman, Mike Bracken, Ray Garry, Peter Gøtzsche, Andrew Herxheimer, Tony Hope, Muir Gray, and Ann Oakley for helpful comments on an earlier draft of this article.
References
- 1. Royal College of Physicians of Edinburgh/UK Cochrane Centre. Controlled trials from history. www.rcpe.ac.uk/cochrane/
- 2. Kleijnen J, Gøtzsche P, Kunz RH, Oxman AD, Chalmers I. So what’s so special about randomisation? In: Maynard A, Chalmers I, editors. Non-random reflections on health services research. London: BMJ Books; 1997. pp. 93–106.
- 3. Clarke MJ. Ovarian ablation in breast cancer, 1896 to 1998: milestones along hierarchy of evidence from case report to Cochrane review. BMJ. 1998;317:1246–1248. doi: 10.1136/bmj.317.7167.1246.
- 4. Cranberg L. Evaluating new treatments. BMJ. 1998;317:1261. doi: 10.1136/bmj.317.7167.1261.
- 5. Russell I. Evaluating new surgical procedures. BMJ. 1995;311:1243–1244. doi: 10.1136/bmj.311.7015.1243.
- 6. McPherson K, Chalmers I. Incorporating patient preferences into clinical trials. BMJ. 1998;317:78.
- 7. Kunz R, Oxman A. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ. 1998;317:1185–1190. doi: 10.1136/bmj.317.7167.1185.
- 8. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312:1215–1218. doi: 10.1136/bmj.312.7040.1215.
- 9. Fibiger J. Om Serumbehandling af Difteri. Hospitalstidende. 1898;6:309–325, 337–350.
- 10. Hróbjartsson A, Gøtzsche PC, Gluud C. The controlled clinical trial turns 100 years: Fibiger’s trial of serum treatment of diphtheria. BMJ. 1998;317:1243–1245. doi: 10.1136/bmj.317.7167.1243.
- 11. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273:408–412. doi: 10.1001/jama.273.5.408.
- 12. Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352:609–613. doi: 10.1016/S0140-6736(98)01085-X.
- 13. Medical Research Council. Streptomycin treatment of pulmonary tuberculosis: a Medical Research Council investigation. BMJ. 1948;2:769–782.
- 14. Doll R. Controlled trials: the 1948 watershed. BMJ. 1998;317:1217–1220. doi: 10.1136/bmj.317.7167.1217.
- 15. Yoshioka A. Use of randomisation in the Medical Research Council’s clinical trial of streptomycin in pulmonary tuberculosis in the 1940s. BMJ. 1998;317:1220–1223. doi: 10.1136/bmj.317.7167.1220.
- 16. Bunker JP, Frazier HS, Mosteller F. Improving health: measuring effects of medical care. Milbank Quarterly. 1994;72:225–258.
- 17. Thornley B, Adams C. Content and quality of 2000 controlled trials in schizophrenia over 50 years. BMJ. 1998;317:1181–1184. doi: 10.1136/bmj.317.7167.1181.
- 18. Peto R, Baigent C. Trials: the next 50 years. BMJ. 1998;317:1170–1171. doi: 10.1136/bmj.317.7167.1170.
- 19. Medical Research Council. MRC guidelines for good clinical practice in clinical trials. www.mrc.ac.uk/ctg.html
- 20. Current Science. Current Controlled Trials. www.controlled-trials.com
- 21. Oakley A. Experimentation and social interventions: a forgotten but important history. BMJ. 1998;317:1239–1242. doi: 10.1136/bmj.317.7167.1239.
- 22. Chalmers I, Altman DG. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet (in press).
- 23. Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998;280:280–282. doi: 10.1001/jama.280.3.280.
- 24. Farrell B. Efficient management of randomised controlled trials: nature or nurture. BMJ. 1998;317:1236–1239. doi: 10.1136/bmj.317.7167.1236.
- 25. Chalmers I. The perinatal research agenda: whose priorities? Birth. 1991;18:137–145. doi: 10.1111/j.1523-536x.1991.tb00083.x.
- 26. Pieters T. Marketing medicines through randomised controlled trials: the case of interferon. BMJ. 1998;317:1231–1233. doi: 10.1136/bmj.317.7167.1231.
- 27. Edwards SJL, Lilford RJ, Hewison J. The ethics of randomised controlled trials from the perspective of patients, the public, and healthcare professionals. BMJ. 1998;317:1209–1212. doi: 10.1136/bmj.317.7167.1209.
- 28. Featherstone K, Donovan JL. Random allocation or allocation at random? Patients’ perspectives of participation in a randomised controlled trial. BMJ. 1998;317:1177–1180. doi: 10.1136/bmj.317.7167.1177.