Since 1991 the BMJ has had a policy of not publishing trials that have not been properly randomised, except in rare cases where this can be justified.1 Why?
The simplest approach to evaluating a new treatment is to compare a single group of patients given the new treatment with a group previously treated with an alternative treatment. Usually such studies compare two consecutive series of patients in the same hospital(s). This approach is seriously flawed. Problems will arise from the mixture of retrospective and prospective studies, and we can never satisfactorily eliminate possible biases due to other factors (apart from treatment) that may have changed over time. Sacks et al compared trials of the same treatments in which randomised or historical controls were used and found a consistent tendency for historically controlled trials to yield more optimistic results than randomised trials.2 The use of historical controls can be justified only in tightly controlled situations of relatively rare conditions, such as in evaluating treatments for advanced cancer.
The need for contemporary controls is clear, but there are difficulties. If the clinician chooses which treatment to give each patient there will probably be differences in the clinical and demographic characteristics of the patients receiving the different treatments. Much the same will happen if patients choose their own treatment or if those who agree to have a treatment are compared with refusers. Similar problems arise when the different treatment groups are at different hospitals or under different consultants. Such systematic differences, termed bias, will lead to an overestimate or underestimate of the difference between treatments. Bias can be avoided by using random allocation.
A well known example of the confusion engendered by a non-randomised study was the study of the possible benefit of vitamin supplementation at the time of conception in women at high risk of having a baby with a neural tube defect.3 The investigators found that the vitamin group subsequently had fewer babies with neural tube defects than the placebo control group. The control group included women ineligible for the trial as well as women who refused to participate. As a consequence the findings were not widely accepted, and the Medical Research Council later funded a large randomised trial to answer the question in a way that would be widely accepted.4
The main reason for using randomisation to allocate treatments to patients in a controlled trial is to prevent biases of the types described above. We want to compare the outcomes of treatments given to groups of patients which do not differ in any systematic way. Another reason for randomising is that statistical theory is based on the idea of random sampling. In a study with random allocation the differences between treatment groups behave like the differences between random samples from a single population. We know how random samples are expected to behave and so can compare the observations with what we would expect if the treatments were equally effective.
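A small simulation may make this concrete. The sketch below (in Python, using invented outcome values rather than data from any trial) repeatedly splits a single population of responses at random into two groups and records the difference in group means; an observed treatment difference can then be compared with this reference distribution of differences expected when the treatments are equally effective. The sample sizes, seed, and the "observed" difference of 4 units are illustrative assumptions only.

```python
import random

random.seed(1)  # illustrative seed so the example is reproducible

# A single "population" of outcomes (illustrative numbers, not trial data).
population = [random.gauss(50, 10) for _ in range(200)]

# Split the population at random into two groups many times and record the
# difference in means: this is how treatment group differences behave when
# the treatments are equally effective.
diffs = []
for _ in range(10_000):
    shuffled = random.sample(population, len(population))
    group_a, group_b = shuffled[:100], shuffled[100:]
    diffs.append(sum(group_a) / 100 - sum(group_b) / 100)

# Compare an observed difference (say 4 units) with the reference distribution.
observed = 4.0
proportion = sum(abs(d) >= observed for d in diffs) / len(diffs)
print(f"Proportion of random splits at least as extreme as {observed}: {proportion:.3f}")
```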
The term random does not mean the same as haphazard but has a precise technical meaning. By random allocation we mean that each patient has a known chance, usually an equal chance, of being given each treatment, but the treatment to be given cannot be predicted. If there are two treatments the simplest method of random allocation gives each patient an equal chance of getting either treatment; it is equivalent to tossing a coin. In practice most people use either a table of random numbers or a random number generator on a computer. This is simple randomisation. Possible modifications include block randomisation, to ensure closely similar numbers of patients in each group, and stratified randomisation, to keep the groups balanced for certain prognostic patient characteristics. We discuss these extensions in a subsequent Statistics note.
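As a minimal sketch of simple randomisation with a computer random number generator, the Python fragment below gives each patient an equal, unpredictable chance of either treatment, the electronic equivalent of tossing a coin. The patient identifiers, treatment labels "A" and "B", and the fixed seed are illustrative assumptions, not part of any particular trial.

```python
import random

random.seed(20240101)  # fixed here only so the example is reproducible

patients = [f"patient_{i:03d}" for i in range(1, 21)]  # illustrative identifiers

# Simple randomisation: each patient independently has an equal chance of
# receiving either treatment.
allocation = {p: random.choice(["A", "B"]) for p in patients}

for patient, treatment in allocation.items():
    print(patient, treatment)

# With simple randomisation the two groups will usually not be exactly equal
# in size; block randomisation (discussed in a later note) addresses this.
```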
Fifty years after the publication of the first randomised trial5 the technical meaning of the term randomisation continues to elude some investigators. Journals continue to publish “randomised” trials which are no such thing. Common approaches include allocating treatments according to the patient’s date of birth or date of enrolment in the trial (such as giving one treatment to those with even dates and the other to those with odd dates), according to the terminal digit of the hospital number, or simply alternately into the different treatment groups. While all of these approaches are in principle unbiased, being unrelated to patient characteristics, problems arise from the openness of the allocation system.1 Because the treatment is known when a patient is considered for entry into the trial this knowledge may influence the decision to recruit that patient and so produce treatment groups which are not comparable.
Of course, situations exist where randomisation is simply not possible.6 The goal here should be to retain all the methodological features of a well conducted randomised trial7 other than the randomisation.
References
- 1. Altman DG. Randomisation. BMJ 1991;302:1481–1482. doi: 10.1136/bmj.302.6791.1481
- 2. Sacks H, Chalmers TC, Smith H. Randomized versus historical controls for clinical trials. Am J Med 1982;72:233–240. doi: 10.1016/0002-9343(82)90815-4
- 3. Smithells RW, Sheppard S, Schorah CJ, Seller MJ, Nevin NC, Harris R, et al. Possible prevention of neural-tube defects by periconceptional vitamin supplementation. Lancet 1980;i:339–340. doi: 10.1016/s0140-6736(80)90886-7
- 4. MRC Vitamin Study Research Group. Prevention of neural tube defects: results of the Medical Research Council vitamin study. Lancet 1991;338:131–137.
- 5. Medical Research Council. Streptomycin treatment of pulmonary tuberculosis. BMJ 1948;2:769–782.
- 6. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996;312:1215–1218. doi: 10.1136/bmj.312.7040.1215
- 7. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996;276:637–639. doi: 10.1001/jama.276.8.637