Editor—Expressing the results of clinical trials and systematic reviews in terms of odds ratios can be more seriously misleading than Davies et al suggest.1 They gave a correct analysis of situations in which odds ratios are used to describe increases in event rates, but their consideration of the more common situation, in which treatments reduce event rates, is short-sighted. Here, effectiveness is more commonly expressed as the percentage relative risk reduction (100×(1−relative risk)%) than as the relative risk itself. The discrepancy between a relative risk reduction and the equivalent relative odds reduction (100×(1−odds ratio)%) can be misleading. When event rates are high (commonly the case in trials and systematic reviews) the relative odds reduction can be many times larger than the equivalent relative risk reduction.
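For concreteness, the quantities involved can be written out for a trial in which a of n1 treated patients and c of n2 control patients have the event (a standard formulation; the notation is mine, not Davies et al's):

$$
\mathrm{RR}=\frac{a/n_1}{c/n_2},\qquad
\mathrm{OR}=\frac{a/(n_1-a)}{c/(n_2-c)}
$$

$$
\text{relative risk reduction}=100\,(1-\mathrm{RR})\%,\qquad
\text{relative odds reduction}=100\,(1-\mathrm{OR})\%
$$

Because the odds a/(n1−a) exceeds the risk a/n1 by more and more as event rates rise, the two reductions diverge most sharply in exactly the high-event-rate trials described below.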
For example, Brent et al report the results of a trial of a programme aimed at increasing the duration of breast feeding.2 By three months 32/51 (63%) women had stopped breast feeding in the intervention group, compared with 52/57 (91%) in the control group. Whereas the relative risk reduction is 31%, the relative odds reduction is 84%: nearly three times as large. The same problem can occur in systematic reviews: a summary of the results of seven trials of antimicrobial treatment after preterm premature rupture of the membranes showed a 49% relative odds reduction in delivery within seven days, whereas the relative risk reduction was only 19%.3
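As a quick check, a few lines of Python reproduce these figures from the raw counts (an illustrative sketch; the function name and rounding are mine):

```python
def reductions(events_trt, n_trt, events_ctl, n_ctl):
    """Percentage relative risk and relative odds reductions from a 2x2 table."""
    risk_t, risk_c = events_trt / n_trt, events_ctl / n_ctl
    odds_t, odds_c = risk_t / (1 - risk_t), risk_c / (1 - risk_c)
    rrr = 100 * (1 - risk_t / risk_c)   # relative risk reduction
    ror = 100 * (1 - odds_t / odds_c)   # relative odds reduction
    return rrr, ror

# Brent et al: 32/51 stopped breast feeding (intervention) v 52/57 (control)
rrr, ror = reductions(32, 51, 52, 57)
print(f"RRR = {rrr:.0f}%, ROR = {ror:.0f}%")  # RRR = 31%, ROR = 84%
```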
Although relative odds reductions and relative risk reductions always go in the same direction, discrepancies of this magnitude are large enough to mislead. Good estimates of treatment effects are essential if clinicians are to balance the probabilities of the good and bad outcomes that a treatment may cause.
The only safe use of odds ratios is in case-control studies and logistic regression analyses, where they are the best estimates of relative risks that can be obtained. Theoretical mathematical arguments for using odds ratios in other circumstances have not been supported by empirical studies.
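When an odds ratio is all that is available but the control group event rate is also known, the equivalent relative risk can be recovered exactly by simple algebra (a widely used conversion, not discussed in the letter itself; the helper name is mine):

```python
def rr_from_or(odds_ratio, control_risk):
    """Convert a crude odds ratio to the equivalent relative risk,
    given the control group event rate (an exact algebraic identity)."""
    return odds_ratio / (1 - control_risk + control_risk * odds_ratio)

# Brent et al again: OR of about 0.16, control event rate 52/57
print(rr_from_or(0.162, 52 / 57))  # ~0.69, i.e. the 31% relative risk reduction
```

The denominator shows why the two measures agree only when events are rare: as the control risk approaches zero the expression collapses to the odds ratio itself.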
In clinical trials and systematic reviews of trials there is no reason for compromising interpretation by reporting results in terms of odds rather than risks.4,5 Authors and journal editors should ensure that the results of trials and systematic reviews are reported as relative risks unless there is a convincing argument otherwise.
Footnotes
J.Deeks@icrf.icnet.uk
References
1. Davies HTO, Crombie IK, Tavakoli M. When can odds ratios mislead? BMJ 1998;316:989–991. (28 March.) doi:10.1136/bmj.316.7136.989
2. Brent NB, Redd B, Dworetz A, D'Amico F, Greenberg J. Breast-feeding in a low-income population. Arch Pediatr Adolesc Med 1995;149:788–803. doi:10.1001/archpedi.1995.02170200088014
3. Mercer BM, Arheart KL. Antimicrobial therapy in expectant management of preterm premature rupture of membranes. Lancet 1995;346:1271–1279. doi:10.1016/s0140-6736(95)91868-x
4. Sackett DL, Deeks JJ, Altman DG. Down with odds ratios! Evidence-Based Med 1996;1:164–166.
5. Sinclair JC, Bracken MB. Clinically useful measures of effect in binary analyses of randomized trials. J Clin Epidemiol 1994;47:881–890. doi:10.1016/0895-4356(94)90191-0