BMJ. 1998 Oct 24;317(7166):1155. doi: 10.1136/bmj.317.7166.1155a

When can odds ratios mislead?

Odds ratios should be used only in case-control studies and logistic regression analyses

Jon Deeks 1
PMCID: PMC1114127  PMID: 9784470

Editor—Expressing the results of clinical trials and systematic reviews in terms of odds ratios can be more seriously misleading than Davies et al advise us.1 They gave a correct analysis of situations in which odds ratios are used to describe increases in event rates, but their consideration of the more common situation, in which treatments reduce event rates, is short-sighted. Here, effectiveness is more commonly expressed as the percentage relative risk reduction (100×(1−relative risk)%) than as the actual relative risk. The discrepancy between a relative risk reduction and the equivalent relative odds reduction (100×(1−odds ratio)%) can be misleading. When event rates are high (commonly the case in trials and systematic reviews) the relative odds reduction can be many times larger than the equivalent relative risk reduction.

For example, Brent et al report the results of a trial of a programme aimed at increasing the duration of breast feeding.2 By three months 32/51 (63%) women had stopped breast feeding in the intervention group, compared with 52/57 (91%) in the control group. Whereas the relative risk reduction is 31%, the relative odds reduction is 84%: nearly three times as large. The same problem can occur in systematic reviews: a summary of the results of seven trials of antimicrobial treatment for premature rupture of membranes showed a 49% relative odds reduction in delivery within seven days, whereas the relative risk reduction was only 19%.3
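As a minimal sketch of this arithmetic (the helper function is ours, for illustration; it is not from the letter or the trial report), both reductions can be computed directly from the 2×2 counts quoted above:

```python
# Relative risk reduction (RRR) versus relative odds reduction (ROR)
# for the breast feeding trial quoted above (Brent et al).

def risk_and_odds_reductions(events_tx, n_tx, events_ctl, n_ctl):
    """Return (RRR%, ROR%) for a treated group versus a control group."""
    risk_tx, risk_ctl = events_tx / n_tx, events_ctl / n_ctl
    odds_tx = risk_tx / (1 - risk_tx)
    odds_ctl = risk_ctl / (1 - risk_ctl)
    rrr = 100 * (1 - risk_tx / risk_ctl)  # 100 x (1 - relative risk)%
    ror = 100 * (1 - odds_tx / odds_ctl)  # 100 x (1 - odds ratio)%
    return rrr, ror

# 32/51 stopped breast feeding (intervention) v 52/57 (control)
rrr, ror = risk_and_odds_reductions(32, 51, 52, 57)
print(f"relative risk reduction: {rrr:.0f}%")  # 31%
print(f"relative odds reduction: {ror:.0f}%")  # 84%
```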

Although relative odds and relative risk reductions always go in the same direction, these discrepancies in magnitude are large enough to mislead. Good estimates of treatment effects are essential for clinicians to be able to balance the relative probabilities of the good and bad outcomes that could be caused by a treatment.

The only safe use of odds ratios is in case-control studies and logistic regression analyses, where they are the best estimates of relative risks that can be obtained. Theoretical mathematical arguments for using odds ratios in other circumstances have not been supported by empirical studies.

In clinical trials and systematic reviews of trials there is no reason for compromising interpretation by reporting results in terms of odds rather than risks.4,5 Authors and journal editors should ensure that the results of trials and systematic reviews are reported as relative risks unless there is a convincing argument otherwise.

Footnotes

J.Deeks@icrf.icnet.uk

References

1. Davies HTO, Crombie IK, Tavakoli M. When can odds ratios mislead? BMJ. 1998;316:989–991. (28 March.) doi: 10.1136/bmj.316.7136.989
2. Brent NB, Redd B, Dworetz A, D’Amico F, Greenberg J. Breast-feeding in a low-income population. Arch Pediatr Adolesc Med. 1995;149:788–803. doi: 10.1001/archpedi.1995.02170200088014
3. Mercer BM, Arheart KL. Antimicrobial therapy in expectant management of preterm premature rupture of membranes. Lancet. 1995;346:1271–1279. doi: 10.1016/s0140-6736(95)91868-x
4. Sackett DL, Deeks JJ, Altman DG. Down with odds ratios! Evidence-Based Med. 1996;1:164–166.
5. Sinclair JC, Bracken MB. Clinically useful measures of effect in binary analyses of randomized trials. J Clin Epidemiol. 1994;47:881–890. doi: 10.1016/0895-4356(94)90191-0
BMJ. 1998 Oct 24;317(7166):1155.

Avoidable systematic error in estimating treatment effects must not be tolerated

Michael B Bracken 1,2, John C Sinclair 1,2

Editor—Davies et al conclude that “qualitative judgments based on interpreting odds ratios as though they were relative risks are unlikely to be seriously in error.”1-1 Statisticians may be satisfied with qualitative judgments, but doctors and patients must make quantitative judgments.

Relative risk and its complement, relative risk reduction, are widely used and well understood measures of treatment effect. Only case-control studies do not permit direct calculation of relative risk. Why then, when measures of treatment effect come from research that uses stronger designs, would clinicians accept odds ratios as being roughly equivalent to relative risks rather than demand to know the relative risk itself? If our goal is to provide as valid an estimate of a treatment effect as possible, why introduce any unnecessary systematic error?

Davies et al suggest that there is no important concern in interpreting an odds ratio of 0.66 (reduction in death after management in specialist stroke units) as if it were the relative risk (the true relative risk was 0.81 in their example). We disagree. How treatment effects are described influences doctors’ perceptions of efficacy.1-2,1-3 Moreover, the number needed to treat, a statistic widely used to express the clinical importance of treatment effects,1-4 is seriously underestimated (by 45%) when the odds ratio is interpreted as the relative risk (in their example, it would be calculated erroneously as 5.3 rather than the true 9.7).
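A short sketch of where those figures come from (the reconstruction is ours, not the correspondents’): the control event rate is not restated in the letter, but it is pinned down by the two published figures, since OR = RR(1 − CER)/(1 − RR × CER). The slight mismatch with the quoted 5.3 reflects the rounding of the odds ratio and relative risk to two decimal places.

```python
# Number needed to treat (NNT) when an odds ratio (OR) is misread as a
# relative risk (RR), using the stroke unit figures quoted above.

odds_ratio, true_rr = 0.66, 0.81

# Recover the control event rate (CER) implied by the two figures,
# solving OR = RR * (1 - CER) / (1 - RR * CER) for CER.
cer = (odds_ratio - true_rr) / (true_rr * (odds_ratio - 1))  # ~0.54

nnt_wrong = 1 / (cer * (1 - odds_ratio))  # OR misread as RR: ~5.4 (quoted as 5.3)
nnt_right = 1 / (cer * (1 - true_rr))     # true RR: ~9.7

print(f"CER ~ {cer:.2f}; NNT (OR as RR) ~ {nnt_wrong:.1f}; NNT (true RR) ~ {nnt_right:.1f}")
```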

Knowing the number of patients one needs to treat to prevent one patient having the adverse target event is particularly useful in deciding whether to treat. Clinicians will treat patients when the number needed to treat is lower than a threshold number at which benefits of treatment wholly offset adverse events attributable to it.1-5 Interpreting an odds ratio as if it were a relative risk introduces a systematic error in the estimation of the number needed to treat and hence in decisions on treatment: treatment will be recommended when it should not be.

The table shows the number needed to treat calculated erroneously from misinterpretation of the odds ratio as if it were the relative risk and correctly from the true relative risk. The calculations are done at high control event rates and over a range of odds ratios. When the control event rate is high, interpretation of the odds ratio as the relative risk results in a systematic and important underestimate of the number needed to treat.

Table. Number needed to treat calculated from misinterpretation of odds ratio (OR) as if it were relative risk (RR) and from true RR

          Control event rate 50%                   Control event rate 80%
 OR   When OR used as RR  When true RR used   When OR used as RR  When true RR used
 0.5          4.0                6.0                  2.5                7.5
 0.6          5.0                8.0                  3.1               10.6
 0.7          6.7               11.3                  4.2               15.8
 0.8         10.0               18.0                  6.3               26.2
 0.9         20.0               38.0                 12.5               57.5
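The table can be reproduced from first principles. The following sketch (ours, using only the definitions of the two measures) computes the erroneous NNT by treating the OR as if it were the RR, and the correct NNT by converting the OR back to the event rate it implies in the treated group; exact halves such as 6.25 and 26.25 may round either way at one decimal place.

```python
# Reproduce the table: NNT when the OR is misread as the RR, versus NNT
# from the true RR implied by the OR at a given control event rate (CER).

def nnt_or_as_rr(odds_ratio, cer):
    """NNT if the OR is (wrongly) treated as the RR."""
    return 1 / (cer * (1 - odds_ratio))

def nnt_from_true_rr(odds_ratio, cer):
    """NNT from the event rate the OR actually implies in the treated group."""
    control_odds = cer / (1 - cer)
    treated_rate = odds_ratio * control_odds / (1 + odds_ratio * control_odds)
    return 1 / (cer - treated_rate)

print("OR    50%: OR-as-RR   true RR    80%: OR-as-RR   true RR")
for odds_ratio in (0.5, 0.6, 0.7, 0.8, 0.9):
    cells = [f(odds_ratio, cer) for cer in (0.5, 0.8)
             for f in (nnt_or_as_rr, nnt_from_true_rr)]
    print(f"{odds_ratio:.1f}   " + "   ".join(f"{c:7.1f}" for c in cells))
```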

When relative risk can be directly calculated, it should be. There is no reason to tolerate avoidable systematic error in estimating treatment effects.

References

1-1. Davies HTO, Crombie IK, Tavakoli M. When can odds ratios mislead? BMJ. 1998;316:989–991. (28 March.) doi: 10.1136/bmj.316.7136.989
1-2. Forrow L, Taylor WC, Arnold RM. Absolutely relative: how research results are summarized can affect treatment decisions. Am J Med. 1992;92:121–124. doi: 10.1016/0002-9343(92)90100-p
1-3. Naylor CD, Chen E, Strauss B. Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness? Ann Intern Med. 1992;117:916–921. doi: 10.7326/0003-4819-117-11-916
1-4. Sinclair JC, Bracken MB. Clinically useful measures of effect in binary analyses of randomized trials. J Clin Epidemiol. 1994;47:881–889. doi: 10.1016/0895-4356(94)90191-0
1-5. Guyatt GH, Sackett DL, Sinclair JC, Haywood R, Cook DJ, Cook RJ. Users’ guides to the medical literature. IX. A method for grading health care recommendations. JAMA. 1995;274:1800–1804. doi: 10.1001/jama.274.22.1800
BMJ. 1998 Oct 24;317(7166):1155.

Authors’ reply

Huw Talfryn Oakley Davies 1,2, Manouche Tavakoli 1,2, Iain Kinloch Crombie 1,2

Editor—Both letters make interesting points about odds ratios but do not actually uncover any shortcomings in our paper. We did not advocate the use of odds ratios. Instead our paper addressed the issue of how common events must be, and how big effect sizes must be, before the odds ratio becomes a misleading estimate of the relative risk. Our main aim was to put to rest the widespread misconception that the odds ratio is a good approximation to the relative risk only when rare events are being dealt with. Our conclusion was that “serious divergence between the odds ratio and the relative risk only occurs with large effects on groups at high initial risk.”

In our paper we clarified this: so long as the event rate in both the intervention and the control groups is less than 30% and the effect size is no more than moderate (say, a halving or a doubling of risk), interpreting an odds ratio as a relative risk will overestimate the size of the effect by less than one fifth. This is a far cry from the requirement that events be rare. The authors of the letters confirm that problems can arise with higher event rates—all their examples use unusually high rates of between 50% and 91%.
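As a rough check of that boundary (our own arithmetic, for the risk-reducing direction the letters discuss; the function name is illustrative): at the edge of the stated limits, a 30% control event rate and a halving of risk, the odds ratio overstates the relative reduction by a little under one fifth.

```python
# At the boundary of the stated conditions: control event rate 30%,
# treatment halves the risk. Reading the OR as the RR overstates the
# relative reduction by just under one fifth.

def odds_ratio(risk_treated, risk_control):
    return (risk_treated / (1 - risk_treated)) / (risk_control / (1 - risk_control))

cer, rr = 0.30, 0.5                       # control event rate, true relative risk
or_ = odds_ratio(rr * cer, cer)           # ~0.41
overstatement = (1 - or_) / (1 - rr) - 1  # relative odds v relative risk reduction
print(f"OR ~ {or_:.2f}; effect overstated by {overstatement:.0%}")  # ~18%
```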

In the paper we were quite clear that we were concerned with broad qualitative judgments of treatment effects, not precise quantitative estimates of the size of any effect. Though it is true, as Bracken and Sinclair state, that “doctors and patients must make quantitative judgments,” we should be wary of invoking too great a precision in making these judgments. Many factors may influence the observed effect size of a treatment—for example, the nature of the group of patients studied, variations in the healthcare setting and concomitant care, and, of course, the play of chance.

On one thing we are in clear agreement: odds ratios can lead to confusion and alternative measures should be used when these are available. Authors reporting on prospective studies should be encouraged to report the actual relative risk or relative risk reduction. Better still, as Bracken and Sinclair point out, numbers needed to treat (which measure absolute benefit) are more useful when treatment decisions are made than either relative risks or odds ratios (which measure only relative benefit). Nevertheless, when odds ratios are encountered, guidance on their interpretation is of more use than outright rejection.

