Report the absolute event rates. 0.6 percent vs. 1.0 percent and 60 percent vs. 100 percent represent very different absolute event rates, yet yield the same 60 percent relative risk.
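The point above can be sketched numerically. This is a minimal illustration with the article's own figures; the function name is ours:

```python
# Two scenarios with the same relative risk but very different
# absolute event rates (the 0.6%/1.0% and 60%/100% examples above).
def relative_risk(rate_treated, rate_control):
    return rate_treated / rate_control

# Scenario A: rare outcome — 0.6% vs. 1.0%
rr_a = relative_risk(0.006, 0.010)
arr_a = 0.010 - 0.006   # absolute difference: 0.4 percentage points

# Scenario B: common outcome — 60% vs. 100%
rr_b = relative_risk(0.60, 1.00)
arr_b = 1.00 - 0.60     # absolute difference: 40 percentage points

print(rr_a, rr_b)       # both ~0.6 — the same 60 percent relative risk
print(arr_a, arr_b)     # a 100-fold difference in absolute terms
```

Reporting only the 60 percent relative risk hides that hundred-fold difference in absolute impact.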
Report benefit and harm results symmetrically. When reporting the results of an intervention, present absolute event rates for both benefits and harms.
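A minimal sketch of symmetric reporting, using hypothetical counts (all numbers below are invented for illustration):

```python
# Report absolute event rates for the benefit outcome AND the harm
# outcome, in the same units, for both arms. Counts are hypothetical.
def event_rate(events, n):
    return events / n

n_treat, n_ctrl = 1000, 1000
benefit_treat = event_rate(20, n_treat)   # primary outcome, treatment arm (2.0%)
benefit_ctrl  = event_rate(40, n_ctrl)    # primary outcome, control arm (4.0%)
harm_treat    = event_rate(30, n_treat)   # adverse event, treatment arm (3.0%)
harm_ctrl     = event_rate(10, n_ctrl)    # adverse event, control arm (1.0%)

absolute_benefit = benefit_ctrl - benefit_treat   # 2 percentage points fewer bad outcomes
absolute_harm    = harm_treat - harm_ctrl         # 2 percentage points more adverse events
```

Presented this way, a reader can weigh benefit against harm on the same absolute scale.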
Put the results in context. Help the reader calibrate the magnitude of the risk or intervention effect by placing the results in context. Give the risk of events generally acknowledged to be common (e.g., catching the common cold) or rare (e.g., being struck by lightning). Compare your intervention effects to other accepted treatments using outcomes common to both (e.g., mortality).
Convert odds ratios to risk ratios if the event rate exceeds 10%. When the outcome of interest is uncommon (e.g., <10%), the odds ratio is appropriate because it approximates the risk ratio; as outcomes become more common, the odds ratio increasingly overstates the risk ratio. Odds ratios are hard to comprehend directly and are usually interpreted as if they were relative risks. Many people have no intuitive feel for odds or odds ratios, and it is unrealistic to expect reporters or the public to understand the distinction. Simple methods of conversion exist for both crude and adjusted data.18,19 If odds ratios cannot be avoided, remind the reader that the higher the base rate, the more the odds ratio overstates the relative risk.
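One widely used conversion takes the odds ratio together with the event rate in the reference (unexposed) group; the cited methods18,19 cover the crude and adjusted cases in detail. A minimal sketch, with the function name and example numbers ours:

```python
# Approximate risk ratio from an odds ratio, given the event rate p0
# in the reference group: RR ≈ OR / (1 - p0 + p0 * OR).
def or_to_rr(odds_ratio, p0):
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a rare outcome (5% base rate) the OR barely overstates the RR...
print(or_to_rr(2.0, 0.05))   # ~1.90, close to the OR of 2.0
# ...but with a common outcome (40% base rate) the overstatement is large.
print(or_to_rr(2.0, 0.40))   # ~1.43, far below the OR of 2.0
```

This makes the guideline's threshold concrete: as the base rate climbs past 10%, the gap between the odds ratio and the risk ratio stops being negligible.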
Report the precision of the result. The confidence interval is a range of values consistent with the data that is believed to encompass the “true” population value.17 Help the reporter understand that the “true” effect in the broader population to which your study applies lies within this confidence interval.
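As one concrete sketch of such an interval, the standard log-scale normal approximation gives a 95% confidence interval for a risk ratio from a 2x2 table; the counts below are hypothetical:

```python
import math

# 95% CI for a risk ratio from a 2x2 table:
# a/b = events/non-events in the exposed group,
# c/d = events/non-events in the unexposed group.
def rr_confidence_interval(a, b, c, d, z=1.96):
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical trial: 15/100 events in the exposed group, 30/100 unexposed.
lo, hi = rr_confidence_interval(15, 85, 30, 70)
# (lo, hi) is the range of risk-ratio values consistent with the data,
# here roughly (0.29, 0.87) around the point estimate of 0.5.
```

Reporting the interval, not just the point estimate of 0.5, tells the reader how precisely the effect has been pinned down.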
Ensure that comparisons are appropriate. In reporting the results of comparisons, be clear about the reference group. If an important interaction is identified, do not report results from aggregate data alone; report the effects within each subgroup.