Abstract
Objective
Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR).
Study Design and Setting
This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of HIV Testing and Counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes.
Results
Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention, outcome, and population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as either desirable (29.5%, e.g., consistent condom use) or undesirable (70.5%, e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR, and provide the conceptual basis for the advantages of RRR over other metrics.
Conclusion
Our review of the literature suggests that RRR, an easily understood and useful metric to convey risk reduction associated with an intervention, is underutilized by original and review studies.
Keywords: Relative Risk Reduction, Risk Ratio, Odds Ratio, Risk Factor, Decision Making
INTRODUCTION
The goal of most epidemiological studies is to measure the effect of an exposure or intervention. Studies may report observed effect sizes in many different forms, including relative risks, odds ratios, hazard ratios, and differences in means. This heterogeneity of outcome metrics complicates the use of studies for translational research. Specifically, estimating the potential benefit of implementing interventions requires a consistent efficacy metric that can be combined across studies. The resulting summary efficacy measure can then be applied to the baseline risk of adverse outcomes in a population of interest to calculate likely health gains.
The effect of an exposure on a binary outcome (e.g., adverse outcome or not) can be presented in two broad categories: absolute (risk reduction, number needed to treat) or relative (relative risk, relative risk reduction, odds ratio). Absolute measures are easier to compute and more meaningful when estimating the number of affected people and the burden of disease or risk factors. Relative measures are more widely used by researchers conducting intervention or etiological studies (1). Understanding and choosing the proper effect measure is crucial for groups of researchers, as well as for health practitioners focused on clinical or public health issues (2).
Various terminologies have been used to convey different perspectives on the relative reduction in risk, such as attributable risk reduction, preventive fraction, relative risk increase, relative benefit increase, and relative benefit reduction (2, 3). Heterogeneity in the use of effect measures makes it challenging to compare findings across studies and to pool effect measures in meta-analysis. Our work in systematic review and simulation modeling of health impact suggests that the most easily understood (4) and widely applicable (5) measure of reduction in risk is "relative risk reduction" – the proportionate drop in adverse outcomes associated with exposure to an intervention or other protective factor. It combines the concepts of incidence (in intervention and non-intervention subpopulations) and coverage (when translated to the population level; see Discussion) (4). It also has the potential to quantify the impact of multiple interventions on the outcomes of interest by taking into account the main and interaction effects of each intervention (5).
We define relative risk reduction (RRR) as the relative lowering of the risk of adverse behavioral or health outcomes compared with the control group. Using the Leviton notation (6), the basic formula for RRR is

RRR = (Icontrol − Iintervention) / Icontrol

where Icontrol is the risk of an undesirable behavioral or health outcome in the absence of a preventive intervention and Iintervention is the risk in those receiving the intervention. RRR is therefore the risk prevented by the intervention divided by the overall risk in the control group. The RRR for an effective intervention is greater than zero (zero is defined as no effect) and can be as high as 1 (complete elimination of the risk of adverse outcomes). An intervention that causes harm has a negative RRR.
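For readers who prefer to compute this programmatically, a minimal Python sketch of the definition follows; the function name and the example risks are illustrative assumptions, not values from any study.

```python
def rrr(i_control: float, i_intervention: float) -> float:
    """Relative risk reduction: the proportionate drop in the risk of an
    undesirable outcome in the intervention group relative to the control group."""
    if i_control == 0:
        raise ValueError("RRR is undefined when the control-group risk is zero")
    return (i_control - i_intervention) / i_control


# Illustrative risks only: 0.20 in the control group, 0.15 with the intervention
print(round(rrr(0.20, 0.15), 2))  # 0.25, i.e., a 25% relative risk reduction
```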
The objective of this paper is to provide public health researchers with a theoretical framework, as well as practical steps, for calculating RRR from different effect measures. We consider study examples that vary in design (e.g., single-arm pre-post, double-arm cohort study) and in the desirability of the outcome (e.g., progression to AIDS is undesirable, whereas safer sexual practice is desirable).
RRR TRANSFORMATION ALGORITHM
As mentioned above, studies may report effect measures in different ways. To provide a comprehensive guide on how to convert all types of effect measures into RRR, we developed a stepwise algorithm (Fig. 1). In the first step, one determines whether the incidence of the outcome was reported for each group, or whether the results were reported only as effect size measures such as the risk ratio (RR), hazard ratio (HR), or odds ratio (OR).
Fig. 1.
Relative Risk Reduction (RRR) calculation algorithm.
ES: Effect Size; RR: Risk Ratio; HR: Hazard Ratio; OR: Odds Ratio; RRR: Relative Risk Reduction; Iu: Incidence in unexposed (or control) group; Ie: Incidence in exposed (or intervention) group; Iu1: Incidence in unexposed (or control) group at follow-up visit; Ie1: Incidence in exposed (or intervention) group at follow-up visit; Iu0: Incidence in unexposed (or control) group at baseline visit; Ie0: Incidence in exposed (or intervention) group at baseline visit; P0: Prevalence of the outcome at the time the OR was calculated.
When the incidence of the outcome is reported, it is crucial to distinguish whether there is a parallel control group in addition to the intervention group (double-arm study) or not (single-arm study). The next step is to determine whether the outcome is desirable or undesirable.
Based on our definition, we are interested in the effect of interventions in preventing undesirable outcomes, so all desirable outcomes need to be converted into undesirable outcomes. For example, if the reported outcome is consistent condom use during all sexual contacts in the last month, we convert it to inconsistent condom use. For ORs, an additional calculation step is needed if the reported outcome is not rare (i.e., prevalence of 10% or more). This step addresses the built-in bias of the OR when it is used in place of the RR. The bias is negligible when the outcome is rare (7); however, this condition is often not met in preventive intervention studies.
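To make the effect-size branch of this algorithm concrete, the Python sketch below outlines one possible implementation. The function names, arguments, and the choice to apply the Zhang correction whenever a control-group prevalence is supplied are our illustrative assumptions; this sketch is not the published algorithm or the Excel calculator provided in the supplementary material.

```python
from typing import Optional


def zhang_rr_from_or(or_value: float, p0: float) -> float:
    """Approximate an RR from an OR for a common outcome (Zhang & Yu, 1998).
    p0 is the risk of the undesirable outcome in the control (unexposed) group."""
    return or_value / (1.0 - p0 + p0 * or_value)


def rrr_from_effect_size(es: float, es_type: str,
                         p0: Optional[float] = None,
                         desirable: bool = False) -> float:
    """Convert a reported RR, HR, or OR into an RRR (effect-size branch of Fig. 1).

    es        -- the reported effect size
    es_type   -- "RR", "HR", or "OR"
    p0        -- control-group prevalence of the outcome the effect size refers to
    desirable -- True if the reported outcome is desirable (e.g., consistent condom use)
    """
    if es_type == "OR":
        if desirable:
            # The OR is symmetrical: invert it to describe the undesirable outcome,
            # and take the complement of the outcome prevalence.
            es = 1.0 / es
            p0 = None if p0 is None else 1.0 - p0
        if p0 is not None:
            # Zhang correction; a safe step even when the outcome is rare.
            es = zhang_rr_from_or(es, p0)
        return 1.0 - es
    if es_type in ("RR", "HR"):
        if desirable:
            raise ValueError("An RR/HR of a desirable outcome needs the group risks, "
                             "so that the undesirable complement can be used instead")
        return 1.0 - es
    raise ValueError(f"Unsupported effect size type: {es_type}")
```

For instance, with the OR of 0.80 from Example 3 below and no prevalence supplied (rare outcome), rrr_from_effect_size(0.80, "OR") returns approximately 0.20.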
In the following section, we elaborate with examples from the published literature on HIV prevention. These examples come from our recent work on systematic review and simulation modeling of the health impact of HIV Testing and Counseling (HTC, including Voluntary Counseling and Testing - VCT). In brief, to identify relevant studies, we searched six bibliographic databases (PubMed, EMBASE, PsycINFO, Web of Science, CINAHL, and Cochrane) for relevant systematic reviews. We retrieved the original studies cited in the systematic review papers and, following pre-specified eligibility criteria, two reviewers independently screened potentially relevant titles, abstracts, and then full texts in a stepwise fashion to identify studies for data extraction. Two trained research staff then independently extracted data from the included studies and, supervised by a senior researcher, reconciled any discrepancies in the extracted data. Our team developed and used a comprehensive, detailed protocol to extract data on the Intervention – control condition, Outcome, and Population Trio (IOPT) as the unique unit of data extraction and analysis describing each effect size reported by the studies. For details on the protocol, please visit www.globalhealthdecisions.org.
From the 55 relevant articles included for data extraction, we identified 473 IOPTs. As presented in Table 1, the reported outcomes were heterogeneous with respect to study design, reported effect measure, and desirability of the reported outcome. Single-arm prospective studies yielded the largest share of IOPTs (37.8%), and randomized controlled trials contributed 17.1%. With respect to the type of effect size reported, the frequencies of IOPTs were as follows: 70.6% pre-post proportion, 14.0% odds ratio, 10.2% mean difference, 4.4% risk ratio, and less than 1% RRR. IOPTs also differed by type of outcome: 29.5% desirable and 70.5% undesirable. These heterogeneities complicate the use of studies for translational research, because results cannot easily be combined across studies. Below, we provide examples of how all of these heterogeneous effect measures can be transformed into one effect measure, relative risk reduction.
Table 1.
The heterogeneity in the 55 articles (473 IOPTs) regarding the study design, reported effect measure and the desirability of the reported outcome.
| Study Design | n (%) |
|---|---|
| Randomized Controlled Trial | 81 (17.1) |
| Double-arm Pre-Post Quasi-Experimental | 12 (2.5) |
| Single Arm Pre-Post Quasi-Experimental | 83 (17.5) |
| Prospective Single Arm Cohort | 179 (37.8) |
| Retrospective Single Arm Cohort | 14 (3.0) |
| Prospective Double Arm Cohort | 71 (15.0) |
| Retrospective Double Arm Cohort | 33 (7.0) |
| Reported effect measure | |
| Risk ratio | 21 (4.4) |
| Odds ratio | 66 (14.0) |
| Mean difference | 48 (10.2) |
| Pre-post rate/prevalence | 334 (70.6) |
| Relative risk reduction | 4 (0.9) |
| Outcome Desirability | |
| Undesirable | 328 (70.5) |
| Desirable | 137 (29.5) |
ILLUSTRATIVE EXAMPLES
Example 1: Pre-post proportion of unsafe sex in a (single arm) study with no concurrent control group
The effect of HIV testing on the sexual behaviors of 307 homosexual men was assessed by van Griensven et al. in 1989 (8). Among seropositive men, the prevalence of consistent condom use with regular partners increased from 29% (before HIV testing) to 62% (Fig. 2). To calculate the RRR, we first transform the desirable outcome (consistent condom use) into an undesirable one (inconsistent condom use). Thus, the proportion of inconsistent condom use is 1 − 0.29 = 0.71 before the intervention and 1 − 0.62 = 0.38 afterwards. We can then calculate the RRR as

RRR = (0.71 − 0.38) / 0.71 = 0.46
Fig. 2.
The proportion of seropositive MSM who reported inconsistent condom use with their regular sexual partners before and after HIV testing (8)
Thus, an estimated 46% of the observed prevalence of inconsistent condom use among seropositive men who have sex with men (MSM) could be prevented by providing HIV testing services.
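The same calculation written as a short Python sketch, using the proportions from this example (variable names are ours):

```python
# Example 1 (single-arm pre-post): consistent condom use rose from 29% to 62%,
# so the undesirable complement (inconsistent use) fell from 0.71 to 0.38.
pre_inconsistent = 1 - 0.29   # 0.71
post_inconsistent = 1 - 0.62  # 0.38

rrr_example1 = (pre_inconsistent - post_inconsistent) / pre_inconsistent
print(round(rrr_example1, 2))  # 0.46 -> a 46% relative reduction in inconsistent condom use
```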
Example 2: Pre-post risk of undesirable outcome in a study with concurrent control group
Coates et al examined the efficacy of voluntary counseling and testing (VCT) in reducing unprotected intercourse (9). Individuals were randomly assigned to receive VCT or basic health information. The proportion of individuals reporting unprotected intercourse with non-primary partners declined significantly more for those receiving VCT than for controls. The results are summarized in Fig. 3.
Fig. 3.
The proportion of individuals reporting unprotected intercourse with non-primary partners at baseline and follow-up in the VCT and basic health information groups (9)
RRR, as the summary effect measure of VCT, can be calculated using the following formula (in the notation of Fig. 1):

RRR = 1 − (Ie1 / Ie0) / (Iu1 / Iu0)

where Ie0 and Ie1 are the proportions reporting unprotected intercourse in the VCT arm at baseline and follow-up, and Iu0 and Iu1 are the corresponding proportions in the basic health information arm.
The findings indicate that VCT, in comparison to basic health information, contributed to a 24.9% reduction in the proportion of unprotected intercourse with non-primary partners.
If the baseline risk is the same in the intervention and control groups (Ie0 = Iu0), the RRR can be simplified. It can be decomposed into two separate parts, the RRR in the intervention (VCT) arm and the RRR in the control arm (basic information arm):

RRRVCT = (Ie0 − Ie1) / Ie0

and

RRRcontrol = (Iu0 − Iu1) / Iu0

and then the total RRR can be calculated by

RRR = (RRRVCT − RRRcontrol) / (1 − RRRcontrol)

which is algebraically equivalent to the formula above.
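A Python sketch of this double-arm, pre-post calculation using the Fig. 1 notation is given below. Because the Fig. 3 proportions are not repeated in the text, the functions take symbolic arguments rather than the study's numbers; the combination of arm-specific RRRs in the final comment mirrors the formula above.

```python
def rrr_double_arm_pre_post(ie0: float, ie1: float, iu0: float, iu1: float) -> float:
    """Overall RRR from a double-arm pre-post study (notation of Fig. 1):
    ie0/ie1 = risk of the undesirable outcome in the intervention arm at baseline/follow-up,
    iu0/iu1 = the same risks in the control arm."""
    return 1.0 - (ie1 / ie0) / (iu1 / iu0)


def arm_rrr(i0: float, i1: float) -> float:
    """Arm-specific RRR from baseline (i0) to follow-up (i1)."""
    return (i0 - i1) / i0


# The overall RRR can equivalently be combined from the two arm-specific RRRs:
#   rrr_total = (rrr_vct - rrr_control) / (1 - rrr_control)
```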
Example 3: OR of an undesirable rare outcome
It has been shown (9) that men and women who receive VCT have a lower risk of acquiring a sexually transmitted disease by the first follow-up visit, compared with those who receive basic health information. The effect was reported as an odds ratio (OR) of 0.80 (95% CI 0.53–1.20). Since this undesirable outcome is rare (<10%), the reported OR can be interpreted as a relative risk, and the RRR can be calculated as

RRR = 1 − RR ≈ 1 − OR = 1 − 0.80 = 0.20
In words: by providing VCT services to all male and female individuals in a community, we could expect a 20% reduction in sexually transmitted diseases, compared to providing only basic health information. This assumes that the reported 20% reduction is due to the intervention (see the discussion).
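In code, this example reduces to a single subtraction; the Python snippet below uses only the reported OR (variable names are ours).

```python
# Example 3: a rare, undesirable outcome (STD acquisition) reported as an OR.
or_std = 0.80             # reported OR, VCT vs basic health information
rrr_std = 1 - or_std      # for a rare outcome, OR ~ RR
print(round(rrr_std, 2))  # 0.2 -> roughly a 20% relative reduction in STDs
```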
Example 4: OR of a desirable outcome that needs to be transformed to RR of an undesirable outcome
Cremin et al. explored consistent condom use over time, comparing those who received VCT to those who did not (10). The authors reported the effect by sex and HIV status; for this example, we focus on men testing positive. At baseline, 13% of these men reported consistent condom use with regular partners in the last two weeks. The effect of VCT on consistent condom use with regular partners was reported as an adjusted OR = 1.67 (95% CI 0.68–3.98). Since the OR is symmetrical, the OR* for inconsistent condom use is the reciprocal of 1.67, which equals 0.60 (95% CI 0.25–1.47). In other words, how the event of interest is defined (consistent rather than inconsistent condom use) does not affect the magnitude of the OR, only its direction. However, this is not true for the RR and RRR, which may lead to confusion (11). Therefore, we first transform the OR into OR* to reflect our outcome of interest. We also transform P0 = 13% consistent condom use into P*0 = 87% inconsistent condom use and then apply Zhang's formula (7) to convert OR* into RR*. The calculation is as follows:

RR* = OR* / (1 − P*0 + (P*0 × OR*)) = 0.60 / (1 − 0.87 + (0.87 × 0.60)) ≈ 0.92

and then

RRR = 1 − RR* = 1 − 0.92 = 0.08
This means that men who received VCT with a positive HIV test result would decrease their inconsistent condom use by approximately 8% (mid-point estimate, not statistically significant). We can calculate the 95% confidence interval by applying the same formula to the lower and upper confidence limits of the reported adjusted OR, which gives −4% and 27.9% as the 95% CI for the RRR. The wide range indicates uncertainty about the protective effect of VCT, ranging from a 4% increase to a 27.9% decrease in inconsistent condom use among HIV-positive men receiving VCT.
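A Python sketch of this calculation, including the confidence limits, is shown below; the helper function name is ours, and the printed values are rounded.

```python
def zhang_rr_from_or(or_value: float, p0: float) -> float:
    """Approximate an RR from an OR for a common outcome (Zhang & Yu, 1998)."""
    return or_value / (1.0 - p0 + p0 * or_value)


# Example 4: adjusted OR = 1.67 (95% CI 0.68-3.98) for CONSISTENT condom use,
# with 13% consistent use at baseline, i.e., 87% inconsistent use.
p0_inconsistent = 1 - 0.13
for or_consistent in (1.67, 0.68, 3.98):
    or_inconsistent = 1 / or_consistent                      # the OR is symmetrical
    rr_inconsistent = zhang_rr_from_or(or_inconsistent, p0_inconsistent)
    print(round(1 - rr_inconsistent, 3))
# Prints roughly 0.08, -0.043, and 0.279: the RRR point estimate and its 95% CI limits.
```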
DISCUSSION
We have presented an algorithm that can be used to transform different effect measures into the relative risk reduction. To use the algorithm, the first step is to identify the type of effect size reported in the paper, and then to ascertain whether the outcome is desirable or undesirable. The study design must also be accounted for, as some studies include a concurrent control group whereas others report only baseline and follow-up measures for a single group. The last consideration is whether the effect is reported as an OR, and hence whether the RR needs to be recalculated from it.
Our definition of RRR is similar to what Miettinen described as the prevented fraction (PF), the proportion of disease that would have occurred had the intervention not been present in the population. Although RRR and PF both focus on the intervention rather than on risk factor prevalence, RRR is defined in the context of the study, where all participants in the intervention group are exposed to the intervention and all participants in the control group are not, whereas PF estimates the intervention effect at the population level. Below we explain how the RRR formula can be expanded to also account for the level of exposure to a preventive factor in a population.
In the context of HIV prevention, risky behaviors are relatively common, so the potential bias of the OR (when interpreted as an RR) is large (12). When the prevalence of the outcome in the control group is less than 10% (although this cutoff is debated), the OR is close to the risk ratio. However, a safe strategy is to apply the Zhang correction in all cases, and particularly whenever the event of interest is not rare (more than 10%) (7).
The Zhang correction is hard to implement for ORs estimated in case-control studies. The challenge is that in case-control studies, the prevalence of the outcome of interest in the control group (one of the two parameters in Zhang's formula) cannot be estimated directly (13). Usually, this parameter is estimated using information from external sources, such as other studies relevant to the setting. Moreover, in case-control studies, depending on how the source population for cases and controls was defined, the reported OR can be interpreted in several ways: as a risk ratio, rate ratio, odds ratio, or prevalence odds ratio (14). The Zhang correction may not be necessary for some case-control studies, such as nested case-control designs, in which risk or rate ratios can be estimated directly.
Our simple algorithm for estimating an adjusted RRR from different adjusted point effect sizes can also be used to estimate confidence limits. We propose applying the above formulas to the lower and upper bounds (confidence limits) of the adjusted effect sizes. This is not the ideal method for estimating confidence bounds, but it is a reasonable trade-off between simplicity and precision (7, 15). Non-parametric methods such as the bootstrap can also be used for interval estimation of RRR (16). To do this, given the reported effect measure (e.g., OR) and the prevalence of the outcome in the control group, one back-calculates the joint distribution of exposure and outcome, resamples it many times, and calculates the RRR in each iteration. This yields a distribution of RRR from which the variance can be calculated (17). The delta method (18) is another approach that can be applied either to the primary reported effect measure (OR, RR, and so on) or to the calculated RRR; for details, consult Hildebrandt et al. (19).
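As an illustration of this resampling idea, the Python sketch below implements a simple parametric bootstrap. The function name, the assumption that the two group sizes are known, and the example OR and prevalence are ours and purely illustrative.

```python
import numpy as np


def bootstrap_rrr_ci(or_value: float, p0: float, n_control: int, n_intervention: int,
                     n_iter: int = 10_000, seed: int = 0):
    """Parametric bootstrap CI for the RRR, reconstructed from a reported OR and the
    control-group outcome prevalence. Group sizes are assumed to be known (or
    approximated) from the original study."""
    rng = np.random.default_rng(seed)

    # Back-calculate the implied risk in the intervention group from the OR.
    odds_control = p0 / (1 - p0)
    odds_intervention = odds_control * or_value
    p1 = odds_intervention / (1 + odds_intervention)

    # Resample event counts in each arm and recompute the RRR in every iteration.
    risk_c = rng.binomial(n_control, p0, size=n_iter) / n_control
    risk_i = rng.binomial(n_intervention, p1, size=n_iter) / n_intervention
    keep = risk_c > 0   # the RRR is undefined when no control-arm events occur
    rrr = (risk_c[keep] - risk_i[keep]) / risk_c[keep]
    low, high = np.percentile(rrr, [2.5, 97.5])
    return float(low), float(high)


# Illustrative values only (not taken from any of the studies above):
print(bootstrap_rrr_ci(or_value=0.80, p0=0.15, n_control=500, n_intervention=500))
```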
Since the RRR has a very simple definition – the fraction of all cases of a disease or condition that can be prevented by the exposure – we can calculate it for both dichotomous and continuous outcomes (expressed either as counts or as averages) (20). The approach is similar to the pre-post method above. For example, if the total number of sexual partners over the past month decreased from 7 before VCT to 3 afterwards, the effect can be reported as (7 − 3)/7 = 57%. This flexibility makes RRR a popular effect measure for transmission models (21).
An observed RRR for an exposure, or for a set of multiple exposures and subpopulations, can be decomposed into its components to quantify the fraction of disease preventable by each exposure (3). As noted in Example 2, the overall RRR can be partitioned into the RRR for the main exposure (VCT plus information material) and the RRR for the sub-exposure (information material alone).
In practice, the common situation is one in which several exposures and confounding variables influence the outcome of interest, and they may even interact with each other. In such cases, when describing the effect of several exposures (sequentially or overall), Bruzzi et al. (22) and later Eide et al. (23) proposed using either the adjusted ORs or the predicted adjusted prevalences from a multiple logistic model. The results can be presented either as a sequential RRR (considering the effect of each exposure one at a time, in a specified order) or as an average RRR (when all exposures act simultaneously), to quantify the proportion of disease that can be prevented overall (23, 24). These methods are applicable mostly in primary studies, where the logistic models and individual-level data are available for further calculation; in secondary studies, we have observed very limited applicability. We recommend either using the adjusted reported effect measure (e.g., OR) to calculate the RRR (the overall approach) or extracting effect measures within strata of exposure and confounding variables and then transforming each into an RRR.
The sampling distribution of the RRR is skewed in the opposite direction from, and less severely than, the sampling distributions of the OR and RR. The transformation log(1 − RRR) is usually suggested to make the confidence interval symmetrical around the point estimate (13). This is an added value of using RRR instead of RR or OR in systematic reviews and meta-analyses, since log and anti-log transformations of the RR or OR lead to the loss of some data points (e.g., those with zero cells) and add complexity to the interpretation of results.
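A minimal Python sketch of this transformation, assuming the standard error of log(RR) is available (the values shown are purely illustrative):

```python
import math


def rrr_ci_from_log_rr(rr: float, se_log_rr: float, z: float = 1.96):
    """Confidence interval for the RRR built on the log(1 - RRR) = log(RR) scale,
    where it is symmetrical, and then back-transformed to the RRR scale."""
    lower = 1 - math.exp(math.log(rr) + z * se_log_rr)
    upper = 1 - math.exp(math.log(rr) - z * se_log_rr)
    return lower, upper


# Illustrative values only: RR = 0.75 (RRR = 25%) with SE(log RR) = 0.15
print(rrr_ci_from_log_rr(0.75, 0.15))  # roughly (-0.006, 0.441)
```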
Our method for calculating a study-level RRR can also be used to calculate a population-level RRR. Generally, the population RRR (RRRp) addresses the question: what fraction of all cases of a disease or condition in a population could be prevented if everyone were exposed to the protective factor? (2, 24). The assumption of 100% exposure can be relaxed by adding a parameter that specifies the coverage (Q) of the preventive factor. When an OR (with the Zhang correction if the outcome is not rare), HR, or RR is reported, the following formula can be used:

RRRp = Q × (1 − RR)
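A one-line Python sketch of this formula, with illustrative values for the effect size and the coverage:

```python
def population_rrr(effect_size: float, coverage: float) -> float:
    """Population-level RRR given a relative effect size (RR, HR, or a
    Zhang-corrected OR) and the coverage Q of the preventive intervention."""
    return coverage * (1.0 - effect_size)


# Illustrative values only: RR = 0.80 delivered to 60% of the population
print(population_rrr(0.80, 0.60))  # ~0.12 -> a 12% population-level reduction
```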
Like any other effect measure, RRR has its limitations. First, for rare outcomes, when no events occur in the control arm, the RRR cannot be calculated, and such studies are excluded from meta-analysis. One way to overcome this is to apply a continuity correction (adding a small amount to each cell) or to use other measures such as the arcsine difference (11). Second, the RRR is not symmetric with respect to the outcome of interest: an RRR calculated for the outcome "condom use" cannot simply be reversed for the complementary outcome, "non-condom use." Finally, like other relative effect measures, the magnitude of the effect should be interpreted in light of the baseline prevalence of the outcome of interest (4). For example, if the outcome prevalence decreased from 1% at baseline to 0.8% at follow-up, this translates into an RRR of 20%, which seems to be a large effect; however, it may not be important in absolute terms, representing only a 0.2 percentage-point change.
CONCLUSION
RRR has the potential to be used to assess the effect of multiple continuous and/or categorical (harmful or preventive) factors across subpopulations. The results of different studies of similar exposures and outcomes can be pooled or compared when all are transformed into a single measure, the RRR. When calculating the RRR, the research question, the desirability of the reported outcome, and the study design should be considered. The algorithm we present offers a practical guide for this process. We recommend that original studies and reviews report the RRR.
Supplementary Material
RRR calculation algorithm; An Excel Calculator
What’s new?
Key findings
Although Relative Risk Reduction (RRR) is easily understood and widely applicable in public health intervention settings, it is not routinely reported as the primary effect size in original studies.
The heterogeneity in effect measures and types of reported outcome complicates the use of studies for policy translation, because results cannot be easily combined or compared across studies.
What this adds to what was known?
We propose an algorithm that can be applied to transform different reported effect measures into RRR, a metric to standardize effect sizes of public health interventions for translational research.
What’s the implication and what should change now?
Using the proposed algorithm, RRR can be calculated and interpreted more accurately in original studies and reviews.
Acknowledgments
We are deeply indebted to our colleagues from the Global Health Decisions Team, University of California, San Francisco, for their help with systematically searching the literature, critical appraisal, and data extraction for the papers used as illustrative examples in this paper.
References
- 1. Egger M. Meta-analysis: principles and procedures. BMJ. 1997;315:1533–7. doi: 10.1136/bmj.315.7121.1533.
- 2. Schwartz LM, Woloshin S, Welch HG. Misunderstandings about the effects of race and sex on physicians' referrals for cardiac catheterization. N Engl J Med. 1999;341(4):279–83; discussion 286–7. doi: 10.1056/NEJM199907223410411.
- 3. Eide GE. Attributable fractions for partitioning risk and evaluating disease prevention: a practical guide. Clin Respir J. 2008;2(Suppl 1):92–103. doi: 10.1111/j.1752-699X.2008.00091.x.
- 4. Rockhill B, Newman B, Weinberg C. Use and misuse of population attributable fractions. Am J Public Health. 1998;88(1):15–19. doi: 10.2105/ajph.88.1.15.
- 5. Gefeller O, Land M, Eide GE. Averaging attributable fractions in the multifactorial situation: assumptions and interpretation. J Clin Epidemiol. 1998;51(5):437–41. doi: 10.1016/s0895-4356(98)00002-x.
- 6. Leviton A. Definitions of attributable risk. Am J Epidemiol. 1973;98(3):231. doi: 10.1093/oxfordjournals.aje.a121552.
- 7. Zhang J, Yu KF. What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes. JAMA. 1998;280(19):1690–1. doi: 10.1001/jama.280.19.1690.
- 8. van Griensven G, de Vroome E, Tielman R, Goudsmit J, de Wolf F, et al. Effect of human immunodeficiency virus (HIV) antibody knowledge on high-risk sexual behavior with steady and nonsteady sexual partners among homosexual men. Am J Epidemiol. 1989;129(3):596–603. doi: 10.1093/oxfordjournals.aje.a115172.
- 9. Coates TJ, Grinstead OA, Gregorich SE, Heilbron DC, Wolf WP, Choi K-H, et al. Efficacy of voluntary HIV-1 counselling and testing in individuals and couples in Kenya, Tanzania, and Trinidad: a randomised trial. The Voluntary HIV-1 Counseling and Testing Efficacy Study Group. Lancet. 2000;356(9224):103–12.
- 10. Cremin I, Nyamukapa C, Sherr L, Hallett T, Chawira G, Cauchemez S, et al. Patterns of self-reported behaviour change associated with receiving voluntary counselling and testing in a longitudinal study from Manicaland, Zimbabwe. AIDS Behav. 2010;14(3):708–15. doi: 10.1007/s10461-009-9592-4.
- 11. Rucker G, Schwarzer G, Carpenter J, Olkin I. Why add anything to nothing? The arcsine difference as a measure of treatment effect in meta-analysis with zero cells. Stat Med. 2009;28(5):721–38. doi: 10.1002/sim.3511.
- 12. McNutt LA, Hafner JP, Xue X. Correcting the odds ratio in cohort studies of common outcomes. JAMA. 1999;282(6):529. doi: 10.1001/jama.282.6.529.
- 13. Jewell NP. Statistics for Epidemiology. Chapter 7: Estimation and inferences for measures of association. Chapman & Hall/CRC; 2003. pp. 88–90.
- 14. Knol MJ, Vandenbroucke JP, Scott P, Egger M. What do case-control studies estimate? Survey of methods and assumptions in published case-control research. Am J Epidemiol. 2008;168(9):1073–81. doi: 10.1093/aje/kwn217.
- 15. McNutt LA, Wu C, Xue X, Hafner JP. Estimating the relative risk in cohort studies and clinical trials of common outcomes. Am J Epidemiol. 2003;157(10):940–3. doi: 10.1093/aje/kwg074.
- 16. Greenland S. Interval estimation by simulation as an alternative to and extension of confidence intervals. Int J Epidemiol. 2004;33:1389–97. doi: 10.1093/ije/dyh276.
- 17. Lehnert-Batar A, Pfahlberg A, Gefeller O. Comparison of confidence intervals for adjusted attributable risk estimates under multinomial sampling. Biom J. 2006;48(5):805–19. doi: 10.1002/bimj.200510215.
- 18. Hoel P. Introduction to Mathematical Statistics. Wiley Series in Probability and Statistics; 1984.
- 19. Hildebrandt M, Bender R, Gehrmann U, Blettner M. Calculating confidence intervals for impact numbers. BMC Med Res Methodol. 2006;6:32. doi: 10.1186/1471-2288-6-32.
- 20. Lloyd CJ. Estimating attributable response as a function of a continuous risk factor. Biometrika. 1996;83(3):563–73.
- 21. Barendregt JJ, Veerman JL. Categorical versus continuous risk factors and the calculation of potential impact fractions. J Epidemiol Community Health. 2010;64(3):209–12. doi: 10.1136/jech.2009.090274.
- 22. Bruzzi P, Green S, Byar D, Brinton L, Schairer C. Estimating the population attributable risk for multiple risk factors using case-control data. Am J Epidemiol. 1985;122(5):904–14. doi: 10.1093/oxfordjournals.aje.a114174.
- 23. Eide GE, Gefeller O. Sequential and average attributable fractions as aids in the selection of preventive strategies. J Clin Epidemiol. 1995;48(5):645–55. doi: 10.1016/0895-4356(94)00161-i.
- 24. Eide GE, Heuch I. Average attributable fractions: a coherent theory for apportioning excess risk to individual risk factors and subpopulations. Biom J. 2006;48(5):820–37. doi: 10.1002/bimj.200510228.