International Journal of Epidemiology. 2021 Oct 13;51(2):364–371. doi: 10.1093/ije/dyab218

Are Greenland, Ioannidis and Poole opposed to the Cornfield conditions? A defence of the E-value

Tyler J VanderWeele
PMCID: PMC9082787  PMID: 34643669

Introduction

The E-value1 is a relatively new metric for assessing the sensitivity or robustness of associations to potential unmeasured confounding. In a recent exchange in the International Journal of Epidemiology, a series of commentaries2–7 were written in response to a paper by Blum et al.8 That paper provided insightful descriptive analyses of the early uses of the E-value. Although the commentary invitations arose from the Blum et al. paper,8 many of the resulting commentaries were directed principally at the E-value itself; the entire set served as a tribunal of sorts on the nature and usefulness of the E-value. Several of the commentaries provided what I thought were balanced appraisals of the E-value, noting both its uses and limitations and the need often to supplement or replace the E-value with more extensive sensitivity analysis. I was very sympathetic to the positions put forward by Groenwold,3 Kaufman,4 and Fox et al.5

Other commentaries, however—namely those of Greenland,6 Poole,7 and an earlier commentary by Ioannidis et al.9—expressed scepticism concerning the E-value’s usefulness. These authors’ comments have certainly helped shape and refine my own thinking, and for this I am grateful, even though we do not always agree. However, from the statements in those commentaries, my sense is that they are being interpreted as being ‘against the E-value’. Indeed, one of the commentaries has this expression in its title.6 I cannot definitively state that these authors are opposed to the use of the E-value in all circumstances, and their own thinking on the matter may be evolving. However, their critique and scepticism seem clear. Thus, in this paper, I would like to address both their specific critiques and the broader question of the E-value’s usefulness. I will propose my counter-argument in part by comparing and contrasting the E-value with the classic Cornfield conditions.10–12 The Cornfield conditions were an early form of sensitivity analysis and were important in making the case that the smoking-lung cancer association was causal—that even in the face of potential uncontrolled confounding, the evidence was in fact definitive.10 I will argue that the points put forward by Greenland, Ioannidis et al., and Poole, if interpreted as an argument against the usefulness of the E-value, in fact also effectively entail a position that is opposed to the use of the Cornfield conditions. To abandon the use of the E-value on the grounds they suggest would require abandoning the use of the Cornfield conditions as well. To my mind, and I suspect to that of many in epidemiology, such an abandonment would be, and would have been, a mistake, and would have deprived epidemiologists of an important tool that has proved useful in causal reasoning. Like the Cornfield conditions, the E-value is one such tool. It can be misused,2,8,9,13 but its proper use can provide important insights.

The E-value and the Cornfield conditions

Let us begin by reviewing the Cornfield conditions and the E-value. Consider the context of an observed relative risk, RR > 1, between exposure A and outcome Y with a possible binary unmeasured confounder U. The Cornfield conditions can be formulated by considering two parameters: the risk ratio between U and Y, RR_UY, and that between A and U, RR_AU.10,11 The Cornfield conditions state that U cannot be entirely responsible for the observed association unless both RR_UY > RR and RR_AU > RR.10–12 The E-value uses similar parameters but generalizes them to allow for non-binary and even multivariate unmeasured confounders U.1,11 If there are multiple unmeasured confounders, the parameters may be more difficult to interpret and the confounding bias may be much larger; however, with a single binary confounder, the parameters are essentially analogous to the Cornfield parameters.11 The E-value is the minimum that the larger of the two parameters, RR_UY and RR_AU, would have to be among all values that would allow some U to be entirely responsible for the association.1,11,14 That minimum is achieved when the two parameters are equal, and is given by the formula:

E-value = RR + sqrt{RR(RR − 1)}.

The E-value is always larger than the observed RR. For an observed RR = 2, the Cornfield conditions are RR_UY ≥ 2 and RR_AU ≥ 2; the E-value of 2 + sqrt{2(2 − 1)} = 3.41 indicates that the larger of RR_UY and RR_AU must be at least 3.41. The E-value paper1 also gave a bounding factor formula11 that specified the maximum bias that could be generated by any two, possibly different, values of RR_UY and RR_AU. Let us now turn to the criticisms that Greenland, Ioannidis et al., and Poole level against the E-value.
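
These quantities are simple enough to compute directly. The following is a minimal sketch in Python (the function names are mine, for illustration); it computes the E-value for RR = 2 and checks, via the bounding factor of Ding and VanderWeele,11 that when both confounder associations equal the E-value, the maximum possible bias exactly reproduces the observed RR, i.e. is just enough to explain it away:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio RR > 1."""
    return rr + math.sqrt(rr * (rr - 1))

def bounding_factor(rr_uy, rr_au):
    """Maximum confounding bias given the two confounder associations:
    B = RR_UY * RR_AU / (RR_UY + RR_AU - 1)."""
    return (rr_uy * rr_au) / (rr_uy + rr_au - 1)

rr = 2.0
e = e_value(rr)
print(f"E-value for RR = {rr}: {e:.2f}")  # 3.41
# The Cornfield conditions require each parameter to exceed 2;
# the E-value requires the larger of the two to be at least 3.41.
# With both parameters set to the E-value, the maximum bias equals RR:
print(f"Maximum bias at ({e:.2f}, {e:.2f}): {bounding_factor(e, e):.2f}")  # 2.00
```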

One critique of Ioannidis et al.9 and Greenland6 is that the E-value is simply a function of the observed risk ratio. This is true. It is true also of the Cornfield conditions. In the case of the E-value, the transformation is non-linear, and non-trivial in that it is difficult to carry out in one’s head, but it is a direct function of the observed RR. This does not mean it is uninteresting or uninformative. Both the Cornfield conditions and the E-value effectively convert the observed risk ratio to what would be required, in terms of unmeasured confounder associations, to completely explain away the observed exposure-outcome association. To dismiss the value of the E-value on these grounds is to dismiss the value of the Cornfield conditions also, and this inference likewise pertains to the subsequent critiques considered below.

Both Greenland6 and Poole7 criticize the E-value because it can understate the unmeasured confounding associations needed to explain away an observed exposure-outcome association, i.e. it is ‘conservative’. For a real unmeasured confounder, these associations might need to be a lot larger than those indicated by the E-value.6 This is also true. The E-value effectively searches over all distributions of the confounder(s) (which need not be binary) that are consistent with the specified parameters and that maximize the bias generated. The E-value thus considers worst-case scenarios. Poole rightly points out that when the unmeasured confounder is binary, the E-value effectively presumes that the prevalence of that unmeasured confounder is 100% in one of the two exposure groups. This is true; it is the worst-case scenario in this setting. It is true also of the Cornfield conditions. In fact, the Cornfield conditions consider even more extreme scenarios in so far as, by considering the sensitivity parameters one at a time (rather than jointly, as with the E-value),11 the Cornfield conditions, when evaluating one parameter, e.g. RR_UY, effectively consider a worst-case scenario for the other, e.g. RR_AU, namely that it is essentially infinite. And yet the Cornfield conditions, in spite of considering these extreme scenarios, are still sometimes useful. The E-value both weakens the assumptions employed in the derivation of the Cornfield conditions and delivers stronger conclusions, by considering the parameters jointly.11 Both the Cornfield conditions and the E-value, by considering worst-case scenarios, can nevertheless still be useful in making the strongest possible case for evidence of a causal effect: if evidence for robustness persists even while considering worst-case scenarios, then the evidence may be very strong indeed. Contrary to Poole,7 one need not believe that the worst-case scenario is in fact attained in order to usefully employ these approaches. The worst-case scenario itself may be implausible. But if the evidence persists even in the face of such worst-case scenarios, then the argument for an effect may be compelling. Even if the worst-case scenario itself is not plausible, close approximations to it might be, or there may simply be considerable uncertainty. The Cornfield conditions and the E-value allow one, in some circumstances, to draw reasonable inferences about robustness to confounding even while considering worst-case scenarios. The E-value is not the right tool for obtaining the most accurate possible estimates, but it can sometimes be the right tool for making the strongest possible case for the presence of a causal effect. It does so precisely by considering worst-case scenarios.
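
To make the one-at-a-time point concrete, consider the bounding factor of Ding and VanderWeele,11 for which letting one parameter grow without bound recovers the corresponding Cornfield condition:

\[
B \;=\; \frac{RR_{UY}\, RR_{AU}}{RR_{UY} + RR_{AU} - 1},
\qquad
\lim_{RR_{AU}\to\infty} B \;=\; RR_{UY}.
\]

Requiring only that the bias can reach RR while leaving RR_AU unrestricted thus yields exactly the Cornfield condition RR_UY ≥ RR, whereas the E-value requires the pair (RR_UY, RR_AU) to satisfy B ≥ RR jointly.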

Poole7 and Greenland6 criticize the E-value because it does not incorporate information that may be available about a known uncontrolled confounder. This is true. It is true also of the Cornfield conditions. If an investigator has knowledge of an unmeasured confounder, such as its prevalence, that information can be employed by using other sensitivity analysis techniques.12,15–17 However, in considering the possibility of an unknown unmeasured confounder, such information will not be available and the Cornfield conditions and E-value may then be especially useful. Even when the unmeasured confounder is known, if a compelling case can be made for robustness to confounding by using the E-value or Cornfield conditions without that additional knowledge, then the case is arguably stronger still. If a compelling case cannot be made, then other techniques making use of additional information become even more important, as will often be the case if the effect estimate is small or has substantial uncertainty.

Greenland6 further argues that the E-value has the potential to reinforce the cognitive tendency to consider only biases away from the null. Relatedly, Poole7 claims that if the actual direction of the bias induced by the confounding is towards the null, then the E-value is uninformative. Both claims are in some sense true, and true of the Cornfield conditions as well. However, there are analogues both of the Cornfield conditions and of E-values that can be employed for biases towards the null.1,11 For example, one can assess how strong the unmeasured confounding associations would need to be, at a minimum, to shift an observed RR = 1.0 to a true risk ratio of 1.5.1,11 These points therefore do not seem to be adequate grounds for abandoning the approach.
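
As a minimal sketch of that non-null calculation (assuming Python, and applying the E-value formula to the ratio of the hypothesized true risk ratio to the observed one, in line with the technical references1,11):

```python
import math

def e_value(rr):
    """E-value for a ratio rr > 1."""
    return rr + math.sqrt(rr * (rr - 1))

# Minimum joint confounder associations needed to shift an observed
# RR = 1.0 up to a true risk ratio of 1.5: apply the formula to the
# ratio of the two risk ratios, here 1.5 / 1.0.
print(f"{e_value(1.5 / 1.0):.2f}")  # ~2.37
```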

Thus, although the aforementioned points by Greenland, Ioannidis et al., and Poole are correct, I believe they are inadequate grounds for dismissing the usefulness of E-values. To dismiss the utility of the E-value on these grounds is to dismiss the value of the Cornfield conditions as well. Some of these commentators may in fact favour discarding both; it would not be logically inconsistent to do so. However, in both cases, I believe there would be real loss to epidemiology, since both tools can sometimes be quite useful.

Addressing specific criticisms

In this section, I will address additional specific critiques concerning the interpretation of the E-value. In the material discussed above, I agreed with the points made by Greenland, Ioannidis et al., and Poole, but did not think that those points justified a conclusion against using E-values. However, other criticisms that have been put forward I believe involve misinterpretation or require some form of clarification, and it is these that I would like to address now. With regard to Greenland’s remarks,6 there is little with which I disagree other than his ultimate conclusion, as reflected in his commentary title, not to use E-values. The additional critiques of Ioannidis et al. were put forward in an earlier paper9 rather than in the IJE exchange,2–8 and I have addressed them elsewhere13 and so will not repeat those responses. I will thus turn to those of Poole.7

Recommendations for sensitivity analysis

Poole7 has complained that I and co-authors have advocated that the E-value become ‘standard practice’ and be ‘reported routinely’. The statements Poole cites are, however, taken out of context and are not quoted in their entirety. The statement in the abstract of the original E-value paper1 reads (italics added for emphasis): ‘The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported *or some other sensitivity analysis be used*’.1 Likewise, the statement that begins the discussion section of that paper reads, ‘We propose that all observational studies that assess causality (that is, are not strictly about description or predictive or prognostic modeling) report the E-value for the estimate and the CI or use some other sensitivity analysis technique’. As noted above, the E-value is not the right tool for all contexts. A more thorough sensitivity analysis will often be desirable, especially when the E-value does not clearly demonstrate robustness, but that does not invalidate its usefulness in some settings, and it is not being promoted as a replacement for all other sensitivity analyses.

The ‘sufficiency’ of E-values

Poole7 states that the E-value has been promoted as a sufficient alternative to more extensive sensitivity analysis. This language of ‘sufficiency’ is not language I have used. I am not sure what sort of ‘sufficiency’ is in view. As with, say, assessing model mis-specification, so also with sensitivity analysis for unmeasured confounding, one can do more, or less, or nothing at all, to address these issues. A more extensive sensitivity analysis, or a more extensive assessment of model mis-specification, will in principle always be more informative, provided it is understood by its readers. However, regardless of how far one goes, one could always in principle still do more. A more thorough analysis will yield a more thorough assessment of the evidence and thereby allow, if the evidence consistently points in the same direction, for a stronger argument. But it is not at all clear to me that there is always a point at which such an assessment is ‘sufficient’. How much effort (and space in a paper) is devoted to such assessments depends in part on how important the question under consideration is, how good the data are, to what extent this particular bias is likely the dominant one (versus, say, measurement error or selection bias), to what extent journal editors are willing to devote space to such assessments (though Online Supplements help with this issue), how much time the investigators have relative to the question’s importance, and various other considerations.

The E-value was proposed as a very basic form of sensitivity analysis that is easy to implement. It was introduced because it seemed better to report it than to report nothing, which was often what was taking place with regard to assessments of unmeasured confounding.8 The intent was not to propose the E-value as always being ‘sufficient’. As noted above and elsewhere,2,13,18 I think a more extensive sensitivity analysis is often desirable. I would not say ‘always desirable’ because, once again, this will depend on the context, the importance of the question, what else is at stake, and whether the E-value, by considering worst-case scenarios, might on its own be sufficient for a particular purpose in a particular study. When it does not clearly indicate robustness, for example because of a small effect size, other techniques may be especially valuable: by incorporating additional information, a compelling case for robustness may still be made, or it may become clear that only modest confounding could explain the effect away. I have made no proposal that the E-value is always sufficient for all purposes in all studies. The proposal in the original E-value paper (that ‘the E-value be reported or some other sensitivity analysis be used’) was for a change in practice—to try to make things at least somewhat better; it was not a proposal about what constitutes ‘sufficiency’. Again, that will depend on context.

The nature and value of bounds

Poole7 states that the E-value assumes a prevalence of 100% for the unmeasured confounder in the exposure group. This is incorrect. The very specification of a prevalence requires a binary unmeasured confounder, yet neither the E-value nor the associated bounding factor formula makes this assumption: both allow for a categorical or even multivariate unmeasured confounder. Moreover, even for a binary unmeasured confounder, the definition of the E-value does not assume a prevalence of 100%. The E-value and the associated bounding factor formula are defined in terms of the maximum bias that is possible over all possible distributions of the unmeasured confounder(s) U consistent with the parameters specified.1,11,14 As noted above, the E-value does consider ‘worst-case’ scenarios; however, it does not assume that these in fact correspond to reality. Bounds and worst-case scenarios can be useful to consider because, if the evidence is robust even in such worst-case scenarios, then one also knows it will be robust in more moderate scenarios.
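
A small numerical check may make this ‘maximum over distributions’ point concrete (a sketch under simplifying assumptions, namely a single binary U with no effect modification; this is not the authors’ own code). For such a U with RR_UY = γ and prevalences p1 and p0 in the exposed and unexposed groups (so that RR_AU = p1/p0), the classical confounding bias factor is (p1(γ − 1) + 1)/(p0(γ − 1) + 1). Sweeping over prevalences shows that the bias is maximized when p1 reaches 100%, at which point it equals the bounding factor:

```python
gamma, k = 3.41, 3.41  # RR_UY and RR_AU, here both set to the E-value for RR = 2
max_bias = 0.0
for i in range(1, 1001):
    p1 = i / 1000        # prevalence of U among the exposed
    p0 = p1 / k          # prevalence among the unexposed, fixing RR_AU = k
    bias = (p1 * (gamma - 1) + 1) / (p0 * (gamma - 1) + 1)
    max_bias = max(max_bias, bias)
print(f"maximum bias over prevalences: {max_bias:.2f}")       # attained at p1 = 1.0
print(f"bounding factor: {gamma * k / (gamma + k - 1):.2f}")  # ~2.00, i.e. RR
```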

Of course, if the evidence is not robust under worst-case scenarios, then consideration of what might be plausible or ‘best-guess’ scenarios will be especially important, though in such circumstances there can be a temptation for investigators to downplay the magnitude of actual biases (or to exaggerate them, if trying to argue against an effect). Poole states that the developers of the E-value recommend relaxing the worst-case assumptions ‘only when’ they are known to be false. But this is incorrect. We simply cautioned against the use of the E-value when an unmeasured confounder was known to have low prevalence; there was no indication that this was the only scenario in which one might not want to consider worst-case scenarios. Whether one would want to consider worst-case scenarios depends in part on the purpose of the sensitivity analysis. If the purpose is to provide the strongest possible argument that at least some of the associations are not explainable by unmeasured confounding, then considering worst-case scenarios can sometimes be very helpful. If the purpose is to attain the best possible assessment of the actual causal effect magnitude (as might be desired, for example, when assessing cost-effectiveness or public health impact), then worst-case scenarios are not helpful; one instead wants to employ assumptions and parameters that one hopes are as close to the truth as possible.12,15–17,19

Inverse associations

Poole7 rightly points out that the language used for the exposure in the original E-value paper for a protective association was too loose. When interpreting the boiler-plate language used for the E-value,1 the ‘exposure’ under consideration should always be labelled as the risk-increasing exposure. In the context of the breastfeeding examples, the risk-increasing ‘exposure’ would be ‘absence of breastfeeding’. Whereas expressions in the original E-value paper such as ‘an unmeasured confounder associated with childhood leukaemia and breastfeeding by a risk ratio of 1.4-fold…’ might still be interpreted in a way consistent with the precise definition of the parameters, the language was admittedly too loose and was subsequently interpreted by others in a manner not consistent with the precise interpretation, namely that of the unmeasured confounder increasing the absence of breastfeeding by a factor of 1.4. I certainly take responsibility for this overly ambiguous language in the original E-value paper and regret the misinterpretation it may have caused, and may yet still cause.

Responsibility for correct use

Poole7 states that, ‘It is accepted in the product liability literature that harms arising from reasonably foreseeable uses should be ascribed to the products’ developers, not to their users’, and he explicitly references both automobiles and methodological tools. I doubt that this position, as stated, is in fact accepted in the product liability literature, or by Poole himself. The use of an automobile by a drunk driver is a reasonably foreseeable use. It would be odd to ascribe harms from that use to the product’s developers, rather than to drunk drivers. Some further qualification is surely necessary. I do think it is important that the developers of a product consider the possible misuses and harms that may arise from it, do what is possible to protect against misuses, and weigh the good and the harm that may result from its use. In the original E-value paper,1 we worked closely with the journal’s editors to try to foresee misuses and misinterpretations. The paper also benefited from comments by James Robins and Sander Greenland, the latter of whom at least, as can be seen,6 did not wholly approve of the product’s release, but whose comments were thus perhaps especially valuable in trying to foresee misuses. The paper and exposition are, as noted above, inevitably still imperfect, and I take responsibility for those imperfections and the misinterpretations that may have resulted from them. As Poole notes, the E-value’s post-publication peer review continues,7 and I am grateful for a number of his comments that have helped refine and clarify its use and interpretation. However, just as with automobiles, it seems unreasonable to ascribe all misuses to the ‘product’s developers’. The paper by Blum et al.8 provides a useful service to the field in documenting some of those misuses. In response to that report, I and co-authors have tried to supply some initial principles concerning best practices for reporting E-values.2 Such guidelines will inevitably benefit from further refinement and improvement, but we hope they will help prevent some of the misuses. However, as also noted by Kaufman,4 it seems unreasonable to attribute all misuses of the E-value to its developers.

Proposed language and interpretation

In personal correspondence and public debate, Poole has argued that the sentence proposed as an interpretation of the E-value1 for use in research reports—namely, ‘The observed risk ratio of 3.9 could be explained away by an unmeasured confounder that was associated with both the treatment and the outcome by a risk ratio of 7.2-fold each, above and beyond the measured confounders, but weaker confounding could not do so’1—is not adequate, because the magnitude of ‘confounding’ needed is of course precisely the magnitude of the observed risk ratio itself. The Annals of Internal Medicine paper1 did specify that ‘The strength of an unmeasured confounder here is understood to be the maximum bias that could be generated in the bias formula for B given the confounder associations’.1 However, in light of Poole’s point, the boiler-plate text could be made more precise by instead using, for instance, ‘With an observed risk ratio of RR = 3.9, an unmeasured confounder that was associated with both the outcome and the exposure by risk ratios of 7.2-fold each, conditional on the measured confounders, could explain away the estimate, but weaker joint unmeasured confounder associations could not’, where the joint strength of confounder associations is again assessed by the maximum bounding factor formula.1,11 This formulation has the advantage of referring directly to the unmeasured confounder associations, rather than to the resulting unmeasured confounding.
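
(For reference, the 7.2 in this boiler-plate text follows directly from the E-value formula above: 3.9 + sqrt{3.9 × (3.9 − 1)} = 3.9 + sqrt{11.31} ≈ 3.9 + 3.36 = 7.26, reported as 7.2-fold.)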

The use of such language in the interpretation of the E-value was always intended to point the reader back towards the full bounding factor formula, for consideration of all possible values of the two parameters that might jointly suffice to explain away an association. The software tools provided for E-values20,21 likewise plot the full curve of these joint values, which can similarly be included in papers when space, and possibly competing reporting priorities, allow. Such curves effectively also include the Cornfield conditions for each parameter, as the limit of one parameter as the other tends towards infinity.11 Such curves moreover include the E-value, which is the point on the curve that minimizes the maximum of the two parameters across all parameter values that suffice to explain away the observed risk ratio;14 this occurs when the two are equal. The E-value might thus be described in words, as in the original paper,1 as ‘the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates’.1 But again, the proposed boiler-plate text for reporting (possibly modified, as above, in light of Poole’s point) was always intended to bring to mind the fuller bounding factor curve, even when all that was being reported was the boiler-plate text itself.
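
The curve itself is easy to trace. A minimal sketch (assuming Python; the function name is mine, for illustration) solves the bounding factor equation B = RR for RR_AU given RR_UY > RR, illustrating both features noted above: the curve passes through the point where the two parameters equal the E-value, and it approaches RR_AU = RR as RR_UY grows, which is the Cornfield limit:

```python
import math

def rr_au_needed(rr, rr_uy):
    """Smallest RR_AU that, together with RR_UY, can fully explain away
    the observed RR (solving RR = RR_UY*RR_AU/(RR_UY + RR_AU - 1));
    valid for RR_UY > RR."""
    return rr * (rr_uy - 1) / (rr_uy - rr)

rr = 3.9
e = rr + math.sqrt(rr * (rr - 1))  # E-value, ~7.26
for x in (4.5, 6.0, e, 10.0, 50.0, 1000.0):
    print(f"RR_UY = {x:8.2f} -> RR_AU needed = {rr_au_needed(rr, x):6.2f}")
# The output passes through (7.26, 7.26) and tends to 3.9 as RR_UY grows.
```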

The actual practice of sensitivity analysis

Poole,7 in contrast to my reported teaching experience prior to the introduction of the E-value,13 claims to have had considerable success over decades at having students take up sensitivity analysis. Here, we were perhaps talking past one another. Poole noted in personal correspondence that he was referring to all forms of sensitivity analysis, rather than just unmeasured confounding. Perhaps more importantly, he was referring to students’ use of sensitivity analysis while they were completing their dissertation. My previous statements concerning dismay over the students’ lack of uptake of sensitivity analysis concerned ‘their subsequent research’,13 i.e. including after they had completed their dissertation. I believe the use of sensitivity analysis is much rarer post-dissertation and that the general lack of use is reflected in citation counts of sensitivity analysis methods papers.

Unfortunately, most sensitivity analysis papers are not widely cited; they are not widely cited because the techniques are not widely used. The three primary sensitivity analysis papers12,22,23 I had previously used in teaching (and in fact continue to teach, now alongside the E-value) do not have that many citations, especially considering their age. In spite of my teaching this material to perhaps over a thousand students, the combined citation count of these three papers in Google Scholar is only slightly more than 1000, over a combined total of 76 years since their publication. Moreover, I know that my own teaching is responsible for only a small fraction of these citations, since many others teach from these papers as well. The ratios here of use per exposure are thus, unfortunately, not very good. If Poole7 has had considerable success in achieving consistent post-dissertation use of unmeasured confounding sensitivity analysis techniques, it seems that there should be at least some very well cited sensitivity analysis papers. Unfortunately, I do not think they exist in our discipline. Sensitivity analysis, alas, has simply not been used all that much in practice. I think one of the central reasons for the lack of uptake of sensitivity analysis techniques has been the relative difficulty of implementation, reporting, and interpretation of many of the existing techniques.

The E-value, because of its ease of use, may prove to be the exception. As an illustration of this point: on 15 February 2021 (at the time of writing, precisely 3.5 years after the publication of the original E-value paper1), Google Scholar reported the paper as already having 1044 citations. In comparison, Rosenbaum and Rubin’s classic sensitivity analysis paper24 from 1983, from which I myself learned sensitivity analysis, had fewer citations: a total of 998 over its 38 years. The comparison of citation counts per year is in no way intended to suggest that one paper is superior. Rather, I think all that these citation counts indicate is a rapid adoption of the E-value, arguably owing to its ease of use. This was indeed why the measure was introduced: to have an easy-to-use technique and to leave researchers without an excuse for not carrying out at least a crude form of sensitivity analysis (one that is admittedly conservative, but useful in some contexts for that very reason). The intent was not to replace other forms of sensitivity analysis but to try to ensure that at least something is done. It seems, based on citation counts, that it has been at least partially successful in this regard.

Concerning issues related to ease of use, the philosopher and mathematician Alfred North Whitehead comments: ‘It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle—they are strictly limited in number, they require fresh horses and must only be made at decisive moments’.25

I do believe that proper interpretation of evidence from observational studies will almost always require careful thinking. However, I also think that civilization and science advance when the inputs that go into that evaluation of evidence can be produced without too much mental exertion. I think it is no bad thing that an epidemiologist can fit a logistic regression model without having detailed knowledge of, or having to implement by hand, the Newton-Raphson method. To my mind, the sensitivity analysis community has partially failed to serve the broader epidemiological community by not providing sufficiently simple tools that would allow for widespread use. The output of such tools still needs to be carefully interpreted and then integrated with the other aspects of the evidence at hand. The E-value in some sense shifts the careful thinking required from the implementation of the bias analysis to its interpretation. However, I think the lack of previous uptake of sensitivity analysis is an indication that sufficiently simple and automated tools have not been available.

The appropriate tool will of course vary by context, and no tool is always appropriate. A logistic regression with main effects for each covariate is not always the right tool when investigating a binary outcome. As noted above, the E-value will not always be the right tool for sensitivity analysis. But both logistic regression and E-values are straightforward to implement, and both are of use, at least in some contexts. When accompanied by a proper understanding of their interpretation, uses, and limitations, they can be valuable.1,2,13 The field would undoubtedly benefit from other effectively automated and easy-to-use sensitivity analysis techniques for other contexts, both concerning unmeasured confounding and also for other biases,26–31 but until those are available, uptake of the techniques will likely be restricted to those with sufficient energy for cavalry charges.

Conclusion

In conclusion, the E-value and the Cornfield conditions are simply tools. They are not tools that anyone needs to use. There are many other useful tools as well, and further easy-to-use resources perhaps need to be developed. As noted above, in some contexts the Cornfield conditions and the E-value will not be sufficiently informative, and more nuanced techniques incorporating additional information will be important. Nevertheless, many have found that the Cornfield conditions provide a helpful and intuitive tool for sometimes ruling out the possibility that confounding might explain away an observed exposure-outcome association. They were helpful in this manner in the smoking-lung cancer debate.10,32 Based on the extent of the use of the E-value, it seems that many have likewise found it helpful for these purposes. Somewhat remarkably, the E-value, and the accompanying bounding factor, relax all of the assumptions of the Cornfield conditions and yet still effectively deliver stronger conclusions. One can of course choose not to use these tools. However, to dismiss the usefulness of the E-value wholesale effectively entails dismissing the usefulness of the Cornfield conditions as well.

Acknowledgements

I thank Charles Poole and Sander Greenland for helpful comments on an earlier draft of this paper.

Conflict of Interest

None declared.

Funding

This research was funded by NIH grant R01CA222147.

References

  • 1. VanderWeele TJ, Ding P.  Sensitivity analysis in observational research: introducing the E-value. Ann Intern Med  2017;167:268–74. [DOI] [PubMed] [Google Scholar]
  • 2. VanderWeele TJ, Mathur MB.  Commentary: Developing best-practice guidelines for the reporting of E-values. Int J Epidemiol  2020;49:1495–97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Groenwold RH.  Commentary: Quantifying the unknown unknowns. Int J Epidemiol  2020;49:1503–05. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Kaufman JS.  Commentary: Cynical epidemiology. Int J Epidemiol  2020;49:1507–08. [DOI] [PubMed] [Google Scholar]
  • 5. Fox MP, Arah OA, Stuart EA.  Commentary: The value of E-values and why they are not enough. Int J Epidemiol  2020;49:1505–06. [DOI] [PubMed] [Google Scholar]
  • 6. Greenland S.  Commentary: An argument against E-values for assessing the plausibility that an association could be explained away by residual confounding. Int J Epidemiol  2020;49:1501–03. [DOI] [PubMed] [Google Scholar]
  • 7. Poole C.  Commentary: Continuing the E-value’s post-publication peer review. Int J Epidemiol  2020;49:1497–500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Blum MR, Tan YJ, Ioannidis JP.  Use of E-values for addressing confounding in observational studies—an empirical assessment of the literature. Int J Epidemiol  2020;49:1482–94. [DOI] [PubMed] [Google Scholar]
  • 9. Ioannidis JP, Tan YJ, Blum MR.  Limitations and misinterpretations of E-values for sensitivity analyses of observational studies. Ann Intern Med  2019;170:108–11. [DOI] [PubMed] [Google Scholar]
  • 10. Cornfield J, Haenszel W, Hammond EC, Lilienfeld AM, Shimkin MB, Wynder EL.  Smoking and lung cancer: recent evidence and a discussion of some questions. J Natl Cancer Inst  1959;22:173–203. [PubMed] [Google Scholar]
  • 11. Ding P, VanderWeele TJ.  Sensitivity analysis without assumptions. Epidemiology  2016;27:368–77. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Schlesselman JJ.  Assessing effects of confounding variables. Am J Epidemiol  1978;108:3–8. [PubMed] [Google Scholar]
  • 13. VanderWeele TJ, Mathur MB, Ding P.  Correcting misinterpretations of the E-value. Ann Intern Med  2019;170:131–32. [DOI] [PubMed] [Google Scholar]
  • 14. VanderWeele TJ, Ding P, Mathur M.  Technical considerations in the use of the E-value. J Causal Inference  2019;7:1–11. [Google Scholar]
  • 15. Lash TL, VanderWeele TJ, Haneuse S, Rothman KJ.  Modern Epidemiology. 4th edn. Philadelphia, PA: Lippincott Williams & Wilkins, 2021. [Google Scholar]
  • 16. Lash TL, Fox MP, Fink AK.  Applying Quantitative Bias Analysis to Epidemiologic Data. Berlin: Springer Science & Business Media, 2011. [Google Scholar]
  • 17. MacLehose RF, Ahern T, Lash TL, Poole C, Greenland S.  The importance of making assumptions in bias analysis. Epidemiology  2021;32:617–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. VanderWeele TJ, Martin JN, Mathur MB.  E values and incidence density sampling. Epidemiology  2020;31:e51–52. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Greenland S.  Dealing with the inevitable deficiencies of bias analysis – and all analyses. Am J Epidemiol  2021;190:1617–21. [DOI] [PubMed] [Google Scholar]
  • 20. Mathur MB, Ding P, Riddell CA, VanderWeele TJ.  Website and R package for computing E-values. Epidemiology  2018;29:e45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Linden A, Mathur MB, VanderWeele TJ.  Conducting sensitivity analysis for unmeasured confounding in observational studies using E-values: the evalue package. Stata J  2020;20:162–75. [Google Scholar]
  • 22. Lin DY, Psaty BM, Kronmal RA.  Assessing the sensitivity of regression results to unmeasured confounders in observational studies. Biometrics  1998;54:948–63. [PubMed] [Google Scholar]
  • 23. VanderWeele TJ, Arah OA.  Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders. Epidemiology  2011;22:42–52. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Rosenbaum PR, Rubin DB.  Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. J R Statist Soc Ser B (Methodol)  1983;45:212–18. [Google Scholar]
  • 25. Whitehead AN.  An Introduction to Mathematics. London: H. Holt & Co., 1911. [Google Scholar]
  • 26. Smith LH, VanderWeele TJ.  Mediational E-values: approximate sensitivity analysis for mediator-outcome confounding. Epidemiology  2019;30:835–37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. VanderWeele TJ, Li Y.  Simple sensitivity analysis for differential measurement error. Am J Epidemiol  2019;188:1823–29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Smith LH, VanderWeele TJ.  Bounding bias due to selection. Epidemiology  2019;30:509–16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Mathur MB, VanderWeele TJ.  Sensitivity analysis for unmeasured confounding in meta-analyses. J Am Stat Assoc  2020;115:163–70. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Mathur MB, VanderWeele TJ.  Sensitivity analysis for publication bias in meta-analyses. J R Stat Soc Ser C  2020;69:1091–119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Smith LH, Mathur M, VanderWeele TJ.  Multiple-bias sensitivity analysis using bounds. Epidemiology  2021;32:625–34. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Morabia A.  Has epidemiology become infatuated with methods? A historical perspective on the place of methods during the classical (1945–1965) phase of epidemiology. Annu Rev Public Health  2015;36:69–88. [DOI] [PubMed] [Google Scholar]
