The letters to the editor inspired by “Causal analysis of existing databases: no power calculations required” are heartening because none of them questioned the possibility of making causal inferences from observational databases. Increasingly, researchers are erasing one of the blights that mainstream statistics brought to science: labeling causal inference from non-experimental data as an invalid scientific goal. Another blight is the overemphasis on statistical significance and power. My commentary dealt with the latter.
Campbell et al. were not swayed by my arguments against statistical power as a criterion to decide which causal analyses are worth pursuing with existing databases [1]. Their letter repeatedly refers to power, as in “a study […] would have a statistical power of 37%”. Because “a power of 37%” is shorthand for “a power of 37% to detect a non-null effect”, Campbell et al. effectively disagree with a key point of my commentary: the primary goal of causal analyses is not to detect a non-null causal effect but to quantify it as unbiasedly and precisely as possible. Not surprisingly, Campbell et al. prefer talking about underpowered studies rather than about studies with imprecise estimates.
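To make the distinction concrete, here is a minimal sketch with hypothetical numbers (not taken from Campbell et al. or any study under discussion) that computes both quantities for a two-sample comparison of means, using a standard normal approximation with known standard deviation:

```python
from scipy.stats import norm

# Hypothetical inputs, chosen purely for illustration
n_per_arm = 50   # participants per arm
sigma = 1.0      # known outcome standard deviation
delta = 0.3      # assumed true mean difference
alpha = 0.05

se = sigma * (2 / n_per_arm) ** 0.5   # standard error of the difference
z = norm.ppf(1 - alpha / 2)           # two-sided critical value

# "Power of X%" answers: how often would we reject the null if delta were true?
power = norm.cdf(delta / se - z) + norm.cdf(-delta / se - z)

# Precision answers a different question: how wide is the interval we report?
ci_half_width = z * se

print(f"Power to detect delta={delta}: {power:.0%}")
print(f"95% CI half-width around the estimate: ±{ci_half_width:.2f}")
```

Note that the confidence interval half-width is the same whether or not the true effect is null: power depends on an assumed effect size, whereas precision describes the estimate that the study will actually report.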
It is tempting to argue that this fundamental disagreement about the goals of scientific research underlies Campbell et al.’s reaction to my proposal. However, the disagreement does not fully explain how their rejection of my proposal fits with their concern (which I share) about small studies being “a cause of distrust in science because their results are selectively reported.” My proposal—publishing all effect estimates regardless of sample size—is precisely a way to fight selective reporting by ensuring that all evidence becomes accessible.
Campbell et al. also say that the hypothetical example used in my commentary is too simplistic. I agree: I designed an admittedly extreme example to quickly convey a conceptual point while adhering to strict editorial constraints on word count. A less extreme example would have conveyed the same concept in a more nuanced way, but it would have required more journal space. Helpfully, the authors of two other letters used their allocated journal space to elaborate on the challenges of combining extremely sparse data. Morris and van Smeden describe the difficulty of carrying out meaningful meta-analyses when investigators fail to provide uncertainty measures, use incorrect statistical methods, or target different effect measures; Mansournia also reminds us that between-study heterogeneity in design and analysis must be taken seriously.
I thank Morris, van Smeden, and Mansournia for their thoughtful comments [2,3]. While some issues raised by these authors can be addressed by existing meta-analysis guidelines, settings with extremely imprecise effect estimates would benefit from additional guidance. Ideally, investigators would also coordinate research efforts through a master protocol, as discussed in my commentary.
The good news is that these practical problems are not only solvable but also good problems to have: they arise only when imprecise estimates are made available, which gives us a shot at answering important causal questions. The alternative, avoiding these problems altogether, is to withhold the publication of imprecise, but potentially helpful, effect estimates.
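To see why making imprecise estimates available is worth the trouble, consider a minimal sketch (with invented numbers, purely for illustration) of fixed-effect inverse-variance pooling: several estimates that are individually too imprecise to be informative can yield a usefully precise pooled estimate. A real meta-analysis would, of course, also have to confront the heterogeneity issues that Morris, van Smeden, and Mansournia raise.

```python
import math

# Hypothetical effect estimates (e.g., log risk ratios) and their standard
# errors from five small studies; each is too imprecise to be informative alone.
estimates = [0.40, -0.10, 0.55, 0.20, 0.35]
ses = [0.50, 0.60, 0.55, 0.45, 0.50]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for se in ses]
pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled estimate: {pooled:.2f} (SE {pooled_se:.2f})")
for est, se in zip(estimates, ses):
    print(f"  single study: {est:+.2f} (SE {se:.2f})")
```

With these invented inputs, the pooled standard error is roughly half the smallest single-study standard error, which is exactly the payoff that is forfeited when imprecise estimates go unpublished.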
References
- 1. Campbell H, et al. A few things to consider when deciding whether or not to conduct underpowered research. J Clin Epidemiol. In press. doi:10.1016/j.jclinepi.2021.11.038.
- 2. Morris et al. Causal analyses of existing databases: the importance of understanding what can be achieved with your data before analysis (commentary on Hernán). J Clin Epidemiol. In press. doi:10.1016/j.jclinepi.2021.09.026.
- 3. Mansournia M, et al. Sample size considerations are needed for the causal analyses of existing databases. J Clin Epidemiol. In press. doi:10.1016/j.jclinepi.2021.09.024.
