Published in final edited form as: Biometrics. 2017 Nov 15;74(3):801–802. doi: 10.1111/biom.12815

Rejoinder to “Quantifying Publication Bias in Meta-Analysis”

Lifeng Lin, Haitao Chu, and James S. Hodges

We thank co-editor Michael Daniels for organizing the discussion and Drs. Nancy Geller, Dan Jackson, and Christopher Schmid (henceforth NG, DJ, and CS, respectively) for their outstanding discussion of our work. All three discussants pointed out the importance of assessing publication bias in meta-analysis as well as the difficulty of doing so. Indeed, publication bias is full of uncertainties: how publication criteria lead to bias varies greatly from case to case. Studies may be suppressed from publication because their p-values are not statistically significant (Copas et al., 2013; Citkowicz and Vevea, 2017), because their effect sizes are too negative (Duval and Tweedie, 2000a), or because their sample sizes are too small (Tang and Liu, 2000), as discussed by NG and CS. Because of this variability, no method can perform well in all cases. These difficulties warrant future research exploring the proposed methods’ performance in more simulations and case studies, as suggested by all three discussants. As discussed by CS, the missing-data mechanism underlying publication bias is probably missing not at random, so conducting extensive simulations under various settings requires considerable work and is beyond the scope of this rejoinder; we believe such work could make a good contribution to the literature, similar to the simulations of Duval and Tweedie (2000b), Macaskill et al. (2001), Peters et al. (2006), and Bürkner and Doebler (2014).
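To make the missing-not-at-random mechanism concrete, the following is a minimal simulation sketch in Python. The one-sided p-value suppression rule and all numerical settings (n_studies, keep_nonsig, and so on) are our illustrative assumptions, not specifications taken from the article or the discussants.

```python
import numpy as np

def simulate_biased_meta(n_studies=50, mu=0.2, tau2=0.04,
                         keep_nonsig=0.3, seed=7):
    """Simulate a meta-analysis subject to p-value-driven suppression
    (a missing-not-at-random mechanism, for illustration only):
    nonsignificant studies are published with probability keep_nonsig."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.01, 0.25, size=n_studies)            # within-study variances
    theta = rng.normal(mu, np.sqrt(tau2), size=n_studies)  # true effects (additive heterogeneity)
    y = rng.normal(theta, np.sqrt(v))                      # observed effect sizes
    significant = y / np.sqrt(v) > 1.645                   # one-sided 5% significance
    published = significant | (rng.uniform(size=n_studies) < keep_nonsig)
    return y[published], v[published]

y, v = simulate_biased_meta()
print(f"{y.size} of 50 studies published; naive unweighted mean: {np.mean(y):.3f}")
```

Under this kind of selection, the published studies over-represent large positive effects, so the naive pooled estimate is biased upward; varying the suppression rule is exactly what makes extensive simulation studies laborious.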

In addition to these uncertainties, even the name “publication bias” is somewhat controversial. Funnel-plot-based methods tend to be favored over selection models, possibly because checking a funnel plot’s asymmetry is intuitive. However, such asymmetry may arise from causes other than publication bias (e.g., poor study quality), and publication bias is often confused with other sources of bias, such as the reporting bias mentioned by CS (Schmid, 2017). Hence, some researchers prefer to describe the problem as “small-study effects” (e.g., Harbord et al., 2006) rather than publication bias, although small-study effects may reflect only one aspect of a funnel plot’s asymmetry. Like other detection methods based on funnel-plot asymmetry, our methods cannot distinguish whether the asymmetry is due to publication bias or to other sources of bias; researchers need to draw on other evidence and examine carefully whether a funnel plot’s asymmetry truly reflects publication bias. Moreover, compared with diagnostic or corrective analysis, prevention of publication bias is more desirable (Lau et al., 2006). As suggested by CS, publication bias may be diminished by searching a wide range of sources extensively, and prospective registration of trials may also help prevent it (Rothstein et al., 2005).

As DJ noted, we intended to emphasize measuring publication bias rather than merely testing for it, because the former has seldom been considered. We did not, however, mean to discourage researchers from testing for publication bias. “Measuring” and “testing” are different but closely related concepts, and both can provide valuable information about publication bias. Measures must possess certain features; for example, they should be invariant to the scale of the treatment effects and to the number of studies in a meta-analysis. So far, most papers on publication bias have focused on hypothesis testing, which is insufficient for researchers who want to quantify how serious the publication bias is. The measures considered in our article, the regression intercept TI and the skewness TS, can help evaluate the severity of publication bias in a meta-analysis. Currently, the regression intercept TI has no intuitive cutoff points for classifying the magnitude of publication bias, and the traditional cutoff points for skewness, ±0.5 and ±1, may be fairly rough. In future work, we will explore empirical choices of cutoff points for these measures using real-world meta-analyses.
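As a rough illustration of how such a measure can be computed, the sketch below takes TS to be the sample skewness of the standardized deviates (yi − μ̂)/√(vi + τ̂²) under a random-effects model, with the DerSimonian-Laird estimate of τ²; the data and any details that differ from the exact definitions in our article are assumptions made for illustration only.

```python
import numpy as np
from scipy import stats

def dl_tau2(y, v):
    """DerSimonian-Laird method-of-moments estimate of the
    between-study variance tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (y.size - 1)) / c)

def skewness_measure(y, v):
    """Sample skewness of the standardized deviates under a
    random-effects model: an illustrative version of T_S."""
    s_tilde = np.sqrt(v + dl_tau2(y, v))
    w = 1.0 / s_tilde**2
    mu_hat = np.sum(w * y) / np.sum(w)               # random-effects pooled estimate
    d = (y - mu_hat) / s_tilde                       # standardized deviates
    return stats.skew(d, bias=False)

# Illustrative data: observed effect sizes and within-study variances.
y = np.array([0.62, 0.41, 0.53, 0.24, 0.12, 0.45, 0.68, 0.30])
v = np.array([0.10, 0.04, 0.08, 0.02, 0.01, 0.06, 0.12, 0.03])
print(f"T_S = {skewness_measure(y, v):.2f}")  # compare against the rough cutoffs +/-0.5 and +/-1
```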

On the other hand, the publication bias measures themselves can be good test statistics. If the hypothesis test based on a publication bias measure is very powerful, the measure is likely able to quantify publication bias accurately. The two measures discussed in our article, TI and TS, can directly serve as test statistics, and their performance is determined by the power of the regression test and the skewness-based test, respectively. However, not all test statistics can serve as publication bias measures; for example, the R0 statistic used in the trim-and-fill method (Duval and Tweedie, 2000a) may not be a good measure because it depends on the number of studies.
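As one hypothetical illustration of a measure doubling as a test statistic, the sketch below builds a sign-flip randomization test around the skewness of the standardized deviates. This particular test is not from our article; the argument that the deviates are roughly symmetric about zero under no publication bias is an assumption adopted here purely for illustration.

```python
import numpy as np
from scipy import stats

def skew_test_by_sign_flip(d, n_flip=10_000, seed=0):
    """Randomization test using the skewness of the standardized
    deviates d as the test statistic: assuming d is roughly symmetric
    about zero when there is no publication bias, random sign flips
    approximate the null distribution of the skewness."""
    rng = np.random.default_rng(seed)
    t_obs = stats.skew(d, bias=False)
    signs = rng.choice([-1.0, 1.0], size=(n_flip, d.size))
    t_null = stats.skew(signs * d, axis=1, bias=False)
    p = (1 + np.sum(np.abs(t_null) >= abs(t_obs))) / (n_flip + 1)
    return t_obs, p
```

Here d would be the standardized deviates computed as in the previous sketch; the returned pair is the observed skewness (the measure) and its randomization p-value (the test), illustrating how one statistic can serve both purposes.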

We are grateful for DJ’s many inspiring methodological suggestions. For example, the modified regression test based on model (1) in our article requires an estimate of the between-study variance τ², and we used the classical method-of-moments estimator (DerSimonian and Laird, 1986). However, as DJ pointed out, publication bias affects this estimator (Jackson, 2006, 2007). The performance of model (1) could be improved by employing the adjusted between-study variance obtained from the trim-and-fill method (Duval and Tweedie, 2000a). Nevertheless, even with the simple method-of-moments estimator, our simulations showed that the TI statistic based on model (1) controlled the type I error rate well and had higher power than Egger’s original regression test in the presence of noticeable heterogeneity.

The potential bias in the estimate of τ² caused by publication bias could also be remedied to some extent by the dispersion parameter σ in model (1), another of DJ’s concerns. As DJ noted, under the true random-effects model the errors in model (1) should follow the standard normal distribution, that is, have variance one. In the presence of unknown publication bias, however, the errors do not strictly follow the standard normal distribution, and the σ parameter permits dispersion of the errors, giving the regression more flexibility. The idea is similar to Egger’s original regression test. Specifically, Egger’s regression admits two interpretations: first, as a fixed-effect model with a dispersion parameter that adjusts for the effect of potential publication bias on the regression fit; second, as a model with multiplicative heterogeneity. The rationale for multiplicative heterogeneity is usually considered weak and is generally not recommended, whereas additive heterogeneity is more intuitively appealing and widely used among meta-analysts (Thompson and Sharp, 1999). We therefore prefer the former interpretation; it coincides with our model (1), which was designed under the additive random-effects setting, whereas Egger’s regression does not account for additive heterogeneity from this perspective.

Last but not least, although model (1) allows heterogeneity by modeling each study’s underlying effect as a draw from a normal random-effects distribution, in practice the collected studies may be heterogeneous with an underlying distribution that is not approximately normal. We recognize the challenge of distinguishing true publication bias from heterogeneity or subgroup effects (Sterne et al., 2011), which calls for further research and continued attention.
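For concreteness, the following sketch fits an Egger-type regression in the spirit of model (1), with the DerSimonian-Laird τ̂² plugged in and an unrestricted ordinary-least-squares error variance standing in for the dispersion parameter σ². The exact form of model (1) is given in the article, so this is an assumed approximation rather than the article’s implementation (it also relies on the intercept_stderr attribute available in SciPy 1.7 and later).

```python
import numpy as np
from scipy import stats

def dl_tau2(y, v):
    """DerSimonian-Laird method-of-moments estimate of tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (y.size - 1)) / c)

def modified_egger(y, v):
    """Egger-type regression in the spirit of model (1): regress the
    standardized effects y_i/s_i~ on the precisions 1/s_i~, where
    s_i~^2 = v_i + tau2_hat. Fitting by OLS leaves the error variance
    unrestricted, mimicking the dispersion parameter sigma; the
    intercept plays the role of T_I, tested via a t-test of H0: 0."""
    s_tilde = np.sqrt(v + dl_tau2(y, v))
    fit = stats.linregress(1.0 / s_tilde, y / s_tilde)
    t_stat = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t_stat), df=y.size - 2)
    return fit.intercept, p

# Illustrative data (same hypothetical meta-analysis as above).
y = np.array([0.62, 0.41, 0.53, 0.24, 0.12, 0.45, 0.68, 0.30])
v = np.array([0.10, 0.04, 0.08, 0.02, 0.01, 0.06, 0.12, 0.03])
ti, p = modified_egger(y, v)
print(f"T_I = {ti:.3f}, p = {p:.3f}")
```

The key contrast with Egger’s original test is the use of v_i + τ̂² rather than v_i alone in the standardization, which is what accommodates additive heterogeneity.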

References

  1. Bürkner P-C and Doebler P (2014). Testing for publication bias in diagnostic meta-analysis: a simulation study. Statistics in Medicine 33, 3061–3077.
  2. Citkowicz M and Vevea JL (2017). A parsimonious weight function for modeling publication bias. Psychological Methods 22, 28–41.
  3. Copas J, Dwan K, Kirkham J, and Williamson P (2013). A model-based correction for outcome reporting bias in meta-analysis. Biostatistics 15, 370–383.
  4. DerSimonian R and Laird N (1986). Meta-analysis in clinical trials. Controlled Clinical Trials 7, 177–188.
  5. Duval S and Tweedie R (2000a). A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association 95, 89–98.
  6. Duval S and Tweedie R (2000b). Trim and fill: a simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics 56, 455–463.
  7. Harbord RM, Egger M, and Sterne JAC (2006). A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints. Statistics in Medicine 25, 3443–3457.
  8. Jackson D (2006). The implications of publication bias for meta-analysis’ other parameter. Statistics in Medicine 25, 2911–2921.
  9. Jackson D (2007). Assessing the implications of publication bias for two popular estimates of between-study variance in meta-analysis. Biometrics 63, 187–193.
  10. Lau J, Ioannidis JPA, Terrin N, Schmid CH, and Olkin I (2006). The case of the misleading funnel plot. BMJ 333, 597–600.
  11. Macaskill P, Walter SD, and Irwig L (2001). A comparison of methods to detect publication bias in meta-analysis. Statistics in Medicine 20, 641–654.
  12. Peters JL, Sutton AJ, Jones DR, Abrams KR, and Rushton L (2006). Comparison of two methods to detect publication bias in meta-analysis. JAMA 295, 676–680.
  13. Rothstein HR, Sutton AJ, and Borenstein M (2005). Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. John Wiley & Sons, Chichester, UK.
  14. Schmid CH (2017). Outcome reporting bias: a pervasive problem in published meta-analyses. American Journal of Kidney Diseases 69, 172–174.
  15. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, Carpenter J, Rücker G, Harbord RM, Schmid CH, Tetzlaff J, Deeks JJ, Peters J, Macaskill P, Schwarzer G, Duval S, Altman DG, Moher D, and Higgins JPT (2011). Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343, d4002.
  16. Tang J-L and Liu JLY (2000). Misleading funnel plot for detection of bias in meta-analysis. Journal of Clinical Epidemiology 53, 477–484.
  17. Thompson SG and Sharp SJ (1999). Explaining heterogeneity in meta-analysis: a comparison of methods. Statistics in Medicine 18, 2693–2708.
