JAMA. 2019 May 14;321(18):1825-1826. doi:10.1001/jama.2019.2994

Postpublication Metrics of Randomized Clinical Trials With and Without Null Findings

Stuart B Murray 1, James A Heathers 2, Rebecca M Schauer 1, Scott Griffiths 3, Deborah Mitchison 4, Jonathan M Mond 5, Jason M Nagata 5
PMCID: PMC6518340  PMID: 31087013

Abstract

This study examined the association between randomized clinical trial (RCT) findings supporting or rejecting the trials’ experimental hypotheses and postpublication metrics reflecting scientific and public interest; namely, citations, Altmetric scores, and views.


Publication bias can arise from investigators not submitting studies with outcomes that do not support their hypotheses1 or from journals selectively publishing studies in which the results are statistically significant.2 Publication bias may arise from the perception that nonsignificant findings will garner less scientific or public attention than findings that confirm study hypotheses. However, whether this perception is accurate is unknown. Thus, we investigated the association between whether a study supported or rejected the null hypothesis and postpublication metrics reflecting scientific and public interest.

Methods

We manually searched each issue of 10 JAMA Network journals (impact factors ranged from 2.39 to 47.66 in 2018) published between January 1, 2013, and December 31, 2015, screening abstracts and full articles to identify all published randomized clinical trials (RCTs). A secondary search involved a PubMed search for all RCTs in JAMA Network journals within this same period, although this secondary search revealed no new trials. This period was selected to allow sufficient time for citations and broader interest to accumulate. The postpublication metrics of interest were citations, Altmetric scores, and views, to date, which were selected based on the proposed criteria for a “high-impact” article.3 All postpublication metrics were recorded from the relevant journal website from December 7, 2018, to December 14, 2018.
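
The study team recorded all metrics manually from the journal websites. As an aside for readers who want to collect Altmetric scores programmatically, a minimal sketch using Altmetric's public details API is shown below; the DOI used is this article's own, and the endpoint, response fields, and any rate limits or key requirements are assumptions to verify against Altmetric's current documentation.

```python
# Illustrative sketch only (not part of the study's methods): fetch the
# Altmetric score for a DOI via Altmetric's public details API. Verify the
# endpoint and response fields against current Altmetric documentation.
import requests


def altmetric_score(doi: str):
    """Return the Altmetric score for a DOI, or None if Altmetric has no record."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # DOI not tracked by Altmetric
        return None
    resp.raise_for_status()
    return resp.json().get("score")


if __name__ == "__main__":
    # Example: the DOI of this research letter.
    print(altmetric_score("10.1001/jama.2019.2994"))
```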

Trials were independently assessed by 2 investigators (S.B.M. and R.M.S.). Full manuscripts and trial registrations (if needed) were consulted to determine if primary outcomes supported the experimental hypotheses (ie, rejected the null hypothesis), supported the null hypotheses, or both (ie, mixed). Noninferiority trials that demonstrated noninferiority were coded as rejecting the null hypothesis and noninferiority trials that failed to demonstrate noninferiority were coded as supporting the null hypothesis.
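
Agreement between the 2 raters on this 3-category coding is reported in the next paragraph as Cohen's κ; as an illustration only, a minimal sketch of that calculation using scikit-learn, with hypothetical codings, might look like this:

```python
# Minimal sketch, not the authors' code: Cohen's kappa for two raters who
# independently coded each trial as rejecting the null hypothesis, supporting
# the null hypothesis, or mixed. The codings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["rejected", "supported", "mixed", "rejected", "supported", "rejected"]
rater_2 = ["rejected", "supported", "mixed", "rejected", "rejected", "rejected"]

print(f"Cohen's kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
```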

Interrater reliability was excellent (κ = 0.96), with discrepancies (n = 9) resolved via discussion and direct communication with authors. Citations, Altmetric scores, and views were strongly right-skewed and were neither reliably normal nor log normal; thus, they were analyzed with the Kruskal-Wallis H test for comparing distributions by publication type (supporting the null hypothesis, rejecting the null hypothesis, or mixed results) and by year of publication (2013, 2014, or 2015), and with the Dunn test for pairwise comparisons. Proportions were compared using a χ2 test of independence. A 2-sided P < .05 was the threshold for statistical significance. All analyses were conducted in GraphPad Prism 8 (GraphPad Software).
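
As an illustration only, the sketch below reproduces the same style of analysis in Python on synthetic, right-skewed placeholder data, using scipy for the Kruskal-Wallis H and χ2 tests and the scikit-posthocs package for the Dunn pairwise comparisons. Group sizes mirror the study, but the values, the lognormal data-generating choice, and the Bonferroni adjustment for the Dunn test are assumptions made for the sketch.

```python
# Illustrative sketch with synthetic data (the study used GraphPad Prism 8):
# Kruskal-Wallis H test across the three result groups, Dunn test for
# pairwise comparisons, and a chi-square test of independence for proportions.
import numpy as np
import pandas as pd
from scipy.stats import kruskal, chi2_contingency
import scikit_posthocs as sp  # pip install scikit-posthocs

rng = np.random.default_rng(0)

# Synthetic right-skewed citation counts; group sizes mirror the study (245/158/30).
df = pd.DataFrame({
    "citations": np.concatenate([
        rng.lognormal(mean=4.0, sigma=1.0, size=245),  # rejected null hypothesis
        rng.lognormal(mean=3.8, sigma=1.0, size=158),  # supported null hypothesis
        rng.lognormal(mean=3.7, sigma=1.0, size=30),   # mixed results
    ]),
    "group": ["rejected"] * 245 + ["supported"] * 158 + ["mixed"] * 30,
})

# Groupwise comparison of distributions (Kruskal-Wallis H test).
h_stat, p_value = kruskal(*(g["citations"].to_numpy() for _, g in df.groupby("group")))
print(f"Kruskal-Wallis: H = {h_stat:.3f}, P = {p_value:.3f}")

# Pairwise comparisons (Dunn test; the Bonferroni adjustment here is an assumption).
print(sp.posthoc_dunn(df, val_col="citations", group_col="group", p_adjust="bonferroni"))

# Chi-square test of independence on a placeholder 3 x 3 contingency table
# of result type (rows) by publication year 2013-2015 (columns).
counts = np.array([[80, 85, 80],
                   [50, 55, 53],
                   [10, 10, 10]])
chi2, p_chi, dof, _ = chi2_contingency(counts)
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, P = {p_chi:.3f}")
```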

Results

Of 498 total articles, 65 were excluded because of not reporting hypotheses (n = 56), being subordinate analyses of previously published findings (n = 7), or having been retracted (n = 2), leaving 433 published RCTs. Of these 433 trials, 245 (56.6%) rejected the null hypothesis, 158 (36.5%) supported the null hypothesis, and 30 (6.9%) reported mixed findings. The median number of citations was 56 (interquartile range [IQR], 26-106) for studies that rejected the null hypothesis and 45.5 (IQR, 20.5-89.25) for studies that supported it. The median Altmetric scores and views were 78 (IQR, 28.5-160) and 13 536 (IQR, 6126-29 266), respectively, for studies that rejected the null hypothesis and 73 (IQR, 28.5-135.5) and 13 694 (IQR, 6063-25 205), respectively, for studies that supported the null hypothesis. No groupwise or pairwise comparison of hypothesis type vs citations, Altmetric score, or views met the criteria for significance (Table).

Table. Citations, Altmetric Scores, and Views for Articles by Hypothesis and Year of Publication[a]

Study Results[b]              Median (IQR)            Kruskal-Wallis H Test     Dunn Test
                                                      H Value    P Value        Comparison                      P Value
Citations
  Rejected null hypothesis    56 (26-106)             5.975      .05            vs supported null hypothesis    .14
  Supported null hypothesis   45.5 (20.5-89.25)                                 vs mixed                        .99
  Mixed results               43 (20.75-65.5)                                   vs rejected null hypothesis     .24
Altmetric Score
  Rejected null hypothesis    78 (28.5-160)           2.343      .31            vs supported null hypothesis    >.99
  Supported null hypothesis   73 (28.5-135.5)                                   vs mixed                        .51
  Mixed results               44 (17-111.8)                                     vs rejected null hypothesis     .38
Views
  Rejected null hypothesis    13 536 (6126-29 266)    5.634      .06            vs supported null hypothesis    >.99
  Supported null hypothesis   13 694 (6063-25 205)                              vs mixed                        .10
  Mixed results               9183 (3942-16 367)                                vs rejected null hypothesis     .05

Abbreviation: IQR, interquartile range.

[a] All JAMA Network journals in existence in 2013 were screened, which included JAMA, JAMA Dermatology, JAMA Facial Plastic Surgery, JAMA Internal Medicine, JAMA Neurology, JAMA Ophthalmology, JAMA Otolaryngology, JAMA Pediatrics, JAMA Psychiatry, and JAMA Surgery.

[b] Of 433 total articles, 245 rejected the null hypothesis, 158 supported the null hypothesis, and 30 had mixed findings.

Discussion

No association was found between the postpublication metrics of RCTs published in JAMA Network journals and the direction of their findings (ie, whether they rejected or supported the null hypothesis). The extent to which a finding changes established knowledge may matter more than whether it supports the experimental hypothesis, supports the null hypothesis, or is mixed.4 Thus, a clearer understanding of what is not effective in medicine appears to attract as much public, clinical, and research interest as what is effective. Limitations of this study include that only RCTs published in JAMA Network journals were assessed, so the generalizability of these findings to other study designs or journals is unclear. Moreover, because postpublication metrics continue to accrue over time, articles published earlier inherently had more time to accumulate citations, Altmetric attention, and views; further research should assess postpublication metrics within a fixed time frame after publication.

Section Editor: Jody W. Zylke, MD, Deputy Editor.

References

1. Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502-1505. doi:10.1126/science.1255484
2. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H Jr. Publication bias and clinical trials. Control Clin Trials. 1987;8(4):343-353. doi:10.1016/0197-2456(87)90155-3
3. Rhee JS. High-impact articles-citations, downloads, and Altmetric score. JAMA Facial Plast Surg. 2015;17(5):323-324. doi:10.1001/jamafacial.2015.0869
4. Evangelou E, Siontis KC, Pfeiffer T, Ioannidis JP. Perceived information gain from randomized trials correlates with publication in high-impact factor journals. J Clin Epidemiol. 2012;65(12):1274-1281. doi:10.1016/j.jclinepi.2012.06.009
