Letter

BMC Medicine. 2020 Jun 2;18:156. doi: 10.1186/s12916-020-01630-w

Conduct and reporting of individual participant data network meta-analyses need improvement

Anna Chaimani1,2
PMCID: PMC7265632  PMID: 32482163

Background

Network meta-analysis (NMA) of healthcare interventions is increasingly used in medical research to address a key question for clinical decision making: which interventions work best for a given disease? The popularity of NMA builds on three main characteristics that make it a unique tool in the field of evidence synthesis: (a) it allows inference for comparisons that have never been evaluated head-to-head in individual studies, (b) it usually gives relative effect estimates with the highest precision, and (c) it allows estimating the ranking of interventions with respect to the outcomes of interest [1, 2]. Despite these benefits, a major constraint of most NMAs to date is that they primarily consider aggregate data, that is, data at the study level or, at best, at the arm level.
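To make characteristic (a) concrete, the core calculation behind an indirect comparison can be sketched with a toy, Bucher-style example; the effect sizes below are invented purely for illustration and do not come from any of the studies discussed here.

```python
import math

# Hypothetical aggregate results on the log odds ratio scale from two
# sets of trials that share a common comparator A (numbers invented
# purely for illustration).
lnor_ab, se_ab = -0.40, 0.15   # direct estimate: B vs A
lnor_ac, se_ac = -0.10, 0.20   # direct estimate: C vs A

# Adjusted indirect comparison of B vs C via the common comparator A:
# the point estimates are differenced and the variances add up.
lnor_bc = lnor_ab - lnor_ac
se_bc = math.sqrt(se_ab ** 2 + se_ac ** 2)

print(f"Indirect B vs C log OR: {lnor_bc:.2f} (SE {se_bc:.2f})")
print(f"95% CI: {lnor_bc - 1.96 * se_bc:.2f} to {lnor_bc + 1.96 * se_bc:.2f}")
```

The inflated standard error of the indirect estimate also hints at characteristic (b): when direct and indirect evidence on the same comparison are combined in an NMA, the resulting estimate is usually more precise than either source alone.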

The use of individual participant data (IPD) in meta-analyses is known to have several advantages, such as proper handling of missing data, investigation of associations between outcomes and participants’ characteristics, and exploration of case-mix heterogeneity [3]. According to the article by Gao et al., though, only 21 NMAs of randomized controlled trials (RCTs) with available IPD had been published by June 2019 [4]. In contrast, a collection of NMAs of aggregate RCT data already included 450 publications in 2015 [5]; this number has probably tripled by now. The main explanation for this imbalance is probably the difficulty of obtaining IPD from trial investigators. Generally, IPD sharing is a rather time- and resource-consuming procedure and, until recently, was restricted to small research communities. On top of data availability issues, conducting NMA with IPD requires knowledge of advanced statistical modeling approaches and specific expertise within the review team. This increased complexity is sometimes an obstacle for NMA investigators, leading them to refrain from an IPD analysis plan.
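As a minimal illustration of the second advantage (investigating associations between outcomes and participants’ characteristics), the sketch below fits a treatment-by-covariate interaction on simulated IPD; the variable names, effect sizes, and use of statsmodels are assumptions made for illustration and do not reproduce any of the reviewed analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated IPD for a single pairwise comparison: a treatment indicator,
# a participant-level covariate (e.g., baseline severity), and a binary
# outcome whose treatment effect varies with the covariate.
ipd = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "severity": rng.normal(0.0, 1.0, n),
})
linpred = -0.5 + 0.3 * ipd["severity"] + ipd["treat"] * (-0.6 + 0.4 * ipd["severity"])
ipd["event"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# With IPD, the treatment-by-covariate interaction is estimated directly
# from within-trial information; with aggregate data only, such effect
# modification can be explored only through trial-level summaries and is
# prone to ecological bias.
model = smf.logit("event ~ treat * severity", data=ipd).fit(disp=False)
print(model.summary())
```

In a full IPD NMA, this kind of participant-level modelling would be embedded in a (typically hierarchical) model spanning all treatment comparisons [6, 7]; the sketch only shows why participant-level data are needed to estimate such interactions reliably.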

Recent advances

Interestingly, the study by Gao et al. does not show a consistent increase in publications of IPD NMAs over the years, as would be expected given the overall steep increase of NMAs in the literature [5]. However, new initiatives that aim to promote large-scale IPD sharing, such as YODA (https://yoda.yale.edu/) and Clinical Study Data Request (https://www.clinicalstudydatarequest.com/), may facilitate meta-analysts’ access to such data. At the same time, researchers have become more familiar with methods for incorporating IPD in NMA, owing to the greater focus placed on the field lately, including new methodological developments [6, 7] and several training events (e.g., Cochrane webinars). Considering these important advances, a larger number of IPD NMAs is anticipated to be published in the next few years. It is of great importance, though, to ensure that they follow high-quality standards, since IPD NMAs are usually considered superior to NMAs of aggregate data and their conclusions may have a stronger impact.

Empirical results

Gao et al. found important deficiencies in the 21 IPD NMAs they identified. Specifically, they evaluated the methodological and reporting quality of these NMAs using three tools, the PRISMA-IPD [8], PRISMA-NMA [9], and AMSTAR-2 [10] checklists (after removing overlapping items), and concluded that the overall quality of the publications was suboptimal. One of the most important limitations was the lack of an assessment of the NMA assumptions: none of the articles reported assessing the fundamental conceptual synthesis assumption, and fewer than half assessed the corresponding statistical assumption. Further, most of the NMAs ignored missing data or used naïve imputation methods, none described how IPD and aggregate data were combined when both were used, and only 10% reported registration of a protocol and evaluation of across-study biases. Based on AMSTAR-2, the majority of these NMAs were rated as being of critically low quality. It is encouraging that involvement of a statistician or epidemiologist in the review team was associated with better overall methodological and reporting quality.

These findings raise concerns about the validity of results from published IPD NMAs. Of course, some of these articles were published before the appraisal tools used here were developed, and full compliance could not be expected. Deficiencies, though, did not seem to be mitigated after the checklists became available. For example, PRISMA-NMA clearly states that evaluation of the synthesis assumptions is crucial, yet even subsequent IPD NMAs did not follow this recommendation. It is also possible that some of the identified deficiencies reflect poor reporting rather than poor methodological quality.

Conclusions

In conclusion, the study by Gao et al. raises an important question: are existing guidelines for conducting and reporting NMAs in the presence of IPD sufficient? Although a combination of PRISMA-IPD and PRISMA-NMA would cover most of the information that needs to be reported, it seems reasonable to move towards specific guidance for IPD NMAs. A comprehensive list of items that any IPD NMA should report, such as an extension of PRISMA-NMA, would be helpful not only to NMA authors but also to journal editors and reviewers, who would be able to easily judge the reporting and methodological completeness of these articles.

Acknowledgements

Not applicable

Abbreviations

AMSTAR

A MeaSurement Tool to Assess systematic Reviews

IPD

Individual participant data

NMA

Network meta-analysis

PRISMA

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

RCT

Randomized controlled trial

YODA

Yale University Open Data Access

Author’s contributions

AC drafted the first version of the manuscript. All authors read and approved the final manuscript.

Funding

Not applicable

Availability of data and materials

Not applicable

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Salanti G. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods. 2012;3(2):80–97. doi:10.1002/jrsm.1037.
2. Chaimani A, Caldwell DM, Li T, Higgins JP, Salanti G. Undertaking network meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Wiley; 2019. p. 285–320. doi:10.1002/9781119536604.ch11.
3. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ. 2010;340:c221. doi:10.1136/bmj.c221.
4. Gao Y, Shi S, Li M, Luo X, Liu M, Yang K, Zhang J, Song F, Tian J. Statistical analyses and quality of individual participant data network meta-analyses were suboptimal: a cross-sectional study. BMC Med. 2020. doi:10.1186/s12916-020-01591-0.
5. Petropoulou M, Nikolakopoulou A, Veroniki AA, Rios P, Vafaei A, Zarin W, Giannatsi M, Sullivan S, Tricco AC, Chaimani A, Egger M, Salanti G. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol. 2017;82:20–28. doi:10.1016/j.jclinepi.2016.11.002.
6. Debray TP, Schuit E, Efthimiou O, et al. An overview of methods for network meta-analysis using individual participant data: when do benefits arise? Stat Methods Med Res. 2018;27(5):1351–1364. doi:10.1177/0962280216660741.
7. Leahy J, O’Leary A, Afdhal N, et al. The impact of individual patient data in a network meta-analysis: an investigation into parameter estimation and model selection. Res Synth Methods. 2018;9(3):441–469. doi:10.1002/jrsm.1305.
8. Stewart LA, Clarke M, Rovers M, et al. Preferred Reporting Items for Systematic Review and Meta-Analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313(16):1657–1665. doi:10.1001/jama.2015.3656.
9. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JPA, Straus S, Thorlund K, Jansen JP, Mulrow C, Catalá-López F, Gøtzsche PC, Dickersin K, Boutron I, Altman DG, Moher D. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777. doi:10.7326/M14-2385.
10. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008. doi:10.1136/bmj.j4008.
