Abstract
Introduction:
Systematic reviews are essential for evidence-based healthcare decisions, yet data extraction errors, though often overlooked, pose a substantial threat to their validity. In the field of urology, the number of systematic reviews has risen markedly, yet few have been subjected to rigorous examination. This study investigates data reproducibility issues in urological systematic reviews, highlighting the critical need for scrutiny in evidence synthesis.
Methods:
This study examines data extraction errors in systematic reviews from 58 urology journals indexed in PubMed and Embase. Systematic reviews that include meta-analyses with randomized controlled trials will be selected. Data extraction will be carried out independently by two reviewers using standardized forms, followed by cross-verification with original sources. Errors will be categorized at the review, meta-analysis, and study levels. Statistical analyses will evaluate the prevalence of these errors and their impact on meta-analytic results. Sensitivity analyses will explore the effect of missing data on the study outcomes.
Discussion:
This study addresses the often-overlooked issue of data extraction errors in urology systematic reviews, which could impact the reliability of evidence-based decisions. By evaluating the reproducibility of data extraction, the study aims to enhance methodological rigor in urological reviews and improve the validity of conclusions drawn from evidence synthesis. Despite its limitations, this research will contribute valuable insights into the quality of systematic reviews and guide future improvements in evidence-based practice.
Keywords: data extraction errors, evidence synthesis practice, reproducibility, urology
Introduction
Systematic reviews and meta-analyses serve as a crucial source of high-grade evidence to guide decision making in contemporary medical practice[1]. Ensuring the reliability of these reviews requires adherence to a standardized and rigorous process that guides review authors in their implementation[2]. Within this process, data extraction stands out as a pivotal step, transferring information from the original sources into the systematic review for subsequent synthesis. Errors at this stage can significantly affect the final results, potentially leading to misleading conclusions. Figure 1 illustrates the level of the evidence pyramid with which this study is concerned.
HIGHLIGHTS
What does this add to what is known?
This study highlights the prevalence and impact of data extraction errors in systematic reviews within urology.
It addresses a gap in previous research by focusing specifically on the reproducibility of data extraction in this field.
It provides insights into how data extraction errors can affect the results and conclusions of meta-analyses in urology.
What is the implication, what should change now?
There is a need for more rigorous validation of data extraction processes in systematic reviews, especially in urology.
Improved error-checking and data verification should be incorporated into the methodology of systematic reviews to enhance their reliability.
Urgent implementation guidelines are needed to help authors of future systematic reviews improve data extraction validity.
Figure 1.

The part of the evidence pyramid we are interested in. RCT = randomized controlled trial.
Contemporary investigations into the quality of systematic reviews and meta-analyses have predominantly concentrated on aspects such as design, methods, and reporting, with limited emphasis on validating the data extraction process[3,4]. Despite the implementation of quality assurance mechanisms, systematic reviews often contain data extraction errors[5,6]. A recent study of 201 systematic reviews revealed that 85.1% of them exhibited at least one error in data extraction[7]. That analysis also examined the effect of errors in 288 meta-analyses: data extraction errors changed the direction of the effect in 10 (3.5%) of the 288 meta-analyses and changed the significance of the P value in 19 (6.6%). These findings underscore that errors in data extraction are a pervasive and significant source of bias, threatening the reliability of systematic review results.
In the field of urology, there has been a notable rise in the volume of systematic reviews over the past decades[8,9]. Studies have demonstrated that meta-analyses in pediatric urology lack robustness[10]: even minor data extraction errors can substantially alter the statistical significance (P value) of the findings, ultimately undermining the validity of the conclusions. Furthermore, urological research often involves complex datasets, including surgical outcomes and longitudinal follow-up data, which typically encompass multiple variables. The intricacy of such data increases the risk of extraction errors, potentially leading to biased and unreliable results. Earlier assessments have indicated significant methodological shortcomings in systematic reviews in sleep medicine, particularly concerning the validation of data extraction[11]. Whether the same issue exists in urology remains unknown. Understanding the accuracy of data extraction and the impact of errors on results is crucial for evidence-based practice and decisions. Therefore, this study examines data reproducibility issues in systematic reviews in urology, building on previous evidence[7,11,12].
Methods
Literature search
A literature search will be conducted across the main urology journals indexed in PubMed and Embase. Given the primary aim of the current research, we will not cover journals of andrology, gynecology, or obstetrics, although some systematic reviews/meta-analyses related to urology may be published in these journals. Through a prior search of the Scimago Journal & Country Rank (https://www.scimagojr.com/), we identified 80 urology journals, for example, “European Urology,” “Journal of Urology,” and “BJU International.” We excluded 22 of them because they were either unrelated to urology or non-English-language journals. A senior information specialist developed the search strategies to ensure the quality of the literature search. The search strategy to be used in PubMed is presented in Table 1; similar strategies will be applied in Embase. A full list of the remaining 58 journals and the complete search strategy will be provided in Supplementary File 1 (available at: http://links.lww.com/ISJP/A10).
Table 1.
Search strategy in PubMed.
| No. | Search term |
|---|---|
| #1 | “Eur Urol”[ta] OR “Eur Urol Oncol”[ta] OR “Nat Rev Urol”[ta] OR “J Urol”[ta] OR “Prostate Cancer Prostatic Dis”[ta] OR “Eur Urol Focus”[ta] OR “BJU Int”[ta] OR “Clin Genitourin Cancer”[ta] OR “Prostate”[ta] OR “World J Urol”[ta] OR “World J Mens Health”[ta] OR “J Endourol”[ta] OR “Urol Oncol”[ta] OR “Abdom Radiol (NY)”[ta] OR “Curr Urol Rep”[ta] OR “Int Urogynecol J”[ta] OR “Curr Opin Urol”[ta] OR “Urolithiasis”[ta] OR “Prostate Int”[ta] OR “Neurourol Urodyn”[ta] OR “Ther Adv Urol”[ta] OR “Int J Urol”[ta] OR “Urology”[ta] OR “J Pediatr Urol”[ta] OR “Urol Clin North Am”[ta] OR “Female Pelvic Med Reconstr Surg”[ta] OR “Asian J Urol”[ta] OR “BMC Urol”[ta] OR “Int Neurourol J”[ta] OR “Int Braz J Urol”[ta] OR “Transl Androl Urol”[ta] OR “Eur Urol Open Sci”[ta] OR “Bladder Cancer”[ta] OR “Int J Impot Res”[ta] OR “Scand J Urol”[ta] OR “Investig Clin Urol”[ta] OR “Urol Int”[ta] OR “Can Urol Assoc J”[ta] OR “Adv Urol”[ta] OR “Low Urin Tract Symptoms”[ta] OR “Arab J Urol”[ta] OR “Res Rep Urol”[ta] OR “Wideochir Inne Tech Maloinwazyjne”[ta] OR “Urol J”[ta] OR “Cent European J Urol”[ta] OR “Can J Urol”[ta] OR “Prostate Cancer”[ta] OR “Arch Ital Urol Androl”[ta] OR “Turk J Urol”[ta] OR “Indian J Urol”[ta] OR “Actas Urol Esp”[ta] OR “Prog Urol”[ta] OR “Urol Ann”[ta] OR “Afr J Urol”[ta] OR “Urol Sci”[ta] OR “J Clin Urol”[ta] OR “Arch Esp Urol”[ta] |
| #2 | “Meta-analysis” [pt] |
| #3 | “randomized controlled trials as topic”[mh] OR trial*[tiab] OR random*[tiab] |
| #4 | #1 AND #2 AND #3 |
Grey literature will not be included, as we are only targeting published systematic reviews and meta-analyses. Although sample size is important in research[13-15], we refrained from searching additional databases because our objective was not to capture every systematic review; a representative sample is adequate for the purpose of the present study. For the same reason, we will not hand-search the reference lists of each systematic review and meta-analysis.
Types of study to be included
We will include systematic reviews of randomized controlled trials that contain one or more meta-analyses and were published in the primary urology journals. To enhance the replicability of the data employed in the meta-analyses, only reviews providing 2 by 2 table data in the article for binary outcomes, or the sample size (n), mean, and standard deviation of both comparison groups for continuous outcomes, will be considered. Rapid reviews, narrative reviews, clinical guidelines, scoping reviews, overviews, and network meta-analyses will not be considered. Pooled analyses without a comprehensive literature search (of at least one database) will also be excluded. In certain circumstances, authors may present their work as an original study combined with a systematic review, or as an overview/scoping review combined with a systematic review; such hybrid publications will likewise be excluded. Non-English-language studies will not be included.
Identification of RCTs
Using “randomized controlled trials” as one of the search terms (specific strategies are shown in Supplementary File 1, available at: http://links.lww.com/ISJP/A10), we will identify randomized controlled trials (RCTs) from a comprehensive database of studies. Despite the potential for underdetection, this method remains the prevailing standard practice[16,17], and the filter automatically excludes a substantial number of non-RCT articles. Database filters combine text strings with database tags and are developed by information specialists in conjunction with the search terms. The remaining articles will then undergo manual screening.
Literature screen
We will use EndNote X9 to merge search results from the different sources and first check the records for duplicates. Deduplication is a multi-stage process comprising automatic and manual steps. Automatic deduplication will be done in two steps: exact matching, followed by crude matching based on author, title, year, and journal. The literature screening will then be conducted independently by two review authors. We will first read titles and abstracts and exclude records clearly outside the scope of the current review; most unrelated records are expected to be excluded at this step. The full texts of the remaining publications will then be screened for a final decision. Any disputes will be settled by discussion until consensus is reached. To keep the process blind, we will use Rayyan (https://rayyan.qcri.org/), a widely used online screening application. The study flow chart is shown in Fig. 2. The exclusion lists and reasons for exclusion will be provided in Supplementary File 2 (available at: http://links.lww.com/ISJP/A11).
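The two-step automatic deduplication described above (exact matching, then crude matching on author, title, year, and journal) can be sketched as follows. This is an illustrative sketch only; the field names are assumptions, not the actual EndNote export schema.

```python
def normalize(record):
    """Crude matching key: case- and whitespace-insensitive
    author/title/year/journal."""
    return (
        record["author"].strip().lower(),
        record["title"].strip().lower(),
        str(record["year"]),
        record["journal"].strip().lower(),
    )

def deduplicate(records):
    """Keep the first occurrence of each record, dropping exact
    duplicates first and then crude (fuzzy-key) duplicates."""
    seen_exact, seen_crude, unique = set(), set(), []
    for rec in records:
        exact_key = tuple(sorted(rec.items()))
        if exact_key in seen_exact:      # step 1: exact duplicate
            continue
        crude_key = normalize(rec)
        if crude_key in seen_crude:      # step 2: crude duplicate
            continue
        seen_exact.add(exact_key)
        seen_crude.add(crude_key)
        unique.append(rec)
    return unique
```

In practice EndNote performs these steps internally; the sketch only makes explicit why crude matching catches near-duplicates (case or whitespace variants) that exact matching misses, before manual deduplication resolves the remainder.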
Figure 2.
Flowchart for selection of articles. RCT = randomized controlled trial.
Data collection
This review will concentrate on identifying data extraction errors in eligible systematic reviews, encompassing two distinct steps. The initial step involves gathering data utilized in the meta-analyses of these reviews. Within this phase, the following information will be collected using a standardized Excel sheet (Office 2016, Microsoft, USA): the first author’s name of the systematic review, publication year, outcome names, first authors’ names of included studies, publication years of included studies, summarized data (2 by 2 table for binary outcomes; mean1, sd1, n1, mean2, sd2, n2 for continuous outcomes) for each included study within the meta-analysis, citations for included studies, and detailed methods employed for meta-analysis. The following information will also be extracted: region, registration, classification of intervention (e.g., pharmacological), statistical analysis methods, source of funding (e.g., none, not reported, industry, academic), topic of disease, etc. This process will be carried out by an experienced author and subsequently cross-verified by a second author. Any disagreements will be checked and addressed by the core research members.
Following the compilation of meta-analytic data from the systematic reviews, the same two authors will consult the original sources of data (published articles, supplementary files, registrations, and pharmaceutical company websites) for each included study to replicate the data extraction process. Instances where replication is not feasible from any of the original sources will be categorized as data extraction errors. Once again, this step will be executed by an experienced author and double-checked separately by two other authors to confirm that no errors were introduced during data extraction.
We will conduct pilot training on the items to be extracted before formal data extraction. All extractors will be required to complete at least two rounds of data extraction training tasks, and only those who achieve 90% or higher accuracy will be allowed to participate in the project.
Data reproduction
We plan to scrutinize the original data, encompassing full RCT reports, supplementary files, and public registers, for each included study to verify the accuracy of the components utilized in the meta-analysis. For continuous outcomes, missing data estimation will be rerun, taking the data structure into account. The estimation methods will be the same as those reported by the review authors; where such information is not provided, we will follow the estimation methods suggested in the Cochrane Handbook (versions 4.2.6 to 6.3)[18]. Recent research suggests that errors in data collection may stem from five mechanisms: numerical errors, data ambiguity errors, mismatch errors, null hypothesis errors, and identification errors[7]. Any emerging forms of errors will additionally be documented.
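For continuous outcomes, two estimation rules commonly suggested in the Cochrane Handbook are recovering a standard deviation from a reported standard error, or from a 95% confidence interval. A minimal sketch of these two conversions (the function names are ours, not from the Handbook):

```python
import math

def sd_from_se(se, n):
    """Cochrane Handbook rule: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n):
    """Cochrane Handbook rule for a 95% CI of a mean:
    SD = sqrt(n) * (upper - lower) / 3.92."""
    return math.sqrt(n) * (upper - lower) / 3.92
```

For example, a trial arm of n = 25 reporting a standard error of 2.0 implies a standard deviation of 10.0. Applying the same rule the review authors used keeps the replication comparable with the published meta-analysis.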
A review is deemed error-free for a given data point when its authors report having contacted the lead author of the original study and obtained the original data, even if the published data are inconsistent. Data replication will adhere to the same procedure, involving initial extraction by one author and subsequent review by another.
Statistical analysis
Descriptive summaries of the main features of the meta-analyses will be presented using medians with interquartile ranges or proportions. Potential data extraction errors will be categorized into three levels – systematic review, meta-analysis, and study level – and reported as frequencies and proportions. Considering potential differences in characteristics and impact, these analyses will be stratified by outcome type, distinguishing between binary and continuous outcomes. Subsequently, we will rerun all meta-analyses containing data extraction errors, conducting one analysis without addressing these errors and another with the errors corrected. Both analyses will follow the meta-analytic methods documented in the respective systematic reviews. Results from each pair will be compared to assess potential discrepancies, providing insights into the possible impact of errors on the results and conclusions.
The primary outcome of this study is the prevalence of errors across the various levels. At the meta-analysis level, the prevalence of errors will be determined by calculating the ratio of meta-analyses containing at least one study with errors to the total number of meta-analyses. Similarly, at the systematic review level, the prevalence of errors will be computed as the proportion of systematic reviews in which at least one meta-analysis contained errors, relative to the total number of systematic reviews. Additionally, we plan to explore the distribution of each error type, as defined earlier, among the overall errors.
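The two prevalence definitions above can be expressed compactly. This is a sketch under an assumed record layout (one row per meta-analysis, tagged with its parent review and its count of error-containing studies), not the protocol's actual analysis code:

```python
def prevalence_of_errors(meta_analyses):
    """meta_analyses: list of dicts like
    {"review": "R1", "studies_with_errors": 2, "n_studies": 10}.
    Returns (meta-analysis-level, review-level) error prevalence."""
    with_error = [m for m in meta_analyses if m["studies_with_errors"] > 0]

    # Meta-analysis level: meta-analyses with >=1 erroneous study / all MAs
    ma_prev = len(with_error) / len(meta_analyses)

    # Review level: reviews with >=1 erroneous meta-analysis / all reviews
    reviews = {m["review"] for m in meta_analyses}
    reviews_with_error = {m["review"] for m in with_error}
    review_prev = len(reviews_with_error) / len(reviews)

    return ma_prev, review_prev
```

Note that the two levels can diverge: a review with many meta-analyses counts once at the review level no matter how many of its meta-analyses contain errors.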
The secondary outcome of our investigation is the potential influence of the various errors on the results of the meta-analyses. This evaluation entails comparing the outcomes synthesized from error-corrected data with the results documented in the reviews, employing identical effect estimates and methodologies. The following metrics will be assessed: the proportion of meta-analyses in which the direction of the effect estimate changed (e.g., from RR >1 to RR <1), the proportion in which statistical significance changed (from P <0.05 to P >0.05), and the proportion in which both the effect size and the P value changed after correcting the pooled results. Regarding the magnitude of the effect, we categorize the impact as slight (<20%), moderate (20–50%), or large (≥50%), calculated as |(EScorrected − ESoriginal)/ESoriginal| × 100%, where ESoriginal and EScorrected are the estimated pooled effects before and after error correction[19].
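The magnitude classification can be sketched as a relative percent change in the pooled effect, bucketed into the three categories named above. The percent-change formula here is our reading of the protocol's thresholds, not a verified reproduction of the cited method:

```python
def effect_change(original, corrected):
    """Relative change in the pooled effect after correcting errors,
    classified as slight (<20%), moderate (20-50%), or large (>=50%)."""
    pct = abs(corrected - original) / abs(original) * 100
    if pct < 20:
        return pct, "slight"
    if pct < 50:
        return pct, "moderate"
    return pct, "large"
```

For example, a pooled RR moving from 2.0 to 1.0 after correction is a 50% change and would be classified as large, while a move from 1.0 to 1.1 (10%) would be slight.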
The above analyses will be conducted using Stata SE/16 (StataCorp LLC, College Station, TX, USA) or Comprehensive Meta-Analysis version 4.0 (Biostat, Englewood, NJ, USA), with a P value of 0.05 as the threshold for significance. Visualization will be conducted using Excel 2016 (Microsoft, Redmond, WA, USA).
Missing data
We anticipate that the original sources for certain randomized controlled trials may be unavailable (e.g., withdrawn or dated publications that we cannot obtain despite exhausting every means), resulting in missing information for these components. We expect a minimal rate of missing data, deemed unlikely to influence the study outcomes[20,21]. To validate this presumption, a rigorous sensitivity analysis will be undertaken: we will systematically exclude meta-analyses containing missing data and recalculate the primary outcomes using identical analytical methods. The results of this sensitivity analysis will then be compared with the original findings to ascertain whether missing data exert a significant influence on the overall outcomes. Furthermore, non-disclosure of trial phase information may contribute to missing information; potential impacts will be thoroughly examined, particularly through post hoc sensitivity analyses, whenever feasible.
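The exclude-and-recompute step of the sensitivity analysis reduces to filtering out meta-analyses with missing data and recomputing the primary prevalence on the remainder. A minimal sketch, with an assumed flag-based record layout:

```python
def sensitivity_excluding_missing(meta_analyses):
    """Recompute the meta-analysis-level error prevalence after
    excluding meta-analyses that contain missing data.
    meta_analyses: list of dicts like
    {"has_missing": False, "has_error": True}."""
    complete = [m for m in meta_analyses if not m["has_missing"]]
    if not complete:
        return None  # nothing left to analyze
    with_error = [m for m in complete if m["has_error"]]
    return len(with_error) / len(complete)
```

Comparing this restricted prevalence against the all-inclusive estimate indicates whether the missing data materially shift the primary outcome.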
Discussion
The study outlined in this protocol addresses a crucial aspect of systematic reviews in the field of urology – the potential impact of data extraction errors on the reliability of conclusions drawn from these reviews. The significance of this research lies in its potential to enhance the quality and validity of systematic reviews, which are integral to evidence-based decision-making in healthcare.
Contemporary investigations into the quality of systematic reviews have primarily concentrated on design, methods, and reporting, often overlooking validation of the data extraction process. An earlier study of 34 Cochrane systematic reviews found that 58.5% exhibited errors in data extraction, highlighting a pervasive issue in the field[5]. That study concluded that although data extraction errors occurred in high proportions, they did not lead to a substantial change in any of the conclusions, contrary to the findings of a recent study in the field of sleep medicine. Moreover, the rising volume of systematic reviews in urology over the past decades, together with the distinct difficulties encountered in urological reviews, including the complexity of the data to be incorporated, adds urgency to the need for a comprehensive examination of the validity of data extraction in this domain. By addressing this gap, the research aims to contribute valuable insights into the methodological robustness of systematic reviews in urology. The study acknowledges the potential implications of errors for evidence-based practice and decision-making, underscoring the broader relevance of its findings.
Systematic bibliometric analysis, which quantifies literature to map research trends, hotspots, and academic impact[22,23], shares challenges with systematic reviews. Despite automation (e.g., database queries), errors persist during data cleaning, classification, and standardization. Validation remains critical due to heterogeneous data sources and formats, increasing risks of oversights. Reproducibility is further compromised by variability in analytical tools and parameter settings. While bibliometric insights inform evidence-based decisions (e.g., trend identification), extraction errors risk skewed conclusions, undermining scientific validity. Ensuring methodological transparency and robust validation protocols is thus essential to mitigate biases and enhance reliability in both bibliometric and systematic review research.
The expected outcomes, involving the re-run of meta-analyses with and without addressing data extraction errors, promise to provide valuable insights into the potential discrepancies and the overall impact of errors on results and conclusions. This comparative analysis is crucial for understanding the robustness of systematic reviews in the face of data extraction challenges.
Nevertheless, this reproducibility study has certain limitations. First, the systematic reviews were sourced from a restricted pool of 58 academic journals in the field of urology; some systematic reviews in urology may have been published in general journals not encompassed in our study. Future research should cover a more extensive spectrum of journals to enhance the generalizability of the findings. Second, to ensure the practicality of executing this study, we restricted inclusion to systematic reviews based on RCTs with complete pooled data available for each included trial. These factors may compromise the representativeness of our findings. Different study designs, such as observational studies, may present distinct challenges and error profiles in data extraction; including a wider variety of study designs in future research could clarify how design affects the prevalence and types of data extraction errors, and comparing error rates between observational and experimental studies could further enhance generalizability. Nonetheless, we maintain that the selected studies suffice for characterizing data reproducibility problems within this domain. Third, the categorization of data extraction errors used in this study may not be exhaustive. Although our classification scheme was developed from established frameworks and expert consensus, other types of errors may exist that were not captured in our analysis. Future research should explore a broader range of potential errors to ensure a more comprehensive understanding of the issue.
Given the challenges observed in both systematic reviews and bibliometric analysis, future research should focus on developing more robust validation methods for data extraction and improving the reproducibility of results. This includes addressing the inconsistencies in bibliographic record formats and exploring the impact of different analytical tools and parameter settings on the results. Additionally, broadening the scope of journals and including more diverse types of studies in future research could enhance the generalizability of findings and provide a more comprehensive understanding of data extraction errors across different fields.
In conclusion, the study outlined in this protocol has the potential to significantly contribute to the improvement of systematic review quality in urology. To the best of our knowledge, this is the first research evaluating the potential data extraction errors and their impact on the field of urology. By addressing the often-overlooked issue of data extraction errors, the research aims to enhance the reliability of evidence available for clinical decision-making, ultimately benefiting both healthcare practitioners and patients.
Stage of research
A preliminary search was conducted by 1 August 2023, and the screening phase was finalized by 20 September 2023. Data extraction and verification were completed on 8 August 2024. Currently, data analysis is ongoing. The general schedule and records of the article process are presented in Supplementary File 3 (available at: http://links.lww.com/ISJP/A12).
Footnotes
Sponsorships or competing interests that may be relevant to content are disclosed at the end of this article.
Supplemental Digital Content is available for this article. Direct URL citations are provided in the HTML and PDF versions of this article on the journal's website, www.ijsprotocols.com.
Published online 2 June 2025
Contributor Information
Zuhaer Yisha, Email: 2411110187@bjmu.edu.cn.
Linfa Guo, Email: guolinfa@whu.edu.cn.
Tongzu Liu, Email: liutongzu@163.com.
Xiaolong Wang, Email: nogardinmunich@gmail.com.
Ethics approval
Formal ethical approval and consent are not required as the data are not individualized.
Consent
None.
Sources of funding
This study was funded by the National Natural Science Foundation of China (Grant No. 82400906), with the researchers maintaining full independence in all aspects of the study, from design to publication, without any involvement from the funding organizations.
Author’s contribution
Z.Y.: writing – original draft, methodology, formal analysis, data curation, visualization; L.G. and A.G.: data curation; S.L. and T.L.: writing – review and editing, methodology; X.W.: conceptualization, methodology, project administration, supervision, writing – review and editing.
Conflicts of interest disclosure
We declare no known competing financial interests.
Guarantor
Zuhaer Yisha and Xiaolong Wang
Research registration unique identifying number (UIN)
Not applicable.
Provenance and peer review
Our paper was not invited.
Data availability
The data will be shared with the public after the publication of the research.
References
- [1].Gurevitch J, Koricheva J, Nakagawa S, Stewart G. Meta-analysis and the science of research synthesis. Nature 2018;555:175–82. [DOI] [PubMed] [Google Scholar]
- [2].Cochrane Handbook for Systematic Reviews of Interventions, (n.d.). https://training.cochrane.org/handbook (Accessed December 1, 2023).
- [3].Chapter 5: collecting data, (n.d.). https://training.cochrane.org/handbook/current/chapter-05 (Accessed December 1, 2023).
- [4].Siddaway AP, Wood AM, Hedges LV. How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annu Rev Psychol 2019;70:747–70. [DOI] [PubMed] [Google Scholar]
- [5].Jones AP, Remmington T, Williamson PR, et al. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews. J Clin Epidemiol 2005;58:741–42. [DOI] [PubMed] [Google Scholar]
- [6].Mathes T, Klaßen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol 2017;17:152. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [7].Xu C, Yu T, Furuya-Kanamori L, et al. Validity of data extraction in evidence synthesis practice of adverse events: reproducibility study. BMJ 2022;377:e069155. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [8].Braga LH, Pemberton J, Demaria J, Lorenzo AJ. Methodological concerns and quality appraisal of contemporary systematic reviews and meta-analyses in pediatric urology. J Urol 2011;186:266–71. [DOI] [PubMed] [Google Scholar]
- [9].Han JL, Gandhi S, Bockoven CG, et al. The landscape of systematic reviews in urology (1998 to 2015): an assessment of methodological quality. BJU Int 2017;119:638–49. [DOI] [PubMed] [Google Scholar]
- [10].Anand S, Kainth D. Fragility index of recently published meta-analyses in pediatric urology: a striking observation. Cureus 2021;13:e16225. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [11].Xu C, Doi SAR, Zhou X, et al. Data reproducibility issues and their potential impact on conclusions from evidence syntheses of randomized controlled trials in sleep medicine. Sleep Med Rev 2022;66:101708. [DOI] [PubMed] [Google Scholar]
- [12].Xu C, Furuya-Kanamori L, Kwong JSW, et al. Methodological issues of systematic reviews and meta-analyses in the field of sleep medicine: a meta-epidemiological study. Sleep Med Rev 2021;57:101434. [DOI] [PubMed] [Google Scholar]
- [13].Cook JA, Julious SA, Sones W, et al. DELTA2 guidance on choosing the target difference and undertaking and reporting the sample size calculation for a randomised controlled trial. BMJ 2018;363:k3750. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [14].Andrade C. Sample size and its importance in research. Indian J Psychol Med 2020;42:102–03. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [15].Süt N, Ajredani M, Koçak Z. Importance of sample size calculation and power analysis in scientific studies: an example from the Balkan Medical Journal. Balkan Med J 2022;39:384–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [16].Marshall IJ, Noel-Storr A, Kuiper J, et al. Machine learning for identifying randomized controlled trials: an evaluation and practitioner’s guide. Res Synth Methods 2018;9:602–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [17].Lefebvre C, Glanville J, Briscoe S, et al. on behalf of the C.I.R.M. Group. Searching for and selecting studies. In: Cochrane Handbook for Systematic Reviews of Interventions. John Wiley & Sons, Ltd; 2019. 67–107. 10.1002/9781119536604.ch4. [DOI] [Google Scholar]
- [18].Chapter 10: analysing data and undertaking meta-analyses, (n.d.). https://training.cochrane.org/handbook/current/chapter-10 (Accessed December 29, 2023).
- [19].Xu C, Ju K, Lin L, et al. Rapid evidence synthesis approach for limits on the search date: how rapid could it be? Res Synth Methods 2022;13: 68–76. [DOI] [PubMed] [Google Scholar]
- [20].Kahale LA, Diab B, Brignardello-Petersen R. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol 2018;99:14–23. [DOI] [PubMed] [Google Scholar]
- [21].Spineli LM, Pandis N, Salanti G. Reporting and handling missing outcome data in mental health: a systematic review of Cochrane systematic reviews and meta-analyses. Res Synth Methods 2015;6:175–87. [DOI] [PubMed] [Google Scholar]
- [22].Guo S-B, Feng X-Z, Huang W-J, et al. Global research hotspots, development trends and prospect discoveries of phase separation in cancer: a decade-long informatics investigation. Biomark Res 2024;12:39. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [23].Guo S-B, Hu L-S, Huang W-J, et al. Comparative investigation of neoadjuvant immunotherapy versus adjuvant immunotherapy in perioperative patients with cancer: a global-scale, cross-sectional, and large-sample informatics study. Int J Surg 2024;110:4660–71. [DOI] [PMC free article] [PubMed] [Google Scholar]