Abstract
Objective:
Systematic reviews and meta-analyses (SRMAs) rely on comprehensive searches of diverse resources that catalog primary studies. However, because what constitutes a comprehensive search is unclear, we examined trends in the databases searched from 2005 to 2016, surrounding the 2013 publication of search guidelines, and associations between the resources searched and evidence of publication bias in SRMAs involving human subjects.
Study Design:
To ensure comparability of the included SRMAs over the 12 years, in the face of a nearly 100-fold increase in international SRMAs (mainly genetic studies from China) during this period, we focused on USA-affiliated SRMAs, manually reviewing 100 randomly selected SRMAs from those published in each year. After excluding articles (mainly for inadequate detail or out-of-scope methods), we identified factors associated with the databases searched, used network analysis to determine which resources were searched together, and used logistic regression to link the information sources searched with a lower chance of finding publication bias.
Results:
Among the 817 SRMA articles studied, the most common resources used were Medline (95%), EMBASE (44%), and Cochrane (41%). SRMAs in methods journals were the most likely to use registries and gray literature resources. We found substantial co-searching of resources that index only published materials, not complemented by searches of registries and the gray literature. The 2013 guideline did not substantially increase searching of registries and gray literature resources to retrieve primary studies for SRMAs. When used to augment Medline, Scopus (in all SRMAs) and ClinicalTrials.gov (in SRMAs with safety outcomes) were negatively associated with publication bias.
Conclusions:
Even SRMAs that search multiple sources tend to search similar resources. Our study supports searching Scopus and ClinicalTrials.gov in addition to Medline to reduce the chance of publication bias.
Keywords: Systematic review, Meta-analysis, Evidence synthesis, Trial registries, Grey literature, Literature databases, Publication bias
1. Introduction
An increasing number of bibliographic databases cataloging biomedical literature have made searching for primary studies less arduous for meta-analysis researchers [1]. Bolstered by this ease of electronic literature searching, publications of systematic reviews and meta-analyses (SRMAs) have increased exponentially over the last decade; however, there has not been a comparable improvement in methodological rigor or reporting standards [2]. Meanwhile, studies linking favorable meta-analysis outcomes with financial conflicts of interest of authors have highlighted the susceptibility of SRMAs to manipulation [3–5]. Amidst these findings, some have questioned the position of SRMAs as the highest level of evidence [6], while others have called for ensuring greater objectivity and reproducibility in producing them [7]. One way to ensure the objectivity of SRMA results is to perform comprehensive searches of resources mining the primary literature [8]. However, what constitutes a comprehensive search remains open to interpretation [9] and potential manipulation.
The purpose of an SRMA is to summarize all scientifically generated evidence on a topic of interest. A systematic search for data from a diverse body of evidence is fundamental to serve that purpose. In addition to extracting data from published studies, it is important to search unpublished studies (also known as gray literature) as research shows that the latter have smaller treatment effects than published studies [10,11] and that inclusion of unpublished results can change conclusions of meta-analyses [12,13]. Conversely, failure to include unpublished data biases the results toward a positive treatment effect (also known as publication bias) [14,15]. However, publication bias affects even the best SRMAs, as shown by Kicinski and colleagues in a sample of 1,106 Cochrane SRMAs, using a Bayesian hierarchical selection model [16]. Thus, research into the evolving use and relative importance of information resources mining published and unpublished research can improve the scientific rigor of evidence synthesis.
Although including both published and unpublished data is important for validity, in practice, searching unpublished data is not easy [17–19], nor are guidelines suggesting resources to be searched for unpublished data consistent with one another [20]. Most popular biomedical search engines mine only published studies, and little consensus exists regarding which resources to look into for unpublished data [9,21]. Among the sources of unpublished studies, trial registries established to capture data from clinical studies have emerged as a rich source of information over the last decade. Registries store information from studies regardless of the success of their outcomes, making them an important source for unpublished research [8]. Clinical trials of drugs, biologics, and devices must be registered in study registries including ClinicalTrials.gov [22]. ClinicalTrials.gov, launched in 2000, is currently the world's largest clinical study registry and contains information from clinical trials of products that are subject to Food and Drug Administration (FDA) regulation [23]. Other trial registries can be accessed through the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), a portal to 16 trial registries managed by various national regulatory bodies [24]. Additional repositories of trial reports include regulatory body reports (FDA databases), grant databases, and manufacturer web sites [8]. Sources of gray literature that are not clinical trials include conference abstracts, dissertations, book chapters, policy documents, and specialized gray literature databases, among others [19]. Although studies have documented the underuse of registries [2,25–28], factors that may lead to more widespread inclusion of such resources in search strategies have not been studied.
In this study, we tracked the self-reported use of various information resources in SRMAs published between 2005 and 2016 and identified the factors associated with their use. We also examined which information resources were searched together in SRMAs through the years. In addition, we looked into the effect of best practice guidelines on the use of registries. Finally, we identified the information resources associated with low publication bias.
2. Methods
2.1. Study selection
We searched PubMed for systematic reviews and meta-analyses published from 2005 through 2016.
2.1.1. Definition of systematic review and meta-analysis
To answer our research question, we sought studies that systematically searched for evidence (so as to be able to identify the information resources used for the search) and used statistical techniques to synthesize that evidence (assessing publication bias is relevant after quantitative synthesis). Accordingly, to be eligible, studies had to satisfy the Cochrane definition of a systematic review and to have conducted a meta-analysis [29]. Thus, we identified studies as SRMAs if there was an explicitly stated search strategy, a defined study selection process, and a formal statistical evidence synthesis. We selected the Cochrane definition of SRMAs because it is well accepted and aligns with the definitions of other agencies focusing on evidence synthesis [30–32]. We designed both components of our study selection process (searching and screening) to ensure inclusion of studies that adhered to our operational definition.
2.1.2. Searching
We used a PubMed search to identify all systematic reviews with quantitative meta-analyses indexed in each calendar year between 2005 and 2016. We restricted the search to articles involving human subjects and authored by US-based investigators. The complete search term was as follows: "systematic[sb] AND Meta-Analysis[ptyp] AND USA[ad] AND ("Year/01/01"[PDAT] : "Year/12/31"[PDAT]) AND "humans"[MeSH Terms]", where "Year" stands for the particular year among the 12 years studied (2005–2016). To arrive at the full search term, we started with a prevalidated search strategy ("systematic[sb]") developed by Shojania and Bero [33] and recommended by the National Library of Medicine [34] (sensitivity to detect systematic reviews in PubMed 90%, specificity 97%) and added the filter Meta-Analysis[ptyp] to subselect systematic reviews that include meta-analyses. Additional filters for publication year, the United States as the authors' geographical location, and humans as study subjects were added to further narrow the results to studies relevant to our question.
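For readers who wish to reproduce the retrieval step programmatically, the same year-by-year query can be issued against the NCBI E-utilities esearch endpoint. The sketch below (Python, using the requests library) is illustrative only; we ran our searches through the PubMed interface itself.

```python
# Illustrative sketch: issue the year-by-year PubMed query via NCBI E-utilities.
# This is not the procedure we used (searches were run in PubMed directly).
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def srma_query(year):
    # Shojania-Bero systematic-review filter + meta-analysis publication type,
    # restricted to US-affiliated authors, humans, and one calendar year.
    return (
        "systematic[sb] AND Meta-Analysis[ptyp] AND USA[ad] "
        f'AND ("{year}/01/01"[PDAT] : "{year}/12/31"[PDAT]) '
        'AND "humans"[MeSH Terms]'
    )

def pmids_for_year(year, retmax=5000):
    params = {"db": "pubmed", "term": srma_query(year), "retmax": retmax, "retmode": "json"}
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    for year in range(2005, 2017):
        print(year, len(pmids_for_year(year)))
```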
A filter for author affiliation to a US institution was added to ensure comparability of the SRMAs between years. Since 2005, worldwide SRMA production has changed dramatically: in 2005, there were 539 meta-analyses from the United States and 33 from China; by 2014, these numbers were 822 and 3,150, respectively [7]. Indeed, mainly because of studies from exclusively Chinese authors, there was over a 100-fold increase in epidemiological meta-analyses of genetic associations [35]. Because we sought to examine trends in the use of information resources between 2005 and 2016, particularly the changes induced by the SRMA guidelines, we used US affiliation to avoid findings driven by this huge influx of studies [36], whose inclusion would, among other things, notably alter the relative proportion of therapeutic vs. epidemiological studies (as was the case in the study by Page et al. [2]). The US-author-affiliation filter resulted in SRMAs whose characteristics remained fairly constant across the years studied (Table 1).
Table 1.
Characteristics of systematic reviews and meta-analyses (SRMAs) included in the study
| Year | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of SRMAs included (n) | 66 | 59 | 65 | 74 | 71 | 69 | 65 | 64 | 62 | 68 | 75 | 79 |
| General medical journals (% of n) | 15.2 | 6.8 | 15.4 | 10.8 | 7.0 | 10.1 | 12.3 | 4.7 | 8.1 | 5.9 | 5.3 | 8.9 |
| Methods journals (% of n) | 10.6 | 10.2 | 9.2 | 9.5 | 12.7 | 14.5 | 9.2 | 6.3 | 14.5 | 5.9 | 17.3 | 10.1 |
| Psychiatry journals (% of n) | 19.7 | 20.3 | 18.5 | 17.6 | 16.9 | 13.0 | 4.6 | 17.2 | 16.1 | 13.2 | 9.3 | 11.4 |
| Specialized journals (% of n) | 54.6 | 62.7 | 56.9 | 62.2 | 63.4 | 62.3 | 73.9 | 71.9 | 61.3 | 75.0 | 68.0 | 69.6 |
| Mean (SD) impact factors of journals | 4.8 (4.7) | 5.0 (5.0) | 5.6 (7.3) | 5.3 (6.0) | 4.6 (2.9) | 5.1 (5.6) | 4.8 (3.9) | 5.2 (4.0) | 4.8 (3.5) | 3.3 (2.0) | 4.7 (3.9) | 3.9 (2.9) |
| Efficacy end points (% of n)a | 42.4 | 28.8 | 35.4 | 37.8 | 40.9 | 53.6 | 53.9 | 42.2 | 30.7 | 48.5 | 54.7 | 27.9 |
| Safety end points (% of n) | 37.9 | 30.5 | 35.4 | 33.8 | 38.0 | 37.7 | 30.8 | 26.6 | 32.3 | 32.4 | 32.0 | 22.8 |
| Other end points (% of n) | 54.6 | 59.3 | 55.4 | 52.7 | 42.3 | 47.8 | 53.9 | 53.1 | 67.7 | 42.7 | 42.7 | 51.9 |
| Pharmacological agent studied (% of n) | 40.9 | 27.1 | 33.9 | 35.1 | 42.3 | 34.8 | 38.5 | 15.6 | 30.7 | 30.9 | 37.3 | 34.2 |
| Primary studies: interventional (% of n) | 65.2 | 54.2 | 58.5 | 60.8 | 60.6 | 59.4 | 56.9 | 65.6 | 59.7 | 63.2 | 66.7 | 60.8 |
| Primary studies: observational (% of n) | 56.1 | 61.0 | 53.9 | 60.8 | 67.6 | 63.8 | 63.1 | 68.8 | 64.5 | 67.7 | 60.0 | 67.1 |
| Publication bias assessed (% of n)b | 39.4 | 37.3 | 30.8 | 48.7 | 47.9 | 40.6 | 56.9 | 54.7 | 45.2 | 39.7 | 58.7 | 48.1 |
| Publication bias identified (% of n) | 12.1 | 11.8 | 6.1 | 20.3 | 15.5 | 17.4 | 21.5 | 12.5 | 11.3 | 11.7 | 22.6 | 20.2 |
Abbreviation: SD, standard deviation.
None of the variables differed significantly between years except the frequencies of SRMAs with an efficacy end point (a: χ², P = 0.002) and the frequencies of SRMAs in which publication bias was assessed statistically (b: χ², P = 0.030).
2.1.3. Random sampling and screening
From the SRMAs retrieved by the aforementioned search, we randomly selected 100 articles for each year to review manually. Each of the 100 sampled SRMA articles was then manually reviewed by one of three investigators (R.P., K.G., or B.B.) to assess eligibility. To be included, an article (1) had to have a defined and documented search strategy (including the resources used to search for primary studies as well as a description of the study selection procedure) and (2) had to have performed quantitative data synthesis and meta-analysis. These criteria ensured that all eligible studies adhered to our operational definition of SRMAs. Among the randomly selected SRMAs that met the aforementioned criteria, we excluded studies (1) for which we could not find full texts, (2) that did not involve humans, (3) that excluded the US population, (4) that used a single case experiment design, or (5) that were duplicates identified in another year. The study flow diagram is presented in Fig. 1.
Fig. 1.
Study flow diagram. *2005–2016 n's: 550, 694, 720, 778, 810, 963, 1107, 1301, 1198, 1071, 1507, 1169. **2005–2016 n's: 66, 59, 65, 74, 71, 69, 65, 64, 62, 68, 75, 79.
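A minimal sketch of the per-year sampling step is shown below, assuming the per-year PMID lists from the search above; the function name and seed are illustrative and do not reflect the procedure actually used.

```python
# Illustrative sketch: draw 100 PMIDs per year for manual review.
import random

def sample_for_review(pmids_by_year, n=100, seed=2016):
    """pmids_by_year: dict mapping year -> list of PMIDs returned by the search."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    return {year: rng.sample(pmids, min(n, len(pmids)))
            for year, pmids in pmids_by_year.items()}
```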
2.2. Data extraction and coding
Each eligible SRMA article was manually reviewed by one of three investigators (R.P., K.G., or B.B.) to extract the data regarding publication characteristics and searched resources. A study-specific standardized template was created on Microsoft Excel for the data extraction (Appendix 1). To help achieve consistent extraction, all three investigators independently extracted data from a set of 10 SRMAs. For all included SRMAs, after one investigator extracted the data, a second investigator validated the extraction, and disagreements were resolved by mutual discussion.
For each SRMA, we extracted information on the following: the name of the journal in which the SRMA was published, the journal impact factor for the particular year, the nature of the primary studies, the outcomes studied, whether the studies involved a pharmacological agent, whether publication bias was statistically assessed, and, if assessed, whether the authors deemed publication bias to be present. We also extracted the names of all information resources explicitly mentioned in the search strategy of the SRMA.
After data extraction, we noted the nature of the journal in which the SRMA was published [general medical/surgery journals (general medical journals), journals specializing in epidemiology/public health/research methods (methods journals), psychiatry and psychology journals (psychiatry journals), specialized journals on subdisciplines within medicine/surgery (specialized medical journals)]. We formed a separate class for psychiatry journals because of the fairly distinct information resources accessed by this medical discipline. A complete list of all journals classified into each of the four groups is provided in Appendix 2. To determine the design of the primary studies, we looked into the search criteria in each SRMA: when the specific nature of the primary studies searched was not mentioned, we examined the primary studies directly. In cases where the specific nature of the primary studies was not mentioned in the search criteria, and yet only one type of primary study (say, interventional) was included in the SRMA, we classified the SRMA as having interventional studies only. Our logic was that the SRMA authors would have anticipated the nature of their primary studies and searched resources accordingly. An SRMA was adjudged to have studied a pharmacological agent if the study focused on an agent that could be considered a "drug" according to the WHO definition. We determined the nature of outcomes examined in each SRMA: efficacy, safety, and other [diagnostic, prognostic, epidemiological (etiological/descriptive/associations), economic analysis, etc.] outcomes. A study was considered to have studied efficacy if a beneficial effect was studied (encompassing both efficacy and effectiveness as per the AHRQ definition) [37]. A study was considered to have studied safety if any outcome studied satisfied the definition of harms in the PRISMA-Harms guidelines for SRMAs (encompassing various terms such as "adverse drug reaction," "adverse effect," "adverse event," "complication," "harm," "safety," "side effect," and "toxicity") [38]. Studies were deemed to have other outcomes as per the established definitions in the context of SRMAs (eg, diagnostic [39], prognostic [40], epidemiological (etiological/descriptive/association) [41–43], economic analysis [44]). A single SRMA could have multiple types of outcomes.
We manually extracted the names of all databases and other information resources used by the authors of each SRMA to search for primary studies, as declared in the search strategy of the SRMA. After data extraction for all years was complete, we retrospectively classified the resources into four broad categories with a total of 31 subclasses. The classification was done retrospectively to accommodate all information resources used in the 817 SRMAs included over the 12 years. An information resource was assigned its own subclass if at least 1% of the 817 SRMAs used it; otherwise, the resource was included in the subclass "other resources" within each of the four broad categories. The broad categories and subclasses were as follows: (1) Study registries [ClinicalTrials.gov, ICTRP, regulatory databases (eg, FDA, EMEA), manufacturer databases, grant web sites for funded studies (eg, National Institutes of Health, Wellcome Trust), other registries (other international registries, International Standard Randomised Controlled Trial Number, Health Technology Assessment, Health Services Research Projects in Progress, Campbell Collaboration Social, Psychological, Educational and Criminological Trials Register, International Prospective Register of Systematic Reviews, etc.)]; (2) Resources mining general published literature without a special focus (Medline, Excerpta Medica Database [EMBASE], other general published literature databases [Scientific Electronic Library Online, Journal Storage, etc.]); (3) Resources mining specialized published literature with a particular focus (Cochrane library [where CENTRAL focuses on clinical trials retrieved from Medline and EMBASE] [45], PsycINFO, CINAHL, POPLINE, regional-/language-specific databases, review collections [eg, EBMR, ACP journal club], SportDiscus, HealthStar, International Pharmaceutical Abstracts, PILOTS, other specialized literature databases); (4) Gray literature resources other than registries that include unpublished studies (conference proceedings, dissertations, Scopus, Web of Science, search engines [eg, Google Scholar], ProQuest, BioSciences Information Service, Education Resources Information Center, clinical query applications, designated gray literature databases [OpenGrey, System for Information on Grey Literature in Europe, National Technical Information Service], and other gray literature resources [MedScape, manufacturers' package inserts, etc.]). Whether the authors of an SRMA searched each information resource was recorded as a binary variable in the final table used for the analyses. Salient features of each information resource subclass are described in Appendix 3.
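To make the coding scheme concrete, a hypothetical sketch of the resulting analysis table is shown below: one row per SRMA, one binary indicator per information-resource subclass, plus the extracted SRMA characteristics. The extraction itself was done in Microsoft Excel; the column and resource names here are illustrative, not our actual template.

```python
# Illustrative sketch of the final analysis table (not our actual extraction code).
import pandas as pd

RESOURCE_SUBCLASSES = [
    "ClinicalTrials.gov", "ICTRP", "Medline", "EMBASE", "Cochrane",
    "Scopus", "Web of Science", "Conference proceedings",
    # ... one column for each of the 31 subclasses
]

def code_srma(record):
    """Convert one extracted SRMA record (a dict) into a row of binary resource indicators."""
    searched = set(record["resources_searched"])
    row = {name: int(name in searched) for name in RESOURCE_SUBCLASSES}
    row.update(
        year=record["year"],
        journal_type=record["journal_type"],          # general/methods/psychiatry/specialized
        impact_factor=record["impact_factor"],
        safety_outcome=int(record["safety_outcome"]),
        pharmacological=int(record["pharmacological"]),
        publication_bias_found=record.get("publication_bias_found"),  # None if not assessed
    )
    return row

# df = pd.DataFrame(code_srma(r) for r in extracted_records)
```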
2.3. Statistical analysis
We summarized the characteristics of the SRMAs over the years using percentages of the yearly total for categorical parameters and means with standard deviations for numerical parameters. Differences in parameters between years were examined using chi-square statistics for categorical variables and ANOVA for numerical variables. We tracked the use of all 31 information resources over the 12 years using bar diagrams. To identify the factors predicting the use of study registries, and of ClinicalTrials.gov in particular, we used a logistic regression model with SRMA characteristics as independent variables. To identify concurrent use of similar types of information resources, we constructed the network geometry of articles and information resources in each year. Each information resource formed one node in each network, and each node was connected to another by a line if at least one article searched both information resources. The thickness of the connecting lines denotes the number of articles searching the two resources. The distance between any two nodes is their Euclidean distance, which represents the similarity of the two nodes in terms of the neighboring nodes they share, the distance being larger when fewer neighbors are shared. Specifically, the distance between any two nodes is the square root of the sum of squared differences between the strengths of their connections to neighboring nodes (the strength of a connection between a node and its neighbor being the number of SRMAs co-searching them) [46]. To detect the presence of homophily (the tendency of resources to be associated and, in our scenario, searched concomitantly), we used the ANOVA density model [47]. The network density for each year (number of actual ties between nodes/number of all possible ties between all nodes, with theoretical extremes of zero and one) was calculated. We plotted the usage of various information resources during 2005–2012 vs. 2014–2016 to see whether there was a change in the use of registries after December 2012, when the Cochrane guidelines (Methodological Expectations of Cochrane Intervention Reviews [MECIR]) [48] first mandated using registries as an information resource in SRMAs. Assuming a time lag of 1 year between promulgation of MECIR and publication of SRMAs that might have followed the guideline, we treated 2013 as a washout period and 2014 as the year from which we could expect the guideline to show results. Finally, we used logistic regression adjusted for SRMA characteristics (year of publication, journal type, journal impact factor, design of primary studies, nature of outcomes, and whether a pharmacological agent was studied) and all information resources to identify the information resources negatively associated with publication bias.
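As a schematic illustration of the network quantities defined above (co-search edge weights, node distance, and network density), a short sketch is given below; our actual analysis used UCINET, so this code is only a reconstruction of the definitions, with assumed variable names.

```python
# Illustrative reconstruction of the yearly co-search network quantities.
import numpy as np
import pandas as pd

def cosearch_network(df, resource_cols):
    """df: one row per SRMA; resource_cols: binary 0/1 indicator columns."""
    X = df[resource_cols].to_numpy()         # SRMAs x resources
    W = X.T @ X                              # W[i, j] = number of SRMAs searching both i and j
    np.fill_diagonal(W, 0)
    n = len(resource_cols)
    density = (W > 0).sum() / (n * (n - 1))  # actual ties / possible ties (between 0 and 1)
    return W, density

def node_distance(W, i, j):
    """Euclidean distance between resources i and j in terms of the strength of
    their connections (co-search counts) to the remaining resources."""
    others = [k for k in range(W.shape[0]) if k not in (i, j)]
    return float(np.sqrt(((W[i, others] - W[j, others]) ** 2).sum()))
```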
Data were analyzed using STATA version 14 (StataCorp LP, College Station, TX, USA) [49] and UCINET version 6 (Analytic Technologies, Harvard, MA, USA) [50].
3. Results
3.1. Characteristics of SRMAs and information sources used
We retrieved a total of 11,868 PubMed entries using the search terms and randomly selected 100 entries for each year (1,200 entries between 2005 and 2016). A total of 817 SRMA articles, with an average of 68 SRMAs per year (range: 59–79), were included in the final analyses (Fig. 1). Table 1 describes the included SRMAs. Of the journals publishing the SRMAs, about two-thirds (65.2%) were specialized medical journals, while the rest were divided in approximately similar proportions among general medical journals (75/817, 9.2%), methods journals (89/817, 10.9%), and psychiatry journals (120/817, 14.7%). The mean impact factor of these journals was 4.7 (standard deviation: 4.5). The primary research analyzed by the SRMAs included interventional studies in 61.0% (499/817) of cases and observational studies in 62.9% (514/817) of cases (ie, 37.0% of SRMAs [303/817] included only interventional studies, 38.9% [318/817] included only observational studies, and 23.9% [196/817] included both types). Almost one-third of the SRMAs had a safety parameter as an outcome (265/817, 32.4%), and one-third examined a pharmacological agent (275/817, 33.6%). Publication bias was statistically assessed in 45.9% of the SRMAs, of which a third (33.1%) found statistical evidence of such bias. Fig. 2 presents a comparative picture of information resource use in 2005 and 2016. The most common sources used overall were Medline/PubMed, EMBASE, and the Cochrane databases in both years. Use of registries and Scopus increased substantially from 2005 to 2016. Trends in the utilization of registries, general and specialized published literature databases, and gray literature sources are provided in Appendix 4.
Fig. 2.
Percentage of SRMAs using each information resource type in the years 2005 and 2016. Gray bars represent the percentage of SRMAs using a resource in 2005 (n = 66) and black bars, 2016 (n = 79). Within each class (registries/general literature databases/specialized literature databases/gray literature resources), resources are arranged in decreasing order of usage frequency in 2016. Percentages of SRMAs using each information resource are shown in Appendix 4. SRMA, systematic review and meta-analysis.
3.2. Predictors of use of different information resources
We examined factors predictive of searching various types of information resources (Fig. 3). Appendix 5 describes the factors predictive of searching individual databases. Methods journals were strongly associated with the use of registries, gray literature resources, and specialized literature resources. SRMAs used registries more often if a pharmacological agent was being studied. Overall use of registries or gray literature sources did not increase over the years. Use of specialized literature databases was more prominent in psychiatry journals. Perhaps encouragingly, there was no correlation between journal impact factor and greater use of these resources. Interestingly, general medical journals were associated with lower use of general literature databases. On examining the 34 SRMAs that did not use any general literature database, we found that three were in specialized medical journals, 26 in psychiatry journals, and five in general medical journals. The five SRMAs in general medical journals used the following resources: the Cochrane library, Web of Science, regulatory and manufacturer web sites, sponsor web sites, and specialized resources. All five SRMAs were published in or before 2012.
Fig. 3.
Factors associated with use of various information resources. Using a bivariate logistic regression model, we derived adjusted odds ratios (boxes) and their 95% confidence limits for association of various SRMA characteristics and use of (A) registries, (B) general literature databases, (C) specialized literature resources, (D) gray literature resources. SRMA characteristics are color-coded to represent year (dark navy), journal type (blue) with specialized medical journals as reference, impact factor (green), primary study type (turquoise), outcome examined in SRMA (gray), and whether pharmacological agents were examined in SRMA (purple). Factors associated with each individual data source are shown in Appendix 5. *Methods journals always searched general literature databases, causing the odds ratio to be much greater than 20, the x-axis scale limit. SRMA, systematic review and meta-analysis. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
3.3. Networks of information resources
We performed network analysis to ascertain the degree of clustering of the different resources searched by the SRMAs over the years. Fig. 4 shows representative resource networks formed by the SRMAs in the years 2005 and 2016. Each node represents an information resource that has been used by an SRMA. A line joins two nodes if any SRMA has used the two information resources concurrently, while the thickness of the line denotes the number of SRMAs that have searched both resources. The closeness of any two nodes in the network depends on the number of SRMAs that have searched both resources.
Fig. 4.
Network of data sources in 2005 and 2016. Nodes are denoted by cyan circles for registries, purple squares for general or specialized literature resources, and gray triangles for gray literature resources. Each information resource forms one node in each network, and each node was connected to another by a line if at least one article searched both information resources. The thickness of the connecting lines denotes the number of articles searching the two resources. CTG = ClinicalTrials.gov; other_spec_lit = other specialized literature database; other_gen_lit = other general literature database; DesignatedGL = designated gray literature database; RegionalDB = regional-/language-specific database; MAcollections = collections of EBM reviews/ACP journal clubs. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
In both networks, the three published literature databases, Medline, EMBASE, and the Cochrane databases, form the central resources and are closest to each other, denoting the highest degree of co-occurrence in SRMA searches. The network connections increase substantially from 2005 to 2016. However, the clustering of published literature resources (combining general and specialized literature databases, both denoted by purple squares) toward the center, with registries (denoted by cyan circles) and other gray literature resources (denoted by gray triangles) toward the network periphery, indicates a significant amount of co-searching of published literature resources by the same SRMAs.
ANOVA density models constructed to detect homophily (the tendency of resources to be associated and, in our scenario, searched concomitantly) confirm that, while querying multiple databases, authors mostly search the same kinds of resources, which have a high degree of overlap in the information they index. The standardized regression coefficient for co-searching of published resources is the highest among the possible permutations (0.22, P = 0.04). The ranking of information resource type co-searches is as follows: (1) two published literature databases (coefficient: 0.22, P = 0.04); (2) a published literature database and a gray literature database (coefficient: 0.01, P = 0.06); (3) a published literature database and a registry (coefficient: 0.01, P = 0.59); (4) two registries (coefficient: −0.007, P = 0.43); (5) a gray literature database and a registry (coefficient: −0.04, P = 0.43) (reference combination: two gray literature databases).
3.4. Effect of search guidelines on registry usage in SRMAs
In 2012, the Cochrane guideline for SRMA methodology first mandated searching study registries as well as gray literature resources over and above the Cochrane databases. On visual examination of scatter plots of resource use over the 12 years, we did not find any remarkable effect of the guideline on the usage of general literature databases, registries, or gray literature sources (Fig. 5A–D). The use of specialized literature databases (Fig. 5C) and the yearly network densities (number of actual ties between nodes/number of all possible ties between all nodes, with theoretical extremes of zero and one; in our context, denoting the diversity of the information resources used in SRMAs) increased from 2014 onward (Fig. 5E). We performed an interrupted time series analysis to see whether there was any statistically significant increase in resource usage after implementation of the guideline (2014–2016) compared with before its implementation (2005–2013; Appendix 6). Although we did not have enough post-implementation time points to be confident about the interrupted time series analysis, the trends from the analysis aligned well with the inferences from visual examination.
Fig. 5.
Descriptive trend analysis showing changes in data resources before and after 2013. Analysis for (A) registries, (B) general literature databases, (C) specialized literature resources, (D) gray literature resources, and (E) network densities. Dots represent the actual percentage of SRMAs using the resource each year, whereas the connecting solid lines represent the linear predicted fit for two time segments: 2005–2012 and 2014–2016. SRMA, systematic review and meta-analysis.
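The interrupted time series comparison can be framed as a segmented regression with a level and slope change after the washout year. The sketch below (statsmodels, with assumed variable names) is illustrative only; the published analysis was run in Stata, and, as noted above, the small number of post-guideline time points limits inference.

```python
# Illustrative segmented-regression sketch for the interrupted time series analysis.
import pandas as pd
import statsmodels.formula.api as smf

def its_fit(yearly):
    """yearly: DataFrame with columns 'year' and 'pct_using'
    (percentage of SRMAs using a given resource type in that year)."""
    d = yearly[yearly["year"] != 2013].copy()    # 2013 treated as a washout year
    d["time"] = d["year"] - d["year"].min()      # time since start of the series
    d["post"] = (d["year"] >= 2014).astype(int)  # level change after the guideline
    d["time_post"] = d["time"] * d["post"]       # slope change after the guideline
    return smf.ols("pct_using ~ time + post + time_post", data=d).fit()
```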
3.5. Relative importance of information resources in avoiding publication bias
Three hundred seventy-five of the 817 SRMAs statistically examined publication bias; 363 of these 375 SRMAs searched Medline. To understand the additional benefit of resources when searched alongside Medline, we identified the resources that were negatively associated with publication bias using a logistic regression model (n = 363). Table 2 lists the resources negatively associated with publication bias (adjusted odds ratio <1) in all SRMAs, significant at the P < 0.20 level, stratified by outcome type. Appendix 7 provides a list of all resources and their associated adjusted odds ratios. Of the information resources, Scopus was significantly associated with low publication bias, while ClinicalTrials.gov exhibited a strong trend toward low publication bias. On repeating the procedure for SRMAs examining only safety outcomes, low publication bias was associated with ClinicalTrials.gov.
Table 2.
Information sources that, when searched alongside Medline, are negatively associated with publication bias
| Resource | Adjusted odds ratio (95% confidence limits) | P value |
|---|---|---|
| For all end points (n = 363) | ||
| Scopus | 0.32 (0.12–0.88) | 0.03 |
| ClinicalTrials.gov | 0.33 (0.09–1.20) | 0.09 |
| Conference proceedings | 0.56 (0.27–1.15) | 0.12 |
| Other specialized literature database | 0.37 (0.10–1.40) | 0.14 |
| For safety end points (n = 120) | ||
| ClinicalTrials.gov | 0.02 (0.00–0.56) | 0.02 |
| Scopus | 0.17 (0.02–1.72) | 0.13 |
| For other end points (n = 188) | ||
| Conference proceedings | 0.14 (0.01–1.28) | 0.08 |
| Scopus | 0.25 (0.05–1.25) | 0.09 |
| SportDiscus | 0.10 (0.01–1.52) | 0.10 |
Abbreviation: SRMA, systematic review and meta-analysis.
Using a bivariate logistic regression model, adjusted for SRMA characteristics and the use of other resources, we derived odds ratios for the association between information resources and the finding of publication bias in the SRMAs that already searched Medline. We list the information resources negatively associated with publication bias, significant at the P < 0.2 level, for SRMAs examining any end point (n = 363), SRMAs examining safety end points (n = 120), and SRMAs examining other end points (n = 188). No resource was significantly negatively associated with publication bias at P < 0.2 for efficacy end points only (n = 148). Odds ratios for all resources are shown in Supplementary Material 6.
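As an illustration of the model behind Table 2, the sketch below fits an adjusted logistic regression on Medline-searching SRMAs that assessed publication bias and converts coefficients to adjusted odds ratios with 95% confidence limits. Column names are hypothetical; the published model was fit in Stata.

```python
# Illustrative sketch of the adjusted logistic regression for publication bias.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def publication_bias_model(df):
    """df: one row per SRMA with binary resource indicators and characteristics
    (column names assumed; publication_bias_found is 1/0, NaN if not assessed)."""
    sub = df[(df["Medline"] == 1) & df["publication_bias_found"].notna()].copy()
    formula = (
        "publication_bias_found ~ Scopus + ClinicalTrials_gov + Conference_proceedings"
        " + year + C(journal_type) + impact_factor + C(primary_study_type)"
        " + safety_outcome + pharmacological"
    )
    fit = smf.logit(formula, data=sub).fit(disp=False)
    odds_ratios = np.exp(fit.params)      # adjusted OR < 1 => negative association with bias
    conf_limits = np.exp(fit.conf_int())  # 95% confidence limits on the OR scale
    return odds_ratios, conf_limits
```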
4. Discussion
Searching a large and diverse body of studies is key to valid evidence generation in SRMAs. However, little evidence-based guidance exists on which resources to search, resulting in haphazard searches of cherry-picked resources. We conducted a systematic review of the information resources searched by SRMAs published from 2005 to 2016. The best predictor of the use of alternative resources was the type of journal publishing the SRMA. Although search guidelines have not directly contributed to an increase in the use of registries or gray literature resources, the diversity of searches has increased since the promulgation of the 2012 guidelines. Overall, however, there is still considerable sequestration in the use of published literature databases. Among the resources, when used in addition to Medline, searching Scopus was significantly associated with low publication bias, while ClinicalTrials.gov was associated with low publication bias in studies with safety outcomes.
Commercial publishers are likely to be interested in publishing studies that have positive results in alignment with the hypothesis and are thus "newsworthy." Similarly, industry-based or academic researchers are likely to be interested in publishing positive results to more efficiently use their time in talking about successful products or research, respectively. In fact, not only are negative studies published on average 2 years later than their positive counterparts, but they are also less likely to be published at all [51]. Although often passed over for publication, negative studies are just as important as published ones when the aim is to summarize evidence on a topic of interest. Importantly, including results from unpublished studies has been shown to alter the results of meta-analyses [12,13]. Thus, searching unpublished research is important for valid meta-analyses. Study registries like ClinicalTrials.gov store data from both published and unpublished research studies. For example, Riveros et al. found that up to 50% of trials posted in ClinicalTrials.gov remained unpublished in peer-reviewed journals [52]. Registries should thus constitute an important resource for unpublished literature. Accordingly, the 2012 Cochrane guidelines mandate inclusion of study registries and gray literature sources in SRMA search strategies. Although many studies have shown that registries are underutilized in SRMAs, factors associated with greater use have not been studied. We found that, adjusted for other covariates, methods journals were the most likely to use trial registries. Methods journals were also associated with increased use of other resources including conference proceedings, dissertations, and all gray literature resources combined. This implies that the policies implemented by the editors of such journals significantly affect SRMA search strategies. At the same time, SRMAs in both general and specialized medical/surgical journals rarely searched trial registries and other gray literature resources. This is concerning because such journals form the majority of journals publishing SRMAs and are the likeliest to be read by clinicians who might base their practice on the results of SRMAs.
The purpose of searching multiple resources for SRMAs is to avoid missing relevant research that may not have been indexed in one particular database. Little is gained by searching multiple databases if there is significant overlap in the content they cover. Efficient search strategies should aim to retrieve the maximum number of studies using only the most essential information resources: not only because several data resources require paid subscriptions, but also because searching each resource requires substantial time, skilled personnel, and eventual elimination of duplicate studies. Therefore, searching data resources that are diverse in their study coverage should be the aim of the review strategy. However, we found evidence of sequestration of similar resources in the SRMA searches. For example, the two resources most commonly co-occurring in searches were Medline and EMBASE. Medline, EMBASE, and the Cochrane libraries form the most used triad. Similar sequestration was noted among all databases focusing on published literature. Conversely, co-searching of diverse resources, such as databases mining published literature together with registries, or databases mining published literature together with gray literature sources, was relatively rare. Such practice may provide a false sense of security about the comprehensiveness of the search because there is considerable overlap between published literature resources like Medline and EMBASE (eg, according to the Cochrane Handbook, about 65% of the 5,200 journals indexed in Medline are also indexed in EMBASE and 62% of the 4,800 journals indexed in EMBASE are also indexed in Medline, and the two "have been found to return similar numbers of relevant references") [53], and because studies suggest that searching EMBASE in addition to Medline provides little incremental benefit [21]. Similarly, the Cochrane CENTRAL database mainly mines data from Medline and EMBASE (only 0.02% of studies deposited in CENTRAL were from neither Medline nor EMBASE) [54]. Such sequestration of search resources seems reflective of following prevalent practice rather than well-thought-out strategies. Researchers may consider searching diverse information resources depending on the focus of their study, as well as the proven benefit of a particular resource in preventing publication bias in a specific study area.
Although several guidelines exist directing the use of information resources in conducting SRMAs, advice regarding the use of resources storing unpublished data is mostly ambiguous [20]. In their review examining the clarity of SRMA guidelines, Boden et al. found that only three provide procedural guidance on using databases such as registries to identify unpublished studies: the Cochrane MECIR guideline (2012) [48], the Centre for Reviews and Dissemination (CRD) guideline (2009) [55], and the Preferred Reporting Items for Systematic Reviews and Meta-analyses guideline (2015) [30]. Of the three, although CRD was the earliest to suggest use of registries to find unpublished studies, the Cochrane MECIR guideline published in December 2012 was the first to "mandate" searching ClinicalTrials.gov, ICTRP, and other gray literature resources. Hence, to examine whether the guidelines had influenced the practice of diverse searches in SRMAs, we examined differences in database usage before (2005–2012) and after (2014–2016) the promulgation of the Cochrane MECIR guideline in December 2012 (with 2013 as a "washout" year). We assumed that the MECIR guidelines could have substantial influence on non-Cochrane meta-analyses, since the Cochrane organization is one of the most respected and cited authorities on the science of meta-analyses. Although the use of diverse resources increased, as evidenced by progressively higher network densities from 2014 onward, there was no significant effect of the guidelines on the absolute rate of registry searching from 2014 onward. This suggests that mere guideline recommendations may be insufficient to ensure widespread use of registries as information resources in SRMAs beyond methods journals. Taken together, by showing continued nondiverse searching in SRMAs and the relative ineffectiveness of guidelines in producing widespread improvement in practice, our study justifies the prospective registration of SRMAs in designated registries such as the International Prospective Register of Systematic Reviews, the Joanna Briggs Institute, and so on. These registries, during the registration procedure, should encourage conformity with evidence-based information search practices, which, in turn, will improve evidence synthesis. (Only 6% of the 2016 SRMAs in our study were registered in a systematic review registry.) All types of journals, mirroring the 2004 ICMJE rule regarding clinical trial registration [56], should insist on prospective registration of SRMAs before considering publication. Such practices will not only increase transparency but may also help to stem the tide of deteriorating SRMA quality.
Finally, of the resources used in the SRMAs, searching Scopus in addition to Medline was associated with statistical findings of no publication bias. Using ClinicalTrials.gov in addition to Medline was associated with statistical findings of no publication bias in meta-analyses with safety outcomes. The benefit of Scopus may be explained by the fact that it has 100% coverage of multiple resources including Medline, EMBASE, Compendex, World Textile Index, Fluidex, Geobase, and Biobase. In addition, it has approximately 20% more citation coverage than Web of Science and covers patents and Web literature as sources of gray literature [57]. ClinicalTrials.gov usage showed a trend toward low publication bias in general, and this link was stronger in studies with safety outcomes. The benefit of ClinicalTrials.gov can be explained by its coverage of unpublished trial reports, as evidenced in previous research [52]. The fact that ClinicalTrials.gov provides wider coverage of safety data than published articles may explain its benefit in SRMAs with safety outcomes [58,59]. Although the benefit of searching ClinicalTrials.gov is diminished by the fact that outcomes are not always reported there [49], our study strengthens the case for making trial data available in ClinicalTrials.gov, regardless of whether they will be published in the future.
Our study has several limitations. First, we link the use of information resources in search strategies to publication bias. This is an indirect linkage, because merely searching a certain resource may not lead to finding additional studies in individual cases. Moreover, we used statistically measured publication bias as the end point in this analysis, yet many studies do not report statistical tests for publication bias, in part because determination of publication bias by most statistical tests is imperfect. Nevertheless, owing to our relatively large sample size, we were able to examine the association between the information resources searched and the finding of publication bias. In addition, to retrieve SRMAs by authors with US affiliations, we used, following previous studies, "USA[ad]" as a PubMed filter. Note that the PubMed filter "USA[ad]" only ensures that a study had a US-affiliated corresponding author. However, having a US-affiliated corresponding author seems likely to provide access to the full range of resources available to any group of US-affiliated authors. The additional requirement that each SRMA include at least one US population also helps to maintain the focus on US-anchored studies. Importantly, although aware that filtering on US affiliations limits the generalizability of our study, we prioritized internal validity, to maintain comparability of SRMAs in the longitudinal trend analysis, over generalizability. Finally, we found that including nonjournal resources (eg, trial registries, gray literature sources) is associated with a lower chance of finding "publication bias." This is to be expected, since dramatic, large effects are more likely to be published, while less impressive (or even negative) findings can often only be found in the gray literature.
In conclusion, our study provides evidence that diverse databases such as registries and other gray literature resources are searched mostly by SRMAs in methods journals, and that recommendations in best practice guidelines may not be sufficient to bring about widespread use of such resources. We additionally found that even SRMAs that search multiple sources tend to search similar resources rather than databases with diverse coverage. Our study supports using Scopus in addition to a Medline search to reduce publication bias. Using ClinicalTrials.gov in addition to Medline is associated with low publication bias in studies with safety outcomes.
What is new?
Key findings
In evidence synthesis for systematic reviews and meta-analyses (SRMAs), only journals focusing on research methods, and not general medical journals, are associated with searching diverse information resources.
Promulgation of search guidelines has not substantially increased the searching of diverse resources.
When searched alongside Medline, Scopus (in all SRMAs) and ClinicalTrials.gov (in SRMAs with safety outcomes) were negatively associated with publication bias.
What this adds to what was known?
While searching multiple resources is widespread, this work finds that most systematic reviews search databases with overlapping coverage, a practice that has not changed in the last decade.
This work also provides evidence to justify the use of certain resources in SRMAs by examining the association of various resources with the occurrence of publication bias.
What is the implication and what should change now?
This work shows that mere guidelines may not be sufficient to increase searching of diverse, non-overlapping resources in SRMAs. All journals publishing SRMAs, and not just the ones focusing on research methods, should encourage searching of resources that are likely to find unpublished research.
Scopus (in all SRMAs) and ClinicalTrials.gov (in SRMAs with safety outcomes) should be searched during evidence synthesis.
Acknowledgments
Funding: Research reported in this publication was supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number HL125089. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Conflicts of interest: None.
Supplementary data
Supplementary data related to this article can be found at https://doi.org/10.1016/j.jclinepi.2018.05.024.
References
- [1] Page D. Systematic literature searching and the bibliographic database haystack. Electron J Bus Res Methods 2008;6(2):171–80.
- [2] Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med 2016;13(5):e1002028.
- [3] Dunn AG, Arachi D, Hudgins J, Tsafnat G, Coiera E, Bourgeois FT. Financial conflicts of interest and conclusions about neuraminidase inhibitors for influenza: an analysis of systematic reviews. Ann Intern Med 2014;161:513–8.
- [4] Bes-Rastrollo M, Schulze MB, Ruiz-Canela M, Martinez-Gonzalez MA. Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: a systematic review of systematic reviews. PLoS Med 2013;10(12):e1001578.
- [5] Ebrahim S, Bance S, Athale A, Malachowski C, Ioannidis JPA. Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. J Clin Epidemiol 2016;70:155–63.
- [6] Honório HM. Should all systematic reviews be at the top of the pyramid? Arch Oral Biol 2017;73:321–2.
- [7] Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016;94(3):485–514.
- [8] Citrome L. Beyond PubMed: searching the "grey literature" for clinical trial results. Innov Clin Neurosci 2014;11(7–8):42–6.
- [9] Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol 2016;16:127.
- [10] Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev 2007;MR000010.
- [11] Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337(8746):867–72.
- [12] Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ 2012;344:d7202.
- [13] Baudard M, Yavchitz A, Ravaud P, Perrodeau E, Boutron I. Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses. BMJ 2017;356:j448.
- [14] Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252–60.
- [15] Conn VS, Valentine JC, Cooper HM, Rantz MJ. Grey literature in meta-analyses. Nurs Res 2003;52(4):256–61.
- [16] Kicinski M, Springate DA, Kontopantelis E. Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews. Stat Med 2015;34:2781–93.
- [17] Mahood Q, Van Eerd D, Irvin E. Searching for grey literature for systematic reviews: challenges and benefits. Res Synth Methods 2014;5(3):221–34.
- [18] Saleh AA, Ratajeski MA, Bertolet M. Grey literature searching for health sciences systematic reviews: a prospective study of time spent and resources utilized. Evid Based Libr Inf Pract 2014;9(3):28–50.
- [19] Paez A. Gray literature: an important resource in systematic reviews. J Evid Based Med 2017;10(3):233–40.
- [20] Boden C, Bidonde J, Busch A. Gaps exist in the current guidance on the use of randomized controlled trial study protocols in systematic reviews. J Clin Epidemiol 2017;85:59–69.
- [21] Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol 2015;68:1076–84.
- [22] Zarin DA, Tse T, Williams RJ, Carr S. Trial reporting in ClinicalTrials.gov - the final rule. N Engl J Med 2016;375:1998–2004.
- [23] Home - ClinicalTrials.gov [Internet]. Available at: https://clinicaltrials.gov/. Accessed July 10, 2017.
- [24] WHO | Welcome to the WHO ICTRP [Internet]. Available at: http://www.who.int/ictrp/en/. Accessed July 10, 2017.
- [25] Jones CW, Keil LG, Weaver MA, Platts-Mills TF. Clinical trials registries are under-utilized in the conduct of systematic reviews: a cross-sectional analysis. Syst Rev 2014;3:126.
- [26] Sinnett PM, Carr B, Cook G, Mucklerath H, Varney L, Weiher M, et al. Systematic reviewers in clinical neurology do not routinely search clinical trials registries. PLoS One 2015;10:e0134596.
- [27] Bibens ME, Chong AB, Vassar M. Utilization of clinical trials registries in obstetrics and gynecology systematic reviews. Obstet Gynecol 2016;127(2):248–53.
- [28] Yerokhin VV, Carr BK, Sneed G, Vassar M. Clinical trials registries are underused in the pregnancy and childbirth literature: a systematic review of the top 20 journals. BMC Res Notes 2016;9(1):475.
- [29] 1.2.2 What is a systematic review? | Systematic reviews of interventions | Cochrane Community [Internet]. Available at: www.handbooksri/chapter-1-introduction/11-cochrane/12-systematic-reviews/122-what-systematic-review. Accessed March 27, 2018.
- [30] Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015;349:g7647.
- [31] Agency for Healthcare Research and Quality [Internet]. Available at: https://www.ahrq.gov.
- [32] Centre for Reviews and Dissemination [Internet]. Available at: https://www.york.ac.uk/crd/.
- [33] Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Eff Clin Pract 2001;4(4):157–62.
- [34] Search strategy used to create the systematic reviews subset on PubMed [Internet]. Available at: https://www.nlm.nih.gov/bsd/pubmed_subsets/sysreviews_strategy.html. Accessed March 28, 2018.
- [35] Ioannidis JPA, Chang CQ, Lam TK, Schully SD, Khoury MJ. The geometric increase in meta-analyses from China in the genomic era. PLoS One 2013;8:e65602.
- [36] Yao L, Sun R, Chen Y-L, Wang Q, Wei D, Wang X, et al. The quality of evidence in Chinese meta-analyses needs to be improved. J Clin Epidemiol 2016;74:73–9.
- [37] Gartlehner G, Hansen RA, Nissman D, Lohr KN, Carey TS. Criteria for distinguishing effectiveness from efficacy trials in systematic reviews [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2006. Accessed April 2, 2018.
- [38] Zorzela L, Loke YK, Ioannidis JP, Golder S, Santaguida P, Altman DG, et al.; PRISMA Harms Group. PRISMA harms checklist: improving harms reporting in systematic reviews. BMJ 2016;352:i157.
- [39] McInnes MDF, Moher D, Thombs BD, McGrath TA, Bossuyt PM; the PRISMA-DTA Group; Clifford T, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018;319:388–96.
- [40] Matino D, Chai-Adisaksopha C, Iorio A. Systematic reviews of prognosis studies: a critical appraisal of five core clinical journals. Diagn Progn Res 2017;1(1):9.
- [41] Sheehan MC, Lam J. Use of systematic review and meta-analysis in environmental health epidemiology: a systematic review and comparison with guidelines. Curr Environ Health Rep 2015;2(3):272–83.
- [42] Participate | HuGENet | Genomics | CDC [Internet]. Available at: https://www.cdc.gov/genomics/hugenet/participate.htm. Accessed April 2, 2018.
- [43] Moola S, Munn Z, Sears K, Sfetcu R, Currie M, Lisy K, et al. Conducting systematic reviews of association (etiology): the Joanna Briggs Institute's approach. Int J Evid Based Healthc 2015;13(3):163–9.
- [44] Gomersall JS, Jadotte YT, Xue Y, Lockwood S, Riddle D, Preda A. Conducting systematic reviews of economic evaluations. Int J Evid Based Healthc 2015;13(3):170–8.
- [45] What is in The Cochrane Central Register of Controlled Trials (CENTRAL) from MEDLINE? [Internet]. Available at: http://handbook-5-1.cochrane.org/chapter_6/6_3_2_1_what_is_in_the_cochrane_central_register_of_controlled.htm.
- [46] Wasserman S, Faust K. Social network analysis: methods and applications. Cambridge: Cambridge University Press; 1994. ISBN 978-0-521-38707-1.
- [47] Hanneman R, Riddle M. Introduction to social network methods [Internet]. Available at: http://faculty.ucr.edu/~hanneman/nettext/C18_Statistics.html. Accessed April 5, 2018.
- [48] Chandler J, Churchill R, Higgins J, Lasserson T, Tovey D. Methodological standards for the conduct of new Cochrane intervention reviews, version 2.2. The Cochrane Collaboration; 2011. Available at: http://www.editorial-unit.cochrane.org/sites/editorial-unit.cochrane.org/files/uploads/MECIR_conduct_standards%202.2.pdf. Accessed October 21, 2017.
- [49] Stata 14 | Stata [Internet]. Available at: https://www.stata.com/stata14/. Accessed October 10, 2017.
- [50] Borgatti SP, Everett MG, Freeman LC. UCINET for Windows: software package for social network analysis [Internet]. Harvard, MA: Analytic Technologies; 2002. Available at: https://sites.google.com/site/ucinetsoftware/home. Accessed October 10, 2017.
- [51] Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev 2009;MR000006.
- [52] Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med 2013;10(12):e1001566.
- [53] MEDLINE and EMBASE [Internet]. Available at: http://handbook-5-1.cochrane.org/chapter_6/6_2_1_3_medline_and_embase.htm. Accessed April 5, 2018.
- [54] Dickersin K, Manheimer E, Wieland S, Robinson KA, Lefebvre C, McDonald S. Development of the Cochrane Collaboration's CENTRAL register of controlled clinical trials. Eval Health Prof 2002;25(1):38–64.
- [55] Systematic reviews: CRD's guidance for undertaking systematic reviews in health care [Internet]. York: Centre for Reviews and Dissemination, University of York; 2009. Available at: https://www.york.ac.uk/inst/crd/index_guidance.htm. ISBN 1-900640-47-3.
- [56] De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al.; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med 2004;351:1250–1.
- [57] Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J 2008;22(2):338–42.
- [58] Tang E, Ravaud P, Riveros C, Perrodeau E, Dechartres A. Comparison of serious adverse events posted at ClinicalTrials.gov and published in corresponding journal articles. BMC Med 2015;13:189.
- [59] Golder S, Loke YK, Wright K, Norman G. Reporting of adverse events in published and unpublished studies of health care interventions: a systematic review. PLoS Med 2016;13(9):e1002127.