Objectives
This is a protocol for a Cochrane Review (methodology). The objectives are as follows:
To review systematically empirical studies that report the development, evaluation, or comparison of search filters to retrieve reports of systematic reviews in MEDLINE and Embase.
Background
Bibliographic databases, such as MEDLINE and Embase, provide access to an international body of scientific literature in health and medical sciences. They provide bibliographic citation information and, frequently, abstracts or links to full‐text publications. These databases also provide controlled vocabulary (index terms) to make it easier to index, catalogue, and search biomedical and health‐related information and documents (Dhammi 2014; Leydesdorff 2016; Lipscomb 2000).
Systematic reviews of the literature are an important source of evidence for clinicians, researchers, consumers, and policy makers. They address a specific health‐related question, and use explicit methods to identify, appraise and synthesize research‐based evidence and present it in an accessible format. They provide more reliable findings from which conclusions can be drawn and decisions made (Chandler 2019). Systematic reviews can also be a useful starting point for researchers in the design of new investigations, bringing together knowledge from the existing body of evidence and identifying gaps in it (Ioannidis 2016). There are two main guidelines provided by Cochrane for the development and reporting of systematic reviews: the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2019b) and the Methodological Expectations of Cochrane Intervention Reviews (known as the MECIR standards) (Higgins 2019a). The Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) statement (Moher 2009) provides guidance on how to report systematic reviews; there is evidence that adherence to these guidelines has improved over time (Page 2016), but there is still scope for improvement.
Systematic reviews are also widely used to develop clinical practice guidelines (defined as “statements that include recommendations intended to optimize patient care that are informed by systematic reviews of evidence and an assessment of the benefits and harms of alternative care options” (IOM 2011)), overviews (defined as “reviews designed to compile evidence from multiple systematic reviews into one accessible and usable document” (Becker 2011)), and other forms of evidence synthesis such as evidence mappings (Bragge 2011). In consequence, the appropriate and prompt identification of systematic reviews is necessary for many important purposes.
Search filters were originally defined by Wilczynski 1995 as a list of terms that can improve the detection of studies of high quality for clinical practice. The Cochrane Handbook defines search filters as "search strategies that are designed to retrieve specific types of records, such as those of a particular methodological design" (Lefebvre 2019). Alternative terms that have been used include clinical queries, hedges, optimal search filters, optimal search strategies, quality filters, search filters, or search strategies (Jenkins 2004). Whereas the original definition by Wilczynski and colleagues included study quality, methodological search filters are not necessarily designed to retrieve studies by their quality. Some search filters have been assessed to determine how effective they are at identifying relevant articles while avoiding the retrieval of irrelevant articles and, for the purpose of this review, we will restrict the definition of "search filters" to search strategies with a formal test of diagnostic performance. We will refer to other methods for the retrieval of specific types of records simply as "search strategies". As the amount of research evidence continues to increase rapidly in some areas, and indexing is not consistent for all study designs, the use of search filters has been advocated to assist the searching process: they reduce the total number of records found and increase the likelihood that those records will be of interest. For that reason, the performance of a search filter is usually calculated according to its capacity to retrieve as many relevant citations as possible whilst also omitting irrelevant results (Wilczynski 2005), the aim being to reduce the number of irrelevant citations that may have to be screened to find a relevant article (Bachmann 2002).
Methodological search filters are used to help end‐users search the literature effectively (Jenkins 2004). Filters have been developed with different levels of sensitivity and specificity according to the requirements of the users (for instance, those with high sensitivity, high specificity, or a balance between sensitivity and specificity) (Glanville 2000; Jenkins 2004). Brettle 1998 proposes that a successful strategy is one that retrieves a manageable number of references, providing the user with a balance of sensitivity and precision.
Methodological search filters have been developed for various study designs and have been found to be particularly useful for intervention studies. Within the Cochrane Handbook, for example, a highly sensitive search strategy is proposed for identifying reports of randomized trials (Lefebvre 2019); and there are Cochrane Reviews of the evidence on filters for retrieving studies of diagnostic test accuracy (Beynon 2013) and observational studies (Li 2019). However, there is no guidance for finding systematic reviews (Becker 2011).
Description of the methods being investigated
Currently, MEDLINE can be searched via PubMed (www.pubmed.gov), which provides two related publication type descriptors for the retrieval of systematic reviews: “meta‐analysis” (introduced in 1993), which might not be useful for those systematic reviews that do not include a meta‐analysis (Boynton 1998; Hunt 1997; Lee 2012; Montori 2005; Shojania 2001); and “review” (introduced in 1966), which may not differentiate systematic reviews from narrative reviews (Hunt 1997; Lee 2012; Montori 2005; Shojania 2001). More recently, PubMed incorporated a filter to retrieve “systematic reviews” through the system interface (the systematic review subset), based on a validation process (Shojania 2001). It is intended to retrieve citations identified as systematic reviews, meta‐analyses, reviews of clinical trials, evidence‐based medicine, consensus development conferences, guidelines, and citations to articles from journals specializing in reviews of value to clinicians. This filter has been updated periodically (most recently in December 2018); however, it is pragmatic, and has not undergone testing for sensitivity, selectivity, precision, or accuracy in a formal validation process (Bradley 2010). Recently, the National Library of Medicine added new terminology to the Medical Subject Headings: "Systematic review as topic" and "Systematic review"[publication type], defined as "A review of primary literature in health and health policy that attempts to identify, appraise, and synthesize all the empirical evidence that meets specified eligibility criteria to answer a given research question ... aimed at minimizing bias in order to produce more reliable findings regarding the effects of interventions for prevention, treatment, and rehabilitation that can be used to inform decision making." (NLM 2019a) These recent developments might help with identifying and cataloguing systematic reviews and studies related to systematic reviews (e.g. overviews); however, their value depends on the quality of the indexing (NLM 2019b; NLM 2019c).
Elsevier indexes Embase citations with the check tag “systematic review” for studies that systematically summarize all the available evidence (Embase Indexing Guide 2020). Wilczynski 2007 developed a filter for the Embase database, in the syntax of the Ovid online platform, based on an index of medical terms and text words from clinical studies, as well as recommendations from clinicians and biomedical librarians. Additionally, the Scottish Intercollegiate Guidelines Network (SIGN) developed a filter to retrieve systematic reviews from the Embase database (SIGN). This filter is an adaptation of a filter from the Health Information Research Unit (HIRU) at McMaster University; one of its main characteristics is a greater emphasis on specificity over sensitivity. Other filters have been developed for finding systematic reviews in MEDLINE via Ovid, such as that of Montori and colleagues, who developed their filter by assessing index terms and text words, and through discussions with clinicians and biomedical librarians (Montori 2005).
Why it is important to do this review
Systematic reviews provide core material for guidelines, overviews, health technology assessments and other forms of evidence synthesis, as well as being an invaluable tool for decision making (Sprakel 2019). Authors of these documents face a choice of different methods to retrieve systematic reviews for their research question or clinical scenario and, given the variety and number of available search filters for systematic reviews, a review of these filters is needed to provide up‐to‐date empirical evidence about their retrieval properties. We have restricted the scope of our review to MEDLINE and Embase, since these are widely used bibliographic databases that are often considered mandatory sources for conducting evidence syntheses (Bramer 2017). The findings of our review will aid those who use systematic methods for information retrieval (e.g. researchers conducting overviews or evidence maps) and wish to use validated search filters with adequate sensitivity and specificity, whereas other stakeholders (e.g. clinicians and consumers) might use search filters that are built into search interfaces, such as the "systematic review" filter in PubMed/MEDLINE.
Objectives
To review systematically empirical studies that report the development, evaluation, or comparison of search filters to retrieve reports of systematic reviews in MEDLINE and Embase.
Methods
Criteria for considering studies for this review
Types of studies
Studies will be included if one of their main objectives is the development, evaluation, or comparison of a search filter that can be used to identify systematic reviews in MEDLINE, Embase, or both. A development study is one in which a filter was generated and tested for its ability to identify relevant articles while avoiding the detection of irrelevant articles. An evaluation study is one in which these properties of a developed filter are tested in a new reference set. A comparison study is one in which different search filters are tested in a reference set to compare their properties.
Types of data
We will collect the following information from the included studies.
Methodological filters: fully detailed search strategies
Dates the searches were conducted
Years covered by the searches
Electronic bibliographic database (MEDLINE or Embase) and interface used (e.g. Ovid or PubMed)
Healthcare topic
Characteristics of the gold standard used to test the filter
Outcome measures (e.g. sensitivity, specificity, or precision)
Types of methods
Search strategies for identifying reports of systematic reviews in MEDLINE and Embase.
Types of outcome measures
We will include any of the following outcome measures (see Table 1, below, for more information).
Sensitivity: proportion of the gold‐standard systematic reviews that are detected in searches using the methodological filter
Specificity: proportion of the gold‐standard records that are not systematic reviews and are not retrieved in searches using the methodological filter
Precision: proportion of the records retrieved by the methodological filter that are systematic reviews
Accuracy: proportion of records that are correctly classified by the methodological filter
Number needed to read (NNR): 1 / precision
Number of unique systematic reviews retrieved by each search strategy ("a" in Table 1)
Quality assessment of the systematic reviews retrieved and missed by the search strategy ("a" + "c" in Table 1, analyzed by the developers of the filter)
Table 1. Definition of outcome measures of this review
|  | Gold standard: systematic review | Gold standard: not systematic review |
| Search with methodological filter: detected by the filter | a | b |
| Search with methodological filter: not detected by the filter | c | d |

Sensitivity = a / (a + c)
Specificity = d / (b + d)
Precision = a / (a + b)
Accuracy = (a + d) / (a + b + c + d)
Number needed to read = 1 / precision = (a + b) / a
Systematic reviews in the gold standard = a + c
Non‐systematic reviews in the gold standard = b + d
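The outcome measures defined in Table 1 follow mechanically from the four cells of the 2 x 2 table. As an illustration only (not part of the protocol's methods), a minimal sketch of the calculations, with cell names matching the table (a = relevant records retrieved, b = irrelevant records retrieved, c = relevant records missed, d = irrelevant records correctly excluded):

```python
def filter_performance(a: int, b: int, c: int, d: int) -> dict:
    """Outcome measures from the 2 x 2 table in Table 1."""
    return {
        "sensitivity": a / (a + c),              # gold-standard SRs retrieved
        "specificity": d / (b + d),              # non-SRs correctly not retrieved
        "precision": a / (a + b),                # retrieved records that are SRs
        "accuracy": (a + d) / (a + b + c + d),   # correctly classified records
        "number_needed_to_read": (a + b) / a,    # NNR = 1 / precision
    }

# Hypothetical example: a filter retrieves 90 of 100 gold-standard
# systematic reviews plus 10 irrelevant records, out of 1000 screened.
measures = filter_performance(a=90, b=10, c=10, d=890)
print(measures["sensitivity"])  # 0.9
print(measures["accuracy"])     # 0.98
```

The example figures are invented for illustration; a sensitivity of 0.9 here would meet the protocol's pre-specified threshold of more than 90% only at the boundary.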
We have defined a priori the levels of sensitivity (more than 90%) and precision (more than 10%) in external validation studies that would be an acceptable threshold for use when searching for systematic reviews (Beynon 2013).
Search methods for identification of studies
Electronic searches
We will search the following databases from inception to the present.
MEDLINE (Ovid)
Embase (Elsevier)
PsycInfo (Ovid)
Library, Information Science & Technology Abstracts (LISTA) (EBSCO)
Science Citation Index (ISI Web of Science)
For detailed search strategies for each database, see Appendix 1. We will place no restrictions on the language of publication when searching the electronic databases or reviewing reference lists in identified studies.
Searching other resources
To identify additional published, unpublished and ongoing studies:
relevant studies identified from the above sources will be entered into PubMed and the Related Articles feature will be used; and
reference lists of all relevant studies will be assessed (Horsley 2011).
We will also search the websites of, among others, the InterTASC Information Specialists’ Sub‐Group and the Health Information Research Unit (McMaster).
Data collection and analysis
Selection of studies
Four review authors (CMEL, VG, VV, JVAF) will work independently in pairs to screen the titles and abstracts of all retrieved records and assess papers for eligibility. Any disagreements will be resolved by discussion or by consultation with a third author (IS) to reach consensus.
Full copies of the relevant reports will be obtained for records possibly meeting the inclusion criteria. Each full report will be assessed independently in pairs by four review authors (CMEL, VG, VV, JVAF) to determine if it meets the inclusion criteria for the review. Any disagreements will be resolved by discussion or by consultation with a third author (IS) to reach consensus.
Data extraction and management
Two review authors will independently extract data, using a piloted pre‐specified data extraction form. Information will be extracted on the following.
Citation details for the study
Methodological filter used
Dates the searches were conducted
Years covered by the searches
Search interface used (e.g. Ovid or PubMed)
Healthcare topic
Gold standard
Outcome measures (e.g. sensitivity, specificity, or precision)
Any disagreements will be resolved by discussion or by consultation with a third review author to reach consensus. If there are studies with incomplete or missing data, we will attempt to contact the corresponding author (Appendix 2) (Young 2011).
Assessment of risk of bias in included studies
A small number of critical appraisal tools have been developed to assess the quality of methodological search filters (Bak 2009; Glanville 2008; Jenkins 2004). The included studies will be assessed against the search filter appraisal checklist proposed by the UK InterTASC Information Specialists’ Sub‐Group (Glanville 2008) and reported in the Cochrane Handbook (Lefebvre 2019).
Two review authors (VG and CMEL) will complete the appraisal checklist (Appendix 3). Any disagreements will be resolved by discussion or by consultation with a third author (VV, JVAF, or IS) to reach consensus.
Data synthesis
We will synthesize performance measures of the filters separately for MEDLINE and Embase. We will tabulate the performance measures reported by development and evaluation studies grouped by individual filters, so that a comparison can be made between the original reported performance of a filter and its performance in subsequent evaluation studies. If sensitivity, specificity, or precision (together with 95% confidence intervals (CIs)) are not reported in the original reports, these will be calculated from the 2 x 2 data tables, where possible.
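Where a report gives the raw 2 x 2 cells but no confidence interval, a proportion such as sensitivity can be recalculated together with a 95% CI. As a sketch only: the protocol does not specify a CI method, so the Wilson score interval used below is an assumption, chosen because it behaves well for proportions near 0 or 1 (common for high-sensitivity filters).

```python
from math import sqrt

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple:
    """Wilson score 95% CI for a proportion (e.g. sensitivity = a / (a + c))."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical example: a filter retrieved a = 90 of a + c = 100
# gold-standard systematic reviews.
low, high = wilson_ci(90, 100)
print(f"sensitivity 0.90 (95% CI {low:.3f} to {high:.3f})")
```

The same function applies to specificity (d of b + d) and precision (a of a + b).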
Subgroup analysis and investigation of heterogeneity
Where sufficient data are available, we will perform subgroup analyses based on the following characteristics.
Dates the searches were conducted: searches conducted before release of the PRISMA statement in 2009 versus those conducted after its release (because the PRISMA guidance may affect how systematic reviews are reported)
Search interface used (e.g. PubMed or Ovid)
Healthcare topic: searches conducted within a specific health topic (e.g. public health, cardiovascular disease, etc.) versus those conducted across the biomedical literature or within a core set of non‐specialized biomedical journals (e.g. Core Clinical Journals)
History
Protocol first published: Issue 7, 2020
Notes
The methods section of this protocol has been adapted from a previous review on a similar topic (Beynon 2013).
Acknowledgements
We thank Andrea Juliana Sanabria, David Rigau, Marta Roqué i Figuls and Pablo Alonso‐Coello for commenting on the protocol; and Laura Martínez García for her work on an early draft of it.
Appendices
Appendix 1. Search strategies
Ovid/MEDLINE search strategy
1 search*.ti.
2 strateg*.ti.
3 filter.ti,ab.
4 filters.ti,ab.
5 retriev*.ti.
6 identif*.ti.
7 locat*.ti.
8 find*.ti.
9 (indexing or indexed).ti.
10 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9
11 systematic review*.ti.
12 meta analys*.ti.
13 metaanalys*.ti.
14 11 or 12 or 13
15 10 and 14
16 review*.ti.
17 filter*.ti.
18 filters.ti.
19 1 or 17 or 18
20 16 and 19
21 15 or 20
Elsevier/Embase search strategy
#1. search*:ti
#2. strateg*:ti
#3. filter:ti,ab
#4. filters:ti,ab
#5. retriev*:ti
#6. identif*:ti
#7. indexing:ti OR indexed:ti
#8. locat*:ti
#9. find*:ti
#10. #1 OR #2 OR #3 OR #4 OR #5 OR #6 OR #7 OR #8 OR #9
#11. systematic AND review*:ti
#12. meta AND analys*:ti
#13. metaanalys*:ti
#14. #11 OR #12 OR #13
#15. #10 AND #14
#16. review*:ti
#17. filter:ti
#18. filters:ti
#19. #1 OR #17 OR #18
#20. #16 AND #19
#21. #15 OR #20
Ovid/PsycINFO search strategy
1 search*.ti.
2 strateg*.ti.
3 filter.ti,ab.
4 filters.ti,ab.
5 retriev*.ti,ab.
6 identif*.ti.
7 (indexing or indexed).ti.
8 locat*.ti.
9 find*.ti.
10 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8 or 9
11 systematic review*.ti.
12 meta analys*.ti.
13 metaanalys*.ti.
14 11 or 12 or 13
15 10 and 14
16 review*.ti.
17 filter.ti.
18 filters.ti.
19 1 or 17 or 18
20 16 and 19
21 15 or 20
EBSCO/LISTA search strategy
S1 TI search*
S2 TI strateg*
S3 TI filter OR AB filter
S4 TI filters OR AB filters
S5 TI retriev*
S6 TI identif*
S7 TI indexing OR indexed
S8 TI Locat*
S9 TI Find*
S10 S1 OR S2 OR S3 OR S4 OR S5 OR S6 OR S7 OR S8 OR S9
S11 TI systematic review*
S12 TI meta analys*
S13 TI metaanalys*
S14 S11 OR S12 OR S13
S15 S10 AND S14
S16 TI review*
S17 TI filter
S18 TI filters
S19 S1 OR S17 OR S18
S20 S16 AND S19
S21 S15 OR S20
Appendix 2. Survey of authors providing information on included systematic reviews
| Study ID | Study author contacted | Study author replied | Additional information requested | Additional information provided |
| Study 1 |  |  |  |  |
Appendix 3. UK InterTASC Information Specialists’ Sub‐Group (ISSG) Search Filter Appraisal Checklist
A. Information
A.1. State the author’s objective.
A.2. State the focus of the research.
[ ] Sensitivity‐maximizing
[ ] Precision‐maximizing
[ ] Specificity‐maximizing
[ ] Balance of sensitivity and specificity/precision
[ ] Other
A.3. Database(s) and search interface(s).
A.4. Describe the methodological focus of the filter (e.g. RCTs).
A.5. Describe any other topic that forms an additional focus of the filter (e.g. clinical topics, such as breast cancer; geographic location, such as Asia; or population grouping, such as paediatrics).
A.6. Other observations.
B. Identification of a gold standard (GS) of known relevant records
B.1. Did the authors identify 1 or more gold standards (GSs)? None/1/2/3/4/5/More than 5
B.2. How did the authors identify the records in each GS?
B.3. Report the dates of the records in each GS.
B.4. What are the inclusion criteria for each GS?
B.5. Describe the size of each GS and the authors’ justification, if provided (e.g. the size of the GS may have been determined by a power calculation).
B.6. Are there limitations to the gold standard(s)? Yes/No/Unclear
B.7. How was each GS used?
[ ] To identify potential search terms
[ ] To derive potential strategies (groups of terms)
[ ] To test internal validity
[ ] To test external validity
[ ] Other, please specify
B.8. Other observations.
C. How did the researchers identify the search terms in their filter(s)? (Select all that apply)
C.1. Adapted a published search strategy. Yes/No/Unclear (please describe)
C.2. Asked experts for suggestions of relevant terms. Yes/No/Unclear (please describe)
C.3. Used a database thesaurus. Yes/No/Unclear (please describe)
C.4. Performed statistical analysis of terms in a GS set of records (see B above). Yes/No/Unclear (please describe)
C.5. Extracted terms from the GS set of records (see B above). Yes/No/Unclear (please describe)
C.6. Extracted terms from some relevant records (but not a GS). Yes/No/Unclear (please describe)
C.7. Tick all types of search terms tested.
[ ] Subject headings
[ ] Text words (e.g. in title, abstract)
[ ] Publication types
[ ] Subheadings
[ ] Check tags
[ ] Other, please specify
C.8. Include the citation of any adapted strategies.
C.9. How were the (final) combination(s) of search terms selected?
C.10. Were the search terms combined (using Boolean logic) in a way that is likely to retrieve the studies of interest?
C.11. Other observations.
D. Internal validity testing (this type of testing is possible when the search filter terms were developed from a known GS set of records.)
D.1. How many filters were tested for internal validity?
For each filter report the following information.
D.2. Was the performance of the search filter tested on the GS from which it was derived? Yes/No/Unclear (please describe)
D.3. Report sensitivity data (a single value, a range, "Unclear"* or "Not reported", as appropriate).
D.4. Report precision data (a single value, a range, "Unclear"* or "Not reported", as appropriate).
D.5. Report specificity data (a single value, a range, "Unclear"* or "Not reported", as appropriate).
D.6. Other performance measures reported.
D.7. Other observations.
E. External validity testing (this section relates to testing the search filter on records that are different from the records used to identify the search terms.)
E.1. How many filters were tested for external validity on records different from those used to identify the search terms?
E.2. Describe the validation set(s) of records, including the interface.
For each filter report the following information.
E.3. On which validation set(s) was the filter tested?
E.4. Report sensitivity data for each validation set (a single value, a range, "Unclear" or "Not reported", as appropriate).
E.5. Report precision data for each validation set (a single value, a range, "Unclear" or "Not reported", as appropriate).
E.6. Report specificity data for each validation set (a single value, a range, "Unclear" or "Not reported", as appropriate).
E.7. Other performance measures reported.
E.8. Other observations.
F. Limitations and comparisons
F.1. Did the authors discuss any limitations to their research?
F.2. Are there other potential limitations to this research that you have noticed?
F.3. Report any comparisons of the performance of the filter against other relevant published filters (sensitivity, precision, specificity, or other measures).
F.4. Include the citations of any compared filters.
F.5. Other observations and/or comments.
G. Other comments (this section can be used to provide any other comments. Selected prompts for issues to bear in mind are given below.)
G.1. Have you noticed any errors in the document that might impact on the usability of the filter?
G.2. Are there any published errata or comments (e.g. in the MEDLINE record)?
G.3. Is there public access to prepublication history and/or correspondence?
G.4. Are further data available on a linked site or from the authors?
G.5. Include references to related papers and/or other relevant material.
G.6. Other comments.
Contributions of authors
Juan Franco, Ivan Solá and Valeria Vietto wrote the draft and final version of the protocol, with contributions from Camila Escobar and Virginia Garrote.
Sources of support
Internal sources
- Instituto Universitario Hospital Italiano, Argentina
Provides funding for Virginia Garrote, Juan Víctor Ariel Franco, Camila Micaela Escobar Liquitay and Valeria Vietto
External sources
Andrea Juliana Sanabria is funded by a Río Hortega research contract from the Instituto de Salud Carlos III (CM12/00168), Spain
Pablo Alonso‐Coello is funded by a Miguel Servet research contract from the Instituto de Salud Carlos III (CP09/00137), Spain
Declarations of interest
JVAF: none known
IS: none known
VV: none known
CMEL: none known
VG: none known
References
Additional references
Bachmann 2002
- Bachmann LM, Coray R, Estermann P, Ter Riet G. Identifying diagnostic studies in MEDLINE: reducing the number needed to read. Journal of the American Medical Informatics Association 2002;9(6):653-8.
Bak 2009
- Bak G, Mierzwinski-Urban M, Fitzsimmons H, Morrison A, Maden-Jenkins M. A pragmatic critical appraisal instrument for search filters: introducing the CADTH CAI. Health Information and Libraries Journal 2009;26(3):211-9.
Becker 2011
- Becker LA, Oxman AD. Chapter 22: Overviews of reviews. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). Available from training.cochrane.org/handbook.
Beynon 2013
- Beynon R, Leeflang MM, McDonald S, Eisinga A, Mitchell RL, Whiting P, et al. Search strategies to identify diagnostic accuracy studies in MEDLINE and EMBASE. Cochrane Database of Systematic Reviews 2013, Issue 9. [DOI: 10.1002/14651858.MR000022.pub3]
Boynton 1998
- Boynton J, Glanville J, McDaid D, Lefebvre C. Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. Journal of Information Science 1998;24:137-54.
Bradley 2010
- Bradley SM. Examination of the Clinical Queries and Systematic Review "hedges" in EMBASE and MEDLINE. Journal of the Canadian Health Libraries Association / Journal de l'Association des bibliothèques de la santé du Canada 2010;31:27-37.
Bragge 2011
- Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The Global Evidence Mapping Initiative: scoping research in broad topic areas. BMC Medical Research Methodology 2011;11:92.
Bramer 2017
- Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Systematic Reviews 2017;6(1):245.
Brettle 1998
- Brettle AJ, Long AF, Grant MJ, Greenhalgh J. Searching for information on outcomes: do you need to be comprehensive? Quality in Health Care 1998;7(3):163-7.
Chandler 2019
- Chandler J, Cumpston M, Thomas J, Higgins JPT, Deeks JJ, Clarke MJ. Chapter I: Introduction. In: Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated August 2019). Cochrane, 2019. Available from www.training.cochrane.org/handbook.
Dhammi 2014
- Dhammi IK, Kumar S. Medical subject headings (MeSH) terms. Indian Journal of Orthopaedics 2014;48(5):443-4.
Embase Indexing Guide 2020
- Embase Indexing Guide 2020. https://www.elsevier.com/__data/assets/pdf_file/0010/901693/Embase-indexing-guide-2020.pdf (accessed 1 July 2020).
Glanville 2000
- Glanville J, Lefebvre C. Identifying systematic reviews: key resources. ACP Journal Club 2000;132(3):A11-2.
Glanville 2008
- Glanville J, Bayliss S, Booth A, Dundar Y, Fernandes H, Fleeman ND, et al. So many filters, so little time: the development of a search filter appraisal checklist. Journal of the Medical Library Association 2008;96(4):356-61.
Higgins 2019a
- Higgins JPT, Lasserson T, Chandler J, Tovey D, Thomas J, Flemyng E, Churchill R. Methodological Expectations of Cochrane Intervention Reviews. London: Cochrane, 2019.
Higgins 2019b
- Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019). Cochrane, 2019. Available from www.training.cochrane.org/handbook.
Horsley 2011
- Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database of Systematic Reviews 2011, Issue 8. [DOI: 10.1002/14651858.MR000026.pub2]
Hunt 1997
- Hunt DL, McKibbon KA. Locating and appraising systematic reviews. Annals of Internal Medicine 1997;126(7):532-8.
Ioannidis 2016
- Ioannidis JPA. Why most clinical research is not useful. PLOS Medicine 2016;13(6):e1002049.
IOM 2011
- IOM (Institute of Medicine). Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press, 2011.
Jenkins 2004
- Jenkins M. Evaluation of methodological search filters--a review. Health Information and Libraries Journal 2004;21(3):148-63.
Lee 2012
- Lee E, Dobbins M, Decorby K, McRae L, Tirilis D, Husson H. An optimal search filter for retrieving systematic reviews and meta-analyses. BMC Medical Research Methodology 2012;12:51.
Lefebvre 2019
- Lefebvre C, Glanville J, Briscoe S, Littlewood A, Marshall C, Metzendorf M-I, et al. Chapter 4: Searching for and selecting studies. In: Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019). Cochrane, 2019. Available from www.training.cochrane.org/handbook.
Leydesdorff 2016
- Leydesdorff L, Comins JA, Sorensen AA, Bornmann L, Hellsten I. Cited references and Medical Subject Headings (MeSH) as two different knowledge representations: clustering and mappings at the paper level. Scientometrics 2016;109(3):2077-91.
Li 2019
- Li L, Smith HE, Atun R, Tudor Car L. Search strategies to identify observational studies in MEDLINE and Embase. Cochrane Database of Systematic Reviews 2019, Issue 3. [DOI: 10.1002/14651858.MR000041.pub2]
Lipscomb 2000
- Lipscomb CE. Medical Subject Headings (MeSH). Bulletin of the Medical Library Association 2000;88(3):265-6.
Moher 2009
- Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. PLOS Medicine 2009;6(7):e1000097.
Montori 2005
- Montori VM, Wilczynski NL, Morgan D, Haynes RB, Hedges Team. Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ 2005;330(7482):68.
NLM 2019a
- NLM - MeSH. Systematic Review [Publication Type]. https://www.ncbi.nlm.nih.gov/mesh/2028176 (accessed 1 July 2020).
NLM 2019b
- National Library of Medicine. PubMed updates February 2019. National Library of Medicine Technical Bulletin 2019;(427):b6.
NLM 2019c
- National Library of Medicine. Support for Systematic Reviews. National Library of Medicine Technical Bulletin 2019;(427):b6.
Page 2016
- Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLOS Medicine 2016;13(5):e1002028.
Shojania 2001
- Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Effective Clinical Practice 2001;4(4):157-62.
SIGN
- Scottish Intercollegiate Guidelines Network. SIGN: search filters. www.sign.ac.uk/search-filters.html (accessed 1 July 2020).
Sprakel 2019
- Sprakel J, Carrara H, Manzer BM, Fedorowicz Z. A mapping study and recommendations for a joint NGO (Think Pink) and Bahrain Government Breast Cancer project. Journal of Evidence-Based Medicine 2019;12(3):209-17. [DOI: 10.1111/jebm.12357]
Wilczynski 1995
- Wilczynski NL, Walker CJ, McKibbon KA, Haynes RB. Reasons for the loss of sensitivity and specificity of methodologic MeSH terms and textwords in MEDLINE. Proceedings. Symposium on Computer Applications in Medical Care 1995:436-40.
Wilczynski 2005
- Wilczynski NL, Morgan D, Haynes RB. An overview of the design and methods for retrieving high-quality studies for clinical care. BMC Medical Informatics and Decision Making 2005;5:20.
Wilczynski 2007
- Wilczynski NL, Haynes RB, Hedges Team. EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. Journal of Clinical Epidemiology 2007;60(1):29-33.
Young 2011
- Young T, Hopewell S. Methods for obtaining unpublished data. Cochrane Database of Systematic Reviews 2011, Issue 11. [DOI: 10.1002/14651858.MR000027.pub2]