F1000Res. 2021 Sep 2;9:1386. Originally published 2020 Dec 1. [Version 3] doi: 10.12688/f1000research.27337.3

Using citation tracking for systematic literature searching - study protocol for a scoping review of methodological studies and a Delphi study

Julian Hirt 1,2,3, Thomas Nordhausen 2, Christian Appenzeller-Herzog 4, Hannah Ewald 4,a
PMCID: PMC8474097  PMID: 34631036

Version Changes

Revised. Amendments from Version 2

We added methodological details of the Delphi study and we now specify the number of rounds we expect to perform, the consensus rate, anonymity of participants, and expected non-response.

Abstract

Background: Up-to-date guidance on comprehensive study identification for systematic reviews is crucial. According to current recommendations, systematic searching should combine electronic database searching with supplementary search methods. One such supplementary search method is citation tracking. It aims at collecting directly and/or indirectly cited and citing references from "seed references". Tailored and evidence-guided recommendations concerning the use of citation tracking are strongly needed.

Objective: We intend to develop recommendations for the use of citation tracking in systematic literature searching for health-related topics. Our study will be guided by the following research questions: What is the benefit of citation tracking for systematic literature searching for health-related topics? Which methods, citation indexes, and other tools are used for citation tracking? What terminology is used for citation tracking methods?

Methods: Our study will have two parts: a scoping review and a Delphi study. The scoping review aims at identifying methodological studies on the benefit and use of citation tracking in systematic literature searching for health-related topics, with no restrictions on study design, language, or publication date. We will search MEDLINE (Ovid), CINAHL (EBSCOhost), the Web of Science Core Collection, and two information science databases, perform web searching, and contact experts in the field. Two reviewers will independently perform study selection. We will conduct direct backward and forward citation tracking on included articles. Data from included studies will be extracted using a prespecified extraction sheet and presented in both tabular and narrative form. The results of the scoping review will inform the subsequent Delphi study, through which we aim to derive consensus recommendations for the future practice and research of citation tracking.

Keywords: Citation Tracking, Literature Search, Supplementary Search, Methods, Scoping Review, Research Methodology, Survey, Systematic Review

Introduction

Systematic reviews are considered to be of high clinical and methodological importance as they help to derive recommendations for health care practice and future research 1–3. A comprehensive literature search that aims to identify the available evidence as completely as possible is the foundation of every systematic review 4–6. Due to an ever-growing research volume, a lack of universal terminology and indexation, as well as extensive time requirements for identifying studies in a systematic way, efficient search approaches are required 5, 7, 8. According to current recommendations, systematic search approaches should include both electronic database searching and one or several supplementary search methods 9. Potential supplementary search methods include citation tracking, contacting study authors or experts, handsearching, trial register searching, and web searching 10. In this study, we focus on citation tracking.

Citation tracking is an umbrella term for multiple methods which directly or indirectly collect related references from so-called "seed references". These seed references are usually eligible for inclusion into the review. Some may be known at the beginning of the review and others may emerge as eligible records following full-text screening 10–12. The terminology used to describe the principles of citation tracking is non-uniform and heterogeneous 13–16. Citation tracking methods are sub-categorized into direct and indirect citation tracking (Figure 1a). For direct citation tracking, the words "backward" and "forward" denote the directionality of tracking 13, 17, 18. Backward citation tracking is the oldest form of citation tracking. It aims at identifying references cited by a seed reference, which can easily be achieved by checking its reference list. Terms like "footnote chasing" or "reference list searching" are used synonymously 6, 13. In contrast, forward citation tracking or chaining aims at identifying citing references, i.e. references that cite a seed reference 19. Indirect citation tracking describes the identification of (i) co-cited references or co-citations (i.e. other references cited by citing literature of a seed reference) and of (ii) co-citing references (i.e. publications sharing references with a seed reference) 11, 20. Direct and indirect citation relationships of references based on a seed reference are illustrated in Figure 1b. Both direct and indirect citation tracking may contain one or more layers of iteration. To this end, researchers may use newly retrieved, relevant references as new seed references.
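To make these relationship types concrete, the following minimal Python sketch (illustrative only, not part of the protocol; all paper identifiers are hypothetical) derives the cited, citing, co-cited, and co-citing references of a single seed from a toy citation graph in which cites[A] lists the references cited by A:

# Toy citation graph: cites[A] is the set of references cited by A.
cites = {
    "seed": {"p1", "p2"},   # the seed cites p1 and p2
    "p3": {"seed", "p4"},   # p3 cites the seed and p4
    "p5": {"p1", "p6"},     # p5 shares the cited reference p1 with the seed
}

def cited_refs(paper):
    # Direct backward citation tracking: references cited by the paper.
    return cites.get(paper, set())

def citing_refs(paper):
    # Direct forward citation tracking: references that cite the paper.
    return {p for p, refs in cites.items() if paper in refs}

def co_cited_refs(seed):
    # Indirect: other references cited by the literature citing the seed.
    return {r for citer in citing_refs(seed) for r in cited_refs(citer)} - {seed}

def co_citing_refs(seed):
    # Indirect: publications sharing at least one cited reference with the seed.
    return {p for p in cites if p != seed and cited_refs(p) & cited_refs(seed)}

print(cited_refs("seed"))      # {'p1', 'p2'}
print(citing_refs("seed"))     # {'p3'}
print(co_cited_refs("seed"))   # {'p4'}
print(co_citing_refs("seed"))  # {'p5'}

In this toy graph, the seed cites p1 and p2, p3 cites the seed as well as p4 (making p4 a co-citation), and p5 shares the cited reference p1 with the seed (making p5 a co-citing reference).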

Figure 1. Overview of citation tracking methods.

Figure 1.

1a: Hierarchical illustration of different citation tracking methods; 1b: Direct and indirect citation relationships of references based on a seed reference. A → B denotes A cites B. The horizontal axis denotes time, i.e. the chronology in which references were published relative to the seed reference: “Older” stands for references that were published before the seed reference, “Newer” stands for references that were published after the seed reference.

Direct backward citation tracking of cited references is currently the most common citation tracking method. However, recent guidance suggests that a combination of several methods (e.g., tracking cited, citing, co-cited, and co-citing references) may be the most effective way to use citation tracking for systematic reviewing 10. The added value of any form of citation tracking is likely not the same for all systematic reviews; rather, it depends on a variety of factors. For instance, citation tracking may be beneficial in research areas that require complex searches such as reviews of complex interventions, mixed-methods reviews, qualitative evidence syntheses, or reviews on public health topics. Furthermore, research areas without consistent terminology or with vocabulary overlaps with other fields, such as methodological topics, may also benefit from the use of citation tracking 20, 21. Hence, tailored and evidence-guided recommendations on the use of citation tracking are strongly needed. However, none of the current reviews on this topic has systematically identified the available evidence on the use and benefit of citation tracking in the context of systematic literature searching 10.

Therefore, the aim of our study is to develop recommendations for the use of citation tracking in systematic literature searching for health-related topics. The scoping review will be guided by the following three research questions which in turn will inform the Delphi study:

  • What is the benefit of citation tracking for systematic literature searching for health-related topics?

  • Which methods, citation indexes, and other tools are used for citation tracking?

  • What terminology is used for citation tracking methods?

Protocol

This protocol is reported according to the “Preferred Reporting Items for Systematic review and Meta-Analysis Protocols” (PRISMA-P) checklist 22, which we published on the Open Science Framework 23. Our study will have two parts: a scoping review and a Delphi study. The scoping review aims to map the benefit and use of citation tracking or, if the results are not sufficiently informative, to identify research gaps. The objective of the subsequent Delphi study is to derive consensus recommendations for the future practice and research of citation tracking 24–26. For the scoping review, we will use the framework by Arksey and O’Malley 26 and the “Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews” (PRISMA-ScR) 27. For the Delphi study, we will follow the “Guidance on Conducting and REporting DElphi Studies” (CREDES) statement 28.

Scoping review

Eligibility criteria. We will include any study with a focus on citation tracking as a means of evidence retrieval that meets one of the following criteria: it assesses the benefit and/or effectiveness of (i) citation tracking in general; (ii) different methods of citation tracking (e.g., backward vs. forward, direct vs. indirect); or (iii) technical uses of citation tracking (e.g., comparing citation indexes and/or tools, e.g., Scopus vs. Web of Science, Oyster, Voyster). Eligible studies need to have a health-related context. There will be no restrictions on study design, language, or publication date.

We will exclude studies solely using citation tracking for evidence retrieval, e.g., a systematic review applying citation tracking as a supplementary search technique, or studies focussing on citation tracking as a means to explore network or citation impact (i.e. bibliometric analysis). Studies only assessing the benefit of combined search methods in which the isolated benefit of citation tracking cannot be extracted will also be excluded. Furthermore, we will exclude methodological guidelines without empirical investigations and other non-empirical publications like editorials, commentaries, letters and abstract-only publications. Table 1 illustrates our inclusion and exclusion criteria per domain.

Table 1. Inclusion and exclusion criteria.

Domain: Study focus
Inclusion criteria: Any study with a focus on citation tracking as an evidence retrieval method AND one of the following criteria: any study assessing the benefit and/or effectiveness of citation tracking; any study comparing different methods of citation tracking (e.g., backward vs. forward, direct vs. indirect); any study assessing technical uses of citation tracking (e.g., comparing citation indexes and/or tools, e.g., Scopus vs. WoS, Oyster, Voyster, etc.)
Exclusion criteria: Any study solely using citation tracking for evidence retrieval (e.g., a systematic review applying citation tracking as supplementary search technique) OR any study solely assessing benefits and/or use and/or effectiveness of citation tracking to explore a network or citation impact (i.e. bibliometric analysis) OR any study describing solely the method of citation tracking without further assessing it, e.g., guidelines for developing search strategies or guidelines for systematic or other reviews OR any study only assessing the benefit of combined search methods in which the isolated benefit of citation tracking cannot be extracted

Domain: Research context
Inclusion criteria: Health-related
Exclusion criteria: Other

Domain: Language
Inclusion criteria: All languages
Exclusion criteria: -

Domain: Publication year
Inclusion criteria: All publication years
Exclusion criteria: -

Domain: Publication type
Inclusion criteria: Any reports of empirical studies
Exclusion criteria: Editorials; commentaries; letters; abstract-only publications

Information sources. We will search MEDLINE via Ovid; CINAHL (Cumulative Index to Nursing and Allied Health Literature), LLISFT (Library Literature & Information Science Full Text), and LISTA (Library, Information Science & Technology Abstracts) via EBSCOhost; and the Web of Science Core Collection, using database-specific search strategies. Additionally, we will perform web searching via Google Scholar as well as direct forward and backward citation tracking of included studies. As some evidence suggests that one citation index may not be enough for this 29, we will use Scopus, Web of Science, and Google Scholar for forward citation tracking. For backward citation tracking, we will use Scopus and, if seed references are not indexed in Scopus, we will manually extract the seed reference's reference list. We will iteratively repeat direct citation tracking on newly identified eligible references until no further eligible references are identified. We will also contact librarians in the field of health sciences and information specialists through several mailing lists (Canadian Medical Libraries, Expertsearching, MEDIBIB-L/German-speaking medical librarians, and EAHIL-list) to ask for further studies.
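The planned iteration can be pictured as a simple loop that stops once a round of citation tracking yields no new eligible references. The following Python sketch is illustrative only; find_cited, find_citing, and is_eligible are hypothetical placeholders for citation index look-ups and screening decisions:

# Illustrative sketch of iterative direct citation tracking (hypothetical helpers).
def iterative_citation_tracking(initial_seeds, find_cited, find_citing, is_eligible):
    # Repeat backward and forward citation tracking on newly identified
    # eligible references until no further eligible references emerge.
    included = set(initial_seeds)
    frontier = set(initial_seeds)
    while frontier:
        candidates = set()
        for seed in frontier:
            candidates |= find_cited(seed)    # backward: references cited by the seed
            candidates |= find_citing(seed)   # forward: references citing the seed
        newly_eligible = {c for c in candidates - included if is_eligible(c)}
        included |= newly_eligible
        frontier = newly_eligible             # only new references seed the next layer
    return included

# Example with a tiny in-memory graph (hypothetical identifiers):
graph = {"seed": {"p1"}, "p2": {"seed"}}
result = iterative_citation_tracking(
    {"seed"},
    find_cited=lambda r: graph.get(r, set()),
    find_citing=lambda r: {p for p, refs in graph.items() if r in refs},
    is_eligible=lambda r: True,
)
print(result)  # {'seed', 'p1', 'p2'} (set order may vary)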

Search strategy. Due to a lack of adequate index terms, our search strategy will be based on text words only. To determine frequently occurring terms for inclusion into the search strategy, we analysed keywords in the titles and abstracts of potentially relevant publications retrieved from preliminary searches and of similar articles identified via PubMed by using various text mining tools (PubMed Reminer, AntConc, Yale MeSH analyzer, Voyant, VOSviewer, Termine, Text analyzer) 30. We restricted some of our text words to the title field in order to avoid retrieving systematic reviews that used citation tracking.
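As a purely illustrative example of such a term-frequency analysis (this is not the output of the tools named above; the records and stop word list are invented), a simple tally over candidate titles might look as follows in Python:

# Illustrative term-frequency tally over hypothetical candidate records.
import re
from collections import Counter

records = [
    "Citation tracking as a supplementary search method",
    "Forward citation searching in Web of Science and Scopus",
]
stopwords = {"a", "as", "and", "in", "of", "the"}

tokens = [
    token
    for text in records
    for token in re.findall(r"[a-z-]+", text.lower())
    if token not in stopwords
]
print(Counter(tokens).most_common(5))
# e.g. [('citation', 2), ('tracking', 1), ('supplementary', 1), ('search', 1), ('method', 1)]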

All authors contributed to the development of search strategies. HE and CAH are information specialists with a professional background in research; JH and TN are researchers experienced in the development of search strategies. HE drafted the search strategy and JH peer-checked it.

Box 1 shows the final search for MEDLINE in Ovid syntax. To use the search in other databases, we will translate it by means of Polyglot Search Translator 31. CAH will conduct the searches and eliminate duplicates using the Bramer method 32. We will perform web searching in Google Scholar using search terms from our database search. We will document our search strategy according to PRISMA-S 33.

Box 1. Search strategy for MEDLINE via Ovid.

(reference list OR reference lists OR ((reference OR references OR citation OR citations OR co-citation OR co-citations) ADJ3 (search OR searches OR searching OR searched OR screen OR screening OR chain OR chains OR chaining OR check OR checking OR checked OR chased OR chasing OR tracking OR tracked OR harvesting OR tool OR tools OR backward OR forward)) OR ((cited OR citing OR cocited OR cociting OR co-cited OR co-citing) ADJ3 (references OR reference)) OR citation discovery tool OR cocitation OR co-citation OR cocitations OR co-citations OR co-cited OR backward chaining OR forward chaining OR snowball sampling OR snowballing OR footnote chasing OR berry picking OR cross references OR cross referencing OR cross-references OR cross-referencing OR citation activity OR citation activities OR citation analysis OR citation analyses OR citation network OR citation networks OR citation relationship OR citation relationships).ti OR (((((strategy OR strategies OR method* OR literature OR evidence OR additional OR complementary OR supplementary) ADJ3 (find OR finding OR search* OR retriev*)) OR (database ADJ2 combin*)).ti) AND ((search OR searches OR searching OR searched).ab))

Data management. We will use a bibliography management tool to manage retrieved references and track their numbers throughout the study selection process. Furthermore, we will use specific tools for study selection that we describe below.

Selection of sources of evidence. After an initial calibration phase, that is, separately screening 100 titles and abstracts and discussing divergent decisions (TN, JH, HE), two authors (JH, TN) will independently screen titles, abstracts, and full texts using Rayyan 34. They will resolve disagreements by third-author arbitration (HE). To screen the results of the citation tracking step, we will consider ASReview, particularly if the number of references exceeds 1000. ASReview trains machine (deep) learning models on a set of eligible studies and applies active learning to manual selections during title/abstract screening to generate a relevancy-ranked list of abstracts and save screening time. Should the tool prove beneficial for reducing the screening load, we will consider conducting a more sensitive database search at a later stage and screening the additional results with ASReview.
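The following Python sketch illustrates the general relevancy-ranking idea behind such screening tools using scikit-learn; it does not show the ASReview interface, and all records and labels are invented for illustration:

# Generic illustration of relevancy ranking for screening (not the ASReview API).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Records already screened by hand (toy data): 1 = relevant, 0 = irrelevant.
screened_texts = ["forward citation searching study", "knee surgery outcomes trial"]
screened_labels = [1, 0]
# Records still awaiting screening.
unscreened_texts = ["co-citation analysis for review searching", "hip implant registry report"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(screened_texts)
X_new = vectorizer.transform(unscreened_texts)

model = LogisticRegression().fit(X_train, screened_labels)
scores = model.predict_proba(X_new)[:, 1]  # estimated probability of relevance

# Present likely-relevant records first; in active learning, new manual
# screening decisions would be fed back to retrain the model.
for score, text in sorted(zip(scores, unscreened_texts), reverse=True):
    print(f"{score:.2f}  {text}")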

Data charting process. We will pilot a prespecified data extraction sheet approved by consensus among the authors. We will extract bibliographic and geographic data, design- and study-specific data, as well as results that answer our research questions. Since we expect the included studies to be heterogeneous in terms of aim, design, and methods, we plan an iterative data extraction process. This will allow flexible, study-specific data extraction, e.g., by adding previously neglected data extraction items to the form if they might contribute to the overall body of knowledge. In the final publication, we will provide a detailed overview of the extracted data items. One author will extract data and a second author will peer-check the extraction. We will resolve disagreements by third-author arbitration.

Synthesis of results. One author (JH) will narratively summarise study characteristics and results. Depending on the results, we will also chart them graphically.

Delphi study

Design and rationale. A consensus-oriented, multi-stage online Delphi procedure will be used to derive recommendations for the use of citation tracking in systematic literature searching for health-related topics 28, 35, 36. We chose a Delphi procedure because it enables us to collect the perspectives of international experts on citation tracking, promote discussion of the topic, and derive consensus recommendations for future practice and research. The Delphi study will entail several Delphi rounds (see below). The results of the scoping review will inform the initial Delphi round (see below for details). To distribute the Delphi rounds to the experts, we will use the web-based tool SoSci Survey 37. The Delphi language will be English.

Expert panel. The recruitment of experts will be based on a stepwise approach. First, we will contact authors of pertinent articles identified during the literature search as well as experts from our professional networks. This "person-based" approach will help us to identify experts who authored papers, books, comments, and reviews in the field of citation tracking. We will ask the contacted persons to take part in the Delphi study. Second, we will identify and contact relevant national and international organisations as well as systematic review collaborations (e.g., Cochrane groups, Joanna Briggs Institute (JBI), Campbell Collaboration, National Academy of Medicine (NAM), expert information specialists, Evidence Synthesis International, and PRISMA-S working group). This "organisation-based approach" will allow us to reach experts in the field of literature retrieval methods who are potentially using citation tracking without necessarily being the authors of methodological studies (yet). By using this stepwise approach, we intend to recruit at least 15 experts.

Data collection. In online Delphi rounds, we will seek guidance on various aspects of citation tracking. For example, recommendations on the following aspects could be of particular interest:

  • Uniform terminology for citation tracking methods

  • Situations in which citation tracking should be applied

  • Potential situations in which citation tracking can be used as a sole method of evidence retrieval

  • Situations in which a particular citation tracking method or a combination thereof is likely to be most effective

  • Situations in which further layers of iteration of citation tracking should be applied

  • Necessity to use multiple citation indexes for citation tracking

  • For indirect citation tracking, whether to screen selected records only and how to define their ranking and cut-off

  • Reporting of citation tracking (complementing PRISMA-S 33)

  • Questions on citation tracking that currently cannot be answered and require more research

Based on the results of our scoping review, we will formulate draft recommendations for the first Delphi round. Experts will be invited to rate their agreement with the draft recommendations using a four-point Likert scale (strongly agree – agree – disagree – strongly disagree). If experts vote disagree/strongly disagree, they will be required to comment on their reasons and/or give constructive feedback. We will consider a recommendation to have reached consensus when at least 75% of the experts vote agree/strongly agree. All other recommendations will be adapted for the next Delphi round. This adaptation will be based on the comments collected from the experts and, if necessary, on discussion via video conference.
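As a minimal illustration of this consensus rule (the vote counts below are hypothetical), the 75% threshold could be computed as follows:

# Illustrative check of the planned consensus rule (75% agree/strongly agree).
CONSENSUS_THRESHOLD = 0.75
AGREEMENT = {"strongly agree", "agree"}

def is_consented(votes):
    # votes: list of ratings on the four-point scale for one draft recommendation.
    agreement_rate = sum(vote in AGREEMENT for vote in votes) / len(votes)
    return agreement_rate >= CONSENSUS_THRESHOLD

# Hypothetical round with 16 experts: 13 of 16 (81%) agree or strongly agree.
votes = ["strongly agree"] * 6 + ["agree"] * 7 + ["disagree"] * 2 + ["strongly disagree"]
print(is_consented(votes))  # True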

There are items where we will not directly propose recommendations, e.g., if the results of our scoping review do not allow it or if there are several equally valid options (e.g., for terminology). In these cases, we will either ask the Delphi experts for their experiences and perspectives or let them vote on several options. We will use the resulting answers to formulate draft recommendations, which will be entered into the Delphi consensus process (see above). Therefore, our Delphi study may comprise qualitative and quantitative aspects.

We will limit the number of Delphi rounds to a maximum of four rounds. Should there be no consensus for any of the items by the end of the fourth round, we will report the results but not give any recommendations.

Expert assessments will be anonymous among experts but open to the study team. We expect a low non-response rate since experts' participation is indicative of their interest in our study.

To describe experts’ characteristics, we will collect sociodemographic data, i.e. professional education and background, current field of work, as well as years of experience in literature searching and citation tracking. We expect that experts will invest around 30 to 90 minutes per Delphi round, depending on the aim of the respective round as well as on experts’ familiarity and experience with the topic. For each Delphi round, we will schedule approximately three weeks for participation. Table 2 illustrates our reminder strategy within a Delphi round. We will pilot test and discuss our Delphi items with a person experienced in literature searching who is not an author and not involved in the Delphi study.

Table 2. Reminder strategy of each online Delphi round.

Process and time                            Person-based approach    Organisation-based approach
Delphi round setup                          Invitation               Invitation
One week after                              Reminder                 -
Two weeks after                             Reminder                 Reminder
Delphi round closing after three weeks      -                        -

Note: Person-based approach: contacting authors of pertinent articles identified during the literature search as well as experts from authors’ professional networks; Organisation-based approach: contacting national and international organisations and systematic review collaborations.

Data analysis. We will use descriptive statistics for votes for which results are numeric or can be converted into numbers. For free text answers and statements of experts, we will use thematic categorisation 38.

Ethical concerns. The online Delphi study will contain introductory information on our aims, the Delphi itself, data management, and security. We do not expect vulnerability on the part of the experts and, with regard to the Swiss Human Research Act, our research does not concern human diseases or the structure and function of the human body 39. We will therefore not apply for ethical approval of the Delphi study. Taking part in the Delphi study will indicate consent to participate. There will be no mandatory participation once an expert has consented to participate. Experts will not receive an incentive for participation and may leave the process at any time.

Dissemination of results

Our dissemination strategy uses multiple ways to share our study results with academic stakeholders. The final scoping review and Delphi study will each be published in an international open access journal relevant in the field of information retrieval. Additionally, we will discuss our results with experts at national and international conferences (e.g., conference of the German Network for Evidence-based Medicine (EbM-Netzwerk), conference of the European Association for Health Information and Libraries (EAHIL), Cochrane Colloquium, Health Technology Assessment International (HTAi) conference). To inform about our study results and publications, we will use Twitter, ResearchGate, and mailing lists from relevant stakeholders such as Canadian Medical Libraries, Expertsearching, MEDIBIB-L/German-speaking medical librarians, and EAHIL-list.

Study status

We conducted the initial search for the scoping review in November 2020 and expect to complete the Delphi study in 2022.

Current study status: literature searches: yes; piloting of the study selection process: yes; formal screening of search results against eligibility criteria: yes; data extraction: no; data analysis: no.

Conclusions

Missing pertinent evidence might have an impact on the validity of systematic reviews and, consequently, on the quality of health care 40, 41. Therefore, authors of systematic reviews should conduct high-quality literature searches that aim to detect all relevant evidence. Citation tracking may be an effective way to complement electronic database searches and to broaden the scope of possible findings. Our study therefore intends to provide literature- and expert-based recommendations on the use of citation tracking for systematic literature searching. Although we solely focus on a health-related context, some of the recommendations developed during this project may also prove relevant for other academic fields such as the social or environmental sciences 9, 42. Finally, tailored and evidence-based recommendations concerning the use of citation tracking for systematic literature searching may guide future steps in semi-automated and automated literature retrieval methods 43, 44.

Data availability

Underlying data

No underlying data are associated with this article.

Reporting guidelines

Open Science Framework (OSF): PRISMA-P checklist for ‘Using citation tracking for systematic literature searching - study protocol for a scoping review of methodological studies and a Delphi study’, https://doi.org/10.17605/OSF.IO/7ETYD 23.

Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).

Funding Statement

The author(s) declared that no grants were involved in supporting this work.

[version 3; peer review: 2 approved]

References

  • 1. Ioannidis JP: The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016;94(3):485–514. 10.1111/1468-0009.12210
  • 2. Abbas Z, Raza S, Ejaz K: Systematic reviews and their role in evidence-informed health care. J Pak Med Assoc. 2008;58(10):561–67.
  • 3. Gough D, Davies P, Jamtvedt G, et al.: Evidence Synthesis International (ESI): Position Statement. Syst Rev. 2020;9(1):155. 10.1186/s13643-020-01415-5
  • 4. Sutton A, Clowes M, Preston L, et al.: Meeting the review family: exploring review types and associated information retrieval requirements. Health Info Libr J. 2019;36(3):202–22. 10.1111/hir.12276
  • 5. Gusenbauer M, Haddaway NR: Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synth Methods. 2020;11(2):181–217. 10.1002/jrsm.1378
  • 6. Lefebvre C, Glanville J, Briscoe S, et al.: Searching for and selecting studies. In: Higgins JPT, Thomas J, eds. Cochrane Handbook for Systematic Reviews of Interventions. Version 6, 2nd edn. Hoboken: Wiley Online Library, 2019;67–108. 10.1002/9781119536604.ch4
  • 7. McGowan J, Sampson M: Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.
  • 8. de Souza Leão L, Eyal G: The rise of randomized controlled trials (RCTs) in international development in historical perspective. Theor Soc. 2019;48(3):383–418. 10.1007/s11186-019-09352-6
  • 9. Cooper C, Booth A, Varley-Campbell J, et al.: Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol. 2018;18(1):85. 10.1186/s12874-018-0545-3
  • 10. Cooper C, Booth A, Britten N, et al.: A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Syst Rev. 2017;6(1):234. 10.1186/s13643-017-0625-1
  • 11. Belter CW: Citation analysis as a literature search method for systematic reviews. J Assoc Inf Sci Technol. 2016;67(11):2766–77. 10.1002/asi.23605
  • 12. Hu X, Rousseau R, Chen J: On the definition of forward and backward citation generations. J Informetr. 2011;5(1):27–36. 10.1016/j.joi.2010.07.004
  • 13. Booth A: Unpacking your literature search toolbox: on search styles and tactics. Health Info Libr J. 2008;25(4):313–17. 10.1111/j.1471-1842.2008.00825.x
  • 14. Saimbert M, Fowler SA, Pierce J, et al.: Search Resources and Techniques to Maximize Search Efforts. In: Holly C, Salmond SW, Saimbert MK, eds. Comprehensive Systematic Review for Advanced Nursing Practice. New York: Springer Publishing Company, 2016;139–72. 10.1891/9780826131867.0006
  • 15. Choong MK, Tsafnat G: Role of citation tracking in updating of systematic reviews. AMIA Jt Summits Transl Sci Proc. 2014;2014:18.
  • 16. Lowe J, Peters J, Shields B, et al.: Methods to update systematic literature searches: full update searching vs. forward citation chasing: A case study from a systematic review of diagnostic test accuracy. Exeter.
  • 17. Booth A: Innovative approaches to systematic reviewing. In: Levay P, Craven J, eds. Systematic Searching: Practical ideas for improving results. London: Facet Publishing, 2019;25–50.
  • 18. Lefebvre C, Glanville J, Briscoe S, et al.: Chapter 4: Searching for and selecting studies. Draft version (13 September 2018). 2018.
  • 19. Cribbin T: Augmenting Citation Chain Aggregation with Article Maps. CEUR Workshop Proceedings. 2014;1311.
  • 20. Janssens AC, Gwinn M: Novel citation-based search method for scientific literature: application to meta-analyses. BMC Med Res Methodol. 2015;15:84. 10.1186/s12874-015-0077-z
  • 21. Linder SK, Kamath GR, Pratt GF, et al.: Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments. J Clin Epidemiol. 2015;68(4):412–17. 10.1016/j.jclinepi.2014.10.008
  • 22. Shamseer L, Moher D, Clarke M, et al.: Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350:g7647. 10.1136/bmj.g7647
  • 23. Hirt J: Using citation tracking for systematic literature searching (supplementary material). 2020. 10.17605/OSF.IO/7ETYD
  • 24. Ruiz-Perez I, Petrova D: Scoping reviews. Another way of literature review. Med Clin (Barc). 2019;153(4):165–68. 10.1016/j.medcli.2019.02.006
  • 25. Levac D, Colquhoun H, O'Brien KK: Scoping studies: advancing the methodology. Implement Sci. 2010;5:69. 10.1186/1748-5908-5-69
  • 26. Arksey H, O'Malley L: Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32. 10.1080/1364557032000119616
  • 27. Tricco AC, Lillie E, Zarin W, et al.: PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73. 10.7326/M18-0850
  • 28. Jünger S, Payne SA, Brine J, et al.: Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684–706. 10.1177/0269216317690685
  • 29. Li J, Burnham JF, Lemley T, et al.: Citation analysis: Comparison of Web of Science®, Scopus™, SciFinder®, and Google Scholar. J Med Libr Assoc. 2010;7(3):196–217. 10.1080/15424065.2010.505518
  • 30. Glanville J: Text mining for information specialists. In: Levay P, Craven J, eds. Systematic Searching: Practical ideas for improving results. London: Facet Publishing, 2019;147–70.
  • 31. Clark JM, Sanders S, Carter M, et al.: Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial. J Med Libr Assoc. 2020;108(2):195–207. 10.5195/jmla.2020.834
  • 32. Bramer WM, Giustini D, de Jonge GB, et al.: De-duplication of database search results for systematic reviews in EndNote. J Med Libr Assoc. 2016;104(3):240–43. 10.3163/1536-5050.104.3.014
  • 33. Rethlefsen ML, Kirtley S, Waffenschmidt S, et al.: PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev. 2021;10(1):39. 10.1186/s13643-020-01542-z
  • 34. Ouzzani M, Hammady H, Fedorowicz Z, et al.: Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. 10.1186/s13643-016-0384-4
  • 35. Diamond IR, Grant RC, Feldman BM, et al.: Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–409. 10.1016/j.jclinepi.2013.12.002
  • 36. Okoli C, Pawlowski SD: The Delphi method as a research tool: an example, design considerations and applications. Inf Manag. 2004;42(1):15–29. 10.1016/j.im.2003.11.002
  • 37. SoSci Survey GmbH: SoSci Survey. 2020; Accessed August 14, 2020.
  • 38. Saldaña J: The Coding Manual for Qualitative Researchers. 2nd edn. London: SAGE Publications, 2013.
  • 39. Gloy V, McLennan S, Rinderknecht M, et al.: Uncertainties about the need for ethics approval in Switzerland: a mixed-methods study. Swiss Med Wkly. 2020;150:w20318. 10.4414/smw.2020.20318
  • 40. Sampson M, McGowan J: Errors in search strategies were identified by type and frequency. J Clin Epidemiol. 2006;59(10):1057–1063. 10.1016/j.jclinepi.2006.01.007
  • 41. Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R: Errors in search strategies used in systematic reviews and their effects on information retrieval. J Med Libr Assoc. 2019;107(2):210–21. 10.5195/jmla.2019.567
  • 42. Nussbaumer-Streit B, Klerings I, Wagner G, et al.: Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11. 10.1016/j.jclinepi.2018.05.022
  • 43. Ewald H, Klerings I, Wagner G, et al.: Abbreviated and comprehensive literature searches led to identical or very similar effect estimates: a meta-epidemiological study. J Clin Epidemiol. 2020;128:1–12. 10.1016/j.jclinepi.2020.08.002
  • 44. James KL, Randall NP, Haddaway NR: A methodology for systematic mapping in environmental sciences. Environ Evid. 2016;5:7. 10.1186/s13750-016-0059-6
F1000Res. 2021 Sep 23. doi: 10.5256/f1000research.77208.r93320

Reviewer response for version 3

David Moher 1,2

The authors have addressed the concerns I raised in my second review.

Is the study design appropriate for the research question?

Yes

Is the rationale for, and objectives of, the study clearly described?

Yes

Are sufficient details of the methods provided to allow replication by others?

Partly

Are the datasets clearly presented in a useable and accessible format?

Partly

Reviewer Expertise:

Systematic reviews; open science; reporting guidelines.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2021 Aug 18. doi: 10.5256/f1000research.55984.r91228

Reviewer response for version 2

Julie Glanville 1

I have read the revised version of the study protocol and I am happy with the changes that have been made.

Is the study design appropriate for the research question?

Yes

Is the rationale for, and objectives of, the study clearly described?

Yes

Are sufficient details of the methods provided to allow replication by others?

Partly

Are the datasets clearly presented in a useable and accessible format?

Not applicable

Reviewer Expertise:

Information retrieval for evidence identification for systematic reviews.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2021 Aug 18. doi: 10.5256/f1000research.55984.r91227

Reviewer response for version 2

David Moher 1,2

I think the revisions, particularly around the scoping review, are much improved.

In terms of the Delphi study, I think details are still missing. For example, it is unclear how many rounds will be given. After round 1, what consensus rate (e.g., 75%) will be used to drop an item from subsequent rounds? Overall, I think more details are required for the proposed Delphi methods. My recommendation is to read these papers:

  • Examples of Delphi studies: Vogel  et al., 2019 1, Santaguida  et al., 2018 2;

  • General methodological issues about Delphi: Okoli  et al., 2004 3, Diamond  et al., 2014 4

One final minor detail. In the study status section of the paper, the authors state they started in November 2020. Is this correct?

Is the study design appropriate for the research question?

Yes

Is the rationale for, and objectives of, the study clearly described?

Yes

Are sufficient details of the methods provided to allow replication by others?

Partly

Are the datasets clearly presented in a useable and accessible format?

Partly

Reviewer Expertise:

Systematic reviews; open science; reporting guidelines.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Vogel et al.: A Delphi study to build consensus on the definition and use of big data in obesity research. Int J Obes (Lond). 2019;43(12):2573–2586. 10.1038/s41366-018-0313-9
  • 2. Santaguida et al.: Protocol for a Delphi consensus exercise to identify a core set of criteria for selecting health related outcome measures (HROM) to be used in primary health care. BMC Family Practice. 2018;19(1). 10.1186/s12875-018-0831-5
  • 3. Okoli C, Pawlowski SD: The Delphi method as a research tool: an example, design considerations and applications. Information & Management. 2004;42(1):15–29. 10.1016/j.im.2003.11.002
  • 4. Diamond IR, Grant RC, Feldman BM, et al.: Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–9. 10.1016/j.jclinepi.2013.12.002
F1000Res. 2021 Aug 29.
Julian Hirt 1

Dear David Moher

Thank you very much for your helpful comments and references. We revised our manuscript with respect to methodological details of the Delphi study and now specified the number of rounds we expect to perform, the consensus rate, anonymity of participants, and expected non-response.

Reviewer comment: I think the revisions, particularly around the scoping review, are much improved.

In terms of the Delphi study, I think details are still missing. For example, it is unclear how many rounds will be given. After round 1, what consensus rate (e.g., 75%) will be used to drop an item from subsequent rounds? Overall, I think more details are required for the proposed Delphi methods. My recommendation is to read these papers:

Examples of Delphi studies: Vogel et al., 2019 1, Santaguida et al., 2018 2;

General methodological issues about Delphi: Okoli et al., 2004 3, Diamond et al., 2014 4

Authors’ response: We added “Based on the results of our scoping review, we will formulate draft recommendations for the first Delphi round. Experts will be invited to rate their agreement with the draft recommendations using a four-point Likert scale (strongly agree – agree – disagree – strongly disagree). If experts vote disagree/strongly disagree, they will be required to comment on their reasons and/or give constructive feedback. We consider a recommendation as consented when at least 75% of the experts agree/strongly agree. All other recommendations will be adapted for the next Delphi round. This adaptation will be based on the comments collected from the experts and, if necessary, on discussion via video conference.

There are items where we will not directly propose recommendations, e.g., if the results of our scoping review do not allow it or if there are several equally valid options (e.g., for terminology). In these cases, we will either ask the Delphi experts for their experiences and perspectives or let them vote on several options. We will use the resulting answers to formulate draft recommendations, which will be entered into the Delphi consensus process (see above). Therefore, our Delphi study may comprise qualitative and quantitative aspects.

We will limit the number of Delphi rounds to a maximum of four rounds. Should there be no consensus for any of the items by the end of the fourth round, we will report the results but not give any recommendations.

Expert assessments will be anonymous among experts but open to the study team. We expect a low non-response rate since experts' participation is indicative of their interest in our study.”

One final minor detail. In the study status section of the paper, the authors state they started in November 2020. Is this correct?

Authors’ response: Yes, we started to work on this scoping review in November by doing the initial searches. In the revised version, we reworded to be clearer on that issue: “We conducted the initial search for the scoping review in November 2020 and expect to complete the Delphi study in 2022.”

F1000Res. 2021 Apr 7. doi: 10.5256/f1000research.30208.r81307

Reviewer response for version 1

David Moher 1,2

The authors are proposing new guidance on citation tracking. They will achieve this by completing a scoping review and subsequently a survey. The authors propose using appropriate methods (and reporting) for conducting the scoping review. I found the methods for the survey less clear and hope my comments/questions, below, are helpful in revising the protocol.

Questions/comments:

  1. A more conceptual question is to what degree this initiative overlaps and differs from PRISMA-S (Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, Koffel JB; PRISMA-S Group. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev. 2021 Jan 26;10(1):39. doi: 10.1186/s13643-020-01542-z) 1.

  2. For example, item 5 of PRISMA-S – Citation searching – states “Citation searching 5 Indicate whether cited references or citing references were examined, and describe any methods used for locating cited/citing references (e.g., browsing reference lists, using a citation index, setting up email alerts for references citing included studies).” I think this is important so as not to potentially confuse readers. Indeed, should the current initiative be seen as an implementation of PRISMA-S?

  3. I had more difficulty understanding the proposed survey. For me, much of the methods were missing. I understand the recruitment (well reported). I would recommend the authors include the PRISMA-S group unless there is a lot of overlap between PRISMA-S and the other groups mentioned.

  4. It was not clear to me how long completing the survey will take – 15 minutes or 45 minutes? Similarly, will the respondents receive an incentive for completing the survey?

  5. Why ‘simply’ a survey rather than a Delphi (or modified) approach? Will the survey (or Delphi) be pilot tested for question and response option clarity and language?

  6. The data analysis describes cross tables. I assume the authors mean cross tabulations which can be more than descriptive. Will the authors be conducting Chi-square analysis or other analytical approaches of the cross tabulations (e.g., p values)?

  7. The ethical concerns section is likely jurisdiction specific. In my setting ethics would be required. Can the authors explicitly indicate whether ethics is required or not. The section is currently vague on this critical issue. 

  8. For the survey, do the authors have an estimated sample size they are aiming for. Similarly, do the authors have an estimated response rate they are aiming for?

Is the study design appropriate for the research question?

Yes

Is the rationale for, and objectives of, the study clearly described?

Yes

Are sufficient details of the methods provided to allow replication by others?

Partly

Are the datasets clearly presented in a useable and accessible format?

Partly

Reviewer Expertise:

Systematic reviews; open science; reporting guidelines.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Rethlefsen ML, Kirtley S, Waffenschmidt S, et al.: PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev. 2021;10(1):39. 10.1186/s13643-020-01542-z
F1000Res. 2021 Jul 9.
Julian Hirt 1

Dear David Moher

We are grateful for your reviewer report and, in particular, for your helpful suggestions to derive evidence-based recommendations from our study. Below, we provide a detailed point-by-point response with changes that we implemented in version 2 of our manuscript.

You will see that in response to your comments, we replaced the planned survey with a planned Delphi study to address the original objective. This change also entails the reformulation of our research questions and the intent to publish the final scoping review and the results of the Delphi study separately.

We hope that these changes fully meet your concerns.

Sincerely yours,

Julian Hirt, Thomas Nordhausen, Christian Appenzeller-Herzog, and Hannah Ewald

Reviewer comment: A more conceptual question is to what degree this initiative overlaps and differs from PRISMA-S (Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, Koffel JB; PRISMA-S Group. PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst Rev. 2021 Jan 26;10(1):39. doi: 10.1186/s13643-020-01542-z).

Authors’ response: PRISMA-S is a reporting guideline for systematic literature searches. However, it lacks precise recommendations on how to report citation tracking methods and results. By conducting a Delphi study with selected experts in the field (see our answer below for details), we aim at developing more detailed recommendations on adequately reporting citation tracking methods as part of systematic literature searching. This may complement recommendations on reporting given by the PRISMA-S group or could even be integrated in future versions. For the Delphi, we will invite the PRISMA-S group to best use synergies and experiences.

Reviewer comment: For example, item 5 of PRIMSA-S – Citation searching - states “Citation searching 5 Indicate whether cited references or citing references were examined, and describe any methods used for locating cited/citing references (e.g., browsing reference lists, using a citation index, setting up email alerts for references citing included studies).” I think this is important so as not to potentially confuse readers. Indeed, should the current initiative be seen as an implementation of PRISMA-S?

Authors’ response: See answer above. An important detail for the reporting of our study: since we will use citation tracking as part of our information retrieval, we will use PRISMA-S to guide our reporting of the scoping review. In the revised version, we reference it accordingly: “We will document our search strategy according to PRISMA-S”.

Reviewer comment: I had more difficulty understanding the proposed survey. For me, much of the methods were missing. I understand the recruitment (well reported). I would recommend the authors include the PRISMA-S group unless there is a lot of overlap between PRISMA-S and the other groups mentioned.

Authors’ response: As detailed further below, the survey will be replaced by a Delphi study. So, we replaced the survey methods with more detailed methods for the Delphi study. Concerning PRISMA-S, we agree with the reviewer and will ask the PRISMA-S working group to participate in our Delphi study (see below).

Reviewer comment: It was not clear to me how long the authors will make completing the survey– 15 minutes of 45 minutes? Similarly, will the respondents receive an incentive for completing the survey?

Authors’ response: Following your recommendation below, we now plan to conduct an expert Delphi study containing multiple Delphi rounds. We expect that experts will invest around 30 to 90 minutes per Delphi round depending on the underpinning aim of the Delphi round as well as experts’ familiarity and experiences with the topic. Participants will not receive an incentive for participation.

Reviewer comment: Why ‘simply’ a survey rather than a Delphi (or modified) approach? Will the survey (or Delphi) be pilot tested for question and response option clarity and language?

Authors’ response: This is an important point and we thank the reviewer for raising it. We agree that a Delphi study containing several Delphi rounds is suitable to collect the perspectives of international experts on citation tracking, promote discussions on the topic as well as to derive consensus recommendations for future practice and research on the use of citation tracking in systematic literature searching for health-related topics. We subjected our protocol to a major revision based on this point and now outline the methods for the Delphi. As stated in the revised text, we will pilot test and discuss our Delphi items with a person experienced in literature searching but who is not an author and not involved in the Delphi study.

Reviewer comment: The data analysis describes cross tables. I assume the authors mean cross tabulations which can be more than descriptive. Will the authors be conducting Chi-square analysis or other analytical approaches of the cross tabulations (e.g., p values)?

Authors’ response: We revised the corresponding section to address a suitable type of analysis for our data retrieved in Delphi rounds. For free text answers and statements of experts, we will use thematic categorisation. For votes whose results are numeric or can be converted into numbers, we will use descriptive statistics.

Reviewer comment: The ethical concerns section is likely jurisdiction specific. In my setting ethics would be required. Can the authors explicitly indicate whether ethics is required or not. The section is currently vague on this critical issue.

Authors’ response: We added details to the ethical concerns section. With regard to the Swiss Human Research Act, our research does not concern human diseases and the structure and function of the human body. We will therefore not apply for ethical approval of the expert Delphi study.

Reviewer comment: For the survey, do the authors have an estimated sample size they are aiming for. Similarly, do the authors have an estimated response rate they are aiming for?

Authors’ response: This answer concerns the recruitment of experts for the Delphi study, not the survey. We intend to recruit at least 15 participants by using our stepwise approach for recruitment, the person-based approach (contacting authors of pertinent articles identified during the literature search as well as experts from authors’ professional networks) and the organisation-based approach (contacting national and international organisations and systematic review collaborations).

F1000Res. 2021 Jan 5. doi: 10.5256/f1000research.30208.r76282

Reviewer response for version 1

Julie Glanville 1

The authors propose to conduct a scoping review of the use of citation tracking techniques to identify research evidence to inform systematic reviews and also to survey experts on their use of citation tracking.

My suggestions for action on the proposal are as follows:

  1. Introduction: 'These references are usually eligible for inclusion into the review and known at the start of the citation search 10–12.'  I think this could be reworded for clarity along the following lines: 

    'These references are usually eligible for inclusion into the review and some may be known at the beginning of the review and others may emerge as eligible records following the main database searches 10–12.'

    Then perhaps you may not need the sentence that follows the one I am commenting on.

  2. Introduction: 'The taxonomy used to describe the…'. I think it might be better to use a word such as  'terminology' rather than 'taxonomy'. 

  3. Figure 1. The figure would be more helpful if it could show the relationships of the indirect citation tracking in the Figure 1b perhaps by colour coding. Please check diagrams in the following references which incorporate chronology into the  citation picture as well as showing the indirect citation tracking clearly.

    Belter CW. A relevance ranking method for citation-based search results. Scientometrics. 2017;112(2):731-46. 1

    Janssens A, Gwinn M, Brockman JE, Powell K, Goodman M. Novel citation-based search method for scientific literature: a validation study. BMC Med Res Methodol. 2020;20(1):25. 2

  4. Introduction: 'However, recent guidance suggests that combining several citation tracking methods (e.g. screening cited, citing, co-cited and co-citing references) may be the most effective way to use citation tracking for systematic reviewing. '  Please cite the guidance. 

  5. Introduction: ' It rather depends on a variety of factors. For instance, citation tracking may be especially beneficial in case of (i) complex searches (e.g. for reviews on public health topics), (ii) searching for health outcome measurement instruments, or (iii) research areas without consistent taxonomy, with vocabulary overlaps with other fields, or with a lack of index terms in databases (e.g. methodological topics)'.  These are all important issues and each deserves a bit more description so that readers can understand the differences. Again I think the word 'taxonomy' should be replaced with 'terminology'. 

  6. Introduction: current 'topical reviews' - suggest replacing with 'recent reviews on this topic'. Then please add in more citations so that 'reviews' can be supported with more than one reference.  

  7. Introduction: 'health-related systematic literature searching'.  This occurs three times and for each occurrence I suggest rewording as follows for clarity: 'literature searching for systematic reviews of health and health-related topics'. 

  8. Eligibility criteria. 'evidence retrieval method'.  Suggest that 'method' is not needed as the sentence has 'means'. 

  9. 'Eligible studies need to have a health-related context. Studies without an explicitly specified research context are also eligible. '  This seems contradictory.  Why are studies in education topics for example not useful if they are looking at the methods rather than the topic? Just including papers where the topic is not clear seems arbitrary when there may be much to learn from a paper even if it has an engineering topic, say.  Perhaps reword along the lines of 'although studies undertaken in the context of health-related topics are the main focus, studies of citation tracking in other literatures will also be eligible where the focus is on exploring the methods rather than the subject topic'. 

  10. Table 1. Publication Type cannot be 'Any' when there are publication types listed as ineligible. The authors should list eligible study types or use some sort of exception wording.

  11. Information sources. I am not sure 'free web searching via Google Scholar' is a helpful description, perhaps just say 'searches of Google Scholar' here and further down where it is mentioned again. 

  12. 'For citation tracking, we will use Scopus, as this database seems to cover the largest number of relevant citations for the purpose of our review 30. '  There is quite a lot of research that reports that other resources such as Google Scholar or Microsoft Academic provide wider coverage - the authors may wish to consider searching some of the free resources as well (since Scopus alone is not going to find the largest number of results) or finding other reasons for the use of a Scopus only approach. 

  13. 'MEDBIB-L' - I think there may be a typo here and it is MEDLIB-L? I think it may also need to be corrected in the Dissemination section. 

  14. My understanding of the purpose of a protocol is that it should list what will happen and therefore the resources to be searched should be listed and there should be no examples which imply that other things may be added. So there should be no 'e.g.'.

  15. 'text mining approach' - the chapter describes many approaches - which one was used in this case? 

  16. 'parts of our textwords' - suggest 'some of our textwords'? 

  17. Box 1 - there is harmless redundancy in the search. For example co-citation is searched alone and in combination with other terms - it only needs to be searched alone.  

  18. Box 1 - with the focus on searching in the titles, the use of such close adjacency does not seem warranted - it would be more sensitive to use AND for title searching.

  19. Box 1 - it might be helpful in the absence of subject headings and the focus on searching mainly in titles to also search the author keywords field.

  20. Data charting process: I realise it is difficult to know what will happen but the authors' description is not very clear about their plans 'We aim for an iterative data extraction process, but in the final publication, we will provide a detailed overview of extracted data items.' What is an 'iterative data extraction process' - what might it look like and what does it involve?

  21. Data collection 'preferred taxonomy' - again, I suggest the word you mean is 'terminology' not 'taxonomy'. 

  22. Data analysis. 'To analyse the survey data, we will apply descriptive statistics based on frequencies, percentages, and cross tables.' I think it may be 'cross tabulations' rather than 'cross tables'? 

  23. Dissemination of results - it might be useful also to try to present results at conferences of systematic review experts - Cochrane Colloquium and the Health Technology Assessment International (HTAi) conferences. These organisations also have active information retrieval special interest groups.

  24. Conclusions: 'Depending on the available study landscape' - this seems rather vague to me - it would help the reader if the authors could be more explicit about what they mean. 

  25. 'consequently, on the quality of clinical care' - I think it might not only be clinical care that could be impacted - it might be any aspect of health care depending on what is being reviewed e.g. a service, a policy, a new method of organising staff etc.

  26. 'in a health-related context may prove relevant also for other academic fields such as social or environmental sciences' - I think that much of what you find may be generalisable to other disciplines, so I suggest that you state this earlier as a possibility.

Is the study design appropriate for the research question?

Yes

Is the rationale for, and objectives of, the study clearly described?

Yes

Are sufficient details of the methods provided to allow replication by others?

Partly

Are the datasets clearly presented in a useable and accessible format?

Not applicable

Reviewer Expertise:

Information retrieval for evidence identification for systematic reviews.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Belter CW: A relevance ranking method for citation-based search results. Scientometrics. 2017;112(2):731–746. doi: 10.1007/s11192-017-2406-y
  • 2. Janssens A, Gwinn M, Brockman JE, Powell K, Goodman M: Novel citation-based search method for scientific literature: a validation study. BMC Med Res Methodol. 2020;20(1):25. doi: 10.1186/s12874-020-0907-5
F1000Res. 2021 Jul 9.
Julian Hirt 1

Dear Julie Glanville,

We are grateful for your reviewer report and, in particular, for your helpful suggestions for improvement of our scoping review protocol. Below, we provide a detailed point-by-point response with changes that we implemented in version 2 of our manuscript.

Please note that in response to the comments by reviewer 2 (David Moher), we replaced the planned survey with a planned Delphi study to address the original objective. This change also entails the reformulation of our research questions and the intent to publish the final scoping review and the results of the Delphi study separately.

We hope our revisions now fully address your concerns.

Sincerely yours,

Julian Hirt, Thomas Nordhausen, Christian Appenzeller-Herzog, and Hannah Ewald

Reviewer comment: Introduction: 'These references are usually eligible for inclusion into the review and known at the start of the citation search10–12.' I think this could be reworded for clarity along the following lines:

'These references are usually eligible for inclusion into the review and some may be known at the beginning of the review and others may emerge as eligible records following the main database searches10–12.'

Then perhaps you may not need the sentence that follows the one I am commenting on.

Authors’ response: We adapted this sentence and the subsequent one according to your suggestion.

Reviewer comment: Introduction: 'The taxonomy used to describe the…'. I think it might be better to use a word such as 'terminology' rather than 'taxonomy'.

Authors’ response: Thank you. We now use terminology instead of taxonomy throughout the manuscript.

Reviewer comment: Figure 1. The figure would be more helpful if it could show the relationships of the indirect citation tracking in Figure 1b, perhaps by colour coding. Please check diagrams in the following references which incorporate chronology into the citation picture as well as showing the indirect citation tracking clearly.

Belter CW. A relevance ranking method for citation-based search results. Scientometrics. 2017;112(2):731-46.1

Janssens A, Gwinn M, Brockman JE, Powell K, Goodman M. Novel citation-based search method for scientific literature: a validation study. BMC Med Res Methodol. 2020;20(1):25.2

Authors’ response: We have colour-coded the four citation tracking techniques as suggested. We had already consulted the citation tracking figures of Belter et al. and Janssens et al., amongst others. Furthermore, we have added text to the figure caption. We feel that our figure is now clearer with respect to chronology.

Reviewer comment: Introduction: 'However, recent guidance suggests that combining several citation tracking methods (e.g. screening cited, citing, co-cited and co-citing references) may be the most effective way to use citation tracking for systematic reviewing.' Please cite the guidance.

Authors’ response: We added a reference.

Reviewer comment: Introduction: ' It rather depends on a variety of factors. For instance, citation tracking may be especially beneficial in case of (i) complex searches (e.g. for reviews on public health topics), (ii) searching for health outcome measurement instruments, or (iii) research areas without consistent taxonomy, with vocabulary overlaps with other fields, or with a lack of index terms in databases (e.g. methodological topics)'. These are all important issues and each deserves a bit more description so that readers can understand the differences. Again I think the word 'taxonomy' should be replaced with 'terminology'.

Authors’ response: Thank you for this point! We have revised and elaborated this section.

Reviewer comment: Introduction: current 'topical reviews' - suggest replacing with 'recent reviews on this topic'. Then please add in more citations so that 'reviews' can be supported with more than one reference. 

Authors’ response: We adapted the sentence according to your suggestion, thank you.

Reviewer comment: Introduction: 'health-related systematic literature searching'. This occurs three times and for each occurrence I suggest rewording as follows for clarity: 'literature searching for systematic reviews of health and health-related topics'.

Authors’ response: Thank you for your suggestion. We now use “systematic literature searching for health-related topics”.

Reviewer comment: Eligibility criteria. 'evidence retrieval method'. Suggest that 'method' is not needed as the sentence has 'means'.

Authors’ response: We deleted the word ‘method’.

Reviewer comment: 'Eligible studies need to have a health-related context. Studies without an explicitly specified research context are also eligible.' This seems contradictory. Why are studies in education topics for example not useful if they are looking at the methods rather than the topic? Just including papers where the topic is not clear seems arbitrary when there may be much to learn from a paper even if it has an engineering topic, say. Perhaps reword along the lines of 'although studies undertaken in the context of health-related topics are the main focus, studies of citation tracking in other literatures will also be eligible where the focus is on exploring the methods rather than the subject topic'.

Authors’ response: We agree that it is interesting to learn about citation tracking from other disciplines. However, we have decided to focus on studies in a health-related context because this enables us to (i) consider the context in which the studies were conducted and to (ii) specifically direct our conclusions and recommendations to the health context. Furthermore, we concentrate on health-related studies for practical reasons, given the high number of references that would otherwise need screening. We deleted the sentence ‘Studies without an explicitly specified research context are also eligible’ to avoid misunderstandings.

On that note we also improved our exclusion criteria by adding this criterion: ‘any study only assessing the benefit of combined search methods (whereas the isolated benefit of citation tracking cannot be extracted)’.

Reviewer comment: Table 1. Publication Type cannot be 'Any' when there are publication types listed as ineligible. The authors should list eligible study types or use some sort of exception wording.

Authors’ response: We now use ‘Any reports of empirical studies’.

Reviewer comment: Information sources. I am not sure 'free web searching via Google Scholar' is a helpful description, perhaps just say 'searches of Google Scholar' here and further down where it is mentioned again.

Authors’ response: We now use ‘web searching’ instead of ‘free web searching’ throughout the manuscript.

Reviewer comment: 'For citation tracking, we will use Scopus, as this database seems to cover the largest number of relevant citations for the purpose of our review30.' There is quite a lot of research that reports that other resources such as Google Scholar or Microsoft Academic provide wider coverage - the authors may wish to consider searching some of the free resources as well (since Scopus alone is not going to find the largest number of results) or finding other reasons for the use of a Scopus only approach.

Authors’ response: We agree and now plan forward citation tracking using a triple approach: Scopus, Web of Science, and Google Scholar. We will iteratively repeat citation tracking on newly identified eligible records until no further eligible references are identified. We now describe this new approach in the methods.
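
To make the iterative procedure described in this response easier to follow, here is a minimal sketch of such a loop. It is illustrative only: the helper functions fetch_citing_records and is_eligible are hypothetical placeholders and do not correspond to any actual Scopus, Web of Science, or Google Scholar API.

```python
# Illustrative sketch of iterative forward citation tracking, assuming records
# are identified by hashable keys such as DOI strings. The two helpers below
# are hypothetical placeholders, not real database API calls.

def fetch_citing_records(record, source):
    """Hypothetical: return the records that cite `record` in the given source."""
    raise NotImplementedError

def is_eligible(record):
    """Hypothetical: apply the review's eligibility criteria to a record."""
    raise NotImplementedError

def iterative_forward_citation_tracking(seed_records,
                                        sources=("Scopus", "Web of Science", "Google Scholar")):
    included = set(seed_records)   # records already judged eligible
    to_track = list(seed_records)  # records whose citing articles still need screening
    while to_track:                # stop once no newly included record remains to be tracked
        record = to_track.pop()
        for source in sources:
            for citing in fetch_citing_records(record, source):
                if citing not in included and is_eligible(citing):
                    included.add(citing)
                    to_track.append(citing)  # track newly included records in the next round
    return included
```

In practice, deduplication across sources and independent screening would replace the simple is_eligible check, but the loop structure (track, screen, add, repeat until nothing new is found) stays the same.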

Reviewer comment: 'MEDBIB-L' - I think there may be a typo here and it is MEDLIB-L? I think it may also need to be corrected in the Dissemination section.

Authors’ response: The correct name is MEDIBIB-L (the “I” was missing), a mailing list for medical libraries in the German-speaking area (library = Bibliothek; https://lists.uni-due.de/mailman/listinfo/medbib-l).

Reviewer comment: My understanding of the purpose of a protocol is that it should list what will happen and therefore the resources to be searched should be listed and there should be no examples which imply that other things may be added. So there should be no 'e.g.'.

Authors’ response: We deleted ‘e.g.’.

Reviewer comment: 'text mining approach' - the chapter describes many approaches - which one was used in this case?

Authors’ response: We used several tools to improve our search. We now list all the tools that we used during the search drafting phase in our manuscript and here:

  • PubMed PubReMiner: https://hgserver2.amc.nl/cgi-bin/miner/miner2.cgi

  • AntConc 3.5.7 (Windows), developed by Laurence Anthony, Faculty of Science and Engineering, Waseda University, Japan

  • Yale MeSH Analyzer: http://mesh.med.yale.edu/

  • TerMine: http://www.nactem.ac.uk/software/termine/

  • Text Analyzer: https://www.online-utility.org/text/analyzer.jsp

  • Voyant: https://voyant-tools.org/

  • VOSviewer: https://www.vosviewer.com

Reviewer comment: 'parts of our textwords' - suggest 'some of our textwords'?

Authors’ response: We agree, thank you.

Reviewer comment: Box 1 - there is harmless redundancy in the search. For example co-citation is searched alone and in combination with other terms - it only needs to be searched alone. 

Authors’ response: Yes, these are remnants of a previously more specific search and indeed now redundant. Since we have already run the searches, we would prefer to leave the search string exactly as we ran it for precise documentation. However, we noted this in our search files in case of a later update.

Reviewer comment: Box 1 - with the focus on searching in the titles, the use of such close adjacency does not seem warranted - it would be more sensitive to use AND for title searching.

Authors’ response: Again, this is very good input, which we noted in our search files in case of a later update. Unfortunately, we have already run the searches and completed screening. We are, however, confident that any article that we missed due to the close adjacency will be retrieved through our extensive citation tracking, web searches or expert contacting.

Reviewer comment: Box 1 - it might be helpful in the absence of subject headings and the focus on searching mainly in titles to also search the author keywords field.

Authors’ response: Thank you for the sensible suggestion which we also noted in our search files in case of a later update.

Reviewer comment: Data charting process: I realise it is difficult to know what will happen but the authors' description is not very clear about their plans 'We aim for an iterative data extraction process, but in the final publication, we will provide a detailed overview of extracted data items.' What is an 'iterative data extraction process' - what might it look like and what does it involve?

Authors’ response: We revised this paragraph and provide more details: ‘Since we expect heterogeneous studies in terms of aim, design, and methods, we aim for an iterative data extraction process. This allows a flexible and study-specific data extraction process, e.g. by adding previously neglected data extraction items that might contribute to the overall body of knowledge to the data extraction form.’

Reviewer comment: Data collection 'preferred taxonomy' - again, I suggest the word you mean is 'terminology' not 'taxonomy'.

Authors’ response: Indeed, thank you.

Reviewer comment: Data analysis. 'To analyse the survey data, we will apply descriptive statistics based on frequencies, percentages, and cross tables.' I think it may be 'cross tabulations' rather than 'cross tables'?

Authors’ response: We revised this paragraph to describe a suitable type of analysis for the data retrieved in the Delphi rounds. For free-text answers and statements of experts, we will use thematic categorisation. For voting results that are numerical or can be converted into numbers, we will use descriptive statistics.
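
As a purely illustrative sketch of such descriptive statistics, the following pandas snippet computes frequencies, row percentages, and a consensus flag from hypothetical Delphi voting data; the column names and the 75% agreement threshold are assumptions made for the example, not values taken from the protocol.

```python
# Illustrative only: descriptive statistics for hypothetical Delphi voting data.
import pandas as pd

votes = pd.DataFrame({
    "statement": ["S1", "S1", "S1", "S2", "S2", "S2"],   # hypothetical statement IDs
    "round":     [1, 1, 1, 1, 1, 1],                     # Delphi round number
    "vote":      ["agree", "agree", "disagree", "agree", "disagree", "disagree"],
})

# Cross tabulation of votes per statement: absolute frequencies and row percentages
counts = pd.crosstab(votes["statement"], votes["vote"])
percentages = pd.crosstab(votes["statement"], votes["vote"], normalize="index") * 100

# Flag statements reaching an assumed consensus threshold of 75% agreement
consensus_reached = percentages["agree"] >= 75

print(counts, percentages.round(1), consensus_reached, sep="\n\n")
```

The same cross tabulations could be repeated per round to show how agreement develops between Delphi rounds.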

Reviewer comment: Dissemination of results - it might be useful also to try to present results at conferences of systematic review experts - Cochrane Colloquium and the Health Technology Assessment International (HTAi) conferences. These organisations also have active information retrieval special interest groups.

Authors’ response: Thank you, we added these conferences to our list of potential conferences.

Reviewer comment: Conclusions: 'Depending on the available study landscape' - this seems rather vague to me - it would help the reader if the authors could be more explicit about what they mean.

Authors’ response: We deleted ‘Depending on the available study landscape’ to be more precise.

Reviewer comment: 'consequently, on the quality of clinical care' - I think it might not only be clinical care that could be impacted - it might be any aspect of health care depending on what is being reviewed e.g. a service, a policy, a new method of organising staff etc.

Authors’ response: We agree and revised the sentence to ‘health care’ instead of ‘clinical care’.

Reviewer comment: 'in a health-related context may prove relevant also for other academic fields such as social or environmental sciences' - I think that much of what you find may be generalisable to other disciplines, so I suggest that you state this earlier as a possibility.

Authors’ response: We agree that our results may indeed prove relevant to other disciplines. However, we decided to narrow our research efforts to the health-related fields, so we think this should be rather a part of the discussion/outlook than earlier in the manuscript. We changed the sentence to underline that aspect: “Although we solely focus on a health-related context, it is possible that some of the recommendations developed during this project may prove relevant also for other academic fields such as social or environmental sciences”.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Availability Statement

    Underlying data

    No underlying data are associated with this article.

    Reporting guidelines

    Open Science Framework (OSF): PRISMA-P checklist for ‘Using citation tracking for systematic literature searching - study protocol for a scoping review of methodological studies and a Delphi study’, https://doi.org/10.17605/OSF.IO/7ETYD 23.

    Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).

