Objectives
This is a protocol for a Cochrane Review (methodology). The objectives are as follows:
To evaluate the effects of educational interventions for improving health‐related literature searching skills of health professionals and students.
Background
The identification and critical appraisal of available scientific evidence through systematic literature searching constitute a fundamental pillar of evidence‐based healthcare [1, 2]. Systematic literature searches are used to identify relevant literature in research and clinical practice, with different primary aims. Sensitive searches, e.g. for systematic reviews informing decision‐making in clinical practice or health policy, aim to be comprehensive, as they are intended to identify as much of the relevant evidence as possible. Such search strategies therefore need to be conducted with high methodological rigour, minimizing the risk of overlooking relevant evidence with the potential to change conclusions. Precise searches, e.g. for addressing a patient‐related issue in clinical practice with limited time and knowledge resources, aim to be less comprehensive, as they are intended to find the key evidence for resolving the respective question. Such search strategies need to be conducted with high efficiency, achieving their aim with a minimal amount of time and effort [3, 4].
Therefore, proficiency in advanced literature searching and the formulation of search strategies appropriately addressing the respective search approach/aim is recognized as a core competence for (future) health professionals, and possesses substantial scientific, practical, and ethical relevance [5, 6, 7].
Description of the problem or issue
For searches aiming at high sensitivity, shortcomings in the quality of literature searches have been demonstrated, even in systematic reviews [8, 9]. For example, 70% to 90% of search strategies in Cochrane reviews or other systematic reviews contain at least one error, such as missing Medical Subject Headings (MeSH), unwarranted explosion of MeSH terms, irrelevant MeSH or free‐text terms, missed spelling variants, failure to tailor the search strategy to other databases, and misuse of logical operators. Furthermore, 50% to 80% of these errors potentially lower the recall of relevant studies and may impact the overall results of the review [6, 10, 11]. While international standards recommend involving librarians or information specialists in planning, reviewing, and conducting systematic literature searches [3, 6, 12], and evidence suggests positive effects on the quality and reporting of the search strategy [13, 14, 15], even searches for systematic reviews often lack such professional support [9, 14]. Potential reasons include a lack of perceived necessity, unawareness of this option, and limited availability of such specialists [14], highlighting the need to enable health professionals to search the available evidence independently.
There can be many reasons to identify evidence with more precise searches, including efficient information retrieval in clinical practice [16]. While the attitudes of health professionals are generally positive, many lack the practical skills required to implement evidence‐based practice [17]. These skill deficits include the ability to search databases for relevant health‐related evidence, and may be caused by insufficient formal training and limited time to search for and interpret evidence within clinical practice [18, 19, 20]. Consequently, finding relevant, reliable, and recent evidence can be a major challenge for health professionals, even though it is a prerequisite for providing safe and efficient care [5].
Description of the methods being investigated
Education and training have been proposed to promote the competencies of healthcare professionals in systematic literature searching [5]. While evidence‐based practice training may enhance general competencies among health professionals [21], the actual impact of these educational interventions on skills related to the methods of evidence‐based practice remains unclear [21, 22]. Evidence‐based practice training usually comprises several components that may influence each other, including the:
formulation of clinical or research questions;
planning, conduct, and reporting of search strategies;
critical appraisal of studies; and
communication, implementation, and evaluation of findings.
As a result, educational interventions can be very heterogeneous in terms of duration, intensity, learning material, and didactic approach [21].
When systematic literature searching is part of an overall evidence‐based practice training, literature searching skills might only be one of several outcomes of interest. In this regard, the inherent complexity of evidence‐based practice educational interventions poses significant challenges in isolating the specific effects of individual components on measurable outcomes, such as literature searching skills. In our previous scoping review on studies assessing educational interventions that aim to improve health‐related literature searching skills [23], we identified six controlled trials and eight pre‐post trials. Study participants were students of various health professions and health professionals (mainly physicians). The educational formats of the interventions varied widely. To present the results of the scoping review, we clustered outcomes into two categories:
developing search strategies (e.g. identifying search concepts, selecting databases, applying Boolean operators); and
database searching skills (e.g. searching PubMed, MEDLINE, or CINAHL).
In addition to baseline and post‐intervention measurements, five studies reported long‐term follow‐up. Almost all studies adequately described their intervention procedure and delivery, but did not provide access to the educational materials. Only three studies described the expertise of the intervention facilitators. The results demonstrated a wide range of study populations, educational interventions, components, and outcomes [23].
While a systematic review from 2003 suggested that specific training may improve health professionals’ literature searching skills, the included studies lacked statistical power and had various methodological limitations [24]. A 2025 systematic review identified 10 studies that primarily compared the effectiveness of lectures and bedside teaching with either lectures alone or no intervention, specifically in the context of searching MEDLINE [25]. The findings indicated a positive effect of educational interventions on participants’ attitudes towards the intervention (moderate‐certainty evidence), with limited findings on knowledge, skills, satisfaction, and behavioural outcomes. Given that not all evidence from randomized studies has been considered (compared to our previous scoping review) [23], and subjective outcomes showed weak correlations with objectively measurable competencies [26], further investigation into the relationship between perceived and actual skill acquisition is warranted.
While preparing this systematic review protocol, we identified additional evidence syntheses on the topic that included randomized trials [24, 25, 27, 28, 29, 30, 31, 32]. In our own previous scoping review, we missed some relevant randomized trials [33, 34, 35, 36], mainly due to the challenge of searching for methods research [37]. Also, potentially eligible trials may have been published more recently that have not been covered by existing evidence syntheses. Thus, a more comprehensive systematic review on the topic following Cochrane methods is necessary, allowing us to comprehensively identify and systematically evaluate the available evidence from randomized trials.
How these methods might work
Educational interventions can be designed to be as simple as changes in teaching methods or as complex as comprehensive programmes. Regarding systematic literature searching, educational interventions can impart the theoretical knowledge required to develop, implement, and report appropriate search strategies; and provide an opportunity to practise the various steps involved in systematic literature searching [23]. Moreover, they have the potential to change awareness and attitudes regarding the topic (e.g. its importance for improving healthcare) and thereby to motivate health professionals to increasingly use literature searches in their field of work, which would further improve the acquired searching skills due to practical experience [23, 25, 38].
Educational interventions on systematic literature searching are provided in various formats (e.g. single or multiple sessions; face‐to‐face or group training; in‐person, online, or blended learning), focus on different topics (e.g. formulating research questions, developing search strings, selecting databases), and use various methods and materials (e.g. live demonstrations, handouts/worksheets, videos, discussion) [38, 39, 40, 41]. However, it remains unclear which method of delivery facilitates an actual improvement in literature searching skills.
Why it is important to do this review
High‐quality systematic literature searches are a core component of the methods of evidence‐based practice [5], and health systems worldwide benefit from evidence‐based healthcare [42]. To achieve this, professionals engaged in evidence‐based healthcare should be equipped with advanced searching skills. For this reason, political, healthcare, and educational institutions are interested in interventions to improve literature searching skills at various levels and for different interest‐holders.
A thorough and rigorous assessment of the literature, allowing conclusions based on studies using the best objective evaluation methods, is therefore needed. Our review is a fundamental step to systematically evaluate available educational interventions that aim to improve health‐related literature searching skills following the high‐quality Cochrane methods for systematic reviews. The results will show what educational interventions exist, and provide estimates of the effects of these educational interventions on the health‐related literature searching skills of health professionals and students. The results will also highlight associated research gaps. Our review may therefore also serve as a basis for the development of future educational interventions and the methodological architecture to evaluate them.
Objectives
To evaluate the effects of educational interventions for improving health‐related literature searching skills of health professionals and students.
Methods
Criteria for considering studies for this review
Types of studies
We will include any individually randomized controlled trials (RCTs) and cluster‐RCTs, including stepped‐wedge designs. We will exclude observational studies and non‐randomized studies of interventions.
Types of data
We will consider data from published and unpublished trials (including grey literature), whether reported as a journal article, preprint, thesis, registry entry, or conference proceeding (i.e. abstract‐only or poster). There will be no restrictions regarding publication language or publication year.
Types of methods
We will include all types of educational intervention (e.g. training, instruction, course, information, consultation, peer‐to‐peer review) and combinations of interventions (i.e. multicomponent interventions) explicitly aimed at improving skills in literature searching. The topic of the search must be at least partly health‐related. For example, we will include searches on medical questions and on questions combining medical and economic aspects, but exclude searches on purely economic questions.
Furthermore, we will include both active and inactive control interventions (e.g. standard of teaching, different or abbreviated variant of the educational intervention, no intervention).
There will be no restrictions on frequency, duration, delivery, co‐interventions, or any other aspects of the intervention and comparator intervention.
We will exclude trials in which literature searching was one topic, among others, of an educational intervention that did not explicitly aim at improving skills in literature searching; for example, interventions aimed at improving competencies in evidence‐based practice (e.g. evidence‐based practice courses). We will also exclude trials investigating the performance or effectiveness of search filters or search methods, such as database searching, hand‐searching, or citation searching alone (i.e. when the investigation is not part of the evaluation of an educational intervention for improving literature searching skills) and those trials investigating information retrieval or search support by librarians or information specialists.
The population of interest will be health professionals (including, but not restricted to, nurses, occupational therapists, pharmacists, physical therapists, physicians, psychologists, or other healthcare workers; with any professional qualification level, including assistants to them; and within a working field that is primarily clinical or scientific) and students (including undergraduate, graduate, vocational training, and continuing education students); regardless of their experience or expertise in literature searching; and with no restriction to age, gender, and setting. As recipients of the intervention, we will exclude populations that typically deliver or implement educational interventions to improve health‐related literature searching skills, i.e. lecturers, teachers, librarians, and information specialists or professionals. If the population of a given trial is mixed (i.e. a combined sample of eligible and ineligible populations), we will only consider the trial for inclusion when the eligible population constitutes the majority of the total sample, or if the results for an eligible population have been reported separately [43].
Types of outcome measures
We will only include trials with at least one outcome on literature searching skills that has been externally or objectively measured. In the absence of an established core outcome set to answer our review question, we will consider the following primary and secondary outcomes and measurements. We do not expect all eligible trials to also measure and report the secondary review outcomes, but we will consider them where available.
Primary outcomes
We will consider recall and search precision as primary/critical outcomes to assess the literature searching skills of health professionals and students.
Recall is defined as the proportion of relevant publications retrieved by a search, out of all relevant publications (also referred to as the true positive rate or sensitivity). Search precision is defined as the proportion of relevant publications retrieved by a search, out of all publications retrieved (also referred to as the positive predictive value). Based on our previous scoping review [23], we expect that these measures have been used in eligible trials; for recall, different reference standards have been used across trials (e.g. an existing, expert, or librarian search as reference or 'gold standard'), and we will consider and report any of these.
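Expressed as formulas (a restatement of the definitions above, not an additional eligibility requirement), with a purely illustrative worked example:

\[ \text{Recall} = \frac{\text{relevant publications retrieved}}{\text{all relevant publications}}, \qquad \text{Precision} = \frac{\text{relevant publications retrieved}}{\text{all publications retrieved}} \]

For instance, a search that retrieves 400 records, 40 of which are relevant, out of 50 relevant publications in total would have a recall of 40/50 = 0.80 and a precision of 40/400 = 0.10.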
We will group primary review outcomes into two sets of time points: 'post‐intervention' and 'follow‐up'. The 'post‐intervention' time point set will contain the first post‐intervention measurement time point following the intervention. The 'follow‐up' time point set will contain any later follow‐up measurement time point. If there are multiple follow‐up measurement time points (i.e. excluding the first post‐intervention time point), we will consider only the latest follow‐up time point that has been assessed for the 'follow‐up' time point set. We will exclude participant‐reported outcome measures (e.g. self‐perceived or self‐assessed knowledge, attitude, behaviour, and confidence in literature searching skills), as there is evidence of a weak correlation between self‐perceived and objectively assessed literature searching skills [26].
We will further exclude objective outcomes with no obvious or direct relation to literature searching skills (e.g. number or quantity of searches, number of database log‐ons, time spent searching, number of screened references, number of search hits only) and outcomes related to other steps of evidence‐based practice (e.g. question formulation or critical literature appraisal).
Secondary outcomes
As secondary/important outcomes, we will consider the quality of the search strategy and the relevance of the search retrieval. Based on our previous scoping review [23], we expect that, across eligible trials, these outcomes have been measured using (components of) the Fresno tool [44], or diverse numerical scoring assessments, most of which were self‐developed and used in single trials only; we will consider any of these. In case of multiple measurements for any of the outcomes, we will primarily consider measurements for which any type of validation is reported by the trial authors; if no measurement properties are reported, we will consider the measurement mentioned first in the methods section of the eligible primary study report (see Selection of studies for details).
We will group secondary review outcomes into two sets of time points: 'post‐intervention' and 'follow‐up'. The 'post‐intervention' time point set will contain the first post‐intervention measurement time point following the intervention. The 'follow‐up' time point set will contain any later follow‐up measurement time point. If there are multiple follow‐up measurement time points (i.e. excluding the first post‐intervention time point), we will consider only the latest follow‐up time point that has been assessed for the 'follow‐up' time point set.
Search methods for identification of studies
We will follow the guidance from the Cochrane Handbook for Systematic Reviews of Interventions and the Methodological Expectations for Cochrane Intervention Reviews (MECIR) to plan and inform the design of the search strategy for this review [45, 46]. For reporting the search methods, we will follow the Preferred Reporting Items for Systematic reviews and Meta‐Analyses literature search extension (PRISMA‐S) and the Terminology, Application, and Reporting of Citation Searching (TARCiS) statement [47, 48].
An Information Specialist (HE) and two review authors experienced in systematic literature searching (JH and TN) will select the information sources. One review author (JH) will run all searches (electronic database searches and searching for other resources) and will be responsible for deduplication and literature management throughout the review process using the current desktop version of Citavi [49].
Electronic searches
We will search the following:
MEDLINE® ALL via Ovid;
Embase via Ovid;
Cochrane Central Register of Controlled Trials (CENTRAL), including trial registry entries from clinicaltrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) [50];
Education Resources Information Center (ERIC) via Ovid;
Library and Information Sciences Abstracts (LISA) via EBSCO;
Library, Information Science & Technology Abstracts (LISTA) via ProQuest; and
the Web of Science Core Collection.
Wherever our university licences allow (i.e. Eastern Switzerland University of Applied Sciences, University of Basel, or University of Bern, Switzerland), we will search from database inception. The detailed search time frame will be given in the final review report.
An Information Specialist (HE) will conceptualize and develop the electronic database search strategy for MEDLINE/Ovid (preliminary version in Supplementary material 1) that will be informed by textwords and index terms used in eligible references of our previous scoping review [23], and eligible references of other reviews in the field that one review author (JH) identified in a preliminary search on PubMed and Google Scholar in May and October 2025 [24, 25, 27, 28, 29, 30, 31, 32, 51, 52]. Our database search strategy will contain textwords and index terms related to our intervention (type) of interest (i.e. education and literature searching) that will be combined with a methodological search filter. The methodological search filter will combine the "Cochrane Highly Sensitive Search Strategy for identifying randomized trials in MEDLINE: sensitivity‐ and precision‐maximizing version (2023 revision)" [53], and a pragmatic exclusion of review‐related publication types, which often introduce noise by detailing search strategies in their abstracts. No other limits will be used. We will use the Yale MeSH Analyzer [54], and SearchRefinery [55], to develop and refine the search strategy.
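For illustration only, a skeleton of the structure described above might look as follows in Ovid syntax. The index terms and textwords shown here are placeholders chosen for this sketch and are not the review's actual strategy, which is provided in Supplementary material 1; the actual methodological filter will be the full 2023 revision of the Cochrane Highly Sensitive Search Strategy rather than the abbreviated lines shown here.

1. exp Education/ or teaching.ti,ab. or training.ti,ab.
2. exp "Information Storage and Retrieval"/ or (literature adj3 search*).ti,ab. or "information literacy".ti,ab.
3. 1 and 2
4. randomized controlled trial.pt. or randomized.ab. or randomly.ab. or trial.ti.
5. 3 and 4
6. review.pt. or meta analysis.pt.
7. 5 not 6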
Two review authors (JH and TN) will review and verify the MEDLINE/Ovid search strategy using the Peer Review of Electronic Search Strategies [6]. The MEDLINE/Ovid search strategy will be translated using Polyglot Search [56], and, where necessary, adapted (i.e. index terms, proximity operators, and methodological search filters) for its use in Embase/Ovid, CENTRAL, ERIC/Ovid, LISA/EBSCO, LISTA/ProQuest, and the Web of Science Core Collection.
Searching other resources
Following the search and screening of search results retrieved by database searching, we will perform direct and indirect citation searching using the novel citation searching web application Co*Citation Network, which is currently under development [57]. The tool has been tested in a case study nested in a systematic review by two authors of this review (JH, MV) [58]. Co*Citation Network collects citing, cited, co‐citing, and co‐cited references based on seed references via OpenAlex, a bibliographic database of the non‐profit organisation OurResearch [59]. As seed references, we will use any eligible study report identified via database searching and evidence syntheses identified via preliminary searching (see Electronic searches for details). The results will be deduplicated against the hits that we will retrieve by database searching [47]. We will perform one iteration of citation searching. The findings of the citation searching, in addition to the database search, will contribute to a global research consortium that aims to evaluate the impact of indirect co‐citation searches compared to direct citation searches [57].
Finally, following the screening of search results retrieved by database and citation searching, we will ask experts in the field for additional trials or study reports. We will share the list of eligible study reports (i.e. retrieved by database and citation searching):
with corresponding authors of study reports that are eligible to this review; and
in pertinent mailing lists, i.e. Canadian Medical Libraries (CANMEDLIB), European Association for Health Information and Libraries (EAHIL), Expertsearching, and Information Retrieval Methods Group (IRMG), and will ask addressees of these mailing lists whether they are aware of additional trials or study reports that we have not identified.
The shared document will allow addressees to add new references via its comment function. We will share our list of eligible study reports once, with a 10‐day window for adding additional study reports, and will not send a reminder.
We will search for post‐publication amendments, including expressions of concern, errata, and corrigenda using our citation searching approach based on all eligible study reports; assuming that these publications will be cited by post‐publication amendments [3, 53]. We will specifically search for retractions of all eligible study reports using the Retraction Watch Database [60], based on individual study report identifier (i.e. PubMed identification (PMID) or digital object identifier (DOI)) or bibliographic information (e.g. author, title, or journal).
Data collection and analysis
Selection of studies
We will perform a pilot screening of 200 randomly selected references (title and abstract level, retrieved by database searching) involving all screeners of the review team (JH, MV, JV, JEM, MND, RM, MM, ACR, and TN). We will then discuss any disagreements, before we initiate the screening of all references and assign references to individual pairs of review authors. At least two review authors (of JH, MV, JV, JEM, MND, RM, MM, ACR, and TN) will then independently screen titles, abstracts, and full texts using the Rayyan web application [61]. Any reference with at least one suggestion for inclusion at title‐abstract level will undergo full‐text screening. We will translate the full texts of articles that are only available in a language other than English or German (the native language of all involved review authors) using the latest version of DeepL [62]. Disagreements (at the full‐text stage) will be resolved by discussion within each pair of review authors or by involving a third review author (JH or TN, one who was not involved in the primary screening for a specific reference). Three review authors (JH, GM, and TN) will discuss trials for which we identified one or more retracted reports, partially retracted reports, or expressions of concern and will consider excluding these trials from the review [3]. We will record and report the reasons for all excluded trials. Eligible references will be transferred to Covidence for data extraction [63]. In Covidence, we will merge multiple reports of the same trial (including errata and corrigenda or expressions of concern and retractions, if applicable), so that each trial, rather than each report, will be our unit of interest. For each trial, we will identify a primary study report that will be the one with the most comprehensive information on trial methods and results, e.g. a full journal article rather than a conference abstract.
Data extraction and management
We will extract the following information from each trial: bibliographic information including DOI and PMID, publication year, corresponding author and email contact, unit of randomization, interventional model (e.g. parallel assignment, stepped‐wedge assignment), blinding, number of arms, number and location of trial centres, country/countries of trial conduct, protocol and registration details (i.e. protocol webpage, trial registry identification), funding, setting characteristics (e.g. type and funding of organization, organization size), inclusion and exclusion criteria, participants (i.e. number, type, age, gender), intervention and comparison characteristics, components, and description (see details below), trial outcomes (i.e. name, type, measurement, time point), and results per outcome.
In cases where we identify errata and corrigenda, we will consider the most current version of the study report and trial results.
To specify the potential educational intervention complexity [64], we will use the template for intervention description and replication (TIDieR) for the extraction of intervention and comparison characteristics [65].
One review author (out of MV, JV, and JEM) will perform data extraction using Covidence extraction mode 1 (the recommended extraction mode for Cochrane reviews [63]), and a second review author will verify data extraction. We will resolve disagreements through discussion between each pair of review authors or by involving a third review author (JH or TN, one who was not involved in the verification of the primary data extraction for a specific trial).
To verify a consensus extraction from a full text that is only available in a language other than English or German and translated using the latest version of DeepL [62], we will look for, consult, and acknowledge a native speaker who is fluent in the relevant language.
Assessment of risk of bias in included studies
We will assess the risk of bias for each eligible trial at the outcome level (primary review outcomes only; separated for each measurement time point) using the current versions of the Cochrane risk of bias tool for (i) individually‐randomized parallel‐group or (ii) cluster‐randomized trials version 2 (RoB 2), the related guidance, and the publicly available Excel tool [66, 67]. We are interested in quantifying the effect of assignment to the interventions at baseline, regardless of whether the interventions are received as intended (the 'intention‐to‐treat effect') [66, 67].
Regarding the risk of bias for results from individually‐randomized parallel‐group trials, we will assess the following domains with the judgement options 'high risk of bias', 'some concerns', and 'low risk of bias', based on the response options for the individual signalling questions (i.e. 'yes', 'probably yes', 'probably no', 'no', and 'no information', where applicable):
bias arising from the randomization process;
bias due to deviations from intended interventions;
bias due to missing outcome data;
bias in measurement of the outcome; and
bias in selection of the reported result.
We will reach the overall risk of bias judgement by using the Excel tool's integrated algorithm, which is based on the assessment of the individual domains (i.e. overall low risk of bias when all domains are judged as low risk of bias; some concerns when at least one domain is judged as having some concerns and no domain is judged as high risk of bias; and high risk of bias when at least one domain is judged as high risk of bias or when multiple domains are judged as having some concerns) [67].
The risk of bias assessment for results from cluster‐randomized trials will be identical, except that we will assess an additional domain, i.e. (1b) bias arising from identification or recruitment of individual participants within clusters [67].
Two review authors (of JH, MND, RM, MM, ACR, GM, and TN) will independently assess the risk of bias. We will resolve any disagreements by involving a third review author (JH or RM, one who was not involved in the primary risk of bias assessment for a specific trial).
Measures of the effect of the methods
We expect continuous outcome result data only across eligible trials for both primary and secondary review outcomes [23]. The effect measure will therefore be the mean difference (MD), or the standardized mean difference (SMD) if different calculations, instruments, or scales were used for the same outcome measure.
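As a brief illustration of these effect measures (standard formulas for a two‐group comparison of a continuous outcome; RevMan additionally applies a small‐sample correction, Hedges' adjusted g, when calculating the SMD):

\[ \mathrm{MD} = \bar{x}_1 - \bar{x}_2, \qquad \mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

where \(\bar{x}_g\), \(s_g\), and \(n_g\) denote the mean, standard deviation, and number of participants in group \(g\).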
Unit of analysis issues
For randomized trials with parallel group design (the unit of randomization is typically the individual participant), we do not anticipate major unit of analysis issues. If a trial has multiple intervention groups, we will not include the same intervention group more than once in a meta‐analysis aiming at pairwise independent comparisons [68].
For the analysis of results from cluster‐randomized and stepped‐wedge trials, if any, we will follow, after seeking additional statistical consultation, the recommendations in Chapter 23 of the Cochrane Handbook for Systematic Reviews of Interventions that are appropriate to the design (i.e. effective sample sizes or inflating standard errors) [68].
For cluster‐randomized trials, we will extract adjusted measures of the effect estimates, where possible. If the study authors did not adjust for clustering, then we will adjust the raw data using an intraclass correlation coefficient (ICC) value. We may borrow an ICC value from a previous study or estimate the ICC ourselves [68].
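As a sketch of the effective sample size approach described in the Cochrane Handbook (the ICC value of 0.05 below is purely illustrative):

\[ \mathrm{DE} = 1 + (m - 1) \times \mathrm{ICC}, \qquad n_{\text{effective}} = \frac{n}{\mathrm{DE}} \]

where DE is the design effect and m the average cluster size. For example, a trial arm with 200 participants in clusters of average size 20 and an assumed ICC of 0.05 would have DE = 1 + 19 × 0.05 = 1.95 and an effective sample size of approximately 200/1.95 ≈ 103 participants.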
For stepped‐wedge designs, we will take into account the possibility of time trends, i.e. confounding by any variables that change over time [68]. We will only consider outcome results from these trials for meta‐analysis if the trial authors have appropriately adjusted for time trends.
Dealing with missing data
If result data are incompletely reported (e.g. missing outcome data as indicated by a study protocol, registry entry, or report of the study findings), reported narratively only or in insufficient detail (e.g. dichotomized data of continuous outcome measures), or if data on intervention characteristics are missing (e.g. details about intervention delivery or intervention material), we will request additional data from the corresponding author(s) via email. One reminder will be sent seven working days after the initial request. If we receive no answer or no additional data, we will note this and make no further efforts to collect missing data.
Assessment of heterogeneity
Given the potential diversity of the population‐intervention‐comparison‐outcome‐criteria in this review, and following our previous scoping review [23] and other related evidence syntheses in the field [25, 29, 51], we expect heterogeneity among the true intervention effects.
Three review authors (JH, GM, and TN) will assess and discuss the clinical heterogeneity based on the similarity and comparability of the population‐intervention‐comparison‐outcome‐criteria (including trial setting) of the eligible trials. This assessment will inform the meaningfulness of potential meta‐analyses on the primary review outcome and population‐intervention‐comparison‐outcome‐clusters for data synthesis (see Data synthesis for details).
We will explore statistical heterogeneity visually using forest plots to consider the direction and magnitude of effects and the degree of overlap between confidence intervals (CIs). In addition, we will base our interpretation on the I2 statistic among the trials in each analysis using Review Manager (RevMan) [69], but we will consider that there is substantial uncertainty in the value of the I2 statistic when only a small number of trials is summarized by meta‐analysis (which we expect based on our previous scoping review [23] and other related evidence syntheses in the field [25, 29, 51]). We will therefore also consider the P value from the Chi2 test using RevMan [69], following the guidance in Chapter 10 of the Cochrane Handbook for Systematic Reviews of Interventions [70].
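For reference, the I2 statistic reported by RevMan is derived from the Chi2 (Cochran's Q) statistic as:

\[ I^2 = \max\!\left(0,\ \frac{Q - \mathrm{df}}{Q}\right) \times 100\% \]

with df = k − 1 for k trials, which is why I2 carries substantial uncertainty when k is small.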
Assessment of reporting biases
For the primary review outcomes that will be combined by meta‐analysis (see Assessment of heterogeneity and Data synthesis for details), we will assess the risk of bias due to missing results (arising from reporting biases) using the current versions of the risk of bias due to missing evidence in a meta‐analysis (ROB‐ME) tool, the related guidance, and the publicly available template [71, 72].
The ROB‐ME tool follows a stepwise approach and will be applied without modification to all individual meta‐analyses that we may perform (step 1), followed by an assessment of which studies meeting the inclusion criteria for these meta‐analyses have missing results (step 2). We will then consider the potential for entire studies to be missing from the systematic review (step 3) and assess the risk of bias due to missing evidence in each meta‐analysis following the signalling questions of the within‐study and across‐study assessments (step 4). The risk of bias judgement will be reached as suggested by the tool's algorithm (i.e. 'low risk', 'some concerns', or 'high risk') [72]. No requests to corresponding authors will be made to collect additional information.
Two review authors (JH and TN) will independently assess the risk of bias due to missing evidence using the ROB‐ME tool. Disagreements will be resolved by involving a third review author (GM or RM).
For the primary review outcomes that will not be combined by meta‐analysis, two review authors (JH and TN) will generate an overview of the included trials, listing the trial outcomes that were planned to be assessed and the trial outcome results that were actually reported, using the following stepwise approach adapted from the ROB‐ME tool (step 2 [72]). During data extraction (see Data extraction and management for details), we will check the full texts of eligible study reports for information on a related study protocol or registry entry. If a study protocol or registry entry is available, one review author (JH, MV, or TN) will assess the selective reporting of outcomes and results by comparing the prespecified methods, as stated in the study protocol or registry entry or described in the methods section of an eligible study report, with the findings of the trial across all study reports. If no study protocol or registry entry is linked in the study report, we will not contact corresponding authors to ask whether a trial was prospectively registered and whether all comparisons and outcomes were reported.
We will not assess the risk of reporting bias regarding the secondary review outcomes and their results.
Data synthesis
Trial and intervention characteristics will be thematically categorized and narratively summarized. We will discuss the summary within the review author team.
To summarize the results on the primary review outcomes, and following the assessment of heterogeneity (see Assessment of heterogeneity for details), we will consider random‐effects meta‐analyses (inverse variance as the statistical method, restricted maximum likelihood (REML) as the heterogeneity estimator, and Wald‐type confidence intervals for the summary effect). For meta‐analyses, we will consider all eligible trials and trial outcome results, regardless of their overall risk of bias. We will report the number of participants and the means and standard deviations per group for each trial, in addition to the effect estimate based on MD or SMD with 95% CIs, depending on the calculations, instruments, or scales that were used to ascertain the outcome result data. Statistical heterogeneity will be calculated and described with the I2 statistic and the P value from the Chi2 test [70]. We will use RevMan to perform meta‐analyses [69].
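For transparency, a compact statement of the planned random‐effects inverse‐variance model (standard notation, consistent with the RevMan implementation described above):

\[ \hat{\theta}_{\mathrm{RE}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat{\tau}^2}, \qquad 95\%\ \mathrm{CI}: \ \hat{\theta}_{\mathrm{RE}} \pm 1.96 \sqrt{\frac{1}{\sum_i w_i}} \]

where \(\hat{\theta}_i\) and \(v_i\) are the effect estimate (MD or SMD) and its variance for trial i, and \(\hat{\tau}^2\) is the between‐trial variance estimated by REML; the last expression is the Wald‐type CI.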
When considerable clinical or statistical heterogeneity of trials or results is assessed, we will perform a synthesis without meta‐analysis (SWiM) [73], based on meaningful intervention and outcome grouping. We will also consider a SWiM based on meaningful intervention and outcome grouping for the summary of results on the secondary review outcomes. We will discuss the summary within the review author team.
For the primary review outcomes that will be combined by meta‐analysis (separated for the two time point sets, if applicable), we will combine the results and certainty assessment of the body of evidence, and present this information in a summary of findings table [74]. We will consider one comparison and create one summary of findings table for educational interventions versus inactive interventions (i.e. standard of teaching or no intervention) or active control interventions (e.g. different or abbreviated variant of the educational intervention) in health professionals and students.
We will assess the certainty of the evidence using the five GRADE considerations, i.e. 'risk of bias' (based on the overall risk of bias judgement for individual trial outcome results), 'consistency of effect', 'imprecision', 'indirectness', and 'publication bias' (or 'dissemination bias' by 2026 [75]); followed by a certainty rating of the body of evidence, i.e. 'high', 'moderate', 'low', or 'very low' [74]. We will consider the decision rules and recommendations for downgrading the five GRADE domains following Chapter 14 of the Cochrane Handbook for Systematic Reviews of Interventions [74], and the updated GRADE Handbook (which, in 2026, should replace the current GRADE Handbook [75]). We will document and report any justifications for downgrading decisions, and may add comments to improve the readers' understanding of our review.
Two review authors (out of JH, RM, GM, and TN) will independently perform the GRADE assessments of the certainty of the evidence using a simple Excel sheet that will contain a screenshot with effect estimates, including CIs and the number of participants and trials per outcome (as generated automatically in the GRADEpro tool based on results data and meta‐analysis in RevMan [69]), outcomes in cell rows, and the five GRADE domains and certainty assessment in cell columns, each with a free‐text field for rating explanations, comments, and justifications. Disagreements will be resolved by involving a third review author (JH, GM, or RM, one who was not involved in the primary GRADE assessment). Based on the consensus certainty assessment, we will create a final GRADE profile using the online GRADEpro tool [76], and the summary of findings table(s) will be automatically transferred to RevMan [69].
Subgroup analysis and investigation of heterogeneity
For the primary review outcomes that will be combined by meta‐analysis (see Assessment of heterogeneity and Data synthesis for details), we will conduct subgroup analyses on the population (i.e. health professionals and students) if there are at least three trials per subgroup category. Based on our previous scoping review [23], we expect to conduct the subgroup analysis using study‐level variables (where each trial will be included in one subgroup only) rather than within‐trial contrasts (where data on subsets of participants within a trial will be available, allowing the trial to be included in more than one subgroup) [70]. We will use the formal test for subgroup differences in RevMan [69] (Chi2 test, P value), and will base our interpretation on this [70].
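The formal test for subgroup differences in RevMan is a Chi2 test; one common formulation (a sketch, assuming a random‐effects model within each subgroup) compares the subgroup summary estimates \(\hat{\theta}_j\) with their precision‐weighted average \(\hat{\theta}\) (weights \(1/\mathrm{SE}(\hat{\theta}_j)^2\)):

\[ Q_{\mathrm{between}} = \sum_{j} \frac{(\hat{\theta}_j - \hat{\theta})^2}{\mathrm{SE}(\hat{\theta}_j)^2} \]

referred to a Chi2 distribution with (number of subgroups − 1) degrees of freedom.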
Sensitivity analysis
For the primary review outcomes that will be combined by meta‐analysis (see Assessment of heterogeneity and Data synthesis for details), we will conduct two sensitivity analyses that only include:
trials with an overall low risk of bias for a primary review outcome, if any; and
trials that assessed educational interventions on how to search online (e.g. via database web interface), i.e. excluding educational interventions on how to search offline (e.g. via database Compact Disc Read‐Only Memory (CD‐ROM)).
Supporting Information
Supplementary materials are available with the online version of this article: 10.1002/14651858.CD016153.
Supplementary materials are published alongside the article and contain additional data and information that support or enhance the article. Supplementary materials may not be subject to the same editorial scrutiny as the content of the article and Cochrane has not copyedited, typeset or proofread these materials. The material in these sections has been supplied by the author(s) for publication under a Licence for Publication and the author(s) are solely responsible for the material. Cochrane accordingly gives no representations or warranties of any kind in relation to, and accepts no liability for any reliance on or use of, such material.
Supplementary material 1 Search strategies
Additional information
Acknowledgements
Editorial and peer‐reviewer contributions
The following people conducted the editorial process for this article:
Sign‐off Editor (final editorial decision): Professor Mike Clarke, Director, Northern Ireland Methodology Hub, Centre for Public Health. Cochrane Senior Editor;
Managing Editors (selected peer reviewers, provided editorial guidance to authors, edited the article): Joey Kwong and Gail Quinn, Cochrane Central Editorial Service;
Editorial Assistant (conducted editorial policy checks, collated peer‐reviewer comments and supported the editorial team): Lisa Wydrzynski, Cochrane Central Editorial Service;
Copy Editor (copy editing and production): Deirdre Walshe, Cochrane Central Production Service;
Peer reviewers (provided comments and recommended an editorial decision): Julia Robertson, Griffith University (patient and public review); Elizabeth Stovold (methods review); Lindsay Robertson, Cochrane (methods review); Jo Platt, Central Editorial Information Specialist (search review); Steve McDonald, Cochrane Australia (search review).
Contributions of authors
Conceptualization: JH, GM, and TN.
Funding acquisition: JH, MV, and JV.
Methodology: JH, MV, JV, MND, JEM, HE, RM, MM, ACR, GM, and TN.
Project administration: JH and TN.
Supervision: JH and TN.
Writing – original draft: JH and TN.
Writing – review & editing: JH, MV, JV, MND, JEM, HE, RM, MM, ACR, GM, and TN.
All protocol authors read and approved the final version prior to publication.
Declarations of interest
Julian Hirt has declared that he has no conflict of interest.
Magdalena Vogt has declared that she has no conflict of interest.
Janine Vetsch has declared that she has no conflict of interest.
Martin N Dichter has declared that he has no conflict of interest.
Jasmin Eppel‐Meichlinger has declared that she has no conflict of interest.
Hannah Ewald has declared that she has no conflict of interest.
Ralph Möhler has declared that he has no conflict of interest.
Martin Mueller has declared that he has no conflict of interest.
Anne C Rahn has declared that she has no conflict of interest.
Gabriele Meyer has declared that she has no conflict of interest.
Thomas Nordhausen has declared that he has no conflict of interest.
Sources of support
Internal sources
- Eastern Switzerland University of Applied Sciences, Switzerland
JH, MV, and JV received internal funding for some of their working time, from which HE was also paid to conceptualize the search strategy.
External sources
- External support, Other
No specific external support or funding.
Registration and protocol
Cochrane approved the proposal for this review in March 2024.
Data, code and other materials
All data, code, and other material used, generated, or analyzed in this review will be part of the data package published alongside the review findings.
References
- 1.Greenhalgh T, Dijkstra P. How to Read a Paper. 7th edition. Hoboken, NJ: Wiley-Blackwell, 2025. [Google Scholar]
- 2.Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ (Clinical Research Ed.) 1996;312(7023):71-2. [DOI: 10.1136/bmj.312.7023.71] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Lefebvre C, Glanville J, Briscoe S, Featherstone R, Littlewood A, Metzendorf M-I, et al. Chapter 4: Searching for and selecting studies [last updated March 2025]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5.1 Cochrane, 2025. Available from cochrane.org/handbook.
- 4.University of Toronto Libraries. Searching the literature: a guide to comprehensive searching in the health sciences. https://guides.library.utoronto.ca/c.php?g=577919&p=4304403 (accessed 8 December 2025).
- 5.Albarqouni L, Hoffmann T, Straus S, Olsen NR, Young T, Ilic D, et al. Core competencies in evidence-based practice for health professionals: consensus statement based on a systematic review and Delphi survey. JAMA Network Open 2018;1(2):e180281. [DOI: 10.1001/jamanetworkopen.2018.0281] [DOI] [PubMed] [Google Scholar]
- 6.McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. Journal of Clinical Epidemiology 2016;75:40-6. [DOI: 10.1016/j.jclinepi.2016.01.021] [DOI] [PubMed] [Google Scholar]
- 7.Aromataris E, Lockwood C, Porritt K, Pilla B, Jordan Z. 2.4.2.1 Search sensitivity and specificity (last updated 5 May 2025). Chapter 2: Methodological considerations. JBI Manual for Evidence Synthesis. https://jbi-global-wiki.refined.site/space/MANUAL/652574736/2.4.2.1+Search+sensitivity+and+specificity (accessed 08 December 2025).
- 8.Uttley L, Quintana DS, Montgomery P, Carroll C, Page M, Falzon L, et al. The problems with systematic reviews: a living systematic review. Journal of Clinical Epidemiology 2023;156:30-41. [DOI: 10.1016/j.jclinepi.2023.01.011] [DOI] [PubMed] [Google Scholar]
- 9.Rethlefsen ML, Brigham TJ, Price C, Moher D, Bouter LM, Kirkham JJ, et al. Systematic review search strategies are poorly reported and not reproducible: a cross-sectional metaresearch study. Journal of Clinical Epidemiology 2024;166:111229. [DOI: 10.1016/j.jclinepi.2023.111229] [DOI] [PubMed] [Google Scholar]
- 10.Faggion CM Jr, Huivin R, Aranda L, Pandis N, Alarcon M. The search and selection for primary studies in systematic reviews published in dental journals indexed in MEDLINE was not fully reproducible. Journal of Clinical Epidemiology 2018;98:53-61. [DOI: 10.1016/j.jclinepi.2018.02.011] [DOI] [PubMed] [Google Scholar]
- 11.Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Errors in search strategies used in systematic reviews and their effects on information retrieval. Journal of the Medical Library Association 2019;107(2):210-21. [DOI: 10.5195/jmla.2019.567] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Waffenschmidt S, Bender R. Involvement of information specialists and statisticians in systematic reviews. International Journal of Technology Assessment in Health Care 2023;39(1):e22. [DOI: 10.1017/s026646232300020x] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Ramirez D, Foster MJ, Kogut A, Xiao D. Adherence to systematic review standards: impact of librarian involvement in Campbell Collaboration's education reviews. Journal of Academic Librarianship 2022;48(5):102567. [DOI: 10.1016/j.acalib.2022.102567] [DOI] [Google Scholar]
- 14.Pawliuk C, Cheng S, Zheng A, Siden HH. Librarian involvement in systematic reviews was associated with higher quality of reported search methods: a cross-sectional survey. Journal of Clinical Epidemiology 2024;166:111237. [DOI: 10.1016/j.jclinepi.2023.111237] [DOI] [PubMed] [Google Scholar]
- 15.Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. Journal of Clinical Epidemiology 2015;68(6):617-26. [DOI: 10.1016/j.jclinepi.2014.11.025] [DOI] [PubMed] [Google Scholar]
- 16.Boesen K, Hirt J, Woelfle T, Moscosio-Cuevas JI, Olsen O, Klingenberg SL. How to search for evidence to answer clinical questions: pragmatic guidance for healthcare professionals and biomedical researchers. https://osf.io/preprints/osf/bpxfv (accessed 08 December 2025).
- 17.Barzkar F, Baradaran HR, Koohpayehzadeh J. Knowledge, attitudes and practice of physicians toward evidence-based medicine: a systematic review. Journal of Evidence-Based Medicine 2018;11(4):246-51. [DOI: 10.1111/jebm.12325] [DOI] [PubMed] [Google Scholar]
- 18.Lafuente-Lafuente C, Leitao C, Kilani I, Kacher Z, Engels C, Canouï-Poitrine F, et al. Knowledge and use of evidence-based medicine in daily practice by health professionals: a cross-sectional survey. BMJ Open 2019;9(3):e025224. [DOI: 10.1136/bmjopen-2018-025224] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Younger P. Internet-based information-seeking behaviour amongst doctors and nurses: a short review of the literature. Health Information and Libraries Journal 2010;27(1):2-10. [DOI: 10.1111/j.1471-1842.2010.00883.x] [DOI] [PubMed] [Google Scholar]
- 20.Alving BE, Christensen JB, Thrysøe L. Hospital nurses’ information retrieval behaviours in relation to evidence based nursing: a literature review. Health Information and Libraries Journal 2018;35(1):3-23. [DOI: 10.1111/hir.12204] [DOI] [PubMed] [Google Scholar]
- 21.Hecht L, Buhse S, Meyer G. Effectiveness of training in evidence-based medicine skills for healthcare professionals: a systematic review. BMC Medical Education 2016;16:103. [DOI: 10.1186/s12909-016-0616-2] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Horsley T, Hyde C, Santesso N, Parkes J, Milne R, Stewart R. Teaching critical appraisal skills in healthcare settings. Cochrane Database of Systematic Reviews 2011, Issue 11. Art. No: CD001270. [DOI: 10.1002/14651858.CD001270.pub2] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Hirt J, Nordhausen T, Meichlinger J, Braun V, Zeller A, Meyer G. Educational interventions to improve literature searching skills in the health sciences: a scoping review. Journal of the Medical Library Association 2020;108(4):534-46. [DOI: 10.5195/jmla.2020.954] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Garg A, Turtle KM. Effectiveness of training health professionals in literature search skills using electronic health databases - a critical appraisal. Health Information and Libraries Journal 2003;20(1):33-41. [DOI: 10.1046/j.1471-1842.2003.00416.x] [DOI] [PubMed] [Google Scholar]
- 25.Lee MM, Lin X, Lee ES, Smith HE, Tudor Car L. Effectiveness of educational interventions for improving healthcare professionals' information literacy: a systematic review. Health Information and Libraries Journal 2025 Feb 2 [Epub ahead of print}. [DOI: 10.1111/hir.12562] [DOI] [PMC free article] [PubMed]
- 26.Lai NM, Teng CL. Self-perceived competence correlates poorly with objectively measured competence in evidence based medicine among medical students. BMC Medical Education 2011;11:25. [DOI: 10.1186/1472-6920-11-25] [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Brettle A. Evaluating information skills training in health libraries: a systematic review. Health Information and Libraries Journal 2007;24(Suppl 1):18-37. [DOI: 10.1111/j.1471-1842.2007.00740.x] [DOI] [PubMed] [Google Scholar]
- 28.Goodman JS, Gary MS, Wood RE. Bibliographic search training for evidence-based management education: a review of relevant literatures. Academy of Management Learning & Education 2014;13(3):322-53. [DOI: 10.5465/amle.2013.0188] [DOI] [Google Scholar]
- 29.Grabowsky A, Weisbrod L. The effectiveness of library instruction for graduate/professional students: a systematic review and meta-analysis. Evidence Based Library and Information Practice 2020;15(2):100-37. [DOI: 10.18438/eblip29657]
- 30.Just ML. Is literature search training for medical students and residents effective? A literature review. Journal of the Medical Library Association 2012;100(4):270-6. [DOI: 10.3163/1536-5050.100.4.008]
- 31.Koufogiannakis D, Wiebe N. Effective methods for teaching information literacy skills to undergraduate students: a systematic review and meta-analysis. Evidence Based Library and Information Practice 2006;1(3):3-43. [DOI: 10.18438/b8ms3d]
- 32.Weightman AL, Farnell DJ, Morris D, Strange H, Hallam G. A systematic review of information literacy programs in higher education: effects of face-to-face, online, and blended formats on student skills and views. Evidence Based Library and Information Practice 2017;12(3):20-55. [DOI: 10.18438/b86w90]
- 33.Ilic D, Tepper K, Misso M. Teaching evidence-based medicine literature searching skills to medical students during the clinical years: a randomized controlled trial. Journal of the Medical Library Association 2012;100(3):190-6. [DOI: 10.3163/1536-5050.100.3.009]
- 34.Rosenberg WM, Deeks J, Lusher A, Snowball R, Dooley G, Sackett D. Improving searching skills and evidence retrieval. Journal of the Royal College of Physicians of London 1998;32(6):557-63.
- 35.Schilling K, Wiecha J, Polineni D, Khalil S. An interactive web-based curriculum on evidence-based medicine: design and effectiveness. Family Medicine 2006;38(2):126-32.
- 36.Eldredge JD, Bear DG, Wayne SJ, Perea PP. Student peer assessment in evidence-based medicine (EBM) searching skills training: an experiment. Journal of the Medical Library Association 2013;101(4):244-51. [DOI: 10.3163/1536-5050.101.4.003]
- 37.Hirt J, Ewald H, Briel M, Schandelmaier S. Searching a methods topic: practical challenges and implications for search design. Journal of Clinical Epidemiology 2024;166:111201. [DOI: 10.1016/j.jclinepi.2023.10.017]
- 38.Wilkes M, Bligh J. Evaluating educational interventions. BMJ (Clinical Research Ed.) 1999;318(7193):1269-72. [DOI: 10.1136/bmj.318.7193.1269]
- 39.Albarqouni L, Hoffmann T, Glasziou P. Evidence-based practice educational intervention studies: a systematic review of what is taught and how it is measured. BMC Medical Education 2018;18:177. [DOI: 10.1186/s12909-018-1284-1]
- 40.Bradley-Ridout G, Parker R, Sikora L, Quaiattini A, Fuller K, Nevison M, et al. Exploring librarians’ practices when teaching advanced searching for knowledge synthesis: results from an online survey. Journal of the Medical Library Association 2024;112(3):238-49. [DOI: 10.5195/jmla.2024.1870]
- 41.Horsley T, O'Neill J, McGowan J, Perrier L, Kane G, Campbell C. Interventions to improve question formulation in professional practice and self-directed learning. Cochrane Database of Systematic Reviews 2010, Issue 12. Art. No: CD007335. [DOI: 10.1002/14651858.CD007335.pub2]
- 42.Connor L, Dean J, McNett M, Tydings DM, Shrout A, Gorsuch PF, et al. Evidence-based practice improves patient outcomes and healthcare system return on investment: findings from a scoping review. Worldviews on Evidence-Based Nursing 2023;20(1):6-15. [DOI: 10.1111/wvn.12621]
- 43.McKenzie JE, Brennan SE, Ryan RE, Thomson HJ, Johnston RV, Thomas J. Chapter 3: Defining the criteria for including studies and how they will be grouped for the synthesis [last updated August 2023]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 44.Ramos KD. Validation of the Fresno test of competence in evidence based medicine. BMJ (Clinical Research Ed.) 2003;326(7384):319-21. [DOI: 10.1136/bmj.326.7384.319]
- 45.Higgins J, Lasserson T, Thomas J, Flemyng E, Churchill R. Methodological Expectations of Cochrane Intervention Reviews (MECIR). Version August 2023. https://community.cochrane.org/mecir-manual.
- 46.Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5 (updated August 2024). Cochrane, 2024. Available from www.cochrane.org/handbook.
- 47.Hirt J, Nordhausen T, Fuerst T, Ewald H, Appenzeller-Herzog C. Guidance on terminology, application, and reporting of citation searching: the TARCiS statement. BMJ 2024;385:e078384. [DOI: 10.1136/bmj-2023-078384]
- 48.Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, et al. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Systematic Reviews 2021;10:39. [DOI: 10.1186/s13643-020-01542-z]
- 49.Lumivero. Citavi. https://lumivero.com/products/citavi/ (accessed 08 December 2025).
- 50.Cochrane. How CENTRAL is created. www.cochranelibrary.com/central/central-creation (accessed 08 December 2025).
- 51.Purnell M, Royal B, Warton L. Supporting the development of information literacy skills and knowledge in undergraduate nursing students: an integrative review. Nurse Education Today 2020;95:104585. [DOI: 10.1016/j.nedt.2020.104585]
- 52.Brettle A. Information skills training: a systematic review of the literature. Health Information and Libraries Journal 2003;20(Suppl 1):3-9. [DOI: 10.1046/j.1365-2532.20.s1.3.x]
- 53.Lefebvre C, Glanville J, Briscoe S, Featherstone R, Littlewood A, Metzendorf M-I, et al. Technical Supplement to Chapter 4: Searching for and selecting studies [last updated September 2024]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 54.Yale MeSH Analyzer. https://mesh.med.yale.edu/ (accessed 08 December 2025).
- 55.Scells H, Zuccon G. searchrefiner: a query visualisation and understanding tool for systematic reviews. https://ielab.io/publications/pdfs/scells2018searchrefiner.pdf (accessed 08 December 2025). [DOI: 10.1145/3269206.3269215]
- 56.Clark JM, Sanders S, Carter M, Honeyman D, Cleo G, Auld Y, et al. Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial. Journal of the Medical Library Association 2020;108(2):195-207. [DOI: 10.5195/jmla.2020.834]
- 57.Woelfle T, Hirt J, Fucile G, Nordhausen T, Ewald H, Appenzeller-Herzog C. Ranked indirect versus unranked direct citation searching for evidence retrieval: a study protocol. https://osf.io/npm2e (first registered 21 December 2023). [DOI: 10.17605/OSF.IO/NPM2E]
- 58.CRD42025645744. Non-pharmaceutical interventions for persons living with young-onset or frontotemporal dementia and their caregivers: a systematic review. www.crd.york.ac.uk/PROSPERO/view/CRD42025645744 (first submitted 04 February 2025).
- 59.OurResearch. OpenAlex. https://openalex.org/ (accessed 08 December 2025).
- 60.Retraction Watch. Retraction Watch Database. https://retractiondatabase.org/RetractionSearch.aspx? (accessed 08 December 2025).
- 61.Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Systematic Reviews 2016;5:210. [DOI: 10.1186/s13643-016-0384-4]
- 62.DeepL SE. DeepL. https://www.deepl.com/de/translator (accessed 08 December 2025).
- 63.Covidence. Covidence, Version accessed 08 December 2025. Melbourne, Australia: Veritas Health Innovation, 2025. Available at https://www.covidence.org.
- 64.Thomas J, Petticrew M, Noyes J, Chandler J, Rehfuess E, Tugwell P, et al. Chapter 17: Intervention complexity [last updated October 2019]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 65.Hoffmann TC, Glasziou P, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:g1687. [DOI: 10.1136/bmj.g1687]
- 66.Higgins JP, Savović J, Page M, Sterne J. Revised Cochrane risk-of-bias tool for randomized trials (RoB 2). 2019. https://www.riskofbias.info/welcome/rob-2-0-tool/current-version-of-rob-2 (accessed 08 December 2025).
- 67.Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2 tool. A revised Cochrane risk of bias tool for randomized trials. Version 22 August 2019. https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool?authuser=0.
- 68.Higgins J, Eldridge S, Li T. Chapter 23: Including variants on randomized trials [last updated October 2019]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 69.Review Manager (RevMan). Version 9.16.1. The Cochrane Collaboration, 2025. Available at https://revman.cochrane.org.
- 70.Deeks JJ, Higgins JP, Altman DG, McKenzie JE, Veroniki AA (editors). Chapter 10: Analysing data and undertaking meta-analyses [last updated November 2024]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 71.Page MJ, Sterne JA, Boutron I, Hróbjartsson A, Kirkham JJ, Li T, et al. ROB-ME: a tool for assessing risk of bias due to missing evidence in systematic reviews with meta-analysis. BMJ (Clinical Research Ed.) 2023;383:e076754. [DOI: 10.1136/bmj-2023-076754]
- 72.Page MJ, Sterne JAC, Boutron I, Hróbjartsson A, Kirkham JJ, Li T, et al. ROB-ME. A tool for assessing Risk Of Bias due to Missing Evidence in a synthesis. 2023. https://sites.google.com/site/riskofbiastool/welcome/rob-me-tool?authuser=0.
- 73.Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ 2020;368:l6890. [DOI: 10.1136/bmj.l6890]
- 74.Schünemann HJ, Higgins J, Vist GE, Glasziou P, Akl EA, Skoetz N, et al. Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence [last updated August 2023]. In: Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.5. Cochrane, 2024. Available from cochrane.org/handbook.
- 75.Neumann I, Schünemann H (editors). The GRADE Book version 1.0 (updated September 2024). The GRADE Working Group. Available from https://book.gradepro.org/.
- 76.GRADEpro GDT. Version accessed 08 December 2025. Hamilton (ON): McMaster University (developed by Evidence Prime), 2025. Available at https://www.gradepro.org.
Associated Data
Supplementary Materials
Supplementary material 1: Search strategies
Data Availability Statement
All data, code, and other material used, generated, or analyzed in this review will be part of the data package published alongside the review findings.
