A. Information |
A.1 State the author’s objective |
Develop a search filter for retrieving systematic reviews. |
A.2 State the focus of the research. |
Sensitivity‐maximising and precision‐maximising. |
A.3 Database(s) and search interface(s). |
MEDLINE via PubMed |
A.4 Describe the methodological focus of the filter (e.g. RCTs). |
Systematic reviews. |
A.5 Describe any other topic that forms an additional focus of the filter (e.g. clinical topics such as breast cancer, geographic location such as Asia or population grouping such as paediatrics). |
None |
A.6 Other observations. |
None |
B. Identification of a gold standard (GS) of known relevant records |
B.1 Did the authors identify one or more gold standards (GSs)? |
No |
B.2 How did the authors identify the records in each GS? |
None |
B.3 Report the dates of the records in each GS. |
None |
B.4 What are the inclusion criteria for each GS? |
None |
B.5 Describe the size of each GS and the authors’ justification, if provided (for example the size of the gold standard may have been determined by a power calculation). |
None |
B.6 Are there limitations to the gold standard(s)? |
None |
B.7 How was each gold standard used? |
None |
B.8 Other observations. |
None |
C. How did the researchers identify the search terms in their filter(s) (select all that apply)? |
C.1 Adapted a published search strategy. |
Terms extracted from the titles of articles indexed as systematic review [pt] and differing from those already in the PubMed SR filter. |
C.2 Asked experts for suggestions of relevant terms. |
None |
C.3 Used a database thesaurus. |
Yes, MeSH |
C.4 Statistical analysis of terms in a gold standard set of records (see B above). |
None |
C.5 Extracted terms from the gold standard set of records (see B above). |
None |
C.6 Extracted terms from some relevant records (but not a gold standard). |
Yes, terms extracted from the titles of articles indexed as systematic review [pt] and differing from those already in the PubMed SR filter. |
C.7 Tick all types of search terms tested. |
|
C.8 Include the citation of any adapted strategies. |
Yes |
C.9 How were the (final) combination(s) of search terms selected? |
The list of terms was sorted according to the best results for recall and precision (a minimal illustrative sketch of this sorting step is given at the end of this section). |
C.10 Were the search terms combined (using Boolean logic) in a way that is likely to retrieve the studies of interest? |
Yes |
C.11 Other observations. |
None |
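Reviewer note (illustration only, not the authors' code): the term-ranking step described in C.9 could be sketched as below. All term names, counts and the reference-set size are hypothetical placeholders; the paper does not describe an implementation.

# Illustrative sketch only: score each candidate search term by recall and
# precision against a reference set of known systematic reviews, then sort
# the list on those scores (the step described in C.9). Hypothetical data.

candidate_terms = {
    # term: (relevant records retrieved, total records retrieved)
    "term A": (8500, 9200),
    "term B": (4100, 4300),
    "term C": (6200, 9800),
}
TOTAL_RELEVANT = 10_000  # hypothetical size of the reference set

def score(hits, retrieved):
    recall = hits / TOTAL_RELEVANT
    precision = hits / retrieved if retrieved else 0.0
    return recall, precision

# Best recall first, precision breaking ties.
ranked = sorted(candidate_terms.items(),
                key=lambda item: score(*item[1]),
                reverse=True)

for term, (hits, retrieved) in ranked:
    recall, precision = score(hits, retrieved)
    print(f"{term}: recall={recall:.1%}, precision={precision:.1%}")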
D. Internal validity testing (This type of testing is possible when the search filter terms were developed from a known gold standard set of records). |
D.1 How many filters were tested for internal validity? |
None |
For each filter report the following information |
D.2 Was the performance of the search filter tested on the gold standard from which it was derived? |
None |
D.3 Report sensitivity data (a single value, a range, ‘Unclear’* or ‘not reported’, as appropriate). *Please describe. |
None |
D.4 Report precision data (a single value, a range, ‘Unclear’* or ‘not reported’ as appropriate). *Please describe. |
None |
D.5 Report specificity data (a single value, a range, ‘Unclear’* or ‘not reported’ as appropriate). *Please describe. |
None |
D.6 Other performance measures reported. |
None |
D.7 Other observations. |
None |
E. External validity testing (This section relates to testing the search filter on records that are different from the records used to identify the search terms) |
E.1 How many filters were tested for external validity on records different from those used to identify the search terms? |
1 filter |
E.2 Describe the validation set(s) of records, including the interface. |
The filter’s performance was compared against the PubMed SR filter. |
E.3 On which validation set(s) was the filter tested? |
Does not report a validation set |
E.4 Report sensitivity data for each validation set (a single value, a range or ‘Unclear’ or ‘not reported’, as appropriate). |
Not reported |
E.5 Report precision data for each validation set (report a single value, a range or ‘Unclear’ or ‘not reported’, as appropriate). |
Between 72.3% and 96.7%, with a weighted mean precision of 83.8%. |
E.6 Report specificity data for each validation set (a single value, a range or ‘Unclear’ or ‘not reported’, as appropriate). |
Not reported |
E.7 Other performance measures reported. |
Recall (single value): 91.6%. See the note on measure definitions at the end of this section. |
E.8 Other observations. |
None |
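Reviewer note (standard definitions, not taken from the paper): the performance measures requested in sections D and E are conventionally calculated as follows.
Sensitivity (recall) = relevant records retrieved / all relevant records in the test set.
Precision = relevant records retrieved / all records retrieved.
Specificity = irrelevant records not retrieved / all irrelevant records in the test set.
A weighted mean precision (E.5) typically weights each subset's precision by the number of records that subset contributes; the paper's exact weighting scheme is not restated here.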
F. Limitations and comparisons. |
F.1 Did the authors discuss any limitations to their research? |
Broad definition of SR; no gold standard set of records was used. |
F.2 Are there other potential limitations to this research that you have noticed? |
None |
F.3 Report any comparisons of the performance of the filter against other relevant published filters (sensitivity, precision, specificity or other measures). |
“The PubMed SR filter retrieved 62.0% (168,677/272,048) of the articles of our final filter, which means that it is likely to have missed a large number of potential systematic reviews.” (A quick arithmetic check of this figure is given at the end of this section.) |
F.4 Include the citations of any compared filters. |
Yes, Shojania 2001 |
F.5 Other observations and / or comments. |
None |
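Reviewer note: a quick arithmetic check of the overlap figure quoted in F.3.
168,677 / 272,048 ≈ 0.620, i.e. 62.0% of the records retrieved by the final filter were also retrieved by the PubMed SR filter.
272,048 - 168,677 = 103,371 records were retrieved by the final filter but not by the PubMed SR filter.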
G. Other comments. This section can be used to provide any other comments. Selected prompts for issues to bear in mind are given below. |
G.1 Have you noticed any errors in the document that might impact on the usability of the filter? |
None |
G.2 Are there any published errata or comments (for example in the MEDLINE record)? |
None |
G.3 Is there public access to pre‐publication history and / or correspondence? |
None |
G.4 Are further data available on a linked site or from the authors? |
Yes, "SUPPLEMENTAL FILES" |
G.5 Include references to related papers and/or other relevant material. |
Yes |
G.6 Other comments. |
None |