Abstract
Objective
Clinical Queries filters were developed to improve the retrieval of high-quality studies in searches on clinical matters. The study objective was to determine the yield of relevant citations and physician satisfaction while searching for diagnostic and treatment studies using the Clinical Queries page of PubMed compared with searching PubMed without these filters.
Materials and methods
Forty practicing physicians, presented with standardized treatment and diagnosis questions and one question of their choosing, entered search terms which were processed in a random, blinded fashion through PubMed alone and PubMed Clinical Queries. Participants rated search retrievals for applicability to the question at hand and satisfaction.
Results
For treatment, the primary outcome of retrieval of relevant articles was not significantly different between the groups, but a higher proportion of articles from the Clinical Queries searches met methodologic criteria (p=0.049), and more articles were published in core internal medicine journals (p=0.056). For diagnosis, the filtered results returned more relevant articles (p=0.031) and fewer irrelevant articles (lower overall retrieval, p=0.023), and participants needed to screen fewer articles before arriving at the first relevant citation (p<0.05). Relevance was also influenced by the content terms participants used in searching. Participants varied greatly in their search performance.
Discussion
Clinical Queries filtered searches returned more high-quality studies, though the retrieval of relevant articles was only statistically different between the groups for diagnosis questions.
Conclusion
Retrieving clinically important research studies from Medline is a challenging task for physicians. Methodological search filters can improve search retrieval.
Keywords: Health information science; knowledge translation; information storage and retrieval; PubMed; search engine; databases as topic; medical informatics; evidence-based medicine; information retrieval; informatics education; library science
Background and significance
A systematic review reports that primary-care physicians generate, on average, between 0.07 and 1.85 questions per consultation in the course of daily practice.1 On average, only 30%1 2 to 55%3 of these patient-care questions are pursued further. When they are, colleagues and textbooks are primarily used as information sources,1 but electronic resources are also employed.4–7
In addressing their clinical questions, physicians need to be able to define the question clearly; search resources effectively; identify and appraise retrieved information; and then apply the evidence appropriately.8 9 The most salient hurdle to answering clinical questions is limited time.2 3 8 10 Other reported barriers include poor searching skills,8 11 limited resource availability and accessibility,12 and inadequate critical appraisal skills.13 Additional challenges include perceptions that the information does not exist,3 8 that resources do not address the specific issue or do not adequately synthesize the information into a useful statement,3 and that searches retrieve too much irrelevant material.1
Several studies have assessed the impact of clinical information retrieval systems on answering clinical questions6 10 11 14 15 and changes in patient care.5 14 These studies show that relevant answers to 46% of clinicians' questions were found when medical librarians searched, mostly using Medline.14 Medical and nurse practitioner students improved their ability to obtain relevant answers to simulated clinical questions from 45% to 77% following searches of Medline.11 Physicians with access to a virtual library including Medline, textbooks, and clinical guidelines improved their ability to answer clinical questions correctly from 29% (95% CI 25.4% to 32.6%) before system use to 50% (95% CI 46.0% to 54.0%) after system use.6 In a literature review of clinical information retrieval studies, Pluye et al16 reported that observational studies indicate about 30% of searches may have a positive impact on physicians. Hoogendam et al15 found that searches in UpToDate resulted in more full or partial answers than those in PubMed (83% vs 63%, p<0.001).
Clearly clinical information retrieval systems have a role to play, but outcomes are not consistently positive. In a study where physicians were encouraged to use their own best resources to answer simulated questions, McKibbon and Fridsma (2006)17 found that 11% of answers went from correct prior to searching to incorrect following searching. This was balanced by 13% of answers that were incorrect prior to searching that were answered correctly after searching (overall correct rate increased from 39.1% to 41.3%). The rate of correct answers becoming incorrect after searches was similar to that found by Hersh and colleagues in two separate studies when Medline was used as a searching tool: 4.5%11 and 10.5%.11 18 Koonce et al19 found that 35% of the time, clinical information retrieval from evidence-based resources did not provide answers, indicating that primary literature is still an important resource.
Objectives
PubMed is one of the most accessible primary research sources, allowing free searches and access to abstracts for material indexed in Medline. One set of tools available for physicians searching PubMed is the Clinical Queries search filters (http://www.ncbi.nlm.nih.gov/pubmed/clinical) for therapy, diagnosis, etiology, prognosis, and clinical prediction guides. To date, the search filters available through the Clinical Queries interface of PubMed have not been formally tested with the intended group of clinician users. This exploratory study set out to determine the yield of relevant citations and physician satisfaction while searching using Clinical Queries in PubMed compared with searching PubMed without these filters.
The research questions were:
When practicing general internists conduct searches for each of three questions (therapy, diagnosis, and their own question), what is the yield of relevant and methodologically sound citations, comparing the yield from the main PubMed search screen with the appropriate specific search filter on Clinical Queries?
Are clinicians more satisfied with the studies retrieved from Medline when searching via PubMed Clinical Queries than when using the main search in PubMed without the clinical filters?
What are the sensitivity and specificity of the methodologic components (if used) of the clinician's own search terms? How do the operating characteristics of these searches compare with those stored in the Clinical Queries interface of PubMed?
What are the effects of limiting searches to a core journal subset for internal medicine, compared with the full PubMed journal database, on the yield of clinically relevant citations and clinician satisfaction?
Materials and methods
Physician recruitment and searches
Practicing general internists, registered with the primary discipline of ‘Internal Medicine’ in the McMaster Online Rating of Evidence system (http://plus.mcmaster.ca/more), were recruited between March 2008 and March 2009. One hundred and sixty McMaster Online Rating of Evidence raters were invited to participate; 62 accepted the invitation (38.8%), 40 of whom completed the study (64.5%). Participants were offered a $100 Canadian honorarium for their participation. All participants consented to taking part in the study, which was approved by the McMaster University Research Ethics Board.
Standardized patient care questions, nine concerning treatments and four concerning diagnosis (box 1), were devised based on highly rated primary treatment and diagnosis articles in the field of internal medicine in bmjupdates+ (now EvidenceUpdates: http://plus.mcmaster.ca/EvidenceUpdates/). Each participating physician was randomly assigned by computer to one treatment and one diagnosis question. They were then asked to devise a third treatment or diagnosis clinical question of their own.
Box 1. Standardized treatment and diagnosis questions presented to participants [frequency of being searched].
Treatment questions:
Are antiseptics effective for reducing catheter-associated bloodstream infections in medical ICU patients?[5]
Can an ACE-inhibitor improve physical function in elderly people with functional impairment?[2]
What is the current best treatment for agitation in Alzheimer's disease?[5]
Is low-dose, self-administered anticoagulation safe for patients with mechanical heart valve prostheses?[4]
Does self-monitoring of blood glucose improve glycemic control in patients with type 2 diabetes not on insulin?[4]
Do probiotic Lactobacillus preparations prevent antibiotic-associated diarrhea?[3]
Does N-terminal pro-B-type natriuretic testing improve the management of patients with congestive heart failure?[4]
Does ultrasound screening for abdominal aortic aneurysm reduce mortality for elderly men?[5]
Which of the Atkins, Zone, Ornish, and LEARN diets leads to greater weight loss over a year among premenopausal overweight women?[8]
Diagnosis questions:
Which is the preferred diagnostic procedure for patients with suspected acute stroke: MRI or CT?[8]
How sensitive is the microscopic-observation drug-susceptibility assay for the diagnosis of pulmonary TB?[9]
How accurate is multislice CT for the evaluation of coronary artery disease?[9]
What is the most sensitive non-invasive test for diagnosing acute pulmonary embolism?[14]
After completing a brief survey of searching habits (online appendix table A1), participants conducted their searches as follows. Physicians were presented with each question and were asked to indicate which information sources they would usually prefer to use to answer it. They then entered their search terms, which were submitted via a secure online interface that blinded them to which information source was searched. Participants were instructed to search ‘as they would if they were using a database like PubMed’ but were unaware of where their searches were submitted.
For the Clinical Queries search, participants' terms that dealt with methods (eg, randomized controlled trial), if any, were replaced by the most ‘specific’ Clinical Queries search filter for treatment20 or diagnosis,21 depending on the question being addressed. That is, for treatment questions, the methods terms were replaced with (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])).20 For diagnosis questions, the methods terms were replaced with (specificity[Title/Abstract]).21 The physician was then presented with a maximum of the first 20 citations retrieved by each of the two searches, PubMed and Clinical Queries, in random order. If no citations were retrieved by the search terms, the physician was asked to revise their search terms; if once again no citations were retrieved, the physician was presented with a new randomly selected question.
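To make the filter substitution concrete, the sketch below shows one way the two searches could be assembled and submitted programmatically through the public NCBI E-utilities esearch endpoint. This is a minimal illustration under stated assumptions, not the study's actual search interface: the filter strings are those quoted above, while the helper names and the use of the REST API are assumptions added here.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Most 'specific' Clinical Queries filters quoted in the text above.
TREATMENT_FILTER = ("(randomized controlled trial[Publication Type] OR "
                    "(randomized[Title/Abstract] AND controlled[Title/Abstract] "
                    "AND trial[Title/Abstract]))")
DIAGNOSIS_FILTER = "(specificity[Title/Abstract])"


def clinical_queries_search(content_terms: str, question_type: str, retmax: int = 20):
    """Combine the participant's content terms with the appropriate filter
    and return up to `retmax` PubMed IDs (ordering follows PubMed's default)."""
    flt = TREATMENT_FILTER if question_type == "treatment" else DIAGNOSIS_FILTER
    query = f"({content_terms}) AND {flt}"
    params = {"db": "pubmed", "term": query, "retmax": retmax, "retmode": "json"}
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]


def unfiltered_search(content_terms: str, retmax: int = 20):
    """The same content terms submitted to PubMed without the methods filter."""
    params = {"db": "pubmed", "term": content_terms, "retmax": retmax, "retmode": "json"}
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]


# Hypothetical usage for a treatment question on probiotics:
# pmids_cq = clinical_queries_search("lactobacillus antibiotic associated diarrhea", "treatment")
# pmids_pm = unfiltered_search("lactobacillus antibiotic associated diarrhea")
```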
The yield of relevant citations was determined as the number of articles selected by the physician as relevant to answering the clinical question from the first 20 retrieved (if <20 were retrieved, all citations were shown) by each search. Retrievals were limited to the 20 most recent articles as would appear on the first page of a PubMed search to limit the time taken to complete the study and to mimic the time a clinician might spend on a search. Participants reviewed the article titles and had access to the abstracts in a new window by selecting the hyperlinked titles. Full-text retrieval was not available to participants during the study.
Participants were then asked to indicate their satisfaction with the retrieval from each search by responding to the following statement: ‘Overall, I am satisfied with the results of my search,’ using the following 7-point scale: strongly agree, moderately agree, mildly agree, neither agree nor disagree, mildly disagree, moderately disagree, strongly disagree.
To achieve 80% power to detect a difference of at least two relevant citations between retrievals via PubMed Clinical Queries and the PubMed main screen, at an α level of 0.05, assuming a SD of 2.0 and an approximately normal distribution for the number of citations, 40 participants were required.
Outcome variables
The primary outcome was the number of articles retrieved by the search judged to be relevant by the participants (search retrievals were capped at 20). Secondary measures included satisfaction, the proportion of relevant articles selected, the proportion selected that met methodological criteria, the placement order of the first relevant article, and performance characteristics of methodological terms submitted by participants.
Analysis
For each variable individually, a participant score was calculated as the difference between the Clinical Queries value and the PubMed value (CQ−PM). The variables were: the number of relevant articles selected, satisfaction, the proportion of relevant articles selected by participants, the proportion of these that passed methods criteria, the number of articles retrieved, and the placement order of the first relevant article. Using paired differences in this way eliminates the effects of participant characteristics; these differences were then used as dependent variables in the analyses.
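As an illustration of this paired-difference approach, the sketch below computes CQ−PM scores per participant from a long-format table of search outcomes. The data frame layout and column names are hypothetical, not the study's actual analysis code.

```python
import pandas as pd

# One row per participant x search mode; values and column names are illustrative.
results = pd.DataFrame({
    "participant":  [1, 1, 2, 2],
    "mode":         ["CQ", "PM", "CQ", "PM"],
    "n_relevant":   [4, 3, 2, 3],
    "satisfaction": [5, 4, 6, 6],
})

# Pivot so each participant has a CQ and a PM column, then take CQ - PM.
wide = results.pivot(index="participant", columns="mode",
                     values=["n_relevant", "satisfaction"])
diffs = wide.xs("CQ", level="mode", axis=1) - wide.xs("PM", level="mode", axis=1)
print(diffs)  # positive values favor Clinical Queries
```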
Because the standardized questions were presented to more than one participant, treatment and diagnosis search outcomes were first analyzed with an analysis of variance (ANOVA) using ‘question’ as a nominal independent variable to determine if an effect occurred that was related to the ‘question’ factor. The results indicated that the question significantly influenced the number of articles selected as relevant for treatment and diagnosis searches and for the number of diagnosis articles retrieved (online appendix table A2). For these two variables, the standardized and participant questions were analyzed separately:
the standardized questions were analyzed with the ANOVA including question as an independent variable;
the participant questions were analyzed with a paired t test on the raw participant scores, as these questions were unique.
For all other outcomes, the results from the standardized questions were pooled with those posed by the participants. Treatment and diagnosis questions were analyzed separately using linear regression with question origin (standardized vs participant) as an independent variable. A significant t value for the intercept indicated a difference between the Clinical Queries and PubMed scores. The number of articles retrieved per search varied from 1 to 20, leading to differences in the error variances of the observations. To take this effect into account, the number of relevant articles, the proportion of relevant articles, and the proportion meeting methods criteria were weighted by (the number of articles retrieved for Clinical Queries + the number of articles retrieved for PubMed)/2.
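A minimal sketch of this weighted analysis, using statsmodels, is shown below; the data frame and column names are hypothetical. The intercept of the regression of the CQ−PM difference on question origin is the quantity of interest, with weights equal to the average number of articles retrieved by the two searches.

```python
import pandas as pd
import statsmodels.formula.api as smf

# diff: CQ - PM score for one outcome; origin: 0 = standardized, 1 = participant's own.
df = pd.DataFrame({
    "diff":   [0.10, -0.05, 0.25, 0.00, 0.15],
    "origin": [0, 0, 1, 1, 0],
    "n_cq":   [14, 20, 9, 16, 12],
    "n_pm":   [18, 20, 15, 17, 19],
})
weights = (df["n_cq"] + df["n_pm"]) / 2  # weight by mean retrieval size

model = smf.wls("diff ~ origin", data=df, weights=weights).fit()
# The intercept estimates the CQ - PM difference adjusted for question origin;
# a significant t value for the intercept indicates a difference between modes.
print(model.summary())
```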
The order in the retrieved list of the first relevant article, and the difference between the orders for Clinical Queries and PubMed, were highly skewed. The difference variable was recoded as a binary variable with categories (1) for values below zero (PubMed order higher than Clinical Queries) and (2) for values equal to or above zero (PubMed order the same as or lower than the Clinical Queries order). A logistic regression was performed, first assessing the effect of repeated standardized questions, then pooled for standard and participant questions.
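For the order of the first relevant article, a corresponding sketch of the binary recode and logistic regression might look like the following; the toy data, column names, and the use of question origin as the sole predictor are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "order_cq": [2, 5, 1, 4, 3, 7],   # rank of first relevant article, Clinical Queries
    "order_pm": [6, 3, 1, 9, 2, 8],   # rank of first relevant article, PubMed
    "origin":   [0, 0, 0, 1, 1, 1],   # 0 = standardized question, 1 = participant's own
})

# 1 if the first relevant article appears earlier with Clinical Queries
# (ie, the PubMed order is higher), 0 if the PubMed order is the same or lower.
df["cq_earlier"] = ((df["order_pm"] - df["order_cq"]) > 0).astype(int)

logit = smf.logit("cq_earlier ~ origin", data=df).fit()
print(logit.summary())
```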
Methodological rigor of articles
Two research associates assessed whether the articles selected by the participants as relevant met methodological criteria. The assessments were independent, and any disagreements were resolved through consensus. The criteria used for treatment and diagnosis articles are presented in box 2.
Box 2. Methodological assessment used to assess articles selected as relevant.
Is the TREATMENT study methodologically sound?
Random allocation of participants to comparison groups
Outcome assessment of at least 80% of those entering the investigation accounted for in one major analysis at any given follow-up assessment
Analysis consistent with study design
Is the DIAGNOSIS study methodologically sound?
Inclusion of a spectrum of participants, some (but not all) of whom have the disorder or derangement of interest
Objective diagnostic (‘gold’) standard OR current clinical standard for diagnosis
Each participant must receive both the new test and some form of the diagnostic standard
Interpretation of diagnostic standard without knowledge of test result
Interpretation of test without knowledge of diagnostic standard result
Analysis consistent with study design
Journal subset
The number of articles retrieved that originated from the top 30 internal medicine journals was also analyzed (online appendix table A3).22 This list is based on a survey of the contents of 170 core clinical journals for the publishing year 2000, which assessed which journals published the highest number of methodologically sound and clinically relevant studies.22 The list includes internal medicine titles that contributed at least one abstracted article to ACP Journal Club in 2000. Retrieval from this subset of strong clinical journals was compared between PubMed and Clinical Queries.
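This comparison reduces to counting, for each search, how many retrieved citations were published in a journal on the core list. A minimal sketch is given below; the journal names shown are plausible examples only, not the study's actual subset (which appears in online appendix table A3).

```python
# Illustrative only: these journal names are examples, not the actual 30-journal subset.
CORE_JOURNALS = {"Ann Intern Med", "BMJ", "JAMA", "Lancet", "N Engl J Med"}


def core_journal_count(retrieved_journals):
    """Number of retrieved citations published in the core internal medicine subset."""
    return sum(1 for journal in retrieved_journals if journal in CORE_JOURNALS)


print(core_journal_count(["BMJ", "Chest", "JAMA", "JAMA"]))  # -> 3
```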
Search operating characteristics
The sensitivity and specificity of the methodologic components of the clinician-derived searches were tested and compared with the Clinical Queries search filters in PubMed. The PubMed translation of each search containing methods terms was recorded, and the performance characteristics of the various terms were tested using the Clinical Hedges Database. The Clinical Hedges Database was constructed by six research assistants in the Health Information Research Unit who hand-searched 161 journal titles indexed in Medline in the publishing year 2000. The research assistants assigned all original and review studies found in these journals to eight purpose categories (treatment/quality improvement, diagnosis, prognosis, etiology, clinical prediction guide, economics, cost, and qualitative) and then applied methodologic criteria to determine whether studies in the treatment/quality improvement, diagnosis, prognosis, etiology, clinical prediction guide, and economics categories were methodologically sound. Definitions of all purpose categories and the corresponding methodologic criteria were outlined in a previous paper.23 Research staff were thoroughly calibrated before reviewing the literature, and the inter-rater agreement for application of all criteria exceeded 0.80 (κ statistic) beyond chance.23
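The operating characteristics reported in tables 3 and 4 follow the standard definitions. The sketch below shows how a search strategy could be scored against a hand-searched reference standard such as the Clinical Hedges Database; the function and data structures are assumptions for illustration, not the database's actual tooling.

```python
def operating_characteristics(retrieved_ids, gold_positive_ids, all_ids):
    """Sensitivity, specificity, and precision of a search against a reference standard.

    retrieved_ids:     records returned by the search strategy
    gold_positive_ids: records judged on-purpose and methodologically sound by hand search
    all_ids:           every record in the reference database
    """
    retrieved = set(retrieved_ids)
    positives = set(gold_positive_ids)
    negatives = set(all_ids) - positives

    tp = len(retrieved & positives)   # sound studies retrieved
    fp = len(retrieved & negatives)   # off-target records retrieved
    fn = len(positives - retrieved)   # sound studies missed
    tn = len(negatives - retrieved)   # off-target records correctly excluded

    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, specificity, precision


# Toy example: 3 of 4 sound studies retrieved, plus 2 off-target records.
sens, spec, prec = operating_characteristics(
    retrieved_ids={1, 2, 3, 10, 11},
    gold_positive_ids={1, 2, 3, 4},
    all_ids=set(range(1, 101)),
)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, precision={prec:.2f}")
# sensitivity=0.75, specificity=0.98, precision=0.60
```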
Results
Searching habits
Forty participants completed the study. Participants reported that they searched on average 26.75 times per month (95% CI 6.5 to 50.0). The majority (68%) worked at a center with a fellowship training program. Boolean operators (using AND, OR, and NOT to connect search terms) were the most commonly reported (93%) and most commonly used (70%) search option (online appendix table A4). Limits (eg, restricting searches by language, publication type, date, author, age of participants, or type of article), controlled vocabulary (eg, searching using Medical Subject Headings [MeSH] terms in PubMed/Medline), and wild cards (eg, * or $) were reported as used by 90%, 68%, and 38% of the participants respectively, but were used in this study by 30%, 10%, and 8% (online appendix table A4).
When asked to indicate which resources they would normally use to address the questions, UpToDate (57.5%), PubMed (52.5%), and Medline (25%) were most often reported, followed by Cochrane (22.5%), colleagues (20%), Google (17.5%), Harrison's (12.5%), other textbooks (12.5%), websites (12.5%), and guidelines (12.5%). Forty-five other resources were also indicated by one to three participants (online appendix table A5).
For the 120 searches performed, a total of 3762 articles were retrieved, 2720 of which were unique. Searches for the same question often overlapped in the articles retrieved, but because participants' approaches to searching varied widely, the retrieved articles also varied.
Retrievals, relevant articles, methodological rigor, and satisfaction
Treatment questions
The standardized questions were presented randomly to participants; box 1 outlines their frequency. For treatment questions, no significant differences in satisfaction, number of relevant articles, proportion of relevant articles, or rank of the first relevant article were found (table 1). While most of the adjusted results from the regressions are consistent with the corresponding unadjusted means, there are a few anomalous cases where the two sets of results are in opposite directions. For example, for the number of relevant articles for standardized questions, the y-intercept of the regression is negative, suggesting that the PubMed retrieval method produced more relevant articles than the Clinical Queries method; in contrast, a comparison of the unadjusted means suggests the opposite conclusion. Given the relatively high standard errors for both the means and the regression intercept, these discrepancies are probably due to sampling variation and are therefore not meaningful.
Table 1. Treatment questions: comparison of outcomes for Clinical Queries (CQ) and PubMed (PM) searches
Outcome measure | Search mode | Mean | SE | 95% CI |
Satisfaction‡ | CQ | 4.5 | 0.267 | 4.01 to 5.08 |
PM | 4.3 | 0.268 | 3.76 to 4.83 | |
y-intercept | 0.300 | 0.335 | −0.371 to 0.971 | |
No relevant (standard)§ | CQ | 3.5 | 0.451 | 2.59 to 4.41 |
PM | 3.17 | 0.406 | 2.35 to 4.00 | |
y-intercept | −0.75 | 0.888 | −2.56 to 1.06 | |
No relevant (own)¶ | CQ | 2.95 | 0.749 | 1.40 to 4.51 |
PM | 3.09 | 0.450 | 2.15 to 4.03 | |
(difference) | −0.136 | 0.691 | −1.57 to 1.30 | |
Proportion relevant‡ | CQ | 0.33 | 0.039 | 0.246 to 0.404 |
PM | 0.25 | 0.033 | 0.182 to 0.312 | |
y-intercept | 0.091 | 0.053 | −0.014 to 0.197 | |
No of retrievals‡ | CQ | 14.4 | 0.958 | 12.5 to 16.3 |
PM | 17.0 | 0.811 | 15.4 to 18.6 | |
y-intercept | −1.8 | 1.42 | −4.64 to 1.04 | |
Order of first relevant article‡ | CQ | 3.83 | 0.536 | 2.76 to 4.91 |
PM | 3.70 | 0.584 | 2.53 to 4.87 | |
y-intercept | −0.12 | 0.648 | −1.42 to 1.18 | |
No of retrievals in core journal subset (standard)§ | CQ | 1.38 | 0.316 | 0.737 to 2.01 |
PM | 0.525 | 0.129 | 0.264 to 0.786 | |
y-intercept | 0.850 | 0.357 | 0.127 to 1.57 | |
No of retrievals in core journal subset (own)¶ | CQ | 0.909 | 0.278 | 0.330 to 1.49 |
PM | 0.364 | 0.105 | 0.145 to 0.582 | |
(difference) | 0.545 | 0.269 | −0.014 to 1.11* | |
No meeting methods criteria‡ | CQ | 0.439 | 0.048 | 0.342 to 0.535 |
PM | 0.266 | 0.044 | 0.178 to 0.355 | |
y-intercept | 0.180 | 0.089 | 0.001 to 0.359† |
Analysis was performed on the difference between scores for Clinical Queries (CQ) and PubMed (PM) to account for participant effects and adjusted for question origin (standardized or participant's own). For variables analyzed by linear regression and ANOVA, the y-intercept represents the difference between Clinical Queries and PubMed; for those tested with paired t tests, the difference is the mean difference.
*p=0.056.
†p=0.049.
‡Analyzed with linear regression with pooled standardized and own questions.
§Analyzed with an ANOVA on standardized questions only, with question as an independent variable.
¶Analyzed with a paired t test, since standard and participant questions could not be pooled as a result of a significant ‘question’ effect for standard questions.
The number of articles needed to read (NNR), determined as the number retrieved divided by the number selected as relevant, for treatment questions was 6.0 (95% CI 4.43 to 7.66) for Clinical Queries and 7.1 (95% CI 5.37 to 8.79) for PubMed (difference 1.03, 95% CI −1.29 to 3.36; NS). Of the articles selected as relevant by the participants, proportionally more of the Clinical Queries articles met methods criteria than the PubMed articles (0.439 vs 0.266; y-intercept 0.180, 95% CI 0.001 to 0.359, p=0.049).
Diagnosis questions
For diagnosis questions via Clinical Queries versus PubMed, there were significantly more relevant articles retrieved for the standardized questions, a higher proportion of relevant articles over all questions, a lower number of retrieved articles for participant questions, and earlier presentation of the first relevant article in the retrieved list overall (table 2). The NNR was 5.2 (95% CI 3.70 to 6.66) for Clinical Queries and 5.6 (95% CI 4.10 to 7.06) for PubMed (difference 0.399, 95% CI −1.68 to 2.48, NS). No statistical difference in the methodological quality of articles selected as relevant by the participants was found. Again, anomalous y-intercepts observed in table 2 are probably due to high sampling variation.
Table 2. Diagnosis questions: comparison of outcomes for Clinical Queries (CQ) and PubMed (PM) searches
Outcome Measure | Search mode | Mean | SE | 95% CI |
Satisfaction† | CQ | 4.9 | 0.263 | 4.34 to 5.39 |
PM | 4.3 | 0.290 | 3.75 to 4.91 | |
y-intercept | 0.7 | 0.362 | −0.024 to 1.42 | |
No relevant (standard)‡ | CQ | 4.65 | 0.552 | 3.53 to 5.77 |
PM | 3.77 | 0.515 | 2.73 to 4.82 | |
y-intercept | 1.71 | 0.665 | 0.366 to 3.03* | |
No relevant (own)§ | CQ | 3.28 | 1.05 | 1.06 to 5.49 |
PM | 2.83 | 0.678 | 1.40 to 4.26 | |
Difference | 0.444 | 0.781 | −1.20 to 2.09 | |
Proportion relevant† | CQ | 0.38 | 0.041 | 0.294 to 0.460 |
PM | 0.25 | 0.032 | 0.191 to 0.318 | |
y-intercept | 0.106 | 0.048 | 0.010 to 0.202* | |
No of retrievals (standard)‡ | CQ | 15.4 | 1.13 | 13.1 to 17.7 |
PM | 17.1 | 0.903 | 15.2 to 18.9 | |
y-intercept | −1.36 | 1.37 | −4.63 to 0.914 | |
No of retrievals (own)§ | CQ | 11.9 | 2.03 | 7.60 to 16.2 |
PM | 16.7 | 1.53 | 13.4 to 19.9 | |
Difference | −4.78 | 1.91 | −8.81 to −0.743* | |
Order of first relevant article† | CQ | 3.0 | 0.533 | 1.93 to 4.07 |
PM | 3.94 | 0.570 | 2.79 to 5.08 | |
y-intercept | −1.76 | 0.526 | −2.81 to −0.695* | |
No of retrievals in core journal subset (standard)‡ | CQ | 0.375 | 0.128 | 0.117 to 0.633 |
PM | 0.450 | 0.124 | 0.200 to 0.700 | |
y-intercept | −0.075 | 0.184 | −0.447 to 0.297 | |
No of retrievals in core journal subset (own)§ | CQ | 0.333 | 0.162 | −0.008 to 0.674 |
PM | 0.389 | 0.183 | 0.002 to 0.776 | |
Difference | −0.056 | 0.151 | −0.373 to 0.262 | |
No meeting methods criteria† | CQ | 0.474 | 0.060 | 0.353 to 0.596 |
PM | 0.280 | 0.049 | 0.181 to 0.379 | |
y-intercept | 0.058 | 0.102 | −0.146 to 0.262 |
Analysis was performed on the difference between scores for Clinical Queries (CQ) and PubMed (PM) to account for participant effects and adjusted for question origin (standardized or participant's own). For variables analyzed by linear regression and ANOVA, the y-intercept represents the difference between Clinical Queries and PubMed; for those tested with paired t tests, the difference is the mean difference.
*p<0.05.
†Analyzed with linear regression with pooled standardized and own questions.
‡Analyzed with an ANOVA on standardized questions only, with question as an independent variable.
§Analyzed with a paired t test, since standard and participant questions could not be pooled as a result of a significant ‘question’ effect for standard questions.
Journal subset
Clinical Queries and PubMed returned a similar number of articles published in the subset of 30 core internal medicine journals (tables 1, 2). For treatment questions, the difference approached significance, but the variation in Clinical Queries retrieval was large.
Search performance
Participants included methods terms in 19 of 62 searches for treatment questions (table 3). When testing the PubMed translations of these terms in the Clinical Hedges database, their sensitivity ranged from 0% to 98.9%, specificity from 60% to 100%, and precision from 2% to 55%. The sensitivity for methods terms used in diagnosis searches ranged from 0% to 96.6%, specificity from 54% to 100%, and precision from 0% to 16%; 40 of 58 diagnostic searches contained methods terms (table 4).
Table 3. Operating characteristics of the methods terms used by participants in treatment searches
Qualifier terms used in searches and removed before submission to Clinical Queries | Frequency (n=62) | Sensitivity (%) | Specificity (%) | Precision (%) |
Clinical Queries treatment filter (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])) | | 93 | 97 | 54 |
(clinical[Title/Abstract] AND trial[Title/Abstract]) OR clinical trials[MeSH Terms] OR clinical trial[Publication Type] OR random*[Title/Abstract] OR random allocation[MeSH Terms] OR therapeutic use[MeSH Subheading] | 1 | 98.9 | 78.8 | 13.5 |
Randomized controlled trial | 1 | 95.5 | 95.5 | 41.4 |
Treatment | 7 | 89.8 | 59.7 | 6.9 |
Management or treatment | 1 | 89.8 | 59.7 | 6.9 |
Therapy | 2 | 75.5 | 68.5 | 7.4 |
Limits for aged | 1 | 70.8 | 65.9 | 6.5 |
Adults, placebo | 1 | 31.7 | 99.1 | 54.7 |
Randomized control trial | 1 | 23.5 | 99.0 | 45.1 |
Treatment + current | 1 | 3.3 | 97.8 | 4.7 |
review of randomized controlled trials | 1 | 0.8 | 98.8 | 2.2 |
Limits: randomized controlled trials | 1 | 0.7 | 100 | 47.8 |
Limit to meta-analysis | 1 | 0 | 100 | 0 |
The first entry is the actual Clinical Queries specific therapy search filter. Values were determined by querying the Clinical Hedges Database.
Table 4. Operating characteristics of the methods terms used by participants in diagnosis searches
Qualifier terms used in searches and removed before submission to Clinical Queries | Frequency (n=58) | Sensitivity (%) | Specificity (%) | Precision (%) |
Clinical Queries diagnosis filter (specificity[Title/Abstract]) | | 65 | 98 | 11 |
Diagnosis | 9 | 96.6 | 53.8 | 0.6 |
Sensitivity | 6 | 85.7 | 93.8 | 4.0 |
Sensitivity and specificity | 4 | 85.0 | 94.9 | 4.8 |
Diagnosis, sensitivity | 5 | 83.7 | 95.1 | 4.8 |
Diagnosis [Mesh] | 1 | 82.3 | 60.9 | 0.6 |
‘Sensitivity and specificity’ | 1 | 66.7 | 97.2 | 6.7 |
(Specificity[Title/Abstract]) | 1 | 64.6 | 98.4 | 10.8 |
Diagnosis and ‘sensitivity and specificity’ | 1 | 64.6 | 97.6 | 7.4 |
‘Diagnosis’ | 1 | 51.0 | 91.3 | 1.7 |
Test | 1 | 45.6 | 79.1 | 0.7 |
Diagnostic accuracy | 2 | 29.3 | 98.9 | 7.7 |
Diagnostic test, diagnosis | 1 | 19.1 | 99.5 | 9.6 |
Diagnostic test AND sensitivity OR test characteristics | 1 | 17.7 | 99.7 | 15.6 |
Evaluation, diagnosis, accuracy | 1 | 12.9 | 99.5 | 7.4 |
Diagnosis, sensitive | 1 | 6.1 | 99.1 | 2.0 |
Meta-analysis | 1 | 0.7 | 98.9 | 0.2 |
Randomized control trials | 1 | 0 | 99.2 | 0 |
Systematic review | 1 | 0 | 90.6 | 0 |
Blind comparison | 1 | 0 | 100 | 0 |
The first entry is the actual Clinical Queries specific diagnosis search filter.
Discussion
Many clinical information retrieval services have been developed to facilitate searching for the current best evidence for clinical decisions. DiCenso, Bayley, and Haynes24 have described a ‘6S’ hierarchy of evidence-based information services. This follows the evolution of information processing from: (1) original studies (the lowest level); (2) synopses of original studies (evidence-based abstraction journals); (3) syntheses (reviews); (4) synopses of syntheses (eg, DARE, healthevidence.ca); (5) summaries (evidence-based online textbooks); and (6) systems (the highest level: computerized clinical decision support). Being familiar with available resources at the various levels of the hierarchy can expedite searching. Evidence-based resources near the top of the 6S hierarchy have already filtered and appraised the primary literature, distilling a large quantity of information into quality evidence.24
Articles in Medline, the lowest level of the hierarchy, are still important in clinical care. Tools such as the Clinical Queries search filters assist by limiting search retrievals to articles of higher quality, with the intent of reducing the number of articles a clinician needs to screen out and increasing the number likely to provide an answer based on sound research. This research shows that these filters can assist clinicians in their quest for high-quality, clinically relevant articles.
Retrieval
For treatment questions, the primary outcome of retrieval of relevant articles was not different for Clinical Queries compared with PubMed. The filtered searches did, however, return more articles from the core internal medicine journal subset and more articles that were selected as relevant and passed scientific criteria; such articles give clinicians better answers to their questions. In addition, because of the popularity of the core journals, clinicians will likely have a higher probability of obtaining these articles in full text, since many institutions subscribe to them. Few differences in the retrieval of relevant articles were found between the main PubMed page and the Clinical Queries page, and there were no differences in search satisfaction, although the non-significant differences favored the Clinical Queries searches.
For the diagnosis searches, the filtered results returned more relevant articles, with the first relevant article presented higher in the retrieved list, allowing searchers to get answers more quickly and spend less time screening results. Clinical Queries also returned significantly fewer articles, which is often preferable because physicians can decide more quickly whether they need to continue searching or have found an answer to their question.
The aim of the Clinical Queries filters is to optimize search results by increasing the sensitivity, specificity, and precision of searches, with the goal of returning more articles that are on target and fewer that are off target. The treatment filter uses the terms (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])), which increase the yield of higher-quality trials. The current study confirms that the filter returns a greater proportion of higher-quality studies than searching without it. The perceived clinical relevance of the retrieved articles to the searches, however, was not affected by the filter.
Similarly, for the diagnostic studies, the Clinical Queries filter limits results with the term (specificity[Title/Abstract]). The filters resulted in more relevant articles but did not impact participant satisfaction. This is presumably because participants are focused on content relevance rather than methodologic quality (which they were not directed to judge).
Future research includes testing the robustness of the Clinical Queries filters as applied to current databases since they were derived in 2000. The performance of PubMed with and without Clinical Queries on assisting clinicians in arriving at ‘correct’ answers to clinical questions will also be assessed.
Searching
Searching databases for research relevant to a physician's question is not an easy task. A number of strategies can help improve search retrieval such as using Boolean operators, controlled vocabulary, and the Participants, Intervention, Control, Outcome format of question analysis. In this study, the approaches to searching by the participants varied greatly. Some very sophisticated users included MeSH terms and truncation; others directly copied the question into the search box. The methodological terms used by participants had varying levels of impact on the operating characteristics of their searches (tables 3, 4).
The precision of searches, the proportion of retrieved articles that are on target, is one of the most important measures for busy clinicians; they want an answer, and they want it quickly and easily. The filtered searches showed some improvement, returning fewer articles and more that were on target, but the content of the retrieved sets still depended heavily on the content terms submitted.
Strengths and limitations
One of the strengths of this study is that the participants were blinded to where their search terms were being sent. They were unaware that the differences between PubMed and the Clinical Queries filter were being tested. Further, the patterns in the results were similar for the provided questions and the participants' own questions, indicating that the findings can be extrapolated beyond the standardized study questions.
The use of standardized questions gives some control over the search terms used. Generating good standardized questions is challenging; because the questions were based on systematic review topics, care was taken that the questions did not replicate words in the review titles, to reduce the probability that the systematic review would be the first article retrieved and skew the participant's search.
Limitations of this study included the constraints placed on physician searching and the lack of access to full-text articles for relevance assessment. Searching is generally an iterative process; once the results of a search are presented, searchers usually refine their search to increase the applicability of the articles retrieved. Participants were able to revise their search only once, and only if their initial search returned no articles in one of the interfaces; some expressed frustration at this limit. Participants did not have access to the full text of the articles, but they did have the option to view the abstract. Relevance assessments were therefore based not on the whole study, but only on the title and, optionally, the abstract. Only one clinician requested the full text of an article.
The study used more ‘specific’ Clinical Queries search strategies, which minimize the retrieval of ‘off target’ articles. Performance would likely be different if the ‘sensitive’ search filters had been used; these filters maximize the proportion of high-quality studies retrieved (at the expense of somewhat lower specificity).
Conclusion
In this study the use of PubMed Clinical Queries to filter search results improved some search retrieval measures. This was more marked in the quest for diagnostic studies, although trends toward improvement were seen with treatment questions.
Footnotes
Funding: CIHR. Grant number 177748.
Ethics approval: Ethics approval was provided by the McMaster FHS Research Ethics Board.
Provenance and peer review: Not commissioned; externally peer reviewed.
References
1. Coumou HCH, Meijman FJ. How do primary care physicians seek answers to clinical questions? A literature review. J Med Libr Assoc 2006;94:55–60.
2. Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behavior in physicians. Int J Med Inform 2003;71:9–15.
3. Ely JW, Osheroff JA, Chambliss ML, et al. Answering physicians' clinical questions: obstacles and potential solutions. J Am Med Inform Assoc 2005;12:217–24.
4. National Institute of Clinical Studies. Information Finding and Assessment Methods that Different Groups of Clinicians Find Most Useful. Melbourne, Australia: National Institute of Clinical Studies, 2003.
5. Magrabi F, Coiera EW, Westbrook JI, et al. General practitioners' use of online evidence during consultations. Int J Med Inform 2005;74:1–12.
6. Westbrook JI, Coiera EW, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc 2005;12:315–21.
7. Prendiville TW, Saunders J, Fitzsimons J. The information-seeking behaviour of paediatricians accessing web-based resources. Arch Dis Child 2009;94:633–5.
8. Ely JW, Osheroff JA, Ebell MH, et al. Obstacles to answering doctors' questions about patient care with evidence: qualitative study. BMJ 2002;324:710–16.
9. Straus SE, Richardson WS, Glasziou P, et al. Evidence-Based Medicine: How to Practice and Teach EBM. 3rd edn. Edinburgh; New York: Elsevier/Churchill Livingstone, 2005.
10. Westbrook JI, Gosling AS, Westbrook MT. Use of point-of-care online clinical evidence by junior and senior doctors in New South Wales public hospitals. Intern Med J 2005;35:399–404.
11. Hersh WR, Crabtree MK, Hickam DH, et al. Factors associated with success in searching MEDLINE and applying evidence to answer clinical questions. J Am Med Inform Assoc 2002;9:283–93.
12. Green ML, Ruff TR. Why do residents fail to answer their clinical questions? A qualitative study of barriers to practicing evidence-based medicine. Acad Med 2005;80:176–82.
13. Green ML. Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Acad Med 1999;74:686–94.
14. Gorman PN, Ash J, Wykoff L. Can primary care physicians' questions be answered using the medical journal literature? Bull Med Libr Assoc 1994;82:140–6.
15. Hoogendam A, Vries Robbe PF, Stalenhoef AF, et al. Evaluation of PubMed filters used for evidence-based searching: validation using relative recall. J Med Libr Assoc 2009;97:186–93.
16. Pluye P, Grad RM, Dunikowski LG, et al. Impact of clinical information-retrieval technology on physicians: a literature review of quantitative, qualitative and mixed methods studies. Int J Med Inform 2005;74:745–68.
17. McKibbon KA, Fridsma DB. Effectiveness of clinician-selected electronic information resources for answering primary care physicians' information needs. J Am Med Inform Assoc 2006;13:653–9.
18. Hersh WR, Crabtree MK, Hickam DH, et al. Factors associated with successful answering of clinical questions using an information retrieval system. Bull Med Libr Assoc 2000;88:323–31.
19. Koonce TY, Giuse NB, Todd P. Evidence-based databases versus primary medical literature: an in-house investigation on their optimal use. J Med Libr Assoc 2004;92:407–11.
20. Haynes RB, Wilczynski N. Finding the gold in MEDLINE: clinical queries. ACP J Club 2005;142:A8–9.
21. Haynes RB, McKibbon KA, Wilczynski NL, et al. Optimal search strategies for retrieving scientifically strong studies of treatment from Medline: analytical survey. BMJ 2005;330:1179.
22. McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med 2004;2:33.
23. Wilczynski NL, McKibbon KA, Haynes RB. Enhancing retrieval of best evidence for health care from bibliographic databases: calibration of the hand search of the literature. Stud Health Technol Inform 2001;84:390–3.
24. DiCenso A, Bayley L, Haynes RB. Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. ACP J Club 2009;151:2–3.