Author manuscript; available in PMC: 2018 Jul 1.
Published in final edited form as: J Biomed Inform. 2017 Jan 13;71 Suppl:S53–S59. doi: 10.1016/j.jbi.2017.01.007

Physicians’ Perception of Alternative Displays of Clinical Research Evidence for Clinical Decision Support–A Study with Case Vignettes

Stacey L Slager 1, Charlene R Weir 1,2, Heejun Kim 3, Javed Mostafa 3, Guilherme Del Fiol 1
PMCID: PMC5509533  NIHMSID: NIHMS845725  PMID: 28089913

Abstract

Objective

To design alternate information displays that present summaries of clinical trial results to clinicians to support decision-making; and to compare the displays according to efficacy and acceptability.

Methods

A 6-between (information display presentation order) by 3-within (display type) factorial design. Two alternate displays were designed based on Information Foraging theory: a narrative summary that reduces the content to a few sentences, and a table format that structures the display according to the PICO (Population, Intervention, Comparison, Outcome) framework. The designs were compared with the summary display format available in PubMed. Physicians were asked to review five clinical studies retrieved for a case vignette and were presented with the three display formats. Participants were asked to rate their experience with each of the information displays according to a Likert scale questionnaire.

Results

Twenty physicians completed the study. Overall, participants rated the table display more highly than either the text summary or PubMed’s summary format (5.9 vs. 5.4 vs. 3.9 on a scale between 1 [strongly disagree] and 7 [strongly agree]). Usefulness ratings of seven pieces of information, i.e. patient population, patient age range, sample size, study arm, primary outcome, results of primary outcome, and conclusion, were high (average across all items = 4.71 on a 1 to 5 scale, with 1=not at all useful and 5=very useful). Study arm, primary outcome, and conclusion scored the highest (4.9, 4.85, and 4.85 respectively). Participants suggested additional details such as rate of adverse effects.

Conclusion

The table format reduced physicians’ perceived cognitive effort when quickly reviewing clinical trial information and was more favorably received by physicians than the narrative summary or PubMed’s summary format display.

Keywords: Information display, information seeking, clinical decision making, clinician information needs, information foraging theory

Graphical abstract

[Graphical abstract image: nihms845725u1.jpg]

1 INTRODUCTION

One of the goals of biomedical research is to provide clinicians with the best evidence possible to support patient care decisions. Despite best efforts, tools that provide optimal access to evidence-based research reports in the primary literature for clinical decision support are still lacking.(1–3) This is important, because clinicians often raise clinical questions in the course of patient care that can be answered by online clinical evidence resources such as Medline.(4) When faced with a non-routine clinical problem, having clinicians consult the primary literature for the best available evidence might seem ideal, yet the size, evolving nature, and complexity of the primary literature make it difficult to consume at the point of decision-making. As a result, physicians prefer to consult alternate sources such as books, colleagues, guidelines, and synthesized sources of evidence such as UpToDate® and DynaMed®.(5, 6)

When faced with a clinical question, clinicians need to scan a potentially large number of citations to find those that are relevant for a particular patient.(7–9) This process is cognitively intense and often prohibitive in busy clinical workflows. Several tools have been investigated to improve clinicians’ access to the primary literature, but most of these efforts focused on improving the search process to find high-quality articles.(8–14) Less effort has been dedicated to tools that improve clinicians’ ability to quickly scan citations and identify those that apply to a specific patient and information need.

In the present study, we designed alternate information displays for presenting the results of clinical studies to clinicians. The alternate displays were designed according to principles derived from Information Foraging theory.(15) Applying Information Foraging theory, we speculate that scanning for patient-specific studies can be streamlined with structured data visualization that provides cues to help information seekers to quickly identify relevant information. We hypothesized that, compared to traditional narrative formats, an information display that minimizes visual clutter and uses a semi-structured format improves clinicians’ perceived efficiency and efficacy when scanning for relevant studies for clinical case vignettes.

2 MATERIALS AND METHODS

2.1 Overview

This study used a within-subject design where each participant was exposed to three types of presentation formats associated with one out of three possible case vignettes on the following topics: (i) efficacy and safety of vernakalant for cardioversion in atrial fibrillation (Figure 1); (ii) benefits and safety of statins for a patient with heart failure, hypertension and previous myocardial infarction; and (iii) evidence on the use of metformin combined with a dipeptidyl peptidase-4 (DPP-4) inhibitor for a patient with uncontrolled diabetes mellitus. Each vignette consisted of a brief case narrative and five reports of clinical trials potentially relevant to the decision task presented in the vignette. The vignettes described simple clinical cases that were meant to focus the user’s search to determine the relevancy of the search results, rather than for problem solving. The full vignette descriptions are available in the online supplement.

Figure 1. Atrial Fibrillation vignette.

2.2 Information display design

We compared three information displays that summarize clinical trial reports. The first display was the standard presentation of search results available in PubMed (Figure 2). The second display consisted of a synthesized narrative summary with key information from the abstract including title, journal, publication year, and a few sentences about the study conclusions (Figure 3). The third display consisted of a semi-structured table format according to elements in the PICO framework (Population, Intervention, Comparison, and Outcome) (Figure 4). (16)

Figure 2. PubMed summary display format for atrial fibrillation. Participants had to click on each article title to read the study abstract.

Figure 3. Narrative summary display for a case vignette on atrial fibrillation.

Figure 4. PICO tabular display format for a case vignette on atrial fibrillation.

2.2.1 Design rationale

Our design decisions were guided by Information Foraging theory (15) and the PICO framework.(16) Information Foraging proposes that humans forage for information in information patches similar to how animals forage for food. In the information foraging process, information seekers constantly estimate the cost and benefit of foraging from a specific information patch. Our approach focused on helping information seekers produce cost-benefit estimates more reliably and efficiently so they can quickly identify the most profitable information patches. Specifically, we adopted the following strategies: (1) maximize cues (i.e., information scent) that help seekers to quickly identify relevant information; and (2) apply information patch enrichment, i.e. ensure that information patches have the highest possible concentration of useful information.
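The cost-benefit estimate at the heart of Information Foraging theory is often expressed as Pirolli's conventional rate of gain, R = G / (T_B + T_W): information value gained per unit of between-patch plus within-patch time. The sketch below is purely illustrative; the patch names and numbers are hypothetical assumptions, not data or code from this study:

```python
def rate_of_gain(gain, time_between, time_within):
    """Conventional foraging rate of gain: R = G / (T_B + T_W).

    gain         -- information value extracted from a patch (G)
    time_between -- cost of reaching the patch, e.g. scanning citations (T_B)
    time_within  -- cost of exploiting the patch, e.g. reading it (T_W)
    """
    return gain / (time_between + time_within)

# Hypothetical patches: same information value, different reading cost.
patches = {
    "full abstract": (5.0, 10.0, 60.0),   # (G, T_B, T_W) in arbitrary units
    "PICO table row": (5.0, 10.0, 20.0),  # enriched patch: lower within-patch cost
}

# An enriched patch with equal gain but lower within-patch cost yields a
# higher rate of gain, so the forager should prefer it.
best = max(patches, key=lambda name: rate_of_gain(*patches[name]))
```

This captures why patch enrichment matters: reducing T_W for the same G raises R, making the enriched display the more profitable patch.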

The PICO framework emerged in the mid-1990s as a means for physicians and clinical researchers to frame their clinical questions.(17, 18) Since then, it has been widely adopted for evidence-based inquiry, especially in the development of systematic reviews and clinical practice guidelines.(19–24) Clinicians have also been advised to structure their patient-specific clinical questions according to PICO.(18, 25–29) Finally, information retrieval tools that structure queries according to the PICO format have shown promising results.(30–32) However, searches formulated in the PICO format in current search engines retrieve publications that are then displayed in the format of traditional abstracts in biomedical journals.(16) This leads to a mismatch between the clinicians’ mental model, which is reflected in the structure of the search, and the format of the retrieved information, compromising information scent. In addition, several pieces of information in a scientific abstract may not be relevant to clinicians’ decision-making, compromising information patch enrichment.
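To make the mismatch concrete, a PICO-structured question can be assembled mechanically into a boolean query string, yet the engine still returns traditionally formatted abstracts. The `pico_query` helper below is a hypothetical sketch, not a tool from this study or an actual PubMed API:

```python
def pico_query(population, intervention, comparison=None, outcome=None):
    """Naively AND together the PICO elements into a boolean query string.

    Empty or None elements are skipped, since clinicians often omit the
    Comparison or Outcome when framing a question.
    """
    parts = [p for p in (population, intervention, comparison, outcome) if p]
    return " AND ".join(f"({p})" for p in parts)

query = pico_query("atrial fibrillation", "vernakalant", "placebo", "cardioversion")
# "(atrial fibrillation) AND (vernakalant) AND (placebo) AND (cardioversion)"
```

The query carries the clinician's PICO mental model, but the result list does not, which is the structural mismatch discussed above.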

Our design goals were to maximize information scent by matching the structure of the information display to clinicians’ information foraging mental model, and to maximize patch enrichment by including only the information that is necessary for judging the relevance of clinical trial publications for a specific patient (the Population, Intervention, and Comparison in the PICO format) and for interpreting the gist of the findings of those trials. The use of tabular information displays has been shown to be successful in other studies, supporting our hypothesis.(33)

2.3 Preparation of Display Formats

Using iterative design processes, members of our research team and target users provided feedback on a series of hypertext markup language (HTML) prototypes. The motivation for the design alternatives was to present only the information necessary to help clinicians judge the relevancy of a study to a specific patient and to enhance the ability of readers to grasp the gist of the study findings rapidly. For each vignette, we selected five recent clinical trial publications identified through manual searches using PubMed’s Clinical Queries filter. The key information about the five clinical trials was then manually formatted in HTML according to the two alternate displays (Figures 3 and 4).
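The manual HTML formatting step can be sketched programmatically: each trial becomes one row of a PICO-structured table. The field names and the sample record below are assumptions for illustration, not the authors' actual markup or data:

```python
import html

PICO_COLUMNS = ["Population", "Intervention", "Comparison", "Outcome"]

def pico_table(trials):
    """Render a list of trial dicts (keys: lowercase PICO names) as an HTML table."""
    header = "<tr>" + "".join(f"<th>{c}</th>" for c in PICO_COLUMNS) + "</tr>"
    rows = [
        "<tr>"
        + "".join(f"<td>{html.escape(t[c.lower()])}</td>" for c in PICO_COLUMNS)
        + "</tr>"
        for t in trials
    ]
    return "<table>" + header + "".join(rows) + "</table>"

# Hypothetical trial record, loosely modeled on the atrial fibrillation vignette.
sample = [{
    "population": "Adults with recent-onset atrial fibrillation",
    "intervention": "Vernakalant",
    "comparison": "Placebo",
    "outcome": "Conversion to sinus rhythm within 90 minutes",
}]
markup = pico_table(sample)
```

Keeping one trial per row and one PICO element per column is what lets a reader scan down a column to compare populations or outcomes across trials.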

2.4 Participants/Recruitment

Participants for this study were a convenience sample of 20 physicians who were invited by e-mail to join our study. The first participants were known to members of our team. From that initial sample we used snowball sampling, where participants recruited others like themselves to participate in the study, to achieve a larger sample size. The recruitment targeted practicing primary care and internal medicine physicians with a wide range of clinical experience, from second year residents to practitioners with over 25 years of patient care experience.

2.5 Procedure

The e-mail sent to potential participants included a link to an online survey, from which they could launch the case vignettes along with the three information displays. Participants completed the assessment independently and at their own pace, with no time limit imposed. Participants were blinded to the study goals and hypotheses. For each participant, the three vignettes were randomly assigned to the three display types, so that each participant viewed search results for all three vignettes, each displayed in a different format. Randomization was implemented with a simple computer-based procedure. There were six possible vignette-information display pairings, each assigned a sub-range between 0 and 1. At the time of enrollment of each participant, a computer algorithm generated a random number from 0 to 1, which determined the vignette-display pairing for the participant.
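The randomization logic described above can be sketched as follows, assuming the six pairings correspond to the 3! orderings of the three displays over the three vignettes. This is an illustration of the described procedure, not the study's actual code:

```python
import itertools
import random

DISPLAYS = ["PubMed summary", "Narrative summary", "PICO table"]
VIGNETTES = ["Atrial fibrillation", "Statins", "Metformin + DPP-4 inhibitor"]

# The six possible vignette-display pairings (3! = 6 permutations); each
# participant sees every vignette, each in a different display format.
PAIRINGS = [dict(zip(VIGNETTES, perm)) for perm in itertools.permutations(DISPLAYS)]

def assign_pairing(r=None):
    """Map a random number in [0, 1) to one of six equal sub-ranges of pairings."""
    if r is None:
        r = random.random()
    return PAIRINGS[min(int(r * len(PAIRINGS)), len(PAIRINGS) - 1)]
```

Because every pairing owns an equal sub-range of [0, 1), each of the six display orderings is equally likely at enrollment.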

Participants did not receive training on any of the information displays. Information displays were presented in random order for each participant to minimize order effect, with no washout period between vignette presentations. After reading each vignette, participants were asked to scan the summary information about the clinical trials, find the publications that were relevant to the vignette, and interpret the key results. At the end of each vignette and information visualization, participants were asked to complete a brief questionnaire in the online survey and data collection tool Research Electronic Data Capture REDCap.(34) The study was approved by the University of Utah Institutional Review Board.

2.6 Measures

We developed an 11-item questionnaire that included 4 questions obtained from the System Usability Scale (SUS);(35) 6 questions that measured self-perceived ability to understand the meaning of user interface features, quickly scan and judge study relevancy to the vignette, and interpret the studies that were presented; and one question about satisfaction. Questions were framed according to a Likert scale (1=strongly disagree; 7=strongly agree). The complete questionnaire is available in the online supplement.

Participants were asked an additional set of questions about the tabular display format to rate the usefulness of the set of display features we had chosen to present (Table 2) according to a 5-point Likert scale (1=not at all useful, 5=very useful) with an option to add open-ended comments.

Table 2.

Usefulness of different types of information presented in the PICO tabular display (1=not at all useful, 5=very useful).

Item Mean ± standard deviation
Patient population 4.8±0.4
Patient age 4.6±1.0
Sample size 4.4±1.1
Study arm 4.7±0.6
Primary outcome 4.9±0.3
Primary outcome results 4.9±0.5
Conclusion 4.9±0.4

2.7 Data analysis

The data analysis assessed the differential impact of the three displays on perceptions of decision quality, usability, and satisfaction. Mean differences were assessed using a generalized linear model with repeated measures factorial analysis, with display type as the within-subject variable and presentation sequence as the between-subject variable. The statistical power of the sample size for all study comparisons was above 80%.

3 RESULTS

3.1. Mean comparison across groups

The PICO presentation format received significantly higher ratings on all 11 single-item questions after controlling for sequence (Table 1). In all of the single-item analyses, the effect of sequence was non-significant. The Benjamini-Hochberg method was used to control for multiple comparisons,(36) and all questions but one (Q10) remained significant. None of the post hoc Tukey comparisons was significant. A power analysis for a within-subject ANOVA with 20 participants, 2 trials, and a large effect size estimate of 0.70 yielded a power estimate of 0.844 and a non-centrality parameter of 3.13.
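For reference, the Benjamini-Hochberg step-up procedure used to control the false discovery rate can be sketched in a few lines. The p-values in the usage example are illustrative, not those reported in Table 1:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up FDR control: reject H0 for the k smallest p-values, where k is
    the largest rank i such that p_(i) <= (i / m) * alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank  # largest rank meeting its threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# Illustrative p-values (not the study's): only the two smallest survive.
flags = benjamini_hochberg(
    [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
)
```

Unlike a Bonferroni correction, the per-test threshold grows with rank (i/m · α), so the procedure is less conservative while still bounding the expected proportion of false discoveries.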

Table 1.

Physicians’ ratings of the three information displays. All ratings use a Likert scale from 1 (strongly disagree) to 7 (strongly agree).

Criteria PubMed Narrative summary PICO table F score P value /adjusted (0.05)
Mean ± standard deviation
Q1. It was easy to understand the meaning of the information presented. 4.4±1.7 5.8±1.2 5.9±1.2 6.0 0.008/0.05
Q2. I think that I would like to use this product frequently. 3.6±1.76 5.6±1.3 5.9±1.3 16.8 <0.001/0.05
Q3. I was able to scan the information quickly. 3.5±1.8 5.8±1.2 6.0±1.3 18.6 <0.001/0.05
Q4. I was able to quickly determine relevance of the study for my patient. 4.2±1.7 5.3±1.3 6.1±1.1 6.0 0.008/0.05
Q5. It was easy for me to locate the information about the study that I needed. 3.0±1.2 4.6±1.7 5.2±1.6 8.5 0.002/0.05
Q6. I thought that this product was easy to use. 4.1±1.6 6.0±1.4 6.2±1.2 15.9 <0.001/0.05
Q7. I was able to find the pieces of information I needed to understand the study. 4.2±1.8 5.6±1.2 6.2±1.0 19.7 <0.001/0.05
Q8. I would imagine that most people would learn to use this product quickly. 4.9±1.5 5.6±1.5 6.1±1.2 8.0 0.002/0.05
Q9. I was able to quickly grasp the gist of the paper's findings. 3.7±1.9 5.6±1.1 6.1±1.0 13.1 <0.001/0.05
Q10. I found this product very awkward to use. (reversed question) 3.8±1.8 5.4±1.5 5.7±1.4 4.4 0.02/NS
Q11. In general, I am satisfied with the presentation (i.e., format of the display) of the information. 3.7±1.6 5.4±1.3 5.7±1.3 13.9 < .001/0.05
Average and standard deviation of all items (after reversing the scale of negative criteria) 3.9±0.5 5.4±0.4 5.9±0.3 N/A N/A

3.2 Usefulness ratings of specific PICO display components

The average ratings for the different types of information presented in the PICO tabular display ranged between 4.4 and 4.9 out of 5 (Table 2).

3.3. Open-ended comments

The open-ended comments in general favored the PICO display and included suggestions for improvement (Table 3).

Table 3.

Open-ended comments on the PICO display

In the tabular display, we have included information on patient population, age, sample size, study arms, outcomes, and conclusions. Is there enough information here for you to be able to make a clinical decision about the patient?
Yes! It was great to click on the study and see the full abstract and links to other studies. It was so much faster to have the studies broken down and the information organized.
This is generally enough.
Yes. 'Sample size' was listed in #25, but didn't see it appear consistently, making it hard to know if the trial was powered sufficiently. Indication of statistical significance would clearly be useful.
Generally yes.
The tabular form when introduced secondarily, at first overloads because of the complexity of the layout but quickly overcomes this by its utility. This presentation provides structure to the text from the articles without removing the semantic benefit of transmitting information through prose and allows the reader to easily compare study characteristics.
There was a lot of information at first. After using it a few times it would probably be quite a bit easier to understand it quickly.
Other types of information that could be included and display suggestions.
Race, was it an industry funded study? A link to editorials on the paper if there are any. Study limitations
It would be very helpful to know if the study can be applied to the patient for whom a clinical decision is being made. For example, in the geriatrics clinic, it is essential to know whether older adults were studied, and if so, if there were adequate numbers of older adults in the trial to make a meaningful conclusion. I would add that the study arm/results column was somewhat difficult to read, as I think it is difficult to condense this type of detailed information into a small space.
Need to read abstract for full context. For example, abstract for first study included this useful sentence: Rosuvastatin 10 mg daily did not affect clinical outcomes in patients with chronic heart failure of any cause, in whom the drug was safe.
Type of study (randomized, placebo-controlled, case control, etc.), follow-up period, lost to follow-up rate, measures that would suggest effectiveness of randomization (or other variables controlled for) It would be nice to color code the results by positive effect, negative effect or no measurable effect.

4 DISCUSSION

Clinicians must engage in substantial effort to address the clinical questions that frequently arise in patient care, and many of these questions go unanswered. Although substantial work has been dedicated to improving the biomedical literature search process,(8, 9, 12, 37) less has been done to investigate optimal displays for summarizing individual studies, especially in the context of clinical decision support.(38) This study was designed to address this question by assessing the perceived efficiency and efficacy of three display formats for presenting summaries of clinical trial reports to clinicians. Participants rated both the PICO tabular display and the narrative display significantly higher than the PubMed display on at least 10 of the 11 criteria. Of the two non-PubMed displays, the PICO tabular display was always rated higher, but post hoc comparisons did not show a significant difference, although the pattern is suggestive. Our results suggest that a tabular format is helpful for communicating useful and clinically actionable information when compared to a traditional PubMed format. One explanation for our findings may be that a structure that distills the main components of a clinical trial imposes less cognitive load than a traditional narrative summary, no matter how concise. Our study suggests that tabular displays are a promising alternative for helping clinicians find answers to their patient-specific clinical questions in the primary literature, warranting further investigation both in identifying optimal displays and in developing algorithms that can generate tabular displays automatically.(15) Other fields have found similar positive results for tabular displays.(33)

The PICO tabular design is supported by both empirical and theoretical principles. The PICO framework has been widely promoted as a mechanism to help clinicians formulate better clinical questions.(16) In addition, participants in a study of summarized systematic reviews suggested that the most useful data in a clinical setting would include “population, setting, intervention and control group.”(39) The PICO table was designed to optimize information seeking by following a consistent structure that matches clinicians’ information seeking mental model, therefore increasing information scent.(15) Although the post hoc comparisons were non-significant, the consistent pattern of higher scores for the PICO format over the narrative summary supports the notion that this framework might prove superior in a larger trial.

Participants rated the text summary and the PICO tables higher than the PubMed summary displays in terms of scanning the information quickly, judging the relevancy of studies for a particular patient, and overall usability. Because previous research has shown that clinicians are unlikely to devote more than two to three minutes for such an information search,(4) supporting efficient information scanning and rapid relevance judgment was one of the most important design considerations in the alternate displays. Another important feature was optimizing the display to improve the user’s ability to quickly understand the gist of study findings. Overall, our findings are congruent with the Technology Acceptance Model (TAM) (40), which postulates that perceived ease of use and perceived usefulness are predictors of actual adoption of a particular technology.

Based on user suggestions, other items could be included in the table such as race, funding source, study type, and adverse effects. However, additional information could also clutter the display and compromise efficiency and cognitive effort. A potential alternative is to add interactive functions to the PICO display so that only the essential information for scanning and relevance judging is displayed up-front and additional details are provided on-demand upon user request.

4. 1 Limitations

This study focused on assessing participants’ self-perceived ability to scan, judge relevancy, and interpret the results of clinical studies. We have not investigated the effect of the different information displays on physicians’ actual decision making under typical clinical setting constraints, such as time pressure and interruptions. Yet, the perceived improvements of the PICO display in items related to efficiency and cognitive effort suggest that clinicians would be likely to adopt it. Future studies could present clinical vignettes that ask clinicians to make patient care decisions, simulating an environment with time pressure and interruptions. We limited the displays to five publications to minimize participants’ time. Actual searches are likely to retrieve a much larger number of studies. Yet, PubMed displays by default up to 20 publications per page, and clinicians are unlikely to scan a large number of studies in the context of patient care.(4) The three medical conditions we used for the case vignettes are fairly common. It is possible that other design features may be needed for clinical scenarios that are less familiar to clinicians. All participants were previously familiar with the PubMed format, and a subset of the study subjects (n=4) had participated in usability testing of the narrative summary display. Familiarity with those displays may have influenced participants’ ratings in comparison with the PICO table in ways that we were unable to measure. Last, we have not investigated the effect of clinical experience and specialty on the study outcomes, which limits the generalizability of the findings to other populations. Yet, we expect that the randomization of participants across the six order pairs and the within-subject design minimize the impact of individual differences as an explanation of the findings.

4.2 Future work

Future investigation is needed to identify optimal displays based on the PICO table, develop algorithms that can generate tabular displays automatically, and investigate the use of tabular displays in real clinical settings. While the information items provided in the PICO table were all rated highly in terms of usefulness, some participants requested that other pieces of information be included, such as adverse events. Researchers have investigated mechanisms to automatically extract key pieces of information from clinical trials, using methods such as information extraction from full-text articles.(41–43) Our research group is currently investigating the feasibility of extracting PICO elements from ClinicalTrials.gov. We are also investigating alternative displays that add interactive functionality to the PICO table.

5. CONCLUSION

Overall, a tabular information display structured according to the PICO framework appears to be more useful to clinicians than a traditional (i.e., PubMed) narrative format, which aims to serve general audiences and investigators. Integration of primary literature resources with EHR systems for clinical decision support could be improved with a tabular PICO display and warrants further investigation. While we have not investigated the effect of the different displays on clinicians’ decisions and clinical outcomes, the superiority of the PICO table display in perceived efficiency and effort favors clinician adoption of such a display format and warrants further research.

Supplementary Material

supplement

Highlights.

  • Alternative display formats for clinical trials were designed to support patient care decisions.

  • Design informed by Information Foraging theory and information visualization principles.

  • Physician participants preferred a table display format compared to narrative.

  • Table information display reduced perceived cognitive effort.

  • Table displays could increase usefulness and usability of clinical trials at the point of care.

Acknowledgments

The authors would like to acknowledge Innovation Librarian Tallie Casucci from the University of Utah Eccles Library for her assistance with the literature search.

Funding and role of funding source. This project was supported by grant 1R01LM011416-01 from the National Library of Medicine. The funders had no role in the study design, collection, analysis, and interpretation of the data, the writing of the report or the decision to submit the article for publication.

Footnotes

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government or any of the affiliated institutions.

Conflict of Interest Disclosure: None.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  • 1.Reeve LH, Han H, Brooks AD. Biomedical text summarisation using concept chains. Int J Data Min Bioinform. 2007;1(4):389–407. doi: 10.1504/ijdmb.2007.012967. [DOI] [PubMed] [Google Scholar]
  • 2.Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: a focus group study. PLoS One. 2013;8(11):e80318. doi: 10.1371/journal.pone.0080318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: a grounded theory study. JAMA Intern Med. 2013;173(21):1962–9. doi: 10.1001/jamainternmed.2013.10103. [DOI] [PubMed] [Google Scholar]
  • 4.Del Fiol G, Workman T, Gorman P. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Internal Medicine. 2014;174(5):710–8. doi: 10.1001/jamainternmed.2014.368. [DOI] [PubMed] [Google Scholar]
  • 5.Dwairy M, Dowell AC, Stahl J-C. The application of foraging theory to information searching behaviour of general practitioners. BMC Family Practice. 2011;12(90) doi: 10.1186/1471-2296-12-90. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Cook DA, Enders F, Linderbaum JA, Zwart D, Lloyd FJ. Speed and accuracy of a point of care web-based knowledge resource for clinicians: a controlled crossover trial. Interact J Med Res. 2014;3(1):e7. doi: 10.2196/ijmr.2811. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Osheroff JA, Forsythe DE, Buchanan BG, Bankowitz RA, Blumenfield B, Miller RA. Physicians' Information Needs: Analysis of Questions Posed during Clinical Teaching. Allans of Internal Medicine. 1991;114:576–81. doi: 10.7326/0003-4819-114-7-576. [DOI] [PubMed] [Google Scholar]
  • 8.Aphinyanaphongs Y, Tsamardinos I, Statnikov A, Hardin D, Aliferis CF. Text categorization models for high-quality article retrieval in internal medicine. J Am Med Inform Assoc. 2005;12(2):207–16. doi: 10.1197/jamia.M1641. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Kilicoglu H, Demner-Fushman D, Rindflesch TC, Wilczynski NL, Haynes RB. Toward automatic recognition of high quality clinical evidence. AMIA Annu Symp Proc. 2008:368. [PMC free article] [PubMed] [Google Scholar]
  • 10.Kilicoglu H, Demner-Fushman D, Rindflesch TC, Wilczynski NL, Haynes RB. Towards automatic recognition of scientifically rigorous clinical research evidence. J Am Med Inform Assoc. 2009;16(1):25–31. doi: 10.1197/jamia.M2996. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Aphinyanaphongs Y, Statnikov A, Aliferis CF. A comparison of citation metrics to machine learning filters for the identification of high quality MEDLINE documents. J Am Med Inform Assoc. 2006;13(4):446–55. doi: 10.1197/jamia.M2031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Bernstam EV, Herskovic JR, Aphinyanaphongs Y, Aliferis CF, Sriram MG, Hersh WR. Using citation data to improve retrieval from MEDLINE. J Am Med Inform Assoc. 2006;13(1):96–105. doi: 10.1197/jamia.M1909. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Lokker C, Haynes RB, Wilczynski NL, McKibbon KA, Walter SD. Retrieval of diagnostic and treatment studies for clinical use through PubMed and PubMed's Clinical Queries filters. J Am Med Inform Assoc. 2011;18(5):652–9. doi: 10.1136/amiajnl-2011-000233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Montori VM, Wilczynski NL, Morgan D, Haynes RB, Hedges T. Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. Bmj. 2005;330(7482):68. doi: 10.1136/bmj.38336.804167.47. [DOI] [PMC free article] [PubMed] [Google Scholar]
15. Pirolli P. Information Foraging Theory: Adaptive Interaction with Information. Oxford: Oxford University Press; 2007.
16. Schardt C, Adams MB, Owens T, Keitz S, Fontelo P. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Medical Informatics and Decision Making. 2007;7:16. doi: 10.1186/1472-6947-7-16.
17. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123(3):A12–3.
18. Oxman AD, Sackett DL, Guyatt GH. Users' guides to the medical literature. I. How to get started. The Evidence-Based Medicine Working Group. JAMA. 1993;270(17):2093–5.
19. Clark AM, Savard LA, Thompson DR. What is the strength of evidence for heart failure disease-management programs? J Am Coll Cardiol. 2009;54(5):397–401. doi: 10.1016/j.jacc.2009.04.051.
20. Crandall M, Duncan T, Mallat A, Greene W, Violano P, Christmas AB, et al. Prevention of fall-related injuries in the elderly: an Eastern Association for the Surgery of Trauma practice management guideline. The Journal of Trauma and Acute Care Surgery. 2016;81(1):196–206. doi: 10.1097/TA.0000000000001025.
21. Lewis SZ, Diekemper R, Addrizzo-Harris DJ. Methodology for development of guidelines for lung cancer: diagnosis and management of lung cancer, 3rd ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest. 2013;143(5 Suppl):41S–50S. doi: 10.1378/chest.12-2344.
22. Louw A, Diener I, Butler DS, Puentedura EJ. The effect of neuroscience education on pain, disability, anxiety, and stress in chronic musculoskeletal pain. Archives of Physical Medicine and Rehabilitation. 2011;92(12):2041–56. doi: 10.1016/j.apmr.2011.07.198.
23. Ornelas J, Dichter JR, Devereaux AV, Kissoon N, Livinski A, Christian MD. Methodology: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement. Chest. 2014;146(4 Suppl):35S–41S. doi: 10.1378/chest.14-0746.
24. Patel MB, Humble SS, Cullinane DC, Day MA, Jawa RS, Devin CJ, et al. Cervical spine collar clearance in the obtunded adult blunt trauma patient: a systematic review and practice management guideline from the Eastern Association for the Surgery of Trauma. The Journal of Trauma and Acute Care Surgery. 2015;78(2):430–41. doi: 10.1097/TA.0000000000000503.
25. Abu-Gharbieh E, Khalidi DA, Baig MR, Khan SA. Refining knowledge, attitude and practice of evidence-based medicine (EBM) among pharmacy students for professional challenges. Saudi Pharmaceutical Journal. 2015;23(2):162–6. doi: 10.1016/j.jsps.2014.07.006.
26. Brignardello-Petersen R, Carrasco-Labra A, Booth HA, Glick M, Guyatt GH, Azarpazhooh A, et al. A practical approach to evidence-based dentistry: how to search for evidence to inform clinical decisions. Journal of the American Dental Association. 2014;145(12):1262–7. doi: 10.14219/jada.2014.113.
27. Caldwell PH, Bennett T, Mellis C. Easy guide to searching for evidence for the busy clinician. Journal of Paediatrics and Child Health. 2012;48(12):1095–100. doi: 10.1111/j.1440-1754.2012.02503.x.
28. Hosny S, Ghaly MS. Teaching evidence-based medicine using a problem-oriented approach. Medical Teacher. 2014;36(Suppl 1):S62–8. doi: 10.3109/0142159X.2014.886007.
29. Rice MJ. Evidence-based practice: a model for clinical application. Journal of the American Psychiatric Nurses Association. 2013;19(4):217–21. doi: 10.1177/1078390313495563.
30. O'Sullivan D, Wilk S, Michalowski W, Farion K. Using PICO to align medical evidence with MDs decision making models. Studies in Health Technology and Informatics. 2013;192:1057.
31. LaRue EM, Draus P, Klem ML. A description of a Web-based educational tool for understanding the PICO framework in evidence-based practice with a citation ranking system. Computers, Informatics, Nursing. 2009;27(1):44–9. doi: 10.1097/NCN.0b013e31818dd3d7.
32. Agoritsas T, Merglen A, Courvoisier DS, Combescure C, Garin N, Perrier A, et al. Sensitivity and predictive value of 15 PubMed search strategies to answer clinical questions rated against full systematic reviews. Journal of Medical Internet Research. 2012;14(3):e85. doi: 10.2196/jmir.2021.
33. Hildon Z, Allwood D, Black N. Impact of format and content of visual display of data on comprehension, choice and preference: a systematic review. International Journal for Quality in Health Care. 2011:mzr072. doi: 10.1093/intqhc/mzr072.
34. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) - a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics. 2009;42(2):377–81. doi: 10.1016/j.jbi.2008.08.010.
35. Brooke J. SUS - a quick and dirty usability scale. In: Usability Evaluation in Industry. London: Taylor & Francis; 1996. p. 189–94.
36. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological). 1995;57(1):289–300.
37. Shariff SZ, Sontrop JM, Haynes RB, Iansavichus AV, McKibbon KA, Wilczynski NL, et al. Impact of PubMed search filters on the retrieval of evidence by physicians. CMAJ. 2012;184(3):E184–90. doi: 10.1503/cmaj.101661.
38. Mishra R, Bian J, Fiszman M, Weir CR, Jonnalagadda S, Mostafa J, et al. Text summarization in the biomedical domain: a systematic review of recent research. J Biomed Inform. 2014;52:457–67. doi: 10.1016/j.jbi.2014.06.009.
39. Rosenbaum SE, Glenton C, Nylund HK, Oxman AD. User testing and stakeholder feedback contributed to the development of understandable and useful Summary of Findings tables for Cochrane reviews. Journal of Clinical Epidemiology. 2010;63(6):607–19. doi: 10.1016/j.jclinepi.2009.12.013.
40. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 1989;13(3):319–40.
41. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians' clinical questions: obstacles and potential solutions. J Am Med Inform Assoc. 2005;12(2):217–24. doi: 10.1197/jamia.M1608.
42. Hsu W, Speier W, Taira RK. Automated extraction of reported statistical analyses: towards a logical representation of clinical trial literature. AMIA Annu Symp Proc. 2012;2012:350–9.
43. Summerscales RL, Argamon S, Bai S, Hupert J, Schwartz A. Automatic summarization of results from clinical trials. In: 2011 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 12–15 Nov 2011.

Supplementary Materials

supplement

RESOURCES