Abstract
Background:
Healthcare providers often encounter clinical trial results in the form of visual data displays. Although there is a robust literature on patient responses to data displays in medical settings, less is known about how providers comprehend and apply this information. Our study provides a scoping review of the literature on providers’ reactions to and perceptions of data displays.
Methods:
We searched article databases (PubMed, PsycINFO, Web of Science, Cumulative Index to Nursing and Allied Health Literature, the Cochrane Library) supplemented by handsearching. Eligible articles were published in English from 1990–2020.
Results:
We identified 15 articles meeting our criteria. Studies with physicians were more prevalent (13/15) than those with other healthcare providers (6/15). Commonly assessed outcomes included objective (10/15) and subjective comprehension (4/15), preference for certain data display formats (6/15), and hypothetical decision-making around prescribing (4/15). In studies that assessed comprehension of clinical trial concepts, scores were average or below what would be considered mastery of the information. Preferred data display formats did not always correspond with better comprehension; less preferred formats (e.g., icon arrays) often resulted in better comprehension.
Conclusion:
Our findings suggest that healthcare providers may not accurately interpret complex types of data displays, and it is unknown if such limitations affect actual decision-making. Interventions are needed to enhance comprehension of complex data displays within the context of prescription drug professional promotion.
Keywords: data displays, visual, graphic, comprehension, decision-making, systematic review
INTRODUCTION
A number of factors can influence the prescribing recommendations and behaviors of healthcare providers. The ability to understand and interpret clinical research to guide therapeutic decision-making is one such factor and a critical skill for healthcare providers who prescribe medications. A systematic review of prescribers’ knowledge and skills in evaluating clinical research results suggested that familiarity with evidence-based medicine concepts and skills, such as understanding statistical concepts or identifying study bias, ranged from low to moderate.1 A recent qualitative study with 77 U.S. physicians found that understanding of clinical trial concepts and terms displayed in prescription drug promotion was less than optimal.2 Further, prescribers may be unduly influenced by different statistical measures used to represent study outcomes.1 For example, in one large study, physicians rated a cholesterol-lowering drug as more effective and were more likely to prescribe it when the same trial data were presented as a relative risk reduction compared with an absolute risk reduction.3
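To illustrate how these two framings can diverge, the minimal sketch below uses hypothetical event rates (not the data from the cited trial): a drop in events from 4% to 3% of patients is a 1 percentage-point absolute risk reduction but a 25% relative risk reduction.

```python
# Hypothetical event rates for illustration only (not data from the cited trial).
def risk_reductions(control_rate: float, treatment_rate: float):
    """Return (absolute risk reduction, relative risk reduction)."""
    arr = control_rate - treatment_rate      # absolute difference in event rates
    rrr = arr / control_rate                 # same difference, relative to baseline risk
    return arr, rrr

arr, rrr = risk_reductions(control_rate=0.04, treatment_rate=0.03)
print(f"Absolute risk reduction: {arr:.1%}")   # 1.0% -- sounds modest
print(f"Relative risk reduction: {rrr:.0%}")   # 25% -- sounds impressive
```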
Visual data displays are commonly used to communicate clinical trial results in clinical trial summaries, medical journals, and prescription drug promotional materials. Data displays vary widely in their presentation formats and complexity,4 from basic tables and charts to more complex presentations such as survival curves and forest plots of meta-analysis findings. Although there is a robust literature on patient responses to data displays in medical settings,5–9 there has been less research attention on how healthcare providers understand and use this information, and more specifically, whether certain visual elements of the data display influence healthcare providers’ perceptions of the drug’s efficacy and prescribing related decision-making. For example, research with patients suggests that poorly or inaccurately designed data displays can increase the potential for denominator neglect,9 which is the tendency to focus only on the numerator without placing that information in the context of the larger study population. However, it is unclear if healthcare providers are susceptible to the same misperceptions, and if these misperceptions in turn influence prescribing related decisions or recommendations.
Our scoping review sought to answer the following questions: (1) what is healthcare providers’ understanding and processing of information in data displays used in medical settings; and (2) what characteristics of display design and presentation influence prescribing related decisions? By definition, scoping reviews are a type of evidence synthesis that use systematic methods to identify and map the breadth of evidence on a topic.10 As part of this effort, a related goal was to identify evidence gaps in the literature and suggest areas for future research. An understanding of how healthcare providers comprehend, process, and use this information can inform future best practices in developing data displays that can accurately and effectively communicate clinical trial information to those who prescribe medications. This information can be useful for industries that create and disseminate complex data displays of clinical trial information and for agencies that provide evidence-based guidance for their development. Further, gaps in understanding around data displays may inform future education opportunities for healthcare providers who rely on clinical trial results to make prescribing decisions.
METHODS
Our scoping review follows standard methods for the identification, selection, and synthesis of relevant research, described below.10 While we comment on the limitations to the evidence more broadly, we did not conduct a formal risk of bias assessment. We registered our protocol prior to data collection on Open Science Framework (osf.io/pm8x7).
Data Sources
The authors worked with a health sciences library specialist to develop search strategies in PubMed, American Psychological Association (APA) PsycINFO, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Cochrane Library. We developed search terms to identify relevant populations (i.e., physicians, healthcare providers, nurse practitioners, physician assistants), interventions (i.e., visual data displays pertaining to medical interventions), and outcomes of interest, which included comprehension, perceptions, preferences, and decision-making. We conducted our database searches in October of 2020. The full search strategy is available in Supplementary Appendix A. Hand searches for additional publications were also conducted using reference lists from key publications and web tools such as PubMed’s Cited by and Google Scholar. The search period spanned peer-reviewed literature published between January 1990 and October 13, 2020. We used DistillerSR software to manage the article search and selection and for identification of duplicates.
Study Selection
Two reviewers screened titles and abstracts (CW, MB), and any records excluded by one of these reviewers were reviewed by a third member of the team (JT). Titles and abstracts selected for full review were dually reviewed for eligibility. Conflicts regarding inclusion were adjudicated by the lead investigator (JT). We included studies meeting the following criteria: original research, inclusion of healthcare providers who prescribe medication (e.g., physicians, nurse practitioners, physician assistants), and outcomes that assessed comprehension, attitudes, preferences, or decisions in the context of exposure to a medical data display. We accepted eligible studies published after 1990, in English, and conducted in the United States or another very high or high human development index nation (http://hdr.undp.org/en/composite/HDI). We excluded systematic or narrative reviews and meta-analyses and primary studies published only in abstract or poster format or as letters.
Data Extraction
One reviewer extracted data, and the lead investigator checked the data for quality and accuracy. We extracted the following: study characteristics and design; intervention and comparison information; and eligible results from each study. We also noted the strengths and limitations of each study.
Data Synthesis
Outcomes from the included studies varied widely in terms of how they were operationalized and measured. This precluded any quantitative or meta-analytic synthesis of data. Therefore, we qualitatively synthesize findings and characteristics across studies. As part of this synthesis, we also identified gaps in the evidence.
RESULTS
Searches of five electronic databases yielded 722 unique citations, and an additional 11 articles were identified from hand searching. We screened 733 titles and abstracts and identified 68 articles for full-text review, of which 53 were excluded. We extracted and synthesized data from 15 articles meeting our inclusion criteria. Figure 1 shows the disposition of the reviewed articles in our study. Supplementary Appendix B provides details for each individual study.
Figure 1: Article Flow Diagram
Table 1 synthesizes grouped characteristics of the 15 included articles. Studies with physicians were more prevalent (13/15) than those that included other medical providers (6/15), such as physician assistants, nurse practitioners, and medical students or residents. Four of the studies with non-physicians (4/6) also included physicians and made comparisons by provider type. Sample sizes ranged from 33 to 968. Types of data in the displays varied and included risk of adverse events, survival, health-related quality of life, and treatment benefit; only one study assessed reactions to data from a meta-analysis display. Common study comparisons were type of visual display (e.g., icon array versus other format) and visual display compared with narrative text; less common comparisons included varied length of time, framing (positive versus negative), additive effects (one display versus two), and stylistic data display elements (e.g., shading, colors). Studies also varied in the medical condition that was the subject of the study; cancer was the most common (5/15). Other conditions included general surgery, anesthesia, diabetes, heart disease, and stroke. Studies used a variety of designs, including between-subjects experimental designs (4/15), within-subjects designs (4/15), combined between- and within-subjects designs (3/15), cross-sectional surveys (2/15), and qualitative interviews (2/15).
Table 1:
Characteristics of Studies Assessing Prescribers’ Perceptions of Data Displays (N=15)
Categories are not mutually exclusive such that more than one classification can apply within a single study; DO=Doctor of Osteopathic Medicine; NP=nurse practitioners; PA=physician assistant; PROs=patient-reported outcomes. Refer to Appendix B for details on individual studies.
Table 2 provides a synthesis of measured outcomes across the 15 studies. The most commonly assessed outcomes in this literature (Table 2) were objective comprehension of information (i.e., accuracy of response), subjective comprehension of information (e.g., perceived understanding), preferences for alternate data display presentations, and hypothetical decision-making (e.g., drug prescribing). Other studied outcomes included confidence in applying or using the information in the displays, acceptability of data displays for use with patients, comprehension assessment response time, and other attitudes and perceptions related to the data displays. We describe key findings according to each outcome category.
Table 2.
Outcomes Measured in Included Studies (N=15)
| Study | Objective Comprehension | Subjective Comprehension | Preference | Hypothetical Decision-making | Response Time | Other Attitudes/Perceptions | Confidence in Applying/Using |
|---|---|---|---|---|---|---|---|
| Baicus et al., 2017 | | | | X | | | |
| Brundage et al., 2011 | | | X | | | X | |
| Elting and Bodey, 1991 | X | | X | | X | | |
| Elting et al., 1999 | X | | X | | | | |
| Friederichs et al., 2014 | X | | | | | | X |
| Garcia-Retamero et al., 2016 | X | | | | X | | |
| Garcia-Retamero et al., 2020 | X | X | | | | | |
| Kuijpers et al., 2016 | X | X | X | | | | |
| Mazur and Hickam, 1993 | | | | X | | | |
| Moynihan et al., 2018 | X | | | | | X | |
| Petit-Moneger et al., 2017 | | X | | X | | | |
| Raina et al., 2005 | X | | | X | | | X |
| Snyder et al., 2017 | X | X | X | | | | |
| Windish et al., 2007 | X | | | | | X | X |
| Zikmund-Fisher et al., 2019 | | | X | | | X | |
Objective and Subjective Comprehension
Ten studies assessed objective comprehension of information presented in data displays.2,11–19 Three of these also studied subjective comprehension,15,16,18 while one study captured subjective comprehension alone.20 In four studies that tested icon arrays against other formats, the icon array consistently outperformed the alternatives.11,12,14,15 In these four studies, icon arrays conveyed information about risk likelihood by using a matrix representation of individual units (people) within the larger at-risk population. One study found that adding a tree diagram (depicting how calculations of test sensitivity and specificity are derived) improved objective comprehension over numeric probabilities alone.13 In studies that assessed subjective comprehension, participants generally perceived information to be clear or easy to understand, regardless of display type.
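As a rough illustration of the icon array format described above (a sketch only, assuming matplotlib; the displays tested in these studies varied in size, icons, and styling):

```python
# Minimal icon array sketch: a 10 x 10 grid in which each marker is one person and
# affected individuals are shaded, keeping the numerator visible within its denominator.
import matplotlib.pyplot as plt

def icon_array(n_affected: int, n_total: int = 100, n_cols: int = 10):
    n_rows = -(-n_total // n_cols)  # ceiling division
    fig, ax = plt.subplots(figsize=(4, 4))
    for i in range(n_total):
        row, col = divmod(i, n_cols)
        color = "firebrick" if i < n_affected else "lightgray"
        ax.scatter(col, n_rows - 1 - row, s=200, color=color)
    ax.set_title(f"{n_affected} of {n_total} patients affected")
    ax.set_aspect("equal")
    ax.axis("off")
    return fig

icon_array(n_affected=12)  # e.g., an adverse event occurring in 12 of 100 patients
plt.show()
```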
In one study, specialty physicians performed better on objective comprehension tasks than nurses and paramedical professionals,16 while another found that nurses and lab technicians performed better than physicians.11 In the latter, nurses and lab technicians had the highest scores with icon arrays, while physicians had the highest scores with tables.11 In the former study, the discrepancy between subjective understanding of the data and accuracy in understanding for nonphysicians was large: 85% responded that the information was easy to understand, while accurate understanding was 35%.16 We identified one study that also assessed the influence of numeracy: the accuracy of comprehension scores for less numerate surgeons improved when they were shown an icon array rather than numerical information alone.14
In four studies that either assessed objective comprehension of clinical trial terms2,17,19 or asked participants to calculate statistical quantities using data,13 accuracy varied and was overall lower than what would be considered mastery of the information. Aspects of data display design associated with greater objective comprehension of clinical trial information included presentation of equal denominators across displays,15 data tables that used negative as opposed to positive framing of outcomes,11 axes where higher values indicated better outcomes rather than larger quantities, and threshold lines or red circles signaling concerning outcomes.18
Preferences
Six studies assessed preferences when comparing two or more types of displays, and four of these also assessed objective comprehension of information. Of these studies, two found that although icon arrays produced the greatest comprehension, they were not the preferred display.11,12 Almost a quarter of participants in these two studies noted that they disliked the icon array format despite improved comprehension compared with pie or bar chart formats.12 Tables were most preferred in both studies, despite placing second to icon arrays in comprehension. In another study, participants preferred heat maps even though they had higher comprehension scores with noncolored bar charts.16 In two studies that assessed preferences without comprehension, participants expressed a preference for line graphs over tables and bar charts,21,22 particularly when displaying information on change over time.21
Decision-Making
Four studies assessed hypothetical decision-making about treatments and prescribing behavior.17,20,23,24 In one study of prescription drug effectiveness data, healthcare providers’ motivation to change elements of their practice was associated with display format: pictographs (i.e., icon arrays) and bar charts increased motivation more than linear “sliders”, despite icon arrays being rated least clear on subjective comprehension measures.20 In other studies, participants were more likely to say they would prescribe when graphs depicted a 5-year risk time frame as opposed to a 1-year risk time frame23 and when meta-analysis data were more homogeneous or yielded a larger effect size.17 In another study, prescribers who viewed survival curves comparing two treatments reported being more influenced by the areas under the curve rather than the endpoints.24
Other Outcomes (Response Time, Confidence, Perceptions/Attitudes)
In two studies that evaluated response time for comprehension questions, participants consistently had the fastest response times and greatest accuracy with icon arrays.11,14 Across three studies that evaluated confidence with statistical concepts, participants reported low to moderate confidence with certain calculations such as number needed to treat and positive predictive value.13,17,19 Participants felt more confident using more basic concepts, such as P values,19 and interpreting results with greater homogeneity or effect sizes.17 Other findings included a desire to learn more about statistics, acknowledgment that statistics encountered in research are difficult to comprehend,19 and the perception that clinical trial data often lack enough detail to assess their validity.21 In one study, participants’ preferred formats were the same as those they perceived as acceptable for use with patients, such as graphs featuring a goal range for target A1C levels (featured as a green shaded background area behind a grey target line) rather than the goal-only display (grey target line).22
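To make these calculations concrete, the sketch below works through number needed to treat and positive predictive value using hypothetical numbers (not values drawn from any reviewed study):

```python
# Hypothetical numbers for illustration; not values from any study in this review.

def number_needed_to_treat(control_rate: float, treatment_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (patients treated to prevent one event)."""
    return 1.0 / (control_rate - treatment_rate)

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV = true positives / all positives, following the usual Bayes calculation."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(number_needed_to_treat(0.04, 0.03)))               # 100 patients treated per event avoided
print(round(positive_predictive_value(0.90, 0.95, 0.01), 2))   # ~0.15 despite high sensitivity and specificity
```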
DISCUSSION
We identified very few studies of healthcare providers that assessed understanding and processing of clinical trial data presented as visual displays. While numerous factors influence safe and appropriate prescribing decisions, accurate understanding of clinical trial information is a critical and understudied skill in the practice of evidence-based medicine. The number of studies identified in our review was small relative to the likely thousands of articles published on medical therapeutics during the same time frame. Our search for relevant research suggests that the topic is greatly understudied or is perhaps a nascent field of research, because about half of the 15 identified studies were published within the past 5 years (from 2016 to 2020). Conversely, a robust body of literature exists on patient understanding of and best practices related to data displays in medical contexts.7,9,25–27 In a recent systematic review in which the authors concluded that well-designed visual aids can be very effective tools for improving informed decision-making, healthcare providers comprised only 6% of the study population across the 36 included studies.28 Thus, it is not clear if the same best practices designed with patients in mind are applicable to healthcare providers. For example, studies with patients suggest a “less is more” approach that favors simplifying data displays to increase comprehension.5,7,8 However, this approach may not be beneficial to healthcare providers, because providing too little information can fail to convey the complexity or nuances of the data.4
We identified four key gaps in this body of evidence with healthcare providers. First, most studies included only physicians; fewer included other healthcare providers who prescribe medications, such as physician assistants and nurse practitioners, and none exclusively focused on nonphysician prescribers. Thus, it is not clear if areas of understanding (or misunderstanding) vary by type of medical training or profession among those who are licensed to prescribe medical interventions. Of particular interest, one study found that the gap between perceived understanding of information presented in a data display and accurate comprehension was large for non-physicians,16 pointing to the need for education around clinical trial concepts for other professionals who prescribe. Second, studies that assessed decision-making relied on hypothetical or simulated scenarios; no studies assessed actual decision-making. Although the use of hypothetical or simulated scenarios (or vignettes) is a common methodology, the extent to which hypothetical prescribing translates to actual prescribing is not clear given the myriad influences on this behavior. Third, our review identified only one study that assessed understanding of data displays presenting meta-analysis results (a statistical method that combines and analyzes results of all discoverable trials on the same topic). Given the growing importance of applying meta-analysis results to the practice of evidence-based medicine,29,30 understanding how clinicians comprehend data displays often used in meta-analyses (e.g., forest plots) warrants further exploration. Similarly, evidence on how healthcare providers comprehend information in survival curves was scant, and no studies assessed comprehension of different clinical endpoints in data displays, such as overall survival versus progression-free survival.31 Findings from our review suggest that icon arrays may be ideal for communicating exact percentages while also conveying “gist” impressions of risk likelihood. However, icon arrays are not typically used to convey information on medication benefit or efficacy, and this format was often the least preferred by prescribers. When considering which formats to adopt, preference is secondary to comprehension of the information. Lastly, we did not identify any studies that examined how healthcare providers comprehend visual information conveying statistical uncertainty, such as interpretation of large confidence intervals. Evidence is scant about the best way to visually depict statistical ambiguity, signaling the need to explore this topic more deeply.32
The small and varied body of evidence precluded us from drawing quantitative conclusions across studies. However, we offer several observations. Findings suggest that healthcare providers may not accurately interpret complex types of data displays; they tend to prefer display presentations that are less likely to be interpreted correctly, and they may lack confidence in their ability to understand statistical concepts and apply them to their practice. It is plausible that these limitations to comprehension can adversely affect decision-making, although this hypothesis has not been tested. Certain characteristics of the data displays themselves, such as framing outcomes as negative versus positive events or displaying longer versus shorter time frames, influenced study outcomes. These findings suggest that careful attention to data display details should be considered during their development to communicate findings effectively and clearly.
Our study has limitations. First, despite use of a comprehensive database search strategy, we identified only 15 articles that met our criteria, several through hand searches, indicating that this literature is inconsistently indexed in databases of peer-reviewed literature. It is possible that we did not identify all relevant literature, but our thorough search methods should have mitigated this possibility. Our review was limited to peer-reviewed manuscripts published in English and in highly developed countries, although we do not suspect that these criteria eliminated large numbers of otherwise eligible studies. Second, we did not conduct a formal risk of bias assessment of our included studies, as would be done with a systematic review.10 However, our cursory review of the individual studies’ quality suggests that many are limited by use of small, convenience samples and idiosyncratic measures.
CONCLUSION
Our scoping review of the literature suggests that best practices are needed for developing data displays that visually communicate complex clinical trial information to prescribers. Further research is needed to guide these practices; such research should include nonphysician prescribers and rigorously evaluate comprehension of information and concepts, including data displays of meta-analysis results and visual presentations of statistical uncertainty. Such research should be informed by the large evidence base on patient understanding of data displays and best practices for their design more generally6,9,25,27,33,34 but adapted and tested for use with healthcare providers. Further, our findings could suggest the need for opportunities to better educate providers on interpreting complex data displays of clinical trial information within the context of prescription drug promotion.
Supplementary Material
LESSONS FOR PRACTICE
- Healthcare providers’ understanding of clinical trial information in data displays was low to moderate, and preferred display format did not always correspond with comprehension.
- Best practices are needed for developing visual data displays that accurately and effectively communicate clinical trial information.
- Interventions are needed to enhance comprehension of complex data displays among healthcare providers.
Acknowledgments
Financial support for this study was provided entirely by a contract with the US Food and Drug Administration. The following authors are employed by the sponsor: Kathryn Aikin and Helen Sullivan.
REFERENCES
- 1. Kahwati L, Carmody D, Berkman N, Sullivan HW, Aikin KJ, DeFrank J. Prescribers’ Knowledge and Skills for Interpreting Research Results: A Systematic Review. J Contin Educ Health Prof. 2017;37(2):129–136.
- 2. Moynihan CK, Burke PA, Evans SA, O’Donoghue AC, Sullivan HW. Physicians’ Understanding of Clinical Trial Data in Professional Prescription Drug Promotion. J Am Board Fam Med. 2018;31(4):645–649.
- 3. Bucher HC, Weinbacher M, Gyr K. Influence of method of reporting study results on decision of physicians to prescribe drugs to lower cholesterol concentration. BMJ. 1994;309(6957):761–764.
- 4. Cooper RJ, Schriger DL, Wallace RC, Mikulich VJ, Wilkes MS. The quantity and quality of scientific graphs in pharmaceutical advertisements. J Gen Intern Med. 2003;18(4):294–297.
- 5. Brewer NT, Richman AR, DeFrank JT, Reyna VF, Carey LA. Improving communication of breast cancer recurrence risk. Breast Cancer Res Treat. 2012;133(2):553–561.
- 6. Fagerlin A, Ubel PA, Smith DM, Zikmund-Fisher BJ. Making numbers matter: present and future research in risk communication. Am J Health Behav. 2007;31 Suppl 1:S47–56.
- 7. Fagerlin A, Zikmund-Fisher BJ, Ubel PA. Helping patients decide: ten steps to better risk communication. J Natl Cancer Inst. 2011;103(19):1436–1443.
- 8. Zikmund-Fisher BJ, Fagerlin A, Ubel PA. Improving understanding of adjuvant therapy options by using simpler risk graphics. Cancer. 2008;113(12):3382–3390.
- 9. Trevena LJ, Zikmund-Fisher BJ, Edwards A, Gaissmaier W, Galesic M, Han PK, King J, Lawson ML, Linder SK, Lipkus I, et al. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers. BMC Med Inform Decis Mak. 2013;13(Suppl 2):S7.
- 10. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018 Nov 19;18(1):143.
- 11. Elting LS, Bodey GP. Is a picture worth a thousand medical words? A randomized trial of reporting formats for medical research data. Methods Inf Med. 1991;30(2):145–150.
- 12. Elting LS, Martin CG, Cantor SB, Rubenstein EB. Influence of data display formats on physician investigators’ decisions to stop clinical trials: prospective trial with repeated measures. BMJ. 1999;318(7197):1527–1531.
- 13. Friederichs H, Ligges S, Weissenstein A. Using tree diagrams without numerical values in addition to relative numbers improves students’ numeracy skills: A randomized study in medical education. Med Decis Making. 2014;34(2):253–257.
- 14. Garcia-Retamero R, Cokely ET, Wicki B, Joeris A. Improving risk literacy in surgeons. Patient Educ Couns. 2016;99(7):1156–1161.
- 15. Garcia-Retamero R, Petrova D, Cokely ET, Joeris A. Scientific risk reporting in medical journals can bias expert judgment: Comparing surgeons’ risk comprehension across reporting formats. J Exp Psychol Appl. 2020;26(2):283–299.
- 16. Kuijpers W, Giesinger JM, Zabernigg A, Young T, Friend E, Tomaszewska IM, Aaronson NK, Holzner B. Patients’ and health professionals’ understanding of and preferences for graphical presentation styles for individual-level EORTC QLQ-C30 scores. Qual Life Res. 2016;25(3):595–604.
- 17. Raina PS, Brehaut JC, Platt RW, Klassen TP, Moher D, St John P, Bryant D, Viola R, Pham B. The influence of display and statistical factors on the interpretation of metaanalysis results by physicians. Med Care. 2005;43(12):1242–1249.
- 18. Snyder CF, Smith KC, Bantug ET, Tolbert EE, Blackford AL, Brundage MD. What do these scores mean? Presenting patient-reported outcomes data to patients and clinicians to improve interpretability. Cancer. 2017;123(10):1848–1859.
- 19. Windish DM, Huot SJ, Green ML. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA. 2007;298(9):1010–1022.
- 20. Petit-Monéger A, Saillour-Glénisson F, Nouette-Gaulain K, Jouhet V, Salmi LR. Comparing Graphical Formats for Feedback of Clinical Practice Data. A Multicenter Study among Anesthesiologists in France. Methods Inf Med. 2017;56(1):28–36.
- 21. Brundage M, Bass B, Jolie R, Foley K. A knowledge translation challenge: clinical use of quality of life data from cancer clinical trials. Qual Life Res. 2011;20(7):979–985.
- 22. Zikmund-Fisher BJ, Solomon JB, Scherer AM, Exe NL, Tarini BA, Fagerlin A, Witteman HO. Primary Care Providers’ Preferences and Concerns Regarding Specific Visual Displays for Returning Hemoglobin A1c Test Results to Patients. Med Decis Making. 2019;39(7):796–804.
- 23. Baicus C, Delcea C, Dima A, Oprisan E, Jurcut C, Dan GA. Influence of decision aids on oral anticoagulant prescribing among physicians: a randomised trial. Eur J Clin Invest. 2017;47(9):649–658.
- 24. Mazur DJ, Hickam DH. Patients’ and physicians’ interpretations of graphic data displays. Med Decis Making. 1993;13(1):59–63.
- 25. Ancker JS, Senathirajah Y, Kukafka R, Starren JB. Design features of graphs in health risk communication: a systematic review. J Am Med Inform Assoc. 2006;13(6):608–618.
- 26. Hibbard JH, Peters E. Supporting informed consumer health care decisions: data presentation approaches that facilitate the use of information in choice. Annu Rev Public Health. 2003;24:413–433.
- 27. Lipkus IM. Numeric, verbal, and visual formats of conveying health risks: suggested best practices and future recommendations. Med Decis Making. 2007;27(5):696–713.
- 28. Garcia-Retamero R, Cokely ET. Designing Visual Aids That Promote Risk Literacy: A Systematic Review of Health Research and Evidence-Based Design Heuristics. Hum Factors. 2017;59(4):582–627.
- 29. Sauerland S, Seiler CM. Role of systematic reviews and meta-analysis in evidence-based medicine. World J Surg. 2005;29(5):582–587.
- 30. Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310(6987):1122–1126.
- 31. Boudewyns V, Southwell BG, DeFrank JT, Ferriola-Bruckenstein K, Halpern MT, O’Donoghue AC, Sullivan HW. Patients’ understanding of oncology clinical endpoints: a literature review. Patient Educ Couns. 2020;103(9):1724–1735.
- 32. Politi MC, Han PK, Col NF. Communicating the uncertainty of harms and benefits of medical interventions. Med Decis Making. 2007;27(5):681–695.
- 33. Gillan D, Wickens C, Hollands J, Carswell C. Guidelines for Presenting Quantitative Data in HFES Publications. Human Factors. 1998;40:28–41.
- 34. Lipkus IM, Hollands JG. The visual communication of risk. J Natl Cancer Inst Monogr. 1999(25):149–163.