Cognitive Research: Principles and Implications. 2018 Jul 11;3:29. doi: 10.1186/s41235-018-0120-9

Decision making with visualizations: a cognitive framework across disciplines

Lace M Padilla 1,2, Sarah H Creem-Regehr 2, Mary Hegarty 3, Jeanine K Stefanucci 2
PMCID: PMC6091269  PMID: 30238055

Abstract

Visualizations—visual representations of information, depicted in graphics—are studied by researchers in numerous ways, ranging from the basic principles of creating visualizations to the cognitive processes underlying their use and the ways visualizations communicate complex information (such as medical risk or spatial patterns). However, findings are rarely shared across domains, even though domain-general principles may underlie visualizations and their use. This limited cross-domain communication may be due to the lack of a unifying cognitive framework. This review aims to address this gap by proposing an integrative model that is grounded in models of visualization comprehension and a dual-process account of decision making. We review empirical studies of decision making with static two-dimensional visualizations motivated by a wide range of research goals and find significant direct and indirect support for a dual-process account of decision making with visualizations. Consistent with a dual-process model, the first type of visualization decision mechanism produces fast, easy, and computationally light decisions with visualizations. The second facilitates slower, more contemplative, and effortful decisions with visualizations. We illustrate the utility of a dual-process account of decision making with visualizations using four cross-domain findings that may constitute universal visualization principles. Further, we offer guidance for future research, including novel areas of exploration and practical recommendations for visualization designers based on cognitive theory and empirical findings.

Keywords: Decision making with visualizations review, Cognitive model, Visual-spatial biases, Graphs, Geospatial visualizations, Healthcare visualizations, Weather forecast visualizations, Uncertainty visualizations, Graphical decision making, Dual-process

Significance

People use visualizations to make large-scale decisions, such as whether to evacuate a town before a hurricane strike, and more personal decisions, such as which medical treatment to undergo. Given their widespread use and social impact, researchers in many domains, including cognitive psychology, information visualization, and medical decision making, study how we make decisions with visualizations. Even though researchers continue to develop a wealth of knowledge on decision making with visualizations, there are obstacles for scientists interested in integrating findings from other domains—including the lack of a cognitive model that accurately describes decision making with visualizations. Research that does not capitalize on all relevant findings progresses slower, lacks generalizability, and may miss novel solutions and insights. Considering the importance and impact of decisions made with visualizations, it is critical that researchers have the resources to utilize cross-domain findings on this topic. This review provides a cognitive model of decision making with visualizations that can be used to synthesize multiple approaches to visualization research. Further, it offers practical recommendations for visualization designers based on the reviewed studies while deepening our understanding of the cognitive processes involved when making decisions with visualizations.

Introduction

Every day we make numerous decisions with the aid of visualizations, including selecting a driving route, deciding whether to undergo a medical treatment, and comparing figures in a research paper. Visualizations are external visual representations that are systematically related to the information that they represent (Bertin, 1983; Stenning & Oberlander, 1995). The information represented might be about objects, events, or more abstract information (Hegarty, 2011). The scope of the previously mentioned examples illustrates the diversity of disciplines that have a vested interest in the influence of visualizations on decision making. While the term decision has a range of meanings in everyday language, here decision making is defined as a choice between two or more competing courses of action (Balleine, 2007).

We argue that for visualizations to be most effective, researchers need to integrate decision-making frameworks into visualization cognition research. Reviews of decision making with visual-spatial uncertainty also agree there has been a general lack of emphasis on mental processes within the visualization decision-making literature (Kinkeldey, MacEachren, Riveiro, & Schiewe, 2017; Kinkeldey, MacEachren, & Schiewe, 2014). The framework that has dominated applied decision-making research for the last 30 years is a dual-process account of decision making. Dual-process theories propose that we have two types of decision processes: one for automatic, easy decisions (Type 1); and another for more contemplative decisions (Type 2) (Kahneman & Frederick, 2002; Stanovich, 1999).1 Even though many research areas involving higher-level cognition have made significant efforts to incorporate dual-process theories (Evans, 2008), visualization research has yet to directly test the application of current decision-making frameworks or develop an effective cognitive model for decision making with visualizations. The goal of this work is to integrate a dual-process account of decision making with established cognitive frameworks of visualization comprehension.

In this paper, we present an overview of current decision-making theories and existing visualization cognition frameworks, followed by a proposal for an integrated model of decision making with visualizations, and a selective review of visualization decision-making studies to determine if there is cross-domain support for a dual-process account of decision making with visualizations. As a preview, we will illustrate Type 1 and 2 processing in decision making with visualizations using four cross-domain findings that we observed in the literature review. Our focus here is on demonstrating how dual-processing can be a useful framework for examining visualization decision-making research. We selected the cross-domain findings as relevant demonstrations of Type 1 and 2 processing that were shared across the studies reviewed, but they do not represent all possible examples of dual-processing in visualization decision-making research. The review documents each of the cross-domain findings, in turn, using examples from studies in multiple domains. These cross-domain findings differ in their reliance on Type 1 and Type 2 processing. We conclude with recommendations for future work and implications for visualization designers.

Decision-making frameworks

Decision-making researchers have pursued two dominant research paths to study how humans make decisions under risk. The first assumes that humans make rational decisions, which are based on weighted and ordered probability functions and can be mathematically modeled (e.g. Kunz, 2004; Von Neumann, 1953). The second proposes that people often make intuitive decisions using heuristics (Gigerenzer, Todd, & ABC Research Group, 2000; Kahneman & Tversky, 1982). While there is fervent disagreement on the efficacy of heuristics and whether human behavior is rational (Vranas, 2000), there is more consensus that we can make both intuitive and strategic decisions (Epstein, Pacini, Denes-Raj, & Heier, 1996; Evans, 2008; Evans & Stanovich, 2013; cf. Keren & Schul, 2009). The capacity to make intuitive and strategic decisions is described by a dual-process account of decision making, which suggests that humans make fast, easy, and computationally light decisions (known as Type 1 processing) by default, but can also make slow, contemplative, and effortful decisions by employing Type 2 processing (Kahneman, 2011). Various versions of dual-processing theory exist, with the key distinctions being in the attributes associated with each type of process (for a more detailed review of dual-process theories, see Evans & Stanovich, 2013). For example, older dual-systems accounts of decision making suggest that each process is associated with specific cognitive or neurological systems. In contrast, dual-process (sometimes termed dual-type) theories propose that the processes are distinct but do not necessarily occur in separate cognitive or neurological systems (hence the use of process over system) (Evans & Stanovich, 2013).

Many applied domains have adapted a dual-processing model to explain task- and domain-specific decisions, with varying degrees of success (Evans, 2008). For example, when a physician is deciding if a patient should be assigned to a coronary care unit or a regular nursing bed, the doctor can use a heuristic or utilize heart disease predictive instruments to make the decision (Marewski & Gigerenzer, 2012). In the case of the heuristic, the doctor would employ a few simple rules (diagrammed in Fig. 1) that would guide her decision, such as whether the patient’s chief complaint is chest pain. Another approach is to apply deliberate mental effort to make a more time-consuming and effortful decision, which could include using heart disease predictive instruments (Marewski & Gigerenzer, 2012). In a review of how applied domains in higher-level cognition have implemented a dual-processing model for domain-specific decisions, Evans (2008) argues that prior work has conflicting accounts of Type 1 and 2 processing. Some studies suggest that the two types work in parallel, while others reveal conflicts between the two types (Sloman, 2002). In the physician example proposed by Marewski and Gigerenzer (2012), the two types are not mutually exclusive, as doctors can utilize Type 2 to make a more thoughtful decision that is also influenced by some rules of thumb or Type 1. In sum, Evans (2008) argues that due to the inconsistency in classifying Type 1 and 2 processing, the distinction between only two types is likely an oversimplification. Evans (2008) suggests that the literature only consistently supports the identification of processes that require a capacity-limited working memory resource versus those that do not. Evans and Stanovich (2013) updated their definition based on new behavioral and neuroscience evidence stating, “the defining characteristic of Type 1 processes is their autonomy. They do not require ‘controlled attention,’ which is another way of saying that they make minimal demands on working memory resources” (p. 236). There is also debate on how to define the term working memory (Cowan, 2017). In line with prior work on decision making with visualizations (Patterson et al., 2014), we adopt the definition that working memory consists of multiple components that maintain a limited amount of information (their capacity) for a finite period (Cowan, 2017). Contemporary theories of working memory also stress the ability to engage attention in a controlled manner to suppress automatic responses and maintain the most task-relevant information with limited capacity (Engle, Kane, & Tuholski, 1999; Kane, Bleckley, Conway, & Engle, 2001; Shipstead, Harrison, & Engle, 2015).

Fig. 1.

Fig. 1

Coronary care unit decision tree, which illustrates a sequence of rules that a doctor could use to guide treatment decisions. Redrawn from “Heuristic decision making in medicine” by J. Marewski and G. Gigerenzer, 2012, Dialogues in clinical neuroscience, 14(1), 77. ST-segment change refers to whether a certain anomaly appears in the patient’s electrocardiogram. NTG nitroglycerin, MI myocardial infarction, T T-waves with peaking or inversion

Identifying processes that require significant working memory provides a definition of Type 2 processing with observable neural correlates. Therefore, in line with Evans and Stanovich (2013), in the remainder of this manuscript we will use significant working memory capacity demands and a significant need for cognitive control, as defined above, as the criteria for Type 2 processing. In the context of visualization decision making, processes that require significant working memory are those that depend on the deliberate application of working memory to function. Type 1 processing occurs outside of users’ conscious awareness and may utilize small amounts of working memory but does not rely on conscious processing in working memory to drive the process. It should be noted that Type 1 and 2 processing are not mutually exclusive, and many real-world decisions likely incorporate both types of processing. This review will attempt to identify tasks in visualization decision making that require significant working memory capacity (Type 2 processing) and those that rely more heavily on Type 1 processing, as a first step toward combining decision theory with visualization cognition.

Visualization cognition

Visualization cognition is a subset of visuospatial reasoning, which involves deriving meaning from external representations of visual information that maintain consistent spatial relations (Tversky, 2005). Broadly, two distinct approaches delineate visualization cognition models (Shah, Freedman, & Vekiri, 2005). The first approach comprises perceptually focused frameworks, which attempt to specify the processes involved in perceiving visual information in displays and make predictions about the speed and efficiency of acquiring information from a visualization (e.g. Hollands & Spence, 1992; Lohse, 1993; Meyer, 2000; Simkin & Hastie, 1987). The second approach considers the influence of prior knowledge as well as perception. For example, Cognitive Fit Theory (Vessey, 1991) suggests that the user compares a learned graphic convention (mental schema) to the visual depiction. Visualizations that do not match the mental schema require cognitive transformations to make the visualization and mental representation align. For example, Fig. 2 illustrates a fictional relationship between the population growth of Species X and a predator species. At first glance, it may appear that when the predator species was introduced, the population of Species X dropped. However, after careful observation, you may notice that the higher population values are located lower on the Y-axis, which does not match our mental schema for graphs. With some effort, you can mentally reorder the values on the Y-axis to match your mental schema, and then you may notice that the introduction of the predator species actually correlates with growth in the population of Species X. When the viewer is forced to mentally transform the visualization to match their mental schema, processing steps are increased, which may increase errors, time to complete a task, and demand on working memory (Vessey, 1991).

Fig. 2.

Fig. 2

Fictional relationship between the population growth of Species X and a predator species, where the Y-axis ordering does not match standard graphic conventions. Notice that the Y-axis is reverse ordered. This figure was inspired by a controversial graphic produced by Christine Chan of Reuters, which showed the relationship between Florida’s “Stand Your Ground” law and firearm murders with the Y-axis reverse ordered (Lallanilla, 2014)
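For designers who want to see how the schema-violating display in Fig. 2 arises in practice, the short sketch below plots an invented population series on an inverted Y-axis with matplotlib and, for comparison, on a conventionally ordered axis. This is only an illustrative reconstruction under assumed data; all values, years, and labels are hypothetical and are not the data behind Fig. 2.

```python
import matplotlib.pyplot as plt

# Hypothetical data standing in for Fig. 2 (values invented for illustration)
years = list(range(2000, 2011))
population_x = [40, 42, 45, 46, 50, 55, 61, 68, 74, 82, 90]   # Species X, growing
predator_introduced = 2005                                     # hypothetical introduction year

fig, (ax_reversed, ax_standard) = plt.subplots(1, 2, figsize=(9, 3.5))
for ax in (ax_reversed, ax_standard):
    ax.plot(years, population_x)
    ax.axvline(predator_introduced, linestyle="--")
    ax.set_xlabel("Year")
    ax.set_ylabel("Population of Species X")

ax_reversed.invert_yaxis()                       # mimics the schema-violating display
ax_reversed.set_title("Reversed Y-axis (as in Fig. 2)")
ax_standard.set_title("Conventional Y-axis")
plt.tight_layout()
plt.show()
```

Both panels encode identical data; only the axis orientation differs, which is the mismatch that Cognitive Fit Theory predicts viewers must resolve with an effortful mental transformation.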

Pinker (1990) proposed a cognitive model (see Fig. 3), which provides an integrative structure that denotes the distinction between top-down and bottom-up encoding mechanisms in understanding data graphs. Researchers have generalized this model to propose theories of comprehension, learning, and memory with visual information (Hegarty, 2011; Kriz & Hegarty, 2007; Shah & Freedman, 2011). The Pinker (1990) model suggests that from the visual array, defined as the unprocessed neuronal firing in response to visualizations, bottom-up encoding mechanisms are utilized to construct a visual description, which is the mental encoding of the visual stimulus. Following encoding, viewers mentally search long-term memory for knowledge relevant for interpreting the visualization. This knowledge is proposed to be in the form of a graph schema.

Fig. 3.

Fig. 3

Adapted figure from the Pinker (1990) model of visualization comprehension, which illustrates each process

Then viewers use a match process, where the graph schema that is the most similar to the visual array is retrieved. When a matching graph schema is found, the schema becomes instantiated. The visualization conventions associated with the graph schema can then help the viewer interpret the visualization (message assembly process). For example, Fig. 3 illustrates comprehension of a bar chart using the Pinker (1990) model. In this example, the matched graph schema for a bar graph specifies that the dependent variable is on the Y-axis and the independent variable is on the X-axis; the instantiated graph schema incorporates the visual description and this additional information. The conceptual message is the resulting mental representation of the visualization that includes all supplemental information from long-term memory and any mental transformations the viewer may perform on the visualization. Viewers may need to transform their mental representation of the visualization based on their task or conceptual question. In this example, the viewer’s task is to find the average of A and B. To do this, the viewer must interpolate information in the bar chart and update the conceptual message with this additional information. The conceptual question can guide the construction of the mental representation through interrogation, which is the process of seeking out information that is necessary to answer the conceptual question. Top-down encoding mechanisms can influence each of the processes.

The influences of top-down processes are also emphasized in a previous attempt by Patterson et al. (2014) to extend visualization cognition theories to decision making. The Patterson et al. (2014) model illustrates how top-down cognitive processing influences encoding, pattern recognition, and working memory, but not decision making or the response. Patterson et al. (2014) use the multicomponent definition of working memory, proposed by Baddeley and Hitch (1974) and summarized by Cowan (2017) as a “multicomponent system that holds information temporarily and mediates its use in ongoing mental activities” (p. 1160). In this conception of working memory, a central executive controls the functions of working memory. The central executive can, among other functions, control attention and hold information in a visuo-spatial temporary store, where information can be maintained temporarily for decision making without being stored in long-term memory (Baddeley & Hitch, 1974).

While incorporating working memory into a visualization decision-making model is valuable, the Patterson et al. (2014) model leaves some open questions about relationships between components and processes. For example, their model lacks a pathway for working memory to influence decisions based on top-down processing, which is inconsistent with well-established research in decision science (e.g. Gigerenzer & Todd, 1999; Kahneman & Tversky, 1982). Additionally, the normal processing pathway, depicted in the Patterson model, is an oversimplification of the interaction between top-down and bottom-up processing that is documented in a large body of literature (e.g. Engel, Fries, & Singer, 2001; Mechelli, Price, Friston, & Ishai, 2004).

A proposed integrated model of decision making with visualizations

Our proposed model (Fig. 4) introduces a dual-process account of decision making (Evans & Stanovich, 2013; Gigerenzer & Gaissmaier, 2011; Kahneman, 2011) into the Pinker (1990) model of visualization comprehension. A primary addition of our model is the inclusion of working memory, which is utilized to answer the conceptual question and could have a subsequent impact on each stage of the decision-making process, except bottom-up attention. The final stage of our model includes a decision-making process that derives from the conceptual message and informs behavior. In line with a dual-process account (Evans & Stanovich, 2013; Gigerenzer & Gaissmaier, 2011; Kahneman, 2011), the decision step can either be completed with Type 1 processing, which uses only minimal working memory (Evans & Stanovich, 2013), or recruit significant working memory, constituting Type 2 processing. Also following Evans and Stanovich (2013), we argue that people can make a decision with a visualization while using minimal amounts of working memory. We classify this as Type 1 thinking. Lohse (1997) found that when participants made judgments about budget allocation using profit charts, individuals with less working memory capacity performed equally well compared to those with more working memory capacity when they only made decisions about three regions (easier task). However, when participants made judgments about nine regions (harder task), individuals with more working memory capacity outperformed those with less working memory capacity. The results of the study reveal that individual differences in working memory capacity only influence performance on complex decision-making tasks (Lohse, 1997). Figure 5 (top) illustrates one way that a viewer could make a Type 1 decision about whether the average value of bars A and B is closer to 2 or 2.2: she makes a fast and computationally light judgment that the midpoint between the two bars is closer to the salient tick mark of 2 on the Y-axis and answers 2 (which is incorrect). In contrast, Fig. 5 (bottom) shows a second possible method of solving the same problem by utilizing significant working memory (Type 2 processing). In this example, the viewer has recently learned a strategy to address similar problems, uses working memory to guide a top-down attentional search of the visual array, and identifies the values of A and B. Next, she instantiates a different graph schema than in the prior example by utilizing working memory and completes an effortful mental computation of (2.4 + 1.9)/2. Ultimately, the application of working memory leads to a different and more effortful decision than in Fig. 5 (top). This example illustrates how significant amounts of working memory can be used at early stages of the decision-making process and produce downstream effects and more considered responses. In the following sections, we provide a selective review of work on decision making with visualizations that demonstrates direct and indirect evidence for our proposed model.

Fig. 4.

Fig. 4

Model of visualization decision making, which emphasizes the influence of working memory. Long-term memory can influence all components and processes in the model either via pre-attentive processes or by conscious application of knowledge

Fig. 5.

Fig. 5

Examples of a fast Type 1 (top) and slow Type 2 (bottom) decision outlined in our proposed model of decision making with visualizations. In these examples, the viewer’s task is to decide if the average value of bars A and B is closer to 2 or 2.2. The thick dotted line denotes significant working memory use and the thin dotted line negligible working memory use
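To make the contrast in Fig. 5 concrete, the following sketch expresses the two strategies in code. It is purely illustrative: the bar values (2.4 and 1.9) and the response options (2 and 2.2) come from the example above, the labeled tick marks are assumed, and the functions are hypothetical stand-ins for the cognitive processes rather than an implementation of the model.

```python
# Illustrative sketch of the two decision strategies in Fig. 5 (not a cognitive model).
bar_a, bar_b = 2.4, 1.9          # values of bars A and B from the example
response_options = [2.0, 2.2]    # "is the average closer to 2 or 2.2?"
salient_ticks = [1.0, 2.0, 3.0]  # hypothetical labeled tick marks on the Y-axis

def type_1_decision():
    """Fast heuristic: visually place the midpoint near the closest salient tick."""
    perceived_midpoint = min(salient_ticks, key=lambda t: abs(t - (bar_a + bar_b) / 2))
    return min(response_options, key=lambda r: abs(r - perceived_midpoint))

def type_2_decision():
    """Effortful computation: compute the mean, then compare it to the options."""
    mean = (bar_a + bar_b) / 2   # (2.4 + 1.9) / 2 = 2.15
    return min(response_options, key=lambda r: abs(r - mean))

print(type_1_decision())  # 2.0 -> the fast, incorrect answer in Fig. 5 (top)
print(type_2_decision())  # 2.2 -> the considered answer in Fig. 5 (bottom)
```

The heuristic path never computes the exact mean; it snaps a rough visual estimate to the nearest salient landmark, which is why it is fast but, in this case, wrong.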

Empirical studies of visualization decision making

Review method

To determine if there is cross-domain empirical support for a dual-process account of decision making with visualizations, we selectively reviewed studies of complex decision making with computer-generated two-dimensional (2D) static visualizations. To illustrate the application of a dual-process account of decision making to visualization research, this review highlights representative studies from diverse application areas. Interdisciplinary groups conducted many of these studies and, as such, it is not accurate to classify the studies in a single discipline. However, to help the reader evaluate the cross-domain nature of these findings, Table 1 includes the application area for the specific tasks used in each study.

Table 1.

Application area for the tasks used in the reviewed studies

Task application area | Studies
Meteorology, weather, and natural disaster forecasting, weather communication | Cheong et al. (2016); Fabrikant et al. (2010); Gattis and Holyoak (1996); Hegarty et al. (2010); Joslyn and LeClerc (2013); Padilla et al. (2015); Ruginski et al. (2016)
Health research, medical images, health risk communication | Ancker et al. (2006); Fagerlin et al. (2005); Garcia-Retamero and Galesic (2009); Keehner et al. (2011); Keller et al. (2009); McCabe and Castel (2008); Okan et al. (2015); Okan, Garcia-Retamero, Cokely, and Maldonado (2012); Schirillo and Stone (2005); Stone et al. (2003); Stone et al. (1997); Waters et al. (2006); Waters et al. (2007)
Land-use planning, spatial planning, urban planning | Dennis and Carte (1998); Lee and Bednarz (2009); Smelcer and Carmel (1997); Wilkening and Fabrikant (2011)
Cost comparison, finance | Lohse (1993); Vessey and Galletta (1991)
Geospatial location | Hegarty et al. (2016); McKenzie et al. (2016)
Error-bar interpretation, graph comparison, statistics communication, science reasoning | Belia et al. (2005); Feeney et al. (2000); Newman and Scholl (2012); Sanchez and Wiley (2006); Wainer et al. (1999)
Map reading, map perception | Brügger et al. (2017); St. John et al. (2001)
Social network, computer connections | Tversky et al. (2012); Zhu and Watts (2010)
Map-based threat identification, emergency management | Bailey et al. (2007); Shen et al. (2012)

In reviewing this work, we observed four key cross-domain findings that support a dual-process account of decision making (see Table 2). The first two support the inclusion of Type 1 processing, which is illustrated by the direct path for bottom-up attention to guide decision making with the minimal application of working memory (see Fig. 5 top). The first finding is that visualizations direct viewers’ bottom-up attention, which can both help and hinder decision making (see “Bottom-up attention”). The second finding is that visual-spatial biases comprise a unique category of bias that is a direct result of the visual encoding technique (see “Visual-Spatial Biases”). The third finding supports the inclusion of Type 2 processing in our proposed model and suggests that visualizations vary in cognitive fit between the visual description, graph schema, and conceptual question. If the fit is poor (i.e. there is a mismatch between the visualization and a decision-making component), working memory is used to perform corrective mental transformations (see “Cognitive fit”). The final cross-domain finding proposes that knowledge-driven processes may interact with the effects of the visual encoding technique (see “Knowledge-driven processing”) and could be a function of either Type 1 or 2 processes. Each of these findings will be detailed at length in the relevant sections. The four cross-domain findings do not represent an exhaustive list of all cross-domain findings that pertain to visualization cognition. However, these were selected as illustrative examples of Type 1 and 2 processing that include significant contributions from multiple domains. Further, some of the studies could fit into multiple sections and were included in a particular section as illustrative examples.

Table 2.

Overview of the four cross-domain findings along with the type of processing that they reflect

Cross-domain finding | Evidence for Type
1. Visualizations direct viewers’ bottom-up attention, which can both help and hinder decision making. | Type 1
2. The visual encoding technique gives rise to visual-spatial biases. | Type 1
3. Visualizations that have greater cognitive fit produce faster and more effective decisions. | Type 2
4. Knowledge-driven processes can interact with the effects of the encoding technique. | Either

The italicised words correspond to section titles

Type 1

Bottom-up attention

The first cross-domain finding that characterizes Type 1 processing in visualization decision making is that visualizations direct participants’ bottom-up attention to specific visual features, which can be either beneficial or detrimental to decision making. Bottom-up attention consists of involuntary shifts in focus to salient features of a visualization and does not utilize working memory (Connor, Egeth, & Yantis, 2004); it is therefore a Type 1 process. The research reviewed in this section illustrates that bottom-up attention has a profound influence on decision making with visualizations. A summary of visual features that studies have used to attract bottom-up attention can be found in Table 3.

Table 3.

Visual features used in the reviewed studies to attract bottom-up attention

Features | Studies
Color | Fabrikant et al. (2010); Hegarty et al. (2010)
Edges and lines | Fabrikant et al. (2010); Hegarty et al. (2010); Padilla, Ruginski, and Creem-Regehr (2017)
Foreground information | Schirillo and Stone (2005); Stone et al. (2003); Stone et al. (1997)

Numerous studies show that salient information in a visualization draws viewers’ attention (Fabrikant, Hespanha, & Hegarty, 2010; Hegarty, Canham, & Fabrikant, 2010; Hegarty, Friedman, Boone, & Barrett, 2016; Padilla, Ruginski, & Creem-Regehr, 2017; Schirillo & Stone, 2005; Stone et al., 2003; Stone, Yates, & Parker, 1997). The most common methods for demonstrating that visualizations focus viewers’ attention are showing that viewers miss non-salient but task-relevant information (Schirillo & Stone, 2005; Stone et al., 1997; Stone et al., 2003), that viewers are biased by salient information (Hegarty et al., 2016; Padilla, Ruginski et al., 2017), or that viewers spend more time looking at salient information in a visualization (Fabrikant et al., 2010; Hegarty et al., 2010). For example, Stone et al. (1997) demonstrated that when viewers are asked how much they would pay for an improved product using the visualizations in Fig. 6, they focus on the number of icons while missing the base rate of 5,000,000. If a viewer simply totals the icons, the standard product appears to be twice as dangerous as the improved product, but because the base rate is large, the actual difference between the two products is negligible (0.0000003; Stone et al., 1997). In one experiment, participants were willing to pay $125 more for improved tires when viewing the visualizations in Fig. 6 compared to a purely textual representation of the information. The authors also demonstrated the same effect for improved toothpaste, with participants paying $0.95 more when viewing a visual depiction compared to text. The authors term this heuristic of focusing on salient information while ignoring other data the foreground effect (Stone et al., 1997) (see also Schirillo & Stone, 2005; Stone et al., 2003).

Fig. 6.

Fig. 6

Icon arrays used to illustrate the risk of standard or improved tires. Participants were tasked with deciding how much they would pay for the improved tires. Note the base rate of 5 M drivers was represented in text. Redrawn from “Effects of numerical and graphical displays on professed risk-taking behavior” by E. R. Stone, J. F. Yates, & A. M. Parker. 1997, Journal of Experimental Psychology: Applied, 3(4), 243
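A short worked computation illustrates why totaling the icons overstates the difference between the products. The incident counts below are invented for illustration (the actual values used by Stone et al., 1997, differ, and the article above reports an absolute difference of roughly 0.0000003); only the 5,000,000 base rate is taken from the example.

```python
# Hypothetical illustration of the foreground-effect arithmetic.
base_rate = 5_000_000          # drivers, from the example above
standard_incidents = 30        # hypothetical icon count for the standard product
improved_incidents = 15        # hypothetical icon count for the improved product

relative_risk = standard_incidents / improved_incidents            # 2.0: "twice as dangerous"
absolute_difference = (standard_incidents - improved_incidents) / base_rate

print(relative_risk)        # 2.0   -> what the foregrounded icons alone suggest
print(absolute_difference)  # 3e-06 -> the tiny absolute difference once the base rate is used
                            # (with the values reported by Stone et al., smaller still, ~3e-07)
```

The relative comparison that the icons foreground (a 2:1 ratio) is accurate, but the absolute difference revealed by the base rate is tiny, which is precisely the information viewers miss under the foreground effect.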

A more direct test of visualizations guiding bottom-up attention is to examine if salient information biases viewers’ judgments. One method involves identifying salient features using a behaviorally validated saliency model, which predicts the locations that will attract viewers’ bottom-up attention (Harel, 2015; Itti, Koch, & Niebur, 1998; Rosenholtz & Jin, 2005). In one study, researchers compared participants’ judgments with different hurricane forecast visualizations and then, using the Itti et al. (1998) saliency algorithm, found that the differences in what was salient in the two visualizations correlated with participants’ performance (Padilla, Ruginski et al., 2017). Specifically, they suggested that the salient borders of the Cone of Uncertainty (see Fig. 7, left), which is used by the National Hurricane Center to display hurricane track forecasts, leads some people to incorrectly believe that the hurricane is growing in physical size, which is a misunderstanding of the probability distribution of hurricane paths that the cone is intended to represent (Padilla, Ruginski et al., 2017; see also Ruginski et al., 2016). Further, they found that when the same data were represented as individual hurricane paths, such that there was no salient boundary (see Fig. 7, right), viewers intuited the probability of hurricane paths more effectively than the Cone of Uncertainty. However, an individual hurricane path biased viewers’ judgments if it intersected a point of interest. For example, in Fig. 7 (right), participants accurately judged that locations closer to the densely populated lines (highest likelihood of storm path) would receive more damage. This correct judgment changed when a location farther from the center of the storm was intersected by a path, but the closer location was not (see locations a and b in Fig. 7 right). With both visualizations, the researchers found that viewers were negatively biased by the salient features for some tasks (Padilla, Ruginski et al., 2017; Ruginski et al., 2016).

Fig. 7.

Fig. 7

An example of the Cone of Uncertainty (left) and the same data represented as hurricane paths (right). Participants were tasked with evaluating the level of damage that offshore oil rigs at specific locations would incur, based on the hurricane forecast visualization. Redrawn from “Effects of ensemble and summary displays on interpretations of geospatial uncertainty data” by L. M. Padilla, I. Ruginski, and S. H. Creem-Regehr. 2017, Cognitive Research: Principles and Implications, 2(1), 40
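For readers unfamiliar with saliency models, the sketch below shows the general flavor of such an analysis: it computes a crude center-surround contrast map from a grayscale image using a difference of Gaussian blurs. This is a simplified stand-in of our own, not the Itti et al. (1998) algorithm used in the studies above, which combines multi-scale color, intensity, and orientation channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def crude_saliency(image_gray, center_sigma=2, surround_sigma=8):
    """Very simplified center-surround saliency proxy: difference of Gaussian blurs.
    image_gray: 2D float array of grayscale intensities in [0, 1]."""
    center = gaussian_filter(image_gray, center_sigma)
    surround = gaussian_filter(image_gray, surround_sigma)
    saliency = np.abs(center - surround)          # local contrast stands out
    return saliency / (saliency.max() + 1e-9)     # normalize to [0, 1]

# Toy example: a bright band on a dim background (loosely analogous to a cone's border)
img = np.zeros((100, 100))
img[45:55, :] = 1.0                               # high-contrast horizontal band
sal = crude_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # peak saliency falls along the band's edges
```

Studies such as Padilla, Ruginski et al. (2017) compare maps like these against viewers' judgments and eye movements to test whether the most salient regions are also the ones driving decisions.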

That is not to say that saliency only negatively impacts decisions. When incorporated into visualization design, saliency can guide bottom-up attention to task-relevant information, thereby improving performance (e.g. Fabrikant et al., 2010; Fagerlin, Wang, & Ubel, 2005; Hegarty et al., 2010; Schirillo & Stone, 2005; Stone et al., 2003; Waters, Weinstein, Colditz, & Emmons, 2007). One compelling example using both eye-tracking measures and a saliency algorithm demonstrated that salient features of weather maps directed viewers’ attention to different variables that were visualized on the maps (Hegarty et al., 2010) (see also Fabrikant et al., 2010). Interestingly, when the researchers manipulated the relative salience of temperature versus pressure (see Fig. 8), the salient features captured viewers’ overt attention (as measured by eye fixations) but did not influence performance, until participants were trained on how to effectively interpret the features. Once viewers were trained, their judgments were facilitated when the relevant features were more salient (Hegarty et al., 2010). This is an instructive example of how saliency may direct viewers’ bottom-up attention but may not influence their performance until viewers have the relevant top-down knowledge to capitalize on the affordances of the visualization.

Fig. 8.

Fig. 8

Eye-tracking data from Hegarty et al. (2010). Participants viewed an arrow located in Utah (obscured by eye-tracking data in the figure) and made judgments about whether the arrow correctly identified the wind direction. The black isobars were the task-relevant information. Notice that after instructions, viewers with the pressure-salient visualizations focused on the isobars surrounding Utah, rather than on the legend or in other regions. The panels correspond to the conditions in the original study

In sum, the reviewed studies suggest that bottom-up attention has a profound influence on decision making with visualizations. This is noteworthy because bottom-up attention is a Type 1 process. At a minimum, the work suggests that Type 1 processing influences the first stages of decision making with visualizations. Further, the studies cited in this section provide support for the inclusion of bottom-up attention in our proposed model.

Visual-spatial biases

A second cross-domain finding that relates to Type 1 processing is that visualizations can give rise to visual-spatial biases that can be either beneficial or detrimental to decision making. We propose the new concept of visual-spatial biases and define the term as a bias that elicits heuristics as a direct result of the visual encoding technique. Visual-spatial biases likely originate as a Type 1 process, as we suspect they are connected to bottom-up attention, and, if detrimental to decision making, they have to be actively suppressed by top-down knowledge and cognitive control mechanisms (see Table 4 for a summary of the biases documented in this section). Visual-spatial biases can also improve decision-making performance. As Card, Mackinlay, and Shneiderman (1999) point out, we can use vision to think, meaning that visualizations can capitalize on visual perception to interpret a visualization without effort when the biases elicited by the visualization are consistent with the correct interpretation.

Table 4.

Biases documented in the reviewed studies

Bias | Studies
Anchoring | Belia et al. (2005)
Anecdotal evidence | Fagerlin et al. (2005)
Containment | McKenzie et al. (2016); Joslyn and LeClerc (2013); Grounds et al. (2017); Newman and Scholl (2012); Ruginski et al. (2016)
Deterministic construal | Grounds et al. (2017); Joslyn and LeClerc (2013)
High-quality image | Keehner et al. (2011); McCabe and Castel (2008); St. John et al. (2001); Ancker et al. (2006); Brügger et al. (2017); Hegarty et al. (2012); Wainer et al. (1999); Wilkening and Fabrikant (2011)
Risk aversion | Schirillo and Stone (2005)
Side effect aversion | Waters et al. (2006); Waters et al. (2007)

Tversky (2011) presents a taxonomy of visual-spatial communications that are intrinsically related to thought, which are likely the bases for visual-spatial biases (see also Fabrikant & Skupin, 2005). One of the most commonly documented visual-spatial biases that we observed across domains is a containment conceptualization of boundary representations in visualizations. Tversky (2011) makes the analogy, “Framing a picture is a way of saying that what is inside the picture has a different status from what is outside the picture” (p. 522). Similarly, Fabrikant and Skupin (2005) describe how, “They [boundaries] help partition an information space into zones of relative semantic homogeneity” (p. 673). However, in visualization design, it is common to take continuous data and visually represent them with boundaries (i.e. summary statistics, error bars, isocontours, or regions of interest; Padilla et al., 2015; Padilla, Quinan, Meyer, & Creem-Regehr, 2017). Binning continuous data is a reasonable approach, particularly when intended to make the data simpler for viewers to understand (Padilla, Quinan, et al., 2017). However, it may have the unintended consequence of creating artificial boundaries that can bias users—leading them to respond as if data within a containment is more similar than data across boundaries. For example, McKenzie, Hegarty, Barrett, and Goodchild (2016) showed that participants were more likely to use a containment heuristic to make decisions about Google Map’s blue dot visualization when the positional uncertainty data were visualized as a bounded circle (Fig. 9 right) compared to a Gaussian fade (Fig. 9 left) (see also Newman & Scholl, 2012; Ruginski et al., 2016). Recent work by Grounds, Joslyn, and Otsuka (2017) found that viewers demonstrate a “deterministic construal error” or the belief that visualizations of temperature uncertainty represent a deterministic forecast. However, the deterministic construal error was not observed with textual representations of the same data (see also Joslyn & LeClerc, 2013).

Fig. 9.

Fig. 9

Example stimuli from McKenzie et al. (2016) showing circular semi-transparent overlays used by Google Maps to indicate the uncertainty of the users’ location. Participants compared two versions of these visualizations and determined which represented the most accurate positional location. Redrawn from “Assessing the effectiveness of different visualizations for judgments of positional uncertainty” by G. McKenzie, M. Hegarty, T. Barrett, and M. Goodchild. 2016, International Journal of Geographical Information Science, 30(2), 221–239

Additionally, some visual-spatial biases follow the same principles as more well-known decision-making biases revealed by researchers in behavioral economics and decision science. In fact, some decision-making biases, such as anchoring, the tendency to use the first data point to make relative judgments, seem to have visual correlates (Belia, Fidler, Williams, & Cumming, 2005). For example, Belia et al. (2005) asked experts with experience in statistics to align two means (representing “Group 1” and “Group 2”) with error bars so that they represented data ranges that were just significantly different (see Fig. 10 for example of stimuli). They found that when the starting position of Group 2 was around 800 ms, participants placed Group 2 higher than when the starting position for Group 2 was at around 300 ms. This work demonstrates that participants used the starting mean of Group 2 as an anchor or starting point of reference, even though the starting position was arbitrary. Other work finds that visualizations can be used to reduce some decision-making biases including anecdotal evidence bias (Fagerlin et al., 2005), side effect aversion (Waters et al., 2007; Waters, Weinstein, Colditz, & Emmons, 2006), and risk aversion (Schirillo & Stone, 2005).

Fig. 10.

Fig. 10

Example display and instructions from Belia et al. (2005). Redrawn from “Researchers misunderstand confidence intervals and standard error bars” by S. Belia, F. Fidler, J. Williams, and G. Cumming. 2005, Psychological Methods, 10(4), 390. Copyright 2005 by “American Psychological Association”

Additionally, the mere presence of a visualization may inherently bias viewers. For example, viewers judge scientific articles with high-quality neuroimaging figures to contain better scientific reasoning than the same article with a bar chart or without a figure (McCabe & Castel, 2008). People tend to unconsciously believe that high-quality scientific images reflect high-quality science—as illustrated by work from Keehner, Mayberry, and Fischer (2011) showing that viewers rate articles with three-dimensional brain images as more scientific than those with 2D images, schematic drawings, or diagrams (see Fig. 11). Unintuitively, however, high-quality complex images can be detrimental to performance compared to simpler visualizations (Hegarty, Smallman, & Stull, 2012; St. John, Cowen, Smallman, & Oonk, 2001; Wilkening & Fabrikant, 2011). Hegarty et al. (2012) demonstrated that novice users prefer realistically depicted maps (see Fig. 12), even though these maps increased the time taken to complete the task and focused participants’ attention on irrelevant information (Ancker, Senathirajah, Kukafka, & Starren, 2006; Brügger, Fabrikant, & Çöltekin, 2017; St. John et al., 2001; Wainer, Hambleton, & Meara, 1999; Wilkening & Fabrikant, 2011). Interestingly, professional meteorologists also demonstrated the same biases as novice viewers (Hegarty et al., 2012) (see also Nadav-Greenberg, Joslyn, & Taing, 2008).

Fig. 11.

Fig. 11

Image showing participants’ ratings of three-dimensionality and scientific credibility for a given neuroimaging visualization, originally published in grayscale (Keehner et al., 2011)

Fig. 12.

Fig. 12

Example stimuli from Hegarty et al. (2012) showing maps with varying levels of realism. Both novice viewers and meteorologists were tasked with selecting a visualization to use and performing a geospatial task. The panels correspond to the conditions in the original study

We argue that visual-spatial biases reflect a Type 1 process, occurring automatically with minimal working memory. Work by Sanchez and Wiley (2006) provides direct evidence for this assertion using eye-tracking data to demonstrate that individuals with less working memory capacity attend to irrelevant images in a scientific article more than those with greater working memory capacity. The authors argue that we are naturally drawn to images (particularly high-quality depictions) and that significant working memory capacity is required to shift focus away from images that are task-irrelevant. The ease by which visualizations captivate our focus and direct our bottom-up attention to specific features likely increases the impact of these biases, which may be why some visual-spatial biases are notoriously difficult to override using working memory capacity (see Belia et al., 2005; Boone, Gunalp, & Hegarty, in press; Joslyn & LeClerc, 2013; Newman & Scholl, 2012). We speculate that some visual-spatial biases are intertwined with bottom-up attention—occurring early in the decision-making process and influencing the down-stream processes (see our model in Fig. 4 for reference), making them particularly unremitting.

Type 2

Cognitive fit

We also observe a cross-domain finding involving Type 2 processing, which suggests that if there is a mismatch between the visualization and a decision-making component, working memory is used to perform corrective mental transformations. Cognitive fit is a term used to describe the correspondence between the visualization and conceptual question or task (see our model for reference; for an overview of cognitive fit, see Vessey, Zhang, & Galletta, 2006). Those interested in examining cognitive fit generally attempt to identify and reduce mismatches between the visualization and one of the decision-making components (see Table 5 for a breakdown of the decision-making components that the reviewed studies evaluated). When there is a mismatch produced by the default Type 1 processing, it is argued that significant working memory (Type 2 processing) is required to resolve the discrepancy via mental transformations (Vessey et al., 2006). As working memory is capacity limited, the magnitude of mental transformation or amount of working memory required is one predictor of reaction times and errors.

Table 5.

Decision-making components that the reviewed studies evaluated the cognitive fit of

Cognitive fit examined | Studies
Visualization -> task | Dennis and Carte (1998); Grounds et al. (2017); Huang et al. (2006); Nadav-Greenberg et al. (2008); Smelcer and Carmel (1997); Vessey and Galletta (1991); Zhu and Watts (2010)
Visualization -> primed schema | Tversky et al. (2012)
Visualization -> learned schema | Feeney et al. (2000); Gattis and Holyoak (1996); Joslyn and LeClerc (2013)

Direct evidence for this claim comes from work demonstrating that cognitive fit differentially influenced the performance of individuals with more and less working memory capacity (Zhu & Watts, 2010). The task was to identify which two nodes in a social media network diagram should be removed to disconnect the maximal number of nodes. As predicted by cognitive fit theory, when the visualization did not facilitate the task (Fig. 13 left), participants with less working memory capacity were slower than those with more working memory capacity. However, when the visualization aligned with the task (Fig. 13 right), there was no difference in performance. This work suggests that when there is misalignment between the visualization and a decision-making process, people with more working memory capacity have the resources to resolve the conflict, while those with fewer resources show performance degradation.2 Other work found only a modest relationship between working memory capacity and correct interpretations of high and low temperature forecast visualizations (Grounds et al., 2017), which suggests that, for some visualizations, viewers utilize little working memory.

Fig. 13.

Fig. 13

Examples of social media network diagrams from Zhu and Watts (2010). The authors argue that the figure on the right is more aligned with the task of identifying the most interconnected nodes than the figure on the left

As illustrated in our model, working memory can be recruited to aid all stages of the decision-making process except bottom-up attention. Work that examines cognitive fit theory provides indirect evidence that working memory is required to resolve conflicts that arise during schema matching between the visualization and a decision-making component. For example, one way that a mismatch between a viewer’s mental schema and a visualization can arise is when the viewer uses a schema that is not optimal for the task. Tversky, Corter, Yu, Mason, and Nickerson (2012) primed participants to use different schemas by describing the connections in Fig. 14 in terms of either transfer speed or security levels. Participants then decided on the most efficient or secure route for information to travel between computer nodes with a visualization that encoded data using the thickness of connections, containment, or physical distance (see Fig. 14). Tversky et al. (2012) found that when the links were described based on their information transfer speed, thickness and distance visualizations were the most effective—suggesting that the speed mental schema was most closely matched to the thickness and distance visualizations, whereas the speed schema required mental transformations to align with the containment visualization. Similarly, the thickness and containment visualizations outperformed the distance visualization when the nodes were described as belonging to specific systems with different security levels. This work and others (Feeney, Hola, Liversedge, Findlay, & Metcalf, 2000; Gattis & Holyoak, 1996; Joslyn & LeClerc, 2013; Smelcer & Carmel, 1997) provide indirect evidence that gratuitous realignment between the mental schema and the visualization can be error-prone, and that visualization designers should work to reduce the number of transformations required in the decision-making process.

Fig. 14.

Fig. 14

Example of stimuli from Tversky et al. (2012) showing three types of encoding techniques for connections between nodes (thickness, containment, and distance). Participants were asked to select routes between nodes with different descriptions of the visualizations. Redrawn from “Representing category and continuum: Visualizing thought” by B. Tversky, J. Corter, L. Yu, D. Mason, and J. Nickerson. In Diagrams 2012 (p. 27), P. Cox, P. Rodgers, and B. Plimmer (Eds.), 2012, Berlin Heidelberg: Springer-Verlag

Researchers from multiple domains have also documented cases of misalignment between the task, or conceptual question, and the visualization. For example, Vessey and Galletta (1991) found that participants completed a financial-based task faster when the visualization they chose (graph or table, see Fig. 15) matched the task (spatial or textual). For the spatial task, participants decided which month had the greatest difference between deposits and withdrawals. The textual or symbolic tasks involved reporting specific deposit and withdrawal amounts for various months. The authors argued that when there is a mismatch between the task and visualization, the additional transformation accounts for the increased time taken to complete the task (Vessey & Galletta, 1991) (see also Dennis & Carte, 1998; Huang et al., 2006), which likely takes place in the inference process of our proposed model.

Fig. 15.

Fig. 15

Examples of stimuli from Vessey and Galletta (1991) depicting deposit and withdrawal amounts over the course of a year with a graph (a) and table (b). Participants completed either a spatial or textual task with the chart or table. Redrawn from “Cognitive fit: An empirical study of information acquisition” by I. Vessey, and D. Galletta. 1991, Information systems research, 2(1), 72–73. Copyright 1991 by “INFORMS”

The aforementioned studies provide direct (Zhu & Watts, 2010) and indirect (Dennis & Carte, 1998; Feeney et al., 2000; Gattis & Holyoak, 1996; Huang et al., 2006; Joslyn & LeClerc, 2013; Smelcer & Carmel, 1997; Tversky et al., 2012; Vessey & Galletta, 1991) evidence that Type 2 processing recruits working memory to resolve misalignment between decision-making processes and the visualization that arise from default Type 1 processing. These examples of Type 2 processing using working memory to perform effortful mental computations are consistent with the assertions of Evans and Stanovich (2013) that Type 2 processes enact goal directed complex processing. However, it is not clear from the reviewed work how exactly the visualization and decision-making components are matched. Newman and Scholl (2012) propose that we match the schema and visualization based on the similarities between the salient visual features, although this proposal has not been tested. Further, work that assesses cognitive fit in terms of the visualization and task only examines the alignment of broad categories (i.e., spatial or semantic). Beyond these broad classifications, it is not clear how to predict if a task and visualization are aligned. In sum, there is not a sufficient cross-disciplinary theory for how mental schemas and tasks are matched to visualizations. However, it is apparent from the reviewed work that Type 2 processes (requiring working memory) can be recruited during the schema matching and inference processes.

Either Type 1 and/or 2

Knowledge-driven processing

In a review of map-reading cognition, Lobben (2004) states, “…research should focus not only on the needs of the map reader but also on their map-reading skills and abilities” (p. 271). In line with this statement, the final cross-domain finding is that the effects of knowledge can interact with the affordances or biases inherent in the visualization method. Knowledge may be held temporarily in working memory (Type 2), stored in long-term memory but effortfully applied (Type 2), or stored in long-term memory and automatically applied (Type 1). As a result, knowledge-driven processing can involve either Type 1 or Type 2 processes.

Both short- and long-term knowledge can influence visualization affordances and biases. However, it is difficult to distinguish whether Type 2 processing is using significant working memory capacity to temporarily hold knowledge or whether participants have stored the relevant knowledge in long-term memory and processing is more automatic. Complicating the issue, knowledge stored in long-term memory can influence decision making with visualizations via both Type 1 and 2 processing. For example, if you try to remember the Pythagorean Theorem, which you may have learned in high school or middle school, you may recall that a² + b² = c², where c represents the length of the hypotenuse and a and b represent the lengths of the other two sides of a triangle. Unless you use geometry regularly, you likely had to search long-term memory effortfully for the equation, which is a Type 2 process and requires significant working memory capacity. In contrast, if you are asked to recall your childhood phone number, the number might come to mind automatically with minimal working memory required (Type 1 processing).

In this section, we highlight cases where knowledge either influenced decision making with visualizations or was present but did not influence decisions (see Table 6 for the type of knowledge examined in each study). These studies are organized based on how much time the viewers had to incorporate the knowledge (i.e. short-term instructions and long-term individual differences in abilities and expertise), which may be indicative of where the knowledge is stored. However, many factors other than time influence the transfer of knowledge held in working memory into long-term memory. Therefore, each of the studies cited in this section could reflect Type 1 processing, Type 2 processing, or both.

Table 6.

Type of knowledge examined in each study

Knowledge | Studies
Short-term training, instructions | Boone et al. (in press); Shen et al. (2012)
Individual differences | Galesic and Garcia-Retamero (2011); Galesic et al. (2009); Keller et al. (2009); Okan et al. (2015); Okan, Garcia-Retamero, Galesic, and Cokely (2012); Okan, Garcia-Retamero, Cokely, and Maldonado (2012); Reyna et al. (2009); Rodríguez et al. (2013)
Semester-long course | Lee and Bednarz (2009)
Experts | Belia et al. (2005); Riveiro (2016); St. John et al. (2001)

One example of short-term knowledge being used to override a familiarity bias comes from work by Bailey, Carswell, Grant, and Basham (2007) (see also Shen, Carswell, Santhanam, & Bailey, 2012). In a complex geospatial task in which participants made judgments about terrorism threats, participants were more likely to select familiar map-like visualizations rather than ones that would be optimal for the task (see Fig. 16) (Bailey et al., 2007). Using the same task and visualizations, Shen et al. (2012) showed that users were more likely to choose an efficacious visualization when given training concerning the importance of cognitive fit and effective visualization techniques. In this case, viewers were able to use knowledge-driven processing to improve their performance. However, Joslyn and LeClerc (2013) found that when participants viewed temperature uncertainty, visualized as error bars around a mean temperature prediction, they incorrectly believed that the error bars represented high and low temperatures. Surprisingly, participants maintained this belief despite a key that detailed the correct way to interpret each temperature forecast (see also Boone et al., in press). The authors speculated that the error bars might have matched viewers’ mental schema for high- and low-temperature forecasts (stored in long-term memory) and that viewers incorrectly utilized the high-/low-temperature schema rather than incorporating new information from the key. Additionally, the authors propose that because the error bars were visually represented as discrete values, viewers may have had difficulty reimagining the error bars as points on a distribution, which they term a deterministic construal error (Joslyn & LeClerc, 2013). Deterministic construal visual-spatial biases may also be one of the sources of misunderstanding of the Cone of Uncertainty (Padilla, Ruginski et al., 2017; Ruginski et al., 2016). A notable difference between these studies and the work of Shen et al. (2012) is that Shen et al. (2012) used instructions to correct a familiarity bias, which is a cognitive bias originally documented in the decision-making literature that is not based on the visual elements in the display. In contrast, the biases in Joslyn and LeClerc (2013) were visual-spatial biases. This provides further evidence that visual-spatial biases may be a unique category of biases that warrant dedicated exploration, as they are harder to influence with knowledge-driven processing.

Fig. 16. Example of the different types of view orientations examined by Bailey et al. (2007). Participants selected one of these visualizations and then used their selection to make judgments, including identifying safe passageways, determining appropriate locations for firefighters, and identifying suspicious locations based on the height of buildings. The panels correspond to the conditions in the original study

Regarding longer-term knowledge, there is substantial evidence that individual differences in knowledge impact decision making with visualizations. For example, numerous studies document the benefit of visualizations for individuals with lower health literacy, graph literacy, and numeracy (Galesic & Garcia-Retamero, 2011; Galesic, Garcia-Retamero, & Gigerenzer, 2009; Keller, Siegrist, & Visschers, 2009; Okan, Galesic, & Garcia-Retamero, 2015; Okan, Garcia-Retamero, Cokely, & Maldonado, 2012; Okan, Garcia-Retamero, Galesic, & Cokely, 2012; Reyna, Nelson, Han, & Dieckmann, 2009; Rodríguez et al., 2013). Visual depictions of health data are particularly useful because health data often take the form of probabilities, which are unintuitive. Visualizations inherently illustrate probabilities (e.g. 10%) as natural frequencies (e.g. 10 out of 100), which are more intuitive (Hoffrage & Gigerenzer, 1998). Further, when natural frequencies are depicted visually (see example in Fig. 17), viewers can make perceptual comparisons rather than mathematical calculations. This dual benefit is likely the reason visualizations facilitate performance for individuals with lower health literacy, graph literacy, and numeracy.
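
To make the probability-to-natural-frequency idea concrete, the following minimal Python sketch (ours, not drawn from the reviewed studies) converts a probability into an icon-array-style display so that a risk comparison becomes perceptual rather than computational. The function name and text layout are illustrative assumptions rather than any published tool.

```python
# Minimal sketch: re-express a probability as a natural frequency and render it
# as a text icon array (hypothetical helper, for illustration only).

def icon_array(probability, n=100, per_row=10, hit="X", miss="."):
    """Render `probability` as round(probability * n) affected icons out of n."""
    affected = round(probability * n)
    icons = [hit] * affected + [miss] * (n - affected)
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, n, per_row)]
    return "\n".join(rows)

# A 10% risk becomes "10 out of 100" affected icons:
print("Risk without treatment (10 out of 100):")
print(icon_array(0.10))
print("\nRisk with treatment (5 out of 100):")
print(icon_array(0.05))
```

Comparing the two printed arrays side by side requires no arithmetic, which is the facilitation the reviewed studies attribute to icon arrays.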

Fig. 17. Example of stimuli used by Galesic et al. (2009) in a study demonstrating that natural frequency visualizations can help individuals overcome low numeracy. Participants completed three medical scenario tasks using visualizations similar to the one depicted here, in which they were asked about the effects of aspirin on risk of stroke or heart attack and about a hypothetical new drug. Redrawn from “Using icon arrays to communicate medical risks: overcoming low numeracy” by M. Galesic, R. Garcia-Retamero, and G. Gigerenzer, 2009, Health Psychology, 28(2), 210

These studies are good examples of how designers can create visualizations that capitalize on Type 1 processing to help viewers accurately make decisions with complex data, even when viewers lack relevant knowledge. Based on the reviewed work, we speculate that well-designed visualizations that utilize Type 1 processing to intuitively illustrate task-relevant relationships in the data may be particularly beneficial for individuals with lower numeracy and graph literacy, even for simple tasks. However, poorly designed visualizations that require superfluous mental transformations may be detrimental to the same individuals. Further, individual differences in expertise, such as graph literacy, which have received more attention in healthcare communication (Galesic & Garcia-Retamero, 2011; Nayak et al., 2016; Okan et al., 2015; Okan, Garcia-Retamero, Cokely, & Maldonado, 2012; Okan, Garcia-Retamero, Galesic, & Cokely, 2012; Rodríguez et al., 2013), may play a large role in how viewers complete even simple tasks in other domains such as map-reading (Kinkeldey et al., 2017).

Less consistent are findings on how more experienced users incorporate knowledge acquired over longer periods of time when making decisions with visualizations. Some research finds that students’ decision-making and spatial abilities improved during a semester-long course on Geographic Information Science (GIS) (Lee & Bednarz, 2009). Other work finds that experts perform the same as novices (Riveiro, 2016), that experts can exhibit visual-spatial biases (St. John et al., 2001), and that experts can perform more poorly than expected in their domain of visual expertise (Belia et al., 2005). This inconsistency may be due in part to the difficulty of identifying when, and whether, more experienced viewers are automatically applying their knowledge or employing working memory. For example, it is unclear whether the students in the GIS course documented by Lee and Bednarz (2009) developed automatic responses (Type 1) or learned the information and used working memory capacity to apply their training (Type 2).

Cheong et al. (2016) offer one way to gauge how performance may change when one is forced to use Type 1 processing, but then allowed to use Type 2 processing. In a wildfire task using multiple depictions of uncertainty (see Fig. 18), Cheong et al. (2016) found that the type of uncertainty visualization mattered when participants had to make fast Type 1 decisions (5 s) about evacuating from a wildfire. But when given sufficient time to make Type 2 decisions (30 s), participants were not influenced by the visualization technique (see also Wilkening & Fabrikant, 2011).

Fig. 18. Example of multiple uncertainty visualization techniques for wildfire risk by Cheong et al. (2016). Participants were presented with a house location (indicated by an X) and asked if they would stay or leave based on one of the wildfire hazard communication techniques shown here. The panels correspond to the conditions in the original study

Interesting future work could limit experts’ time to complete a task (forcing Type 1 processing) and then determine whether their judgments change when given more time to complete the task (allowing for Type 2 processing). To test this possibility further, a dual-task paradigm could be used in which experts’ working memory capacity is depleted by a difficult secondary task that also requires working memory capacity. Examples of secondary tasks in a dual-task paradigm include span tasks, in which participants remember or follow patterns of information while completing the primary task and then report the remembered or relevant information from the pattern (for a full description of the theoretical bases for a dual-task paradigm, see Pashler, 1994). To our knowledge, only one study has used a dual-task paradigm to evaluate the cognitive load of a visualization decision-making task (Bandlow et al., 2011). However, a growing body of research in other domains, such as wayfinding and spatial cognition, demonstrates the utility of dual-task paradigms for understanding the types of working memory that users employ for a task (Caffò, Picucci, Di Masi, & Bosco, 2011; Meilinger, Knauff, & Bülthoff, 2008; Ratliff & Newcombe, 2005; Trueswell & Papafragou, 2010).
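
As a concrete illustration of this proposal, the sketch below outlines one possible dual-task trial structure with a spatial span secondary task. It is a hypothetical Python simulation we provide for clarity, not code from any reviewed study: the response times, accuracy rates, and recall probabilities are made-up placeholders standing in for data that would be collected with experiment-presentation software.

```python
# Hypothetical sketch of a dual-task trial structure with a spatial span
# secondary task. All stimuli and responses are simulated for illustration.

import random

def generate_span_pattern(length=4):
    """Secondary task: a sequence of arrow orientations to hold in memory."""
    return [random.choice([0, 45, 90, 135, 180, 225, 270, 315])
            for _ in range(length)]

def simulated_primary_decision(memory_load):
    """Simulate response time (s) and accuracy on the primary visualization
    decision; for illustration we assume load slows and degrades performance
    only if the task recruits working memory."""
    rt = random.gauss(2.0, 0.3) + (0.8 if memory_load else 0.0)
    correct = random.random() < (0.80 if memory_load else 0.92)
    return rt, correct

def run_trial(dual_task):
    pattern = generate_span_pattern() if dual_task else None  # encode pattern first
    rt, correct = simulated_primary_decision(memory_load=dual_task)
    # In a dual-task trial the participant would now recall the pattern;
    # imperfect recall under load is simulated here.
    span_correct = (random.random() < 0.85) if dual_task else None
    return {"rt": rt, "correct": correct, "span_correct": span_correct}

baseline_trials = [run_trial(dual_task=False) for _ in range(20)]
dual_task_trials = [run_trial(dual_task=True) for _ in range(20)]
```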

Span tasks are examples of spatial or verbal secondary tasks; they include remembering the orientations of arrows (which taxes visual-spatial memory; Shah & Miyake, 1996) or counting backward by threes (which taxes verbal processing and short-term memory; Castro, Strayer, Matzke, & Heathcote, 2018). One should expect more interference if the primary and secondary tasks recruit the same processes (e.g. a visual-spatial primary task paired with a visual-spatial memory span task). An example of such an experimental design is illustrated in Fig. 19. In the dual-task trial illustrated in Fig. 19, if participants’ responses are as fast and accurate as in the baseline trial, then participants are likely not using significant amounts of working memory capacity for that task. If the task does require significant working memory capacity, then the inclusion of the secondary task should increase the time taken to complete the primary task and potentially produce errors in both the secondary and primary tasks. In visualization decision-making research, this is an open area of exploration for researchers and designers who are interested in understanding how working memory capacity and a dual-process account of decision making apply to their visualizations and application domains.
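
One simple way to operationalize this comparison is to compute dual-task costs, i.e. the slowing and accuracy drop relative to baseline. The sketch below uses made-up numbers purely for illustration; a real analysis would use per-trial data and appropriate inferential statistics rather than raw mean differences.

```python
# Illustrative computation of dual-task costs from hypothetical data.
from statistics import mean

baseline_rt  = [2.1, 1.9, 2.3, 2.0, 2.2]   # response times (s), primary task alone
dual_rt      = [2.9, 3.1, 2.8, 3.3, 3.0]   # response times (s), with secondary span task
baseline_acc = [1, 1, 1, 0, 1]             # 1 = correct
dual_acc     = [1, 0, 1, 0, 1]

rt_cost = mean(dual_rt) - mean(baseline_rt)     # slowing under memory load
acc_cost = mean(baseline_acc) - mean(dual_acc)  # accuracy drop under memory load

print(f"Dual-task RT cost: {rt_cost:.2f} s")
print(f"Dual-task accuracy cost: {acc_cost:.2f}")

# Costs near zero suggest the primary task leans on Type 1 processing;
# sizeable costs suggest it recruits working memory (Type 2 processing).
```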

Fig. 19. A diagram of a dual-tasking experiment using the same task as in Fig. 5. Responses resulting from Type 1 and Type 2 processing are illustrated. The dual-task trial illustrates how to place additional load on working memory capacity by having the participant perform a demanding secondary task. The impact of the secondary task is illustrated for both time and accuracy. Long-term memory can influence all components and processes in the model, either via pre-attentive processes or by conscious application of knowledge

In sum, this section documents cases where knowledge-driven processing does and does not influence decision making with visualizations. Notably, we describe numerous studies in which well-designed visualizations (capitalizing on Type 1 processing) focus viewers’ attention on task-relevant relationships in the data, which improves decision accuracy for individuals with less developed health literacy, graph literacy, and numeracy. However, the current work does not test how knowledge-driven processing maps onto the dual-process model of decision making. Knowledge may be held temporarily in working memory (Type 2), held in long-term memory but effortfully applied (Type 2), or held in long-term memory and automatically applied (Type 1). More work is needed to understand whether a dual-process account of decision making accurately describes the influence of knowledge-driven processing on decision making with visualizations. Finally, we detailed an example of a dual-task paradigm as one way to evaluate whether viewers are employing Type 1 processing.

Review summary

Throughout this review, we have provided significant direct and indirect evidence that a dual-process account of decision making effectively describes prior findings from numerous domains interested in visualization decision making. The reviewed work provides support for specific processes in our proposed model, including the influences of working memory, bottom-up attention, schema matching, inference processes, and decision making. Further, we identified key commonalities in the reviewed work relating to Type 1 and Type 2 processing, which we added to our proposed visualization decision-making model. The first is that, via Type 1 processing, visualizations serve to direct participants’ bottom-up attention to specific information, which can be either beneficial or detrimental for decision making (Fabrikant et al., 2010; Fagerlin et al., 2005; Hegarty et al., 2010; Hegarty et al., 2016; Padilla, Ruginski et al., 2017; Ruginski et al., 2016; Schirillo & Stone, 2005; Stone et al., 1997; Stone et al., 2003; Waters et al., 2007). Consistent with assertions from cognitive science and scientific visualization (Munzner, 2014), we propose that visualization designers should identify the critical information needed for a task and use a visual encoding technique that directs participants’ attention to this information. We encourage visualization designers who are interested in determining which elements of their visualizations will likely attract viewers’ bottom-up attention to consult the Itti et al. (1998) saliency model, which has been validated with eye-tracking measures (for implementation of this model along with Matlab code see Padilla, Ruginski et al., 2017). If deliberate effort is not made to capitalize on Type 1 processing by focusing the viewer’s attention on task-relevant information, then the viewer will likely focus on distractors via Type 1 processing, resulting in poor decision outcomes.
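
For readers who want a quick, code-level intuition for what a saliency map captures, the sketch below computes a crude center-surround saliency map in Python. It is a simplified stand-in written for illustration, not the full Itti et al. (1998) model or the cited Matlab implementation: it uses only an intensity channel, assumed Gaussian scales, and a hypothetical image file name.

```python
# Simplified, intensity-only center-surround saliency sketch inspired by the
# Itti et al. (1998) model (the full model also uses color and orientation
# channels and a different normalization scheme). Requires numpy, scipy, imageio.

import numpy as np
import imageio.v3 as iio
from scipy.ndimage import gaussian_filter

def simple_saliency(image_path):
    img = iio.imread(image_path).astype(float)
    # Collapse RGB(A) images to a grayscale intensity channel
    intensity = img[..., :3].mean(axis=2) if img.ndim == 3 else img
    saliency = np.zeros_like(intensity)
    # Center-surround differences: fine (center) minus coarse (surround) scales
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        center = gaussian_filter(intensity, center_sigma)
        surround = gaussian_filter(intensity, surround_sigma)
        saliency += np.abs(center - surround)
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()   # normalize to [0, 1]
    return saliency

# Hypothetical usage: high values mark regions likely to draw bottom-up attention.
# saliency_map = simple_saliency("forecast_visualization.png")
```

Comparing such a map against the task-relevant regions of a design offers an inexpensive first check before committing to eye-tracking studies.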

A second cross-domain finding is the introduction of a new concept, visual-spatial biases, which can also be both beneficial and detrimental to decision making. We define this term as a bias that elicits heuristics and is a direct result of the visual encoding technique. We provide numerous examples of visual-spatial biases across domains. The novel utility of identifying visual-spatial biases is that they potentially arise early in the decision-making process, during bottom-up attention, thus influencing the entire downstream process, whereas standard heuristics do not exclusively occur at the first stage of decision making. This possibly accounts for the fact that visual-spatial biases have proven difficult to overcome (Belia et al., 2005; Grounds et al., 2017; Joslyn & LeClerc, 2013; Liu et al., 2016; McKenzie et al., 2016; Newman & Scholl, 2012; Padilla, Ruginski et al., 2017; Ruginski et al., 2016). Work by Tversky (2011) presents a taxonomy of visual-spatial communications that are intrinsically related to thought, which are likely the bases for visual-spatial biases.

We have also revealed cross-domain findings involving Type 2 processing, which suggest that if there is a mismatch between the visualization and a decision-making component, working memory is used to perform corrective mental transformations. In scenarios where the visualization is aligned with the mental schema and task, performance is fast and accurate (Joslyn & LeClerc, 2013). The types of mismatches observed in the reviewed literature are likely both domain-specific and domain-general. For example, situations where viewers employ the correct graph schema for the visualization, but the graph schema does not align with the task, are likely domain-specific (Dennis & Carte, 1998; Frownfelter-Lohrke, 1998; Gattis & Holyoak, 1996; Huang et al., 2006; Joslyn & LeClerc, 2013; Smelcer & Carmel, 1997; Tversky et al., 2012). However, other work demonstrates cases where viewers employ a graph schema that does not match the visualization, which is likely domain-general (e.g. Feeney et al., 2000; Gattis & Holyoak, 1996; Tversky et al., 2012). In these cases, viewers may accidentally use the wrong graph schema because it appears to match the visualization, or they may not have learned a relevant schema. The likelihood of viewers making attribution errors because they do not know the corresponding schema increases when the visualization is less common, as with uncertainty visualizations. When there is a mismatch, additional working memory is required, resulting in increased time taken to complete the task and, in some cases, errors (e.g. Joslyn & LeClerc, 2013; McKenzie et al., 2016; Padilla, Ruginski et al., 2017). Based on these findings, we recommend that visualization designers aim to create visualizations that most closely align with a viewer’s mental schema and task. However, additional empirical research is required to understand the nature of the alignment processes, including the exact method by which we mentally select a schema and the classifications of tasks that match visualizations.

The final cross-domain finding is that knowledge-driven processes can interact with, or override, the effects of visualization methods. We find that both short-term training and instructions (Shen et al., 2012) and longer-term knowledge acquisition (Lee & Bednarz, 2009) can influence decision making with visualizations. However, there are also examples of knowledge having little influence on decisions, even when prior knowledge could be used to improve performance (Belia et al., 2005; Joslyn & LeClerc, 2013; Riveiro, 2016; St. John et al., 2001). We point out that prior knowledge seems to have more of an effect on non-visual-spatial biases, such as a familiarity bias (Bailey et al., 2007; Shen et al., 2012), which suggests that visual-spatial biases may be closely related to bottom-up attention. Further, it is unclear from the reviewed work when knowledge switches from being applied via working memory capacity to being applied automatically. We argue that Type 1 and Type 2 processing have unique advantages and disadvantages for visualization decision making. Therefore, it is valuable to understand which process users are applying for specific tasks in order to make visualizations that elicit optimal performance. In the case of experts and long-term knowledge, we propose that one interesting way to test whether users are utilizing significant working memory capacity is to employ a dual-task paradigm (illustrated in Fig. 19). A dual-task paradigm can be used to evaluate the amount of working memory required and to compare the relative working memory demands of competing visualization techniques.

We have also proposed a variety of practical recommendations for visualization designers based on the empirical findings and our cognitive framework. Below is a summary list of our recommendations, along with references to the relevant sections:

  • Identify the critical information needed for a task and use a visual encoding technique that directs participants’ attention to this information (“Bottom-up attention” section);

  • To determine which elements in a visualization will likely attract viewers’ bottom-up attention, try employing a saliency algorithm (see Padilla, Quinan, et al., 2017) (see “Bottom-up attention”);

  • Aim to create visualizations that most closely align with a viewer’s mental schema and task demands (see “Visual-Spatial Biases”);

  • Work to reduce the number of transformations required in the decision-making process (see "Cognitive fit");

  • To understand whether a viewer is using Type 1 or 2 processing, employ a dual-task paradigm (see Fig. 19);

  • Consider evaluating the impact of individual differences such as graph literacy and numeracy on visualization decision making.

Conclusions

We use visual information to inform many important decisions. To develop visualizations that account for real-life decision making, we must understand how and why we come to conclusions with visual information. We propose a dual-process cognitive framework, expanding on visualization comprehension theory and supported by empirical studies, that describes the process of decision making with visualizations. We offer practical recommendations for visualization designers that take into account human decision-making processes. Finally, we propose a new avenue of research focused on the influence of visual-spatial biases on decision making.

Acknowledgments

Funding

This research is based upon work supported by the National Science Foundation under Grants 1212806, 1810498, and 1212577.

Availability of data and materials

No data were collected for this review.

Authors’ contributions

LMP is the primary author of this study; she was central to the development, writing, and conclusions of this work. SHC, MH, and JS contributed to the theoretical development and manuscript preparation. All authors read and approved the final manuscript.

Authors’ information

LMP is a Ph.D. student at the University of Utah in the Cognitive Neural Science department. LMP is a member of the Visual Perception and Spatial Cognition Research Group directed by Sarah Creem-Regehr, Ph.D., Jeanine Stefanucci, Ph.D., and William Thompson, Ph.D. Her work focuses on graphical cognition, decision making with visualizations, and visual perception. She works on large interdisciplinary projects with visualization scientists and anthropologists.

SHC is a Professor in the Psychology Department of the University of Utah. She received her MA and Ph.D. in Psychology from the University of Virginia. Her research serves joint goals of developing theories of perception-action processing mechanisms and applying these theories to relevant real-world problems in order to facilitate observers’ understanding of their spatial environments. In particular, her interests are in space perception, spatial cognition, embodied cognition, and virtual environments. She co-authored the book Visual Perception from a Computer Graphics Perspective; previously, she was Associate Editor of Psychonomic Bulletin & Review and Journal of Experimental Psychology: Human Perception and Performance.

MH is a Professor in the Department of Psychological & Brain Sciences at the University of California, Santa Barbara. She received her Ph.D. in Psychology from Carnegie Mellon University. Her research is concerned with spatial cognition, broadly defined, and includes research on small-scale spatial abilities (e.g. mental rotation and perspective taking), large-scale spatial abilities involved in navigation, comprehension of graphics, and the role of spatial cognition in STEM learning. She served as chair of the governing board of the Cognitive Science Society and is associate editor of Topics in Cognitive Science and past Associate Editor of Journal of Experimental Psychology: Applied.

JS is an Associate Professor in the Psychology Department at the University of Utah. She received her M.A. and Ph.D. in Psychology from the University of Virginia. Her research focuses on better understanding if a person’s bodily states, whether emotional, physiological, or physical, affects their spatial perception and cognition. She conducts this research in natural settings (outdoor or indoor) and in virtual environments. This work is inherently interdisciplinary given it spans research on emotion, health, spatial perception and cognition, and virtual environments. She is on the editorial boards for the Journal of Experimental Psychology: General and Virtual Environments: Frontiers in Robotics and AI. She also co-authored the book Visual Perception from a Computer Graphics Perspective.

Ethics approval and consent to participate

The research reported in this paper was conducted in adherence to the Declaration of Helsinki and received IRB approval from the University of Utah, #IRB_00057678. No human subject data were collected for this work; therefore, no consent to participate was acquired.

Consent for publication

Consent to publish was not required for this review.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Footnotes

1

Dual-process theory will be described in greater detail in the next section.

2

It should be noted that in some cases the activation of Type 2 processing should improve decision accuracy. More research is needed that examines cases where Type 2 could improve decision performance with visualizations.

The original version of this article has been revised. Table 2 was corrected to be presented appropriately.

Change history

9/2/2018

The original article (Padilla et al., 2018) contained a formatting error in Table 2; this has now been corrected with the appropriate boxes marked clearly.

Contributor Information

Lace M. Padilla, Email: Lace.m.k.padilla@gmail.com

Sarah H. Creem-Regehr, Email: sarah.creem@psych.utah.edu

Mary Hegarty, Email: mary.hegarty@psych.ucsb.edu.

Jeanine K. Stefanucci, Email: jeanine.stefanucci@psych.utah.edu

References

  1. Ancker JS, Senathirajah Y, Kukafka R, Starren JB. Design features of graphs in health risk communication: A systematic review. Journal of the American Medical Informatics Association. 2006;13(6):608–618. doi: 10.1197/jamia.M2115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Baddeley AD, Hitch G. Working memory. Psychology of Learning and Motivation. 1974;8:47–89. doi: 10.1016/S0079-7421(08)60452-1. [DOI] [Google Scholar]
  3. Bailey, K., Carswell, C. M., Grant, R., & Basham, L. (2007). Geospatial perspective-taking: how well do decision makers choose their views? ​In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 51, No. 18, pp. 1246-1248). Los Angeles: SAGE Publications.
  4. Balleine BW. The neural basis of choice and decision making. Journal of Neuroscience. 2007;27(31):8159–8160. doi: 10.1523/JNEUROSCI.1939-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bandlow A, Matzen LE, Cole KS, Dornburg CC, Geiseler CJ, Greenfield JA, et al. HCI International 2011–Posters’ Extended Abstracts. 2011. Evaluating Information Visualizations with Working Memory Metrics; pp. 265–269. [Google Scholar]
  6. Belia S, Fidler F, Williams J, Cumming G. Researchers misunderstand confidence intervals and standard error bars. Psychological Methods. 2005;10(4):389. doi: 10.1037/1082-989X.10.4.389. [DOI] [PubMed] [Google Scholar]
  7. Bertin, J. (1983). Semiology of graphics: Diagrams, networks, maps. ​Madison: University of Wisconsin Press.
  8. Boone, A., Gunalp, P., & Hegarty, M. (in press). Explicit versus Actionable Knowledge: The Influence of Explaining Graphical Conventions on Interpretation of Hurricane Forecast Visualizations. Journal of Experimental Psychology: Applied. [DOI] [PubMed]
  9. Brügger A, Fabrikant SI, Çöltekin A. An empirical evaluation of three elevation change symbolization methods along routes in bicycle maps. Cartography and Geographic Information Science. 2017;44(5):436–451. doi: 10.1080/15230406.2016.1193766. [DOI] [Google Scholar]
  10. Caffò AO, Picucci L, Di Masi MN, Bosco A. Working memory: capacity, developments and improvement techniques. Hauppage: Nova Science Publishers; 2011. Working memory components and virtual reorientation: A dual-task study; pp. 249–266. [Google Scholar]
  11. Card, S. K., Mackinlay, J. D., & Shneiderman, B. (1999). Readings in information visualization: using vision to think.  San Francisco: Morgan Kaufmann Publishers Inc.
  12. Castro, S. C., Strayer, D. L., Matzke, D., & Heathcote, A. (2018). Cognitive Workload Measurement and Modeling Under Divided Attention. Journal of Experimental Psychology: General. [DOI] [PubMed]
  13. Cheong L, Bleisch S, Kealy A, Tolhurst K, Wilkening T, Duckham M. Evaluating the impact of visualization of wildfire hazard upon decision-making under uncertainty. International Journal of Geographical Information Science. 2016;30(7):1377–1404. doi: 10.1080/13658816.2015.1131829. [DOI] [Google Scholar]
  14. Connor CE, Egeth HE, Yantis S. Visual attention: Bottom-up versus top-down. Current Biology. 2004;14(19):R850–R852. doi: 10.1016/j.cub.2004.09.041. [DOI] [PubMed] [Google Scholar]
  15. Cowan N. The many faces of working memory and short-term storage. Psychonomic Bulletin & Review. 2017;24(4):1158–1170. doi: 10.3758/s13423-016-1191-6. [DOI] [PubMed] [Google Scholar]
  16. Dennis AR, Carte TA. Using geographical information systems for decision making: Extending cognitive fit theory to map-based presentations. Information Systems Research. 1998;9(2):194–203. doi: 10.1287/isre.9.2.194. [DOI] [Google Scholar]
  17. Engel AK, Fries P, Singer W. Dynamic predictions: Oscillations and synchrony in top–down processing. Nature Reviews Neuroscience. 2001;2(10):704–716. doi: 10.1038/35094565. [DOI] [PubMed] [Google Scholar]
  18. Engle, R. W., Kane, M. J., & Tuholski, S. W. (1999). Individual differences in working memory capacity and what they tell us about controlled attention, general fluid intelligence, and functions of the prefrontal cortex. ​ In A. Miyake & P. Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 102-134). New York: Cambridge University Press.
  19. Epstein S, Pacini R, Denes-Raj V, Heier H. Individual differences in intuitive–experiential and analytical–rational thinking styles. Journal of Personality and Social Psychology. 1996;71(2):390. doi: 10.1037/0022-3514.71.2.390. [DOI] [PubMed] [Google Scholar]
  20. Evans JSB. Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology. 2008;59:255–278. doi: 10.1146/annurev.psych.59.103006.093629. [DOI] [PubMed] [Google Scholar]
  21. Evans JSB, Stanovich KE. Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science. 2013;8(3):223–241. doi: 10.1177/1745691612460685. [DOI] [PubMed] [Google Scholar]
  22. Fabrikant SI, Hespanha SR, Hegarty M. Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Annals of the Association of American Geographers. 2010;100(1):13–29. doi: 10.1080/00045600903362378. [DOI] [Google Scholar]
  23. Fabrikant Sara Irina, Skupin André. Exploring Geovisualization. 2005. Cognitively Plausible Information Visualization; pp. 667–690. [Google Scholar]
  24. Fagerlin A, Wang C, Ubel PA. Reducing the influence of anecdotal reasoning on people’s health care decisions: Is a picture worth a thousand statistics? Medical Decision Making. 2005;25(4):398–405. doi: 10.1177/0272989X05278931. [DOI] [PubMed] [Google Scholar]
  25. Feeney Aidan, Hola Ala K. W., Liversedge Simon P., Findlay John M., Metcalf Robert. Theory and Application of Diagrams. Berlin, Heidelberg: Springer Berlin Heidelberg; 2000. How People Extract Information from Graphs: Evidence from a Sentence-Graph Verification Paradigm; pp. 149–161. [Google Scholar]
  26. Frownfelter-Lohrke C. The effects of differing information presentations of general purpose financial statements on users’ decisions. Journal of Information Systems. 1998;12(2):99–107. [Google Scholar]
  27. Galesic M, Garcia-Retamero R. Graph literacy: A cross-cultural comparison. Medical Decision Making. 2011;31(3):444–457. doi: 10.1177/0272989X10373805. [DOI] [PubMed] [Google Scholar]
  28. Galesic M, Garcia-Retamero R, Gigerenzer G. Using icon arrays to communicate medical risks: Overcoming low numeracy. Health Psychology. 2009;28(2):210. doi: 10.1037/a0014474. [DOI] [PubMed] [Google Scholar]
  29. Garcia-Retamero, R., & Galesic, M. (2009). Trust in healthcare. In Kattan (Ed.), Encyclopedia of medical decision making, (pp. 1153–1155). Thousand Oaks: SAGE Publications.
  30. Gattis M, Holyoak KJ. Mapping conceptual to spatial relations in visual reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1996;22(1):231. doi: 10.1037//0278-7393.22.1.231. [DOI] [PubMed] [Google Scholar]
  31. Gigerenzer G, Gaissmaier W. Heuristic decision making. Annual Review of Psychology. 2011;62:451–482. doi: 10.1146/annurev-psych-120709-145346. [DOI] [PubMed] [Google Scholar]
  32. Gigerenzer, G., Todd, P. M., & ABC Research Group (2000). Simple Heuristics That Make Us Smart. ​Oxford: Oxford University Press.
  33. Grounds Margaret A., Joslyn Susan, Otsuka Kyoko. Probabilistic Interval Forecasts: An Individual Differences Approach to Understanding Forecast Communication. Advances in Meteorology. 2017;2017:1–18. doi: 10.1155/2017/3932565. [DOI] [Google Scholar]
  34. Harel, J. (2015, July 24, 2012). A Saliency Implementation in MATLAB. Retrieved from http://www.vision.caltech.edu/~harel/share/gbvs.php
  35. Hegarty M. The cognitive science of visual-spatial displays: Implications for design. Topics in Cognitive Science. 2011;3(3):446–474. doi: 10.1111/j.1756-8765.2011.01150.x. [DOI] [PubMed] [Google Scholar]
  36. Hegarty M, Canham MS, Fabrikant SI. Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2010;36(1):37. doi: 10.1037/a0017683. [DOI] [PubMed] [Google Scholar]
  37. Hegarty M, Friedman A, Boone AP, Barrett TJ. Where are you? The effect of uncertainty and its visual representation on location judgments in GPS-like displays. Journal of Experimental Psychology: Applied. 2016;22(4):381. doi: 10.1037/xap0000103. [DOI] [PubMed] [Google Scholar]
  38. Hegarty M, Smallman HS, Stull AT. Choosing and using geospatial displays: Effects of design on performance and metacognition. Journal of Experimental Psychology: Applied. 2012;18(1):1. doi: 10.1037/a0026625. [DOI] [PubMed] [Google Scholar]
  39. Hoffrage U, Gigerenzer G. Using natural frequencies to improve diagnostic inferences. Academic Medicine. 1998;73(5):538–540. doi: 10.1097/00001888-199805000-00024. [DOI] [PubMed] [Google Scholar]
  40. Hollands JG, Spence I. Judgments of change and proportion in graphical perception. Human Factors: The Journal of the Human Factors and Ergonomics Society. 1992;34(3):313–334. doi: 10.1177/001872089203400306. [DOI] [PubMed] [Google Scholar]
  41. Huang Z, Chen H, Guo F, Xu JJ, Wu S, Chen W-H. Expertise visualization: An implementation and study based on cognitive fit theory. Decision Support Systems. 2006;42(3):1539–1557. doi: 10.1016/j.dss.2006.01.006. [DOI] [Google Scholar]
  42. Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1998;20(11):1254–1259. doi: 10.1109/34.730558. [DOI] [Google Scholar]
  43. Joslyn S, LeClerc J. Decisions with uncertainty: The glass half full. Current Directions in Psychological Science. 2013;22(4):308–315. doi: 10.1177/0963721413481473. [DOI] [Google Scholar]
  44. Kahneman, D. (2011). Thinking, fast and slow. (Vol. 1). New York: Farrar, Straus and Giroux.
  45. Kahneman D, Frederick S. Heuristics and biases: The psychology of intuitive judgment. 2002. Representativeness revisited: Attribute substitution in intuitive judgment; p. 49. [Google Scholar]
  46. Kahneman D, Tversky A. Judgment under Uncertainty: Heuristics and Biases. 1. Cambridge; NY: Cambridge University Press; 1982. [DOI] [PubMed] [Google Scholar]
  47. Kane MJ, Bleckley MK, Conway ARA, Engle RW. A controlled-attention view of working-memory capacity. Journal of Experimental Psychology: General. 2001;130(2):169. doi: 10.1037/0096-3445.130.2.169. [DOI] [PubMed] [Google Scholar]
  48. Keehner M, Mayberry L, Fischer MH. Different clues from different views: The role of image format in public perceptions of neuroimaging results. Psychonomic Bulletin & Review. 2011;18(2):422–428. doi: 10.3758/s13423-010-0048-7. [DOI] [PubMed] [Google Scholar]
  49. Keller C, Siegrist M, Visschers V. Effect of risk ladder format on risk perception in high-and low-numerate individuals. Risk Analysis. 2009;29(9):1255–1264. doi: 10.1111/j.1539-6924.2009.01261.x. [DOI] [PubMed] [Google Scholar]
  50. Keren G, Schul Y. Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science. 2009;4(6):533–550. doi: 10.1111/j.1745-6924.2009.01164.x. [DOI] [PubMed] [Google Scholar]
  51. Kinkeldey C, MacEachren AM, Riveiro M, Schiewe J. Evaluating the effect of visually represented geodata uncertainty on decision-making: Systematic review, lessons learned, and recommendations. Cartography and Geographic Information Science. 2017;44(1):1–21. doi: 10.1080/15230406.2015.1089792. [DOI] [Google Scholar]
  52. Kinkeldey C, MacEachren AM, Schiewe J. How to assess visual communication of uncertainty? A systematic review of geospatial uncertainty visualisation user studies. The Cartographic Journal. 2014;51(4):372–386. doi: 10.1179/1743277414Y.0000000099. [DOI] [Google Scholar]
  53. Kriz S, Hegarty M. Top-down and bottom-up influences on learning from animations. International Journal of Human-Computer Studies. 2007;65(11):911–930. doi: 10.1016/j.ijhcs.2007.06.005. [DOI] [Google Scholar]
  54. Kunz, V. (2004). Rational choice. Frankfurt: Campus Verlag.
  55. Lallanilla, M. (2014, April 24). Misleading Gun-Death Chart Draws Fire. Retrieved from https://www.livescience.com/45083-misleading-gun-death-chart.html
  56. Lee J, Bednarz R. Effect of GIS learning on spatial thinking. Journal of Geography in Higher Education. 2009;33(2):183–198. doi: 10.1080/03098260802276714. [DOI] [Google Scholar]
  57. Liu Le, Boone Alexander P., Ruginski Ian T., Padilla Lace, Hegarty Mary, Creem-Regehr Sarah H., Thompson William B., Yuksel Cem, House Donald H. Uncertainty Visualization by Representative Sampling from Prediction Ensembles. IEEE Transactions on Visualization and Computer Graphics. 2017;23(9):2165–2178. doi: 10.1109/TVCG.2016.2607204. [DOI] [PubMed] [Google Scholar]
  58. Lobben AK. Tasks, strategies, and cognitive processes associated with navigational map reading: A review perspective. The Professional Geographer. 2004;56(2):270–281. [Google Scholar]
  59. Lohse GL. A cognitive model for understanding graphical perception. Human Computer Interaction. 1993;8(4):353–388. doi: 10.1207/s15327051hci0804_3. [DOI] [Google Scholar]
  60. Lohse GL. The role of working memory on graphical information processing. Behaviour & Information Technology. 1997;16(6):297–308. doi: 10.1080/014492997119707. [DOI] [Google Scholar]
  61. Marewski JN, Gigerenzer G. Heuristic decision making in medicine. Dialogues in Clinical Neuroscience. 2012;14(1):77–89. doi: 10.31887/DCNS.2012.14.1/jmarewski. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. McCabe DP, Castel AD. Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition. 2008;107(1):343–352. doi: 10.1016/j.cognition.2007.07.017. [DOI] [PubMed] [Google Scholar]
  63. McKenzie G, Hegarty M, Barrett T, Goodchild M. Assessing the effectiveness of different visualizations for judgments of positional uncertainty. International Journal of Geographical Information Science. 2016;30(2):221–239. doi: 10.1080/13658816.2015.1082566. [DOI] [Google Scholar]
  64. Mechelli A, Price CJ, Friston KJ, Ishai A. Where bottom-up meets top-down: Neuronal interactions during perception and imagery. Cerebral Cortex. 2004;14(11):1256–1265. doi: 10.1093/cercor/bhh087. [DOI] [PubMed] [Google Scholar]
  65. Meilinger T, Knauff M, Bülthoff HH. Working memory in wayfinding—A dual task experiment in a virtual city. Cognitive Science. 2008;32(4):755–770. doi: 10.1080/03640210802067004. [DOI] [PubMed] [Google Scholar]
  66. Meyer J. Performance with tables and graphs: Effects of training and a visual search model. Ergonomics. 2000;43(11):1840–1865. doi: 10.1080/00140130050174509. [DOI] [PubMed] [Google Scholar]
  67. Munzner T. Visualization analysis and design. Boca Raton, FL: CRC Press; 2014. [Google Scholar]
  68. Nadav-Greenberg L, Joslyn SL, Taing MU. The effect of uncertainty visualizations on decision making in weather forecasting. Journal of Cognitive Engineering and Decision Making. 2008;2(1):24–47. doi: 10.1518/155534308X284354. [DOI] [Google Scholar]
  69. Nayak JG, Hartzler AL, Macleod LC, Izard JP, Dalkin BM, Gore JL. Relevance of graph literacy in the development of patient-centered communication tools. Patient Education and Counseling. 2016;99(3):448–454. doi: 10.1016/j.pec.2015.09.009. [DOI] [PubMed] [Google Scholar]
  70. Newman GE, Scholl BJ. Bar graphs depicting averages are perceptually misinterpreted: The within-the-bar bias. Psychonomic Bulletin & Review. 2012;19(4):601–607. doi: 10.3758/s13423-012-0247-5. [DOI] [PubMed] [Google Scholar]
  71. Okan, Y., Galesic, M., & Garcia-Retamero, R. (2015). How people with low and high graph literacy process health graphs: Evidence from eye-tracking. Journal of Behavioral Decision Making.
  72. Okan Y, Garcia-Retamero R, Cokely ET, Maldonado A. Individual differences in graph literacy: Overcoming denominator neglect in risk comprehension. Journal of Behavioral Decision Making. 2012;25(4):390–401. doi: 10.1002/bdm.751. [DOI] [Google Scholar]
  73. Okan Y, Garcia-Retamero R, Galesic M, Cokely ET. When higher bars are not larger quantities: On individual differences in the use of spatial information in graph comprehension. Spatial Cognition and Computation. 2012;12(2–3):195–218. doi: 10.1080/13875868.2012.659302. [DOI] [Google Scholar]
  74. Padilla L, Hansen G, Ruginski IT, Kramer HS, Thompson WB, Creem-Regehr SH. The influence of different graphical displays on nonexpert decision making under uncertainty. Journal of Experimental Psychology: Applied. 2015;21(1):37. doi: 10.1037/xap0000037. [DOI] [PubMed] [Google Scholar]
  75. Padilla L, Quinan PS, Meyer M, Creem-Regehr SH. Evaluating the impact of binning 2d scalar fields. IEEE Transactions on Visualization and Computer Graphics. 2017;23(1):431–440. doi: 10.1109/TVCG.2016.2599106. [DOI] [PubMed] [Google Scholar]
  76. Padilla L, Ruginski IT, Creem-Regehr SH. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data. Cognitive Research: Principles and Implications. 2017;2(1):40. doi: 10.1186/s41235-017-0076-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Pashler H. Dual-task interference in simple tasks: Data and theory. Psychological Bulletin. 1994;116(2):220. doi: 10.1037/0033-2909.116.2.220. [DOI] [PubMed] [Google Scholar]
  78. Patterson RE, Blaha LM, Grinstein GG, Liggett KK, Kaveney DE, Sheldon KC, et al. A human cognition framework for information visualization. Computers & Graphics. 2014;42:42–58. doi: 10.1016/j.cag.2014.03.002. [DOI] [Google Scholar]
  79. Pinker S. Artificial intelligence and the future of testing. 1990. A theory of graph comprehension; pp. 73–126. [Google Scholar]
  80. Ratliff, K. R., & Newcombe, N. S. (2005). Human spatial reorientation using dual task paradigms. Paper presented at the Proceedings of the Annual Cognitive Science Society.
  81. Reyna VF, Nelson WL, Han PK, Dieckmann NF. How numeracy influences risk comprehension and medical decision making. Psychological Bulletin. 2009;135(6):943. doi: 10.1037/a0017327. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Riveiro M. Visually supported reasoning under uncertain conditions: Effects of domain expertise on air traffic risk assessment. Spatial Cognition and Computation. 2016;16(2):133–153. doi: 10.1080/13875868.2015.1137576. [DOI] [Google Scholar]
  83. Rodríguez V, Andrade AD, García-Retamero R, Anam R, Rodríguez R, Lisigurski M, et al. Health literacy, numeracy, and graphical literacy among veterans in primary care and their effect on shared decision making and trust in physicians. Journal of Health Communication. 2013;18(sup1):273–289. doi: 10.1080/10810730.2013.829137. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Rosenholtz R, Jin Z. A computational form of the statistical saliency model for visual search. Journal of Vision. 2005;5(8):777–777. doi: 10.1167/5.8.777. [DOI] [Google Scholar]
  85. Ruginski IT, Boone AP, Padilla L, Liu L, Heydari N, Kramer HS, et al. Non-expert interpretations of hurricane forecast uncertainty visualizations. Spatial Cognition and Computation. 2016;16(2):154–172. doi: 10.1080/13875868.2015.1137577. [DOI] [Google Scholar]
  86. Sanchez CA, Wiley J. An examination of the seductive details effect in terms of working memory capacity. Memory & Cognition. 2006;34(2):344–355. doi: 10.3758/BF03193412. [DOI] [PubMed] [Google Scholar]
  87. Schirillo JA, Stone ER. The greater ability of graphical versus numerical displays to increase risk avoidance involves a common mechanism. Risk Analysis. 2005;25(3):555–566. doi: 10.1111/j.1539-6924.2005.00624.x. [DOI] [PubMed] [Google Scholar]
  88. Shah P, Freedman EG. Bar and line graph comprehension: An interaction of top-down and bottom-up processes. Topics in Cognitive Science. 2011;3(3):560–578. doi: 10.1111/j.1756-8765.2009.01066.x. [DOI] [PubMed] [Google Scholar]
  89. Shah, P., Freedman, E. G., & Vekiri, I. (2005). The Comprehension of Quantitative Information in Graphical Displays. In P. Shah (Ed.) & A. Miyake, The Cambridge Handbook of Visuospatial Thinking (pp. 426-476). New York: Cambridge University Press.
  90. Shah P, Miyake A. The separability of working memory resources for spatial thinking and language processing: An individual differences approach. Journal of Experimental Psychology: General. 1996;125(1):4. doi: 10.1037/0096-3445.125.1.4. [DOI] [PubMed] [Google Scholar]
  91. Shen M, Carswell M, Santhanam R, Bailey K. Emergency management information systems: Could decision makers be supported in choosing display formats? Decision Support Systems. 2012;52(2):318–330. doi: 10.1016/j.dss.2011.08.008. [DOI] [Google Scholar]
  92. Shipstead Z, Harrison TL, Engle RW. Working memory capacity and the scope and control of attention. Attention, Perception, & Psychophysics. 2015;77(6):1863–1880. doi: 10.3758/s13414-015-0899-0. [DOI] [PubMed] [Google Scholar]
  93. Simkin D, Hastie R. An information-processing analysis of graph perception. Journal of the American Statistical Association. 1987;82(398):454–465. doi: 10.1080/01621459.1987.10478448. [DOI] [Google Scholar]
  94. Sloman, S. A. (2002). Two systems of reasoning. ​ In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 379-396). New York: Cambridge University Press.
  95. Smelcer JB, Carmel E. The effectiveness of different representations for managerial problem solving: Comparing tables and maps. Decision Sciences. 1997;28(2):391. doi: 10.1111/j.1540-5915.1997.tb01316.x. [DOI] [Google Scholar]
  96. St. John M, Cowen MB, Smallman HS, Oonk HM. The use of 2D and 3D displays for shape-understanding versus relative-position tasks. Human Factors. 2001;43(1):79–98. doi: 10.1518/001872001775992534. [DOI] [PubMed] [Google Scholar]
  97. Stanovich KE. Who Is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates; 1999. [Google Scholar]
  98. Stenning K, Oberlander J. A cognitive theory of graphical and linguistic reasoning: Logic and implementation. Cognitive Science. 1995;19(1):97–140. doi: 10.1207/s15516709cog1901_3. [DOI] [Google Scholar]
  99. Stone ER, Sieck WR, Bull BE, Yates JF, Parks SC, Rush CJ. Foreground: Background salience: Explaining the effects of graphical displays on risk avoidance. Organizational Behavior and Human Decision Processes. 2003;90(1):19–36. doi: 10.1016/S0749-5978(03)00003-7. [DOI] [Google Scholar]
  100. Stone ER, Yates JF, Parker AM. Effects of numerical and graphical displays on professed risk-taking behavior. Journal of Experimental Psychology: Applied. 1997;3(4):243. [Google Scholar]
  101. Trueswell JC, Papafragou A. Perceiving and remembering events cross-linguistically: Evidence from dual-task paradigms. Journal of Memory and Language. 2010;63(1):64–82. doi: 10.1016/j.jml.2010.02.006. [DOI] [Google Scholar]
  102. Tversky, B. (2005). Visuospatial reasoning. In K. Holyoak and R. G. Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning, (pp. 209-240). Cambridge: Cambridge University Press.
  103. Tversky B. Visualizing thought. Topics in Cognitive Science. 2011;3(3):499–535. doi: 10.1111/j.1756-8765.2010.01113.x. [DOI] [PubMed] [Google Scholar]
  104. Tversky Barbara, Corter James E., Yu Lixiu, Mason David L., Nickerson Jeffrey V. Diagrammatic Representation and Inference. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. Representing Category and Continuum: Visualizing Thought; pp. 23–34. [Google Scholar]
  105. Vessey I, Galletta D. Cognitive fit: An empirical study of information acquisition. Information Systems Research. 1991;2(1):63–84. doi: 10.1287/isre.2.1.63. [DOI] [Google Scholar]
  106. Vessey I, Zhang P, Galletta D. Human-computer interaction and management information systems: Foundations. 2006. The theory of cognitive fit; pp. 141–183. [Google Scholar]
  107. Von Neumann J, Morgenstern O. Theory of Games and Economic Behavior (1944). Princeton, NJ: Princeton University Press; 1953. [Google Scholar]
  108. Vranas PBM. Gigerenzer's normative critique of Kahneman and Tversky. Cognition. 2000;76(3):179–193. doi: 10.1016/S0010-0277(99)00084-0. [DOI] [PubMed] [Google Scholar]
  109. Wainer H, Hambleton RK, Meara K. Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement. 1999;36(4):301–335. doi: 10.1111/j.1745-3984.1999.tb00559.x. [DOI] [Google Scholar]
  110. Waters EA, Weinstein ND, Colditz GA, Emmons K. Formats for improving risk communication in medical tradeoff decisions. Journal of Health Communication. 2006;11(2):167–182. doi: 10.1080/10810730500526695. [DOI] [PubMed] [Google Scholar]
  111. Waters EA, Weinstein ND, Colditz GA, Emmons KM. Reducing aversion to side effects in preventive medical treatment decisions. Journal of Experimental Psychology: Applied. 2007;13(1):11. doi: 10.1037/1076-898X.13.1.11. [DOI] [PubMed] [Google Scholar]
  112. Wilkening Jan, Fabrikant Sara Irina. Spatial Information Theory. Berlin, Heidelberg: Springer Berlin Heidelberg; 2011. How Do Decision Time and Realism Affect Map-Based Decision Making? pp. 1–19. [Google Scholar]
  113. Zhu B, Watts SA. Visualization of network concepts: The impact of working memory capacity differences. Information Systems Research. 2010;21(2):327–344. doi: 10.1287/isre.1080.0215. [DOI] [Google Scholar]
