Proceedings of the Association for Information Science and Technology. 2022 Oct 14;59(1):121–131. doi: 10.1002/pra2.610

User Perception and Eye Movement on A Pandemic Data Visualization Dashboard

Yu‐Wen Huang 1, Yu‐Ju Yang 2, Wei Jeng 1
PMCID: PMC9874901  PMID: 36714435

Abstract

This study utilized a two‐phase user experiment to explore people’s perceptual and cognitive states when interacting with a COVID‐19 dashboard to obtain outbreak information. Specifically, 27 participants were assigned to interact with the dashboard under different color arrangements, and performed image‐memory, search, and browse visualization tasks sequentially. We found that the participants expected to obtain both global pandemic trends and single region/date statuses from the dashboard, to help them grasp important information in the shortest possible time. They also allocated their attention differently across the dashboard’s content areas to match their individual visual movement and reading logics. Our participants indicated that a pandemic data visualization dashboard should use a principal‐color selection that is alarming without causing panic. In the study’s second phase, an eye‐tracking experiment, we found that the participants’ actual eye paths deviated from our expectations: they clustered around headings and text, rather than on visualized charts or graphs as anticipated. Based on these findings, we provide design implications for builders of future data‐visualization and disaster dashboards.

Keywords: COVID‐19, Data visualization dashboard, eye‐tracking experiment, information design, pandemic visualization

INTRODUCTION

COVID‐19 has dramatically impacted industries and people’s lifestyles worldwide. In the face of this public‐health crisis and its unpredictable development, information on the Internet can easily panic the public, making approaches to obtaining real‐time information about the pandemic a matter of great concern to policy‐makers and others. The ability to handle and manage such information accurately and comprehensively could help the public sector to make more effective decisions pertinent to controlling the spread of COVID‐19 and other diseases.

Among the many channels whereby people can access information, data dashboards have emerged as an important medium for assessing and presenting global COVID‐19 statistics. A data dashboard is a highly visual interface that aggregates multiple target indicators into several areas of information, thus aiding its users' data‐related observation, monitoring, and decision‐making. More specifically, each dashboard is composed of content areas (i.e., the “blocks” where a designer can place a single chart, plain text labels, or side‐by‐side charts) that simultaneously let users see multiple data charts and connect disparate information to gain new insights (e.g., about geographically based outbreaks).

Effective data dashboards can help their users reduce noise when reading information (Smith, 2013), sharpen their foci, and better understand the raw data behind graphs. The uses of these tools have become increasingly diverse, but today they are most often used to monitor individuals' job performance or other business‐relevant data (e.g., the Index of Economic Freedom, created by the Heritage Foundation and the Wall Street Journal). Various public‐sector bodies have also recently introduced dashboards to monitor infectious diseases. The U.S. Centers for Disease Control and Prevention, for example, has since 1971 operated the National Outbreak Reporting System, which uses filtering criteria including state, year, etiology, and route of transmission. Since the outbreak of COVID‐19, governments and organizations worldwide have been dedicated to developing real‐time dashboards to assist the public in comprehending pandemic conditions both globally and locally. Moreover, dashboards can aid the public‐sector decision‐making process, with notable examples including the World Health Organization’s coronavirus dashboard, as well as the COVID‐19 dashboard of the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU) (Dong et al., 2020).

Although the frequently visited COVID‐19 dashboards mentioned above all display information related to the outbreak, they differ sharply in their use of foreground and background colors, their information layouts, and the nature of the raw data they present. Previous studies of data dashboards have shown that it is necessary to understand their users' high‐ and low‐level needs when designing them. High‐level needs refer to the dashboard’s goals, decision requirements, and workflow, whereas low‐level ones place more emphasis on the overall display of the interface, including the selection of appropriate content and graphics (Brath & Peters, 2004). Previous studies have, however, paid little attention to user studies of dashboards. As consulting a pandemic dashboard becomes part of people’s lives, how this type of visual technology interacts with users deserves further exploration by information science researchers and professionals. In light of this, the current study explores the use of pandemic‐themed data dashboards by Internet users, and frames its research questions around the concepts of affection and cognition, as follows:

RQ1: What are users' affective responses toward the use of color and other visual components of a data dashboard they are using to access COVID‐19 information?

RQ2: What are users' perceptions while using the data dashboard to access COVID‐19 information?

This study was conducted in two phases: a main user study (hereafter: phase 1), followed by an add‐on eye‐tracking experiment (hereafter: phase 2) to complete the research inquiry that phase 1 left open. There were 27 participants in total, with 18 assigned to phase 1 and 9 to phase 2. Phase 1 was completed in the summer of 2021, and phase 2, in the spring of 2022.

Given the relative lack of information‐science research on interface‐design methods for data dashboards, not only for pandemics but for other types of disasters, it is hoped that the present work will provide useful design insights for these increasingly important tools.

METHODOLOGY

To explore users' experiences with, feelings about, and perceptions of a COVID‐19 data dashboard, we curated a set of actual pandemic statistics, designed dashboard prototypes, and conducted user tasks and post‐task interviews. This approach yielded both qualitative and quantitative data. Figure 2 illustrates this study’s two‐phase experimental procedure.

Figure 1. COVID‐19 data‐dashboard examples

Figure 2. Experimental procedure

Data Dashboard: Design and Settings

We chose commonly seen data visualizations for the data dashboard, to avoid participants expending too much effort trying to understand never‐before‐seen charts. Specifically, we selected four chart formats (plain text, line charts, data tables, and global maps), which we felt presented information that could meet users' basic information needs. The selected charts and their informational content are presented in Table 1.

Table 1. The selected charts and their informational content

Chart | Data Presentation | Variables | COVID‐19 Implications
Data Table | Columns and rows | Text / numbers | Cumulative number of confirmed cases/deaths by country
Plain Text | – | Text or numbers | –
Global Map | Visual geographies | Geographic information / colors and shades / text / numbers | Cumulative number of confirmed cases/deaths by country, in the form of a map
Line Chart | Connecting data points with straight lines | Text / numbers / times | Global number of confirmed cases/deaths in different time ranges

We used Google Data Studio to create our dashboard prototype, and further created eight high‐fidelity interactable layouts for phase 1 of the study. The COVID‐19 dashboard from Johns Hopkins University (https://coronavirus.jhu.edu/) was used as a reference for creating content areas. We divided the dashboard into four content areas and placed a single chart in each. These interfaces were designed around two background colors (white/dark gray) and the four selected charts (see Table 1), rotating the primary chart and the background color to explore whether the layout would influence a given user’s browsing behavior.

The layouts of the eight dashboard prototypes are shown in Figures 3 and 4. The COVID‐19 data in this study were collected from the European Centre for Disease Prevention and Control (ECDC) on October 9, 2020.
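To make this design space concrete, the sketch below enumerates the two‐background by four‐primary‐chart combinations that yield the eight prototypes; the labels are illustrative stand‐ins, not identifiers from our design files.

```python
# Illustrative sketch: the 2 x 4 design space (background color x primary
# chart) that yields the eight dashboard prototypes. Labels are hypothetical.
from itertools import product

backgrounds = ["white", "dark gray"]
primary_charts = ["data table", "plain text", "global map", "line chart"]

prototypes = [
    {"background": bg, "primary_chart": chart}
    for bg, chart in product(backgrounds, primary_charts)
]

assert len(prototypes) == 8
for i, p in enumerate(prototypes, start=1):
    print(f"Prototype {i}: {p['background']} background, "
          f"primary chart = {p['primary_chart']}")
```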

Figure 3. Dashboard interface layouts

Figure 4. Dashboard interface prototypes

Phase 1 Design and Study Procedure

Task Design

Phase 1 of the study consisted of three sessions. Each is dealt with in its own subsection below.

Note‐taking Tasks. In this session of phase 1, the participants were allowed to freely explore four assigned dashboard interfaces with different layouts and colors. Each interface was shown to them for a limited time only (2 minutes), after which they were asked to note down things they remembered about the dashboard, using any type of expression they wished, e.g., writing or sketching (see Figure 5), as inspired by previous work (Jeng et al., 2021). After finishing all four note‐taking tasks, they were asked two questions about their notes: “During the tasks, which element was the most impressive to you?” and “Why do you remember it in particular?”

Figure 5. Example drawings generated during note‐taking tasks

Searching and Browsing Tasks. Before this session, participants underwent a training session aimed at reducing differences between experienced and naïve participants caused by the learning‐curve effect. The formal task was divided into search goals and browsing goals, and—as shown in Table 2—the Latin Square method was used to counterbalance the viewing order of the interfaces (a minimal sketch of this counterbalancing logic follows Table 2). Through these methods, we hoped to minimize experimental errors and simplify the repetitive procedure.

Table 2. Participants' interface‐viewing orders

Participant Numbers | Order 1 | Order 2 | Order 3 | Order 4
01, 03, 07, 11, 15 | White A | White B | Gray C | Gray D
02, 04, 08, 12, 16 | Gray B | Gray A | White D | White C
05, 09, 13, 17, 19 | White C | White D | Gray A | Gray B
06, 10, 14, 18, 20 | Gray D | Gray C | White B | White A
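As a rough illustration of the counterbalancing logic behind Table 2, the sketch below generates a simple cyclic 4×4 Latin square, in which each interface appears exactly once in each viewing position across groups. Our actual assignment in Table 2 additionally alternates background colors across groups, so this is a simplified sketch rather than the exact procedure we used.

```python
# Simplified sketch of Latin-square counterbalancing: each of the four
# interfaces appears exactly once in each viewing position across groups.
# (Our actual Table 2 assignment also alternates the background color.)
def latin_square(items):
    n = len(items)
    return [[items[(i + j) % n] for j in range(n)] for i in range(n)]

interfaces = ["A", "B", "C", "D"]  # the four layout variants
for group, order in enumerate(latin_square(interfaces), start=1):
    print(f"Group {group}: {' -> '.join(order)}")
```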
Table 3. Details of the two‐phase experiment

 | Phase 1 | Phase 2
Data‐collection Window | 2021 summer | 2022 spring
Total Task Duration | 30–40 minutes | 20–30 minutes
Number of Participants | 18 | 9
Sessions | Note‐taking; Searching and browsing; Interview | Calibration and practice; Searching and browsing; Interview

In the search task, we asked participants three factual questions about COVID‐19 that they needed to find the answers to using the data dashboard. An example question was, “What is the current cumulative number of confirmed cases in Germany?”

In the browsing task, we asked participants three questions related to global trends in COVID‐19 that they needed to answer by viewing the dashboard interface: e.g., “Please briefly describe the global trend in the number of confirmed cases from July to October.”

Post‐task Interview. In the final session, we asked participants for (1) their thoughts on information‐seeking strategies for the searching/browsing tasks, (2) their subjective feelings about dashboard interfaces with different colors and layouts, and (3) their overall understanding of, and/or advice derived from, the pandemic‐data dashboard. Specific questions included “Why did you choose [chart name] to respond to the question ‘[question from the search task]’?”, “How do you feel about white and dark‐gray dashboards, respectively?”, and “To you, what is the most important purpose of a pandemic‐data dashboard?”

Phase 2 Design and Study Procedure

Task Design

In phase 2, we used a Gazepoint GP3 mid‐level desktop eye tracker with a sampling rate of 60 Hz. The experiment proceeded through the following three sessions.

Calibration and Practice

Calibration was performed before each task so that the eye tracker could accurately detect the participant’s gaze position. Because this experiment was mainly about chart‐reading, we chose to conduct five‐point calibration. Then, as a warm‐up task, the participants were asked to “walk” through a maze using their eyes. This was intended to familiarize them with the task flow in advance, and to ensure that their gaze paths could be collected accurately across the entire screen.
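For readers who wish to script a similar setup, the minimal sketch below shows how raw gaze records could be streamed from a Gazepoint tracker via its Open Gaze API, assuming the vendor’s default configuration (the Gazepoint Control software listening on TCP port 4242 and exchanging XML messages); the message and field names are quoted from memory of that API and should be verified against the current documentation.

```python
# Minimal sketch: stream gaze records from a Gazepoint GP3 via the Open
# Gaze API. Assumes Gazepoint Control is running locally on its default
# TCP port (4242); command/field names should be checked against the docs.
import socket

HOST, PORT = "127.0.0.1", 4242

with socket.create_connection((HOST, PORT)) as sock:
    # Request fixation point-of-gaze fields, then enable the data stream.
    sock.sendall(b'<SET ID="ENABLE_SEND_POG_FIX" STATE="1" />\r\n')
    sock.sendall(b'<SET ID="ENABLE_SEND_DATA" STATE="1" />\r\n')

    buffer = b""
    while True:
        buffer += sock.recv(4096)
        while b"\r\n" in buffer:
            line, buffer = buffer.split(b"\r\n", 1)
            message = line.decode("utf-8", errors="ignore")
            if message.startswith("<REC"):
                # e.g., <REC FPOGX="0.48" FPOGY="0.52" FPOGD="0.21" ... />
                print(message)
```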

Searching and Browsing Tasks

Due to its primarily visual focus, phase 2 did not include any note‐taking tasks. The searching and browsing tasks, meanwhile, were reduced to four questions in total. In the browsing task, participants were given 15 seconds to read the dashboard interface casually and try to express what they were seeing. In the search task, as in phase 1, we asked participants three factual questions about COVID‐19, which they needed to find the answers to using the dashboard. We stopped eye tracking when the participant found the answer, and confirmed their gaze paths.

Control Factors

Each participant viewed just one interface throughout phase 2. Among the eight prototypes from phase 1, we selected three white dashboards (i.e., layouts white A/B/C; see Figure 3) as the experimental objects for phase 2, for the reasons described in the following paragraphs, with three participants assigned to each interface.

Table 4. Participants' colleges, by ID number

No. | College/School | No. | College/School | No. | College/School
01 | Arts and Humanities | 10 | Arts and Humanities | e01 | Arts and Humanities
02 | Arts and Humanities | 11 | Arts and Humanities | e02 | Arts and Humanities
03 | Arts and Humanities | 12 | Arts and Humanities | e03 | Arts and Humanities
04 | Law | 13 | Bio‐resources and Agriculture | e04 | Arts and Humanities
05 | Science | 14 | Bio‐resources and Agriculture | e05 | Arts and Humanities
06 | Electrical Engineering and Computer Science | 15 | Arts and Humanities | e06 | Electrical Engineering and Computer Science
07 | Electrical Engineering and Computer Science | 16 | Arts and Humanities | e07 | Electrical Engineering and Computer Science
08 | Engineering | 17 | Engineering | e08 | Engineering
09 | Social Science | 18 | Social Science | e09 | Engineering

First, based on the results of phase 1, we found that the background color of the interface had little impact on visual movement and user focus. To sharpen the focus of the experiment, therefore, we decided to exclude the color factor and focus on the layout only.

Second, layout D was designed mainly to allow substitution among the four information charts, and similar layouts are fairly uncommon in real life. Indeed, in phase 1, most of our participants said they thought this layout was unrealistic, so we removed it from consideration in phase 2.

Finally, in light of our instruments' limitations and other experimental considerations, we decided to ignore interface interactivity and use a static interface in this task.

Participants

The participants (N = 27), of whom 18 participated in phase 1, and the other nine in phase 2, had a mean age of 22.5 years (range 18–26). We recruited participants for both phases in the same way, so all were graduate or undergraduate students from various colleges of a research university in Taiwan, the University of X (anonymized for review). Among them were two pre‐test participants (P01, P02) from phase 1, whose data were not included in the following analysis. The participants had varying levels of familiarity and experience with data dashboards.

Analysis

Interview data for phase 1 were all transcribed from audio recordings. The drawings generated in the memory task (four by each participant, making a total of 64) were used as a tool for reducing the participants' memory burden in the post‐task interviews. We scanned and stored these drawings, but conducted no further data analysis of them. Data from phase 2's eye‐tracking experiments were collected and analyzed using GazePoint, the desktop software accompanying the eye tracker.
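The fixation analysis itself was handled by the Gazepoint software; purely for illustration, the sketch below implements one standard alternative, dispersion‐threshold (I‐DT) fixation detection, over raw (x, y, t) gaze samples. The thresholds are illustrative values, not parameters from our study.

```python
# Illustrative sketch of dispersion-threshold (I-DT) fixation detection,
# a standard way to derive fixations from raw gaze samples. This is not
# the Gazepoint software's algorithm; thresholds are placeholder values.
from typing import List, Tuple

Sample = Tuple[float, float, float]  # (x, y, t): normalized coords, seconds

def dispersion(window: List[Sample]) -> float:
    xs, ys, _ = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples: List[Sample],
                  max_dispersion: float = 0.03,  # normalized screen units
                  min_duration: float = 0.1      # seconds (~6 samples at 60 Hz)
                  ) -> List[Tuple[float, float, float, float]]:
    """Return fixations as (mean_x, mean_y, start_t, end_t)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window covering at least min_duration.
        j = i
        while j < n and samples[j][2] - samples[i][2] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while dispersion stays under the threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            xs, ys, ts = zip(*samples[i:j + 1])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys),
                              ts[0], ts[-1]))
            i = j + 1
        else:
            i += 1  # slide the window start forward
    return fixations
```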

RESULTS

Our five key findings, four from phase 1 and one from phase 2, are summarized in their own subsections below.

Visual‐perception Differences Were Caused by the Dashboard’s Background Color

The participants' preferences for and visual perceptions of the two dashboard colorways are shown in Figure 6.

Figure 6. Interface background‐color preferences

As shown in Figure 6, the participants' preferences for the background color of the interface were sharply divided. Their visual affection for the two colors also differed. Most of the seven participants who preferred the dashboard with a white background said that it was visually clearer and easier to read, but some felt that, although clear and concise, it looked cheap or un‐designed: “[B]lack [i.e., dark gray …] has more sense of design […] it’s still white that is clearer, but white has a sense of cheapness, I mean, it seems like the organization doesn’t have that much money” (P05). Participants’ reasons for preferring the dark‐gray background included that it was less dazzling, calmer, and more serious, and many of them also mentioned that it was reminiscent of other interfaces they habitually used (e.g., code editors, night mode).

In terms of the colors' connotations, several participants felt that the use of dark gray was more serious (P12, P18), but others held the opposite view, noting that the websites of public health and other government departments were more likely to use white (P13).

Lastly, it is important to note when interpreting the above‐mentioned preferences and feelings about the two background colors that half the participants (n = 8) were actually unaware of the interface‐color change before being informed about it by the researcher. Conceivably, this was because the particular colors on offer, i.e., black, white and gray, all tend to be emotionally neutral (Suk & Irtel, 2010).

User Attention and Preferences about the Content Area

In phase 1, through (1) the note‐taking tasks and (2) the post‐task interviews, we attempted to discern what the participants were most concerned about in the dashboard’s content areas, and the reading logics that underlay these concerns.

During the interviews, we asked for feedback on the content areas the participants had focused on. As shown in Figure 7, more than 80% preferred the map and the line chart. This was presumably because many participants reported difficulties processing and comprehending absolute data (e.g., precise numbers of infected people) through tables and text labels. Maps and line charts, on the other hand, were more likely to facilitate their acquisition of meaningful, albeit less precise, information. They also reported that graphics made it easier for them to infer information across time and space: e.g., that if a neighboring country had a deeper color, its number of infections had recently been on the rise.

Figure 7. User attention toward and preferences about the content area

Interface‐layout Preferences

We also asked our participants about their layout preferences and their rationales for them, as shown in Figure 8. From the interview data, we identified three layout aspects that drew these users' attention: (1) that it accorded well with their individual reading flow; (2) that the arrangement matched their personal preferences (i.e., purely out of subjectivity); and (3) that they agreed with the relative importance that seemed to be assigned to its components. Each of these aspects will be discussed in turn below.

Figure 8. Interface‐layout preferences

First, although the four tested layouts were all made up of the same components, reading efficiency varied sharply from one layout to another across individuals. For example, P04 favored layout B, in which a map accounted for a considerable proportion of the design: he reported that he first pictured a general image of the pandemic globally, then navigated to the condition of each country of interest, in keeping with his normal mode of acquiring information. When, however, the same participant viewed layout C—in which a line chart accounted for a major proportion of the design—he said he would be “lost,” unsure what to look at first, or even what information the dashboard was intended to deliver.

Secondly, many participants stated that their choice of layout was based merely on which charts they liked and/or were comfortable with. And since each layout has a primary chart that takes up the major proportion of the design, one added that he chose the layout whose primary chart exactly aligned with his preference: “[I would]… and that was because that one (dashboard) has the map taking up its biggest proportion.” (P03)

Thirdly, a number of participants considered it important that the information contained in a layout component be proportional to the amount of space it occupied. Many also mentioned in their interviews that they preferred one layout over another because reducing the size of the other three graphs did not affect their function.

“Something that was more detailed [on the map] could only be understood when zoomed in [… But] as for the line chart, I didn’t expect it to be so large, as it only served as a visual aid for me to see trends. Therefore, it didn’t need to be so detailed.” (P16)

“[Plain text] takes up a lot of space but only provides a little information. It should be something that a line or two could explain fully. It over‐occupied space but gave the most useless information, and that was just disproportionate.” (P05)

User Expectations for the Interface Design of a Pandemic Data Dashboard

In the main study, when the participants were asked for their general thoughts about the dashboard, a number (n = 5) said that they expected to gain important information through it within a short period of time. Some also anticipated that the dashboard would raise their alertness and awareness of the pandemic, and that the interface’s layout should conform to their viewing logic to create a comfortable or smooth visual flow. The selection of the background color, some said, should connote warning but stop short of panicking members of the public (e.g., P07, P12). Many (n = 9) cared about the accuracy and reliability of data, and considered it important that headings within the layout give clear explanations of the dashboard’s content, including the data’s sources and when the information was last updated.

All the participants in phase 1 claimed that the dashboard met their expectations, i.e., that it satisfied their basic information needs. However, most (n = 10) suggested that it would be improved if they could control its readability (e.g., adjust the background color and/or the data‐presentation modes according to their individual needs):

“If there is something like a search bar, I could just type in the country that I would like to look up for information.” (P10)

“What if there was a function allowing me to adjust the size of each panel as I wish […]?” (P11)

Users' Visual Foci on the Pandemic Data Dashboard

Finally, the phase 1 interview data led us to recognize that certain elements of the dashboard left profound impressions on the participants. As shown in Figure 9, key examples of this phenomenon included (1) the blue and orange banners that contained deaths and confirmed cases, (2) the trend of the line chart, and (3) the dark color applied to countries on the map.

Figure 9. High‐impact visual‐focus elements

The phase 2 eye‐tracker data were then used to produce the heatmap shown in Figure 10, which captures the most‐ and least‐seen areas of the three different layouts used in our browsing task. Specifically, it shows us that the participants' gaze points were concentrated on the blue and orange banners that contained deaths and confirmed cases. In light of this finding, future designers could create eye‐catching visual foci to aid decision‐making about a variety of subjects (e.g., issue advocacy or team goals).

Figure 10. Heatmap of three layouts derived from phase 2 eye‐tracking data
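For readers interested in reproducing this kind of visualization, the sketch below shows one common way to build a gaze heatmap: accumulate duration‐weighted fixation points on a pixel grid, then smooth with a Gaussian kernel. This is an illustrative approach with made‐up sample points, not the method the Gazepoint software uses internally.

```python
# Illustrative sketch: render a gaze heatmap by accumulating duration-
# weighted fixations on a grid and smoothing with a Gaussian kernel.
# Sample fixations are made up; coordinates are normalized to [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

def gaze_heatmap(fixations, width=1280, height=720, sigma=40):
    grid = np.zeros((height, width))
    for x, y, duration in fixations:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        grid[row, col] += duration  # weight each point by fixation length
    return gaussian_filter(grid, sigma=sigma)

# Hypothetical fixations clustered near the top banners of a dashboard.
heat = gaze_heatmap([(0.45, 0.12, 0.8), (0.52, 0.15, 1.2), (0.30, 0.60, 0.4)])
plt.imshow(heat, cmap="jet", alpha=0.6)  # typically overlaid on a screenshot
plt.axis("off")
plt.show()
```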

What surprised us the most was how much the routes the participants took deviated from our expectations. That is, their gaze points clustered near the captions and the text, rather than on graphics as anticipated. This echoed their feedback in the interviews regarding the high importance of clear titles and detailed instructions.

DISCUSSION AND DESIGN IMPLICATIONS

The experimental dashboard we created, like others before it, served as a real‐time monitoring tool that aggregated complex COVID‐19‐related data into several content areas designed to help the public understand the current situation, and potentially help governments, medical providers and researchers to arrive at more effective decisions. Our user data suggest that, in addition to providing accurate data from credible sources, such dashboards should emphasize graphics, readability, interoperability, and efficient information delivery. Specifically, our participants' experience of using our COVID‐19 dashboard varied depending on its colors and the configuration and presentation of the information shown. Based on these findings, several design recommendations are provided below.

Color

Visually Intense Colors Should Be Avoided on Pandemic Data Dashboards

Color perception is essential to human senses and our visual experiences (Adams & Osgood, 1973). Color use in visualization may influence users' feelings of pleasantness or unpleasantness, and enhance the information it is intended to convey (Kaya & Epps, 2004; Suk & Irtel, 2010). First, we recommend that developers and designers avoid using visually intense colors on their pandemic dashboards. Several participants in phase 1 mentioned that the colors of a pandemic data dashboard should be alarming, but not to the point that they lead to overly negative emotional responses. For instance, P12 noted that the combination of a black background color and red chart elements could easily cause apprehension or even panic because of such imagery’s associations with blood and death. Seven participants, meanwhile, had a neutral visual perception of the dark gray and white backgrounds; but of those two, only the former gave them a sense that the dashboard was “conveying serious matters” (e.g., P07, P16). Therefore, many participants (n = 8) deemed dark gray to be suitable as a color for the pandemic data dashboard. However, people’s visual perceptions of colors are likely to vary widely depending on the situation and their personal experience, not to mention individual physiology (see next subsection). Therefore, we suggest that more in‐depth user studies focused on colors be conducted when planning future pandemic dashboards.

The Colors of a Pandemic Data Dashboard Should Strike a Balance between Universal Needs and Diversified Preferences

Because a pandemic data dashboard delivers important information about a threat to life to the public, it should, to the greatest extent possible, be equally accessible to all groups. Prior research has shown that about 8% of men and 0.5% of women suffer from color‐vision deficiency (CVD), among which red‐green colorblindness/weakness is the most common. The dashboard prototype designed for the present study used a colorblind‐friendly color combination (blue and orange) to present the main interface elements, and we recommend that this factor be carefully considered when visualizing pandemic and other public‐emergency data in the future.
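As one concrete accessibility check that designers could add to their workflow, the sketch below computes the WCAG 2.x relative‐luminance contrast ratio for a candidate palette; the blue and orange hex values are illustrative stand‐ins, not the exact colors used in our prototype.

```python
# Illustrative sketch: WCAG 2.x contrast-ratio check for a candidate
# palette. Hex values are stand-ins, not the study's exact colors.
def srgb_to_linear(channel: int) -> float:
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA asks for >= 4.5:1 for normal text, >= 3:1 for large text.
print(contrast_ratio("#1f77b4", "#ffffff"))  # blue on white
print(contrast_ratio("#ff7f0e", "#ffffff"))  # orange on white
```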

Layout

The Information Configuration of a Pandemic Data Dashboard’s Interface Should Conform to Its Users’ Reading Motions

Most (n = 8) of our participants in phase 1 expressed a belief that if the dashboard’s layout were made consistent with their reading logic and eye movements, their understanding of the messages it was intended to deliver would be enhanced considerably. Our findings suggest that future designers of such dashboards dig deeper into reading behavior, as a means of arriving at layout solutions that optimally reduce users’ uncertainty when encountering pandemic information and thus achieve better efficiency. As shown in Figure 11, we captured participants’ gaze paths and confirmed that their eye trajectories mostly aligned with what they perceived and described during the interviews.

Figure 11. Fixation map of three layouts derived from phase 2 eye‐tracking data

The Sizes and Relative Sizes of Each Component in a Pandemic Data Dashboard Should Be Carefully Considered, to Avoid Wasted Space and User Misunderstandings

Our participants also favored line charts and maps over plain text and data tables. Having a greater proportion of future dashboards consist of the former two elements would therefore be likely to increase their readability, as regards both detailed and general pandemic information. On the other hand, the effort required to read numeric information is relatively small, meaning that if space is limited, numeric presentation should not be abandoned altogether. In sum, our findings suggest that when designing a pandemic‐ or other emergency‐related dashboard, the reading burden and the space that each component takes up should both be given careful consideration.

In addition, we recommend maintaining an awareness of both the absolute and relative sizes of any line charts used, so that trends will neither be overestimated nor underestimated, as such misinterpretations of the data could potentially cause unnecessary fear, on the one hand, or prompt the public to let its guard down prematurely, on the other.

Information Content

Pandemic Data Dashboards Should Meet Users' Needs for Both Comprehensive and Specific Information

Our findings further suggest that designers should choose infographics that allow dashboard users to obtain “big‐picture” information but also to select regions and dates based on their individual interests and needs. This might involve incorporating advanced search functions into charts. In addition, some of the users we interviewed said they would welcome a feature that allowed them to position data from whatever country they were in right next to the global confirmed cases.

Titles and Headings Should Be Clear

Some of our participants mentioned that every single component of the dashboard should have a clear title explaining its content, to reduce their cognitive burden (e.g., P15, P16, P18). Our prototype clearly failed to meet this expectation, but future designers should take heed of it.

LIMITATIONS AND DIRECTIONS FOR FUTURE WORK

Our high‐fidelity, multi‐functional, interactive pandemic dashboard prototype aimed to simulate actual use conditions. However, our findings indicated that the functionality it provided was affected to some extent by users' cognition and prior experience, as well as by the devices on which they viewed it. For example, when using laptops or desktop computers, people tended to realize quickly that the dashboard was interactive, since they operated it via a mouse; but when seeing it on LCDs, they tended to consider it merely an information display panel, and did not assume it to be user‐controllable. As a result, our participants provided various suggestions that lay beyond the scope of our research: about the built‐in interactive features of the charts, the readability of the fonts, and so forth. However, even when we considered single charts alone, it seemed that our prototype could not guarantee a comfortable user experience. Therefore, we argue that more exploration and discussion in the field of information design will be needed before truly user‐friendly public‐emergency dashboards can be devised.

In addition, due consideration should be given to the technical and other issues that might arise if an experimental dashboard like ours were deployed to a user population that was larger and more diverse, whether in age, linguistic/cultural background, or level of familiarity with information technology. Also, none of the participants were experts, either in information systems design or public health. Future work could therefore usefully involve professionals from those fields, to spark more in‐depth discussion.

Additionally, it should be remembered that our research design explored the correspondence between users' information‐seeking and chart‐selection behaviors. For example, when they had specific information needs (e.g., looking for confirmed cases in Germany), which graph or chart would be their first reading priority? However, we did notice that different dashboard design tools and their functions influence users' wayfinding strategies. To elaborate, in our experiment, when the mouse hovered over a country on the global map, the number of confirmed cases would be displayed; recognizing this, several participants used the map to find the number (instead of going straight to the table). Since no single dashboard can represent all dashboards, investigating common functions shared by different dashboards is a promising direction for future research.

A key limitation of the eye‐tracking phase of our study was that its participants were given a non‐interactive interface, which limited the approaches they could use to find information. This presumably made their gaze trajectories more uniform than would have been the case if interactive dashboard features had been enabled. Thus, in future work, we anticipate curating a dashboard that is both more refined and richer in visualized components than the one used in the current study, as a means of seeking possible design breakthroughs. We also strongly recommend executing future eye‐tracking experiments on interactive interfaces.

Finally, via our phase 2 experiment, we verified that a text box in a colored container caught the participants' attention; but the question of whether the text, the container, or their combination was chiefly responsible for this effect remains unanswered. Further research on this topic is therefore merited.

RESEARCH DATA

The add‐on experiment’s results visualization (browsing task) can be accessed at https://osf.io/7sy9f/

ACKNOWLEDGEMENTS

This work was financially supported by the Ministry of Science and Technology (MOST) in Taiwan, under MOST 111‐2636‐H‐002‐004‐ and MOST 111‐2634‐F‐002‐018‐, and the Center for Research in Econometric Theory and Applications (Grant no. 111 L900204) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project, and the Universities and Colleges Humanities and Social Sciences Benchmarking Project (Grant no. 111L9A002) by the Ministry of Education (MOE) in Taiwan. This work was done while Yu‐Ju Yang was at the Department of Library and Information Science, National Taiwan University, Taiwan. Please address all the correspondence to Wei Jeng.

Contributor Information

Yu‐Wen Huang, Email: b06106019@ntu.edu.tw.

Yu‐Ju Yang, Email: yujuy@andrew.cmu.edu.

Wei Jeng, Email: wjeng@ntu.edu.tw.

REFERENCES

1. Adams, F. M., & Osgood, C. E. (1973). A cross‐cultural study of the affective meanings of color. Journal of Cross‐Cultural Psychology, 4(2), 135–156. https://doi.org/10.1177/002202217300400201
2. Brath, R., & Peters, M. (2004). Dashboard design: Why design is important. Direct, October 2004.
3. Dong, E., Du, H., & Gardner, L. (2020). An interactive web‐based dashboard to track COVID‐19 in real time. The Lancet Infectious Diseases. Published online February 19, 2020. https://doi.org/10.1016/S1473-3099(20)30120-1
4. Few, S. (2006). Information dashboard design: The effective visual communication of data. O’Reilly Media.
5. Heer, J., & Shneiderman, B. (2012). Interactive dynamics for visual analysis. Communications of the ACM, 55(4), 45–54. https://doi.org/10.1145/2133806.2133821
6. Jeng, W., Hu, H. Y., Tang, G. M., & Chien, S. Y. (2021). Cultural differences in the allocation of attention to information architecture components. Journal of Library & Information Studies, 19(1), 19–41.
7. Kaya, N., & Epps, H. H. (2004). Relationship between color and emotion: A study of college students. College Student Journal, 38(3), 396–405.
8. Pappas, L., & Whitman, L. (2011). Riding the technology wave: Effective dashboard data visualization. In M. J. Smith & G. Salvendy (Eds.), Human interface and the management of information: Interacting with information (Lecture Notes in Computer Science, Vol. 6771). Springer. https://doi.org/10.1007/978-3-642-21793-7_29
9. Smith, V. S. (2013). Data dashboard as evaluation and research communication tool. In T. Azzam & S. Evergreen (Eds.), Data visualization, part 2 (New Directions for Evaluation, Vol. 140, pp. 21–45).
10. Suk, H.‐J., & Irtel, H. (2010). Emotional response to color across media. Color Research and Application, 35(1), 64–77. https://doi.org/10.1002/col.20554
11. Wexler, S., Shaffer, J., & Cotgreave, A. (2017). The big book of dashboards: Visualizing your data using real‐world business scenarios (1st ed.). Wiley.
12. Zhuang, M., Concannon, D., & Manley, E. (2020). A framework for evaluating dashboards in healthcare.
