Health Systems. 2019 May 28;8(3):203–214. doi: 10.1080/20476965.2019.1620637

The potential role of dashboard use and navigation in reducing medical errors of an electronic health record system: a mixed-method simulation handoff study

Danny T Y Wu a,b, Smruti Deoghare a, Zhe Shan c, Karthikeyan Meganathan a, Katherine Blondon d,e
PMCID: PMC6896471  PMID: 31839932

ABSTRACT

The dashboards of electronic health record (EHR) systems could potentially support the chart biopsy that occurs before or after physician handoffs. In this study, we conducted a simulation handoff study and recorded the participants’ navigation patterns in an EHR system mock-up. We analyzed the navigation patterns of dashboard use in terms of duration, frequency, and sequence, and we examined the relationship between dashboard use in chart biopsy and the errors identified after handoffs. The results show that the participants frequently used the dashboard as an information hub and as an information resource to help them navigate the EHR system and answer the questions in a nursing call. Moreover, using the dashboard as an information hub can help reduce imprecision and factual errors in handoffs. Our findings suggest the need for a “context-aware” dashboard to accommodate dynamic navigation patterns and to support clinical work as well as to reduce medical errors.

KEYWORDS: Clinical systems and informatics, health IT quality and evaluation, human-computer interaction

1. Introduction

With the high adoption rate of electronic health record (EHR) systems in clinical and healthcare organisations across the US (Hsiao & Hing, 2012), it is crucial that EHR systems support rather than impede clinicians’ tasks and further improve productivity (Adler-Milstein & Huckman, 2013). Interacting with EHR systems is a core activity in care providers’ daily clinical work. EHR systems should not only store data accurately and efficiently but also provide easy and intuitive navigation of those data; without such navigation, EHR data cannot be fully used and their value cannot be maximised. Hence, improving EHR design can improve the clinician experience, supporting clinical work efficiently and effectively in the service of high-quality, safe care. The goal of designing user-friendly, intuitive information systems drives EHR usability research, which covers aspects including, but not limited to, the nature of user-software interactions, the learnability of the software, the facilitation of user cognition, the design of graphics, system navigation, and editing capability and consistency among interfaces (Hollin, Griffin, & Kachnowski, 2012). Our study focuses on system navigation.

EHR navigation refers to a system’s ability to support the user’s mental model and allow for the easy reversal of actions (Hollin et al., 2012). The most common problems faced by EHR users involve technical issues during navigation or the inability to access the desired information within a few clicks (Roman, Ancker, Johnson, & Senathirajah, 2017). These problems often lead to frustration, increasing the possibility of errors, compromising the patient-physician interaction, and ultimately limiting user acceptance of EHRs (Clarke et al., 2013). Several studies have shown that an inflated number of steps required to perform certain tasks can prevent users from maintaining a detailed, comprehensive overview of the information and can impose high mental demands (Ariza, Kalra, & Potts, 2015; Craig, Farrell, Fred, Filipe, & Gamboa, 2010; Saitwal, Feng, Walji, Patel, & Zhang, 2010), leading to information overload. Such overload can be detrimental to physician-EHR interactions and to the quality of care, and it increases the possibility of errors, especially during an activity such as the chart biopsy.

The chart biopsy is integral to maintaining an up-to-date patient health record. It is defined as “the activity of selectively examining portions of a patient’s health record to gather specific data or information about that patient or to get a broader sense of the patient and the care that patient has received” (Hilligoss & Zheng, 2013). Clinicians usually perform a chart biopsy after receiving a patient handoff from another clinician. Handoffs focus on the transfer of patient information, rights, duties, and accountability from one healthcare provider to another (Solet, Norvell, Rutan, & Frankel, 2005). Chart biopsies play a crucial role for physicians who are taking over the care of patients with whom they are unfamiliar, allowing them to grasp the patient’s current state rapidly. It is important to note that although handing off all patients is common practice in the US, the process varies considerably elsewhere, for example in Switzerland, where this study was conducted (Blondon, Del Zotto, Rochat, Nendaz, & Lovis, 2017).

Handoffs are complex procedures requiring the integration of a large body of information about each patient. Imprecision and errors during handoffs can put the patient’s safety at risk. Although the content and structure of handoffs can vary widely among providers (Arora & Johnson, 2006), a prior study on local practices in medical wards showed that there are common themes, such as pending lab results (Blondon, Wipfli, Nendaz, & Lovis, 2015). Moreover, multiple interactions among pertinent features contribute to the complexity of handoffs (Blondon et al., 2015). A recent survey of medical interns indicates that 98% of participants believed that inadequate handoff education leads to adverse events (Schröder et al., 2018). Adverse events can raise critical patient safety concerns and diminish the quality of care. Restructuring and computerising the handoff process may be a viable way to improve its quality and safety (Bomba & Prakash, 2005). A multi-site study by Starmer et al. (2014) showed that standardising written and oral communication for handoffs can reduce medical errors by 23%. However, the design of health information technology to support handoffs should consider communication networks and non-linear information gathering and dissemination behaviours (Benham-Hutchins & Effken, 2010).

The EHR dashboard is one type of health information technology that, if well designed, offers great potential to support the chart biopsy and reduce medical errors during handoffs. EHR dashboards have been designed for multiple purposes and at different scales, ranging from a one-patient chart to a patient population. One common way to use an EHR dashboard is as an “information hub”, providing a glimpse of the patient’s condition as well as offering tools to explore specific aspects of the patient’s health and medical history in more detail. Studies have shown improvements in decision-making and patient outcomes with the increased use of EHR dashboards (Bakos, Zimmermann, & Moriconi, 2012; Etamesor, Ottih, Salihu, & Okpani, 2018; Franklin et al., 2017; Khan et al., 2018). Moreover, navigation patterns in EHRs may differ between cued and non-cued events. For example, EHR navigation varied in the chart biopsies of doctors who received a text message from nurses during handoffs (Kendall et al., 2013), suggesting that physicians employ diverse information-seeking strategies depending on the context. It remains challenging to refine current EHR dashboard designs to further improve the effectiveness and efficiency of information access, especially during chart biopsies. Another common way to use the EHR dashboard is as a rich “information resource” for performing automated analytical reviews of clinical data more efficiently (Wilbanks & Langford, 2014), thus reducing the occurrence of medical errors (Bakos et al., 2012). It is not surprising that a dashboard can be used as a business intelligence tool to reduce dependence on memory by providing depth and breadth of information within a limited amount of visual real estate.

In this study, we examine the role of dashboards in chart biopsies and the relationship between the navigation patterns used in the chart biopsy phase and the errors generated during handoffs, leading to our three hypotheses. These hypotheses focus on dashboard use in terms of duration, frequency and sequence, and on its correlation with different types of handoff errors. Our three hypotheses are as follows: (1) dashboard use tends to be the first step in navigation patterns (a preferred information hub) when there is a call triggering specific information needs by a clinician; (2) the dashboard provides summarised patient information (a rich information resource) so that clinicians tend to use it for longer and more frequently after a call; and (3) dashboard use can reduce medical errors in handoffs.

This paper is organised into the following sections: (1) a method discussion including the study design of the EHR mock-up system, navigation patterns, and data analysis; (2) a results section describing the major findings of our study on dashboard use and its relationship to errors in handoffs; and (3) a discussion of ideas for the design of a dashboard, the potential role of dashboard use in reducing medical errors, limitations, future work, and the conclusions.

2. Method

2.1. Study design

A mixed-method simulation study was conducted to observe clinician handoff behaviours. A total of 30 study participants were recruited, all of whom were actual users (physicians) of a home-grown EHR system at a leading European academic health centre (University Hospitals of Geneva). The participants were asked to interact with a mocked up EHR system that was implemented by using screenshots of the actual home-grown EHR system with eight fictitious patients. These eight cases were developed based on true patient examples with various medical conditions, including (1) retrosternal chest pain, (2) abdominal pain, (3) chest pain, (4) shortness of breath, (5) medication error, (6) fever, (7) skin rash, and (8) low blood pressure. The cases were randomly assigned to the study participants, whose navigation patterns within the EHR system mock-up were recorded in the form of event logs (i.e., clicking through the screenshots as webpages). Since the participants had strong familiarity with the actual home-grown EHR system prior to the study, they were expected to encounter minimal barriers while using the mock-up of the EHR system to review the patient cases.

Each study participant received six of the eight patient cases and reviewed them in three phases: (1) handoff, (2) chart biopsy, and (3) sign-out. Figure 1 illustrates the design of this simulation study. The study began with the handoff phase, in which participants received handoffs of four of the six assigned cases. In the second phase (chart biopsy), participants were free to use the EHR system mock-up to review the four patient cases. During this phase, standardised nurse calls were made by the research team to serve as a prompt to (re-)review the patient cases. Each of the eight cases had a corresponding call script; a sample script is provided below. After the call, in the third phase (sign-out), the participants had a face-to-face interview with the research team to give a sign-out of all six assigned patient cases.

A sample call script:

“Hi, this is Myriam from ward 6B. Are you the doctor covering for this ward tonight? ” [Doctor responds …]

“I’m calling about Mr. Lopez, who was admitted for pneumonia. Did your colleague sign him out to you?” [If yes, then continue. If not, nurse provides a rapid summary of the rest of his medical history.]

“He’s presenting right-sided lower chest pain, assessed at 6 out of 10 on the visual analogue scale.” [Doctor responds …] “His sat’s at 88% with 2 litres of oxygen” [The next elements are provided if asked: he’s had prior similar episodes of chest pain, but never as strong, his usual sats are higher, he’s not on his C-PAP at this moment, he’s just finished his meal. The chest pain seems to be worse when he takes deep breaths but it’s sort of hard to tell. BP 130 over 80, pulse at 90 bpm.]

If needed, prompt: “What should I do for him?”

Figure 1. Study design.

*The study design involved three phases: (1) handoff, in which the participants received handoffs of four of the six assigned cases, (2) chart biopsy, where participants used the EHR system mock-up to review the four cases, and (3) sign-out, in which the participants had a face-to-face interview with the research team on all six cases.

The researcher acting as the nurse followed a standardised script, with a primary topic for the call (primary concern) followed by questions about the doctor’s plan in response to the concern. The topics of the calls varied, but the nurse questions were standardised, requiring different responses from the participating doctors. Among the four called cases, two had been previously discussed during the handoff phase, so the participant had prior knowledge of those two cases but not of the two undiscussed cases. In other words, the six patient cases that each participant received were grouped into three categories: the first two were handed off but not called about, the second two were both handed off and called about, and the last two were called about but not handed off. The study was designed to simulate a real handoff situation at the study site.

While the timing and contents of the nurse calls were standardised, the order of the cases for the calls and handoffs was randomised. Consequently, a call could occur before or after the participant started reviewing the EHR for a case. It is worth noting that in some cases the physician may not have felt the need to review a patient’s chart without a prompt (a nurse call), while in other cases a physician may have reviewed a chart even without a call. For example, the handoff may have mentioned a lab result or a radiology result to monitor, the physician may have wanted to read up on a complex case, or the physician may have realised that he or she had forgotten to ask about some details during the handoff or that the handoff as received was unsatisfactory and/or incomplete. Moreover, some of the patient cases in the study were designed by the research team to prompt a chart review, either through high case complexity or for the purpose of action-tracking.

At the end of the simulation handoff study, the research team reviewed the sign-out information and identified the errors made by the participants. Our previous study examined the relationship between the factors in the handoff phase and the number of errors at the sign-out phase (Blondon et al., 2017). The current study further investigates the EHR navigation patterns in the chart biopsy phase (hypotheses 1 & 2) and their relation to the number of errors in the sign-out phase (hypothesis 3).

The definitions of the errors identified in the sign-out phase are listed in Table 1. These error definitions were adapted from our previous study (Blondon et al., 2017). Omission errors were defined as the lack of a given element. Imprecision errors were elements that were modified or added by the participant and were not in the initial cases. Certainly, an omission can lead to a certain level of imprecision; however, the reasoning mechanism behind a missing element differs considerably from that behind an element that is present but wrong. Imprecision errors were therefore treated as actual errors but were differentiated from wrong facts because imprecision errors did not have a notable clinical impact on the quality of care.

Table 1.

Error types and definition in the sign-out phase.

Error Type | Definition*
Omission | Missing information, such as fever that was not reported, with a clinical impact
Imprecision | Facts that were incomplete or approximate, but with little or no impact on the comprehension of the management of the case. For example, an age reported as 80-something rather than 83 years would be an imprecision.
Wrong Facts** | Facts that were mentioned in the sign-out but were not provided in any of the data sources (handoff, nurses, paper summaries or EHR)

*The definition of errors was derived from the literature and used in our previous study.

**An example of wrong facts: a participant mixed the data from two different patients during the sign-out, not realising immediately that the EHR was open to another patient’s chart.

2.2. EHR system mock-up

The EHR system mock-up was devised using screenshots of the actual home-grown EHR system running at the study site. The mocked up system has a web interface that renders the screenshots as webpages with pre-filled patient case information for the study participants to review. The mock-up was organised into sections (or tabs) of patient information. Table 2 lists these nine sections and the purpose of each one. It is worth noting that the mocked up EHR system did not provide the search functionality that is available in the actual EHR system. The use of the search function in the actual EHR system was discussed with the participants following the simulation component of the study. Although a few participants were avid users of the search function, the majority were accustomed to browsing patient information in the various sections of the EHR system, particularly by reviewing the documents on the summary page in reverse chronological order and/or sorting by document type. In this regard, the participants were largely accustomed to scanning through document titles to identify the right ones, which supports the realism of our study design. The search functionality was therefore excluded from the EHR system mock-up due to a possible delay when interacting with the mock-up system.

Table 2.

Nine sections in the mock-up EHR system.

Section Name* | Purpose of Use
Dashboard | Presenting the patient’s medical documents, including admission and discharge notes, consultations, lab and imaging reports.
Lab | Providing laboratory test results divided into sub-sections by type of sample and analysis in reverse chronological order.
Progress notes | Containing unstructured data blending one-time notes (e.g., describing a discussion with a patient) and syntheses in the form of a problem list, all in reverse chronological order.
Summary | Providing a reverse chronological list of all the documents (admission and discharge notes, all lab and imaging reports).
Code status | Containing administrative information such as the name of the primary care provider.
Form | Showing admission and discharge notes.
Graphic | Presenting vital signs over time as well as medication administration records.
Prescription | Providing a list of all current and recently stopped medications with a sub-section for the prescription history.
Imaging | Opening access to the radiology reports and images.

*No search functionality was provided in the EHR system mock-up.

Our analysis focused on dashboard use in EHR systems. Figure 2 shows a screenshot of the dashboard in the EHR mock-up, which was designed to provide a snapshot of the patient’s information within only the last 48 hours, pooling data from various sections of the EHR. The dashboard was therefore intended to serve two roles: (1) as an “information hub” leading to the areas of interest and (2) as an “information resource” presenting a comprehensive picture of the patient’s medical history, most of which could be viewed in more detail in the specific sections. However, the dashboard in both the mock-up and the actual EHR systems did not include the latest progress notes or medications, so the participants had to navigate to those specific sections using the top or side menu as needed.

Figure 2. Dashboard page in the mocked up EHR.

*The dashboard was designed to provide a snapshot of the patient’s information within the last 48 hours, serving as a preferred information hub and a rich information resource.

2.3. Navigation patterns

When the participants reviewed the patient cases in the EHR system mock-up, their clicks through the webpages were automatically recorded in the form of navigation event logs. Each row in the event logs contained the participant’s study identifier, the URL of the screenshot being clicked, and the timestamp. It is worth noting that the start and end times of the EHR use were recorded by an observer in the room. Only the logs in the observed timeframes were included in the analysis.

To analyse the navigation patterns, the event logs were manipulated in three ways using a self-developed Python script. The raw, staging, and manipulated data were stored in SQLite, a file-based relational database.

First, the navigation URLs were coded into ten broad categories (hereafter referred to as “themes”) based on the sections of the mocked up EHR system (Table 2). The first nine categories, which were coded from “A” to “I”, were one-to-one mappings of the nine sections of the mocked up EHR system. For example, the URLs of webpages in the dashboard section were all coded as “A”, and the URLs in the Lab section were coded as “B”. Unrelated and uncategorisable navigation patterns were coded as “U”, which was the last of the ten themes. The coding process was completed by the three co-authors. The navigation data were first coded by a researcher using a rule-based mechanism in which the keywords of the URLs were mapped to the sections. Then, the codes were reviewed and verified by the second researcher. Any coding differences were reconciled by the third (senior) researcher based on this person’s domain expertise.
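For illustration, the rule-based coding step could look like the sketch below. The keyword-to-theme map and URL format are hypothetical examples, since the paper does not list the actual URL patterns of the mock-up.

```python
# Illustrative rule-based coding of navigation URLs into themes.
# The keyword map and URL scheme are hypothetical; the actual mock-up
# used its own URL conventions.

URL_KEYWORD_TO_THEME = {
    "dashboard": "A",
    "lab": "B",
    "notes": "C",
    "summary": "D",
    "codestatus": "E",
    "form": "F",
    "graphic": "G",
    "prescription": "H",
    "imaging": "I",
}

def code_url(url: str) -> str:
    """Map a navigation URL to one of the ten themes; 'U' if uncategorisable."""
    lowered = url.lower()
    for keyword, theme in URL_KEYWORD_TO_THEME.items():
        if keyword in lowered:
            return theme
    return "U"

# Example: code_url("/mockup/patient8/dashboard_main.png") returns "A".
```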

Second, the end time of each navigation event was extrapolated; this was required to calculate the duration of each navigation event, since each record contained only one timestamp (the time when a webpage was clicked). The end time of a record was inferred from the timestamp of the next record. Hence, the duration of the current record was equal to the difference between the timestamp of the current record and that of the record immediately following it. However, a duration may be unreasonably long because the mocked up EHR system could not record outside activities (e.g., the participant talked over the phone with no EHR interaction). In such cases, a threshold can be set as the maximum duration of a navigation record.

Third, the navigation records were grouped into observation sessions based on the time periods provided by human observation. The time periods were recorded by a researcher observing the EHR interaction activities in each participant case during the study. This grouping allowed for a more accurate calculation of the event durations, since longer activities performed outside the EHR could be identified and excluded from the observation sessions. It also provided more context to help interpret each participant’s navigation patterns. For example, if a participant used the EHR for 3 minutes, reviewed a patient case on paper for 5 minutes, and then returned to the EHR for another 2 minutes, this participant would have two observation sessions, one for the first activity and one for the third. If all the activities were treated as one session, the EHR use would have an unrealistically long duration (10 minutes rather than 5).
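A minimal sketch of the duration inference and session grouping follows, assuming a list of click events with timestamps and observer-recorded session windows; the field names are hypothetical.

```python
# Illustrative derivation of event durations and observation sessions.
# Field names and the observer-recorded windows are hypothetical.
from datetime import datetime

def add_durations(events, max_duration=None):
    """Infer each event's duration from the next event's timestamp.

    `events` is a list of dicts with a 'time' key (datetime), sorted ascending.
    The last event has no successor, so its duration is left as None.
    """
    for current, nxt in zip(events, events[1:]):
        duration = (nxt["time"] - current["time"]).total_seconds()
        if max_duration is not None:
            duration = min(duration, max_duration)  # cap implausibly long gaps
        current["duration"] = duration
    if events:
        events[-1]["duration"] = None
    return events

def group_into_sessions(events, observed_windows):
    """Keep only events inside the observer-recorded windows,
    labelling each event with its observation session index."""
    sessions = []
    for session_id, (start, end) in enumerate(observed_windows, start=1):
        in_window = [e for e in events if start <= e["time"] <= end]
        for e in in_window:
            e["session"] = session_id
        sessions.append(in_window)
    return sessions

# Example: two clicks, one observed window covering both.
events = add_durations([
    {"time": datetime(2019, 1, 1, 0, 15, 9)},
    {"time": datetime(2019, 1, 1, 0, 15, 36)},
])
windows = [(datetime(2019, 1, 1, 0, 15, 0), datetime(2019, 1, 1, 0, 16, 0))]
sessions = group_into_sessions(events, windows)
```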

2.4. Data analysis, navigation event durations

The data analysis began with an examination of the distribution of the navigation event durations. Because of the study design, the participants interacted with a familiar mock-up EHR system in a controlled environment, so they were likely able to click through the webpages quickly to collect information; thus, the majority of the event durations should be very short. The event durations were examined by plotting the distribution of click frequencies as a line chart, with the x-axis showing the duration in seconds and the y-axis showing the frequency.
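Such a chart could be produced as follows (a sketch; the binning of durations to whole seconds and the plotting library are assumptions, as the paper does not name its plotting tools).

```python
# Sketch: plot click frequency by duration (binned to whole seconds),
# assuming `durations` is a list of event durations in seconds.
from collections import Counter
import matplotlib.pyplot as plt

durations = [0.5, 2.3, 2.7, 5.1, 12.9, 26.0]  # placeholder data
counts = Counter(int(d) for d in durations)    # bin to whole seconds
xs = sorted(counts)
plt.plot(xs, [counts[x] for x in xs])
plt.xlabel("Duration (seconds)")
plt.ylabel("Frequency")
plt.show()
```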

2.5. Data analysis, dashboard as the first step (hypothesis 1)

The second part of the data analysis examined our first hypothesis regarding the use of the dashboard as a preferred information hub, i.e., as the first step of the observation sessions. Two binary variables were created. One variable, the observation type, indicates whether or not an observation session had a nurse call before it. An observation session was categorised as “call before” when the timestamp showed that a nurse call occurred prior to the start time of that session; other sessions were classified as “no call”. The other variable indicated whether or not an observation session began with dashboard use. Using these two binary variables, a two-by-two contingency table was constructed, and the independence between dashboard use as the first step and the occurrence of a nurse call was examined using the Chi-square test. We expected that these two events would not be independent and that the dashboard would be used more often when there was a prior call. The data were further organised by observation type to list the top themes used as the first step, allowing more detailed patterns to be explored.
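Using the counts later reported in Table 5, this independence test can be reproduced with SciPy (a sketch; the paper does not state which statistical package was used for this test).

```python
# Chi-square test of independence between "dashboard as first step"
# and "call before", using the counts reported in Table 5.
from scipy.stats import chi2_contingency

#                  call before, no call
table = [[78, 38],   # first step: dashboard
         [60, 92]]   # first step: other themes
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.01, matching the paper
```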

2.6. Data analysis, longer and more frequent dashboard use (hypothesis 2)

The third part of the data analysis addressed our second hypothesis regarding dashboard use as a rich information resource. The navigation patterns of dashboard use were quantified by three measures: the time spent on the dashboard (duration and time allocation) and the frequency of dashboard use. These measures have been used in previous studies to analyse sequential patterns in the clinical workflow (Ozkaynak, Wu, Hannah, Dayan, & Mistry, 2018; Wu et al., 2017). Specifically, the duration of dashboard use was calculated in seconds based on the time difference between navigation events, as described above. The time allocation was the duration of dashboard use normalised by the total duration of the observation session. “Frequency” refers to the raw count of dashboard records. For example, if a participant in a 100-second observation session used the dashboard 5 times, each time lasting 3 seconds, the duration, time allocation, and frequency would be 15, 0.15, and 5, respectively. To examine any significant difference in the patterns with and without the calls, the mean (or median, depending on the normality of the data) of each of these three measures was compared across the observation types. The normality of each distribution was tested using the Kolmogorov-Smirnov test. If a distribution was normal, the mean difference was examined using a one-way ANOVA with a Bonferroni correction; if it was non-normal, the median difference was examined using the non-parametric equivalent, the Kruskal-Wallis test.
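A sketch of this testing logic with SciPy is shown below (an assumed implementation; `no_call` and `call_before` hold placeholder per-session values of one of the three measures).

```python
# Sketch: test normality, then compare groups accordingly (SciPy assumed;
# the arrays are placeholder per-session values of one dashboard measure).
import numpy as np
from scipy import stats

no_call = np.array([0.2, 1.5, 3.1, 0.0, 4.4])       # placeholder data
call_before = np.array([2.9, 3.3, 8.0, 1.1, 12.5])  # placeholder data

# Kolmogorov-Smirnov test against a normal distribution fitted to the data
_, p_norm = stats.kstest(no_call, "norm", args=(no_call.mean(), no_call.std()))

if p_norm > 0.05:  # consistent with normality: compare means
    _, p = stats.f_oneway(no_call, call_before)
else:              # non-normal: compare medians with Kruskal-Wallis
    _, p = stats.kruskal(no_call, call_before)
print(f"p = {p:.4f}")
```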

2.7. Data analysis, dashboard use can help reduce medical errors (hypothesis 3)

The fourth part of the data analysis examined our third hypothesis regarding the relationship between dashboard use and the number of errors in the sign-out phase. In this analysis, only the sessions with a “call before” observation type were selected, since the errors were identified only in these sessions. Dashboard use was quantified using the following variables: (1) dashboard use as the first step (FIRST_A), (2) duration (DUR_A), (3) time allocation (DUR_PCT_A), and (4) frequency (FREQ_A). These were the independent variables in the modelling. As described in Table 1, errors in the sign-out phase were quantified into three types: (1) the number of omissions (NUM_OMI), (2) the number of imprecisions (NUM_IMP), and (3) the number of wrong facts (NUM_WRG). These three types of errors were combined to generate four additional dependent variables, for a total of seven: (1) the number of omissions and imprecisions together (NUM_OMI_IMP), (2) the number of imprecisions and wrong facts together (NUM_IMP_WRG), (3) the number of omissions and wrong facts together (NUM_OMI_WRG), and (4) the total number of errors (NUM_TOT_ERR), i.e., the sum of the omissions, imprecisions, and wrong facts. The independent and dependent variables were modelled using a General Linear Model (GLM) with a Poisson distribution, as implemented in the Python library StatsModels.
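Since the text names StatsModels, one of the seven Poisson GLMs might be fitted as in the sketch below; the DataFrame contents are placeholders, and the formula interface is only one of several ways to specify such a model.

```python
# Sketch of a Poisson GLM with statsmodels' formula API.
# Column names follow the variables in the text; the data are placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "NUM_TOT_ERR": [0, 2, 1, 0, 3],          # placeholder error counts
    "FIRST_A":     [1, 0, 1, 1, 0],          # dashboard as first step
    "DUR_A":       [12.0, 0.5, 8.2, 3.1, 0.0],
    "DUR_PCT_A":   [0.15, 0.01, 0.10, 0.05, 0.0],
    "FREQ_A":      [2, 1, 2, 1, 0],
})

model = smf.glm(
    "NUM_TOT_ERR ~ FIRST_A + DUR_A + DUR_PCT_A + FREQ_A",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())  # coefficients correspond to Table 8's columns
```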

Finally, the selected sessions for the third hypothesis were visualised and explored using the DISCO (Günther & Rozinat, 2012; Günther & van der Aalst, 2007) demo version, a process mining tool (Mans, van der Aalst, Vanwersch, & Moleman, 2013) based on the Fuzzy miner (Günther & van der Aalst, 2007) with advanced features such as seamless process simplification and the highlighting of frequent activities and paths. DISCO process maps have been used to visualise the sequence and timing of the activities in the event logs (Fluxicon, 2018). The navigation patterns were separated into two groups based on the number of errors (i.e., no error vs. at least one error regardless of the type) to generate two data visualisations. In these visualisations, the starting point of the navigation patterns was illustrated by a triangle symbol at the top of a process map with the ending point indicated by a square symbol at the bottom of the map. Navigation steps, or activities, were represented by boxes, and the flow between two activities was indicated by an arrow. Dashed arrows were used for activities that occurred at the very beginning or at the very end of the process. The raw frequencies from the input dataset were displayed in the activity boxes and along the flow arrows. The darkness of the activity boxes and the thickness of the flow arrows were proportional to their frequency number. These interactive visualisations can demonstrate relationships between navigation patterns and errors, which may not have been captured by the statistical model.
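DISCO is a commercial GUI tool; as a rough open-source approximation of its process maps, a directly-follows graph over the coded themes can be mined with the pm4py library. The sketch below assumes a pandas DataFrame with session, theme, and timestamp columns (the column names are hypothetical); it is not the tool the study used.

```python
# Sketch: mine and view a directly-follows graph of navigation themes with
# pm4py, an open-source approximation of the DISCO process maps used above.
# The DataFrame columns (session_id, theme, time) are hypothetical.
import pandas as pd
import pm4py

df = pd.DataFrame({
    "session_id": ["s1", "s1", "s1", "s2", "s2"],
    "theme": ["A", "G", "B", "A", "H"],  # coded navigation themes
    "time": pd.to_datetime([
        "2019-01-01 00:15:09", "2019-01-01 00:15:36", "2019-01-01 00:15:46",
        "2019-01-01 00:16:40", "2019-01-01 00:16:44",
    ]),
})

log = pm4py.format_dataframe(
    df, case_id="session_id", activity_key="theme", timestamp_key="time"
)
dfg, start_acts, end_acts = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_acts, end_acts)  # renders the process map
```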

3. Results

3.1. Navigation event durations

A total of 9,523 navigation events were collected from the 30 participants during the “handoff”, “chart biopsy”, and “sign-out” phases. Since the present study focuses on the navigation patterns in chart biopsies, a subset of 3,989 navigation events was selected. The frequency of the navigation events by each theme is shown in Table 3. The participants frequently looked at the Lab, Summary and Progress notes sections in the EHR system mock-up. The navigation events were further grouped into 268 observation sessions based on the timestamps recorded by the human observer. In this dataset, each participant case had one to six observations, with the average number of observations being 1.62.

Table 3.

The distribution of EHR navigation themes.

Theme ID | Description | Count | Count Rank | Percentage (%)
A | Dashboard | 362 | 5 | 9.07
B | Lab | 1,277 | 1 | 32.01
C | Progress notes | 532 | 3 | 13.34
D | Summary | 603 | 2 | 15.12
E | Code status | 34 | 9 | 0.85
F | Form | 42 | 8 | 1.05
G | Graphic | 286 | 7 | 7.17
H | Prescription | 492 | 4 | 12.33
I | Imaging | 339 | 6 | 8.50
U | Unrelated clicks | 22 | 10 | 0.55
Total* | | 3,989 | - | 100.00

*The 3,989 records were further grouped into 268 observation sessions. Each participant case had 1 to 6 observations, with the average number being 1.62.

Figure 3 illustrates the distribution of the navigation event durations. Eighty percent of the events lasted less than 13 seconds, with the maximum duration being 188 seconds (slightly more than 3 minutes; N = 1). Since all the durations were fairly short, no threshold was applied, and all the navigation events were included in this analysis (N = 3,989). This highly skewed distribution confirms our assumption that the participants clicked through the mocked up EHR system quickly because they were very familiar with the system layout and functionality. Figure 4 shows the duration distribution of the 268 observation sessions. While this curve appears less skewed, the majority (80.6%) of the sessions were still short (less than four minutes). Table 4 shows sample navigation events from one participant case (Participant ID: 24, Case ID: 8), grouped into two observation sessions with inferred end times and durations.

Figure 3. The duration distribution of the navigation events (only events lasting less than 60 seconds are shown).

*80% of the events had a duration of less than 13 seconds, with a maximum duration of 188 seconds.

Figure 4. The duration distribution of observation sessions (N = 268).

*The majority (80.6%) were short (<4 minutes).

Table 4.

Sample navigation events of a participant case (Participant ID: 24, Case ID: 8).

Observation Session ID* | Start Time | End Time | Duration (seconds) | Total Duration of Observation (seconds) | Normalised Duration | Call Before** (Yes = 1)
1 | 0:15:09 | 0:15:36 | 26.28 | 52.09 | 0.5045 | 0
1 | 0:15:36 | 0:15:46 | 10.06 | 52.09 | 0.1931 | 0
1 | 0:15:46 | 0:15:46 | 0.01 | 52.09 | 0.0002 | 0
1 | 0:15:46 | 0:15:48 | 2.62 | 52.09 | 0.0503 | 0
1 | 0:15:48 | 0:15:50 | 1.69 | 52.09 | 0.0324 | 0
1 | 0:15:50 | 0:16:00 | 9.56 | 52.09 | 0.1835 | 0
1 | 0:16:00 | 0:16:00 | 0.02 | 52.09 | 0.0004 | 0
1 | 0:16:00 | 0:16:02 | 1.85 | 52.09 | 0.0355 | 0
2 | 0:16:40 | 0:16:40 | 0.01 | 113.04 | 0.0001 | 1
2 | 0:16:40 | 0:16:44 | 3.59 | 113.04 | 0.0318 | 1
2 | 0:16:44 | 0:17:39 | 54.71 | 113.04 | 0.4840 | 1
2 | 0:17:39 | 0:17:39 | 0.02 | 113.04 | 0.0002 | 1
2 | 0:17:39 | 0:17:41 | 2.07 | 113.04 | 0.0183 | 1
2 | 0:17:41 | 0:18:31 | 50.62 | 113.04 | 0.4478 | 1
2 | 0:18:31 | 0:18:32 | 0.02 | 113.04 | 0.0002 | 1
2 | 0:18:32 | 0:18:34 | 2.00 | 113.04 | 0.0177 | 1

*These sample data were grouped into two observation sessions with an inferred end time and duration.

**Call occurred at 00:16:15 (16 minutes and 15 seconds after the session started).

3.2. Hypothesis 1: dashboard as the first step

As shown in Table 5, among the 268 observation sessions, slightly more than half (N = 138, 51.5%) had a prior nurse call, and the remainder had no prior call (N = 130, 48.5%). More than 40% of the sessions had dashboard use as the first step in their navigation patterns (N = 116, 43.3%). The Chi-square test shows that dashboard use as the first step and the presence of a nurse call were not independent (p-value < 0.01). These results support our hypothesis that the dashboard was likely used as a first step (a preferred information hub) when there was a nurse call.

Table 5.

Observed dashboard use with and without a call.

Observed First Steps* | Call before | No Call | Total
Dashboard | 78 (29.1%) | 38 (14.2%) | 116 (43.3%)
Other Themes | 60 (22.4%) | 92 (34.3%) | 152 (56.7%)
Total | 138 (51.5%) | 130 (48.5%) | 268 (100%)

*The numbers refer to the number of observations in each group, e.g., the participants in the 78 observations had a prior call and used the dashboard as the first step. The Chi-square test shows that using the dashboard as the first step and the presence of nurse calls were not independent (p-value < 0.01).

Table 6 shows the breakdown of top themes as the first step with and without a nurse call. When there was a call triggering specific information needs, close to 30% of the sessions started with the dashboard page, followed by the summary pages (6.72%). Although the Summary section was the second-most frequent choice, it was used much less frequently than the dashboard as the first step. However, the use of the dashboard and the summary pages decreased dramatically when navigation began without a prior call: using the dashboard as the first step was halved (29.10% to 14.18%), as was using the summary (6.72% to 3.36%).

Table 6.

Themes as the first step.

Observation Type | Theme | Number of Observations | Percentage of Observations
Call before* | A. Dashboard | 78 | 29.10%
| D. Summary | 18 | 6.72%
| C. Notes | 12 | 4.48%
| B. Lab | 11 | 4.10%
| G. Graphic | 7 | 2.61%
| I. Image | 6 | 2.24%
| H. Prescription | 3 | 1.12%
| E. Entering | 2 | 0.75%
| S. Special | 1 | 0.37%
| Subtotal | 138 | 51.49%
No call before | A. Dashboard | 38 | 14.18%
| B. Lab | 37 | 13.81%
| G. Graphic | 22 | 8.21%
| D. Summary | 9 | 3.36%
| I. Image | 9 | 3.36%
| H. Prescription | 6 | 2.24%
| C. Notes | 5 | 1.87%
| E. Entering | 3 | 1.12%
| S. Special | 1 | 0.37%
| Subtotal | 130 | 48.51%

*Dashboard (A) use differed markedly in the presence of a nurse call: 29.10% of the sessions used the dashboard as the first step when a nurse call triggered specific information needs, compared with 14.18% without a call.

3.3. Hypothesis 2: longer and more frequent dashboard use

Table 7 shows all three measures of dashboard use with and without a call. Since the distributions were all non-normal, the medians, rather than the means, were compared using the Kruskal-Wallis test. As shown, dashboard use following a prior call was relatively longer and more frequent than dashboard use without a call (p-value < 0.01). Specifically, the median duration was about 3 seconds longer, the time allocation was 3% higher, and the median frequency was doubled. These results support our second hypothesis of longer and more frequent dashboard use when there is a call. Together with the findings for hypothesis 1, this suggests that when there was a nurse call, the participants not only tended to use the dashboard as a preferred information hub but also used it as a rich information resource, since the time spent on the dashboard increased. While the 3-second difference may seem small, it can matter when seeking specific information within a particular part of an EHR system, and the difference in the means is larger (from 4.18 to 12.02 seconds). Due to the skewness of the data, only the medians were tested for statistical significance. It is worth noting that short dashboard durations are expected in this type of controlled simulation study and may not equal the total time needed to review the full dashboard.

Table 7.

Significant differences in dashboard use.

Measure | Group | Duration (seconds) | Time allocation (%) | Frequency
Mean | No call before (N = 130) | 4.18 | 0.10 | 1.15
Mean | Call before (N = 138) | 12.02 | 0.15 | 1.54
Mean | Difference | +7.83 | +0.05 | +0.38
Median* | No call before (N = 130) | 0.28 | 0.00 | 1.00
Median* | Call before (N = 138) | 3.20 | 0.03 | 2.00
Median* | Difference | +2.93 | +0.03 | +1.00
p-value | | <0.01 | <0.01 | <0.01

*Due to the non-normal distributions, the medians, rather than the means, were compared using the Kruskal-Wallis test.

3.4. Hypothesis 3: dashboard use can help reduce medical errors

Table 8 reports the relationship between dashboard use and the number of errors. The results show that using the dashboard as the first step (FIRST_A) had a negative correlation with the total number of errors (NUM_TOT_ERR) in handoffs (coefficient = −0.5310, p-value = 0.049). Moreover, the error reduction appears to be more effective for imprecisions and wrong facts than for omissions (coefficient = −0.6003, p-value = 0.043). These results partially support our third hypothesis that using dashboards helps to reduce medical errors in handoffs, with a significant correlation for the reduction of imprecision errors and wrong facts but not for omission errors.

Table 8.

General linear models of dashboard use and errors.

Model* | Dependent Variable**** | Intercept | FIRST_A | DUR_A | DUR_PCT_A | FREQ_A
1 | NUM_OMI | −1.9823 | −0.2069 | 0.0093 | 0.1816 | −0.0111
2 | NUM_IMP | −0.3785 | −0.5353 | −0.0037 | 0.0590 | 0.0855
3 | NUM_WRG | −2.7395 | −0.9668 | 0.0127 | 0.5635 | 0.2585
4 | NUM_OMI_IMP | −0.1961 | −0.4697 | −0.0005 | 0.1029 | 0.0629
5 | NUM_OMI_WRG | −1.5930 | −0.5184 | 0.0106 | 0.3269 | 0.1037
6 | NUM_IMP_WRG | −0.3055 | −0.6003** | −0.0009 | 0.1354 | 0.1143
7 | NUM_TOT_ERR | −0.1316 | −0.5310*** | 0.0013 | 0.1568 | 0.0897

*The variables were modelled using the General Linear Model in the Poisson distribution.

**p-value = 0.043 ***p-value = 0.049

****NUM_OMI) number of omissions, NUM_IMP) number of imprecisions, NUM_WRG) number of wrong facts, NUM_OMI_IMP) number of omissions and imprecisions together, NUM_IMP_WRG) number of imprecisions and wrong facts together, NUM_OMI_WRG) number of omissions and wrong facts together, and NUM_TOT_ERR) number of all errors together.

*****The last five columns give the coefficients of the independent variables: FIRST_A) dashboard as the first step, DUR_A) duration of dashboard use, DUR_PCT_A) time allocation of dashboard use, and FREQ_A) frequency of dashboard use.

Figure 5 shows the process map for error and no-error cases (71% vs. 29%). In the error group (Figure 5(a), left), the most prominent patterns were “Dashboard (A) -> Graphic (G) -> Notes (C) -> Labs (B)”, “Dashboard (A) -> Prescription (H)”, and “Summary (D) -> Lab (B)”. However, in the no-error group (Figure 5(b), right), the prominent patterns were “Dashboard (A) -> Prescription (H) -> Summary (D)”, “Dashboard (A) -> Graphic (G)”, and “Lab (B) -> Summary (D)”. It seems that the graphic view (G) and prescriptions (H) were common next steps of the dashboard (A). Additionally, labs (B) and the summary page (D) were often viewed together. The error group showed more repetition in checking labs (B). In addition to these noticeable differences, both graphs show a frequent use of the dashboard (e.g., both started from A).

Figure 5. Navigation patterns in all called cases.

*Both graphs show a frequent use of the dashboard (A).
**Legend:
- Box: a theme in a navigation sequence (e.g., “A” for Dashboard)
- Arrow: a transition from one theme to another
- Number in a box: how many times this theme appeared in this group
- Darkness of a box: proportional to the number in the box
- Number along an arrow: how many times a transition occurred between two themes
- Thickness of an arrow: proportional to the number along the arrow

4. Discussion

In this study, we explored physician EHR navigation patterns in a simulation handoff study, focusing on dashboard use during chart biopsies. We found that when a nurse call occurred, the dashboard in our mocked up EHR system was frequently used both as a preferred information hub for navigating to other pages and as a rich information resource for reviewing patient cases. We suspect that this was because the participants had a specific and urgent need during a call to conduct a chart biopsy and a well-designed dashboard can meet this need. These patterns did not hold in the absence of a call, in which case participants used the dashboard only occasionally, as a transition to other tabs. These results suggest that a nurse call likely structured how the participants navigated the EHR. When called, the participants seemed to seek information in a general-to-specific manner, checking the dashboard and the patient summary first to obtain an overview of the patient case. When there was no call before an observation session, the participants were less likely to use the dashboard and the summary page, and the navigation patterns varied. Furthermore, we found empirical evidence that using the EHR dashboard in chart biopsies may help reduce medical errors in handoffs. Specifically, using the dashboard as the first step appears more helpful in reducing imprecision errors and wrong facts than omission errors.

Based on our findings, we recommend that EHR dashboard designers consider the dashboard’s dual role as a preferred information hub and a rich information resource and optimise its capacity to reduce medical errors in handoffs. Given that EHR systems continue to collect an enormous amount of patient data, a well-designed dashboard can be a vital component of an EHR system that maximises the utility of those data. Optimising dashboard design to better support clinicians’ information needs in complex medical contexts is an increasingly imperative research topic. We plan to continue this work by (1) developing dashboards for various contexts to help users overcome information overload and (2) evaluating the effectiveness of these dashboards in improving the quality of care.

Our findings also suggest the need for a “context-aware” EHR dashboard that can accommodate dynamic navigation patterns between cued and un-cued information searches. Context awareness is a key concept in ubiquitous computing; it refers to the ability of a computer system to sense and continuously adapt to the environment and the context of interest to support the communication and information needs of users in both the physical and the virtual worlds (Lopes et al., 2012). A few studies have adopted context awareness in medical fields. For example, Dergachyova, Bouget, Huaulmé, Morandi, and Jannin (2016) developed an artificial intelligence (AI)-based, ontology-supported system to automatically recognise and segment the surgical workflow. Bardram (2004) experimented with a context-aware pill container and found that context awareness in clinical settings is particularly useful for user-interface navigation over large clinical datasets and can suggest courses of action. Elias and Bezerianos (2012) used context awareness to support annotations on visual dashboards in business intelligence. However, no studies have applied context awareness as the key principle in designing and evaluating an EHR dashboard to support clinical work. Such context-aware EHR dashboards can synergise the power of large clinical datasets, machine learning and AI, and human-computer interaction to better serve clinicians’ information needs, facilitate decision-making, and reduce medical errors. As one use case, the usage patterns identified in this study could be further defined and used to train AI applications that adjust the dashboard presentation dynamically. For example, the results for hypothesis 1 indicate that the dashboard was likely used as a first step when there was a nurse call; building on this, voice recognition technology could detect keywords in the nurse call, and the related patient information could then be highlighted in the dashboard for doctors. Further research is needed to develop frameworks and best practices in this area.

5. Strengths and limitations

This study is one of the first to investigate clinician navigation patterns in a simulated handoff setting. The dashboard in this study was designed as an aggregation of recent patient data from various parts of the EHR but can also be further designed as a tool to promote and support decision-making. The study design simulating the start of an evening shift was realistic, and the participants all agreed that the mock-up EHR used in the study was close to reality. In addition, the use of standardised cases in the simulation setting allowed for several direct comparisons among the participants. We encourage future researchers to use this simulation method to examine the patterns in clinical workflows and their relationship to care quality and patient safety. This method can also be used for medical education to improve clinicians’ understanding of handoff processes and outcomes.

Our study has several limitations. First, the study was conducted at a single institution in Europe, which limits its generalisability. Second, the navigation records were not large-scale, so they did not allow for a detailed examination of navigation patterns. Although we identified significant changes in the participants’ use of the dashboard in the presence or absence of calls, further examination of multiple groups (e.g., patient cases) and combinations of multiple factors (e.g., gender, physician expertise) was not feasible because of the small sample size. Third, this study did not include non-physician healthcare workers such as nurses, who also interact with the EHR on a regular basis. Considering the differences in medical training and clinical responsibility, physicians may interact with an EHR differently from non-physician staff, which further limits the generalisability of the present study. Next, the durations of the navigation events were short, limiting their clinical and practical significance. The short durations were expected in our simulation study because the participants were familiar with the system and were aware of being timed; a field study is required to demonstrate the clinical significance of this work. Finally, we did not perform independent coding of the navigation URLs or calculate inter-coder agreement. However, we improved the coding quality by using rule-based coding, followed by having two researchers individually review and verify the codes.

6. Conclusion

In this study, we conducted a simulation handoff to investigate EHR navigation patterns and dashboard use. The results show that the dashboard was used as a preferred information hub and a rich information resource to support chart biopsies and that dashboard use can reduce medical errors in handoffs. We emphasise the need for designing a context-aware EHR dashboard. Our future work involves using eye-tracking data that have been collected in the current study to provide another viewpoint of EHR navigation patterns.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

1. Adler-Milstein J., & Huckman R. S. (2013). The impact of electronic health record use on physician productivity. The American Journal of Managed Care, 19(10), SP345–52. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/24511889
2. Ariza F., Kalra D., & Potts H. W. (2015). How do clinical information systems affect the cognitive demands of general practitioners? Usability study with a focus on cognitive workload. Journal of Innovation in Health Informatics, 22(4), 379–390.
3. Arora V., & Johnson J. (2006). A model for building a standardized hand-off protocol. The Joint Commission Journal on Quality and Patient Safety, 32(11), 646–655.
4. Bakos K. K., Zimmermann D., & Moriconi D. (2012). Implementing the clinical dashboard at VCUHS. In Proceedings of the 11th International Congress on Nursing Informatics (p. 11). Montreal, Canada: American Medical Informatics Association. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/24199038
5. Bardram J. E. (2004). Applications of context-aware computing in hospital work. In Proceedings of the 2004 ACM Symposium on Applied Computing (pp. 1574–1579). New York, NY, USA: ACM Press. doi: 10.1145/967900.968215
6. Benham-Hutchins M. M., & Effken J. A. (2010). Multi-professional patterns and methods of communication during patient handoffs. International Journal of Medical Informatics, 79(4), 252–267.
7. Blondon K., Del Zotto M., Rochat J., Nendaz M. R., & Lovis C. (2017). A simulation study on handoffs and cross-coverage: Results of an error analysis. AMIA Annual Symposium Proceedings, 2017, 448–457. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/29854109
8. Blondon K., Wipfli R., Nendaz M., & Lovis C. (2015). Physician handoffs: Opportunities and limitations for supportive technologies. AMIA Annual Symposium Proceedings, 2015, 339–348. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/26958165
9. Bomba D. T., & Prakash R. (2005). A description of handover processes in an Australian public hospital. Australian Health Review, 29(1), 68.
10. Clarke M. A., Steege L. M., Moore J. L., Belden J. L., Koopman R. J., & Kim M. S. (2013). Addressing human computer interaction issues of electronic health record in clinical encounters. In Proceedings of the Second International Conference of Design, User Experience, and Usability (pp. 381–390). Springer.
11. Craig D., Farrell G., Fred A., Filipe J., & Gamboa H. (2010). Designing a physician-friendly interface for an electronic medical record system. In HEALTHINF (pp. 324–329).
12. Dergachyova O., Bouget D., Huaulmé A., Morandi X., & Jannin P. (2016). Automatic data-driven real-time segmentation and recognition of surgical workflow. International Journal of Computer Assisted Radiology and Surgery, 11(6), 1081–1089.
13. Elias M., & Bezerianos A. (2012). Annotating BI visualization dashboards. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (pp. 1641–1650). New York, USA: ACM Press. doi: 10.1145/2207676.2208288
14. Etamesor S., Ottih C., Salihu I. N., & Okpani A. I. (2018). Data for decision making: Using a dashboard to strengthen routine immunisation in Nigeria. BMJ Global Health, 3(5), e000807.
15. Fluxicon (2018). Analyzing process maps. Retrieved April 9, 2019, from https://coda.fluxicon.com/book/mapview.html
16. Franklin A., Gantela S., Shifarraw S., Johnson T. R., Robinson D. J., King B. R., … Okafor N. G. (2017). Dashboard visualizations: Supporting real-time throughput decision-making. Journal of Biomedical Informatics, 71, 211–221.
17. Günther C. W., & Rozinat A. (2012). Disco: Discover your processes. In Proceedings of the 10th International Conference on Business Process Management (pp. 40–44). Tallinn, Estonia.
18. Günther C. W., & van der Aalst W. M. P. (2007). Fuzzy mining – Adaptive process simplification based on multi-perspective metrics. In Proceedings of the 5th International Conference on Business Process Management (pp. 328–343). Berlin, Heidelberg: Springer. doi: 10.1007/978-3-540-75183-0_24
19. Hilligoss B., & Zheng K. (2013). Chart biopsy: An emerging medical practice enabled by electronic health records and its impacts on emergency department–inpatient admission handoffs. Journal of the American Medical Informatics Association, 20(2), 260–267.
20. Hollin I., Griffin M., & Kachnowski S. (2012). How will we know if it’s working? A multi-faceted approach to measuring usability of a specialty-specific electronic medical record. Health Informatics Journal, 18(3), 219–232.
21. Hsiao C.-J., & Hing E. (2012). Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001–2012. Retrieved from https://stacks.cdc.gov/view/cdc/22029
22. Kendall L., Klasnja P., Iwasaki J., Best J., White A., Khalaj S., … Blondon K. (2013). Use of simulated physician handoffs to study cross-cover chart biopsy in the electronic medical record. AMIA Annual Symposium Proceedings, 2013, 766–775. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/24551374
23. Khan T., Yang J., Barkowski L., Tapper B., Lubomski L., Daniel D., & Wozniak G. (2018). A hypertension control quality improvement pilot program: Experiences and blood pressure outcomes from physician practices. International Journal of Healthcare, 4, 42–49.
24. Lopes J. L., Souza R. S., Geyer C. R., Costa C. A., Barbosa J. V., Gusmão M. Z., & Yamin A. C. (2012). A model for context awareness in Ubicomp. In Proceedings of the 18th Brazilian Symposium on Multimedia and the Web (pp. 161–168). New York, USA: ACM Press. doi: 10.1145/2382636.2382672
25. Mans R. S., van der Aalst W. M. P., Vanwersch R. J. B., & Moleman A. J. (2013). Process mining in healthcare: Data challenges when answering frequently posed questions. In Proceedings of the BPM 2012 Joint Workshop on Process Support and Knowledge Representation in Health Care (pp. 140–153). Berlin, Heidelberg: Springer. doi: 10.1007/978-3-642-36438-9_10
26. Ozkaynak M., Wu D., Hannah K., Dayan P., & Mistry R. (2018). Examining workflow in a pediatric emergency department to develop a clinical decision support for an antimicrobial stewardship program. Applied Clinical Informatics, 9(2), 248–260.
27. Roman L. C., Ancker J. S., Johnson S. B., & Senathirajah Y. (2017). Navigation in the electronic health record: A review of the safety and usability literature. Journal of Biomedical Informatics, 67, 69–79.
28. Saitwal H., Feng X., Walji M., Patel V., & Zhang J. (2010). Assessing performance of an Electronic Health Record (EHR) using cognitive task analysis. International Journal of Medical Informatics, 79(7), 501–506.
29. Schröder H., Thaeter L., Henze L., Drachsler H., Rossaint R., & Sopka S. (2018). Patient handoffs in undergraduate medical education: A systematic analysis of training needs. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen, 135–136, 89–97.
30. Solet D. J., Norvell J. M., Rutan G. H., & Frankel R. M. (2005). Lost in translation: Challenges and opportunities in physician-to-physician communication during patient handoffs. Academic Medicine: Journal of the Association of American Medical Colleges, 80(12), 1094–1099.
31. Starmer A. J., Spector N. D., Srivastava R., West D. C., Rosenbluth G., Allen A. D., … Landrigan C. P. (2014). Changes in medical errors after implementation of a handoff program. New England Journal of Medicine, 371(19), 1803–1812.
32. Wilbanks B. A., & Langford P. A. (2014). A review of dashboards for data analytics in nursing. CIN: Computers, Informatics, Nursing, 32(11), 545–549.
33. Wu D. T. Y., Smart N., Ciemins E. L., Lanham H. J., Lindberg C., & Zheng K. (2017). Using EHR audit trail logs to analyze clinical workflow: A case study from community-based ambulatory clinics. AMIA Annual Symposium Proceedings, 2017, 1820–1827. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/29854253
