AMIA Annual Symposium Proceedings. 2015 Nov 5;2015:1881–1889.

Using High-Fidelity Simulation and Eye Tracking to Characterize EHR Workflow Patterns among Hospital Physicians

Julie W Doberne 1, Ze He 2, Vishnu Mohan 1, Jeffrey A Gold 1,3, Jenna Marquard 2, Michael F Chiang 1,4
PMCID: PMC4765617  PMID: 26958287

Abstract

Modern EHR systems are complex, and end-user behavior and training are highly variable. The need for clinicians to access key clinical data is a critical patient safety issue. This study used a mixed methods approach employing a high-fidelity EHR simulation environment, eye and screen tracking, surveys, and semi-structured interviews to characterize typical EHR usage by hospital physicians (hospitalists) as they encounter a new patient. The main findings were: 1) There were strong similarities across the groups in the information types the physicians looked at most frequently, 2) While there was no overall difference in case duration between the groups, we observed two distinct workflow types between the groups with respect to gathering information in the EHR and creating a note, and 3) A majority of the case time was devoted to note composition in both groups. This has implications for EHR interface design and raises further questions about what individual user workflows exist in the EHR.

Introduction

Electronic health records (EHRs) have become a central component of the modern clinical workflow, serving as a central documentation repository, an ordering mechanism, and a provider communication tool. EHRs have been promoted as a mechanism for improving quality of healthcare delivery, patient safety and provider efficiency1. Widespread adoption has been driven in part by substantial governmental incentives through the Centers for Medicare and Medicaid Services (CMS) Meaningful Use program, with 59% of U.S. hospitals and 48% of office-based providers using EHRs as of 20142,3.

Implementations of EHRs have been shown to dramatically influence clinical workflows4,5. End-user behaviors and training approaches are highly variable6,7. Adaptive end-user behaviors such as excessive use of copy-paste/copy forward8, “overdocumentation”9, and “upcoding”9 may compromise health care quality and patient safety10; training and standardization may help reduce these practices11. Beginning efforts have been made to establish standard practices among EHR end users7. However, these efforts have largely focused on documentation and not on information review.

The challenges facing consistency in EHR training and use are diverse. Though back end databases are fairly consistent across instances of an EHR, user interfaces and workflows can be substantially different depending upon the institution and clinical environment in which the EHR is used12. Within an institution or practice group, the physician-level characteristics in usage of EHR features and usage intensity have been found to be highly variable and personalized12. The strong influence of personal experiences and preferences is thought to partly explain this variance.

Assessment of end-user EHR behaviors has often been conducted via self-reports and surveys13–16, direct observation17, and meaningful use measure reporting12. Survey and reporting methods can provide a high-level perspective of provider behavior, but do not capture individual workflows. Direct observation may lack the accuracy required to quantitatively evaluate differences between individuals or groups of individuals. Real-time eye tracking technology has been shown to successfully capture user behavior in online website searching, website interface design, visual attention, and video games18–21. Within the realm of medicine, it has been employed to study radiologic and electrocardiographic interpretation, note reading, and medication administration22–28, but not to characterize clinicians' overall EHR workflows. The purpose of this study is to address this gap in knowledge by characterizing the workflow patterns of physicians using the EHR. This was done using a mixed methods approach employing a high-fidelity EHR simulation environment equipped with eye and screen tracking, surveys, and semi-structured interviews to characterize the typical EHR usage of a group of hospital physicians (hospitalists) as they encounter a new patient.

Methods

This study was approved by the Institutional Review Board at Oregon Health & Science University (OHSU; Portland, OR). Subjects signed a consent form prior to participation.

Development of Simulation Cases

An instance of our EHR environment (EpicCare, Epic Systems Inc., Verona, WI) was created to house simulated patient cases. This simulation environment imports all end-user customizations from the actual EHR environment, so the interface looked exactly as it would for each subject. Simulated cases were based upon real patient cases with common principal diagnoses (i.e., among the top 10 most common ICD-9 diagnoses for adults upon hospital discharge)29. Two patient cases (Cases A and B) were created and independently reviewed for medical accuracy and clinical realism by domain experts in accordance with previously published recommendations for high-fidelity case creation30. Both patients had previously established care at our institution, but were now presenting to the emergency department (ED) for evaluation of a new set of symptoms. Each case contained historical data in each of the categories listed in Table 1; the current ED visit contained vitals, intake/output, laboratory data, EKG, chest roentgenogram, and a half-completed ED resident note stating the history of present illness, physical exam findings, and review of systems.

Table 1.

Twenty-one information types used in video coding

1. Social history
2. Laboratory values, pathology, microscopy, cytology
3. Allergies
4. Procedure notes
5. Vital signs and weight
6. Outside records
7. Other
8. Past medical history
9. Imaging results and EKGs
10. Outpatient clinic note
11. Operative reports
12. Intake/output
13. Documentation (note)
14. Navigation
15. Past surgical history
16. Medication list
17. Inpatient clinic note
18. Problem list
19. Discharge summary
20. Documentation (non-note)
21. Family history

Recruitment and Testing of Subjects

Attending physicians from the OHSU Division of Hospital Medicine comprised the study population. Simulations were conducted on a representative active patient ward to mimic external distractions encountered by physicians as they use an EHR. Subjects were asked to act as the admitting hospitalist, review both patient charts, and create a history and physical (H&P) note complete with assessment and plan for each patient. Simulation time was not limited. Case order was held constant for all subjects throughout the study. After completing the cases, subjects were asked to verbally describe their typical workflow for admitting a patient. Semi-structured interview questions were used to elicit details about when they use the EHR during that process, their principal sources of information, note writing strategies, and the nature of the patient interaction. Lastly, subjects were asked to complete a questionnaire regarding demographic information, EHR experience, and general computer experience.

Eye and screen tracking were conducted using a Tobii X2 60 Eye Tracker (Tobii Systems, Danderyd Municipality, Sweden), a non-invasive tracker mounted below the computer monitor. All testing was conducted using a standardized computer station with consistent and static screen and chair height. Before each simulation, the eye tracker was calibrated to each subject using a 1-minute 9-point calibration algorithm provided by the manufacturer. Upon commencement of the simulation the screen tracking software (Tobii Studio, Tobii Systems, Danderyd Municipality, Sweden) captured screen video, keystrokes, mouse clicks, ocular saccades, and eye fixations. A velocity threshold identification filter was used to identify sets of fixations (gazes), using the standard definition of a fixation as lasting a minimum of 100 ms31. Each video was coded manually by a member of the research team (JD). Videos were coded by recording the information type upon which the gaze was situated at each second of the case.
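To make the filtering step concrete, below is a minimal Python sketch of a velocity-threshold identification (I-VT) fixation filter of the kind applied by the tracking software: consecutive gaze samples whose point-to-point velocity falls below a threshold are grouped, and groups lasting at least 100 ms count as fixations. The sampling rate, pixel units, and threshold value here are illustrative assumptions, not the parameters used by Tobii Studio.

```python
def ivt_fixations(samples, velocity_threshold=50.0, min_duration=0.100):
    """samples: time-ordered list of (t_seconds, x_px, y_px) gaze points.
    Returns (start, end) times of fixations lasting at least min_duration."""
    # Label each inter-sample interval as slow (fixation-like) or fast (saccade-like).
    slow = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt if dt > 0 else float("inf")
        slow.append(velocity < velocity_threshold)

    # Group consecutive slow intervals into fixation candidates.
    fixations, start = [], None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = samples[i][0]                      # candidate fixation begins
        elif not is_slow and start is not None:
            if samples[i][0] - start >= min_duration:  # keep only long-enough runs
                fixations.append((start, samples[i][0]))
            start = None
    if start is not None and samples[-1][0] - start >= min_duration:
        fixations.append((start, samples[-1][0]))
    return fixations
```

For example, 60 Hz samples that hold still for 200 ms and then jump rapidly yield a single fixation spanning the stationary run.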

Data Analysis

Simulation gaze data were divided into two major categories: informational and navigational. Informational gazes pertained to any kind of clinical data (all entries in Table 1 except 14); navigational gazes were defined as lacking clinical data, and frequently occurred on toolbars, menus, and up/down scrolling arrows. Documentation was considered a subset of the informational gaze category. Comparisons between group means were conducted using two-sided t-tests.

Several metrics were used to evaluate the data. First, we measured the average duration of each participant’s gaze on each information type (Table 1), calculated by their total duration of gazes on that information type divided by the number of gazes on that information type during the case. Next, we calculated the total number of transitions between information types for each patient case, and the total number of informational gazes for each case.
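The metrics above follow directly from the second-by-second coding. The following is an illustrative Python sketch (the information-type labels are hypothetical), which collapses the per-second labels into gazes (runs on a single information type) and then derives average gaze duration per type, the transition count, and the gaze count:

```python
from collections import defaultdict

def gaze_metrics(coded_seconds):
    """coded_seconds: one information-type label per second of the case,
    e.g. ["labs", "labs", "note", ...] (labels here are hypothetical).
    Returns (avg_gaze_duration_by_type, n_transitions, n_gazes)."""
    # Collapse the per-second coding into runs: each run is one gaze on one type.
    gazes = []  # list of (info_type, duration_seconds)
    for label in coded_seconds:
        if gazes and gazes[-1][0] == label:
            gazes[-1] = (label, gazes[-1][1] + 1)
        else:
            gazes.append((label, 1))

    # Average gaze duration per type = total duration / number of gazes on it.
    totals, counts = defaultdict(int), defaultdict(int)
    for label, dur in gazes:
        totals[label] += dur
        counts[label] += 1
    avg_duration = {t: totals[t] / counts[t] for t in totals}

    n_gazes = len(gazes)
    n_transitions = n_gazes - 1 if gazes else 0  # each run boundary is a transition
    return avg_duration, n_transitions, n_gazes
```

For instance, the coding ["labs", "labs", "note", "note", "note", "labs"] yields three gazes (labs, note, labs) and two transitions, with an average labs gaze of 1.5 seconds.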

We used first-order Discrete Time Markov Chains (DTMCs) to model transitions between information types. A Markov chain consists of a series of successive state-to-state transitions (Equation 1, Figure 1), whose probabilities form a transition matrix. Equation 2 illustrates an example of a transition matrix with three states. Row i (i = 1, 2, 3) gives the probability distribution of transitions from information type i to each information type j; for example, p13 is the probability that a subject transitions from information type 1 to information type 3. Higher probabilities in the matrix indicate information-type pairs that are more likely to occur in succession.

Figure 1.

Equations used in DTMC analysis
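As a concrete sketch of the estimation, a first-order DTMC transition matrix can be built from an ordered sequence of gazed-at information types by counting transitions and normalizing each row, so that entry p_ij estimates P(X_{n+1} = j | X_n = i). The state labels below are illustrative:

```python
def transition_matrix(gaze_sequence, states):
    """Estimate first-order DTMC transition probabilities p_ij from an
    ordered sequence of gazed-at information types."""
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = [[0] * n for _ in range(n)]
    # Count each successive pair (current state -> next state).
    for a, b in zip(gaze_sequence, gaze_sequence[1:]):
        counts[index[a]][index[b]] += 1
    # Normalize each row into a probability distribution over next states.
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix
```

For example, the sequence note → labs → note → imaging → note gives the "note" row probabilities of 0.5 for labs and 0.5 for imaging.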

Results

Subject characteristics

Of 23 eligible subjects, 17 (74%) participated, completing a total of 33 patient cases. Fifty-nine percent of subjects were male; the mean length of time since medical school graduation was 13.3 years. All subjects described themselves as “somewhat” or “very experienced” with computers. The mean length of time using the study EHR (EpicCare) was 6.5 years.

Simulation characteristics

The subjects were divided into two groups based upon how long into the case it took them to begin composing a note. This division was based upon a natural grouping observed in average note start times per subject (Figure 2). Subjects in Group 1 (n=8) began composing a note on average less than 2 minutes into the case; subjects in Group 2 (n=9) began composing a note on average more than 2 minutes into the case.

Figure 2.

Distribution of note start times

The proportion of men in Group 1 (87.5%) was significantly greater than the proportion of men in Group 2 (33.3%, p=0.02) (Table 2). Using a Likert-type scale to assess self-rated computer experience (1, less experienced; 2, somewhat experienced; 3, very experienced), Group 1 reported a higher mean experience score compared to Group 2 (2.5 and 2, respectively; p=0.03). Time since medical school graduation and length of EpicCare experience did not differ significantly between the groups.

Table 2.

Demographic and simulation characteristics

                                          Group 1 (n=8)   Group 2 (n=9)   p-value

Subject Characteristics
  Gender, % male                          87.5%           33.3%           0.02*
  Self-rated computer experience, level   2.5             2               0.03*
  Years since medical school graduation   15.3            11.5            0.42
  EpicCare experience, years              7.3             5.7             0.27

Simulation Characteristics
  Transitions                             75.5            57.5            0.04*
  Gazes                                   81.7            62.3            0.04*
  Documentation, number of gazes          32.9            21.4            <0.01*
  Navigation, number of gazes             17.6            17.8            0.47
  Number of unique information types      12.5            12.4            0.85
  Case length, mm:ss                      25:29           24:29           0.72

* p < 0.05

The average time for Case A was significantly longer than Case B (mean±SD, 28:12±8:05 and 21:56±6:35 respectively, p=0.02). To evaluate a potential learning effect, the note start times were normalized to total case times and compared among the groups from Case A to Case B. There was a slight difference in the decrement in note start time ratio between the groups, with Group 2’s note start ratio dropping more in Case B, but this was not significant (G1 = 0.04, G2 = 0.07; p = 0.33). Number of transitions per second was evaluated as well. Whereas the transitions per second for Group 1 decreased slightly from Case A to Case B, the transitions per second for Group 2 increased slightly, but the difference between the two was not statistically significant (G1 = 0.006, G2 = −0.005; p = 0.14).

Group 1 had significantly more gazes (G1=81.7, G2=62.3; p=0.04) and transitions (G1=75.5, G2=57.5; p=0.04) over the course of each case. There was no difference between the groups in average case time, navigation time, or average number of unique information types accessed within each case.

Semi-structured interviews

Self-described workflows elicited from the semi-structured interviews were consistent with categorization into Group 1 and Group 2. Representative quotations are shown in Table 3.

Table 3.

Representative quotations from semi-structured interviews

Group 1
Early note composition, greater frequency of transitions between information types

“I often start my note right away as I go about my chart review.”
“I tend to be linear… I jump around.”
“I usually start with a note because it autopopulates with the information I need.”

Group 2
Information review with longer duration per screen and fewer transitions, followed by note composition

“I review all the current data, labs, and imaging. Then look at last clinic note, meds, and clinical history. I start writing a note after that.”
“I look at the meds, prior notes and imaging, then start putting a note skeleton together.”
“I do a quick review of the chart before I go see the patient. Then I build my problem list and when I create the note, it auto-imports the information I’ve collected.”

Clinical content

The information types with the longest total gaze durations are shown in Figure 3. For both Groups 1 and 2, the total duration of gazes for documentation (composition of the H&P note) was much higher than for all non-documentation information types (G1=851 seconds, G2=745 seconds). The non-documentation information types with the greatest average and total gaze durations (greater than 10 seconds) were imaging results, inpatient progress notes, lab values, medications, and ambulatory clinic notes. These five information types were also the most often visited throughout the simulations. Group 1 gazed at laboratory values significantly more often than Group 2 (G1=13.3, G2=8.2, p=0.02). Differences were observed in less frequently visited information types as well. Past medical history (G1=0.75, G2=1.6, p=0.06), problem list (G1=1.4, G2=2.9, p=0.08), family history (G1=0.7, G2=0.2, p=0.04), and other information types (G1=3.88, G2=2.53, p=0.04) all showed slight differences in visitation frequency between groups. These trends in frequency remained consistent when the gaze values were normalized by the total number of visits to all information types.

Figure 3.

Distributions of Information Types with Longest Total Gaze

Transition visualizations

Figures 4a and 4b show circle visualizations of normalized Markov Chain frequencies of information types for both groups. Nodes situated around the rim of the circle represent the various information types, ordered by size moving counter-clockwise. The sizes of the nodes are proportional to their gaze number distribution frequencies, and the thickness of the lines connecting nodes indicate the normalized frequencies of transitions (transition probability) between the two information types. For clarity, only the top 80% of total transitions are depicted here.
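The paper does not specify exactly how the 80% cutoff was computed; one plausible reading is that transition edges are ranked by count and kept until they jointly account for 80% of all transitions, as in this sketch:

```python
def top_transitions(counts, coverage=0.80):
    """counts: dict mapping (from_type, to_type) -> transition count.
    Returns the smallest set of highest-count edges whose counts together
    reach at least `coverage` of all transitions (e.g., the top 80%)."""
    total = sum(counts.values())
    kept, running = [], 0
    # Greedily take edges in descending order of transition count.
    for edge, c in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        kept.append(edge)
        running += c
        if running / total >= coverage:
            break
    return kept
```

With counts {note→labs: 5, labs→note: 3, note→vitals: 1, vitals→note: 1}, the two largest edges already cover 8 of 10 transitions, so only they would be drawn.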

Figure 4a and b.

Transition visualizations for Group 1 (top) and Group 2 (bottom)

The general patterns are the same between the two groups: most transitions are centered on documentation, which is also the most frequently visited information type. The top five most frequently visited information types are consistent between the two groups: documentation, lab values, inpatient progress notes, vital signs, and imaging results. In addition, because documentation played a larger role in Group 1 than in Group 2, Group 1 had fewer high-frequency transitions among the remaining information types than Group 2.

One important observation from the visualization is which information types were closely related. For example, the social history, past medical history, family history, and past surgical history nodes are co-located because they were similar in visitation frequencies. In Group 1, there are more bold lines from documentation to other information types, capturing the notion that Group 1 transitioned more frequently between documentation and other information types.

Discussion

This study assessed characteristics of physicians’ information search patterns in an EHR as they created a note for a new patient. There were two discrete types of users based upon information review and documentation tasks. The key findings from our analyses are: 1) There were strong similarities across the groups in the information types the physicians looked at most frequently; 2) While there was no overall difference in case duration between the groups, we observed two distinct workflow types between the groups with respect to gathering information in the EHR and creating a note; and 3) A majority of the case time was devoted to note composition in both groups.

Both groups showed the same preferences for a small subset of information types, in terms of how long they looked at each information type (Figure 3). Imaging results, progress notes, laboratory values, medications, and prior clinic notes were looked at longest. The total number of unique information types also did not differ between groups. This suggests some uniformity in clinical reasoning that may be explained by their mutual medical specialty, common clinical environment, and/or similarities in medical training. Differences between groups were only found in lower-frequency information types; members of Group 1 spent less time reviewing past medical history and problem lists, and more time reviewing the family history.

Groups 1 and 2 exhibited significantly different workflow types, despite no overall difference in case completion time. Group 1, characterized by early note creation, transitioned frequently between information types in the EHR after starting the note (Figure 4). Group 2 physicians, characterized by later note creation, tended to dwell on information longer before starting to compose the note. Group 1 showed a markedly higher number of transitions and gazes compared to Group 2, confirming a higher rate of switching from one information type to another. We found significant differences between the groups in how the simulation time was used; overall, Group 1 spent substantially longer in the documentation phase of the simulation. Subjects’ self-described workflows (Table 3) supported a dichotomy between early and late note creation: subjects in Group 1 mentioned starting a note as one of their first activities, prior to information gathering (and “jumping around” in the EHR when gathering information), whereas Group 2 members described reviewing a variety of information prior to note creation.

Both groups spent a much higher amount of time on note composition than any other task, including reviewing clinical information (Figure 3). The time and burden associated with documentation is noted in the literature32–34. What this study highlights is the finding that documentation time may overshadow all other tasks, including time to read or review the clinical data. This raises the question of whether the time and burden of documentation relates to the untoward effects of EHR implementation on patient care that have been observed10,35. Further research is needed to elucidate the interplay between EHR documentation burden, clinical reasoning, and patient care.

This research raises several questions about the nature of EHR information seeking and patterns in end-user behavior. Several models have been proposed to describe this process36–39. Traditionally, information seeking is viewed as a sense-making process in which the user formulates a personal perspective through finding meaning40. A common thread shared by most proposed models of information seeking is that it is a dynamic process, in which users move non-linearly through levels of certainty based upon the information encountered and judgments of relevancy and specificity. The resultant perspective or decision is not necessarily the same among individuals40 and is dependent upon the effectiveness of the user’s information retrieval41. This becomes problematic in the realm of clinical medicine, where the standard of care dictates that there be some baseline level of uniformity in clinical reasoning to ensure patient safety. Ensuring some baseline level of competency in information retrieval becomes crucial given the complexity of modern EHR systems, where it has already been demonstrated that end-user behavior is highly variable5,7,42. This research suggests that users take different pathways to arrive at a common endpoint: the information used and the time to complete the task may not differ, but the order in which the end product (clinical note) is created may differ depending on the user. One of Nielsen’s five criteria for usability is affordance43. The different workflows described in this study support the need for interfaces that accommodate 1) fluctuation between varying levels of certainty in the meaning-finding process and 2) a variety of approaches in clinical documentation.

Of note, significant differences were observed in the gender composition and self-rated computer experience level between Group 1 and Group 2. With respect to the gender difference, though the sample size of the current study is too small to draw firm conclusions, our results raise the question of whether men and women may approach the tasks of EHR information gathering and documentation differently. Gender differences in clinical reasoning and information processing have been explored previously44–50. Meyers-Levy’s theory of selectivity and information-processing research in other disciplines suggest that women are more likely to employ elaborative information processing strategies regardless of task complexity, whereas men are more likely to use heuristic processing strategies, switching to elaborative strategies only on more complex tasks44–46,49. Conversely, in the medical literature no significant gender-related difference has been found in diagnostic reasoning50. With the insertion of the EHR into clinical workflows, we must consider the electronic interface as an additional layer of complexity. Research on website audiences has shown that perception and satisfaction can vary greatly between gender groups51, and females have demonstrated greater proficiency in computer display navigation and responsiveness to on-screen optical cues52. Taken together, prior research suggests that several factors may contribute to gender differences in EHR usage; further research is needed. With respect to the difference in self-rated computer experience, it is unclear whether this outcome is independent of the strong gender difference between the groups; conflicting research exists on gender differences in perceptions of self-efficacy and attitudes toward computers48,53. Even if it is an independent outcome, it is not clear that the higher level of computer experience in Group 1 translated into better performance in the simulation task.

There are several important limitations to this study. First, though simulations were conducted in a high-fidelity environment, subjects were aware that the cases were fictional patients. There were no actual patients to interact with and subjects relied upon the content of the history of present illness, review of systems, and physical exam as it was documented in the EHR. Though this may have diminished the realism of the cases, it is expected that it would exert a uniform effect on the subjects. Cases were made to be slightly less complicated than the “average” patient seen at our institution, a tertiary care facility, for the purposes of simulation time. Second, interpretation of the eye tracking gaze data is still in its infancy in clinical informatics research, and it is unclear how well gazes and transitions represent clinical reasoning processes. Third, the notes created during the simulations were not evaluated for accuracy and completeness, thus we cannot comment on whether differences in search patterns affect clinical reasoning and medical decision making. Lastly, the study was conducted among one specialty of physicians at one institution.

Conclusion

This study demonstrates the presence of two information-gathering and documentation workflows among hospitalists using the EHR to admit a new patient. This has important implications for EHR interface design, specifically with respect to affordances for multiple information-gathering pathways. Future studies must continue to examine the workflow differences among individuals, specifically pertaining to note quality, clinical accuracy, and efficiency.

References

  • 1.Middleton B, Bloomrosen M, Dente MA, Hashmat B, Koppel R, Overhage JM, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013 Jun;20(e1):e2–8. doi: 10.1136/amiajnl-2012-001458. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Charles D, King J, Patel V, Furukawa MF. (ONC Data Brief 9).Adoption of electronic health record systems among U.S. non-federal acute care hospitals: 2008–2012. 2013 March 2013. [Google Scholar]
  • 3.Hsiao C, Hing E. (NCHS Data Brief 143).Use and characteristics of electronic health record systems among office-based physician practices:United States, 2001–2013. 2014 [PubMed] [Google Scholar]
  • 4.Niazkhani Z, Pirnejad H, Berg M, Aarts J. The impact of computerized provider order entry systems on inpatient clinical workflow: a literature review. J Am Med Inform Assoc. 2009 Jul-Aug;16(4):539–549. doi: 10.1197/jamia.M2419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Zheng K, Padman R, Johnson MP, Diamond HS. An interface-driven analysis of user interactions with an electronic health records system. Journal of the American Medical Informatics Association. 2009 Mar-Apr;16(2):228–237. doi: 10.1197/jamia.M2852. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Yackel TR, Embi PJ. Copy-and-paste-and-paste. JAMA. 2006 Nov 15;296(19):2315. doi: 10.1001/jama.296.19.2315-a. author reply 2315–6. [DOI] [PubMed] [Google Scholar]
  • 7.Kuhn T, Basch P, Barr M, Yackel T, Medical Informatics Committee of the American College of Physicians Clinical Documentation in the 21st Century: Executive Summary of a Policy Position Paper From the American College of Physicians. Ann Intern Med. 2015 Jan 13;162(4):301–303. doi: 10.7326/M14-2128. [DOI] [PubMed] [Google Scholar]
  • 8.Sheehy AM, Weissburg DJ, Dean SM. The role of copy-and-paste in the hospital electronic health record. JAMA Intern Med. 2014 Aug;174(8):1217–1218. doi: 10.1001/jamainternmed.2014.2110. [DOI] [PubMed] [Google Scholar]
  • 9.Department of Health and Human Services Office of Inspector General CMS and its contractors have adopted few program integrity practices to address vulnerabilities in EHRs. 2014. [Accessed March 3, 2015]. Available at: http://oig.hhs.gov/oei/reports/oei-01-11-00571.pdf.
  • 10.Koppel R, Metlay J, Cohen A, Abaluck B, Localio AR, Kimmel S, et al. Role of the computerized physician order entry systems in facilitating medication errors. J Am Med Inform Assoc. 2005;293(10):1197–203. doi: 10.1001/jama.293.10.1197. [DOI] [PubMed] [Google Scholar]
  • 11.Dastagir MT, Chin HL, McNamara M, Poteraj K, Battaglini S, Alstot L. Advanced proficiency EHR training: effect on physicians’ EHR efficiency, EHR satisfaction and job satisfaction. AMIA Annu Symp Proc. 2012;2012:136–143. [PMC free article] [PubMed] [Google Scholar]
  • 12.Ancker JS, Kern LM, Edwards A, Nosal S, Stein DM, Hauser D, et al. How is the electronic health record being used? Use of EHR data to assess physician-level variability in technology use. J Am Med Inform Assoc. 2014 Nov-Dec;21(6):1001–1008. doi: 10.1136/amiajnl-2013-002627. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical information technologies and inpatient outcomes: a multiple hospital study. Arch Intern Med. 2009 Jan 26;169(2):108–114. doi: 10.1001/archinternmed.2008.520. [DOI] [PubMed] [Google Scholar]
  • 14.Makam AN, Lanham HJ, Batchelor K, Samal L, Moran B, Howell-Stampley T, et al. Use and satisfaction with key functions of a common commercial electronic health record: a survey of primary care providers. BMC Med Inform Decis Mak. 2013 Aug 9;13:86-6947-13-86. doi: 10.1186/1472-6947-13-86. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Amarasingham R, Diener-West M, Weiner M, Lehmann H, Herbers JE, Powe NR. Clinical information technology capabilities in four U.S. hospitals: testing a new structural performance measure. Med Care. 2006 Mar;44(3):216–224. doi: 10.1097/01.mlr.0000199648.06513.22. [DOI] [PubMed] [Google Scholar]
  • 16.Poon EG, Wright A, Simon SR, Jenter CA, Kaushal R, Volk LA, et al. Relationship between use of electronic health record features and health care quality: results of a statewide survey. Med Care. 2010 Mar;48(3):203–209. doi: 10.1097/MLR.0b013e3181c16203. [DOI] [PubMed] [Google Scholar]
  • 17.Lanham HJ, Sittig DF, Leykum LK, Parchman ML, Pugh JA, McDaniel RR. Understanding differences in electronic health record (EHR) use: linking individual physicians’ perceptions of uncertainty and EHR use patterns in ambulatory care. J Am Med Inform Assoc. 2014 Jan-Feb;21(1):73–81. doi: 10.1136/amiajnl-2012-001377. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Lorigo L, Haridasan M, Brynjarsdottir H, Xia L, Joachims T, Gay G, et al. Eye tracking and online search: Lessons learned and challenges ahead. J Am Soc Inf Sci Tech. 2008;59:1041–1052. [Google Scholar]
  • 19.Djamasbi S, Siegel M, Tullis T. Generation Y, web design, and eye tracking. Int J Hum Comp Stud. 2010;68:307–323. [Google Scholar]
  • 20.Reutskaja E, Nagel R, Camerer CF, Rangel A. Search dynamics in consumer choice under time pressure: An eye-tracking study. Am Econ Rev. 2011;101:900–926. [Google Scholar]
  • 21.Alkan S, Cagiltay K. Studying computer game learning experience through eye tracking. Brit J Ed Tech. 2007;38:538–542. [Google Scholar]
  • 22.Beard DV, Pisano ED, Denelsbeck KM, Johnston RE. Eye movement during computed tomography interpretation: eyetracker results and image display-time implications. J Digit Imaging. 1994 Nov;7(4):189–192. doi: 10.1007/BF03168538. [DOI] [PubMed] [Google Scholar]
  • 23.Manning DJ, Ethell SC, Donovan T. Detection or decision errors? Missed lung cancer from the posteroanterior chest radiograph. Br J Radiol. 2004 Mar;77(915):231–235. doi: 10.1259/bjr/28883951. [DOI] [PubMed] [Google Scholar]
  • 24.Tourassi G, Voisin S, Paquit V, Krupinski E. Investigating the link between radiologists’ gaze, diagnostic decision, and image content. J Am Med Inform Assoc. 2013 Nov-Dec;20(6):1067–1075. doi: 10.1136/amiajnl-2012-001503. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Brown PJ, Marquard JL, Amster B, Romoser M, Friderici J, Goff S, et al. What do physicians read (and ignore) in electronic progress notes? Appl Clin Inform. 2014 Apr 23;5(2):430–444. doi: 10.4338/ACI-2014-01-RA-0003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Eghdam A, Forsman J, Falkenhav M, Lind M, Koch S. Combining usability testing with eye-tracking technology: evaluation of a visualization support for antibiotic use in intensive care. Stud Health Technol Inform. 2011;169:945–949. [PubMed] [Google Scholar]
  • 27.Breen CJ, Bond R, Finlay D. An evaluation of eye tracking technology in the assessment of 12 lead electrocardiography interpretation. J Electrocardiol. 2014 Nov-Dec;47(6):922–929. doi: 10.1016/j.jelectrocard.2014.08.008. [DOI] [PubMed] [Google Scholar]
  • 28.Marquard JL, Henneman PL, He Z, Jo J, Fisher DL, Henneman EA. Nurses’ behaviors and visual scanning patterns may reduce patient identification errors. J Exp Psychol Appl. 2011 Sep;17(3):247–256. doi: 10.1037/a0025261. [DOI] [PubMed] [Google Scholar]
  • 29.Elixhauser A, Steiner CA. Most common diagnoses and procedures in U.S. community hospitals, 1996–1999. [Accessed June 26, 2014]. Available at: http://www.hcup-us.ahrq.gov/reports/natstats/commdx/commdx.htm.
  • 30.Mohan V, Gold J. Collaborative intelligent case design model to facilitate simulated testing of clinical cognitive load. Workshop on Interactive Systems in Healthcare; Washington, DC. Nov 15, 2014. [Google Scholar]
  • 31.Salvucci D, Goldberg J. Identifying fixations and saccades in eye-tracking protocols. Proceedings of the Eye Tracking Research and Applications Symposium; New York, NY: ACM Press; 2000. [Google Scholar]
  • 32.Sanders DS, Read-Brown S, Tu DC, Lambert WE, Choi D, Almario BM, et al. Impact of an electronic health record operating room management system in ophthalmology on documentation time, surgical volume, and staffing. JAMA Ophthalmol. 2014 May;132(5):586–592. doi: 10.1001/jamaophthalmol.2013.8196. [DOI] [PubMed] [Google Scholar]
  • 33.Saarinen K, Aho M. Does the implementation of a clinical information system decrease the time intensive care nurses spend on documentation of care? Acta Anaesthesiol Scand. 2005 Jan;49(1):62–65. doi: 10.1111/j.1399-6576.2005.00546.x. [DOI] [PubMed] [Google Scholar]
  • 34.Read-Brown S, Sanders DS, Brown AS, Yackel TR, Choi D, Tu DC, et al. Time-motion analysis of clinical nursing documentation during implementation of an electronic operating room management system for ophthalmic surgery. AMIA Annu Symp Proc. 2013 Nov 16;2013:1195–1204. [PMC free article] [PubMed] [Google Scholar]
  • 35.Fraenkel DJ, Cowie M, Daley P. Quality benefits of an intensive care clinical information system. Crit Care Med. 2003 Jan;31(1):120–125. doi: 10.1097/00003246-200301000-00019. [DOI] [PubMed] [Google Scholar]
  • 36.Belkin NJ. Cognitive models and information transfer. Social Science Information Studies. 1984;4:111–130. [Google Scholar]
  • 37.Gardner H. The mind’s new science: A history of the cognitive revolution. New York, NY: Basic Books; 1985. [Google Scholar]
  • 38.Kelly GA. A theory of personality: The psychology of personal constructs. New York, NY: Norton; 1963. [Google Scholar]
  • 39.Saracevic T. Relevance: A review of and a framework for the thinking on the notion in information science. J Am Soc Inf Sci. 1975;26:321–343. [Google Scholar]
  • 40.Dervin B, Nilan M. Information needs and uses. In: Williams ME, editor. Annual Review of Information Science and Technology (ARIST) 1986. pp. 3–33. [Google Scholar]
  • 41.Kuhlthau CC. Inside the search process: Information seeking from the user’s perspective. J Am Soc Inf Sci. 1991;42:361–371. [Google Scholar]
  • 42.Embi PJ, Yackel TR, Logan JR, Bowen JL, Cooney TG, Gorman PN. Impacts of computerized physician documentation in a teaching hospital: perceptions of faculty and resident physicians. J Am Med Inform Assoc. 2004 Jul-Aug;11(4):300–309. doi: 10.1197/jamia.M1525. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Nielsen J. Usability 101: Introduction to usability. 2012. [Accessed March 7, 2015]. Available at: http://www.nngroup.com/articles/usability-101-introduction-to-usability/
  • 44.Beckwith L, Burnett M, Sorte S. Gender and end-user computing. In: Trauth E, editor. Encyclopedia of gender and information technology. Hershey, PA: Idea Group Reference; 2006. pp. 398–404. [Google Scholar]
  • 45.Meyers-Levy J, Maheswaran D. Exploring differences in males’ and females’ processing strategies. Journal of Consumer Research. 1991;19:63–70. [Google Scholar]
  • 46.Meyers-Levy J. Gender differences in information processing: A selectivity interpretation. In: Cafferata P, Tybout A, editors. Cognitive and affective responses to advertising. Lexington, MA: Lexington Books; 1989. [Google Scholar]
  • 47.Meyers-Levy J, Sternthal B. Gender differences in the use of message cues and judgments. Journal of Marketing Research. 1991;28:84–96. [Google Scholar]
  • 48.Busch T. Gender differences in self-efficacy and attitudes toward computers. Journal of Educational Computing Research. 1995;12(2):147–158. [Google Scholar]
  • 49.O’Donnell E, Johnson E. Gender effects on processing effort during analytical procedures. International Journal of Auditing. 2001;5:91–105. [Google Scholar]
  • 50.Sobral DT. Diagnostic ability of medical students in relation to their learning characteristics and preclinical background. Med Educ. 1995 Jul;29(4):278–282. doi: 10.1111/j.1365-2923.1995.tb02849.x. [DOI] [PubMed] [Google Scholar]
  • 51.Simon SJ. The impact of culture and gender on web sites: an empirical study. ACM SIGMIS Database. 2001;32(1):18–37. [Google Scholar]
  • 52.Tan D, Czerwinski M, Robertson G. Women go with the (optical) flow. ACM Conference on Human Factors in Computing Systems; Fort Lauderdale, FL. 2003. [Google Scholar]
  • 53.Lundeberg MA, Fox PW, Punccohar J. Highly confident but wrong: Gender differences and similarities in confidence judgments. Journal of Educational Psychology. 1994;86(1):114–121. [Google Scholar]
