Published in final edited form as: ANS Adv Nurs Sci. 2009 Jul–Sep;32(3):252–279. doi: 10.1097/ANS.0b013e3181b0d737

A Systematic Review on the Designs of Clinical Technology: Findings and Recommendations for Future Research

Greg Alexander, PhD; Nancy Staggers

Abstract

Human factors (HF) studies are increasingly important as technology infuses into clinical settings, yet no nursing research reviews exist in this area. The authors conducted a systematic review on designs of clinical technology; 34 articles comprising 50 studies met inclusion criteria. Findings were classified into three categories based on HF research goals. The majority of studies evaluated the effectiveness of clinical designs; studies of efficiency were fewest. Current research spans many interface types with no apparent pattern or obvious rationale for their selection. Future research should expand the types, settings, and participants of usability evaluations; develop integrated displays; and expand outcome variables.

INTRODUCTION

Having usable technology is an imperative for contemporary nurses. Suboptimal technology designs contribute to errors, reduce productivity, create extreme frustration, and can even result in system de-installation. The design and development of usable technology can be better assured by applying human factors (HF) concepts. HF principles, research methods, and techniques are widely available outside health care to enhance the effectiveness, efficiency, and user satisfaction of nurse-technology interaction. Yet these critical elements only trickled into health care in the early 1990s, despite having thoroughly penetrated other industries such as aviation.

The Institute of Medicine ushered HF concepts into the health care consciousness by linking HF to error prevention.1 Research in HF, usability, and human-computer interaction, all related concepts, expanded greatly over the past 10–15 years. However, no reviews exist examining available HF-related research or its diffusion into the nursing arena. Thus, the purposes of this paper are to: 1) systematically review the literature for HF-related research in health care, 2) evaluate its impact on areas of interest to nursing, and 3) recommend future research directions.

BACKGROUND

Human factors is a broad term for a set of related concepts about human interactions with tools in associated environments. Figure 1 depicts these concepts and their relationships.2 All HF-related concepts consider human needs, abilities, and limitations, including cognitive aspects, and assert an axiom of user-centered design.3,4 Human factors encompasses the design, use, and evaluation of a wide variety of tools in a broad sense – for instance, the design and use of an airplane cockpit, the design of a hammer to fit the female human hand, or the incorporation of known concepts about human memory and attention to improve work systems for successful sponge counts in an operating room. Ergonomics emphasizes physical attributes and designs of tools, such as the size of lettering on an IV pump so that labels are viewable from across the patient’s bed, the design of a computer mouse, or the layout of equipment in an intensive care unit to promote optimal workflow. Human-computer interaction focuses on computers and applications for humans, while its closely related concept, usability, stresses the design, interaction, and evaluation of both devices and computer applications by examining specific tasks and interaction outcomes within particular contexts. Examples include the design of an electronic medication administration record for multidisciplinary use and its subsequent redesign for specific tasks unique to an emergency department setting. Human-computer interaction can also include the design of software to support a group of users working on a shared document or social sanctions arising from inappropriate blogs among a group of clinicians discussing cardiomyopathy research.

Figure 1. The Relationship of Human Factors Concepts

The unique methods available from the HF domain allow researchers to elicit critical thought processes (e.g., cognitive task analysis), work methods (e.g., naturalistic observation) and/or tasks that are crucially important for the design of tools, devices and information systems. Research methods such as ethnographic and qualitative techniques are also useful in defining key user requirements for tools and evaluating existing tools for effectiveness.

Most important, the commonly held goals of human factors are to improve the effectiveness, efficiency, and satisfaction of humans interacting with tools (see Figure 2).5 Effectiveness includes the usefulness of a tool to complete work (tasks) and the safety of the tool. Examples of efficiency include productivity measures such as the time to complete specific tasks, the number of clicks to perform tasks, the costs of the tools, and/or the amount of training time needed for users to learn a software application. Satisfaction can include the perception of any aspect of the tool and typically includes perceptions about workload or the effectiveness of the specific design.

Figure 2. Human Factors Research Goals

In this review, we focus on the design and evaluation of user interfaces for clinical technology. Optimal technology design is vital to health care because the work and associated tools can be life-critical. For example, in a tragic event, faulty software design for the controls of a radiation machine caused a patient to scream in pain during treatment and later die of a radiation overdose.6 Zhang et al7 and Graham et al8 both outline serious usability problems with IV pumps, including issues likely to cause medical errors. Given the considerable impact of HF in health care, we examined the available research on the design of clinical technology, organized using the goals of HF: design effectiveness, efficiency, and satisfaction.

METHODS

Formal methods were used to perform a systematic review and assure a thorough search and retrieval process. Procedures included article relevance assessments, data extraction, and data analysis.9 Poor-quality studies were not eliminated, as is common in many systematic reviews, because our goal was to describe the available HF research in health care. The years 1980–2009 were included; substantial changes in devices and information systems since 1980 would make earlier references not pertinent. Criteria for inclusion were: peer-reviewed publications in English; stated research findings; any study design or method from any country; analyses of medical devices, tools, user interfaces, clinical information systems, or electronic health records in healthcare environments; and any user, including health providers or patients. Excluded were studies about ergonomics (e.g., cumulative trauma disorders, occupational medicine); studies in conference proceedings; studies of medical transcription devices; descriptions of human factors-related concepts without research findings; usability analyses in non-healthcare settings; designs solely for patients; and descriptions of work activities or error analyses.

Extensive literature searches were conducted using the research databases Cumulative Index to Nursing and Allied Health Literature (CINAHL), Ovid MEDLINE, PsycINFO, INSPEC, and the EBM Reviews: Health Technology Assessment Database (CLHTA) from 1980 to 2009. Key search terms were: (Human Computer Interaction or HCI) and (Human factors or Usability) and (health$ or health care or medical) and (nurs$). Reference lists of publications were checked for additional references. The authors independently reviewed citations and applied the relevance criteria; any questionably relevant articles were discussed until consensus was reached. The authors focused on technology targeted to clinicians only.
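For readers who assemble similar search strategies programmatically, the sketch below shows one way the review’s Boolean strategy could be composed as a query string. It is illustrative only: the combine() helper is a hypothetical convenience, and the $ wildcard mirrors the Ovid-style truncation quoted above rather than the syntax of any particular database API.

```python
# Illustrative only: composing the review's Boolean search strategy as a
# string. combine() is a hypothetical helper; the $ wildcard mirrors the
# Ovid-style truncation used in the article.

def combine(terms, operator):
    """Join search terms with a Boolean operator inside parentheses."""
    return "(" + f" {operator} ".join(terms) + ")"

concept_blocks = [
    combine(["Human Computer Interaction", "HCI"], "or"),
    combine(["Human factors", "Usability"], "or"),
    combine(["health$", "health care", "medical"], "or"),
    combine(["nurs$"], "or"),
]

query = " and ".join(concept_blocks)
print(query)
# (Human Computer Interaction or HCI) and (Human factors or Usability)
# and (health$ or health care or medical) and (nurs$)
```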

RESULTS

The search criteria yielded a total of 11,916 articles; delimiting to articles with the health$, health care, or medical terms resulted in 2,234 articles; further delimiting to manuscripts with a nursing emphasis resulted in 215 articles. The abstracts from this set of 215 articles were reviewed, and 34 articles met the relevance criteria. These articles are summarized in Table 1 with all usability findings. Authors of 18 of the 34 articles examined 17 different application or screen design interfaces, authors of 6 articles evaluated 5 different graphical interfaces, 5 evaluated different remote/telemedicine systems, and 5 evaluated different medical device user interfaces.

Table 1. Types of User Interfaces by Major Findings Across Combined Effectiveness, Efficiency, and Satisfaction Categories

Source (1st author) | UI Type | Major Findings
Graphical Displays (6)
Effken (1997) | Graphical interfaces | Novel display type positively affected amount of drug usage and target range but not time to treatment
Staggers (2000) | Graphical vs text interfaces | Response time faster, fewer errors, and higher satisfaction with graphical interface for orders management
Effken (2001) | Graphical interfaces | Display type positively affected successful treatment, amount of drug usage, and vital signs target range; visualizers (cognitive style) kept vital signs in target range
van der Meijden (2003) | Graphical interfaces | Usability assessment prompted by under-utilization and low satisfaction; issues in information quality, training, and project communication most important
Liu (2004) | Graphical interfaces | Fewer user errors with graphical interfaces
Lamy (2008) | Graphical interfaces | More correct responses with graphical interfaces
Medical Device Interfaces (5)
Lin (1998) | PCA pump | Complex programming sequences increase user cognitive workload
Lin (2001) | PCA pump | Redesign of PCA modes decreased errors, improved task completion time
Zhang (2003) | 1-channel IV pump | 192 heuristic violations in 1 pump; serious violations can lead to error
Graham (2004) | 3-channel IV pump | 231 heuristic violations; severe violations in consistency, flexibility, undo
Despont-Gros (2007) | Digital pen technology | Unexpected cognitive burden placed on users; high acceptance
Mobile/Remote Devices (5)
Lindberg (1997) | Telemedicine system | Sound and visual quality of telemedicine interfered with care processes
Lin (2004) | Wireless PDA with physiological monitoring | High ratings on performance in mobility, ease of use, and monitoring
Yoo (2006) | Mobile diabetes management system | Cognitive workload greater with increasing system operations; task time and error rate negatively affected by workload
Tang (2006) | Digital emergency medical telemedicine system | 21/48 usability problems rated as major or catastrophic due to poor visibility and inadequate data synchronization
Wu (2008) | Handheld electronic medical record | Good usability included intuitive features and accessible information; to be considered usable, the device needed to be fast and time saving; information completeness, ordering details, billing functionality, and integration were other concerns
Application/Screen Designs (18)
Staggers (1994) | Levels of screen density | Information found faster and more accurately, with high satisfaction, on high-density screens
Mills (1994) | Levels of screen density | Cognitive characteristics predicted users’ time and accuracy
Terazzi (1998) | Clinical lab software | Users perceived a lack of control of the software
Fuchs (1999) | Clinical decision support | System converted from a simple man-machine interface with complex data entry to graphics; processing time 1 min per case; software easy to use, comprehensive, useful in cancer risk evaluation/early cancer detection
Alberdi (2000) | Trend monitoring system | Clinicians would observe the patient first, then use trending as an adjunct only
Horsky (2003) | Order entry system | Considerable cognitive demands on users; patient safety errors
Patterson (2004) | Clinical decision support | 6/19 barriers (#1 = workload) reduced effectiveness of clinical reminders
Johnson (2005) | Family pedigree software | Less time on redesigned interface
Hortman (2005) | Nurse practitioner outcomes database | Mean satisfaction scores 3–8 (of 9); unclear elements discovered, e.g., date field, how to enter vital signs
Chaikoolvatanna (2006) | Diabetes management tool | Useful, easy to use, and understandable; easy to move from one topic to another; designs, colors, figures, and diagrams appropriate; audio quality and completion time were user concerns
Allen (2006) | Paper-based screen shots | 100 heuristic violations; 41% related to consistency
Peute (2007) | Lab order entry system | 25 usability problems: flexibility, navigation, visibility, word meanings
Staggers (2007) | Electronic medication administration record | High user satisfaction but only 90% of medication tasks completed correctly
Wallace (2007) | 3 patient care guideline interfaces | More successful searches, greater accuracy with homegrown interface than proprietary ones
Edwards (2008) | Commercial electronic health record | 134 potential usability issues; 10% rated as severe
Martins (2008) | 3 interfaces displaying longitudinal clinical data | Higher-complexity queries answered faster with computerized interfaces vs paper charts
Narasimhadevara (2008) | Interface for transplant nurses | Inability to edit certain documents led to poor usability ratings for control; overall high scores for helpfulness, learnability, efficiency
Fonda (2008) | Internet-based diabetes management program | Neutral to favorable usability scores; higher ratings for visual appeal and content vs ease of use, performance, and support features

Because authors included multiple outcome variables, the details in Table 2 are divided into 50 separate studies. Studies were then classified into three categories based upon the goals of human factors research: effectiveness (24/50), efficiency (10/50), and satisfaction (16/50). The study design and aims, sample, setting, methods, and findings were extracted from each relevant article.

Table 2. Evidence Table of Clinical Technology Design (User Interface) Studies

Source | Study Design, Aims | Sample, Setting | Methods | Findings
Effectiveness
Lindberg, C.S (1997) Evaluated home telemedicine device for usability 4 RNs; 4 rural project sites, 38 elderly or disabled people, Kansas, USA 3 case studies of patients; most usability issues detected during informal interviews between telemedicine nurses and the VP of Nursing. Adequate sound quality is important but technically difficult to achieve. Nurses found the computerized data collection process awkward and time consuming and the manner of answering questions cumbersome (related to a software design problem). Core of the problem is a mismatch between telemedicine and standard nursing procedures and protocols
Effken, Kim & Shaw (1997) Study 1: Compared 3 display types-traditional strip-chart or TSD, integrated balloon- IBD, etiological potentials-EPD on time to initiate treatment, detect critical events.
Study 2: Displays changed, same variables
Study 3: Studied novice, expert nurses
Study 1: 19 Psychology undergraduates
Study 2: 13 Psychology undergraduates
Study 3: 6 expert, 6 novice critical care nurses Simulation lab, Connecticut, USA
Study 1: 6 trials (3 scenarios x 2 trials) for common ICU clinical problems
Study 2: Same methods except subjects allowed to practice with display
Study 3: Same displays, methods.
Study 1: Display type not significant on time to initiate treatment; TSD and IBD had greater number of drugs than EPD; Percentage in target range greater for EPD
Study 2: EPD < IBD < TSD for time to initiate treatment and number of drugs used
Study 3: Experts outperformed novices in time in target range when using TSD, IBD. Experts did not initiate treatment later nor did they use fewer drugs than novices
Lin, Isla, Doniz, Harkness, Vicente, Doyle (1998) Study 1: Determined design flaws for patient-controlled analgesia PCA
Study 2: Compared old/new PCA perceived mental workload
Study 1: 9 nurses, Recovery room, large medical center
Study 2: 12 nursing students lab setting, Canada
Study 1: Interviews, observations PCA pumps.
Study 2: Used common programming tasks workload measured by NASA-TLX
Study 1: Complex programming sequences in old PCA; no way to remind users how many parameters to program into PCA, sequence, steps completed, remaining
Study 2: Subjects exposed to the old interface first benefited most; the reverse order benefited less. No differences in mental workload.
Terazzi, Giordano, Giuseppe (1998) Evaluated home-grown clinical lab procedures software against known commercial software 44 users in cardiology, in-house lab, third-party lab (14); hospital & rehabilitation institute, Italy Interviews User interviews confirmed perceived lack of control influenced by unexpected program faults (e.g. software stops unexpectedly) occurring after initial implementation
Fuchs, Heller, Topilsky, Inbar (1999) Evaluated a computerized clinical decision support system (CaDet) used to detect cancer on validity, reliability, usability Study 1: 250 case histories of cancer patients
Study 2: 60 out-patients
Study 3: 5 general practitioners, 60 patients, Israel
Study 1: Determined relative risk of patients having cancer, need for provider action
Study 2: Compared CaDet with actual clinical information
Study 1: 82% of tumor cases would have been called to the physician’s attention before the cancer was diagnosed.
Study 2: CaDet alerted 3 cancer cases by high scores, 6 others had no cancer but nonmalignant pathology justified diagnostic procedures. No alert in 51 patients and no cancer found
Alberdi, Gilhooly, Hunter, Logie, Lyon, McIntosh, Reiss (2000) Evaluated computerized trend monitoring system (MARYTM) on use for clinical decision making 15 physicians, 19 nurses interviewed; 15 physicians, 10 nurses in simulated trends; NICU, UK 8 observation sessions for 1–2 hrs. Think aloud sessions during 14 simulated trend graphs reviews < 50% would use MARY as primary source. Would observe baby first; use computerized monitors next. Jr. doctors relied on information from direct patient contact, Sr. doctors relied on data provided by monitors
Effken, Doyle (2001) Compared cognitive style (visual or verbal) and 3 interface designs - Traditional Strip Chart Display (TSD), Integrated Balloon Display (IBD), Etiologic Potentials Display (EPD) - on time to initiate treatment, number of drugs used, time in target range 18 undergraduate nursing students, Arizona Health Sciences Center, US Computer simulations of hemodynamics using 3 scenarios (hypertension, heart failure, fluid overload) in 18 trials; visual, verbal scales from Richardson’s Verbalizer-Visualizer Questionnaire Clinical problems treated most successfully with EPD (80%). Patients in target range more often with EPD > IBD, TSD. Visualizers kept patient within target range 54% of time; verbalizers 44%. Visualizers (79%) corrected more problems than verbalizers (60%). Students quicker to initiate treatments, used fewer drugs with EPD > IBD, TSD. Visualizer scores were related to the percentage of time the system was kept in the target range using the EPD.
Horsky, Kaufman, Oppenheim, Patel (2003) Study 1: Characterized interaction complexity
Study 2: Identified sources of error, performance with an order entry system
Study 1: 2 researchers
Study 2: 7 physicians, lab setting
Development version of a commercial system
Study 1: Modified cognitive walk-through using 7 orders
Study 2: 6 clinical scenarios, wrote orders, talked aloud, were videotaped x 1 hr
Study 1: Considerable demands on user cognitive resources (details given). Users must remember system-specific knowledge at strategic points
Study 2: Errors per user = 1–5: 5 omitted orders, 2 wrong allergy data, 1 wrong order set, others; patient safety implications. Heavy cognitive demands, especially on users lacking a conceptual model of the system
van der Meijden, Solen, Hasman, Troost, Tange (2003) Compared workstations (graphical, electronic record for stroke patients) on work coordination 12 physicians, acute care hospital, Netherlands Audiotaped, transcribed in-depth interviews; short questionnaires about use; stroke UI evaluated via chart reviews, usage logs; interviews coded, analyzed using Nud*IST. Usage pattern of the GUI varied by type of clinical area. Four points emerged after analyzing interview data: system information and quality, use, training and support, communication. Three themes were rated as very important by users of the stroke UI: system information quality, use, project communication. Only the stroke interface had all required functionality. Number of available workstations too limited. Consulting physicians refused to use the stroke interface
Zhang, Johnson, Patel, Paige, Kubose (2003) Evaluated 2 1-channel volumetric infusion pumps against 14 heuristics to determine patient safety issues 4 graduate students (IT, Psychology); 2 different 1-channel volumetric infusion pumps, Houston, TX USA Used 14 heuristic evaluation factors (combined Nielsen and Shneiderman factors, tailored to medical devices) 192 heuristic violations (against recommended methods) across 89 usability problems in pump 1; 121 violations across 52 usability problems in pump 2. Pump 1 had more serious problems likely to lead to medical errors. The volumetric infusion pumps were not identified by the authors.
Graham, Kubose, Jordan, Zhang, Johnson, Patel (2004) Evaluated three channel IV infusion pump interface against 14 heuristics 3 cognitive science experts; 1 senior ICU RN; One 3-channel IV pump, USA Used 14 heuristic evaluation factors (from Nielsen, Shneiderman and tailored to medical devices) 231 heuristic violations (deviations from recommended design methods); most violations in consistency, language. Fewest violations occurred under Help and Documentation when help was needed. Severe violations requiring immediate attention were across factors including consistent meaning of words, flexibility in creating shortcuts, and undo which allows a user to reverse actions to recover from errors.
Liu, Osvalder (2004) Compared numerical, graphical ventilator displays on meaning of deviations from normal 6 expert ICU nurses, university hospital; usability testing: 20 nursing students, Sweden 6 task-randomized scenarios; 4 pilot tests prior to usability study. Nurses had difficulty understanding traditional numerical diagrams; ventilator modes modified based on interviews. Graphical interface induced fewer errors about the meaning of deviations
Patterson, Nguyen, Halloran, Asch (2004) Determined potential barriers to the use of computerized reminders 2 pilot, 6 study sites; 59 interviews of physicians, nurses, pharmacists, others; 29 observations of attendings, 1 nurse, 4 case managers, VA, USA 2 observers did field observations while providers used 10 HIV clinical reminders; semi-structured interviews, handwritten notes 6 of 19 barriers reduced effectiveness of HIV reminders at more than one site: workload, time to document why a reminder did not apply, inapplicability to the situation, training, quality of patient-provider interaction, use of paper forms
Johnson, Johnson, Zhang (2005) Developed, evaluated a new interface for family history. Study 1: Design requirements; Study 2: User needs; Study 3: Compared new UI to 3 other pedigree drawing programs on functionality, usability; Study 4: Heuristic evaluation of new UI; Study 5: Usability test of new vs old on time. Study 1: Healthcare providers in Texas; Study 2: 481 members from Genetic Counselors Society; Study 3: 2 unspecified experienced users; Study 4: 8 unspecified subjects; Study 5: 16 unspecified subjects. Study 1: Task analyses, heuristic evaluation of old UI, open-ended questions; Study 2: Survey for functional needs; Study 3: Entered 10 families’ data into the original family pedigree program; Study 4: 12 common tasks with new interface, heuristic evaluation by researcher; Study 5: 12 tasks. Study 1: Current problems - visibility, consistency, use of natural language, informative feedback, minimizing memory load, reversible actions, error messages, flexibility; Study 2: Most used function = drawing a pedigree freehand; 30% used computers to collect family history; Study 3: Editing time for direct manipulation (2.6 min) vs form fill-in (10.5 min); Study 4: Major problems on new UI for how to begin and continue data entry and how to label information on the pedigree; redesigned; Study 5: 13–14 min less time on redesigned version.
Hun Yoo, Chul Yoon (2006) Evaluated a mobile diabetes management system on difficulty of use, task completion times 40 participants, virtual lab 2 tasks in same order; developed a combined difficulty index (CDI) relating information, interface, and task procedure experience; simulator tracked mouse movements, task times Comparisons of CDI and user performance tests closely related; the number of interfaces and available operations affect the cognitive operations and difficulty experienced by users. Mean time for Tasks 1 and 2 was 134.2 s and 67.2 s, respectively; error rate was lower for Task 2.
Tang, Johnson, Tindall, Zhang (2006) Study 1: Compared 2 early prototypes for Emergency Medical System on usability heuristics
Study 2: Conducted a field study of 4th prototype
Study 1: 3 usability experts
Study 2: 2 paramedics
Houston TX, USA
Study 1: First, second of 4 prototypes compared on 14 usability heuristics; severity rating 0–4 (none to catastrophic problem)
Study 2: Videotaping of 2 ambulance runs
Study 1: 1st, 2nd prototype - Usability problems 45/26; heuristic violations 93/47, avg. severity rating 2.84/2.80. Most due to consistency, visibility, match to the real world violations
Study 2: 48 usability problems of which 21 described as major or catastrophic requiring immediate attention, such as not providing visible feedback when a user created a new patient record during an emergency run and also, lack of data synchronization among different system components; 6/21 problems had negative effect on paramedic performance during emergencies.
Allen, Currie, Bakken, Patel, Cimino (2006) Evaluated paper-based screen shots from a website using a condensed set of heuristics 4 usability experts, 18 screen shots, lab setting, eastern US 5 heuristics 100 violations; 41% consistency; 41% minor; 22% low priority; 22% major usability problems; 6% usability catastrophes. Designers able to fix 70% of all issues. Validated the use of just 5 (vs 14) heuristics
Despont-Gros, Rutschmann, Geissbuhler, Lovis (2007) Evaluated a digital pen & paper technology on fit with work processes 33 ER nurses, University hospital ER, Geneva, Switzerland Pre-trial observation to understand the triage process; ethnographically-informed observations over 7 days; acceptance survey (developed by the authors, 5 axes). Recorded 1183 triage forms from users; 22 surveys. ER: interruptive patterns, complex lifecycle of triage form, intricate user interactions on form, speed of decisions
Pen: improvement; acceptance “high” but unexpected cognitive burden (looks like a pen but does not behave like one; having to remember to validate data not typical for paper form; pen cap is a power switch)
Peute and Jaspers (2007) Determined critical problems with lab order entry system, analyzed critical data entry problems 2 analysts Usability testing: 7 users (3 neurologists; 4 neurologists in training), lab setting, Netherlands Cognitive walkthrough: 6 tasks, 29 actions. Coded goal problems (wrong task), action problems (doesn’t know correct action); Severity rated. Usability testing: Think aloud; 4 scenarios to order lab tests Cognitive walkthrough identified 25 potential usability problems (e.g. inflexibility of system, inability to navigate, visibility and incomprehensible button labels).
Usability testing confirmed cognitive walkthrough; 8 more problems; errors of omission, inefficient order behavior
Staggers, Kobus, Brown (2007) Study 1: Determine functions for medication activities
Study 2: Determine accuracy for a novel electronic medical administration record (eMAR) design
Study 1: 12 military nurses; 2 medical centers, 1 primary care clinic, eastern and western US
Study 2: 20 Navy clinical nurses military medical center, western US
Study 1: Videotaped interviews with talk-aloud and semi-structured questions
Study 2: 9 “Typical” medication process tasks.
Study 1: Created process flow diagrams and prototype eMAR
Study 2: 90% of all medication tasks completed correctly (low); errors in finding most current medications, routes with patient safety implications
Wallace, Bigelow, Xu, Elstein (2007) Compared 3 patient care guideline interfaces (2 proprietary, 1 homegrown) on search success 30 RNs, health science center, USA 18 clinical scenarios Higher percentage successful attempts for homegrown interface, fewer wrong items. Unsuccessful searches (18%), incomplete searches (12%). Wrong items indicate need to refine document format, content, indexing
Edwards, Moloney, Jacko, Sainfort (2008) Determined usability issues for a commercial electronic health record (with order entry, medication administration, clinical documentation) 4 usability theory or practice experts; 3–4 nursing, respiratory care experts, pediatric hospital, southeast USA Experts used heuristic walkthrough: 1) task-focused analysis, 2) comparison to Nielsen’s heuristics 134 potential usability issues, mostly for admission and orders functions (44% and 28%) and navigation/layout (15%); 10% anticipated to be severe, most minor
Lamy, Venot, Bar-Hen, Ouvrard, Duclos (2008) Compared text, graphical interfaces by question type (explicit, implicit) on errors 11 general practitioners, France Graphical had gray anatomical, functional pictograms; textual had drug monograph excerpts. Searched for answers to questions Correct responses higher with graphical (16 vs 27). Most errors involved contraindications or drug interactions: GUI (5), TI (18).
Wu, Orr, Chignell, Straus (2008) Evaluated a prototype handheld device with an electronic medical record for usability issues, functionality 5 family physicians; 4 internists from different settings, Toronto, Canada 3 clinical scenarios; think-aloud sessions, audio & video recorded 52/54 required tasks completed. 5 major themes emerged from the usability sessions: design and system characteristics; device dimensions, including difficulty entering information on small devices; ability to review the record and completeness of information; ability to order tests, add comments to orders, and confirm orders; and integration of preferred functionality such as decision support and billing systems.
Efficiency
Staggers & Mills (1994) Compared 3 levels of lab information density displays on speed, accuracy 110 clinical nurses (ICU; medical surgical; maternal-child) in a medical center, eastern US Character-based lab data; 40 tasks in 5 interaction blocks Information found twice as fast on high versus low density screens overall and after practice. No difference in accuracy; error rate was 4%
Mills & Staggers (1994) Correlated spatial memory, spatial visualization, perceptual abilities with nurses’ time, errors for 3 levels of lab info 110 clinical nurses (ICU; medical surgical; maternal child), medical center, eastern US Character-based lab data; 40 tasks in 5 blocks Nurse cognitive characteristics predicted 35.9% of speed, 21.5% of accuracy. Younger nurses with higher spatial memory were faster on high- and low-density screens; nurses with higher spatial visualization were faster on moderate density
Lin, Isla, Doniz, Harkness, Vicente, Doyle (1998) Study 1: Determine design flaws for a new patient-controlled analgesia (PCA)
Study 2: Compared old, new PCA on time, errors
Study 1: 9 nurses, Recovery room, large medical center
Study 2: 12 nursing students in a lab setting, Canada
Study 1: Interviews, observations PCA pumps.
Study 2: Used common programming tasks (PCA, Continuous, PCA + Continuous) repeated x 2 for each UI, workload measured by NASA-TLX
Study 1: Complex programming sequences in old PCA; no way to remind users how many parameters to program into PCA, sequence, steps completed, remaining
Study 2: Mean programming time 15% faster with new interface. Subjects exposed to the old interface first benefited most; the reverse order benefited less. 10 errors on new PCA, 20 on old
Staggers & Kobus (2000) Compared graphical, text-based interfaces on time, errors 93 nurses, military medical center, eastern US 10 blocks of tasks 2× faster response time with graphical; errors 6× greater with text
Lin, Vicente, Doyle (2001) Compared two PCA interface designs on time, errors 12 recovery room nurses, Toronto General Hospital, Canada Performed 6 tasks on each Task completion faster with new > old. More errors with old (29) > new (13) for PCA mode selection; new UI (8) > old (4) for bolus mechanisms
Liu, Osvalder (2004) Compared numerical, graphical ventilator displays on deviation detection time, error rates 6 expert ICU nurses, university hospital; usability testing: 20 nursing students, Sweden 6 task-randomized scenarios; 4 pilot tests prior to usability study No differences in deviation detection time or error rate in assessing the overall picture
Wallace, Bigelow, Xu, Elstein (2007) Compared 3 interfaces for patient care guidelines (2 proprietary, 1 homegrown) on time 30 RNs, health science center, USA 18 clinical scenarios Time to correctly identify items ranged from 3.0–3.4 minutes across the three interfaces; unsuccessful attempts ranged from 4.7–5.4 minutes. Unsuccessful or incomplete search attempts were attributed to document format and organization of the interfaces.
Martins, Shahar, Goren-Bar, et al., (2008) Compared 3 displays (KNAVE II, ESS, paper) on efficiency, accuracy of finding answers in time oriented clinical data typical for oncology protocols Study 1: 8 MD/PhD students, residents, and fellows
Study 2: 5 physicians USA, Israel
Study 1: 10 clinical queries of increasing complexity
Study 2: 6 queries of increasing difficulty
Study 1: No difference in time overall. KNAVE II faster for hard, hardest queries. Easy queries faster in ESS, paper. Higher accuracy with KNAVE
Study 2: All completed < 30 min with KNAVE, ESS; ran out of time with paper. More accuracy using KNAVE II vs ESS (110/120 correct versus 69/120)
Lamy, Venot, Bar-Hen, Ouvrard, Duclos (2008) Compared text (drug monograph), graphical (pictograms) interfaces by question type (explicit, implicit) on response time 11 general practitioners, France Searched for answers to questions Responses 2× faster with graphical than text; for explicit question types, time less on graphical (15%) vs text (70%).
Narasimhadevara, Radhakrishnan, Leung, Jayakumar (2008) Usability testing of new transplant interface for nurses on learning time Stage 1: 3 transplant nurses
Stage 2: Previous 3 nurses, 1 head nurse, 1 patient care assistant, 8 nurses
Stage 3: 10 transplant nurses, Montreal Canada
Stage 1, 2: Used short, iterative development cycles.
Observations, extensive-note taking.
Stage 3: Satisfaction measured by Software Usability Measurement Inventory or SUMI
Global median SUMI scores, including efficiency, affect, helpfulness, control, and learnability, were > 60; greater than 50 is considered an indicator of good-quality software in usability metrics.
Satisfaction
Staggers & Mills (1994) Compared 3 levels of lab information density displays on user satisfaction 110 clinical nurses (ICU; medical surgical; maternal-child), medical center, eastern US Character-based lab data; 40 tasks in 5 interaction blocks User satisfaction greatest for the high-density screen overall and after practice with all screens
Mills & Staggers (1994) Correlation of spatial memory, spatial visualization, perceptual abilities on nurses’ speed, accuracy, on 3 levels of lab info 110 clinical nurses (ICU; medical surgical; maternal child), medical center, Eastern US Character-based lab data; 40 tasks in 5 interaction blocks on user satisfaction No relationship between cognitive variables and user satisfaction with any display types
Fuchs, Heller, Topilsky, Inbar (1999) Evaluated a clinical decision support system (CaDet) to detect cancer on ease of use Study 3: 5 general practitioners, 60 patients Tel Aviv, Israel Study 3: Patients completed a friendliness questionnaire. Physicians completed questionnaire on design, ease of use, contribution to cancer detection, acceptability Study 3: All patients found CaDet easy to use. All physicians found CaDet easy to use, thought it acceptable and that it would make a contribution in cancer detection.
Staggers & Kobus (2000) Compared graphical, text–based interfaces on user satisfaction 93 nurses, military medical center, eastern US 10 blocks of task trials. Used Questionnaire for User Interaction Satisfaction Satisfaction greater for graphical interface than text
Lin, Vicente, Doyle (2001) Compared two PCA designs on interface preference, mental workload 12 recovery room nurses, Toronto General Hospital, Canada Performed 6 tasks on each. Perceived mental workload by NASA-TLX 9 preferred new interface, 1 preferred old, 2 no preference. Workload reduction for new > old for continuous, PCA + continuous modes tasks
van der Meijden, Solen, Hasman, Troost, Tange (2003) Compared workstations (graphical, electronic record for stroke patients) on satisfaction 12 physicians, hospital setting, Netherlands Short questionnaire about user satisfaction Communication about future goals and intended benefits between management and end users was not optimal. Had there been a better dialogue between management, end users, and developers/implementers, then expectations, plans, fears, and wishes could have been exchanged; clinical workstations could have been exploited much better. Some systems like the stroke electronic patient record had greater impact on users’ work and users preferred paper formats to electronic.
Lin, Jan, Chow-In Ko, Chen, Wong, and Jan (2004) Wireless-PDA-based physiological monitoring system evaluated on Technical Verification (TV), usability perceptions compared to older commercial pulse oximetry Study 1: TV: 20 health volunteers (11 males, 1 female)
Study 2: Usability: 50 medical personnel (30 nurses; 20 doctors)
Emergency Department Taiwan
Study 1: TV: Compared pulse oximetry of wireless device to commercial oximetry.
Study 2: Usability: Used device x 1 mo, answered usability questions. Survey evaluated overall system (1–5 Likert scale 5=completely satisfied), 3 areas: mobility (size and weight), usability (easy operation; easy monitoring), performance during patient transport. (1–10 Likert scale; 10=completely satisfied)
Study 1: TV of new device: Error in pulse oximetry; < +2%; error in heart rate < +2 beats per minute. No error in real time data transmission.
Study 2: Medical staff gave a high rating on performance, 4.64/5.00. New outperformed older models in mobility (weight 8.8 vs 4.7; size 8.9 vs 4.9) and usability (easy operation 8.6 vs 5.1; easy monitoring 8.7 vs 5.1)
Liu, Osvalder (2004) Compared numerical, graphical displays on preferences 6 expert ICU nurses, university hospital; usability testing: 20 nursing students, Sweden 6 task-randomized scenarios; 4 pilot tests prior to usability study Nurses preferred alarms with fewer hierarchical levels
Hortman & Thompson (2005) Evaluated an outcomes database on user satisfaction 4 nursing faculty, 1 student; laboratory setting, midwest US Questionnaire for User Interface Satisfaction Mean scores 3–8 (9-point scale). Specific comments: unclear date fields, unclear methods to enter vital signs, tab stops not in a logical order, limited space to type “reason for visit”
Johnson, Johnson, Zhang (2005) Developed, evaluated a new interface for family history. Study 5: Usability test of newly revised vs original on user satisfaction Study 5: 16 unspecified subjects; randomized to original, redesigned interface Study 5: Did 12 tasks, completed Computer System Usability Questionnaire. Study 5: User satisfaction improved
Chaikoolvatanna, Haddawy (2006) Evaluated a provider multi-media diabetes management program on usability 12 students (2 nursing, 3 pharmacy, 7 volunteers), lab setting, Thailand Interacted with the diabetes program, completed a 20-item survey developed by the authors about the program Found computer literacy issues. Students “generally thought the program was easy to use” but it took too long to complete
Staggers, Kobus, Brown (2007) Study 2: Determine user satisfaction for a novel electronic medical administration record (eMAR) design Study 2: 20 Navy clinical nurses military medical center, western US Study 2: Used 9 “Typical” medication process tasks.
Questionnaire for User Interaction Satisfaction
Study 2: High QUIS scores; 7.2–7.9 (on a 9-point scale)
Wallace, Bigelow, Xu, Elstein (2007) Compared 3 interfaces for patient care guidelines (2 proprietary, 1 homegrown) on user perceptions 30 RNs, health science center, USA 18 clinical scenarios; 5-point Likert survey (5 = very positive) of user perceptions Average ratings for information-seeking sessions were higher with successful search outcomes vs unsuccessful, a finding related to unsuccessful sessions taking longer, finding wrong information, and information not making sense to the user. Ease of finding information was rated between 1.5–2.0 out of 5.0 for most positive responses.
Fonda, Paulsen, Perkins, Kedziora, Rodbard, Bursell (2008) Explored participants’ expectations, interpretations, and use of the functionality of an internet-based Comprehensive Diabetes Management Program (CDMP) 5 nurses (3 diabetes nurse educators, 1 home health nurse, 1 urban hospital nurse) and 1 physician from an urban hospital, Boston MA, USA Observations (mouse movements, task paths, verbalizations, recorded errors); Usability Score survey on visual appeal, content, ease of use, performance, support features; interview data on user impressions Usability Scores neutral to favorable (range 3.20–4.04); higher for visual appeal, content vs ease of use, performance, support features. Participants wanted the ability to customize the application. Participants’ mental model did not match functionality. Participants did not quickly grasp all terminology
Martins, Shahar, Goren-Bar, et al (2008) Compared 3 interfaces (KNAVE II, ESS, paper) on efficiency, accuracy of finding answers in data, user satisfaction Study 1: 8 MD/PhD students, residents, and fellows
Study 2: 5 physicians USA, Israel
Study 1: 10 clinical queries of increasing complexity; user satisfaction measured by Standardized Usability Score questionnaire
Study 2: 6 queries of increasing difficulty
Study 1: KNAVE II had higher usability scores
Study 2: Higher usability scores with ESS
Narasimhadevara, Radhakrishnan, Leung, Jayakumar (2008) Usability testing of new interface on perceived ease of use, user satisfaction Stage 3: 10 nurses, transplant ward, Montreal Canada Stage 3: Satisfaction measured by Software Usability Measurement Inventory (SUMI) SUMI avg 62; range 57–63 (good) on all except the Control subscale; low control scores due to not giving RNs control to change medications. Learnability measured through SUMI was greater than 60.

Evaluations in Effectiveness

Authors of 24 studies evaluated effectiveness aspects of user interfaces. Effectiveness is the usefulness and safety of an interface in completing a task (see Figure 2). Seven studies illustrate the variability of the types of software being tested, for example, software that automatically created a family pedigree diagram from family history data, a mobile emergency medical services record for paramedics, a laboratory procedures system, and a nurse practitioner outcomes database with graphics.10–13 Researchers found that users were more successful searching for information on homegrown interfaces versus proprietary ones, that users prefer systems that reduce cognitive effort, and that complex queries could be answered more successfully with graphical interfaces vs paper.10,14–16 In device/system reviews using heuristics, researchers also found severe usability problems caused by limited information visibility and faulty data synchronization, possibly leading to medical errors. Limited system flexibility and poor navigation caused users to get lost in applications, and confusion about what labels mean created the potential for patient harm.11,17 To avoid some of these circumstances early in the design process, researchers recommend including users in the development lifecycle to identify users’ needs and expectations of design requirements.18,19

Authors of four studies examined the effectiveness of graphical interface designs on clinician decision making for stroke patients, ventilator-dependent patients, and patients requiring hemodynamic monitoring, and the safety of using a novel electronic medication administration record. Graphical designs improved initiating treatments, determining needed medications, and detecting patients’ deviations from normal physiological parameters; visual cognitive learning styles (versus verbal) gave clinicians a better ability to keep vital signs within a target range with advanced physiologic monitoring interfaces.20–22 However, nurses’ accuracy was low for medication tasks that required them to scroll beyond the current field of view in a new graphical medication record, despite substantial training with the interface.23

The authors of two studies evaluated the usability of IV pumps and judged their compliance with recognized design guidelines called heuristics. Authors found heuristic violations, or non-compliance with recommended design guidelines, for two different 1-channel volumetric IV pumps from two different vendors7 and one 3-channel pump commonly used in the ICU setting.8 The vendors and model numbers were not provided. The heuristic for consistency was violated most frequently. Inconsistencies do not allow users to determine the clear meaning of interface elements such as labels; for example, one pump button labeled “off” for one infusion channel could be confused with the pump “stop” button. Authors also found catastrophic usability errors in IV pumps. In one study, a pump adjustment was hidden on the rear of the pump handle; this location may cause an inadvertent setting change when a user is simply moving the pump. More important, the location makes the button hard to find when readjusting the pump back to normal.7
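To make the heuristic evaluation method concrete, the sketch below tallies expert-identified violations by heuristic and flags the severe ones, in the spirit of the 14-heuristic, 0–4 severity-rating approach used in the pump studies. It is a minimal illustration only: the Violation record, the severity threshold, and the sample entries are hypothetical, not data from the studies reviewed here.

```python
# A minimal sketch, not the authors' instrument: tallying heuristic
# violations from an expert review (severity 0 = not a problem,
# 4 = usability catastrophe). All records below are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Violation:
    heuristic: str   # e.g., "Consistency", "Visibility", "Undo"
    severity: int    # 0..4
    note: str

SEVERE_THRESHOLD = 3  # severities 3-4 warrant immediate attention

violations = [
    Violation("Consistency", 4, "'off' for one channel resembles pump 'stop'"),
    Violation("Visibility", 3, "no feedback on remaining programming steps"),
    Violation("Undo", 2, "no way to back out of a rate entry"),
]

by_heuristic = Counter(v.heuristic for v in violations)
severe = [v for v in violations if v.severity >= SEVERE_THRESHOLD]

print(f"{len(violations)} violations across {len(by_heuristic)} heuristics")
for v in severe:
    print(f"SEVERE [{v.heuristic}]: {v.note}")
```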

Two studies evaluated patient-controlled analgesia (PCA) pumps. In these studies, complex programming sequences and multiple user modes increased nurses’ mental workload; a redesign of the PCA interfaces reduced cognitive load and potential errors in programming the devices.24,25 Another set of authors caution that devices can be very confusing when they look like a familiar object (a pen) but behave differently (the cap on the pen was a power button).26 These kinds of designs can result in increased cognitive burden, training, and/or redesign.

Authors of remote/mobile device studies examined telemedicine in home health environments,27 electronic diabetes management programs,28–30 and a handheld electronic medical record for physicians.31 Poor sound and visual quality interfered with effective patient assessments, and a mismatch between manual nursing assessment practices and an early telemedicine device design caused delays and difficulties in completing care assessments.

Two different clinical decision support systems were evaluated: a cancer detection system and clinical reminders for HIV patients.32,33 Researchers assessed the ability of a system to accurately diagnose and inform clinicians. In the HIV reminder study, researchers uncovered barriers that reduced the effectiveness of the reminders: workload, the time required to document information about the reminder, and duplicative paper form systems, among others.

One set of authors evaluated a commercial electronic health record in a clinical setting.34 Researchers identified a total of 134 usability issues; 13 (10%) were potentially severe. For example, long, multi-level screens were confusing to use during admission documentation procedures while clinicians simultaneously obtained a medical history from patients; subsequently, clinical documents in the EHR had to be reconfigured by the vendor before use.

Evaluations of Efficiency

Efficiency aspects (Figure 2) examine productivity (time), costs, efficiency errors, and learnability (the capability of a software product to enable a user to learn how to use it). Accuracy is also important here because inaccurate keystrokes take more time, affecting user costs and productivity. Five of the 10 efficiency studies were evaluations of graphical interfaces. For example, researchers found that a 3-fold increase in on-screen information density allowed users to be twice as fast without affecting accuracy, because users did not have to page between screens to find data.35 Graphical user interface designs compared with text or paper systems also allowed clinician users to be twice as fast and more accurate in keystrokes.15,36–38
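As a concrete illustration of these efficiency measures, the sketch below computes mean task time and error rate from usability-test trial logs, similar in spirit to the time-and-accuracy comparisons in the screen density studies. The trial records and field layout are invented for illustration, not data from the studies.

```python
# Hypothetical sketch of computing the efficiency outcomes used in these
# studies (task time, error rate) from usability-test logs. The trial
# records below are invented.
from statistics import mean

# Each trial: (participant, screen_density, seconds, correct)
trials = [
    ("rn01", "high", 12.4, True),
    ("rn01", "low", 25.1, True),
    ("rn02", "high", 10.9, False),
    ("rn02", "low", 22.8, True),
]

def summarize(density):
    """Mean completion time and error rate for one display condition."""
    subset = [t for t in trials if t[1] == density]
    avg_time = mean(t[2] for t in subset)
    error_rate = sum(not t[3] for t in subset) / len(subset)
    return avg_time, error_rate

for density in ("high", "low"):
    avg, err = summarize(density)
    print(f"{density} density: mean time {avg:.1f}s, error rate {err:.0%}")
```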

New user interfaces enhanced users’ performance. Researchers demonstrated that improved designs for PCA pumps allowed users to avoid complex programming sequences, reducing time and errors.24,25 Design can also affect search times for clinical information: one study compared search times for patient care guidelines among different displays and found that users spent nearly twice the search time with one display because of poor document format and organization in the interface.14

Evaluations of Satisfaction

User satisfaction is measured by the perceived effectiveness or perceived efficiency of the user interface. Satisfaction was measured in 16/50 studies; new interfaces involving user input for graphical displays and redesigned interfaces of all kinds had higher satisfaction ratings. User satisfaction was measured in studies that evaluated new types of software for clinical processes such as medication administration, order entry, or documenting on transplant patients (see Table 1). Usability problems that negatively affected user satisfaction included system inflexibility, poor navigation, poor information quality, lack of control of the system, and limited visibility of system status.39,40 Researchers found that users want interfaces that are intuitive, formats that make expected data input visible (e.g., MM/DD/YYYY for birthdates), and consolidated information with high-level information presented first.
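Because several of the satisfaction studies relied on standardized instruments such as SUMI, the following sketch shows how subscale scores might be summarized against SUMI’s conventional benchmark (scores above 50 indicating good-quality software, as cited in Table 2). The subscale names follow SUMI, but the response data are invented for illustration.

```python
# Hedged sketch: summarizing a SUMI-style satisfaction instrument by
# subscale medians against the >50 "good quality" benchmark cited in
# Table 2. The response data below are invented.
from statistics import median

responses = {  # subscale -> standardized scores from 10 respondents
    "efficiency": [62, 58, 65, 60, 57, 63, 61, 59, 64, 60],
    "affect":     [55, 61, 58, 60, 62, 57, 59, 63, 56, 60],
    "control":    [42, 45, 40, 48, 44, 46, 41, 47, 43, 45],
}

GOOD = 50  # SUMI convention: >50 indicates good-quality software

for subscale, scores in responses.items():
    med = median(scores)
    flag = "ok" if med > GOOD else "below benchmark"
    print(f"{subscale:<10} median {med} ({flag})")
```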

Clinicians want technology that is easier to operate and easier to understand, such as alarms with fewer hierarchical levels.22,41 Technologically savvy clinicians also want an option to customize the interface for their own use; for example, some clinicians want to dial in target ranges for specific measurements for their patients.28

DISCUSSION

This systematic review outlines the existing research for the design of clinical technology across its outcomes of effectiveness, efficiency and satisfaction. The majority of current studies evaluated effectiveness aspects of clinical technology interfaces. Studies about interface efficiency were fewest in number. Of course, a blend of these goals would be optimal to assure efficient and effective clinical technology design.

Current Research on Technology Design

Current research ranges across a myriad of technology interface types. The types of interfaces examined to date have no apparent pattern nor have they been assessed with any obvious rationale such as their frequency of use in clinical settings.

Although usability studies have not yet penetrated health care widely, researchers have discovered elements of design worth attention. For example, nurses find information on dense screens faster than, and as accurately as, on less dense screens. Thus, designers will want to include dense screens in systems so that clinicians avoid unnecessary movement between screens to search for information. The caveat is that dense screens need to include pertinent information, which means that designers will need to understand how clinicians make decisions and with what information. More careful attention should also be paid to attention-grabbing methods for data located outside nurses’ field of view, as such data can easily be missed even when nurses are trained on an application.

Graphical designs facilitate both efficiency and effectiveness. These designs improve time to treatment, detection of physiologic parameter deviations, and time to complete a wide variety of tasks (e.g., orders, lab procedures, searching for clinical data). A graphical design is especially important for tasks requiring navigation across applications or screens in a system and can improve performance as much as two-fold.36

Researchers overall found improvements in redesigns of older interfaces and in iterative designs created in combination with user testing. Initially, readers might ascribe this finding to publication bias; however, its prevalence across so many studies can also confirm the validity of the usability axiom of user-centered design and the value of usability testing.

Device evaluations and the sole assessment of an active EHR uncovered serious usability issues, such as unsafe programming of PCA and IV pumps and designs that interfered with critical processes such as documenting an admission history. These issues can be alarming; for instance, nurses were able to program a pump to give an inadvertent overdose without an alarm or warning. The Food and Drug Administration (FDA) currently requires usability testing for devices; however, the seriousness of the findings in the handful of studies here suggests that the FDA expand usability testing, that facilities assess the usability of devices as part of their purchasing processes, and that a department such as quality improvement evaluate devices, especially older ones, for their safety in their institutions.

Future Research Directions

Recommendations for future research are made in these areas: a) Expand the types, settings and participants for usability testing, b) Develop integrated displays, c) Expand outcome variables in usability studies.

Expand the Types of Evaluations, Settings and Participants

Types of Evaluations

The types of evaluated devices are limited to date; the interfaces of only a handful of devices have been formally evaluated, including two IV and two PCA pumps. A systematic method for evaluation is needed, such as assessing devices based upon their prevalence and use in clinical settings. Obviously, many more devices exist in clinical settings than have been examined to date. In an ICU setting alone, numerous physiological monitors and devices (invasive and noninvasive) have an array of alarms with distinctive tones and blinking lights of different colors and shapes, all demanding attention.

Common tools such as IV pumps and the one evaluated EHR had serious usability violations. To ensure safe practice, usability evaluations of clinical technology need to be greatly expanded to alleviate potential hazards. Even more important, usability studies are critically needed to examine the cognitive burden, errors, and workflow issues that may exist across devices in clinical settings. How nurses learn, remember, and use the myriad of devices is worthy of more investigation, as is how to design technology to work symbiotically across tools. A national database of device assessments is needed, particularly for older models with known safety issues.

The Institute of Medicine42 (IOM) encourages the adoption of health information technology as one solution to medical errors. Yet, only one set of authors evaluated an active EHR. HIMSS Analytics reported that over 1,300 US hospitals have at least computerized clinical documentation in place.43 With the impetus to increase EHR implementations, increased health information technology funding in 2009, and the increasing infiltration of EHRs into diverse sites, usability assessments of commercial EHRs are needed to better understand the impacts of these products. Although some vendors incorporate prototyping and usability testing into their development cycles, this practice is not yet widespread. EHR components should be rigorously and iteratively tested using human factors principles by vendors, representative end users and HF experts to assure adequate design before installation.

The majority of tested technologies are those in clinical practice, and the findings from these studies are striking, illustrating sources of potential error. Technology used for educational and administrative functions is under-represented, and expanding usability testing into these arenas would be welcome. HF evaluations of curricular software, especially commercially available products, are needed; such evaluations would provide important details about successes and failures for others as they plan to implement new models of learning. Optimal interfaces for nurse executives and administrators are another promising area for research.

Evaluation Settings

The majority of current research settings are laboratories or simulated clinical settings. In the future, studies in naturalistic settings are highly encouraged. Such settings would allow researchers to examine the role of interruptions, competing demands, and other typical work issues within the context of a particular technology design. Naturalistic studies would provide new knowledge about how technologies are actually used in clinical practice, as opposed to artificial settings, and understanding the work-arounds nurses create and the competing demands they face would be illuminating.

Participants

Interdisciplinary teams participated in 2 device studies, and 11 interdisciplinary teams took part in interface assessments. The IV pump studies and two graphical interface studies used psychology students as participants. Future studies should include actual clinical users across types of nurses, including nurse anesthetists, who are seemingly absent from usability studies to date.

More studies are needed that emulate the kinds of teamwork that occur with clinical technology in actual sites. For instance, nurses and pharmacists are under-represented in evaluations of the impact of computerized provider order entry, despite the fact that both are integral to the orders management process and the safe execution of orders.44

Develop Integrated Displays

Computerized support is needed to help nurses integrate information across devices and EHR applications. Integrated data summaries would display pertinent patient data at key moments, such as at change of shift. Currently, nurses must integrate data and information from devices and EHRs themselves, typically by remembering data.45 Nurses gather data from various sources, organize the information, and apply knowledge to recognize untoward trends or symptoms. Clinicians complain that the “big picture” of the patient is difficult to obtain amid the sea of data in contemporary EHRs. A recent report from the National Academies Press46 recognized the urgent need for better cognitive support from EHRs, including help integrating data.
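
As a rough sketch of what such computerized support could look like, the example below merges timestamped observations from two hypothetical sources (a physiological monitor feed and an EHR flowsheet) into a single chronological change-of-shift summary and flags untoward values. The source names, vital signs, and alert thresholds are illustrative assumptions only.

    # Minimal sketch: one integrated, chronological data summary built
    # from several sources. All data and thresholds are hypothetical.
    from datetime import datetime

    monitor_feed = [("2009-07-01 06:00", "HR", 88), ("2009-07-01 07:00", "HR", 118)]
    ehr_flowsheet = [("2009-07-01 06:30", "Temp", 38.4)]

    def shift_summary(*sources):
        """Merge timestamped observations and flag untoward trends."""
        merged = sorted(
            (datetime.strptime(ts, "%Y-%m-%d %H:%M"), name, value)
            for source in sources for ts, name, value in source
        )
        for ts, name, value in merged:
            flag = ""
            if (name == "HR" and value > 110) or (name == "Temp" and value > 38.0):
                flag = "  << review"
            print(f"{ts:%H:%M}  {name:>4}: {value}{flag}")

    shift_summary(monitor_feed, ehr_flowsheet)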

Expand Outcome Variables in Usability Studies

The most commonly examined outcome variables were user satisfaction, heuristic violations, time, and errors. User satisfaction was an outcome variable in 16 studies. Yet user satisfaction provides only partial insight into technology design; a better assessment would allow investigators to understand why a design improves satisfaction. Moreover, nearly all researchers report high user satisfaction, although this finding may reflect publication bias. Other variables, such as performance measures (time, accuracy) and aspects of decision making (correct treatment, detection of adverse events, and patient safety errors), may be more telling in usability evaluations. An expanded list of variables is available elsewhere.2 Thoughtfully chosen outcome variables should be mainstays of future usability research, and EHRs in particular should be evaluated from a multi-modal perspective to assess both efficiency and effectiveness.
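
For example, performance-based outcomes can be summarized directly from usability session logs, as in the minimal sketch below. The log structure and values are hypothetical assumptions; a real study would also record which errors were safety-critical.

    # Minimal sketch: summarizing performance-based usability outcomes
    # (completion, task time, accuracy) from session logs.
    from statistics import mean

    sessions = [
        {"task_seconds": 95,  "errors": 0, "completed": True},
        {"task_seconds": 140, "errors": 2, "completed": True},
        {"task_seconds": 210, "errors": 1, "completed": False},
    ]

    completion_rate = mean(s["completed"] for s in sessions)
    mean_time = mean(s["task_seconds"] for s in sessions)
    errors_per_task = mean(s["errors"] for s in sessions)

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Mean task time:  {mean_time:.0f} s")
    print(f"Errors per task: {errors_per_task:.2f}")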

Last, the gap between research and practice needs to be bridged. Interface evaluations and the products of this research proved useful and productive, yet research products often remain fixed in the research arena. In the future, bridging this gap should be part of the researcher's agenda.

Limitations

This review included literature available in refereed journals; other relevant studies may exist in dissertations, reports, and unpublished venues. Future reviews may wish to examine studies from conference proceedings and in languages other than English. Synthesizing results across this myriad of studies, variables, devices, methods, and participants was particularly challenging, and additional insights are likely possible in this body of work.

CONCLUSION

Usability analyses are critically needed in clinical care settings to evaluate the myriad of equipment, monitors, and software that health care providers use to care for patients. These analyses provide necessary information about the cognitive workload, workflow changes, and errors arising from poor technology design. More examinations that include unstudied nursing specialties and settings are needed to provide rich, detailed accounts of experiences with clinical technology. More interdisciplinary work is needed to ensure that clinical systems are designed for the maximum benefit of all stakeholders, to increase understanding of information needs and requirements across settings, and to understand shared user performance with devices. Research needs to be conducted in actual practice settings, including rural and community settings, to identify both excellent and less optimal technology designs. Expanding this area of research would enable a better fit between nurses and technology, reducing errors and increasing nurses' productivity.

Acknowledgments

The project was supported by grant number K08HS016862 from the Agency for Healthcare Research and Quality (Alexander, PI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Contributor Information

Greg Alexander PhD, Email: alexanderg@missouri.edu, University of Missouri, Sinclair School of Nursing S415, Columbia MO 65211, Phone: 573-882-9346, Fax: 573-884-4544.

Nancy Staggers, Email: Nancy.staggers@hsc.utah.edu, Informatics Program, College of Nursing, 10 S. 2000 E, University of Utah, Salt Lake City, UT 84108, Phone: 801-699-0112, Fax: 801-581-4297.

Reference List

1. Kohn L, Corrigan J, Donaldson M. To Err Is Human. Washington, DC: National Academies Press; 1999.
2. Staggers N. Human-computer interaction. In: Englebardt S, Nelson R, editors. Information Technology in Health Care: An Interdisciplinary Approach. Harcourt Health Science Company; 2001. pp. 321–45.
3. Dix A, Finlay JE, Abowd GD, Beale R. Human-Computer Interaction. 3rd ed. Essex, England: Prentice Hall; 2004.
4. Carayon P. Handbook of Human Factors and Ergonomics in Health Care and Patient Safety. Mahwah, NJ: Lawrence Erlbaum Associates; 2007. pp. 3–5.
5. TIGER (Technology Informatics Guiding Education Reform). The TIGER Initiative: Collaborating to Integrate Evidence and Informatics into Nursing Practice and Education: An Executive Summary. 2009. http://www.tigersummit.com/uploads/TIGER_Collaborative_Exec_Summary_040509.pdf. Accessed May 12, 2009.
6. Sears A, Jacko J. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. 2nd ed. New York: Taylor and Francis Group; 2008.
7. Zhang J, Johnson TR, Patel VL, Paige DL, Kubose T. Using usability heuristics to evaluate patient safety of medical devices. Journal of Biomedical Informatics. 2003;36:23–30. doi:10.1016/s1532-0464(03)00060-1.
8. Graham MJ, Kubose TK, Jordan D, Zhang J, Johnson TR, Patel VL. Heuristic evaluation of infusion pumps: Implications for patient safety in intensive care units. International Journal of Medical Informatics. 2004;73:771–9. doi:10.1016/j.ijmedinf.2004.08.002.
9. Cochrane Collaboration. The Cochrane Manual. Cochrane Collaboration; 2006.
10. Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. Journal of Biomedical Informatics. 2005;38:75–87. doi:10.1016/j.jbi.2004.11.005.
11. Tang Z, Johnson TR, Tindall RD, Zhang J. Applying heuristic evaluation to improve the usability of a telemedicine system. Telemedicine and e-Health. 2006;12(1):24–35. doi:10.1089/tmj.2006.12.24.
12. Terazzi A, Giordano A, Minuco G. How can usability measurement affect the re-engineering process of clinical software procedures? International Journal of Medical Informatics. 1998;52:229–34. doi:10.1016/s1386-5056(98)00141-5.
13. Hortman PA, Thompson CB. Evaluation of user interface satisfaction of a clinical outcomes database. CIN: Computers, Informatics, Nursing. 2005;23(6):301–7. doi:10.1097/00024665-200511000-00004.
14. Wallace CJ, Bigelow S, Xu X, Elstein L. Usability of text-based, electronic patient care guidelines. CIN: Computers, Informatics, Nursing. 2007;25(1):39–44. doi:10.1097/00024665-200701000-00012.
15. Martins SB, Shahar Y, Goren-Bar D, et al. Evaluation of an architecture for intelligent query and exploration of time-oriented clinical data. Artificial Intelligence in Medicine. 2008;43:17–34. doi:10.1016/j.artmed.2008.03.006.
16. Horsky J, Kaufman DR, Oppenheim MI, Patel VL. A framework for analyzing the cognitive complexity of computer-assisted clinical ordering. Journal of Biomedical Informatics. 2003;36:4–22. doi:10.1016/s1532-0464(03)00062-5.
17. Peute LWP, Jaspers MWM. The significance of a usability evaluation of an emerging lab order entry system. International Journal of Medical Informatics. 2007;76:157–68. doi:10.1016/j.ijmedinf.2006.06.003.
18. Alberdi E, Gilhooly K, Hunter J, et al. Computerisation and decision making in neonatal intensive care: A cognitive engineering investigation. Journal of Clinical Monitoring. 2000;16:85–94. doi:10.1023/a:1009954623304.
19. Allen M, Currie LM, Bakken S, Patel V, Cimino JJ, Patel VL. Heuristic evaluation of paper-based Web pages: A simplified inspection usability methodology. Journal of Biomedical Informatics. 2006;39:412–23. doi:10.1016/j.jbi.2005.10.004.
20. Effken JA, Kim NG, Shaw RE. Making the constraints visible: Testing the ecological approach to interface design. Ergonomics. 1997;40(1):1–27. doi:10.1080/001401397188341.
21. Effken JA, Doyle M. Interface design and cognitive style in learning an instructional computer simulation. Computers in Nursing. 2001;19(4):164–71.
22. Liu Y, Osvalder AL. Usability evaluation of a GUI prototype for a ventilator machine. Journal of Clinical Monitoring and Computing. 2004;18:365–72. doi:10.1007/s10877-005-7997-9.
23. Staggers N, Kobus D, Brown C. Nurses' evaluations of a novel design for an electronic medication administration record. CIN: Computers, Informatics, Nursing. 2007;25(2):67–75. doi:10.1097/01.NCN.0000263981.38801.be.
24. Lin L, Vicente KJ, Doyle DJ. Patient safety, potential adverse drug events, and medical device design: A human factors engineering approach. Journal of Biomedical Informatics. 2001;34(4):274–84. doi:10.1006/jbin.2001.1028.
25. Lin L, Isla R, Doniz K, Harkness H, Vicente KJ, Doyle DJ. Applying human factors to the design of medical equipment: Patient-controlled analgesia. Journal of Clinical Monitoring and Computing. 1998;14(4):253–63. doi:10.1023/a:1009928203196.
26. Despont-Gros C, Rutschmann O, Geissbuhler A, Lovis C. Acceptance and cognitive load in a clinical setting of a novel device allowing natural real-time data acquisition. International Journal of Medical Informatics. 2007;76:850–5. doi:10.1016/j.ijmedinf.2006.11.001.
27. Lindberg C. Implementation of in-home telemedicine in rural Kansas: Answering an elderly patient's needs. Journal of the American Medical Informatics Association. 1997;4:14–7. doi:10.1136/jamia.1997.0040014.
28. Fonda SJ, Paulsen CA, Perkins J, Kedziora RJ, Rodbard D, Bursell SE. Usability test of an internet-based informatics tool for diabetes care providers: The Comprehensive Diabetes Management Program. Diabetes Technology & Therapeutics. 2008;10(1):16–24. doi:10.1089/dia.2007.0252.
29. Chaikoolvatana A, Haddawy P. The development of a computer-based learning (CBL) program in diabetes management. Journal of the Medical Association of Thailand. 2006;89(10):1742–8.
30. Hun Yoo S, Chul Yoon W. Modeling users' task performance on the mobile device: PC convergence system. Interacting with Computers. 2006;18:1084–100.
31. Wu RC, Orr MS, Chignell M, Straus SE. Usability of a mobile electronic medical record prototype: A verbal protocol analysis. Informatics for Health & Social Care. 2008;33(2):139–49. doi:10.1080/17538150802127223.
32. Fuchs J, Heller I, Topilsky M, Inbar M. CaDet, a computer-based clinical decision support system for early cancer detection. Cancer Detection and Prevention. 1999;23(1):78–87. doi:10.1046/j.1525-1500.1999.09902.x.
33. Patterson ES, Nguyen AD, Halloran JP, Asch SM. Human factors barriers to the effective use of ten HIV clinical reminders. Journal of the American Medical Informatics Association. 2004;11(1):50–9. doi:10.1197/jamia.M1364.
34. Edwards PJ, Moloney KP, Jacko JA, Sainfort F. Evaluating usability of a commercial electronic health record: A case study. International Journal of Human-Computer Studies. 2008;66:718–28.
35. Staggers N, Mills ME. Nurse-computer interaction: Staff performance outcomes. Nursing Research. 1994;43(3):144–50.
36. Staggers N, Kobus D. Comparing response time, errors, and satisfaction between text-based and graphical user interfaces during nursing order tasks. Journal of the American Medical Informatics Association. 2000;7(2):164–76. doi:10.1136/jamia.2000.0070164.
37. Mills EM, Staggers N. Nurse computer performance: Considerations for the nurse administrator. Journal of Nursing Administration. 1994;24(11):30–5. doi:10.1097/00005110-199411000-00008.
38. Lamy JB, Venot A, Bar-Hen A, Ouvrard P, Duclos C. Design of a graphical and interactive interface for facilitating access to drug contraindications, cautions for use, interactions and adverse effects. BMC Medical Informatics and Decision Making. 2008;8(21). doi:10.1186/1472-6947-8-21.
39. Narasimhadevara A, Radhakrishnan T, Leung B, Jayakumar R. On designing a usable interactive system to support transplant nursing. Journal of Biomedical Informatics. 2008;41:137–51. doi:10.1016/j.jbi.2007.03.006.
40. van der Meijden MJ, Solen I, Hasman A, Troost J, Tange HJ. Two patient care information systems in the same hospital: Beyond technical aspects. Methods of Information in Medicine. 2003;42:423–7.
41. Lin YH, Jan IC, Ko P, Chen YY, Wong JM, Jan GJ. A wireless PDA-based physiological monitoring system for patient transport. IEEE Transactions on Information Technology in Biomedicine. 2004;8(4):439–47. doi:10.1109/titb.2004.837829.
42. Institute of Medicine. Key Capabilities of an Electronic Health Record System. 2003. Available at: http://www.nap.edu/books/NI000427/html/.
43. HIMSS. HIMSS Analytics. 2009. http://www.himssanalytics.org/. Accessed January 19, 2009.
44. Weir C, Staggers N, Phansalkar S. The state of the evidence for computerized provider order entry: A systematic review and analysis of the quality of the literature. International Journal of Medical Informatics. 2009;78(6):365–74. doi:10.1016/j.ijmedinf.2008.12.001.
45. Staggers N, Jennings BM. The content and context of change of shift report on medical and surgical units. Journal of Nursing Administration. 2009. In press. doi:10.1097/NNA.0b013e3181b3b63a.
46. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington, DC: National Academies Press; 2009.
