Abstract
The relatively rapid transition from a paper-based system to a digital system in healthcare has not always employed a sophisticated integration of usability concepts. Yet usability is critical to the safety and effectiveness of the electronic health record, and regulators and policy makers have been increasingly focused on this area. This panel will provide a variety of perspectives on this important issue, ranging from a description of the problem based on current vendor usability practices, to recommendations regarding domain-content-rich usability processes including use cases, assessments, and scenarios, to the extension of usability assessments and design improvements beyond system implementation.
Overview
Rollin J. (Terry) Fairbanks, MD MS
The relatively rapid transition from a paper-based system to a digital system in healthcare was facilitated by a $40B stimulus from the federal government. Although this effort has great promise and has realized several successes, it has become clear that some of the biggest opportunities lie in the area of usability. Usability of health IT, in terms of both interface design and functional support of cognitive tasks, is increasingly recognized as critical to the safety and effectiveness of the electronic health record, and these efforts have shaped regulation and public policy.
Many hospitals across the US and Canada have hired human factors engineers and human factors psychologists to help advance their safety missions. The FDA (regulator of medical devices) has raised the bar for the usability evaluation required for new device approval. The ONC (regulator of health IT) has implemented a usability standard called Safety Enhanced Design. Both the National Patient Safety Foundation (a private foundation) and the Agency for Healthcare Research and Quality (a federal agency) recently released new guidance to help hospitals conduct adverse event investigations with a human factors and safety focus, and the latter has published several requests for proposals with a human factors and usability emphasis. Human factors practitioners and researchers who work within healthcare can offer an important perspective on the progress to date and the opportunities ahead.
Panelists’ statements provide a range of perspectives on this important issue. The first panelist, Raj Ratwani, will set the stage by describing the current state of the art in usability design and testing processes in use by vendors of electronic health records, in particular how these processes are highly variable despite US government regulations that specify usability requirements for EHR certification. The next two panelists, Emilie Roth and Ann Bisantz, provide complementary perspectives on how domain-rich usability processes, including appropriate use cases and content-specific assessments, can be used to ensure that IT systems support the complex cognitive work of domain practitioners. The fourth panelist, Emily Patterson, provides additional recommendations on this theme, emphasizing the need to include in user-centered design and usability testing processes complex scenarios representing situations that can lead to patient risk. The final panelist, Zach Hettinger, discusses the importance of continued usability-oriented assessment after system implementation and shows how that assessment can be facilitated by data collected on EHR use.
Panelist Statements
Health Information Technology Vendor Adherence to Usability Standards
Raj Ratwani, PhD
In high-risk industries like healthcare, there are often government regulations to guide the design and development of technology to ensure it is usable and safe. The Office of the National Coordinator for Health Information Technology (ONC) is the government agency that oversees the certification of electronic health records (EHRs) (Department of Health and Human Services, 2012). Safety enhanced design is the specific ONC certification criterion that stipulates minimum usability requirements: (1) vendors must attest to using a user-centered design process when developing their EHR product by providing a written statement of the process, and (2) vendors must conduct a summative usability test of their final product and report the results of that test.
Our research has demonstrated tremendous variability in the usability processes employed by EHR vendors despite the ONC’s certification requirements (Ratwani, Fairbanks, Hettinger, & Benda, 2015; Ratwani, Benda, Hettinger, & Fairbanks, 2015). Many vendors use too few participants in their usability studies, and many do not use participants with a clinical background even when the product being developed is intended for use by clinicians. Many vendors rely on use cases in their summative usability testing that are not rigorous enough to test the product effectively. Further, some vendors do not use widely accepted measures of efficiency, effectiveness, and satisfaction.
The results of our analysis have implications for health information technology usability policy. Specifically, they suggest that new policy requirements and enforcement strategies may be needed that are aligned with vendor processes and development timelines. When considering policy changes, it is critical that these policies advance safety without imposing unnecessary burden on vendors that could stifle innovation.
Leveraging Usability Studies to Propel Design
Emilie Roth, PhD
Eliciting feedback from users has become a routine element of human-computer interaction design, whether via agile design methods or formal usability evaluations. One of the potential pitfalls of eliciting user feedback, particularly on early prototypes, is a premature focus on the ‘look and feel’ of the displays (the usability of the features depicted in the design prototype) at the expense of continued exploration of the support requirements for effective work (the usefulness of the features depicted in the design prototype and what additional features are needed to support work more effectively). Meaningful usability assessment should simultaneously strive to evaluate a system with respect to usability (ease of use), usefulness (effectiveness in supporting the user’s work), and impact on organizational goals such as safety, efficiency, and/or effectiveness (Roth & Eggleston, 2010). A premature focus on usability is a particular concern when users are asked to work through highly stylized, straightforward ‘use cases’. Under those circumstances users and designers can become rapidly focused on improvements to easy-to-fix usability features, such as the color or shape of an icon or the desire for shortcut commands. As a consequence they may fail to uncover more fundamental design limitations that would prevent the envisioned system from handling more complex cases. In our own work we have attempted to overcome this pitfall by designing evaluations that require users to use the prototype to solve representatively complex domain problems. One particularly successful approach has been to ask users to provide the problems to be solved themselves.
We recently took this approach in the design of an automated scheduler for airlift missions (Scott et al., 2015). We asked each user to provide two cases they had recently experienced, one relatively straightforward and one more complex, and to work through them using our prototype. The objective was to explore the problem space and understand the boundaries of the system as currently envisioned. The results were highly informative. While users expressed enthusiastic support for the prototype system, several of the problems they submitted, and their approaches to solving them, were unexpected, serving to propel subsequent design iterations. For example, in one case the number of missions scheduled to fly out of a particular location at a particular time was greater than the number of aircraft available. Rather than sliding some of the missions forward in time, or switching missions to fly out of a different location as we had expected, the user proceeded to re-aggregate the cargo across missions so as to create fewer missions flying on different, larger-capacity aircraft. As a consequence, the need to enable users to re-aggregate cargo across missions emerged as a high-priority requirement for the next design iterations. This example illustrates the value of leveraging usability assessments to probe the boundaries of support and uncover important new design requirements. While this particular example comes from aviation, the same usability assessment strategies could be profitably applied to healthcare to uncover aspects of domain complexity that had not previously been recognized.
Domain Specific Formative and Summative Usability Assessments
Ann Bisantz, PhD
General-purpose usability questionnaires (e.g., the System Usability Scale, or SUS; the Questionnaire for User Interface Satisfaction, or QUIS) allow users to report their opinions regarding overall system usability or their satisfaction with particular facets of an interactive system (e.g., terminology, screen design), and can support comparisons of usability across different applications. However, system designers often need more specific feedback about whether, and how, system concepts support complex cognitive goals and work tasks. For instance, designers need to understand not only whether a novel information visualization is understandable, but whether the right information to support situation assessment, planning, or decision-making is being provided. In health care, it is certainly true that users must not be burdened with systems that require “too many clicks” to obtain information, limit flexibility in describing patient conditions, or clutter the interface with large numbers of difficult-to-identify or difficult-to-remember icons. More fundamental, however, are questions such as: does this interface help integrate information about a patient? Does it assist clinicians in making better diagnoses and treatment plans?
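To make the contrast concrete, the following minimal sketch (ours, not from any of the panelists’ studies) shows how a general-purpose instrument such as the SUS reduces usability to a single 0-100 score using its standard scoring rule; the responses shown are hypothetical:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses, each coded 1 (strongly disagree) to 5 (strongly agree).
    Standard SUS scoring: odd-numbered items are positively worded
    (contribution = response - 1); even-numbered items are negatively
    worded (contribution = 5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses coded 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# One hypothetical participant's ratings for items 1..10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

A single summary score of this kind is easy to compare across applications, which is precisely why it cannot say whether a system supports specific cognitive work; that gap motivates the domain-specific measures described next.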
Recent research we have conducted to support patient tracking and the overall management of hospital emergency departments required us to assess in detail how novel visualization concepts supported the work of ED physicians and nurses. These concepts, based on an in-depth cognitive systems engineering analysis combined with an interactive user-centered design process, were intended to support clinicians’ overall situation awareness of the state of the ED (in terms of the number of patients, backups and delays, and available resources) as well as of individual patients (severity, and where patients were in the care process) (Guarrera et al., 2015). From a usability standpoint, we were interested in 1) whether the information we chose (identified through the cognitive systems analysis) was useful in supporting clinician goals, and 2) whether the visualization concepts we developed (through the iterative process) presented information in an effective (“usable”) way.
To perform this analysis, we presented participants (ED nurses and physicians) with realistic, simulated patient data and asked them to work through realistic, challenging work tasks (e.g., orienting to the ED after a break; planning for incoming mass casualties). We then asked participants (based on prior work by Truxler et al., 2012) to assess 1) the degree to which the display concepts supported specific cognitive work objectives (e.g., identifying bottlenecks in patient care); 2) the usefulness and 3) effectiveness of specific interface features (e.g., the visualization of patient wait times vs. severity while in the waiting room); and 4) the frequency with which they would use specific features during a shift. We found that these measures, tailored both to the specific goals for which the system was designed and to specific interface elements, were sensitive to differences across both objectives and features. For instance, participants rated information related to historic trends as less useful, and indicated that the system provided less support for task management (which we had excluded from the initial set of design objectives). Similar methods can be deployed both during system development and to more informatively compare candidate systems being acquired.
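As a rough illustration of how such tailored ratings might be summarized per cognitive work objective, the sketch below aggregates hypothetical Likert ratings; the objectives, scale, and values are illustrative assumptions, not the study’s data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (objective, rating) pairs on a 1-7 Likert scale from
# simulated ED sessions; illustrative values only, not study data.
ratings = [
    ("identify bottlenecks in patient care", 6),
    ("identify bottlenecks in patient care", 7),
    ("track historic trends", 3),
    ("track historic trends", 4),
    ("manage tasks during a shift", 2),
    ("manage tasks during a shift", 3),
]

by_objective = defaultdict(list)
for objective, score in ratings:
    by_objective[objective].append(score)

# Report mean support per objective, lowest first, so the least
# supported objectives (e.g., task management above) surface first.
for objective, scores in sorted(by_objective.items(),
                                key=lambda kv: mean(kv[1])):
    print(f"{objective}: mean support = {mean(scores):.1f} (n={len(scores)})")
```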
Linking Risk Analysis and Scenario-based Usability Assessment
Emily S. Patterson, PhD
Usability assessments for electronic health records need to focus on proactively identifying and addressing patient safety risks. A critical step when designing representative scenarios for usability assessments, whether conducted by software developers or by organizations that implement and maintain customized systems, is identifying potential patient safety risks. These can be derived empirically from extant data on patient safety issues for systems already in use, or predicted based on anticipated interaction concerns. A necessary, but not sufficient, element is to select test scenarios that are face valid to end users and that are designed to ‘probe’ tasks where EHR design and/or implementation elements could result in direct patient harm, missed care, or a delay to diagnosis or life-saving treatment. In our past work, we have provided examples of usability test scenario narratives that probe patient safety risks during medication ordering, review of critical lab data, and diagnosis based on review of prior clinical documentation, for physicians and nurses using an EHR in both outpatient and inpatient settings (Lowry et al., 2015).
In a recent study, we identified risks stemming from both the design and implementation choices for EHRs that did not fit the clinical work demands of ‘sharp end’ practitioners. Poor EHR usability can lead to user errors or to frustrated users who resort to work-arounds that further erode patient safety. Scenario-based usability assessments can help prevent ‘never events’ and associated patient harm by proactively addressing and mitigating the root causes of errors linked to EHR design and implementation features. Areas of particular interest include evaluating patient safety risks when 1) information is not retrievable, trustworthy, or accurate for a core clinical task that is frequently performed; 2) information is not available because it was lost, documented in a different system, or has not yet been documented in real time; 3) information is located in the wrong patient’s chart; and 4) information is not retrievable because of how it was entered into the system and subsequently displayed.
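One way such risk-probing scenarios might be recorded for a test battery is sketched below; the structure, field names, and example are hypothetical illustrations, not drawn from Lowry et al. (2015):

```python
from dataclasses import dataclass

# Hypothetical record tying a usability test scenario to the patient
# safety risk it probes; illustrative only.
@dataclass
class UsabilityScenario:
    clinical_task: str      # what the participant is asked to do
    probed_risk: str        # the design/implementation risk targeted
    harm_if_failed: str     # plausible patient harm if the task fails
    success_criteria: str   # observable behavior counted as success

scenario = UsabilityScenario(
    clinical_task="Order weight-based heparin for a pediatric patient",
    probed_risk="Default adult dose pre-populated in the order form",
    harm_if_failed="Overdose reaches the pharmacy unflagged",
    success_criteria="Participant notices and corrects the default dose",
)
print(scenario.probed_risk)
```

Making the probed risk and success criteria explicit in each scenario keeps the test face valid for clinicians while ensuring every scenario earns its place by targeting a specific harm pathway.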
The Importance of Post-Implementation Usability Assessments in Electronic Health Records
A. Zachary Hettinger, MD
Much of the national spotlight on the usability of electronic health records (EHRs) has focused on the role of summative testing of finished systems (in a simulated environment), as opposed to the need for strong user-centered design processes early in the development of EHRs (Ratwani, Benda, Hettinger, & Fairbanks, 2015; Ratwani, Fairbanks, Hettinger, & Benda, 2015). While strong usability methods for the design and development of future EHRs will be critical to long-term safe and efficient systems, it is equally critical to have a strong program for monitoring and testing for errors facilitated by health information technology. In other high-risk industries with more mature user-centered design, many of these potential errors would be designed out of systems prior to implementation; in healthcare, many of these potential hazards go unrecognized until they lead to near misses or, more likely, to patient harm.
There are numerous examples of poor usability in health IT systems that have led to near misses and patient harm. Examples range from basic usability problems, such as poor data visualization, error-prone drop-down menus, and poor spacing, to more complex issues such as lack of support for common clinical workflows leading to less efficient systems, interface designs that facilitate error, missing information, and poor support for recovering from interruptions. While most of the former, basic usability errors will be designed out of most EHRs in the near future, the latter error types, which reflect a lack of cognitive support for users, must be deliberately sought and recognized if progress is to be made on the true usability of health IT systems (Hettinger, Ratwani, & Fairbanks, 2015). Patient safety event reporting systems and mining of clinical data have led to insights into how health IT systems can be designed more safely, not only from the ground up but through post-implementation surveillance.
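One plausible, minimal form of such post-implementation surveillance is keyword screening of free-text safety event reports to flag candidate health-IT-related events for human factors review; the term list and reports below are illustrative assumptions, not the panelists’ actual method:

```python
import re

# Screen free-text patient safety event reports for health-IT-related
# language before manual review. Term list and reports are illustrative.
HIT_TERMS = [
    r"\bEHR\b", r"drop[- ]?down", r"default(ed)? (dose|value)",
    r"wrong (patient|chart)", r"alert fatigue", r"order set",
    r"copy[- ]?paste",
]
pattern = re.compile("|".join(HIT_TERMS), re.IGNORECASE)

reports = [
    "Nurse selected adjacent name in drop-down list; wrong patient chart opened.",
    "Patient fell while ambulating to bathroom.",
    "Defaulted dose in order set was not adjusted for renal function.",
]

# Flag reports containing any health-IT term for human factors review.
for report in reports:
    if pattern.search(report):
        print("FLAG FOR HUMAN FACTORS REVIEW:", report)
```

Keyword screening of this kind is only a triage step; flagged reports still require expert human factors analysis to determine whether, and how, EHR design contributed to the event.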
Acknowledgments
Authors were supported by the following grants from AHRQ: R01HS022542 (Fairbanks, Bisantz, Hettinger, Roth, and Ratwani); and R18HS020433 (Bisantz, Fairbanks, Hettinger). Dr. Fairbanks was supported by an NIH Career Development Award from the National Institute of Biomedical Engineering and BioImaging (K08EB009090). Dr. Patterson was supported by the National Institute of Standards and Technology and the Institute Designed for Environments Aligned for Patient Safety (IDEA4PS) at The Ohio State University which is sponsored by the Agency for Healthcare Research & Quality (P30HS024379). Views expressed do not necessarily represent the views of NIH, NIST, OSU, or AHRQ.
Contributor Information
Rollin J. (Terry) Fairbanks, National Center for Human Factors in Health Care, Medstar Institute for Innovation, MedStar Health; Georgetown University.
Ann Bisantz, University at Buffalo.
A. Zachary Hettinger, National Center For Human Factors in Health Care, Medstar Research Institute.
Raj Ratwani, National Center For Human Factors in Health Care, Medstar Research Institute.
Emily Patterson, Ohio State University.
Emilie Roth, Roth Cognitive Engineering.
References
- Department of Health and Human Services. (2012). Health information technology: Standards, implementation specifications, and certification criteria for electronic health record technology, 2014 edition (Vol. 77).
- Guarrera T, McGeorge N, Clark L, et al. (2015). Cognitive engineering design of an emergency department information system. In Bisantz A, Fairbanks R, & Burns C (Eds.), Cognitive Engineering for Better Health Care Systems. CRC Press.
- Hettinger AZ, Ratwani R, & Fairbanks RJ (2015). New insights on safety and health IT. Retrieved February 1, 2016, from http://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7804-1.pdf
- Lowry SZ, Ramaiah M, Taylor S, Patterson ES, Prettyman SS, Simmons D, … & Gibbons MC (2015). Technical evaluation, testing, and validation of the usability of electronic health records: Empirically based use cases for validating safety-enhanced usability and guidelines for standardization (NISTIR 7804-1).
- Ratwani RM, Benda NC, Hettinger AZ, & Fairbanks RJ (2015). Electronic health record vendor adherence to usability certification requirements and testing standards. JAMA, 314(10), 1070–1071.
- Ratwani RM, Fairbanks RJ, Hettinger AZ, & Benda N (2015). Electronic health record usability: Analysis of the user centered design processes of eleven electronic health record vendors. Journal of the American Medical Informatics Association (in press).
- Roth EM & Eggleston RG (2010). Forging new evaluation paradigms: Beyond statistical generalization. In Patterson ES & Miller J (Eds.), Macrocognition Metrics and Scenarios: Design and Evaluation for Real-World Teams (pp. 203–219). Ashgate Publishing.
- Scott R, Roth EM, DePass B, & Wampler J (2015). Externalizing planning constraints for more effective joint human-automation planning. In Proceedings of the 12th International Naturalistic Decision Making Conference, June 9–12, 2015. McLean, VA: MITRE Corporation.
- Truxler R, Roth E, Scott R, Smith S, & Wampler J (2012). Designing collaborative automated planners for agile adaptation to dynamic change. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56, 223–227.