Abstract
The overarching objective of this research is to reduce the burden of documentation in electronic health records for registered nurses in hospitals. Registered nurses have consistently reported that e-documentation is a concern with the introduction of electronic health records. As a result, many nurses use handwritten notes in order to avoid using electronic health records to access information about patients. At the top of these notes are patient identifiers. By identifying aspects of good and suboptimal headers, we can begin to form a model of how to effectively support identifying patients during assessments and care activities. The primary finding is that nurses use the room number, not the patient’s last name, as the primary patient identifier in the hospital setting. In addition, the last name, gender, and age are sufficiently important identifiers that they are frequently recorded at the top of handwritten notes. Clearly distinguishable field labels and values help nurses quickly scan the header for identifying information. A web-based annotator was designed as a first step towards machine learning approaches for recognizing handwritten or printed data on paper sheets in future research.
BACKGROUND
The overarching objective of this research is to reduce the burden of documentation in electronic health records for registered nurses in hospitals. Having sufficient time at the bedside for registered nurses (RNs) to provide ‘hands-on’ care to patients in the hospital is associated with reduced patient mortality in hospital care (Kane et al., 2007). With additional ‘hands-on’ patient care time, RNs can thoroughly assess the patient (i.e., avoid delays in diagnosis and missed treatment), administer the correct medications (i.e., avoid medication errors), assist the patient with mobility (i.e., avoid patient falls), and support the patient in playing an active role in their healing (i.e., provide patient education for post-discharge home-based activities).
The objectives of the paper are:
To describe the content and format of patient identifiers in the header section of nurses’ ‘brains’
To identify the characteristics of an ideal and a suboptimal header
To identify potential implications for the design of a human-in-the-loop information extraction tool that reduces the burden of nursing documentation in the electronic health record
Registered nurses have consistently reported that e-documentation is a concern with the introduction of electronic health records (EHRs), electronic medication administration records (e-MARs), and nursing flowsheet software (Staggers et al., 2015). Although reducing the burden of e-documentation for physicians has received much attention (e.g., Sinsky et al., 2016), it is noteworthy that there are four times as many active RNs as active state-licensed physicians in the United States (Kaiser Family Foundation, 2016). In addition, RNs report a higher dissatisfaction rate with EHRs (94% of respondents were dissatisfied in a 2014 Black Book survey of nurses) and a more widespread reliance upon complex, time-consuming ‘workaround’ paper artifacts (e.g., ‘brains’ sheets) in place of EHR use. An additional concern is that nurses in hospitals can face the threat of termination of their employment if they are accused of inappropriately using copy-paste functionality in nursing flowsheet software for vital signs and assessment data from other nurses (e.g., the Immormino v. Lake Hospital System wrongful termination lawsuit in Ohio, 2015).
Nursing care in the hospital is complex, characterized by time pressure, uncertain information, conflicting goals, interruptions, and often stressful conditions, which makes it a challenging environment for supporting real-time electronic documentation. Therefore, it is not surprising that nurses frequently use handwritten ‘jots’ to record information to be entered later into formal electronic documentation, such as an electronic health record. In recording information at the end of a shift or during a slower period, there is a risk of ‘wrong-patient documentation errors’, in which information that is accurate for one patient is entered into the chart of a different patient. Therefore, how patients are identified in the ‘headers’ of personal handwritten notes where data are jotted for later entry can provide important insights into the mental models nurses use to distinguish patients. Insights generated by analyzing the content and format of headers on ‘brains’ could therefore have implications for the design of electronic headers in health information technology (HIT) software.
For the conceptual framework, macrocognition is defined as cognitive adaptation to complexity (Klein et al., 2000). In macrocognition, we have proposed that there are five primary functions (Patterson and Hoffman, 2012): detecting problems, sensemaking, replanning, deciding, and coordinating. In this paper, we focus on sensemaking, which is defined as “a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively” (Klein et al., 2006). HIT can be designed to help clinicians avoid ‘information foraging’ across a fragmented information space (Garrett et al., 2010) and to resolve conflicting data generated by overuse of copy-paste without sufficient verification and review of auto-generated content.
For this study, based on the conceptual framework, we anticipated that sensemaking (nursing assessment of symptoms) and replanning (to-do lists of activities to complete during a shift) would be organized by patient on a structured set of handwritten notes. Further, we anticipated that there would be patterns in how the ‘header’ of these notes was used to identify patients for nurses who relied heavily upon these notes during the provision of care during a shift. Stated simply, we expected that the cognitive work of the nurses would be organized around individual patients, who required identification at the time of assessment as well as at the time of documentation in the electronic health record.
METHODS
This research was approved by the Institutional Review Board at The Ohio State University. Informed consent was obtained from each participant. The study site consisted of three units from a large academic medical center in the Midwest with unionized nursing personnel and one unit from a community hospital with non-unionized nursing personnel. Each unit had 20-35 beds with 5-9 Registered Nurses (RNs) staffed during 12-hour shifts with support from patient care associates (PCAs) and a unit clerk. The units included a cardiovascular step down unit, a cardiovascular extended stay unit for surgical patients, a general surgery, burn, and ophthalmology unit, and a unit with orthopedic, neurological, and trauma patients.
Twenty RNs were recruited. Participants were currently employed RNs working on the four units. Recruitment used a purposive convenience sample based on willingness to participate and working during the planned observation period. Diversity with respect to years of nursing experience, gender, and night vs. day shift was sought during recruitment.
Data collection was undertaken by a single investigator. Field notes were handwritten while shadowing an RN during four-hour windows starting at the beginning of the shift, at either 7 AM or 7 PM. Photographs of the ‘brains’ were taken after the patient handoff and at the end of the observation period.
For analysis, a digital photograph of the header of each patient’s ‘brain’ was used in conjunction with the field notes to analyze the format and layout and the information content for the header and body.
RESULTS
A ‘header’ is a specific area of a ‘brain’ containing patient identifiers and other patient information deemed important by the RN. The ‘header’ is a section separate from the content in the body of the ‘brain’. The location of the ‘header’ was nearly always in the upper left (19/20; 95%); one was in the upper right (1/20; 5%). RNs varied in what content they included in the header. A majority of RNs (>50%) had patient name, patient age, code status, and consulting physician name in the header.
Four raters independently rated 20 headers, one per Registered Nurse. Each header was rated on a Likert scale where 1 = best and 5 = worst. The results are displayed in Table 1. There was more consensus on the ‘good’ headers than the poor headers, but some of the raters focused more on content than on format, and vice versa.
Table 1.
Rater scores for each of 20 ‘brain’ headers
| 'Brain' header | Rater 1 | Rater 2 | Rater 3 | Rater 4 | Average | S.D. |
|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1.00 | 0.00 |
| 2 | 5 | 3 | 2.5 | 4 | 3.63 | 1.11 |
| 3 | 5 | 1 | 2 | 3 | 2.75 | 1.71 |
| 4 | 3 | 2 | 2.5 | 2 | 2.38 | 0.48 |
| 5 | 4 | 2 | 3 | 1 | 2.50 | 1.29 |
| 6 | 1 | 1 | 3 | 1 | 1.50 | 1.00 |
| 7 | 2 | 3 | 2 | 2 | 2.25 | 0.50 |
| 8 | 4 | 3 | 1.5 | 3 | 2.88 | 1.03 |
| 9 | 3 | 3 | 1 | 2 | 2.25 | 0.96 |
| 10 | 5 | 4 | 4.5 | 3 | 4.13 | 0.85 |
| 11 | 5 | 4 | 3 | 3 | 3.75 | 0.96 |
| 12 | 5 | 4 | 3 | 4 | 4.00 | 0.82 |
| 13 | 5 | 5 | 3 | 4 | 4.25 | 0.96 |
| 14 | 5 | 4 | 3.5 | 5 | 4.38 | 0.75 |
| 15 | 5 | 5 | 3.5 | 3 | 4.13 | 1.03 |
| 16 | 4 | 4 | 2.5 | 3 | 3.38 | 0.75 |
| 17 | 4 | 2 | 2.5 | 1 | 2.38 | 1.25 |
| 18 | 3 | 1 | 2 | 2 | 2.00 | 0.82 |
| 19 | 2 | 2 | 1.5 | 4 | 2.38 | 1.11 |
| 20 | 3 | 4 | 3 | 3 | 3.25 | 0.50 |
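The averages and standard deviations in Table 1 can be reproduced directly from the rater scores. A minimal sketch for 'brain' header #2, assuming the sample (n − 1) standard deviation was used:

```python
import statistics

# Rater 1-4 scores for 'brain' header #2 from Table 1.
scores = [5, 3, 2.5, 4]

avg = statistics.mean(scores)   # arithmetic mean across the four raters
sd = statistics.stdev(scores)   # sample (n - 1) standard deviation

print(avg, sd)  # approximately 3.63 and 1.11, matching Table 1 after rounding
```

The sample (rather than population) standard deviation is inferred from the tabled values; with the population formula, header #2 would show an S.D. of about 0.96 instead of 1.11.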
Each rater described the rationale for their scores. A summary of their rationales for the best-scored headers is displayed in Table 2.
Table 2.
Preferred Content, Structure, Annotations for Headers
| Content | Structure | Annotations |
|---|---|---|
| Adequate level of information to identify patient: more than room number and patient last name | Well formatted: printed labels/easy to read; designated areas for explicitly defined field labels; clear demarcation using visual separators between header and body | No ‘margin jots’; no writing crammed into a small space; condensed, precise format for efficient use |
A recreated, de-identified example of a ‘good’ header (similar to brain header #2) is provided in Figure 1. Please note that the field labels were typically typed and not handwritten with ‘good’ headers, although there were exceptions for headers which were particularly neat and well-organized.
Figure 1.
Composite recreated example of highest-rated ‘brain’ header
Agreement on the properties of a good header is provided in Table 3.
Table 3.
Agreement on Properties of a Good Header
| Properties | Agreement (%) |
|---|---|
| Well formatted | 100 |
| Field labels are explicitly mentioned | 100 |
| Field values are clearly distinguishable | 100 |
| Sufficient information to be used as an identifier | 100 |
| Printed labels | 50 |
| Clear demarcation using visual separators other than whitespace | 50 |
| Highly efficient | 25 |
The ‘middle’ of the scale typically consisted of solely handwritten information without field labels in a structured format. In addition, these headers tended to contain about 15 data elements in a tightly packed space. For example, a header might contain handwritten values in an organized fashion, employing white-space demarcation, for room number, patient first and last name, age, gender, no known drug allergies, full isolation code, clear liquid diet, nothing to eat after midnight, and organ donor status (brain header #20).
In Table 4, the elements at the bottom of the scale, indicating the least preferred headers, are described.
Table 4.
Suboptimal Content, Structure, Annotations for Headers
| Content | Structure | Annotations |
|---|---|---|
| Insufficient level of information to identify patient | Unclear what category data are in; unclear distinction between field labels and values; labels are inconsistent or absent; no clear separation between header and body areas and data | Annotations not on clear white space; many ‘margin jots’; annotations not easy to understand or not concise |
A recreated, de-identified example of a ‘poor’ header (brain header #2) is provided in Figure 2. Please note that the field labels were typically handwritten on headers rated lower than average.
Figure 2.
Composite recreated example of lowest-rated ‘brain’ header
DISCUSSION
Based on these findings, it is clear that the room number of a patient is the primary reference for nurses caring for patients in hospitals. This is not surprising given that nurses are likely to receive patient assignments covering the same rooms over multiple days, even as the patients in those rooms change due to turnover. Therefore, nurses can use similar strategies for remembering aspects of patients by using the geographic location of the patient room, which changes less frequently than the last name of the patient in that room. The finding that nurses in acute care inpatient settings use rooms as their primary identifier appears to be novel, and also appears to be unique to nursing personnel. Given that physicians, respiratory therapists, physical therapists, and occupational therapists, among others, often care for patients on multiple units and less regularly receive patients in the same rooms (assignments that would minimize the number of steps taken during a work shift), it makes sense that these clinicians do not identify patients primarily on the basis of the room number.
In this project, we have created:
A web-based annotator for document image annotation of printed and handwritten information specific to a domain, which lays the foundation for automated identification of a data table's content as keys (category labels) and values (quantitative and text data) of structured fields
The ability to update automatically, similar to prior research on assisted querying with instant-response interfaces (Nandi & Jagadish, 2007)
A visual-feature-based hierarchical layout analysis of documents
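To illustrate the kind of output such an annotator could produce, the sketch below pairs labeled key and value regions from one header image into a structured record. The schema, field names, and coordinates are hypothetical illustrations, not the tool's actual format.

```python
# Hypothetical annotation record for one 'brain' header image, as a
# web-based annotator might emit it: each region is a bounding box
# labeled as a field key (category label) or a value. The schema and
# coordinates below are illustrative assumptions.
annotation = {
    "image": "header_example.png",
    "regions": [
        {"bbox": (12, 8, 60, 24), "role": "key", "text": "Room"},
        {"bbox": (64, 8, 120, 24), "role": "value", "text": "412"},
        {"bbox": (12, 28, 72, 44), "role": "key", "text": "Age/Gender"},
        {"bbox": (76, 28, 160, 44), "role": "value", "text": "67 F"},
    ],
}

# Pair each key with the value region that follows it, yielding the
# structured key-value content of the header.
regions = annotation["regions"]
pairs = {
    k["text"]: v["text"]
    for k, v in zip(regions[::2], regions[1::2])
    if k["role"] == "key" and v["role"] == "value"
}

print(pairs)  # {'Room': '412', 'Age/Gender': '67 F'}
```

A training set of such records, one per annotated header, is what a layout-analysis model would consume.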
These findings suggest that starting a personal ‘brains’ notes sheet with a printout with labeled categories of summarized auto-pulled EHR data in the header could be useful. On the other hand, the large variation in what information is included and where it is placed in the header suggests that the content and formatting of the layout of the header could be individualized for each Registered Nurse. A personalized ‘header’ layout can be generated using a machine learning model after training it on a dataset of manually annotated ‘headers’. However, this requires an annotated training set to be created beforehand, using a custom image annotation tool.
A well-trained machine learning model can subsequently automate several information retrieval tasks on a handwritten or hybrid ‘brain’ header.
Finally, patient header information could theoretically be utilized to reduce the risk of ‘wrong-patient documentation errors’. A scanner could be used to input the content of a header, and the extracted information about the room, last name, age, and gender could be confirmed against the corresponding fields in an electronic health record header.
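A minimal sketch of such a confirmation step, assuming the header content has already been extracted; the field names and the case-insensitive matching rule are illustrative assumptions, not a validated design:

```python
# Compare scanned 'brain' header fields against the EHR header and
# return any fields that disagree; a non-empty result would prompt the
# nurse to verify the patient before documentation is filed.
def check_header(scanned: dict, ehr: dict) -> list:
    mismatches = []
    for field in ("room", "last_name", "age", "gender"):
        s, e = scanned.get(field), ehr.get(field)
        # Normalize case and whitespace so transcription differences
        # in capitalization are not flagged as wrong-patient errors.
        if s is not None and str(s).strip().lower() != str(e).strip().lower():
            mismatches.append(field)
    return mismatches

scanned = {"room": "412", "last_name": "Doe", "age": "67", "gender": "F"}
ehr = {"room": "412", "last_name": "Roe", "age": "67", "gender": "F"}
print(check_header(scanned, ehr))  # ['last_name'] is flagged for review
```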
ACKNOWLEDGMENTS
This project was supported by the Institute for the Design of Environments Aligned for Patient Safety (IDEA4PS) at The Ohio State University, which is sponsored by the Agency for Healthcare Research & Quality (P30HS024379). The authors’ views do not necessarily represent the views of AHRQ.
Contributor Information
Ritesh Sarkhel, Ohio State University, Columbus, OH.
Jacob J. Socha, Ohio State University, Columbus, OH.
Austin Mount-Campbell, Ohio State University, Columbus, OH.
Susan Moffatt-Bruce, Ohio State University, Columbus, OH.
Simon Fernandez, Ohio State University, Columbus, OH.
Kashvi Patel, Ohio State University, Columbus, OH.
Arnab Nandi, Ohio State University, Columbus, OH.
Emily S. Patterson, Ohio State University, Columbus, OH.
REFERENCES
- Black Book EHR Loyalty Index. http://www.blackbookmarketresearch.com/shop/2014-black-book-ehr-loyalty-index-q3-results/. Accessed February 15, 2017.
- Garrett SK, Caldwell BS, & Ebright PR (2010). Provider information and resource foraging in healthcare delivery. International Journal of Collaborative Enterprise, 1(3-4), 381–393.
- Immormino v. Lake Hospital System, Inc., No. 1:13CV1818 (N.D. Ohio, August 31, 2015).
- Kane RL, Shamliyan TA, Mueller C, Duval S, & Wilt TJ (2007). The association of registered nurse staffing levels and patient outcomes: systematic review and meta-analysis. Medical Care, 45(12), 1195–1204.
- Kaiser Family Foundation (2016, April). The Kaiser Family Foundation’s State Health Facts: Total professionally active physicians. http://kff.org/other/state-indicator/total-active-physicians/. Data source: Redi-Data Inc. Accessed September 24, 2016.
- Klein DE, Klein HA, & Klein G (2000). Macrocognition: Linking cognitive psychology and cognitive ergonomics. Proceedings of the 5th International Conference on Human Interactions with Complex Systems (pp. 173–177). Urbana-Champaign: University of Illinois at Urbana-Champaign.
- Klein G, Moon B, & Hoffman RR (2006). Making sense of sensemaking 1: Alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73.
- Nandi A, & Jagadish HV (2007, June). Assisted querying using instant-response interfaces. In Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data (pp. 1156–1158). ACM.
- Patterson ES, & Hoffman RR (2012). Visualization framework of macrocognition functions. Cognition, Technology & Work, 14(3), 221–227.
- Staggers N, Elias BL, Hunt JR, Makar E, & Alexander GL (2015). Nursing-centric technology and usability: A call to action. CIN: Computers, Informatics, Nursing, 55(8), 325–332.
- Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, … & Blike G (2016). Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Annals of Internal Medicine, 165(11), 753–760.