Journal of the American Medical Informatics Association (JAMIA). 2020 May 7;27(6):924–928. doi: 10.1093/jamia/ocaa047

Varying rates of patient identity verification when using computerized provider order entry

Emilie Fortman, A Zachary Hettinger, Jessica L Howe, Allan Fong, Zoe Pruitt, Kristen Miller, Raj M Ratwani
PMCID: PMC7647277  PMID: 32377679

Abstract

Objective

We sought to determine rates of computerized provider order entry (CPOE) patient identity verification and when and where in the ordering process verification occurred.

Materials and Methods

Fifty-five physicians from 4 healthcare systems completed simulated patient scenarios using their respective CPOE system (Epic or Cerner). Eye movements were recorded and analyzed.

Results

Across all participants, patient identity was verified significantly more often than not (62.4% vs 37.6%). Participants using vendor A verified significantly more often than not; participants using vendor B showed no difference. Participants using vendor A verified information significantly more often before signing the order than after (88.4% vs 11.6%); there was no such difference for vendor B. The banner bar was the most frequent verification location.

Discussion

Factors such as CPOE design, physician training, and the use of a simulated methodology may have influenced verification rates.

Conclusions

Verification rates vary by CPOE product, and this can have patient safety consequences.

Keywords: electronic health records, patient safety, patient identification

INTRODUCTION

BACKGROUND AND SIGNIFICANCE

Computerized provider order entry (CPOE), a central function of health information technology, has now been widely adopted by most healthcare provider organizations across the United States and in several other countries. CPOE allows for rapid communication of medication, laboratory, and imaging orders in a standardized electronic format. There are many benefits to this technology, including the reduction of costs and certain types of errors.1,2 CPOE has been especially beneficial for high-volume clinics that require efficiency in placing and carrying out orders.3

While there are numerous benefits to CPOE, the design, development, and implementation of the technology have resulted in unintended patient safety consequences.4–6 These consequences include the risk of clinicians unintentionally selecting the wrong patient record and placing orders on the wrong patient.7–9 Wrong patient errors can harm patients who receive inappropriate care and delay care for the patient for whom the treatment was intended. Causes of wrong patient errors include mistaking patients with similar names or geographic locations when selecting records, having multiple patient files open simultaneously and signing orders on the wrong one, and interruptions in busy work environments.10 This capacity for error is especially relevant for emergency medicine physicians because they are more likely to be treating several patients at any one time and experience a large number of task interruptions in their busy clinical environments.11–13

Although paper-based orders were not immune to wrong patient errors, CPOE makes it easier for providers to sign orders on the wrong patient by removing the barrier of having to obtain a physical chart or speak directly with the patient’s nurse. For these reasons, it is critical that physicians use patient identifiers in the electronic record to verify that they are placing orders for the intended patient. A Joint Commission priority is for providers to use at least 2 identifiers, such as the patient’s full name, date of birth, or medical record number, to verify a patient’s identity; these identifiers help providers distinguish between patients with similar or matching names. This check applies throughout care delivery, including when placing orders using CPOE. However, simulation studies have shown that clinicians do not thoroughly verify patient identity; one study found verification rates of approximately 20%.14

OBJECTIVE

While there are several points in the care process at which patient identification may be verified, including patient registration, ordering through CPOE, medication administration, and before procedures, we focused on verification when entering orders using CPOE. CPOE interfaces and workflows differ across vendors and provider organizations, and because of these differences, rates of patient identification verification across CPOE products are not well understood. We used eye tracking to determine verification rates in Epic (Epic Systems, Verona, WI) and Cerner (Cerner, North Kansas City, MO) CPOE products, examining whether physicians look at patient identification information when placing orders in a simulated setting using their respective CPOE product. Further, when patient identification was verified, we examined where in the CPOE workflow (ie, before, during, or after placing the order) verification occurred and where on the CPOE interface information was verified.

MATERIALS AND METHODS

Study setting and participants

Fifty-five emergency medicine resident and attending physicians participated from 4 healthcare systems in the United States. Participants were selected based on convenience and were asked to complete 6 different clinical scenarios using their respective electronic health record (EHR) testing environment. Two sites used an Epic Systems EHR (n = 26) and 2 used Cerner Millennium (n = 29). Sample sizes were estimated based on previous eye-tracking studies in the human factors literature.15,16 This study was approved by an institutional review board, and all participants consented to participation.

Study design

Six different clinical scenarios were created by 2 emergency physicians and then reviewed by an emergency physician from each participating site. Four of the scenarios included a component in which participants placed orders for a fictitious patient using the CPOE function of their respective EHR system. These 4 scenarios did not require any documentation and were isolated to use of CPOE to place orders. Each participant received a pen and paper for taking notes. For each scenario, the participant was read a brief description of the patient, including their age, gender, and chief complaint, followed by a detailed list of orders that they needed to place for that patient. These instructions could be repeated if the participant desired. After listening to the instructions, the participant was given the patient’s name and instructed to open the patient’s record and complete the tasks as if they were in their own clinical environment. However, the study took place in an office setting without the noise and interruptions of a typical emergency department. The task was completed when the participant gave verbal confirmation and the test administrator began reading the next set of instructions.

Measures and analysis

In order to assess how the participants performed each task and where they looked on the screen, eye movements were recorded using the Mobile Eye XG from Applied Science Laboratories (Bedford, MA). Additionally, audio and video were recorded with a GoPro and screen capture with mouse movement data was collected using Morae version 3.3.1 (TechSmith, Okemos, MI). For the purposes of this study, we analyzed the eye movement data to determine whether the participant was looking at patient identifying information when placing CPOE orders during each of the 4 clinical scenarios. In a previously published work, we analyzed error rates and time to complete tasks.17

Valid identifying information for this study was defined as patient name and date of birth. A positive check of patient identity was recorded if a participant fixated within approximately 1.5 degrees of visual angle of any piece of identifying information on the EHR interface, which accounts for the margin of error of the eye tracker. Because the 4 analyzed scenarios focused on use of CPOE with no documentation requirements, eye movements to parts of the EHR interface containing patient identification information were likely in service of verifying identity rather than other purposes, such as documentation. Any scenarios with eye-tracking or video failures were excluded from analysis.
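
To make the fixation criterion concrete, the following Python sketch converts an on-screen pixel distance into degrees of visual angle and applies the approximately 1.5-degree tolerance. The viewing distance, pixel density, and bounding-box coordinates are illustrative assumptions, not parameters reported in this study.

```python
import math

# Illustrative constants -- the actual values depend on the lab setup,
# not reported in the study.
VIEW_DISTANCE_CM = 60.0   # assumed eye-to-screen distance
PIXELS_PER_CM = 38.0      # assumed display pixel density
TOLERANCE_DEG = 1.5       # eye-tracker margin of error from the paper

def pixels_to_degrees(px: float) -> float:
    """Convert an on-screen distance in pixels to degrees of visual angle."""
    cm = px / PIXELS_PER_CM
    return math.degrees(2 * math.atan(cm / (2 * VIEW_DISTANCE_CM)))

def fixation_hits_identifier(fix_x: float, fix_y: float,
                             box: tuple[float, float, float, float]) -> bool:
    """True if a fixation lands within ~1.5 degrees of an identifier's
    bounding box (left, top, right, bottom), mirroring the positive-check
    criterion described above."""
    left, top, right, bottom = box
    # Distance from the fixation to the nearest point on the box, in pixels.
    dx = max(left - fix_x, 0.0, fix_x - right)
    dy = max(top - fix_y, 0.0, fix_y - bottom)
    return pixels_to_degrees(math.hypot(dx, dy)) <= TOLERANCE_DEG

# Example: a fixation just outside a hypothetical banner name field still
# counts if it falls within the tolerance.
name_box = (40.0, 10.0, 220.0, 30.0)
print(fixation_hits_identifier(230.0, 20.0, name_box))  # True
```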

For each CPOE scenario, we recorded whether the participant verified patient information, which step in the CPOE ordering process verification occurred, and where on the EHR interface information was verified. For verification location, there were 2 areas in both Epic and Cerner from which a participant could verify patient identity: a primary location on the left side of the top panel (sometimes called a banner bar) of the patient record and a secondary location on a pop-up order window. Verification using the primary location was further subdivided based on when during the task the verification took place: upon opening the record, while selecting orders, or after signing orders. If a physician verified patient identification information more than once, only the data from the first instance of verification were analyzed. Table 1 shows the characteristics of the banners in each of the vendor products from each study site.

Table 1.

Characteristics of electronic health record banner bars by vendor and study site

| Vendor and site | Name font size (pixels), style, and color | DOB font size (pixels), style, and color | Informational items in the banner | Screen space used for the banner (%) |
|---|---|---|---|---|
| Vendor A, site 1 | 13, bold, caps, black | 10, no bold, no caps, black | 15 | 5.6 |
| Vendor A, site 2 | 13, bold, caps, black | 10, no bold, no caps, black | 12 | 5.6 |
| Vendor B, site 3 | 13, bold, no caps, black | 10, no bold, no caps, black | 23 | 6.7 |
| Vendor B, site 4 | 14, bold, no caps, black | 10, no bold, no caps, black | 19 | 5.4 |

DOB: Date of Birth.
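
The coding scheme above can be expressed as a small classification routine. This Python sketch assumes a hypothetical fixation log with area-of-interest (AOI) labels and task-phase timestamps; the names and data layout are illustrative, not taken from the study software.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    t: float    # seconds from scenario start
    aoi: str    # "banner", "order_popup", or another AOI label

def classify_first_verification(fixations, t_first_order, t_sign):
    """Return the coding category of the first identity check in a
    scenario, per the scheme above: banner on opening the record,
    banner while ordering, order-window pop-up, or banner after
    signing. Only the first verifying fixation is used."""
    for f in sorted(fixations, key=lambda f: f.t):
        if f.aoi == "order_popup":
            # The pop-up only appears during ordering, before signing.
            return "popup_before_signing"
        if f.aoi == "banner":
            if f.t < t_first_order:
                return "banner_on_opening"
            if f.t < t_sign:
                return "banner_while_ordering"
            return "banner_after_signing"
    return "no_verification"

# Hypothetical scenario: record opened at t=0, first order at 12 s,
# orders signed at 95 s.
fixes = [Fixation(t=1.4, aoi="banner"), Fixation(t=40.0, aoi="order_popup")]
print(classify_first_verification(fixes, t_first_order=12.0, t_sign=95.0))
# -> "banner_on_opening"
```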

For each participant, verification rate was recorded as the percentage of scenarios in which they verified patient identity. For each participant who verified patient identification information, rates were then calculated for where in the EHR information was verified and when in the ordering process verification occurred. Mean rates were statistically compared using a 2-tailed t test (α = .05). When reporting results, we do not associate the specific vendor with the results.
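
As a minimal sketch of this analysis, the snippet below computes per-participant verification rates and compares the 2 vendor groups with a 2-tailed independent-samples t test. The per-scenario outcomes are hypothetical, and whether the original analysis used this exact test variant is an assumption.

```python
from statistics import mean
from scipy import stats

# Hypothetical per-scenario outcomes: True = identity verified.
# Each inner list is one participant's analyzable scenarios.
vendor_a = [[True, True, True, False], [True, True, True, True]]
vendor_b = [[False, True, False, False], [True, False, True, False]]

def participant_rates(group):
    """Percentage of scenarios with verification, per participant."""
    return [100.0 * sum(s) / len(s) for s in group]

rates_a = participant_rates(vendor_a)
rates_b = participant_rates(vendor_b)
print(f"Vendor A mean: {mean(rates_a):.1f}%, vendor B mean: {mean(rates_b):.1f}%")

# Two-tailed t test at alpha = .05, as described above.
t_stat, p_value = stats.ttest_ind(rates_a, rates_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```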

Expert panel review

Given the unique differences between the Cerner and Epic platforms, as well as site-level differences arising from local configuration and customization, 4 health information technology usability and safety experts with first-hand Cerner and Epic experience were recruited to participate in an expert panel. Each panel member was individually presented with visual representations of the 4 different banner bars and the study results and was asked to generate up to 3 plausible explanations for the results without being exposed to the interpretations of the study team. These explanations were discussed and documented. The explanations from all panel members were then synthesized by 2 members of the study team to identify emerging themes.

RESULTS

Out of 55 total participants, 16 were excluded for eye-tracking or video failures, and 6 individual scenarios had to be excluded for similar failures. In total, 39 (21 Epic, 18 Cerner) participants and 150 scenarios were analyzed.

Across all participants, patient identification was verified before or immediately after signing orders (mean [M] = 62.4%) more frequently than not verified at all (M = 37.6%) (P < .01). However, this depended on the type of EHR (Figure 1). Participants using vendor A verified significantly more often (M = 79.6%) than not (M = 20.4%) (P < .01), whereas for physicians using vendor B there was no significant difference between verification (M = 47.6%) and no verification (M = 52.4%) (P > .05). Verification rates when using vendor A (M = 79.6%) were significantly higher than when using vendor B (M = 47.6%) (P < .01).

Figure 1. Patient identification verification rates by electronic health record vendor.

When participants verified information using vendor A, they more frequently verified before signing the orders (M = 88.4%) as compared with after signing (M = 11.6%) (P < .001). There was no significant difference between verification before (M = 51.5%) as compared with after (M = 48.5%) when using vendor B (P = .86). In both vendors A and B, the top-panel banner bar was the primary location for verification (M = 87.8%), as compared with looking at identification information appearing in the pop-up order window (M = 12.2%) (P < .001). The rates by location in the EHR interface and where in the ordering process information was verified are shown in Table 2.

Table 2.

Verification rates by location and when in ordering process verification occurred

| | Total before signing (%) | Top banner bar on opening of chart (%) | Top banner bar while ordering (%) | Order window pop-up (%) | Top banner after signing (%) |
|---|---|---|---|---|---|
| Overall rate | 70.5 | 40.9 | 17.5 | 12.1 | 29.5 |
| Vendor A | 88.4 | 62.9 | 14.3 | 11.2 | 11.6 |
| Vendor B | 51.4 | 17.6 | 20.6 | 13.3 | 48.5 |

The first 4 data columns reflect verification before signing the order; the final column reflects verification after signing.

Based on the expert panel, 3 plausible explanations emerged. One explanation focused on the visual clutter and density of information in the banner bar. The experts described that some banner bars have too much information, commingling patient identification information with other clinical information. With high information density, physicians may not want to exert the cognitive effort to find the patient identification information.

Another explanation focused on the number of charts a physician has open at any given time. There may be variability in the number of open charts based on healthcare facility policies and individual preferences. Physicians accustomed to working with only one chart open may not feel compelled to check the patient identification information because they are more confident that they have the correct patient, whereas physicians who have multiple charts open may be more likely to check patient identification information. Institutional rules or physician preferences and the resulting behaviors may account for the differences observed in this study.

Finally, the experts also discussed how certain institutions promote the importance of patient identification verification more than others and that the occurrence of previous wrong patient errors impacts how frequently this importance is communicated to physicians by the institution. The variability in this study may be driven by recent communications about the importance of patient identification verification, as well as by general institutional culture and norms.

DISCUSSION

The purpose of this study was to determine rates of patient identification verification in commercially available and widely used CPOE products. Verification rates, the stage of the CPOE ordering process at which verification occurred, and the location on the EHR interface where information was verified were determined using eye tracking in a simulated setting. While physicians verified patient identification more frequently than they did not, the verification rate was only 62.4%, despite the Joint Commission recommending verification of 2 patient identifiers when placing orders. When participants verified information, the banner bar located at the top of the interface was the primary location for verification. Patient identification information located on the order pop-up window was used for verification only 12.1% of the time.

There was a significant difference in verification rates between physicians using the 2 different EHR vendor products. When physicians verified information, there was also a difference in whether the verification occurred before or after signing the order. While verifying patient identity after signing is better than not verifying at all, this practice can lead to inefficiencies and potential errors if the order is not quickly corrected and the care team notified of the correction. These differences between verification rates on the 2 vendor products may be due to several factors. Three possible explanations emerged from the expert panel. While we were not able to determine the number of charts each physician typically has open or specific information on communications and culture around patient identification at each site, the information on characteristics of the banner bars in Table 1 supports the information density explanation. There is a numerical difference in the number of information items in the banner bar between the 2 vendor products. Both vendor B products have more information in the banner bars compared with vendor A products, and this may be adding clutter and distracting the physician from looking at the specific patient identifiers. However, there are also variations in font size of the patient name, total percent of the screen dedicated to the banner bar, as well as potential differences in variables that were not measured, such as saliency. More controlled experiments will be needed to confirm that the information density of the banner bar and visual clutter are associated with lower verification rates.

The variability in the different banner bars highlights the need for evidence-based standards for how to effectively display patient information. Certain high-risk industries, like aviation, have guidelines and standards for colors, font size, iconography, and contrast in cockpit displays to promote usability and safe use.18 While there are general recommendations for EHR interface design from the National Institute of Standards and Technology and research programs that have been sponsored by the Office of the National Coordinator for Health Information Technology (ie, SHARP-C), these guidelines have not been tested on a large scale. Standardizing patient identification information is particularly important, given that many clinicians work on different EHR systems, and the lack of standardization may be contributing to the lack of verification of patient information. In the absence of standards, the general usability principles outlined by the National Institute of Standards and Technology and SHARP-C should be followed (a simple automated check against these principles is sketched after the list below). They include:

  • Make patient identification information salient by using large font sizes, distinct and easy-to-perceive colors, and appropriate contrast, and position this information on the interface in an area that is frequently looked at, such as the top left corner or center of the screen.

  • Reduce visual clutter so that the patient identification information can easily be found on the interface.

  • Display the information in the same place across different EHR screens and capabilities so that the user knows where to find this information.
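
As a simple illustration of how these principles could be operationalized, the Python sketch below screens a banner-bar configuration against heuristic thresholds. The thresholds are illustrative assumptions, not published National Institute of Standards and Technology or SHARP-C values.

```python
def check_banner(name_font_px: int, item_count: int,
                 position: str) -> list[str]:
    """Return a list of principle violations for a banner configuration.
    Thresholds are assumed for illustration only."""
    issues = []
    if name_font_px < 14:               # salience: assumed minimum size
        issues.append("patient name font may be too small to be salient")
    if item_count > 15:                 # clutter: assumed item budget
        issues.append("banner carries many items; identifiers may be lost")
    if position not in {"top-left", "top-center"}:
        issues.append("identifiers outside frequently scanned screen areas")
    return issues

# The vendor B banners in Table 1 carry 19-23 informational items:
print(check_banner(name_font_px=13, item_count=23, position="top-left"))
```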

There are other methods to promote patient identification verification, such as the use of photographs or hard stops. Both have been shown to be effective in reducing wrong patient errors, at least immediately after implementation.19 However, it will be important to ensure that any method implemented to promote verification does not have the unintended consequence of increasing work demands for clinicians or introducing new errors. Use of photographs may require additional equipment and personnel time, and hard stops may introduce a time cost and clinician frustration. Additional research is needed to identify effective yet low-burden methods to improve patient verification when using CPOE. Further, methods should be used to identify actual patient identification error rates, such as the retract and reorder tool developed by Adelman et al.8 Comparing these error rates across vendors and provider sites in the context of EHR interface design differences may provide additional information on the impact of design on identity verification.
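
To make the retract-and-reorder idea concrete, here is a sketch of its general logic over a hypothetical order-event log: an order retracted shortly after placement and re-placed by the same clinician for a different patient is flagged as a likely wrong patient order. The 10-minute windows follow the commonly described form of the measure; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    t_min: float                   # minutes from shift start
    clinician: str
    patient: str
    order: str                     # order type, eg "CBC"
    retracted_at: float | None = None

RETRACT_WINDOW = 10.0   # minutes from placement to retraction
REORDER_WINDOW = 10.0   # minutes from retraction to reorder

def retract_and_reorder_events(events):
    """Flag likely wrong patient orders: an order retracted within
    RETRACT_WINDOW and re-placed by the same clinician for a different
    patient within REORDER_WINDOW (a sketch of the general measure)."""
    flagged = []
    for e in events:
        if e.retracted_at is None or e.retracted_at - e.t_min > RETRACT_WINDOW:
            continue
        for f in events:
            if (f.clinician == e.clinician and f.order == e.order
                    and f.patient != e.patient
                    and 0 < f.t_min - e.retracted_at <= REORDER_WINDOW):
                flagged.append((e, f))
    return flagged

# Hypothetical log: an order retracted at minute 4 and re-placed on a
# different patient at minute 7 would be flagged.
```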

Other factors that may account for the discrepancy in verification rates include process differences between the various sites, safety culture, and physician training. Understanding the factors that drive these differences will be an important next step. All clinicians should be properly trained and consistently reminded of the importance of, and process for, verifying patient identification information. In addition, novel technologies could be used to ensure verification is occurring. For example, advances in eye-tracking technology could allow provider organizations to determine the frequency of verification during clinical practice by retrospectively analyzing how often clinicians look at patient identification information. Eye tracking could also be used in real time: if a clinician has not fixated on the patient's name on the interface, the provider can be notified that this information has not been checked.20
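
A real-time check of this kind could be sketched as follows, assuming hypothetical callbacks for the tracker's fixation stream (get_latest_fixation_aoi) and the EHR's alerting mechanism (on_alert); neither is a real vendor API.

```python
import time

def require_name_fixation(get_latest_fixation_aoi, on_alert,
                          timeout_s: float = 5.0) -> bool:
    """Before order signing is allowed, poll the eye tracker; if the
    patient-name AOI has not been fixated within timeout_s, fire an
    alert. Both callbacks are assumed integration points."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_latest_fixation_aoi() == "patient_name":
            return True   # identity information was looked at
        time.sleep(0.05)  # ~20 Hz polling of the fixation stream
    on_alert("Patient name has not been checked before signing.")
    return False
```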

There are limitations to the study. The participants were performing simulated clinical scenarios and may not have interacted with the EHR as they normally would with real patient information. The clinical scenarios were performed in each institution's EHR test system, not the production system, and while these systems are similar, they are not identical. The eye-tracking data provide information on where the physician is looking on the interface; however, looking at a location does not mean the information is being cognitively encoded and processed. Future studies could address this limitation with methods such as a think-aloud protocol, in which participants describe where they are looking and the information they are seeing while eye-movement data are collected. While data from 16 participants were removed from this study due to eye-tracking or video failures, the sample sizes were sufficient, and nearly equal samples were included from each study site.

CONCLUSION

Despite the importance of and emphasis on patient identification verification when using CPOE, verification does not always occur, and it varies by CPOE product, with patient safety implications. CPOE design and other factors should be addressed to improve verification rates.

FUNDING

This work was supported by the American Medical Association.

AUTHOR CONTRIBUTIONS

EF and RR led analysis of the data. All authors were involved in data interpretation. EF and RR led initial drafting of the article. All authors reviewed and approved the final article.

ACKNOWLEDGMENTS

The authors would like to thank Saif Khairat for supporting data collection efforts.

CONFLICT OF INTEREST STATEMENT

None declared.

REFERENCES

  • 1. Kuperman GJ, Gibson RF. Computer physician order entry: benefits, costs, and issues. Ann Intern Med 2003; 139 (1): 31–9.
  • 2. Zhan C, Hicks RW, Blanchette CM, Keyes MA, Cousins DD. Potential benefits and problems with computerized prescriber order entry: analysis of a voluntary medication error-reporting database. Am J Health Syst Pharm 2006; 63 (4): 353–8.
  • 3. Farley H, Baumlin K, Hamedani A, et al. Quality and safety implications of emergency department information systems. Ann Emerg Med 2013; 62 (4): 399–407.
  • 4. Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic health record usability issues and potential contribution to patient harm. JAMA 2018; 319 (12): 1276–8.
  • 5. Ratwani RM, Savage E, Will A, et al. Identifying electronic health record usability and safety challenges in pediatric settings. Health Aff (Millwood) 2018; 37 (11): 1752–9.
  • 6. Walker JM, Carayon P, Leveson N, et al. EHR safety: the way forward to safe and effective systems. J Am Med Inform Assoc 2008; 15 (3): 272–7.
  • 7. Henneman PL, Fisher DL, Henneman EA, Pham TA, Campbell MM, Nathanson BH. Patient identification errors are common in a simulated setting. Ann Emerg Med 2010; 55 (6): 503–9.
  • 8. Adelman JS, Kalkut GE, Schechter CB, et al. Understanding and preventing wrong-patient electronic orders: a randomized controlled trial. J Am Med Inform Assoc 2013; 20 (2): 305–10.
  • 9. Amato MG, Salazar A, Hickman TT, et al. Computerized prescriber order entry–related patient safety reports: analysis of 2522 medication errors. J Am Med Inform Assoc 2017; 24 (2): 316–22.
  • 10. Adelman JS, Applebaum JR, Schechter CB, et al. Effect of restriction of the number of concurrently open records in an electronic health record on wrong-patient order errors: a randomized clinical trial. JAMA 2019; 321 (18): 1780–7.
  • 11. Westbrook JI, Coiera E, Dunsmuir WTM, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010; 19 (4): 284–9.
  • 12. Chisholm CD, Pencek AM, Cordell WH, Nelson DR. Interruptions and task performance in emergency departments compared with primary care offices. Acad Emerg Med 1998; 5 (5): 470.
  • 13. Ratwani RM, Fong A, Puthumana JS, Hettinger AZ. Emergency physician use of cognitive strategies to manage interruptions. Ann Emerg Med 2017; 70 (5): 683–7.
  • 14. Henneman PL, Fisher DL, Henneman EA, et al. Providers do not verify patient identity during computer order entry. Acad Emerg Med 2008; 15 (7): 641–8.
  • 15. Ratwani R, Trafton JG, Boehm-Davis DA. Thinking graphically: connecting vision and cognition during graph comprehension. J Exp Psychol Appl 2008; 14 (1): 36–49.
  • 16. Rayner K. Eye movements in reading and information processing. Psychol Bull 1978; 85 (3): 618–60.
  • 17. Ratwani RM, Savage E, Will A, et al. A usability and safety analysis of electronic health records: a multi-center study. J Am Med Inform Assoc 2018; 25 (9): 1197–201.
  • 18. Yeh M, Jo YJ, Donovan C, Gabree S. Human Factors Considerations in the Design and Evaluation of Electronic Flight Deck Displays and Controls. Washington, DC: Federal Aviation Administration; 2013. https://www.volpe.dot.gov/sites/volpe.dot.gov/files/docs/Human_Factors_Considerations_in_the_Design_and_Evaluation_of_Flight_Deck_Displays_and_Controls_V2.pdf Accessed August 14, 2019.
  • 19. Hyman D, Laire M, Redmond D, Kaplan DW. The use of patient pictures and verification screens to reduce computerized provider order entry errors. Pediatrics 2012; 130 (1): e211–9.
  • 20. Ratwani R, Trafton JG. A real-time eye tracking system for predicting and preventing postcompletion errors. Hum Comput Interact 2011; 26 (3): 205–45.
