Abstract
Many point-of-care laboratory tests are manually entered into the electronic health record by ambulatory clinic staff, but the rate of manual transcription error for this testing is poorly characterized. Using a dataset arising from a duplicated workflow that created a set of paired interfaced and manually entered point-of-care glucose measurements, we found that 260 of 6930 (3.7%) manual entries were discrepant from their interfaced result. Thirty-seven of the 260 (14.2%) errors were discrepant by more than 20% and included potentially dangerous mistranscriptions. An additional 37 (14.2%) errors were due to the inclusion of non-numeric characters. Staff-entered result flags deviated from the result flag generated in the laboratory information system in 5121 of 6930 (73.9%) pairs. These data demonstrate that clinically significant discrepancies in clinic-entered point-of-care results occurred at a rate of approximately 5 per 1000 results, and they underscore the importance of interfacing instruments when feasible.
Keywords: clinical laboratory information systems, point-of-care testing, medical errors, laboratories, electronic health records
BACKGROUND AND INTRODUCTION
Electronic interfaces are the safest and most reliable methods for the transfer of data from laboratory instruments to laboratory information systems (LISs) and electronic health records (EHRs). Unfortunately, organizational circumstances, technical barriers, and resource limitations often prevent the interfacing of all instruments in the clinical setting, and interface issues among instruments, middleware, and LISs can pose significant hurdles to interfaced reporting.1 The result is that many laboratory tests, especially point-of-care (POC) tests, are dependent on manual entry into the EHR for reporting to the treating clinician. The process of manual entry has an inherent risk of postanalytic transcription error that has been well characterized in the laboratory and clinical research setting, but only rarely studied in a clinical care context. Here, we report the results of a duplicated workflow that allowed us to measure the rate of manual transcription error in POC testing performed in an outpatient clinical setting.
Manual transcription and its rate of error have been well studied in the laboratory setting. While prior researchers agree that the rate of error can vary depending on the nature of the value (eg numeric vs text) and the circumstances of entry, most studies report rates of error in the low single-digit percentages. McSwiney and Woodrow2 reported a rate of clerical error in their test results of 1.14%; Tuckerman and Henderson3 reported an error rate of 3–5% in their laboratory; and Shaw et al4 found a 0.83% rate of error per keystroke in a clinical microbiology laboratory. Even barcoded data entry is not without risk: Snyder et al5 estimated a barcode substitution error rate of >1 in 84 000 scans, an error type disproportionately found in POC scanners. The work environment poses additional inherent risks, such as scanning a stray label from another patient.
Similarly, multiple studies have assessed this question in the clinical research setting. Norton et al6 found a 4.2% rate of manual entry error in a study of data entry from baptismal records. In a study of manual entry of pathology records into a clinical data repository, Hong et al7 found a 2.8% overall rate of error, which ranged from 0.5% to 6.4% depending on the field in question. In a study of patient-reported outcome questionnaires, Paulsen et al8 found a 2.02% rate of error for single-key entered data and a 1.01% rate of error for double-key entered data.
Although manual entry of laboratory results for clinical care is a widespread practice with a direct bearing on medical error and patient safety, it is comparatively poorly studied. The accuracy of electronic medical records has been a longstanding concern,9 but only a few studies have attempted to quantify transcription error during clinical care, most of them in the inpatient hospital setting. In a study of 100 consecutive patients in the intensive care setting, Black et al10 found an 8.8% rate of transcription error in laboratory results. Artis et al11 found in a study of daily critical care rounding practices that 38.9% of laboratory data were miscommunicated in some fashion, most often by omission or reporting of out-of-date values. In that study, only 1.0% of errors were due to mistranscription, a finding the authors ascribed to house staff use of printed progress note templates with automatically imported laboratory values. Most relevant to the current study, Carraro and Plebani12 found manual transcription errors in 3.2% of POC glucose measurements in an inpatient hospital setting. To our knowledge, only Wilton and Pennisi13 have explicitly studied the rate of manual transcription error in an outpatient clinic setting; in a study of manually transcribed immunization data in a pediatric clinic, they found a 10.2% rate of error. The published rates of manual transcription error in a clinical setting thus span an order of magnitude, and further characterization is needed to define the scope of potential medical error associated with POC testing entered into the record by clinic staff.
At our institution, POC glucose testing in most clinics is interfaced from glucometers to the LIS via middleware when the instruments are docked; all LIS results are then transmitted to the EHR. However, issues with billing documentation led the affected clinics to have clinical support staff manually re-enter the exact same POC glucose measurement into the EHR. After laboratory staff discovered the duplicated workflow, they halted the practice and instituted a review of the resulting data. This workflow, while inefficient and productive of duplicate displays of the same value, provided an opportunity to measure the rate of error in manually transcribed laboratory results.
METHOD
Study setting and approval
This retrospective study was performed using data from 60 primary care and subspecialty clinics associated with 2 academic medical centers: a county hospital (institution 1) and a tertiary care center (institution 2) treating medically complex populations, including cancer and transplant patients. Both sites utilize the same LIS (Sunquest version 7.2; Sunquest Information Systems, Tucson, AZ), outpatient EHR (EpicCare Ambulatory versions 2014 and 2017; Epic, Verona, WI), and middleware supporting the interface of glucometers (RALS version 5.16; Alere Informatics, Charlottesville, VA). The study was approved by the University of Washington (IRB ID STUDY00003874). EHR and laboratory data for this study were obtained from January 1, 2015, to December 3, 2017.
Study design and analysis
At our institution, POC glucose testing in many clinics is interfaced via POC middleware to the LIS and then to the EHR. After the glucometer is docked, an order for the POC glucose is generated by the interface and the result is transmitted to the EHR. However, because the glucometers do not capture the ordering provider, the instrument-interfaced result lacks sufficient information for billing documentation. A clinical workflow was therefore instituted wherein, after a patient presented in clinic for a POC glucose measurement, the provider separately ordered another POC glucose in the EHR and a medical assistant or nurse manually entered the previously performed result directly into the EHR. Both results displayed in the EHR, but with separate timestamps and associated orders. Upon discovery of this workflow, which produced duplicative results, we instituted a review of POC glucose orders during this period.
Our group queried the institutional electronic data warehouse for all outpatient clinic POC glucose testing from January 1, 2015, to December 3, 2017. We then restricted the dataset to completed tests and excluded encounters in which a patient was tested multiple times, because the duplicate manual entries carried different order and result timestamps and a manually entered value could not be reliably matched to its interfaced counterpart. The result was a dataset of 6930 interfaced and manual-entry POC glucose pairs. Manually entered values were then compared with their interfaced counterparts, and chart review was performed on all pairs discrepant by greater than 20%. Data analysis was performed using the R software environment for statistical computing (version 3.3.1; R Foundation for Statistical Computing, Vienna, Austria) with the ggplot2, dplyr, ega, and readr packages.
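For illustration, a minimal R sketch of this filtering and pairing step follows. The extract schema and column names (encounter_id, source, result, status) are hypothetical stand-ins for the warehouse extract, and tidyr is used for reshaping in addition to the packages named above:

```r
library(dplyr)
library(readr)
library(tidyr)

# Hypothetical extract: one row per POC glucose result, with an encounter
# identifier, the result source ("interfaced" or "manual"), the raw result
# string, and an order status.
poc <- read_csv("poc_glucose_extract.csv")

pairs <- poc %>%
  filter(status == "Completed") %>%
  group_by(encounter_id) %>%
  # Keep only encounters with exactly one interfaced and one manual result;
  # repeat testing within an encounter makes pairing unreliable.
  filter(n() == 2, n_distinct(source) == 2) %>%
  ungroup() %>%
  pivot_wider(id_cols = encounter_id,
              names_from = source, values_from = result)
```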
RESULTS
The final dataset contained 6930 events from 2992 patients. These measurements occurred in 60 clinics and were entered by 506 clinical staff on behalf of 280 ordering providers. Of the 6930 events, 260 (3.7%) had a discrepancy between the manual and interfaced results. Thirty-seven of these (14.2% of errors and 0.5% of all events) were markedly discrepant, defined as discrepant by >20% (ie outside region A of a Clarke error grid).14 An additional 37 (14.2%) errors were due to the inclusion of non-numeric characters, which the EHR user interface accepted as valid entries and displayed as entered. Excluding entries with non-numeric characters, 223 of 6930 (3.2%) events were mistranscribed.
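Continuing the sketch above, the pairs could be classified into the error categories reported here (non-numeric entries, any discrepancy, and marked discrepancy at the >20% threshold); the column names remain hypothetical:

```r
classified <- pairs %>%
  mutate(
    non_numeric = grepl("[^0-9.]", manual),   # eg, a letter O typed in place of a zero
    manual_num  = suppressWarnings(as.numeric(manual)),
    interf_num  = as.numeric(interfaced),
    discrepant  = non_numeric | manual_num != interf_num,
    # Marked discrepancy: numeric entries differing from the interfaced
    # reference value by more than 20%.
    marked      = !non_numeric & abs(manual_num - interf_num) / interf_num > 0.20
  )

# Tally events by error category.
count(classified, non_numeric, discrepant, marked)
```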
When the discrepancies are plotted on a Clarke error grid (Figure 1), multiple transcription errors carried a risk of mistreatment. A Clarke error grid assigns risk categories to differences between discrepant glucose measurements, ranging from clinically insignificant events (region A) to potentially severe misdiagnosis (regions D and E). We then conducted a systematic chart review of all 74 cases with either marked discrepancy or inclusion of non-numeric characters. We found no apparent cases of attributable harm caused by misentered values, but did identify 5 cases in which providers charted the erroneous value in their note rather than the interfaced value. Only 1 provider noted a discrepancy in the chart, stating that the reported glucose of 13 mg/dL in an asymptomatic patient was likely a transcription error; the true value was 132 mg/dL.
Figure 1.
Clarke error grid comparing discrepant interfaced glucose results to manual-entry glucose results. A Clarke error grid assigns risk categories to discrepancies between 2 methods.14 Region A describes events within 20% of each other, whereas region B describes a discrepancy >20% but one unlikely to lead to clinical harm. Region C identifies discrepancies that may lead to unnecessary treatment. Region D describes events in which hyperglycemia or hypoglycemia was missed, and region E identifies cases in which there was outright confusion between hyperglycemia and hypoglycemia. A total of 14.2% of errors (0.5% of all events) were outside region A of the Clarke error grid, and several events had risk of mistreatment on the basis of mistranscribed values.
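The ega package named in the Method section provides error grid utilities; a minimal sketch of assigning Clarke zones to the numeric discrepant pairs from the classification sketch above might look like the following (getClarkeZones and plotClarkeGrid are ega functions; unit = "gram" denotes mg/dL):

```r
library(ega)
library(dplyr)

# Numeric discrepant pairs only; non-numeric entries cannot be gridded.
disc <- filter(classified, discrepant & !non_numeric)

# Assign each discrepant pair a Clarke zone (A-E), treating the interfaced
# result as the reference method.
zones <- getClarkeZones(disc$interf_num, disc$manual_num, unit = "gram")
table(zones)

# Plot the pairs on the error grid (returns a ggplot object).
plotClarkeGrid(disc$interf_num, disc$manual_num, unit = "gram")
```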
Clinical staff use of result flagging was also irregular: 5121 of 6930 (73.9%) pairs had a manual-entry flag discrepant from the flag generated in the LIS from the interfaced test value. However, only 31 of 5122 (0.6%) abnormal values carried no flag at all indicating an abnormal result; staff generally preferred to flag values as “abnormal” rather than “high” or “low.”
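A sketch of the flag comparison, assuming the paired data frame also carries hypothetical columns staff_flag (entered by clinic staff) and lis_flag (generated by the LIS from the interfaced value and its reference range):

```r
library(dplyr)

flag_check <- pairs %>%
  mutate(
    # Treat a missing flag as an empty string so comparisons are well defined.
    staff = coalesce(staff_flag, ""),
    lis   = coalesce(lis_flag, ""),
    flag_discrepant = staff != lis,
    # Abnormal results that carry no staff-entered flag at all.
    unflagged_abnormal = lis %in% c("high", "low", "abnormal") & staff == ""
  )

summarise(flag_check,
          discrepant_flags = sum(flag_discrepant),
          unflagged        = sum(unflagged_abnormal))
```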
DISCUSSION
Based on a clinical workflow that required tandem manual entry alongside interfaced result reporting, we found that manual transcription errors occurred at a rate of 3.7% (inclusive of non-numeric character entries), or 3.2% when considering strictly the mistranscription of numeric values. This finding is consistent with previously reported measurements in the laboratory and clinical research literature. Reassuringly, the rate of error was substantially lower than the previously discussed measurements of manual transcription error in the outpatient and critical care contexts.
A substantial portion of errors (14%) were markedly discrepant from the interfaced glucometer results. Because discrepancies are most likely to occur through the inversion, loss, or addition of digits (eg, 153 mg/dL transcribed as 513 mg/dL, 53 mg/dL, or 1553 mg/dL), occasional markedly discrepant results are expected. On the basis of these findings and the risk of mistreatment based on an in-error result, the manual-entry process was halted. In work processes where manual entry is the only viable option, certain user interface improvements could prevent a portion of errors: accepting only numeric characters, requiring values within a physiologically plausible range for a given analyte, or requiring double entry of all manually entered values in situations demanding high levels of safety.
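As an illustration of the first two safeguards, a minimal entry validator is sketched below; the plausible-range limits are arbitrary examples, not institutional policy:

```r
validate_glucose_entry <- function(entry, min_val = 10, max_val = 1500) {
  # Reject anything other than digits and an optional decimal point.
  if (entry == "" || grepl("[^0-9.]", entry)) {
    return(list(ok = FALSE, reason = "non-numeric characters"))
  }
  value <- suppressWarnings(as.numeric(entry))
  # Reject values outside a physiologically plausible range for glucose.
  if (is.na(value) || value < min_val || value > max_val) {
    return(list(ok = FALSE, reason = "outside plausible range"))
  }
  list(ok = TRUE, value = value)
}

validate_glucose_entry("13O")   # rejected: letter O in place of a zero
validate_glucose_entry("1553")  # rejected: implausible magnitude
validate_glucose_entry("132")   # accepted
```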
A limitation of this study is the lack of information regarding workflow processes in individual clinics. Individual clinics or providers may have instituted procedures or quality assurance measures to improve transcription accuracy, but the breadth of the study prevented assessment at this level of granularity. In addition, because both values were displayed in the EHR, the risk of misinterpretation or clinical harm was not the same as if clinics had depended solely on manual entry for data display. Clinical staff may have transcribed less carefully knowing that an interfaced result would also display; alternatively, they may have taken greater care in an effort to avoid confusing, discrepant results. Last, this measurement of error is limited to transcription of numeric values in a specific user interface in one EHR and may not translate to other kinds of POC testing or other EHR user interfaces.
CONCLUSION
Using a dataset arising from a redundant workflow, we gathered 6930 pairs of POC glucose results consisting of manually entered values with a paired interfaced reference standard. Clinical staff made a manual transcription error 3.7% of the time; the rate of error was 3.2% when entries containing non-numeric characters were excluded. A portion of these errors were of clinically significant magnitude and carried a risk of patient harm had the inaccurate results been acted on. Although most laboratory result reporting is interfaced, manual transcription error in the clinical setting is likely still an under-recognized and undercharacterized source of medical error. While instrument interfacing has several benefits, including consistent reporting, interfaced billing, and convenience for staff, its value for patient safety bears emphasis. These results provide guidance on the potential impact of not devoting resources to interfacing point-of-care instruments used in the clinic.
FUNDING
No funding.
CONTRIBUTORS
J.A.M. and P.C.M. conceived the study, designed the study, and collected the data. Data analysis and chart reviews were performed by J.A.M., who also created the initial draft of the manuscript. Both authors were involved in interpreting the results and characterizing their implications, in reviewing and revising the manuscript, and in all stages of the study; both approved the final version of the manuscript.
Conflict of interest statement. None declared.
REFERENCES
- 1. Krasowski MD, Wilford JD, Howard W et al. Implementation of Epic Beaker Clinical Pathology at an academic medical center. J Pathol Inform 2016; 7 (1): 7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2. McSwiney RR, Woodrow DA. Types of error within a clinical laboratory. J Med Lab Technol 1969; 26 (4): 340. [PubMed] [Google Scholar]
- 3. Tuckerman JF, Henderson AR. The clinical biochemistry laboratory computer system and result entry: validation of analytical results. Comput Methods Programs Biomed 1985; 20 (1): 103–16. [DOI] [PubMed] [Google Scholar]
- 4. Shaw R, Coia JE, Michie J. Use of bar code readers and programmable keypads to improve the speed and accuracy of manual data entry in the clinical microbiology laboratory: experience of two laboratories. J Clin Pathol 1999; 52 (1): 54–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Snyder ML, Carter A, Jenkins K, Fantz CR. Patient misidentifications caused by errors in standard bar code technology. Clin Chem 2010; 56 (10): 1554–60. [DOI] [PubMed] [Google Scholar]
- 6. Norton SL, Buchanan AV, Rossmann DL et al. Data entry errors in an on-line operation. Comput Biomed Res 1981; 14 (2): 179–98. [DOI] [PubMed] [Google Scholar]
- 7. Hong MK, Yao HH, Pedersen JS et al. Error rates in a clinical data repository: lessons from the transition to electronic data transfer—a descriptive study. BMJ Open 2013; 3 (5): e002406. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Paulsen A, Overgaard S, Lauritsen JM. Quality of data entry using single entry, double entry and automated forms processing–an example based on a study of patient-reported outcomes. PLoS One 2012; 7 (4): e35087. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997; 4 (5): 342–55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Black R, Woolman P, Kinsella J. Variation in the transcription of laboratory data in an intensive care unit. Anaesthesia 2004; 59 (8): 767–9. [DOI] [PubMed] [Google Scholar]
- 11. Artis KA, Dyer E, Mohan V et al. Accuracy of laboratory data communication on ICU daily rounds using an electronic health record. Crit Care Med 2017; 45 (2): 179–86. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Carraro P, Plebani M. Post-analytical errors with portable glucose meters in the hospital setting. Clin Chim Acta 2009; 404 (1): 65–7. [DOI] [PubMed] [Google Scholar]
- 13. Wilton R, Pennisi AJ. Evaluating the accuracy of transcribed computer-stored immunization data. Pediatrics 1994; 94 (6 Pt 1): 902–6. [PubMed] [Google Scholar]
- 14. Clarke WL, Cox D, Gonder-Frederick LA et al. Evaluating clinical accuracy of systems for self-monitoring of blood glucose. Diabetes Care 1987; 10 (5): 622–8. [DOI] [PubMed] [Google Scholar]