Author manuscript; available in PMC: 2014 Mar 21.
Published in final edited form as: Ann Emerg Med. 2013 Sep 14;63(3):320–328. doi: 10.1016/j.annemergmed.2013.08.019

Transient and Sustained Changes in Operational Performance, Patient Evaluation, and Medication Administration During Electronic Health Record Implementation in the Emergency Department

Michael J Ward 1, Craig M Froehle 1, Kimberly W Hart 1, Sean P Collins 1, Christopher J Lindsell 1
PMCID: PMC3961764  NIHMSID: NIHMS559175  PMID: 24041783

Abstract

Study objective

Little is known about the transient and sustained operational effects of electronic health records on emergency department (ED) performance. We quantify how the implementation of a comprehensive electronic health record was associated with metrics of operational performance, test ordering, and medication administration at a single-center ED.

Methods

We performed a longitudinal analysis of electronic data from a single, suburban, academic ED during 28 weeks between May 2011 and November 2011. We assessed length of stay, use of diagnostic testing, medication administration, radiologic imaging, and patient satisfaction during a 4-week baseline measurement period and then tracked changes in these variables during the 24 weeks after implementation of the electronic health record.

Results

Median length of stay increased and patient satisfaction was reduced transiently, returning to baseline after 4 to 8 weeks. Rates of laboratory testing, medication administration, overall radiologic imaging, radiographs, computed tomography scans, and ECG ordering all showed sustained increases throughout the 24 weeks after electronic health record implementation.

Conclusion

Electronic health record implementation in this single-center study was associated with both transient and sustained changes in metrics of ED performance, as well as laboratory and medication ordering. Understanding ways in which an ED can be affected by electronic health record implementation is critical to providing insight about ways to mitigate transient disruption and to maximize potential benefits of the technology.

INTRODUCTION

Background

Electronic health records are purported to reduce health care costs, lessen unnecessary testing, and improve operational performance of the health system, as well as of individual health care settings.1–9 However, the projected cost savings and efficiency gains have recently been called into question.10 Further challenges include the actual process of implementation, which is tremendously complex for any major information technology project. This is certainly the case in emergency departments (EDs), and this disruptive period, albeit temporary, can threaten to offset anticipated benefits.11,12 Failure during the implementation phase is not uncommon, and the resulting removal of a major information technology system can be very costly, cause significant long-term organizational disruption, and jeopardize future information technology investments.13,14

Even in well-managed implementations, numerous factors, such as the organizational learning curve, major changes in workflow, and unintended consequences of a new information technology system, frequently result in a reduction in performance during implementation.15,16 One study found that physician documentation time increased 4- to 5-fold after a paper-to-electronic health record transition.16 It can take many months to recover from the performance degradation and realize the operational benefits the electronic health record was intended to produce.15 In overburdened EDs, it is increasingly recognized that this transient reduction in performance has a negative effect on health care delivery. Implementation of an electronic health record reduces staff productivity and morale and affects patient care.17 Length of stay in the ED has been shown to increase, and evidence suggests that patient care changes, with an increase in both imaging and laboratory use.18–20 Indeed, the intensity of ED visits resulting from increased laboratory and radiologic testing, as well as medication administration, may contribute to ED crowding just as much as patients in the ED awaiting an inpatient bed.21 Similarly, the implementation of computerized physician order entry as a component of electronic health records can result in an increase in errors, with resulting detrimental patient outcomes.22–26 There is a need to better understand how adult EDs perform during electronic health record implementation and how patients’ care changes so that interventions can be identified to offset potential adverse effects.

Importance

As health care moves toward pay for performance, operational efficiency and patient care practices will increasingly be scrutinized. Any degradation in performance or inappropriate patient care during electronic health record implementation may cause EDs to suffer both operationally and financially.27,28 Quantifying the duration and magnitude of the transient effects of electronic health record implementation on operational performance and on patient care is essential for preparing to mitigate any negative implications of the endeavor.

Goals of This Investigation

We sought to characterize how ED operational performance, measured with metrics describing patient throughput, diagnostic testing, medication administration and patient satisfaction, changed during electronic health record implementation. We also sought to determine whether changes were temporary or sustained. We hypothesized that length of stay would be temporarily prolonged after electronic health record implementation, with a commensurate decrease in patient satisfaction. We also hypothesized that there would be a temporary increase in the volume of diagnostic testing, imaging, and medication administration.

MATERIALS AND METHODS

Study Design, Setting, and Selection of Participants

We conducted a longitudinal analysis of data from a 24-bed, suburban, academic ED in Cincinnati, OH, after approval from the institutional review board. The annual volume was approximately 34,000 patients, and the ED was staffed with board-certified and -prepared emergency physicians, emergency medicine and internal medicine residents, and physician assistants and nurse practitioners. All patient presentations to the ED between May 15, 2011, and November 26, 2011, were included.

To prepare for the implementation, physicians and midlevel providers underwent 10 hours of electronic health record training and 2 hours of training to use an optional voice-recognition system. Residents who had experience working with an electronic health record from the same vendor at another clinical facility were given an abbreviated training session of 4 hours. Nurses and technicians underwent 14 hours of training and unit clerks received 5 hours of training. After implementation, new nurses, technicians, and unit clerks received 8, 2, and 2 hours of training, respectively.

Before implementation, multiple disparate computer systems were used for information retrieval, documentation, and ordering. A paper chart for notes and physician orders, along with an electronic track board (Horizon Emergency Care ED Track Board; McKesson Corporation, San Francisco, CA) and telephone dictation, was used. On June 12, 2011, a single, comprehensive electronic health record (EPIC ASAP; Epic Systems Corporation, Verona, WI) was implemented as part of a hospital-wide system. Voice-recognition software was deployed simultaneously for physician documentation. Decision support for laboratory testing and radiologic imaging was not included as a feature of the implemented electronic health record software, nor was it available before implementation in the paper environment. Paper-based order sets were available for disease-specific conditions before implementation, and these were replicated in the electronic environment once the electronic health record implementation occurred.

Data Collection and Processing

Data were obtained from the administrative electronic tracking systems in operation for the 4 weeks before electronic health record implementation and served as the baseline. After electronic health record implementation, 24 weeks of data were collected from the electronic health record. The total observation period of 28 weeks was selected because we wanted to be able to assess the steady state immediately before implementation and determine the transience or permanence of any change; a previous study suggested that ED throughput returned to baseline after approximately 3 months.18

Data were transmitted to study investigators weekly in electronic files and subsequently imported into SPSS for analysis (version 21; IBM Corporation, Armonk, NY). For every patient visit to the ED, patient age, sex, race, acuity, mode of arrival, insurance status, operational time stamps for the ABCDEs of patient throughput in the ED (A, arrival; B, bed placement; C, clinician initial evaluation; D, disposition decision; and E, exit time), and disposition were collected. Service intervals (the duration between any 2 time stamps) and total length of stay were calculated for each patient in minutes.
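To illustrate the interval calculation described above, here is a minimal Python sketch (the study itself used SPSS; the `service_intervals` helper and its field names are our own, hypothetical constructs):

```python
from datetime import datetime

def service_intervals(visit):
    """Return service intervals in minutes between consecutive ABCDE time stamps.

    visit: dict mapping stage -> datetime for A (arrival), B (bed placement),
    C (clinician initial evaluation), D (disposition decision), E (exit).
    """
    stages = ["A", "B", "C", "D", "E"]
    intervals = {}
    for start, end in zip(stages, stages[1:]):
        delta = visit[end] - visit[start]
        intervals[f"{start}-{end}"] = delta.total_seconds() / 60.0
    # Total length of stay is the arrival-exit interval
    intervals["LOS"] = (visit["E"] - visit["A"]).total_seconds() / 60.0
    return intervals

# Hypothetical visit with intervals matching the baseline medians in Table 2
visit = {
    "A": datetime(2011, 6, 13, 10, 0),
    "B": datetime(2011, 6, 13, 10, 10),
    "C": datetime(2011, 6, 13, 10, 16),
    "D": datetime(2011, 6, 13, 11, 51),
    "E": datetime(2011, 6, 13, 12, 29),
}
print(service_intervals(visit))
```

Each visit yields four consecutive service intervals plus the total length of stay; negative values produced by out-of-order time stamps are addressed in the cleaning step described under Primary Data Analysis.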

We also quantified the number of patient complaints, patient satisfaction, and quality metrics (ED deaths, return admissions within 72 hours, transfers, and patients who left against medical advice). These were obtained from ED administrators, who regularly collected and aggregated these data. We additionally quantified use of radiologic imaging (radiography, computed tomography [CT], magnetic resonance imaging [MRI], and formal ultrasonography), medication administration, ECGs, and laboratory testing for each patient. Radiologic tests were included only if they had an interpretation. For laboratory tests, we included only orders that had a result in the system, excluding tests that were ordered and subsequently cancelled. Only completed ECGs were counted. Finally, only medications actually administered were included.

Outcome Measures

The primary outcome measures were operational service intervals using the ABCDE time stamps of ED patient throughput. Length of stay (ie, the arrival-exit interval) was categorized for all patients combined and separately for admitted and discharged patients. The ABCDEs represent time stamps for patient throughput that are generalizable to most EDs.29 Secondary measures included ED operational and patient care metrics, such as diagnostic testing rates, medication administration rates, and patient satisfaction.

Primary Data Analysis

Considering that operational data frequently contain errors, we planned to assess the data for missingness, systematic errors, and negative and extremely large values. We did not plan to impute missing values; however, we planned to include patients in analyses even if they had some missing values. Additionally, we planned to assess the data for identical time stamps and for time stamp values that were overrepresented in the data and therefore likely to contain error. Last, after calculating service intervals, we planned to evaluate for negative values and to set these values to missing because a negative interval cannot exist. To evaluate whether this assumption held, we planned to evaluate the data both with and without negative values. Additionally, we planned to test the data for positive outliers (more than 1,440 minutes). Although we did not plan to remove these large values because their validity could not be negated, quantifying their frequency helps to assess their presence in our data set.
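A minimal sketch of the cleaning rules above (negative intervals set to missing, large positive values flagged but retained); the `clean_intervals` function name is hypothetical:

```python
def clean_intervals(intervals_min):
    """Apply the cleaning rules to a list of service intervals in minutes.

    Negative intervals cannot exist, so they are set to missing (None).
    Values over 1,440 minutes (24 hours) are counted but kept, because
    their validity cannot be ruled out. No imputation is performed.
    """
    cleaned, n_negative, n_outliers = [], 0, 0
    for v in intervals_min:
        if v is None:
            cleaned.append(None)      # already missing; left as-is
        elif v < 0:
            cleaned.append(None)      # negative interval -> missing
            n_negative += 1
        else:
            if v > 1440:
                n_outliers += 1       # flagged, not removed
            cleaned.append(v)
    return cleaned, n_negative, n_outliers
```

Running the analysis on both the raw and cleaned lists, as the authors did, checks whether excluding negative intervals changes the medians or their confidence intervals.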

Medians and 95% confidence intervals of the medians were calculated for each service interval. All statistical analyses were conducted with SPSS (IBM Corporation). The unit of analysis for this study was the 4-week period; that is, operational and performance metrics were calculated for each 4-week interval.
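The analysis was conducted in SPSS; as an illustration only, a median with a 95% confidence interval could be estimated in Python with a bootstrap percentile method (one of several valid approaches, not necessarily the method SPSS uses):

```python
import random
import statistics

def median_with_ci(values, n_boot=2000, alpha=0.05, seed=42):
    """Point estimate and bootstrap percentile 95% CI for the median."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    med = statistics.median(values)
    boot_medians = sorted(
        statistics.median(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    lo = boot_medians[int((alpha / 2) * n_boot)]
    hi = boot_medians[int((1 - alpha / 2) * n_boot)]
    return med, lo, hi
```

With such a helper, the service intervals from each 4-week block would yield one median (95% confidence interval) cell, as reported in Table 2.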

RESULTS

Patient populations were similar before and after electronic health record implementation, with the exception that patient acuity was slightly higher afterward than before; a training workshop on assigning patient acuity levels had been held shortly after electronic health record implementation (Table 1), which may have influenced some assignments. No ambulance diversion occurred during the study period.

Table 1.

Demographics of ED patient population before and after electronic health record implementation.*

Characteristics Preimplementation, No. (%) Postimplementation, No. (%)
Age (mean), y 50 50
Pediatric rate (<18 y) 103 4.0 597 4.5
Geriatric rate (>65 y) 708 27.4 3,493 26.1
Sex
 Female 1,559 58.1 7,897 59.0
Race
 White 1,811 67.5 8,962 67.0
Financial class
 Government 1,285 47.9 6,493 48.6
 Commercial insurance 887 33.1 4,271 31.9
 Self-pay 511 19.0 2,605 19.5
Emergency severity index
 1 16 0.6 69 0.5
 2 135 5.1 1,376 10.3
 3 1,737 65.1 8,695 65.0
 4 758 28.4 3,129 23.4
 5 24 0.9 112 0.8
Mode of arrival
 EMS 543 20.3 2,655 19.8
Disposition
 Discharge 1,804 67.2 9,514 71.1
 Admit 785 29.3 3,458 25.8
 Other 94 3.5 409 3.1
Admission location
 ICU 87 3.4 366 2.7

EMS, emergency medical services.

* Data are presented as No. (%) unless otherwise indicated.

When evaluating the service intervals, we set a total of 601 (0.66% of 90,958 total service interval data points) negative values for service intervals to missing, of which 576 occurred after implementation. The bed to clinician service interval accounted for a majority of negative service intervals (316; 52.6% of total negatives), followed by disposition to exit (133; 22.1%) and clinician to disposition (125; 20.8%). When the service intervals were evaluated both with and without the negative service intervals, the values and their 95% confidence intervals were exactly the same. The data set was evaluated for extreme positive values over 1,440 minutes, and a total of 8 (0.0088% of 90,958 total service interval data points) were identified. Evaluation of median service times and 95% confidence intervals of the median before and after removal of negative values showed no differences in either the median or the confidence intervals.

Main Results

Median length of stay increased for admitted and discharged patients, a change that lasted approximately 8 weeks. The service intervals for arrival to bed and from bed to clinician were prolonged for 4 weeks and returned to baseline during the subsequent 4-week period. The interval from clinician to disposition showed a sustained increase and never returned to baseline. The interval from disposition to exit was unchanged immediately after implementation but then decreased below and remained below the baseline for the remainder of the study period (Table 2; Figure 1). After implementation, the time stamp for the disposition decision was found to be redefined for admitted patients: before implementation, this time stamp indicated the time the emergency physician placed the order to admit the patient; in contrast, after implementation this time stamp reflected the time the admitting physician completed his or her orders.

Table 2.

Median ED service intervals (in minutes) by weeks for patients presenting to the ED.*

Service Interval Weeks (95% Confidence Interval)
−4 to 0 0 to 4 4 to 8 8 to 12 12 to 16 16 to 20 20 to 24
Arrival to bed 10 (10–11) 12 (11–13) 8 (7–8) 5 (5–5) 5 (5–5) 6 (5–6) 5 (5–5)
Bed to clinician 6 (6–7) 8 (8–9) 6 (6–7) 6 (5–6) 5 (5–6) 4 (3–4) 4 (4–5)
Clinician to disposition 95 (92–99) 131 (128–136) 134 (129–139) 119 (115–124) 121 (117–126) 121 (116–125) 115 (111–118)
Disposition to exit 38 (36–41) 33 (31–35) 28 (26–29) 26 (25–27) 26 (24–27) 24 (24–26) 25 (23–26)
Length of stay
 Total 185 (179–192) 224 (216–231) 198 (193–205) 179 (173–184) 177 (172–183) 174 (168–180) 163 (157–168)
 Admit 303 (295–317) 332 (322–343) 308 (296–318) 278 (269–292) 277 (267–284) 274 (261–284) 260 (251–271)
 Discharge 146 (141–153) 172 (168–178) 167 (160–173) 143 (138–148) 143 (138–149) 141 (136–147) 131 (127–135)
* “0” indicates the date of electronic health record implementation.

Figure 1.

Figure 1

Box-and-whisker plots of the service intervals (in minutes, on a log-scaled axis) by weeks for A, arrival to bed; B, bed to clinician; C, clinician to disposition; D, disposition to exit; and E, total length of stay. The boxes represent the interquartile range; the bar within each box represents the median value. The whiskers represent the limit for outliers, shown as circles (○), defined as values falling between 1.5 and 3 box lengths from the end of the box. Cases shown as stars (*) are extreme values, defined as greater than 3 box lengths from the end of the box. “0” indicates the date of electronic health record implementation. To display extreme values while providing sufficient detail around the normative values, the vertical axis has been transformed logarithmically. This transformation of the axis units is for visualization purposes only; the data values themselves were not transformed.

Patient volume was higher before implementation than after it (Table 3). Additionally, medication administration per 100 patients nearly doubled after implementation, and this difference was sustained through the entire study period. The number of ECGs performed increased from 23.7 to 35.7 per 100 patients, and laboratory testing with results available increased from 225.4 to 374.5 tests per 100 patients. Overall radiologic imaging did not show large changes. Among specific modalities, CT use increased from 24.3 to 25.9 studies per 100 patients, as did radiography (73.3 to 75.0 studies per 100 patients) and MRI (0.5 to 0.7 studies per 100 patients). Formal ultrasonography increased from 1.8 to 2.3 studies per 100 patients, with a single decrease, to 1.3 tests per 100 patients, during weeks 16 to 20 (Figure 2).

Table 3.

ED operating characteristics, billing, complaints, and use of radiologic imaging, medications, ECGs, and laboratory testing before and after electronic health record implementation by 4-week block.

Characteristics 4-Week Block
−4 to 0 0 to 4 4 to 8 8 to 12 12 to 16 16 to 20 20 to 24
Volume (N) 2,588 2,256 2,198 2,180 2,199 2,287 2,258
Medications administered (N/100) 125 246 250 265 248 226 216
Laboratory testing (N/100) 225 303 432 425 392 388 375
ECGs (N/100) 23.7 46.7 40.6 39.1 36.8 27.6 35.7
Overall radiologic imaging tests (N/100) 100 107 111 111 104 111 104
 Radiograph 73.3 76.8 78.2 79.0 74.8 80.0 75.0
 CT 24.3 26.5 30.0 27.7 26.1 28.6 25.9
 MRI 0.50 0.93 1.09 0.92 1.09 0.87 0.71
 Ultrasonography 1.78 2.22 2.05 2.61 2.05 1.31 2.30
Morbidity and mortality metrics (%)
 AMA 0.43 0.49 0.27 0.69 0.68 0.13 0.62
 ED deaths 0.08 0.13 0.05 0.05 0.18 0.09 0.04
 Transfers 1.47 0.80 0.73 1.10 1.05 0.87 0.89
 R72 1.16 0.58 0.96 0.78 0.86 0.61 0.97
Complaints (N/1,000) 4.64 3.10 2.73 4.13 1.82 1.75 3.99

AMA, Against medical advice; R72, return and admit within 72 hours.

Figure 2.

Figure 2

Use of ECGs, laboratory testing, medication administration, CT, MRI, ultrasonography, and plain radiography per 100 ED visits by weeks as a percentage of the preimplementation baseline. “0” indicates the period of electronic health record implementation. Meds, medications; Rads, radiology.


Measures of morbidity and mortality showed that the proportion of patients leaving against medical advice increased from 0.4% to 0.6%, whereas deaths in the ED decreased from 0.08% to 0.04%. Transfers and return admissions within 72 hours both decreased, from 1.5% to 0.9% and from 1.2% to 1.0%, respectively. The number of complaints per 1,000 patients decreased from 4.6 to 3.1 immediately after implementation and was 4.0 per 1,000 patients at the end of the study period. Mean monthly patient satisfaction scores (“overall rating ER care”), obtained from the hospital’s Press-Ganey Emergency Room Survey, decreased during the first 2 months after implementation and then returned toward baseline (90.0% at baseline; 79.9% and 78.6% during the first 2 months; then 89.1%, 83.6%, 85.8%, and 87.3% in the subsequent months).

Before implementation, the hospital planned to increase nursing hours by nearly 23% during the first 4 weeks. After the first 4 weeks, and for the remainder of the study period, nursing hours were sustained at 2% to 5% above baseline levels. Physician staffing hours were temporarily augmented by 8.3% for the first 4 weeks, although 80 of the 84 additional hours were during the first week of the implementation and included 75 hours of backup coverage because of high volume rather than a scheduled staffing increase. After this temporary increase, physician staffing returned to the baseline levels. Unit clerk hours had decreased by 33% from baseline by the end of the study period, which was a planned decrease because of the increased administrative work that would be performed in the electronic health record by clinicians. There were minimal changes in midlevel provider hours (Table 4).

Table 4.

Total ED staffing by type of position by weeks.*

Staffing Role Weeks From EHR Implementation, Hours
−4 to 0 0 to 4 4 to 8 8 to 12 12 to 16 16 to 20 20 to 24
Nursing 4,434 5,433 4,642 4,573 4,541 4,544 4,534
Physician 1,008 1,092 1,008 1,008 1,008 1,008 1,008
Unit clerks 1,008 1,008 1,008 1,008 1,008 900 672
Midlevel providers (NP, PA) 476 513 489 475 512 470 459
Technician 1,344 1,344 1,344 1,344 1,344 1,344 1,344

NP, nurse practitioner; PA, physician assistant.

* “0” indicates the date of electronic health record implementation.

LIMITATIONS

Our study shows that during the first 6 months after implementation of an electronic health record, there are both transient and persistent changes in measures of emergency care system operational performance and patient care. Changes were both positive and negative in consequence. These results should be considered in light of the limitations of this study.

Our study did not consider a comprehensive view of ED operational performance. Many aspects of this complex emergency care system, including patient safety, quality, and user interface with information technology, were not considered.

The baseline period before electronic health record implementation was a single 4-week period. We chose the baseline period of 4 weeks and a follow-up period of 6 months a priori according to preimplementation expectations about the patient volumes needed for establishing baseline metrics and the time needed to arrive at stable postimplementation operations. We observed that patient volume in the baseline period was between 11.6% and 15.8% higher than in the 4-week blocks after implementation. Had the volume not differed, it is possible we would have observed a greater difference in patient throughput. Because the lower volumes, and correspondingly reduced ED crowding, observed after electronic health record implementation would tend to be associated with shorter length of stay, the observed increase in length of stay over baseline is perhaps even more noteworthy.30 There is also likely to be some interaction between volume and both patient care and documentation of acuity, although it is unclear in which direction this would have biased the results. Future studies should consider using a longer baseline period to establish a better understanding of preimplementation operating characteristics.

This study was conducted in a single, academic, community ED and the generalizability of findings is unknown, given the multitude of environments in which EDs exist. Additionally, these findings are limited to the ED and do not generalize to other clinical settings. It is possible that although increased testing in the ED before inpatient admission would have prolonged the ED stay, this could have benefited the inpatient evaluation such that the overall hospital length of stay was reduced. The electronic time stamp data were subject to multiple systematic errors and biases that resulted in small increases in service interval inaccuracies. Although the reproducibility of service intervals increased with electronic health record implementation, their accuracy and bias may have worsened.31 Using nonparametric statistics helps to minimize the effects of outliers on comparisons, although it might be argued that it also minimizes the effects of true outliers on performance; these outliers can represent instances in which the system fails extensively.

Many operational metrics such as radiologic imaging intervals were not available electronically, limiting analysis of how these aspects of patient care affected more global operational measures.

The study institution was heavily reliant on point-of-care testing, and so we included these tests to obtain a better understanding of the amount of laboratory testing conducted.

Another potential confounder was the planned augmentation of staffing levels, which covered all staff except midlevel providers and physicians. However, augmented staffing would bias against delayed throughput during the implementation phase, potentially making our results more conservative.

Last, patient-centered outcomes of deaths and readmissions were limited only to those reported in the system and have not been validated through any follow-up with the patients.

DISCUSSION

This study explores some of the transient and persistent effects of electronic health record implementation on ED operations and patient care at a single institution. Others have shown effects of an electronic health record implementation on operational metrics in a single pediatric ED population and in a single Australian ED, and our findings are consistent in an adult setting in the United States. More important, we additionally demonstrate that there are both transient and persistent changes in operational measures that arise with the implementation of an electronic health record. The disruption in throughput appeared to be temporary and was followed by throughput that improved beyond the baseline level, consistent with results of other studies.15,18 Effects on patient satisfaction also appeared to be temporary and may have been related to the transient disruption in throughput. Length of stay is known to be associated with patient satisfaction.32–35 Baseline satisfaction was not quite achieved by the end of the study period, but there was a consistent improvement during the 24-week period.

There are several possible explanations for our findings. First, the temporary operational disruption may have been attributable to delays associated with an entire organization learning a new workflow as a result of the conversion from a partially paper-based, semielectronic process to a fully electronic environment. Additional changes in the physician documentation process, such as adopting voice-recognition software, likely further contributed to delays while providers adjusted to the new technology and processes.16 Our findings are consistent with results of major information technology implementations from other industries and with research from a small number of ED settings that show that as users learn the new system and become more proficient, operational inefficiencies are ameliorated.15,18 Our results suggest that the period of disruption ranges from 4 to 16 weeks. However, not all operational metrics were affected for the same time frame; throughput returned to baseline by 8 weeks, whereas satisfaction was subjected to a more persistent effect. How and whether these time frames generalize to other ED settings is unknown.

We also found that one of the time stamps, disposition decision, had its operational definition changed after electronic health record implementation. Although this limits our assessment of the service intervals after electronic health record implementation, we still gain insight.

First, the redefinition of the disposition decision time stamp artificially prolongs the clinician evaluation–disposition decision interval while shortening the disposition decision–exit interval. Only the latter is a scrutinized metric on public Web sites, so the redefined time stamp offers an artificially enhanced perception of performance despite no real change occurring in patient flow.

Second, although the immediate postimplementation service times cannot be directly compared with baseline, we can observe their trajectory during the subsequent 6 months to determine whether they increased or decreased. There was a combined increase of 31 minutes across the service intervals immediately after implementation, which fell to 3 minutes below baseline by the end of the study period. Although the change in definition limits the usability of these service intervals, there is significant value in understanding the measurement and reporting implications of such changes.

Perhaps the most pertinent finding from our study is that the observed increases in laboratory testing, radiologic imaging, and ECG rates appeared to persist throughout the follow-up period. This corroborates large database studies demonstrating that diagnostic testing and radiologic imaging rates tend to increase after electronic health record implementation in the ambulatory care setting, and it may provide a partial explanation for the operational disruption.19,36 The use of advanced imaging modalities and laboratory evaluations, sometimes referred to in the literature as practice intensity, has been found to be a primary contributor to increased ED crowding.21 Although advanced imaging use increased only minimally, ordering overall may have become easier and faster after electronic health record implementation. Order sets are used by physician groups to increase adherence to local practice patterns and to speed the ordering of complex sets of orders,37,38 and when formalized as part of the electronic health record implementation they may also have increased the use of radiologic tests, medications, and laboratory tests that would not have been ordered in a paper-based system. The electronic health record may also have enhanced the ability to order tests by simply clicking a button or through the use of order sets. Although paper order sets were in use before the electronic health record implementation at this particular facility, adoption of electronic order entry has been shown to increase ordering of tests and medications through order sets.39

Lack of adequate decision support could have accounted for the changes in radiologic and laboratory testing and medication administration rates we observed. Clinical decision support has been shown to reduce CT testing for pulmonary embolism.40 However, clinical decision support has also been shown to increase testing when it was underused before implementation.41 Ultimately, the effect on test ordering has been found to be mixed. Additionally, both the adoption of clinical decision support in the ED and the number of conditions to which it applies are limited. Increased ordering of laboratory and radiologic tests may also have occurred if physicians wanted to avoid logging in and out of the electronic health record or having to use a tap badge multiple times. It is also possible that the increased usage rates reflect better data capture in the new electronic system. However, the process of capturing ECG usage did not change after implementation, and we still observed a near doubling in use per 100 patients. Finally, during the period of this study there was an initiative to decrease the time from patient presentation to treatment by a physician that may have resulted in increased ordering of tests before full physician evaluation. Each of these theories needs further exploration, and mixed-methods approaches could be used to better understand how physician behavior changes with an electronic health record implementation. Moreover, whether or how these changes affected patient outcomes remains to be seen.

Another issue raised by this study is the choice of metrics used to evaluate ED operations and the effect of electronic health record implementation. We used readily available, standardized, and well-understood metrics to paint as comprehensive a view of the implementation as possible. Although these standardized measures allow other institutions to compare themselves with our results, they do not paint a complete picture of the electronic health record implementation process. For example, they do not account for rework or for how the time spent by nursing and clinical staff changed after the information technology implementation. Similarly, although we obtained patient satisfaction data, we did not gauge how staff satisfaction changed with the electronic health record implementation.

This study provides insight for ED staff, clinicians, and administrators into how electronic health record implementation affects usual ED operations during the start-up period and the subsequent 6 months. By distinguishing between transient and persistent effects, it gives ED administrators, researchers, and policymakers an enhanced description of the immediate disruption and longer-term changes associated with electronic health record implementation. Although more work is needed to understand how workflows change and how users interact with the technology, our results provide insight into the implementation process itself. Research such as this will inform efforts to shorten the period of temporary disruption and to maximize the benefits, or mitigate the deleterious effects, of persistent changes. Improving the implementation experience would remove a major barrier to successful electronic health record implementation and allow research endeavors to focus on enhancing the technology available to providers as they seek to increase the efficiency and quality of care provided to patients.

Electronic health record implementation in the ED is associated with sustained increases in diagnostic testing, laboratory, and imaging rates, in addition to transient operational inefficiency and a diminished patient experience. Recognizing the temporal nature of these disruptions allows ED leadership to prepare adequately for implementation and to focus on improving health care operations and care delivery.

Acknowledgments

Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article as per ICMJE conflict of interest guidelines (see www.icmje.org). The authors have stated that no such relationships exist. During the study, Dr. Ward was supported by a research fellowship from the Emergency Medicine Foundation. He is also supported by a K12 grant from the National Heart, Lung, and Blood Institute (K12HL109019).

Footnotes

Author contributions: MJW, CMF, SPC, and CJL conceived the study. MJW and CJL obtained research funding. MJW supervised the conduct of the data collection. MJW, KWH, and CJL performed analysis of the data. All authors interpreted the results. MJW drafted the article, and all authors contributed substantially to its revision. MJW takes responsibility for the paper as a whole.

References

1. Tierney WM, Miller ME, McDonald CJ. The effect on test ordering of informing physicians of the charges for outpatient diagnostic tests. N Engl J Med. 1990;322:1499–1504. doi: 10.1056/NEJM199005243222105.
2. Tierney WM, McDonald CJ, Martin DK, et al. Computerized display of past test results: effect on outpatient testing. Ann Intern Med. 1987;107:569–574. doi: 10.7326/0003-4819-107-4-569.
3. Wilson GA, McDonald CJ, McCabe GP Jr. The effect of immediate access to a computerized medical record on physician test ordering: a controlled clinical trial in the emergency room. Am J Public Health. 1982;72:698–702. doi: 10.2105/ajph.72.7.698.
4. Stair TO. Reduction of redundant laboratory orders by access to computerized patient records. J Emerg Med. 1998;16:895–897. doi: 10.1016/s0736-4679(98)00106-1.
5. Stewart BA, Fernandes S, Rodriguez-Huertas E, et al. A preliminary look at duplicate testing associated with lack of electronic health record interoperability for transferred patients. J Am Med Inform Assoc. 2010;17:341–344. doi: 10.1136/jamia.2009.001750.
6. Thomas SH, Orf J, Peterson C, et al. Frequency and costs of laboratory and radiograph repetition in trauma patients undergoing interfacility transfer. Am J Emerg Med. 2000;18:156–158. doi: 10.1016/s0735-6757(00)90008-1.
7. Spalding SC, Mayer PH, Ginde AA, et al. Impact of computerized physician order entry on ED patient length of stay. Am J Emerg Med. 2011;29:207–211. doi: 10.1016/j.ajem.2009.10.007.
8. Boger E. Electronic tracking board reduces ED patient length of stay at Indiana Hospital. J Emerg Nurs. 2003;29:39–43. doi: 10.1067/men.2003.13.
9. Baumlin KM, Shapiro JS, Weiner C, et al. Clinical information system and process redesign improves emergency department efficiency. Jt Comm J Qual Patient Saf. 2010;36:179–185. doi: 10.1016/s1553-7250(10)36030-2.
10. Kellermann AL, Jones SS. What it will take to achieve the as-yet-unfulfilled promises of health information technology. Health Aff (Millwood). 2013;32:63–68. doi: 10.1377/hlthaff.2012.0693.
11. Aarts J, Doorewaard H, Berg M. Understanding implementation: the case of a computerized physician order entry system in a large Dutch university medical center. J Am Med Inform Assoc. 2004;11:207–216. doi: 10.1197/jamia.M1372.
12. Agarwal R, Gao GD, DesRoches C, et al. The digital transformation of healthcare: current status and the road ahead. Inform Syst Res. 2010;21:796–809.
13. Al-Mashari M, Al-Mudimigh A. ERP implementation: lessons from a case study. Inform Technol People. 2003;16:21–33.
14. Scott JT, Rundall TG, Vogt TM, et al. Kaiser Permanente's experience of implementing an electronic medical record: a qualitative study. BMJ. 2005;331:1313–1316. doi: 10.1136/bmj.38638.497477.68.
15. McAfee A. The impact of enterprise information technology adoption on operational performance: an empirical investigation. Prod Oper Manag. 2002;11:33–53.
16. Park SY, Lee SY, Chen Y. The effects of EMR deployment on doctors' work practices: a qualitative study in the emergency department of a teaching hospital. Int J Med Inform. 2012;81:204–217. doi: 10.1016/j.ijmedinf.2011.12.001.
17. Patrick J. A Study of a Health Enterprise Information System. University of Sydney; 2011.
18. Kennebeck SS, Timm N, Farrell MK, et al. Impact of electronic health record implementation on patient flow metrics in a pediatric emergency department. J Am Med Inform Assoc. 2011. doi: 10.1136/amiajnl-2011-000462.
19. McCormick D, Bor DH, Woolhandler S, et al. Giving office-based physicians electronic access to patients' prior imaging and lab results did not deter ordering of tests. Health Aff (Millwood). 2012;31:488–496. doi: 10.1377/hlthaff.2011.0876.
20. Mohan MK, Bishop RO, Mallows JL. Effect of an electronic medical record information system on emergency department performance. Med J Aust. 2013;198:201–204. doi: 10.5694/mja12.10499.
21. Pitts SR, Pines JM, Handrigan MT, et al. National trends in emergency department occupancy, 2001 to 2008: effect of inpatient admissions versus emergency department practice intensity. Ann Emerg Med. 2012;60:679–686. doi: 10.1016/j.annemergmed.2012.05.014.
22. Sittig DF, Ash JS, Zhang J, et al. Lessons from "Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system". Pediatrics. 2006;118:797–801. doi: 10.1542/peds.2005-3132.
23. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005;116:1506–1512. doi: 10.1542/peds.2005-1287.
24. Del Beccaro MA, Jeffries HE, Eisenberg MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics. 2006;118:290–295. doi: 10.1542/peds.2006-0367.
25. King WJ, Paice N, Rangrej J, et al. The effect of computerized physician order entry on medication errors and adverse drug events in pediatric inpatients. Pediatrics. 2003;112:506–509. doi: 10.1542/peds.112.3.506.
26. Maslove DM, Rizk N, Lowe HJ. Computerized physician order entry in the critical care environment: a review of current literature. J Intensive Care Med. 2011;26:165–171. doi: 10.1177/0885066610387984.
27. National Quality Forum. National Voluntary Consensus Standards for Emergency Care: A Consensus Report. 2009.
28. Sikka R. Pay for performance in emergency medicine. Ann Emerg Med. 2007;49:756–761. doi: 10.1016/j.annemergmed.2006.06.032.
29. Welch SJ, Asplin BR, Stone-Griffith S, et al. Emergency department operational metrics, measures and definitions: results of the second performance measures and benchmarking summit. Ann Emerg Med. 2010. doi: 10.1016/j.annemergmed.2010.08.040.
30. McCarthy ML, Zeger SL, Ding R, et al. Crowding delays treatment and lengthens emergency department length of stay, even among high-acuity patients. Ann Emerg Med. 2009;54:492–503.e4. doi: 10.1016/j.annemergmed.2009.03.006.
31. Ward MJ, Hart KW, Lindsell CJ. Operational data integrity during electronic health record implementation in the ED. Am J Emerg Med. 2013;31:1029–1033. doi: 10.1016/j.ajem.2013.03.027.
32. Fernandes CM, Daya MR, Barry S, et al. Emergency department patients who leave without seeing a physician: the Toronto Hospital experience. Ann Emerg Med. 1994;24:1092–1096. doi: 10.1016/s0196-0644(94)70238-1.
33. Polevoi SK, Quinn JV, Kramer NR. Factors associated with patients who leave without being seen. Acad Emerg Med. 2005;12:232–236. doi: 10.1197/j.aem.2004.10.029.
34. Kennedy M, MacBean CE, Brand C, et al. Review article: leaving the emergency department without being seen. Emerg Med Australas. 2008;20:306–313. doi: 10.1111/j.1742-6723.2008.01103.x.
35. Hall MF, Press I. Keys to patient satisfaction in the emergency department: results of a multiple facility study. Hosp Health Serv Adm. 1996;41:515–532.
36. Kocher KE, Meurer WJ, Desmond JS, et al. Effect of testing and treatment on emergency department length of stay using a national database. Acad Emerg Med. 2012;19:525–534. doi: 10.1111/j.1553-2712.2012.01353.x.
37. Kuperman GJ, Teich JM, Gandhi TK, et al. Patient safety and computerized medication ordering at Brigham and Women's Hospital. Jt Comm J Qual Improv. 2001;27:509–521. doi: 10.1016/s1070-3241(01)27045-x.
38. Kuperman GJ, Gibson RF. Computer physician order entry: benefits, costs, and issues. Ann Intern Med. 2003;139:31–39. doi: 10.7326/0003-4819-139-1-200307010-00010.
39. Brokel JM, Ward MM, Wakefield DS, et al. Changing patient care orders from paper to computerized provider order entry-based process. Comput Inform Nurs. 2012;30:417–425. doi: 10.1097/NXN.0b013e318251076e.
40. Raja AS, Ip IK, Prevedello LM, et al. Effect of computerized clinical decision support on the use and yield of CT pulmonary angiography in the emergency department. Radiology. 2012;262:468–474. doi: 10.1148/radiol.11110951.
41. Wu WY, Hripcsak G, Lurio J, et al. Impact of integrating public health clinical decision support alerts into electronic health records on testing for gastrointestinal illness. J Public Health Manag Pract. 2012;18:224–227. doi: 10.1097/PHH.0b013e318241555d.
