AMIA Annual Symposium Proceedings. 2015 Nov 5;2015:1909–1917.

Secondary Use of EHR Timestamp Data: Validation and Application for Workflow Optimization

Michelle R Hribar 2, Sarah Read-Brown 1, Leah Reznick 1, Lorinna Lombardi 1, Mansi Parikh 1, Thomas R Yackel 2, Michael F Chiang 1,2
PMCID: PMC4765636  PMID: 26958290

Introduction

Electronic health records (EHRs) have potential to improve the quality, efficiency, and cost of health care.1–6 The transition from traditional paper-based care to EHRs within both hospitals and ambulatory practices has been aggressively promoted by federal initiatives7,8 and is rapidly transforming the process of health care delivery throughout the United States.9–11 However, clinicians have raised concerns that EHR implementation has negatively impacted their real-world clinical productivity.12–16 For example, at Oregon Health & Science University (OHSU), we have one of the leading biomedical informatics departments in the world and completed a successful EHR implementation in 2006 that received national publicity. Yet we have published studies showing that OHSU ophthalmologists currently see 3–5% fewer patients than before EHR implementation and require >40% additional time for each patient encounter.17

Approaches toward improving the efficiency of clinical workflow using EHRs would have significant real-world impact. Clinicians are pressured to see more patients in less time for less reimbursement due to persistent concerns about the accessibility and cost of health care.18,19 Providers today are facing increased patient loads along with increased encounter times due to EHR use, but do not have guidance or information about how to meet these demands. For example, ophthalmologists typically see 15–30 patients or more in a half-day session, utilize multiple exam rooms simultaneously, work with ancillary staff (e.g., technicians, ophthalmic photographers), and examine patients in multiple stages (e.g., before and after dilation of eyes, before and after ophthalmic imaging studies). This creates enormous challenges in workflow and scheduling, and large variability in operational approaches.20

Patient wait time is a result of pressure on provider time as well as clinic inefficiency; wait time has been shown to affect patient satisfaction and to create barriers to health care.21,22 Mathematically, queueing theory explains waiting as the result of a mismatch between arrival times and service times (time with a physician).23 This mismatch can be worsened by ad-hoc scheduling protocols that artificially increase patient wait time.24,25 Addressing this mismatch with smarter scheduling strategies has potential for improving patient wait time.26 Studying and evaluating appointment scheduling strategies in clinical settings is impractical, however, since patient and provider time is too valuable for experimentation. Empirical models of clinical processes using discrete event simulation (DES) can evaluate potential scheduling strategies effectively before they are implemented in clinical settings. DES requires large amounts of workflow timing data—much more than can reasonably be collected using traditional time-motion studies. We believe that data to address these problems is currently available within the EHR. One major benefit of EHR systems is that clinical data can be applied for “secondary use” beyond the direct provision of clinical care; current efforts have focused on areas such as clinical research, public health, adverse event reporting, and quality assurance.27–29 Data mining of EHR data has been used successfully to predict patient no-shows,30 to group patients in emergency departments (EDs),31 and for quality assurance in the ED.32,33 DES has been used for quality improvement in healthcare, but not for evaluating scheduling strategies based on secondary-use EHR data and detailed workflow data.30,31,34,35
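
As a toy illustration of this queueing mechanism, the following R sketch (all parameters hypothetical, not drawn from this study) uses the classic Lindley recursion to show that waits accumulate from arrival and service variability alone, even when the average service time is shorter than the average gap between arrivals:

```r
# Lindley recursion: W[i] = max(0, W[i-1] + S[i-1] - A[i]), where A[i] is the
# i-th interarrival gap and S[i] the i-th service time. Parameters are
# illustrative only.
set.seed(1)
n <- 1000
gaps    <- rexp(n, rate = 1/15)   # patients arrive every ~15 min on average
service <- rexp(n, rate = 1/14)   # service takes ~14 min on average
wait <- numeric(n)
for (i in 2:n) {
  wait[i] <- max(0, wait[i - 1] + service[i - 1] - gaps[i])
}
mean(wait)  # substantial average wait despite mean service < mean gap
```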

In this paper, we present the results of using secondary EHR data for modeling clinical workflow in 3 outpatient ophthalmology clinics at OHSU. Ophthalmology is an ideal domain for these studies because it is a high-volume, high-complexity field that combines both medical and surgical practice. Our results show that the secondary use of EHR data for workflow timing is promising: it matches the trends of observed clinic workflows and is available for thousands of patient encounters. Further, these workflow data can be used to build simulation models for evaluating scheduling strategies based on patient classification.

Methods

This study was approved by the Institutional Review Board at Oregon Health & Science University (OHSU).

Study Environment

OHSU is a large academic medical center in Portland, Oregon. The ophthalmology department includes over 50 faculty providers, who perform over 90,000 annual outpatient examinations. The department provides primary eye care and serves as a major referral center in the Pacific Northwest and nationally. We selected 3 outpatient clinics to study: 1) pediatric ophthalmology (LR), 2) comprehensive eye care (LL), and 3) glaucoma (MP). These 3 clinics represent the diversity of outpatient care in ophthalmology at OHSU.

Over several years, an institution-wide EHR system (EpicCare; Epic Systems, Madison, WI) was implemented throughout OHSU. This vendor develops software for mid-size and large medical practices, is a market share leader among large hospitals, and has implemented its EHR systems at over 200 hospital systems in the United States. In 2006, all ophthalmologists at OHSU began using this EHR. All ambulatory practice management, clinical documentation, order entry, medication prescribing, and billing tasks are performed using components of the EHR.

Workflow Modeling and Reference Data Collection

Interviews with staff and observations of each of the three clinics were performed to determine the basic patient flow. For the three outpatient ophthalmology clinics we studied, this flow consists of interactions with ancillary staff and the physicians, along with possible dilation time between interactions. Once the workflow was understood, we performed time-motion studies for 3–6 half-days at each of the clinics. One to two observers recorded timestamps of physicians and staff as they entered and exited exam rooms; these timestamps were later processed to determine the duration of time spent in exam rooms with patients. This observational timing data served as our reference data for validation of the EHR timestamp data.

Collection of EHR Timestamp Data

Through preliminary iterative data collection and analysis, we identified a set of EHR timestamp data that represents the different steps of the patient flow for each clinic. The source for this data is the clinical data warehouse for our EHR (EpicCare; Epic Systems, Madison, WI). While these timestamps are specific to OHSU’s implementation in ophthalmology, comparable timestamps are available for other vendors, installations, and specialties.

  1. Start and End of Patient Visit: Check-in, check-out, and after-visit summary printing timestamps. For some clinics, the check-out process happens after the patient leaves; in this case, the timestamp of the after-visit summary printing better represents the end of the exam.

  2. Start and End of Each Provider Interaction: Audit log timestamps. Timestamps from the audit log can be used to represent the beginning and ending of individual provider interactions during the course of the office visit. In addition to timestamp data, this required data about providers (user IDs), exam rooms (workstation IDs) and patient encounters (encounter IDs) to select the proper context for the timestamps.

  3. Time of Dilation: Structured ophthalmology documentation form. Eye dilation information is entered in the structured ophthalmology documentation form of our EHR. From this data, we can determine if a patient’s eyes were dilated and at what time.

After obtaining the data through queries, we processed it and computed the workflow timings from EHR timestamps using scripts written in the R programming language.36
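
A hypothetical sketch of this post-processing step in R. The column names (encounter_id, user_id, workstation_id, event_time) are illustrative, not the actual data warehouse schema, and real processing would also filter audit events to exam-room workstations:

```r
library(dplyr)

# Toy audit-log extract: one tech and one physician in the same exam room.
audit_log <- data.frame(
  encounter_id   = c(101, 101, 101, 101),
  user_id        = c("tech01", "tech01", "md01", "md01"),
  workstation_id = "EXAM-3",
  event_time     = as.POSIXct(c("2015-01-05 08:00:00", "2015-01-05 08:12:00",
                                "2015-01-05 08:40:00", "2015-01-05 08:55:00"))
)

# Bracket each provider's audit events within an encounter and room to
# estimate the start, end, and duration of the interaction.
interactions <- audit_log %>%
  group_by(encounter_id, user_id, workstation_id) %>%
  summarise(start   = min(event_time),
            end     = max(event_time),
            minutes = as.numeric(difftime(max(event_time), min(event_time),
                                          units = "mins")),
            .groups = "drop")
interactions
```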

Data Analysis

For each clinic, we compared the timing data collected through observation to the timings computed from the EHR timestamps. We examined differences in interaction lengths with staff and physicians, as well as differences in overall summary statistics, between the reference timings we observed and the time estimates computed from the EHR timestamps. We also included one year of EHR timing data from each of the 3 ophthalmology physicians to compare trends with the observed clinic days, because the longer-term goal of using EHR data for workflow timings is to generate a large validated dataset.
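
A minimal sketch of this comparison with made-up timing vectors in place of the study data, binning absolute differences into the categories reported in Tables 1–3 (the exact boundary handling is our assumption):

```r
observed <- c(12, 8, 23, 15, 30, 18)  # observed interaction lengths (min), illustrative
ehr_est  <- c(13, 6, 22, 21, 29, 16)  # EHR timestamp estimates (min), illustrative
diff_min <- abs(observed - ehr_est)
bins <- cut(diff_min, breaks = c(0, 1, 3, 5, Inf), right = FALSE,
            labels = c("< 1 Min.", "1-3 Min.", "3-5 Min.", "> 5 Min."))
table(bins)             # counts per category, as in Tables 1-3
prop.table(table(bins)) # corresponding proportions
```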

Simulation Models

We used Arena simulation software37 to build a model of one clinic’s workflow, the pediatric clinic (LR), using one year of EHR timestamp data. In this model, we focused on the steps of the exam: the initial exam by ancillary staff, eye dilation, and the ophthalmology physician exam. We evaluated scheduling strategies based on patient classification; specifically, we studied the best time to schedule new patients to minimize average patient wait time.
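
Arena is a commercial package; as a rough standalone stand-in, the R sketch below simulates the Figure 1 workflow (tech exam, optional dilation, physician exam) under assumed staffing and lognormal service-time parameters, not the distributions fitted in this study:

```r
set.seed(42)
simulate_clinic <- function(n_patients = 15, slot = 15) {
  arrive    <- (seq_len(n_patients) - 1) * slot        # scheduled arrivals (min)
  tech_time <- rlnorm(n_patients, log(10), 0.4)        # initial exam by tech
  dilate    <- ifelse(runif(n_patients) < 0.5, 25, 0)  # ~50% of patients dilate
  md_time   <- rlnorm(n_patients, log(12), 0.5)        # physician exam
  tech_free <- 0; md_free <- 0
  wait <- numeric(n_patients)
  # One tech and one physician, FIFO; patients are processed in scheduled
  # order for simplicity (a full DES would re-sort by readiness time).
  for (i in seq_len(n_patients)) {
    t_start   <- max(arrive[i], tech_free)
    tech_free <- t_start + tech_time[i]
    ready     <- tech_free + dilate[i]                 # dilation is a pure delay
    m_start   <- max(ready, md_free)
    md_free   <- m_start + md_time[i]
    wait[i]   <- (t_start - arrive[i]) + (m_start - ready)
  }
  c(mean_wait = mean(wait), clinic_length = md_free)
}
rowMeans(replicate(100, simulate_clinic()))  # averages over 100 runs
```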

Results

Clinic Workflow Modeling and Reference Data Collection

To collect time-motion workflow data, we observed each of the 3 clinics for 3–6 half-days (162 clinical encounters; 263 interactions between staff and patients or between ophthalmologists and patients). For the clinics we observed, 15–20 patients were typically seen during the half-day clinic session, but ophthalmologists at OHSU may see up to 30 patients per half-day. Figure 1 summarizes the workflow of the clinics. Patients have an initial exam with an ancillary staff member—either a certified ophthalmic technician (tech) or an orthoptist. The pediatric clinic (LR) uses both types of ancillary staff; the other clinics use only techs. After the initial exam, patients’ eyes may be dilated (approximately 50% of patients), which requires an average additional wait time of 25 minutes. Finally, the physician examines the patient. We noted that the techs and orthoptists performed documentation immediately prior to and following the initial exams.

Figure 1:

Clinic Workflow. Flowchart representation of the workflow for all clinics. In the pediatric clinic (LR), patients see either an orthoptist or a certified ophthalmic technician (tech) for an initial exam; the other two clinics use only techs for the initial exam. Patients’ eyes may be dilated before the physician examines them.

Validation of EHR Timestamp Data for Workflow Timing

After extracting EHR timestamps and computing provider interaction lengths, we compared them to those from observations, as shown in Tables 1–3. Overall, the time differences at all 3 clinics were similar: 65–69% of the total EHR interaction estimates for each clinic were ≤3 minutes from the observed reference timings, with only 17–24% of the EHR interaction estimates >5 minutes from the observed reference timings. For the comprehensive eye care (LL) and glaucoma (MP) clinics, the EHR timings for physician exams were particularly close to the observed ones, but the tech exams were not. In these clinics, we observed that techs occasionally charted in exam rooms before and after seeing patients, which resulted in EHR timings that overestimated the actual interaction times. Conversely, we observed providers interacting with patients before and after using the EHR, which resulted in EHR timings that underestimated the actual interaction times.

Tables 1–3:

Time Difference between EHR Timestamps and Observed Data. For each clinic, we calculated the difference between the observed interaction time and the EHR estimate. For all clinics, 65–69% of all EHR estimates were within 3 minutes of the observed times; at least 76% of the EHR estimates at each clinic were within 5 minutes.

Table 1: Time Difference, EHR Timestamp vs. Observed Data (Pediatric-LR)

                | < 1 Min. | 1–3 Min. | 3–5 Min. | > 5 Min. | N
Orthoptist Exam | 10 (31%) | 10 (31%) | 4 (13%)  | 8 (25%)  | 32
Tech Exam       | 7 (28%)  | 7 (28%)  | 5 (20%)  | 6 (24%)  | 25
Physician Exam  | 27 (30%) | 34 (38%) | 7 (8%)   | 21 (24%) | 89
Total           | 44 (30%) | 51 (35%) | 16 (11%) | 35 (24%) | 146

Table 2: Time Difference, EHR Timestamp vs. Observed Data (Comprehensive-LL)

                | < 1 Min. | 1–3 Min. | 3–5 Min. | > 5 Min. | N
Tech Exam       | 2 (8%)   | 10 (42%) | 2 (8%)   | 10 (42%) | 24
Physician Exam  | 12 (41%) | 13 (45%) | 2 (7%)   | 2 (7%)   | 29
Total           | 14 (26%) | 23 (43%) | 4 (8%)   | 12 (23%) | 53

Table 3: Time Difference, EHR Timestamp vs. Observed Data (Glaucoma-MP)

                | < 1 Min. | 1–3 Min. | 3–5 Min. | > 5 Min. | N
Tech Exam       | 1 (5%)   | 4 (20%)  | 8 (40%)  | 7 (35%)  | 20
Physician Exam  | 23 (52%) | 16 (36%) | 1 (2%)   | 4 (9%)   | 44
Total           | 24 (38%) | 20 (31%) | 9 (14%)  | 11 (17%) | 64

Table 4:

Results of statistical analysis of differences between EHR timestamps for observed days and 1 year of EHR data. EHR timestamps are significantly different for ancillary staff when observed days are compared to 1 year of EHR data.

Difference Between Observed Timings & EHR Timestamps

Comp-LL       | Median (Min.) (Obs, 1 yr) | p Value (Obs, 1 yr) | % < 5 Min. (Obs)
Tech          | −3.21, −3.50              | 0.07, <0.01         | 58%
Physician     | 0.24, −0.10               | 0.58, 0.85          | 93%

Glaucoma-MP   | Median (Min.) (Obs, 1 yr) | p Value (Obs, 1 yr) | % < 5 Min. (Obs)
Tech          | −3.94, −3.20              | <0.01, <0.01        | 65%
Physician     | 0.44, 0.05                | 0.07, 0.84          | 91%

Pediatrics-LR | Median (Min.) (Obs, 1 yr) | p Value (Obs, 1 yr) | % < 5 Min. (Obs)
Tech          | 0.06, 5.18                | 0.94, <0.01         | 65%
Orthoptist    | 0.54, 2.87                | 0.43, <0.01         | 75%
Physician     | 1.13, 0.78                | <0.01, 0.25         | 76%

Table 5:

Average exam time and wait time (minutes). Data are displayed for 4 days of observed data, 1 year of EHR timestamp data from a single physician clinic (Pediatrics-LR), and 100 iterations of software-simulated data. Exam times for the simulated data are similar to the observed and EHR data, but there are larger differences in the wait times.

Data Source | Exam Time (Min.) (Mean ± SD) | Wait Time (Min.) (Mean ± SD)
Observed    | 18.9 ± 11.1                  | 41.2 ± 29.0
EHR         | 24.4 ± 13.0                  | 35.7 ± 26.1
Simulated   | 22.9 ± 2.6                   | 42.7 ± 13.4

To validate the use of large amounts of EHR data, we examined summary statistics for the observed timings, the EHR timings for the observed days, and the EHR data for one year for each of the 3 clinics, as shown in Figure 2. While there was often significant variance between observed data and EHR timestamp data, the general trends matched closely. The EHR estimates tended to be high for ancillary staff and low for physicians. We presume that this is due to techs performing documentation before and after exams, and due to physicians interacting with patients before and after using the EHR.

Figure 2:

Average exam times (in minutes) for staff in the 3 study clinics. Data are displayed for technicians (left), orthoptists in the pediatric clinic (center), and physicians (right), comparing interaction times from one year of EHR data, observational data, and EHR estimates for the observational data. While the estimated EHR timings differ from the observed timings, the trends are similar.

We then performed a statistical analysis comparing the EHR timestamps to the observed ones; the results are given in Table 4. Using a Wilcoxon signed rank test, we compared the observed data to their associated EHR timestamps to determine whether they differed. The results are given in the “Obs” position for the median difference and the p values in Table 4. Only two sets of timings were statistically significantly different: the tech for Glaucoma (MP) and the physician for Pediatrics (LR). Because we are interested in using the EHR timings to generate a large dataset, we also compared the observed timings to one year of EHR timing data. We used the Wilcoxon rank sum test to determine whether the 1 year of EHR timing data was significantly different from our observed timings (given in the “1 yr” position for the median difference and p values in Table 4). In this case, the test shows that the ancillary staff’s 1 year of EHR timings is significantly different from the observed timings, but the physicians’ EHR timings are not. This is consistent with the graph shown in Figure 2: the physicians’ average 1 year EHR timings are similar to the observed, but the ancillary staff’s are not. We presumed the difference was due to EHR timings including staff documentation time. However, because documentation is part of the workflow, including it in the timing data may still accurately reflect the workflow in simulation models.
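
Both tests are available in base R as wilcox.test; the vectors below are illustrative stand-ins for the study data:

```r
set.seed(3)
observed <- c(12, 8, 23, 15, 30, 18)   # observed exam lengths (min), illustrative
ehr_same <- c(13, 6, 22, 21, 29, 16)   # EHR estimates for the same encounters
ehr_year <- rlnorm(500, log(15), 0.5)  # stand-in for one year of EHR timings

wilcox.test(observed, ehr_same, paired = TRUE)  # Wilcoxon signed rank (paired)
wilcox.test(observed, ehr_year)                 # Wilcoxon rank sum (unpaired)
```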

Table 6:

Total exam times for different patient classifications for the pediatric clinic (LR). From the EHR timestamps, we calculated the total exam times for different patient groups. As the classification becomes more specific, the means and standard deviations change. For example, adult patients take more time and have greater standard deviations than children, which can skew classifications that do not consider age.

      | New Patient Exam Time (Min.) (Mean ± SD) | Returning Patient Exam Time (Min.) (Mean ± SD)
Child | 30.3 ± 15.1                              | 21.4 ± 11.7
Adult | 36.1 ± 16.9                              | 29.2 ± 14.5
All   | 31.0 ± 15.1                              | 22.0 ± 12.1

Development of Computer Based Models

To investigate the utility of the EHR timing data, we began by creating a simulation model for one clinic, the pediatric ophthalmology clinic (LR). Using the 1 year EHR dataset, we built a simulation model of the clinic workflow, ran the simulation 100 times, and measured the average total exam time (the amount of time spent with a provider) and the average wait time (Table 5). As shown, the average exam and wait times in the simulated model are similar to those we observed and those we calculated using the EHR timestamps. This suggests that simulation models using EHR timing data provide a reasonable approximation of workflows, even though the EHR timings did not match the observed ancillary staff interactions. We presume that this is because the EHR timings include documentation, which is a part of the clinic workflow.

Table 7:

Average wait times and clinic lengths for different scheduling strategies. This was based on scheduling 15 patients in a half-day clinic session, with 3 new patients (1 adult, 2 children). Scheduling new patients last results in less average wait time than scheduling them first, but it also lengthens the clinic. Scheduling them near the end (with 2 returning patients after) balances wait time and clinic length.

Strategy      | Patient Wait (Min.) (Mean ± SD) | Clinic Length (Min.) (Mean ± SD)
New First     | 45.4 ± 2.7                      | 274 ± 4.7
New Last      | 29.3 ± 2.1                      | 293 ± 5.6
New Near Last | 31.0 ± 1.9                      | 282 ± 5.2

To schedule patients more effectively, we have performed preliminary studies of patient classification, relating clinical and demographic factors to visit length using timing data from the EHR.38 Table 6 shows the mean exam time as determined by EHR timestamps for a simple patient classification: new and returning patients at the pediatric clinic (LR). When we further break these classifications down into child and adult patients, we see that the means and standard deviations change. The pediatric clinic treats adult strabismus patients who were not diagnosed or treated as children. The adult patients’ means and standard deviations are considerably larger than those of children. For example, a returning adult patient takes about as long as a new child patient; therefore, classifying patients and representing them accurately in our model is key for meaningful analysis.
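
A sketch of this kind of classification breakdown on hypothetical encounter-level data (the group proportions and timing distributions are assumptions, not the study’s):

```r
set.seed(5)
n <- 400
encounters <- data.frame(
  exam_min = rlnorm(n, log(25), 0.5),  # total exam time (min), illustrative
  status   = sample(c("New", "Returning"), n, replace = TRUE, prob = c(0.2, 0.8)),
  age      = sample(c("Child", "Adult"), n, replace = TRUE, prob = c(0.8, 0.2))
)
# Mean and SD of exam time for each classification cell, as in Table 6.
aggregate(exam_min ~ status + age, data = encounters,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```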

We have also studied a scheduling strategy based on patient classification: testing the ideal time to schedule new patients. As we see in Table 6, new patient exams are longer than returning patient exams and have greater variability. We compared different ways of scheduling new patients, evaluating the resulting wait times and clinic lengths. For each test, we scheduled 15 patients in a 4-hour half-day clinic session, 3 of whom were new patients (1 adult, 2 children). This is consistent with the mix of patients normally seen in the pediatric clinic (LR). The “New First” strategy scheduled the new patients first, while the “New Last” strategy scheduled the new patients last. The “New Near Last” strategy scheduled the new patients in the 11th, 12th, and 13th slots, with two returning patients in the last two slots. The results are summarized in Table 7, suggesting that scheduling the new patients last reduces wait time and variability (but also increases the length of the clinic), and that scheduling the new patients near the end of the clinic is the best strategy for minimizing both the average patient wait and clinic length. We note that scheduling the new patients first results in dramatically longer patient wait times. These wait times are longer than when the schedules do not take patient classification into account, as is the case in Table 5 for our observed data, EHR timing data, and initial simulation models. Our later simulation models predict that scheduling new patients last or near last will drastically improve the wait time over what we observed.
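
A standalone sketch of this scheduling experiment, simplified to a single physician queue with assumed lognormal exam-time parameters (loosely motivated by Table 6, not the study’s fitted values):

```r
set.seed(7)
sim_wait <- function(order, slot = 15) {
  n <- length(order)
  arrive  <- (seq_len(n) - 1) * slot
  service <- ifelse(order == "new",
                    rlnorm(n, log(18), 0.50),  # new patients: longer, more variable
                    rlnorm(n, log(12), 0.40))  # returning patients
  free <- 0; wait <- numeric(n)
  for (i in seq_len(n)) {
    start   <- max(arrive[i], free)  # single server, FIFO
    wait[i] <- start - arrive[i]
    free    <- start + service[i]
  }
  c(mean_wait = mean(wait), clinic_length = free)
}
ret <- rep("ret", 12)
strategies <- list(
  new_first     = c(rep("new", 3), ret),
  new_last      = c(ret, rep("new", 3)),
  new_near_last = c(ret[1:10], rep("new", 3), ret[11:12])  # new in slots 11-13
)
sapply(strategies, function(s) rowMeans(replicate(100, sim_wait(s))))
```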

Discussion

This study has the following key findings: 1) secondary use of EHR timestamp data is generally accurate for measuring patient interaction length, 2) timestamp data are less reflective of patient interaction length when EHR use does not coincide with actual clinical workflow, and 3) these data may be used for novel activities such as developing simulation models for alternative clinical scheduling and workflow strategies.

1. Secondary use of EHR timestamp data is generally accurate for measuring patient interaction length

Typically, secondary uses of EHR data have been for clinical research, quality assurance, and public health, rather than for operational purposes.27–29 While emergency departments have used EHR timing data for tracking patients and quality assurance,32,33,39,40 our study focuses on using EHR data for modeling outpatient workflows and studying scheduling strategies. Our study shows that the data needed to study workflow can be mined from the EHR and that it represents general trends of clinic workflow. Patient flow is a concern for all areas of healthcare, in both inpatient and outpatient settings.41 As patients move through the stages of their care, bottlenecks occur at points where demand for resources (providers, beds, etc.) exceeds availability. Obtaining workflow data using time-motion studies is not feasible for the large numbers of observations necessary for determining variability, and using large-scale EHR timestamp data may be an alternative approach.

2. Timestamp data are less reflective of patient interaction length when EHR use does not coincide with actual clinical workflow

Data from this study show that EHR data contain information about provider interaction time, but only when the systems are being used in a predictable workflow; otherwise, it is impossible to discern whether the remaining time is spent with a patient. We found that the EHR timestamps overestimated interaction lengths with ancillary staff when they charted in the EHR before and after the patient exam. Of the 31 ancillary staff interactions where the difference was greater than 5 minutes (Tables 1–3), 22 (71%) were overestimated by the EHR timestamps. Similarly, we found that EHR timestamps underestimated provider exam times when physicians interacted with patients before and after using the EHR during the exam. Of the 10 physician interactions that differed by more than 5 minutes, 9 were underestimated by the EHR timestamps. Likewise, statistical analysis shows that the ancillary staff’s 1 year EHR timings differ from the observed timing data; this is due to EHR documentation done prior to and following the patient interaction. Because this documentation time is a part of the workflow, the EHR timing data may still be used successfully in simulation models. In addition, we are currently investigating other automated methods for gathering workflow timing data, such as combining EHR data with timing and location tracking data gathered from indoor location services (e.g., iBeacons42). These methods have the potential to generate more complete and accurate timing data than EHR timestamps alone.

3. These data may be used for novel activities such as developing simulation models for alternative clinical scheduling and workflow strategies

Extracting workflow timing data from the EHR for large numbers of patient encounters allows us to determine trends for average times and variability, determine probability distributions for the workflow timings, and build simulation models based on these distributions. Our initial simulation model accurately predicts the wait time resulting from the variability and patterns of patient arrivals and provider exam times. In addition, the EHR data allow us to classify patients; in this case, we examined adult vs. child patients as well as new vs. returning patients for a pediatric ophthalmology clinic. We found that adult patients and new patients had exam times that were longer and more variable. The simulation models suggest that scheduling these patients later in the clinic helps to minimize the average patient wait time; this is consistent with previous studies that found that high-variability patients are better scheduled at the end of the clinic.43
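
For example, input distributions for such models can be fitted with MASS::fitdistr; the lognormal family and the stand-in data below are assumptions for illustration, not the study’s fitted results:

```r
library(MASS)
set.seed(9)
exam_min <- rlnorm(1000, meanlog = log(22), sdlog = 0.45)  # stand-in for EHR timings
fit <- fitdistr(exam_min, "lognormal")
fit$estimate  # meanlog and sdlog to parameterize simulated exam times
```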

Limitations

There are several limitations to our study. First, in order to find the necessary data in the EHR, we had to discern when the provider was using the EHR during a patient interaction. At OHSU, we have uniquely named workstations in each exam room, which makes identifying patient interactions easier. If laptops were used, it would be much more difficult to determine when a provider was with a patient versus charting in an office. Second, the EHR timestamps do not always capture time spent with patients when the provider is not using the EHR. While we can determine what times the providers are not using the EHR, we cannot pinpoint what they are doing at those times.

Conclusion and Future Directions

The secondary use of EHR data for workflow models and optimization is promising. We have shown that EHR timestamps represent the trends of workflow timing data even if they do not capture all the details of the workflow. Having workflow timing data available for all patient encounters allows for creating simulation models for testing different clinic scheduling strategies. However, because EHR data do not accurately capture the entire workflow, we are exploring ways to automatically collect complete workflow data using tracking devices and indoor location services (e.g., iBeacons42). From these complete data, we hope to build more accurate simulation models for evaluating scheduling strategies for all 3 clinics, as well as to generalize these models for other outpatient clinics. We plan to validate our simulation results by implementing the new scheduling strategies in the clinics and monitoring the effects on patient wait time. Larger studies are needed to validate this approach for general use, but the secondary use of EHR timestamp data has implications for broadening the use of EHRs from a repository of clinical data toward a holistic tool for managing clinical workflow.

References

1. Committee on Quality of Health Care in America, Institute of Medicine. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Kassirer JP. The next transformation in the delivery of health care. N Engl J Med. 1995;332(1):52–4. doi: 10.1056/NEJM199501053320110.
3. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765. doi: 10.1136/bmj.38398.500764.8F.
4. Buntin M, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominately positive results. Health Aff (Millwood). 2011;30(3):464–71. doi: 10.1377/hlthaff.2011.0178.
5. Committee on Improving the Patient Record, Division of Health Care Services, Institute of Medicine. The Computer-Based Patient Record: An Essential Technology for Health Care. Revised Edition. Washington, DC: National Academy Press; 1997. pp. 45–46.
6. Buntin MB, Jain SH, Blumenthal D. Health information technology: laying the infrastructure for national health reform. Health Aff. 2010;29(6):1214–9. doi: 10.1377/hlthaff.2010.0503.
7. Blumenthal D. Implementation of the federal health information technology initiative. N Engl J Med. 2011;365(25):2426–31. doi: 10.1056/NEJMsr1112158.
8. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med. 2010;363(6):501–4. doi: 10.1056/NEJMp1006114.
9. Shea S, Hripcsak G. Accelerating the use of electronic health records in physician practices. N Engl J Med. 2010;362(3):192–5. doi: 10.1056/NEJMp0910140.
10. Chiang MF, Boland MV, Margolis JW, Lum F, Abramoff MD, Hildebrand PL. Adoption and perceptions of electronic health record systems by ophthalmologists: an American Academy of Ophthalmology survey. Ophthalmology. 2008;115(9):1591–7. doi: 10.1016/j.ophtha.2008.03.024.
11. Boland MV, Chiang MF, Lim MC, Wedemeyer L, Epley KD, McCannel CA. Adoption of electronic health records and preparations for demonstrating meaningful use: an American Academy of Ophthalmology survey. Ophthalmology. 2013;120(8):1702–10. doi: 10.1016/j.ophtha.2013.04.029.
12. Miller R, Sim I. Physicians’ use of electronic medical records: barriers and solutions. Health Aff. 2004;23(2):116–26. doi: 10.1377/hlthaff.23.2.116.
13. Gans D, Kralewski J, Hammons T, Dowd B. Medical groups’ adoption of electronic health records and information systems. Health Aff. 2005;24(5):1323–33. doi: 10.1377/hlthaff.24.5.1323.
14. Loomis G, Ries J, Saywell RJ, Thakker NR. If electronic medical records are so great, why aren’t family physicians using them? J Fam Pract. 2002;51(7):636–41.
15. Sanders DS, Lattin DJ, Read-Brown S, Tu DC, Wilson D, Hwang TS. Electronic health record systems in ophthalmology: impact on clinical documentation. Ophthalmology. 2013;120(9):1745–55. doi: 10.1016/j.ophtha.2013.02.017.
16. Verdon DR. Medical Economics EHR survey probes physician angst about adoption, use of technology [Internet]. Medical Economics; 2014 [cited 2015 Feb 3]. Available from: http://medicaleconomics.modernmedicine.com/medical-economics/content/tags/ehr/slideshow-medical-economics-ehr-survey-probes-physician-angst-abo.
17. Chiang MF, Read-Brown S, Tu DC, Choi D, Sanders DS, Hwang TS. Evaluation of electronic health record implementation in ophthalmology at an academic medical center (an American Ophthalmological Society thesis). Trans Am Ophthalmol Soc. 2013;111:34–56.
18. Blumenthal D, Collins S. Health care coverage under the Affordable Care Act: a progress report. N Engl J Med. 2014;371:275–81. doi: 10.1056/NEJMhpr1405667.
19. Hu P, Reuben DB. Effects of managed care on the length of time that elderly patients spend with physicians during ambulatory visits: National Ambulatory Medical Care Survey. Med Care. 2002 Jul;40(7):606–13. doi: 10.1097/00005650-200207000-00007.
20. Chiang MF, Boland MV, Brewer A, Epley KD, Horton MB, Lim MC. Special requirements for electronic health record systems in ophthalmology. Ophthalmology. 2011;118(8):1681–7. doi: 10.1016/j.ophtha.2011.04.015.
21. McMullen M, Netland PA. Wait time as a driver of overall patient satisfaction in an ophthalmology clinic. Clin Ophthalmol. 2013:1655–60. doi: 10.2147/OPTH.S49382.
22. Lee BW, Murakami Y, Duncan MT, Kao AA, Huang J-Y, Lin S, Singh K. Patient-related and system-related barriers to glaucoma follow-up in a county hospital population. Invest Ophthalmol Vis Sci. 2013:6542–8. doi: 10.1167/iovs.13-12108.
23. Kleinrock L. Queueing Systems, Volume 1: Theory. New York: John Wiley & Sons; 1975.
24. NHS Institute for Innovation and Improvement. Quality and Service Improvement Tools: Variation – An Overview [Internet]. NHS Institute for Innovation and Improvement; 2008 [cited 2015 Feb 2]. Available from: http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/variation_-_an_overview.html.
25. Litvak E, Buerhaus PI, Davidoff F, Long MC, McManus ML, Berwick DM. Managing unnecessary variability in patient demand to reduce nursing stress and improve patient safety. Jt Comm J Qual Patient Saf. 31(6):330–8. doi: 10.1016/s1553-7250(05)31044-0.
26. Gupta D, Denton B. Appointment scheduling in health care: challenges and opportunities. IIE Trans. 2008:800–19.
27. Sandhu E, Weinstein S, McKethan A, Jain SH. Secondary uses of electronic health record data: benefits and barriers. Jt Comm J Qual Patient Saf. 2012;38(1):34–40. doi: 10.1016/s1553-7250(12)38005-7.
28. Linder JA, Haas JS, Iyer A, Labuzzetta MA, Ibara M, Celeste M, Getty G, Bates DW. Secondary use of electronic health record data: spontaneous triggered adverse drug event reporting. Pharmacoepidemiol Drug Saf. 2010;19(12):1211–5. doi: 10.1002/pds.2027.
29. Hersh WR. Adding value to the electronic health record through secondary use of data for quality assurance, research, and surveillance. Am J Manag Care. 2007;13(6 part 1):277–8.
30. Glowacka KJ, Henry RM, May JH. A hybrid data mining/simulation approach for modelling outpatient no-shows in clinic scheduling. J Oper Res Soc. 2009:1056–68.
31. Ceglowski R, Churilov L, Wasserthiel J. Combining data-mining and discrete event simulation for a value-added view of a hospital emergency department. J Oper Res Soc. 2007:246–54.
32. Gordon BD, Flottemesch TJ, Asplin BR. Accuracy of staff-initiated emergency department tracking system timestamps in identifying actual event times. Ann Emerg Med. 2008 Nov;52(5):504–11. doi: 10.1016/j.annemergmed.2007.11.036.
33. Boger E. Electronic tracking board reduces ED patient length of stay at Indiana hospital. J Emerg Nurs. 2003 Feb;29(1):39–43. doi: 10.1067/men.2003.13.
34. Rutberg MH, Wenczel S, Devaney J, Goldlust EJ, Day TE. Incorporating discrete event simulation into quality improvement efforts in health care systems. Am J Med Qual. 2013;XX(X):1–5. doi: 10.1177/1062860613512863.
35. Hung GR, Whitehouse SR, O’Neill C, Gray AP, Kissoon N. Computer modeling of patient flow in a pediatric emergency department using discrete event simulation. Pediatr Emerg Care. 2007;23(1):5–10. doi: 10.1097/PEC.0b013e31802c611e.
36. R Core Team. R: A Language and Environment for Statistical Computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing; 2014. Available from: http://www.R-project.org.
37. Arena Simulation Software [Internet]. 2015 [cited 2015 Feb 3]. Available from: https://www.arenasimulation.com/.
38. Aaker G, Read-Brown S, Sanders D, Hribar MR, Reznick L, Yackel TR, Chiang MF. Identification of factors leading to increased pediatric ophthalmology visit times using electronic health record data. Chicago, IL: American Academy of Ophthalmology; 2014.
39. Wiler JL, Gentle C, Halfpenny JM, Heins A, Mehrotra A, Mikhail MG, Fite D. Optimizing emergency department front-end operations. Ann Emerg Med. 2010 Feb;55(2):142–60.e1. doi: 10.1016/j.annemergmed.2009.05.021.
40. Eitel DR, Rudkin SE, Malvehy MA, Killeen JP, Pines JM. Improving service quality by understanding emergency department flow: a white paper and position statement prepared for the American Academy of Emergency Medicine. J Emerg Med. 2010 Jan;38(1):70–9. doi: 10.1016/j.jemermed.2008.03.038.
41. Hall RW. Patient flow: the new queueing theory for healthcare. OR/MS Today [Internet]. 2006 Jun. Available from: http://www.orms-today.org/orms-6-06/patientflow.html.
42. Gimbal Proximity-Based Mobile Engagement Platform [Internet]. [cited 2015 Feb 3]. Available from: http://gimbal.com/.
43. Rohleder TR, Klassen KJ. Using client-variance information to improve dynamic appointment scheduling performance. Omega. 2000:293–302.
