Abstract
Introduction:
Understanding and managing clinician workload is important for clinician (nurses, physicians, and advanced practice providers) occupational health as well as patient safety. Efforts have been made to develop strategies for managing clinician workload by improving patient assignment. The goal of the current study is to use electronic health record (EHR) data to predict the amount of work that individual patients contribute to clinician workload (patient-related workload).
Methods:
One month of EHR data was retrieved from an emergency department (ED). A list of indicators of workload and five potential workload proxies were extracted from the data. Linear regression and four machine learning classification algorithms were utilized to model the relationship between the indicators and the proxies.
Results:
Linear regression showed that the indicators explained a substantial amount of variance in the proxies (four out of five proxies were modeled with R2 > .80). Classification algorithms also showed success in classifying patients as having high or low task demand based on data from early in the ED visit (e.g., 80% accurate binary classification with data from the first hour).
Conclusion:
The main contribution of this study is demonstrating the potential of using EHR data to predict patient-related workload automatically in the ED. The predicted workload can potentially help in managing clinician workload by supporting the decisions regarding assigning new patients to providers. Future work should focus on identifying the relation between workload proxies and actual workload, as well as improving prediction performance of regression and multi-class classification.
Keywords: Electronic Health Record, Workload, Emergency Department, Machine Learning
1. Introduction
High risk jobs are often characterized by complex cognitive demands, and at times, high levels of workload. Excess workload can result in human performance issues such as errors. Therefore, the ability to anticipate and manage the workload of healthcare workers is important both from the perspective of enhancing the wellbeing of healthcare providers and maintaining patient safety.
Emergency departments (ED) are characterized by high acuity patients, intense time pressures, and inconsistent patient arrivals. It is important to understand providers’ current workload, as well as their capacity to effectively take on additional work. However, EDs do not share a consistent strategy for making equitable patient assignments. Nurse-led triage using the five-level Emergency Severity Index (ESI) is by far the most commonly used system in the U.S. (McHugh et al., 2012). However, the ESI is primarily based on the acuity of the patient, as well as the resources (diagnostic tests, procedures, and therapeutic treatments) needed. It does not predict complexity and treatment time (Saghafian et al., 2014), or the amount of work that patients contribute to the workload of clinicians (nurses, physicians and advanced practice providers).
Efforts have been made in developing different strategies for managing clinician workload by improving patient flow such as placing an advanced provider at triage, streaming patients through a “fast track”, telemedical triage, and complexity-based triage (Jarvis, 2016; Traub et al., 2016). In a recent study, Benda et al. (2018) examined the feasibility of designing a visual representation of patient-related drivers of clinician workload and integrating it into a live EHR to improve patient assignment. A display was designed to visualize the predicted amount of clinician workload contributed by individual patients (hereinafter referred to as “patient-related workload”), and allowed comparison across multiple clinicians’ workload. The display was driven by an algorithm that predicted patient-related workload based on a combination of ESI score and number of orders placed in the EHR. While the visualization was evaluated positively, the underlying algorithm driving the display was found to be a poor indicator of workload.
The current study builds on Benda et al. (2018) and explores the potential of using EHR data to improve ED efficiency. Much has been documented on the impact of EHR systems in the ED. Noblin et al. (2013) found that the EHR system improved completeness of documentation and internal communication between providers. Knepper et al. (2018) found that patients with a prior EHR record had reduced use of CT scans. Ben-Assuli (2015) and Mullins et al. (2020) reviewed the EHR’s impact on ED performance and found evidence for both positive and negative effects on ED efficiency, but both suggested that the EHR has great potential for improving ED efficiency when implemented appropriately. Kannampallil et al. (2018) analyzed EHR log files of physician activities and identified potential associations between a physician’s EHR-based activities and the overall efficiency of the ED.
The current study focuses on modeling patient-related workload using EHR data, with an aim toward providing a better indicator of clinician workload. Both regression and machine learning classification algorithms were tested on a dataset collected from an academic, urban hospital ED. The specific aim of the current study is to model the relationship between a set of indicators of patient-related workload and proxies of workload that are available in the EHR system, and explore the potential of providing early prediction to support better patient assignment in the ED.
2. Methods
Our modeling approach consists of three phases: 1) identify a list of indicators of workload (input/independent variables) that could be used to model provider workload; 2) identify a list of proxies of workload (output/dependent variables) that could be used to ‘stand in’ for actual provider workload, since it is not possible to directly measure workload; 3) model the relationship between the indicators and proxies. The selection criterion for both indicators and proxies of workload was that the information should be available as structured data recorded in the EHR to enable automatic prediction. Figure 1 shows the relationship between indicators of workload, the model, the proxies of workload, and actual workload. One limitation of our approach is the gap between workload proxies and actual workload. Future work around this gap is addressed in the Discussion section.
Figure 1.

Using Indicators of Workload to Model Proxies of Workload
Potential indicators of workload were identified based on our previous work and expert knowledge. In our previous study (Benda et al., 2018), a literature review was conducted on indicators of workload. Existing research on critically ill patients has used clinical parameters that need to be measured over time, or are not practical when a patient first arrives in the ED (e.g., 24-hour urine output to measure patient fluid status). Other indicators rely largely on subjective estimates and are generally not available in the EHR (e.g., psychosocial needs, ability to care for themselves). In the current study, two ED expert clinicians created a list of indicators that, based on their experience, are associated with patients with high task demand. An additional criterion for indicator selection was that the information should be available during the early phase of the patient visit. The list included both general information, such as the number of lab tests ordered, and specific information, such as whether a ventilator was used. For a detailed list of indicators see (Wang et al., 2019).
Through discussion with subject matter experts, five proxies of workload were identified as potentially reflecting provider workload associated with a patient. As workload proxies, Length of Stay, Number of Orders, and Number of Events (e.g., medication administrations, diagnostic laboratory tests and imaging) reflect the total amount of work activities generated by a patient. Density of Orders and Density of Events (calculated as Number of Orders/Events divided by Length of Stay) reflect frequency of demands, an indicator of work tempo and potentially of work complexity. Length of Stay was calculated from the earliest point at which the patient presented to the ED (registration in the EHR) to the point in time the patient physically left the ED (e.g., admission to the hospital, transfer, or discharge home). This includes any time the patient spent waiting for a hospital bed or for transportation from the ED, which could last hours or even days if the hospital was boarding patients in the ED.
The study was approved by the cognizant institutional review board. One month of de-identified patient visit records was retrieved from a U.S. urban academic tertiary care hospital-based emergency department with an estimated annual census of 90,000 patient visits. There were 5,532 patient visit records in total; 78 records were excluded because the departure time was earlier than the arrival time. Since linear regression is sensitive to outliers, 27 records with a particularly long Length of Stay (longer than 50 hours) were also excluded. Indicators and proxies were extracted or calculated from the remaining 5,427 patient visits.
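As a sketch of the preprocessing described above, the exclusion rules and proxy calculations can be expressed in a few lines of Python. The record fields (`arrival`, `departure`, `n_orders`, `n_events`) are hypothetical stand-ins, since the actual EHR schema is not described here.

```python
from datetime import datetime

# Hypothetical visit records; the real EHR schema is not specified in this study.
visits = [
    {"arrival": datetime(2018, 1, 1, 8, 0), "departure": datetime(2018, 1, 1, 12, 0),
     "n_orders": 24, "n_events": 120},
    # departure earlier than arrival -> excluded
    {"arrival": datetime(2018, 1, 1, 9, 0), "departure": datetime(2018, 1, 1, 8, 30),
     "n_orders": 5, "n_events": 10},
    # length of stay over 50 hours -> excluded as an outlier
    {"arrival": datetime(2018, 1, 1, 10, 0), "departure": datetime(2018, 1, 4, 10, 0),
     "n_orders": 300, "n_events": 900},
]

def los_minutes(v):
    """Length of Stay: arrival (EHR registration) to physical departure, in minutes."""
    return (v["departure"] - v["arrival"]).total_seconds() / 60

# Apply the two exclusion rules: non-positive stays and stays over 50 hours.
clean = [v for v in visits if 0 < los_minutes(v) <= 50 * 60]

# Derive the five workload proxies for each remaining visit.
proxies = [{
    "length_of_stay": los_minutes(v),
    "n_orders": v["n_orders"],
    "n_events": v["n_events"],
    "order_density": v["n_orders"] / los_minutes(v),   # orders per minute
    "event_density": v["n_events"] / los_minutes(v),   # events per minute
} for v in clean]
```

Only the first visit survives both exclusion rules in this toy example; its densities are its counts divided by a 240-minute stay.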
We took a sequential modeling approach towards dynamic, real-time prediction of workload by updating the predicted workload throughout the patient visit as the care plan and resulting workload changed. As a first step we included data over the entire ED visit in the modeling to determine whether the indicators had any utility in modeling workload proxies. Then, we used only the first one and then two hours of data to test whether it was feasible to provide a workload prediction in the early stages of a visit.
Both regression and classification algorithms were applied in the modeling. Linear regression models were fitted using SPSS Statistics 19. Four supervised machine learning classification algorithms (Logistic Regression, k-Nearest Neighbor, Support Vector Machine, and Random Forest) were trained using the Python package scikit-learn. The algorithms were chosen from among the most popular algorithms representing different paradigms of supervised learning (Tan et al., 2005). Specifically, k-Nearest Neighbor is an instance-based algorithm that predicts by looking at known similar instances. Logistic Regression is a linear statistical algorithm. Support Vector Machine is a well-known non-probabilistic, black-box algorithm. Random Forest is representative of tree-based algorithms.
Because all proxies are continuous variables, they were first split separately into two, three, or five equal-frequency classes for classification. For example, with two classes, proxies were split at the median, and with five classes, at the 20th, 40th, 60th, and 80th percentiles. In practice, for the estimation of Length of Stay, linear regression predicts how many minutes the patient will stay, while binary classification predicts whether the stay will be shorter or longer than the median. Finally, 80% of the data were randomly selected as the training set and the rest were used as the testing set.
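The quantile split and the 80/20 partition might be sketched as follows; the data are synthetic and the helper names are ours, not the study's code.

```python
import random
import statistics

random.seed(0)

# Synthetic stand-in for a continuous proxy such as Length of Stay (minutes).
los = [random.uniform(4, 3000) for _ in range(1000)]

def quantile_labels(values, n_classes):
    """Split a continuous proxy into n_classes equal-frequency classes.

    With two classes the single cut point is the median; with five classes
    the cuts fall at the 20th/40th/60th/80th percentiles, as in the paper.
    """
    cuts = statistics.quantiles(values, n=n_classes)  # n_classes - 1 cut points
    return [sum(v > c for c in cuts) for v in values]  # class index 0..n_classes-1

labels2 = quantile_labels(los, 2)
labels5 = quantile_labels(los, 5)

# 80/20 random split into training and testing sets.
idx = list(range(len(los)))
random.shuffle(idx)
split = int(0.8 * len(idx))
train_idx, test_idx = idx[:split], idx[split:]
```

Because the cuts are percentiles of the data itself, each class holds roughly the same number of visits, which keeps the classification task balanced.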
3. Results
Table 1 shows descriptive statistics of the five proxies (N = 5,427 visits). Density of Events/Orders is the Number of Events/Orders per minute; for example, the median Density of Orders is 3.45 orders per minute. Number and Density of Orders were high because Number of Orders includes both orders entered by physicians and orders generated automatically by the EHR system.
Table 1.
Proxies of Workload
| | Minimum | Maximum | Mean | Median | Standard Deviation |
|---|---|---|---|---|---|
| Length of Stay (min) | 4.00 | 2792.00 | 407.97 | 298.00 | 379.88 |
| Number of Events | 2.00 | 2652.00 | 289.08 | 233.00 | 215.82 |
| Number of Orders | 1.00 | 3127.00 | 73.58 | 15.00 | 168.83 |
| Density of Events | 1.44 | 480.00 | 54.47 | 46.36 | 35.15 |
| Density of Orders | 0.28 | 1165.34 | 10.97 | 3.45 | 35.79 |
Table 2 shows the correlations between the five proxies. All proxies were significantly correlated with each other (p < 0.001) except Length of Stay and Density of Orders (p = 0.741).
Table 2.
Correlation Between the Proxies
| | | Length of Stay | Number of Events | Number of Orders | Density of Events | Density of Orders |
|---|---|---|---|---|---|---|
| Length of Stay | r | 1 | | | | |
| Number of Events | r | .734** | 1 | | | |
| Number of Orders | r | .294** | .510** | 1 | | |
| Density of Events | r | −.365** | .090** | .132** | 1 | |
| Density of Orders | r | −.004+ | .210** | .774** | .288** | 1 |
** p < 0.001
+ p > 0.1
Table 3 provides descriptive statistics of the continuous variables we considered as potential predictive indicators of workload, across the whole visit.
Table 3.
Continuous Variables used as Workload Indicators (Whole Visit)
| | Minimum | Maximum | Mean | Standard Deviation |
|---|---|---|---|---|
| Number of Vital Signs Recorded | 0 | 491 | 28.79 | 31.978 |
| Number of Medications Administered | 0 | 186 | 3.49 | 7.591 |
| Number of Pharmacy Orders | 0 | 651 | 18.14 | 39.286 |
| Number of Injection Medication Orders | 0 | 132 | 3.77 | 10.424 |
| Number of Lab Test Orders | 0 | 864 | 17.48 | 42.851 |
| Number of Disposition Orders | 0 | 12 | 1.63 | 1.360 |
| Number of Patient Refusals | 0 | 29 | 0.10 | 0.757 |
Table 4 shows descriptive statistics of the categorical variables considered as indicators of workload, across the whole visit. These categorical variables were selected because they indicate treatment plans with high demands on staff. When an indicator appears in a patient visit record, it is labeled 1; otherwise it is assigned 0. Table 4 shows the number of visits that included each indicator; for example, among the 5,427 patient visits, 167 had a blood transfusion.
Table 4.
Categorical Variables in the Workload Indicators (Whole Visit)
| | Frequency a | Percentage b |
|---|---|---|
| Order for a specific lab test | ||
| Blood Transfusion | 167 | 3.1 |
| Lactic Acid | 229 | 4.2 |
| Order for specific imaging | ||
| Angiogram | 228 | 4.2 |
| Xray to confirm line/tube placement | 25 | 0.5 |
| Order for specific medications | ||
| Haloperidol | 97 | 1.8 |
| LORazepam | 215 | 4.0 |
| OLANZapine | 39 | 0.7 |
| Morphine | 288 | 5.3 |
| Etomidate | 99 | 1.8 |
| Succinylcholine | 108 | 2.0 |
| Other event | ||
| Presence of a physician order for “restraints” | 381 | 7.0 |
| Presence/use of ventilator | 243 | 4.5 |
a Number of patient visits that included the order or event
b Percentage of patient visits that included the order or event (N = 5,427)
Potential workload indicators in Table 3 and Table 4 were used together to predict workload proxies.
Figure 2 shows the R2 of the linear regression models for each of the five proxies, using indicator data from the first hour, the first two hours, and the whole visit separately. All models except that for the proxy Density of Events showed high R2 (> .80) with whole-visit data, indicating that the workload indicators collectively explained a substantial amount of variance in the proxies. However, R2 was low when using only the data from the first one or two hours of the visit, indicating low efficacy of linear regression for early prediction of workload. Thus, instead of trying to predict exact values for the proxies, we proceeded with classification algorithms.
Figure 2.

R-squared values of Linear Regression Model (The R-squared of Number of Orders with whole visit data is 1.00 because Number of Orders was also used as an indicator of workload)
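For readers who want to reproduce the kind of R2 reported above, a minimal ordinary-least-squares sketch on synthetic indicator data follows. The paper's models were fitted in SPSS; this NumPy version is only an illustration, and the coefficients and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic indicators (e.g. counts of labs, meds, vitals) and a proxy that is
# largely a linear function of them, as a stand-in for the real EHR data.
X = rng.poisson(5, size=(500, 3)).astype(float)
y = 10 * X[:, 0] + 4 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, 500)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R^2: share of variance in the proxy explained by the indicators.
resid = y - X1 @ beta
r2 = 1 - resid.var() / y.var()
```

With the noise level chosen here the indicators explain well over 80% of the variance, mirroring the whole-visit models in Figure 2; shrinking the signal coefficients relative to the noise reproduces the low early-visit R2.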
Table 5 shows the classification accuracy of Random Forest, Logistic Regression, k-Nearest Neighbor, and Support Vector Machine. We started by training binary (2-class) classification models for each of the workload proxies, using indicators for the entire visit and for the first hour. Among the four algorithms, Random Forest and Support Vector Machine showed better prediction performance. We proceeded with these two algorithms for the remaining model training, with Random Forest showing the best prediction performance.
Table 5.
Binary Classification Accuracy of the Four Algorithms
| | | Random Forest | Logistic Regression | k-Nearest Neighbor | Support Vector Machine |
|---|---|---|---|---|---|
| Length of Stay | whole visit | 83.20% | 80.10% | 80.10% | 80.50% |
| | 1 hour | 70.40% | 68.80% | 59.60% | 68.50% |
| Number of Events | whole visit | 91.20% | 89.10% | 91.20% | 90.10% |
| | 1 hour | 78.50% | 77.50% | 79.70% | 77.60% |
| Number of Orders | whole visit | 95.40% | 95.90% | 95.30% | 95.90% |
| | 1 hour | 80.00% | 78.80% | 78.90% | 78.70% |
| Density of Events | whole visit | 62.40% | 51.90% | 59.70% | 52.90% |
| | 1 hour | 59.80% | 60.90% | 57.81% | 61.00% |
| Density of Orders | whole visit | 80.70% | 81.30% | 79.20% | 79.60% |
| | 1 hour | 75.30% | 75.30% | 74.30% | 75.70% |
Figure 3 shows the classification accuracy (percentage of correct classifications) of the Random Forest models for each of the workload proxies, using indicators from the first hour, the first two hours, and the entire visit. For example, using the first hour’s data, the model can predict whether the total Number of Events (for the entire visit) will be higher or lower than the median with an accuracy of 78.5%.
Figure 3.

Classification Accuracy of the Random Forest Models
As with linear regression, the prediction accuracy of Random Forest is lower when using data from a smaller portion of the visit than from the entire visit, but the difference is not as dramatic as with linear regression. For example, the binary classification accuracy for the proxy Length of Stay is 70.4%, 73.4%, and 83.2% (first hour, first two hours, and whole visit, respectively), whereas the corresponding R2 values of the linear regression models are 0.13, 0.21, and 0.80. The one-hour, two-hour, and whole-visit results for all five proxies consistently indicate that if the prediction is updated as the patient visit proceeds, its accuracy increases.
Prediction accuracy is also lower when the proxies are classified into more classes. For example, the classification accuracy using first-hour data for Length of Stay is 70.4%, 50.1%, and 33.4% with two, three, and five classes, respectively.
Figure 4 shows another performance metric for the Random Forest models, recall: the proportion of actual positives (the class with higher workload) that were identified correctly. Recall showed patterns similar to accuracy.
Figure 4.

Classification Recall of the Random Forest Models
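Recall, as used in Figure 4, can be computed directly from true and predicted class labels; this toy sketch uses made-up labels, with class 1 standing for the high-workload class.

```python
def recall(y_true, y_pred, positive=1):
    """Proportion of actual positives (high-workload visits) that the model flagged."""
    true_pos = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return true_pos / actual_pos if actual_pos else 0.0

# Toy example: 4 visits are actually high-workload, the model catches 3 of them.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
result = recall(y_true, y_pred)  # 3/4 = 0.75
```

Unlike accuracy, recall is insensitive to how many low-workload visits are classified correctly, which is why it is reported separately.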
The modeling with random forest was most successful using Number of Orders as the workload proxy, followed by Number of Events and Length of Stay. Density of Events again showed weaker modeling performance.
4. Discussion
The main contribution of this study is demonstrating the potential of using EHR data to predict patient-related workload during an ED visit. We identified a list of indicators and proxies of workload that are available from the EHR data, and modeled the relationship between them. Using data from the entire patient visit, we modeled five proxies of workload using linear regression, with four out of five modeled with R2 > .80. Classification algorithms were also used and could classify patients as having high or low workload. In fact, the binary classification with random forest algorithm had 80% accuracy using data just from the first hour of the visit. It is unclear yet whether the current prediction accuracy is sufficient, and if not, what accuracy is necessary for the predictions to be practically useful. Future work should test the practical efficacy of the models by applying them in ED operations or through simulation techniques in operations research.
Comparing the classification performance of Random Forest using data from the first hour, the first two hours, and the entire visit, we found that accuracy increases with more data, a promising trend suggesting that the prediction can be updated to become more accurate over the course of the patient’s visit. It is worth noting that the one- and two-hour cut-offs were chosen arbitrarily to investigate whether prediction performance improves with more data. Some visits lasted less than one hour, making it meaningless to ‘predict’ at one hour; in other visits, the patient might have stayed in the waiting room, so no measurable clinical care occurred to feed into the models. When applying such models in practice, a starting point for meaningful prediction should be decided based on the operational process of the ED. Potential starting points include when a physician assigns themselves to a patient, when the patient is assigned to a bed/room, and when diagnostic testing is viewed and acted upon with further orders being placed.
Although we found no similar work on predicting the four order/event-based proxies, the idea of predicting ED length of stay for individual patients is not new. For example, Yoon et al. (2003) conducted a retrospective review and found that diagnostic imaging and laboratory tests were associated with prolonged stays. Ding et al. (2009) used quantile regression to predict ED waiting time, treatment time, and boarding time. Rahman et al. (2020) used a decision tree to predict whether ED length of stay would be greater than four hours and achieved 85% accuracy. The purpose of those studies, however, was to reduce prolonged stays and improve ED throughput.
One of the implications of our study is that it is possible to develop algorithms that are predictive of patient-related workload. Such algorithms can be used to assess and manage provider workload. With the predicted amount of workload contributed by individual patients, the total amount of workload assigned to each clinician can be visualized in a management tool (as demonstrated by Benda et al., 2018). This information could be used to assign new patients to clinicians in a way that better balances workload, or to identify clinicians who are overloaded. The workload display designed by Benda et al. (2018) was novel and positively received, but it lacked a reliable algorithm to predict workload. The current predictive models could be integrated into the display to make it ready for use. The display could then be tested in ED operations to verify whether it supports better patient assignment.
One limitation of the current study is the gap between the workload proxies we modeled and actual workload. Future research should investigate how the workload proxies align with clinicians’ assessments of actual workload (y′ and y in Figure 1). For example, a set of patient visit profiles could be retrieved from the EHR, and expert clinicians could be invited to review the data retrospectively and provide estimates of workload based on subject matter expertise. Measuring actual workload would require direct interaction with clinical team members in near-real-time during clinical work, which can be challenging during clinical responsibilities and may place an additional workload burden on the research subjects. Recent studies have shown the potential of using wearable physiological sensors to measure clinician workload unobtrusively in real time (Wu et al., 2020; Zhou et al., 2020). Future work can focus on comparing the validity of different measurement approaches.
There is also a gap between the patient-related workload that we modeled and clinicians’ perceived workload, or ED flow. As described in our previous work (Wang et al., 2019) and in Figure 5, the current paper uses cues about each patient (x) to model proxies that to some extent reflect the amount of work a patient brings to the ED (y). Patient-related workload (y) may collectively indicate a clinician’s perceived workload (z), which may in turn collectively indicate the workload of the whole ED. However, additional modeling is needed when moving from patient-related workload to clinician perceived workload, and eventually to workload management of the whole ED. Beyond the complexity of care of the patients assigned to a clinician, other factors also affect clinicians’ perceived workload, including clinician type, experience, and psychosocial work factors (Schneider & Weigl, 2018), which may or may not be available in the EHR data.
Figure 5.

From Patient-related Workload to Clinician Perceived Workload and ED Flow
To our knowledge, our study is the first to break workload prediction into these three stages. Some recent studies have used EHR data to predict clinician perceived workload or ED flow directly. For example, Clopton and Hyrkäs (2020) used EHR data to model perceived nursing workload (ED crowding) collected with a rating instrument. Other studies investigated the potential of using EHR data to predict ED patient arrivals (Carvalho-Silva et al., 2018), crowding (Rauch et al., 2019; Wiler et al., 2011), or revisits (Ben-Assuli & Vest, 2020; Vest & Ben-Assuli, 2019).
It is possible that adding patient demographic data (age, gender, history, initial vital signs) to the indicators of workload could improve model prediction. We were not able to include these in the current study because they were not in the dataset available to us. Similarly, including the chief complaint on arrival might improve model prediction; however, the chief complaint is typically only available as free text, making it technically challenging to incorporate into a computational algorithm.
Another limitation of our study is that the dataset we used was a convenience sample (one month of data from one hospital, with patient volume lower than average). Future work should focus on testing this approach with different sampling methods. Different database structure of EHR systems may also affect the ease of data retrieval, feature extraction and modeling.
Future work should also investigate which indicators contributed most to the models, as well as why certain models showed better performance. Indicator importance can be calculated using black-box auditing techniques (Adler et al., 2018) and compared across different algorithms and different proxies. Interpretability of machine learning models has received increasing attention and is particularly important in the healthcare context for improving transparency (Shickel et al., 2018; Xiao et al., 2018). Knowing which indicators were most helpful could also reduce the number of indicators used in future models, which would ease deployment.
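One simple, model-agnostic way to estimate indicator importance is permutation importance (related to, but simpler than, the auditing approach of Adler et al., 2018): shuffle one feature's values across visits and measure the drop in accuracy. The toy model and data below are invented for illustration only.

```python
import random

random.seed(0)

# A toy "model" and dataset: the label depends only on the first feature.
def model(x):
    return 1 if x[0] > 0.5 else 0

X = [[random.random(), random.random()] for _ in range(1000)]
y = [model(x) for x in X]

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def permutation_importance(model, X, y, feature):
    """Drop in accuracy when one feature's values are shuffled across rows."""
    base = accuracy([model(x) for x in X], y)
    shuffled = [x[feature] for x in X]
    random.shuffle(shuffled)
    X_perm = [x[:feature] + [s] + x[feature + 1:] for x, s in zip(X, shuffled)]
    return base - accuracy([model(x) for x in X_perm], y)

imp0 = permutation_importance(model, X, y, 0)  # large: the model relies on feature 0
imp1 = permutation_importance(model, X, y, 1)  # zero: feature 1 is ignored
```

Run against the trained Random Forest models, this kind of estimate would rank the workload indicators by their contribution to each proxy, and indicators with near-zero importance could be dropped to simplify deployment.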
Highlights.
Efforts have been made to improve patient assignment at triage for better clinician workload management.
Data contained within the EHR have the potential to support automatic patient-related workload prediction
Using indicators available in the EHR data, one can predict patient-related workload at an early stage of the patient visit and update the prediction as the visit proceeds
The predicted workload can be used to assign new patients to clinicians in a way that better balances workload, or to identify clinicians who are overloaded.
Acknowledgement
The project described was supported by Grant Number R01 HS022542 from the Agency for Healthcare Research and Quality. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Agency for Healthcare Research and Quality.
Abbreviations:
- EHR
Electronic Health Record
- ED
Emergency Department
- ESI
Emergency Severity Index
Footnotes
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
- Adler P, Falk C, Friedler SA, Nix T, Rybeck G, Scheidegger C, Smith B, & Venkatasubramanian S (2018). Auditing black-box models for indirect influence. Knowledge and Information Systems, 54(1), 95–122. 10.1007/s10115-017-1116-3
- Ben-Assuli O (2015). Electronic health records, adoption, quality of care, legal and privacy issues and their implementation in emergency departments. Health Policy, 119(3), 287–297. 10.1016/j.healthpol.2014.11.014
- Ben-Assuli O, & Vest JR (2020). Data mining techniques utilizing latent class models to evaluate emergency department revisits. Journal of Biomedical Informatics, 101, 103341. 10.1016/j.jbi.2019.103341
- Benda NC, Blumenthal HJ, Hettinger AZ, Hoffman DJ, LaVergne DT, Franklin ES, Roth EM, Perry SJ, & Bisantz AM (2018). Human Factors Design in the Clinical Environment: Development and Assessment of an Interface for Visualizing Emergency Medicine Clinician Workload. IISE Transactions on Occupational Ergonomics and Human Factors, 6(3–4), 225–237. 10.1080/24725838.2018.1522392
- Carvalho-Silva M, Monteiro MTT, Sá-Soares F. de, & Dória-Nóbrega S (2018). Assessment of forecasting models for patients arrival at Emergency Department. Operations Research for Health Care, 18, 112–118. 10.1016/j.orhc.2017.05.001
- Clopton EL, & Hyrkäs EK (2020). Modeling emergency department nursing workload in real time: An exploratory study. International Emergency Nursing, 48, 100793. 10.1016/j.ienj.2019.100793
- Ding R, McCarthy ML, Lee J, Desmond JS, Zeger SL, & Aronsky D (2009). Predicting Emergency Department Length of Stay Using Quantile Regression. 2009 International Conference on Management and Service Science, 1–4. 10.1109/ICMSS.2009.5300861
- Jarvis PRE (2016). Improving emergency department patient flow. Clinical and Experimental Emergency Medicine, 3(2), 63–68. 10.15441/ceem.16.127
- Kannampallil TG, Denton CA, Shapiro JS, & Patel VL (2018). Efficiency of Emergency Physicians: Insights from an Observational Study using EHR Log Files. Applied Clinical Informatics, 9(1), 99–104. 10.1055/s-0037-1621705
- Knepper MM, Castillo EM, Chan TC, & Guss DA (2018). The Effect of Access to Electronic Health Records on Throughput Efficiency and Imaging Utilization in the Emergency Department. Health Services Research, 53(2), 787–802. 10.1111/1475-6773.12695
- McHugh M, Tanabe P, McClelland M, & Khare RK (2012). More Patients Are Triaged Using the Emergency Severity Index Than Any Other Triage Acuity System in the United States. Academic Emergency Medicine, 19(1), 106–109. 10.1111/j.1553-2712.2011.01240.x
- Mullins A, O’Donnell R, Mousa M, Rankin D, Ben-Meir M, Boyd-Skinner C, & Skouteris H (2020). Health Outcomes and Healthcare Efficiencies Associated with the Use of Electronic Health Records in Hospital Emergency Departments: A Systematic Review. Journal of Medical Systems, 44(12), 200. 10.1007/s10916-020-01660-0
- Noblin A, Cortelyou-Ward K, Cantiello J, Breyer T, Oliveira L, Dangiolo M, Cannarozzi M, Yeung T, & Berman S (2013). EHR Implementation in a New Clinic: A Case Study of Clinician Perceptions. Journal of Medical Systems, 37(4), 9955. 10.1007/s10916-013-9955-2
- Rahman MA, Honan B, Glanville T, Hough P, & Walker K (2020). Using data mining to predict emergency department length of stay greater than 4 hours: Derivation and single-site validation of a decision tree algorithm. Emergency Medicine Australasia, 32(3), 416–421. 10.1111/1742-6723.13421
- Rauch J, Hübner U, Denter M, & Babitsch B (2019). Improving the Prediction of Emergency Department Crowding: A Time Series Analysis Including Road Traffic Flow. Studies in Health Technology and Informatics, 260, 57–64.
- Saghafian S, Hopp WJ, Van Oyen MP, Desmond JS, & Kronick SL (2014). Complexity-Augmented Triage: A Tool for Improving Patient Safety and Operational Efficiency. Manufacturing & Service Operations Management, 16(3), 329–345. 10.1287/msom.2014.0487
- Schneider A, & Weigl M (2018). Associations between psychosocial work factors and provider mental well-being in emergency departments: A systematic review. PloS One, 13(6), e0197375. 10.1371/journal.pone.0197375
- Shickel B, Tighe PJ, Bihorac A, & Rashidi P (2018). Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE Journal of Biomedical and Health Informatics, 22(5), 1589–1604. 10.1109/JBHI.2017.2767063
- Tan P-N, Steinbach M, & Kumar V (2005). Introduction to Data Mining (First Edition). Addison-Wesley Longman Publishing Co., Inc.
- Traub SJ, Stewart CF, Didehban R, Bartley AC, Saghafian S, Smith VD, Silvers SM, LeCheminant R, & Lipinski CA (2016). Emergency Department Rotational Patient Assignment. Annals of Emergency Medicine, 67(2), 206–215. 10.1016/j.annemergmed.2015.07.008
- Vest JR, & Ben-Assuli O (2019). Prediction of emergency department revisits using area-level social determinants of health measures and health information exchange information. International Journal of Medical Informatics, 129, 205–210. 10.1016/j.ijmedinf.2019.06.013
- Wang X, Blumenthal HJ, Hoffman D, Benda N, Kim T, Perry S, Franklin ES, Roth EM, Hettinger AZ, & Bisantz AM (2019). Patient-related Workload Prediction in the Emergency Department: A Big Data Approach. Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care, 8(1), 33–36.
- Wiler JL, Griffey RT, & Olsen T (2011). Review of Modeling Approaches for Emergency Department Patient Flow and Crowding Research. Academic Emergency Medicine, 18(12), 1371–1379. 10.1111/j.1553-2712.2011.01135.x
- Wu C, Cha J, Sulek J, Zhou T, Sundaram CP, Wachs J, & Yu D (2020). Eye-Tracking Metrics Predict Perceived Workload in Robotic Surgical Skills Training. Human Factors, 62(8), 1365–1386. 10.1177/0018720819874544
- Xiao C, Choi E, & Sun J (2018). Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. Journal of the American Medical Informatics Association, 25(10), 1419–1428. 10.1093/jamia/ocy068
- Yoon P, Steiner I, & Reinhardt G (2003). Analysis of factors influencing length of stay in the emergency department. Canadian Journal of Emergency Medicine, 5(3), 155–161. 10.1017/S1481803500006539
- Zhou T, Cha JS, Gonzalez G, Wachs JP, Sundaram CP, & Yu D (2020). Multimodal Physiological Signals for Workload Prediction in Robot-assisted Surgery. ACM Transactions on Human-Robot Interaction, 9(2), 12:1–12:26. 10.1145/3368589
