Abstract
Objective
The study sought to examine the association between clinician burnout and measures of electronic health record (EHR) workload and efficiency, using vendor-derived EHR action log data.
Materials and Methods
We combined data from a statewide clinician survey on burnout with Epic EHR data from the ambulatory sites of 2 large health systems; the combined dataset included 422 clinicians. We examined whether specific EHR workload and efficiency measures were independently associated with burnout symptoms, using multivariable logistic regression and controlling for clinician characteristics.
Results
Clinicians with the highest volume of patient call messages had almost 4 times the odds of burnout compared with clinicians with the fewest (adjusted odds ratio, 3.81; 95% confidence interval, 1.44-10.14; P = .007). No other workload measures were significantly associated with burnout. No efficiency variables were significantly associated with burnout in the main analysis; however, in a subset of clinicians for whom note entry data were available, clinicians in the top quartile of copy and paste use were significantly less likely to report burnout, with an adjusted odds ratio of 0.22 (95% confidence interval, 0.05-0.93; P = .039).
Discussion
High volumes of patient call messages were significantly associated with clinician burnout, even when accounting for other measures of workload and efficiency. In the EHR, “patient calls” encompass many of the inbox tasks occurring outside of face-to-face visits and likely represent an important target for improving clinician well-being.
Conclusions
Our results suggest that increased workload is associated with burnout and that EHR efficiency tools are not likely to reduce burnout symptoms, with the exception of copy and paste.
Keywords: professional burnout, occupational stress, electronic health records, health information technology, medical informatics
INTRODUCTION
Over the past decade, researchers, physicians, and, increasingly, the public have focused their attention on burnout among healthcare workers.1–4 As the field moves from recognition of the widespread prevalence of burnout5 to assessment of its impact on patient outcomes6,7 and industry costs,8–10 investigators are also studying approaches to mitigate burnout.11,12 To successfully reduce burnout for physicians and other providers, interventions need to target multiple contributing factors, among which is health information technology (HIT).13,14 Stress related to HIT is both measurable and common, with two-thirds of physicians and half of advanced practice providers reporting in one survey that electronic health records (EHRs) add to the frustration of their workday.15,16
Several studies have examined the impact of EHRs on clinician well-being.17–21 In a large national study, investigators found low satisfaction with EHRs generally; computerized physician order entry, in particular, was associated with a 30% increase in the risk of burning out among survey respondents.22 Others have explored burnout less directly by quantifying the amount of time physicians spend interacting with their EHR during the workday, as well as time spent on tasks in the EHR after clinic hours and on days without appointments—collectively referred to as “work outside of work” or “pajama time.” One study determined that for every hour an outpatient physician spends face to face with patients, they spend 2 additional hours on EHR and desk work during office hours; physicians then log another 1-2 hours each night to complete their unfinished work.23 Another study found similar EHR usage after clinic hours, with primary care physicians (PCPs) spending approximately 6 hours per day interacting with the EHR.24 Physicians also spend 3 hours in their EHRs on days when they do not have appointments scheduled completing documentation or handling other tasks.25
In light of the evidence demonstrating that clerical and administrative tasks and inbox management consume a substantial proportion of clinicians’ total EHR time and contribute to burnout,24,26,27 we set out to identify whether certain components of work within the EHR contribute more to burnout than others, with particular attention to measures of workload and efficiency within the EHR itself. Specifically, the aim of this study was to examine the association between clinician burnout and measures of EHR workload and efficiency, using EHR usage data. We hope that by identifying individual EHR elements that are associated with burnout, we can then more effectively direct interventions to improve well-being among physicians and other providers.
MATERIALS AND METHODS
Study design and population
We conducted a retrospective cohort study of clinicians practicing in ambulatory sites across the 2 largest health systems in Rhode Island. Both health systems use Epic EHRs (Epic Systems, Verona, WI) at their ambulatory sites. Both systems have large academic missions and encompass several large hospitals and numerous academic and private practice ambulatory sites covering virtually all specialties. The study population included physicians, advanced practice registered nurses (APRNs), and physician assistants (PAs) across a range of specialties who were in active practice. The study was reviewed and approved by the Lifespan Institutional Review Board.
Data sources
Data for this study came from 2 sources: (1) EHR usage data from Epic and (2) the 2017 Rhode Island Department of Health Physician and Advanced Practice Provider Health Information Technology Survey. Regarding the EHR usage data, we partnered with Epic to obtain preprocessed Provider Efficiency Panel (PEP) data from ambulatory EHRs for both health systems, with the approval of HIT leadership at each institution. Data were provided for 1469 unique clinicians. The EHR usage data covered a period from the end of March to the beginning of June 2017 because those dates most closely matched the 2017 administration of the Rhode Island Department of Health survey. EHR use data were not available for every week during the study period in either health system.
Regarding the Rhode Island Department of Health survey, we obtained a dataset from the 2017 administration of the survey via a data use agreement. The Rhode Island Department of Health has administered the Physician and Advanced Practice Provider HIT Survey since 2008 as part of the state’s legislatively mandated Healthcare Quality Reporting Program. Survey data are used to measure and report process measures related to EHR use, as well as the impact of technology on clinicians’ workflow and well-being. The survey dataset incorporates age and gender from licensure files. The 2017 survey was sent to all 4197 physicians and 1686 advanced practice providers with active Rhode Island licenses and who had current addresses in Rhode Island or 1 of the 2 adjacent states (Connecticut or Massachusetts). Resident and fellow physicians were excluded. Survey responses are not anonymous, and no incentive is provided for completion. The 2017 survey was administered from May 8 to June 12, 2017. A total of 2310 clinicians responded, for a response rate of 39.3%. A copy of the 2017 survey is included in the Supplementary Appendix (note that not all questions were answered by all respondents given branching logic within the survey).
We merged the clinician-level EHR usage data with the Rhode Island Department of Health survey data using National Provider Identifier numbers for matching. The National Provider Identifier numbers were subsequently purged to yield a de-identified dataset. The combined dataset included 422 clinicians with both EHR usage data and responses to the 2017 survey; clinicians who appeared in only 1 of the 2 data sources were excluded.
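To make the linkage step concrete, the following is a minimal sketch of how such a merge could be performed in Python; the file and column names (including npi) are hypothetical and do not reflect the study's actual data pipeline.

```python
# Minimal sketch of the linkage step (hypothetical file and column names):
# join vendor EHR usage metrics to survey responses on National Provider
# Identifier, keep matched clinicians only, then purge the identifier.
import pandas as pd

pep = pd.read_csv("pep_metrics.csv")           # Provider Efficiency Panel extract (hypothetical)
survey = pd.read_csv("ridoh_survey_2017.csv")  # survey extract (hypothetical)

analytic = pep.merge(survey, on="npi", how="inner")  # keep clinicians present in both sources
analytic = analytic.drop(columns=["npi"])            # drop NPI to de-identify the dataset
```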
EHR workload and efficiency variables
We obtained the EHR usage data from PEP metrics calculated by Epic Systems. PEP metrics are proprietary measures of EHR usage within the Epic system of a given installation and are only available to current Epic users (these are now included in Epic’s Signal product). Prior studies have identified inbox management volume, along with data entry tasks and documentation burden, as key drivers of burnout.17,18,24,26 Based on these findings, we defined measures of workload to include the following: number of daily appointments (averaged over the study period), the minutes spent reviewing patient charts (weekly average), medication orders authorized by the clinician (weekly average), nonmedication orders authorized by the clinician (weekly average), patient call messages received (weekly average), results messages received (weekly average), and note length per visit in characters (averaged over the study period). It is important to note that patient call messages not only refer to phone calls from patients and families, but also encompass other patient care tasks occurring outside of a face-to-face visit. Thus, patient call messages include refill requests (those that did not arrive via an electronic interface), patient requests and questions, various patient care forms, and many other tasks. In many systems, these patient call messages are the workhorse tool for communication and coordination of care between visits. These messages, in addition to medication authorizations, constitute a substantial portion of clinicians’ inbox work.
We defined measures of efficiency to include the following: use of any precharting of visit notes (ie, notes started before the patient was checked in) during the study period (dichotomous), use of the Chart Search function (a tool to help clinicians find information located anywhere in the chart) during the study period (dichotomous), the number of SmartPhrases (personalized text templates inserted into notes by typing a few characters, commonly known as “dot phrases”) owned by or shared with the clinician during the study period, and the percent of the clinician’s orders that were placed either from a preference list (a personalized set of frequently used orders with information prepopulated) or from a SmartSet (a combination of orders related to a particular clinical scenario with information prepopulated, used in the ambulatory clinical setting).
For clinicians in one health system, we had data on the number of characters in a clinician’s notes that were entered by methods other than clinician directly entering text, including dictation. From these data, we calculated the following additional efficiency variables for the subset of clinicians with these data available: the percent of a clinician’s note content that was entered using SmartTools (documentation shortcuts to insert templates or preconfigured blocks of text), the percent of a clinician’s note content that was entered using copy and paste, and whether the clinician used transcription or voice recognition technology during the study period (dichotomous).
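As an illustration only, the derived note-entry measures could be computed from character counts along these lines; the column names are hypothetical, and the calculation is our own sketch rather than the vendor's definition.

```python
# Hypothetical sketch: derive the note-entry efficiency measures from
# per-clinician character counts by entry method (column names illustrative).
import pandas as pd

def note_entry_measures(df: pd.DataFrame) -> pd.DataFrame:
    total = df["note_chars_total"]
    df["pct_smarttools"] = 100 * df["note_chars_smarttool"] / total
    df["pct_copy_paste"] = 100 * df["note_chars_copy_paste"] / total
    df["any_dictation"] = (df["note_chars_transcribed"] > 0).astype(int)
    return df
```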
Several of the EHR workload and efficiency variables had skewed distributions due to outlier values. We defined variables as having outliers if the ratio of the mean to the median was >2; we then excluded individuals with values at least 3 SDs above the mean from the analysis. These variables included the following: medication orders authorized by the clinician (7 outlier clinicians excluded), nonmedication orders authorized by the clinician (11 excluded), patient call messages received (10 excluded), and number of SmartPhrases owned by or shared with the clinician (12 excluded). We chose to exclude these clinicians from the primary analysis because such values likely represent clinicians with job types that inherently involve very high EHR usage for certain tasks and are therefore not representative of usual ambulatory practice. For example, these could represent residency training clinic sites where a small number of physicians or advanced practice providers handle most calls or refill orders. We performed a sensitivity analysis to examine whether including outliers would affect any of the measures’ association with burnout.
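The exclusion rule can be expressed compactly; the sketch below assumes each measure is a column in a clinician-level data frame, with hypothetical column names.

```python
# Sketch of the outlier rule described above (hypothetical column names):
# flag a measure as outlier-prone when its mean-to-median ratio exceeds 2,
# then exclude clinicians whose values are at least 3 SDs above the mean.
import pandas as pd

def drop_high_outliers(df: pd.DataFrame, col: str) -> pd.DataFrame:
    s = df[col]
    if s.median() > 0 and (s.mean() / s.median()) > 2:   # skewed, outlier-prone measure
        cutoff = s.mean() + 3 * s.std()
        return df[s < cutoff]                            # drop values >= mean + 3 SD
    return df

# e.g., cohort = drop_high_outliers(cohort, "patient_call_msgs_per_week")
```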
Demographic and practice variables
We obtained clinicians’ demographic and practice characteristics from the Rhode Island Department of Health survey. The variables included clinician gender, age (categorized into <40, 40-60, and over 60 years of age), provider type (physician, APRN, or PA), practice size (categorized as 1-3 clinicians, 4-15 clinicians, 16 or more clinicians), whether they provide primary care (dichotomous), and whether they use a medical scribe (dichotomous). Physicians provided information regarding their specialty; specialty responses were grouped into 5 categories: medicine or pediatrics (which included family medicine, internal medicine, and pediatrics), obstetrics and gynecology, psychiatry, surgery (general and subspecialty), and other/unknown. APRNs were added to specialty groups based on their license type. All PAs were included in the other/unknown category because they were not asked to provide their specialty in the survey.
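These categorizations are simple recodes; a hedged sketch, with hypothetical column names and only a partial specialty mapping, might look like the following.

```python
# Illustrative recoding of demographic and practice variables (hypothetical
# column names; only a few example specialty mappings are shown).
import pandas as pd

def recode_characteristics(df: pd.DataFrame) -> pd.DataFrame:
    df["age_group"] = pd.cut(df["age"], bins=[0, 39, 60, 120],
                             labels=["<40", "40-60", ">60"])
    specialty_map = {
        "internal medicine": "medicine or pediatrics",
        "family medicine": "medicine or pediatrics",
        "pediatrics": "medicine or pediatrics",
        "obstetrics and gynecology": "obstetrics and gynecology",
        "psychiatry": "psychiatry",
        "general surgery": "surgery",
    }
    df["specialty_group"] = (df["specialty"].str.lower()
                             .map(specialty_map).fillna("other/unknown"))
    return df
```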
Burnout variable
Our main dependent variable was the presence of self-reported burnout symptoms. Burnout was measured in the 2017 survey using a single question item from the Mini Z Survey, a 10-item instrument developed from the Physician Work Life Study.28–30 This single-item measure has been previously validated for physicians31 and shown to have a sensitivity of 83.2% and specificity of 87.4% when compared with the longer and more detailed Maslach Burnout Inventory.32 Respondents were asked to characterize their symptoms of burnout using a 5-point scale: (1) “I enjoy my work. I have no symptoms of burnout”; (2) “I am under stress, and don’t always have as much energy as I did, but I don’t feel burned out”; (3) “I am definitely burning out and have one or more symptoms of burnout, eg, emotional exhaustion”; (4) “The symptoms of burnout I am experiencing won’t go away. I think about work frustrations a lot”; and (5) “I feel completely burned out. I am at the point where I may need to seek help.” Similar to previous studies, we dichotomized this measure into “no symptoms of burnout” (≤2 on the 5-point scale) and “one or more symptoms of burnout” (≥3 on the 5-point scale).29,30 We performed a sensitivity analysis with the dependent variable as the 5-level burnout scale, instead of the dichotomized response categories used in the main analysis, using an ordered logit model.
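For illustration, the outcome coding could be implemented as below; the survey item column name is hypothetical, and the ordinal version is retained only for the sensitivity analysis just described.

```python
# Sketch of the outcome coding (hypothetical column name): dichotomize the
# 5-level Mini Z item at >= 3, ie, "one or more symptoms of burnout."
import pandas as pd

def code_burnout(df: pd.DataFrame) -> pd.DataFrame:
    df["burnout"] = (df["mini_z_item"] >= 3).astype(int)  # 1 = one or more burnout symptoms
    df["burnout_ordinal"] = df["mini_z_item"]             # 1-5 scale kept for the ordered logit sensitivity analysis
    return df
```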
Statistical analysis
We used univariable statistics to describe the sample characteristics, the prevalence of burnout, and the EHR workload and efficiency measures. We used t tests to compare the EHR usage data between PCPs and non-PCPs. Logistic regression was used to measure the unadjusted associations between burnout and individual clinician characteristics (age, gender, practice size, primary care status, specialty, use of a medical scribe, clinician type, and health system site), workload measures (number of daily appointments, number of minutes spent reviewing patient charts per week scaled into 5-minute increments, number of medication and nonmedication orders authorized by the clinician per week categorized into quartiles, patient call messages received per week categorized into quartiles, results messages received per week categorized into quartiles, and note length per visit scaled into 500-character increments), and efficiency measures (any precharting of visit notes, any use of the Chart Search function, number of SmartPhrases, the percent of orders placed from a preference list or a SmartSet, and, where available, the percent of note content entered using SmartTools categorized into quartiles, the percent of note content entered using copy and paste categorized into quartiles, and any use of transcription or voice recognition entry).
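As a hedged sketch of this step (the study's analyses were performed in SAS, and all variable names here are hypothetical), skewed measures can be cut into quartiles, continuous measures rescaled into the increments noted above, and each predictor entered into its own logistic model.

```python
# Illustrative unadjusted logistic regression for one predictor (hypothetical
# column names; the study's analyses were actually run in SAS).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df["call_quartile"] = pd.qcut(df["patient_call_msgs_per_week"], 4,
                              labels=["Q1", "Q2", "Q3", "Q4"])
df["chart_review_5min"] = df["chart_review_min_per_week"] / 5   # 5-minute increments
df["note_len_500char"] = df["note_chars_per_visit"] / 500       # 500-character increments

unadj = smf.logit("burnout ~ C(call_quartile, Treatment('Q1'))", data=df).fit()
print(np.exp(unadj.params).round(2))      # unadjusted odds ratios vs quartile 1
print(np.exp(unadj.conf_int()).round(2))  # 95% confidence intervals
```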
We used multivariable logistic regression to measure the association between burnout and the individual clinician and practice characteristics, workload measures, and efficiency measures. Because the efficiency measures related to note entry were only available for clinicians from 1 of the 2 health systems, we conducted the regression analysis both with and without the note entry measures. Owing to potential concerns for collinearity among the variables, multiple collinearity diagnostics33 were examined and indicated no evidence of multicollinearity (Pearson correlation coefficients <0.8, variance inflation factors <10, and tolerance values >0.1). All statistical analyses were conducted using SAS version 9.3 (SAS Institute, Cary, NC).
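Again as a sketch rather than the authors' actual SAS code, the fully adjusted model and the collinearity checks could be approximated as follows, assuming the same hypothetical analytic data frame as above with an abbreviated covariate list.

```python
# Sketch of the fully adjusted model and the collinearity diagnostics
# (hypothetical, abbreviated covariate list; the study used SAS 9.3).
import numpy as np
import pandas as pd
import patsy
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

rhs = ("C(call_quartile) + C(results_quartile) + chart_review_5min + note_len_500char"
       " + C(age_group) + female + primary_care + C(health_system)")
adj = smf.logit("burnout ~ " + rhs, data=df).fit()
print(np.exp(adj.params).round(2))                     # adjusted odds ratios

# Diagnostics reported above: pairwise Pearson |r| < 0.8, VIF < 10, tolerance > 0.1
X = patsy.dmatrix(rhs, data=df, return_type="dataframe")
corr = X.drop(columns="Intercept").corr()              # pairwise Pearson correlations
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
tolerance = 1 / vif
```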
RESULTS
Characteristics of the sample are shown in Table 1. More than half the clinicians (56.6%) were between 40 and 60 years of age, and 55% were women. About a quarter (25.8%) identified themselves as providing primary care. Within our cohort, 116 (27.5%) clinicians reported 1 or more symptoms of burnout. A higher proportion of PCPs reported burnout than non-PCPs (39.5% vs 23.6%; P = .001).
Table 1.
Characteristic | n (%) |
---|---|
Age | |
<40 y | 117 (27.7) |
40-60 y | 239 (56.6) |
>60 y | 66 (15.6) |
Female | 232 (55.0) |
Practice size | |
1-3 clinicians | 82 (19.4) |
4-15 clinicians | 175 (41.5) |
16 or more clinicians | 163 (38.6) |
Primary care clinician a | 109 (25.8) |
Specialty | |
Medicine or pediatricsb | 209 (49.5) |
Obstetrics and gynecology | 40 (9.5) |
Psychiatry | 53 (12.6) |
Surgery | 51 (12.1) |
Other/unknown | 69 (16.4) |
Use of medical scribe | 20 (4.7) |
Clinician type | |
Physician | 358 (84.8) |
Advanced practice registered nurse | 47 (11.1) |
Physician assistant | 17 (4.0) |
Health system | |
A | 116 (27.5) |
B | 306 (72.5) |
Burnout prevalence | |
1. “I enjoy my work. I have no symptoms of burnout” | 94 (22.3) |
2. “I am under stress, and don't always have as much energy as I did, but I don't feel burned out” | 209 (49.5) |
3. “I am definitely burning out and have one or more symptoms of burnout, eg, emotional exhaustion” | 85 (20.1) |
4. “The symptoms of burnout I am experiencing won't go away. I think about work frustrations a lot” | 25 (5.9) |
5. “I feel completely burned out. I am at the point where I may need to seek help” | 6 (1.4) |
One or more symptoms of burnout present c | 116 (27.5) |
Primary care clinician | 43 (39.5) |
Non–primary care clinician | 73 (23.6) |
Values are n (%). Column totals may not sum to sample size due to missing responses.
a Survey respondents who replied yes to the question: “Do you provide primary care?”
b Includes internal medicine, family medicine, and pediatrics.
c Burnout measure was dichotomized into “no symptoms of burnout” (≤2 on 5-point scale) and “one or more symptoms of burnout” (≥3 on 5-point scale).
EHR measures of workload and efficiency for the study population are shown in Table 2. Clinicians had a mean of 7.58 appointments per day, and they spent a mean of approximately 80 ± 79.3 minutes per week reviewing patient charts. They authorized a combined mean of more than 50 medication and nonmedication orders per week (53.14) and received a combined mean of more than 30 patient call and results messages per week (31.62). Fewer than half of clinicians precharted their notes at any point during the study period (43.84%), and only 17% used the Chart Search function. Among the subset of clinicians for whom note entry data were available, almost half of their note content was entered using SmartTools (45.97%), and a quarter was entered via copy and paste (24.24%); fewer than 5% of clinicians (4.14%) used transcription or voice recognition technology to compose their notes during the study period. PCPs, on average, had a higher workload than non-PCPs. For example, PCPs received 4 times as many patient call messages as non-PCPs, with an average of 20 messages a week compared with an average of 5.1 messages among non-PCPs (P < .001) (Table 2).
Table 2.
Variable | All physicians | PCP | Non-PCP | P value
---|---|---|---|---
Workload variables | ||||
Number of daily appointmentsa | 7.58 ± 5.4; 6.32 (3.51-10.3) | 10.70 ± 6.36 | 6.49 ± 4.60 | <.001 |
Minutes spent reviewing charts per week | 79.80 ± 79.3; 56.59 (16.6-121.3) | 116.76 ± 89.17 | 66.93 ± 71.37 | <.001 |
Medication orders authorized per week | 13.16 ± 18.3; 5.23 (0.6-17.3) | 28.39 ± 22.98 | 8.00 ± 12.83 | <.001 |
Nonmedication orders authorized per week | 39.98 ± 61.8; 12.93 (0.7-55.7) | 96.16 ± 84.54 | 22.39 ± 38.62 | <.001 |
Patient call messages received per week | 8.65 ± 14.6; 1.30 (0.0-11.8) | 20.01 ± 21.58 | 5.05 ± 9.04 | <.001 |
Results messages received per week | 22.97 ± 30.8; 11.15 (1.6-30.4) | 45.41 ± 41.13 | 15.15 ± 21.30 | <.001 |
Note length per visit, characters | 6304.0 ± 5627.4; 5138.8 (3019.0-7726.7) | 5822.17 ± 3774.45 | 6471.84 ± 6139.19 | .196 |
Efficiency variables | ||||
Any precharting of visit notes | 185 (43.8) | 57 (52.3) | 128 (40.9) | .039 |
Any use of Chart Search function | 70 (16.6) | 23 (21.1) | 47 (15.0) | .142 |
Number of user SmartPhrasesb | 40.4 ± 41.4; 27.00 (11.0-56.5) | 60.60 ± 53.68 | 33.40 ± 33.58 | <.001 |
Percent of orders placed from preference lists or SmartSetsc | 54.2 ± 36.7; 65.8 (12.5-88.0) | 75.7 ± 22.2 | 46.7 ± 37.8 | <.001 |
Percent of note entered using SmartToolsd,e | 46.0 ± 25.1; 48.2 (29.9-64.0) | 61.1 ± 16.2 | 43.1 ± 25.5 | <.001 |
Percent of note entered using copy/pastee | 24.2 ± 24.5; 15.3 (2.4-41.1) | 7.5 ± 11.1 | 27.4 ± 25.1 | <.001 |
Any use of transcription or voice recognition technology for note entrye | 12 (4.1) | 0 (0.0) | 12 (4.9) | <.001 |
Values are mean ± SD, median (interquartile range), or n (%).
PCP: primary care physician.
a Average daily appointments over the entire study period.
b Epic SmartPhrases are personalized shortcuts to auto-populate large blocks of text in a note by typing a few characters, also known as “dot phrases.” These are either created by the clinician or shared with them by another user.
c Preference lists within Epic are a personalized set of frequently used orders with information prepopulated. These are either created or modified by the clinician or by institutional information services support. SmartSets, in this context, are a combination of orders related to a particular clinical scenario with information prepopulated; at the time of this study, they could only be created by information services support.
d SmartTools are a set of documentation shortcuts that enable insertion of preconfigured phrases, selectable lists, and links to data within the electronic health record that are intended to standardize and streamline documentation.
e Data were available for this variable for 290 clinicians in 1 of the 2 health systems.
In the unadjusted model, we identified several individual characteristics, as well as EHR workload and efficiency variables, associated with higher odds of burnout (Table 3). For example, clinicians placing the most nonmedication orders had twice the odds of burnout compared with those placing the fewest orders (odds ratio [OR] for the fourth quartile vs the first, 1.98; 95% confidence interval [CI], 1.05-3.74; P = .035). Clinicians receiving the most patient call messages had almost 3 times the odds of burnout compared with those receiving the fewest (OR for the fourth quartile vs the first, 2.88; 95% CI, 1.53-5.39; P = .001). In addition to nonmedication orders and call messages, other variables associated with increased odds of burnout in the unadjusted model included the number of minutes per week spent reviewing patient information in the EHR, the number of medication orders authorized per week, the number of results messages received per week, and the percent of orders placed from preference lists or SmartSets. Among the subset of clinicians for whom note entry data were available, clinicians who used SmartTools in a higher proportion of their notes had higher odds of burnout (OR for the fourth quartile vs the first, 2.94; 95% CI, 1.37-6.31; P = .006), while those who used a higher proportion of copy and paste had lower odds of burnout (OR for the fourth quartile vs the first, 0.44; 95% CI, 0.20-0.98, P = .044). PCPs and older clinicians were also more likely to report symptoms of burnout in the unadjusted model.
Table 3.
Variable | Unadjusted model (OR) | Adjusted model (AOR) | Adjusted model: subset of clinicians with note composition data (AOR)
---|---|---|---
Workload variables | |||
Number of daily appointments | 1.00 | 0.95 | 1.01 |
Minutes spent reviewing charts per week, in 5-min increments | 1.02b | 1.01 | 1.01 |
Medication orders authorized per week | |||
Quartile 1 | Ref | Ref | Ref |
Quartile 2 | 1.80a | 1.30 | 1.31 |
Quartile 3 | 2.12b | 1.03 | 0.89 |
Quartile 4 | 1.81a | 0.54 | 0.43 |
Nonmedication orders authorized per week | |||
Quartile 1 | Ref | Ref | Ref |
Quartile 2 | 1.13 | 0.79 | 0.75 |
Quartile 3 | 2.02b | 0.95 | 0.75 |
Quartile 4 | 1.98b | 0.53 | 0.43 |
Patient call messages received per week | |||
Quartile 1 | Ref | Ref | Ref |
Quartile 2 | 1.27 | 1.27 | 1.65 |
Quartile 3 | 1.47 | 1.32 | 1.70 |
Quartile 4 | 2.88c | 3.81c | 6.59c |
Results messages received per week | |||
Quartile 1 | Ref | Ref | Ref |
Quartile 2 | 1.55 | 1.41 | 1.44 |
Quartile 3 | 1.55 | 1.19 | 0.99 |
Quartile 4 | 1.97b | 1.49 | 1.55 |
Note length per visit, in 500-character increments | 1.01 | 1.00 | 1.01 |
Efficiency variables | |||
Any precharting of visit notes | |||
No | Ref | Ref | Ref |
Yes | 0.96 | 0.74 | 0.84 |
Any use of Chart Search function | |||
No | Ref | Ref | Ref |
Yes | 1.50 | 1.00 | 0.68 |
Number of user SmartPhrases | 1.00b | 1.01a | 1.01 |
Percent of orders placed from preference lists or SmartSets | 2.17b | 1.90 | 2.60 |
Percent of note entered using SmartToolsd | |||
Quartile 1 | Ref | — | Ref
Quartile 2 | 2.07a | — | 1.49 |
Quartile 3 | 1.11 | — | 0.48 |
Quartile 4 | 2.94c | — | 1.48 |
Percent of note entered using copy and pasted | |||
Quartile 1 | Ref | — | Ref
Quartile 2 | 1.26 | — | 0.70 |
Quartile 3 | 0.61 | — | 0.46 |
Quartile 4 | 0.44b | — | 0.22b |
Any use of transcription or voice recognition technology for note entryd | |||
No | Ref | — | Ref
Yes | 0.91 | — | 1.47 |
Individual characteristics | |||
Age | |||
<40 y | Ref | Ref | Ref |
40-60 y | 1.12 | 1.02 | 0.68 |
>60 y | 1.89a | 2.29b | 1.84 |
Gender | |||
Male | Ref | Ref | Ref |
Female | 1.12 | 1.12 | 0.86 |
Practice size | |||
1-3 clinicians | Ref | Ref | Ref |
4-15 clinicians | 0.70 | 0.77 | 0.66 |
16 or more clinicians | 0.78 | 1.00 | 0.76 |
Primary care clinician | |||
No | Ref | Ref | Ref |
Yese | 2.12c | 2.16b | 2.51a |
Specialty | |||
Medicine or pediatricsf | Ref | Ref | Ref |
Obstetrics and gynecology | 0.61 | 0.76 | 0.52 |
Psychiatry | 0.68 | 2.67a | 5.96b |
Surgery | 0.51a | 1.62 | 0.49 |
Other/unknown | 0.70 | 1.47 | 0.87 |
Use of medical scribe | |||
No | Ref | Ref | Ref |
Yes | 0.45 | 0.54 | 1.97 |
Clinician type | |||
Advanced practice providerg | Ref | Ref | Ref |
Physician | 0.87 | 1.13 | 1.51 |
Health system | |||
A | Ref | Ref | — |
B | 0.80 | 0.74 | — |
See Supplementary Table 1 for results with confidence intervals and P values. Epic SmartPhrases are personalized shortcuts to auto-populate large blocks of text in a note by typing a few characters, also known as “dot phrases.” These are either created by the clinician or shared with them by another user. Preference lists within Epic are a personalized set of frequently used orders with information prepopulated. These are either created or modified by the clinician or by institutional information services support. SmartSets, in this context, are a combination of orders related to a particular clinical scenario with information prepopulated; at the time of this study, they could only be created by information services support. SmartTools are a set of documentation shortcuts that enable insertion of preconfigured phrases, selectable lists, and links to data within the electronic health record that are intended to standardize and streamline documentation.
a P < .10.
b P < .05.
c P < .01.
d Data were available for this variable for 290 clinicians in 1 of the 2 health systems.
e Survey respondents who replied yes to the question: “Do you provide primary care?”
f Includes internal medicine, family medicine, and pediatrics.
g Includes advanced practice registered nurses and physician assistants.
In the fully adjusted model, PCP status and older age remained significant predictors of burnout symptoms (adjusted OR [AOR] for PCPs vs non-PCPs, 2.16; 95% CI, 1.02-4.59; P = .044; AOR for clinicians >60 years of age vs those <40 years of age, 2.29; 95% CI, 1.02-5.13; P = .044) (Table 3). Gender, practice size, specialty, clinician type, use of a scribe, and health system were not associated with either increased or decreased odds of burnout.
Among the workload variables, the number of patient call messages per week remained significant in the fully adjusted model (Table 3). Those in the highest quartile of patient call messages received had almost 4 times the odds of burnout compared with clinicians in the lowest quartile (AOR, 3.81; 95% CI, 1.44-10.14; P = .007). Overall, the number of orders placed, results messages received, and daily appointments were not associated with burnout.
Among the measures of efficiency—precharting of notes, use of the Chart Search function, number of SmartPhrases, and percent of orders placed from preference lists or SmartSets—none were significantly associated with burnout in the fully adjusted model (Table 3). However, when we looked at the subset of clinicians for whom note entry data were available, we found that clinicians in the top quartile of copy and paste use were significantly less likely to report burnout, with an AOR of 0.22 (95% CI, 0.05-0.93; P = .039). Neither a higher proportion of SmartTools use in notes nor use of transcription or voice recognition technology was associated with lower burnout prevalence. Interestingly, the majority of PCPs fell into the higher quartiles of SmartTool use for documentation and, overall, into the lower quartiles of copy and paste use (Table 2).
When the participants were stratified by PCP status or by median number of daily appointments, we found that the association between burnout and the volume of call messages persisted, although it was no longer statistically significant in the subset of PCPs (Table 4). For the subset of clinicians with a higher-than-average number of daily appointments, the odds of burnout were lower with higher volumes of medication orders. This relationship, although not statistically significant, was also observed among the subset of PCPs.
Table 4.
Variable | Full sample | By PCP status: PCP | By PCP status: Non-PCP | By median daily appointments: Above median | By median daily appointments: Below median
---|---|---|---|---|---
Workload variables | |||||
Number of daily appointments | 0.95 | 0.89 | 0.95 | — | — |
Minutes spent reviewing charts per week, in 5-min increments | 1.01 | 1.02 | 1.01 | 1.03 | 1.00 |
Medication orders authorized per week | |||||
Quartile 1 | Ref | Ref | Ref | Ref | Ref |
Quartile 2 | 1.30 | 0.01b | 2.13 | 0.24 | 2.11 |
Quartile 3 | 1.03 | 0.07 | 1.49 | 0.11b | 1.72 |
Quartile 4 | 0.54 | 0.05 | 0.48 | 0.03c | 0.49 |
Nonmedication orders authorized per week | |||||
Quartile 1 | ref | ref | Ref | ref | ref |
Quartile 2 | 0.79 | 0.32 | 0.77 | 2.09 | 1.05 |
Quartile 3 | 0.95 | 3.67 | 0.90 | 4.18 | 0.47 |
Quartile 4 | 0.53 | 0.58 | 0.38 | 1.57 | <0.01 |
Patient call messages received per week | |||||
Quartile 1 | Ref | Ref | Ref | Ref | Ref |
Quartile 2 | 1.27 | 1.77 | 1.13 | 2.03 | 0.69 |
Quartile 3 | 1.32 | 1.16 | 1.05 | 2.22 | 1.02 |
Quartile 4 | 3.81c | 2.01 | 3.61b | 5.37b | 5.86b |
Results messages received per week | |||||
Quartile 1 | Ref | Ref | Ref | Ref | Ref |
Quartile 2 | 1.41 | 1.95 | 1.59 | 3.05 | 1.52 |
Quartile 3 | 1.19 | 4.69 | 1.77 | 1.38 | 1.79 |
Quartile 4 | 1.49 | 5.41 | 1.81 | 3.50 | 1.53 |
Note length per visit, in 500-character increments | 1.00 | 1.03 | 1.01 | 0.99 | 1.01 |
Efficiency variables | |||||
Any precharting of visit notes | |||||
No | Ref | Ref | Ref | Ref | Ref
Yes | 0.74 | 0.61 | 0.72 | 1.31 | 0.56 |
Any use of Chart Search function | |||||
No | Ref | Ref | Ref | Ref | Ref |
Yes | 1.00 | 0.80 | 0.96 | 0.59 | 1.06 |
Number of user SmartPhrases | 1.01a | 1.02b | 1.00 | 1.00 | 1.01 |
Percent of orders placed from preference lists or SmartSets | 1.90 | 0.52 | 2.21 | 5.54a | 2.22 |
PCP status was determined by survey respondents who replied yes to the question: “Do you provide primary care?” The model fit for the below-median stratification may not be valid, owing to quasi-complete separation of data points. Models in the table were adjusted for the following: age, gender, practice size, primary care status, specialty, use of a medical scribe, clinician type, health system site, number of daily appointments, number of minutes spent reviewing patient charts per week, number of medication and nonmedication orders authorized per week, patient call messages received per week, results messages received per week, note length per visit, any precharting of visit notes, any use of the Chart Search function, number of SmartPhrases, and the percent of orders placed from preference list or from a SmartSet. The results stratified by PCP status did not include the PCP variable in the model, and the results stratified by number of daily appointments did not include the appointments variable in the model. See Supplementary Table 2 for results with confidence intervals and P values. Epic SmartPhrases are personalized shortcuts to auto-populate large blocks of text in a note by typing a few characters, also known as “dot phrases.” These are either created by the clinician or shared with them by another user. Preference lists within Epic are a personalized set of frequently used orders with information prepopulated. These are either created or modified by the clinician or by institutional information services support. SmartSets, in this context, are a combination of orders related to a particular clinical scenario with information prepopulated; at the time of this study, they could only be created by information services support.
PCP: primary care physician.
a P < .10.
b P < .05.
c P < .01.
The sensitivity analysis that included the dependent variable burnout with its ordinal response categories (vs dichotomized response categories) produced results similar to those of the primary analysis, as did the sensitivity analysis that included all outlier clinicians.
DISCUSSION
In this study examining burnout and actual EHR usage, we found that physicians and other providers with the highest volume of call messages had almost 4 times the odds of burnout as clinicians with the fewest call messages, even when controlling for demographic and practice characteristics and workload and efficiency measures. We also found that, with the exception of copying and pasting note content, EHR-based efficiency tools were not associated with decreased odds of burnout, suggesting that these strategies, as currently deployed, are not sufficient to mitigate burnout related to EHR-based tasks.34 In fact, these purported efficiency tools may not confer or measure efficiency at all.
We identified the inbox volume of patient call messages as the most significant predictor of self-reported burnout among clinicians. In this context, the category of patient calls not only includes telephone inquiries from patients, but also represents much of the care coordination and other inbox tasks that occur outside of a face-to-face visit. Work generated under this heading can include medication refills, prior authorization forms, disability paperwork, and communication with other physicians, among many other tasks. PCPs had the largest burden of these messages, along with a higher burden of results and orders. Perhaps the call volume measure is associated with increased burnout because virtually all of the tasks are uncompensated. Medicare has attempted to address this lack of compensation by implementing a separately billable Chronic Care Management code to reimburse clinicians for time spent coordinating care between visits, but uptake has been inconsistent and generally low.35
Compensation is just one explanation for why patient call volume was strongly associated with burnout; lack of control over workload, an excessive amount of time spent on the EHR at home, and a high proportion of work not requiring physician-level skills likely contribute substantially.36,37 A study of cardiologists found that those reporting poor control over their workload had twice the prevalence of burnout, after controlling for demographic factors, perceived discrimination, and characteristics of the work environment.38 Similar findings in a study of general internists led to an intervention specifically designed to address stress and burnout related to inbox volume. The group hired a nurse practitioner to help with inbox tasks and created 2 administrative “desktop” slots during each clinic session to give physicians time to complete tasks during the workday, decreasing relative value unit expectations accordingly. The study demonstrated marked improvement in stress and burnout and found that lack of control over workload decreased from 61% to 31% after the intervention.39
In addition to reducing message volume, those committed to addressing the impact of inbox tasks will also likely need to target EHR usability.36,37,40 Qualitative studies suggest that EHR vendors and healthcare organizations could improve inbox usability by reducing the complexity of processing inbox messages; study participants suggested better matching of EHR workflows with clinical workflows and reducing the number of mouse clicks for inbox tasks. Implementing strategies to reduce clinicians’ cognitive load by simplifying the design of the inbox and streamlining message content may also improve usability. In addition, others have suggested upstream interventions that can reduce the volume of inbox tasks, including ordering labs before visits and standardizing yearly refills on most prescriptions.41 Finally, these studies recommended designing inbox management tools that ensure that messages are related to patient care and relevant to the clinician receiving them.36,40
We also found that physicians and other providers who used a higher proportion of copy and paste for their documentation were less likely to report burnout symptoms, even after controlling for workload and PCP vs specialist status. This association has face validity, as copy and paste likely allows for completion of documentation in a shorter time frame and with less effort. Certainly, there is an argument for improved efficiency; however, copy and paste often leads to longer, less useful notes and potentially dangerous errors or miscommunication.42 In addition, reading copy-and-pasted note content was independently associated with increased stress and burnout in a large study of ambulatory clinicians,39 suggesting that a decrease in burnout for the note writer may be offset by an increase for the note reader. Higher-than-average use of copy and paste may also be a marker for professionalism concerns43 or for deficits in clinical reasoning among trainees.44 We also wonder if there might be an element of underlying moral unease related to copy and paste that has other unmeasured adverse impacts on clinicians who rely on this tool for a substantial proportion of their notes. Interestingly, though we were able to study a number of tools designed to improve documentation efficiency, copy and paste was the only one associated with decreased burnout, although we found that approximately 70% of note content was populated either by copy and paste or templates. However, use of efficiency tools among our study participants was relatively low, which may have limited our ability to detect an impact on burnout.
Strengths of this study include a large, diverse, state-wide sample of physicians and other providers, including the majority of ambulatory clinicians in the state who use an Epic EHR. Importantly, we were able to link burnout with actual keystroke and mouse click data within the EHR itself, instead of relying on self-reported EHR use. However, our findings should be considered in the context of several limitations. First, the survey response rate may affect generalizability. Though our sample size is large, and the response rate is high for an uncompensated physician survey, there are likely differences between respondents and nonrespondents. Generalizability may also be limited by the fact that our data are from a single state and a single EHR vendor, as well as the fact that data on note efficiency tools were available only for a subset of clinicians. Second, clinicians may have been reluctant to report the full extent of their burnout symptoms because the survey was not anonymous and was administered by the state Department of Health, which also oversees medical licensure; therefore, the prevalence of burnout may be underestimated. Third, the survey was administered electronically, potentially selecting for respondents who are more comfortable with computers in general. Fourth, the study design does not allow us to determine whether the association between burnout and the number of call messages is related to the volume of work, regardless of technology, vs something inherent in how tasks are generated, delegated, and completed in the EHR specifically—or some combination of these. Last, the current metrics do not allow for identification of workload that falls solely on the physician or other provider vs work that is shared by support staff, nor were we able to stratify results based on individual practice settings and resources, which likely vary substantially among practice locations.
CONCLUSION
In conclusion, this study used EHR usage data to examine the association between clinician burnout and measures of workload and efficiency and found that clinicians with the highest volume of patient call messages had almost 4 times the odds of burnout compared with those with the fewest calls. Our results also suggest that EHR efficiency tools, as currently used by clinicians in the study, are not likely to reduce burnout symptoms, with the possible exception of copy and paste. In addition to delegating appropriate inbox messages to nonphysician staff and improving EHR usability, we recommend that future studies explore prospectively testing a model of EHR use characteristics predictive of burnout, so that individual institutions could provide customized assistance to clinicians.
AUTHOR CONTRIBUTIONS
RWH, JH, and RLG all made substantial contributions to the conception and design of the work as well as interpretation of data; RWH drafted the work with significant assistance from JH, and RLG assisted with revisions prior to submission. JH aided in the statistical analysis and interpretation of the results. All authors give approval for the final version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
ACKNOWLEDGMENTS
The authors are grateful to Dr. Mark Schleinitz for his thoughtful review of an earlier version of the manuscript. They are also thankful for the invaluable help from the Epic Physician Well-Being team.
CONFLICT OF INTEREST STATEMENT
The authors have no competing interests to declare.
REFERENCES
- 1. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014; 12 (6): 573–6.
- 2. Rodrigues H, Cobucci R, Oliveira A, et al. Burnout syndrome among medical residents: a systematic review and meta-analysis. PLoS One 2018; 13 (11): e0206840.
- 3. Pradas-Hernández L, Ariza T, Gómez-Urquiza JL, Albendín-García L, De la Fuente EI, Cañadas-De la Fuente GA. Prevalence of burnout in paediatric nurses: a systematic review and meta-analysis. PLoS One 2018; 13 (4): e0195039.
- 4. Lebares CC, Guvva EV, Ascher NL, O'Sullivan PS, Harris HW, Epel ES. Burnout and stress among US surgery residents: psychological distress and resilience. J Am Coll Surg 2018; 226 (1): 80–90.
- 5. Shanafelt TD, Hasan O, Dyrbye LN, et al. Changes in burnout and satisfaction with work-life balance in physicians and the general US working population between 2011 and 2014. Mayo Clin Proc 2015; 90 (12): 1600–13.
- 6. Tawfik DS, Profit J, Morgenthaler TI, et al. Physician burnout, well-being, and work unit safety grades in relationship to reported medical errors. Mayo Clin Proc 2018; 93 (11): 1571–80.
- 7. Panagioti M, Geraghty K, Johnson J, et al. Association between physician burnout and patient safety, professionalism, and patient satisfaction: a systematic review and meta-analysis. JAMA Intern Med 2018; 178 (10): 1317–30.
- 8. Han S, Shanafelt TD, Sinsky CA, et al. Estimating the attributable cost of physician burnout in the United States. Ann Intern Med 2019; 170 (11): 784–90.
- 9. Shanafelt T, Goh J, Sinsky C. The business case for investing in physician well-being. JAMA Intern Med 2017; 177 (12): 1826–32.
- 10. Sinsky CA, Dyrbye LN, West CP, Satele D, Tutty M, Shanafelt TD. Professional satisfaction and the career plans of US physicians. Mayo Clin Proc 2017; 92 (11): 1625–35.
- 11. Dyrbye LN, Shanafelt TD, Gill PR, Satele DV, West CP. Effect of a professional coaching intervention on the well-being and distress of physicians: a pilot randomized clinical trial. JAMA Intern Med 2019; 179 (10): 1406–14.
- 12. West CP, Dyrbye LN, Rabatin JT, et al. Intervention to promote physician well-being, job satisfaction, and professionalism: a randomized clinical trial. JAMA Intern Med 2014; 174 (4): 527–33.
- 13. Linzer M, Poplau S, Babbott S, et al. Worklife and wellness in academic general internal medicine: results from a national survey. J Gen Intern Med 2016; 31 (9): 1004–10.
- 14. Olson K, Sinsky C, Rinne ST, et al. Cross-sectional survey of workplace stressors associated with physician burnout measured by the Mini-Z and the Maslach Burnout Inventory. Stress Health 2019; 35 (2): 157–75.
- 15. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (2): 106–14.
- 16. Harris DA, Haskell J, Cooper E, Crouse N, Gardner R. Estimating the association between burnout and electronic health record-related stress among advanced practice registered nurses. Appl Nurs Res 2018; 43: 36–41.
- 17. Kroth PJ, Morioka-Douglas N, Veres S, et al. Association of electronic health record design and use factors with clinician stress and burnout. JAMA Netw Open 2019; 2 (8): e199609.
- 18. Kroth PJ, Morioka-Douglas N, Veres S, et al. The electronic elephant in the room: physicians and the electronic health record. JAMIA Open 2018; 1 (1): 49–56.
- 19. Babbott S, Manwell LB, Brown R, et al. Electronic medical records and physician stress in primary care: results from the MEMO Study. J Am Med Inform Assoc 2014; 21 (e1): e100–6.
- 20. Flanagan ME, Militello LG, Rattray NA, Cottingham AH, Frankel RM. The thrill is gone: burdensome electronic documentation takes its toll on physicians’ time and attention. J Gen Intern Med 2019; 34 (7): 1096–7.
- 21. Jones CD, Holmes GM, Lewis SE, Thompson KW, Cykert S, DeWalt DA. Satisfaction with electronic health records is associated with job satisfaction among primary care physicians. Inform Prim Care 2013; 21 (1): 18–20.
- 22. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc 2016; 91 (7): 836–48.
- 23. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med 2016; 165 (11): 753–60.
- 24. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
- 25. Saag HS, Shah K, Jones SA, Testa PA, Horwitz LI. Pajama time: working after work in the electronic health record. J Gen Intern Med 2019; 34 (9): 1695–6.
- 26. Murphy DR, Meyer AN, Russo E, Sittig DF, Wei L, Singh H. The burden of inbox notifications in commercial electronic health records. JAMA Intern Med 2016; 176 (4): 559–60.
- 27. Murphy DR, Reis B, Sittig DF, Singh H. Notifications received by primary care practitioners in electronic health records: a taxonomy and time analysis. Am J Med 2012; 125 (2): 209.e1–7.
- 28. Williams ES, Konrad TR, Linzer M, et al. Refining the measurement of physician job satisfaction: results from the Physician Worklife Survey. SGIM Career Satisfaction Study Group. Society of General Internal Medicine. Med Care 1999; 37 (11): 1140–54.
- 29. McMurray JE, Linzer M, Konrad TR, Douglas J, Shugerman R, Nelson K; SGIM Career Satisfaction Study Group. The work lives of women physicians: results from the Physician Work Life Study. J Gen Intern Med 2000; 15 (6): 372–80.
- 30. Schmoldt RA, Freeborn DK, Klevit HD. Physician burnout: recommendations for HMO managers. HMO Pract 1994; 8 (2): 58–63.
- 31. Rohland BM, Kruse GR, Rohrer JE. Validation of a single-item measure of burnout against the Maslach Burnout Inventory among physicians. Stress Health 2004; 20 (2): 75.
- 32. Dolan ED, Mohr D, Lempa M, et al. Using a single item to measure burnout in primary care staff: a psychometric evaluation. J Gen Intern Med 2015; 30 (5): 582–7.
- 33. Schreiber-Gregory DN, Jackson HM. Multicollinearity: what is it, why should we care, and how can it be controlled? SESUG Paper SD-160-2017; 2017.
- 34. Melnick ER, Dyrbye LN, Sinsky CA, et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (3): 476–87.
- 35. Gardner RL, Youssef R, Morphis B, DaCunha A, Pelland K, Cooper E. Use of chronic care management codes for Medicare beneficiaries: a missed opportunity? J Gen Intern Med 2018; 33 (11): 1892–8.
- 36. Murphy DR, Satterly T, Giardina TD, Sittig DF, Singh H. Practicing clinicians’ recommendations to reduce burden from the electronic health record inbox: a mixed-methods study. J Gen Intern Med 2019; 34 (9): 1825–32.
- 37. Erickson SM, Rockwern B, Koltov M, McLean RM; Medical Practice and Quality Committee of the American College of Physicians. Putting patients first by reducing administrative tasks in health care: a position paper of the American College of Physicians. Ann Intern Med 2017; 166 (9): 659–61.
- 38. Mehta LS, Lewis SJ, Duvernoy CS, et al. Burnout and career satisfaction among U.S. cardiologists. J Am Coll Cardiol 2019; 73 (25): 3345–8.
- 39. Lee JS, Karliner LS, Julian KA, Linzer M, Feldman MD. Change in faculty perceptions of burnout and work life in an academic general medicine clinic: a pre-post study. J Gen Intern Med 2019; 34 (10): 1973–4.
- 40. Murphy DR, Giardina TD, Satterly T, Sittig DF, Singh H. An exploration of barriers, facilitators, and suggestions for improving electronic health record inbox-related usability: a qualitative analysis. JAMA Netw Open 2019; 2 (10): e1912638.
- 41. Jerzak J, Sinsky C. EHR in-basket restructuring for improved efficiency: efficiently manage your in-basket to provide better, more timely patient care. https://edhub.ama-assn.org/steps-forward/module/2702694. Accessed March 20, 2020.
- 42. Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR: systematic review, recommendations, and novel model for health IT collaboration. Appl Clin Inform 2017; 26 (1): 12–34.
- 43. Wagner R, Koh NJ, Patow C, Newton R, Casey BR, Weiss KB. Detailed findings from the CLER National Report of Findings 2016. J Grad Med Educ 2016; 8 (2 Suppl 1): 35–54.
- 44. Monahan K, Ye C, Gould E, et al. Copy-and-paste in medical student notes: extent, temporal trends, and relationship to scholastic performance. Appl Clin Inform 2019; 10 (3): 479–86.