Abstract
Objective
To give providers a better understanding of how to use the electronic health record (EHR), improve efficiency, and reduce burnout.
Materials and Methods
All ambulatory providers were offered at least 1 one-on-one session with an “optimizer” focused on filling gaps in EHR knowledge and addressing a lack of customization. Success was measured using pre- and post-surveys that consisted of validated tools and homegrown questions. Only participants who returned both surveys were included in our calculations.
Results
Of 1155 eligible providers, 1010 participated in optimization sessions. The pre-survey return rate was 90% (1034/1155) and the post-survey return rate was 54% (541/1010). In total, 451 participants completed both surveys. After completing their optimization sessions, respondents reported a 26% improvement in mean knowledge of EHR functionality (P < .01), a 19% increase in mean efficiency in the EHR (P < .01), and a 17% decrease in mean after-hours EHR usage (P < .01). Of the 401 providers asked to rate their burnout, 32% reported feelings of burnout in the pre-survey compared to 23% in the post-survey (P < .01). Providers were also likely to recommend that colleagues participate in the program, with a Net Promoter Score of 41.
Discussion
It is possible to improve provider efficiency and feelings of burnout with a personalized optimization program. We ascribe these improvements to the one-on-one nature of our program, which provides training while also addressing the feelings of isolation many providers experience after implementation.
Conclusion
It is possible to reduce burnout in ambulatory providers with personalized retraining designed to improve efficiency and knowledge of the EHR.
Keywords: Burnout, provider efficiency, optimization, computer user training, electronic health records
INTRODUCTION
Provider burnout has risen sharply over the past decade, a rise closely linked to the increased adoption of the electronic health record (EHR) over the same period.1 This link between EHR use and burnout has been well documented in both the medical literature and the lay press.2,3 In these publications, the blame has centered on some consistent tropes: a lack of user-centered design, increased clerical work at the expense of face-to-face clinical work, and an overwhelming documentation burden that serves billing purposes more than clinical care delivery.4–8
We hypothesize that the implementation and post-implementation processes themselves inadvertently create an environment conducive to EHR-related burnout. During an EHR go-live, providers understand that they are being asked to learn a new, complex system, and they open themselves up to learning the new tools and workflows. The ever-present trainers, both in the classroom and at-the-elbow, facilitate this learning. However, after go-live, when clinic schedules ramp back up to normal and the extra training resources are gone, providers close themselves off again and return to the workflows imprinted during implementation. Even when shown a new tool, tip, or trick to improve their efficiency, the activation energy it takes to open themselves up again is too great, and they often keep using their inefficient workflows.
Additionally, whether after a phased rollout or a big bang implementation, the post-implementation period usually focuses on fixing what is broken and not on improving the efficiency of the existing build. Enterprising users may put in tickets for “enhancements”—new build to improve workflow—but that work is usually given a low priority, as the information services department focuses on more essential operations. As time goes on and the enhancement ticket queue grows, providers implicitly assume that their requests are not important, and so they learn to suffer in silence.
Recently, another important cause of EHR-related burnout has gained visibility—quality of training. A lack of adequate training has been shown to be a significant driver in poor EHR satisfaction which, in itself, has been shown to be a driver of burnout.2,9 If improved training can increase a user’s perceived usability of the EHR, that will have a direct impact on their emotional exhaustion and depersonalization.5 Additionally, when users are taught to customize and personalize their EHRs, some of the factors that lead to EHR-related stress, like difficulty navigating the system and entering data, can be ameliorated.4
Along these lines, the resource-intensive training that occurs during implementation is not designed to persist indefinitely. Thus, as new build or upgrades occur, training becomes impersonal, often distributed via slide presentations sent by e-mail or through an organization’s learning management system. Busy providers often do not have time to fully consume these materials, let alone adopt them into their workflows. As such, most users either never see the training materials or never fully understand them, often causing them to struggle with the changes or to create inefficient work-arounds. It is unsurprising that this workforce of undertrained, inefficient users who feel isolated from any possible help reports increased feelings of burnout.
OBJECTIVE
To address providers’ feelings of isolation and their lack of training, we created an improvement program designed to give personalized attention and targeted retraining. Our primary aim was to give providers a better understanding of how to use the EHR and to improve their efficiency, with a secondary aim of reducing burnout.
MATERIALS AND METHODS
Participants
Children’s Hospital of Philadelphia (CHOP) has 1653 ambulatory providers in primary care, medical and surgical specialties, and behavioral health. Providers see patients in 14 outpatient specialty care sites and 31 primary care offices. For the purposes of this program, we defined a provider as a user who saw patients on their own schedule; this included attending physicians, physician fellows, advanced practice providers (eg, nurse practitioners, physician assistants), psychologists, and other roles, such as social workers. The program was restricted to ambulatory providers, primarily because we needed to limit scope in the early phase of the project and because the expertise of the resources we had at the time was outpatient-focused. This project was undertaken as a Quality Improvement Initiative and, as such, does not constitute human subjects research.
Team structure
Initially, our team consisted of 1 physician clinical informatician and 5 “optimizers”—3 CHOP employees and 2 consultants, all of whom were EHR analysts with training experience. After early success, the team added a manager and 2 additional consultants for a total of 7 training analysts.
Program design
In an early iteration of our optimization efforts, we performed clinic observations designed to identify sources of inefficiency. A consistent finding during those observations was wide variation in providers’ knowledge of EHR functionality and in the number of user-based customizations they employed. Even the “best” EHR users had significant gaps in their knowledge and had not taken full advantage of customization opportunities. To address those findings, we designed an at-the-elbow training program to bring all providers to the same baseline of EHR knowledge.
We developed a list of EHR tools grouped by functionality to create a training agenda. We then created a checklist for our optimizers to use during their sessions with providers, which defined the baseline level of EHR knowledge. The checklist was frequently reviewed and updated to include up-to-date functionality, such as upgrades and e-health tools.
Sessions were 1 hour in length and performed outside of patient care in the provider’s office. While we attempted to do most of our sessions at our main campus, our optimizers traveled to outside offices if that was the provider’s primary location. All sessions were conducted in the EHR production environment, with the provider as the primary “doer” and the optimizer explaining features and benefits, allowing the provider to immediately begin using the customizations and tools. We recommended 2 one-on-one sessions for each provider, scheduled at least 2 weeks apart to allow the provider time to process and use the new skills. Additional sessions could be scheduled, usually focusing on further customization of the EHR to the provider’s particular needs. There was no limit to the number of sessions a provider could have.
We approached providers by academic division for specialty care and by practice for primary care. This approach gave us a number of advantages: we could use the division’s/practice’s administrative infrastructure to schedule sessions more efficiently, improve survey completion, and incorporate any division-specific needs into our teaching (eg, a surgical division needed their providers to be taught a specific way to create their pre-op notes). Our process consisted of 6 steps: (1) kick-off meeting with the division/practice; (2) pre-survey distribution; (3) observations of representative providers during clinic hours; (4) optimization sessions; (5) post-survey distribution; and (6) closing meeting to share metrics.
Measurement
Program participants completed near-identical pre- and post-surveys addressing their experience with the program. The surveys were made up of 5 subsurveys described below. The pre-survey consisted of 36 questions and the post-survey consisted of 42. The post-survey was made up of the same questions as the pre-survey with the addition of the program evaluation subsurvey and the removal of the demographic questions. Every provider received the pre-survey, but the post-survey was only sent to providers who completed at least 1 session with an optimizer. Pre-surveys were sent after the kickoff meeting with a clinic/division and the post-survey was sent 4 weeks after the final session. Only data from providers who answered both the pre- and post-surveys were used in our analysis. Their paired survey responses were analyzed using the Wilcoxon signed rank test for Likert responses and Chi square test for dichotomous responses.
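As an illustration of this analysis plan, a minimal sketch in Python (SciPy) follows; the response data are hypothetical, and this is not the authors’ actual analysis code:

```python
# Minimal sketch of the paired analyses described above.
# All response data below are hypothetical.
from scipy.stats import wilcoxon, chi2_contingency

# Paired 5-point Likert responses from the same providers, pre and post.
pre_likert = [3, 2, 4, 3, 3, 2, 4, 3, 5, 2]
post_likert = [4, 3, 4, 4, 3, 3, 5, 4, 5, 3]

# Wilcoxon signed rank test on the paired differences.
stat, p = wilcoxon(pre_likert, post_likert)
print(f"Wilcoxon signed rank: statistic={stat}, P={p:.3f}")

# Dichotomous (yes/no) responses cross-tabulated pre vs post:
# rows = pre (yes, no); columns = post (yes, no).
table = [[40, 25],
         [5, 30]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi square: chi2={chi2:.2f}, dof={dof}, P={p:.3f}")
```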
Homegrown subsurvey
We developed a 21-question survey to assess demographic information as well as providers’ self-reported knowledge and usage of EHR tools, efficiency, and time spent after-hours. The questions were mostly multiple choice but also included open-ended questions. The optimizers used the answers in this pre-survey to help personalize their approach to the provider.
Physician Work Life Study single item burnout subsurvey
Because of the length of the survey, and to align with our organization’s greater wellness improvement effort, we chose the Physician Work Life Study Single Item Burnout Survey as our burnout measure. This survey has previously been shown to correlate well with the Maslach Burnout Inventory.10 It asks respondents to identify their symptoms of burnout on a 5-point scale. The choices are: 1. “I enjoy my work. I have no symptoms of burnout”; 2. “I am under stress, and don’t always have as much energy as I did, but I don’t feel burned out”; 3. “I am definitely burning out and have one or more symptoms of burnout, eg, emotional exhaustion”; 4. “The symptoms of burnout I am experiencing won’t go away. I think about work frustrations a lot”; and 5. “I feel completely burned out. I am at the point where I may need to seek help.”
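To show how responses on this scale translate into the burnout rates reported in the Results, here is a small sketch. The cut point (a score of 3 or higher counting as “feelings of burnout”) follows the convention commonly used with this instrument and is our assumption, not a detail stated in this section:

```python
# Sketch of dichotomizing the single-item burnout measure.
# Assumption: responses of 3-5 ("definitely burning out" or worse)
# count as feelings of burnout, per the instrument's usual convention.
def is_burned_out(score: int) -> bool:
    return score >= 3

responses = [1, 2, 3, 2, 4, 1, 2, 5, 2, 3]  # hypothetical responses
rate = sum(is_burned_out(s) for s in responses) / len(responses)
print(f"Reporting feelings of burnout: {rate:.0%}")  # 40% here
```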
Stanford WellMD EHR questions subsurvey
The Stanford WellMD survey contains 7 EHR-related questions.11 Four of these questions ask about positive aspects of the EHR (“EHR tools help me communicate with patients effectively,” “I am able to quickly locate information I need,” “EHR tools help me enter orders efficiently,” “EHR tools help me coordinate care efficiently”), and 3 ask about negative aspects (“EHR work makes it hard for me to pay undivided attention to my patients during face-to-face visits,” “I have to spend too much time completing EHR tasks other team members could do,” “The amount of work I have to do in the EHR per patient is excessive”). Each of these questions asks the respondent to rate the statement on a 5-point Likert scale.
Technology acceptance model (TAM) subsurvey
The TAM has often been used to evaluate the perceived usefulness and perceived ease of use of health information technology.12 We used a modified, 7-question version which added questions about the EHR’s impact on patient quality and safety. All questions were answered on a 7-point Likert scale.
Program evaluation/net promoter score subsurvey
In our post-survey, we included questions evaluating the program itself. The 7-question subsurvey consisted of both multiple choice and free response questions. It also included a version of the Net Promoter Score (NPS)—a customer satisfaction tool13—which has been used previously in evaluating other provider efficiency programs.14 The respondent is asked “How likely are you to recommend the optimization program to a friend or colleague?” For our purposes, we modified the NPS from its usual 10-point scale to a 7-point version (scale from “very unlikely” to “very likely”). Participants giving responses of 1–4 (“very unlikely” to “neither likely nor unlikely”) are considered detractors and those giving a top score of 7 (“very likely”) are promoters. The NPS is calculated as the percentage of promoters minus the percentage of detractors and ranges from −100 to +100. Any score over 0 is considered good and over 50 is excellent.15 Comparison of NPS scores between subgroups was statistically analyzed by 2-sample t-tests.
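As a concrete sketch of this scoring (hypothetical data; the grouping follows the definitions above, with responses of 5–6 counting toward neither group):

```python
# Sketch of the modified 7-point NPS described above:
# responses 1-4 are detractors, 7 is a promoter, and
# NPS = % promoters - % detractors (range -100 to +100).
def net_promoter_score(responses):
    n = len(responses)
    promoters = sum(r == 7 for r in responses)
    detractors = sum(1 <= r <= 4 for r in responses)
    return 100.0 * (promoters - detractors) / n

scores = [7, 6, 7, 3, 5, 7, 4, 6, 7, 2]  # hypothetical responses
print(f"NPS = {net_promoter_score(scores):.0f}")  # 4 promoters, 3 detractors -> 10
```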
RESULTS
Of the 1155 eligible providers, 1010 had at least 1 session (448 of whom had more than 1) and 145 declined. The pre-survey return rate was 90% (1034/1155) and the post-survey return rate was 54% (541/1010). Among participants, 451 providers completed both surveys: 234 had 1 optimization session and 217 had more than 1. Characteristics of this provider group are described in Table 1. Although 451 providers completed both pre- and post-surveys, the survey itself evolved over time with the subsequent addition of the burnout question and the Stanford WellMD questions. As a result, only 401 completed the burnout question and 370 completed the Stanford questions.
Table 1. Characteristics of the 451 providers who completed both pre- and post-surveys
Characteristic | Number of providers |
---|---|
Provider Specialty | |
Medical Subspecialty | 248 |
Primary Care | 120 |
Surgical | 83 |
Provider Role | |
Attending Physician | 274 |
Fellow Physician | 43 |
Nurse Practitioner | 77 |
Physician Assistant | 21 |
Psychologist | 15 |
Counselor and Social Worker | 12 |
Clinical Technician | 9 |
Years of experience with the EHR | |
>10 years | 166 |
5-10 years | 178 |
3-5 years | 53 |
1-3 years | 42 |
< 1 year | 12 |
The results of a subset of the homegrown subsurvey questions are shown in Table 2. After completing the program, providers reported both a 26% increase in awareness of current EHR functionality and a 19% increase in efficiency using the EHR (mean 3.1 to 3.9 and 3.1 to 3.7, respectively; both P < .01). The percentage of providers reporting spending time in the EHR after-hours also decreased from 78% to 65% (P < .01).
Table 2. Results for a subset of the homegrown subsurvey questions

Question | Pre-survey | Post-survey | P value |
---|---|---|---|
How aware are you of current EHR functionality? | Mean: 3.1 (SD ± 0.8) | Mean: 3.9 (SD ± 0.7) | < .01a |
How efficient are you in your use of the EHR? | Mean: 3.1 (SD ± 0.9) | Mean: 3.7 (SD ± 0.7) | < .01a |
Do you spend time in the EHR after-hours (defined as more than 45 minutes after you are done seeing your last scheduled patient)? (Yes/No) | Yes: 78% | Yes: 65% | < .01b |

SD = Standard Deviation.
EHR functionality and efficiency questions were rated on a 5-point Likert scale (1 = Not very, 2 = Somewhat, 3 = Average, 4 = Very good, 5 = Excellent).
a Differences between 2 paired samples were statistically significant at P < .05 by the Wilcoxon signed rank test; post-survey results were significantly shifted greater than 0.
b Chi square result was statistically significant at P < .05.
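The percentage changes quoted in the text and Abstract are consistent with relative changes in the values above; a quick arithmetic check (reading them as relative changes is our interpretation):

```python
# Reproducing the reported percentages as relative changes.
def relative_change(pre, post):
    return (post - pre) / pre

print(f"Awareness:   {relative_change(3.1, 3.9):+.0%}")   # ~ +26%
print(f"Efficiency:  {relative_change(3.1, 3.7):+.0%}")   # ~ +19%
print(f"After-hours: {relative_change(0.78, 0.65):+.0%}") # ~ -17%
```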
The TAM subsurvey results are summarized in Table 3. After completing the program, providers reported statistically significantly higher scores in 6 of the 7 dimensions; the exception was the impact on patient safety, which was unchanged. Similarly, the Stanford WellMD EHR questions subsurvey showed statistically significant increases in ratings of the positive aspects of the EHR and statistically significant decreases in ratings of the negative aspects (Table 4). There were no statistically significant differences in the TAM or Stanford WellMD EHR subsurvey questions when comparing subgroups of providers who participated in 1 vs more than 1 session.
Table 3. Technology acceptance model (TAM) subsurvey results

Question | Pre-survey mean ± SD | Post-survey mean ± SD | P value |
---|---|---|---|
I find the EHR easy to use. | 4.9 ± 1.3 | 5.3 ± 1.2 | < .01a |
My usage of the EHR improves my productivity. | 4.4 ± 1.3 | 5.0 ± 1.3 | < .01a |
I can accomplish tasks quickly in the EHR. | 4.6 ± 1.3 | 5.2 ± 1.1 | < .01a |
My usage of the EHR fits into my workflow. | 4.6 ± 1.3 | 5.1 ± 1.2 | < .01a |
My usage of the EHR has a positive impact on patient safety. | 5.1 ± 1.2 | 5.1 ± 1.2 | .73 |
My usage of the EHR improves the quality of patient care. | 4.8 ± 1.3 | 5.0 ± 1.2 | .03a |
Overall, I am satisfied with my usage of the EHR. | 4.6 ± 1.4 | 5.2 ± 1.2 | < .01a |

SD = Standard Deviation.
Answers on a 7-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Somewhat Disagree, 4 = Neither Agree nor Disagree, 5 = Somewhat Agree, 6 = Agree, 7 = Strongly Agree). Differences between 2 paired samples were statistically significant at P < .05 by the Wilcoxon signed rank test.
a Post-survey results were significantly shifted greater than 0.
Table 4. Stanford WellMD EHR questions subsurvey results

Question | Pre-survey mean ± SD | Post-survey mean ± SD | P value |
---|---|---|---|
EHR tools help me communicate with patients efficiently. | 3.3 ± 0.9 | 3.5 ± 1.0 | < .01a |
I am able to quickly locate information I need. | 3.7 ± 0.7 | 3.9 ± 0.7 | < .01a |
EHR tools help me enter orders efficiently. | 3.4 ± 0.9 | 3.8 ± 0.8 | < .01a |
EHR tools help me coordinate care efficiently. | 3.4 ± 0.8 | 3.7 ± 0.8 | < .01a |
The EHR makes it hard for me to pay undivided attention to my patients during face-to-face visits. | 3.2 ± 1.1 | 2.8 ± 1.1 | < .01b |
I have to spend too much time completing EHR tasks other team members could do. | 3.0 ± 1.1 | 2.8 ± 1.0 | < .01b |
The amount of work I have to do in the EHR per patient is excessive. | 3.3 ± 1.0 | 2.8 ± 0.9 | < .01b |

SD = Standard Deviation.
Answers on a 5-point Likert scale (1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Always). Differences between 2 paired samples were statistically significant at P < .05 by the Wilcoxon signed rank test.
a Post-survey results were significantly shifted greater than 0.
b Post-survey results were significantly shifted less than 0.
Among the 401 participants who completed the Physician Work Life Study Single Item Burnout Survey question, 32% reported feelings of burnout in the pre-survey compared to 23% in the post-survey. Twenty-five percent of participants had a lower level of burnout in the post-survey compared to 11% who had a higher level, while 64% had no change in burnout rating. The mean response decreased from 2.3 to 2.1, and the Wilcoxon signed rank test showed a significant decrease in burnout (P < .01).
The likelihood of participants recommending the program to colleagues was very high. Based on the 7-point Likert scale, 51.2% of the 451 participants were promoters of the program, while only 10.2% were detractors, yielding an NPS of 41. Providers who participated in 1 session had an NPS of 28 (41.5% promoters, 13.2% detractors), while those who participated in more than 1 session had an NPS of 46 (62.4% promoters, 16.5% detractors). A 2-sample t-test comparing these NPS responses showed this was a significant difference (P < .01).
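A plausible sketch of the subgroup comparison is below; the paper does not state how individual responses were coded for the t-test, so the per-respondent coding of −100/0/+100 is our assumption:

```python
# Sketch of comparing NPS between session subgroups with a 2-sample t-test.
# Assumption: each respondent is coded -100 (detractor), 0 (passive),
# or +100 (promoter) before testing. Data are hypothetical.
from scipy.stats import ttest_ind

def code_nps(response: int) -> int:
    if response == 7:
        return 100   # promoter
    if response <= 4:
        return -100  # detractor
    return 0         # passive (5 or 6)

one_session   = [code_nps(r) for r in [7, 5, 3, 6, 7, 4, 5]]  # hypothetical
multi_session = [code_nps(r) for r in [7, 7, 6, 7, 5, 7, 3]]  # hypothetical
t, p = ttest_ind(one_session, multi_session)
print(f"t = {t:.2f}, P = {p:.3f}")
```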
DISCUSSION
EHRs are extraordinarily complex pieces of software that are difficult to learn. We discovered that one-on-one training outside of the clinic setting was very effective at achieving our intended result: improving providers’ knowledge of EHR functionality and helping them customize their workspaces. Our providers reported that they felt more efficient using the EHR and used the EHR less after hours. This is supported by their reports that they could find information, write orders, and perform tasks more quickly, suggesting that they were able not only to learn about the tools but to adopt them into their workflow.
One of the most important decisions we made was to conduct individual, one-on-one sessions at-the-elbow instead of in small or large group classes. In an early iteration of our program, we did our teaching in a classroom in front of computers with a teacher and 2–3 trainers in support. The first class, in a division of 6 providers, was a success; however, the next class, with a division of 22 providers, was less productive. As is often the case in a classroom setting, our trainers found it difficult to engage the entire class. Moreover, individuals often feel inhibited from asking questions in a group setting, and we could not personalize the content further, which may have contributed to the providers’ feelings of isolation. After that experience, we concluded that even though one-on-one training was resource-intensive, it was the method most likely to provide a uniform level of attention and personalized education for each provider.
We also believe that an additional factor in our success was performing our optimization sessions outside of clinical care. Asking a provider to learn something new while trying to see patients in a busy clinic is rarely conducive to productive learning. With the stress of staying on time removed, along with existing EHR-related stressors like clerical work and documentation, providers could focus on learning and were more likely to adopt new skills into their normal workflow.
Although we refer to our approach as “resource-intensive,” those resources were almost entirely personnel, with little to no other expense. During their sessions, our optimizers used either the provider’s computer or their own organization-issued computer, which all of our information services employees receive upon hiring. It is important to note that consultants are much more expensive than internal employees; our use of them was precipitated by an accelerated timeline and an attempt to avoid the lag time of ramping up new employees.
Our program design was influenced by others whose successes have previously been documented, although we were not able to adopt all of their practices. We liked the idea of Robinson and Kersey’s program at Kaiser Permanente,16 which consisted of 3-day training seminars; unfortunately, we felt that was too much time away from clinical duties for our organization to allow. UC Health’s “sprints”14 included one-on-one training, but they also performed build for clinics, which would have required more resources than we were allotted. Our program was most similar to Stanford’s Home4Dinner program17; however, because our primary goal was to bring all providers to the same baseline of knowledge and customization, we did not observe every provider or create individualized learning plans. As we move on to the next phase of our program, we are hopeful that we can take a more individualized approach for those who need more help.
We found that providers not only felt more efficient but also felt better about the EHR in general. They indicated that the EHR was easier to use and fit their workflow better, and they were more satisfied with the EHR. In addition, providers felt that they could spend more face-to-face time with their patients and communicate with them more efficiently. This perceived improvement in a number of drivers of EHR-related burnout likely contributed to the improved burnout scores in our program participants. Though we did not attempt to address user satisfaction with the EHR directly, we found that it increased as well. Given our focus on improving providers’ understanding of how to use the EHR better, it was not unexpected that providers would have a more favorable impression of the EHR overall. Of note, the one question in the TAM subsurvey that did not show a significant difference was the one that asked about safety. This may be because our optimization efforts were not focused on improving safety, or because the safety question already had the highest pre-survey score of any of the TAM questions.
What we did not expect was the finding that providers felt their team-based use of the EHR improved. Providers felt that they had to do fewer tasks in the EHR that other team members could do and that care coordination was easier. Although our program did not work with any roles other than providers and we did not address any team-based workflow issues, several customization tips may have contributed to this finding, among them improved use of the EHR’s mobile apps and a better understanding of in-basket functionality, including best practices in using team pools. We also believe the improved knowledge and sense of efficiency created a feeling of empowerment and confidence in providers’ use of the EHR, which reduced frustration and feelings of helplessness. Additionally, a number of providers have reached out repeatedly to their optimizers for help or advice, suggesting that having a direct contact in the information services department may decrease the feeling of isolation, perhaps to such a degree that providers felt better about their overall practice of medicine and not just the EHR.
In our evaluation of the program itself, providers found it beneficial and were likely to recommend participation, based on the overall NPS of 41. Also, the more sessions providers had, the more likely they were to recommend the program. This correlation between engagement and satisfaction emphasizes the need for us to continue to foster relationships with our users as we move forward.
Following the success of our program, we plan to expand our personalized training within our institution. We have already extended the one-on-one training beyond ambulatory providers, offering a modified version of the program to ambulatory nurses and medical assistants. Next, we plan to include inpatient providers and nurses as well as other nonprovider roles, such as therapists and care managers. Longer term, we are looking for opportunities to continue to re-engage with providers on a personal level. We have developed a process for a user to request an optimization “consult” in a number of categories (eg, documentation, ordering, in basket, customization) or a general optimization session. We also continue to work with hospital leadership to identify staff who are struggling and may benefit from an intensive optimization program over several one-on-one sessions.
The success of the personalized approach to training has fostered the next iteration of our program which focuses on team-based improvements. These “optimization sprints”—inspired by UC Health14—are concentrated on 1 ambulatory division/clinic at a time and include customized training and new build that is intended to benefit the entire care team. Our team spends approximately 3 months during the engagement in the stages of discovery, build, and training. As of this writing, we have completed optimization sprints for 6 divisions in the current year and plan to complete another 7 next year. The early success of this new program has generated interest in creating additional teams with the ultimate goal of offering annual optimization sprints to all divisions.
One of the biggest limitations was our inability to measure time objectively in any of our metrics. Ideally, we would use quantitative measurements of time to evaluate the reports of improved efficiency. While vendor-neutral measurement standards are being established, we had a number of challenges using EHR time as a longitudinal metric.18 These included our vendor changing their algorithms for calculating time per event in the EHR and their unrealistic definitions of “after-hours.”19 We continue to work with our vendor to try to incorporate the use of time in our evaluations.
Another limitation is that a number of providers completed the pre-survey but did not participate in the program, and a number of providers did not complete the post-survey after participation. We analyzed the pre-survey data to compare providers who had 0, 1, and more than 1 session and found no statistically significant differences in responses to the homegrown questions in Table 2. We also looked at burnout and EHR satisfaction, and there were no significant differences between these groups’ survey responses (data not shown). Future work should investigate why providers do not participate and identify potential barriers to participation and post-survey completion.
Other considerations that might limit the generalizability of our study are that it was conducted at a single institution, only in the ambulatory setting, and without a control group. Additionally, not all of our participants were given the burnout subsurvey, which could affect our results.
CONCLUSION
In this project, we demonstrated that it is possible to reduce burnout in ambulatory providers with personalized retraining designed to improve efficiency and knowledge of the EHR. In addition, we found that providers’ overall satisfaction with the EHR improved and their frustration with a lack of clinic teamwork decreased. We believe that our program not only improved the amount and quality of training providers received but also addressed the feelings of isolation they felt after implementation. We encourage other organizations to embrace this type of resource-intensive program to battle the isolation and depersonalization caused by EHR implementation and usage as part of their strategy to combat burnout.
FUNDING
This work received no funding.
AUTHOR CONTRIBUTIONS
EML conceptualized and designed the program, assisted in the survey design, assisted in the measurement design, drafted the initial manuscript, and reviewed and revised the manuscript.
LHU designed the surveys, assisted in the measurement design, assisted in data collection, carried out the analyses, and reviewed and revised the manuscript.
MFR assisted in the conceptualization and design of the program, assisted in the survey creation, assisted in the measurement design, and reviewed and revised the manuscript.
LW assisted in the conceptualization and design of the program, assisted in the survey creation, and reviewed and revised the manuscript.
CY assisted in the survey design, coordinated and supervised data collection, assisted in data analysis, and reviewed and revised the manuscript.
SMG assisted in the survey design, assisted in the measurement design, and reviewed and revised the manuscript.
All authors approved the final manuscript as submitted.
ACKNOWLEDGMENTS
The authors would like to thank Dean Karavite for his assistance in crafting our surveys; Miriam Stewart for her guidance on burnout and wellness measures to include in our surveys; Mitchell Maltenfort for his help on statistical analysis; our optimizers: Josie Malak, Daryl Douglas, Myles Tabong, Tim Brick, Fe Fernando, Agha Khan, and Christina LeBron; and our CHOP information services leaders for their ongoing support and guidance: Kisha Hortman Hawthorne, Bimal Desai, Kimberly Burress, Cherie Pardue, and Scott Aikey.
CONFLICT OF INTEREST STATEMENT
None declared.
REFERENCES
1. Shanafelt TD, Boone S, Tan L, et al. Burnout and satisfaction with work-life balance among US physicians relative to the general US population. Arch Intern Med 2012; 172 (18): 1377–85.
2. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (2): 106–14.
3. Gawande A. Why doctors hate their computers. The New Yorker. November 12, 2018. https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers. Accessed February 28, 2019.
4. Kroth PJ, Morioka-Douglas N, Veres S, et al. Association of electronic health record design and use factors with clinician stress and burnout. JAMA Netw Open 2019; 2 (8): e199609.
5. Melnick ER, Dyrbye LN, Sinsky CA, et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (3): 476–87.
6. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med 2016; 165 (11): 753–60.
7. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
8. Downing NL, Bates DW, Longhurst CA. Physician burnout in the electronic health record era: are we ignoring the real cause? Ann Intern Med 2018. http://annals.org/article.aspx?doi=10.7326/M18-0139.
9. Longhurst CA, Davis T, Maneker A Jr, et al.; on behalf of the Arch Collaborative. Local investment in training drives electronic health record user satisfaction. Appl Clin Inform 2019; 10 (2): 331–5.
10. Rohland BM, Kruse GR, Rohrer JE. Validation of a single-item measure of burnout against the Maslach Burnout Inventory among physicians. Stress Health 2004; 20 (2): 75–9.
11. Physician Wellness Survey. WellMD. 2018. https://wellmd.stanford.edu/center1/survey.html. Accessed November 9, 2018.
12. Holden RJ, Karsh B-T. The technology acceptance model: its past and its future in health care. J Biomed Inform 2010; 43 (1): 159–72.
13. Reichheld FF. The one number you need to grow. Harv Bus Rev 2003; 81 (12): 46–54.
14. Sieja A, Markley K, Pell J, et al. Optimization sprints: improving clinician satisfaction and teamwork by rapidly reducing electronic health record burden. Mayo Clin Proc 2019; 94 (5): 793–802.
15. Reichheld FF, Markey R. The Ultimate Question 2.0: How Net Promoter Companies Thrive in a Customer-Driven World. Boston: Harvard Business Press; 2011.
16. Robinson KE, Kersey JA. Novel electronic health record (EHR) education intervention in large healthcare organization improves quality, efficiency, time, and impact on burnout. Medicine (Baltimore) 2018; 97 (38): e12319.
17. Stevens LA, DiAngi YT, Schremp JD, et al. Designing an individualized EHR learning plan for providers. Appl Clin Inform 2017; 8 (3): 924–35.
18. Sinsky CA, Rule A, Cohen G, et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (4): 639–43.
19. Hron JD, Lourie E. Have you got the time? Challenges using vendor electronic health record metrics of provider efficiency. J Am Med Inform Assoc 2020; 27 (4): 644–6.