Published in final edited form as: Contemp Clin Trials. 2021 May 25;106:106455. doi: 10.1016/j.cct.2021.106455

Design and implementation of a cluster randomized trial measuring benefits of medical scribes in the VA

Paul R Shafer a,b,*, Melissa M Garrido a,b, Elsa Pearson a,b, Sivagaminathan Palani a,b, Alex Woodruff a,b, Amanda M Lyn c,d, Katherine M Williams d, Susan R Kirsh d, Steven D Pizer a,b

Abstract

Background:

Medical scribes are trained professionals who assist health care providers by administratively expediting patient encounters. Section 507 of the MISSION Act of 2018 mandated a 2-year study of medical scribes in VA Medical Centers (VAMC). This study began in 2020 in the emergency departments and specialty clinics of 12 randomly selected VAMCs across the country, in which 48 scribes are being deployed.

Methods:

We are using a cluster randomized trial to assess the effects of medical scribes on productivity (visits and relative value units [RVUs]), wait times, and patient satisfaction in selected specialties within the VA that traditionally have high wait times. Scribes will be assigned to emergency departments and/or specialty clinics (cardiology, orthopedics) in VAMCs randomized into the intervention. Remaining sites that expressed interest but were not randomized to the intervention will be used as a comparison group.

Results:

Process measures from early implementation of the trial indicate that contracting may hold an advantage over direct hiring in terms of reaching staffing targets, although onboarding contractor scribes has taken somewhat longer (from job posting to start date).

Conclusions:

Our evaluation findings will provide insight into whether scribes can increase provider productivity and decrease wait times for high demand specialties in the VA without adversely affecting patient satisfaction.

Implications:

Conducted within VHA, a learning health care system, this trial has great potential to increase our understanding of the effects of scribes while also informing a real policy problem: high wait times and provider administrative burden.

Keywords: Veterans, MISSION Act, Medical scribes, Productivity, Wait times

1. Introduction

Section 507 of the VA MISSION Act of 2018, formally the John S. McCain III, Daniel K. Akaka, and Samuel R. Johnson VA Maintaining Internal Systems and Strengthening Integrated Outside Networks Act of 2018, mandated a 2-year pilot study of the introduction of medical scribes in VA Medical Centers (VAMC), focused on emergency departments (ED) and high wait time specialty clinics [1]. Medical scribes are trained but not clinically licensed professionals who assist health care providers by administratively expediting an episode of care. Prior to this study, scribes had rarely been employed by the Veterans Health Administration (VHA). According to the law, scribes will assist providers in navigating and documenting patient information in the electronic health record (Appendix A). Our objective was to build a randomized evaluation around the requirements of the pilot to better understand how the introduction of scribes affects provider productivity and patient satisfaction.

Outside of the VHA, evidence suggests that scribes may increase provider productivity and satisfaction and decrease provider time spent on documentation without affecting patient satisfaction [2,3]. A randomized trial in a family medicine clinic in Northern California found no effect on patient satisfaction and significant improvements in physician satisfaction and productivity [4]. In a study where seven scribes were introduced to an urban safety net primary care clinic, face-to-face time increased by 57% and computer time associated with visits decreased by 39% [5]. There were no changes in total visit or provider cycle time, but work relative value units (wRVUs) and patients per hour increased. Patients were less likely to report being “very comfortable” with the number of people in the exam room following the introduction of scribes. Another recent study assessed the effect of scribes in a suburban, non-academic, community ED, finding consistent declines in wait times, visit times, and total length of stay coupled with improvements in patient and provider satisfaction [6]. Similarly, wRVUs and patients per hour increased while chart review and post-visit documentation time fell. Studies in a tertiary academic ED corroborated others’ findings of decreased provider documentation burden and an increase in RVUs for adult patients, with mixed findings for length of stay [7–10]. Outside of the US, a multicenter randomized trial in five Australian EDs found a 16% increase in physician productivity and a 19-minute reduction in median length of stay [11]. However, none of this evidence was generated in a VA setting, and the underlying studies were generally much smaller in scale than this Congressionally mandated study [12]. Process, staffing, and patient mix differences could yield different results, and several of the studies used an uncontrolled pre-post design, limiting causal interpretation of their results.

This 2-year study officially began on June 30, 2020 in the EDs and specialty clinics of 12 VAMCs across the country, in which 48 scribes are being deployed. The goal of this evaluation is to understand how the introduction of scribes affects provider efficiency, wait times, daily patient volume, and patient satisfaction.

2. Methods

2.1. Study setting

Section 507 of the MISSION Act specified that at least four participating medical centers must be located in urban and rural areas, along with two in underserved areas with a need for increased access [1]. The VA Office of Veterans Access to Care (OVAC) developed an initial list of 32 VAMCs interested in participating in the study, which were then categorized based on rurality and location in an underserved area. OVAC is actively exploring additional metrics to measure access to care beyond wait times, but for the purposes of this study, underserved status was determined based on high specialty care wait times for new patients following Congressionally defined standards [14]. VAMCs were sorted into categories by location (urban, rural, underserved) and specialty (ED, specialty care), according to the requirements of the MISSION Act, leadership preferences, and site capabilities before randomization. To represent specialty care while minimizing heterogeneity in the study, leadership chose to focus on cardiology and orthopedics. For randomization, VAMCs were stratified into five categories: 1) urban ED, 2) rural ED, 3) urban specialty care, 4) rural specialty care, and 5) underserved specialty care. We used the sample function in R to randomly order the VAMCs within each category and then selected the required number of sites from each category for the intervention group; a sketch of this approach is shown below. The VAMCs not randomly selected within each stratum were assigned to the control group. This software-assisted random selection was used to ensure allocation concealment given the Congressionally mandated framework of the trial.
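As an illustration, this stratified assignment can be sketched in a few lines of R. The site names, strata, counts per stratum, and seed below are hypothetical rather than the values used in the actual randomization.

```r
# Minimal sketch of the stratified randomization described above; site
# names, strata, counts per stratum, and the seed are hypothetical.
set.seed(507)  # arbitrary seed for reproducibility of the example

vamcs <- data.frame(
  station = c("Site A", "Site B", "Site C", "Site D", "Site E", "Site F"),
  stratum = c("urban_ed", "urban_ed", "urban_ed",
              "rural_sc", "rural_sc", "rural_sc"),
  stringsAsFactors = FALSE
)

# Number of intervention sites to draw from each stratum (hypothetical)
n_intervention <- c(urban_ed = 1, rural_sc = 1)

# Randomly order the sites within each stratum using sample(); the first
# n_intervention sites become the intervention group, the next site is the
# backup, and the remainder serve as comparison sites
assign_stratum <- function(df) {
  df <- df[sample(nrow(df)), ]
  k  <- n_intervention[[df$stratum[1]]]
  df$group <- c(rep("intervention", k), "backup",
                rep("comparison", nrow(df) - k - 1))
  df
}

assigned <- do.call(rbind, lapply(split(vamcs, vamcs$stratum), assign_stratum))
print(assigned)
```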

2.2. Study design

We are using a cluster randomized trial to assess the effects of medical scribes on productivity, wait times, and patient satisfaction, incorporating both within-facility controls where appropriate (these are more difficult in the ED setting, where providers float from patient to patient) and comparisons across intervention and comparison sites. Using both within- and between-facility variation is important because scribes may affect not only productivity for their assigned provider but also have spillover effects for the clinic. For providers in specialty clinics, the use of scribes will be primarily focused on outpatient visits; however, scribes may also round on the inpatient service as needed or desired. Provider productivity, patient volume, wait times, and patient satisfaction from the treated sites, using the measures noted below, will be compared to baseline (pre-intervention) data as well as data from comparison sites. We will assess the impact of scribes in different clinical settings, including rural versus urban, specialty clinic versus ED, and underserved areas. A schedule of enrollment, interventions, and assessments is shown in Table 1. This study is being coordinated by the Partnered Evidence-based Policy Resource Center (PEPReC) at the VA Boston Healthcare System in collaboration with OVAC, the VA Collaborative Evaluation Center (VACE), and participating VAMCs across the US.

Table 1.

Schedule of enrollment, interventions, and assessments.


As implementation of the scribes pilot program was Congressionally mandated, it is part of VA’s required operations and therefore exempt from Institutional Review Board review. A memorandum from OVAC (Appendix D) directs PEPReC to design and execute this evaluation using secondary data that are already collected by VHA in its routine clinical operations and human resources functions. The study was registered on ClinicalTrials.gov (NCT04154462) in November 2019 as hiring of scribes began (Appendix E). The only deviation from our ClinicalTrials.gov record is that hiring had been expected to be completed by March 2020, allowing study implementation to begin, but the COVID-19 pandemic resulted in a several-month delay.

The randomization of VAMCs was finalized in April 2019, with hiring of scribes beginning in November 2019 and the 2-year study period officially beginning on June 30, 2020, having been delayed several months by the onset of the COVID-19 pandemic. The MISSION Act also specified that 30% of the scribes were to be assigned to emergency departments and the other 70% to specialty care. New Jersey requested to have its scribes split between the emergency department and specialty care. Two other sites (Clarksburg, West Virginia and Hampton, Virginia) requested to split their scribes within specialty care, across cardiology and orthopedics. The randomization and these requests should yield 31 scribes in specialty care (65%) and 17 in emergency departments (35%), close to the targeted split specified in the Act. Five backup sites were also selected, based on being next in the randomization order for a given location-specialty combination, should one or more chosen sites fail to participate. The remaining VAMCs that expressed interest in the study are being used as a comparison group, including any backup sites that do not transition into the intervention group during the study. The mix of locations and specialties randomly assigned to implement scribes, as well as the backup and comparison sites, is shown in Table 2.

Table 2.

Study locations and specialties.

Station name (number) by location, specialty, and randomization group:

Urban, emergency department
  Intervention: New Jersey (561)a; Temple, TX (674); Southern Arizona (678); San Antonio, TX (671)
  Backup: Indianapolis, IN (583)
  Comparison: St. Louis, MO (657); Reno, NV (661); Las Vegas (654); Salt Lake City, UT (660); Minneapolis, MN (618); Salisbury, NC (659); Greater Los Angeles, CA (603); Louisville, KY (603); Erie, PA (562); Hampton, VA (590)

Urban, specialty care
  Intervention: New Jersey (561)a; Louisville, KY (603)
  Backup: Northport, NY (632)
  Comparison: Long Beach, CA (600); St. Louis, MO (657); Minneapolis, MN (618); Southern Arizona (678); Greater Los Angeles, CA (603); Salt Lake City, UT (660); Cincinnati, OH (539); Salisbury, NC (659)

Rural, emergency department
  Intervention: Togus, ME (402)
  Backup: St. Cloud, MN (656)
  Comparison: Clarksburg, WV (540); New Mexico (501); Fargo, ND (437); Salem, VA (658)

Rural, specialty care
  Intervention: Montana (436); Fargo, ND (437); Clarksburg, WV (540); Manchester, NH (608)
  Backup: Salem, VA (658)
  Comparison: St. Cloud, MN (656)

Underserved, specialty care
  Intervention: Hampton, VA (590); Oklahoma City, OK (635)
  Backup: Columbus, OH (757)
  Comparison: Durham, NC (558); Montana (436); Asheville, NC (637)

Backup sites will be used as comparison sites unless an intervention site fails to hire and implement scribes.

a After randomization, New Jersey requested to split scribes between their emergency and specialty care departments.

2.3. Intervention

Four medical scribes are to be assigned to each of the VAMC sites randomized into the intervention group with the VA hiring half as new employees and the remaining half as contractors. Two scribes, ideally one VA employee and one contractor, will be assigned to each participating provider, with two providers participating in the study at each facility. The VHA is relying on providers to volunteer for pairing with a scribe, which limits generalizability as it has in other studies [13]. Adoption of scribes is voluntary in this study but likely would not be in a broader implementation, which has implications for scaling any effects identified. We are also interested in whether the mode of hiring, as VA employees or as contractors, plays any role in scribes’ effectiveness.

The goal is to keep the provider-scribe pairs consistent throughout the study to the extent possible. Scribes will work with other Licensed Independent Practitioners (LIPs) in their assigned specialty if their provider partner is not available. Pairing two scribes with each provider allows the provider to rotate between scribes for face-to-face patient encounters while the other scribe has time to finish their notes on the prior patient. Clinical notes taken by scribes must be tagged by each scribe with their name, date, and time and be approved by the provider before becoming viewable in the electronic health record. This will also allow us to identify which visits involved scribes for the purposes of evaluating the program. We have included a program training manual and policy statement describing how scribes will assist in documentation of patient encounters (Appendix B and C).

2.4. Outcomes

For our outcome evaluation, we will measure the impact of medical scribes on provider efficiency, wait times, patient volume, and patient satisfaction, using variation between facilities and providers with and without scribes as well as pre-intervention baseline outcome data across intervention and comparison sites. As mandated by law, we will also study differences in provider efficiency between providers with VA scribes and those with contracted scribes to the extent possible. Our ability to assess the latter will depend on how VA and contracted scribes are ultimately distributed across facilities and physicians.

We will use several data sources to develop the outcome measures described in Table 3, with detailed descriptions contained in Appendix F. We will use appointment, visit, and procedure data from the VHA Corporate Data Warehouse (CDW) to capture wait times for patients, visits, and services performed by physicians. We will use the Personnel and Accounting Integrated Data (PAID) database from the VHA Workforce Management and Consulting Office to obtain data on work hours during the study period, capturing full-time equivalents (FTE) for physicians and providers in each pay period included in the study. Patient satisfaction will be assessed using survey data from Veterans Signals (V-Signals), a nationally standardized tool aimed at better understanding the Veteran experience and satisfaction with the care received at VAMCs. V-Signals is an ongoing email-based survey conducted by the Veterans Experience Office (VEO), with over 3 million lifetime responses and a historical response rate around 20%. Any Veteran receiving outpatient care within the last week is eligible to receive a survey, and the survey remains open for 2 weeks after invitation. These surveys will also be supplemented by a qualitative evaluation of the pilot being conducted by VACE. We are also using data from the VHA Planning Systems Support Group (PSSG) and Support Service Center (VSSC) databases, American Community Survey, Centers for Medicare and Medicaid Services, and Zillow to develop the intervention measures and control variables summarized in Table 4 and described in detail in Appendix F.

Table 3.

Outcome measures.

Measures are listed by domain, with the measurement level in parentheses.

Provider efficiency
  Work relative value-based provider efficiency (facility-pay period; provider-pay period)
  Visit-based provider efficiency (facility-pay period; provider-pay period)
  Daily visit-based provider efficiency (facility-pay period; provider-pay period; scaled by FTE days)
Wait times
  Days to completed consult (facility-pay period; provider-pay period)
  Days to scheduled consult (facility-pay period; provider-pay period)
Patient volume
  Unique patient volume per day (facility-pay period; provider-pay period)
Patient satisfaction (facility-pay period)
  “It was easy to get my appointment”
  “After I checked in for my appointment, I knew what to expect”
  “I got my appointment on a date/time that worked for me”
  “I trust this clinic for my healthcare needs”
  “My provider listened carefully to me”
  “My provider explained things in a way that I could understand”
  “I am satisfied with the service I received from the VA clinic”

Table 4.

Intervention measures and control variables.

Intervention measures (facility-pay period; source: Personnel and Accounting Integrated Data)
  Scribe FTEs per 1000 patients (provider efficiency, wait times, and patient volume models)
  Physician FTEs per 1000 patients (wait times models)
  Quartiles of scribe FTEs per physician FTE (patient satisfaction models)

Control variables (facility-year)
  Percentage of enrollees over age 65 (derived from county level; Planning Systems Support Group)
  Percentage of enrollees under age 50 (derived from county level; Planning Systems Support Group)
  Percentage of low priority status enrollees (7 and 8) (derived from county level; Planning Systems Support Group)
  Insured rate for 18- to 64-year-old males (derived from county level; American Community Survey)
  Median household income (derived from county level; American Community Survey)
  Veteran unemployment rate (derived from county level; American Community Survey)
  Home prices (derived from county level; Zillow Home Value Index)
  Medicare Advantage (MA) penetration (derived from county level; Centers for Medicare and Medicaid Services)
  Average patient risk scores (Support Service Center)
  Average enrollee driving distancea (Planning Systems Support Group)
  Average community care wait times for specialty carea (Corporate Data Warehouse)

a Applies to specialty care only.

We will also collect process measures related to implementation, focusing on the trajectory of hiring of VA versus contract scribes and the average time to hire. These data are based on the ‘Entrance on Duty’ (EOD) date, when the scribe has completed all the necessary steps to begin working. We will describe achieved and projected hiring trends, including scribes who are currently going through the onboarding process but have not yet reached their EOD date. Projected EOD dates were calculated by taking the most conservative estimate of onboarding time available (the time taken by the scribe who took the longest to reach their EOD date) and adding it to the date each scribe began onboarding.
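As an illustration, this projection logic can be expressed in a few lines of R; the scribe records and dates below are hypothetical.

```r
# Sketch of the projected EOD calculation (hypothetical dates; the actual
# analysis uses each scribe's real onboarding records)
onboarding <- data.frame(
  scribe     = c("scribe_1", "scribe_2", "scribe_3"),
  start_date = as.Date(c("2020-01-06", "2020-02-03", "2020-03-02")),
  eod_date   = as.Date(c("2020-04-20", "2020-06-15", NA))  # NA = still onboarding
)

# Most conservative estimate: the longest observed onboarding duration to date
max_days <- max(onboarding$eod_date - onboarding$start_date, na.rm = TRUE)

# Project EOD dates for scribes still in the pipeline by adding that
# duration to the date they began onboarding
onboarding$projected_eod <- onboarding$eod_date
pending <- is.na(onboarding$eod_date)
onboarding$projected_eod[pending] <- onboarding$start_date[pending] + max_days
print(onboarding)
```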

2.5. Sample size

We conducted power analyses to determine the minimum detectable effect size for each outcome at 80% power, which will be useful for putting our final results into context. We conservatively assumed 24 pay periods (1 year) of intervention data, given that hiring and training delays could result in less than two full years of implementation, with 24 providers (two scribes per provider) at 12 intervention sites and 48 providers at comparison sites. We used the observed standard deviation for each outcome in the baseline period, averaged across specialties, as our assumption in the power analysis. For provider efficiency and patient volume, the number of provider-pay periods corresponds to a sample size of 576 in the intervention group and 1152 in the comparison group. As wait times are only measured for specialty care, the number of provider-pay periods corresponds to a sample size of 384 in the intervention group and 768 in the comparison group. For patient satisfaction, we made additional assumptions about patient volume and response rate to project the number of responses per provider-pay period. Based on baseline patient volume and a conservative 15% response rate (historically around 20%), we project 12 completed patient satisfaction surveys per provider-pay period, which yields sample sizes of 6912 in the intervention group and 13,824 in the comparison group.

Under these assumptions, we would have 80% power to detect a 25.85 increase in wRVUs per physician FTE and a 13.63 increase in visits per physician FTE related to the introduction of scribes. Another study found a 95 wRVU average increase per physician hour in a community ED, much larger than our minimum detectable effect sizes if rescaled appropriately [6]. We would be powered to detect a 5.82-day decrease in wait times for specialty care and a 0.55 increase in unique patients seen per day per physician FTE. For context, a prior analysis of 2012 VHA data found an average of 28.8 days to completion for specialty care consults [13]. A prior study examining scribes in primary care found an increase of 0.16 patients per hour [5]. If we assume an 8-hour workday and that these magnitudes are comparable for specialty and ED care, we would have power to detect a considerably smaller effect size. Our minimum detectable effect size for patient satisfaction, using “I am satisfied with the service I received from the VA clinic”, is an increase of approximately 0.82%, small enough to detect any meaningful difference in patient satisfaction. Assessment of differences in the effectiveness of VA-hired versus contract-hired scribes is Congressionally mandated, but the study is not powered for this analysis. We will rely on descriptive and qualitative analysis to explore whether there are any noteworthy differences by mode of hiring that might inform how such an intervention would be scaled nationally (Table 5; a sketch of the underlying calculation follows the table).

Table 5.

Minimum detectable effect sizes at 80% power.

Values are shown as comparison / intervention.

wRVU-based productivity: providers 48 / 24; cluster size 24 / 24; sample size 1152 / 576; SD 163; ICC 0.01; MDE +25.85
Visit-based productivity: providers 48 / 24; cluster size 24 / 24; sample size 1152 / 576; SD 86; ICC 0.01; MDE +13.63
Days to completed consulta: providers 32 / 16; cluster size 24 / 24; sample size 768 / 384; SD 30; ICC 0.01; MDE −5.82
Patient volume: providers 48 / 24; cluster size 24 / 24; sample size 1152 / 576; SD 3.5; ICC 0.01; MDE +0.55
Patient satisfaction: providers 48 / 24; cluster size 288 / 288; sample size 13,824 / 6912; SD 10; ICC 0.01; MDE +0.82

ICC – intra-class correlation; SD – standard deviation; MDE – minimum detectable effect size.

Size of clusters represents the number of pay periods for wRVU-based productivity, visit-based productivity, wait times, and patient volume, and number of respondents per pay period for patient satisfaction. For patient satisfaction, the item “I am satisfied with the service I received from the VA clinic” was used in our power analysis.

a Applies to specialty care only.
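The minimum detectable effect sizes in Table 5 are consistent with a standard two-sample calculation inflated by a design effect for clustering. The R sketch below reproduces the Table 5 values under the assumptions of a two-sided alpha of 0.05 and 80% power; it is a simplified illustration rather than our full power analysis code.

```r
# Minimum detectable effect (MDE) for a cluster design: a two-sample
# calculation inflated by the design effect 1 + (m - 1) * ICC for
# clusters of size m. Reproduces the Table 5 values.
mde <- function(sd, icc, m, n_comp, n_int, alpha = 0.05, power = 0.80) {
  deff <- 1 + (m - 1) * icc                  # design effect for clustering
  z    <- qnorm(1 - alpha / 2) + qnorm(power)
  z * sd * sqrt(deff * (1 / n_comp + 1 / n_int))
}

mde(sd = 163, icc = 0.01, m = 24,  n_comp = 1152,  n_int = 576)   # ~25.85 wRVUs
mde(sd = 86,  icc = 0.01, m = 24,  n_comp = 1152,  n_int = 576)   # ~13.63 visits
mde(sd = 30,  icc = 0.01, m = 24,  n_comp = 768,   n_int = 384)   # ~5.82 days
mde(sd = 3.5, icc = 0.01, m = 24,  n_comp = 1152,  n_int = 576)   # ~0.55 patients/day
mde(sd = 10,  icc = 0.01, m = 288, n_comp = 13824, n_int = 6912)  # ~0.82 satisfaction
```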

2.6. Statistical analysis

We will use descriptive statistics and multivariable regression analysis to describe the impact of scribes on the outcome measures summarized in Table 3. We plan to use facility- and provider-level fixed effects models that exploit within-facility and within-provider variation in outcomes and the presence of scribes over time, incorporating the control variables described below and summarized in Table 4. We will also account for supply and demand factors for VA health care known to be associated with our outcomes [15], assuming there is enough variation over time to justify their inclusion in addition to facility fixed effects. These include facility-level enrollment characteristics, such as the percentage of enrollees over age 65, percentage under age 50, and percentage of low priority status enrollees (7 and 8) from PSSG. PSSG captures enrollee counts at the county level for each facility, which we aggregate to the facility level. We also include the facility-level insured rate for 18- to 64-year-old males (a proxy for veteran insurance coverage), median household income, home prices, veteran unemployment rate, and Medicare Advantage penetration, derived from county-level measures and weighted by enrollment in each VA facility. Facility-level annual average patient risk scores (Nosos scores), obtained from VSSC, will be included to account for differences in the relative comorbidity burden of facilities’ patient populations [16]. For the specialty care wait times models only, we will include facility-level average enrollee driving distance, from PSSG, and community care wait times (for care outside of VA facilities), from CDW.
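As an illustration of this specification, the sketch below fits a two-way fixed effects model using the fixest package; the package choice, variable names, and toy data are illustrative assumptions rather than our actual analysis code.

```r
# Illustrative sketch of the facility fixed effects specification described
# above, using the fixest package (an assumption; the estimation software is
# not named in the text). Variable names and toy data are hypothetical.
library(fixest)

# Toy panel: one row per facility-pay period
set.seed(1)
panel <- data.frame(
  facility    = rep(paste0("f", 1:6), each = 24),
  pay_period  = rep(1:24, times = 6),
  scribe_fte  = rep(c(1, 0), each = 72) * rbinom(144, 1, 0.8),  # exposure
  pct_over_65 = runif(144, 0.3, 0.6),
  nosos_score = rnorm(144, 1, 0.1)
)
panel$wrvu_per_fte <- 50 + 5 * panel$scribe_fte + rnorm(144, 0, 10)

# Two-way fixed effects (facility and pay period), clustering standard
# errors at the facility, the unit of randomization; the other Table 4
# controls would enter the same way as pct_over_65 and nosos_score
fit <- feols(wrvu_per_fte ~ scribe_fte + pct_over_65 + nosos_score |
               facility + pay_period,
             data = panel, cluster = ~facility)
summary(fit)
```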

There may be baseline differences between the intervention and comparison groups on our control variables, described above and summarized in Table 4. If we observe a greater than 10% standardized difference, a commonly used threshold, we will also explore using coarsened exact matching on facility characteristics to pair intervention and comparison sites, improving our ability to give our findings a causal interpretation [17–20]. There is also a risk of randomization failure due to incomplete or less than ideal implementation of the intervention, including clinic dropout or difficulty in hiring and retaining scribes, in which case we will explore using randomization as an instrumental variable.
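A sketch of this balance check and matching step is below; the variable names and toy data are hypothetical, the standardized difference uses a standard formula [17,19], and MatchIt's coarsened exact matching routine is one possible implementation [18,20], not necessarily the one we will use.

```r
# Sketch of the baseline balance check and coarsened exact matching step
# (hypothetical variable names and toy data)
library(MatchIt)

# Standardized difference for a continuous covariate:
# (mean_T - mean_C) / sqrt((var_T + var_C) / 2)
std_diff <- function(x, treated) {
  (mean(x[treated]) - mean(x[!treated])) /
    sqrt((var(x[treated]) + var(x[!treated])) / 2)
}

# Toy facility-level data
set.seed(2)
facilities <- data.frame(
  intervention  = rep(c(1L, 0L), c(12, 20)),
  median_income = rnorm(32, 60000, 8000),
  nosos_score   = rnorm(32, 1, 0.1)
)

covs <- c("median_income", "nosos_score")
sd_vals <- sapply(facilities[covs], std_diff,
                  treated = facilities$intervention == 1)

# If any covariate exceeds the 10% threshold, pair sites with CEM
if (any(abs(sd_vals) > 0.10)) {
  m <- matchit(intervention ~ median_income + nosos_score,
               data = facilities, method = "cem")
  summary(m)
}
```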

3. Implementation progress

Our results at this early stage focus on the implementation of the trial, which involves the hiring of 48 medical scribes at the 12 participating VAMCs as either VA employees or contractors. A key implementation outcome is the difference in hiring experiences between VA and contract scribes. Despite having reached the official start date, which itself was delayed due to COVID-19, hiring is still in progress and is expected to continue until all positions are filled, and thereafter as needed due to attrition. As the future course of the pandemic and of hiring for the pilot is unknown, we felt it worthwhile to report our findings despite the challenges we have encountered in implementing the trial. Our initial data (Fig. 1) show substantial differences in the hiring experience of VA versus contract scribes. Contracting has been outpacing VA hiring to date in terms of reaching its target (24 scribes each, 48 in total), though with a considerably longer time to hire per scribe. The figure depicts attrition after a scribe has reached their EOD date, not accounting for potential attrition among projected contract and VA scribes still in the onboarding process. At this time, we do not anticipate that attrition during the onboarding process will significantly change these hiring projections.

Fig. 1. Hiring of contract and employee scribes.

Fig. 1 does not explicitly break down differences in the time required to onboard scribes through these two channels, which is an important consideration. For the scribes who have an EOD date, we also analyzed the difference in time to hire between employee and contract scribes. We measured time to hire as the time between the job being posted and the EOD date, which includes the onboarding time for candidates. Employee scribes had an average time to hire of approximately 120 days versus approximately 198 days for contract scribes. Based on our initial observations of hiring to date, contracting appears to hold a significant advantage in reaching its staffing target but takes somewhat longer per hire when compared against the fewer completed employee hires. More data and a deeper analysis of the underlying causes of this variation in hiring trajectories between employee and contract scribes will provide valuable insights if scribes were to be deployed at scale within VHA.

4. Discussion

As a learning health care system and the largest integrated delivery system in the United States, VHA considers this trial to have great potential to increase our understanding of the effects of scribes while also informing a very real policy problem involving high wait times and provider administrative burdens. Our evaluation findings from this pilot will provide insight into whether scribes can help increase provider productivity and decrease wait times for high demand specialties in the VA without adversely affecting patient satisfaction.

Implementation was complicated by the COVID-19 pandemic, but a key takeaway thus far is that scribe hiring through contracting has been able to hit its target while VA hiring has fallen far short. Ongoing hiring results as the pilot continues, including filling the remaining positions and replacing any attrition during the 2-year study period, will help clarify what effect COVID-19 had and whether it differed between VA and contract hiring. Every 180 days, the VA must submit a report to Congress on the progress and impact of the program on provider efficiency, patient satisfaction, and average wait times. At the end of the study, the Comptroller General will submit a report comparing this program to similar programs conducted in the private sector. VACE will also conduct a separate qualitative analysis, not detailed here, to understand contextual factors affecting study outcomes. These reports will inform policy makers on the strengths and limitations of using VA and contract medical scribes as part of VHA care.

Supplementary Material

Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix F

Acknowledgements

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Footnotes

Conflicts of interest

None to declare.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.cct.2021.106455.

References

  • [1] S.2372 – 115th Congress (2017–2018): VA MISSION Act of 2018. https://www.congress.gov/bill/115th-congress/senate-bill/2372/text, 2018.
  • [2] Pearson E, Frakt A, Medical scribes, productivity, and satisfaction, JAMA 321 (7) (2019) 635–636, doi: 10.1001/jama.2019.0268.
  • [3] Pearson E, Frakt A, Pizer S, Medical scribes, productivity, and satisfaction, Partn. Evid.-Based Policy Resour. Cent. Policy Brief 3 (2) (2018). https://www.peprec.research.va.gov/PEPRECRESEARCH/docs/Policy_Brief_5a.pdf. Accessed December 18, 2019.
  • [4] Gidwani R, Nguyen C, Kofoed A, Carragee C, Rydel T, Nelligan I, Sattler A, Mahoney M, Lin S, Impact of scribes on physician satisfaction, patient satisfaction, and charting efficiency: a randomized controlled trial, Ann. Fam. Med. 15 (5) (2017) 427–433, doi: 10.1370/afm.2122.
  • [5] Zallman L, Finnegan K, Roll D, Todaro M, Oneiz R, Sayah A, Impact of medical scribes in primary care on productivity, face-to-face time, and patient comfort, J. Am. Board Fam. Med. 31 (4) (2018) 612–619, doi: 10.3122/jabfm.2018.04.170325.
  • [6] Shuaib W, Hilmi J, Caballero J, et al., Impact of a scribe program on patient throughput, physician productivity, and patient satisfaction in a community-based emergency department, Health Inform. J. 25 (1) (2019) 216–224, doi: 10.1177/1460458217704255.
  • [7] Heaton HA, Nestler DM, Jones DD, et al., Impact of scribes on patient throughput in adult and pediatric academic EDs, Am. J. Emerg. Med. 34 (10) (2016) 1982–1985, doi: 10.1016/j.ajem.2016.07.011.
  • [8] Heaton HA, Nestler DM, Lohse CM, Sadosty AT, Impact of scribes on emergency department patient throughput one year after implementation, Am. J. Emerg. Med. 35 (2) (2017) 311–314, doi: 10.1016/j.ajem.2016.11.017.
  • [9] Heaton HA, Nestler DM, Jones DD, et al., Impact of scribes on billed relative value units in an academic emergency department, J. Emerg. Med. 52 (3) (2017) 370–376, doi: 10.1016/j.jemermed.2016.11.017.
  • [10] Heaton HA, Wang R, Farrell KJ, et al., Time motion analysis: impact of scribes on provider time management, J. Emerg. Med. 55 (1) (2018) 135–140, doi: 10.1016/j.jemermed.2018.04.018.
  • [11] Walker K, Ben-Meir M, Dunlop W, et al., Impact of scribes on emergency medicine doctors’ productivity and patient throughput: multicentre randomised trial, BMJ 364 (2019) l121, doi: 10.1136/bmj.l121.
  • [12] Ullman K, McKenzie L, Bart B, Park G, MacDonald R, Linskens E, Wilt TJ, The Effect of Medical Scribes in Cardiology, Orthopedic, and Emergency Departments: A Systematic Review, Evidence Synthesis Program, Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs, Washington, DC, 2020. VA ESP Project #09–009.
  • [13] Graves PS, Graves SR, Minhas T, Lewinson RE, Vallerand IA, Lewinson RT, Effects of medical scribes on physician productivity in a Canadian emergency department: a pilot study, CMAJ Open 6 (3) (2018) E360–E364, doi: 10.9778/cmajo.20180031.
  • [14] Pizer S, Davies M, Prentice J, Consult coordination affects patient experience, Am. J. Manag. Care 5 (1) (2017) 23–28.
  • [15] Hanchate AD, Frakt AB, Kressin NR, et al., External determinants of Veterans’ utilization of VA health care, Health Serv. Res. 53 (6) (2018) 4224–4247, doi: 10.1111/1475-6773.13011.
  • [16] Wagner T, Stefos T, Moran E, et al., Risk Adjustment: Guide to the V21 and Nosos Risk Score Programs, VA Palo Alto, Health Economics Resource Center, Menlo Park, CA, 2016. https://www.herc.research.va.gov/include/page.asp?id=technical-report-risk-adjustment. Accessed February 11, 2020.
  • [17] Austin PC, Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research, Commun. Stat. – Simul. Comput. 38 (6) (2009) 1228–1234, doi: 10.1080/03610910902859574.
  • [18] Blackwell M, Iacus S, King G, Porro G, Cem: coarsened exact matching in Stata, Stata J. 9 (4) (2009) 524–546.
  • [19] Austin PC, Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples, Stat. Med. 28 (25) (2009) 3083–3107, doi: 10.1002/sim.3697.
  • [20] Iacus SM, King G, Porro G, Causal inference without balance checking: coarsened exact matching, Polit. Anal. 20 (1) (2012) 1–24, doi: 10.1093/pan/mpr013.
