Author manuscript; available in PMC: 2022 May 1.
Published in final edited form as: J Health Econ. 2021 Feb 23;77:102442. doi: 10.1016/j.jhealeco.2021.102442

Docs with their Eyes on the Clock? The Effect of Time Pressures on Primary Care Productivity

Seth Freedman 1, Ezra Golberstein 2, Tsan-Yao Huang 3, David Satin 4, Laura Barrie Smith 5
PMCID: PMC8122046  NIHMSID: NIHMS1676587  PMID: 33684849

Abstract

This paper examines how time pressure, an important constraint faced by medical care providers, affects productivity in primary care. We generate empirical predictions by incorporating time pressure into a model of physician behavior by Tai-Seale and McGuire (2012). We use data from the electronic health records of a large integrated delivery system and leverage unexpected schedule changes as variation in time pressure. We find that greater time pressure reduces the number of diagnoses recorded during a visit and increases both scheduled and unscheduled follow-up care. We also find some evidence of increased low-value care, decreased preventive care, and decreased opioid prescribing.

Keywords: provider decisionmaking, primary care, health care productivity

1. Introduction

Primary care is the point of first contact for many patients and is where many initial diagnosis and treatment decisions are made. Understanding the productivity of primary care providers (PCPs) is therefore important for health economics and policy, but is complicated by the fact that PCPs deliver care in a multitasking work setting (Holmstrom and Milgrom 1991, Ma 1994). PCP productivity encompasses both the quantity of care delivered (i.e., the number of patients and the number of health conditions covered per patient visit) and the quality of care delivered, often along multiple dimensions that are difficult for patients and payers to observe. The structure of the PCP work setting plausibly influences productivity. This structure includes incentives for delivering both quantity and quality of services, along with any constraints on PCP behavior and decision-making. One potentially important constraint is time. Half of physicians report experiencing time pressure during office visits and describe their work pace as chaotic (Linzer 2009). Inadequate time for primary care visits may reduce the opportunity for thorough examination, diagnosis, communication, and shared decision making between the PCP and the patient, potentially leading to poorer quality of care and less-favorable health outcomes (Christianson, Warrick et al. 2012, Powell, Bloomfield et al. 2013, Linzer, Bitton et al. 2015, Koven 2016).

Surveys that solicited physicians’ perceptions of causes of overuse of unnecessary treatment – one type of poor quality of care – corroborate the potential importance of time pressure. Significant pluralities of respondents indicated that inadequate time with patients led to more unnecessary care or not enough time to discuss risks and benefits of unnecessary care (Sirovich, Woloshin et al. 2011, The ABIM Foundation 2014, Sears, Caverly et al. 2016). One survey also asked about potential solutions for overuse, and 78% of physicians indicated that more time with patients to discuss alternatives would be at least somewhat effective (The ABIM Foundation 2014). Another recent survey found that 37% of physicians believe that inadequate time with patients is a top reason for overutilization (Lyu, Xu et al. 2017).

The goal of this paper is to understand whether the time pressure faced by PCPs causally affects their productivity within a visit, in terms of both quantity and quality, using a unique data source and empirical research design. We examine several dimensions of productivity, including the number of topics covered per visit (measured by the number of diagnoses), the likelihood that patients have return visits that are either planned (i.e., recommended by the PCP) or unplanned, and whether patients have subsequent hospital visits. We also analyze the use of low-value services, opioid prescribing (a potentially inappropriate service that may substitute for higher-quality though more time-consuming patient engagement), and high-value preventive care services.

We use data from the electronic health records (EHR) of a large health care delivery system in a major U.S. metropolitan area, and we use unexpected changes to a PCP’s schedule as a source of random variation in time pressure. Our empirical specifications include PCP fixed effects and other fixed effects to control for seasonality, time of day, and day of week that determine expected variation in schedule changes. As we discuss in more detail below, schedule changes in the clinics we study are implemented in ways that are not driven by PCP preferences or characteristics of other scheduled visits that day.

We find that increased time pressure reduces the number of diagnoses recorded in a visit, which suggests providers facing time pressure engage in fewer topics with their patients. Moving from the first to the fourth quartile of our measure of time pressure reduces the number of diagnoses by 1.8%. Visits in the fourth quartile of time pressure experience 6% more planned and 4.5% more unplanned follow-up visits, suggesting that providers delay care to a future visit and that patients experience adverse events or dissatisfaction with care when provided under time pressure. We also find evidence that more time pressure increases the likelihood of subsequent hospitalization (4.1% increase from the first to the fourth quartile), which may also signal poor quality primary care. We find some evidence that more time pressure affects low-value care, reduces opioid prescribing, and reduces recommended preventive care.

Understanding the causal effect of time pressure on the quality of care delivered in primary care settings is important for several reasons. The majority of health care visits in the U.S. are with PCPs, which include physicians and other clinicians like physician assistants (PAs) and nurse practitioners (NPs) (Bodenheimer and Pham 2010). PCPs play many key roles, including making initial diagnoses, managing chronic diseases, and making treatment decisions that include prescribing, ordering tests, and referring to specialists. If more time pressure reduces quality of care in primary care, then providers, payers, and policymakers face a trade-off. From a provider perspective, reducing time pressure within visits comes at a cost of fewer visits per shift (and less fee-for-service revenue) or of investments in additional personnel to allow providers to work more efficiently. If the effect of time pressure on quality of care is negative and clinically meaningful, then practices would need to take this tradeoff seriously. From the perspective of payers and policymakers, whether time pressure affects the quality and value of care has important implications for designing payment contracts in a multitasking environment (Ma 1994, Ma and McGuire 1997). If additional time in a primary care visit significantly improves the quality of care, then changing fees to encourage longer visits at the margin might be a worthwhile investment. However, these implications only apply if time pressure indeed causally affects quality of care. While we do find a number of statistically significant effects of time pressure, the magnitudes of these effects are for the most part relatively small. This is consistent with other concurrent research on the effects of time constraints on primary care provider behavior (Shurtz, Eizenberg et al. 2019), and suggests that the tradeoffs from greater time pressure in primary care may not be especially consequential for the outcomes we examine.

2. Literature Review

Our paper contributes to the literature on determinants of health care treatment choices. Chandra et al. (2012) review studies of various factors that influence physician decision making, including financial incentives, specialization and training, malpractice concerns, and more recently behavioral influences such as availability, status quo, or framing heuristics. This work is typically based on models where providers make decisions balancing patient benefit, provider earnings, and potentially other factors such as availability of technology or professional guidelines (Ellis and McGuire 1986, Chandra, Cutler et al. 2012). Our work illustrates that time constraints can alter these tradeoffs and can generate variation in treatment decisions.

Previous research has explored correlations between time pressure and visit outcomes. Researchers have calculated that the amount of time necessary for a PCP to follow standard guidelines with a typical patient panel would exceed the length of a normal work day (Yarnall, Pollak et al. 2003, Østbye, Yarnall et al. 2005), that physicians and patients often spend very little time discussing each topic of a visit (Tai-Seale, McGuire et al. 2007), and that the likelihood of PCPs prescribing antibiotics for acute upper respiratory infections and opioids for pain diagnoses increases over the course of a day, which may reflect decision fatigue (Linder, Doctor et al. 2014, Neprash and Barnett 2019). Other studies use multivariate regression analysis to compare outcomes of shorter and longer visits. For example, shorter visits are associated with decreased appropriate screening for abdominal aortic aneurysm (Eaton, Reed et al. 2012); decreased appropriate diet and exercise counseling and blood pressure screening (Chen, Farwell et al. 2009); decreased depression screening (Schmitt, Miller et al. 2010); and increased prescribing of antibiotics for upper respiratory tract infections (Linder, Singer et al. 2003). These studies, however, are unable to distinguish the true causal effects of visit length from other, unobserved provider and patient determinants of clinical outcomes.

One study of 34 physicians assessed how randomly-assigned time constraints affect clinical decision making in the context of patient vignettes and found that time constrained physicians ask fewer questions (Tsiga, Panagopoulou et al. 2013). The most similar published study to our own exploits the duration of non-mental health visits in a practice as a source of exogenous variation in the length of mental health visits, and finds that visit length has a small and statistically insignificant impact on the diagnosis of mental health conditions (Glied 1998).

Several recent and concurrent papers in economics share two themes with our paper: understanding how health care professionals respond to changes in their workload and/or time pressures, and using data derived from EHR systems. One paper examines variation driven by shocks to provider availability in five public health clinics in Tennessee (Harris, Liu et al. 2020). They find that reduced nurse capacity leads to fewer visits but only small effects on visit length (although the scope of services offered in this setting is relatively limited). Another paper, more similar to ours, uses data from 11 Israeli primary care clinics and exploits variation in the number of patients seen per provider per day driven by the absence of a colleague (Shurtz, Eizenberg et al. 2019). They find that shorter average visit length leads to fewer specialist referrals and lab tests, and weak evidence of increased antibiotic prescribing. They do not find evidence that visit length affects referrals to imaging, prescription of painkillers, or subsequent visits. Our paper differs from these two papers in terms of setting, and it also uses a distinct source of variation. We ask how PCPs respond to short-term demand shocks rather than short-term supply shocks. These adjustments could differ if, for example, PCPs respond differently to the absence of a colleague revealed before a shift begins than to shocks that occur throughout the day as patients are added to or dropped from their schedule in real time.

Our research also shares some themes with recent research on emergency department physicians by Chan (2018), who uses detailed EHR data to examine (among other things) the implications of scheduling for physician productivity and the degree to which time complements or substitutes other inputs in patient care. Freedman (2016) also examines variation in patient flows, but finds that excess capacity can lead to additional treatment intensity in the hospital setting where marginal reimbursement rates are much higher than in primary care. Finally, our paper also relates to other research in economics outside of the health care context, on the effect of time pressure on employee productivity (Frakes and Wasserman 2017).

3. Theoretical Model of Time Pressure in Primary Care

To generate predictions of how time pressure may affect outcomes in primary care, we extend Tai-Seale and McGuire’s (2012) model, in which providers make decisions about when to end a visit based on the expected benefit to the patient of addressing a new “topic” of discussion (assuming that topics take a fixed amount of time to discuss), and the shadow price of time.1 Let t index topics and τ index the time elapsed during the visit. Vt represents the expected value of discussing topic t. In Figure 1A, an example of this function is depicted by the downward sloping step function. Conditional on having discussed t topics, the provider decides to also discuss topic t+1 if Vt+1 > λ(τ), where λ(τ) is the shadow price of time given the amount of time elapsed to complete topic t+1. Tai-Seale and McGuire find empirical support for the hypothesis that providers act as if they have a “target” visit time, which implies a shadow price function that is flat initially and increases steeply around the target time. We assume this behavioral model, as shown in Figure 1A, where the provider would allow only three topics into discussion, because the expected value of the fourth topic is below the shadow price of time.

Figure 1: Modeling the Decision to Admit Topics Under Time Pressure

To add the notion of time pressure to this model, we assume that a provider facing more time pressure will have a shorter target visit length, and a shadow price function that increases earlier. In Figure 1B, λ(τ)2 represents a shadow price function with more time pressure than λ(τ)1, where the provider would only complete two topics. Our first empirical prediction is thus that providers facing more time pressure may address fewer topics per visit. When marginal topics are not discussed due to time pressure, the marginal topic will have greater value to the patient than in a visit with less time pressure. The value of the marginal topic may be even greater if patients save important topics for later in the visit. Our second empirical prediction is thus that providers facing more time pressure are more likely to schedule a return visit with the patient to discuss valuable topics that did not meet the threshold for inclusion in the visit.

We also extend the model by allowing for providers to choose an alternative topic treatment path with more topics addressed per visit, but with less patient value per topic. This can represent both objective and patient-perceived quality. The provider then chooses the path that maximizes total expected value to the patient, given time pressures. Spending less time per topic could mean relying on heuristics, rather than customizing treatment choices. It could also mean prescribing “low-value” or unnecessary care that a patient requests rather than taking time to discuss why it is not optimal, or taking less time to diagnose symptoms, understand patient preferences, or explain prognoses or treatment plans.2 These trade-offs are depicted in Figure 1C. The black step function is the original value function, and the gray step function is a lower-value treatment path. Without time pressure (under λ(τ)1) the provider could treat three topics on the high-value path or three topics on the low-value path, and would choose the high-value path. Under time pressure (λ(τ)2), the provider can treat two topics on the high-value path or three on the low-value path. In this example, the area under the low-value curve would be greater than the area under the high-value curve, and the provider would choose the treatment path with lower per-topic value, but a higher total overall value to the patient.
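
The admission rule and path choice described above can be sketched in a few lines of code. This is a stylized illustration with hypothetical topic values, per-topic times, and target visit lengths (none come from the paper's data or from Tai-Seale and McGuire's estimates): topics are admitted while their expected value exceeds the shadow price of elapsed time, and the provider picks whichever treatment path yields the greater total admitted value.

```python
def topics_admitted(values, topic_time, shadow_price):
    """Admit topics in order while V_{t+1} exceeds the shadow price
    of the time elapsed once topic t+1 is completed."""
    elapsed, admitted = 0.0, []
    for v in values:
        elapsed += topic_time
        if v > shadow_price(elapsed):
            admitted.append(v)
        else:
            break
    return admitted

def target_time_price(target, slope=1.0):
    """Shadow price that is flat at zero until the target visit length,
    then rises linearly (the 'target time' behavior assumed in the text)."""
    return lambda tau: max(0.0, slope * (tau - target))

# Hypothetical treatment paths: fewer high-value topics vs. more,
# quicker, lower-value topics.
high_path = [10.0, 7.0, 5.0, 2.0]   # 10 minutes per topic
low_path = [7.0, 6.0, 5.0, 2.0]     # 6 minutes per topic
TOPIC_TIME_HIGH, TOPIC_TIME_LOW = 10.0, 6.0

relaxed = target_time_price(target=30)  # little time pressure
pressed = target_time_price(target=18)  # more time pressure

for label, price in [("relaxed", relaxed), ("pressed", pressed)]:
    hi = topics_admitted(high_path, TOPIC_TIME_HIGH, price)
    lo = topics_admitted(low_path, TOPIC_TIME_LOW, price)
    best = "high" if sum(hi) >= sum(lo) else "low"
    print(label, len(hi), len(lo), best)
```

With these illustrative numbers, the relaxed provider admits three high-value topics and chooses the high-value path, while the pressed provider can complete only two high-value topics but three low-value ones with greater total value, and so switches paths, mirroring the logic of Figure 1C.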

On net, the relationship between time pressure and the likelihood that low-value care is recommended is ambiguous. For example, imaging referral for low-back pain is in most cases inappropriate. Time pressure may increase the likelihood of imaging referral, conditional on the topic being discussed, but time pressure may reduce the likelihood that the topic is even discussed, foreclosing the possibility of an imaging referral. On the other hand, we predict that recommended preventive care would be reduced by time pressure, either if the “topic” of preventive care is not discussed or the provider foregoes ordering preventive care to save time.

To the extent that a provider chooses a treatment path involving less time per topic, it increases the likelihood that the patient will need, or perceive the need for, follow-up care. As such, we predict that follow-up visits that were not ordered by the provider will increase with time pressure. And, because decreases in both the quantity of topics discussed and the per-topic quality provided could lead to lower quality care, we predict visits with more time pressure would have a greater likelihood of subsequent hospitalizations and emergency department visits.

4. Data and Methods

4.1. Data

We use visit level data from the EHR system of Fairview Health Services, a large integrated delivery system in the greater Twin Cities region. In the period that we study, Fairview’s system included 39 primary care clinics and six hospitals. Since 2005, Fairview has used an Epic® EHR system in its clinics. The EHR data include information on patients (age, sex, language, insurance status, etc.), clinical encounters (problem list, diagnoses recorded, orders, labs, procedures, test results, medications, etc.), providers (type, title, etc.), and appointment scheduling (scheduled start time, scheduled length, date scheduled and/or cancelled, etc.). We can follow both providers and patients over time, enabling us to study within-provider variation and to observe characteristics of patients based on past encounters within the Fairview system. We obtained data from all visits at 31 primary care clinics for the years 2005-2015 (N=6,361,942).

Six of the 31 clinics are in urban areas and 25 are in suburban areas. The clinics range in size from six to 30 PCPs, and also vary in the racial and ethnic diversity of patients served as well as the share of patients on Medicaid. Table 1 provides summary statistics about the clinics. Our full analytic sample includes all primary care visits to physicians, PAs, and NPs for patients age 18 and older, restricting to visits not booked on the same day as the actual visit (N=2,671,789).3 We also consider various condition-specific subsamples that we describe below and in Appendix 1. We rely on scheduling data from all visits in order to accurately capture time pressures faced by providers during a work shift.

Table 1:

Primary Care Clinic Summary Statistics (N=31)

                                           Mean (across clinics)   Standard Deviation   Range
Number of providers                        13.0                    5.5                  6-30
  Physicians                               8.0                     4.0                  3-20
  Nurse practitioners                      0.9                     0.8                  0-3
  Physician assistants                     1.5                     1.3                  0-6
Patients seen per half-day shift per PCP   8.1                     1.0                  6.3-10.8
% non-Hispanic white patients              82.3                    9.9                  48.4-96.4
% patients on Medicaid                     6.0                     2.5                  3.1-14.0

Notes: The provider counts represent the average number of providers practicing at a clinic within a calendar year.

4.2. Measuring schedule changes as a proxy for time pressure

Time pressures emerge from a variety of factors relating to the PCP, the patient, and their interaction. However, determinants of time pressure are likely to be correlated with other determinants of the outcome of a particular patient visit, including productivity and treatment decisions. We deal with this difficulty by focusing on shocks to the degree of time pressure within patient visits. We posit that unexpected changes to the number of patients on a PCP’s schedule influence the time pressure felt in each visit during that shift. With a roughly fixed total number of hours in a shift, having more or fewer patients to see should increase or decrease the time pressure felt in each visit, all else equal.4

Our main measure of time pressure summarizes each PCP’s schedule changes at the visit level, distinguishing between morning and afternoon shifts. We identify three sources of schedule changes: 1) Appointments that have the status of “no-show;” 2) Same-day cancellations, based on the visit date and the cancellation date being the same; and 3) Same-day visits, based on the visit date and the scheduling date being the same.5 Using the scheduled length of each visit, we calculate the number of minutes added or subtracted from the PCP’s schedule by these events as well as the net minutes the schedule has changed by (length of same-day visits minus scheduled length of no-shows and cancellations). We define this measure to be visit-specific. No-shows that occur after an appointment do not affect that appointment’s time pressure. Cancellations and same-day appointments during the half-day shift are counted for all patients in that shift, even though they potentially take place after the appointment’s scheduled time, because we do not observe what time the patient called to cancel or make the appointment. We do not consider this a large limitation, because the PCPs in this system receive regularly updated information about cancelled and newly-scheduled visits that will be occurring later in their shift, and thus awareness of schedule changes later in a shift may contribute to PCPs’ sense of time pressure during a given visit. We elaborate on the details of how no-shows, same-day cancellations, and same-day visits play out in the primary care clinics in Section 4.7.
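
The net schedule-change measure can be sketched as follows. The record layout and field names here are hypothetical stand-ins for the EHR scheduling fields, and this simplified version counts all no-shows, same-day cancellations, and same-day additions within the shift (the paper's visit-specific measure additionally excludes no-shows occurring after the focal appointment, which requires appointment-time ordering not modeled here).

```python
from datetime import date

# Hypothetical appointment records for one PCP half-day shift.
appointments = [
    {"status": "completed", "minutes": 20, "visit": date(2014, 3, 5),
     "booked": date(2014, 2, 20), "cancelled": None},
    {"status": "no-show", "minutes": 20, "visit": date(2014, 3, 5),
     "booked": date(2014, 2, 25), "cancelled": None},
    {"status": "cancelled", "minutes": 40, "visit": date(2014, 3, 5),
     "booked": date(2014, 2, 10), "cancelled": date(2014, 3, 5)},
    {"status": "completed", "minutes": 20, "visit": date(2014, 3, 5),
     "booked": date(2014, 3, 5), "cancelled": None},  # same-day addition
]

def net_schedule_change_minutes(appts):
    """Net minutes added to the shift: scheduled length of same-day
    additions, minus scheduled length of no-shows and same-day
    cancellations."""
    added = sum(a["minutes"] for a in appts
                if a["status"] == "completed" and a["booked"] == a["visit"])
    no_show = sum(a["minutes"] for a in appts if a["status"] == "no-show")
    same_day_cancel = sum(a["minutes"] for a in appts
                          if a["cancelled"] == a["visit"])
    return added - no_show - same_day_cancel

print(net_schedule_change_minutes(appointments))  # 20 - 20 - 40 = -40
```

A negative value means the shift lost scheduled minutes on net (less time pressure); a positive value means same-day additions outweighed no-shows and cancellations (more time pressure).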

4.3. Number of Diagnoses and Follow-up Visit Outcomes

The first outcome we examine is the number of diagnoses recorded during the visit. One limitation of our data is that we do not observe the actual duration of the visit, so we cannot verify if visits in shifts with more time pressure by our measure are actually shorter. We view the number of diagnoses as a proxy for how many topics the patient and PCP discussed during the visit. We therefore view a negative correlation between higher levels of our two schedule change variables and the number of diagnoses recorded as verification that net schedule changes do reflect time pressure. This also serves as a test of the first hypothesis that time pressure reduces the number of topics addressed during a visit. We create two diagnosis count measures: one that counts all diagnoses recorded, and one that counts only diagnoses that are new (i.e., not previously recorded in the EHR system for that patient).

We create several measures of follow-up care. First, we explore whether time pressure affects the likelihood of ordering a referral. The net effect of time pressure on referrals is likely ambiguous, since time pressure may lead a PCP to more quickly order a referral to defer the care to another provider, or it may lead the PCP to lack time to realize that the patient’s care would be better suited to a specialist. Second, we test how time pressure affects the scheduling of follow-up visits at the same clinic. One way in which PCPs may cope with time pressure would be to suggest a follow-up visit to be able to spend more time on a topic. In this case, the patient may eventually get similar quality care despite the time pressure, but their value of care would be lower in a present value sense or due to the hassle cost and additional fee of a follow-up visit. To test this, we create a dichotomous outcome variable called “scheduled follow-up,” equal to one if the patient schedules a follow-up visit to occur within 14 days of the index visit and schedules that visit on the same day as the index visit.6 Third, we create an outcome for an “unscheduled follow-up” equal to one if the patient schedules a follow-up visit to occur within 14 days of the index visit but schedules the visit on a day after the index visit. We view this measure as a proxy for bad outcomes that follow the visit or the patient perceiving the original care as inadequate.
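
The two follow-up outcomes can be expressed as a small classification rule over three dates (all dates below are hypothetical): the index visit date, the follow-up visit date, and the day the follow-up was booked.

```python
from datetime import date

def classify_follow_up(index_visit, follow_visit, follow_booked):
    """Per the definitions above: a follow-up must occur within 14 days
    of the index visit; it is 'scheduled' if booked on the index visit
    date itself, and 'unscheduled' if booked on a later day."""
    gap = (follow_visit - index_visit).days
    if not 0 < gap <= 14:
        return None  # not a follow-up for these outcomes
    return "scheduled" if follow_booked == index_visit else "unscheduled"

index_day = date(2014, 3, 5)
print(classify_follow_up(index_day, date(2014, 3, 12), index_day))         # scheduled
print(classify_follow_up(index_day, date(2014, 3, 12), date(2014, 3, 8)))  # unscheduled
print(classify_follow_up(index_day, date(2014, 4, 1), index_day))          # None
```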

We also create measures of follow-up care in hospital-based settings. If the care received in a primary care visit is inadequate and leads to an adverse event, patients may seek subsequent care at hospitals. For each visit we identify whether the patient had any inpatient and/or any emergency department care in the 14 days and 30 days following the visit. We can only observe subsequent hospital-based care if the patient seeks this care at a facility owned by the Fairview delivery system. While Fairview owns six hospitals in the metropolitan area and has expressed to us that they have a high degree of patient retention within their delivery system, this measure undercounts hospitalizations and ED visits.7

4.4. Measures of low-value care

We focus on two outcomes that represent low-value care in primary care settings: prescribing antibiotics for acute upper respiratory infection and ordering an imaging study for low-back pain. We chose these because they are understood to be inappropriate or low-value in a broad range of clinical scenarios, can be validly constructed from EHR data, and correspond to common clinical scenarios. The Choosing Wisely campaign has included both of these services in its recommendations, and several medical specialty societies have promulgated guidelines labeling these services as inappropriate. Previous research has examined both of these outcomes and found that they can be reliably extracted from structured information contained in EHR data (Linder, Doctor et al. 2014, Shetty, Meeker et al. 2015). Additionally, respiratory problems and back pain represent two of the most common reasons for primary care visits. However, these two outcomes differ from each other in an important way. Prescribing an antibiotic is fast and can end a topic or a visit. Ordering an X-ray, by contrast, can require significant additional time: many of the clinics in our data have in-house X-ray facilities, so ordering an X-ray usually involves taking the time to conduct the imaging and review the results.

We follow recent research and clinical guidelines to define the acute upper respiratory tract infection (ARI) samples and outcomes (Linder, Doctor et al. 2014, Meeker, Linder et al. 2016). We create two samples, one with visits where antibiotics are considered “never appropriate” and the other with visits where antibiotics are in the “grey area” of being inappropriate (i.e. “potentially inappropriate”). Additional details about these samples are described in Appendix 1. The outcome in both ARI samples is an indicator variable for all visits with an order for a prescription drug with USP category, “antibacterials” (USP 2019). We exclude visits prior to 2011 from the ARI samples due to inter-clinic variation in use of the EHR system for prescription orders prior to 2011. The “never appropriate” sample consists of 23,421 visits with 46% prescribed an antibiotic. The “potentially inappropriate” sample consists of 33,043 visits with 58% prescribed an antibiotic.

We similarly follow previous literature to measure low-value care for low back pain (LBP) patients, with details described in Appendix 1. We construct three outcomes for this sample: orders for X-ray, advanced imaging (CT and MRI), or any imaging. For the LBP sample, we exclude observations after the fourth quarter of 2013, because the delivery system implemented an intervention at that time to limit LBP imaging, which nearly eliminated low-value imaging referrals. We also exclude three clinics because they had newly joined the system and had incomplete data in these fields. After excluding same-day visits, the final analytic sample for LBP consists of 78,636 observations. Of these, 12.4% have either an X-ray or advanced imaging, with X-rays accounting for the vast majority (11%) of total inappropriate imaging.

4.5. Opioid prescribing

To analyze the effects of time pressure on opioid prescribing, we create two subsamples of chronic pain patients (described in Appendix 1). We then identify patients with opioid use in the past six months, and consider all other patients to be "opioid naive." Similar to the antibiotics sample, we exclude all visits prior to 2011. The sample of all chronic pain visits has 320,109 observations with an opioid prescribing rate of 22.4%. The opioid naive sample consists of 245,639 visits and has an opioid prescribing rate of 11.0%.

4.6. Measures of Preventive Care Outcomes and Samples

We also examine the effect of time pressure on recommended preventive care. We use two women’s health screening tests and three diabetes management outcomes (samples described in Appendix 1). Women’s health screenings include mammography for breast cancer screening and Pap tests for cervical cancer screening. We identify the outcomes using orders for mammography and whether a Pap test was conducted using lab test records. There are 741,483 visits in our mammography sample and 1% have a mammography order. There are 639,719 visits in our Pap test sample, with 14.9% of visits receiving a Pap test.8

Diabetes management outcomes include LDL tests, referrals for eye exams, and HbA1c tests. We identify testing outcomes using lab test data and eye exam referrals based on referrals ordered to optometrists or ophthalmologists. The diabetes sample consists of 103,130 visits. 23% receive an HbA1c test, 15% receive an LDL test, and 5% receive an eye exam referral.

4.7. Empirical approach

Our empirical approach assumes that for a given “focal” patient in a pre-scheduled (i.e., not same-day) visit, the characteristics of the focal patient and the PCP that influence clinical decision making and quality are not correlated with unexpected changes in the PCP’s workload. We exclude same-day visits from the analysis sample because health status and outcomes of same-day patients may be correlated with our measure of schedule changes.

We estimate multivariate regression models using the sample of pre-scheduled visits:

Y_ipt = α + β·ScheduleChangeMinutes_ipt + δ·PatientX_it + γ·PastU_it + θ·(PatientX_it × PastU_it) + MonthYear_t + WOM_t + DOW_t + Holiday_t + TS_t + Length_ipt + PCP_p + ε_ipt

Y represents the visit outcome for patient i with PCP p at a specific time and day t. The key independent variable(s), ScheduleChangeMinutes, are the measures of schedule changes within a given PCP and shift. We present specifications where we include a linear measure of net schedule changes and specifications where we include dummies for quartiles of net schedule change minutes to examine nonlinear effects as schedule change shocks become larger.
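
The quartile specification can be sketched as follows. This is a minimal illustration with made-up net schedule-change values; the nearest-rank percentile rule used here is an assumption, since the paper does not specify its quantile algorithm.

```python
def quartile_cutpoints(values):
    """25th/50th/75th percentile cutpoints via a simple nearest-rank rule
    (an assumption; the paper does not state its quantile method)."""
    s = sorted(values)
    n = len(s)
    return [s[min(n - 1, int(n * q))] for q in (0.25, 0.5, 0.75)]

def quartile_dummies(x, cuts):
    """Indicators for quartiles 2-4, with quartile 1 (least added time,
    i.e., least time pressure) as the omitted category."""
    q = sum(x > c for c in cuts) + 1  # quartile number 1..4
    return [int(q == k) for k in (2, 3, 4)]

# Hypothetical net schedule-change minutes across visits in a sample.
net_minutes = [-60, -40, -20, -10, 0, 0, 10, 20, 40, 60]
cuts = quartile_cutpoints(net_minutes)
print(cuts, quartile_dummies(45, cuts))  # a +45-minute shock lands in quartile 4
```

In the regression, the three dummies replace the single linear ScheduleChangeMinutes term, letting the effect differ as schedule-change shocks grow.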

We also control for other potential determinants of outcomes, including observable characteristics of the patient (PatientX: age dummies, race/ethnicity, sex, language, insurance type, the number of reasons for visit, whether the patient had seen the PCP within the past year), dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit (PastU),9 and interactions between the vector of patient variables and past utilization variables. We also control for all observable and unobservable determinants of the outcomes that vary by date, by time of day, and by PCP by including a rich set of fixed effects for each month by year pair (MonthYear), week of month (WOM), day of week (DOW), holidays, hour of day in which the visit is scheduled (TS), scheduled visit length (Length), and PCPs. The month by year fixed effects allow us to flexibly control for general trends over time, while also allowing seasonality to vary across years. Standard errors are clustered at the PCP level to allow for correlation of observations within PCPs (Cameron and Miller 2015).

The key assumption in this model is that conditional on the covariates, the unanticipated time pressure facing a particular visit is uncorrelated with the unobserved determinants of the outcome. Because of the fixed effects, estimation is driven by within PCP changes in patient flows over time that are not systematically determined by day of week, time of day, or month. While PCPs expect some amount of changes to their schedule on the typical day, we argue that these fixed effects allow us to isolate the impact of schedule changes that are not predictable.
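The specification above can be sketched in any standard regression package. The following is a minimal illustration on synthetic data, not the authors' code; all variable names are hypothetical stand-ins for the EHR fields, and only a subset of the controls and fixed effects is included. Standard errors are clustered at the provider level, as in the paper.

```python
# Illustrative sketch of the visit-level specification (not the authors'
# code): regress an outcome on net schedule-change minutes, patient
# controls, and fixed effects, clustering standard errors by PCP.
# All variable names are hypothetical stand-ins for the EHR fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "pcp": rng.integers(0, 50, n),                   # provider fixed effects
    "dow": rng.integers(0, 5, n),                    # day-of-week fixed effects
    "sched_change_10min": rng.normal(1.7, 3.2, n),   # net change, 10-minute units
    "age": rng.integers(18, 90, n),
    "past_outpatient": rng.integers(0, 6, n),        # past-utilization control
})
# Simulated outcome (e.g., number of diagnoses) with a small negative
# time-pressure effect of the order reported in Table 4.
df["n_diagnoses"] = (3.4 - 0.0065 * df["sched_change_10min"]
                     + 0.01 * df["past_outpatient"] + rng.normal(0, 1.9, n))

# Quartile dummies for the nonlinear specification can be built with qcut.
df["change_quartile"] = pd.qcut(df["sched_change_10min"], 4, labels=False)

model = smf.ols(
    "n_diagnoses ~ sched_change_10min + age + past_outpatient + C(pcp) + C(dow)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["pcp"]})
print(model.params["sched_change_10min"], model.bse["sched_change_10min"])
```

Swapping `sched_change_10min` for `C(change_quartile)` in the formula yields the quartile specification described above.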

We consulted with several PCPs and several clinic managers from the Fairview system to understand how same-day visits are scheduled and allocated across PCPs. The process of accepting same-day patients and allocating those patients to PCPs varies somewhat across clinics and across PCPs. Some clinics accept very few same-day visits while others accept relatively more, and most PCPs allow clinic schedulers to add visits to open slots while relatively few only allow it with the PCP’s specific permission.10 One common theme, however, was a preference to schedule same-day visits with a patient’s existing PCP, and if that PCP was unavailable, to schedule with another PCP in the clinic who had an open slot at the specified time. Therefore, we expect that variation in schedule changes at the PCP level is driven predominantly by demand-side shocks and not by provider preferences that could be correlated with treatment choices.

Our approach explicitly or implicitly controls for the factors that are known to influence the delivery of inappropriate services: clinic norms and environment (Powell, Bloomfield et al. 2013) and provider’s underlying propensity towards inappropriate services (by including PCP fixed effects) (Gidwani, Sinnott et al. 2016), time of visit within the day (Linder, Doctor et al. 2014), and patient health and sociodemographic characteristics (Colla, Morden et al. 2015).

5. Results

5.1. Summary Statistics

Table 1 describes the clinics included in our analysis. The clinics range in size from 6 to 30 providers. The average clinic has 13 providers, including 8 primary care physicians and 2-3 nurse practitioners or physician assistants. In terms of demographics, 82.3 percent of patients treated at the average clinic are non-Hispanic white and 6 percent are covered by Medicaid. Across clinics, the average number of patients seen by each PCP per half-day shift is 8.1, and this number does not vary greatly across clinics, with a standard deviation of 1.

Table 2 lists visit-level summary statistics of our full analytic sample (Appendix Table A1 presents these summary statistics for each condition specific sub-sample). The mean age is 51.8. The mean visit has 3.4 recorded diagnoses (1.7 new to that visit). There are also 1.5 reasons for visit recorded in the average visit. Most visits (77.6%) are for patients who have previously seen the provider in our data. On average the patients have experienced 3.9 outpatient visits in the previous six months and less than 0.5 specialist, inpatient, or ED visits.

Table 2:

Visit-Level Summary Statistics: Full Analytic Sample (N= 2,671,789)

Mean Standard Deviation
Age at visit 51.8 18.4
Number of diagnoses recorded 3.4 1.9
Number of new diagnoses recorded 1.7 1.6
Number of “reasons for visit” recorded 1.5 0.8
Scheduled visit length 24.8 8.9
Charlson index (previous year) 0.4 0.9
% Female 59.7 49.0
% non-Hispanic white 83.1 37.4
% on Medicaid 5.1 22.0
% English-speaking 92.2 26.8
% visit with same provider in past year 77.6 41.7
Number outpatient visits in past six months 3.9 6.5
Number specialist visits in past six months 0.5 4.2
Number inpatient stays in past six months <0.1 0.3
Number ED visits in past six months 0.1 0.6

Note: The analytic sample includes all pre-scheduled visits for patients age 18+. The sample size for the Charlson index is 2,097,487 since patients’ first visits in our data are automatically excluded from this measure.

Figure 2 plots trends in net schedule change minutes over time by calendar quarter. There is clear seasonality, with the most net minutes added to schedules in the first quarter of the year and the fewest in the third quarter of the year. The trend is relatively flat over time, other than a large dip in late 2010 and early 2011.

Figure 2: Trends in Net Schedule Changes.


Notes: Net schedule change minutes are defined as the total scheduled length of same-day visits minus total scheduled length of no-shows and cancellations occurring within a shift.

Figure 3A plots the distribution of shift-level net schedule change minutes. On average, each shift has an addition of 17 net minutes, but there is a great deal of variability. The standard deviation is 34 minutes and there are many shifts with as many as 50-100 net additional minutes. There are also many shifts with net decreases in minutes. Figure 3B shows the distribution after removing provider, day of week, week of month, and month of year fixed effects (and then adding back the sample mean). The remaining variation is what we use in our identification, and we find that there is still a great deal of variability in time pressure shocks. The standard deviation is 31.7 and there are still large within-PCP swings in both directions.
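The fixed-effects adjustment behind Figure 3B can be sketched as follows. This is an illustrative reconstruction on synthetic shift-level data with hypothetical column names, not the authors' code: regress net schedule-change minutes on provider and calendar fixed effects, keep the residuals, and add back the sample mean.

```python
# A minimal sketch, on synthetic shift-level data with hypothetical
# column names, of the Figure 3B adjustment: residualize net schedule-
# change minutes on provider and calendar fixed effects, then add back
# the sample mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
shifts = pd.DataFrame({
    "pcp": rng.integers(0, 40, n),
    "dow": rng.integers(0, 5, n),        # day of week
    "wom": rng.integers(0, 4, n),        # week of month
    "month": rng.integers(1, 13, n),
    "net_change_min": rng.normal(17, 34, n),  # mean/SD as in Figure 3A
})

fe = smf.ols("net_change_min ~ C(pcp) + C(dow) + C(wom) + C(month)",
             data=shifts).fit()
shifts["adj_change"] = fe.resid + shifts["net_change_min"].mean()

# The residual standard deviation is the within-PCP variation that
# identifies the time-pressure effects.
print(shifts["adj_change"].std())
```

By construction, the adjusted series has the same mean as the raw series and a (weakly) smaller standard deviation; in the paper's data the SD falls only modestly, from 34 to 31.7 minutes.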

Figure 3A: Distribution of Net Schedule Changes

Figure 3B: Distribution of Net Schedule Changes, Adjusted for Fixed Effects.


Notes: Net schedule change minutes are defined as the total scheduled length of same-day visits minus total scheduled length of no-shows and cancellations occurring within a shift. Values are adjusted for day of week, week of month, month of year, and provider fixed effects.

While there is clear variation in net schedule changes, if PCPs have sufficient slack in their schedules, changes in net schedule minutes may not reflect actual changes in time pressure. However, as discussed above, the typical PCP sees eight patients per half-day shift. Taken together with the average appointment length of 25 minutes, this suggests PCPs are using most of the typical four-hour shift for patient care. As a one standard deviation shock to scheduled minutes is 31.7 minutes, the variation in our data is large relative to the amount of slack in a PCP’s schedule. In the median shift, 69% of time is scheduled in advance before any schedule changes occur, and at the 25th percentile 56% is scheduled. We take this as evidence that PCPs come into most shifts relatively busy, particularly considering that several studies document that only between 33% and 62% of time in a clinical workday is spent in direct patient care, with the remainder spent on other types of “deskwork” (Chen, Hollenberg et al. 2011, Gottschalk and Flocke 2005, Sinsky, Colligan et al. 2016).

We also calculate the mean net schedule change minutes for shifts above and below the median of advanced scheduling. For shifts above the median time pre-scheduled, the mean of net schedule changes is 13 minutes. For shifts below the median of time pre-scheduled, the mean of net schedule changes is 44 minutes. This suggests that there is meaningful variation in schedule changes during both busy and slower shifts.

5.2. Covariate Balance

The key identifying assumption of our empirical model is that within-PCP net schedule changes are uncorrelated with unobserved characteristics. While this is not directly testable, we provide suggestive evidence in Table 3. This table shows mean patient characteristics separately for visits with “low” and “high” time pressure, based on splitting the distribution of residualized time pressure at the median. We find that visits with low and high time pressure shocks have similar characteristics, including the same number of reasons for visit and similar demographics. We do find that patients with low time pressure are three years older and have a slightly higher Charlson comorbidity index, on average. We suspect this reflects a compositional effect of cancellations and no-shows: patients who cancel or do not show up for their visits are on average five to ten years younger than patients who complete their pre-scheduled visits. Therefore, on days with higher time pressure due to fewer cancellations/no-shows, the patients in our analytic sample of completed pre-scheduled visits are younger than on days with more cancellations/no-shows. As a result, we control for single-year age dummies in all specifications.

Table 3:

Comparison of Low- and High-Time-Pressure Visits

Low Time Pressure (N=1,322,854) | High Time Pressure (N=1,348,935) | Association with Time Pressure (N=2,671,789)
Age at visit: 53.21 (18.52) | 50.35 (18.27) | −0.082*** (0.006)
Number of “reasons for visit” recorded: 1.51 (0.86) | 1.47 (0.83) | −0.002*** (0.000)
Scheduled visit length: 25.16 (9.05) | 24.47 (8.77) | −0.000 (0.001)
Charlson index (previous year): 0.42 (0.98) | 0.37 (0.92) | 0.000 (0.000)
% female: 60.7 (48.8) | 58.8 (49.2) | 0.000** (0.000)
% non-Hispanic white: 83.2 (37.4) | 83.2 (37.4) | −0.000 (0.000)
% on Medicaid: 5.0 (21.8) | 5.2 (22.2) | −0.000 (0.000)
% English-speaking: 92.1 (26.9) | 92.3 (26.7) | −0.000** (0.000)
% visit with same provider in past year: 79.6 (40.3) | 75.6 (43.0) | −0.002*** (0.000)
Number outpatient visits in past six months: 3.954 (6.595) | 3.826 (6.491) | 0.001 (0.002)
Number specialist visits in past six months: 0.598 (4.319) | 0.484 (4.004) | 0.003** (0.001)
Number inpatient stays in past six months: 0.0476 (0.314) | 0.0404 (0.286) | 0.000** (0.000)
Number ED visits in past six months: 0.118 (0.596) | 0.108 (0.555) | 0.000** (0.000)

Note: Values in the first two columns reflect sub-sample means; standard deviations are in parentheses. “Low time pressure” visits (column 1) are those with net schedule changes less than 13.9 minutes, the median value of net schedule changes after adjusting for day of week, week of month, month of year, and clinic-shift fixed effects. “High time pressure” visits (column 2) are those with net schedule changes greater than (or equal to) 14.0 minutes. The third column contains coefficient estimates and standard errors from the results of a linear regression of each row’s variable on time pressures as well as fixed effects for day of week, week of month, month of year, clinic-shift, visit length, hour of day, provider, and patient age (excluding the ‘Age at visit’ outcome). The analytic samples include all pre-scheduled visits for patients age 18+. The sample sizes for the Charlson indices are 1,053,710/1,043,777/2,097,487 for low/high time pressures and associations, respectively, since patients’ first visits in our data are automatically excluded from this measure.

* p < 0.1; ** p < 0.05; *** p < 0.01.

In column 3 of Table 3 we report regression estimates of the correlation between net schedule changes and each patient characteristic, conditional on provider, day of week, week of month, and month of year fixed effects, along with age dummies. We find that some of these coefficients are statistically significant, but the magnitudes are very small. For example, we find that 10 additional net schedule change minutes are correlated with 0.0016 fewer reasons for visit and 0.003 more specialist visits in the past six months. Importantly, the Charlson comorbidity index is not statistically significantly related to net schedule changes. We also conduct this analysis using quartiles of the net schedule change variable and find similar results (Appendix Table A2).

Before turning to regression results, we provide additional graphical analysis showing how unexpected, within-PCP variation in net schedule changes is correlated with both actual outcomes and the portion of outcomes predicted by observed patient characteristics and past utilization. The former provides evidence of the effects of time pressure, and the latter provides a further test of our identifying assumptions.

We first regress each of our outcomes on patient characteristics, past utilization, and their interactions, and then calculate predicted outcomes. These predictions can be interpreted as indices of observed patient characteristics. We then regress actual outcomes, predicted outcomes, and net schedule changes on PCP and time fixed effects and compute residuals. These residuals reflect the variation in outcomes, predicted outcomes, and net schedule changes that is not a function of PCP, seasonality, day of week, time of day, holidays, or scheduled visit length. The residuals of net schedule changes represent the key identifying variation that we exploit. If this variation in time pressure is uncorrelated with the residualized predicted outcomes, it suggests that observed patient characteristics do not vary with time pressure and, therefore, that time pressure is unlikely to be correlated with unobserved patient characteristics either.
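The prediction, residualization, and binning steps can be sketched as follows. This is an illustrative reconstruction on synthetic data with hypothetical names (a single patient characteristic and two fixed effects), not the authors' code.

```python
# Illustrative reconstruction (synthetic data, hypothetical names) of the
# residual-on-residual binned scatter: predict the outcome from patient
# characteristics, residualize outcome, prediction, and schedule changes
# on fixed effects, then take binned means.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "pcp": rng.integers(0, 30, n),
    "dow": rng.integers(0, 5, n),
    "age": rng.integers(18, 90, n),
    "net_change": rng.normal(17, 32, n),
})
df["outcome"] = (3.4 - 0.0006 * df["net_change"] + 0.005 * df["age"]
                 + rng.normal(0, 1.9, n))

# Step 1: index of observed patient characteristics (here just age).
df["predicted"] = smf.ols("outcome ~ age", data=df).fit().fittedvalues

# Step 2: residualize on the fixed effects.
for col in ["outcome", "predicted", "net_change"]:
    df["r_" + col] = smf.ols(col + " ~ C(pcp) + C(dow)", data=df).fit().resid

# Step 3: binned means over 20 equal-sized bins of residualized changes.
# A flat r_predicted profile supports the identifying assumption.
df["bin"] = pd.qcut(df["r_net_change"], 20, labels=False)
binned = df.groupby("bin")[["r_net_change", "r_outcome", "r_predicted"]].mean()
print(binned.head())
```

Plotting `binned["r_outcome"]` and `binned["r_predicted"]` against `binned["r_net_change"]` reproduces the two series in a figure of the Figure 4 type.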

Figure 4 presents binned scatter plots of these relationships for the number of diagnoses and follow-up care outcomes, while Appendix Figures A1 and A2 show similar plots for low-value care, opioid prescribing, and preventive care outcomes. The relationships between residualized schedule changes and predicted outcomes are virtually flat. The exceptions are the number of diagnoses outcomes and Pap tests. The relationship for the number of diagnoses is flat through much of the distribution of schedule change shocks, but large shocks are correlated with slightly lower predicted numbers of diagnoses. For both the number of new diagnoses and Pap tests, schedule changes are correlated with higher predicted outcomes and lower actual outcomes. Therefore, any selection in these latter two outcomes would likely lead us to underestimate the decreases. Overall, we interpret these figures as strong evidence for our research design, especially for outcomes measuring follow-up care and treatment decisions.

Figure 4: Correlation of Observed and Predicted Outcomes with Net Schedule Changes.


Notes: Predicted outcomes are generated from regressions of each outcome on patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), and interactions between demographics and past utilization. Residuals are generated from regressions of net schedule changes, outcomes, and predicted outcomes on fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length.

The relationships between residualized schedule changes and residualized outcomes correspond to the regression results discussed below, which quantify the magnitudes of these relationships. These plots suggest that time pressure in the form of net schedule changes reduces the number of diagnoses and the number of new diagnoses. Additionally, net schedule changes appear to increase scheduled and unscheduled follow-ups and reduce referral orders. There are also slight increases in hospitalization outcomes with more time pressure.

5.3. Regression Results, Diagnoses and Follow-up Care

Figure 5 plots regression coefficients and 95% confidence intervals for the number of diagnoses and follow-up care outcomes. We first plot results that only include PCP and time fixed effects. From left to right, we add single-year age dummies, patient demographics, past utilization variables, and finally interactions of each past utilization variable with age fixed effects and patient demographics. For the number of diagnoses outcomes, we find that point estimates change when we add age dummies and patient demographics, but are stable when we add past utilization and interaction terms. All of the follow-up care estimates, however, are quite stable regardless of specification. Figure 6 presents similar plots using quartiles of net schedule changes and shows that these results are also stable for follow-up care outcomes; most of the change in the number of diagnoses estimates comes from the fourth quartile. This is consistent with the graphical evidence that large net schedule changes are slightly correlated with the predicted number of diagnoses. Overall, the stability of the regression estimates across different sets of controls provides further support for our identification strategy.

Figure 5: Sensitivity of Linear Regression Coefficients to Controls (Diagnoses and Follow-up Care).


Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Figure 6: Sensitivity of Quartile Regression Coefficients to Controls (Diagnoses and Follow-up Care).


Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Coefficient estimates and standard errors of the final specification with all controls are presented in Table 4, with linear estimates in the first column and quartile estimates in subsequent columns. When a visit occurs after 10 net minutes have been added to the PCP’s schedule, the number of diagnoses decreases by 0.0065 and the number of new diagnoses falls by 0.0025. These estimates are statistically significant at the 1% level. Since the average scheduled visit in our data is about 25 minutes long,11 these estimates imply that one additional net patient added to a PCP’s half-day shift decreases these outcomes by about 0.5% and 0.4% of their sample means, respectively. The magnitudes of these effects are small, but do suggest that net schedule changes decrease the number of topics discussed during the visit. When we examine quartiles of the net time change variable, we find a monotonic relationship between net time change quartile and diagnoses recorded. Moving from the first to the fourth quartile reduces the number of diagnoses by 1.7% of the sample mean and new diagnoses by 1% of the sample mean.
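The conversion from the linear coefficient to the quoted relative effect is simple arithmetic:

```python
# Arithmetic behind the quoted magnitudes: the Table 4 linear coefficient
# is per 10 net minutes, so one added 25-minute visit scales it by 2.5,
# and dividing by the sample mean gives the relative effect.
coef_per_10min = -0.00645   # number of diagnoses, Table 4 column 1
mean_outcome = 3.37         # sample mean number of diagnoses
visit_minutes = 25

effect = coef_per_10min * (visit_minutes / 10)  # -0.016 diagnoses
relative = effect / mean_outcome
print(f"{relative:.1%}")    # -> -0.5% of the sample mean
```

The same scaling applied to the new-diagnoses coefficient (−0.00250, mean 1.74) yields approximately −0.4%, matching the text.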

Table 4:

Regression Results, Diagnoses and Follow-up Care

Net Schedule Change (10 minutes) | Quartile 2 | Quartile 3 | Quartile 4 | N | Mean
# of Diagnoses: −0.00645*** (0.0008) | −0.0235*** (0.0047) | −0.0347*** (0.0048) | −0.0589*** (0.0066) | 2,671,789 | 3.37
# of New Diagnoses: −0.00250*** (0.0005) | −0.00877** (0.0035) | −0.0155*** (0.0034) | −0.0182*** (0.0050) | 2,671,789 | 1.74
Scheduled Follow-up (14 days): 0.00016*** (0.00003) | 0.00010*** (0.00033) | 0.00092*** (0.00029) | 0.0012*** (0.00029) | 2,671,789 | 0.02
Unscheduled Follow-up (14 days): 0.00028*** (0.00006) | −0.00127** (0.00051) | 0.00088* (0.00049) | 0.00253*** (0.00053) | 2,671,789 | 0.056
Referral Order: −0.000787*** (0.0001) | −0.00231** (0.0010) | −0.00409*** (0.0010) | −0.00697*** (0.0011) | 2,671,789 | 0.246
Hospital-Based Care (14 days): 0.000036 (0.000022) | 0.000017 (0.00017) | 0.00017 (0.00017) | 0.000414** (0.00018) | 2,671,789 | 0.010
Emergency Department Visit (14 days): 0.00001 (0.00002) | 0.00001 (0.00016) | 0.000004 (0.00015) | 0.00015 (0.00016) | 2,671,789 | 0.007
Inpatient Visit (14 days): 0.000023** (0.00001) | 0.00009 (0.00008) | 0.00008 (0.00008) | 0.000206*** (0.00008) | 2,671,789 | 0.002
Hospital-Based Care (30 days): 0.0000739** (0.0000) | 0.000028 (0.0002) | 0.000484** (0.0002) | 0.000714*** (0.0003) | 2,671,789 | 0.018
Emergency Department Visit (30 days): 0.0000318 (0.0000) | 0.0000261 (0.0002) | 0.000308 (0.0002) | 0.000345 (0.0002) | 2,671,789 | 0.013
Inpatient Visit (30 days): 0.0000303** (0.0000) | 0.0000493 (0.0001) | 0.000114 (0.0001) | 0.000261** (0.0001) | 2,671,789 | 0.004

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and a set of fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

We find that greater time pressure increases both scheduled and unscheduled follow-ups, with both point estimates positive and statistically significant at the 1% level. A one-patient addition increases the likelihood of a scheduled follow-up by 2% relative to the sample mean and increases the likelihood of an unscheduled follow-up by 1.3%. This suggests that increased time pressure both leads PCPs to postpone care to future visits and leads patients to schedule a future visit in response to the care received under time pressure.

For the referral order outcome where theoretical predictions are less clear, our results suggest that on net, added time pressure leads to a small and statistically significant reduction in referral orders. The magnitude of the effect is smaller than for follow-up visits. A one-visit (25 minute) schedule addition reduces the relative likelihood of referral by 0.8%.

When we measure net time change in quartiles, we find nearly monotonic relationships between net time change quartile and the likelihood of follow-up visits and referral. We also find evidence of a nonlinear relationship, where the top quartile of net schedule changes (i.e., most time pressure) is associated with disproportionately larger increases in the outcomes. Being in the top quartile leads to meaningful increases in scheduled and unscheduled follow-ups (relative increases of 6.0% and 4.5%, respectively), and reductions in referrals (2.8%).

We find a small and statistically insignificant effect on 14-day hospital-based care. When separated into ED and inpatient visits, we find statistically significant effects only for inpatient visits. Adding an additional visit leads to a 2.9% relative increase in the likelihood of inpatient admission in the subsequent two weeks. We find a largely monotonic and nonlinear relationship between quartiles and the likelihood of hospital-based care in the subsequent two weeks. Being in the top quartile of net schedule changes is associated with a 4.1% relative increase in the likelihood of any hospital-based care and a 10.3% relative increase in the likelihood of inpatient care. We also examine hospital-based care in the 30 days after an index visit and find similar results in terms of relative effects but with greater precision.

5.4. Low-Value Care, Opioid Prescribing, and Preventive Care Outcomes

We now present estimates for treatment decisions. Appendix Figures A3 and A4 show that the results are quite stable as we sequentially add controls in both the linear and quartile specifications for low-value care, opioid prescribing, and preventive care outcomes. Regression estimates and standard errors for the final specification with all controls are in Table 5. For antibiotic prescriptions, we find no statistically significant effect of net change minutes on inappropriate prescriptions. However, we do find that greater net schedule change minutes increase the use of antibiotic prescriptions that are in the “grey area” of being potentially appropriate. While the linear regression coefficient is not statistically significant, the fourth quartile of net change increases potentially appropriate antibiotic prescribing by 2.2% of the sample mean and is statistically significant at the 10% level. Taken together, the antibiotic results suggest that increased time pressure may increase low-value care, but is more likely to increase care that is marginally low-value than care that is unquestionably low-value.

Table 5:

Regression Results, Low Value Care and Opioid Prescribing

Net Schedule Change (10 minutes) | Quartile 2 | Quartile 3 | Quartile 4 | N | Mean

Antibiotics
Not Appropriate: −0.000978 (0.000917) | −0.017222 (0.011667) | −0.005842 (0.009129) | −0.005465 (0.008664) | 23,421 | 0.462
Potentially Appropriate: 0.001119 (0.000748) | 0.002084 (0.009407) | 0.003159 (0.007571) | 0.012847* (0.007325) | 33,043 | 0.579

Low Back Pain
Any Imaging: −0.000498 (0.000452) | 0.003077 (0.004465) | −0.000954 (0.004164) | −0.00223 (0.003981) | 78,636 | 0.124
X-ray: −0.000562 (0.000431) | 0.002244 (0.004260) | 0.001071 (0.003930) | −0.002742 (0.003740) | 78,636 | 0.109
Advanced Imaging: −0.000039 (0.000193) | 0.000474 (0.001474) | −0.002456* (0.001392) | −0.000113 (0.001392) | 78,636 | 0.018
PT Referral: −0.000302 (0.000244) | −0.002645 (0.002186) | −0.003731* (0.002186) | −0.004593** (0.002147) | 78,636 | 0.0584

Opioid
Opioid-Naïve Only: −0.000342 (0.000219) | −0.004693** (0.002331) | 0.002174 (0.001723) | −0.002151 (0.001845) | 245,639 | 0.11
All Patients: −0.000729*** (0.000242) | −0.001000 (0.002495) | 0.0000924 (0.001927) | −0.005466*** (0.002048) | 320,109 | 0.224

Women’s Health Screening
Mammography: −0.000044 (0.000060) | 0.001052* (0.000552) | −0.000985* (0.000573) | −0.000717 (0.000610) | 741,483 | 0.012
Pap Test: −0.000579*** (0.000167) | 0.000442 (0.001283) | −0.003881*** (0.001273) | −0.008097*** (0.001412) | 639,719 | 0.149

Diabetes Management
HbA1c Test: −0.00008 (0.000490) | 0.001748 (0.003542) | 0.001003 (0.003677) | 0.000128 (0.003981) | 103,130 | 0.231
LDL Test: −0.000179 (0.000390) | 0.004032 (0.003072) | −0.002886 (0.003129) | 0.000719 (0.003261) | 103,130 | 0.15
Referral, Dilated Eye Exam: −0.000065 (0.000236) | −0.001651 (0.002170) | −0.000136 (0.002088) | 0.000542 (0.002020) | 103,130 | 0.051

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and a set of fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

For the low back pain sample, we find that net schedule changes do not affect imaging orders. We also explore physical therapy referrals, which could be considered the high-value alternative to imaging, and find no statistically significant linear effect on these referrals. However, we find a monotonic relationship between quartiles of net schedule changes and the likelihood of physical therapy referral: the upper quartile of net schedule changes is associated with a 7.9% relative reduction in the likelihood of a physical therapy referral.

We find no impact of time pressure on opioids in the sample of chronic pain visits that have not had a recent opioid prescription (“opioid-naïve”), but among all chronic pain patients, we find a reduction in prescriptions in response to increased net schedule change minutes. An additional 25-minute visit would reduce the likelihood of an opioid prescription by 0.8% relative to the sample mean. We find evidence of nonlinearity in this relationship, as the upper quartile of net schedule changes is associated with a 2.4% relative reduction in opioid prescribing.

In contrast to low-value care outcomes, we expect time pressure to unambiguously negatively affect recommended preventive care. We find that increases in net schedule change minutes do lead to relatively small but statistically significant decreases in Pap tests. We do not find a relationship between net schedule change minutes and the other preventive care measures. The relationship between net schedule change quartiles and Pap tests is monotonic, and the upper quartile is associated with a 5.4% relative reduction in Pap tests.

5.5. Robustness Checks

We perform a variety of robustness checks that are shown in the appendix. First, while we show in Table 3 and the balance figures that patient characteristics are similar in visits with different levels of time pressure, we explore this further in a regression framework, focusing on the Charlson index of comorbidities. We use a one-year lagged measure of the Charlson index, since we find that time pressure affects the diagnoses that make up the index. In Table A3 we find no statistically significant relationship between schedule changes and the Charlson index, providing further evidence that patient health is uncorrelated with within-PCP variation in time pressure. We do not include the previous year’s Charlson index as a control variable in our main results because it would require dropping one year of data for each patient, but the results are robust to using this smaller sample and including this control (Table A4).

In Table A5 we consider that schedule add-ons and cancellations may have asymmetric effects. For example, add-on visits may lead PCPs to reduce the care they provide, but new free time due to cancellations could have a smaller impact if PCPs use that newly available time for non-clinical work. For most outcomes where we find an effect in our previous analysis, we find opposite signed coefficients on same-day minutes and cancellation/no-show minutes. This is because same-day minutes increase time pressure and cancellation/no-show minutes decrease time pressure. However, we do not find any obvious patterns of asymmetry in the magnitudes, except for opioids and Pap tests, where we find the effect of net schedule change minutes on overall opioid prescribing and Pap tests to be driven by increases in same-day minutes.

In Tables A6.a and A6.b we examine the low-value care and opioid prescribing results after defining the samples based on reason for visit rather than diagnosis codes. Because we find an effect of time pressure on diagnoses, sample construction using diagnosis codes could be endogenous. Most results are similar when using reason for visit, which is determined before the visit, suggesting that this is not an important limitation.

We also investigate whether the relationship between net schedule changes and our outcomes varies across primary care clinics, and we find substantial heterogeneity (Table A7).12 This suggests that the effects of time pressure that we observe are potentially avoidable and could relate to local staffing, scheduling, or organizational models.

6. Conclusion

We present a novel approach to a question that is at the heart of many current policy discussions about the quantity and quality of health care utilization and how to optimally pay for primary care services. We find that added worktime within a fixed shift reduces the number of diagnoses recorded within a visit. This suggests that our proxy measure for time pressure, within-work shift schedule changes, does matter for a dimension of productivity within a visit, albeit by small amounts. We also find that increases in time pressure lead to modestly increased rates of both scheduled and unscheduled follow-up visits to the same provider, and these increases are more meaningful in magnitude at higher levels of time pressure. This suggests that PCPs cope with time pressure by delaying care to a future visit in the case of scheduled follow-ups, but also that time pressure may be reducing quality or patient-perceived quality in the case of unscheduled follow-ups. These decreases in quality may in fact translate into adverse events, as we do find small increases in hospitalizations following visits with time pressure. We also find that higher levels of time pressure increase some measures of low-value care, such as potentially inappropriate antibiotic prescriptions. We also find that time pressure reduces the provision of recommended preventive care. Many of these effects on quality of care are relatively small, but some are clinically and economically meaningful, particularly at high levels of time pressure.

Our results suggest negative externalities of adding patients to PCP schedules. However, most of our estimates are small in magnitude, suggesting that the tradeoff from increasing time pressure by seeing additional patients in a workshift exists but is unlikely to be very costly. In Appendix 2, we present a simple back-of-the-envelope calculation of the costs, for the other patients seen in the workshift, of adding an additional 25-minute visit to the workshift.13 On a per-patient basis these costs are quite small, and even from the clinic’s perspective the sum of those patient costs falls far short of the added revenue of an additional 25-minute visit. The relatively small magnitudes that we observe are broadly consistent with results from similar research on PCP responses to time pressure that uses a different identification strategy and examines Israeli primary care clinics (Shurtz, Eizenberg et al. 2019). The small magnitude of the results contrasts with prior descriptive research on the relationship between time pressure and health care delivery, though we are not able to assess all outcomes that matter for primary care productivity and quality of care.

Given our empirical strategy, our estimates necessarily reflect partial equilibrium responses to short-term changes in time pressure. Longer-run influences on time pressure, such as changes in demand from increased insurance coverage or changes in supply from the medical training pipeline, may elicit different types of responses and adjustments by providers. However, even in the context of these longer-run changes, providers will likely always face short-run fluctuations in demand. Our results speak to the benefits of potential adjustments to primary care practice organization that might mitigate these types of shocks, such as having some providers float between practice locations to smooth capacity constraints when there is a surge in demand.

Our approach and results suggest other interesting avenues for future research on how time pressure affects provider behavior and productivity. For instance, assessing whether time pressure affects clinicians’ biases and reliance on heuristics, whether different racial and ethnic groups are differentially affected by time pressure, and whether time pressure influences the degree to which providers customize their treatments to specific patients may all be fruitful research questions. Furthermore, understanding how the organizational, staffing, and workflow characteristics of specific clinics moderate the effects of time pressure on individual providers is an important extension of this research, along with understanding the costs associated with practice models that prevent time pressures from affecting provider productivity.

Acknowledgments:

We would like to thank Hannah Neprash, Dan Sacks, Jonathan Skinner, Aaron Sojourner, participants at the 2017 APPAM Fall Research Conference, 2017 IU/UofL/VU Health Economics and Policy Conference, 2018 ASHEcon Conference, 2018 American-European Health Economics Study Group, and seminar participants at the Emory University School of Public Health, IUPUI Department of Economics, and IUPUI Department of Health Management and Policy for helpful comments and suggestions. We are grateful to staff of Fairview Health Services for answering many questions about institutional details, and we are grateful to Gretchen Sieger for assistance in obtaining data. Our paper uses proprietary data from an integrated delivery system through its data sharing agreement with the University of Minnesota. Others may apply for access to the data with the sponsorship of a University of Minnesota affiliated researcher. We can provide information about how other researchers can apply for the data and we can share all of the STATA code that generates the reported results. This research was supported by the National Institutes of Health’s National Center for Advancing Translational Sciences, grant UL1TR002494. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health’s National Center for Advancing Translational Sciences. This research was also supported by a University of Minnesota Academic Health Center Seed Grant (#16.22). Dr. Smith was supported by an AHRQ institutional predoctoral training grant. All errors are our own. No conflicts of interest are reported for any author.

Appendix

Appendix 1: Treatment Decision Sub-Samples

Antibiotic Prescribing

The “never appropriate” sample includes visits with ICD-9 variations of acute nasopharyngitis (common cold), acute bronchitis, acute laryngitis, and/or influenza. The “potentially inappropriate” sample includes visits with ICD-9 variations of sinusitis, pharyngitis, pneumonia, and otitis media. For both outcomes, we exclude patients with a positive rapid strep test result on the same day as the visit or in the preceding week. We exclude visits with a secondary ICD-9 diagnosis for which antibiotics might be appropriate (e.g. a non-respiratory infection). We also exclude patients with certain medical comorbidities (e.g. cancer) that are guideline exclusions (Meeker, Linder et al. 2016). The complete list of ICD-9 codes used to define the ARI outcome denominators is in Appendix Table A10.

Low Back Pain

The sample includes visits with any occurrence of LBP as either the primary or secondary ICD-9 diagnosis code following Jarvik et al. (2012). We exclude patients with conditions that would make them potentially appropriate for imaging according to the National Quality Forum. We exclude patients with previous onset of LBP within six months, which we identify by previous visits from the same patient. We also exclude patients with cancer, trauma, intravenous drug abuse, or neurologic impairment, which we identify based on codes from Schwartz et al. (2014).

Opioid Prescribing

We start with all visits with ICD-9 variations of chronic and acute back pain, chronic joint and musculoskeletal pain, and migraines (Table A11) (Centers for Disease Control and Prevention 2013), and exclude all patients with a cancer diagnosis.

Women’s Health Screening

For mammography, we identify visits for women between 42 and 69 years of age with no prior history of bilateral mastectomy, excluding women who had mammography within two years of the visit (Atlas, Grant et al. 2009). The Pap test outcome measure denominator is women between 21 and 64 years of age with no prior history of hysterectomy who had not received a Pap test within three years of the visit.

Diabetes Management

The sample includes visits with a diabetes diagnosis in the past 12 months, excluding patients with an LDL test, referral for eye exam, or HbA1c test within the past year (Bardach, et al. 2013).

Appendix 2: Back of the Envelope Calculation

To further help interpret the magnitude of our regression coefficients and their policy implications, we do a simple back-of-the-envelope calculation of the cost of added time pressure within a workshift from one additional visit.

First, we assume 25 minutes of added time to the workshift, which is the average visit length in our data. We also assume that 8 patients were already scheduled within the workshift and are affected by the added time, which is the average number of patients per workshift.

Second, we focus on outcomes with a quantifiable cost associated with subsequent health care use: 30-day inpatient use, 30-day emergency department use, scheduled and unscheduled 14-day follow-up visits, and referral orders. We take cost estimates from public reports by the Health Care Cost Institute, which reflect total prices for commercially insured patients (including both insurer and patient payments). We assume that an inpatient admission costs $22,000 on average (https://healthcostinstitute.org/hcci-research/inpatient-admissions-by-diagnosis-in-2018), that an emergency room visit costs $924 on average (https://healthcostinstitute.org/images/pdfs/ARM2019_ER_Posterv2.pdf), and that a primary care visit costs $106 on average (https://healthcostinstitute.org/hcci-research/trends-in-primary-care-visits). The average cost of referral services is harder to assign for several reasons: many specialty referral services cost more than a primary care visit, some likely cost less, and a referral order does not necessarily translate into actual service use. Acknowledging this uncertainty, we assign the cost of a primary care visit ($106) as the expected cost of a referral order.

We present three versions of estimates based on coefficients from Table 4. The first relies on the coefficients from the models that use net time change as a continuous variable in 10-minute increments (multiplied by 2.5 to equal a 25-minute change). The second relies on the coefficients from the models using quartiles of net schedule changes, taking the difference between the third-quartile coefficient (15-30 minutes of net schedule changes) and the second-quartile coefficient (0-15 minutes). The third also relies on the quartile models, but takes the difference between the fourth-quartile coefficient (>30 minutes) and the second-quartile coefficient (0-15 minutes). An added 25 minutes of time per shift could be consistent with either the second or third version of these calculations.

We estimate that the expected costs for the other eight patients in a workshift from adding an additional 25-minute appointment range from $13.05 to $43.12. From the perspective of each individual patient in the workshift, these are small effects, although there may be other valuable outcomes that are not quantified in this exercise. The costs are more meaningful from the perspective of the provider or clinic, but for comparison, the average allowed Medicare cost for a 25-minute outpatient Evaluation and Management appointment is $104.64, which far exceeds our estimates of the expected costs for all eight patients in a workshift.

Version 1. Estimates based on continuous measure of net schedule changes

Outcome | Expected cost per outcome | Coefficient x 2.5 x 8 patients | Total expected cost
30-day inpatient | $22,000 | 0.0006 | $13.20
30-day ED | $924 | 0.00064 | $0.59
14-day planned follow-up | $106 | 0.0032 | $0.34
14-day unplanned follow-up | $106 | 0.0056 | $0.59
Referral | $106 | −0.0158 | −$1.67
Total | | | $13.05

Version 2. Estimates based on difference between third- and second-quartile coefficients of net schedule changes

Outcome | Expected cost per outcome | (3rd quartile coef − 2nd quartile coef) x 8 patients | Total expected cost
30-day inpatient | $22,000 | 0.0006696 | $14.73
30-day ED | $924 | 0.0022096 | $2.04
14-day planned follow-up | $106 | 0.00656 | $0.70
14-day unplanned follow-up | $106 | 0.0172 | $1.82
Referral | $106 | −0.01424 | −$1.51
Total | | | $17.78

Version 3. Estimates based on difference between fourth- and second-quartile coefficients of net schedule changes

Outcome | Expected cost per outcome | (4th quartile coef − 2nd quartile coef) x 8 patients | Total expected cost
30-day inpatient | $22,000 | 0.0018456 | $40.60
30-day ED | $924 | 0.0025056 | $2.32
14-day planned follow-up | $106 | 0.0088 | $0.93
14-day unplanned follow-up | $106 | 0.0304 | $3.22
Referral | $106 | −0.03728 | −$3.95
Total | | | $43.12
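As a check on the three versions above, the totals can be recomputed directly from the reported unit costs and scaled coefficients. The script below is an illustrative sketch (not the Stata code used for the paper); all names are ours, and the numbers are transcribed from the tables above.

```python
# Back-of-the-envelope cost of adding a 25-minute visit to a workshift.
# Each coefficient below already incorporates the x2.5 (25 minutes in
# 10-minute units) and x8 (patients per workshift) scaling shown in the
# table headers, so expected cost = unit cost x scaled coefficient.

UNIT_COSTS = {
    "30-day inpatient": 22_000,
    "30-day ED": 924,
    "14-day planned follow-up": 106,
    "14-day unplanned follow-up": 106,
    "referral": 106,
}

VERSIONS = {
    "continuous (v1)": {
        "30-day inpatient": 0.0006, "30-day ED": 0.00064,
        "14-day planned follow-up": 0.0032,
        "14-day unplanned follow-up": 0.0056, "referral": -0.0158,
    },
    "Q3 - Q2 (v2)": {
        "30-day inpatient": 0.0006696, "30-day ED": 0.0022096,
        "14-day planned follow-up": 0.00656,
        "14-day unplanned follow-up": 0.0172, "referral": -0.01424,
    },
    "Q4 - Q2 (v3)": {
        "30-day inpatient": 0.0018456, "30-day ED": 0.0025056,
        "14-day planned follow-up": 0.0088,
        "14-day unplanned follow-up": 0.0304, "referral": -0.03728,
    },
}

def expected_cost(coefs):
    """Sum of unit cost times scaled coefficient across outcomes."""
    return sum(UNIT_COSTS[k] * v for k, v in coefs.items())

for name, coefs in VERSIONS.items():
    print(f"{name}: ${expected_cost(coefs):.2f}")
# -> continuous (v1): $13.05, Q3 - Q2 (v2): $17.78, Q4 - Q2 (v3): $43.12
```

The totals match the $13.05, $17.78, and $43.12 figures reported in the three versions.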

Appendix Figure A1: Correlation of Observed and Predicted Outcomes with Net Schedule Changes (Low Value Care and Opioid Prescribing).

Notes: Predicted outcomes are generated from regressions of each outcome on patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), and interactions between demographics and past utilization. Residuals are generated from regressions of net schedule changes, outcomes, and predicted outcomes on fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length.
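The residual-on-residual construction described in these notes can be sketched with a single set of fixed effects, for which residualizing is equivalent to subtracting group means. The example below is a minimal illustration with simulated data; the variable names, the single "provider" fixed effect, and the simulated coefficients are all hypothetical, not the paper's actual specification.

```python
import random
from collections import defaultdict

def group_demean(x, groups):
    """Residualize x against one set of fixed effects by subtracting group means."""
    totals, counts = defaultdict(float), defaultdict(int)
    for xi, g in zip(x, groups):
        totals[g] += xi
        counts[g] += 1
    means = {g: totals[g] / counts[g] for g in totals}
    return [xi - means[g] for xi, g in zip(x, groups)]

random.seed(0)
n = 1000
provider = [random.randrange(10) for _ in range(n)]  # illustrative fixed effect
# Simulated net schedule change correlated with the provider fixed effect:
net_change = [random.gauss(provider[i] * 0.5, 1.0) for i in range(n)]
# Simulated outcome with a small negative effect of net schedule changes:
outcome = [-0.006 * net_change[i] + 0.05 * provider[i] + random.gauss(0, 1.0)
           for i in range(n)]

# Residualize both the treatment and the outcome on the fixed effects; the
# slope of residualized outcome on residualized treatment then equals the
# fixed-effects regression coefficient (Frisch-Waugh-Lovell).
rx = group_demean(net_change, provider)
ry = group_demean(outcome, provider)
slope = sum(a * b for a, b in zip(rx, ry)) / sum(a * a for a in rx)
```

Plotting binned means of `ry` against `rx` (plus the outcome mean) yields a figure of the kind described above.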

Appendix Figure A2: Correlation of Observed and Predicted Outcomes with Net Schedule Changes (Preventive Care).

Notes: Predicted outcomes are generated from regressions of each outcome on patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), and interactions between demographics and past utilization. Residuals are generated from regressions of net schedule changes, outcomes, and predicted outcomes on fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length.

Appendix Figure A3: Sensitivity of Linear Regression Coefficients to Controls (Low Value Care and Opioid Prescribing).

Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Appendix Figure A4: Sensitivity of Quartile Regression Coefficients to Controls (Low Value Care and Opioid Prescribing).

Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Appendix Figure A5: Sensitivity of Linear Regression Coefficients to Controls (Preventive Care).

Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Appendix Figure A6: Sensitivity of Quartile Regression Coefficients to Controls (Preventive Care).

Notes: Baseline fixed effects include provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Age fixed effects are single year age dummies. Demographics include race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year. Past utilization includes dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit. Standard errors are clustered at the provider level. Bars represent 95% confidence intervals.

Table A1:

Visit-level Summary Statistics for Analytic Sub-Samples

Sample | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8)
Number of Visits | 23,421 | 33,043 | 78,636 | 245,639 | 320,109 | 741,487 | 639,723 | 103,130
% Prescribed antibiotics | 0.462 (0.499) | 0.580 (0.494) | | | | | |
% Imaging ordered | | | 0.159 (0.366) | | | | |
% Prescribed opioids | | | | 0.113 (0.316) | 0.227 (0.419) | | |
% Mammography ordered | | | | | | 0.0117 (0.107) | |
% Pap test ordered | | | | | | | 0.149 (0.356) |
% HBA1C lab ordered | | | | | | | | 0.231 (0.422)
% LDL test ordered | | | | | | | | 0.150 (0.357)
% Eye dilation test ordered | | | | | | | | 0.0509 (0.220)
Age at visit | 52.83 (17.77) | 48.81 (17.84) | 53.04 (17.72) | 54.36 (17.30) | 54.50 (17.01) | 54.89 (7.821) | 44.07 (12.38) | 63.50 (13.77)
Number of diagnoses | 3.099 (1.876) | 3.146 (1.836) | 3.764 (2.019) | 4.048 (2.080) | 4.066 (2.045) | 3.603 (1.991) | 3.295 (1.937) | 4.258 (1.913)
Number of new diagnoses | 1.627 (1.344) | 1.752 (1.384) | 1.946 (1.575) | 2.026 (1.620) | 1.824 (1.580) | 1.807 (1.628) | 2.072 (1.754) | 1.288 (1.359)
Number of reasons for visit | 1.607 (0.892) | 1.585 (0.881) | 1.619 (0.953) | 1.712 (0.981) | 1.700 (0.972) | 1.517 (0.865) | 1.462 (0.822) | 1.706 (1.015)
Visit length | 23.04 (9.134) | 22.75 (8.647) | 25.26 (8.703) | 25.93 (9.016) | 25.58 (8.848) | 25.12 (9.209) | 24.88 (9.139) | 24.91 (8.595)
% female | 0.598 (0.490) | 0.645 (0.479) | 0.602 (0.489) | 0.629 (0.483) | 0.633 (0.482) | 1 (0) | 1 (0) | 0.498 (0.500)
% non-Hispanic white | 0.852 (0.355) | 0.870 (0.336) | 0.818 (0.386) | 0.856 (0.351) | 0.859 (0.348) | 0.858 (0.349) | 0.805 (0.396) | 0.875 (0.331)
% Medicaid | 0.0579 (0.234) | 0.0605 (0.238) | 0.0723 (0.259) | 0.0564 (0.231) | 0.0739 (0.262) | 0.0585 (0.235) | 0.0735 (0.261) | 0.0387 (0.193)
% No visit in past year | 0.684 (0.465) | 0.658 (0.474) | 0.783 (0.412) | 0.783 (0.412) | 0.821 (0.383) | 0.819 (0.385) | 0.707 (0.455) | 0.917 (0.276)
% English-speaking | 0.952 (0.214) | 0.962 (0.191) | 0.912 (0.283) | 0.951 (0.215) | 0.957 (0.202) | 0.938 (0.242) | 0.913 (0.282) | 0.946 (0.226)
% Scheduled follow-up | 0.0170 (0.129) | 0.0208 (0.143) | 0.0208 (0.143) | 0.0169 (0.129) | 0.0183 (0.134) | 0.0198 (0.139) | 0.0219 (0.146) | 0.0184 (0.134)
% Unscheduled follow-up | 0.0812 (0.273) | 0.0834 (0.276) | 0.0681 (0.252) | 0.0543 (0.227) | 0.0612 (0.240) | 0.0550 (0.228) | 0.0605 (0.238) | 0.0454 (0.208)
% Referral | 0.155 (0.362) | 0.177 (0.381) | 0.406 (0.491) | 0.401 (0.490) | 0.382 (0.486) | 0.280 (0.449) | 0.253 (0.435) | 0.284 (0.451)
% Hospital-based care in 14 days | 0.0192 (0.137) | 0.0227 (0.149) | 0.00806 (0.0894) | 0.0150 (0.121) | 0.0186 (0.135) | 0.00900 (0.0944) | 0.00972 (0.0981) | 0.0130 (0.113)
% ED care in 14 days | 0.0137 (0.116) | 0.0166 (0.128) | 0.00619 (0.0785) | 0.0110 (0.104) | 0.0136 (0.116) | 0.00668 (0.0815) | 0.00775 (0.0877) | 0.00755 (0.0866)
% Inpatient visit in 14 days | 0.00431 (0.0655) | 0.00544 (0.0736) | 0.00163 (0.0403) | 0.00310 (0.0556) | 0.00396 (0.0628) | 0.00186 (0.0431) | 0.00160 (0.0399) | 0.00444 (0.0665)

Note: Values reflect sample means; standard deviations are in parentheses. The analytic samples include all pre-scheduled visits for patients age 18+. Samples are defined as follows:

(1) Upper respiratory tract infection, antibiotics not appropriate

(2) Upper respiratory tract infection, antibiotics potentially appropriate

(3) Uncomplicated low-back pain

(4) Chronic pain diagnosis, only patients with no opioid use in past 6 months

(5) Chronic pain diagnosis, all patients

(6) Women eligible for mammography

(7) Women eligible for pap test

(8) Patients with diabetes

Table A2:

Comparison of Low- and High- Time Pressure Visits, using Quartiles

Variable | Time Pressures Coefficient, Quartile 2 | Time Pressures Coefficient, Quartile 3 | Time Pressures Coefficient, Quartile 4
Age at visit | −0.234*** (0.041) | −0.349*** (0.040) | −0.740*** (0.052)
Number of “reasons for visit” recorded | −0.008** (0.003) | −0.006** (0.003) | −0.017*** (0.005)
Scheduled visit length | 0.032 (0.037) | −0.039*** (0.013) | −0.036* (0.021)
Charlson index (previous year) | −0.003 (0.002) | −0.008 (0.002) | 0.002 (0.002)
Female | −0.001 (0.001) | −0.003 (0.001) | 0.002** (0.001)
non-Hispanic white | 0.002** (0.001) | 0.000 (0.001) | −0.001* (0.001)
Medicaid | −0.001 (0.000) | −0.001 (0.000) | 0.001 (0.000)
English-speaking | 0.002*** (0.001) | −0.000 (0.001) | −0.002*** (0.001)
Visit in past year | −0.001 (0.001) | −0.005*** (0.001) | −0.014*** (0.002)
Number outpatient visits in past six months | −0.016 (0.012) | 0.002 (0.013) | 0.021 (0.017)
Number specialist visits in past six months | −0.000 (0.007) | 0.006 (0.008) | 0.025** (0.010)
Number inpatient stays in past six months | −0.001 (0.006) | 0.000 (0.001) | 0.001** (0.001)
Number ED visits in past six months | −0.001 (0.001) | −0.000 (0.001) | 0.004** (0.002)

Note: Coefficient estimates (standard errors in parentheses) are from a regression of each row’s variable on dummies for each time pressure quartile, relative to Quartile 1, the lowest quartile of schedule change minutes (Q1: < 0 minutes; Q2: 0-15 minutes; Q3: 15-30 minutes; Q4: >30 minutes). The regressions also include fixed effects for day of week, week of month, month of year, clinic-shift, provider, and patient age (excluding the ‘Age at visit’ outcome). The analytic samples include all pre-scheduled visits for patients age 18+. The sample size for the Charlson index is 2,097,487, since patients’ first visits in our data are automatically excluded from this measure.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A3:

Exogeneity Test of Charlson Risk Adjustment Index

 | Lagged Charlson Index | Charlson Index
Model 1: Net schedule change | 0.000259 (0.000254) | 0.000315 (0.000267)
Model 2: From same day visits | 0.000285 (0.000270) | 0.000357 (0.000282)
Model 2: Appointment cancellation and no-show | −0.000138 (0.000349) | −0.000125 (0.000396)
Sample size | 2,120,953 | 2,687,637
Mean | 0.399 | 0.467

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographic characteristics, provider fixed effects and a vector of time fixed effects. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A4:

Sensitivity to Controlling for Charlson Index

 | # of Diagnoses | # of New Diagnoses | Scheduled Follow-up | Unscheduled Follow-up | Referral
No Charlson Index: Net schedule change | −0.0064*** (0.0008) | −0.0015*** (0.0005) | 0.00012*** (0.0000) | 0.0003*** (0.0000) | −0.0008*** (0.0001)
Charlson Index: Net schedule change | −0.0064*** (0.0008) | −0.0015*** (0.0005) | 0.00012*** (0.0000) | 0.0003*** (0.0000) | −0.0008*** (0.0001)
N | 2,097,487 | 2,097,487 | 2,097,487 | 2,097,487 | 2,097,487
Mean | 3.5 | 1.5 | 0.02 | 0.05 | 0.25

Note: Time pressure variables are measured in terms of 10 minutes. In addition to the Charlson Index, all regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses. “Charlson Index” estimates include the patient’s Charlson Index from the previous year as a covariate. “No Charlson Index” estimates use the same sample as the Charlson Index regressions but omit the Charlson Index as a covariate.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A5:

Alternative Specification: Asymmetric Time Pressure Effects

Panel A: All Visits, Follow-up, Referral, Hospital-based Care, and Antibiotics

 | # of Diagnoses | # of New Diagnoses | Unscheduled Follow-up in 14 Days | Any Referral | Hospital-based Care in 14 Days | ED Visit in 14 Days | Inpatient Visit in 14 Days | Antibiotics, Not Appropriate | Antibiotics, Potentially Appropriate
From same day visits | −0.00719*** (0.000802) | −0.00231*** (0.000598) | 0.00031*** (0.00007) | −0.00075*** (0.00013) | 0.00004** (0.00002) | 0.000011 (0.00002) | 0.00002** (0.00001) | −0.001107 (0.00092) | 0.001598** (0.00077)
From cancellation and no show | 0.00310*** (0.00085) | 0.00338*** (0.00060) | −0.00011 (0.00008) | 0.00094*** (0.00016) | −0.000001 (0.000036) | 0.00002 (0.00003) | −0.000029** (0.000016) | −0.00029** (0.001589) | −0.00148** (0.00131)
N | 2,671,789 | 2,671,789 | 2,671,789 | 2,671,789 | 2,671,789 | 2,671,789 | 2,671,789 | 23,421 | 33,043
Mean | 3.37 | 1.74 | 0.06 | 0.25 | 0.01 | 0.007 | 0.002 | 0.46 | 0.58

Panel B: Opioid, Low Back Pain, Mammography, Pap Test, and Diabetes Management

 | Opioid, Naive Only | Opioid, All Patients | LBP Any Imaging | LBP Xray | LBP Advanced Imaging | LBP PT Referral | Mammography | Pap Test | HbA1c Test | LDL Test | Referral, Dilated Eye Exam
From same day visits | −0.0003 (0.0002) | −0.0008*** (0.0003) | −0.00049 (0.00048) | −0.00057 (0.00046) | −0.00001 (0.00019) | −0.00038 (0.00025) | −0.00006 (0.00007) | −0.00071*** (0.00018) | −0.00031 (0.00053) | −0.00041 (0.00043) | −0.0002 (0.0002)
From cancellation and no show | 0.0004 (0.0003) | 0.0004 (0.0004) | 0.0005 (0.0006) | 0.00054 (0.0006) | 0.00002 (0.00027) | −0.00001 (0.0004) | −0.00003 (0.00008) | −0.00002 (0.00023) | −0.00088 (0.00074) | −0.00078 (0.0006) | −0.0005 (0.0004)
N | 244,639 | 320,109 | 78,636 | 78,636 | 78,636 | 78,636 | 741,483 | 639,719 | 103,130 | 103,130 | 103,130
Mean | 0.11 | 0.22 | 0.159 | 0.142 | 0.0212 | 0.058 | 0.0119 | 0.149 | 0.231 | 0.150 | 0.0509

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A6.a:

Regression Results for Low-Value Care and Opioid Prescribing: Any “Reason for Visit” Denominator

 | Antibiotics, Not Appropriate | Antibiotics, Potentially Appropriate | Opioid, Naive Only | Opioid, All Patients | LBP Any Imaging | LBP Xray | LBP Advanced Imaging | LBP PT Referral
Net schedule change | −0.000267 (0.0007) | 0.00124 (0.001) | −0.0004 (0.0003) | −0.0010*** (0.0003) | −0.000329 (0.000682) | −0.000322 (0.000671) | −0.000101 (0.000259) | −0.000300 (0.000403)
N | 41,344 | 18,107 | 106,172 | 133,336 | 28,386 | 28,386 | 28,386 | 28,386
Mean | 0.35 | 0.66 | 0.109 | 0.21 | 0.198 | 0.177 | 0.0253 | 0.0683

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A6.b:

Regression Results for Low-Value Care and Opioid Prescribing: All “Reason for Visit” Denominator

 | Antibiotics, Not Appropriate | Antibiotics, Potentially Appropriate | Opioid, Naive Only | Opioid, All Patients | LBP Any Imaging | LBP Xray | LBP Advanced Imaging | LBP PT Referral
Net schedule change | 0.0008 (0.0009) | −0.0001 (0.0012) | −0.0002 (0.0005) | −0.0007 (0.0005) | 0.000561 (0.00107) | 0.000765 (0.00108) | −0.000312 (0.000405) | −0.000390 (0.000640)
N | 20,849 | 11,045 | 43,518 | 55,320 | 13,673 | 13,673 | 13,673 | 13,673
Mean | 0.42 | 0.67 | 0.12 | 0.23 | 0.217 | 0.191 | 0.0324 | 0.0696

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01.

Table A7:

Clinic-Level Heterogeneity

 | # of Diagnoses | # of New Diagnoses | Scheduled Follow-up | Unscheduled Follow-up | Referral
Main estimate | −0.00645*** | −0.00250*** | 0.00016*** | 0.00028*** | −0.00079***
Clinic-specific estimates:
25th percentile | −0.00774 | −0.00406 | 0.000047 | 0.000038 | −0.00104
Median | −0.00462 | −0.00258 | 0.000135 | 0.000278 | −0.00053
75th percentile | −0.00016 | −0.00102 | 0.000303 | 0.000562 | −0.00029
P-value, F-test of joint significance of interactions between clinic dummies and net schedule changes | 0.000 | 0.006 | 0.143 | 0.115 | 0.001

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* p < 0.1; ** p < 0.05; *** p < 0.01. Clinic-specific estimates report the 25th, 50th, and 75th percentiles of clinic-specific effects of net schedule changes implied by a regression that includes interactions between net schedule changes and dummies for each clinic.

Table A8:

Controlling for Double-Booked Appointments, Full Sample

 | Net Schedule Change (10 minutes) | Double-booked Appt. | N | Mean
# of Diagnoses | −0.00624*** (0.0008) | −0.10879*** (0.0129) | 2,671,789 | 3.37
# of New Diagnoses | −0.00229*** (0.0006) | −0.05764*** (0.0094) | 2,671,789 | 1.74
Scheduled Follow-up, 14 days | 0.00017*** (0.00003) | 0.00076 (0.00051) | 2,671,789 | 0.02
Unscheduled Follow-up, 14 days | 0.000302*** (0.0001) | −0.000538 (0.0007) | 2,671,789 | 0.06
Referral Order | −0.00075*** (0.0001) | −0.00879*** (0.0012) | 2,671,789 | 0.25
Hospital-Based Care, 14 days | 0.00004* (0.0000) | 0.0003 (0.0003) | 2,671,789 | 0.01
Emergency Department Visit, 14 days | 0.00000 (0.0000) | 0.00018 (0.0002) | 2,671,789 | 0.007
Inpatient Visit, 14 days | 0.000022** (0.00001) | 0.00017 (0.0001) | 2,671,789 | 0.002
Hospital-Based Care, 30 days | 0.00008*** (0.00003) | −0.00041 (0.00036) | 2,671,789 | 0.02
Emergency Department Visit, 30 days | 0.00004 (0.00003) | −0.00036 (0.00033) | 2,671,789 | 0.02
Inpatient Visit, 30 days | −0.0000** (0.0000) | −0.0000 (0.0002) | 2,671,789 | 0.004

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, outpatient visits, ED visits, and specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and fixed effects for provider, month by year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors clustered at the provider level are in parentheses.

* if p < 0.1, ** if p < 0.05, and *** if p < 0.01.
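The table note above states that standard errors are clustered at the provider level. As a hedged illustration only (synthetic data and made-up parameter values, not the authors' code or specification), the cluster-robust "sandwich" variance estimator can be computed by hand as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated visit-level data: an outcome regressed on a "net schedule change"
# measure, with errors correlated within provider. The paper's actual
# specification also includes many controls and fixed effects omitted here.
n_providers, visits_per = 50, 40
provider = np.repeat(np.arange(n_providers), visits_per)
x = rng.normal(size=provider.size)
provider_shock = rng.normal(size=n_providers)[provider]  # within-cluster correlation
y = 1.0 - 0.006 * x + provider_shock + rng.normal(size=provider.size)

# OLS coefficients.
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Cluster-robust variance: sum the score outer products provider by provider.
meat = np.zeros((2, 2))
for g in range(n_providers):
    Xg = X[provider == g]
    ug = resid[provider == g]
    score = Xg.T @ ug
    meat += np.outer(score, score)
V = XtX_inv @ meat @ XtX_inv
se_clustered = np.sqrt(np.diag(V))
```

Because the provider-level shock makes errors correlated within clusters, the clustered standard error on the intercept comes out much larger than the conventional (i.i.d.) one, which is why clustering matters here.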

Table A9:

Controlling for Double-Booked Appointments, Low Value Care and Opioid Prescribing

Outcome                          Net Schedule Change (10 minutes)   Double-booked appt.        N         Mean
Antibiotics
  Not Appropriate                −0.000905 (0.000915)               0.00301 (0.01534)          23,421    0.46
  Potentially Appropriate        0.001209 (0.000758)                0.007947 (0.01437)         33,043    0.58
Low Back Pain
  Any Imaging                    −0.000537 (0.00046)                0.009268 (0.006244)        78,636    0.159
  X-ray                          −0.00061 (0.00044)                 0.009062 (0.005755)        78,636    0.142
  Advanced Imaging               −0.00001 (0.00020)                 −0.000412 (0.002686)       78,636    0.021
  PT Referral                    −0.000257 (0.000248)               −0.002901 (0.00322)        78,636    0.0584
Opioid
  Opioid-Naïve Only              −0.000336 (0.000220)               0.00229** (0.00340)        245,639   0.11
  All Patients                   −0.000748*** (0.000242)            0.004196 (0.003947)        320,109   0.22
Women’s Health Screening
  Mammography                    −0.000048 (0.000060)               −0.00001 (0.00099)         741,483   0.012
  Pap Test                       −0.000574*** (0.00017)             −0.016703*** (0.004618)    639,719   0.149
Diabetes Management
  HbA1c Test                     −0.00014 (0.000490)                −0.00489 (0.00734)         103,130   0.231
  LDL Test                       −0.00009 (0.000390)                −0.01794*** (0.00619)      103,130   0.15
  Referral, Dilated Eye Exam     −0.00008 (0.00023)                 0.003433 (0.003595)        103,130   0.051

Note: Time pressure variables are measured in terms of 10 minutes. All regressions include controls for patient demographics (age dummies, race/ethnicity, sex, language, insurance type, number of reasons for visit, whether the patient had seen the PCP within the past year), past utilization (dummy variables measuring the number of inpatient visits, the number of outpatient visits, the number of ED visits, and the number of specialist visits in the six months prior to the visit), interactions between demographics and past utilization, and a set of fixed effects for provider, month-by-year pairs, week of month, day of week, holidays, hour of day in which the visit is scheduled, and scheduled visit length. Standard errors, clustered at the provider level, are in parentheses.

* if p < 0.1, ** if p < 0.05, and *** if p < 0.01.

Table A10:

ICD-9 Codes Used to Create Acute Upper Respiratory Tract Infection Outcome Denominator

Diagnosis codes indicating ARI for which antibiotics are never appropriate (inclusion in ‘never appropriate’ denominator): 460, 464, 464.0, 464.00, 464.1, 464.10, 464.2, 464.20, 464.4, 464.50, 465, 465.0, 465.8, 465.9, 466, 466.0, 466.1, 466.11, 466.19, 487, 487.1, 487.8, 490
Diagnosis codes indicating ARI for which antibiotics are potentially appropriate (inclusion in ‘potentially appropriate’ denominator): 034.0, 033.9, 041.2, 041.3, 041.5, 381.01, 381.4, 382.00, 382.01, 382.4, 382.9, 384.0, 386.33, 388.60, 461, 461.0, 461.2, 461.3, 461.8, 461.1, 461.9, 462, 463, 464.01, 464.11, 464.21, 464.3, 464.30, 464.31, 464.51, 475, 481, 482, 482.0, 482.1, 482.2, 482.3, 482.4, 482.40, 482.41, 482.42, 482.49, 482.8, 482.9, 484.3, 485, 486, 487.0
Diagnosis codes indicating non-acute upper respiratory infection for which antibiotics are appropriate (exclusion): 006.2, 008.43, 008.45, 008.49, 009.0, 009.2, 009.3, 026.1, 031.8, 031.9, 035, 038.0, 038.10, 038.4, 041.01, 041.05, 041.1, 041.10, 041.11, 041.12, 041.6, 041.7, 041.84, 041.85, 041.86, 079.88, 079.98, 083.0, 088.81, 097.1, 098.0, 098.12, 098.15, 098.32, 098.33, 098.89, 099.4, 099.50, 099.9, 130.0, 131.01, 131.03, 131.9, 322.9, 323.82, 323.9, 324.0, 357.0, 373.13, 376.02, 380.11, 421.0, 451.9, 522.4, 522.5, 523.3, 523.30, 523.41, 527.2, 528.3, 540, 540.0, 540.9, 555.9, 562.11, 566, 567.2, 567.22, 567.29, 567.31, 567.38, 567.9, 569.71, 572.0, 574, 574.0, 574.1, 574.10, 574.11, 574.3, 574.30, 574.4, 574.40, 575.0, 576.1, 590.10, 590.11, 590.3, 590.8, 590.80, 595.0, 595.2, 595.4, 595.9, 597.8, 597.80, 598.0, 601.0, 601.1, 601.4, 601.9, 603.1, 604.9, 604.90, 604.91, 604.99, 614.5, 614.9, 616, 616.0, 616.1, 616.10, 616.11, 616.2, 616.4, 646.6, 646.60, 646.64, 647.9, 647.90, 658.4, 658.40, 658.41, 675.1, 675.10, 675.14, 675.9, 675.90, 675.91, 675.94, 680.0, 680.1, 680.2, 680.3, 680.4, 680.5, 680.6, 680.8, 680.9, 681.00, 681.02, 681.10, 681.11, 681.9, 682.0, 682.1, 682.2, 682.3, 682.4, 682.5, 682.6, 682.7, 682.8, 682.9, 684, 685, 685.0, 685.1, 686.9, 711.01, 711.03, 711.08, 711.84, 711.86, 711.89, 711.95, 711.97, 711.98, 728.0, 730, 730.0, 730.03, 730.06, 730.12, 730.13, 730.16, 730.18, 730.2, 730.20, 730.21, 730.22, 730.23, 730.25, 730.26, 730.27, 730.28, 730.29, 730.36, 730.9, 730.90, 730.97, 730.98, 731.1, 771.5, 771.81, 771.82, 771.89, 790.7, 911.7, 914.1, 915.1, 915.3, 915.9, 919.5, 997.31, 998.59, 999.31, V02.51, V02.54
Diagnosis codes indicating medical comorbidities (exclusion): 042, 079.53, 135, 162, 162.0, 162.2, 162.3, 162.4, 162.5, 162.8, 162.9, 199, 199.0, 239, 239.8, 239.9, 288, 288.0, 456.0, 456.1, 456.2, 456.20, 456.21, 491, 491.0, 491.1, 491.2, 491.20, 491.21, 491.22, 491.8, 491.9, 492, 492.0, 492.8, 493, 493.0, 493.00, 493.01, 493.02, 493.1, 493.10, 493.11, 493.12, 493.20, 493.21, 493.22, 493.8, 493.81, 493.82, 493.9, 493.90, 493.91, 493.92, 494, 494.0, 494.1, 495, 495.0, 495.1, 495.2, 495.3, 495.4, 495.5, 495.6, 495.7, 495.8, 495.9, 496, 500, 501, 502, 503, 504, 506, 506.0, 506.1, 506.2, 506.3, 506.4, 506.9, 507.8, 508.9, 515, 516, 516.0, 516.1, 516.2, 516.3, 516.8, 516.9, 517, 517.1, 517.2, 517.3, 517.8, 518.1, 518.2, 518.3, 519.11, 519.8, 572.2, 572.3, 572.4, 572.8, 714.81, 748, 748.3, 748.4, 748.5, 748.6, 748.60, 748.61, 748.69, 748.8, 748.9, 770.2, 795.71, 799.1, 996.81, 996.82, 996.83, 996.84, 996.85, E878.0, V08, V10, V42, V42.0, V42.1, V42.6, V42.7, V42.8, V42.9, V58.11

Note: This list of diagnosis codes is borrowed from Meeker et al. (2016).
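To illustrate how code lists like those in Table A10 can drive the outcome denominators, here is a hypothetical sketch. The code subsets are small illustrative slices of the full table, and the precedence between categories is our assumption, not the paper's exact algorithm:

```python
# Illustrative subsets of the Table A10 code lists (the full sets appear above).
NEVER_APPROPRIATE = {"460", "464.0", "465.9", "466.0", "487.1", "490"}
POTENTIALLY_APPROPRIATE = {"034.0", "461.9", "462", "463", "486"}
# Non-ARI bacterial diagnoses and medical comorbidities trigger exclusion.
EXCLUDE = {"042", "493.90", "496", "540.9"}

def classify_visit(dx_codes):
    """Assign a visit to an antibiotic-appropriateness denominator, if any.

    Exclusions are checked first; the ordering of the remaining two checks
    is an assumption made for this sketch.
    """
    codes = set(dx_codes)
    if codes & EXCLUDE:
        return None  # dropped from both denominators
    if codes & POTENTIALLY_APPROPRIATE:
        return "potentially_appropriate"
    if codes & NEVER_APPROPRIATE:
        return "never_appropriate"
    return None  # no qualifying ARI diagnosis
```

For example, a visit coded 460 (common cold) alone would fall in the "never appropriate" denominator, while adding 496 (COPD, a comorbidity) would exclude it entirely.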

Table A11:

ICD-9 Diagnoses Codes Used to Create Opioid Prescribing Denominator

Chronic pain: 338.21, 338.22, 338.28, 338.29, 338.4, 346.0, 346.1, 346.2, 346.3, 346.4, 346.5, 346.6, 346.7, 346.8, 346.9, 307.81, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729
Back pain, acute or chronic: 307.89, 721.2, 721.3, 724.2, 724.4, 724.5, 724.6, 724.7, 724.8, 846, 846.0, 846.1, 846.2, 846.3, 846.8, 846.9, 847, 847.2, 847.4, 847.9

Note: Borrowed directly from Centers for Disease Control and Prevention, Guide to ICD-9-CM and ICD-10 Codes Related to Poisoning and Pain. Atlanta, GA. 2013.

Footnotes

Declarations of interest for all authors: None.

1. Tai-Seale and McGuire assume that topics arise in descending order of importance (i.e., expected value) throughout the visit. We briefly discuss situations where patients delay talking about important topics below.

2. It is also possible that in addition to affecting quality of care through time allocated per topic, time pressure could affect the quality of care per topic to the extent that it increases levels of distraction or stress for the provider.

3. We drop a small number of visits that are not with a physician, nurse practitioner, or physician assistant, such as visits with a pharmacist or resident.

4. We cannot directly assess whether schedules are binding such that net schedule increases cause time pressure to increase. As such, we cannot rule out that effects of net schedule increases may also reflect fatigue from more appointments in a shift rather than time pressure, per se. We are grateful to an anonymous referee for noting this.

5. Although our analytic sample only includes non-same-day visits, we do use information on same-day visits to construct our measures of time pressure.

6. The results were similar when we examined a 7-day follow-up period.

7. We might underestimate the impact of time pressure on hospitalizations and ED visits if patients respond to perceived lower-quality care driven by time pressure within the Fairview system by seeking follow-up care outside of the system.

8. Clinical guidelines for Pap testing changed in 2013. Results are insensitive to dropping observations from 2014 and 2015.

9. These include dummies for any inpatient visits; one, two to four, or five or more outpatient visits; any emergency department visits; and any specialist visits in the past six months. Because we only observe patients following their first primary care visit within the data window, we interact these variables with a dummy indicating the observation is within the first six months.
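A minimal sketch of how such binned and interacted past-utilization dummies might be constructed; the function name, bin labels, and exact handling are our assumptions, not the authors' code:

```python
def utilization_dummies(n_inpatient, n_outpatient, n_ed, n_specialist, in_first_6mo):
    """Build the footnote's past-utilization dummies for one visit (hypothetical sketch)."""
    d = {
        "any_inpatient": int(n_inpatient > 0),
        "outpt_1": int(n_outpatient == 1),
        "outpt_2_4": int(2 <= n_outpatient <= 4),
        "outpt_5plus": int(n_outpatient >= 5),
        "any_ed": int(n_ed > 0),
        "any_specialist": int(n_specialist > 0),
    }
    # Per the footnote, each dummy is interacted with an indicator for the
    # first six months of observed data, where prior utilization is only
    # partially observed.
    d.update({f"{k}_x_first6mo": v * int(in_first_6mo) for k, v in d.items()})
    return d
```

A visit with three prior outpatient visits and one ED visit in the first six months would, for instance, set `outpt_2_4`, `any_ed`, and their `_x_first6mo` interactions to 1.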

10. We also learned that a small number of clinics have a small number of PCPs with either fully or nearly open schedules to accommodate walk-in appointments. We identified the PCPs in our data with >75% of appointments that were same-day scheduled and ran a sensitivity check excluding these PCPs. Very few PCPs met this criterion, consistent with what the clinic managers told us, and the results were insensitive to excluding these PCPs.

11. Given that the average net schedule change is 17 minutes and the standard deviation of residualized schedule changes is 31.7 minutes, a 25-minute change is a common occurrence.
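A quick back-of-envelope check of this claim, under a normality assumption that is ours rather than the authors':

```python
import math

# Treat net schedule changes as roughly normal with mean 17 minutes and
# SD 31.7 minutes (the values reported in the footnote; normality is an
# assumption made only for this sketch).
mean, sd = 17.0, 31.7
z = (25 - mean) / sd
# P(change >= 25 minutes) under the normal approximation.
share = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

Under this approximation roughly 40% of visits would see a net schedule change of 25 minutes or more, consistent with calling such changes common.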

12. We are grateful to an anonymous referee for this suggestion.

13. We are grateful to an anonymous referee for this suggestion.


Contributor Information

Seth Freedman, Indiana University.

Ezra Golberstein, University of Minnesota.

Tsan-Yao Huang, University of Minnesota.

David Satin, University of Minnesota.

Laura Barrie Smith, Urban Institute.

References

  1. Atlas SJ, Grant RW, Ferris TG, Chang Y and Barry MJ (2009). "Patient-physician connectedness and quality of primary care." Annals of Internal Medicine 150(5): 325–335.
  2. Bardach NS, Wang JJ, De Leon SF, Shih SC, Boscardin WJ, Goldman LE and Dudley RA (2013). "Effect of pay-for-performance incentives on quality of care in small practices with electronic health records: a randomized trial." JAMA 310(10): 1051–1059.
  3. Bodenheimer T and Pham HH (2010). "Primary care: current problems and proposed solutions." Health Affairs 29: 799–805.
  4. Cameron AC and Miller DL (2015). "A Practitioner's Guide to Cluster-Robust Inference." Journal of Human Resources 50(2): 317–372.
  5. Centers for Disease Control and Prevention (2013). "Prescription Drug and Overdose Data & Statistics: Guide to ICD-9-CM and ICD-10 Codes Related to Poisoning and Pain." https://www.cdc.gov/drugoverdose/pdf/pdo_guide_to_icd-9-cm_and_icd-10_codes-a.pdf
  6. Chan DC (2018). "The Efficiency of Slacking Off: Evidence from the Emergency Department." Econometrica 86(3): 997–1030.
  7. Chandra A, Cutler D and Song Z (2012). "Who Ordered That? The Economics of Treatment Choices in Medical Care." In Pauly MV, McGuire TG and Barros PP (eds.), Handbook of Health Economics, Volume 2. North-Holland: 397–432.
  8. Chen LM, Farwell WR and Jha AK (2009). "Primary care visit duration and quality: does good care take longer?" Archives of Internal Medicine 169: 1866–1872.
  9. Chen MA, Hollenberg JP, Michelen W, Peterson JC and Casalino LP (2011). "Patient Care Outside of Office Visits: A Primary Care Physician Time Study." Journal of General Internal Medicine 26(1): 58–63.
  10. Christianson JB, Warrick LH, Finch M and Jonas W (2012). Physician Communication with Patients: Research Findings and Challenges. Ann Arbor, University of Michigan Press.
  11. Colla CH, Morden NE, Sequist TD, Schpero WL and Rosenthal MB (2015). "Choosing wisely: prevalence and correlates of low-value health care services in the United States." Journal of General Internal Medicine 30(2): 221–228.
  12. Eaton J, Reed D, Angstman KB, Thomas K, North F, Stroebel R, Tulledge-Scheitel SM and Chaudhry R (2012). "Effect of visit length and a clinical decision support tool on abdominal aortic aneurysm screening rates in a primary care practice." Journal of Evaluation in Clinical Practice 18: 593–598.
  13. Ellis RP and McGuire TG (1986). "Provider Behavior under Prospective Reimbursement: Cost-Sharing and Supply." Journal of Health Economics 5(2): 129–151.
  14. Frakes MD and Wasserman MF (2017). "Is the Time Allocated to Review Patent Applications Inducing Examiners to Grant Invalid Patents? Evidence from Microlevel Application Data." Review of Economics and Statistics 99(3): 550–563.
  15. Freedman S (2016). "Capacity and Utilization in Health Care: The Effect of Empty Beds on Neonatal Intensive Care Admission." American Economic Journal: Economic Policy 8(2): 154–185.
  16. Gidwani R, Sinnott P, Avoundjian T, Lo J, Asch SM and Barnett PG (2016). "Inappropriate ordering of lumbar spine magnetic resonance imaging: are providers Choosing Wisely?" American Journal of Managed Care 22(2): e68–76.
  17. Glied S (1998). "Too little time? The recognition and treatment of mental health problems in primary care." Health Services Research 33: 891–910.
  18. Gottschalk A and Flocke SA (2005). "Time Spent in Face-to-Face Patient Care and Work Outside the Examination Room." Annals of Family Medicine 3: 488–493.
  19. Harris MC, Liu Y and McCarthy I (2020). "Capacity Constraints and Time Allocation in Public Health Clinics." Health Economics 29(3): 324–336.
  20. Holmstrom B and Milgrom P (1991). "Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design." Journal of Law, Economics, & Organization 7: 24–52.
  21. Jarvik JG, Comstock BA, Bresnahan BW, Nedeljkovic SS, Nerenz DR, Bauer Z, Avins AL, James K, Turner JA, Heagerty P, Kessler L, Friedly JL, Sullivan SD and Deyo RA (2012). "Study protocol: the Back Pain Outcomes using Longitudinal Data (BOLD) registry." BMC Musculoskeletal Disorders 13: 64.
  22. Koven S (2016). "The Doctor's New Dilemma." New England Journal of Medicine 374(7): 608–609.
  23. Linder JA, Doctor JN, Friedberg MW, Reyes Nieva H, Birks C, Meeker D and Fox CR (2014). "Time of day and the decision to prescribe antibiotics." JAMA Internal Medicine 174: 2029–2031.
  24. Linder JA, Singer DE and Stafford RS (2003). "Association between antibiotic prescribing and visit duration in adults with upper respiratory tract infections." Clinical Therapeutics 25: 2419–2430.
  25. Linzer M (2009). "Working Conditions in Primary Care: Physician Reactions and Care Quality." Annals of Internal Medicine 151: 28.
  26. Linzer M, Bitton A, Tu S-P, Plews-Ogan M, Horowitz KR and Schwartz MD (2015). "The End of the 15-20 Minute Primary Care Visit." Journal of General Internal Medicine 30: 1584–1586.
  27. Lyu H, Xu T, Brotman D, Mayer-Blackwell B, Cooper M, Daniel M, Wick EC, Saini V, Brownlee S and Makary MA (2017). "Overtreatment in the United States." PLoS One 12(9): e0181970.
  28. Ma CTA (1994). "Health Care Payment Systems: Cost and Quality Incentives." Journal of Economics & Management Strategy 3(1): 93–112.
  29. Ma CTA and McGuire TG (1997). "Optimal health insurance and provider payment." American Economic Review 87(4): 685–704.
  30. Meeker D, Linder JA, Fox CR, Friedberg MW, Persell SD, Goldstein NJ, Knight TK, Hay JW and Doctor JN (2016). "Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial." JAMA 315(6): 562–570.
  31. Neprash HT and Barnett ML (2019). "Association of Primary Care Clinic Appointment Time with Opioid Prescribing." JAMA Network Open 2(8): e1910373.
  32. Østbye T, Yarnall KSH, Krause KM, Pollak KI, Gradison M and Michener JL (2005). "Is there time for management of patients with chronic diseases in primary care?" Annals of Family Medicine 3: 209–214.
  33. Powell AA, Bloomfield HE, Burgess DJ, Wilt TJ and Partin MR (2013). "A conceptual framework for understanding and reducing overuse by primary care providers." Medical Care Research and Review 70: 451–472.
  34. Schmitt MR, Miller MJ, Harrison DL and Touchet BK (2010). "Relationship of depression screening and physician office visit duration in a national sample." Psychiatric Services 61(11): 1126–1131.
  35. Schwartz AL, Landon BE, Elshaug AG, Chernew ME and McWilliams JM (2014). "Measuring low-value care in Medicare." JAMA Internal Medicine 174(7): 1067–1076.
  36. Sears ED, Caverly TJ, Kullgren JT, Fagerlin A, Zikmund-Fisher BJ, Prenovost K and Kerr EA (2016). "Clinicians' Perceptions of Barriers to Avoiding Inappropriate Imaging for Low Back Pain: Knowing Is Not Enough." JAMA Internal Medicine.
  37. Shetty KD, Meeker D, Schneider EC, Hussey PS and Damberg CL (2015). "Evaluating the feasibility and utility of translating Choosing Wisely recommendations into e-Measures." Healthcare (Amsterdam) 3(1): 24–37.
  38. Shurtz I, Eizenberg A, Alkalay A and Lahad A (2019). "Physician workload and treatment choice: the case of primary care." Working paper.
  39. Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, Westbrook J, Tutty M and Blike G (2016). "Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties." Annals of Internal Medicine 165: 753–760.
  40. Sirovich BE, Woloshin S and Schwartz LM (2011). "Too Little? Too Much? Primary care physicians' views on US health care: a brief report." Archives of Internal Medicine 171(17): 1582–1585.
  41. Tai-Seale M and McGuire T (2012). "Time is up: increasing shadow price of time in primary-care office visits." Health Economics 21: 457–476.
  42. Tai-Seale M, McGuire TG and Zhang W (2007). "Time allocation in primary care office visits." Health Services Research 42: 1871–1894.
  43. The ABIM Foundation (2014). Unnecessary Tests and Procedures in the Health Care System: What Physicians Say About the Problem, the Causes, and the Solutions.
  44. Tsiga E, Panagopoulou E, Sevdalis N, Montgomery A and Benos A (2013). "The influence of time pressure on adherence to guidelines in primary care: an experimental study." BMJ Open 3: e002700.
  45. USP (2019). "USP Drug Classification." http://www.usp.org/health-quality-safety/usp-drug-classification-system
  46. Yarnall KSH, Pollak KI, Østbye T, Krause KM and Michener JL (2003). "Primary Care: Is There Enough Time for Prevention?" American Journal of Public Health 93: 635–641.
