Author manuscript; available in PMC: 2021 Jul 16.
Published in final edited form as: Health Econ. 2020 Jan 30;29(3):306–323. doi: 10.1002/hec.3982

Priority access to health care: Evidence from an exogenous policy shock

Christine A Yee 1,2, Aaron Legler 1,3, Michael Davies 4, Julia Prentice 5,6, Steven Pizer 1,3
PMCID: PMC8284942  NIHMSID: NIHMS1718529  PMID: 31999884

Abstract

Access to care is an important issue in public health care systems. Unlike private systems, in which price equilibrates supply and demand, public systems often ration medical services through wait times. Access that is given on a first come, first served basis might not yield an allocation of resources that maximizes the health of a population, potentially creating suboptimal heterogeneity in wait times. In this study, we examine an access disparity between two groups of patients—established patients and new patients. We exploit an exogenous policy change—implemented by the U.S. Veterans Health Administration—that removed the disparity and homogenized the wait time. We find strong evidence that without such a policy, established patients have priority access over new patients. We discuss whether this is a suboptimal allocation of resources. We additionally find that established patient priority access is an important determinant of access for new patients; accounting for it increased the explanatory power of our statistical model of new patient wait times by a factor of five. The findings imply that policy and management decisions may be more effective in achieving the optimal distribution of access if access heterogeneity is recognized and accounted for explicitly.

Keywords: access to care, prioritization, public health care, resource allocation, wait time

1 |. INTRODUCTION

Access to care is a critical issue in public health care delivery systems: resources are constrained by a congressionally determined budget, prices are minimal, and wait times often serve as a means of rationing care. Limited resources make it difficult to finance supply expansion to meet changes in demand from the designated population. Thus, waiting times occur (Besley, Hall, & Preston, 1999; Cullis, Jones, & Propper, 2000; Goddard, Malek, & Tavakoli, 1995; Lindsey & Feigenbaum, 1984). Because the value of care may decay as time passes between diagnosis and treatment, wait times act like a price, imposing a cost on the patient. If supply is fixed and demand increases, wait times increase until equilibrium is reached (Pizer & Prentice, 2011; Yee, Minegishi, Frakt, & Pizer, 2018).

Many studies on access report average wait times for specific types of medical care for an entire population (e.g., Viberg, Forsberg, Borowitz, & Molin, 2013). Although such averages are helpful for getting a general sense of access, they can obscure heterogeneity in wait times. For example, studies have shown that privately insured patients have better access to physicians than publicly insured patients do (Kuchinke, Sauerland, & Wubker, 2009, found this to be the case in Germany for hospital admissions; Czypionka, Kraus, Riedel, & Rohrling, 2007, in Austria for elective surgery; MedPAC, 2017, in the United States for routine care). This is conceivably the rational choice given that private providers may be trying to maximize profit.

However, in a public delivery system in which physicians are salaried, it is sometimes unclear whether and how patients are prioritized (Goddard & Smith, 2001; Landi, Ivaldi, & Testi, 2018). For example, during the U.S. Veterans Health Administration (VHA) access crisis of 2014, in which several veterans died while waiting to see a provider, 55% of veterans surveyed reported that they always received routine appointments as soon as they needed them (RAND Health, 2015). Given an objective of maximizing population health or welfare (rather than profit) with a fixed amount of resources, public systems should implement policies that prioritize patients who receive the most value or health benefit per dollar of cost. Gravelle and Siciliani (2009) provide a theoretical model that shows heterogeneous wait times can be welfare maximizing if groups of patients have similar costs for treatment but differing distributions of health gains, elasticities of demand with respect to wait times, and disutilities from waiting. Most emergency departments have implemented triage guidelines that prioritize patients who have the highest risk of dying (Christ, Grossmann, Winter, Bingisser, & Platz, 2010). Canada, New Zealand, Spain, and Sweden have implemented prioritization schemes for elective surgery based on need (Siciliani & Hurst, 2005). Other countries, for example, England, may not have formal national prioritization policies, but providers may implicitly prioritize (Gutacker, Siciliani, & Cookson, 2016).

In this study, we exploit a policy change that unintentionally shifted the prioritization between two types of patients—established patients and new patients—in the VHA in the United States. Established patients are patients who are returning to the clinic within 2 years of their previous visit. Patients are considered new if they have not visited the clinic in the preceding 2 years. Prior to this study, this prioritization between established and new patients had gone unrecognized by VHA leadership. The VHA is the largest public delivery system in the United States, serving over 9 million veterans. We study the prioritization of patients in the primary care setting. Primary care is the central command for many public and private systems, and as such, optimizing access to primary care plays a major role in optimizing the overall health of a population.

We find evidence of prioritization in this setting; without a regulatory intervention (i.e., in the natural, laissez-faire equilibrium), established patients have an access advantage over new patients to the schedule of available appointments. Established patient appointments (in particular, follow-up appointments) can crowd out new patient appointments, unintentionally forcing new patients to wait a long time to see a provider. Lower equilibrium wait times for established patients suggest that the benefit to the marginal established patient is less than the benefit to the marginal new patient. Depending on the respective distributions of the cost of waiting, demand for health benefits, and cost of treatment, it may be possible to improve population health by reducing the crowd out effect and redistributing the wait time.

Optimality from a manager’s perspective is an allocation of wait times that maximizes the sum of health benefits less cost of treatment (and potentially consumer disutility from waiting) across all patients. To achieve the optimal allocation, policies may be necessary to prioritize certain patients over other patients. We find that appointment scheduling policies are an effective way to redistribute the wait time. We identify a case in which the distribution of wait times potentially was not optimal but was temporarily corrected through a scheduling policy.

2 |. OPTIMAL WAITS FOR NEW AND ESTABLISHED PATIENTS

Determining whether new patient appointments should be prioritized over established patient follow-up appointments is not a trivial problem. In terms of maximizing population health with respect to a fixed-resource constraint, policymakers must decide on the allocation of resources. This optimization problem results in a trade-off between treating more patients less intensively (extensive margin) and treating fewer patients more intensively (intensive margin). Resources dedicated to new patients improve population health on the extensive margin, whereas resources dedicated to established patients improve it on the intensive margin. If the marginal net return to treatment is diminishing, prioritizing new patients may be optimal.

In their seminal paper, Gravelle and Siciliani (2009) provide a theoretical model from the perspective of patients, which shows that prioritization depends on the elasticity of patient demand for a medical service (i.e., the willingness to wait distribution) of one group of patients versus another and on the disutility from waiting of one group versus another. With potentially more urgent medical issues, new patients may have more inelastic demand relative to established patients regarding their follow-up appointments, and the cost of waiting may be higher for new patients. Aggregate patient welfare is maximized when systems prioritize patients who are affected most adversely from an increase in waiting times.

Some evidence suggests that new patients have more urgency to be seen relative to established patients who are following up with a provider (Prentice, Davies, & Pizer, 2013), implying they would be more adversely affected by an increase in waiting times. New patients are likely experiencing a new medical condition or going in for their initial evaluation, whereas established patients (especially in their follow-up appointments) are managing chronic conditions (e.g., Type 2 diabetes; see Table 1). Prentice, Dy, Davies, and Pizer (2013) show that longer wait times for new patients are associated with lower satisfaction with access to care; however, equivalent variations in appointment timing for established patients are not associated with lower satisfaction. This suggests that established patients’ preferences are not sensitive to their follow-up appointments being exactly at, say, 6 months from their original appointment.

TABLE 1.

Top 20 health conditions (2016–2017)

Columns: Condition; New Patient Rank; New Patient % of Appts; Est Pt Follow-up Rank; Est Pt Follow-up % of Appts; Ratio New to Est Pt Follow-up
Essential (primary) hypertension 1 15.8 1 22.3 0.71
Type 2 diabetes mellitus 2 8.5 2 18.3 0.46
General examination and investigation of persons without complaint or reported diagnosis 3 8.4 8 2.1 4.10
Dorsalgia 4 7.1 4 5.3 1.33
Other joint disorders, not elsewhere classified 5 4.9 6 2.7 1.80
Disorders of lipoprotein metabolism and other lipidaemias 6 4.4 3 6.1 0.72
Chronic ischemic heart disease 7 2.8 5 4.0 0.69
Reaction to severe stress, and adjustment disorders 8 2.7 13 1.0 2.76
Other chronic obstructive pulmonary disease 9 1.5 7 2.3 0.66
Sleep disorders 10 1.5 12 1.0 1.46
Gastro-esophageal reflux disease 11 1.4 11 1.2 1.25
Other hearing loss 12 1.1 46 0.2 5.59
Need for immunization against single bacterial diseases 13 1.1 20 0.6 1.84
Recurrent depressive disorder 14 1.1 22 0.5 2.28
Other soft tissue disorders, not elsewhere classified 15 1.1 16 0.7 1.45
Atrial fibrillation and flutter 16 1.1 9 1.3 0.82
Obesity 17 1.0 17 0.7 1.36
Other hypothyroidism 18 0.9 10 1.2 0.77
Other anxiety disorders 19 0.9 30 0.3 2.63
Persons encountering health services for other counselling and medical advice, not elsewhere classified 20 0.7 26 0.4 1.64

Abbreviation: Appt, appointment; Est, established; Pt, patient.

Note. Italics indicate conditions that are more than twice as prevalent among new patient appointments than among established patient follow-up appointments. Four conditions in the top 20 for established patient follow-ups but not in the top 20 for new patients include: Elevated blood glucose level (rank = 14), hypertensive heart disease (15), gonarthrosis (arthrosis of knee) (18), and chronic kidney disease (19).

In addition, several studies document the value of physical examinations and initial primary care visits (Solberg, Maciosek, & Edwards, 2008; Zaman, 2018), which are the third most common reason for new patient appointments (Table 1). New patients also are more likely than established patients to need treatment for hearing loss and stress or mental health-related treatment—conditions for which timeliness of care is of utmost importance (Reichert & Jacobs, 2018). Veterans are a vulnerable population with a higher incidence of mental health issues and alcohol and drug abuse. Some veterans are already symptomatic for mental health issues by the time of their first appointment with a primary care provider (as shown in the table). Suicide rates are relatively high in the veteran population compared with the general population (Kaplan, Huguet, McFarland, & Newsom, 2007). Moreover, some veterans may not be aware of underlying issues that need attention.

There is evidence to suggest that new patients should be prioritized beyond the patient welfare considerations of the Gravelle and Siciliani (2009) model. The model assumes that administrative and system costs are the same between different groups of patients and that transactional switching costs from one type of patient to another are negligible. However, it is possible that treating new patients yields externalities that affect the overall productivity of a clinic.

In other words, new patients and established patients may not be “exchangeable” patients. New patient visits take longer than established patient visits. New patients need to discuss their medical histories. Established patients have experience with specific clinic operations like scheduling, insurance processing, and navigation, requiring less assistance from clinic staff (e.g., Chan, Webster, & Marquart, 2011). Once a patient establishes a relationship with the provider and shares a history of prior encounters, treatment plan decisions may be more effective, leading to better outcomes at a lower cost (Frank & Zeckhauser, 2007). These considerations imply that appointments with established patients might be more efficient in terms of benefit per unit cost (e.g., time and administrative cost) than those with new patients. Seeing a new patient may be at the expense of more than one established patient visit.

However, established patient follow-up appointments that are booked far in advance are often cancelled. If they are cancelled without much notice, centers may not be able to fill their schedules completely. This wasted time is a negative externality from giving established patients priority access. We test this hypothesis (results provided in the Section 5) and find that giving established patients an access advantage (through the policy change) decreases the number of visits that a medical center provides.

Taken together, from an optimal waiting time perspective, we find little evidence to suggest that established patients should be given priority access over new patients. Unfortunately, in what follows, we show that recent VHA scheduling policy changes inadvertently gave established patients priority access. We use data from these changes to quantify the trade-off between new and established patient access and explore policy alternatives.

3 |. DATA AND SAMPLE

We used data from the U.S. Department of Veterans Affairs Corporate Data Warehouse, which is a national repository of several VHA clinical and administrative systems. The Corporate Data Warehouse contains veteran enrollment status, the date on which a veteran booked an appointment, and the date of the scheduled appointment. We extracted data for primary care appointments.

The sample includes 129 VHA medical centers (also referred to as centers or facilities) in the United States, spanning from October 2012 to September 2017. The unit of observation in the analysis is defined by a medical center, year (2012–2017), and month (January–December). We excluded centers that were not in the United States (e.g., Manila, Philippines). Some centers had limited data for several years during the study period. We performed sensitivity analyses that excluded these centers. We also performed sensitivity analyses that used a longer timeframe: October 2003 to September 2017.

We created medical center-level variables by aggregating primary care appointment-level data. Appointments were categorized as either new patient appointments or established patient appointments. New patients were those who had not had a primary care appointment in the past 24 months. All other patients were considered established patients. The 24-month threshold has been used in prior studies (Prentice et al., 2013) and is used in VHA outpatient scheduling protocols (Department of Veterans Affairs, 2010).
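As an illustration, the 24-month rule can be applied to appointment-level records with a single pass over each patient's visit history. The sketch below uses hypothetical records and a hypothetical 730-day threshold as an approximation of 24 months; the paper's actual extraction logic is not described in the text.

```python
from datetime import date, timedelta

# Hypothetical appointment records: (patient_id, appointment_date), sorted by date.
appointments = [
    ("A", date(2013, 1, 10)),
    ("A", date(2014, 6, 1)),
    ("A", date(2016, 9, 15)),  # more than 24 months since last visit
    ("B", date(2015, 3, 2)),
]

THRESHOLD = timedelta(days=730)  # ~24 months, the VHA lookback window

# A patient is "new" if they had no primary care appointment in the prior
# 24 months; otherwise they are "established".
last_visit = {}
labels = []
for patient, appt_date in appointments:
    prior = last_visit.get(patient)
    if prior is None or appt_date - prior > THRESHOLD:
        labels.append((patient, appt_date, "new"))
    else:
        labels.append((patient, appt_date, "established"))
    last_visit[patient] = appt_date
```

Note that, under this rule, the same person can generate both kinds of appointments over time: patient "A" above is new, then established, then new again after a gap exceeding the window.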

We merged in several other data sources to control for patient demographics, the local availability of alternative health coverage, and other potentially confounding phenomena. These control variables are discussed in separate subsections below.

4 |. EMPIRICAL METHODOLOGY

We exploit changes in a scheduling policy to determine whether established patients are given priority over new patients and to estimate the potential crowd out effect on new patients for primary care appointments. Between September 2009 and May 2016, all VHA medical centers were directed to implement a Recall Reminder Scheduling protocol and software system. The purpose was to reduce no-shows and improve the use of limited resources. It was not designed to optimize wait times for patients. The protocol is similar to call-recall or invite-reminder systems used in other countries (e.g., the United Kingdom), often for child immunizations.

The Recall Reminder protocol prevented patients from scheduling their follow-up appointments immediately; patients were only allowed to schedule them at a date closer to when they were supposed to return to the clinic. Before September 2009, there was no consistent nationwide standard for how to book follow-up appointments. Some centers had already implemented their own versions of Recall Reminder; for example, the Miami VA Geriatrics Clinic started such a system in 2006 and showed reductions in missed appointments (Peterson, McCleery, Anderson, Waldrip, & Helfand, 2015). In an Office of Inspector General audit (2008) of VHA’s effort to reduce unused outpatient appointments, 32% of the interviewed clinics reported using some kind of Recall Reminder process. However, most facilities were booking follow-up appointments at the time of the originating appointment, no matter how far into the future the follow-up appointments were requested.

Figure 1 illustrates two protocols. Under the default (without Recall Reminder), a patient can book a 6-month follow-up appointment at the originating appointment, giving the patient a 6-month booking lead time—the time between when an appointment is made and when it would occur. Under Recall Reminder, the patient would be put in a queue and contacted later to schedule their follow-up, for example, 1 month before their intended return to clinic date.
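The contrast between the two protocols can be made concrete with a small calculation (all dates hypothetical): booking lead time is simply the scheduled appointment date minus the date the booking was made.

```python
from datetime import date

def booking_lead_days(booked_on, scheduled_for):
    """Booking lead time: days between when an appointment is made and when it occurs."""
    return (scheduled_for - booked_on).days

originating_visit = date(2015, 1, 5)
follow_up = date(2015, 7, 6)  # ~6-month return-to-clinic date

# Default protocol: the follow-up is booked at the originating appointment,
# so the booking lead time spans the full ~6 months.
default_lead = booking_lead_days(originating_visit, follow_up)

# Recall Reminder: the patient is contacted ~1 month before the return date
# and only then books the appointment, so the lead time is short.
recall_booking_date = date(2015, 6, 8)
recall_lead = booking_lead_days(recall_booking_date, follow_up)
```

The same visit date thus produces a ~6-month lead time under the default protocol but only about a month under Recall Reminder, which is what keeps the distant schedule open for new patients.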

FIGURE 1. Booking lead times by scheduling protocol

The Recall Reminder policy was intended to reduce no-shows and cancellations, not to allocate wait times optimally. However, due to the lack of supporting evidence and the burden it caused for schedulers (Peterson et al., 2015), the directive to use Recall Reminder was rescinded in May 2016.1 Recall Reminder was replaced by a “Patient-Centered Scheduling Protocol.” This protocol allowed patients (though in practice, likely schedulers) to choose whether to schedule their follow-ups at the time of the originating appointment or to use a Patient-Centered Appointment Reminder to schedule their appointment later (similar to Recall Reminder). Many medical centers (and their schedulers) reverted to the default scheduling practice of scheduling follow-ups at the time of the originating appointment, whereas others maintained the Recall Reminder protocol. We use this within-medical-center variation in the timing of discontinuing the protocol (among discontinuers) to identify the effect of established patient access advantage on new patient access.

The change in policy provides a shock to the access that established patients experienced. Under Recall Reminder, established patients did not have priority access, and the future schedule was kept open for new patients. We believe the shock is exogenous because the intention of the policy was not driven by new patient wait times or by redistributing resources from established patients to new patients. Moreover, new patients comprise a small fraction of primary care appointments, and as such, their access to care has received less attention than established patient access. We do not think that discontinuers of the Recall Reminder protocol stopped their use of the protocol in relation to new patient access. We use this exogenous policy change to identify the crowd out effect of established patient scheduling practices on the wait times for new patients, which in turn can be used to estimate the wait time distributions that would result from alternative scheduling policies. We focus on the removal of the Recall Reminder protocol in 2016, for reasons of data availability and because it offers a cleaner exogenous policy change (discussed below). In Appendix S1, we provide sensitivity analysis results that extend the time frame to 6 years before the nationwide initiation of Recall Reminder.

Using ordinary least squares, we estimate the crowd out effect on new patients with the following model of monthly wait times for new patients:

Wait_{f,t} = β1 PriorityAccess_{f,t} + β2 X_{f,t} + α_f + γ_t + μ_t + u_{f,t}    (1)

where Wait_{f,t} is the average wait time for new patients scheduling primary care appointments at a given medical center or facility f in a given year-month t. The wait time for a new patient appointment is the number of days between the date when the patient made the appointment and the date of the scheduled appointment.2 The unit of observation is the medical center-year-month.

New patient wait times are modeled as a function of the preferential access experienced by established patients (PriorityAccess_{f,t}), medical center fixed effects (α_f), year indicators (γ_t, with 2013 as the base year), month indicators (μ_t, with January as the base month), and other control variables (X_{f,t}, discussed further below). PriorityAccess_{f,t} is measured in two ways, discussed below. Facility fixed effects control for time-constant differences in wait times across facilities, which may be an artifact of how efficiently a facility is run in general or of each facility's typical budget (to hire staff). The year and month indicators control for national, annual, and seasonal changes in wait times for new patient primary care appointments. We control for the possibility that the facility is congested and that even established patients experience long wait times. We also control for changes in patient demographic characteristics, case mix, and access to alternative sources of health care coverage.
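To make the estimation concrete, Model (1) can be fit by ordinary least squares with dummy variables for the facility, year, and month fixed effects. The sketch below simulates a facility-year-month panel with a known β1 and recovers it; all numbers are hypothetical, and the paper's control vector X_{f,t} is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated facility-year-month panel (values hypothetical, for illustration only).
n_fac, n_year, n_month = 20, 5, 12
fac = np.repeat(np.arange(n_fac), n_year * n_month)
year = np.tile(np.repeat(np.arange(n_year), n_month), n_fac)
month = np.tile(np.arange(n_month), n_fac * n_year)

priority_access = rng.uniform(0, 0.3, fac.size)   # share of appts with >90-day leads
alpha = rng.normal(0, 2.0, n_fac)[fac]            # facility fixed effects
gamma = rng.normal(0, 1.0, n_year)[year]          # year effects
mu = rng.normal(0, 0.5, n_month)[month]           # month effects
beta1_true = 40.0
wait = beta1_true * priority_access + alpha + gamma + mu + rng.normal(0, 1.0, fac.size)

def dummies(codes, n):
    """Indicator columns for categories 1..n-1, dropping category 0 as the base."""
    d = np.zeros((codes.size, n - 1))
    for j in range(1, n):
        d[:, j - 1] = codes == j
    return d

X = np.column_stack([
    np.ones(fac.size),           # intercept
    priority_access,             # coefficient of interest, beta1
    dummies(fac, n_fac),
    dummies(year, n_year),
    dummies(month, n_month),
])
beta = np.linalg.lstsq(X, wait, rcond=None)[0]
beta1_hat = beta[1]              # estimate of the crowd out effect
```

With the fixed effects included, beta1_hat recovers the simulated β1 up to sampling noise, which mirrors how within-facility variation in priority access identifies the coefficient in the paper.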

The identifying variation for β1 comes from the timing and effectiveness (in terms of reintroducing priority access) of an individual facility's revocation (if any) of the Recall Reminder policy.3 Some facilities scheduled established patient follow-up appointments at the time of the originating appointment as soon as the policy revocation allowed them to do so in June 2016. Some facilities made patients aware that they could choose the Recall Reminder approach or choose to schedule their follow-up immediately (as the new “Patient-Centered Scheduling” policy instructs facilities to do). Others continued to use the Recall Reminder system for some time. This creates variation in the use of Recall Reminder across facilities, and the timing and speed with which each facility discontinued Recall Reminder creates within-facility variation. The effect of established patient access advantage on new patient access is identified from the change in this access advantage among discontinuers.4

4.1 |. Measuring priority access affected by recall reminder compliance

Due to the way the data were recorded, it was not feasible to directly measure a medical center’s use of the Recall Reminder Scheduling software system. Instead, we created two measures of compliance. The first measure exploits a discontinuity generated by an arbitrary threshold set by the Recall Reminder policy. The second measure takes advantage of a common practice by physicians with regard to follow-up appointments.

4.2 |. Priority Access Measure 1

The Recall Reminder protocol required schedulers to postpone scheduling appointments for patients who were ordered to return to clinic more than 90 days after the originating appointment. If medical centers comply with the protocol, the booking lead time for follow-up appointments should drop below 90 days. Thus, we created a measure of Recall Reminder (non)compliance by computing the proportion of established patient appointments that had more than 90 days of booking lead time at a given medical center in a particular year and month. Assuming exogeneity of the change in this measure within-medical center over time (discussed below), the interpretation of β1 in Model 1 would be the effect on new patient wait times from a 1 percentage point increase in the proportion of established patient appointments with greater than 90-day booking lead times. Figure 2 shows the change over time in the proportion of established patient appointments with more than 90 days of booking lead time. The proportion declined between 2003 and 2010 (as individual medical centers may have adopted their own Recall Reminder processes). The median was lowest between 2010 and the first half of 2016, which corresponds to the timing of the national Recall Reminder policy. The proportion begins to rise sharply in the fourth quarter of 2016 shortly after Recall Reminder was rescinded in June 2016, suggesting that priority access for established patients increased quickly after the revocation. Our main analysis focuses on the period between October 2012 and September 2017, highlighted by the shaded box.
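A minimal sketch of Measure 1's construction (facility names and records hypothetical): group established-patient appointments by facility-year-month and take the share with booking lead times over 90 days.

```python
from collections import defaultdict

# Hypothetical established-patient appointment records:
# (facility, year, month, booking_lead_days)
appointments = [
    ("F1", 2016, 9, 180), ("F1", 2016, 9, 30), ("F1", 2016, 9, 95),
    ("F2", 2016, 9, 14),  ("F2", 2016, 9, 45),
]

totals = defaultdict(int)
over_90 = defaultdict(int)
for facility, year, month, lead in appointments:
    key = (facility, year, month)
    totals[key] += 1
    if lead > 90:            # beyond the Recall Reminder policy threshold
        over_90[key] += 1

# Priority Access Measure 1: share of established appointments booked
# more than 90 days ahead, per facility-year-month.
measure1 = {key: over_90[key] / totals[key] for key in totals}
```

Under full Recall Reminder compliance this share should be near zero, so a rising value after June 2016 signals the return of priority access for established patients.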

FIGURE 2. Proportion of established patient appointments with >90-day booking lead times

Note: The shaded box represents the sample period of the main analysis. The Recall Reminder policy change (removal) started in June 2016. Sensitivity analyses using the entire sample are provided in Appendix S1.

4.3 |. Priority Access Measure 2

Not all established patient appointments with booking lead times greater than 90 days were necessarily follow-up appointments that should have triggered the use of Recall Reminder. Some medical centers may simply have very long wait times, extending beyond 90 days. In addition, appointments with more than 90-day booking lead times may not necessarily be follow-up appointments but rather appointments that were scheduled far in advance for convenience. Thus, we created a second protocol compliance measure that takes advantage of a common practice by physicians. Physicians often recommend 3-, 6-, and 12-month follow-up appointments. This is illustrated in the data (Figure 3), which show spikes in the distribution of booking lead times at 3, 6, and 12 months.

FIGURE 3. Distribution of established patient booking lead times

Note: The figure depicts the distribution of booking lead times for established patients, aggregated across all medical centers and months in the sample. The dotted vertical lines represent (from left to right) 3-, 6-, and 12-month booking lead times. There are 4.34524 weeks per month on average, and the data are bi-weekly. Thus, 3 months is equivalent to 6.5 bi-weeks (or 3*4.34524 / 2); 6 months is equivalent to 13.0 bi-weeks; and 12 months is equivalent to 26.1 bi-weeks.

The spikes at 6 and 12 months were low in 2010 and 2015 and noticeably higher in 2005 and 2017. This corresponds to the timing of the Recall Reminder policy. The pattern does not occur with the spike at 3 months, because 3 months is within the 90-day Recall Reminder threshold. In fact, we should expect the mass of appointments within the 90-day threshold to increase under the Recall Reminder policy because the reminders to schedule were typically a month or two prior to the suggested follow-up return to clinic date. Figure 3 illustrates the compression of the distribution with the density shifting left during Recall Reminder.

Our second protocol compliance measure is generated using three steps: (a) count the number of follow-up appointments in a given month that were scheduled 6 and 12 months in advance, (b) predict (using a Poisson model, although other specifications were explored) the number of appointments at 6 and 12 months had there been a smooth distribution of lead times, and (c) divide the difference between (a) and (b) by the total number of established patient appointments that were scheduled in the month. This (non)compliance measure is approximately the proportion of established patient appointments in a given month whose 6- or 12-month follow-ups were scheduled at the originating appointment in spite of the Recall Reminder policy. Put another way, this compliance measure captures the number of potential follow-up appointments that should have been booked using Recall Reminder but were actually booked using a protocol that gives established patients priority access. Thus, it reflects the priority access given to established patients. Using this measure, we can interpret β1 in Model 1 as the effect on new patient wait times from a 1 percentage point increase in the proportion of appointments that should have been scheduled using Recall Reminder but were not and consequently were scheduled far in advance. Please see Appendix S1 for more details on how we computed this measure.
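The three steps can be sketched on a hypothetical lead-time histogram. The paper fits a Poisson model for step (b); a simple log-linear least-squares fit is substituted here as a stand-in for the smooth counterfactual, and all counts are simulated.

```python
import numpy as np

# Hypothetical histogram of booking lead times (in bi-weeks) for one facility-month:
# a smooth decay plus excess mass at the 6-month (bin 13) and 12-month (bin 26) marks.
bins = np.arange(1, 31)
counts = (1000 * np.exp(-0.15 * bins)).round()
counts[bins == 13] += 120    # spike at 6 months
counts[bins == 26] += 60     # spike at 12 months

spike_bins = (bins == 13) | (bins == 26)

# Step (b): predict counts at the spike bins from the smooth part of the
# distribution (log-linear fit as a stand-in for the paper's Poisson model).
coef = np.polyfit(bins[~spike_bins], np.log(counts[~spike_bins]), deg=1)
predicted = np.exp(np.polyval(coef, bins[spike_bins]))

# Steps (a) and (c): excess mass at 6 and 12 months, as a share of all
# established patient appointments scheduled in the month.
excess = counts[spike_bins].sum() - predicted.sum()
measure2 = excess / counts.sum()
```

The resulting measure2 approximates the share of established appointments whose 6- or 12-month follow-ups were booked at the originating visit, i.e., in spite of Recall Reminder.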

In 2003, this second measure of Recall Reminder compliance was approximately 0.18 on average (Figure 4), roughly suggesting that 18% of established patient appointments were follow-ups that should have been booked using Recall Reminder. Similar to Priority Access Measure 1, this measure decreased between 2003 and 2010 and was approximately 0.00 from the second half of 2010 to the first half of 2016. It then increased at a rapid rate afterwards. The period of our main analysis—October 2012 through September 2017—is highlighted by the shaded box in Figure 4.

FIGURE 4. Priority Access Measure 2 over time

Note: The shaded box represents the sample period of the main analysis. The Recall Reminder policy change (removal) started in June 2016. Sensitivity analyses using the entire sample are provided in Appendix S1. Priority Access Measure 2 is the proportion of established patient appointments that are considered follow-up appointments at 6 and 12 months.

4.4 |. Controlling for congestion

A positive correlation between new patient wait times and established patient booking lead times in part could be explained by overall congestion at a given medical center, resulting in all patients—new and established—having a hard time scheduling appointments. For example, a center with unexpectedly high demand in a given month would have higher wait times for both established patients and new patients. Exploiting exogenous, policy-induced changes in established patient priority access helps reduce the potential endogeneity. To reduce it further, we control for congestion and the possibility that even established patients have limited access. Established patient appointments represent on average (across facility-year-months) 91% of primary care provider (PCP) appointments.

To control for established patient congestion, we created six measures. Five were derived from the Survey of Healthcare Experiences of Patients, which surveys patients a month after their visits. The data were available from fiscal year 2013 to 2017. We selected patients who were likely established patients by identifying patients who reported having seen their primary care physician for more than a year. We computed the proportion of these “established” patients who responded that they (a) never or sometimes (as opposed to usually or always) got a routine care appointment as soon as they needed it,5 (b) never or sometimes got an appointment for urgent care as soon as they needed it, and (c) were not able to be seen within seven days of their urgent care appointment request. We additionally computed the proportion who requested routine visits and urgent visits. More requests for these visits potentially fill up the schedule, which could affect both established and new patients' ability to schedule appointments.

Finally, we controlled for congestion with the average wait time for established patients for appointments with booking lead times of less than 3 months. By restricting the measure to waits under 3 months, it captures the difficulty established patients face in getting appointments right away. It is likely to be negatively correlated with our Recall Reminder-induced priority access measures because Recall Reminder caused established patients to schedule their follow-ups 1–2 months before the intended follow-up date. Thus, including this control variable is likely to increase the estimated impact of priority access on new patient wait times. The benefit of this congestion measure (over the survey measures) is that it is available for time periods before the start of the Recall Reminder policy.
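This restricted-mean control can be sketched in a few lines. The sketch below is our own illustration, not the authors' code; the function name and sample wait times are hypothetical.

```python
# Illustrative sketch: the established-patient congestion control
# (mean wait among waits under 90 days) for one facility-month.
# All names and values here are ours, not the authors'.

def congestion_measure(wait_times_days, cap=90):
    """Mean established-patient wait, restricted to waits below `cap` days."""
    short_waits = [w for w in wait_times_days if w < cap]
    if not short_waits:
        return None  # no qualifying appointments this facility-month
    return sum(short_waits) / len(short_waits)

waits = [5, 12, 30, 45, 120, 200]  # 120 and 200 are excluded by the cap
print(congestion_measure(waits))   # mean of [5, 12, 30, 45] = 23.0
```

Restricting to short waits, as in the paper, keeps the control from mechanically absorbing the long, Recall Reminder-induced lead times that the priority access measures are meant to capture.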

4.5 |. Controlling for fluctuations in demand for VHA care and case mix

Changes in the population needing VHA health care may affect wait times because certain populations may have higher demand for VHA care than other populations. For example, an increase in the proportion of patients (veterans) with a college education might be associated with a shift in demand toward employer-sponsored health insurance and away from VHA health care.

We controlled for demographics, potential access to alternative sources of health care coverage, and local affluence. The demographic variables (age, race, gender, health status, and frequency of visiting the doctor) were derived from the Survey of Healthcare Experiences of Patients data. We controlled for Medicare Advantage penetration (provided by the Centers for Medicare & Medicaid Services) and housing prices (provided by Zillow) in the catchment area surrounding each medical center. We also controlled for the proportion of enrolled veterans who have agreed to pay copayments for VHA medical care. These veterans typically have no service-connected disability and earn more than a minimum income threshold (i.e., are designated VHA Priority Status 7 or 8). Veterans who live in more affluent counties or who pay copayments may be more likely to have access to employer-sponsored insurance. Appendix S1 provides more detail on variable construction.

4.6 |. Summary statistics

Table 2 provides summary statistics, weighted by the number of new patients in 2015. The same weights are used in the regressions; they adjust for the dependent variable being an average of new patient wait times per facility-month. New patient wait times averaged 30.1 days in both the preperiod and the postperiod, with substantial variation across medical centers, ranging from 5.8 to 156.4 days.
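To make the weighting scheme concrete, here is a minimal sketch of a volume-weighted mean of facility-level average waits. The facility values are made up for illustration.

```python
# Sketch of the weighting used for Table 2 (and the regressions):
# facility-level averages are weighted by each facility's 2015
# new-patient volume, so larger facilities count proportionally more.
# The numbers below are hypothetical.

def weighted_mean(values, weights):
    """Volume-weighted average of facility-level means."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

facility_wait = [20.0, 35.0, 50.0]   # mean new-patient wait per facility, days
new_patients_2015 = [500, 300, 200]  # weights: 2015 new-patient counts
print(weighted_mean(facility_wait, new_patients_2015))  # 30.5
```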

TABLE 2.

Summary statistics

Columns: Name; Definition; Mean (total sample period); Standard Deviation (total sample period); Pre-Period Mean (N = 5,674); Post-Period Mean (N = 2,064)
New Patient Wait Time Average wait time (days) for new patients per facility-year-month 30.1 11.7 30.1 30.1
Priority Access Measure 1 % of established patient appointments that had >90-day lead times 11.9 14.0 9.6 18.2
Priority Access Measure 2 Excess % of established patient appointments identified as follow-up appointments at 6 or 12 months, i.e., the number of appointments at 6 or 12 months minus the predicted number of such appointments, divided by the number of established patient appointments 0.3 6.4 −0.8 3.2
Established Patient Wait/<3 months Average wait time (days) for established patients given the wait time is less than 90 days per facility-year-month 28.6 6.6 29.0 27.3
Congestion for Routine Visits % of SHEP respondents who reported that they never or sometimes (as opposed to usually or always) got an appointment for a check-up or routine care as soon as they needed it in the past 12 months 12.7 7.3 12.8 12.2
Congestion for Urgent Visits % of SHEP respondents who reported that they never or sometimes (as opposed to usually or always) got an appointment for urgent care as soon as they needed it in the past 12 months 22.6 12.1 23.0 21.3
% Requested Routine % of SHEP respondents who requested a checkup or routine visit in the past 12 months 81.7 8.4 82.5 79.6
% Requested Urgent % of SHEP respondents who requested an urgent visit in the past 12 months 37.8 8.8 38.3 36.5
% Urgent Seen 7+ days Later % of SHEP respondents requesting urgent care who were seen 7+ days after request in the past 12 months 21.3 12.5 21.1 21.9
% Received Reminder % of SHEP respondents who received a reminder for an appointment in the past 12 months 81.1 6.4 81.5 80.1
% Black % of SHEP respondents who self-identified their race as Black. Base group is White 12.6 12.6 12.2 13.7
% Hispanic % of SHEP respondents who self-identified their race as Hispanic 6.6 12.5 6.5 7.0
% Other Race % of SHEP respondents who self-identified their race as Other 4.4 6.1 4.2 4.9
% Some College or Above % of SHEP respondents who reported having an education level of Some College or Above 57.9 11.2 56.9 60.6
% Age 40–59 % of SHEP respondents who reported an age between 40 and 59 years 14.5 6.5 14.6 14.3
% Age 60–79 % of SHEP respondents who reported an age between 60 and 79 years 64.1 7.7 63.8 65.1
% Age over 80 % of SHEP respondents who reported an age over 80 years 19.7 7.9 20.1 18.5
% Male % of SHEP respondents who are male 95.4 3.4 95.6 94.7
% in Good Health % of SHEP respondents who reported being in good health 32.8 8.6 32.7 32.8
% in Very Good or Excellent Health % of SHEP respondents who reported being in very good or excellent health 28.9 7.8 29.0 28.7
% Saw Provider 5+ Times % of SHEP respondents who saw the Provider 5+ times in the past 12 months 8.2 5.9 9.7 4.1
Housing Price Index Median estimated housing price in the area surrounding the facility in a given year-month (estimates provided by Zillow) 221.4 40.3 212.3 246.5
% Medicare Advantage % of Medicare beneficiaries who are enrolled in Medicare Advantage in the catchment area of the facility in a given year-month 30.3 11.6 29.5 32.6
% Pay Copay % of veterans who have Priority Status 7 or 8 (i.e., no service connected disability and either agreed to pay copayments or above minimum income threshold) 22.9 6.3 23.1 22.3

Note. The unit of observation is a facility, year, month. The number of observations in the sample is 7,738: the pre-period has 5,674 observations and the post-period has 2,064 observations. The statistics are weighted by the number of appointments for new patients in 2015. The pre-period is defined as October 2012 to May 2016. The post-period is defined as June 2016 to September 2017.

The first measure of established patient priority access—the proportion of established patient appointments with longer than 90-day booking lead times—on average was 11.9% across centers. In the preperiod (October 2012–May 2016), the average was 9.6%; and in the postperiod (June 2016–September 2017), it was 18.2%. The second measure—the proportion of established patient appointments that were predicted to be 6- or 12-month follow-ups scheduled at the originating appointment—was −0.8% on average in the preperiod and 3.2% in the postperiod. There was substantial variation in each of the priority access measures. Summary statistics of the other variables are discussed in Appendix S1.
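The two priority access measures described above reduce to simple proportions. The sketch below is our illustration with hypothetical counts and lead times, not the authors' implementation; in particular, the "predicted" count for Measure 2 would come from their forecasting model.

```python
# Illustrative implementations of the two priority access measures,
# with hypothetical inputs (not the authors' data or code).

def priority_access_measure_1(lead_times_days, threshold=90):
    """Percent of established-patient appointments booked more than
    `threshold` days in advance (Measure 1)."""
    n_long = sum(1 for d in lead_times_days if d > threshold)
    return 100.0 * n_long / len(lead_times_days)

def priority_access_measure_2(n_observed_6_12, n_predicted_6_12, n_established):
    """Observed minus predicted 6- or 12-month follow-up appointments,
    as a percent of all established-patient appointments (Measure 2).
    Can be negative when fewer follow-ups occur than predicted."""
    return 100.0 * (n_observed_6_12 - n_predicted_6_12) / n_established

leads = [10, 30, 95, 120, 60]            # two of five exceed 90 days
print(priority_access_measure_1(leads))  # 40.0
print(priority_access_measure_2(55, 40, 500))  # 3.0
```

Note that Measure 2 is centered by a prediction, which is why its preperiod mean in Table 2 can be negative (−0.8%).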

5 |. RESULTS

The results show that default scheduling practices tend to give priority to established patients. Established patients can crowd out new patients, and the Recall Reminder protocol lessened the priority access effect. A 1 percentage point decrease (8.4% of the mean) in the proportion of established patient appointments with more than 90-day booking lead times (Priority Access Measure 1) was associated with a reduction of 0.42 days (1.4% of the mean) in new patient wait times (Table 3, Specification 2). Put another way, a one-standard deviation reduction in priority access (14.0 percentage points) would reduce new patient wait times by 5.9 days or 50% of the sample standard deviation in new patient wait times. Translating to the effect of rescinding Recall Reminder—which increased priority access by 8.6 percentage points (90%)—the revocation led to a 4.3-day increase in new patient wait times (14%). This result controls for facility, annual, and monthly differences in new patient wait times. The relationship is robust after controlling for established patient congestion, demographic characteristics, the health status measures, VHA enrollment, local area affluence, housing prices, and alternative health care coverage (Table 3, Specifications 3–7).
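The standard-deviation translation in this paragraph is straightforward arithmetic; the following check uses the figures reported in Tables 2 and 3.

```python
# Back-of-envelope translation of the Measure 1 coefficient into a
# standard-deviation effect, using figures quoted in the text.
coef = 0.42        # days of new-patient wait per pp of Measure 1 (Table 3, Spec 2)
sd_measure1 = 14.0 # standard deviation of Measure 1, percentage points
sd_wait = 11.7     # standard deviation of new-patient wait times, days

effect_1sd = coef * sd_measure1            # effect of a one-SD change
print(round(effect_1sd, 1))                # 5.9 days
print(round(100 * effect_1sd / sd_wait))   # 50 (% of the wait-time SD)
```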

TABLE 3.

Effect of priority access on new patient wait times: Measure 1

Columns [1]–[8] report specifications. Cell entries are coefficients (Coeff.) with standard errors (SE) in parentheses.
Priority Access Measure 1 0.4236*** 0.4685*** 0.4262*** 0.4261*** 0.4262*** 0.424*** 0.4233***
(0.0533) (0.0528) (0.0531) (0.0523) (0.0526) (0.0505) (0.0511)
Est. Pt. Mean Wait Given Wait < 90 days 0.8797***
(0.0767)
Congestion for Routine Visits 0.0881*** 0.078*** 0.0798*** 0.0818*** 0.0921***
(0.0232) (0.0239) (0.0229) (0.0220) (0.0209)
Congestion for Urgent Visits −0.0155 −0.016 −0.0139
(0.0144) (0.0154) (0.0152)
% Requested Routine −0.003 −0.0022 0.006
(0.0225) (0.0227) (0.0213)
% Requested Urgent −0.0256* −0.0187 −0.0254
(0.0153) (0.0155) (0.0166)
% Urgent Seen 7+ Days Later 0.0502*** 0.0492*** 0.0478***
(0.0135) (0.0129) (0.0128)
% Received Reminder 0.0071 0.0021 −0.0042
(0.0321) (0.0299) (0.0287)
% Black 0.029 0.0436 0.0406
(0.0285) (0.0269) (0.0266)
% Hispanic 0.0047 0.0072 0.0072
(0.0344) (0.0325) (0.0329)
% Other Race 0.0181 0.0144 0.0166
(0.0313) (0.0293) (0.0301)
% Some College or Above −0.0026 −0.0009 0.0001
(0.0127) (0.0122) (0.0122)
% Age 40–59 0.072 0.0852 0.0863
(0.0641) (0.0598) (0.0606)
% Age 60–79 0.1001 0.1078* 0.1101*
(0.0688) (0.0650) (0.0656)
% Age over 80 0.1095 0.1168* 0.1200*
(0.0738) (0.0695) (0.0701)
% Male −0.0624 −0.0673* −0.0597
(0.0406) (0.0397) (0.0392)
% in Good Health 0.016 0.0167 0.0146
(0.0145) (0.0147) (0.0148)
% in Very Good or Excellent Health 0.0137 0.0136 0.0150
(0.0166) (0.0158) (0.0158)
% Saw Provider 5+ Times −0.0691** −0.0785** −0.0915***
(0.0334) (0.0323) (0.0326)
Housing Price Index 0.1126** 0.1138**
(0.0463) (0.0464)
% Medicare Advantage −0.0952 −0.0927
(0.2314) (0.2297)
% Pay Copay 0.4757 0.4668
(0.7654) (0.7699)
Constant 34.1685*** 30.1239*** 2.7298 29.0503*** 29.8682*** 24.9208*** 22.0747 21.1381
(0.8924) (1.0338) (2.6105) (1.1213) (2.1161) (8.8152) (20.3148) (20.4080)
R2 0.039 0.205 0.359 0.209 0.212 0.215 0.232 0.229
Number of facility-year-months 7,740 7,738 7,738 7,738 7,735 7,735 7,729 7,732

Abbreviation: Coeff., coefficient; Est. Pt., established patient.

Note: The dependent variable is the average wait time for new patients to see a primary care provider per facility-year-month. All models include facility, year, and month fixed effects. All models are weighted by the average number of new patients per facility in 2015, the middle year of the data. Priority Access Measure 1 is the proportion of established patient appointments with >90-day lead times.

* p < .1. ** p < .05. *** p < .01.

We find similar results using the second measure of priority access. A 1 percentage point reduction in the proportion of established patient appointments that were 6- or 12-month follow-ups scheduled early (rather than scheduled using Recall Reminder) was associated with a reduction of 0.52 days in new patient wait times (Table 4, Specification 2). Translating to the effect of rescinding Recall Reminder, which increased priority access by 4.0 percentage points (500%), the revocation led to a 2.1-day increase in new patient wait times (7.0%). The effect is robust to the inclusion of control variables: it varied between 0.53 and 0.57 days, depending on the specification (Table 4, Specifications 3–7).
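The rescission calculation for Measure 2 can be verified directly from the pre- and post-period means in Table 2 and the Specification 2 coefficient.

```python
# Back-of-envelope check of the Measure 2 rescission effect quoted above.
coef = 0.52                      # days per pp of Measure 2 (Table 4, Spec 2)
pre_mean, post_mean = -0.8, 3.2  # Measure 2 means, pre- and post-revocation
mean_wait = 30.1                 # mean new-patient wait, days

delta = post_mean - pre_mean     # 4.0 pp increase in priority access
effect = coef * delta
print(round(effect, 1))                 # 2.1 days
print(round(100 * effect / mean_wait))  # 7 (% of the mean wait)
```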

TABLE 4.

Effect of priority access on new patient wait times: Measure 2

Columns [1]–[8] report specifications. Cell entries are coefficients (Coeff.) with standard errors (SE) in parentheses.
Priority Access Measure 2 0.5245*** 0.5658*** 0.53*** 0.5269*** 0.5255*** 0.5363*** 0.5388***
(0.1051) (0.1080) (0.1043) (0.1026) (0.1028) (0.0971) (0.0984)
Est. Pt. Mean Wait Given Wait < 90 days 0.8119***
(0.1115)
Congestion for Routine Visits 0.085*** 0.0714*** 0.0756*** 0.0772*** 0.0917***
(0.0251) (0.0257) (0.0244) (0.0235) (0.0227)
Congestion for Urgent Visits −0.008 −0.0071 −0.0043
(0.0152) (0.0163) (0.0157)
% Requested Routine −0.0511* −0.0511* −0.042
(0.0296) (0.0299) (0.0269)
% Requested Urgent −0.0354** −0.0313* −0.0373**
(0.0164) (0.0166) (0.0175)
% Urgent Seen 7+ Days Later 0.0495*** 0.05*** 0.0468***
(0.0142) (0.0136) (0.0129)
% Received Reminder 0.0352 0.0271 0.0161
(0.0333) (0.0301) (0.0294)
% Black 0.031 0.0495* 0.0463
(0.0321) (0.0290) (0.0288)
% Hispanic 0.0152 0.0182 0.0191
(0.0369) (0.0352) (0.0357)
% Other Race 0.0136 0.0081 0.0094
(0.0333) (0.0319) (0.0326)
% Some College or Above −0.0026 −0.0001 −0.0015
(0.0141) (0.0136) (0.0131)
% Age 40–59 0.0514 0.065 0.0619
(0.0668) (0.0632) (0.0635)
% Age 60–79 0.0943 0.0991 0.0987
(0.0729) (0.0705) (0.0705)
% Age over 80 0.0981 0.1026 0.1038
(0.0792) (0.0768) (0.0766)
% Male −0.0929** −0.0959** −0.0880**
(0.0443) (0.0424) (0.0418)
% in Good Health 0.0048 0.0052 0.0029
(0.0168) (0.0166) (0.0166)
% in Very Good or Excellent Health 0.0019 0.0029 0.0043
(0.0170) (0.0161) (0.0161)
% Saw Provider 5+ Times −0.0596 −0.0721** −0.092**
(0.0377) (0.0356) (0.0363)
Housing Price Index 0.1476*** 0.1494***
(0.0448) (0.0448)
% Medicare Advantage −0.0436 −0.0492
(0.2501) (0.2489)
% Pay Copay 0.1569 0.1618
(0.8651) (0.8746)
Constant 34.1685*** 34.9045*** 10.0759*** 33.9002*** 38.9789*** 36.1824*** 32.8015 28.2669
(0.8924) (0.8577) (3.4070) (0.9841) (2.8627) (9.5063) (23.8458) (24.0001)
R2 0.039 0.121 0.253 0.124 0.13 0.133 0.158 0.153
Number of facility-year-months 7,740 7,738 7,738 7,738 7,735 7,735 7,729 7,732

Abbreviation: Est. Pt., established patient.

Note. The dependent variable is the average wait time for new patients to see a primary care provider per facility-year-month. All models include facility, year, and month fixed effects. All models are weighted by the average number of new patients per facility in 2015, the middle year of the data. Priority Access Measure 2 is the proportion of established patient appointments identified as follow-up appointments at 6 or 12 months.

* p < .1. ** p < .05. *** p < .01.

Priority access seems to explain a substantial amount of the variation in new patient wait times. Without priority access (controlling only for facility, annual, and monthly differences in new patient wait times), the R2 is 0.04 (Table 3). Including Priority Access Measure 1 increased the R2 to 0.21 (Table 3), and including Measure 2 increased it to 0.12 (Table 4), quintupling or tripling, respectively, the explanatory power of our new patient wait time model for primary care appointments.

Several of the control variables are statistically significant. Overall congestion seems to be an important determinant of new patient wait times. A 1 percentage point increase in the proportion of established patients who reported difficulty getting routine appointments was associated with a 0.07- to 0.09-day increase in average new patient wait times, regardless of how established patients' priority access was measured. A 10 percentage point increase in the proportion of patients who reported not being seen within 7 days of an urgent request was associated with a 0.47- to 0.50-day increase in new patient wait times. Interestingly, the coefficients on the year indicators suggest that primary care appointment wait times for new patients were decreasing nationwide during this period. This is consistent with Penn et al. (2019), who found that VHA wait times decreased after the 2014 scandal (Griffin, 2014) that led to the resignation of the Secretary of Veterans Affairs (Shear & Oppel, 2014). Many of the other controls, such as race and ethnicity composition, education levels, age, and health status, did not have consistently significant effects.

In Appendix S1, we show results from three sets of robustness checks: one that uses a longer time period (October 2003 to September 2017), one that excludes two facilities that opened during the time frame of the original analysis (2013–2017), and one that excludes year fixed effects, because the policy revocation occurred nationally at a single point in time and year fixed effects may absorb some of the policy-induced variation. The time period expansion, the sample exclusion, and the exclusion of year fixed effects do not materially alter the estimates. Additionally in Appendix S1, we show results for the impact of priority access and the early scheduling of follow-ups on the frequency of no-shows and cancellations. The results suggest that priority access may be weakly associated with more no-shows and strongly associated with more cancellations, implying that booking appointments far in advance may lead to other inefficiencies in the system. In addition, increases in reminders were associated with fewer no-shows and cancellations (as expected). Longer waits for urgent care among established patients were associated with more cancellations. Patients who reported having very good or excellent health were more likely to no-show. Finally, increases in affluence were associated with more cancellations.

Table 5 shows the relationship between established patient priority access and medical center productivity. Medical center productivity is measured as the total number of monthly visits that the primary care division produced, in other words, the number of appointments less the number of cancellations. We test the concern that established patient priority access can lead to cancellations, which leave open slots in the schedule and cause an inefficient use of resources. The estimated relationship suggests that allowing established patients to have priority access is associated with lower medical center productivity, although the relationship is not always statistically significant at the 10% level.

TABLE 5.

Effect of priority access on medical center productivity

Columns [1]–[8] report specifications. Cell entries are coefficients (Coeff.) with standard errors (SE) in parentheses.
Sample Period FY13-FY17 FY13-FY17 FY04-FY17 FY04-FY17 FY13-FY17 FY13-FY17 FY04-FY17 FY04-FY17
Congestion Controls: SHEP or Est. Pt. Wait Given <90 days (EPW90) SHEP SHEP EPW90 EPW90 SHEP SHEP EPW90 EPW90
Year Indicators No Yes No Yes No Yes No Yes
Priority Access Measure 1 −1.1823 (4.0561), −2.2704 (3.6409), −3.9175 (4.6140), −8.6466*** (3.1192)
Priority Access Measure 2 −14.7062* (7.8145), −12.7344 (7.7426), −9.3958* (5.0971), −17.4018*** (5.1161)
R2 0.383 0.337 0.182 0.066 0.39 0.342 0.185 0.072

Note. The dependent variable is the number of monthly primary care visits that a medical center produced, i.e., the number of appointments less the number of cancellations. All models include facility, year, and month fixed effects. Priority Access Measure 1 is the proportion of established patient appointments with >90-day lead times. Priority Access Measure 2 is the proportion of established patient appointments identified as follow-up appointments at 6 or 12 months.

* p < .1. ** p < .05. *** p < .01.

6 |. CONCLUSION

In this study, we exploit an exogenous change in scheduling policy to measure a trade-off in access to primary care between two groups of patients. This trade-off is a result of limited resources. Patients often use primary care to access specialty care, so wait times at this point in care are only one part of the full wait for treatment. Our findings suggest that established patient access is important when considering new patient access. The inclusion of the established patient access measure quintupled the explanatory power of our statistical model of wait times for new patients seeking primary care appointments in the VHA. Moreover, we found that established patients have an access advantage over new patients and that default scheduling practices prioritize established patients, which may not be an optimal allocation of resources. This access advantage, which can crowd out new patients when resources are scarce, was mitigated under Recall Reminder. Rescinding Recall Reminder in 2016 had the unintended consequence of lengthening new patient wait times by 2.1 to 4.3 days (7% to 14%). It also led to more cancellations, which can result in an inefficient use of resources depending on whether the cancelled slots were filled by other appointments.

This study has two major limitations. First and most importantly, due to data limitations, we did not have a direct measure of compliance with the Recall Reminder policy. We assumed that the policy was the reason for changes in scheduling practices, which altered the magnitude of priority access. However, there may be unobserved factors that changed scheduling practices or affected access. The lack of a direct compliance measure introduces uncertainty in the estimate of the trade-off between established and new patient access. We used two measures of priority access—the first being a simple and direct measure of established patients’ ability to fill up the schedule more than 3 months in advance and the second being more directly linked to the Recall Reminder policy. We found that the results were robust to different measures and a variety of sensitivity analyses, strengthening our confidence that the policy change led to changes in access for the two groups and that rescinding the Recall Reminder policy inadvertently re-established preferential access for established patients.

The second limitation is generalizability. VHA patients are mostly older, male, and low income. Most also have other sources of health insurance, including commercial coverage, Medicare, or Medicaid. Patients with other sources of insurance may respond to waiting times differently than those without alternatives, so established patient priority access might be more damaging to new patients with fewer alternatives.

Our results suggest that there may be other scheduling practices that affect access advantages for certain patients. In the VHA, access is preferentially given to established patients over new patients. Outside the VHA, access may be preferentially given to those who are willing to pay the most. Siciliani and Hurst (2005) found evidence suggesting that preferential access for privately insured patients may substantially degrade access for publicly insured patients in some countries. Disparities in access may be caused by providers preferring to treat privately insured patients (e.g., patients covered by employer-sponsored insurance in the United States) rather than publicly insured ones (e.g., patients covered by Medicaid), especially if treating the former group is financially more rewarding. Analyses that adapt our methodology to these settings could shed light on the effectiveness of policies designed to preserve access for disadvantaged patients. Such policies include Recall Reminder policies (Peterson et al., 2015) and dual-practicing protocols for physicians working at both public and private hospitals (Siciliani & Hurst, 2005).

Taken together, the evidence implies that social welfare and resource management may be improved by recognizing differential access to care among different groups of patients. In the case of the VHA, recognizing this distinction for established and new patients substantially improves our understanding of variations in new patient wait times and suggests that some policy options may be more effective than others at improving access for new patients. More research is needed to quantify the effects of patient distinctions in other public health care delivery systems.

Supplementary Material

Supporting Info

ACKNOWLEDGEMENTS

We thank Megan Price and Taeko Minegishi for excellent research assistance. We are grateful for the extremely useful comments from our anonymous reviewers as well as from seminar and conference participants at Boston University, the American Society of Health Economists, and AcademyHealth. The views expressed in this paper are solely our own and do not reflect the official positions of the U.S. Department of Veterans Affairs, Boston University, Harvard University, Northeastern University, or the University of Maryland Baltimore County.

FUNDING INFORMATION

This work was supported by the Partnered Evidence-based Policy Resource Center’s Quality Enhancement Research Initiative and Health Services Research and Development Grants PEC 16–001 and SDR 16–196, respectively, from the United States Department of Veterans Affairs.

Footnotes

1

Interestingly, our data indicate that the Recall Reminder policy may have led to reductions in cancellations, at least (see Appendix S1).

2

To avoid issues of misreporting (as documented by investigations following the VA Scandal of 2014), we did not use the time between the desired date of appointment and the appointment date as the wait time measure (Office of Inspector General, 2014).

3

We do not simply use a dummy variable to compare prerevocation and postrevocation of Recall Reminder because other changes may have occurred at the same time in the U.S. health care market. We also do not use an instrumental variables approach using this kind of dummy; although it passes the Stock and Yogo (2005) thresholds for instrument strength, it likely does not pass the exclusion restriction. However, we do use our estimates to compute the likely effect of Recall Reminder on new patient wait times.

4

Our estimate of β1 may be underestimating the true effect if facilities respond to longer new patient wait times by using Recall Reminder. This mechanism would lead to a negative relationship between new patient wait times and established patient booking lead times, which would bias the estimate toward zero. However, because facility managers were unaware of the relationship between Recall Reminder and new patient waiting times, we believe this is an unlikely source of bias.

5

The corresponding survey question was: In the last 12 months, when you made an appointment for a check-up or routine care with this provider, how often did you get an appointment as soon as you needed? The response options are never, sometimes, usually, and always. The other survey questions were similar.

CONFLICT OF INTEREST

We report grants from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative and Health Services Research and Development during the conduct of the study. We have no other conflicts of interest.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section at the end of this article.

REFERENCES

  1. Besley T, Hall J, & Preston I (1999). The demand for private health insurance: Do waiting lists matter? Journal of Public Economics, 72(2), 155–181. 10.1016/S0047-2727(98)00108-X [DOI] [Google Scholar]
  2. Chan RJ, Webster J, & Marquart L (2011). Information interventions for orienting patients and their carers to cancer care facilities. The Cochrane Database of Systematic Reviews, 7(12), CD008273. 10.1002/14651858.CD008273.pub2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Christ M, Grossmann F, Winter D, Bingisser R, & Platz E (2010). Modern triage in the emergency department. Deutsches Ärzteblatt International, 107(50), 892–898. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Cullis JG, Jones PR, & Propper C (2000). Chapter 23 Waiting lists and medical treatment: Analysis and policies. In Pauly MV, Mcguire TG, & Barros PP (Eds.), Handbook of Health Economics (Vol. 1 part B) (pp. 1201–1249). North Holland: Amsterdam. [Google Scholar]
  5. Czypionka T, Kraus M, Riedel M, & Rohrling G (2007). Waiting times for elective operations in Austria: A question of transparency. In The Institute for Advanced Studies Health System Watch Quarterly IV (pp. 1810–2271). Vienna: Institute for Advanced Studies. http://www.ihs.ac.at/departments/fin/HealthEcon/watch/hsw07_4e.pdf [Google Scholar]
  6. Department of Veterans Affairs. (2010). VHA outpatient scheduling processes and procedures, 2010. http://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2252.
  7. Frank RG, & Zeckhauser R (2007). Custom-made versus ready-to-wear treatments: Behavioral propensities in physicians’ choices. Journal of Health Economics, 26(6), 1101–1127. 10.1016/j.jhealeco.2007.08.002 [DOI] [PubMed] [Google Scholar]
  8. Goddard JA, Malek M, & Tavakoli M (1995). An economic model of the market for hospital treatment for non-urgent conditions. Health Economics, 4(1), 41–55. 10.1002/hec.4730040105 [DOI] [PubMed] [Google Scholar]
  9. Goddard M, & Smith P (2001). Equity of access to health care services: Theory and evidence from the UK. Social Science & Medicine, 53(9), 1149–1162. 10.1016/S0277-9536(00)00415-9 [DOI] [PubMed] [Google Scholar]
  10. Gravelle H, & Siciliani L (2009). Third degrees waiting time discrimination: optimal allocation of a public sector healthcare treatment under rationing by waiting. Health Economics, 18, 977–986. 10.1002/hec.1423 [DOI] [PubMed] [Google Scholar]
  11. Griffin R Interim report: Review of patient wait times, scheduling practices, and alleged patient deaths at the phoenix health care system. Washington, DC: VA Office of Inspector General, Veterans Health Administration, Department of Veterans Affairs, May 28, 2014. 14–02603-178. https://www.va.gov/oig/pubs/VAOIG-14-02603-178.pdf. [Google Scholar]
  12. Gutacker N, Siciliani L, & Cookson R (2016). Waiting time prioritisation: Evidence from England. Social Science & Medicine, 159, 140–151. ISSN 0277–9536. 10.1016/j.socscimed.2016.05.007 [DOI] [PubMed] [Google Scholar]
  13. Kaplan MS, Huguet N, McFarland BH, & Newsom JT (2007). Suicide among male veterans: A prospective population-based study. Journal of Epidemiology & Community Health., 61(7), 619–624. 10.1136/jech.2006.054346 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Kuchinke BA, Sauerland D, & Wubker A (2009). The influence of insurance status on waiting times in German acute care hospitals: An empirical analysis of new data. International Journal of Equity in Health, 8, 44. 10.1186/1475-9276-8-44 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Landi S, Ivaldi E, & Testi A (2018). Socioeconomic status and waiting times for health services: An international literature review and evidence from the Italian National Health System. Health Policy, 122(4), 334–351. ISSN 0168–8510. 10.1016/j.healthpol.2018.01.003 [DOI] [PubMed] [Google Scholar]
  16. Lindsey C, & Feigenbaum B (1984). Rationing by waiting lists. The American Economic Review, 74(3), 404–417. http://www.jstor.org/stable/1804016 [PubMed] [Google Scholar]
  17. MedPAC (2017). Report to congress: Medicare payment policy March 2017. Washington, DC: Medicare Payment Advisory Commission. http://medpac.gov/docs/default-source/reports/mar17_entirereport.pdf [Google Scholar]
  18. Penn M, Bhatnagar S, Kuy S, Lieberman S, Elnahal S, Clancy C, & Shulkin D (2019). Comparison of wait times for new patients between the private sector and United States Department of Veterans Affairs Medical Centers. Journal of American Medical Association Network Open, 2(1), e187096. 10.1001/jamanetworkopen.2018.7096 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Peterson K, McCleery E, Anderson J, Waldrip K, & Helfand M (2015). Evidence brief: Comparative effectiveness of appointment recall reminder procedures for follow-up appointments. Washington, DC: Veterans Health Administration, Department of Veterans Affairs. ESP Project #09–199. https://www.hsrd.research.va.gov/publications/esp/RecallReminders.pdf
  20. Pizer SD, & Prentice J (2011). Time is money: Outpatient waiting times and health insurance choices of elderly veterans in the United States. Journal of Health Economics, 30(4), 626–636. 10.1016/j.jhealeco.2011.05.004
  21. Prentice JC, Dy S, Davies ML, & Pizer SD (2013). Using health outcomes to validate access quality measures. American Journal of Managed Care, 19(11), e367–e377.
  22. RAND Health (2015). Resources and capabilities of the Department of Veterans Affairs to provide timely and accessible care to veterans. Santa Monica: RAND Corporation. https://www.rand.org/content/dam/rand/pubs/research_reports/RR1100/RR1165z2/RAND_RR1165z2.pdf
  23. Reichert A, & Jacobs R (2018). The impact of waiting time on patient outcomes: Evidence from early intervention in psychosis services in England. Health Economics, 27(11), 1772–1787. 10.1002/hec.3800
  24. Shear MD, Oppel RA. (2014, May 30). V.A. chief resigns in face of furor on delayed care. The New York Times. https://www.nytimes.com/2014/05/31/us/politics/eric-shinseki-resigns-as-veterans-affairs-head.html.
  25. Siciliani L, & Hurst J (2005). Tackling excessive waiting times for elective surgery: A comparative analysis of policies in 12 OECD countries. Health Policy, 72(2), 201–215. 10.1016/j.healthpol.2004.07.003
  26. Solberg LI, Maciosek MV, & Edwards NM (2008). Primary care intervention to reduce alcohol misuse: Ranking its health impact and cost effectiveness. American Journal of Preventive Medicine, 34(2), 143–152.e3. 10.1016/j.amepre.2007.09.035
  27. Viberg N, Forsberg BC, Borowitz M, & Molin R (2013). International comparisons of waiting times in health care—Limitations and prospects. Health Policy, 112(1–2), 53–61. 10.1016/j.healthpol.2013.06.013
  28. Yee CA, Minegishi T, Frakt A, Pizer SD. (2018, March 14). Optimizing resource allocation in a public delivery system: Evidence from the Veterans Health Administration. Working paper.
  29. Zaman JAB (2018). The enduring value of the physical examination. Medical Clinics of North America, 102(3), 417–423. 10.1016/j.mcna.2017.12.003
