Abstract
OBJECTIVES:
The Medicare accountable care organization (ACO) program financially rewards ACOs for providing high-quality healthcare, a standard that includes the patient experience of care. This study examined whether administrative measures of wait times for specialist consults are associated with self-reported patient satisfaction.
STUDY DESIGN:
Analyses used administrative and survey data from a clinically integrated healthcare system similar to an ACO.
METHODS:
Veterans Health Administration (VHA) data from 2012 were obtained. Administrative access metrics included the number of days between the creation of the consult request and 1) first action taken on the consult, 2) scheduling of the consult, and 3) completion of the consult. The Survey of Healthcare Experiences of Patients, which is modeled after the Consumer Assessment of Healthcare Providers and Systems family of survey instruments used by ACOs to measure patient experience, provided the outcome measures. Outcomes included general VHA satisfaction measures and satisfaction with timeliness of care, including wait times for specialists and treatments. Logistic regression models predicted the likelihood of patients reporting being satisfied on each outcome. Models were risk-adjusted for demographics, self-reported health, and healthcare use.
RESULTS:
Longer waits to the scheduling of consults and to the completion of consults were significantly associated with decreased patient satisfaction.
CONCLUSIONS:
Because patients often report high levels of powerlessness and uncertainty while waiting for consultation, these wait times are an important patient-centered access metric for ACOs to consider. ACOs should have systems and tools in place to streamline the specialist consult referral process and increase care coordination.
The Medicare accountable care organization (ACO) program rewards ACOs for providing high-quality healthcare. Participating providers are financially rewarded if healthcare spending is kept below targets set by Medicare, which can be achieved by preventing medical errors and duplication of services.1,2 The success of ACOs assumes that a structure of clinical integration and appropriately targeted incentives will improve coordination of care and quality.3 A key measure of quality of care is patient experience of care, including perceived coordination. Using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) family of survey instruments, ACOs are required to collect information on patient experiences, such as the ability to obtain timely care and access to specialists.4
Care coordination between primary and specialty care has received little attention despite the fact that specialty care accounts for more healthcare resource use than primary care, and specialists outnumber primary care physicians in the United States.5 Primary care and specialist providers report inadequate communication between each other about referrals, which compromises their ability to provide high-quality care6 and may also have a negative impact on self-reported patient satisfaction.
Patients report high levels of uncertainty and powerlessness during the period of time between a requested referral and subsequent action, as they wait for clarity on disease outcomes.7–9 Consequently, if patients experience inadequate coordination between primary and specialty care, their experiences may suffer. Previous research has found that shorter wait times for appointments are not always the most important priority for patients. Patients are willing to wait longer to get an appointment at a convenient time or to see a preferred provider, especially for low-worry, long-standing conditions. Yet, when there is a new health concern, faster access becomes a higher priority.10–12
The Veterans Health Administration (VHA), the largest clinically integrated system similar to an ACO in the United States that coordinates primary and specialty care, is a source of data that can provide important insights into the effect of the consult process on patient experience. In 2014, the VHA had over 9 million enrollees and provided or coordinated over 92 million outpatient encounters.13 Since 2009, the VHA has consistently measured patient experiences and satisfaction with the Survey of Healthcare Experiences of Patients (SHEP), which is modeled after the CAHPS survey that ACOs use to measure patient experiences.4,14 This paper investigates the effect of several different measures of VHA consult wait times on self-reported patient satisfaction, offering a specific point of intervention for ACOs to consider when seeking measurable improvements in patient experiences.
METHODS
As discussed in detail below, analyses used administrative data on wait times for consults in the VHA to predict self-reported patient satisfaction with care. Study approval for human subjects was obtained from the VA Boston Healthcare System Institutional Review Board.
Administrative Consult Wait Time Measures
The VHA’s electronic consult system was implemented in 1999, and its use is mandated for all consultation requests (according to internal VHA data). Data for this study were extracted from fiscal year (FY) 2012. The consult system automatically records time stamps when consult-related administrative events occur; these can be used to compute the number of elapsed days between consult creation and first action and/or scheduling. An additional measure captured total consult resolution time (Figure and Table 1): consults are considered resolved when the appointment (if performed) has been completed and the report is written and signed. Time stamps are also recorded when the consult is updated, discontinued, or returned to the sending service for clarification. When consults are returned for clarification of the request, the consult wait time clock is not reset; the time stamps run from consult creation through appointment scheduling and, eventually, consult completion. In contrast, discontinued consults stop the clock.
Figure.
Consult Wait Calculations
Example timeline for a single consult: days to first action = 1 day; days to scheduled = 10 days; days to completed = 15 days.
Table 1.
Summary of Consult Wait Time Measures

| Measure | Algorithm | Example Calculation | Actions Included in Measure |
|---|---|---|---|
| Days to completed (retrospective) | Completed consult date minus consult create date | Consult is entered into the system on 03/01/2012 and completed on 03/15/2012: 03/15/2012 − 03/01/2012 = 15 days | Consults can be resolved as completed, discontinued, or canceled. Completed: the appointment occurs. Discontinued: the patient dies before being seen, or the receiving clinic refuses to accept the consult because it does not have the capacity at that time. Canceled: the receiving service sends the consult back because it was entered incorrectly. |
| Days to first action (prospective) | Date of first action minus consult create date | Consult is entered into the system on 03/01/2012 and the order is sent to the receiving clinic on 03/02/2012: 03/02/2012 − 03/01/2012 = 1 day | First action can be any of several options, including printing the order, scheduling the appointment, or sending the order to the receiving clinic. |
| Days to scheduled (prospective) | Scheduled appointment date minus consult create date | A consult created on 03/01/2012 is scheduled for 03/10/2012: 03/10/2012 − 03/01/2012 = 10 days | Scheduling of the appointment is the only action included in this measure. |
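For concreteness, the arithmetic behind Table 1 can be sketched in a few lines of Python. This is a minimal illustration, not the VHA system's actual logic: the record layout and field names are hypothetical, and the exact day-counting convention is determined by the consult system rather than this sketch.

```python
from datetime import date

# Hypothetical consult record; field names are illustrative and do not
# reflect the actual VHA consult-system schema.
consult = {
    "created": date(2012, 3, 1),
    "first_action": date(2012, 3, 2),
    "scheduled": date(2012, 3, 10),
    "completed": date(2012, 3, 15),
}

def elapsed_days(start: date, end: date) -> int:
    """Calendar days between two consult time stamps."""
    # Plain subtraction; the worked examples in Table 1 suggest the
    # study's convention may count some endpoints inclusively, so the
    # published day counts are authoritative, not this sketch.
    return (end - start).days

days_to_first_action = elapsed_days(consult["created"], consult["first_action"])
days_to_scheduled = elapsed_days(consult["created"], consult["scheduled"])
days_to_completed = elapsed_days(consult["created"], consult["completed"])
print(days_to_first_action, days_to_scheduled, days_to_completed)
```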
Standardization of the consult system makes it possible to distinguish documents used for traditional clinical consultation from those used for other administrative purposes, such as requesting transportation.15 Accordingly, we narrowed the 2012 data to consults for clinical services by excluding administrative consults and non-VHA care consults.
Following our previous work, wait times were weighted using national proportions based on FY2011 data. Weights reflected the frequency of different types of consult appointments. If a station did not have a consult request for every type of appointment in a month, the remaining appointment weights were rescaled so they would sum to 100.16,17 Wait time measures were used in 2 ways in statistical models: 1) as a continuous variable; and 2) categorized roughly into quartiles, with the lowest quartile used as the reference group.
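A minimal sketch of the renormalization step follows, assuming made-up appointment types, weights, and wait values rather than the study's FY2011 figures.

```python
# Illustration of the weight renormalization described above; all
# appointment types and numbers here are hypothetical.
national_weights = {"cardiology": 40.0, "orthopedics": 35.0, "dermatology": 25.0}

# Suppose a station had no dermatology consult requests in a given month.
observed = {k: w for k, w in national_weights.items() if k != "dermatology"}

# Rescale the remaining weights so they again sum to 100.
total = sum(observed.values())
adjusted = {k: 100.0 * w / total for k, w in observed.items()}

# The station-month wait is the weighted average of appointment-type waits.
waits = {"cardiology": 30.0, "orthopedics": 12.0}  # mean days, hypothetical
station_wait = sum(adjusted[k] * waits[k] for k in waits) / 100.0
print(adjusted, station_wait)  # adjusted weights sum to 100; wait = 21.6 days
```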
Sample Selection
Consult wait times were linked to self-reported satisfaction using the 2012 SHEP. Managed by the VHA Office of Analytics and Business Intelligence, SHEP is an ongoing nationwide survey that obtains patient feedback on recent episodes of VHA inpatient or outpatient care. For outpatient care, a simple random sample of patients with completed appointments at VHA facilities is selected each month (according to internal VHA Support Service Center data). The overall response rate was 53%, and respondents came from all VHA medical centers (n = 130).
Different sample selection rules were applied to each consult wait measure. First, all individuals whose SHEP visit date matched the date of a completed/updated consult were flagged. In addition, we required the station (medical center code) in SHEP to match the station of the completed consult. This was the sample for the completed consult wait measure (n = 28,328). The wait assigned to each respondent was the facility average for all consults resolved in that month. Not surprisingly, because a clinic visit triggers a patient's eligibility to be contacted for SHEP, 90% of the individuals in this sample had a completed/updated status rather than a discontinued or canceled status.
For the next 2 measures (days to first action and days to scheduled consult), all individuals whose SHEP visit date matched the date a consult was initiated were flagged (n = 44,387). These requests were linked to the receiving (not the sending) station because receiving stations actually do the scheduling. We computed a forward-looking facility average wait time for all consults requested at that receiving station in the month.
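In data-management terms, this linkage might look like the following pandas sketch; all column names and values are hypothetical stand-ins for the actual SHEP and consult files.

```python
import pandas as pd

# Illustrative linkage of SHEP respondents to consult records.
shep = pd.DataFrame({
    "patient_id": [1, 2],
    "visit_date": ["2012-03-15", "2012-03-20"],
    "station": ["523", "608"],
})
consults = pd.DataFrame({
    "patient_id": [1, 2],
    "created_date": ["2012-03-15", "2012-03-20"],
    "receiving_station": ["523", "608"],
    "days_to_scheduled": [7.0, 12.0],
})

# Flag respondents whose SHEP visit date and station match the date a
# consult was initiated, linking on the receiving station.
linked = shep.merge(
    consults,
    left_on=["patient_id", "visit_date", "station"],
    right_on=["patient_id", "created_date", "receiving_station"],
)

# Assign each respondent the forward-looking facility-month average wait.
linked["month"] = pd.to_datetime(linked["visit_date"]).dt.to_period("M")
linked["facility_wait"] = linked.groupby(["station", "month"])[
    "days_to_scheduled"
].transform("mean")
print(linked[["patient_id", "station", "facility_wait"]])
```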
Patient Satisfaction Dependent Variables
Satisfaction measures were selected and operationalized following previous work.18 Satisfaction with timeliness of care was measured by asking respondents how often they were able to get VHA appointments as soon as they thought they needed care, excluding times they needed urgent care. Access to VHA tests, treatments, and appointments with VHA specialists was measured by asking how easy it was to get this care in the last 12 months. Response options for these 3 measures were “always,” “usually,” “sometimes,” and “never”; we estimated the likelihood of answering always/usually compared with sometimes/never. We also examined more general satisfaction measures that wait times for consults may influence. General satisfaction was measured by asking respondents to rate VHA healthcare in the last 12 months on a scale of 0 to 10 and to rate their satisfaction with their most recent VHA visit on a Likert scale ranging from 1 to 7 (higher numbers indicate greater satisfaction). We estimated the likelihood of a 9 or 10 rating compared with less than 9 on the first measure, and the likelihood of a 6 or 7 compared with less than 6 on the second measure.
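A minimal sketch of this dichotomization follows; the response labels and cutoffs mirror the description above, while the function names are invented for illustration.

```python
# Sketch of the outcome dichotomization used for the satisfaction measures.
def dichotomize_frequency(response: str) -> int:
    """1 for 'always'/'usually', 0 for 'sometimes'/'never'."""
    return int(response in {"always", "usually"})

def dichotomize_rating(rating: int, cutoff: int) -> int:
    """1 if the rating meets the cutoff (9 on the 0-10 scale; 6 on the 1-7 scale)."""
    return int(rating >= cutoff)

assert dichotomize_frequency("usually") == 1
assert dichotomize_rating(9, cutoff=9) == 1  # VHA rating: 9 or 10 vs <9
assert dichotomize_rating(5, cutoff=6) == 0  # VHA satisfaction: 6 or 7 vs <6
```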
Risk Adjustors
Risk adjustors included age, gender, race/ethnicity, education level, number of visits to a doctor’s office in the last 12 months, and self-reported health status—all obtained from SHEP FY2012. Models also included month fixed effects to control for secular changes in wait times and a VHA medical center random effect to control for facility quality and case-mix differences.
Analyses
Stata version 10.0 (StataCorp, College Station, Texas) was used to estimate logistic regression models predicting the dichotomized patient satisfaction variables.
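As a simplified illustration of the modeling step (not the authors' actual code), a Python analogue on simulated data might look like the sketch below. It includes month fixed effects and the lowest wait quartile as the reference group, as described in Methods, but omits the VHA medical center random effect, which would require a mixed-effects logit.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data, not the study data; column names are hypothetical.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "wait_quartile": rng.choice(["q1", "q2", "q3", "q4"], size=n),
    "age": rng.normal(67, 10, size=n).round(),
    "month": rng.choice([f"2012-{m:02d}" for m in range(1, 13)], size=n),
})

# Simulate lower satisfaction at higher wait quartiles.
log_odds = df["wait_quartile"].map({"q1": 1.4, "q2": 1.2, "q3": 1.0, "q4": 0.8})
df["satisfied"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Logistic regression with month fixed effects and q1 as the referent.
model = smf.logit(
    "satisfied ~ C(wait_quartile, Treatment('q1')) + age + C(month)", data=df
).fit()
print(np.exp(model.params))  # odds ratios, as reported in Table 4
```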
RESULTS
The SHEP respondents selected for this sample reflect the larger VHA patient population. Respondents were predominantly male, in poor health, and frequent healthcare users. There is evidence of high satisfaction with VHA care, with nearly 80% of respondents reporting they usually or always received appointments, treatment, or specialist care in a timely fashion. Additionally, 81% expressed the top 2 highest satisfaction levels for the most recent visit and 58% expressed the highest satisfaction levels with VHA care in the last 12 months (Table 2).
Table 2.
Descriptive Statistics of Individuals in SHEP Sample Selected by Consult Date

| Demographics (n = 56,686)a | Mean or % |
|---|---|
| Age | 66.88 |
| Male | 95% |
| Had some college | 55% |
| White | 78% |
| Black | 10% |
| Other | 12% |
| ≥5 visits to a doctor’s office in the last 12 months | 28% |
| Excellent/very good self-reported health status in the last 12 months | 25% |
| Patient Satisfaction Measuresb | |
| Timely visit: receiving an appointment as soon as you thought you needed one; always or usually vs sometimes or never (n = 21,472) | 80% |
| VHA rating: rate all VHA care in the last 12 months on a scale of 0 to 10 (10 = highest rating); 9 or 10 vs <9 (n = 29,143) | 58% |
| Treatment access: how often it was easy to get treatment or tests; always or usually vs sometimes or never (n = 25,214) | 82% |
| Specialist access: how often it was easy to get an appointment with a specialist; always or usually vs sometimes or never (n = 19,087) | 79% |
| VHA satisfaction: satisfaction with VHA care at most recent visit on a scale of 1 to 7 (7 = most satisfied); 6 or 7 vs <6 (n = 28,929) | 81% |
SHEP indicates Survey of Healthcare Experiences of Patients; VHA, Veterans Health Administration.
aThe sample includes all respondents with no missing information on any of the risk adjustors who were in the resolved consult and/or the pending consult sample.
bSample sizes differ between outcomes because not all SHEP respondents answered every satisfaction question.
There is significant variation in consult wait times across facilities. For consult completion, facilities in the top quartile have waits more than 10 days longer than facilities in the lowest quartile (33.5 vs 23.0 days). Facilities in the highest quartile took about a week longer to schedule appointments in response to a consult request than facilities in the lowest quartile. There was very little variation in time to first action, with less than a half-day difference between the highest and lowest quartiles (Table 3).
Table 3.
Descriptive Statistics of Clinical Consult Wait Time Measures
| Measure (days) | Mean | 25th Percentile | 50th Percentile | 75th Percentile |
|---|---|---|---|---|
| Days to completed (n = 27,300) | 28.8 | 23.0 | 27.1 | 33.5 |
| Days to first action (n = 42,802) | 0.09 | 0 | 0.02 | 0.06 |
| Days to scheduled consult (n = 42,802) | 10.3 | 6.3 | 8.8 | 13.0 |
The completed consult and time to scheduled consult measures have strong and consistent relationships with patient satisfaction (Table 4). Generally, the relationship is linear, with satisfaction decreasing for patients who visit facilities with longer waits (the higher quartiles of wait times). On these 2 measures, patients who visit facilities in the highest quartiles of wait times are significantly less satisfied than patients who visit facilities in the lowest quartile. Sensitivity analyses that included wait times as a continuous measure found that longer waits were significantly associated with lower satisfaction for every outcome except the model using the completed consult wait to predict the overall VHA satisfaction measure (data not shown). There was no significant relationship between the days to first action measure and patient satisfaction.
Table 4.
Odds Ratios of Consult Wait Time Measures Predicting Patient Satisfaction

| | Timely Visit | VHA Rating | Treatment Access | Specialist Access | VHA Satisfaction |
|---|---|---|---|---|---|
| Days to completed (ref = <23 days) | (n = 20,000) | (n = 27,095) | (n = 23,497) | (n = 17,797) | (n = 26,957) |
| ≥23 and <27.1 | 0.85** | 0.95 | 0.88** | 0.89 | 0.90** |
| ≥27.1 and <33.5 | 0.82** | 0.87** | 0.85** | 0.86** | 0.84** |
| ≥33.5 | 0.76** | 0.86** | 0.80** | 0.79** | 0.86** |
| Days to first action (ref = 0 days) | (n = 31,324) | (n = 42,462) | (n = 36,492) | (n = 26,461) | (n = 42,333) |
| >0 and <0.02 | 0.96 | 0.98 | 0.94 | 0.93 | 0.97 |
| ≥0.02 and <0.06 | 0.95 | 0.95 | 0.97 | 1.00 | 0.92* |
| ≥0.06 | 1.04 | 0.98 | 1.00 | 0.96 | 0.98 |
| Days to scheduled (ref = <6.5 days) | (n = 31,324) | (n = 42,462) | (n = 36,492) | (n = 26,641) | (n = 42,333) |
| ≥6.5 and <9.0 | 0.88** | 0.99 | 0.95 | 0.96 | 0.95 |
| ≥9.0 and <13.5 | 0.77** | 0.91** | 0.89* | 0.87** | 0.89** |
| ≥13.5 | 0.71** | 0.85** | 0.79** | 0.77** | 0.79** |
Ref indicates referent; VHA, Veterans Health Administration.
Models include demographics, self-reported health status, healthcare utilization, month fixed effects, and a VHA medical center random effect.
Reported numbers are odds ratios. Numbers in parentheses indicate sample sizes. Sample sizes differ between models because not all Survey of Healthcare Experiences of Patients respondents answered every satisfaction question.
*indicates P <.10;
**indicates P <.05.
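To make the odds ratios concrete, consider a back-of-the-envelope calculation; it ignores covariate adjustment, so it is only illustrative. Roughly 80% of respondents reported timely visits (Table 2), corresponding to odds of 0.80/0.20 = 4.0. Applying the odds ratio of 0.71 for the longest scheduling waits gives odds of 4.0 × 0.71 = 2.84, or a predicted probability of 2.84/(1 + 2.84) ≈ 74%, roughly a 6-percentage-point drop in satisfaction.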
DISCUSSION
This study finds a consistent relationship between measures of consult wait times and patient satisfaction. Longer waits between the initial request and either the scheduling or the completion of the consult are associated with poorer satisfaction. Generally, the negative relationship between waits and satisfaction was stronger for measures specifically related to accessing care, treatments, or specialists than for more general satisfaction measures. There was no relationship between the time to first action and patient satisfaction.
These findings are consistent with previous research that validated access metrics for appointments using patient satisfaction as an outcome. Prentice et al (2014) found that different types of access metrics predicted patient satisfaction for new and returning patients.18 Specifically, the wait time between appointment creation and appointment completion strongly predicted satisfaction for new, but not established, patients. One reason may be that new patients typically want to be seen right away because of emerging health concerns. The time to scheduling and time to completion measures used here are consistent with that previously validated measure and show the same relationship between consult waits and patient satisfaction. Patients being referred to specialty care are likely to have new problems and to want to be seen as soon as possible after the need is identified.
The findings from this study also expand our understanding of administrative access metrics that are patient centered. In contrast to the days to scheduled consult and days to completed consult measures, the days to first action measure had no relationship with patient satisfaction. This metric largely captures “behind the scenes” processes of transferring and scheduling consults. Patients repeatedly report a sense of powerlessness and uncertainty, and a feeling of “living their life on hold,” when waiting for diagnosis or treatment, which is compounded by a lack of information from the healthcare system.7–9 As ACOs and the VHA put greater emphasis on patient experiences, these findings suggest that metrics should focus on tangible processes that patients readily understand as action being taken on their behalf, such as scheduling appointments.
Given that longer waits for consults negatively impact patient experience, ACOs should have systems and tools in place to streamline the consult referral process and increase care coordination. These include policies, training, appropriate technological support tools, and organizational culture. Greenberg et al (2014) argue for the importance of developing a collaborative care agreement that delineates the roles of the referring and consulting provider in the pre-consult exchange, the actual consultation, and any co-management of the patient required after the consult is completed.5 Others argue that ACOs need to focus specifically on improving the structure of care coordination. This includes appropriate training for all providers and care teams on best practices in care coordination; technological support tools that help providers see a complete picture of a patient's care and achieve care coordination goals; and a culture that prioritizes such coordination, including protected time during the workday for care coordination activities.3,19
Limitations
The main limitation of this study is that we cannot definitively state that the relationship between longer consult wait times and lower patient satisfaction is causal, because omitted variables may be responsible. Our models attempted to minimize this possibility by including month fixed effects to control for secular changes in wait times and a medical center random effect to control for facility quality and case-mix differences. Moreover, research has repeatedly found a relationship between longer wait times and poorer outcomes, including decreased patient satisfaction and poorer health outcomes, in a variety of veteran populations and time periods, strengthening the inference that the relationship may be causal.16–18,20–22
Another limitation is that the study sample is largely elderly and male, so results may not be generalizable to other patient populations. Finally, wait times for all types of consults may not have the same impact on patient satisfaction. Because of data availability at the time of the study, our measures included all clinical consults. Long waits for a recommended preventive screening (eg, colonoscopy) may not have the same effect on patient satisfaction as waits for consults prompted by new health concerns. Future work should consider these nuances to determine which types of consult access have the largest impact on patient experiences.
CONCLUSIONS
The consult process occurs at an anxiety-producing time for patients. Findings from this study suggest that certain types of consult waits that can be easily obtained from the scheduling system are strong predictors of patient satisfaction. As ACOs reorganize to become more patient-centered, better management of consult waits has the potential to significantly improve patient satisfaction.
Acknowledgments
The authors are indebted to Aaron Legler for programming support.
Source of Funding:
Funding for this research was provided by the Access Clinic and Administration Program (now the Office of Veterans Access to Care) in the Department of Veterans Affairs.
Footnotes
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
REFERENCES
- 1. Accountable care organizations (ACOs): general information. CMS website. https://innovation.cms.gov/initiatives/aco/. Updated December 15, 2016. Accessed December 9, 2015.
- 2. McWilliams JM, Landon BE, Chernew ME, Zaslavsky AM. Changes in patients’ experiences in Medicare accountable care organizations. N Engl J Med. 2014;371(18):1715–1724. doi: 10.1056/NEJMsa1406552.
- 3. Press MJ, Michelow MD, MacPhail LH. Care coordination in accountable care organizations: moving beyond structure and incentives. Am J Manag Care. 2012;18(12):778–780.
- 4. CAHPS Survey for Accountable Care Organizations (ACOs) Participating in Medicare Initiatives—2016 ACO-9 Survey version required (English). CMS website. http://acocahps.cms.gov/globalassets/aco---epi-2-new-site/pdfs-for-aco/survey-instruments/2016-aco-survey/english/2016_aco-9_mail_survey_english.pdf. Accessed December 9, 2015.
- 5. Greenberg JO, Barnett ML, Spinks MA, Dudley MJ, Frolkis JP. The “medical neighborhood”: integrating primary and specialty care for ambulatory patients. JAMA Intern Med. 2014;174(3):454–457. doi: 10.1001/jamainternmed.2013.14093.
- 6. O’Malley AS, Reschovsky JD. Referral and consultation communication between primary care and specialist physicians: finding common ground. Arch Intern Med. 2011;171(1):56–65. doi: 10.1001/archinternmed.2010.480.
- 7. Fogarty C, Cronin P. Waiting for healthcare: a concept analysis. J Adv Nurs. 2007;61(4):463–471. doi: 10.1111/j.1365-2648.2007.04507.x.
- 8. Hansen BS, Rørtveit K, Leiknes I, et al. Patient experiences of uncertainty—a synthesis to guide nursing practice and research. J Nurs Manag. 2012;20(2):266–277. doi: 10.1111/j.1365-2834.2011.01369.x.
- 9. Rittenmeyer L, Huffman D, Godfrey C. The experience of patients, families, and/or significant others of waiting when engaging with the healthcare system: a systematic qualitative review. JBI Database System Rev Implement Rep. 2014;12(8):198–258. doi: 10.11124/jbisrir-2014-1664.
- 10. Rubin G, Bate A, George A, Shackley P, Hall N. Preferences for access to the GP: a discrete choice experiment. Br J Gen Pract. 2006;56(531):743–748.
- 11. Gerard K, Salisbury C, Street D, Pope C, Baxter H. Is fast access to general practice all that should matter? a discrete choice experiment of patient preferences. J Health Serv Res Policy. 2008;13(suppl 2):3–10. doi: 10.1258/jhsrp.2007.007087.
- 12. Salisbury C, Goodall S, Montgomery AA, et al. Does advanced access improve access to primary health care? questionnaire survey of patients. Br J Gen Pract. 2007;57(541):615–621.
- 13. National Center for Veterans Analysis and Statistics. Selected Veterans Health Administration characteristics: FY2002 to FY2014. Veterans Affairs website. http://www.va.gov/vetdata/Utilization.asp. Published September 28, 2014. Accessed December 9, 2015.
- 14. VHA facility quality and safety report fiscal year 2012 data. Veterans Affairs website. www.va.gov/health/docs/vha_quality_and_safety_report_2013.pdf. Published December 2013. Accessed December 15, 2015.
- 15. Waiting for care: examining patient wait times at VA. U.S. House of Representatives Committee on Veterans Affairs hearing. House Committee on Veterans Affairs website. https://veterans.house.gov/sites/republicans.veterans.house.gov/files/documents/113-11_0.pdf. Published March 14, 2013. Accessed July 10, 2015.
- 16. Prentice JC, Pizer SD. Delayed access to health care and mortality. Health Serv Res. 2007;42(2):644–662.
- 17. Prentice JC, Pizer SD. Waiting times and hospitalizations for ambulatory care sensitive conditions. Health Serv Outcomes Res Methodol. 2008;8(1):1–18. doi: 10.1007/s10742-007-0024-5.
- 18. Prentice JC, Davies ML, Pizer SD. Which outpatient wait-time measures are related to patient satisfaction? Am J Med Qual. 2014;29(3):227–235. doi: 10.1177/1062860613494750.
- 19. Oliver P, Bacheller S. ACOs: what every care coordinator needs in their tool box. Am J Accountable Care. 2015;3(3):18–20.
- 20. Pizer SD, Prentice JC. What are the consequences of waiting for health care in the veteran population? J Gen Intern Med. 2011;26(suppl 2):676–682. doi: 10.1007/s11606-011-1819-1.
- 21. Prentice JC, Fincke BG, Miller DR, Pizer SD. Outpatient wait time and diabetes care quality improvement. Am J Manag Care. 2011;17(2):e43–e54.
- 22. Prentice JC, Dy S, Davies ML, Pizer SD. Using health outcomes to validate access quality measures. Am J Manag Care. 2013;19(11):e367–e377.