This cohort study compares patient-reported experiences at safety-net and non–safety-net hospitals before and after the Centers for Medicare & Medicaid Services implemented the Hospital Value-Based Purchasing program.
Key Points
Question
Is the Centers for Medicare & Medicaid Services Hospital Value-Based Purchasing (VBP) program associated with changes in patient-reported experience in safety-net vs non–safety-net hospitals?
Findings
In this cohort study of 2266 US hospitals, safety-net hospitals had lower performance than non–safety-net hospitals across all measures of patient experience and satisfaction from 2008 through 2019. The VBP program implementation was not associated with improvement in measures of patient experience in safety-net vs non–safety-net hospitals.
Meaning
Findings of this study suggest that the VBP program was not associated with improved patient experience at safety-net vs non–safety-net hospitals; policy makers may need to explore strategies beyond pay-for-performance programs to address the differences in patient-reported experience at these hospitals.
Abstract
Importance
Safety-net hospitals, which have limited financial resources and care for disadvantaged populations, have lower performance on measures of patient experience than non–safety-net hospitals. In 2011, the Centers for Medicare & Medicaid Services Hospital Value-Based Purchasing (VBP) program began tying hospital payments to patient-reported experience scores, but whether implementation of this program narrowed differences in scores between safety-net and non–safety-net hospitals is unknown.
Objective
To evaluate whether the VBP program’s implementation was associated with changes in measures of patient-reported experience at safety-net hospitals compared with non–safety-net hospitals between 2008 and 2019.
Design, Setting, and Participants
This cohort study evaluated 2266 US hospitals that participated in the VBP program between 2008 and 2019. Safety-net hospitals were defined as those in the highest quartile of the disproportionate share hospital index. Data were analyzed from December 2021 to February 2022.
Main Outcomes and Measures
The primary outcomes were the Hospital Consumer Assessment of Healthcare Providers and Systems global measures of patient-reported experience and satisfaction, including a patient’s overall rating of a hospital and willingness to recommend a hospital. Secondary outcomes included the 7 other Hospital Consumer Assessment of Healthcare Providers and Systems measures encompassing communication ratings, clinical processes ratings, and hospital environment ratings. Piecewise linear mixed regression models were used to assess annual trends in performance on each patient experience measure by hospital safety-net status before (July 1, 2007-June 30, 2011) and after (July 1, 2011-June 30, 2019) implementation of the VBP program.
Results
Of 2266 US hospitals, 549 (24.2%) were safety-net hospitals. Safety-net hospitals were more likely than non–safety-net hospitals to be nonteaching (67.6% [371 of 549] vs 53.1% [912 of 1717]; P < .001) and urban (82.5% [453 of 549] vs 77.4% [1329 of 1717]; P = .01). Safety-net hospitals consistently had lower patient experience scores than non–safety-net hospitals across all measures from 2008 to 2019. The percentage of patients rating safety-net hospitals as a 9 or 10 out of 10 increased during the pre-VBP program period (annual percentage change, 1.84%; 95% CI, 1.73%-1.96%) and continued to increase at a slower rate after VBP program implementation (annual percentage change, 0.49%; 95% CI, 0.45%-0.53%). Similar patterns were observed at non–safety-net hospitals (pre-VBP program annual percentage change, 1.84% [95% CI, 1.77%-1.90%] and post-VBP program annual percentage change, 0.42% [95% CI, 0.41%-0.45%]). There was no differential change in performance between these sites after the VBP program implementation (adjusted differential change, 0.07% [95% CI, −0.08% to 0.23%]; P = .36). These patterns were similar for the global measure that assessed whether patients would definitely recommend a hospital. There was also no differential change in performance between safety-net and non–safety-net hospitals under the VBP program across measures of communication, including doctor (adjusted differential change, −0.09% [95% CI, −0.19% to 0.01%]; P = .08) and nurse (adjusted differential change, −0.01% [95% CI, −0.12% to 0.10%]; P = .86) communication as well as clinical process measures (staff responsiveness adjusted differential change, 0.13% [95% CI, −0.03% to 0.29%]; P = .11; and discharge instructions adjusted differential change, −0.02% [95% CI, −0.12% to 0.07%]; P = .62).
Conclusions and Relevance
This cohort study of 2266 US hospitals found that the VBP program was not associated with improved patient experience at safety-net hospitals vs non–safety-net hospitals during an 8-year period. Policy makers may need to explore other strategies to address ongoing differences in patient experience and satisfaction, including additional support for safety-net hospitals.
Introduction
In the US, policy makers and clinical leaders have prioritized efforts to improve patient experience and satisfaction with hospital care.1 Safety-net hospitals, which care for populations with lower income that face barriers to care and are racially and ethnically diverse, have lower performance on measures of patient experience than non–safety-net hospitals.2 These inequities are concerning and suggest that some populations may have substandard experiences with the health care system.
As part of a national effort to improve patient experience, the Centers for Medicare & Medicaid Services (CMS) implemented the Hospital Value-Based Purchasing (VBP) program in 2011, which financially rewards or penalizes US hospitals based in part on patient-reported experience scores.3 Performance on patient experience scores accounts for 25% of the VBP program’s total performance score, which is used to calculate payment adjustments for each hospital.3 Although early evaluations of the first 3 years of the VBP program suggest that it did not meaningfully improve patient experience across all US hospitals,4,5 little is known about whether the program has narrowed—or widened—differences in patient experience between safety-net and non–safety-net hospitals over the long term. It is possible that during a longer period of time, safety-net hospitals were able to meaningfully invest in strategies and changes in care delivery to improve patients’ experience. On the other hand, because the VBP program has disproportionately penalized safety-net hospitals6,7 and taken resources away from already resource-constrained sites, disparities in patient experience may have increased.
Understanding how patient care experience has changed after implementation of the VBP program at safety-net hospitals that care for lower-income and racial and ethnic minority populations is critically important from a health equity perspective, particularly because self-reported experience is associated with adherence to treatment, recovery from illness, and health outcomes.2,8,9 Therefore, in this long-term evaluation of the VBP program, we examined how measures of patient-reported experience changed at safety-net hospitals compared with non–safety-net hospitals in the 8 years after the program’s implementation.
Methods
Data Sources and Study Population
This cohort study was approved by the institutional review board at Beth Israel Deaconess Medical Center, and the requirement for written informed consent was waived because only deidentified data were used. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
We used CMS Hospital Compare files to obtain performance data on patient satisfaction and experience for all US hospitals from 2008 to 2019. We then further restricted the sample to those hospitals that participated in the VBP program (ie, were eligible for financial adjustments under the program) in all years since the program’s implementation in 2011. For the main analysis, hospitals were classified as safety-net sites if they were in the highest quartile of the Disproportionate Share Hospital (DSH) index nationally, consistent with past studies.2,6,7,10,11 In addition, we ran a sensitivity analysis using the highest DSH quintile to define safety-net status. American Hospital Association annual survey data were used to identify other hospital characteristics (eg, number of beds, teaching status, rural vs urban location). A flow diagram of hospitals included in the analysis is shown in eFigure 1 in the Supplement with demographic characteristics described in eTable 1 in the Supplement.
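The quartile-based classification rule described above can be sketched in a few lines. This is an illustrative sketch only; the hospital identifiers and DSH values below are hypothetical, not the study's actual data.

```python
import statistics

def classify_safety_net(dsh_by_hospital: dict) -> dict:
    """Flag hospitals whose DSH index falls in the highest national quartile."""
    # statistics.quantiles with n=4 returns the 3 quartile cut points;
    # the third value is the 75th percentile of the national distribution.
    q75 = statistics.quantiles(dsh_by_hospital.values(), n=4)[2]
    return {hospital: dsh > q75 for hospital, dsh in dsh_by_hospital.items()}

# Hypothetical DSH index values for 4 hospitals
dsh = {"A": 0.10, "B": 0.25, "C": 0.40, "D": 0.55}
flags = classify_safety_net(dsh)  # only the top-quartile hospital is flagged
```

The same function applied to quintile cut points (`statistics.quantiles(..., n=5)[3]`) would reproduce the sensitivity analysis definition.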
We obtained annual patient satisfaction ratings for each hospital based on the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey (eMethods 1 in the Supplement). The HCAHPS survey is designed to capture patients’ perspectives about their care experience and allows for comparisons of hospital performance on topics that are important to patients. The survey consists of 29 questions regarding patient experience and is administered by hospitals to a random sample of adult patients in the period 48 hours to 6 weeks after discharge. Then CMS reports performance on 10 measures based on these questions. These measures include 2 global measures, including the patient’s overall rating of the hospital on a scale of 0 to 10 (with 10 the highest rating) and willingness to recommend the hospital; 3 communication measures, including communication with doctors, communication with nurses, and communication regarding medications; 3 clinical processes measures, including responsiveness of staff, care transition, and discharge planning; and 2 hospital environment measures, including quietness and cleanliness of the environment. Because the care transition measure of HCAHPS was first introduced in 2013, and no information about this measure was available before this year, we excluded it from the analysis.12
Outcomes
The primary outcomes of interest were the HCAHPS global measures of satisfaction, which include a patient’s overall rating of a hospital and willingness to recommend a hospital. We examined the percentage of surveyed patients reporting both the most positive responses (or top-box responses) for each of these measures as categorized by CMS (eg, for the cleanliness measure, the question “During this hospital stay, how often were your room and bathroom kept clean?” could be answered as always [top-box response], usually [middle-box response], sometimes [middle-box response], or never [bottom-box response]). For the overall rating measure, we identified the percentage of patients who awarded a hospital an overall rating of 9 or 10 (on a scale of 0 to 10, with higher scores indicating a greater level of patient satisfaction) and the percentage of patients who would definitely recommend the hospital (these are designated by CMS as top-box responses). Secondary outcomes included the 7 other HCAHPS measures encompassing communication ratings, clinical processes ratings, and hospital environment ratings. We identified the percentage of patients choosing the most positive rating options (top-box responses) available for each of the 7 scales. We focused on the top-box scores for all measures except for the “willingness to recommend a hospital” measure because CMS uses these top-box scores to evaluate hospital performance as part of the VBP program.
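The top-box scoring described above reduces each measure to the share of respondents choosing the most positive option. A minimal sketch, assuming simplified response labels (the actual HCAHPS instrument and CMS scoring rules are more detailed):

```python
def top_box_pct(responses: list, top: str = "always") -> float:
    """Percentage of survey responses equal to the top-box option."""
    if not responses:
        return 0.0
    # sum() over booleans counts how many responses hit the top box
    return 100.0 * sum(r == top for r in responses) / len(responses)

# Hypothetical responses to the cleanliness question for one hospital
ratings = ["always", "usually", "always", "never"]
pct = top_box_pct(ratings)  # 2 of 4 top-box responses -> 50.0
```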
Statistical Analysis
We first compared the characteristics of safety-net hospitals and non–safety-net hospitals that participated in the VBP program using 2-sample t tests for continuous variables and χ2 tests for categorical variables. We then fit piecewise linear mixed regression models to assess annual trends in performance on each patient experience measure by hospital safety-net status before (July 1, 2007-June 30, 2011) and after (July 1, 2011-June 30, 2019) implementation of the VBP program. The piecewise linear mixed regression model assumes a continuous outcome is linearly associated with a set of explanatory variables but allows for the trend after an implementation event to be different from the trend before it. To test our assumption of linearity, we fit an additional regression model that included squared time terms and compared the fit statistics with a linear model without squared time terms. Because both models had comparable fit statistics, we used the model fitting linear trends (eMethods 2 in the Supplement). The models were adjusted for other hospital characteristics (ownership status, teaching status, hospital size, rural vs urban location, and region).3 A random intercept was included for each hospital to account for the intraclass correlation between repeated measurements each year at the same hospitals. To examine whether there was a differential change in annual trends for outcomes at safety-net compared with non–safety-net hospitals after implementation of the VBP program, we included an interaction term for safety-net status and calendar years during the post-VBP program period. This variable allowed us to compare the differential change in slope estimates (for outcomes) from periods before and after VBP program implementation between hospital groups. 
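The piecewise (linear spline) time coding described above amounts to one slope before VBP implementation, a change-in-slope term afterward, and a safety-net × post-period interaction for the differential change. The study fit these models in R with a random intercept per hospital; the sketch below only builds the fixed-effect covariates for a single hospital-year, with assumed variable names:

```python
VBP_YEAR = 2011    # program implementation year
BASE_YEAR = 2008   # first year of the study period

def design_row(year: int, safety_net: bool) -> dict:
    """Build piecewise-regression covariates for one hospital-year observation."""
    time = year - BASE_YEAR                # overall linear trend (pre-VBP slope)
    post_time = max(0, year - VBP_YEAR)    # years accrued after VBP implementation
    return {
        "time": time,                      # pre-VBP annual trend
        "post_time": post_time,            # change in slope after VBP
        "safety_net": int(safety_net),     # level difference between hospital groups
        # interaction term capturing the differential post-VBP slope change,
        # i.e., the "adjusted differential change" reported in Tables 2 and 3:
        "safety_net_x_post": int(safety_net) * post_time,
    }

row = design_row(2015, safety_net=True)
```

Under this coding, the coefficient on `safety_net_x_post` is zero exactly when safety-net and non–safety-net hospitals changed slope by the same amount after implementation.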
We also formally evaluated whether trends in performance at safety-net and non–safety-net hospitals were similar before implementation of the VBP program with visual inspection and by testing the interaction term for pre-VBP program calendar years and safety-net status for each respective outcome (eMethods 3 in the Supplement). Trends in performance did not significantly differ between hospital groups in the pre-VBP program period, indicating appropriateness of a parallel trends assumption for our modeling (eMethods 3 in the Supplement). Our modeling approach is described in more detail in eMethods 2 in the Supplement.
Because our analysis included 9 outcomes, we used a Bonferroni correction to account for potential multiple testing, setting the α threshold for significance at .05 ÷ 9 ≈ .006, with P < .006 considered statistically significant. All analyses were performed using R software, version 4.1.2 (R Foundation for Statistical Computing) from December 2021 to February 2022.
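The Bonferroni rule applied above is mechanical: with 9 outcomes, each test is judged against α ÷ 9 rather than α itself. A minimal sketch (function and variable names are illustrative):

```python
ALPHA = 0.05
N_OUTCOMES = 9
threshold = ALPHA / N_OUTCOMES  # approximately 0.0056

def significant(p: float) -> bool:
    """True only if p clears the Bonferroni-corrected threshold."""
    return p < threshold

# E.g., the cleanliness result (P = .02) does not clear the corrected bar,
# while a result of P = .001 would.
cleanliness_sig = significant(0.02)   # False
strong_sig = significant(0.001)       # True
```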
Results
A total of 2266 hospitals were included in our analysis, 549 (24.2%) of which were classified as safety-net hospitals. Characteristics of safety-net and non–safety-net hospitals are shown in Table 1. Safety-net hospitals were more likely than non–safety-net hospitals to be nonteaching (67.6% [371 of 549] vs 53.1% [912 of 1717]; P < .001) and located in urban areas (82.5% [453 of 549] vs 77.4% [1329 of 1717]; P = .01). Safety-net hospitals had lower performance scores than non–safety-net hospitals on HCAHPS satisfaction ratings across all measures (Figure; eFigures 2-4 in the Supplement).
Table 1. Characteristics of Safety-net vs Non–Safety-net Hospitals in the Hospital Value-Based Purchasing Program.
Characteristic | Safety-net hospitals, No. (%) (n = 549) | Non–safety-net hospitals, No. (%) (n = 1717) | P value |
---|---|---|---|
No. of beds, mean (SD) | 324 (284) | 241 (225) | |
Hospital size | |||
Large (≥400 beds) | 150 (27.3) | 262 (15.3) | <.001 |
Medium (100-399 beds) | 321 (58.5) | 1025 (59.7) | |
Small (<100 beds) | 78 (14.2) | 430 (25.0) | |
Ownership status | |||
For profit | 131 (23.9) | 290 (16.9) | <.001 |
Government | 94 (17.1) | 172 (10.0) | |
Nonprofit | 324 (59.0) | 1255 (73.1) | |
Teaching hospital | |||
Yes | 178 (32.4) | 805 (46.9) | <.001 |
No | 371 (67.6) | 912 (53.1) | |
Location | |||
Rural area | 96 (17.5) | 388 (22.6) | .01 |
Urban area | 453 (82.5) | 1329 (77.4) | |
Region | |||
Northeast | 91 (16.6) | 314 (18.3) | <.001 |
West | 194 (35.3) | 229 (13.3) | |
South | 189 (34.4) | 666 (38.8) | |
Midwest | 75 (13.7) | 508 (29.6) |
Figure. Trends in Hospital Performance on Global Satisfaction Measures From 2008 to 2019.
Unadjusted annual trends in performance on the overall hospital rating measure (A) and the hospital recommendation measures (B) at safety-net hospitals and non–safety-net hospitals from 2008 to 2019. The dashed line represents implementation of the Centers for Medicare & Medicaid Services Value-Based Purchasing program in 2011. Safety-net hospitals were classified as those in the highest quartile of the Disproportionate Share Hospital index. Hospital-level performance on patient experience measures was obtained from the publicly reported Hospital Consumer Assessment of Healthcare Providers and Systems survey.
Global Satisfaction
The percentage of patients rating safety-net hospitals as 9 or 10 out of 10 increased before VBP program implementation (annual percentage change, 1.84%; 95% CI, 1.73%-1.96%) and at a slower rate after the VBP program implementation (annual percentage change, 0.49%; 95% CI, 0.45%-0.53%) (Table 2). Similar patterns were observed at non–safety-net hospitals during the periods before (annual percentage change, 1.84%; 95% CI, 1.77%-1.90%) and after VBP program implementation (annual percentage change, 0.42%; 95% CI, 0.41%-0.45%). There was no differential change in performance on this measure between safety-net and non–safety-net hospitals after implementation of the VBP program (adjusted differential change, 0.07%; 95% CI, –0.08% to 0.23%; P = .36). These patterns were similar for the second overall measure that assessed the percentage of patients who reported they would definitely recommend the hospital (Table 2), and there was no differential change in performance on this measure between safety-net and non–safety-net hospitals under the VBP program (adjusted differential change, 0.01%; 95% CI, –0.15% to 0.17%; P = .92).
Table 2. Association of the VBP Program With Global Patient Satisfaction Measures at Safety-net vs Non–Safety-net Hospitals From 2008 to 2019.
Measure | Pre-VBP program implementation annual changeᵃ, % (95% CI) | Post-VBP program implementation annual changeᵇ, % (95% CI) | Adjusted differential changeᶜ, % (95% CI) | P value |
---|---|---|---|---|
Rating of 9 or 10ᵈ | ||||
Safety-net hospitals | 1.84 (1.73 to 1.96) | 0.49 (0.45 to 0.53) | 0.07 (−0.08 to 0.23) | .36 |
Non–safety-net hospitals | 1.84 (1.77 to 1.90) | 0.42 (0.41 to 0.45) | ||
Would definitely recommend | ||||
Safety-net hospitals | 1.21 (1.09 to 1.32) | 0.13 (0.09 to 0.17) | 0.01 (−0.15 to 0.17) | .92 |
Non–safety-net hospitals | 1.13 (1.07 to 1.20) | 0.05 (0.02 to 0.07) |
Abbreviation: VBP, Centers for Medicare & Medicaid Services Hospital Value-Based Purchasing Program.
ᵃ Adjusted annual percentage change in patient satisfaction ratings during the pre-VBP program period (July 1, 2007-June 30, 2011).
ᵇ Adjusted annual percentage change in patient satisfaction ratings during the post-VBP program period (July 1, 2011-June 30, 2019).
ᶜ Estimated percentage-point difference in outcomes at safety-net vs non–safety-net hospitals since the implementation of the VBP program.
ᵈ On a scale of 0 to 10, with higher scores indicating a greater level of satisfaction.
Communication
The percentage of patients reporting that their nurses at safety-net hospitals always communicated well increased before VBP program implementation (annual percentage change, 1.50%; 95% CI, 1.42%-1.58%) and at a slower rate after VBP program implementation (annual percentage change, 0.46%; 95% CI, 0.43%-0.48%) (Table 3). Similar patterns were observed at non–safety-net hospitals before VBP program implementation (annual percentage change, 1.41%; 95% CI, 1.36%-1.45%) and after VBP program implementation (annual percentage change, 0.38%; 95% CI, 0.36%-0.39%) (Table 3). There was no differential change in performance between safety-net and non–safety-net hospitals on this measure after implementation of the VBP program (adjusted differential change, –0.01%; 95% CI, –0.12% to 0.10%; P = .86). These patterns were similar for the communication about medications measure before and after VBP program implementation at both safety-net hospitals (annual percentage change, 1.39%; 95% CI, 1.29%-1.49% vs annual percentage change, 0.43%; 95% CI, 0.39%-0.46%) and non–safety-net hospitals (annual percentage change, 1.42%; 95% CI, 1.36%-1.48% vs annual percentage change, 0.40%; 95% CI, 0.38%-0.42%) with no differential change between these groups after implementation of the VBP program (adjusted differential change, 0.05%; 95% CI, –0.09% to 0.19%; P = .46). 
Although the percentage of patients reporting that their doctors at safety-net hospitals always communicated well increased before VBP program implementation for both safety-net hospitals (annual percentage change, 0.61%; 95% CI, 0.54%-0.68%) and non–safety-net hospitals (annual percentage change, 0.51%; 95% CI, 0.47%-0.55%), these changes appeared to plateau during the period after implementation of the program at both safety-net hospitals (annual percentage change, 0.02%; 95% CI, 0%-0.05%) and non–safety-net hospitals (annual percentage change, 0.01%; 95% CI, –0.01% to 0.02%), and there was no significant differential change in performance between these groups after the VBP program’s implementation (adjusted differential change, –0.09%; 95% CI, –0.19% to 0.01%; P = .08).
Table 3. Association of the VBP Program With Communication, Clinical Processes, and Hospital Environment Patient Satisfaction Measures at Safety-net vs Non–Safety-net Hospitals From 2008 to 2019.
Measure | Pre-VBP program annual changeᵃ, % (95% CI) | Post-VBP program annual changeᵇ, % (95% CI) | Adjusted differential changeᶜ, % (95% CI) | P value |
---|---|---|---|---|
Communication | ||||
Doctor communication | ||||
Safety-net hospitals | 0.61 (0.54 to 0.68) | 0.02 (0 to 0.05) | −0.09 (−0.19 to 0.01) | .08 |
Non–safety-net hospitals | 0.51 (0.47 to 0.55) | 0.01 (−0.01 to 0.02) | ||
Nurse communication | ||||
Safety-net hospitals | 1.50 (1.42 to 1.58) | 0.46 (0.43 to 0.48) | −0.01 (−0.12 to 0.10) | .86 |
Non–safety-net hospitals | 1.41 (1.36 to 1.45) | 0.38 (0.36 to 0.39) | ||
Medication communication | ||||
Safety-net hospitals | 1.39 (1.29 to 1.49) | 0.43 (0.39 to 0.46) | 0.05 (−0.09 to 0.19) | .46 |
Non–safety-net hospitals | 1.42 (1.36 to 1.48) | 0.40 (0.38 to 0.42) | ||
Clinical processes | ||||
Staff responsiveness | ||||
Safety-net hospitals | 1.25 (1.13 to 1.36) | 0.50 (0.46 to 0.54) | 0.13 (−0.03 to 0.29) | .11 |
Non–safety-net hospitals | 1.26 (1.19 to 1.32) | 0.38 (0.36 to 0.40) | ||
Discharge instructions | ||||
Safety-net hospitals | 1.44 (1.38 to 1.51) | 0.54 (0.52 to 0.57) | −0.02 (−0.12 to 0.07) | .62 |
Non–safety-net hospitals | 1.39 (1.35 to 1.43) | 0.52 (0.50 to 0.53) | ||
Hospital environment | ||||
Quietness | ||||
Safety-net hospitals | 1.34 (1.22 to 1.45) | 0.16 (0.12 to 0.20) | 0.03 (−0.14 to 0.19) | .76 |
Non–safety-net hospitals | 1.42 (1.35 to 1.49) | 0.22 (0.20 to 0.24) | ||
Cleanliness | ||||
Safety-net hospitals | 1.13 (1.02 to 1.24) | 0.37 (0.33 to 0.40) | 0.19 (0.03 to 0.35) | .02 |
Non–safety-net hospitals | 1.21 (1.15 to 1.28) | 0.26 (0.24 to 0.28) |
Abbreviation: VBP, Centers for Medicare & Medicaid Services Hospital Value-Based Purchasing Program.
ᵃ Adjusted annual percentage change in patient satisfaction ratings during the pre-VBP program period (July 1, 2007-June 30, 2011).
ᵇ Adjusted annual percentage change in patient satisfaction ratings during the post-VBP program period (July 1, 2011-June 30, 2019).
ᶜ Estimated percentage-point difference in outcomes at safety-net vs non–safety-net hospitals since the implementation of the VBP program.
Clinical Processes
The percentage of patients reporting that they always received help as soon as they wanted increased before VBP program implementation at both safety-net hospitals (annual percentage change, 1.25%; 95% CI, 1.13%-1.36%) and non–safety-net hospitals (annual percentage change, 1.26%; 95% CI, 1.19%-1.32%), and these improvements continued (at a slower rate) during the period after VBP program implementation for both groups (Table 3). However, there was no significant differential change on the measure of staff responsiveness between safety-net and non–safety-net hospitals after implementation of the VBP program (adjusted differential change, 0.13%; 95% CI, –0.03% to 0.29%; P = .11). These patterns were similar for the discharge instructions measure (adjusted differential change, –0.02%; 95% CI, –0.12% to 0.07%; P = .62).
Hospital Environment
The percentage of patients reporting that the area around their room was always quiet increased before VBP program implementation at both safety-net hospitals (annual percentage change, 1.34%; 95% CI, 1.22%-1.45%) and non–safety-net hospitals (annual percentage change, 1.42%; 95% CI, 1.35%-1.49%), but there was no differential change between these groups after the VBP program’s implementation (adjusted differential change, 0.03%; 95% CI, –0.14% to 0.19%; P = .76) (Table 3). For the cleanliness measure, the percentage of patients reporting that the area around their room and bathroom was always clean increased before VBP program implementation at both safety-net hospitals (annual percentage change, 1.13%; 95% CI, 1.02%-1.24%) and non–safety-net hospitals (annual percentage change, 1.21%; 95% CI, 1.15%-1.28%), although these increases appeared to slow at both safety-net hospitals (annual percentage change, 0.37%; 95% CI, 0.33%-0.40%) and non–safety-net hospitals (annual percentage change, 0.26%; 95% CI, 0.24%-0.28%) after VBP program implementation. Under the Bonferroni-corrected significance threshold (P < .006), there was no significant differential change in hospital cleanliness at safety-net compared with non–safety-net hospitals after implementation of the VBP program (adjusted differential change, 0.19%; 95% CI, 0.03%-0.35%; P = .02).
The results remained consistent after performing a sensitivity analysis that defined safety-net hospitals as those in the highest DSH quintile (eTables 2 and 3 in the Supplement).
Discussion
In the US, safety-net hospitals had lower patient-reported experience scores than non–safety-net hospitals across measures of global satisfaction, communication, processes of care, and hospital environment from 2008 to 2019. The VBP program was implemented in 2011 and aimed to motivate improvements in patient experience by linking financial incentives to performance on these measures. Although short-term evaluations of the VBP program found that it did not improve patient experience,4 policy makers hoped that over the long term, low-performing hospitals would implement changes in care delivery to improve patient-reported experiences. However, in this national study, we found that in the 8 years since the VBP program’s financial incentives were introduced, patient experience and satisfaction have not improved at safety-net hospitals compared with non–safety-net hospitals.
Safety-net hospitals tend to care for groups that have a lower income, face barriers to care, and are racially and ethnically diverse, and improving the care experience of these populations is a policy priority. Chatterjee and colleagues2 first described disparities between safety-net and non–safety-net hospitals in patient experience and satisfaction from 2007 to 2010, before implementation of the VBP program. After the VBP program was implemented nationwide, 2 key studies found that the program was not associated with improvements in patient experience across all US hospitals during the first 3 to 4 years.4,5 This study builds on these previous analyses by focusing specifically on whether the VBP program has narrowed the disparity in performance between safety-net and non–safety-net hospitals. Although safety-net hospitals have been heavily penalized by the VBP program since its inception,6,11 findings of the present study highlight that the program’s financial incentives have not meaningfully changed disparities in performance between safety-net and non–safety-net hospitals, even over the long term. If anything, improvements in patient satisfaction that preceded the VBP program appear to have slowed considerably at both groups of hospitals after implementation of the program. As policy makers intensify efforts to improve care delivery for disadvantaged populations, this study suggests that they may need to consider strategies beyond financial incentives to meaningfully improve patients’ interactions with the health care system.
Although safety-net hospitals serve a critically important role in the US health care system by providing care to a large proportion of underserved patients regardless of their ability to pay, they do so while operating with fewer financial resources and greater financial stress.13,14 As a result, these hospitals have fewer avenues to respond to external financial pressures, including pay-for-performance incentives implemented by CMS.14,15,16 The VBP program was intentionally designed to motivate improvements among hospitals with the lowest performance by assigning 2 scores to each hospital—1 for performance, and 1 for improvement—and the higher of the 2 scores is used to identify VBP program payment adjustments.11 Although the VBP program’s design, in theory, provides a direct incentive for greater improvement among hospitals with the lowest performance, the findings of the present study underscore that this strategy has not been successful for improving measures of patient-reported experience at safety-net hospitals. One potential explanation for these findings is that the VBP program’s financial incentives are too modest to motivate meaningful changes, with most hospitals experiencing Medicare payment adjustments of just a fraction of 1%.17,18 More likely, however, is that safety-net hospitals face substantial barriers in implementing strategies to respond to Medicare pay-for-performance programs, including limited financial resources.19 Furthermore, we note that improvements in measures of patient experience slowed for both safety-net and non–safety-net hospitals over time—despite implementation of the VBP program. This plateauing outcome may be because marginal improvements may be more difficult to achieve after a certain level of progress from baseline. Alternatively, it is possible that greater rates of improvement occurred before the VBP program in anticipation of the program being implemented.
The findings of the present study may have important policy implications as CMS weighs strategies to advance health equity across US hospitals.20 During the past decade, clinicians, researchers, and policy makers have increasingly become concerned that pay-for-performance programs, such as the VBP program, may impede care delivery by taking resources away from already resource-constrained sites, such as safety-net hospitals.11,21,22,23,24,25 Although it is reassuring that we did not observe increased disparities in patient experience between safety-net and non–safety-net sites during the study period, these findings build on a growing evidence base that suggests that pay-for-performance programs may not be meaningfully improving care.26,27,28 For example, recent studies suggest that other CMS pay-for-performance initiatives, such as the Hospital Readmissions Reduction Program, have largely not been effective and have potentially had unintended consequences (eg, several studies suggest that the program was potentially associated with an increase in heart failure mortality) while disproportionately penalizing safety-net hospitals.6,7,29,30,31,32,33,34,35,36 As the US continues its transition toward a value-based payment system, this long-term evaluation of the VBP program highlights that federal quality improvement initiatives will need to prioritize health care equity to meaningfully improve care delivery for populations that face barriers to care.
Limitations
Our study has limitations. First, there is no standard approach for classifying hospital safety-net status. We defined safety-net hospitals as those in the highest quartile of the DSH index, a definition commonly used by researchers that is consistent with a number of past studies.2,6,7,10,11 In addition, we ran a sensitivity analysis defining safety-net hospitals as those in the highest quintile of the DSH index—these findings were consistent with our main analysis. Second, although HCAHPS response rates are low, rigorous evaluations of the survey have found that the likelihood of nonresponse bias is minimal.37,38 Furthermore, nonresponse bias has been shown to be less related to response rates than to use of rigorous standard protocols,39,40 and the HCAHPS survey is carefully designed and deemed valid for use in CMS public reporting and hospital payment determinations. Response rates each year are found in eTable 4 in the Supplement.
Conclusions
In this cohort study of 2266 US hospitals, patient-reported experience across measures of global satisfaction, communication, processes of care, and hospital environment did not differentially improve at safety-net hospitals compared with non–safety-net hospitals in the 8 years after the implementation of the VBP program. These findings suggest that even over the long term, pay-for-performance initiatives may not meaningfully improve health care equity. Policy makers may need to explore other strategies to improve patient-reported experience and satisfaction, including additional support for safety-net hospitals.
Supplement
eMethods 1. Details on the Publicly Reported Hospital Consumer Assessment of Healthcare Providers and Systems Survey
eMethods 2. Methodological Approach for Difference-in-Differences Analysis
eMethods 3. Statistical Testing for Parallel (Pre-VBP) Trends
eTable 1. Characteristics of Hospitals Included vs. Not Included in the Analysis
eTable 2. Primary Outcome Sensitivity Analysis: Defining Safety-Net Hospitals Using Top Quintile DSH Scores
eTable 3. Secondary Outcome Sensitivity Analysis: Defining Safety-Net Hospitals Using Top Quintile DSH Scores
eTable 4. Annual Response Rates in the HCAHPS Survey
eFigure 1. Hospitals Participating in the VBP From 2008 to 2019
eFigure 2. Trends in Hospital Performance on Communication Measures From 2008 to 2019
eFigure 3. Trends in Hospital Performance on Clinical Processes Measures From 2008 to 2019
eFigure 4. Trends in Hospital Performance on Hospital Environment Measures From 2008 to 2019
References
- 1. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Measuring patient experience as a strategy for improving primary care. Health Aff (Millwood). 2010;29(5):921-925. doi:10.1377/hlthaff.2010.0238
- 2. Chatterjee P, Joynt KE, Orav EJ, Jha AK. Patient experience in safety-net hospitals: implications for improving care and value-based purchasing. Arch Intern Med. 2012;172(16):1204-1210. doi:10.1001/archinternmed.2012.3158
- 3. Centers for Medicare & Medicaid Services. Medicare program; hospital inpatient value-based purchasing program: final rule. Fed Regist. 2011;76(88):26490-26547.
- 4. Papanicolas I, Figueroa JF, Orav EJ, Jha AK. Patient hospital experience improved modestly, but no evidence Medicare incentives promoted meaningful gains. Health Aff (Millwood). 2017;36(1):133-140. doi:10.1377/hlthaff.2016.0808
- 5. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in hospital quality associated with hospital value-based purchasing. N Engl J Med. 2017;376(24):2358-2366. doi:10.1056/NEJMsa1613412
- 6. Aggarwal R, Hammond JG, Joynt Maddox KE, Yeh RW, Wadhera RK. Association between the proportion of black patients cared for at hospitals and financial penalties under value-based payment programs. JAMA. 2021;325(12):1219-1221. doi:10.1001/jama.2021.0026
- 7. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343. doi:10.1001/jama.2012.94856
- 8. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients’ perception of care and measures of hospital quality and safety. Health Serv Res. 2010;45(4):1024-1040. doi:10.1111/j.1475-6773.2010.01122.x
- 9. Tsai TC, Orav EJ, Jha AK. Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2-8. doi:10.1097/SLA.0000000000000765
- 10. Liu M, Figueroa JF, Song Y, Wadhera RK. Mortality and postdischarge acute care utilization for cardiovascular conditions at safety-net versus non-safety-net hospitals. J Am Coll Cardiol. 2022;79(1):83-87. doi:10.1016/j.jacc.2021.10.006
- 11. Gilman M, Adams EK, Hockenberry JM, Milstein AS, Wilson IB, Becker ER. Safety-net hospitals more likely than other hospitals to fare poorly under Medicare’s value-based purchasing. Health Aff (Millwood). 2015;34(3):398-405. doi:10.1377/hlthaff.2014.1059
- 12. Hod R, Maimon O, Zimlichman E. Does care transition matter? exploring the newly published HCAHPS measure. Am J Med Qual. 2020;35(5):380-387. doi:10.1177/1062860620905310
- 13. Zuckerman S, Bazzoli G, Davidoff A, LoSasso A. How did safety-net hospitals cope in the 1990s? Health Aff (Millwood). 2001;20(4):159-168. doi:10.1377/hlthaff.20.4.159
- 14. Sutton JP, Washington RE, Fingar KR, Elixhauser A. Characteristics of Safety-Net Hospitals, 2014. Agency for Healthcare Research and Quality. October 2016. Accessed January 16, 2022. https://www.hcup-us.ahrq.gov/reports/statbriefs/sb213-Safety-Net-Hospitals-2014.jsp
- 15. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety-net hospitals. Health Aff (Millwood). 2007;26(1):238-248. doi:10.1377/hlthaff.26.1.238
- 16. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299(18):2180-2187. doi:10.1001/jama.299.18.2180
- 17. Werner RM, Dudley RA. Medicare’s new hospital value-based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31(9):1932-1940. doi:10.1377/hlthaff.2011.0990
- 18. Kahn CN III, Ault T, Potetz L, Walke T, Chambers JH, Burch S. Assessing Medicare’s hospital pay-for-performance programs and whether they are achieving their goals. Health Aff (Millwood). 2015;34(8):1281-1288. doi:10.1377/hlthaff.2015.0158
- 19. Figueroa JF, Joynt KE, Zhou X, Orav EJ, Jha AK. Safety-net hospitals face more barriers yet use fewer strategies to reduce readmissions. Med Care. 2017;55(3):229-235. doi:10.1097/MLR.0000000000000687
- 20. Centers for Medicare & Medicaid Services Office of Minority Health. The CMS Equity Plan for Improving Quality in Medicare. September 2015. Accessed January 10, 2022. https://www.cms.gov/about-cms/agency-information/omh/omh_dwnld-cms_equityplanformedicare_090615.pdf
- 21. Neuhausen K, Katz MH. Patient satisfaction and safety-net hospitals: carrots, not sticks, are a better approach. Arch Intern Med. 2012;172(16):1202-1203. doi:10.1001/archinternmed.2012.3175
- 22. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay-for-performance and quality reporting affect health care disparities? Health Aff (Millwood). 2007;26(suppl 2):w405-w414. doi:10.1377/hlthaff.26.3.w405
- 23. Gilman M, Hockenberry JM, Adams EK, Milstein AS, Wilson IB, Becker ER. The financial effect of value-based purchasing and the hospital readmissions reduction program on safety-net hospitals in 2014: a cohort study. Ann Intern Med. 2015;163(6):427-436. doi:10.7326/M14-2813
- 24. Epstein AM. Pay for performance at the tipping point. N Engl J Med. 2007;356(5):515-517. doi:10.1056/NEJMe078002
- 25. James J. Pay-for-performance. Health Affairs. October 11, 2012. Accessed January 16, 2022. https://www.healthaffairs.org/do/10.1377/hpb20121011.90233/full/healthpolicybrief_78.pdf
- 26. Sankaran R, Sukul D, Nuliyalu U, et al. Changes in hospital safety following penalties in the US Hospital Acquired Condition Reduction Program: retrospective cohort study. BMJ. 2019;366:l4109. doi:10.1136/bmj.l4109
- 27. Sheetz KH, Dimick JB, Englesbe MJ, Ryan AM. Hospital-acquired condition reduction program is not associated with additional patient safety improvement. Health Aff (Millwood). 2019;38(11):1858-1865. doi:10.1377/hlthaff.2018.05504
- 28. Hsu HE, Wang R, Jentzsch MS, et al. Association between value-based incentive programs and catheter-associated urinary tract infection rates in the critical care setting. JAMA. 2019;321(5):509-511. doi:10.1001/jama.2018.18997
- 29. Wadhera RK, Yeh RW, Joynt Maddox KE. The hospital readmissions reduction program—time for a reboot. N Engl J Med. 2019;380(24):2289-2291. doi:10.1056/NEJMp1901225
- 30. Wadhera RK, Joynt Maddox KE, Wasfy JH, Haneuse S, Shen C, Yeh RW. Association of the hospital readmissions reduction program with mortality among Medicare beneficiaries hospitalized for heart failure, acute myocardial infarction, and pneumonia. JAMA. 2018;320(24):2542-2552. doi:10.1001/jama.2018.19232
- 31. Huckfeldt P, Escarce J, Sood N, Yang Z, Popescu I, Nuckols T. Thirty-day postdischarge mortality among Black and White patients 65 years and older in the Medicare hospital readmissions reduction program. JAMA Netw Open. 2019;2(3):e190634. doi:10.1001/jamanetworkopen.2019.0634
- 32. Huckfeldt P, Escarce J, Wilcock A, et al. HF mortality trends under Medicare readmissions reduction program at penalized and nonpenalized hospitals. J Am Coll Cardiol. 2018;72(20):2539-2540. doi:10.1016/j.jacc.2018.08.2174
- 33. Gupta A, Allen LA, Bhatt DL, et al. Association of the hospital readmissions reduction program implementation with readmission and mortality outcomes in heart failure. JAMA Cardiol. 2018;3(1):44-53. doi:10.1001/jamacardio.2017.4265
- 34. Wadhera RK, Joynt Maddox KE, Kazi DS, Shen C, Yeh RW. Hospital revisits within 30 days after discharge for medical conditions targeted by the Hospital Readmissions Reduction Program in the United States: national retrospective analysis. BMJ. 2019;366:l4563. doi:10.1136/bmj.l4563
- 35. Chaiyachati KH, Qi M, Werner RM. Changes to racial disparities in readmission rates after Medicare’s hospital readmissions reduction program within safety-net and non-safety-net hospitals. JAMA Netw Open. 2018;1(7):e184154. doi:10.1001/jamanetworkopen.2018.4154
- 36. Ibrahim AM, Dimick JB, Sinha SS, Hollingsworth JM, Nuliyalu U, Ryan AM. Association of coded severity with readmission reduction after the hospital readmissions reduction program. JAMA Intern Med. 2018;178(2):290-292. doi:10.1001/jamainternmed.2017.6148
- 37. Elliott MN, Cohea CW, Lehrman WG, et al. Accelerating improvement and narrowing gaps: trends in patients’ experiences with hospital care reflected in HCAHPS public reporting. Health Serv Res. 2015;50(6):1850-1867. doi:10.1111/1475-6773.12305
- 38. Elliott MN, Lehrman WG, Goldstein EH, et al. Hospital survey shows improvements in patient experience. Health Aff (Millwood). 2010;29(11):2061-2067. doi:10.1377/hlthaff.2009.0876
- 39. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Q. 2008;72(2):167-189. doi:10.1093/poq/nfn011
- 40. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opin Q. 2006;70(5):646-675. doi:10.1093/poq/nfl033