Abstract
Objective
To compare the impact of the introduction of two distinct sets of star ratings (quality of care and patient experience) on home health agency (HHA) selection.
Data Sources
We utilized 2014–2016 home health Outcome and Assessment Information Set (OASIS) assessments, as well as publicly reported data from the Home Health Compare website.
Data Collection/Extraction Methods
We identified a 5% random sample of admissions (187,498 admissions) for new Medicare Fee‐for‐Service home health users.
Study Design
This admission‐level assessment compared HHA selection before (July 2014–June 2015) and after (February–December 2016) star ratings were published. We utilized a conditional logit, discrete choice model, which accounted for all HHAs that each patient could have selected (i.e., the choice set) based on ZIP codes. Our explanatory variables of interest were the interactions between star ratings and time period (pre/post stars). We stratified our analyses by race, admission source, and Medicaid eligibility. We adjusted for HHA characteristics and distance between patients' homes and HHAs.
Principal Findings
The introduction of star ratings was associated with a 0.88‐percentage‐point increase in the probability of selecting a high quality of care HHA and a 0.81‐percentage‐point increase in the probability of selecting a high patient experience HHA. Black beneficiaries, Medicare–Medicaid dual‐eligible beneficiaries, and patients admitted from the community experienced larger increases in their likelihood of selecting highly rated agencies than white, nondual, and inpatient‐referred beneficiaries.
Conclusions
The introduction of quality of care and patient experience stars was associated with changes in HHA selection; however, the strength of these relationships was weaker than observed in other health care settings where a single star rating was reported. The introduction of star ratings may mitigate disparities in HHA selection. Our findings highlight the importance of reporting information about quality and satisfaction separately and of conducting research to understand the mechanisms driving HHA selection.
Keywords: home health care, patient experience, provider selection, public reporting, quality of care, star ratings
What is known on this topic
Although the Centers for Medicare and Medicaid Services (CMS) spends nearly $18 billion annually to care for 3.4 million home health beneficiaries, we know very little about how these beneficiaries select home health agencies (HHAs).
To facilitate HHA selection, CMS introduced quality of care and patient experience summary star ratings on the HH Compare website in July 2015 and January 2016.
Star ratings have been modestly effective in changing provider selection patterns in other health care settings. Home health is unique in that two distinct summary ratings are presented, rather than the single global rating that is published in other health care settings.
What this study adds
The introduction of quality of care and patient experience stars was associated with changes in HHA selection; however, the strength of these relationships was weaker than observed in other health care settings where a single star rating was reported.
The impact of both star ratings on selection highlights the importance of continuing to present information about quality and satisfaction separately.
The introduction of star ratings may mitigate racial and socioeconomic disparities observed in HHA selection.
1. INTRODUCTION
Each year, 3.4 million 1 Medicare beneficiaries utilize home health services and select a home health agency (HHA). The annual cost of home health care for these beneficiaries has more than doubled since 2000, to $17.7 billion in 2017. 1 The trend toward increased spending will likely continue in the coming years as alternative payment models incentivize the use of home health instead of institutional care. Additionally, the coronavirus disease 2019 (COVID‐19) pandemic has introduced heightened concerns regarding institutional care, which may further accelerate this trend.
Despite the growing importance of home health care, we know very little about how patients select HHAs. This lack of understanding is particularly concerning because the home health population is comprised of medically complex and vulnerable individuals: half are age 75 years or older, 2 31% have a diagnosis of Alzheimer's disease, 2 and all are homebound due to severe illness and/or functional limitation. Selecting a low‐quality HHA could result in lower satisfaction with their care, as well as increased risk for harmful and costly adverse outcomes such as hospitalizations, medication errors, falls, and functional decline. 3 To facilitate more informed HHA selection, the Centers for Medicare and Medicaid Services (CMS) began reporting information about the quality of HHAs on the Home Health Compare (HHC) website in 2003. 4
Public reports of quality measures, such as those on HHC, are designed to steer patients to higher quality providers and incentivize providers to improve care. 5 However, a study by Jung et al. 6 found that the publication of quality information on HHC had only a small impact on HHA market share. This finding is not surprising, given the evidence that patients, particularly older patients, have trouble interpreting multiple quality scores at once and respond better to summary scores of health care quality. 7 , 8 , 9 , 10 , 11 Thus, CMS introduced two distinct sets of summary star ratings on the HHC website: the quality of care star ratings in July 2015 and patient experience summary star rating in January 2016.
Evidence regarding the selection of nursing homes (NHs) and Medicare Advantage (MA) plans suggests that the introduction of a single star rating influences consumer selection. 12 , 13 , 14 In 2016, Werner et al. 13 found that the introduction of NH star ratings was associated with a 6% gain in market share for five‐star NHs. Although some insights can be gained from work examining star ratings in other settings, home health is unique for a number of reasons: first, care is delivered at home, eliminating patient and family travel to care; second, Medicare Fee‐for‐Service (FFS) beneficiaries do not participate in cost‐sharing for home health services; third, CMS reports two unique star ratings for each HHA and does not provide a global star rating. Evidence indicates that these ratings are weakly correlated (r = 0.13). 15 Thus, it is likely that patients and referring providers may view conflicting information about the quality of care (e.g., high performance on one star rating and low performance on the other). We do not know how patients and their families will respond when presented with potentially conflicting information about the quality of an HHA.
The objective of this study is to estimate and compare the impact of the introduction of quality of care and patient experience star ratings on HHA selection among Medicare beneficiaries. To achieve this objective, we compare HHA selection before and after the publication of star ratings, using a conditional logit discrete choice model, which is designed to account for the set of HHAs from which each home health user can reasonably select an agency, and control for both HHA characteristics and geographic/market differences that may impact HHA selection. Our approach assumes that patients, family members, friends, and/or referring providers are accessing and using the information on HHC and attempts to isolate the initial introduction of star ratings to understand their role in HHA selection.
2. METHODS
2.1. Data sources
We identified our study sample and obtained the majority of our variables from the home health Outcome and Assessment Information Set (OASIS). All Medicare‐certified HHAs are required to submit OASIS assessments for adult patients who receive skilled home health services and are covered by Medicare or Medicaid. OASIS includes detailed demographic, case mix, and outcomes information. 4 OASIS data were linked with data from the Medicare Master Beneficiary Summary File (MBSF) to identify enrollment status, location, and key demographic data.
We obtained publicly reported HHA star ratings, as well as HHA characteristics and agency‐reported service areas from the HHC website. We supplemented HHA characteristic data with information from the CMS Provider of Service (POS) file and the CMS rural ZIP code indicator.
2.2. Study design and population
We conducted admission‐level analyses to compare HHA selection between two time periods as follows: (1) before star ratings were released (July 2014–June 2015) and (2) after both star ratings were released (February–December 2016). We excluded the 6‐month period (July–December 2015) in which the quality of care star rating was available, but the patient experience star rating had not yet been released. We also incorporated a 1‐month lag into the postperiod because the patient experience star rating data were not released until January 28, 2016. Using OASIS start of care assessments, we identified all Medicare FFS admissions to HHAs during the study period for beneficiaries who had no home health use in the year prior to their admission date. We excluded repeat home health users because their selection process is likely different from that of new users. Similarly, we excluded MA users, as their selection process is complicated by cost‐sharing and provider networks that we cannot capture.
We extracted a 5% random sample (206,893 admissions) from 4,137,860 new FFS home health admissions in our study period. For each new admission, we constructed the beneficiary's HHA choice set: a list of all HHAs the beneficiary could have reasonably selected. An individual's choice set is comprised of all HHAs that fulfill at least one of three criteria as follows: (1) the HHA serves the patient's ZIP code as reported on HHC in that year, (2) the centroid of the ZIP code of the HHA's office is within 20 miles of the centroid of the beneficiary's home ZIP code, or (3) the HHA is one of the 20 closest HHAs to the beneficiary's ZIP code (using a 100% sample of HHAs). Given these criteria, 17,534 admissions (8.5%) were excluded from our final sample because the beneficiary selected an HHA outside of their choice set (7%) or had only one HHA in their choice set (1.5%). Additionally, we excluded 1,861 admissions (0.9%) due to missing data. Our final study sample included 187,498 home health admissions, 101,103 (53.9%) of which were in the preperiod.
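To make the three choice‐set criteria concrete, the construction can be sketched as follows. This is an illustrative sketch, not the authors' code: the function names, the toy ZIP‐centroid lookup, and the use of a haversine great‐circle distance between ZIP centroids are our assumptions.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def build_choice_set(patient_zip, zip_centroids, hhas, max_miles=20, n_closest=20):
    """Return the ids of HHAs meeting any of the three inclusion criteria.

    hhas: list of dicts with 'id', 'zip' (office ZIP), and 'served_zips'
          (set of ZIP codes the agency reports serving on HHC).
    zip_centroids: {zip_code: (lat, lon)} lookup of ZIP centroids.
    """
    plat, plon = zip_centroids[patient_zip]
    dist = {h["id"]: haversine_miles(plat, plon, *zip_centroids[h["zip"]])
            for h in hhas}
    chosen = set()
    for h in hhas:
        if patient_zip in h["served_zips"]:       # (1) agency serves patient's ZIP
            chosen.add(h["id"])
        elif dist[h["id"]] <= max_miles:          # (2) office ZIP centroid within 20 miles
            chosen.add(h["id"])
    # (3) the n closest HHAs, regardless of the first two criteria
    chosen.update(hid for hid, _ in sorted(dist.items(), key=lambda kv: kv[1])[:n_closest])
    return chosen
```

Note that criterion (3) guarantees a minimum choice‐set size even in sparse markets, which is why it is applied against the full sample of HHAs rather than only those meeting criteria (1) or (2).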
2.3. Outcome
Our outcome was a binary variable that indicated the selection of a particular HHA. The selected HHA was assigned the value of 1, while all other HHAs in each individual's choice‐set were assigned the value 0.
2.4. Explanatory variables
Our primary variables of interest were the interactions between star ratings and time period. Our time period indicator had two levels as follows: (1) before star ratings were released and (2) after patient experience and quality of care stars were released.
We included separate indicators for patient experience stars and quality of care stars. The two star ratings are each represented by a single number that summarizes HHA performance on several different component measures. The quality of care star is a composite of six outcome and three process of care measures, while the patient experience star is a composite of the results of the Home Health Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. To ensure we were measuring the impact of the introduction of the star ratings, rather than the impact of changes in star ratings over time, we utilized the publicly reported January 2016 star ratings for the entire study period.
Each HHA received between one and five stars for both the quality of care and patient experience star ratings, with five indicating the best performance. Patient experience stars are reported as integers, while quality of care stars are reported in half‐star intervals. To capture meaningful differences, we collapsed both sets of star ratings into three levels as follows: low (1–2.5), middle (3–3.5), and high (4–5) stars. The distribution of star ratings across selected HHAs in our sample is displayed in Table 1. Because 7% of admissions occurred at HHAs without patient experience star ratings (due to sample size, unreliable data, or recent Medicare certification) and the absence of information could inform decision making, we treated “unrated” as a unique category for each star rating.
TABLE 1.
Sources for and distribution of HHA characteristics across all selected HHAs
| Variable | Source | Prestar ratings %/Mean (SD) | Poststar ratings %/Mean (SD) |
|---|---|---|---|
| Star ratings | | | |
| Quality stars | HHC | | |
| Low (≤2.5) | | 12.47% | 11.86% |
| Middle (3–3.5) | | 53.58% | 53.30% |
| High (≥4) | | 33.39% | 34.16% |
| Unrated | | 0.55% | 0.68% |
| Patient experience stars | HHC | | |
| Low (≤2.5) | | 6.12% | 5.10% |
| Middle (3–3.5) | | 15.89% | 15.52% |
| High (≥4) | | 71.29% | 72.70% |
| Unrated | | 6.70% | 7.18% |
| Facility type/structure | | | |
| Agency type | POS file | | |
| Nonprofit | | 40.10% | 38.85% |
| Proprietary | | 56.13% | 57.90% |
| Government | | 3.46% | 3.25% |
| Number of admissions | OASIS | 4061.95 (10,006.32) | 3331.72 (6078.63) |
| Years certified | HHC | 1.50 (2.08) | 1.46 (2.01) |
| Services offered | HHC | | |
| Physical therapy | | 99.91% | 99.90% |
| Occupational therapy | | 98.30% | 98.37% |
| Speech language pathology | | 96.68% | 96.53% |
| Medical‐social services | | 95.21% | 95.16% |
| Home health aides | | 98.79% | 98.64% |
| Change of ownership past year | POS file | 1.84% | 1.91% |
| Agency owns multiple subunits | POS file | 41.44% | 41.04% |
| Medicare patient mix | | | |
| Black (%) | MBSF | 12.54 (14.74) | 12.54 (14.76) |
| Hispanic (%) | MBSF | 6.10 (12.34) | 5.84 (11.92) |
| Dually enrolled in Medicaid (%) | MBSF | 29.00 (16.46) | 28.87 (16.25) |
| Average age (years) | MBSF | 76.89 (3.13) | 76.89 (3.12) |
| Medicare Advantage (%) | MBSF | 19.89 (16.59) | 19.57 (16.51) |
| Male (%) | MBSF | 38.38 (6.38) | 38.43 (6.76) |
| Average length of stay (days) | OASIS | 54.93 (30.85) | 55.07 (31.18) |
| Average Elixhauser score a | | 1.09 (0.23) | 1.09 (0.24) |
| Average function score b | OASIS | 3.16 (0.58) | 3.17 (0.58) |
| Inpatient discharge (within 2 weeks prior to admission) | OASIS | | |
| Acute care (%) | | 44.63 (17.17) | 44.53 (17.30) |
| Skilled nursing facility (%) | | 16.71 (11.81) | 16.90 (12.17) |
| Other inpatient (%) | | 8.99 (6.92) | 8.97 (6.73) |
| On oxygen (%) | OASIS | 14.44 (7.74) | 14.56 (7.87) |
| On ventilator (%) | OASIS | 0.10 (0.31) | 0.10 (0.26) |
| Lives in congregate living (%) | OASIS | 10.25 (11.84) | 10.41 (12.09) |
| Lives in rural county (%) | Rural indicator | 15.40 (36.10) | 15.73 (36.40) |
| N (Admissions) | | 101,103 | 86,395 |
| N (HHAs) | | 8182 | 7382 |
Note: Statistics summarize the HHA characteristics of only the selected HHA for all admissions included in the conditional logit model.
Abbreviations: HHA, home health agency; HHC, Home Health Compare; MBSF, CMS Master Beneficiary Summary file; OASIS, outcome and assessment information set; POS, provider of service file; SD, standard deviation.
a Elixhauser comorbidity score calculated using all International Classification of Diseases (ICD)‐9/ICD‐10 codes listed on the OASIS admission assessment.
b Function score is an 8‐point scale, which combines performance on the eight activities of daily living questions included on the OASIS admission assessment. Lower scores indicate lower dependency.
2.5. Covariates
We adjusted our analyses for HHA‐level covariates. We derived HHA‐level averages for case mix and patient demographic variables using data from OASIS admission assessments, the MBSF, and the CMS rural indicator. Data related to agency structure were obtained from HHC, as well as the POS file. A list of all HHA‐level variables included in our model, along with their data sources and distributions, is available in Table 1. At the beneficiary level, we included the geodesic distance from the HHA's address to the centroid of the beneficiary's ZIP code of residence. 16
2.6. Analysis
We utilized a conditional logit discrete choice model based on McFadden's 17 , 18 random utility model to test for changes in HHA selection, assuming that each individual selects from the full choice set of HHAs serving his/her ZIP code. We model patient utility as a function of star ratings and HHA characteristics. Expanding on the approach of Werner et al. 13 to modeling NH selection, we modeled HHA selection as a function of both star ratings using the following model:
$$U_{ij} = \sum_{k}\beta_k Q_{jk} + \sum_{k}\gamma_k P_{jk} + \sum_{k}\theta_k\left(\mathrm{Post}_t \times Q_{jk}\right) + \sum_{k}\lambda_k\left(\mathrm{Post}_t \times P_{jk}\right) + \delta' X_{ij} + \varepsilon_{ij}$$

Here, $U_{ij}$ is the utility of patient $i$ selecting HHA $j$. The time period is represented by $\mathrm{Post}_t$, with before star ratings set as baseline; because the main effect of the time period indicator is the same for each HHA in an individual's choice set, it is not estimated in the model. The indicators $Q_{jk}$ capture the quality of care star rating categories $k$ (middle, high, and unrated), with low stars set as baseline, and $P_{jk}$ is a similar set of indicators for the patient experience star ratings. Our primary variables of interest are the interaction between time period and quality of care stars ($\theta_k$) and the interaction between time period and patient experience stars ($\lambda_k$). $X_{ij}$ represents a vector of HHA‐level covariates and distance, and the errors $\varepsilon_{ij}$ are clustered on patient state. We converted the log‐odds estimates to odds ratios; our coefficients of interest, $\theta_k$ and $\lambda_k$, are interpreted as the change in the odds of selecting an HHA of a certain quality of care or patient experience star rating (compared to low stars) after star ratings were released.
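Under this specification, the probability that patient i selects HHA j takes the standard conditional logit (softmax) form, P(i selects j) = exp(V_ij) / Σ_k exp(V_ik), where V_ij is the systematic part of utility and the sum runs over i's choice set. A minimal numerical sketch of that mapping (illustrative only; the study's estimation was run in Stata, and the function names here are our own):

```python
import math

def utility(x, beta):
    """Linear systematic utility V_ij = beta . x_ij for one alternative."""
    return sum(b * xi for b, xi in zip(beta, x))

def choice_probabilities(utilities):
    """Conditional logit choice probabilities over one individual's choice set:
    P(j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)                       # subtract the max for numerical stability
    expv = [math.exp(v - m) for v in utilities]
    total = sum(expv)
    return [e / total for e in expv]
```

Because the probabilities depend only on utility differences within the choice set, any term that is constant across a patient's alternatives (such as the main effect of the time period) cancels out, which is why it drops from the model.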
To further ease interpretation, we generated predictions (probability of a positive outcome) from our choice model. For every patient, we summed the predicted probabilities across all HHAs in their choice set separately for each star category (low, middle, high, and unrated) and divided this sum by the total probability across all HHAs in their choice set, to determine the individual's probability of selecting a high, middle, low, or unrated quality or patient experience HHA. We calculated the average probability for each rating across all individuals in the pre‐ and postperiod to derive the adjusted probability of selecting each type of HHA in each time period.
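The aggregation step described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the `patients` input and its (probability, category) pair layout are assumed data structures.

```python
def adjusted_probability(patients, category):
    """Average, across patients, of the share of predicted choice probability
    falling on HHAs in a given star category.

    patients: list of choice sets; each choice set is a list of
    (predicted_probability, star_category) pairs whose probabilities
    sum to ~1 within a patient.
    """
    shares = []
    for choice_set in patients:
        total = sum(p for p, _ in choice_set)               # normalizing total
        in_category = sum(p for p, c in choice_set if c == category)
        shares.append(in_category / total)
    return sum(shares) / len(shares)
```

Computing this average separately in the pre‐ and postperiods, and differencing, yields the adjusted percentage‐point changes reported in the results.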
To better understand the impact of the star ratings, we conducted a simulation using the predictions from our choice model. We identified the 10 closest HHAs to each beneficiary and simulated four probabilities: the probability of selection if both star ratings were low, if quality of care was high and patient experience was low, if patient experience was high and quality of care was low, and if both star ratings were high. Using these predictions, we compared the probability of selecting an HHA rated low on both stars to the probability of selecting HHAs with one or both high ratings. These differences represent the marginal effect of being a highly rated instead of a poorly rated HHA.
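The marginal effect in this simulation can be sketched in a self-contained way: hold every alternative's covariates fixed, counterfactually assign one HHA a high instead of a low rating (a shift in its utility equal to the estimated log-odds increment), and difference the resulting selection probabilities. This is an illustrative sketch under those assumptions, not the authors' code.

```python
import math

def _probs(utils):
    """Conditional logit probabilities over one choice set."""
    m = max(utils)
    expv = [math.exp(u - m) for u in utils]
    total = sum(expv)
    return [e / total for e in expv]

def marginal_effect(base_utils, j, high_bonus):
    """Change in HHA j's selection probability if it received high instead of
    low star ratings, holding all other alternatives fixed.

    high_bonus: the log-odds increment associated with high (vs. low) stars.
    """
    p_low = _probs(base_utils)[j]
    bumped = base_utils[:]
    bumped[j] += high_bonus
    p_high = _probs(bumped)[j]
    return p_high - p_low
```

Running this for each of a beneficiary's 10 closest HHAs, in both periods, produces the curves summarized in Figure 1.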
2.7. Sensitivity and stratified analyses
We conducted several sensitivity analyses to ensure the robustness of our results. Given the small (5%) size of our random sample, we ran our analyses using a second 5% random sample. To account for changes in star ratings over time, we tested our analyses using an average star rating, rather than the first reported star rating. Additionally, our model excludes the period (July 2015–January 2016) during which the quality of care star ratings were available, but the patient experience star ratings had not yet been published. Thus, we conducted a sensitivity analysis using three time periods as follows: before star ratings (July 2014–June 2015), only quality stars available (July–December 2015), and both star ratings available (January–December 2016). Finally, we specified our choice model excluding individuals living in a congregate setting, such as assisted living (as indicated on their OASIS assessment), as it is possible that their selection may be influenced by relationships between their living facility and specific HHAs.
To assess the impact of star ratings across different groups of beneficiaries, we stratified our analyses by Medicaid eligibility status (i.e., dually enrolled in Medicare and Medicaid vs. not eligible for Medicaid), race and ethnicity (i.e., white vs. black vs. Hispanic), admission source (inpatient stay within 2 weeks of admission vs. admitted from the community [no inpatient stay]), and cognitive function (low [requires assistance] vs. high [does not require assistance]). In cases where our sample size was small, we obtained additional randomly selected admissions to ensure a sample size of over 135,000 admissions for each stratification. Analyses were conducted using Stata 14.0 (StataCorp).
3. RESULTS
Our final analysis included 187,498 admissions for new HHA users (101,103 in the preperiod and 86,395 in the postperiod). Table 1 displays the distribution of HHA characteristics for all admissions included in our study period.
The results from our primary conditional logit model assessing the relationship between the introduction of star ratings and HHA selection are available in Table 2. The odds ratios reflect the odds of admission to an HHA at each quality of care/patient experience level, compared to the low quality of care/patient experience level, in the prestar period, and the marginal change in these odds after the star ratings were introduced. The statistically significant interaction odds ratios greater than one in the postperiod indicate that the introduction of quality of care and patient experience stars was associated with increases in the odds of selecting middle, high, and unrated HHAs.
TABLE 2.
Relationship between the introduction of home health agency star ratings and agency selection, results from conditional logit discrete choice model
| | Quality of care | Patient experience |
|---|---|---|
| | Choice model | Choice model |
| | Odds ratio (standard error) | Odds ratio (standard error) |
| Middle (3–3.5) star | 1.152 (0.015)** | 1.294 (0.034)** |
| High (≥4) star | 1.172 (0.014)** | 1.481 (0.027)** |
| Unrated | 0.227 (0.020)** | 0.431 (0.010)** |
| Poststars × middle star | 1.058 (0.020)** | 1.124 (0.038)** |
| Poststars × high star | 1.088 (0.023)** | 1.162 (0.037)** |
| Poststars × unrated | 1.251 (0.083)** | 1.196 (0.044)** |
| | Adjusted probability | Adjusted probability |
|---|---|---|
| Preperiod low star | 12.67% | 6.05% |
| Postperiod low star | 11.99% | 5.08% |
| Difference low star | −0.68 | −0.97 |
| Preperiod high star | 33.23% | 71.13% |
| Postperiod high star | 34.11% | 71.94% |
| Difference high star | 0.88 | 0.81 |
Note: N = 187,498 home health agency (HHA) admissions. Although results are presented separately by star rating, they represent the results of a single conditional logit model that includes both sets of star ratings. Odds ratios represent the exponentiated coefficients from the conditional logit model, first in the preperiod, prior to the introduction of star ratings (July 2014–June 2015) and then for the interaction between the postperiod (February–December 2016) and the star ratings. Adjusted probabilities are calculated based on the predictions from the conditional logit model. Model is controlled for all HHA variables displayed in Table 1 and distance between the beneficiary's home and the HHA office. Errors are clustered on patient state. **p < 0.01.
Table 2 also displays the adjusted probability of selecting a low‐ or high‐rated agency during the pre‐ and postperiod, along with the change in probability between the two time periods. The publication of star ratings was associated with a 0.68‐percentage‐point decrease in the probability of selecting a low quality of care HHA and a 0.88‐percentage‐point increase in the probability of selecting a high‐quality HHA. Similarly, publication of ratings was associated with a 0.97‐percentage‐point decrease in the probability of selecting a low‐patient‐experience HHA and a 0.81‐percentage‐point increase in the probability of selecting a high‐patient‐experience HHA.
The results of the simulation are displayed in Figure 1. In this graph, we model the probability of selecting each of the 10 closest HHAs if all characteristics stayed the same except the star ratings. The lines display the difference in the probability of selection between an agency receiving a high rating on one or both stars and an agency receiving low ratings on both, that is, the marginal effect of high (compared to low) ratings, in both the pre‐ and postperiods. The marginal effect of having a high rating was larger in the postperiod than in the preperiod for both star ratings, with the closest agencies experiencing approximately a 0.5‐percentage‐point larger boost in the marginal effect of a high quality of care rating, and a 1.0‐percentage‐point larger boost for a high patient experience star rating, in the postperiod. High rankings on both stars (compared to low on both) were associated with a 2‐percentage‐point increase in the marginal effect on selection in the postperiod.
FIGURE 1.

Results of simulation modeling the marginal effect (ME) of receiving a high versus low star rating for the 10 closest home health agencies (HHAs) to each beneficiary. For the 10 closest HHAs to each beneficiary, we generated predictions in the pre‐ and postperiod based on the results of the conditional logit model. The predictions assume that all characteristics stayed the same except the quality of care or patient experience star rating. The top left box displays the marginal effect of receiving a high quality of care (and low patient experience) rating, compared to low ratings on both, during both time periods. The top right box displays the marginal effect of receiving a high patient experience (and low quality of care) rating, compared to low ratings on both. The bottom box displays the marginal effect of receiving high ratings on both stars.
The results of our sensitivity analyses are displayed in Table 3. Our findings regarding high, middle, and low quality of care stars were robust across sensitivity analyses. When we utilized a 1‐year average, rather than the first reported star rating, the change in the odds of selecting an unrated HHA in the postperiod was not statistically significant. When we conducted our analysis across three time periods (before stars, only quality stars available, and both stars available), we observed a relationship between quality stars and selection in both postperiods; however, the relationship between patient experience stars and selection was only present when both star ratings were available.
TABLE 3.
Results of sensitivity and stratified analyses
| | Alternate 5% sample a | Average star rating a | Three time periods a | | Congregate living excluded a | Not Medicaid eligible a | Dually enrolled in Medicaid a | White a | Black a | Hispanic a |
|---|---|---|---|---|---|---|---|---|---|---|
| | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) | OR (SE) |
| Quality of care stars | | | | | | | | | | |
| Middle (3–3.5) star | 1.154 (0.013)** | 1.187 (0.012)** | 1.127 (0.014)** | | 1.158 (0.014)** | 1.137 (0.015)** | 1.144 (0.012)** | 1.159 (0.015)** | 1.168 (0.014)** | 1.053 (0.012)** |
| High (≥4) star | 1.171 (0.014)** | 1.154 (0.013)** | 1.139 (0.015)** | | 1.180 (0.015)** | 1.150 (0.017)** | 1.160 (0.013)** | 1.178 (0.017)** | 1.162 (0.016)** | 1.037 (0.013)* |
| Unrated | 0.227 (0.010)** | 0.196 (0.011)** | 0.233 (0.012)** | | 0.206 (0.010)** | 0.170 (0.011)** | 0.292 (0.009)** | 0.181 (0.012)** | 0.283 (0.010)** | 0.247 (0.008)** |
| | | | Only quality | Both stars available | | | | | | |
| Poststars × middle star | 1.059 (0.017)** | 1.081 (0.016)** | 1.044 (0.021)** | 1.058 (0.022)** | 1.048 (0.018)** | 1.062 (0.021)** | 1.019 (0.015) | 1.048 (0.020)** | 1.103 (0.018) | 1.016 (0.016) |
| Poststars × high star | 1.100 (0.019)** | 1.117 (0.018)** | 1.063 (0.023)** | 1.106 (0.025)** | 1.079 (0.020)** | 1.087 (0.023)** | 1.054 (0.017)* | 1.070 (0.023)** | 1.082 (0.021)** | 1.018 (0.017) |
| Poststars × unrated | 1.162 (0.075)** | 0.961 (0.083) | 1.327 (0.091)** | 1.232 (0.090)** | 1.354 (0.093)** | 1.478 (0.130)** | 1.040 (0.048) | 1.045 (0.125)** | 1.060 (0.156) | 1.321 (0.157)** |
| Patient experience stars | | | | | | | | | | |
| Middle (3–3.5) star | 1.291 (0.022)** | 1.27 (0.015)** | 1.256 (0.026)** | | 1.350 (0.025)** | 1.469 (0.031)** | 0.99 (0.015) | 1.435 (0.031)* | 1.050 (0.018)** | 1.035 (0.016)* |
| High (≥4) star | 1.473 (0.024)** | 1.35 (0.017)** | 1.471 (0.028)** | | 1.543 (0.028)** | 1.697 (0.035)** | 1.084 (0.016)** | 1.660 (0.034)** | 1.137 (0.026)** | 1.033 (0.014)* |
| Unrated | 0.432 (0.009)** | 0.34 (0.006)** | 0.431 (0.010)** | | 0.441 (0.010)** | 0.442 (0.011)** | 0.372 (0.0068) | 0.445 (0.011)** | 1.032 (0.030) | 0.417 (0.007)** |
| | | | Only quality | Both stars available | | | | | | |
| Poststars × middle star | 1.105 (0.028)** | 1.08 (0.019)** | 1.021 (0.032) | 1.138 (0.041)** | 1.146 (0.032)** | 1.114 (0.034)** | 1.195 (0.027)** | 1.132 (0.035)** | 1.089 (0.027)** | 1.169 (0.026)** |
| Poststars × high star | 1.151 (0.027)** | 1.05 (0.009)** | 1.005 (0.029) | 1.127 (0.038)** | 1.194 (0.031)** | 1.156 (0.035)** | 1.174 (0.025)** | 1.174 (0.034)** | 1.137 (0.026)** | 1.202 (0.025)** |
| Poststars × unrated | 1.181 (0.035)** | 1.04 (0.029) | 0.977 (0.035) | 1.140 (0.045) | 1.206 (0.038)** | 1.262 (0.047)** | 1.154 (0.029) | 1.126 (0.034)** | 1.032 (0.030) | 1.053 (0.024) |
| N | 185,966 | 187,498 | 216,945 | | 159,510 | 141,528 | 153,662 | 150,114 | 145,037 | 159,948 |
| | Inpatient care in 2 weeks prior a | Admitted from community a (no inpatient care in 2 weeks prior) | Low cognitive function a | High cognitive function a |
|---|---|---|---|---|
| | OR (SE) | OR (SE) | OR (SE) | OR (SE) |
| Quality of care stars | | | | |
| Middle (3–3.5) star | 1.161 (0.016)** | 1.131 (0.013)** | 1.111 (0.015)** | 1.155 (0.014)** |
| High (≥4) star | 1.173 (0.018)** | 1.121 (0.015)** | 1.118 (0.017)** | 1.165 (0.015)** |
| Unrated | 0.180 (0.012)** | 0.232 (0.009)** | 0.244 (0.013)** | 0.199 (0.010)** |
| Poststars × middle star | 1.036 (0.021) | 1.054 (0.018)* | 1.143 (0.023)** | 1.065 (0.018)** |
| Poststars × high star | 1.074 (0.023)* | 1.359 (0.076)** | 1.134 (0.024)** | 1.100 (0.019)** |
| Poststars × unrated | 1.123 (0.115)* | 1.140 (0.021)** | 1.211 (0.092)* | 1.323 (0.090)** |
| Patient experience stars | | | | |
| Middle (3–3.5) star | 1.363 (0.028)** | 1.134 (0.021)** | 1.287 (0.026)** | 1.273 (0.023)** |
| High (≥4) star | 1.513 (0.030)** | 1.327 (0.023)** | 1.562 (0.030)** | 1.467 (0.025)** |
| Unrated | 0.371 (0.010)** | 0.468 (0.009)** | 0.434 (0.011)** | 0.430 (0.008)** |
| Poststars × middle star | 1.104 (0.033)** | 1.154 (0.032)** | 1.179 (0.035)** | 1.178 (0.031)** |
| Poststars × high star | 1.147 (0.032)** | 1.207 (0.031)** | 1.196 (0.033)** | 1.190 (0.029)** |
| Poststars × unrated | 1.272 (0.048)** | 1.137 (0.034)** | 1.216 (0.043)** | 1.217 (0.037)** |
| N | 135,992 | 161,202 | 145,850 | 198,032 |
Note: For each model, odds ratios represent the exponentiated coefficients from the conditional logit model, first in the preperiod, prior to the introduction of star ratings, and then for the interaction between the postperiod and the star ratings.
Abbreviations: OR, odds ratio; SE, standard error.
a Sensitivity analyses include the following: an analysis using an alternative 5% random sample; the use of an average star rating, rather than the first reported star rating; an analysis accounting for the 6 months during which only quality of care star ratings were available, using three time periods (prestars [July 2014–June 2015], only quality stars available [July–December 2015], and both stars available [February–December 2016]); and a model excluding individuals living in a group setting. All population‐specific analyses were run on stratified samples extracted from our original 5% sample of new home health admissions during the study period. In cases where our sample size was small, we obtained additional randomly selected admissions to ensure a sample size of over 135,000 admissions for each stratification. *p < 0.05, **p < 0.01.
The odds ratios from our stratified conditional logit models are also available in Table 3. Compared to their inpatient, nondual, white, and high cognitive function counterparts, we observed larger increases in the odds of selecting a highly rated HHA on either dimension for patients admitted from the community, black beneficiaries, dual‐eligible beneficiaries, and patients with lower cognitive function. Among Hispanic beneficiaries, we observed a statistically significant relationship between the introduction of patient experience star ratings and selection, but no relationship for quality of care star ratings. The stratified adjusted probabilities of selecting HHAs of each star rating during the pre‐ and postperiods are displayed in Table 4. Dual beneficiaries experienced larger decreases in their likelihood of selecting low‐rated quality and patient experience HHAs than their nondual counterparts. Duals also experienced a larger increase in the likelihood of selecting a high quality of care HHA (a 1.15 vs. 0.65 percentage point change). Similarly, black beneficiaries experienced a larger decrease in their likelihood of selecting low‐rated HHAs and larger increases in their likelihood of selecting HHAs with high quality of care (1.40 vs. 0.56 percentage point change) and high patient experience (2.05 vs. 0.54 percentage point change) star ratings than white beneficiaries. Patients admitted from the community experienced larger changes in selection, particularly regarding the likelihood of selecting a high‐patient‐experience HHA (1.97 vs. 0.45 percentage point change).
TABLE 4.
Stratified adjusted probabilities of selecting a low‐ or high‐rated home health agency
| Not Medicaid eligible | Dually enrolled in Medicaid | White | Black | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Prestars | Poststars | Difference | Prestars | Poststars | Difference | Prestars | Poststars | Difference | Prestars | Poststars | Difference | |
| Quality of care | ||||||||||||
| Low (≤2.5) star | 11.61% | 11.04% | −0.57 | 16.09% | 15.39% | −0.70 | 11.68% | 11.19% | −0.49 | 14.53% | 13.62% | −0.91 |
| High (≥4) star | 34.11% | 34.76% | 0.65 | 30.56% | 31.71% | 1.15 | 33.86% | 34.42% | 0.56 | 29.16% | 30.56% | 1.40 |
| Patient experience | ||||||||||||
| Low star | 5.09% | 4.26% | −0.83 | 8.99% | 7.52% | −1.47 | 4.80% | 4.01% | −0.79 | 9.85% | 8.74% | −1.11 |
| High star | 71.77% | 74.32% | 2.55 | 63.60% | 64.55% | 0.95 | 74.74% | 75.28% | 0.54 | 62.35% | 64.40% | 2.05 |
| Inpatient care in 2 weeks prior | Admitted from community (no inpatient care in 2 weeks prior) | Low cognitive function | High cognitive function | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Prestars | Poststars | Difference | Prestars | Poststars | Difference | Prestars | Poststars | Difference | Prestars | Poststars | Difference | |
| Quality of care | ||||||||||||
| Low (≤2.5) star | 11.29% | 10.93% | −0.36 | 16.08% | 14.97% | −1.11 | 13.17% | 12.13% | −1.04 | 12.65% | 11.87% | −0.78 |
| High (≥4) star | 34.10% | 35.00% | 0.91 | 31.30% | 31.95% | 0.65 | 33.64% | 33.71% | 0.07 | 33.08% | 34.04% | 0.96 |
| Patient experience | ||||||||||||
| Low star | 5.72% | 4.80% | −0.92 | 6.67% | 5.38% | −1.29 | 6.43% | 5.20% | −1.23 | 5.99% | 4.91% | −1.08 |
| High star | 73.36% | 73.81% | 0.45 | 65.78% | 67.75% | 1.97 | 69.95% | 70.93% | 0.98 | 71.37% | 72.14% | 0.77 |
Note: Adjusted probabilities are calculated based on the predictions from the stratified conditional logit models.
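As an illustration of how the adjusted probabilities in Table 4 relate to the odds ratios in Table 3, the following sketch converts a postperiod coefficient shift into a percentage point change in the probability of selecting a high‐rated agency for a single toy choice set. The choice set, distances, and baseline coefficient are assumptions for illustration; only the odds ratio of 1.074 (poststars × high quality star) is taken from the reported results:

```python
import numpy as np

# Toy choice set of four hypothetical HHAs; two are high-rated.
high_star = np.array([1, 1, 0, 0])
distance = np.array([3.0, 8.0, 2.0, 5.0])  # miles to each agency (assumed)

def prob_high(beta_star):
    """Probability mass on high-star agencies for this choice set."""
    u = beta_star * high_star - 0.1 * distance  # -0.1 on distance is assumed
    p = np.exp(u) / np.exp(u).sum()             # conditional logit probabilities
    return p[high_star == 1].sum()

beta_pre = 0.30                       # assumed prestars coefficient
beta_post = beta_pre + np.log(1.074)  # add log(OR) for the postperiod shift
diff_pp = 100 * (prob_high(beta_post) - prob_high(beta_pre))
print(round(diff_pp, 2))  # percentage point change for this one choice set
```

Averaging such predicted probabilities across all observed choice sets, pre and post, yields adjusted probabilities of the kind displayed in Table 4.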
4. DISCUSSION
We observed small but statistically significant relationships between the publication of both quality of care and patient experience star ratings and HHA selection. The introduction of stars was associated with more patients selecting higher‐quality and higher‐patient‐experience HHAs. Although the magnitude of the differences we observed was relatively small, rough estimates indicate that in the year following the publication of star ratings, approximately 33,000 more new home health users were served by high‐quality HHAs than would have been the case in the year prior, and 30,375 more were served by HHAs with high patient experience ratings.
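These rough estimates amount to multiplying the percentage point changes by the annual number of new home health users. In the sketch below, the user count is our assumption, chosen only to be consistent with the figures above; it is not a number taken directly from the paper:

```python
# Back-of-envelope check of the "extra patients" estimates. n_new_users is an
# assumed annual count of new Medicare FFS home health users (an illustration,
# not a reported figure).
n_new_users = 3_750_000
extra_high_quality = n_new_users * 0.0088     # +0.88 percentage points
extra_high_experience = n_new_users * 0.0081  # +0.81 percentage points
print(round(extra_high_quality), round(extra_high_experience))
```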
Our findings indicate that the publication of summary star ratings on the HHC website had an impact on selection. However, the magnitude of this impact was smaller than observed for NHs, 13 , 14 even though evidence indicates a lack of awareness of both the NH and HH Compare websites. 19 , 20 This may be because our analysis was limited to the first year after the star ratings were introduced; their impact on HHA selection may change over time. Similarly, Jung et al. 6 observed that the change in market share attributable to publicly reported (nonsummary) home health quality information grew over time, with a smaller magnitude in the first year. It is important to note, however, that even when we take time into consideration, the initial (first year) impact of the HHA star ratings was smaller than that of the NH star ratings.
We are unable to observe the HHA selection process directly and cannot fully explain the mechanism behind the differences compared to selection in NHs. Available evidence suggests that HHA selection is influenced by referring providers and their formal and informal referral networks. 19 While discharge planners in inpatient settings do have access to the HHA star ratings, they also have relationships with and pre‐existing beliefs about HHAs; thus, it is unclear how they would use the star ratings to guide their patients. In our analysis, we observed a larger impact of the HHA star ratings among patients who did not have a prior inpatient stay, suggesting that patients without another reliable source of information about the quality of HHAs may be more responsive to star ratings.
Another key factor that differentiates the home health stars from the NH stars is that two summary star ratings are reported. Information about satisfaction is not available on NH Compare, nor is it incorporated in the star ratings. Several researchers have highlighted the importance of including this information on NH Compare because NH residents and their families value it and cannot make a fully informed decision without it. 20 , 21 , 22 In our study, the magnitude of the impact of quality of care and patient experience stars was similar. However, we found that patients who were admitted from the community, and are thus less likely to rely on a discharge planner to select an HHA, had a larger response to the patient experience star ratings than to the quality of care ratings. This finding aligns with literature suggesting that patients and their families place a high value on topics such as provider communication and respect. 23 , 24 , 25 Meanwhile, referring providers may place higher value on quality of care. Although we cannot determine who viewed and/or utilized the star ratings during the selection process, if at all, our findings suggest that some decision makers prioritize quality while others prioritize patient experience. Additionally, the findings from our sensitivity analysis using two postperiods (only quality available and both stars available) indicate that the relationship between patient experience stars and selection was not present until patient experience stars became available on HH Compare, while the relationship for quality stars was present in both postperiods after they became available. This finding strengthens the possibility that patients and providers are utilizing and differentiating between the two sets of star ratings. Our results highlight the importance of providing comprehensive information by reporting both star ratings separately, to meet the needs of all stakeholders.
When we stratified our analyses by race and Medicaid eligibility, disparities among black and dual‐eligible beneficiaries were attenuated by the introduction of star ratings. Both dual and black beneficiaries experienced larger decreases in their likelihood of selecting low‐rated HHAs and larger increases in their likelihood of selecting high‐rated HHAs than their nondual and white counterparts, respectively. This may be because non‐summary quality information was previously available on HH Compare, and less vulnerable individuals and their providers were more likely to utilize it and/or it aligned with what they already suspected about quality. Thus, the initial introduction of star ratings provided information that was more novel for vulnerable patients and their providers, bolstering the interpretation that public reporting may provide increased access to information and serve as an equalizer for these populations. 26 , 27 However, a 2015 NH study indicates that dual‐eligible beneficiaries are being served by more highly rated NHs over time, but that this shift is likely due to NHs improving their rating, rather than a change in provider selection among the dual‐eligible population. 28 A longitudinal analysis of disparities in HHA selection is an important next step to understanding our findings.
We found that the publication of star ratings was associated with an increased likelihood of selecting an unrated HHA; however, this finding was not robust in our sensitivity analyses. It is important to note that 17.53% of HHAs were missing a quality star rating and 49.42% were missing a patient experience star rating; thus, the overall rate of selecting an HHA with missing data (0.62% and 7.09%, respectively) was quite low relative to the distribution of stars across HHAs, indicating that patients generally opted not to select unrated HHAs.
Our findings should be considered in light of several limitations. Both sets of star ratings were introduced nationally; thus, we were unable to include a control group that was not exposed to star ratings in our analysis. We also excluded 7% of eligible admissions because the patients selected HHAs outside of their choice sets. These cases may represent patients who moved prior to their home health admission (such that their current ZIP code no longer matched their ZIP code in the January enrollment file) or were traveling; however, it is impossible to identify the true reason for a selection outside of the choice set. Thus, we cannot draw firm conclusions regarding the implications for HHA selection.
Our study period is limited to the year after star ratings were published on HHC, and it may be several years before their full impact is apparent. Additionally, the Discharge Planning Rule of 2019 requires all hospitals to provide patients with information about the quality of postacute care providers at discharge. This change could have a significant impact on the HHA referral and selection process and warrants further examination.
To account for changes in star ratings over time, we utilized an intent‐to‐treat analysis, applying the first reported star rating for the duration of the study period. In a given quarter, between 9.75% and 20.84% of the 11,319 HHAs in our analysis had a quality of care star rating category that differed from their January 2016 category, and between 9.20% and 12.42% had a patient experience category that differed from their January 2016 category. Although we cannot fully account for these changes, our primary results were consistent when we used an average star rating rather than the first reported star rating.
There are many aspects of the HHA selection process that we are not able to observe or address in our model (e.g., provider referral networks, family recommendations). It is important to study the impact of these unobservable factors on the selection process. For example, we found that individuals with lower cognitive function were more likely to respond to the star ratings than their higher‐functioning counterparts. Although future research into the mechanisms behind and variation in response to star ratings is important, our primary interest in this article was measuring the initial response to star ratings. Finally, our results may not be generalizable to Medicare Advantage or non‐Medicare beneficiaries, who need to consider cost‐sharing and provider networks in their HHA selection process.
Despite these limitations, this is the first study to evaluate the direct impact of star ratings on selection of home health agencies and the first to directly compare the impact of two unique summary measures. Although we observed a relationship between the publication of both sets of star ratings and HHA selection, these relationships were weaker than observed in other health care settings where a single star rating was reported. The impact of both star ratings on selection highlights the importance of continuing to present information about quality and satisfaction separately. Additionally, we found that the presence of star ratings may help to mitigate some of the racial and socioeconomic disparities observed during the selection process. Additional research into the mechanisms and the role of various stakeholders in HHA selection, a longitudinal analysis of the impact of star ratings, and further research into the relationship between star ratings and patient outcomes and satisfaction are needed. These studies will inform future discussions about how to ensure home health patients and their families can effectively utilize the information on HHC to make informed selections of HHAs.
ACKNOWLEDGEMENTS
This research and Dr. Schwartz' time on this work were funded by a grant from the Agency for Healthcare Research and Quality (AHRQ), Grant # R36HS026440‐01 REVISED. The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the AHRQ. Dr. Schwartz was a paid consultant for BAYADA Home Health in 2018, unrelated to her work on this project. Dr. Mor is a paid consultant and chair of the Scientific Advisory Board for NaviHealth, Inc. He also holds equity in and is the former director at PointRight. His work in these roles is unrelated to the work on this project. Drs. Rahman, Thomas and Konetzka have nothing to disclose.
Schwartz ML, Rahman M, Thomas KS, Konetzka RT, Mor V. Consumer selection and home health agency quality and patient experience stars. Health Serv Res. 2022;57(1):113‐124. doi: 10.1111/1475-6773.13867
Funding information Agency for Healthcare Research and Quality, Grant/Award Number: 1R36HS026440‐01 REVISED
REFERENCES
- 1. Medicare Payment Advisory Commission. Report to the Congress: Medicare Payment Policy. Chapter 9: Home Health Care Services. Washington, DC: MedPAC; 2019. Accessed May 17, 2020. http://www.medpac.gov/docs/default-source/reports/mar19_medpac_ch9_sec.pdf?sfvrsn=0
- 2. Harris‐Kojetin L, Sengupta M, Park‐Lee E. Long‐Term Care Providers and Services Users in the United States: Data from the National Study of Long‐Term Care Providers, 2013–2014. Hyattsville, MD: National Center for Health Statistics; 2016.
- 3. Ellenbecker CH, Samia L, Cushman MJ, Alster K. Patient safety and quality in home health care. In: Hughes RG, ed. Patient Safety and Quality: An Evidence‐Based Handbook for Nurses. Rockville, MD: Agency for Healthcare Research and Quality; 2008. http://www.ncbi.nlm.nih.gov/books/NBK2631/
- 4. Centers for Medicare and Medicaid Services. Home Health Quality Reporting Program. Woodlawn, MD: Centers for Medicare and Medicaid Services; 2020. Accessed May 1, 2020. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HomeHealthQualityInits
- 5. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41(1 suppl):I30‐I38.
- 6. Jung JK, Wu B, Kim H, Polsky D. The effect of publicized quality information on home health agency choice. Med Care Res Rev. 2016;73(6):703‐723. doi:10.1177/1077558715623718
- 7. Hibbard JH, Jewett JJ. Will quality report cards help consumers? Health Aff. 1997;16(3):218‐228. doi:10.1377/hlthaff.16.3.218
- 8. Hibbard JH, Peters E, Slovic P, Finucane ML, Tusler M. Making health care quality reports easier to use. Jt Comm J Qual Improv. 2001;27(11):591‐604.
- 9. Schultz J, Thiede Call K, Feldman R, Christianson J. Do employees use report cards to assess health care provider systems? Health Serv Res. 2001;36(3):509‐530.
- 10. Peters E, Dieckmann N, Dixon A, Hibbard JH, Mertz CK. Less is more in presenting quality information to consumers. Med Care Res Rev. 2007;64(2):169‐190. doi:10.1177/10775587070640020301
- 11. Palsbo SE, Kroll T. Meeting information needs to facilitate decision making: report cards for people with disabilities. Health Expect. 2007;10(3):278‐285. doi:10.1111/j.1369-7625.2007.00453.x
- 12. Reid RO, Deb P, Howell BL, Shrank WH. Association between Medicare Advantage plan star ratings and enrollment. JAMA. 2013;309(3):267‐274. doi:10.1001/jama.2012.173925
- 13. Werner RM, Konetzka RT, Polsky D. Changes in consumer demand following public reporting of summary quality ratings: an evaluation in nursing homes. Health Serv Res. 2016;51:1291‐1309. doi:10.1111/1475-6773.12459
- 14. Perraillon MC, Konetzka RT, He D, Werner RM. Consumer response to composite ratings of nursing home quality. Am J Health Econ. 2019;5(2):165‐190. doi:10.1162/ajhe_a_00115
- 15. Schwartz ML, Mroz TM, Thomas KS. Are patient experience and outcomes for home health agencies related? Med Care Res Rev. 2020. doi:10.1177/1077558720968365
- 16. Werner RM, Coe NB, Qi M, Konetzka RT. Patient outcomes after hospital discharge to home with home health care vs to a skilled nursing facility. JAMA Intern Med. 2019;179(5):617‐623. doi:10.1001/jamainternmed.2018.7998
- 17. McFadden DL. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, ed. Frontiers in Econometrics. New York, NY: Academic Press; 1974:105‐142.
- 18. McFadden DL. Modeling the choice of residential location. In: Karlqvist A, Lundqvist L, Snickars F, Weibull J, eds. Spatial Interaction Theory and Planning Models. Amsterdam: North‐Holland; 1978:75‐96.
- 19. Baier RR, Wysocki A, Gravenstein S, Cooper E, Mor V, Clark M. A qualitative study of choosing home health care after hospitalization: the unintended consequences of “patient choice” requirements. J Gen Intern Med. 2015;30(5):634‐640. doi:10.1007/s11606-014-3164-7
- 20. Konetzka RT, Perraillon MC. Use of nursing home compare website appears limited by lack of awareness and initial mistrust of the data. Health Aff. 2016;35(4):706‐713. doi:10.1377/hlthaff.2015.1377
- 21. Konetzka RT, Yan K, Werner RM. Two decades of nursing home compare: what have we learned? Med Care Res Rev. 2020;13:295‐310. doi:10.1177/1077558720931652
- 22. Schapira MM, Shea JA, Duey KA, Kleiman C, Werner RM. The nursing home compare report card: perceptions of residents and caregivers regarding quality ratings and nursing home choice. Health Serv Res. 2016;51(suppl 2):1212‐1228. doi:10.1111/1475-6773.12458
- 23. Kane RL, Kane RA. What older people want from long‐term care, and how they can get it. Health Aff. 2001;20(6):114‐127. doi:10.1377/hlthaff.20.6.114
- 24. Sofaer S, Crofton C, Goldstein E, Hoy E, Crabb J. What do consumers want to know about the quality of care in hospitals? Health Serv Res. 2005;40(6 pt 2):2018‐2036. doi:10.1111/j.1475-6773.2005.00473.x
- 25. Shugarman L, Brown J. Nursing Home Selection: How Do Consumers Choose? Volume I: Findings from Focus Groups of Consumers and Information Intermediaries. Washington, DC: RAND Corporation, prepared for the Office of Disability, Aging and Long‐Term Care Policy, Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services; 2006.
- 26. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay‐for‐performance and quality reporting affect health care disparities? Health Aff. 2007;26(suppl 2):w405‐w414. doi:10.1377/hlthaff.26.3.w405
- 27. Mukamel DB, Weimer DL, Zwanziger J, Gorthy S‐FH, Mushlin AI. Quality report cards, selection of cardiac surgeons, and racial disparities: a study of the publication of the New York State cardiac surgery reports. Inquiry. 2004;41(4):435‐446. doi:10.5034/inquiryjrnl_41.4.435
- 28. Konetzka RT, Grabowski DC, Perraillon MC, Werner RM. Nursing home 5‐star rating system exacerbates disparities in quality, by payer source. Health Aff. 2015;34(5):819‐827. doi:10.1377/hlthaff.2014.1084
