Abstract
Health care report cards improve information and are a crucial part of US federal health care reform. I exploit a natural experiment in the home health sector to assess whether a higher rating under the star ratings program affects patient choice. Higher-rated agencies increased their market share by 0.25 percentage points (95% CI: −0.63 to 1.12), or 1.4 percent, a practically and statistically insignificant amount. I find no evidence of heterogeneous effects across the rating distribution or over time. I also find precise null effects among consumers expected to be more responsive, including community-entry patients and patients in competitive markets with more options and star types. Agencies may have modestly impeded consumer choice by engaging in some patient selection behaviors, although the evidence is only weakly suggestive. The star ratings are unlikely to improve home health quality despite continued policymaker interest.
Keywords: Information asymmetry, long-term care, post-acute care, home health, quality, report cards, health care reform, Medicare, health insurance
I. INTRODUCTION
“Public reporting is a key driver for improving health care quality by supporting consumer choice and incentivizing provider quality improvement,” is the view that underpinned the 2010 Patient Protection and Affordable Care Act’s massive expansion of public report cards in the US health care system (Centers for Medicare & Medicaid Services, 2015b; Reineck & Kahn, 2013). A prominent example is the Home Health Star Ratings program. Beginning in July 2015, CMS assigned each home health agency a 5-star rating based on its performance on nine quality measures relative to other agencies in the nation (Centers for Medicare & Medicaid Services, 2015a). The theoretical mechanisms through which the Home Health Star Ratings and other public reporting programs aim to achieve better quality are simple: they rely on reallocation of consumers based on ratings (Bundorf et al., 2009). In one mechanism, if patients (or their surrogates) can identify and choose high-quality agencies, then average quality improves through reallocation of patients from lower- to higher-quality agencies, even if agencies do not alter quality. In another mechanism, if patients (or their surrogates) choose higher-quality agencies and agencies believe that patient demand for agency services is in part based on quality, then agencies will be incentivized to compete (and improve) on quality, leading to better quality overall.
Despite its theoretical appeal and continued investment of resources by the federal government, the empirical literature on public reporting in health care suggests mixed and, at best, modest success (Kolstad & Chernew, 2009). One proposed reason has been information complexity, which prohibits consumers from understanding and using the information, a problem that is especially salient for older people with physical and cognitive needs (Brook et al., 2002; Hibbard & Peters, 2003; Peters et al., 2007). A simpler star-ratings format may have greater success in eliciting demand responses for the Medicare population (Darden & McCarthy, 2015; Konetzka et al., 2021; Perraillon et al., 2017; Werner et al., 2016). Only one study thus far has examined the effects of star ratings on consumer demand in home health (Schwartz et al., 2022). Schwartz et al. (2022) assessed the effects of introducing the star ratings on home health admissions among agencies with low, medium, and high star ratings, before and after the program was implemented. Like other studies using pre-post designs, however, the empirical approach used by Schwartz et al. makes it difficult to differentiate the effects of the 5-star format, a specific policy intervention, from the effects of other contemporaneous changes. I augment the existing literature by determining whether quality information in the form of a higher summary star rating affects patients’ choice of home health agency, using a natural experiment.
There are several unique features of the home health sector that make it interesting and important to study. First, the introduction of the summary star ratings to home health may have significantly reduced cognitive load and simplified complex quality information for patients. Unlike previous versions of quality disclosures for home health, the star ratings no longer require consumers to compare agencies measure by measure, prioritize measures, or reconcile discrepancies in measure-specific performance among agencies, which requires numeracy and can be cognitively taxing (Jung et al., 2016). Additionally, unlike other types of health care, there are zero out-of-pocket fees and no travel costs associated with home health care for Medicare patients because they receive care at home. Thus, Medicare home health patients have no competing cost concerns, making them ideal subjects for studying how quality information influences consumer choice (Jung et al., 2016). Moreover, home health is a rapidly growing sector with variable quality of care, making it critical that policymakers identify successful interventions while sunsetting ineffective ones. To do so, it is imperative that we have rigorous evidence to discern the full spectrum of effects of the star ratings on patient care.
II. NEW CONTRIBUTION
This study makes three specific contributions. First, it identifies the causal effect of an additional half star on new patient market shares in the first 1.5 years of the Home Health Star Ratings program. It uses a sharp regression discontinuity design to compare agencies that were virtually identical but barely on the opposite sides of an arbitrary star threshold. Second, the current study examines the effects of a one unit increase in ratings (i.e., half star) on consumer choice, including for agencies in the middle of the star distribution. By construction, most home health agencies fall in the middle of the star distribution and patients usually only have choices between agencies with small differences in ratings. Therefore, discerning the effects of a unit higher in star ratings is crucial for determining the mechanism of star ratings as a policy intervention. Third, given the reliance of policymakers on public reporting to improve quality, this study undertakes an extensive investigation to interrogate the claim that the star ratings affect patient use of home health agencies. Thus, in addition to (1) discerning the effects of an additional unit-increase in star rating on patient volume, this study considers a variety of scenarios that may lead to heterogeneous effects, including (2) one more half star at various points across the star rating distribution, (3) dynamics over time, (4) among patients better positioned to search for information (e.g., community-entry patients) or with more to gain from searching (e.g., patients in markets with more home health options or star types), and (5) whether home health agencies may have restricted where patients received care by selectively admitting more profitable patients while hindering access for less profitable patients.
III. CONCEPTUAL FRAMEWORK
The standard utility maximizing model specifies that consumers (patients and their surrogates) determine their best option (a specific provider) using available information and their preferences. Medicare fee for service (FFS) patients have no travel or out-of-pocket costs, and Medicare sets uniform prices for agencies. Thus, perceived quality and availability determine home health agency choice. Quality varies among agencies in this market, but consumers have incomplete information at decision time. Interviews with patients indicate that they do not have sufficient information to make informed decisions (Baier et al., 2015). As a result, consumers may mistakenly believe that providers are of equal quality. One reason for incomplete information could be that consumers are unable to take advantage of available information. In situations where consumers must make decisions quickly, such as choosing post-acute care while being discharged from an inpatient stay, additional pressures may prevent them from searching for or synthesizing available information (Jung et al., 2016; Werner et al., 2012). Therefore, proponents argue that because consumers have imperfect information, home health agencies are not incentivized to compete on quality due to relatively unresponsive demand for quality (Grabowski & Town, 2011). The Medicare 5-star ratings aim to reduce cognitive barriers associated with information gathering and interpretation so patients can more easily distinguish agencies across the full spectrum of quality (Centers for Medicare & Medicaid Services, 2015b). All else equal, if 5-star ratings make comparing home health agencies easier, demand for higher-rated agencies should increase, leading to more patients in higher-rated agencies.
IV. INSTITUTIONAL BACKGROUND
The largest payer of home health services is Medicare (Centers for Medicare & Medicaid Services, 2021). In 2019, Medicare paid $17.8 billion to over 11,000 home health agencies for care delivered to 3.3 million FFS beneficiaries (Medicare Payment Advisory Commission, 2021). To be eligible for Medicare-covered home health care, a physician or other qualified practitioner must first certify a patient’s need for intermittent skilled care and vouch that the patient cannot leave home without help (Centers for Medicare & Medicaid Services). Home health patients are twice as likely to be at least 85 years old, 36% more likely to have at least three chronic conditions, 80% more likely to report fair or poor health, and 30% more likely to have incomes at or below 200% of the federal poverty level, when compared to the general Medicare population (Avalere, 2015).
For eligible beneficiaries, Medicare’s home health benefit covers patients with no out-of-pocket costs. Although formally considered a post-acute care service, home health care serves two types of patients: those who need short-term, rehabilitative care (post-acute) and chronically ill people who need longer-term support (community entry) (Murtaugh, 2008). All services are delivered to patients at their homes, and can include skilled nursing, physical therapy, speech-language pathology, occupational therapy, medical social services, home health aide services, and medical supplies. Patients are unconstrained in the number of episodes of care they can receive (Medicare Payment Advisory Commission, 2020).
Often, many agencies are available to patients. More than 80 percent of patients live in ZIP codes served by at least 5 agencies (Medicare Payment Advisory Commission, 2021). Medicare FFS patients can choose any Medicare-certified home health agency willing to render care. Thus, patients’ choice sets are locally determined, constrained by agencies that serve their residential areas and are willing to treat them.
Patients frequently select post-acute care providers based on personal experience or family and friends’ recommendations (Baier et al., 2015; Gadbois et al., 2017). One or more formal caregivers, such as a hospital discharge planner, case manager, or physician, typically influence a patient’s decision (Konetzka & Perraillon, 2016; Swope & Brown, 2015). For instance, a hospital discharge planner arranging a referral to home health could choose an agency for the patient or influence the patient’s selection by streamlining the options offered (Baier et al., 2015; Li et al., 2023; Swope & Brown, 2015). Thus, the observable demand response is a culmination of multiple actors’ input.
Recognizing that many patients struggle to choose post-acute care, Medicare’s first home health information disclosure initiative began in November 2005. CMS produced a list of up to 11 quality measures for each home health agency. Early research suggests that the initiative did not meaningfully influence patient demand for higher-rated agencies (Jung et al., 2016). More than a decade later, CMS updated the report card system with composite star ratings, the Quality of Patient Care star ratings (hereinafter referred to as “star ratings”). The new format summarized several agency-level clinical quality measures based on the agency’s skill, effort, and patient characteristics. New and small agencies were excluded (Centers for Medicare & Medicaid Services, 2015d), leaving approximately 24% of Medicare-certified agencies without a star rating in the first release. Since July 2015, each eligible agency has been rated between 1 and 5 stars in half-star increments every quarter, and their summary ratings have been available on the CMS Compare website. In January 2016, CMS developed a second set of home health star ratings to summarize patient experience. CMS also assigns 5-star ratings to skilled nursing facilities, hospitals, dialysis facilities, clinicians, and Medicare Advantage plans, in addition to home health care.
V. EMPIRICAL APPROACH
Study Population.
This study uses individual data from January 2014 through December 2016 to assess Medicare FFS beneficiaries who received home health care in one of the 50 US states and Washington, DC. Patients were included if they had continuous FFS coverage in the year before care began. They must also have started treatment at a star-rated agency at some point during the first six quarterly rating periods. I focus on Medicare FFS patients because their selection of agencies is not constrained by insurance restrictions, allowing for a more accurate measure of patient choice. Because patients are likely to choose an agency they have used before, I focus on new patients, whom I define as Medicare FFS patients without use of home health services in the 12 months before the start of home health care, consistent with prior approaches (Schwartz et al., 2022).
Data.
The home health Outcome and Assessment Information Set (OASIS) comprises patient data such as service use dates, residential ZIP codes, demographic and clinical characteristics, payment source, and whether the patient was admitted from an inpatient setting. The OASIS is collected by Medicare-certified home health agencies for all adult patients (Centers for Medicare & Medicaid Services, 2015c). I linked OASIS data to the Master Beneficiary Summary File to identify FFS status.
The Home Health Compare website provides ratings and release dates. These ratings were released quarterly beginning July 16, 2015. For the first set of star ratings, measures were based on data collected between October 1, 2013, and December 31, 2014, depending on the measure. Subsequent star ratings were based on successive historical data, also with 6-to-9-month lags. I obtained the unrounded ratings underlying the publicly displayed ratings in Home Health Compare through a Freedom of Information Act data request to CMS.
I also gathered state, years in operation, and for-profit status of agencies from Home Health Compare. I used the fiscal year 2014 and 2015 Healthcare Cost Report Information System to identify agency-level affiliations with a chain organization.
Outcome: Patient Share.
My primary objective is to determine whether one more half star in ratings increased patient demand for highly rated agencies. If patients use and respond to star ratings, then agencies with more stars should serve a greater share of patients in their market. I focus on ZIP codes since they are the smallest geographic unit available to the public when searching for home health agencies. For each quarterly release, I calculate the percent of new patients as the number of new Medicare FFS home health patients treated by an agency out of all new Medicare FFS home health patients in the ZIP codes the agency served during that period. Each quarterly release period includes the day after the release and up to, but not including, the day of the subsequent release (Appendix Table 1). I exclude the day of release because Medicare provides the exact day but not the time that the new ratings are posted.
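As an illustration, the patient-share construction can be sketched as follows. This is a minimal Python sketch with an invented input layout (a list of agency and ZIP pairs), not the actual OASIS data structure:

```python
from collections import defaultdict

def new_patient_shares(admissions):
    """Compute each agency's share of new Medicare FFS patients, as a
    percent of all new patients in the ZIP codes the agency serves.
    `admissions` is a list of (agency_id, zip_code) pairs for one
    quarterly release period (illustrative structure only).
    """
    zip_totals = defaultdict(int)      # new patients per ZIP code
    agency_counts = defaultdict(int)   # new patients per agency
    agency_zips = defaultdict(set)     # ZIP codes each agency served
    for agency, zip_code in admissions:
        zip_totals[zip_code] += 1
        agency_counts[agency] += 1
        agency_zips[agency].add(zip_code)
    shares = {}
    for agency, zips in agency_zips.items():
        # market = all new patients in the ZIP codes this agency served
        market = sum(zip_totals[z] for z in zips)
        shares[agency] = 100.0 * agency_counts[agency] / market
    return shares
```

Note that the denominator varies by agency: each agency's market is defined by the union of ZIP codes it served in that period.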
For the patient share measure to reflect changes in demand, it is crucial that agencies did not expand or contract their market areas. Holding patient volume constant, an agency that increases (or decreases) the number of ZIP codes served may appear to have lost (or gained) market shares, simply due to a wider geographic spread.
I do not find evidence that agencies changed the number of ZIP codes they served (Appendix Table 2), indicating that the patient share measure is unlikely to be confounded by changes in geographic market expansions or contractions.
Treatment: One more half star.
The treatment of interest is the public display of one more half star in clinical quality ratings, where an agency obtains the additional half star due to arbitrary rounding rules. Therefore, the causal estimate obtained from the analysis is the patient share effect of an agency displaying a half-star higher rating during the program’s implementation (as opposed to the patient share effect of introducing the program).
Star ratings fall on a half-star scale between 1 and 5 stars. Most agencies fell in the 2.5- to 4-star range, and the pattern was similar over time (Appendix Figure 1). To arrive at the final half-star rating, CMS averages nine individual star ratings based on nine individual measures (Appendix Table 3). Each measure-specific star rating is rounded to the nearest half point; these rounded ratings are then averaged to produce a composite, unrounded star rating that extends to three decimal places and is discrete. The unrounded star ratings cluster around certain values because CMS takes the average of rounded numbers. The unrounded star rating is then rounded to the nearest half point to arrive at the final composite rating (Centers for Medicare & Medicaid Services, 2015d). For instance, an agency receives 2.5 stars if its unrounded composite rating is 2.251 and receives 2 stars if its rating is 2.249. Therefore, as shown in Appendix Figure 2, unrounded star ratings determine the public-facing star rating received by each agency, with a sharp treatment discontinuity at the rounding threshold cutoff (dashed line). Together, the treatment group consists of agencies with one more half star at a given threshold (i.e., right of the threshold) and the control group consists of agencies with one fewer half star at that threshold.
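The rounding rules can be made concrete with a short sketch. The tie-breaking convention for exact midpoints is an assumption: the paper's example only pins down 2.251 → 2.5 and 2.249 → 2, so I assume midpoints round up.

```python
import math

def round_half(x):
    """Round to the nearest half star. Exact midpoints (e.g., 2.25) round
    up under this convention -- an assumption consistent with the paper's
    2.251 -> 2.5 and 2.249 -> 2.0 example, which does not test ties."""
    return math.floor(x * 2 + 0.5) / 2

def composite_star(measure_stars):
    """Average the (already half-rounded) measure-specific ratings into
    the unrounded composite (reported to three decimals), then round that
    composite to the final displayed half-star rating."""
    unrounded = sum(measure_stars) / len(measure_stars)
    return round(unrounded, 3), round_half(unrounded)
```

For instance, an agency with five measures rated 2.5 stars and four rated 2 stars has an unrounded composite of 2.278, which displays as 2.5 stars.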
Analysis.
To isolate the effects of one more half-star on new patient share, I use a sharp regression discontinuity design that leverages CMS’s composite star rating assignment rules. I compare agencies that are virtually identical but are barely on the opposite sides of an arbitrary star threshold (i.e., rounding of a continuous, underlying score to nearest half star). Precise knowledge of Medicare’s rules provides the identifying source of variation.
I use parametric extrapolation as my primary approach. Figure 1 shows a visual description of the data, which suggests that a linear model is appropriate to capture any changes in the outcome across the threshold. In equation (1), my preferred specification, I use ordinary least squares regression to estimate the level shifts in the cross-sectional relationship between the share of new patients per agency per quarter and an additional half star (see Appendix I for other specifications as sensitivity tests). This specification performed the best among alternatives in terms of Akaike information criterion, such as adjusting for quadratic functions of the unrounded star ratings and allowing for different slopes across the rounding threshold (Appendix Tables 4–6). It is also preferred due to small sample concerns and because higher-order polynomial specifications are subject to overfitting problems.
Figure 1:

Descriptive relationship between unrounded home health star ratings and share of new patients pooling all thresholds for agencies with a home health star rating from July 2015−December 2016
Notes: Pooled threshold sample includes agencies with unrounded star ratings that are centered at the rounding threshold and up to, but not including ±0.25 on either side. Each point represents an unrounded star rating value.
$Y_{jt} = \beta_0 + \beta_1 \text{Above}_{jt} + \beta_2 \tilde{R}_{jt} + \varepsilon_{jt}$  (1)

In this specification, $Y_{jt}$ is the share of new patients for agency $j$ in quarter $t$. $\text{Above}_{jt}$ is equal to 1 if the observation received a higher star rating (right of the threshold) and 0 otherwise; and $\tilde{R}_{jt}$ is the unrounded star rating for agency $j$ in quarter $t$, centered at the rounding threshold, calculated as the difference between the unrounded star rating and the threshold, where the threshold can take on the values 1.25, 1.75, 2.25, 2.75, 3.25, 3.75, 4.25, and 4.75. The unrounded star rating is included as a covariate to control for any selection bias due to quality differences across agencies (Heckman & Robb, 1985). Because I combine six quarterly releases, I have repeated observations. Therefore, to account for within-agency correlations from combining the six quarterly releases, I cluster standard errors at the home health agency level. An estimate of $\beta_1 > 0$ would be consistent with the notion that one more half star led to an increase in patient market share.
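As a minimal sketch of this estimating approach, the level shift in equation (1) can be recovered by ordinary least squares on simulated data. All numbers below are illustrative, not the study data, and clustered standard errors are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated agency-quarter data (illustrative values only):
# a true discontinuity of 0.25 percentage points at the rounding threshold.
n = 5000
r = rng.uniform(-0.25, 0.25, n)              # unrounded rating, centered at threshold
above = (r >= 0).astype(float)               # received one more half star
y = 17.5 + 0.25 * above + 2.0 * r + rng.normal(0, 5, n)  # new-patient share (pp)

# Equation (1): intercept, level-shift dummy, and linear running-variable control
X = np.column_stack([np.ones(n), above, r])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated discontinuity: {beta[1]:.2f} percentage points")
```

The coefficient on the dummy recovers the jump at the cutoff because the linear term absorbs the smooth relationship between the running variable and the outcome.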
I examine the effects of an additional half star on patient shares in two ways. First, I combine the thresholds and six quarterly releases into one sample, resulting in a total of 8,806 agencies. Each unrounded star rating within ±0.25 on either side of a threshold is included and centered at the rounding threshold. Combining the thresholds makes the assumption that the effects of one more half star are constant (Cattaneo et al., 2016). Conceptually, homogeneous effects could hold true since crossing the threshold always results in the same treatment (a half star more) and choosing an agency with a higher rating is presumably always better than an agency with a lower rating, all other factors equal.
I also allow for heterogeneous effects across thresholds. Thus, in the second way, I examine each threshold separately to determine whether having an additional half star yields different effects across the distribution of the stars. Heterogeneity may be plausible if, for example, patients place more value on a change from 3.5 to 4 stars (from average to above average) than from 1 to 1.5 stars (worst to slightly less bad). For each threshold, agencies with unrounded star ratings of ±0.25 on either side of a threshold are included and centered at each rounding threshold.
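The pooling step, assigning each agency's unrounded rating to its nearest rounding threshold and centering it there, can be sketched as follows. This is a hypothetical helper; the window is treated as open at ±0.25, following the note to Figure 1 ("up to, but not including ±0.25"):

```python
THRESHOLDS = (1.25, 1.75, 2.25, 2.75, 3.25, 3.75, 4.25, 4.75)

def center_at_threshold(unrounded, bandwidth=0.25):
    """Return (centered running score, treated indicator) for the nearest
    rounding threshold, or None if the agency lies outside the window.
    The window is open at both ends (up to, but not including, the bandwidth)."""
    for t in THRESHOLDS:
        dist = round(unrounded - t, 3)   # center at the threshold
        if -bandwidth < dist < bandwidth:
            return dist, int(dist >= 0)  # treated if at or right of threshold
    return None
```

Because thresholds are spaced 0.5 apart, the ±0.25 open windows partition the rating scale without overlap; an agency exactly halfway between two thresholds (e.g., an unrounded rating of 2.5) belongs to neither window.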
Threats to Identification and Robustness Checks.
The continuity assumption is a key criterion for the internal validity of the regression discontinuity design, which states that absent treatment, there should be no discontinuous changes in potential outcomes at the threshold (Cunningham, 2021). This has several implications, including (1) the running score is not influenced by the treatment, (2) the cutoff threshold is determined independently of the rating variable and assignment to treatment is entirely determined by the running score and threshold, and (3) there are no other discontinuities beyond treatment status at the threshold (Cunningham, 2021).
To examine whether the assumptions underlying the design are met, I conduct several tests. First, I examine whether agencies may have precisely manipulated their running scores relative to each rating cutoff. The quarterly star ratings are computed by Medicare using historical data, and exact scores are calculated based on a national distribution. Thus, to manipulate its score, an agency would need to accurately estimate both its own score and the national distribution well in advance. Although manipulation is unlikely (and visually does not appear to be present, Appendix Figure 3), I formally test for manipulation of the unrounded star ratings using the Frandsen manipulation test for a discrete running score, implemented with the Stata command rddistestk (Frandsen, 2017). The Frandsen test uses the support points at and immediately adjacent to the threshold to test the null hypothesis that the probability mass function is smooth around the threshold. To implement the test, one must specify a parameter k ≥ 0 that determines the degree of deviation from linearity in the probability mass function that would lead the test to reject the null hypothesis of no manipulation. A small k means that small nonlinearities would cause the test to reject the null hypothesis of no manipulation with high probability, while a large k means that the probability mass function can be highly nonlinear at the threshold before the test has sufficient power to detect manipulation. Following the approach outlined in Frandsen (2017) to estimate k, the p value is 1.000 when k = 0.138. When k = 0, the corresponding p value is 0.901, consistent with no manipulation (Appendix Table 7).
Second, I compare sample characteristics by treatment status (Appendix Table 8). These descriptive statistics show that agencies with an additional half star were comparable to those below the threshold. The treatment and control agencies had been in operation for nearly a decade on average, were mostly for-profit, were chain-affiliated about 20% of the time, and had similar patient composition. In the half year prior to the star ratings, from January through June 2015, the average agency treated 16 percent of all new FFS patients within its market. These descriptive statistics suggest that the treatment and control groups were comparable and balanced in covariates.
Third, I formally compare agency characteristics using regression by testing for discontinuous jumps in agency-level characteristics (i.e., differences in characteristics across the threshold), including years of operation, for-profit status, and chain affiliation. I also test whether the ratings affected the agency’s patient composition in the half-year preceding the star ratings program. Out of the 16 placebo tests, I find one marginally significant (p = 0.085) difference of 3 percent or 0.46 (SE=0.27) years between the treatment and control group in agency age. Collectively, I find no evidence of systematic differences between the two groups (Appendix Table 8).
Fourth, I minimize potential bias from nonrandom heaping in the running variable (unrounded star ratings) by stratifying by heaped points (i.e., clustering of agencies of a particular type at specific running scores), following the approach suggested by Barreca, Lindo, and Waddell (2016). The heap sample is much larger and captures most of the home health agencies in the nation (>99%) (see Appendix Figure 4). A disproportionate share of heaped versus non-heaped observations across the cutoff would indicate non-comparability between the treatment and control groups across the threshold and that ignoring nonrandom heaping could bias the estimates.
Finally, I conduct four additional robustness checks: (1) adjusting for covariates as a sensitivity check for all regressions, consisting of pre-treatment data from January 2015 through June 2015 on each agency’s share of new Medicare FFS patients, total new FFS patients, total admissions (all payers), percent of post-acute entry patients, percent of patients by race and ethnicity, percent by payer, percent who were dually eligible for Medicare and Medicaid, and mean patient age, plus release date dummies, agency age, chain organization, for-profit status, and agency participation in the alternative payment model Home Health Value-Based Purchasing program implemented in 2016 in 9 states; (2) examining alternative specifications to my preferred base specification, including higher-order polynomials (Appendix I); (3) testing the sensitivity of the estimates to varying bandwidths of ±0.25, ±0.15, and ±0.11; and (4) examining all research questions without imposing parametric assumptions, using local randomization as an alternative (Cattaneo et al., 2019) (Appendix Tables 9–22).
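Robustness check (3), shrinking the estimation window, amounts to re-running the same level-shift regression on successively narrower samples. A minimal sketch on simulated data (hypothetical values, no clustering):

```python
import numpy as np

def rd_by_bandwidth(r, above, y, bandwidths=(0.25, 0.15, 0.11)):
    """Re-estimate the threshold discontinuity within successively
    narrower windows around the cutoff."""
    estimates = {}
    for bw in bandwidths:
        keep = np.abs(r) < bw                 # restrict to the window
        X = np.column_stack([np.ones(keep.sum()), above[keep], r[keep]])
        beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
        estimates[bw] = beta[1]               # level shift at the cutoff
    return estimates

# Simulated data with a true jump of 0.25 percentage points (illustrative only)
rng = np.random.default_rng(1)
r = rng.uniform(-0.25, 0.25, 20000)
above = (r >= 0).astype(float)
y = 17.5 + 0.25 * above + 2.0 * r + rng.normal(0, 5, 20000)
est = rd_by_bandwidth(r, above, y)
```

Stability of the point estimate as the window shrinks is reassuring; narrower windows trade bias for variance, which is why the standard errors in Table 1 grow at ±0.11.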
VI. RESULTS
Main Estimates.
The estimated effect of one more half star on the market share of new patients was small and statistically insignificant (Table 1). Gaining a half star corresponded to an increase of 0.25 (SE = 0.45) percentage points, or approximately a 1-percent increase from an average of 18 percent; the corresponding 95 percent confidence interval rules out effects larger than 1.12 percentage points. When adjusted for covariates, the estimate is even smaller, with a point estimate of 0.03 (SE = 0.19) percentage points and an upper bound of 0.39 percentage points. Although not directly comparable, these estimates are consistent with the most recent estimates from Schwartz et al. (2022), which suggested modest although statistically significant effects.
Table 1:
Regression discontinuity estimates of the effect of having one more half star on market shares.
| Pooled thresholds | Covariates | N agencies | Mean | (SD) | Coef. | (SE) |
|---|---|---|---|---|---|---|
| (−0.25, 0.25) | | 8,806 | 17.55 | (21.81) | | |
| | No | | | | 0.25 | (0.45) |
| | Yes | | | | 0.03 | (0.19) |
| (−0.15, 0.15) | | 8,633 | 17.55 | (21.77) | | |
| | No | | | | 0.17 | (0.53) |
| | Yes | | | | −0.17 | (0.22) |
| (−0.11, 0.11) | | 8,112 | 17.58 | (21.95) | | |
| | No | | | | 0.80 | (0.72) |
| | Yes | | | | 0.07 | (0.30) |
Notes: Estimates obtained from ordinary least squares regression assessing level shifts in the cross-sectional relationship between the share of new patients per agency and an additional half star in rating. Covariates include agency organizational characteristics (agency age, chain affiliation, for-profit status, Home Health Value-Based Purchasing participation), pre-star ratings Medicare patient characteristics from 1/2015 to 6/2015 (total patient count, percent discharged from an inpatient institution, percent female, percent white, percent black, percent Hispanic, average age, percent that were fee-for-service enrollees, percent that were Medicare Advantage enrollees, percent that were Medicaid enrollees, percent that were dually enrolled in Medicaid and Medicare, number of new patients, share of new patients), and star rating release dummy variables. Standard errors were clustered at the home health agency level.
The results were similarly small in magnitude and not statistically significant when the sample was restricted to narrower bandwidths around the cutoff. Estimates from the narrowest bandwidth, within ±0.11, suggest an increase of 0.80 (SE = 0.72) percentage points. Based on the 95 percent confidence interval, I cannot rule out an effect as large as 2.20 percentage points, but even a 2-percentage-point increase is small, albeit at the higher end of the demand responses documented for home health report cards to date.
The threshold-specific point estimates were generally similar to the point estimate from the main analysis and did not display clear patterns (Figure 2). For instance, unadjusted estimates show that 5 out of 8 point estimates were within a magnitude of 1-percentage point, including thresholds at the low (1 vs 1.5, 1.5 vs 2), middle (2.5 vs 3) and high end of the distribution (3.5 vs 4, 4 vs 4.5). While most point estimates were positive, thresholds for 3 vs 3.5 and 3.5 vs 4 were negative. Covariate adjusted estimates were similar, with 6 out of 8 within a half-percentage point in magnitude and with all estimates within 1-percentage point except for the estimate for the 1 vs 1.5 threshold.
Figure 2:

Effects of one more half star on new patient market share across the star distribution
Notes: Graph displays point estimates and 95% confidence intervals. Horizontal, dashed blue line represents the pooled threshold estimate across all releases. Estimates obtained from ordinary least squares regression assessing level shifts in the cross-sectional relationship between the share of new patients per agency and an additional half star in rating. Covariates include agency organizational characteristics (agency age, chain affiliation, for-profit status, Home Health Value-Based Purchasing participation), pre-star ratings Medicare patient characteristics from 1/2015 to 6/2015 (total patient count, percent discharged from an inpatient institution, percent female, percent white, percent black, percent Hispanic, average age, percent that were fee-for-service enrollees, percent that were Medicare Advantage enrollees, percent that were Medicaid enrollees, percent that were dually enrolled in Medicaid and Medicare, number of new patients, share of new patients), and star rating release dummy variables. Standard errors were clustered at the home health agency level.
Dynamic Effects.
One potential reason for the null effects is that the star ratings could have affected consumer demand differently over time, for example, through gradual consumer learning. I first examine whether there was a positive correlation between program age and patient shares, which would indicate delayed awareness of the ratings, but I find no discernible pattern indicative of growing awareness (Appendix Figure 5). In the quarter following the first release of star ratings, the unadjusted point estimate indicates an effect of 1.18 (SE = 1.10) percentage points. By the last release in 2016, the point estimate was about 0.63 (SE = 1.12) percentage points.
These results were also consistent with Google Trends data, which show the frequency of searches for “home health star rating” from 2014 to January 2020 (Appendix Figure 6). Because CMS’s website is the public’s primary source for star ratings, Google search data are likely to capture interest in the ratings. Search frequency peaked in July 2015, when the star ratings began, and then dropped by 50%. At least as measured by Google searches, interest in home health star ratings did not increase over time and remained relatively constant into 2020.
Outdated information could also delay responses, so patient shares may not increase in the quarter immediately after each release. This is plausible if, for example, a hospital incorporates star ratings into its discharge planning documents but does not update them frequently. To assess whether lags in consumer response to each release could have masked changes in patient demand, I examine up to six lags. For agencies rated in July 2015, for instance, I regard treatment status from the first release as fixed and compare outcomes for the same cohort in each subsequent quarter. This yields six cohorts of agencies. A positive correlation suggests information delays, a negative correlation indicates a timely response to the ratings, and no correlation suggests no demand response. I do not find evidence of changing patient shares (Appendix Figure 7). Regardless of the lag, point estimates were flat, compatible with no demand response. Together, these results indicate that heterogeneous responses over time are unlikely to explain the null effect estimates in the first 1.5 years.
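The lag construction above pairs each agency's fixed treatment status at a given release with its outcome several quarters later. A minimal sketch of that pairing, with hypothetical agency names and values (the paper's actual data structures are not shown):

```python
def lagged_pairs(treatment_by_agency, outcome_panel, release, lag):
    """Pair each agency's treatment status, held fixed at `release`, with
    its outcome `lag` quarters later. Quarters are indexed 0, 1, 2, ...
    Agencies without an observed outcome at that quarter are dropped."""
    return [
        (treatment_by_agency[a], outcome_panel[a][release + lag])
        for a in treatment_by_agency
        if release + lag < len(outcome_panel[a])
    ]

# Hypothetical two-agency cohort rated at the first release (quarter 0):
treatment = {"A": 1, "B": 0}  # 1 = just above a rounding threshold
outcomes = {"A": [10.0, 10.1, 10.0], "B": [9.8, 9.9, 9.9]}  # patient shares
pairs_lag2 = lagged_pairs(treatment, outcomes, release=0, lag=2)
```

Regressing the outcome on treatment within each such lagged pairing, cohort by cohort, traces out whether the treatment-control gap grows (information delays), shrinks (timely response), or stays flat (no demand response).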
Variability in search behaviors.
To detect whether the star ratings had larger effects among populations that were more likely to search for information, I separately focus on community-entry patients, who may have more time to search for information (Jung et al., 2016; Schwartz et al., 2022), and on people residing in markets with many home health agencies or many star-rated options. I measure the availability of agencies and star types using the Herfindahl-Hirschman Index (HHI) (Appendix II). I find no evidence that an increase in star ratings produced larger effects among community-entry patients (Appendix Figure 8) or among patients in markets with more options (Appendix Figure 9). For instance, unadjusted estimates suggest that one more half star increased the share of new community-entry patients by 0.19 (SE = 0.51) percentage points and the share of new post-acute admissions by 0.30 (SE = 0.50) percentage points.
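The HHI referenced above is conventionally the sum of squared market shares, so lower values indicate less concentrated markets with more options. A minimal sketch of the market-level calculation, using hypothetical agency shares (the exact share definition used in Appendix II is an assumption):

```python
def herfindahl_hirschman_index(shares):
    """Sum of squared market shares; `shares` should sum to roughly 1.
    Returns a value in (0, 1]; lower values mean less concentration,
    i.e., more competing agencies or star types available."""
    return sum(s ** 2 for s in shares)

# Hypothetical market with four equally sized agencies:
hhi_competitive = herfindahl_hirschman_index([0.25, 0.25, 0.25, 0.25])
# Hypothetical near-monopoly market:
hhi_concentrated = herfindahl_hirschman_index([0.9, 0.1])
```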
Patient selection by agencies.
Even if consumer demand for higher-rated agencies increased, short-term capacity constraints are likely to hinder agencies from increasing total patient volume. And because prices are administratively set by public payers or pre-negotiated and cannot be changed quarterly, one possible response by home health agencies to increased consumer demand is to cherry-pick more profitable patients.
One proxy for a patient’s profitability is the income of the patient’s local ZIP code. Local area income is correlated with residents’ ability to manage their health and with local crime (Chen et al., 2014; Dong et al., 2020), both of which are cited as leading reasons for home health agencies to refuse patients (Centers for Medicare & Medicaid Services). I use median income among people 65 years or older from the American Community Survey at the ZIP code level to construct a weighted average across the ZIP codes served by an agency, where the weight is each ZIP code’s share of the agency’s episodes in each period. The average income of ZIP codes served by agencies was $41,511 and was modestly higher for agencies with higher star ratings. For instance, the mean ZIP code income for agencies at the cusp of 1.5 and 2 stars was $37,091, while the mean was $41,756 for agencies at the cusp of 4.5 and 5 stars. An increase in the average income of ZIP codes served by an agency with an additional half star would imply a shift toward more desirable markets, consistent with patient selection.
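The episode-weighted average described above can be sketched as follows, with hypothetical ZIP codes, episode counts, and incomes (the actual ACS extract and episode data are not reproduced here):

```python
def episode_weighted_income(episodes_by_zip, income_by_zip):
    """Weighted average ZIP-code median income for one agency-period,
    where each ZIP's weight is its share of the agency's episodes."""
    total_episodes = sum(episodes_by_zip.values())
    return sum(
        (count / total_episodes) * income_by_zip[z]
        for z, count in episodes_by_zip.items()
    )

# Hypothetical agency serving three ZIP codes in one quarter:
episodes = {"60601": 50, "60602": 30, "60603": 20}
incomes = {"60601": 45_000, "60602": 38_000, "60603": 41_000}
avg_income = episode_weighted_income(episodes, incomes)
# 0.5 * 45,000 + 0.3 * 38,000 + 0.2 * 41,000 = 42,100
```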
The effect of one more half star on the average income of ZIP codes served by an agency was $257 (SE=196) with pooled thresholds (p value = 0.190) and $311 (SE=158) (p value = 0.050) with covariate adjustment (Appendix Figure 10). Across the star thresholds, most estimates were positive, small, and not statistically significant. The largest point estimate was for 4.5 vs. 5 stars, $636 (SE=568), followed by 3 vs 3.5 stars, $420 (SE=279). Thus, while there is some suggestive evidence that agencies with one more half-star treated more patients from higher-income ZIP codes in the quarter after the ratings were released, the effects were modest.
I also measure profitability at the patient level, focusing on the share of low-profit-margin Medicare patients and of Medicare-Medicaid dually eligible patients (Centers for Medicare & Medicaid Services). Low-profit-margin patients are FFS enrollees with clinical characteristics associated with lower profit margins than other Medicare FFS patients on average: people with poor control of clinical conditions (10 percent lower), traumatic wounds or ulcers (20 percent lower), significant bathing needs (20 percent lower), overall high risk (20 percent lower), and recipients of intravenous therapy or parenteral nutrition at home (15 percent lower). Medicare-Medicaid dually eligible patients are also associated with decreased profit margins (20 percent lower) and generally have more complex social and clinical needs than Medicare-only populations, which may make them more costly for home health agencies to manage. I identify low-profit-margin patients using OASIS data and dually eligible status from the MBSF. For this analysis, my sample includes all Medicare FFS patients, regardless of whether they used home health care in the past year.
I anticipate that decisions to admit a given patient are affected by capacity constraints, which I proxy using county-level home health worker availability per 1,000 persons 65 years or older. Home health agencies depend heavily on labor inputs from workers such as nurses, therapists, and aides, and industry experts often cite worker shortages as a source of capacity constraints (Chen, 2018; Galewitz, 2021). I use county-level data from the Quarterly Census of Employment and Wages to obtain the number of workers in home health care services (NAICS 6216). I divide markets into those with fewer workers (ZIP codes in the bottom tertile) and those with many workers (ZIP codes in the top tertile).
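The tertile split can be sketched as below, with hypothetical county worker counts; the exact cut-point convention used in the paper is not specified, so this is one plausible implementation:

```python
# Hypothetical home health workers per 1,000 persons 65+, by county:
workers = {"A": 2.1, "B": 5.4, "C": 9.8, "D": 1.3, "E": 7.5, "F": 4.0}

# Empirical tertile cut points over the distribution of counties:
values = sorted(workers.values())
n = len(values)
low_cut = values[n // 3 - 1]   # top of the bottom tertile
high_cut = values[2 * n // 3]  # bottom of the top tertile

fewer_workers = {c for c, v in workers.items() if v <= low_cut}
many_workers = {c for c, v in workers.items() if v >= high_cut}
```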
The patterns of the estimated effects of one more half star across the star distribution varied by area-level worker capacity (Appendix Figure 11). For agencies serving ZIP codes with fewer workers, estimates suggest increased shares of less desirable patients for agencies at the tails of the star distribution and potentially decreased shares for those in the middle (Panel A). In areas with fewer workers, agencies in the middle of the distribution—the most prevalent types of agencies—may have admitted fewer undesirable patients after receiving a higher rating, thereby diverting these patients to lower-rated agencies. In contrast, none of the adjusted point estimates for ZIP codes with more workers exceeded 2 percentage points and there was also less variation across the distribution (Panel B).
However, if agencies in the middle of the distribution in areas with fewer workers were able to cherry-pick preferable patients due to increased patient demand, one would expect an increase in patient shares among higher-rated home health agencies where there were more workers, which was not observed. Thus, together with the findings from the market income analysis, these results suggest that the star ratings did not have a meaningful effect on how patients chose home health agencies or how home health agencies chose patients.
VII. DISCUSSION
This study examines an intervention to mitigate imperfect information in the home health care industry. I find that the 5-star ratings had no discernible effect on consumer choice. To establish this, I show that one more half-star (i) did not increase market shares for home health agencies, (ii) did not produce heterogeneous effects over the star rating distribution, over time, or for patients with more opportunities to search for information or more home health options to consider, and (iii) may have only modestly increased patient selection behaviors by home health agencies.
In prior literature, Schwartz et al. (2022) examined the introduction of the star ratings program and found that initial implementation was associated with a modest one-percentage-point increase in the market shares of highly rated agencies, for both the quality and the patient experience star ratings. This study adds to that literature by focusing on the market share effects of an agency obtaining a higher quality star rating, to understand the ongoing effectiveness of the program. The two studies are not directly comparable, because this study examines the effects of a higher rating during the program rather than the effects of its introduction; even so, the formal results were consistent with the descriptive Google Trends data, which likewise suggested that awareness of the program was highest at its start. The lack of discernible effects on consumer choice from an agency obtaining a higher star rating throughout the program suggests that the clinical star ratings likely did little to increase competition on quality in the home health care sector.
One may question why the star ratings did not have a larger effect on consumer choice, particularly since costs are not a competing factor and larger effects have been observed for skilled nursing facilities. General awareness of report cards for home health may be low, especially compared to skilled nursing facilities (Baier et al., 2015). A recent study of hospital discharge planning practices for home health care indicates that a majority of discharge planners do not incorporate star ratings when compiling home health care options for patients, and most patients are not given accompanying quality information when asked to select a home health agency (Li et al., 2023). It is also possible that the star ratings do not capture dimensions of quality important to consumers. Even for the included measures, the correlations between the underlying measures and the overall star ratings ranged from low to moderate (Appendix Figure 12). Moreover, the majority of measures were self-reported by agencies and may be unreliable.
This study has several limitations. First, as with all regression discontinuity designs, the causal estimates are local average treatment effects. In other words, the estimated effects apply to agencies close to each star rating rounding threshold. Second, this analysis is based on data up to 2016, so the results may not extrapolate to more recent years. Google Trends data up to 2020, however, do not indicate increased searches for home health star ratings, providing some reassurance that the results were unlikely to have changed since 2016. Finally, this study’s findings are relevant to the quality star ratings. It is possible that patients respond more strongly to the patient experience star ratings, which were implemented in 2016. Schwartz et al. (2022), however, found similar effects for the introduction of the two ratings.
Ultimately, the goal of information disclosure policies is to improve patient choice and increase provider accountability. Rather than targeting FFS patients, CMS might be better served if the ratings were designed for Medicare Advantage plans or referring providers instead. If private insurers respond to ratings or if referring providers are incentivized to send patients to highly rated care, then star ratings could add value even with minimal direct use by FFS Medicare patients. Even if CMS adapts the star ratings and elicits a larger demand response in the future, such as by targeting health plans or providers, any redesign must guard against unintended side effects such as providers cherry-picking patients, manipulating performance measures, or neglecting unmeasured tasks (Dranove et al., 2003; Eggleston, 2005). It remains to be seen whether report cards can be an effective tool for improving the health care system and patient welfare.
Supplementary Material
Acknowledgements:
This work was supported by the Agency for Healthcare Research and Quality [1R36HS026836].
Footnotes
Conflict of interest: I have no conflicts of interest to declare.
REFERENCES
- Avalere. (2015). Home Health Chartbook 2015: Prepared for the Alliance for Home Health Quality and Innovation. http://ahhqi.org/images/uploads/AHHQI_2015_Chartbook_FINAL_October.pdf
- Baier RR, Wysocki A, Gravenstein S, Cooper E, Mor V, & Clark M (2015). A Qualitative Study of Choosing Home Health Care After Hospitalization: The Unintended Consequences of ‘Patient Choice’ Requirements. Journal of General Internal Medicine, 30(5), 634–640. 10.1007/s11606-014-3164-7
- Brook RH, McGlynn EA, Shekelle PG, Marshall M, Leatherman S, Adams JL, Hicks J, & Klein DJ (2002). Report Cards for Health Care: Is Anyone Checking Them? RAND Corporation. 10.7249/RB4544
- Bundorf MK, Chun N, Goda GS, & Kessler DP (2009). Do markets respond to quality information? The case of fertility clinics. Journal of Health Economics, 28(3), 718–727. 10.1016/j.jhealeco.2009.01.001
- Cattaneo MD, Idrobo N, & Titiunik R (2019). A Practical Introduction to Regression Discontinuity Designs: Foundations. Cambridge Elements: Quantitative and Computational Methods for Social Science, 2. 10.1017/9781108684606
- Cattaneo MD, Keele L, Titiunik R, & Vazquez-Bare G (2016). Interpreting Regression Discontinuity Designs with Multiple Cutoffs. The Journal of Politics, 78(4), 1229–1248. 10.1086/686802
- Centers for Medicare & Medicaid Services. Home health services. Retrieved 1/29/2023 from https://www.medicare.gov/coverage/home-health-services
- Centers for Medicare & Medicaid Services. Report to Congress Medicare Home Health Study: An Investigation on Access to Care and Payment for Vulnerable Patient Populations. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HomeHealthPPS/Downloads/HH-Report-to-Congress.pdf
- Centers for Medicare & Medicaid Services. (2015a). Frequently Asked Questions (FAQs) about the Quality of Patient Care Star Ratings. https://www.cms.gov/files/document/quality-patient-care-star-rating-faqsupdated-5-20-15-sxfpdf
- Centers for Medicare & Medicaid Services. (2015b). Home Health Compare Quality of Patient Care Star Ratings. https://www.cms.gov/newsroom/fact-sheets/home-health-compare-quality-patient-care-star-ratings
- Centers for Medicare & Medicaid Services. (2015c). OASIS-C1/ICD-10 Guidance Manual. https://www.cms.gov/files/document/cy2015-home-health-archives.pdf
- Centers for Medicare & Medicaid Services. (2015d). Quality of Patient Care Star Ratings Methodology. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HomeHealthQualityInits/Downloads/Quality-of-Patient-Care-Star-Ratings-Methodology-Report-updated-5-11-15.pdf
- Centers for Medicare & Medicaid Services. (2021). National Health Expenditures: Table 14 Home Health Care Expenditures. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical
- Chen J, Mortensen K, & Bloodworth R (2014). Exploring Contextual Factors and Patient Activation: Evidence From a Nationally Representative Sample of Patients With Depression. Health Education & Behavior, 41(6), 614–624. 10.1177/1090198114531781
- Chen V (2018). Paying to stay: effects of vertical integration and competition in a post-acute care setting.
- Cunningham S (2021). Causal Inference: The Mixtape. Yale University Press. https://books.google.com/books?id=PSEMEAAAQBAJ
- Darden M, & McCarthy IM (2015). The Star Treatment: Estimating the Impact of Star Ratings on Medicare Advantage Enrollments. Journal of Human Resources, 50(4), 980–1008. 10.3368/jhr.50.4.980
- Dong B, Egger PH, & Guo Y (2020). Is poverty the mother of crime? Evidence from homicide rates in China. PloS one, 15(5), e0233034. 10.1371/journal.pone.0233034
- Dranove D, Kessler D, McClellan M, & Satterthwaite M (2003). Is More Information Better? The Effects of “Report Cards” on Health Care Providers. Journal of Political Economy, 111(3), 555–588. 10.1086/374180
- Eggleston K (2005). Multitasking and mixed systems for provider payment. Journal of Health Economics, 24(1), 211–223. 10.1016/j.jhealeco.2004.09.001
- Frandsen B (2017). Party Bias in Union Representation Elections: Testing for Manipulation in the Regression Discontinuity Design when the Running Variable is Discrete. Emerald Publishing Limited, 281–315. 10.1108/S0731-905320170000038012
- Gadbois EA, Tyler DA, & Mor V (2017). Selecting a Skilled Nursing Facility for Postacute Care: Individual and Family Perspectives. Journal of the American Geriatrics Society. 10.1111/jgs.14988
- Galewitz P (2021). With Workers in Short Supply, Seniors Often Wait Months for Home Health Care. Kaiser Health News.
- Grabowski DC, & Town RJ (2011). Does Information Matter? Competition, Quality, and the Impact of Nursing Home Report Cards. Health Services Research, 46(6pt1), 1698–1719. 10.1111/j.1475-6773.2011.01298.x
- Heckman JJ, & Robb R (1985). Alternative methods for evaluating the impact of interventions: An overview. Journal of Econometrics, 30(1), 239–267. 10.1016/0304-4076(85)90139-3
- Hibbard JH, & Peters E (2003). Supporting Informed Consumer Health Care Decisions: Data Presentation Approaches that Facilitate the Use of Information in Choice. Annual Review of Public Health, 24(1), 413–433. 10.1146/annurev.publhealth.24.100901.141005
- Jung JK, Wu B, Kim H, & Polsky D (2016). The Effect of Publicized Quality Information on Home Health Agency Choice. Medical Care Research and Review, 73(6), 703–723. 10.1177/1077558715623718
- Kolstad JT, & Chernew ME (2009). Quality and Consumer Decision Making in the Market for Health Insurance and Health Care Services. Medical Care Research and Review, 66(1_suppl), 28S–52S. 10.1177/1077558708325887
- Konetzka RT, & Perraillon MC (2016). Use Of Nursing Home Compare Website Appears Limited By Lack Of Awareness And Initial Mistrust Of The Data. Health Affairs, 35(4), 706–713. 10.1377/hlthaff.2015.1377
- Konetzka RT, Yan K, & Werner RM (2021). Two Decades of Nursing Home Compare: What Have We Learned? Medical Care Research and Review, 78(4), 295–310. 10.1177/1077558720931652
- Li J, Jeffers T, Ogunjesa B, & Raj M (2023). Hospital Discharge Planners Need More Information When Referring Patients to Home Health Care: Insights From the Coronavirus Disease 2019 Pandemic. Health Services Insights, 16, 11786329231211093. 10.1177/11786329231211093
- Medicare Payment Advisory Commission. (2020). Chapter 9: Home health care services. In Report to the Congress: Medicare Payment Policy (March 2020). https://www.medpac.gov/document/http-www-medpac-gov-docs-default-source-reports-mar20_medpac_ch9_sec-pdf/
- Medicare Payment Advisory Commission. (2021). Chapter 8: Home health care services. In Report to the Congress: Medicare Payment Policy (March 2021). https://www.medpac.gov/wp-content/uploads/2021/10/mar21_medpac_report_ch8_sec.pdf
- Murtaugh CM, Peng TR, Moore S, & Maduro GA (2008). Assessing Home Health Care Quality for Post-Acute and Chronically Ill Patients: Final Report. https://aspe.hhs.gov/reports/assessing-home-health-care-quality-post-acute-chronically-ill-patients-final-report-1
- Perraillon MC, Konetzka RT, He D, & Werner RM (2017). Consumer Response to Composite Ratings of Nursing Home Quality. American Journal of Health Economics, 1–36. 10.1162/ajhe_a_00115
- Peters E, Dieckmann N, Dixon A, Hibbard JH, & Mertz CK (2007). Less Is More in Presenting Quality Information to Consumers. Medical Care Research and Review, 64(2), 169–190. 10.1177/10775587070640020301
- Reineck LA, & Kahn JM (2013). Quality Measurement in the Affordable Care Act. A Reaffirmed Commitment to Value in Health Care. American Journal of Respiratory and Critical Care Medicine, 187(10), 1038–1039. 10.1164/rccm.201302-0404ED
- Schwartz ML, Rahman M, Thomas KS, Konetzka RT, & Mor V (2022). Consumer selection and home health agency quality and patient experience stars. Health Services Research, 57(1), 113–124. 10.1111/1475-6773.13867
- Swope C, & Brown H (2015). Inside the mind of the hospital discharge planner. https://www.advisory.com
- Werner RM, Konetzka RT, & Polsky D (2016). Changes in Consumer Demand Following Public Reporting of Summary Quality Ratings: An Evaluation in Nursing Homes. Health Services Research, 51, 1291–1309. 10.1111/1475-6773.12459
- Werner RM, Norton EC, Konetzka RT, & Polsky D (2012). Do consumers respond to publicly reported quality information? Evidence from nursing homes. Journal of Health Economics, 31(1), 50–61. 10.1016/j.jhealeco.2012.01.001