Abstract
Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS).
Data Sources. 2000–2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file.
Study Design. We calculated 30-day readmission rates using three metrics, for three disease groups: heart failure (HF), acute myocardial infarction (AMI), and pneumonia. Using each metric, we calculated the absolute change and correlation between performance; the percent of hospitals remaining in extreme deciles and level of agreement; and differences in longitudinal performance.
Principal Findings. Average hospital rates were generally higher for HF patients and under the CMS metric than for other conditions and metrics. Correlations between the ACR and CMS metrics were highest (r = 0.67–0.84). Rates calculated using the PPR and either the ACR or CMS metric were moderately correlated (r = 0.50–0.67). Between 47 and 75 percent of hospitals in an extreme decile according to one metric remained there when a different metric was used. Correlations among metrics were modest when measuring hospital longitudinal change.
Conclusions. Different approaches to computing readmissions can produce different hospital rankings and impact pay-for-performance. Careful consideration should be given to the choice of readmission metric for these applications.
Keywords: Administrative data uses, hospitals, quality of care
Readmission rates have come to the forefront as measures of interest for assessing several dimensions of health care quality. Readmissions related to infections and other complications of care may reflect overall aspects of hospital quality—while evidence is mixed, several studies have linked hospital quality deficits to higher readmission rates (Ashton et al. 1997; Benbassat and Taragin 2000). Lower readmission rates may also signal more successful care transitions, or higher quality of outpatient chronic disease care. Readmissions are costly and may also provide an indication of care inefficiency in some cases (Friedman and Basu 2004; Jencks, Williams, and Coleman 2009).
Efforts have begun to integrate readmission measures into policy frameworks. The National Quality Forum has endorsed four readmission measures for comparative reporting, including All-Cause Readmissions and condition-specific readmission measures (National Quality Forum 2011). The Patient Protection and Affordable Care Act also contains provisions to measure and reduce readmissions in an effort to improve quality and reduce costs (2010).
A number of ways of measuring readmissions have been proposed. These metrics differ along several key dimensions. First, the time period over which readmissions are measured ranges widely, with some metrics capturing readmissions within 7 days of the first admission and others extending as far as 90 days; thirty-day rates are the most common. Second, some metrics count readmissions for any cause (“all-cause readmissions”), while other metrics attempt to exclude subsequent admissions likely to be planned or unrelated to the initial admission (Halfon et al. 2006; Goldfield et al. 2008). Third, some metrics count all readmissions that occur for a given patient, while others count only whether a patient has at least one readmission in a given time period. Finally, the risk adjustment approaches employed in different metrics also vary. Each of these definitional differences has the potential to affect the conclusions drawn from readmission metrics. However, there is currently no quantitative assessment of the impact of these differences. Effective efforts to reduce readmissions and to develop fair reimbursement policies related to readmissions will require reliable and well-understood metrics, and an important step toward such a metric set is understanding how variations in metrics affect assessment of hospital performance. This study aimed to quantify the differences in measured hospital performance when applying three commonly used readmission metrics.
Study Design and Methods
Data Used
We utilized 2000–2009 California Office of Statewide Health Planning and Development Patient Discharge Data. The dataset includes information about 26.7 million unique discharges from acute-care hospitals in California. Each record contains a range of information about the hospitalization, including diagnoses, treatments, length of stay, and patient characteristics, as well as an encrypted patient identifier.
Each of the readmission metrics we studied utilizes algorithms to identify index admissions and subsequent admissions that may be counted as readmissions. For our main analyses, we limited the analyses to identified index admissions where the patient was 65 years or older and where the index admission had a principal diagnosis of acute myocardial infarction (AMI), initial episode (ICD-9-CM codes 410.xx, excluding 410.x2), heart failure (HF) (ICD-9-CM codes 402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, 428.xx), or pneumonia (ICD-9-CM codes 480.x, 481, 482.x). AMI admissions with a length of stay of 0 or 1 day were excluded as these are unlikely to be true AMI cases. Any cases where the index admission was in an obstetric or psychiatric Diagnostic Related Group (DRG) were also excluded.
To test the impact of restricting the analysis to these diagnoses, in some further analyses we broadened our patient population to all patients age 18 and older with a surgical (surgical group) or medical (medical group) DRG in the index admission.
We did not allow admissions in December 2009 to serve as index admissions as 30-day readmissions would not be captured in the dataset. We also excluded admissions that resulted in transfer to another acute care hospital, discharge against medical advice, or death. In identifying readmissions, linkages between index and subsequent admissions were made using patient identifier, date of birth and gender.
Readmission Metrics
We calculated 30-day readmission measures using each of three metrics: All-Cause Readmission (ACR), the 3M Corporation Potentially Preventable Readmissions (PPR) (3M Health Information Systems 2008), and Centers for Medicare and Medicaid Services (CMS) 30-day readmission (Centers for Medicare and Medicaid Services 2011) for each hospital and year. Thus, the unit of analysis was hospital-year. The rates were calculated for the three patient groups, AMI, HF, and pneumonia, separately. We did not calculate a readmission rate for years in which hospitals had fewer than 20 admissions in a given condition group.
The metrics we examine differ in a number of dimensions. Table 1 summarizes key differences.
Table 1.
Typically Included Discharges | Exclusions for Index Admission | Excluded Readmissions | Risk Adjustment | Chain Logic | Clinically Related Only? | |
---|---|---|---|---|---|---|
All-cause readmission (ACR) | All eligible acute care patients* | Leukemia, lymphoma, chemotherapy, left AMA, died in hospital, transferred, hospitalizations within 30 days of prior index | Trauma, malignancies, obstetrics, transplants, cardiac procedures following AMI† | Based on APR-DRG Severity of Illness Subclass, AHRQ Comorbidity Index | One readmission counted within 30 days, no further chain logic | No restriction to clinically related readmissions |
3M potentially preventable readmissions (PPR) | All eligible acute-care patients* | Leukemia, lymphoma, chemotherapy, left AMA, neonate, palliative care, HIV | Planned readmission, transplants, “catastrophic,” “error,” multiple trauma, obstetrics, specified malignancies and immunocompromised | Based on APR-DRG Severity of Illness Subclass, AHRQ Comorbidity Index | Yes, qualifying readmissions for same patient within 30 days of last admission linked to readmission “chain” | Yes, algorithm only includes readmissions determined to be clinically related to the index admission and potentially preventable |
CMS 30-day readmission | Condition-specific, AMI, HF, and pneumonia | Patient died, transferred out, left AMA, same-day discharges for AMI only, hospitalizations within 30 days of prior index | Only excludes specific procedures following AMI† | Hierarchical regression model, uses age, gender, comorbidity, history of PTCA and CABG‡ | One readmission counted within 30 days, no further chain logic | No restriction to clinically related readmissions |
*For this analysis, we limited the analysis to AMI, HF, and pneumonia patients only.
†Includes PTCA or CABG revascularization procedures with a principal diagnosis of heart failure, AMI, unstable angina, arrhythmia, and cardiac arrest.
‡We could not exclude patients with a history of PTCA and CABG due to data limitations.
All-Cause Readmissions
The ACR metric includes nearly all readmissions of patients admitted to a hospital for an index hospitalization, but it excludes cases meeting certain definitions likely to reflect planned admissions. Readmissions on the same day as the index admission are excluded, as these are likely to reflect transfers rather than true readmissions.
CMS Readmission Rate
CMS includes nearly all readmissions of patients admitted to a hospital for an index hospitalization, only making exclusions for patients with specific cardiac procedures. The CMS methodology also does not allow a hospitalization to count as both a readmission and index admission. Readmissions on the same day as the index admission are excluded.
Potentially Preventable Readmissions
PPR is an algorithm aimed at reducing the number of unrelated readmissions included. For each All Patient Refined Diagnostic Related Group (APR-DRG), the algorithm identifies ICD-9-CM principal diagnosis codes for which a readmission would be deemed “potentially preventable.” For an index visit in a given APR-DRG, only these readmissions are counted. The algorithm also employs a chain logic, which combines multiple readmissions in the same patient into one readmission event, essentially measuring whether a patient has any qualifying readmissions, not the number of readmissions in cases where patients have more than one. The development of the PPR algorithm has been previously described (3M Health Information Systems 2008).
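As an illustration, the chain logic can be sketched in a few lines (a simplified sketch only: the actual 3M algorithm also applies clinical-relatedness and exclusion rules, and the function below is hypothetical):

```python
from datetime import date

def count_readmission_chains(admit_dates, window_days=30):
    """Count readmission events after collapsing repeat admissions into
    chains: any qualifying admission within `window_days` of the last
    admission in an open chain is folded into that chain rather than
    counted as a new event. Simplified illustration of PPR-style chain
    logic; clinical-relatedness rules are not modeled."""
    if not admit_dates:
        return 0
    dates = sorted(admit_dates)
    chains = 0
    chain_end = dates[0]      # the index admission anchors the first window
    in_chain = False
    for d in dates[1:]:
        if (d - chain_end).days <= window_days:
            if not in_chain:  # first readmission opens one chained event
                chains += 1
                in_chain = True
        else:                 # outside the window: treat as a new index
            in_chain = False
        chain_end = d
    return chains

# Three admissions 19 and 16 days apart collapse into a single event,
# whereas an all-cause count would record two readmissions.
events = count_readmission_chains([date(2009, 1, 1), date(2009, 1, 20), date(2009, 2, 5)])
```

This collapsing of repeat admissions is one reason PPR rates run below the corresponding all-cause rates.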
Case Mix Adjustment
All three metrics call for risk adjustment but use different approaches. For the ACR and PPR metrics, we used the APR-DRG Risk of Mortality subclass for case mix adjustment. We also applied the AHRQ Comorbidity Index (Agency for Healthcare Quality and Research 2011). We estimated coefficients using a linear regression model and applied these to patient-level data. For the CMS metric, we applied the coefficients estimated by the CMS hierarchical model (Centers for Medicare and Medicaid Services 2011). All metrics are reported as risk-adjusted rates (observed rate/expected rate).
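In ratio form, the adjustment each metric calls for reduces to observed over expected; a minimal sketch, assuming the case mix model has already produced a patient-level expected probability of readmission (names are illustrative):

```python
def risk_adjusted_rate(readmitted, expected_probs):
    """Risk-adjusted rate = observed rate / expected rate, where the
    observed rate is the fraction of patients readmitted and the
    expected rate is the mean predicted probability from the case mix
    model (assumed precomputed here; the study's models used APR-DRG
    subclass and AHRQ comorbidity variables)."""
    observed = sum(readmitted) / len(readmitted)
    expected = sum(expected_probs) / len(expected_probs)
    return observed / expected

# Two of four patients readmitted against a mean expected risk of 0.25
# yields a ratio of 2.0, i.e., twice as many readmissions as expected.
ratio = risk_adjusted_rate([1, 0, 0, 1], [0.2, 0.2, 0.3, 0.3])
```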
Analyses
After calculating the ACR, PPR, and CMS rate for each hospital and year, we conducted a series of analyses to investigate the impact of metric choice on common uses of readmission metrics. First, we investigated whether readmission rates calculated using the three metrics produce consistent information about the ordering of hospitals. We assessed this in three ways. First, we determined the Pearson correlation coefficient between measures calculated using the three approaches. Second, we calculated the average absolute change among hospital-year results between metric pairs and expressed it as a percentage of one metric's mean rate. Third, as most attention is paid to hospitals in the extreme tails of the performance distribution, we examined the share of hospitals that moved in and out of the extreme deciles when different metrics were used. To do this, we first identified hospitals in the extreme deciles using each readmission metric, and then examined the percent of hospitals that stayed in those extreme deciles when each of the other two metrics was used. We calculated this for each permutation; the reported percentage is the average of the change from metric 1 to metric 2 and the change from metric 2 to metric 1. We also calculated the percent of hospitals that moved out two or more deciles from an extreme decile, and, using Cohen's kappa, the level of agreement between metrics in identifying hospital-years falling outside the 95 percent confidence interval for the mean readmission rate (outliers, as used in public reporting efforts). Second, we examined the impact of metric choice on observed-to-expected (O-E) ratios, similar to those used to adjust payment in reimbursement schemes. This statistic combined 3-year intervals of data, as CMS uses 3 years of data to calculate payment adjustment (Centers for Medicare and Medicaid Services 2012).
As adjustments are only made when performance is worse than expected, we calculated the difference in O-E ratio for two metrics when both ratios were above 1, or the difference between the higher ratio and 1 when only one ratio was above 1. We calculated two statistics, one examining the percentage change regardless of which metric was higher, termed the absolute change. The second statistic examines the average change in the O-E ratio, accounting for which metric is higher. For instance, if the change between PPR and ACR is −0.20, across all hospitals the PPR O-E ratio on average is 0.20 lower than the ACR ratio. Finally, we examined the impact of metric choice on longitudinal trends within the same hospital using correlation. We calculated the change in readmission rates from combined years 2000–2002 to combined years 2007–2009. We then calculated the Pearson correlation coefficient for change in performance using different metrics.
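The payment-affecting comparison described above can be sketched as follows (our reading of the rule, with a hypothetical function name; the sign convention treats the first metric's ratio as the reference):

```python
def payment_affecting_difference(oe_a, oe_b):
    """Signed difference in observed-to-expected (O-E) ratio that could
    affect payment, per the rule in the text: compare the two ratios
    when both exceed 1, compare the higher ratio with 1 when only one
    exceeds 1, and report no payment impact when neither does.
    Positive values mean metric A implies the larger adjustment."""
    if oe_a <= 1 and oe_b <= 1:
        return None               # neither metric triggers an adjustment
    if oe_a > 1 and oe_b > 1:
        return oe_a - oe_b        # both trigger: compare ratios directly
    higher = max(oe_a, oe_b)
    diff = higher - 1.0           # only one triggers: compare against 1
    return diff if higher == oe_a else -diff
```

Under this sketch, the absolute change reported in Table 3 corresponds to the magnitude of this quantity averaged over hospitals, while the directional change keeps its sign.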
Results
Our sample consisted of 3,824 hospital-year observations from 482 unique nonfederal acute care hospitals. About 6 percent were teaching hospitals, 72 percent were large hospitals, and 13 percent were small hospitals. Just over half (55 percent) were nonprofit, 25 percent were for-profit, and the remaining 20 percent were state-funded hospitals. There were an average of 321 discharges per hospital per year.
The mean hospital readmission rate (Table 2) was highest for HF patients and lowest for pneumonia patients, regardless of the metric. Rates based on the CMS approach were slightly, though not significantly, higher than rates based on the ACR approach for HF and pneumonia patients, and similar for AMI patients. For most comparisons, rates based on the PPR approach were about two-thirds of the rates calculated using the other metrics.
Table 2.
ACR Mean Hospital Rate (SD) | CMS Mean Hospital Rate (SD) | PPR Mean Hospital Rate (SD) | |
---|---|---|---|
AMI | 19.6 (6.0) | 19.5 (5.3) | 13.1 (5.1) |
HF | 22.5 (5.8) | 24.0 (5.7) | 15.6 (5.1) |
Pneumonia | 16.4 (5.4) | 18.7 (5.1) | 10.1 (4.0) |
All-medical condition | 12.3 (3.0) | N/A | 7.1 (2.1) |
All-surgical condition | 6.6 (2.1) | N/A | 3.9 (1.6) |
Correlations between Approaches
Rates calculated using the three approaches were moderately to highly correlated (Table 3). The highest correlations were between rates computed using the ACR and CMS approaches, for AMI (r = 0.76) and HF (r = 0.84) patients (Table 3). The lowest correlation (r = 0.50) was between rates computed using the ACR and PPR metrics, for pneumonia patients. The percentage change in absolute differences between metrics varied widely from 1 to 69 percent. The largest changes were between the ACR and PPR metrics and for pneumonia patients.
Table 3.
Acute Myocardial Infarction (AMI) | Heart Failure (HF) | Pneumonia | |
---|---|---|---|
PPR and CMS | |||
Pearson correlation coefficient | r = 0.56, p < .0001 | r = 0.67, p < .0001 | r = 0.62, p < .0001 |
Percentage change* | 30.1 | 34.7 | 41.9 |
Absolute change in O/E ratio affecting payment† | |||
Mean (SD) | 0.16 (0.15) | 0.11 (0.12) | 0.14 (0.14) |
Range | <0.01–1.21 | <0.01–1.09 | <0.01–1.23 |
Directional change mean (SD) | 0.04 (0.21) | 0.01 (0.16) | −0.01 (0.19) |
Excluded hospital 3-year periods (%) | 35.7 | 33.6 | 31.1 |
CMS and ACR | |||
Pearson correlation coefficient | r = 0.76, p < .0001 | r = 0.84, p < .0001 | r = 0.67, p < .0001 |
Percentage change* | 0.8 | 8.1 | 18.5 |
Absolute change in O/E ratio affecting payment† | |||
Mean (SD) | 0.11 (0.15) | 0.07 (0.08) | 0.13 (0.18) |
Range | <0.01–1.39 | <0.01–0.78 | <0.01–3.23 |
Directional change mean (SD) | −0.02 (0.19) | 0.01 (0.11) | 0.03 (0.22) |
Excluded hospital 3-year periods (%) | 41.7 | 36.2 | 31.3 |
ACR and PPR | |||
Pearson correlation coefficient | r = 0.53, p < .0001 | r = 0.61, p < .0001 | r = 0.50, p < .0001 |
Percentage change* | 60.3 | 51.9 | 69.1 |
Absolute change in O/E ratio affecting payment† | |||
Mean (SD) | 0.17 (0.17) | 0.12 (0.14) | 0.17 (0.20) |
Range | <0.01–1.19 | <0.01–1.57 | <0.01–3.21 |
Directional change mean (SD) | −0.03 (0.24) | −0.03 (0.18) | −0.01 (0.26) |
Excluded hospital 3-year periods (%) | 38.5 | 38.1 | 38.1 |
*Absolute value of the mean change between the two metrics listed, divided by the second listed metric's mean rate; for example, mean of |CMS rate − ACR rate|/mean ACR rate from Table 2.
†Absolute difference in observed-to-expected ratio of hospital 3-year performance between metrics.
Impact on Hospital Reimbursement
Depending on the metric and condition compared, 31.3–41.7 percent of hospital 3-year periods had readmissions at or below the expected level (Table 3) and thus would not have payment adjusted under either metric. Among comparisons where at least one metric would affect payment, HF rates had the lowest average absolute change across all metric comparisons (0.07–0.12). Pneumonia and AMI rates varied by comparison (0.13–0.17 and 0.11–0.17, respectively). The average difference between O-E ratios that would actually affect payment was lowest across all conditions when comparing the CMS and ACR metrics (0.07–0.13). When taking into account the direction of the change, the average differences were much smaller (0.01–0.04).
Impact on Hospitals in Extreme Deciles
Between 47 and 75 percent of hospitals ranked in an extreme decile by one metric stayed in the same decile when a different metric was applied (Table 4). Level of agreement for identifying outliers ranged from moderate to substantial (kappa range .46–.68) (Table 4). The least stability (most movement out of an extreme decile) occurred between the PPR and either the CMS or ACR metrics, with around 50–60 percent of hospitals remaining in the extreme deciles. Level of agreement for identifying outliers among only large hospitals (those with 100 or more index admissions) followed a similar pattern (data not shown). In terms of condition, the least stability was found in the highest decile for pneumonia, with only half of hospitals remaining in that decile following the application of an alternative metric.
Table 4.
Metric | Lowest Deciles | Highest Deciles | ||||
---|---|---|---|---|---|---|
PPR-CMS | CMS-ACR | ACR-PPR | PPR-CMS | CMS-ACR | ACR-PPR | |
Acute myocardial infarction | ||||||
% Remaining in extreme deciles* | 66 | 75 | 63 | 57 | 66 | 54 |
% Moving ≥2 deciles† | 25 | 15 | 29 | 30 | 20 | 33 |
Agreement – Observed rate‡ | .52 | .66 | .46 | .52 | .65 | .46 |
Agreement – Risk-adjusted rate§ | .47 | .61 | .42 | .47 | .59 | .40 |
Heart failure | ||||||
% Remaining in extreme deciles* | 62 | 74 | 58 | 57 | 70 | 54 |
% Moving ≥2 deciles† | 25 | 12 | 33 | 27 | 14 | 28 |
Agreement – Observed rate‡ | .53 | .69 | .50 | .54 | .68 | .51 |
Agreement – Risk-adjusted rate§ | .52 | .68 | .49 | .51 | .67 | .52 |
Pneumonia | ||||||
% Remaining in extreme deciles* | 58 | 64 | 55 | 53 | 57 | 47 |
% Moving ≥2 deciles† | 28 | 21 | 30 | 26 | 30 | 30 |
Agreement – Observed rate‡ | .57 | .62 | .50 | .57 | .60 | .50 |
Agreement – Risk-adjusted rate§ | .51 | .58 | .43 | .51 | .57 | .44 |
All medical | ||||||
% Remaining in extreme deciles* | N/A | N/A | 60 | N/A | N/A | 67 |
% Moving ≥2 deciles† | N/A | N/A | 20 | N/A | N/A | 16 |
All surgical | ||||||
% Remaining in extreme deciles* | N/A | N/A | 57 | N/A | N/A | 62 |
% Moving ≥2 deciles† | N/A | N/A | 25 | N/A | N/A | 18 |
*Percent of hospitals in a decile as measured by the first metric that remain in that decile when the second metric is applied, and vice versa.
†Percent of hospitals in a decile as measured by the first metric that move 2 deciles or more when the second metric is applied, and vice versa.
‡Agreement measured by the kappa statistic for the observed rate across all hospitals meeting inclusion criteria.
§Agreement measured by the kappa statistic for the risk-adjusted rate across all hospitals meeting inclusion criteria.
The all-medical group includes patients in a medical DRG during the index admission, age 18 and older; the all-surgical group includes patients in a surgical DRG during the index admission, age 18 and older.
As the chronic diseases selected may have different patterns of preventability and recurrent readmissions, which may in turn impact the rates as calculated by these metrics, we calculated the rates for all medical or all surgical patients age 18 and older for comparative purposes. This resulted in slightly more stability for the highest decile, but not for the lowest.
Of hospitals classified in one of the extreme deciles in one metric, from 14 to 33 percent moved two or more deciles after applying a different metric (Table 4). Again, the most significant movement occurred between ACR and PPR and for pneumonia cases in the highest decile. Overall, the lowest deciles tended to be more stable than the highest deciles.
Metric Impact on Longitudinal Performance
The correlation of the change in hospital performance from time period one (2000–2002) to time period two (2007–2009) using different metrics ranged from r = 0.40 to 0.77 (Figure 1). The relationship between measured change using different metrics was weakest for pneumonia and between the PPR and ACR metrics.
Discussion
Our findings demonstrate that metric choice significantly affects assessment of hospital performance. Specifically, for most metrics, between a quarter and half of hospitals in the extreme deciles changed when different metrics were applied, and an average of a quarter of hospitals moved two or more deciles. Metric choice may also impact measured longitudinal change, with only moderate correlation between some metrics. However, the impact on longitudinal change is substantially smaller than the impact on relative hospital performance, suggesting that metric choice may be less important for studies that only examine within-hospital performance over time. Because the metrics intentionally capture different events, the absolute differences quantified in this study are expected, although the correlation between performance as measured by different metrics remains only moderate.
Few studies report on both all-cause and potentially preventable readmissions. Halfon et al. reported on readmissions from Switzerland, noting that the correlation between all-cause readmissions and clearly avoidable and potentially avoidable readmissions was .42 and .56, slightly weaker than those identified in this study (Halfon et al. 2006). However, the method those authors used to categorize avoidable readmissions was different from the PPR metric in this study.
There are two potential explanations for the differences observed in this study. First, the indicators themselves differ on key factors. The primary difference between the ACR and CMS metrics is the exclusion of specific readmissions that are likely to be unrelated (e.g., trauma, obstetrics) or planned (e.g., cancer). Between the CMS and PPR metrics, the differences include the PPR's inclusion of only a subset of readmissions and its use of chain logic. Second, the readmission metrics may be inherently unreliable. However, we found similar results when replicating our analysis using only hospitals with larger numbers of index admissions, suggesting that weak reliability is not the primary reason for the differences we observed. In addition, it is important to understand when differences occur even if these differences are due to modest reliability.
The assessments included in this study mirror the actual uses of readmission metrics. Starting with discharges after October 1, 2012, CMS will use the predicted-to-observed ratio to adjust payment under the Hospital Readmissions Reduction Program (HRRP) (Centers for Medicare and Medicaid Services 2012). This is mathematically similar, but not identical, to our assessment of absolute change in the observed-to-expected ratio, suggesting that metric choice could substantially impact payment adjustment. For example, under HRRP a hospital with an average Medicare payment of $5,000 for each of its 250 heart failure patients in a given year and an observed readmission rate among those patients that is 20 percent higher than expected would potentially lose up to $250,000 in revenue associated with those admissions (Foster and Harkness 2010).1 In our analysis, we found that the average absolute difference in O-E ratio between the CMS and PPR metrics was 0.14, meaning that hospitals like that in the above example would, on average, experience a difference of $174,000 in payment adjustments for those 250 heart failure patients depending on the metric used. However, as shown by the directional change, one metric is not consistently more favorable for most hospitals. Actual differences in payment adjustment based on metric would vary based on each hospital's reimbursement, patient volume, and O-E ratio for each of these conditions.
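The revenue arithmetic in the example above can be reproduced in a few lines (a simplified sketch: actual HRRP adjustments involve payment caps and formulas not modeled here, and the rounded 0.14 difference yields a figure slightly above the $174,000 reported in the text, which presumably reflects unrounded values):

```python
def revenue_at_risk(avg_payment, n_patients, oe_ratio_difference):
    """Approximate difference in payment adjustment attributable to
    metric choice: condition revenue (average payment x volume) scaled
    by the difference in O-E ratio between two metrics. Illustrative
    only; real HRRP penalties are capped and computed differently."""
    return avg_payment * n_patients * oe_ratio_difference

# 250 HF patients at $5,000 each, with a 0.14 O-E ratio gap between
# metrics, puts roughly $175,000 of payment adjustment in play.
at_risk = revenue_at_risk(5000, 250, 0.14)
```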
These findings suggest that metric choice does not consistently result in more adjustment. While for any given hospital the choice among readmission metrics could impact how it is reimbursed under pay-for-performance programs such as HRRP, in the context of total operating expenses ($25 million in the above example) (Foster and Harkness 2010), the impact of that choice remains limited. In the above example, the $200,000 additional adjustment represents less than 1 percent of total operating expenses. We did not examine whether metric choice systematically impacts certain types of hospitals, such as tertiary care or safety net hospitals.
One of the most widely used tools for public reporting of hospital readmission rates is the Hospital Compare tool, which describes hospitals’ readmission rates as above, below, or not statistically different from the national average (U.S. Department of Health and Human Services 2012). Other public reporting efforts use a similar approach, comparing hospitals’ performance with risk-adjusted expected rates (Florida Agency for Health Care Administration 2012; Pennsylvania Health Care Cost Containment Council 2012). Kappa scores for identifying outlier hospitals were moderate (kappa = .40–.69). As such reporting does not distinguish the majority of hospitals from one another, some have advocated for more discriminatory reporting. Public reporting or pay-for-performance initiatives based on narrower groupings, such as the deciles used in this study, would be more sensitive to the choice of readmission metric.
None of the metrics is considered a gold standard, and this research does not suggest that one measure is more valid than another. On the contrary, this study shows that the metrics are likely highlighting different aspects of patient outcomes and trajectories. A majority of metrics and studies of readmissions focus on all-cause readmissions (Jha, Orav, and Epstein 2009; Joynt, Orav, and Jha 2011). However, some researchers attempt to distinguish planned from unplanned admissions (Kossovsky et al. 2000). Often, details of the readmission metrics are not reported in studies. The ACR and CMS metrics focus on a wide array of potential readmissions, which may highlight overall patient well-being. Given the low inter-rater reliability reported for ratings of preventability (Benbassat and Taragin 2000), all-cause measures may be less subjective and less vulnerable to gaming.
However, for targeting quality improvement efforts, a measure that sorts out preventable readmissions is highly desirable. In their systematic review, Benbassat and Taragin report that between 9 and 50 percent of readmissions are identified as preventable (Benbassat and Taragin 2000). The PPR has been designed to identify readmissions more likely to be preventable. Yet the evidence on the relationship between quality of care and readmissions has been mixed. While some studies have shown reduced readmissions for specific conditions following interventions (Ashton et al. 1997; Coleman et al. 2006; Jack et al. 2009) and some have shown relationships between process failures and readmissions (VanSuch et al. 2006; Balla, Malnick, and Schattner 2008), others have failed to find relationships between quality of care and readmissions (Weissman et al. 1999; Kossovsky et al. 2000; Jha, Orav, and Epstein 2009; Peikes et al. 2009). Studies that identify weak-to-moderate associations often focus on “potentially preventable” readmissions (Weissman et al. 1999; Balla, Malnick, and Schattner 2008).
A second aspect of the PPR metric is the attenuation of repeat readmissions. This may reduce the impact of end-of-life care or very ill patients. Jencks, Williams, and Coleman (2009) found that previous hospitalization prior to the index admission was a very strong predictor of readmission risk in the Medicare population; patient characteristics were shown as a principal driving factor for readmissions in Australian data (Mudge et al. 2011). Nevertheless, from an efficiency evaluation standpoint, it may be desirable to account separately for each encounter.
In the absence of a gold standard and in light of the differences identified by measures, learning to use the metrics together to create a fuller picture of readmissions has the potential to capitalize on the advantages of each measure, while creating efficient tools for quality improvement. Such applications could include tool kits for hospitals to assist in identifying potential high-yield targets based on their performance on multiple metrics, or a readmissions composite score. However they are applied, the quantitative differences in the results from different readmission metrics suggest a need for careful consideration of potential tradeoffs in any choices among these metrics for particular applications.
Our study is limited by several factors. First, we did not have access to mortality data. Patients who die within the 30-day period following hospitalization cannot be readmitted; readmission risk and mortality risk are not independent (Jencks, Williams, and Coleman 2009; Gorodeski, Starling, and Blackstone 2010). Studies have shown 30-day mortality rates of 11 percent for patients hospitalized for pneumonia or HF and about 16 percent for AMI (Krumholz et al. 2006; Bueno et al. 2010; Lindenauer et al. 2010). As repeated admissions increase near the end of life and PPR does not count repeated readmissions, the lack of mortality data may impact the metrics disproportionately. However, many readmission metrics do not incorporate 30-day mortality.
For the CMS metric, we implemented case mix adjustment based on the coefficients as specified by CMS. This assumes that the relative risk of the current study sample mirrors that of the CMS sample. As CMS applies the algorithm to national data, this is likely a reasonable assumption. However, the selected case mix adjustment for ACR and PPR is based on a different model and we estimated that model based only on California data. In addition, we were unable to apply coefficients for variables obtained from outpatient data. Thus, case mix adjustment may contribute to some of the differences observed.
The findings of this study suggest that metric choice is an important factor in the use of readmissions across a variety of applications, but that the different metrics do capture similar underlying signal. Further research to better understand the differences in the aspects of quality care captured by the different metrics has potential to improve quality measurement.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: This work was supported by the Gordon and Betty Moore Foundation (Grant No. 1983) and the National Institute on Aging (Grant No. AG017253). The authors take sole responsibility for the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, and approval of the manuscript. No authors have any conflicts of interest to disclose.
Disclosures: None.
Disclaimers: None.
Note
Actual lost revenue due to payment adjustment under HRRP may be limited by caps on the maximum adjustments that can be made in a given year. These maximums are based on total Medicare payments for all admissions.
Supporting Information
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
References
- Agency for Healthcare Research and Quality. 2011. “Comorbidity Software, Version 3.6” [accessed on February 15, 2011]. Available at http://www.hcup-us.ahrq.gov/toolssoftware/comorbidity/comorbidity.jsp.
- Ashton CM, Del Junco DJ, Souchek J, Wray NP, Mansyur CL. “The Association between the Quality of Inpatient Care and Early Readmission: A Meta-Analysis of the Evidence”. Medical Care. 1997;35(10):1044–59. doi: 10.1097/00005650-199710000-00006.
- Balla U, Malnick S, Schattner A. “Early Readmissions to the Department of Medicine as a Screening Tool for Monitoring Quality of Care Problems”. Medicine. 2008;87(5):294–300. doi: 10.1097/MD.0b013e3181886f93.
- Benbassat J, Taragin M. “Hospital Readmissions as a Measure of Quality of Health Care: Advantages and Limitations”. Archives of Internal Medicine. 2000;160(8):1074–81. doi: 10.1001/archinte.160.8.1074.
- Bueno H, Ross JS, Wang Y, Chen J, Vidan MT, Normand SL, Curtis JP, Drye EE, Lichtman JH, Keenan PS, Kosiborod M, Krumholz HM. “Trends in Length of Stay and Short-Term Outcomes among Medicare Patients Hospitalized for Heart Failure, 1993–2006”. Journal of the American Medical Association. 2010;303(21):2141–7. doi: 10.1001/jama.2010.748.
- Centers for Medicare and Medicaid Services. 2011. “Hospital 30-Day Readmissions Measures” [accessed on March 11, 2011]. Available at http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier3&cid=1219069855841.
- Centers for Medicare and Medicaid Services. 2012. “Hospital Readmissions Reduction Program” [accessed on July 23, 2012]. Available at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html.
- Coleman EA, Parry C, Chalmers S, Min SJ. “The Care Transitions Intervention: Results of a Randomized Controlled Trial”. Archives of Internal Medicine. 2006;166(17):1822–8. doi: 10.1001/archinte.166.17.1822.
- Florida Agency for Health Care Administration. 2012. “Florida Health Finder” [accessed on July 23, 2012]. Available at http://www.floridahealthfinder.gov/index.html.
- Foster D, Harkness G. Health Care Reform: Pending Changes to Reimbursements for 30-day Readmissions. Ann Arbor, MI: Thomson Reuters; 2010.
- Friedman B, Basu J. “The Rate and Cost of Hospital Readmissions for Preventable Conditions”. Medical Care Research and Review. 2004;61(2):225–40. doi: 10.1177/1077558704263799.
- Goldfield NI, McCullough EC, Hughes JS, Tang AM, Eastman B, Rawlins LK, Averill RF. “Identifying Potentially Preventable Readmissions”. Health Care Financing Review. 2008;30(1):75–91.
- Gorodeski EZ, Starling RC, Blackstone EH. “Are All Readmissions Bad Readmissions?”. New England Journal of Medicine. 2010;363(3):297–8. doi: 10.1056/NEJMc1001882.
- Halfon P, Eggli Y, Pretre-Rohrbach I, Meylan D, Marazzi A, Burnand B. “Validation of the Potentially Avoidable Hospital Readmission Rate as a Routine Indicator of the Quality of Hospital Care”. Medical Care. 2006;44(11):972–81. doi: 10.1097/01.mlr.0000228002.43688.c2.
- Jack BW, Chetty VK, Anthony D, Greenwald JL, Sanchez GM, Johnson AE, Forsythe SR, O'Donnell JK, Paasche-Orlow MK, Manasseh C, Martin S, Culpepper L. “A Reengineered Hospital Discharge Program to Decrease Rehospitalization: A Randomized Trial”. Annals of Internal Medicine. 2009;150(3):178–87. doi: 10.7326/0003-4819-150-3-200902030-00007.
- Jencks SF, Williams MV, Coleman EA. “Rehospitalizations among Patients in the Medicare Fee-for-Service Program”. New England Journal of Medicine. 2009;360(14):1418–28. doi: 10.1056/NEJMsa0803563.
- Jha AK, Orav EJ, Epstein AM. “Public Reporting of Discharge Planning and Rates of Readmissions”. New England Journal of Medicine. 2009;361(27):2637–45. doi: 10.1056/NEJMsa0904859.
- Joynt KE, Orav EJ, Jha AK. “Thirty-day Readmission Rates for Medicare Beneficiaries by Race and Site of Care”. Journal of the American Medical Association. 2011;305(7):675–81. doi: 10.1001/jama.2011.123.
- Kossovsky MP, Sarasin FP, Perneger TV, Chopard P, Sigaud P, Gaspoz J. “Unplanned Readmissions of Patients with Congestive Heart Failure: Do They Reflect In-Hospital Quality of Care or Patient Characteristics?”. American Journal of Medicine. 2000;109(5):386–90. doi: 10.1016/s0002-9343(00)00489-7.
- Krumholz HM, Wang Y, Mattera JA, Han LF, Ingber MJ, Roman S, Normand SL. “An Administrative Claims Model Suitable for Profiling Hospital Performance Based on 30-day Mortality Rates among Patients with an Acute Myocardial Infarction”. Circulation. 2006;113(13):1683–92. doi: 10.1161/CIRCULATIONAHA.105.611186.
- Lindenauer PK, Bernheim SM, Grady JN, Lin Z, Wang Y, Merrill AR, Han LF, Rapp MT, Drye EE, Normand SL, Krumholz HM. “The Performance of US Hospitals as Reflected in Risk-Standardized 30-day Mortality and Readmission Rates for Medicare Beneficiaries with Pneumonia”. Journal of Hospital Medicine. 2010;5(6):E12–8. doi: 10.1002/jhm.822.
- 3M Health Information Systems. 2008. “Potentially Preventable Readmissions Classification System Methodology Overview” [accessed on May 21, 2012]. Available at http://solutions.3m.com/wps/portal/3M/en_US/3M_Health_Information_Systems/HIS/Products/PPR/
- Mudge AM, Kasper K, Clair A, Redfern H, Bell JJ, Barras MA, Dip G, Pachana NA. “Recurrent Readmissions in Medical Patients: A Prospective Study”. Journal of Hospital Medicine. 2011;6(2):61–7. doi: 10.1002/jhm.811.
- National Quality Forum. 2011. “NQF Endorsed Standards” [accessed on February 24, 2011]. Available at http://www.qualityforum.org/Measures_List.aspx.
- Patient Protection and Affordable Care Act. Public Law No. 111-148, §2702, 124 Stat. 318–319.
- Peikes D, Chen A, Schore J, Brown R. “Effects of Care Coordination on Hospitalization, Quality of Care, and Health Care Expenditures Among Medicare Beneficiaries: 15 Randomized Trials”. Journal of the American Medical Association. 2009;301(6):603–18. doi: 10.1001/jama.2009.126.
- Pennsylvania Health Care Cost Containment Council. 2012. “Interactive Hospital Performance Report” [accessed on July 23, 2012]. Available at http://www.phc4.org/hpr/
- U.S. Department of Health and Human Services. 2012. “Hospital Compare” [accessed on July 23, 2012]. Available at http://www.hospitalcompare.hhs.gov/
- VanSuch M, Naessens JM, Stroebel RJ, Huddleston JM, Williams AR. “Effect of Discharge Instructions on Readmission of Hospitalised Patients With Heart Failure: Do All of the Joint Commission on Accreditation of Healthcare Organizations Heart Failure Core Measures Reflect Better Care?”. Quality and Safety in Health Care. 2006;15(6):414–7. doi: 10.1136/qshc.2005.017640.
- Weissman JS, Ayanian JZ, Chasan-Taber S, Sherwood MJ, Roth C, Epstein AM. “Hospital Readmissions and Quality of Care”. Medical Care. 1999;37(5):490–501. doi: 10.1097/00005650-199905000-00008.