Abstract
Background
Hospital-level 30-day risk-standardized mortality and readmission rates are publicly reported for Medicare patients admitted with acute myocardial infarction (AMI), heart failure (HF) and pneumonia, but the correlations among mortality rates and among readmission rates within US hospitals for these conditions are unknown. Correlation among measures within the same hospital would suggest there are common hospital-wide quality factors.
Methods
We designed a cross-sectional study of US hospital 30-day risk-standardized mortality and readmission rates for Medicare fee-for-service beneficiaries from July 2007 to June 2009. We assessed the correlation between pairs of risk-standardized mortality rates and between pairs of risk-standardized readmission rates for AMI, HF, and pneumonia.
Results
The mortality cohort included 4,559 hospitals, and the readmission cohort included 4,468 hospitals. Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27-0.41, p<.0001 for all correlations). Every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32-0.47, p<.0001 for all correlations). For each condition pair and outcome, one third or more of hospitals were in the same quartile of performance. Correlations were highest among hospitals that were large, non-profit, urban, and/or members of the Council of Teaching Hospitals. For any given condition pair, the correlation between readmission rates was significantly higher than the correlation between mortality rates (p<0.01 for all pairs).
Conclusion
Risk-standardized readmission rates are moderately correlated with each other within hospitals, as are risk-standardized mortality rates. This suggests that there may be common hospital-wide factors affecting hospital outcomes.
Introduction
The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital-specific 30-day risk-standardized mortality and readmission rates for Medicare fee-for-service patients admitted with acute myocardial infarction (AMI), heart failure (HF) and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2,3
Quality-of-care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition-specific process measures and either mortality or readmission rates for those conditions.4-6 Mortality and readmission rates may instead reflect broader hospital-wide or specialty-wide structure, culture and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9,10 information technology,11 patient volume,12-14 academic status12 and other institution-wide factors.12 There is now a strong policy push towards developing hospital-wide (all-condition) measures, beginning with readmission.15
It is not clear how much of the quality of care for a given condition is attributable to hospital-wide influences that affect all conditions rather than disease-specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital-wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease-specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease-specific care, such as early identification of the relevant patient population or standardizing disease-specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.
One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital-wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16,17 while other studies have found little correlation among mortality rates for various medical conditions.18,19 It is also possible that correlation varies according to hospital characteristics – for example, smaller or non-teaching hospitals might provide more homogeneous care than larger institutions. No studies have examined these correlations using publicly reported estimates of risk-standardized mortality or readmission rates. In this study we use the publicly reported measures of 30-day mortality and 30-day readmission for AMI, HF and pneumonia to examine whether and to what degree mortality rates track together within US hospitals and, separately, to what degree readmission rates track together within US hospitals.
Methods
Data sources
CMS calculates risk-standardized mortality and readmission rates, along with patient volume, for all acute care non-federal hospitals with one or more eligible cases of AMI, HF and pneumonia annually, based on fee-for-service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for these conditions over the three-year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20,21 In addition, we used the 2008 American Hospital Association (AHA) Annual Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not-for-profit, for-profit), teaching status (member of the Council of Teaching Hospitals, other teaching hospital, non-teaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core based statistical area (division [subarea of an area with an urban center of more than 2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of 10,000 to 50,000 people], and rural [urban center of fewer than 10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above the mean private hospital Medicaid caseload in their state, using 2007 AHA Annual Survey data.
Study sample
This study includes two hospital cohorts, one for mortality and one for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk-standardized mortality rates for all three conditions (AMI, HF and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk-standardized readmission rates for all three of these conditions.
Risk-standardized measures
The measures include all FFS Medicare patients who are ≥65 years old, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with one of the three qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributed to the initial admitting hospital, even if the patient is subsequently transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributed to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For the mortality measures, if a patient has multiple hospitalizations in a given year, one hospitalization is randomly selected for inclusion. For the readmission measures, admissions in which the patient died prior to discharge and admissions within 30 days of an index admission are not counted as index admissions.
Outcomes for all measures are all-cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non-acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23
The derivation and validation of the risk-standardized outcome measures have been previously reported.20,21,23-27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates and a hospital-specific random effect. The rates are calculated as the ratio of the number of “predicted” outcomes (obtained from a model applying the hospital-specific effect) to the number of “expected” outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30-day rate.
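In schematic form, the calculation for each hospital h can be written as follows (a restatement of the specification above; the symbols $P_h$, $E_h$, and $\bar{y}$ are shorthand introduced here rather than notation from the measure documentation):

$$ \text{risk-standardized rate}_h \;=\; \frac{P_h}{E_h} \times \bar{y} $$

where $P_h$ is the number of "predicted" outcomes for hospital h (from the model applying the hospital-specific effect), $E_h$ is the number of "expected" outcomes (from the model applying the average effect among hospitals), and $\bar{y}$ is the unadjusted overall 30-day rate.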
Statistical analysis
We examined patterns and distributions of hospital volume, risk-standardized mortality rates, and risk-standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk-standardized mortality rates for AMI, HF and pneumonia, we calculated Pearson correlation coefficients for the three pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia) and tested whether each was significantly different from 0. We also conducted a factor analysis using the principal component method, retaining factors with a minimum eigenvalue of one, to determine whether there was a single common factor underlying mortality performance for the three conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of the risk-standardized rate, and compared quartiles of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance for both conditions, the percent of hospitals in either the top or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. We then examined Pearson correlation coefficients in subgroups of hospitals defined by size, ownership, teaching status, cardiac procedure capability, statistical area and safety net status. To determine whether these correlations differed by hospital characteristics, we tested whether the Pearson correlation coefficients differed between any two subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk-standardized readmission rates.
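To make the sequence of analytic steps concrete, the following is a minimal sketch in Python rather than the SAS code actually used; the data frame `hosp`, its columns (`rsmr_ami`, `rsmr_hf`, `rsmr_pn`), and the simulated values (loosely matched to the means and standard deviations in Table 2) are hypothetical placeholders.

```python
# Illustrative outline only (the study used SAS 9.2); `hosp` and its columns are hypothetical.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats

def pairwise_pearson(df, cols):
    """Pearson r and p value for each pair of condition-specific rates."""
    return {(a, b): stats.pearsonr(df[a], df[b]) for a, b in combinations(cols, 2)}

def fisher_z_compare(r1, n1, r2, n2):
    """Fisher (1921) test of whether correlations from two independent subgroups differ."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * stats.norm.sf(abs(z))  # two-tailed p value

# Simulated stand-in for the hospital-level rates (placeholder values only).
rng = np.random.default_rng(0)
hosp = pd.DataFrame(
    rng.multivariate_normal(
        mean=[15.7, 10.9, 11.5],
        cov=[[3.2, 1.0, 0.9], [1.0, 2.6, 1.2], [0.9, 1.2, 3.6]],
        size=4559,
    ),
    columns=["rsmr_ami", "rsmr_hf", "rsmr_pn"],
)

# Three pairwise correlations among the mortality rates.
mortality_correlations = pairwise_pearson(hosp, ["rsmr_ami", "rsmr_hf", "rsmr_pn"])

# Factor-retention check: count eigenvalues of the correlation matrix above one.
eigenvalues = np.linalg.eigvalsh(hosp.corr().to_numpy())
n_factors = int((eigenvalues > 1).sum())

# Compare one pair's correlation between two independent subgroups
# (an arbitrary split standing in for a hospital characteristic).
big, small = hosp.iloc[:157], hosp.iloc[157:]
r_big, _ = stats.pearsonr(big["rsmr_hf"], big["rsmr_pn"])
r_small, _ = stats.pearsonr(small["rsmr_hf"], small["rsmr_pn"])
z_stat, p_value = fisher_z_compare(r_big, len(big), r_small, len(small))
```

The same routines would be applied to the risk-standardized readmission rates, and the Fisher comparison repeated for each pair of subgroups within a hospital characteristic.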
To determine whether the correlations between mortality rates differed significantly from the correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses we included only hospitals reporting both mortality and readmission rates for the conditions in the pair. We used the same method to compare mortality and readmission correlations within subgroups defined by hospital characteristics.
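For illustration only, a sketch of this comparison follows, assuming the test of Raghunathan, Rosenthal, and Rubin is computed from Fisher-transformed correlations with a Pearson-Filon-type term for the dependence between the two non-overlapping correlations; this reflects our reading of the cited method rather than the study's own code, and all variable names are hypothetical.

```python
# Illustrative sketch (not the study's SAS code) of comparing two dependent,
# non-overlapping correlations, following our reading of Raghunathan,
# Rosenthal, and Rubin (1996): e.g., corr(AMI mortality, HF mortality) vs.
# corr(AMI readmission, HF readmission) measured on the same hospitals.
import numpy as np
from scipy import stats

def compare_nonoverlapping_correlations(x1, x2, y1, y2):
    """Test whether corr(x1, x2) differs from corr(y1, y2) when all four
    variables are measured on the same n units (here, hospitals)."""
    n = len(x1)
    r12 = np.corrcoef(x1, x2)[0, 1]  # e.g., AMI vs. HF mortality
    r34 = np.corrcoef(y1, y2)[0, 1]  # e.g., AMI vs. HF readmission
    r13 = np.corrcoef(x1, y1)[0, 1]
    r14 = np.corrcoef(x1, y2)[0, 1]
    r23 = np.corrcoef(x2, y1)[0, 1]
    r24 = np.corrcoef(x2, y2)[0, 1]
    # Pearson-Filon-type term approximating the covariance between r12 and r34.
    k = (0.5 * r12 * r34 * (r13**2 + r14**2 + r23**2 + r24**2)
         + r13 * r24 + r14 * r23
         - (r12 * r13 * r14 + r12 * r23 * r24
            + r34 * r13 * r23 + r34 * r14 * r24))
    c = k / ((1 - r12**2) * (1 - r34**2))  # approximate corr(r12, r34)
    z12, z34 = np.arctanh(r12), np.arctanh(r34)
    z = np.sqrt((n - 3) / 2.0) * (z12 - z34) / np.sqrt(1 - c)
    p = 2 * stats.norm.sf(abs(z))  # two-tailed p value
    return z, p

# For the AMI-HF pair, x1/x2 would hold the AMI and HF risk-standardized
# mortality rates and y1/y2 the corresponding readmission rates, restricted
# to hospitals reporting all four rates.
```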
All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a p-value < 0.05 to be statistically significant, and all statistical tests were two-tailed.
Results
The mortality cohort included 4,559 hospitals, and the readmission cohort included 4,468 hospitals. Most hospitals were small, non-teaching, and lacked advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).
Table 1.
| Description | Mortality Measures Hospitals (N=4,559), N (%)* | Readmission Measures Hospitals (N=4,468), N (%)* |
|---|---|---|
| Number of beds | | |
| > 600 | 157 (3.4) | 156 (3.5) |
| 300 to 600 | 628 (13.8) | 626 (14.0) |
| < 300 | 3,588 (78.7) | 3,505 (78.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Mean (SD) | 173.24 (189.52) | 175.23 (190.00) |
| Ownership | | |
| Not-for-profit | 2,650 (58.1) | 2,619 (58.6) |
| For profit | 672 (14.7) | 663 (14.8) |
| Government | 1,051 (23.1) | 1,005 (22.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Teaching status | | |
| COTH | 277 (6.1) | 276 (6.2) |
| Teaching | 505 (11.1) | 503 (11.3) |
| Non-teaching | 3,591 (78.8) | 3,508 (78.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Cardiac facility type | | |
| CABG | 1,471 (32.3) | 1,467 (32.8) |
| Cath lab | 578 (12.7) | 578 (12.9) |
| Neither | 2,324 (51.0) | 2,242 (50.2) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Core based statistical area | | |
| Division | 621 (13.6) | 618 (13.8) |
| Metro | 1,850 (40.6) | 1,835 (41.1) |
| Micro | 801 (17.6) | 788 (17.6) |
| Rural | 1,101 (24.2) | 1,046 (23.4) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Safety net status | | |
| No | 2,995 (65.7) | 2,967 (66.4) |
| Yes | 1,377 (30.2) | 1,319 (29.5) |
| Unknown | 187 (4.1) | 182 (4.1) |

* Unless otherwise specified.
SD: standard deviation; COTH: member of Council of Teaching Hospitals; CABG: coronary artery bypass surgery capability; cath lab: cardiac catheterization capability
For the mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13–171), and the largest was for pneumonia (178; IQR, 87–336). The same pattern held for the readmission measures (AMI: median 33, IQR 9–150; pneumonia: median 191, IQR 95–352.5). Among the mortality measures, AMI had the highest rate and HF the lowest; among the readmission measures, HF had the highest rate and pneumonia the lowest (Table 2).
Table 2.
| Description | Mortality: AMI | Mortality: HF | Mortality: PN | Readmission: AMI | Readmission: HF | Readmission: PN |
|---|---|---|---|---|---|---|
| Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708 |
| Hospital volume | | | | | | |
| Mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5) |
| Median (IQR) | 48 (13, 171) | 142 (56, 337) | 178 (87, 336) | 33 (9, 150) | 172.5 (68, 407) | 191 (95, 352.5) |
| Range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359 |
| 30-day risk-standardized rate* | | | | | | |
| Mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7) |
| Median (IQR) | 15.7 (14.5, 16.8) | 10.8 (9.9, 11.9) | 11.3 (10.2, 12.6) | 19.9 (18.9, 20.8) | 24.7 (23.4, 26.1) | 18.4 (17.3, 19.5) |
| Range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7 |

Mortality measures: N=4,559 hospitals; readmission measures: N=4,468 hospitals.
AMI: acute myocardial infarction; HF: heart failure; PN: pneumonia; SD: standard deviation; IQR: interquartile range
* Weighted by hospital volume
Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27-0.41; p<.0001 for all three correlations). For example, the correlation between risk-standardized mortality rates for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32-0.47; p<.0001 for all three correlations). Overall, the lowest correlation was between risk-standardized mortality rates for AMI and pneumonia (r=0.27), and the highest correlation was between risk-standardized readmission rates for HF and pneumonia (r=0.47) (Table 3).
Table 3.
| Description | Mortality N | Mortality r: AMI and HF | Mortality r: AMI and PN | Mortality r: HF and PN | Readmission N | Readmission r: AMI and HF | Readmission r: AMI and PN | Readmission r: HF and PN |
|---|---|---|---|---|---|---|---|---|
| All | 4,559 | 0.30 | 0.27 | 0.41 | 4,468 | 0.38 | 0.32 | 0.47 |
| Hospitals with ≥ 25 patients | 2,872 | 0.33 | 0.30 | 0.44 | 2,467 | 0.44 | 0.38 | 0.51 |
| Number of beds (p)* | | 0.15 | 0.005 | 0.0009 | | <.0001 | <.0001 | <.0001 |
| > 600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66 |
| 300 to 600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58 |
| < 300 | 3,588 | 0.27 | 0.23 | 0.37 | 3,505 | 0.30 | 0.26 | 0.44 |
| Ownership (p)* | | 0.021 | 0.05 | 0.39 | | 0.0004 | 0.0004 | 0.003 |
| Not-for-profit | 2,650 | 0.32 | 0.28 | 0.42 | 2,619 | 0.43 | 0.36 | 0.50 |
| For profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40 |
| Government | 1,051 | 0.24 | 0.22 | 0.39 | 1,005 | 0.32 | 0.29 | 0.45 |
| Teaching status (p)* | | 0.11 | 0.08 | 0.0012 | | <.0001 | 0.0002 | 0.0003 |
| COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59 |
| Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56 |
| Non-teaching | 3,591 | 0.29 | 0.24 | 0.39 | 3,508 | 0.32 | 0.26 | 0.44 |
| Cardiac facility type (p)* | | 0.022 | 0.006 | <.0001 | | <.0001 | 0.0006 | 0.004 |
| CABG | 1,471 | 0.33 | 0.29 | 0.47 | 1,467 | 0.48 | 0.37 | 0.52 |
| Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47 |
| Neither | 2,324 | 0.26 | 0.21 | 0.36 | 2,242 | 0.28 | 0.27 | 0.44 |
| Core based statistical area (p)* | | 0.0001 | <.0001 | 0.002 | | <.0001 | <.0001 | <.0001 |
| Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56 |
| Metro | 1,850 | 0.26 | 0.26 | 0.42 | 1,835 | 0.38 | 0.30 | 0.40 |
| Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47 |
| Rural | 1,101 | 0.21 | 0.13 | 0.32 | 1,046 | 0.22 | 0.21 | 0.44 |
| Safety net status (p)* | | 0.001 | 0.027 | 0.68 | | 0.029 | 0.037 | 0.28 |
| No | 2,995 | 0.33 | 0.28 | 0.41 | 2,967 | 0.40 | 0.33 | 0.48 |
| Yes | 1,377 | 0.23 | 0.21 | 0.40 | 1,319 | 0.34 | 0.30 | 0.45 |

AMI: acute myocardial infarction; HF: heart failure; PN: pneumonia; N: number of hospitals; r: Pearson correlation coefficient; COTH: member of Council of Teaching Hospitals; CABG: coronary artery bypass surgery capability; Cath lab: cardiac catheterization lab capability
* Values shown on the rows naming each hospital characteristic are the minimum p values of the pairwise comparisons of correlations between subgroups within that characteristic.
The factor analyses for the mortality measures and for the readmission measures each yielded only one factor with an eigenvalue greater than one. In each factor analysis, this single common factor accounted for more than half of the total variance based on the cumulative eigenvalue (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the risk-standardized mortality rates for AMI, HF and pneumonia were high (0.68, 0.78 and 0.76, respectively); the same was true of the risk-standardized readmission rates for the readmission measures (0.72, 0.81 and 0.78, respectively).
For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile for both conditions if they were in the top or bottom quartile than if they were in the middle quartiles. Less than 10% of hospitals were in the top quartile for one condition of a pair and in the bottom quartile for the other. Weighted kappa scores for agreement on quartile of performance between condition pairs ranged from 0.16 to 0.27 and were highest for HF and pneumonia for both mortality and readmission rates.
Table 4.
| Condition pair | Same quartile (any) | Same quartile (Q1 or Q4) | Q1 in one and Q4 in the other | Weighted kappa | Spearman correlation |
|---|---|---|---|---|---|
| Mortality | | | | | |
| AMI and HF | 34.8% | 20.2% | 7.9% | 0.19 | 0.25 |
| AMI and PN | 32.7% | 18.8% | 8.2% | 0.16 | 0.22 |
| HF and PN | 35.9% | 21.8% | 5.0% | 0.26 | 0.36 |
| Readmission | | | | | |
| AMI and HF | 36.6% | 21.0% | 7.5% | 0.22 | 0.28 |
| AMI and PN | 34.0% | 19.6% | 8.1% | 0.19 | 0.24 |
| HF and PN | 37.1% | 22.6% | 5.4% | 0.27 | 0.37 |
In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r=0.51, p=.0009) and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r=0.67, p<.0001). Across both measures and all three condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital’s Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not-for-profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non-safety net hospitals (Table 3).
For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (p<.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate sized hospitals, among non-profit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non-safety net hospitals (Table 5).
Table 5.
| Description | AMI and HF: N | MC | RC | P | AMI and PN: N | MC | RC | P | HF and PN: N | MC | RC | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| All | 4,457 | 0.31 | 0.38 | <.0001 | 4,459 | 0.27 | 0.32 | 0.007 | 4,731 | 0.41 | 0.46 | 0.0004 |
| Hospitals with ≥ 25 patients | 2,472 | 0.33 | 0.44 | <0.001 | 2,463 | 0.31 | 0.38 | 0.01 | 4,104 | 0.42 | 0.47 | 0.001 |
| Number of beds | | | | | | | | | | | | |
| > 600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042 |
| 300 to 600 | 626 | 0.29 | 0.54 | <.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033 |
| < 300 | 3,494 | 0.28 | 0.30 | 0.21 | 3,496 | 0.23 | 0.26 | 0.17 | 3,733 | 0.37 | 0.43 | 0.003 |
| Ownership | | | | | | | | | | | | |
| Not-for-profit | 2,614 | 0.32 | 0.43 | <.0001 | 2,617 | 0.28 | 0.36 | 0.003 | 2,697 | 0.42 | 0.50 | 0.0003 |
| For profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99 |
| Government | 1,000 | 0.25 | 0.32 | 0.09 | 1,000 | 0.22 | 0.29 | 0.09 | 1,127 | 0.39 | 0.43 | 0.21 |
| Teaching status | | | | | | | | | | | | |
| COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41 |
| Teaching | 504 | 0.22 | 0.52 | <.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005 |
| Non-teaching | 3,496 | 0.29 | 0.32 | 0.18 | 3,497 | 0.24 | 0.26 | 0.46 | 3,737 | 0.39 | 0.43 | 0.016 |
| Cardiac facility type | | | | | | | | | | | | |
| CABG | 1,465 | 0.33 | 0.48 | <.0001 | 1,467 | 0.30 | 0.37 | 0.018 | 1,483 | 0.47 | 0.51 | 0.103 |
| Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022 |
| Neither | 2,234 | 0.26 | 0.28 | 0.48 | 2,234 | 0.21 | 0.27 | 0.037 | 2,461 | 0.36 | 0.44 | 0.002 |
| Core based statistical area | | | | | | | | | | | | |
| Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001 |
| Metro | 1,833 | 0.26 | 0.38 | <.0001 | 1,832 | 0.26 | 0.30 | 0.21 | 1,896 | 0.42 | 0.40 | 0.63 |
| Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003 |
| Rural | 1,038 | 0.21 | 0.22 | 0.83 | 1,039 | 0.13 | 0.21 | 0.056 | 1,177 | 0.32 | 0.43 | 0.002 |
| Safety net status | | | | | | | | | | | | |
| No | 2,961 | 0.33 | 0.40 | 0.001 | 2,963 | 0.28 | 0.33 | 0.036 | 3,062 | 0.41 | 0.48 | 0.001 |
| Yes | 1,314 | 0.23 | 0.34 | 0.003 | 1,314 | 0.22 | 0.30 | 0.015 | 1,460 | 0.40 | 0.45 | 0.14 |
AMI: Acute myocardial infarction; HF: heart failure; PN: pneumonia; MC: mortality correlation; RC: readmission correlation; r: Pearson correlation coefficient; COTH: Member of Council of Teaching Hospitals; CABG: Coronary artery bypass surgery capability; Cath lab: cardiac catheterization lab capability
Discussion
In this study we found that risk-standardized mortality rates for three common medical conditions were moderately correlated within institutions, as were risk-standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked most closely together in large, urban and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for another.
Our findings are consistent with the hypothesis that 30-day risk-standardized mortality and 30-day risk-standardized readmission rates in part capture broad aspects of hospital quality that transcend condition-specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital-wide environment, structure and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation and discharge communication with patients and outpatient clinicians, are typically hospital-wide processes.
Our study differs from earlier studies of medical conditions in that we found higher correlations.18,19 There are several possible explanations for this difference. First, during the 15-25 years since those studies were performed, care for these conditions has evolved substantially, and standardized protocols are now available for all three diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing the correlation of performance among conditions. In addition, more technologies and systems that span care for multiple conditions, such as electronic medical records and quality committees, are available now than in previous generations. Second, one of the earlier studies used less robust risk adjustment,18 and neither used the same risk-standardization methodology. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes that we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found for these medical conditions.16,17
Accountable care organizations will be assessed using an all-condition readmission measure,31 several states track all-condition readmission rates,32-34 and several countries measure all-condition mortality.35 An all-condition measure for quality assessment first requires that there be a hospital-wide quality signal above and beyond disease-specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across three common conditions. There are other considerations, however, in developing all-condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35
We were surprised to find that risk-standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals is more homogeneous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all three conditions because estimates for these hospitals are more precise. Consequently, we have greater confidence in the results for larger-volume hospitals and suspect that a similar quality signal may be present, but more difficult to detect statistically, in smaller hospitals. Overall, correlations were higher when we restricted the sample to hospitals with at least 25 cases, the threshold used for public reporting. It is also possible that the finding is real, given that large-volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14,36
The kappa scores comparing quartile of national performance for pairs of conditions were only in the “fair” range. There are several possible explanations for this fact: 1) outcomes for these three conditions are not measuring the same constructs, 2) they are all measuring the same construct, but they are unreliable in doing so, and/or 3) hospitals have similar latent quality for all three conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.
Our study has several limitations. First, all three conditions currently publicly reported by CMS are "medical" diagnoses – although AMI patients may be cared for in distinct cardiology units and often undergo procedures – and therefore we cannot determine the degree to which correlations reflect hospital-wide quality versus medicine-wide quality. An institution may have a weak medicine department but a strong surgical department, or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk-adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter-hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted a disease-specific focus on these conditions. We do not have data from non-publicly reported conditions to test this hypothesis. Fourth, there are many small-volume hospitals in this study; their estimates for readmission and mortality are less reliable than those for large-volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.
This study lends credence to the hypothesis that 30-day risk-standardized mortality and readmission rates for individual conditions may reflect aspects of hospital-wide quality or at least medicine-wide quality, although the correlations are not large enough to conclude that hospital-wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.
Acknowledgments
Funding: Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr. Krumholz is supported by grant U01 HL105270-01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. The analyses upon which this publication is based were performed under Contract Number HHSM-500-2008-0025I Task Order T0001, entitled “Measure & Instrument Development and Support (MIDS)-Development and Re-evaluation of the CMS Hospital Outcomes and Efficiency Measures,” funded by the Centers for Medicare & Medicaid Services, an agency of the U.S. Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.
Footnotes
Disclosures: Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting.
References
- 1. U.S. Department of Health and Human Services. Hospital Compare. 2011. www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
- 2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294-300. doi:10.1097/MD.0b013e3181886f93.
- 3. Dubois RW, Rogers WH, Moxley JH 3rd, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674-1680. doi:10.1056/NEJM198712243172626.
- 4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702. doi:10.1001/jama.296.22.2694.
- 5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637-2645. doi:10.1056/NEJMsa0904859.
- 6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long-term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210-216. doi:10.1097/MLR.0b013e3181ca3eb4.
- 7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466-473. doi:10.1097/MLR.0b013e31818dcdf0.
- 8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. CMAJ. 2002;166(11):1399-1406.
- 9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384-390. doi:10.7326/0003-4819-154-6-201103150-00003.
- 10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596-616. doi:10.1111/j.1475-6773.2010.01204.x.
- 11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital-wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14-21. doi:10.1542/peds.2009-3271.
- 12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315-336. doi:10.1097/00005650-198904000-00001.
- 13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129-1141. doi:10.1097/01.MLR.0000088301.06323.CA.
- 14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30-day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110-1118. doi:10.1056/NEJMsa0907130.
- 15. Patient Protection and Affordable Care Act, Pub. L. No. 111-148, §3025 (2010).
- 16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774-778. doi:10.1097/01.mlr.0000215898.33228.c7.
- 17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131-1136. doi:10.1016/s0003-4975(03)00827-0.
- 18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1-31.
- 19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955-964. doi:10.1097/00005650-199807000-00003.
- 20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150. doi:10.1002/jhm.890.
- 21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37. doi:10.1161/CIRCOUTCOMES.108.802686.
- 22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety-net hospitals. Health Aff (Millwood). 2007;26(1):238-248. doi:10.1377/hlthaff.26.1.238.
- 23. National Quality Measures Clearinghouse. 2011. http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
- 24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692. doi:10.1161/CIRCULATIONAHA.105.611186.
- 25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701. doi:10.1161/CIRCULATIONAHA.105.611194.
- 26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401. doi:10.1371/journal.pone.0017401.
- 27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252. doi:10.1161/CIRCOUTCOMES.110.957498.
- 28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141-151.
- 29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3-32.
- 30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178-183.
- 31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Regist. 2011;76:67802-67990.
- 32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. http://www.mass.gov/hqcc/the-hcqcc-council/data-submission-information/potentially-preventable-readmissions-ppr.html. Accessed February 29, 2012.
- 33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
- 34. New York State. Potentially Preventable Readmissions. 2011. http://www.health.ny.gov/regulations/recently_adopted/docs/2011-02-23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
- 35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363(26):2530-2539. doi:10.1056/NEJMsa1006396.
- 36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638. doi:10.1056/NEJMsa0900592.