Abstract
Objective
To evaluate how the accuracy of present-on-admission (POA) reporting affects hospital 30-day acute myocardial infarction (AMI) mortality assessments.
Data Sources
The 2005 California patient discharge data (PDD) and vital statistics death files.
Study Design
We compared hospital performance rankings using an established model assessing hospital performance for AMI with (1) a model incorporating POA indicators of whether a secondary condition was a comorbidity or a complication of care, and (2) a simulation analysis that factored POA indicator accuracy into the hospital performance assessment. For each simulation, we changed POA indicators for six major acute risk factors of AMI mortality. The probability of POA being changed depended on patient and hospital characteristics.
Principal Findings
When we compared the performance rankings of 268 hospitals using the established model with those using the model incorporating POA indicators, 67 hospitals' (25 percent) rank differed by ≥10 percent. POA reporting inaccuracy due to overreporting and underreporting had little additional impact: in the simulations, POA overreporting produced a ≥10 percent difference in rank relative to the POA model for 4 percent of hospitals, and POA underreporting for <1 percent of hospitals.
Conclusion
Incorporating POA indicators into risk-adjusted models of AMI care has a substantial impact on hospital rankings of performance that is not primarily attributable to inaccuracy in POA hospital reporting.
Keywords: Present-on-admission, hospital assessments, simulation analysis
Over the past decade, there has been a dramatic growth in the use of risk-adjusted hospital performance assessments to increase the transparency of hospital quality. Performance assessments of hospital mortality rates compare the “observed” death rate at the hospital for a given diagnosis such as AMI to what the “expected” death rate would be, accounting for the health status of patients with the diagnosis at that hospital. In theory, hospitals that provide higher quality care should have lower than expected death rates, and hospitals that provide lower quality care should have higher than expected death rates.
The results of comparative evaluations are being tied to financial incentives as part of the Affordable Care Act. Stakeholders, such as the Centers for Medicare & Medicaid Services (CMS), are particularly interested in patient outcomes such as hospital mortality rates. These rates require risk-adjustment to account for differences in the health status of the patient population at different hospitals so that a more accurate estimate of hospital quality can be made. Theoretically, these “risk-adjusted” rates should account for medical conditions that reflect the health status of patients when they arrive at the hospital, and not for diagnoses that result from the hospitalization that could ensue from poor quality care (Hughes et al. 2006). Hospital outcome assessments often rely on administrative data that are generated for all hospitals, but traditionally there was no mechanism in administrative data to distinguish between patients' comorbidities and complications of hospital care.
One strategy that has been suggested to improve the clinical utility of administrative data for risk-adjustment is to use present-on-admission (POA) indicators to distinguish between comorbidities and complications of care (Coffey, Milenkovic, and Andrews 2006; Glance et al. 2008a,b; Pronovost, Goeschel, and Wachter 2008; Wachter, Foster, and Dudley 2008). These POA indicator variables flag each diagnosis as to whether a given condition was present at the time of admission and thus, by default, not attributable to a complication of hospital care. Since 2008, the CMS has required hospitals to report POA indicator variables for every diagnosis on an inpatient acute care hospital claim (Centers for Medicare and Medicaid Services 2008). However, whether POA indicators would actually improve the clinical validity of hospital performance assessment depends on how accurate they are. Prior work has demonstrated that POA accuracy varies significantly by clinical condition and by the type of hospital where the patient is admitted (Goldman et al. 2011b). Of particular concern for performance reports is overreporting risk factors that are strongly predictive of death as being POA. Overreporting would inflate the expected risk of death among all of the eligible patients at a hospital and thereby mistakenly lower a hospital's risk-adjusted mortality rate. This is particularly true if hospitals tend to overreport important risk factors such as shock as being present on admission when, in fact, the condition developed during the hospitalization. Conversely, hospitals that underreport risk factors associated with the risk of death may appear worse than they truly are. The degree of impact of POA misreporting on hospital mortality rankings remains unexplored.
We evaluated the effect of inaccurate reporting of POA on a hospital's performance “rank” using California Patient Discharge Data (PDD). We chose to use California administrative data as California is one of two states that has been reporting POA for over 15 years, and it is the only state for which a large-scale evaluation of POA reporting accuracy compared to a gold standard chart review exists (Goldman et al. 2011a). Our study aimed to address the following questions: (1) does the addition of POA reporting to a risk-adjustment mortality model alter the assessments of hospital performance for AMI care and (2) to what extent are differences in the assessments of hospital performance for AMI care that incorporate POA reporting attributable to inaccuracy in the recording of this variable?
Methods
The California PDD includes patient demographics and diagnostic, procedure, and disposition codes for approximately 3.7 million hospitalizations per year from all nonfederal, nonchildren's California acute care hospitals. For our analysis, we included patients 18 years or older who had been discharged during 2005 from a nonfederal California acute care hospital for an AMI due to coronary artery disease (see Appendix SA2 for a list of ICD-9 codes). Patients with hospitalizations whose length of stay was less than 2 days were excluded unless the patient was discharged “Against Medical Advice” or died, to limit the analysis to patients who likely had an AMI. We also excluded records for which a patient had a prior admission within 8 weeks with a diagnosis of AMI to limit the analysis to patients presenting with an initial AMI (and not a readmission for a prior AMI). The Office of Statewide Health Planning and Development performs a linkage of its PDD to California vital statistics data based on a patient's social security number. Therefore, patients were excluded if a reliable social security number was not available in the record (Romano et al. 1997).
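As an illustration of these cohort restrictions, the following R sketch applies the filters described above; the data frame and column names (e.g., `pdd_raw`, `los_days`, `prior_ami_within_8wk`) are hypothetical and are not taken from the study's code.

```r
# A minimal sketch of the cohort filters described above (hypothetical names);
# the AMI ICD-9 code list (Appendix SA2) and the transfer exclusion are applied elsewhere.
cohort <- subset(pdd_raw,
                 age >= 18 &
                 discharge_year == 2005 &
                 principal_dx_ami == 1 &                              # AMI principal diagnosis
                 (los_days >= 2 | disposition %in% c("AMA", "died")) &
                 !prior_ami_within_8wk &                              # index AMI only
                 has_valid_ssn)                                       # needed for death-file linkage
```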
Using the study sample described above, we modeled hospital 30-day mortality for AMI in three different ways. Model 1 applied a previously validated risk-adjustment model for AMI based on factors recorded in the California PDD (Romano, Remy, and Luft 1996; Solomon et al. 2002). For Model 1, we adjusted for patient age, sex, race-ethnicity, insurance status, whether the admission was an elective admission, and comorbidities (Appendix SA2). Prior work demonstrated that elective and urgent admissions had similar death rates, compared with emergent admissions, suggesting the administrative coding of elective compared to urgent admissions may be problematic. Therefore, to maximize the generalizability of our study, we decided to include all patients regardless of this administrative code, but risk-adjust for whether the admission was coded as elective or urgent compared to emergent. Risk factors in the mortality model were identified by ICD-9 codes. To obtain 30-day mortality, we used a file that had linked PDD and California vital statistics death records to obtain death dates within 30 days of admission for patients with AMI as their principal diagnosis. For ease of use, our model excluded patients who were transferred from one facility to another. We used the PDD from 2005 because POA reporting accuracy was available for this year from the California PDD Validation Study. We only included hospitals with 25 or more admissions for AMI to minimize the variability in performance assessments due to small sample size (Bardach, Chien, and Dudley 2010). This cut-off is used for publicly reported performance assessments on CMS's Hospital Compare website (Centers for Medicare and Medicaid Services 2014). In 2005, 268 of 368 acute care hospitals in California had 25 or more admissions for AMI.
Model 2 uses Model 1 as its base but incorporates a POA indicator from the PDD that is intended to distinguish chronic conditions from acute complications. The goal of this second model was to adjust only for patient comorbidities that were present at the time of admission and not for acute complications of hospital care that often reflect deficits in the quality of care. Conceptually, Model 2 is a more accurate way of assessing risk-adjusted hospital performance than Model 1. However, because the POA indicator is self-reported by hospitals, a concern exists that inaccuracy in POA recording could introduce more error in measuring risk-adjusted outcomes than excluding this indicator altogether.
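To make the distinction between the two models concrete, the following R sketch shows one way they might be specified. All object and column names (e.g., `pdd`, `died30`, `shock_poa`) are hypothetical, only a subset of the Appendix SA2 risk factors is shown, and this is not the study's actual code.

```r
# Model 1: every coded secondary diagnosis counts as a risk factor
m1 <- glm(died30 ~ age + female + race_eth + insurance + elective +
            shock + pulm_edema + septicemia + acute_renal_failure + chf + coma +
            hypertension + diabetes_complicated,   # further comorbidities omitted for brevity
          family = binomial, data = pdd)

# Model 2: an acute condition counts only when its POA indicator flags it as present on admission
pdd2 <- within(pdd, {
  shock               <- shock * shock_poa
  pulm_edema          <- pulm_edema * pulm_edema_poa
  septicemia          <- septicemia * septicemia_poa
  acute_renal_failure <- acute_renal_failure * arf_poa
  chf                 <- chf * chf_poa
  coma                <- coma * coma_poa
})
m2 <- update(m1, data = pdd2)

# Fitted 30-day death probabilities (the "expected" risk per patient) under each model
pdd$p1  <- fitted(m1)
pdd2$p2 <- fitted(m2)
```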
Calculating Hospital-Specific Risk-Adjusted Mortality
We calculated each hospital's risk-adjusted 30-day mortality rate using the fitted mortality risk for each person generated from Model 1. To do this, we used indirect standardization, a common methodology for calculating risk-adjusted rates in which the death rate at a hospital equals the rate across all hospitals in our sample multiplied by the ratio of the number of observed deaths to the number of expected deaths at that hospital among qualifying AMI patients. The expected death rate represents the mean estimated probability of death for all AMI patients at a hospital, which is a measure of average severity of illness. This risk-adjusted death rate provides a basis for comparing the performance of different hospitals for AMI care, because each hospital's rate is adjusted to reflect what its death rate would be if its patients had the average illness burden in the sample. We used these 30-day risk-adjusted mortality rates to rank hospitals (1 to 268). We then determined hospitals' risk-adjusted 30-day mortality for AMI patients based on the fitted mortality results using Model 2, and similarly ranked hospitals based on these results. Thus, each hospital was ranked twice, once using the 30-day risk-adjusted mortality based on Model 1 and a second time using Model 2. We evaluated differences in each hospital's ranking because this provides a detailed evaluation of how the use of POA and POA accuracy impacts hospital assessments.
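The indirect standardization step can be illustrated as follows, continuing the hypothetical names from the sketch above (e.g., `hosp_id`, `p1`); the ranking direction (rank 1 = lowest risk-adjusted mortality) is an assumption made for illustration.

```r
# Indirect standardization (sketch): adjusted rate = overall rate x observed/expected deaths
overall_rate <- mean(pdd$died30)                      # crude 30-day AMI death rate in the sample

by_hosp <- aggregate(cbind(observed = died30, expected = p1) ~ hosp_id,
                     data = pdd, FUN = sum)
by_hosp$adj_rate_m1 <- overall_rate * by_hosp$observed / by_hosp$expected

# Rank hospitals 1..268 (assumed here: rank 1 = lowest risk-adjusted mortality)
by_hosp$rank_m1 <- rank(by_hosp$adj_rate_m1, ties.method = "first")
# The Model 2 ranks (rank_m2) would be computed the same way from pdd2$p2.
```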
To evaluate the effect of the inclusion of POA to modify risk factors for 30-day risk-adjusted mortality to form judgments about a hospital's performance, we calculated each hospital's difference in rank (either an improvement or decline) between Model 1 and Model 2. We also calculated the proportion of hospitals whose rank differed by 5 percent (13 positions of 268) or more, 10 percent (27 positions of 268) or more, and 20 percent (54 positions of 268) or more as a strategy to quantify meaningful differences in hospital rank by the different approaches. We examined whether there were differences in the effect on hospital rank by the type of hospital categorized by rural location (California Office of Statewide Health Planning and Development 2012), teaching status, for-profit ownership, and number of hospital beds.
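A short sketch of the rank-difference summary, again using the hypothetical objects above (`rank_m1`, `rank_m2`):

```r
# Proportion of hospitals whose rank differs by the 5, 10, and 20 percent cut-offs
n_hosp    <- nrow(by_hosp)                            # 268 hospitals in this study
rank_diff <- by_hosp$rank_m2 - by_hosp$rank_m1

cutoffs <- round(c(0.05, 0.10, 0.20) * n_hosp)        # 13, 27, and 54 positions
prop_changed <- sapply(cutoffs, function(k) mean(abs(rank_diff) >= k))
names(prop_changed) <- c(">=5%", ">=10%", ">=20%")
prop_changed
```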
Simulation of POA Error
We then conducted a simulation that uses the risk-adjustment model that includes the POA indicator from Model 2 but applies it to data with errors in POA reporting to assess the impact of POA inaccuracy on hospital-level performance as judged by risk-adjusted mortality rates. To do this, we reassigned POA reporting in a subset of records identified as having ICD-9 codes for the following six acute risk factors for AMI: shock, pulmonary edema, septicemia, acute renal failure, congestive heart failure, and coma. We modified the POA reporting for these risk factors because they are strong predictors of AMI mortality (Romano, Remy, and Luft 1996) and because a previous study found rates of POA overreporting (coded as POA when it was not) and underreporting (coded as not POA when it was) for each of these conditions were greater than 10 percent (Goldman et al. 2011b) (Table 1). Diagnoses for which POA is rarely overreported or underreported, such as chronic conditions like diabetes or hypertension, are unlikely to have POA reporting errors that significantly alter a hospital's rank. These chronic conditions were included in the risk-adjustment model, but their POA status was not modified.
Table 1.
Population in Thirty-Day Acute Myocardial Infarction Risk-Adjustment Model
Characteristics (N = 40,087) | n (%) |
---|---|
Died <30 days | 4,727 (11.8) |
Demographics | |
Age (mean) | |
Female | 15,815 (39.5) |
Age <35 years | 267 (0.7) |
Latino | 6,146 (15.3) |
African American | 2,646 (6.5) |
Medicaid | 2,978 (7.4) |
Uninsured | 2,211 (5.5) |
Prevalence of acute conditions | |
Congestive heart failure | 14,368 (35.8) |
Acute renal failure | 3,866 (9.6) |
Pulmonary edema | 3,488 (8.7) |
Shock | 2,304 (5.8) |
Sepsis | 1,097 (2.7) |
Coma | 584 (1.5) |
Prevalence of chronic conditions | |
Hypertension | 22,683 (56.6) |
Chronic renal failure | 4,080 (10.2) |
Thyroid disease | 3,796 (9.5) |
Prior CABG | 3,694 (9.2) |
Diabetes, complicated | 3,525 (8.8) |
Paroxysmal ventricular tachycardia | 2,276 (5.7) |
Central nervous system diseases | 587 (1.5) |
Cancer | 566 (1.4) |
Low prevalence conditions | |
Aspiration pneumonia | 764 (1.9) |
Complete atrioventricular block | 566 (1.4) |
Ischemic bowel or liver | 361 (0.9) |
Cerebrovascular disease | 94 (0.2) |
Seizure disorder | 48 (0.1) |
Skin ulcer | 39 (0.1) |
We generated 1,000 datasets with simulated removal of overreporting and 1,000 with simulated correction of underreporting. For removal of overreporting, we changed conditions coded as POA to not POA with a probability defined for dataset i, each hospital j, and each condition k as:
Pijk = exp(α + xjβ + γij) / [1 + exp(α + xjβ + γij)],
where xj is the row vector of hospital characteristics and α, β, and γ are defined from a reanalysis of previous data on overreporting risk (Goldman et al. 2011a): α is the fixed intercept from a model which is assumed to be the same for all conditions k, β is the column vector of coefficients from that model, and γij is a random intercept generated from a normal distribution with mean zero and variance as estimated in that model. The probabilities Pijk are realizations of the fitted probabilities from the reanalysis of previous data (Goldman et al. 2011b), plus random effects generated independently for each simulated dataset. For each patient with the condition reported as POA, we changed this to not POA with probability Pijk, independently for each patient and condition. The generation of the 1,000 datasets for underreporting was analogous, except that conditions coded as not POA were changed to POA, with probabilities as defined above using a model fit to the previous data on underreporting.
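The generation of one "removal of overreporting" dataset could be sketched as follows. Here `X` (a hospital-by-characteristic matrix with hospital IDs as row names), `alpha`, `beta`, and `sigma` stand in for the quantities estimated in the reanalysis of the validation data, the condition and `*_poa` column names are hypothetical, and, following the text, a single intercept and coefficient vector are shared across the six conditions.

```r
# A minimal sketch, not the study's code: flip POA -> not POA with probability
# Pijk = plogis(alpha + x_j %*% beta + gamma_ij), independently per patient and condition.
simulate_overreport_removal <- function(pdd, X, alpha, beta, sigma,
                                        conditions = c("shock", "pulm_edema", "septicemia",
                                                       "acute_renal_failure", "chf", "coma")) {
  gamma_ij <- rnorm(nrow(X), mean = 0, sd = sigma)          # hospital random intercepts, redrawn per dataset
  p_j <- plogis(alpha + as.vector(X %*% beta) + gamma_ij)   # Pr(reclassify as not POA), per hospital
  names(p_j) <- rownames(X)

  for (k in conditions) {
    poa_col <- paste0(k, "_poa")
    flip <- pdd[[k]] == 1 & pdd[[poa_col]] == 1 &
      rbinom(nrow(pdd), 1, p_j[as.character(pdd$hosp_id)]) == 1
    pdd[[poa_col]][flip] <- 0                               # reclassify as not POA
  }
  pdd
}

# e.g., 1,000 simulated datasets:
# sims <- replicate(1000, simulate_overreport_removal(pdd2, X, alpha, beta, sigma), simplify = FALSE)
```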
To evaluate the effect of overreporting of POA on hospital rankings, we calculated hospital-specific risk-adjusted 30-day mortality for each of the 1,000 simulated datasets and determined hospital rankings within each of these simulated datasets. We compared the difference in rank for each hospital based on Model 2 (with POA) and each of the 1,000 simulated rankings. This generated a distribution of differences in hospital rank across all 1,000 simulations for each of the 268 hospitals. We calculated the proportion of times that hospitals' rank differed by 5 percent or more, 10 percent or more, and 20 percent or more. We assumed that large differences in the rankings between Model 2 and the simulation would indicate that inaccuracy in POA reporting was contributing to observed differences in the rankings between Models 1 and 2. We also tested this by comparing the hospital rankings from Model 1 (without POA) to each of the 1,000 simulated rankings with overreported POA, assuming that if the results of this comparison were markedly different from the comparison of Models 1 and 2, then inaccuracy in POA overreporting would be at least part of the explanation. We repeated a similar comparison of the simulated data for underreporting of POA with Model 1, applying the same logic.
Finally, we evaluated the stability of whether a hospital would have been designated as a top performer or a low performer under the two models. First, we compared how a hospital would be labeled using Model 1 with how it would be labeled using Model 2. Top performers were those hospitals whose 95 percent confidence limits were below the California state average risk-adjusted mortality, and bottom performers were those whose 95 percent confidence limits were above the state average risk-adjusted mortality. We then determined the percent of hospitals whose categorization (1) differed or (2) shifted from the top to the bottom category or from the bottom to the top category. Next, we compared each hospital's designation based on Model 2 (with POA) to its designation in each of the 1,000 simulations for overreporting, calculating the percent of the 268,000 hospital-simulation comparisons in which the designation differed or shifted from the top to the bottom category or the reverse. We repeated this comparison between Model 2 and the simulation for underreporting. We determined the proportion of hospitals whose categorization shifted under these settings.
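The performer designation and its stability could be sketched as follows, assuming 95 percent confidence limits for each hospital's risk-adjusted rate (`ci_lo_*`, `ci_hi_*`, hypothetical names) have already been computed:

```r
# Classify each hospital relative to the state average risk-adjusted mortality
classify <- function(ci_lo, ci_hi, state_avg) {
  ifelse(ci_hi < state_avg, "top",
         ifelse(ci_lo > state_avg, "bottom", "average"))
}

cat_m1 <- classify(by_hosp$ci_lo_m1, by_hosp$ci_hi_m1, overall_rate)
cat_m2 <- classify(by_hosp$ci_lo_m2, by_hosp$ci_hi_m2, overall_rate)

mean(cat_m1 != cat_m2)                          # share of hospitals whose label differs
mean((cat_m1 == "top" & cat_m2 == "bottom") |   # share flipping between the extreme categories
     (cat_m1 == "bottom" & cat_m2 == "top"))
```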
All analyses were conducted using SAS/STAT software, Version 9.2 of the SAS System for Linux (SAS Institute Inc., Cary, NC, USA). Figures were generated using R, Version 2.15.0 (R Foundation for Statistical Computing).
Results
Our sample consisted of 40,087 hospitalizations for an index AMI at the 268 study hospitals (Table 1). The average 30-day AMI hospital-specific mortality rate was 11.8 percent (95 percent confidence interval [CI] 11.5–12.1). Figure 1 displays hospital-standardized mortality by rank (1–268). When we compared each hospital's 30-day mortality ranking using Model 1 (without POA) to its ranking using Model 2 (with POA), we found that 25 percent (n = 67) experienced at least a 10 percent difference in 30-day mortality hospital ranking (Figure 2); 13.4 percent had an increase in rank of at least 10 percent and 11.6 percent had a decrease in rank of at least 10 percent.
Figure 1.
Thirty-Day Risk-Adjusted Hospital-Specific Mortality Rates for Acute Myocardial Infarction (AMI) by Hospital Ranking
Note. N = 268 hospitals. Hospital-specific mortality was adjusted for patient age, sex, race-ethnicity, insurance status, whether the admission was an elective admission, and medical comorbidities (Appendix SA2).
Figure 2.
Difference in Hospital Rank Based on Hospital-Specific Mortality Rates for AMI with and without Present-on-Admission (POA) Indicator
Note. N = 268 hospitals; >5 percent = difference in rank of ≥13 or ≤−13; >10 percent = difference in rank of ≥27 or ≤−27; >20 percent = difference in rank of ≥54 or ≤−54. Hospital-specific mortality was adjusted for patient age, sex, race-ethnicity, insurance status, whether the admission was an elective admission, and medical comorbidities (Appendix SA2).
When we compared each hospital's rank using Model 2 (with POA) to the rank based on simulated data representing overreporting of POA, we found that only 4 percent of hospitals experienced a difference in ranking of at least 10 percent (Figure 3A), suggesting that inaccuracy due to overreporting of POA played a minor role in the differences observed between Models 1 and 2. The effect of underreporting of POA on difference in hospital rank was even smaller. We found that only 0.6 percent of hospitals experienced a difference in 30-day hospital mortality ranking of at least 10 percent in the simulations of underreporting POA as compared to Model 2 (Figure 3B), again suggesting that observed differences between Models 1 and 2 were not due to the inaccuracy of POA underreporting.
Figure 3.
(A) Difference in Hospital Rank Based on Hospital-Specific Mortality Rates for AMI with POA and Simulated Overreported POA Indicator. (B) Difference in Hospital Rank Based on Hospital-Specific Mortality Rates for AMI with POA and Simulated Underreported POA Indicator
Note. N = 268,000 simulations; >5 percent = difference in rank of ≥13 or ≤−13; >10 percent = difference in rank of ≥27 or ≤−27; >20 percent = difference in rank of ≥54 or ≤−54. Hospital-specific mortality was adjusted for patient age, sex, race-ethnicity, insurance status, whether the admission was an elective admission, and medical comorbidities (Appendix SA2).
As a further indication of the relatively minor role that POA inaccuracy plays in hospital rankings based on 30-day risk-adjusted mortality, we found that the percentage of hospitals whose rank differed (either increased or decreased) between Model 1 and the simulation of overreporting POA was similar to that between Models 1 and 2 (Appendix SA4). We found that 25 percent of hospitals experienced a difference in ranking of at least 10 percent; 12.5 percent increased in rank by greater than 10 percent, and 12.4 percent decreased in rank by greater than 10 percent. Similarly, in comparing Model 1 to the model simulating underreporting of POA, 20.8 percent of hospitals' ranks differed by at least 10 percent; 10.8 percent increased in rank by greater than 10 percent, and 10.0 percent decreased in rank by greater than 10 percent.
Stratification by hospital characteristics revealed that the addition of POA to the model (Model 1 vs. Model 2) affected the ranking of hospitals with certain characteristics more than others (Table 2). Rural hospitals' rank increased by greater than 10 percent more frequently than urban hospitals' rank when POA was added to the model (rural 23.8 percent vs. urban 12.6 percent). In contrast, teaching hospitals' rank was more likely to decline by more than 10 percent compared with nonteaching hospitals' rank (teaching 20.8 percent vs. nonteaching 10.7 percent). When comparing Model 2 to models that simulated overreporting, there was little difference in how frequently hospitals' rank differed by type of hospital, with the exception of for-profit versus nonprofit hospitals (Appendix SA5). For-profit hospitals were more likely to decrease in rank by greater than 10 percent (8.7 percent vs. 1.1 percent) than nonprofit hospitals (0.6 percent vs. 0.7 percent). Comparing Model 2 to models that simulated underreporting, we found even fewer differences by hospital characteristics (Appendix SA6). Our findings for differences in ranking of >5 percent and >20 percent showed patterns similar to those for >10 percent and are displayed in Table 2 and Appendices SA5 and SA6.
Table 2.
Difference in Hospital Rank Based on Hospital-Specific Mortality Rates between Acute Myocardial Infarction (AMI) Model with and without Present-on-admission (POA) Indicator Stratified by Hospital Characteristics
Hospital Characteristic | N | >5% Difference: Improve | >5% Difference: Decline | >10% Difference: Improve | >10% Difference: Decline | >20% Difference: Improve | >20% Difference: Decline |
---|---|---|---|---|---|---|---|
Overall | 268 | 23.9 | 24.3 | 13.4 | 11.6 | 2.6 | 1.9 |
Rural | 21 | 38.1 | 0.0 | 23.8 | 0.0 | 4.8 | 0.0 |
Urban | 247 | 22.7 | 26.3 | 12.6 | 12.6 | 2.4 | 2.0 |
Teaching | 24 | 16.7 | 45.8 | 8.3 | 20.8 | 4.2 | 12.5 |
Nonteaching | 244 | 24.6 | 22.1 | 13.9 | 10.7 | 2.5 | 0.8 |
Profit | 61 | 19.7 | 24.6 | 14.8 | 16.4 | 4.9 | 1.6 |
Nonprofit | 207 | 25.1 | 24.2 | 13.0 | 10.1 | 1.9 | 1.9 |
No. staffed beds (quartile) | | | | | | | |
Lowest | 17 | 17.7 | 11.8 | 5.9 | 11.8 | 0.0 | 0.0 |
Second | 66 | 34.9 | 6.1 | 22.7 | 4.6 | 7.6 | 0.0 |
Third | 89 | 29.2 | 22.5 | 16.9 | 10.1 | 2.3 | 1.1 |
Highest | 96 | 12.5 | 40.6 | 5.2 | 17.7 | 0.0 | 4.2 |
Note. Values are the percentage of hospitals in each group whose rank improved or declined by more than the indicated amount. The quartile cutoffs for number of staffed beds were as follows: lowest quartile = 60 beds, second quartile (median) = 137 beds, third quartile = 245 beds.
Comparing Model 1 with Model 2, we found that 247 (92 percent) of the hospitals remained in the same category, and no hospitals were identified as top performers in one model and bottom performers in the other (or the reverse). Comparing Model 2 to the simulation of overreporting POA, hospital performance remained in the same category in 262,771 (97.8 percent) of the 268,000 comparisons, and in none of the simulations did a hospital's category shift from a top to a bottom or a bottom to a top performer. The comparison between Model 2 and the simulation of underreporting POA was similar: hospital performance was categorized the same in 262,055 (98.6 percent) of comparisons, and in none of the simulations did a hospital's category shift from a top to a bottom or a bottom to a top performer.
Discussion
Hospital rank differed by at least 10 percent for a quarter of hospitals when POA indicators were used in hospital assessments of AMI mortality. Our modeling of the quality of POA reporting suggests that inaccuracy in POA recording is not the primary cause of differences in hospital rank when POA is added to risk-adjustment models of AMI mortality.
Previous literature has supported the value of POA indicators for increasing the validity of hospital performance reports (Dalton et al. 2013), but it has acknowledged the potential for inaccurate POA indicator reporting (Ghali, Quan, and Brant 2001; Glance et al. 2006). Researchers developed an algorithm that can be used to identify and exclude individual hospitals with patterns in their administrative data that are suggestive of problematic POA reporting (Hughes et al. 2006). The algorithm identifies hospitals that appear to grossly over- or underreport POA. This approach, adopted in California, was developed to minimize the likelihood that inaccurate POA reporting by hospitals could affect hospital assessments. Our findings suggest that additional adjustments by hospital characteristics are unnecessary, as there was little suggestion that the effect of POA reporting inaccuracies on rankings differed substantially by hospital type. Certain hospital types, including rural hospitals and for-profit hospitals, may be more likely to have their rank affected by the use of POA reporting, though this does not seem to be due to inaccuracies in POA.
Hospital-specific mortality rates have been used for over 20 years as metrics to evaluate hospital quality (Romano, Remy, and Luft 1996). CMS currently includes this measure as part of its assessment of hospitals for determining financial rewards and penalties (Centers for Medicare and Medicaid Services 2014). Some have questioned the use of hospital-specific mortality as a quality indicator, citing the extent of random variation in hospital-specific mortality as being too great for it to be an effective quality improvement strategy (Hofer and Hayward 1996). POA indicators were developed to eliminate some of this error. Our study cannot answer the question of whether the addition of POA to risk-adjustment models results in a more valid assessment of hospital performance. However, we have shown that differences in the ratings of hospital performance based on risk-adjustment models with and without POA indicators are not heavily influenced by any inaccuracy in how POA is recorded by hospitals.
Our findings reflect POA reporting in California prior to the 2008 CMS mandate that hospitals report POA in their administrative claims. Since 2008, CMS has attempted to further clarify POA reporting. Hospitals can now report that a condition was present on admission, was not present on admission, or that there were inadequate data available to make the determination. Any improvement in the accuracy of POA reporting would further decrease the degree to which inaccuracy accounts for differences between models with and without POA.
Our study had several limitations. First, we used POA reporting from 2005 in a single state. We chose California for this project because data were available on the accuracy of POA reporting, and there was a long-standing history of reporting POA in the PDD. We suspect that the accuracy of POA reporting may have improved since 2005, but this, along with the generalizability of our findings to other states' PDD, should be evaluated, particularly now that all hospitals are reporting POA to Medicare and attention to POA reporting has increased. Second, our performance assessment was based on AMI risk-adjusted mortality. Now that CMS is penalizing hospitals based on readmission rates (Averill et al. 2009; Berenson, Paulus, and Kalman 2012) and hospital-acquired conditions such as pressure ulcers, catheter-associated urinary tract infections, and central-line infections (McNair, Luft, and Bindman 2009; Mookherjee et al. 2010), future work should assess the effect of POA inaccuracy on assessments that will be linked with payments. We only tested the impact of inaccuracies in POA reporting for the clinical outcome of mortality associated with AMI; therefore, we cannot say with certainty what the size of the impact of inaccuracy in POA reporting would be for other clinical conditions or outcomes. Third, our study simulated the impact of inaccuracy in only six risk factors for AMI mortality. While these risk factors are highly predictive of mortality, it is possible that simulating the impact of inaccuracy in all risk factors for AMI mortality would have led us to a different conclusion.
The stakes for identifying clinically relevant risk-adjustment strategies rise as financial penalties are increasingly tied to hospital performance reports. Our study confirms that the use of POA indicators in administrative data significantly alters risk-adjusted hospital assessments that do not incorporate a method for distinguishing between comorbidities and complications. Furthermore, our study provides reassurance that the adoption of POA indicators in a risk-adjustment model for AMI care is not substantially confounding results due to the inaccuracy in how POA is reported by hospitals. Future studies should attempt to confirm whether our findings apply to other important hospital outcomes and conditions.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: This study was funded by the Agency for Healthcare Research and Quality, Grant #1 K08 HS018090-01, and NIH/NCRR/OD UCSF-CTSI KL2 RR024130.
Disclosures: None.
Disclaimers: None.
Supporting Information
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix SA2: Risk Factors in the California Acute Myocardial Infarction Report*.
Appendix SA3: Rates of Overreporting and Underreporting POA for Several Conditions.
Appendix SA4: Difference in Hospital Rank Based on Hospital-Specific Mortality Rates for AMI without POA and Simulated Underreported and Overreported POA Indicator.
Appendix SA5: Effects of Overreporting Select Risk Factors as Present-on-Admission on Difference in Hospital-Specific Thirty-Day Mortality Ranking Stratified by Hospital Characteristics.
Appendix SA6: Effects of Underreporting Select Risk Factors as Present-on-Admission on Difference in Hospital-Specific Thirty-Day Mortality Ranking Stratified by Hospital Characteristics.
References
- Averill RF, McCullough EC, Hughes JS, Goldfield NI, Vertrees JC, Fuller RL. Redesigning the Medicare Inpatient PPS to Reduce Payments to Hospitals with High Readmission Rates. Health Care Financing Review. 2009;30(4):1–15.
- Bardach NS, Chien AT, Dudley RA. Small Numbers Limit the Use of the Inpatient Pediatric Quality Indicators for Hospital Comparison. Academic Pediatrics. 2010;10(4):266–73. doi: 10.1016/j.acap.2010.04.025.
- Berenson RA, Paulus RA, Kalman NS. Medicare's Readmissions-Reduction Program–A Positive Alternative. New England Journal of Medicine. 2012;366(15):1364–6. doi: 10.1056/NEJMp1201268.
- California Office of Statewide Health Planning and Development. Hospital Annual Financial Data [accessed on September 9, 2014]. Available at http://www.oshpd.ca.gov/hid/Products/Hospitals/AnnFinanData/SubSets/SelectedData/2000Doc/HAFDoc2000andAfter.pdf
- Centers for Medicare and Medicaid Services. 2008. "Hospital-Acquired Conditions (Present on Admission Indicator)" [accessed on June 1, 2014]. Available at http://www.cms.hhs.gov/HospitalAcqCond/
- Centers for Medicare and Medicaid Services. 2014. "Hospital Compare" [accessed on June 1, 2014]. Available at http://www.medicare.gov/hospitalcompare/
- Coffey R, Milenkovic M, Andrews RM. The Case for the Present on Admission (POA) Indicator. Rockville, MD: U.S. Agency for Healthcare Research and Quality; 2006.
- Dalton JE, Glance LG, Mascha EJ, Ehrlinger J, Chamoun N, Sessler DI. Impact of Present-on-Admission Indicators on Risk-Adjusted Hospital Mortality Measurement. Anesthesiology. 2013;118(6):1298–306. doi: 10.1097/ALN.0b013e31828e12b3.
- Ghali WA, Quan H, Brant R. Risk Adjustment Using Administrative Data: Impact of a Diagnosis-Type Indicator. Journal of General Internal Medicine. 2001;16(8):519–24. doi: 10.1046/j.1525-1497.2001.016008519.x.
- Glance LG, Dick AW, Osler TM, Mukamel DB. Does Date Stamping ICD-9-CM Codes Increase the Value of Clinical Information in Administrative Data? Health Services Research. 2006;41(1):231–51. doi: 10.1111/j.1475-6773.2005.00419.x.
- Glance LG, Li Y, Osler TM, Mukamel DB, Dick AW. Impact of Date Stamping on Patient Safety Measurement in Patients Undergoing CABG: Experience with the AHRQ Patient Safety Indicators. BMC Health Services Research. 2008a;8:176. doi: 10.1186/1472-6963-8-176.
- Glance LG, Li Y, Osler TM, Mukamel DB, Dick AW. Impact of the Present-on-Admission Code in Administrative Data on Patient Safety Measurement in Patients Undergoing CABG: Experience with the AHRQ Patient Safety Indicators. BMC Health Services Research. 2008b;8(1):176. doi: 10.1186/1472-6963-8-176.
- Goldman LE, Chu P, Osmond D, Bindman A. The Accuracy of Present-on-Admission Reporting in Administrative Data. Health Services Research. 2011a;46(6 Pt 1):1946–62. doi: 10.1111/j.1475-6773.2011.01300.x.
- Goldman LE, Chu PW, Prothro C, Osmond D, Bindman AB. Accuracy of Condition Present-on-Admission, Do Not Resuscitate, and E Codes in California Patient Discharge Data. Sacramento, CA: Office of Statewide Health Planning and Development; 2011b.
- Hofer TP, Hayward RA. Identifying Poor-Quality Hospitals. Can Hospital Mortality Rates Detect Quality Problems for Medical Diagnoses? Medical Care. 1996;34(8):737–53. doi: 10.1097/00005650-199608000-00002.
- Hughes JS, Averill RF, Goldfield NI, Gay JC, Muldoon J, McCullough E, Xiang J. Identifying Potentially Preventable Complications Using a Present on Admission Indicator. Health Care Financing Review. 2006;27(3):63–82.
- McNair PD, Luft HS, Bindman AB. Medicare's Policy Not to Pay for Treating Hospital-Acquired Conditions: The Impact. Health Affairs. 2009;28(5):1485–93. doi: 10.1377/hlthaff.28.5.1485.
- Mookherjee S, Vidyarthi AR, Ranji SR, Maselli J, Wachter RM, Baron RB. Potential Unintended Consequences Due to Medicare's "No Pay for Errors Rule"? A Randomized Controlled Trial of an Educational Intervention with Internal Medicine Residents. Journal of General Internal Medicine. 2010;25(10):1097–101. doi: 10.1007/s11606-010-1395-9.
- Pronovost PJ, Goeschel CA, Wachter RM. The Wisdom and Justice of Not Paying for "Preventable Complications." Journal of the American Medical Association. 2008;299(18):2197–9. doi: 10.1001/jama.299.18.2197.
- Romano PS, Remy LL, Luft HS. Second Report of the California Hospital Outcomes Project (1996): Acute Myocardial Infarction Volume Two: Technical Appendix. Davis, CA: Center for Healthcare Policy and Research, UC Davis; 1996.
- Romano PS, Luft HS, Rainwater JA, Zach A. Report on Heart Attack 1991-1993, Volume 2: Technical Guide. Sacramento, CA: California Office of Statewide Health Planning and Development; 1997.
- Solomon L, Zach A, Lubeck S, Simon V, Li YQ, MacDonald M, Hand L. Report on Heart Attack Outcomes in California, 1996-1998. California Hospital Outcomes Project. Sacramento, CA: Office of Statewide Health Planning and Development; 2002. pp. 1–63.
- Wachter RM, Foster NE, Dudley RA. Medicare's Decision to Withhold Payment for Hospital Errors: The Devil is in the Det. Joint Commission Journal on Quality and Patient Safety. 2008;34(2):116–23. doi: 10.1016/s1553-7250(08)34014-8.