Author manuscript; available in PMC: 2020 Jan 8.
Published in final edited form as: JACC Cardiovasc Interv. 2013 Jun;6(6):625–630. doi: 10.1016/j.jcin.2013.01.140

Impact of Public Reporting and Outlier Status Identification on Percutaneous Coronary Intervention Case Selection in Massachusetts

James M. McCabe, Karen E. Joynt, Frederick G.P. Welt, Frederic S. Resnic
PMCID: PMC6948720  NIHMSID: NIHMS1063990  PMID: 23787236

Abstract

Objectives

This study sought to evaluate the impact of public reporting of hospitals as negative outliers on percutaneous coronary intervention (PCI) case-mix selection.

Background

Public reporting of risk-adjusted in-hospital mortality after PCI is intended to improve outcomes. However, public labeling of negative outliers based on risk-adjusted mortality rates may detrimentally affect hospitals’ willingness to care for high-risk patients.

Methods

We used generalized estimating equations to examine expected in-hospital mortality rates for 116,227 PCI patients at all nonfederally funded Massachusetts hospitals performing PCI from 2003 to 2010. The main outcome measure was the change in predicted in-hospital mortality rates per hospital after outlier status identification.

Results

The prevalence-weighted mean expected mortality for all PCI cases during the study period was 1.38 ± 0.36% (5.3 ± 1.96% for all shock or ST-segment elevation myocardial infarction patients, 0.58 ± 0.19% for all not shock, not ST-segment elevation myocardial infarction patients). After public identification as a negative outlier institution, there was an 18% relative reduction (absolute 0.25% reduction) in predicted mortality among PCI patients at outlier institutions (95% confidence interval: −0.04 to −0.46%, p = 0.021) compared with nonoutlier institutions. Throughout the study period, there was an additional 37% relative (0.51% absolute) reduction in the predicted mortality risk among all PCI patients in Massachusetts attributable to secular changes since the onset of public reporting (95% confidence interval: −0.20 to −0.83, p = 0.002).

Conclusions

The risk profile of PCI patients at outlier institutions was significantly lower after public identification compared with nonoutlier institutions, suggesting that risk-aversive behaviors among PCI operators at outlier institutions may be an unintended consequence of public reporting in Massachusetts.

Keywords: case mix, outlier, percutaneous coronary intervention, public reporting


Public reporting of risk-adjusted in-hospital mortality rates after percutaneous coronary intervention (PCI) is intended to build public trust and encourage adoption of best practices (1). However, there is controversy regarding the ability of public reporting to improve patient outcomes while preserving access to potentially lifesaving care (1–5). Public reporting has been associated with reduced mortality rates for ST-segment elevation myocardial infarction (STEMI) and shock patients in both Massachusetts and New York State (1), but some argue that this may be largely the result of avoidance of high-risk patients (6).

One important aspect of public reporting that has been less well studied is the public labeling of institutions as negative outliers. Negative outlier status is conferred on institutions whose observed risk-adjusted in-hospital mortality rate is significantly greater than expected based on risk prediction models. Since the inception of public reporting in Massachusetts in 2003, 4 institutions have been identified as negative outliers. We hypothesized that PCI operators at institutions previously identified as negative outliers may become more intensely risk averse in PCI case selection than physicians at nonoutlier centers and therefore be more likely to avoid performing PCI in the most severely ill individuals. Paradoxically, these critically ill patients may stand to gain the most from reperfusion therapy, albeit with a poor overall prognosis (7).

Therefore, we set out to answer the following question: Was identification as an outlier for PCI mortality rate in Massachusetts associated with a change in the risk profile of patients receiving PCI at that hospital in subsequent years compared with patients treated at nonoutlier hospitals?

Methods

Data source.

The Massachusetts Data Analysis Center (Mass-DAC) collects, adjudicates, and analyzes patient-specific risk factors and outcomes for each nonfederal hospital performing PCI in Massachusetts. Mass-DAC uses validated risk prediction models to calculate the expected mortality rates for all patients based on their clinical characteristics and presentation condition. Separate prediction models are used for shock or STEMI (SOS) PCI patients and not-shock, not-STEMI (non-SOS) PCI patients (8). Both expected mortality models were regenerated yearly to maintain prediction validity; thus, the model covariates and their point estimates were subject to change throughout the study period (Online Tables 1 and 2 list all model covariates and their respective point estimates per year). Model covariates include elements collected for reporting to the National Cardiovascular Data Registry, whose reporting is mandatory in Massachusetts, as well as additional elements, not collected for the National Cardiovascular Data Registry, that have previously been validated to improve model fit in exceptionally high-risk cases (6). The area under the receiver-operating characteristic curve for the resultant Mass-DAC models ranged from 0.83 to 0.91 during the study period, suggesting excellent model discrimination (Online Tables 1 and 2). Averaged expected mortality rates per PCI-performing hospital are reported as standardized expected mortality incidence rates (SMIRs). Because separate prediction models are used for SOS patients and non-SOS patients, separate SMIRs for each group are also reported per institution (8). These SMIRs are published yearly and represent the average expected in-hospital mortality rates per hospital based on the characteristics of that institution’s PCI population.
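
In notation (introduced here only for clarity; not Mass-DAC’s published formula), if $\hat{p}_{hi}$ is the model-predicted in-hospital mortality for patient $i$ at hospital $h$ and $n_h$ is that hospital’s PCI volume for the reporting year, the SMIR described above corresponds to the mean predicted risk:

$$\mathrm{SMIR}_h = \frac{1}{n_h} \sum_{i=1}^{n_h} \hat{p}_{hi}$$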

Endpoints and definitions.

We collected publicly reported SMIRs for both SOS and non-SOS patients at all PCI-capable, nonfederal Massachusetts hospitals from 2003 through 2010. Prevalence-weighted mean expected mortality rates were calculated per year by weighting each hospital’s expected mortality rates for SOS and non-SOS patients by those hospitals’ respective number of SOS and non-SOS cases. The pre-specified primary outcome of this study was the change in average expected mortality rates for all patients undergoing PCI at institutions previously labeled as negative outliers compared with the changes in expected mortality rates from all nonoutlier hospitals during that time frame. Comparisons with contemporary nonoutlier hospitals account for yearly changes in the Mass-DAC expected mortality models and the possibility that public reporting and, more specifically, the identification of outlier hospitals, might lead to “risk avoidance creep” among all hospitals and not just outliers. The primary predictor was outlier status as reported by Mass-DAC. During the study period, 4 hospitals were reported to be negative outliers: 1 in 2005, 2 in 2007, and 1 in 2009. For the purposes of this analysis, institutions were not considered outliers until the year of their public identification but remained outliers for all subsequent years after their identification. To determine whether patients at outlier institutions were being routed toward coronary artery bypass graft (CABG) surgery instead of PCI after outlier status identification, we also collected publicly available SMIRs created by Mass-DAC for per-hospital projected 30-day mortality rates after isolated CABG to examine concomitant changes in the illness severity of this population.
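
Concretely (notation ours), with $n_{h,\mathrm{SOS}}$ and $n_{h,\mathrm{nonSOS}}$ denoting hospital $h$’s yearly SOS and non-SOS case counts, the prevalence-weighted mean expected mortality for a given year is:

$$\bar{p}_{\mathrm{year}} = \frac{\sum_{h} \left( n_{h,\mathrm{SOS}}\,\mathrm{SMIR}_{h,\mathrm{SOS}} + n_{h,\mathrm{nonSOS}}\,\mathrm{SMIR}_{h,\mathrm{nonSOS}} \right)}{\sum_{h} \left( n_{h,\mathrm{SOS}} + n_{h,\mathrm{nonSOS}} \right)}$$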

Statistical analysis.

Simple comparisons of normally and non-normally distributed data were performed using the Student t test and the Wilcoxon rank sum test, respectively. We used generalized estimating equations for multivariable regression analyses comparing outliers with nonoutliers while accounting for repeated measures nested within individual hospitals. These multivariable models were adjusted for secular trends by including the years analyzed. Analyses were performed with Stata version 11 (StataCorp, College Station, Texas). Institutional review board approval was waived because all analyses were performed using aggregate publicly reported data.
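
The published analysis was run in Stata version 11; purely to illustrate the structure of such a model, a roughly analogous specification in Python with statsmodels is sketched below. The hospital-year data frame, its column names (hospital, year, post_outlier, expected_mortality), and all values are hypothetical stand-ins, not the Mass-DAC data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical hospital-year panel standing in for the publicly reported summaries
# (illustrative values only; column names are assumptions, not registry fields).
rng = np.random.default_rng(0)
rows = []
for hospital in ["A", "B", "C", "D", "E", "F"]:
    for year in range(2003, 2011):
        post_outlier = int(hospital == "A" and year >= 2005)  # flagged in 2005, treated as an outlier thereafter
        expected_mortality = (
            1.6
            - 0.06 * (year - 2003)   # statewide secular decline
            - 0.25 * post_outlier    # additional drop after outlier identification
            + rng.normal(0, 0.1)     # hospital-year noise
        )
        rows.append((hospital, year, post_outlier, expected_mortality))
df = pd.DataFrame(rows, columns=["hospital", "year", "post_outlier", "expected_mortality"])

# GEE with an exchangeable working correlation accounts for repeated measures within
# hospitals; the year term absorbs the statewide trend, so the post_outlier coefficient
# estimates the change in expected mortality associated with outlier identification.
model = smf.gee(
    "expected_mortality ~ post_outlier + year",
    groups="hospital",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())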

Results

Baseline characteristics.

Twenty-four Massachusetts hospitals performed 116,227 PCI procedures between 2003 and 2010. The prevalence-weighted mean expected mortality rate for all PCI cases during the study period was 1.38 ± 0.36% (5.3 ± 1.96% for all SOS patients, 0.58 ± 0.19% for all non-SOS patients).

Outlier hospitals were larger on average than nonoutlier hospitals (p = 0.03) and performed significantly more PCI procedures per year (192 ± 80 vs. 112 ± 76 SOS cases and 1,163 ± 200 vs. 780 ± 408 non-SOS cases, respectively; both p < 0.01) (Table 1).

Table 1.

Hospital-Based Differences Between Outlier and Nonoutlier Institutions

                                                          Outlier (n = 4)   Nonoutlier (n = 20)   p Value
Mean no. of inpatient beds                                590 ± 341         334 ± 173             0.03
Average expected mortality, all PCIs (%)                  1.08 ± 0.23*      1.58 ± 0.29           <0.01
Average expected mortality, shock or STEMI (%)            5.22 ± 1.28*      5.31 ± 2.02           0.87
Average expected mortality, not shock and not STEMI (%)   0.47 ± 0.18*      0.60 ± 0.18           <0.01
Average shock or STEMI case volume/yr                     192 ± 80          112 ± 76              <0.01
Average not shock, not STEMI case volume/yr               1,163 ± 200       780 ± 408             <0.01
Cardiothoracic surgery backup present, n (%)              4 (100)           10 (50)               <0.01

Values are mean ± SD or n (%).

* Mean since outlier status identification.

PCIs = percutaneous coronary interventions; STEMI = ST-segment elevation myocardial infarction.

Changes in expected mortality rate of PCI patients.

On average, after hospitals were labeled as negative outliers, the expected mortality rate of their PCI patients was significantly lower than at nonoutlier institutions (1.08 ± 0.23% vs. 1.58 ± 0.29%, p < 0.01), suggesting the average PCI patient at outlier institutions was less severely ill. Specifically, outlier institutions had significantly lower rates of expected mortality among non-SOS patients compared with non-SOS patients at nonoutlier institutions (0.47 ± 0.18% vs. 0.60 ± 0.18%, p < 0.01), whereas the illness severity of SOS patients, as reflected in their expected mortality rates, did not appear to differ significantly between the 2 groups (5.22 ± 1.28% vs. 5.31 ± 2.02%, p = 0.87).

When post-identification expected mortality rates at outlier institutions were compared with the same institutions’ pre-identification rates, even greater differences were seen in both SOS and non-SOS expected mortality rates: 7.49 ± 3.47% (pre-) vs. 5.22 ± 1.78% (post-), p = 0.03; and 0.71 ± 0.18% (pre-) vs. 0.47 ± 0.18% (post-), p < 0.01, respectively.

After adjusting for temporal trends across the state, there was a significant 18% relative reduction (or an absolute 0.25% reduction) in predicted mortality among PCI patients at hospitals after public identification as an outlier compared with nonoutliers (95% confidence interval [CI]: −0.46 to −0.04%, p = 0.021) (Fig. 1). Furthermore, there was a 37% relative reduction (0.51% absolute decrease, or 0.06% per year decrease) in the predicted mortality rates of all PCI patients in Massachusetts attributable to secular changes since the onset of public reporting (95% CI: −0.83 to −0.20, p = 0.002).
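
For orientation, the relative figures are consistent with dividing the absolute changes by the statewide prevalence-weighted mean of 1.38% (our reading of the reference value; the denominator is not stated explicitly), with the per-year figure spread over the 8 reporting years (2003 to 2010):

$$\frac{0.25\%}{1.38\%} \approx 18\%, \qquad \frac{0.51\%}{1.38\%} \approx 37\%, \qquad \frac{0.51\%}{8\ \mathrm{years}} \approx 0.06\%\ \mathrm{per\ year}$$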

Figure 1. Expected Mortality Rate After Percutaneous Coronary Intervention at Outlier Versus Nonoutlier Institutions.

Mean prevalence-weighted in-hospital expected mortality rate per year for outlier and nonoutlier hospitals. Error bars represent SDs. PCI = percutaneous coronary intervention.

Changes in observed mortality rate of PCI patients.

Of note, the changes in expected mortality rates calculated from patient risk profiles paralleled the actual observed in-hospital mortality rates after PCI over the study period. The overall observed statewide mortality rates significantly decreased from 1.70% in 2003 to 1.34% in 2010 (p = 0.02 for trend) (Fig. 2).

Figure 2. Observed Percutaneous Coronary Intervention Mortality Rate per Year in Massachusetts.

Statewide, per-year unadjusted observed in-hospital mortality rates after percutaneous coronary intervention for all patients at nonfederal hospitals since the inception of the public reporting process in Massachusetts. Shown are the observed mortality rate, the best-fit trend, and the 95% confidence interval (CI) for the fit.

CABG surgery among outlier hospitals.

All centers ultimately identified as outlier institutions also offered CABG surgery throughout the study period. Since the inception of public reporting of PCI outcomes in Massachusetts, the average expected 30-day mortality rate after CABG surgery at these 4 institutions decreased from 2.50 ± 0.39% to 1.23 ± 0.03% (p = 0.01 for trend) (Fig. 3), suggesting that the decrement in the average illness severity of the PCI population at these outlier institutions is not likely due to redirecting their most severely ill patients toward an operative reperfusion strategy.

Figure 3. Expected Mortality Rate After CABG Surgery at Outlier Institutions.

Mean expected mortality after coronary artery bypass graft (CABG) surgery at the 4 institutions labeled as outliers for their percutaneous coronary intervention outcomes since the inception of Massachusetts public reporting. Error bars represent SDs.

Discussion

Using expected mortality rates for each hospital as a surrogate for the case mix of its PCI population, we found that the aggregate illness severity of PCI patients at institutions previously labeled as negative outliers was significantly lower than contemporaneous measures at other Massachusetts institutions that had not been so labeled. This suggests that risk-aversive behaviors among PCI operators at outlier institutions may be an unintended consequence of public reporting in Massachusetts. Concomitantly, there was a significant temporal trend toward a lower average illness severity for PCI patients across all hospitals in the state since the inception of public reporting of PCI outcomes; whether this was a result of public reporting or simply reflective of larger national trends is unknown, although previous work suggests that this may be a consequence of the public reporting process (9).

The mechanism for the decrease in expected in-hospital mortality rate among PCI patients at outlier institutions is unclear. An improvement in PCI performance or quality may improve observed mortality rates after PCI, but it should not affect the expected mortality rates used for this analysis, because these are calculated from patients’ presentation characteristics. It is therefore likely that public reporting leads some PCI operators to avoid those patients whom they perceive to be at the highest risk of adverse outcomes and hence most likely to negatively affect their publicly reported performance. In fact, we suspect that the mechanism for the decreased average illness severity of the PCI case mix at outlier hospitals is avoidance of PCI altogether in the most severely ill SOS patients.

Although we did not observe a difference in the expected risk of SOS patients who underwent PCI after outlier status identification, our analysis cannot account for patients who might qualify for PCI but were no longer offered this therapy. Avoiding interventions in the most severely ill SOS patients who might otherwise have indications for PCI would decrease that institution’s aggregate expected mortality rate by relatively increasing the proportion of lower risk, nonshock, non-STEMI patients among the total PCI population. This hypothesis is consistent with the 30% reduction in STEMI patients with shock who underwent PCI after implementation of public reporting of PCI outcomes in New York State between 1997 and 2003 (1) and is in accord with a recent analysis suggesting that, compared with states without public reporting, states with PCI outcome reporting (including Massachusetts) demonstrate the most significant decrement in PCI application among patients with STEMI or shock (9). Nevertheless, an analysis of all PCI-eligible patients in Massachusetts is required to further evaluate our findings regarding the effects of outlier status on PCI case mix.

Interestingly, we also found that the average illness severity of the non-SOS patients, a generally lower risk group, significantly decreased after outlier status identification. It is unclear why statistically measurable changes appeared in this group after outlier status identification but not in the SOS cohort; it may be that the decision to provide PCI is generally more discretionary in a lower risk cohort than in the SOS group, or it may be related statistically to the far greater volume of nonshock, non-STEMI patients undergoing PCI.

Study limitations.

First, our analyses were limited to publicly available per-hospital data, and thus specific patient-level information was not incorporated. Second, we were unable to account directly for differences in risk factor documentation. Previous experience suggests that more rigorous risk profile documentation after public reporting, sometimes referred to as up-coding, may be common (10). Up-coding has the potential to falsely inflate predicted patient mortality rates and therefore dilute any quantifiable change in risk aversion after outlier status identification; it would thus have biased this analysis against finding an even greater effect. Additionally, as noted previously, our data cannot account for patients who might have qualified for PCI during the study period but did not receive it as a result of risk-aversive behavior, because those subjects are not recorded in this PCI-based dataset. Finally, we cannot account for the exact timing of institutional notification regarding public identification as an outlier. We chose to stratify institutions as outliers based on the year for which they were cited, but these institutions may not have been alerted to the change in their public reporting status until many months later. Classifying institutions as outliers before they were actually alerted to their status change would also be expected to bias our results toward the null.

Conclusions

Taken as a whole, these data suggest that the public reporting of in-hospital mortality after PCI and the practice of public identification of hospitals as negative outliers may increase risk avoidance in a manner inconsistent with best practices. Arguably, an aversion to performing PCI in the sickest patients at poorly performing hospitals is, in fact, the intended consequence of public reporting of PCI outcomes. However, the hospitals identified as negative outliers in Massachusetts were larger centers with greater average procedure volumes, features classically associated with superior technical performance (11–13). Additionally, we did not observe a reciprocal increase in the aggregate expected mortality of those institutions’ CABG patients or in PCI patients at nonoutlier institutions as one might expect to find if the most severely ill patients were being redirected from “poorer performing” outlier hospitals’ catheterization laboratories to their operating rooms for CABG or to “better performing” nonoutlier hospitals for PCI.

Interestingly, in the era of public reporting, the severity of illness of all PCI patients has diminished despite a contemporaneous movement toward PCI “appropriateness,” which generally encourages reserving PCI for higher acuity and more severely symptomatic patients (14). During this period, both the expected and observed in-hospital mortality rates after PCI declined in Massachusetts, although the relative contributions of quality improvement and risk aversion to this trend remain unclear. Further studies on publicly labeling institutions as negative outliers based on in-hospital PCI mortality rate are necessary to assess its impact on operator behavior and case-mix selection.


Abbreviations and Acronyms

CABG = coronary artery bypass graft
Mass-DAC = Massachusetts Data Analysis Center
PCI = percutaneous coronary intervention
SMIR = standardized expected mortality incidence rate
SOS = shock or ST-segment elevation myocardial infarction
STEMI = ST-segment elevation myocardial infarction

Footnotes

JACC: CARDIOVASCULAR INTERVENTIONS CME

This article has been selected as this issue’s CME activity, available online at http://interventions.onlinejacc.org/ by selecting the CME tab on the top navigation bar.

Accreditation and Designation Statement

The American College of Cardiology Foundation (ACCF) is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. The ACCF designates this Journal-based CME activity for a maximum of 1 AMA PRA Category 1 Credit(s)™. Physicians should only claim credit commensurate with the extent of their participation in the activity.

Method of Participation and Receipt of CME Certificate

To obtain credit for this CME activity, you must:

1. Be an ACC member or JACC: Cardiovascular Interventions subscriber.

2. Carefully read the CME-designated article available online and in this issue of the journal.

3. Answer the post-test questions. At least 2 out of the 3 questions provided must be answered correctly to obtain CME credit.

4. Complete a brief evaluation.

5. Claim your CME credit and receive your certificate electronically by following the instructions given at the conclusion of the activity.

CME Objective for This Article: To recognize that public reporting of hospitals’ PCI outcomes may lead to operator risk aversion and avoidance of PCI in the sickest patients.

CME Editor Disclosure: JACC: Cardiovascular Interventions CME Editor Habib Samady, MB, ChB, FACC, has research grants from the Wallace H. Coulter Foundation, Volcano Corp., St. Jude Medical, Forrest Pharmaceuticals Inc., and Pfizer Inc.

Author Disclosure: The authors have reported that they have no relationships relevant to the contents of this paper to disclose.

Medium of Participation: Print (article only); online (article and quiz).

CME Term of Approval:

Issue Date: June 2013

Expiration Date: May 31, 2014

Data previously presented as a Best Poster finalist at the Scientific Sessions of the American College of Cardiology, 2012, Chicago, Illinois.

APPENDIX

For supplemental tables, please see the online version of this article.

REFERENCES

1. Resnic FS, Welt FGP. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol 2009;53:825–30.
2. Ettinger WH, Hylka SM, Phillips RA, Harrison LH, Cyr JA, Sussman AJ. When things go wrong: the impact of being a statistical outlier in publicly reported coronary artery bypass graft surgery mortality data. Am J Med Qual 2008;23:90–5.
3. Peterson ED. The need for “compassionate provider profiling”: refining risk assessment for percutaneous coronary intervention. J Am Coll Cardiol 2011;57:912–3.
4. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010;340:c2016.
5. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA 2005;293:1239–44.
6. Resnic FS, Normand S-LT, Piemonte TC, et al. Improvement in mortality risk prediction after percutaneous coronary intervention through the addition of a “compassionate use” variable to the National Cardiovascular Data Registry CathPCI Dataset: a study from the Massachusetts Angioplasty Registry. J Am Coll Cardiol 2011;57:904–11.
7. Hochman JS, Sleeper LA, Webb JG, et al. Early revascularization in acute myocardial infarction complicated by cardiogenic shock. SHOCK Investigators: Should We Emergently Revascularize Occluded Coronaries for Cardiogenic Shock. N Engl J Med 1999;341:625–34.
8. Public Reports: Percutaneous Coronary Intervention Cohort. Available at: http://www.massdac.org/reports/pci.html. Accessed May 13, 2012.
9. Joynt KE, Blumenthal DM, Orav EJ, Resnic FS, Jha AK. Association of public reporting for percutaneous coronary intervention with utilization and outcomes among Medicare beneficiaries with acute myocardial infarction. JAMA 2012;308:1460–8.
10. Hawkes N. Patient coding and the ratings game. BMJ 2010;340:c2153.
11. Panageas KS, Schrag D, Riedel E, Bach PB, Begg CB. The effect of clustering of outcomes on the association of procedure volume and surgical outcomes. Ann Intern Med 2003;139:658–65.
12. Katz JN, Losina E, Barrett J, et al. Association between hospital and surgeon procedure volume and outcomes of total hip replacement in the United States Medicare population. J Bone Joint Surg Am 2001;83A:1622–9.
13. Harrison EM, O’Neill S, Meurs TS, et al. Hospital volume and patient outcomes after cholecystectomy in Scotland: retrospective, national population based study. BMJ 2012;344:e3330.
14. Chan PS, Patel MR, Klein LW, et al. Appropriateness of percutaneous coronary intervention. JAMA 2011;306:53–61.
