The U.S. healthcare system often fails to deliver safe, effective, high-value, and patient-centered care (1). In an effort to improve the quality of healthcare, policymakers and payers have rapidly expanded their use of performance measurement programs during the last 2 decades. Examples include regional and national pay-for-performance and value-based purchasing programs, as well as public reporting programs such as Medicare’s Hospital Compare. A key step in ensuring such programs are improving the quality of care is rigorous program evaluation. By examining a program’s effectiveness, researchers can assure all stakeholders involved that an ongoing investment in the program is worthwhile. This work also informs the planning and implementation of future programs. To date, the vast majority of performance measurement programs in the United States and their subsequent evaluations have focused on outpatient or general inpatient medical and surgical care. Very few studies have examined the effect of performance measurement on care specific to the intensive care unit (ICU).
The California Hospital Assessment and Reporting Taskforce (CHART) program collected and reported quality measures for California hospitals on the website CalHospitalCompare.org between 2007 and 2011 (2). A unique feature of the CHART program was the public reporting of risk-adjusted ICU mortality rates. The original goal of CHART was to make hospital performance data publicly available so that high-quality hospitals would be rewarded by gaining market share and receiving higher reimbursement rates for the care they provide. However, the California Hospital Association withdrew support of the initiative in 2011, citing the substantial resources required for data collection and the failure of health insurance plans to reward high-performing hospitals (3). The association also felt that the newly created national public reporting program, HospitalCompare.gov, run by the Centers for Medicare & Medicaid Services (CMS), could provide an appropriate alternative source of quality measurement data, even though the national program does not require reporting of ICU mortality rates. No study evaluating the effectiveness of publicly reporting ICU mortality rates was available to inform the decision to discontinue the program.
In this month’s issue of AnnalsATS, Reineck and colleagues (pp. 57–63) evaluate the CHART program to determine whether public reporting of ICU mortality rates improved patient outcomes (4). Using Medicare claims, the authors compared mortality rates and discharge patterns among patients cared for in California ICUs in the 2 years before and after the program began. Using adjacent states or matched hospital referral regions as controls, they performed a “difference-in-differences” analysis to isolate the effect of the intervention from other regional and national trends in outcomes over the study period (5). This approach is the study’s major strength: it allowed the authors to subtract out temporal changes in outcomes in control states from those in California, increasing one’s confidence that any measured change in outcomes was a result of public reporting.
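The arithmetic behind a difference-in-differences comparison can be sketched with a few lines of code. The mortality rates below are hypothetical numbers chosen for exposition; they are not taken from Reineck and colleagues’ data.

```python
# Illustrative difference-in-differences calculation. All rates are
# hypothetical and serve only to show the logic of the estimator.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Return the difference-in-differences estimate:
    (change in the treated group) minus (change in the control group).
    The control group's change stands in for the temporal trend the
    treated group would have experienced without the intervention."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical 30-day mortality rates (%), before and after reporting began:
california_pre, california_post = 12.0, 11.0   # "treated" state
controls_pre, controls_post = 12.5, 11.6       # matched control regions

effect = did_estimate(california_pre, california_post,
                      controls_pre, controls_post)
print(f"Estimated effect of public reporting: {effect:.1f} percentage points")
```

In this made-up example, mortality fell in both groups, and nearly all of California’s improvement is explained by the shared secular trend, leaving only a small residual effect attributable to the program; this is the same logic by which the authors separated the CHART program’s effect from background trends.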
The authors found that public reporting of ICU mortality rates for California hospitals had no effect on actual hospital or 30-day mortality rates. This study is the first to demonstrate that the public reporting of an ICU quality measure had no effect on patient outcomes, and it adds to a growing body of evidence suggesting that public reporting in isolation is an ineffective way to improve patient care (6). It is not surprising that public reporting failed to improve outcomes in critically ill patients when one considers the major mechanisms through which public reporting is thought to improve care. In theory, public reporting allows patients to make more informed choices when selecting where to receive care, thus increasing the market share of high-quality hospitals. But it is difficult to imagine a critically ill patient acting on publicly reported data, negating consumer choice as a plausible mechanism. Payers in California also did not reward hospitals that had lower ICU mortality rates, which is another potentially powerful way to incentivize change. Together, these issues may have reduced the ability of the CHART program to affect ICU outcomes.
Selective transfer of critically ill patients to high-quality hospitals, based on publicly available data, is another potential mechanism by which public reporting could improve outcomes for critically ill patients (7). The authors found an increase in transfer rates of ICU patients between California hospitals in the years after the implementation of the CHART program when compared with control states. However, the decision-making surrounding the transfer of a critically ill patient is likely minimally dependent on the receiving hospital’s risk-adjusted mortality (8). Physicians may also view a hospital’s risk-adjusted mortality as inaccurate (9), making it irrelevant to any decision about transfer. Thus, it is unlikely that transfer decisions were based on the publicly reported outcomes data of the receiving hospital. An alternative explanation for the increase in transfer rates observed by the authors is that hospitals were inappropriately transferring patients likely to die, with the intent of lowering their own hospital’s mortality rate. Unintended consequences are common and important to recognize in any performance measurement program, and they often can harm the most vulnerable hospitals and patients (10, 11).
Perhaps most important, the study by Reineck and colleagues exemplifies the potential power of an empirical evaluation of a quality improvement policy. Although CHART was dissolved because of lack of payer buy-in, the study confirms that the CHART program did not have the intended effect of improving mortality, further justifying its abandonment. Moreover, these data suggest that new programs revolving around public reporting of ICU mortality may fail to provide adequate improvements in health for the investment. When program evaluation can be performed in a timely fashion, it has the ability to motivate dramatic change in regional or national quality measurement programs. For example, after learning there was no significant reduction in complications after implementation of a CMS policy to restrict coverage of bariatric surgical procedures to hospitals designated as centers of excellence for bariatric surgery, CMS dropped the policy (12).
In the ICU, using approaches similar to those in the study by Reineck and colleagues, empirical evaluation of forthcoming regional or national quality measurement policies may be equally informative. For example, the growing incidence of sepsis and interest in improving sepsis care have resulted in several policies aimed at improving care for this condition. In 2013, hospitals in New York State were required to develop and report their protocols for identification and management of patients with sepsis to the state’s department of health (13). In 2017, CMS will mandate that hospitals begin reporting adherence to the National Quality Forum severe sepsis and septic shock bundle (14). These are only two of several forthcoming regional or national programs targeting patients with critical illness that must be subjected to a similarly rigorous evaluation to ensure they are effectively improving the care of such patients without increasing harms.
Because of ongoing concerns regarding the high costs and wide variation in the quality of healthcare, the performance measurement enterprise is only likely to grow. The research community must actively participate in the development, implementation, and evaluation of performance measurement programs (15, 16). As these programs move from public reporting to pay-for-performance as their primary mechanism for incentivizing improvement, the research community must ensure they are both fair and effective. Investigators can turn to studies such as that by Reineck and colleagues as an example of how it should be done.
Footnotes
Supported by grants to M.W.S. from the National Institutes of Health (T32HL007749) and to C.R.C. from the Agency for Healthcare Research and Quality (K08HS020672).
Author disclosures are available with the text of this article at www.atsjournals.org.
References
- 1. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
- 2. Teleki S, Shannon M. In California, quality reporting at the state level is at a crossroads after hospital group pulls out. Health Aff (Millwood). 2012;31:642–646. doi: 10.1377/hlthaff.2012.0100.
- 3. Dauner CD. CHA withdraws support for CHART. Sacramento, CA: California Hospital Association; 2011 [accessed 2014 Dec 4]. Available from: http://www.calhospital.org/memo/cha-withdraws-support-chart
- 4. Reineck LA, Le TQ, Seymour CW, Barnato AE, Angus DC, Kahn JM. Effect of public reporting on intensive care unit discharge destination and outcomes. Ann Am Thorac Soc. 2014;12:57–63.
- 5. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA. 2014;312:2401–2402. doi: 10.1001/jama.2014.16153.
- 6. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148:111–123. doi: 10.7326/0003-4819-148-2-200801150-00006.
- 7. Kahn JM, Linde-Zwirble WT, Wunsch H, Barnato AE, Iwashyna TJ, Roberts MS, Lave JR, Angus DC. Potential value of regionalized intensive care for mechanically ventilated medical patients. Am J Respir Crit Care Med. 2008;177:285–291. doi: 10.1164/rccm.200708-1214OC.
- 8. Wagner J, Iwashyna TJ, Kahn JM. Reasons underlying interhospital transfers to an academic medical intensive care unit. J Crit Care. 2013;28:202–208. doi: 10.1016/j.jcrc.2012.07.027.
- 9. Casalino LP, Alexander GC, Jin L, Konetzka RT. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood). 2007;26:492–499. doi: 10.1377/hlthaff.26.2.492.
- 10. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239–1244. doi: 10.1001/jama.293.10.1239.
- 11. Sjoding MW, Cooke CR. Readmission penalties for chronic obstructive pulmonary disease will further stress hospitals caring for vulnerable patient populations. Am J Respir Crit Care Med. 2014;190:1072–1074. doi: 10.1164/rccm.201407-1345LE.
- 12. Dimick JB, Nicholas LH, Ryan AM, Thumma JR, Birkmeyer JD. Bariatric surgery complications before vs after implementation of a national policy restricting coverage to centers of excellence. JAMA. 2013;309:792–799. doi: 10.1001/jama.2013.755.
- 13. New York State Department of Health. Sepsis regulations: guidance document 405.4 (a)(4). Albany, NY: New York State Department of Health; 2013 [accessed 2014 Dec 19]. Available from: http://www.health.ny.gov/regulations/public_health_law/section/405/index.htm
- 14. Centers for Medicare and Medicaid Services (CMS), HHS. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system and fiscal year 2015 rates; quality reporting requirements for specific providers; reasonable compensation equivalents for physician services in excluded hospitals and certain teaching hospitals; provider administrative appeals and judicial review; enforcement provisions for organ transplant centers; and electronic health record (EHR) incentive program. Final rule. Fed Regist. 2014;79:49853–50536.
- 15. Kahn JM, Gould MK, Krishnan JA, Wilson KC, Au DH, Cooke CR, Douglas IS, Feemster LC, Mularski RA, Slatore CG, et al.; ATS Ad Hoc Committee on the Development of Performance Measures from ATS Guidelines. An official American Thoracic Society workshop report: developing performance measures from clinical practice guidelines. Ann Am Thorac Soc. 2014;11:S186–S195. doi: 10.1513/AnnalsATS.201403-106ST.
- 16. Cooke CR, Iwashyna TJ. Sepsis mandates: improving inpatient care while advancing quality improvement. JAMA. 2014;312:1397–1398. doi: 10.1001/jama.2014.11350.