Public reporting of clinical outcomes is now common. Despite the considerable problems summarized in an earlier paper,1 governmental and commercial entities offer public reporting of hospital and physician performance over the internet. Below, we review common sources of public information on cardiovascular care in the USA.
Perhaps the best-known governmental reporting website is Hospital Compare (https://www.medicare.gov/hospitalcompare/search.html), operated by the Centers for Medicare and Medicaid Services (CMS). All hospitals caring for CMS patients are expected to participate. It includes 1) structural measures, such as participation in national clinical databases; 2) evaluation of patient experience, using the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey; 3) timeliness and effectiveness of care; 4) complications; 5) 30-day readmission rates for certain diagnoses; 6) 30-day mortality rates for certain diagnostic groups; 7) efficient use of outpatient imaging; and 8) payments, including payments for care after hospital discharge for some conditions, such as congestive heart failure (CHF) or myocardial infarction (MI). The majority of the data are abstracted from CMS databases, but some come from other government sources (e.g., infection data may come from the Centers for Disease Control and Prevention) or commercial agencies (e.g., HCAHPS data originate with Press Ganey or other authorized vendors). Hospitals are rated on 64 measures (including an overall measure) and are assigned one to five stars, 5 being best. The methodology used to calculate the star ratings is complex but is described well on CMS websites (https://www.qualitynet.org/). Hospital Compare is intuitive and easy to use, but there is little instruction on how to use these data meaningfully.
The Hospital Compare model was adapted for reporting on individual physician performance in Physician Compare (https://www.medicare.gov/physiciancompare/). This registry utilizes an existing governmental Healthcare Provider Directory to sort data by provider specialty. In this way, a consumer can access information specifically about, say, electrophysiologists, rather than all cardiologists. Physician Compare was launched in 2010, initially providing only demographic information (practice locations, hospital affiliations, languages spoken, etc.), but in 2014 quality and patient experience data were incorporated. Physicians who are members of group practices may have aggregate data from the entire practice reported.
Individual state governments have begun to report performance within their borders. New York State (NYS) rates hospitals on timely and effective care, complications, in-hospital and 30-day mortality, hospital-acquired infections, patient satisfaction, and readmission within 30 days in separate reports on cardiac surgery and PCI (https://profiles.health.ny.gov). Care is rated as high, average, or poor relative to the median performance of the entire state. Data come from several sources: state law requires that cardiac surgery outcome data be reported to the NYS Department of Health Statewide Planning and Research Cooperative System (SPARCS) database and that PCI data be reported to the NYS PCI Reporting System. Massachusetts evaluates mortality after coronary surgery or PCI (http://www.massdac.org/fiscal-year-2014-annual-reports/) using data nearly identical to those collected by the STS and ACCF NCDR (see below), which facilitates data collection for both state and national reporting. Similar performance reports are available for some other states, including populous states such as California (http://calhospitalcompare.org/), Illinois (http://www.healthcarereportcard.illinois.gov/), and Pennsylvania (http://www.phc4.org/), and less populous states such as Washington (http://www.coap.org/) and Wisconsin (https://www.dhs.wisconsin.gov/). Data may be abstracted from other quality initiatives, such as the CMS-driven Surgical Care Improvement Project (SCIP) or national registry reports, or may be collected separately by state agencies. Although all use some form of risk adjustment, their methods are generally not explained.
Certain non-governmental registries have taken a lead role in public reporting. The Society of Thoracic Surgeons (STS) reports on adult and pediatric cardiac surgery (http://www.sts.org/quality-research-patient-safety/sts-public-reporting-online). Hospital participation is voluntary. For adult surgery, there is reporting on coronary artery bypass surgery (CABG), aortic valve replacement (AVR) and combined CABG+AVR. A rating system of 1 to 3 stars is used, 3 being best. Scores focus on outcomes (mortality and certain morbidities) and processes (e.g., use of the internal mammary artery and appropriate perioperative medications). Scores are available both for hospitals and surgical groups; individual surgeon data are not available. The website covers the rationale for public reporting as well as limitations, and the scoring system is explained in non-technical terms.
The American College of Cardiology Foundation National Cardiovascular Data Registry (ACCF NCDR) has recently begun publicly reporting hospital-level (but not physician-level) quality data from the CathPCI and ICD registries. Participation is voluntary, and both reports focus on use of appropriate discharge medications. This is an initial effort which will be expanded (http://cvquality.acc.org/NCDR-Home/Public-Reporting.aspx). Compliance with guideline-directed care is reflected as 1 to 4 stars, 4 being best (https://www.cardiosmart.org/Heart-Basics/Find-Your-Heart-a-Home). The methods are explained in non-technical terms on the website, while a published paper from ACCF explains the approach in greater detail.2
Finally, various commercial companies also provide public reporting. Some are non-profit, and most rely heavily on CMS claims data. The widely known Consumer Reports (http://www.consumerreports.org/health/) and US News and World Report ratings (http://health.usnews.com/best-hospitals/rankings) rank hospitals and specialties nationally. Consumer Reports also publishes, in collaboration with the STS, the STS three-star ratings of surgical groups (http://www.consumerreports.org/health/doctors-hospitals/surgeon-ratings/ratings-of-bypass-surgeons.htm). Hospitals ranked highly by US News and World Report tend to use their rankings in marketing campaigns; despite the high consumer visibility of these rankings, it is not clear how they could be used to improve quality. Hospital systems are also compared by HealthGrades and the Leapfrog Group. HealthGrades calculates risk-adjusted expected event rates for patients based on their principal and secondary diagnoses (using ICD-9 or DRG assignments) and compares them with observed rates. It also provides individual physician reviews based on patient satisfaction surveys. Leapfrog uses a combination of CMS data and proprietary survey instrument reports (most of which capture data already being submitted to other governmental agencies, such as the CDC National Healthcare Safety Network) and emphasizes compliance with processes and protocols endorsed by the National Quality Forum (NQF). Non-technical methodologic explanations are available online for both companies (https://www.healthgrades.com/; http://www.hospitalsafetygrade.org/). We note that all of these agencies except Consumer Reports fund their analyses, in part, by allowing hospitals to display their ratings in advertisements and promotional materials, rather than through independent funding.3 There are many other physician and hospital grading websites.
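The HealthGrades models themselves are proprietary, but the general observed-versus-expected (O/E) comparison that underlies such risk-adjusted ratings can be sketched in a few lines; every number and variable name below is hypothetical and for illustration only:

```python
# Minimal sketch of an observed-vs-expected (O/E) outcome comparison,
# the general idea behind risk-adjusted ratings. All figures are
# hypothetical; real systems estimate each patient's expected risk
# with regression models built on diagnosis (ICD/DRG) data.

# Hypothetical per-patient expected mortality risks from a risk model
expected_risks = [0.02, 0.10, 0.05, 0.30, 0.08]

# Observed outcomes for the same patients (1 = event, 0 = no event)
observed = [0, 0, 0, 1, 0]

expected_events = sum(expected_risks)  # 0.55 expected events
observed_events = sum(observed)        # 1 observed event

# A ratio above 1 means more events occurred than the model predicted
oe_ratio = observed_events / expected_events
print(f"O/E ratio: {oe_ratio:.2f}")  # → O/E ratio: 1.82
```

In practice the expected risks come from fitted statistical models, and the ratio would be tested for statistical significance before a hospital is labeled better or worse than expected.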
Not surprisingly, websites have appeared to grade the grading websites. Interesting information is available from an organization called the Informed Patient Institute (http://www.informedpatientinstitute.org/index.php), which rates the groups rating hospitals, using grades from A to F, arranged by state. For example, this group finds 15 physician report card websites for California, rated from B to D, and 12 hospital report card websites, rated from B to D. Even small states can have many report card websites: Delaware has 12 physician report card websites, rated from B to D, and 8 hospital report card websites, rated from B to C. Of course, an element of subjectivity may be inevitable in these consumer-oriented judgements, and it is not clear that these efforts have proven useful. A more scholarly approach to understanding rating systems is offered by Austin et al.,3 who assessed the consistency of quality judgements made by 4 non-governmental rating agencies (those listed in the paragraph above). Only 10% of 844 hospitals rated as high performers by 1 of the 4 rating systems were rated as high performers by any other rating system. No hospital was identified as a high performer by all 4 rating systems, and only 3 were rated highly by 3 rating systems. This heterogeneity reflects methodologic differences rooted in differing areas of focus (safety, quality improvement efforts, high-risk patient outcomes, etc.) and differing hospital inclusion/exclusion criteria. We conclude that the current lack of standard approaches makes comparisons between grading agencies meaningless.
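The kind of cross-system agreement analysis reported by Austin et al. can be illustrated with a toy computation; the hospital identifiers and high-performer lists below are invented:

```python
# Toy illustration of agreement between hospital rating systems, in the
# spirit of Austin et al. All hospital names and ratings are invented.
from collections import Counter

high_performers = {
    "System A": {"H1", "H2", "H3"},
    "System B": {"H3", "H4"},
    "System C": {"H5", "H6"},
    "System D": {"H3", "H6"},
}

# For each hospital, count how many systems call it a high performer
counts = Counter(h for rated in high_performers.values() for h in rated)

rated_by_all = [h for h, n in counts.items() if n == len(high_performers)]
rated_by_one = sorted(h for h, n in counts.items() if n == 1)

print(rated_by_all)  # → [] : no hospital rated highly by every system
print(rated_by_one)  # → ['H1', 'H2', 'H4', 'H5']
```

Even in this tiny example, most "high performers" are flagged by only a single system, mirroring the heterogeneity the authors found in real ratings.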
Public reporting websites all attempt to simplify complex data and make it digestible to consumers, and many wish to help hospitals and physicians improve. Unfortunately, the methodologies behind the assessments are quite complex, have significant limitations, and are often not fully disclosed. While some consumers believe that they are getting useful information for decision making, and hospitals use public reports to drive key decisions, it is not clear that the information is, or even can be, validated. Furthermore, public reporting has not, to date, been shown to improve outcomes.4, 5 Is transparency about medical care defensible without evidence of benefit? Does information of uncertain validity satisfy a standard of transparency? We will explore these issues in the final article in this series.
Acknowledgments
Funded in part by an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number U54-GM104941 (PI: Binder-Macleod).
Footnotes
Disclosures:
Dr. Weintraub has nothing to disclose.
Dr. Garratt has nothing to disclose.
References
- 1.Weintraub WS, Garratt KN. Challenges in Risk Adjustment for Hospital and Provider Outcomes Assessment. Circulation. 2017;135:317–319. doi: 10.1161/CIRCULATIONAHA.116.025653.
- 2.Dehmer GJ, Jennings J, Madden RA, Malenka DJ, Masoudi FA, McKay CR, Ness DL, Rao SV, Resnic FS, Ring ME, Rumsfeld JS, Shelton ME, Simanowith MC, Slattery LE, Weintraub WS, Lovett A, Normand SL. The National Cardiovascular Data Registry Voluntary Public Reporting Program: An Interim Report From the NCDR Public Reporting Advisory Group. J Am Coll Cardiol. 2016;67:205–215. doi: 10.1016/j.jacc.2015.11.001.
- 3.Austin JM, Jha AK, Romano PS, Singer SJ, Vogus TJ, Wachter RM, Pronovost PJ. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Aff (Millwood). 2015;34:423–430. doi: 10.1377/hlthaff.2014.0201.
- 4.James J. Health Policy Brief: public reporting on quality and costs. Health Affairs.org. 2012. http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=65.
- 5.DeVore AD, Hammill BG, Hardy NC, Eapen ZJ, Peterson ED, Hernandez AF. Has Public Reporting of Hospital Readmission Rates Affected Patient Outcomes?: Analysis of Medicare Claims Data. J Am Coll Cardiol. 2016;67:963–972. doi: 10.1016/j.jacc.2015.12.037.
