Information on the performance of physicians and hospitals is increasingly used by health authorities to monitor and improve health care. Two general types of health care assessment exist: those that measure outcome and those that measure process. Examples of outcome-based reporting include the mortality rates of US cardiac surgeons and hospitals following coronary artery bypass graft (CABG) surgery.2 Outcome-based reporting may seem to provide objective measures of performance. However, given the many variables that affect clinical outcomes, most importantly the differences in types of patients and referral patterns (‘case-mix’) among hospitals, these parameters may nevertheless be misleading.3 Process-based assessments, frequently called quality indicators, report on rates of medical interventions, such as screening tests and medication use, that are assumed to be related to outcomes. For example, the US Centers for Medicare & Medicaid Services (CMS) collects data on aspirin use within 24 hours of arrival in coronary patients, β-blocker use within 24 hours of arrival, angiotensin-converting enzyme inhibitor use for left ventricular dysfunction, aspirin prescribed at discharge, and β-blocker prescribed at discharge.
Given the widespread international initiation of quality reporting systems, general agreement apparently exists among administrators on the usefulness of this approach. Similarly, public reporting on physician and hospital performance, e.g. via the internet, is increasingly used as a means of improving the quality of care. However, a positive impact of these systems on the quality of care has not been demonstrated. Similarly, and equally importantly, the potential unintended and negative consequences of such systems remain largely unexplored. Both physicians and health centres may be tempted to change practice patterns to improve their outcome parameters. High-risk patients may be denied surgery, and if rehospitalisation is used as a performance indicator in heart failure patients, such patients may be managed outside the hospital longer than is medically optimal. Consequently, the quality of care may in fact be reduced by these systems.1 Furthermore, authorities may draw the wrong conclusions, and public perception of centres and physicians may be erroneously favourable or unfavourable.
In addition to these fundamental issues, many hurdles remain. For instance, any assessment system will require careful and uniform definition of diagnoses and careful classification of patients according to these definitions. In the Netherlands, these requirements are not fulfilled.
Against this background, the paper by Oerlemans et al. in this issue of the Netherlands Heart Journal is of particular interest.4 It describes an observational study of outcome in 547 new patients visiting the cardiology outpatient clinics of three teaching hospitals in the Netherlands. The authors used diagnosis-treatment combinations (‘DBCs’), as currently used in our national system of health insurance, to classify patients. The main finding of the study is that one-year mortality following a first outpatient cardiology contact was 6.4% on average, ranging from 5.0 to 7.3% across the centres. Differences between the three centres were not statistically significant.
The purpose of the study, as stated by the authors, was to evaluate the new performance indicator ‘one-year mortality after a first visit to a cardiology outpatient clinic’. However, if one-year mortality is to be evaluated as a diagnostic tool for performance (or quality of care), it should be compared with a standard. This was not done, and the question therefore cannot be answered. Alternatively, the findings may be used to compare the participating centres. Do the findings imply that the three participating hospitals provide equal quality of care? Do they demonstrate that this quality was high? Neither of these questions can be answered. First, the use of diagnosis-treatment combinations to classify patients is likely unreliable, and no check against source data was performed. The system is not designed for this purpose, and administrative considerations may influence the selection of certain categories. This may contribute to the large differences found in several diagnostic categories: stable angina was 210% more prevalent in hospital 3 than in hospital 1. Alternatively, these differences in diagnostic categories may reflect differences in case-mix. The information provided (table 1) is limited: there is no information on valvular heart disease, no breakdown of arrhythmias other than atrial fibrillation, and no information on inbound and outbound referral patterns. Second, the statistical power of the comparison is low. A 46% relative difference in mortality between the best and the poorest performer (5.0 vs. 7.3%), which may be clinically highly relevant, is not statistically significant. In fact, given the confidence intervals, the true difference may be as much as 5.5-fold (2.0 vs. 11.0%).
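As a simple back-of-the-envelope check, these relative differences follow directly from the mortality percentages quoted above; the assumption of roughly equal centre sizes (about 547/3 ≈ 182 patients each) is ours and is not stated in the paper:

\[
\frac{7.3\%}{5.0\%} \approx 1.46 \quad \text{(the observed 46\% relative difference between the highest and lowest one-year mortality)}
\]
\[
\frac{11.0\%}{2.0\%} = 5.5 \quad \text{(a 5.5-fold difference if the true rates lie at the outer limits of the confidence intervals)}
\]

With roughly 182 patients per centre, the expected number of deaths per centre is only about \(0.05 \times 182 \approx 9\) to \(0.073 \times 182 \approx 13\), which illustrates why even a clinically relevant relative difference of this magnitude fails to reach statistical significance in a cohort of this size.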
The authors are to be commended for their effort to explore these issues. Their paper highlights the serious limitations of one-year mortality as a performance indicator. In spite of including three apparently similar, adequately sized hospitals, and in spite of a large amount of careful data collection, no reliable explanation for the differences in mortality can be derived. Clinical reality is complex, and reducing all of this information to a single blunt parameter such as mortality is not currently a sensible way to assess performance. Therefore, this parameter should not be used to compare quality across hospitals until reliable background information becomes available. At the same time, both the positive and the negative impact of quality assessment systems in general on the behaviour of physicians and health care centres should be explored. Until a clear understanding of these issues is developed and a validated system is in place, no consequences should be attached to such findings, either by the public or by the authorities.
References
1. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA 2005;293:1239-44.
2. New York State Department of Health. Coronary Artery Bypass Surgery in New York State, 1990-1992. Albany: New York State Dept of Health; 1993.
3. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007;7:91.
4. Oerlemans MIFJ, Lok DJA, Cornel JH, Mosterd A. One-year mortality after a first visit to a cardiology outpatient clinic: a useful performance indicator? Neth Heart J 2009;17:52-5.