Quality & Safety in Health Care. 2004 Feb;13(1):32–39. doi: 10.1136/qshc.2002.003996

Are diagnosis specific outcome indicators based on administrative data useful in assessing quality of hospital care?

I Scott 1, D Youlden 1, M Coory 1
PMCID: PMC1758063  PMID: 14757797

Abstract

Background: Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case mix-related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state had significant risk adjusted systematic (or special cause) variation (SV) suggesting differences in quality of care. For those that did, we determined whether SV persists within hospital peer groups, whether indicator results correlate at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to those of the best performing 20% of hospitals.

Methods: All patients admitted during a 12 month period to 180 acute care hospitals in Queensland, Australia with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30 day readmissions. Regression models produced standardised, risk adjusted diagnosis specific outcome event ratios for each hospital. Systematic and random variation in ratio distributions for each indicator were then apportioned using hierarchical statistical models.
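The two-step logic of the Methods (standardised event ratios per hospital, then partitioning between-hospital variation into systematic and random components) can be sketched as follows. This is a minimal illustration, not the authors' actual models: the hospital data are invented, and the variance partition uses a simple method-of-moments estimate (total variance of the observed/expected ratios minus the average Poisson sampling variance) rather than the full hierarchical regression models used in the study.

```python
# Hypothetical sketch of the two analytic steps described in the Methods.
# Each hospital is summarised as (observed_events, expected_events), where
# expected events would come from a risk adjustment regression model.

def standardised_ratios(hospitals):
    """Standardised outcome event ratio (observed / expected) per hospital."""
    return [obs / exp for obs, exp in hospitals]

def systematic_variance(hospitals):
    """Method-of-moments estimate of systematic (special cause) variation:
    total between-hospital variance of the ratios minus the average
    sampling variance (under a Poisson model, var of O/E is about 1/E).
    A result near zero suggests the spread is mostly random noise."""
    ratios = standardised_ratios(hospitals)
    n = len(ratios)
    mean = sum(ratios) / n
    total_var = sum((r - mean) ** 2 for r in ratios) / (n - 1)
    sampling_var = sum(1.0 / exp for _, exp in hospitals) / n
    return max(total_var - sampling_var, 0.0)

# Invented example data: five hospitals' observed and expected event counts.
hospitals = [(12, 10.0), (8, 9.5), (20, 11.0), (5, 9.0), (15, 12.5)]
print([round(r, 2) for r in standardised_ratios(hospitals)])
print(round(systematic_variance(hospitals), 3))
```

The key design point mirrored here is that a raw spread of ratios overstates true inter-hospital differences; only the variance left after subtracting the expected sampling noise is evidence of systematic variation in care.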

Results: Only five of 12 (42%) diagnosis-outcome indicators showed significant SV across all hospitals (long stays and same diagnosis readmissions for heart failure; in-hospital deaths and same diagnosis readmissions for AMI; and in-hospital deaths for stroke). Significant SV was only seen for two indicators within hospital peer groups (same diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided.
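The "best performer" benchmark calculation reported in the Results can be illustrated with a short sketch. All numbers and the ranking rule below are invented for illustration; the study used risk-adjusted ratios, whereas this toy version ranks hospitals by crude event rate.

```python
# Hypothetical illustration of an achievable-benchmark calculation:
# how many adverse events would be avoided if every hospital achieved the
# pooled event rate of the best-performing 20% of hospitals.

def avoidable_events(hospitals):
    """hospitals: list of (events, patients) per hospital.
    Returns the number of events avoided if all hospitals achieved the
    pooled rate of the lowest-rate 20% of hospitals."""
    ranked = sorted(hospitals, key=lambda h: h[0] / h[1])
    k = max(1, len(ranked) // 5)          # best-performing 20%
    best = ranked[:k]
    bench_rate = sum(e for e, _ in best) / sum(p for _, p in best)
    total_events = sum(e for e, _ in hospitals)
    expected_at_benchmark = bench_rate * sum(p for _, p in hospitals)
    return max(total_events - expected_at_benchmark, 0.0)

# Invented example data: (adverse events, patients) for five hospitals.
hospitals = [(5, 100), (12, 110), (9, 95), (20, 120), (7, 105)]
print(round(avoidable_events(hospitals), 1))
```

Pooling the benchmark group's events and patients, rather than averaging their rates, keeps small hospitals from dominating the benchmark, which is the usual rationale in achievable-benchmark-of-care methods.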

Conclusions: Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk adjusted SV. Validated indicators allow quantification of realisable outcome benefits if all hospitals achieved best performer levels. The overall level of quality of care within single institutions cannot be inferred from the results of one or a few indicators.



