In this issue of CMAJ (page 173) Koravangattu Sankaran and colleagues report on death rates among over 19 000 infants admitted to 17 Canadian neonatal intensive care units.1 Death rates ranged from about 1% to 11% of infants admitted. After adjustments for risk, this range narrowed to between 2% and 6%. As Jon Tyson and Kathleen Kennedy point out in an accompanying commentary2 (page 191), this difference in risk-adjusted absolute rates is equivalent to 1 death per 23 admissions. This is not a small number, and for expectant parents and their physicians it might be important to know the identities of the institutions.
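To spell out the arithmetic behind that figure: the reciprocal of the absolute difference in risk-adjusted death rates gives the number of admissions corresponding to one excess death, in the manner of a number needed to treat. With the rounded rates quoted above,

$$\frac{1}{r_{\text{highest}} - r_{\text{lowest}}} \approx \frac{1}{0.06 - 0.02} = 25,$$

or roughly 1 excess death per 25 admissions; the unrounded rates yield Tyson and Kennedy's slightly smaller figure of 1 in 23.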
But nowhere in the published paper1 are the death rates for individual intensive care units reported. Our request to the authors to name each of the hospitals was not granted, for 2 reasons. First, the cooperation of most of the institutions was contingent on the authors' guarantee that the published data would not be linked to specific institutions. Second, the authors were concerned that publication of the results might mislead the public and physicians, who might then act — needlessly, perhaps — to ensure that infants were referred to certain institutions and not others.
We agreed with the authors on their first point — a promise had been made without which the study might not have been done — and accepted the paper for publication without disclosure of the identity of the participating institutions. However, the premise that such disclosure would be harmful is less compelling.
The goals of the research reported in this issue are to identify differences in outcomes of care, to discover why any differences exist and, ultimately, to improve quality of care. By comparing the process and structure of health care provision in different jurisdictions or institutions (including staffing, equipment, protocols, administrative support, etc.), we can better understand differences in outcome and, eventually, help institutions and providers achieve higher quality of care and lower rates of mortality.
Ever since Florence Nightingale reported on factors associated with reduced mortality after amputations on the battlefield,3 attempts have been made to measure quality of care. Hospital death rounds, in which a case is presented and examined for lessons to be learned, are one example of continuous quality assessment that has been practised in hospitals for a very long time. But the most important recent advances in quality-of-care monitoring in medicine have been borrowed from industry. In the period immediately after World War II, W. Edwards Deming (1900–1993), an American assigned to help rebuild Japanese industry, developed a system to continuously improve the quality of industrial products.4 Appropriated in the 1980s by physicians,5,6 the technique has since been applied to health care with increasing frequency, but with mixed results.7
The heart of the Deming method is first to identify, in exquisite detail, each step in the process of producing an industrial part, say a car door. Objectives for specific quality outcomes are then set at every step and — of key importance — by the workers involved in the process, not just their managers. Those directly involved continually adjust their methods, techniques and processes, and then re-evaluate the results. Gradually, overall quality improves. More recently, quality assessment in health care has begun to adopt prospective risk-management techniques, such as human factors analysis, likewise derived from industry (particularly from quality management in commercial airline safety).8
Although there is no doubt that information on the quality of goods or services is important to manufacturers and providers, access to such information is now viewed as one of the rights of consumers. This outlook is perhaps particularly evident in the US, where medical care is “purchased” by consumers and efforts are made not only to measure quality of medical care, but also to report the results to the public. An important component of price and the “purchase decision” is quality, or the perception of quality.
It goes without saying that human beings with chest pain or low-birth-weight babies in neonatal intensive care are infinitely more complex than car doors. With the possible exception of survival or death, outcomes in medicine are subtle. Relief of pain or of dyspnea, preserved intellectual capacity and subsequent school performance are more difficult to measure than gas mileage, or the occurrence of paint scratches on the assembly line. Moreover, understanding outcomes requires adjustments for risk, which, however sophisticated, are always imperfect.
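To make concrete what such risk adjustment involves, the following is a minimal sketch, in Python, of one common approach: indirect standardization with a patient-level regression model. Everything in it is an illustrative assumption; the data are synthetic, and the predictors (birth weight and gestational age) merely stand in for whatever risk factors a real model, such as the one in the study under discussion, would include.

```python
# A minimal, illustrative sketch of indirect risk adjustment.
# Everything here is synthetic and assumed: the predictors (birth weight,
# gestational age) merely stand in for a real model's risk factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
unit = rng.integers(0, 17, n)                  # 17 hypothetical units
bw = rng.normal(2.8, 0.7, n).clip(0.5, 4.5)    # birth weight (kg)
ga = rng.normal(37, 3, n).clip(23, 42)         # gestational age (wk)

# Simulate deaths whose risk depends only on the risk factors.
logit = 4.0 - 2.0 * bw - 0.05 * ga
death = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a risk model on patient-level factors alone (unit identity excluded).
X = np.column_stack([bw, ga])
model = LogisticRegression(max_iter=1000).fit(X, death)
p_death = model.predict_proba(X)[:, 1]

# Each unit's observed deaths versus the deaths its case mix predicts.
for u in range(17):
    mask = unit == u
    observed, expected = death[mask].sum(), p_death[mask].sum()
    print(f"unit {u:2d}: observed/expected = {observed / expected:.2f}")
```

Because the simulated units do not truly differ, any spread in the observed-to-expected ratios here reflects chance and model imperfection alone, which is exactly the interpretive trap raised below.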
Recognizing these difficulties, investigators (and providers and institutions) have in general been reluctant to share the results of their quality assessments with the public. Will the public understand that apparent quality differences, including variation in death rates, may simply reflect differences (and inadequacies) in risk adjustment? Will they understand statistical variability and confidence intervals? Will they recognize imprecision in the measurement of other outcomes? Or will they and their physicians jump to the potentially erroneous conclusion that infants will get better care at the institutions with the lowest death rate?
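The width of a simple confidence interval illustrates the problem. In the hypothetical calculation below (all figures invented for the example), a single unit with a true death rate near the middle of the risk-adjusted range could, by chance alone, report a rate compatible with almost the entire range:

```python
# Illustrative only: 95% confidence interval for one unit's death rate,
# using the normal approximation to the binomial. Figures are invented.
from math import sqrt

admissions = 400          # hypothetical annual volume of one unit
deaths = 16               # hypothetical deaths observed
p = deaths / admissions   # observed death rate: 4%
se = sqrt(p * (1 - p) / admissions)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"observed rate {p:.1%}, 95% CI {low:.1%} to {high:.1%}")
# -> roughly 2.1% to 5.9%: a single unit's rate is compatible with
#    much of the risk-adjusted range discussed above.
```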
One of the earliest modern uses of report cards in medicine was the yearly reporting, begun in 1992, of mortality rates among patients admitted to hospital under the US Medicare program.9 Seriously flawed by inadequate adjustments for risk, this reporting was subsequently discontinued. A Canadian example was the publication in this journal of complication rates after laparoscopic cholecystectomy in Ontario hospitals.10 The original report did not name the hospitals concerned, but journalist Lisa Priest identified them,11 prompting a further study12 in which coding errors emerged as mainly responsible for the apparent interhospital variation. The subsequent debate focused almost exclusively on the benefits and harms of releasing nominal data to the public, especially when those data are not robust.13,14,15
Yet not all efforts to report on measurements of quality have failed. About 10 years ago, commissioners of the New York State Department of Health began to publish risk-adjusted measures of mortality after coronary artery bypass grafting.16 The data, which are used for a variety of quality improvement activities, were reported in a public document that initially revealed the identities of the institutions but not of the individual surgeons. However, after a successful court challenge based in part on the argument that information describing publicly funded programs should be in the public domain, rates for individual surgeons were published as well. Hospitals and surgeons, although initially reluctant, complied with the court ruling. Individual rates are now published as 3-year floating averages of observed, expected and risk-adjusted mortality, by hospital and by individual surgeon.17 The publication of this information appears not to have incited panic or caused a run on the lowest-risk surgeons and facilities,18 although just how patients understand and use this information, if at all,19 requires further study.
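For readers unfamiliar with such measures, a risk-adjusted mortality rate of this kind is typically constructed by indirect standardization, scaling a provider's observed-to-expected ratio to the overall rate (the exact New York method is described in the cited report17). In outline,

$$\text{RAMR} = \frac{O}{E} \times \bar{r},$$

where $O$ is the provider's observed deaths, $E$ the deaths expected from its case mix, and $\bar{r}$ the statewide mortality rate; a 3-year floating average simply pools $O$ and $E$ over the most recent 3 years.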
While we wholeheartedly agree that continuous quality improvement techniques are valuable tools for improving health care outcomes, we — perhaps with less assurance — disagree that nominal information relating to individual practitioners and institutions should be suppressed. We have commented in these pages on professional and governmental “nannyism.”20 When high-quality report cards are available on services provided at public expense, it is difficult to see how withholding this information serves the public interest. In future, we will be little inclined to publish papers that do not identify the institutions they examine. Naming and taking responsibility are mature (to say nothing of maturing) behaviours. Public disclosure should be the norm unless there is a clear and demonstrable potential for net harm. Society stands to benefit from greater transparency, and such transparency gives researchers, editors and journalists the additional responsibility of providing guidance on how to interpret and use the information they make available. Physicians and their institutions should prepare themselves.21
Footnotes
Acknowledgements: We thank Susan Bondy, William Ghali and Andreas Laupacis for their helpful discussions and suggestions.
Correspondence to: Dr. John Hoey, 1867 Alta Vista Dr., Ottawa ON K1G 3Y6; fax 613 565-2382; john.hoey@cma.ca
References
- 1. Sankaran K, Chien LY, Walker R, Seshia M, Ohlsson A, Lee SK, and the Canadian Neonatal Network. Variations in mortality rates among Canadian neonatal intensive care units. CMAJ 2002;166(2):173-8.
- 2. Tyson J, Kennedy K. Variations in mortality rates among Canadian neonatal intensive care units: interpretations and implications [editorial]. CMAJ 2002;166(2):191-2.
- 3. Nightingale F. Notes on nursing: enlarged and for the most part rewritten. 3rd ed. London (UK): Longman, Green, Longman, Roberts and Green; 1863.
- 4. Walton M. The Deming management method. New York: Dodd, Mead; 1986.
- 5. Brook RH. Health care reform is on the way: Do we want to compete on quality? Ann Intern Med 1994;120:84-5.
- 6. Berwick DM, Godfrey AB, Roessner J. Curing health care: new strategies for quality improvement. San Francisco: Jossey-Bass; 1990.
- 7. Tu JV, Naylor CD, and the Steering Committee of the Provincial Adult Cardiac Care Network of Ontario. Coronary artery bypass mortality rates in Ontario: a Canadian approach to quality assurance in cardiac surgery. Circulation 1996;94:2429-33.
- 8. Davies JM. Painful inquiries: lessons from Winnipeg. CMAJ 2001;165(11):1503-4.
- 9. Krakauer H, Bailey RC, Skellan KJ, Stewart JD, Hartz AJ, Kuhn EM, et al. Evaluation of the HCFA model for the analysis of mortality following hospitalization. Health Serv Res 1992;27:317-35.
- 10. Cohen MM, Young W, Thériault ME, Hernandez R. Has laparoscopic cholecystectomy changed patterns of practice and patient outcome in Ontario? CMAJ 1996;154(4):491-500. Abstract available: www.cma.ca/cmaj/vol-154/0491e.htm
- 11. Priest L. The low-scar surgery with a high risk. Toronto Star 1997 Sept 21;Sect A:1,14,15.
- 12. Taylor B. Common bile duct injury during laparoscopic cholecystectomy in Ontario: does ICD-9 coding indicate true incidence? CMAJ 1998;158(4):481-5. Abstract available: www.cma.ca/cmaj/vol-158/issue-4/0481.htm
- 13. Anderson GM. Letting the public know [letter]. CMAJ 1998;158(10):1266-8. Available: www.cma.ca/cmaj/vol-158/issue-10/1266e.htm
- 14. Taylor B. Letting the public know [letter]. CMAJ 1998;158(10):1268. Available: www.cma.ca/cmaj/vol-158/issue-10/1268.htm
- 15. Marshall WJS. Letting the public know [letter]. CMAJ 1998;158(10):1268. Available: www.cma.ca/cmaj/vol-158/issue-10/1269a.htm
- 16. Chassin MR, Hannan EL, DeBuono BA. Benefits and hazards of reporting medical outcomes publicly. N Engl J Med 1996;334:394-8.
- 17. Coronary artery bypass surgery in New York State 1996–1998. Available: www.health.state.ny.us/nysdoh/consumer/heart/homehear.htm (accessed 2001 Sept 25).
- 18. Green J, Wintfeld N. Report cards on cardiac surgeons: assessing New York State's approach. N Engl J Med 1995;332:1229-32.
- 19. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data. What do we expect to gain? A review of the evidence. JAMA 2000;283(14):1866-74.
- 20. From nannyism to public disclosure: the BSE Inquiry report [editorial]. CMAJ 2001;164(2):165. Available: www.cma.ca/cmaj/vol-164/issue-2/0165.htm
- 21. Tu JV, Schull MJ, Ferris LE, Hux JE, Redelmeier DA. Problems for clinical judgement: 4. Surviving in the report card era. CMAJ 2001;164(12):1709-12. Available: www.cma.ca/cmaj/vol-164/issue-12/1709.asp