Editorial

BMJ 1999 Aug 28;319(7209):528–530. doi: 10.1136/bmj.319.7209.528

Learning from differences within the NHS

Clinical indicators should be used to learn, not to judge

Albert G Mulley Jr

We learn by making comparisons and trying to understand the sources of variation. Variation in the rates at which healthcare professionals use interventions in the care of seemingly similar populations creates opportunities to learn about and improve the quality of clinical decision making. And variations in outcomes between different professionals or institutions providing the same interventions create opportunities to learn how to improve the quality of clinical care.1 Yet too often variation is seen more as a challenge to authority and competence than as an opportunity to learn.

Last month the NHS Executive published comparative data for 100 health authorities and 280 NHS hospital trusts on six clinical indicators developed to measure aspects of clinical care that affect quality.2 The indicators measure in-hospital 30 day mortality rates after admission for emergency or elective surgery, for myocardial infarction, and for hip fracture. They also include rates of emergency readmission with any diagnosis and discharge to usual place of residence following admission for either stroke or hip fracture. There is considerable variation across England that cannot readily be explained by characteristics of the populations served or of the hospitals. One pattern that did emerge was higher readmission rates, and the highest death rates after surgery, among health authorities in coalfield areas and in ports and industrial areas.

The data come from hospital episode statistics for 1995–6 to 1997–8, comprising 11 000 000 consultant episodes a year. They are imperfect. Data reporting itself is highly variable, and locations with evidently poor reporting were excluded from rate comparisons. The indicator rates derived from these data are also flawed. Adjustments are crude at best, accounting only for differences in age among treated populations. Deaths that occur within 30 days but after discharge are not included, so higher rates could be expected in hospitals with longer lengths of stay.3 Emergency readmissions are not limited to diagnoses that occasioned the index hospitalisations, so higher rates could be expected for populations with a greater burden of illness.4 Rates of return to usual residence after stroke or hip fracture may depend more on characteristics of that residence than on the care provided in hospital. Each of these potential biases could explain in part the worse outcomes observed in less prosperous regions of England.
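
One standard way to make such an age adjustment is indirect standardisation, sketched below in Python: a trust's observed deaths are compared with the deaths expected if national age specific rates had applied to its admissions. The age bands, counts, and reference rates are all invented for illustration; the published indicators may differ in detail.

```python
# A minimal sketch of indirect age standardisation. All figures below are
# hypothetical; real indicators use finer age bands and national rates.

# National 30 day in-hospital death rates per admission, by age band
reference_rates = {"0-44": 0.002, "45-64": 0.010, "65-74": 0.030, "75+": 0.080}

# One trust's admissions, and its observed deaths, in the same age bands
admissions = {"0-44": 1200, "45-64": 900, "65-74": 700, "75+": 500}
observed_deaths = 78

# Deaths expected if the trust had experienced the national rate in each band
expected_deaths = sum(admissions[band] * reference_rates[band]
                      for band in admissions)

# Standardised mortality ratio: above 1 means more deaths than age predicts
smr = observed_deaths / expected_deaths
print(f"expected deaths {expected_deaths:.1f}, SMR {smr:.2f}")
```

Such a ratio adjusts for age and nothing else, which is exactly the weakness noted above: severity, comorbidity, and deaths after discharge are invisible to it.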

In his foreword to the report the chief executive of the NHS Executive cautions that many factors outside the control of hospitals affect the measured outcomes and acknowledges the limitations of the data and the indicators. They are not direct measures of quality, he says, but should be used to draw attention to issues that may need investigation or action. But will those with a stake in NHS performance heed these cautions? Are the indicators so flawed that they will focus attention on the wrong issues, distracting clinicians and managers from more fruitful areas of inquiry? What can be done to increase the likelihood that these comparisons will evoke curiosity and stimulate learning within the NHS? There may be some answers in the successes and failures of similar efforts in the United States.

In 1986 the Health Care Financing Administration issued a report on mortality rates for Medicare beneficiaries for each of 5500 American hospitals.5 Statistical models were used to predict expected rates for each hospital based on characteristics of the hospital and patients served, and standardised differences between expected and observed rates were reported. Hospital administrators and clinicians took a dim view of the usefulness of the data. Those with higher mortality rates claimed their patients were sicker and attacked the validity of the models. They demonstrated the omission of important clinical variables and resulting biases,6 and few used the comparisons to guide quality improvement efforts.7
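
The shape of that exercise can be illustrated with a small simulation. The sketch below fits a logistic regression risk model to pooled patient data and compares one hospital's observed deaths with the deaths the model expects. The features, the simulated outcomes, and the use of scikit-learn are assumptions for illustration, not a reconstruction of the actual models.

```python
# Observed-versus-expected mortality with a simple risk model (illustrative
# only; variables and data are simulated, not the HCFA's actual models).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(65, 95, n)           # Medicare-age patients
emergency = rng.integers(0, 2, n)       # emergency (1) v elective (0) admission
X = np.column_stack([age, emergency])

# Simulated deaths whose true risk rises with age and emergency admission
true_prob = 1 / (1 + np.exp(-(-9.0 + 0.08 * age + 0.7 * emergency)))
died = rng.binomial(1, true_prob)

# Fit the risk model on all hospitals pooled, then score one hospital's cases
model = LogisticRegression(max_iter=1000).fit(X, died)
hospital = rng.random(n) < 0.05         # pretend 5% of cases are one hospital's
expected = model.predict_proba(X[hospital])[:, 1].sum()
observed = died[hospital].sum()

# An observed/expected ratio well above 1 flags the hospital as an outlier;
# clinical variables missing from the model bias precisely this ratio
print(f"observed {observed}, expected {expected:.1f}, O/E {observed / expected:.2f}")
```

The omission of important clinical variables, demonstrated by critics of the report, distorts exactly this observed to expected comparison.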

When some states reported crude and adjusted mortality rates for specific operations, high volume surgeons and institutions generally had lower rates.8 Journalists helped to force public disclosure, even for individual surgeons whose small number of cases precluded accurate ratings. Sensational media reports impeded sensible interpretation of the findings, and in this environment few clinicians discussed the mortality differences with patients or altered referral decisions.9 Patients were rarely influenced either, despite intense media exposure: in one state only 12% of patients undergoing coronary artery bypass surgery had been aware of the availability of mortality ratings, and only 1% had known the correct rating of their surgeon or hospital before surgery.10

Evidence suggests that release of mortality rates contributed to a decrease in deaths related to coronary artery bypass surgery.8 But public judgments made about quality and competence based on inadequately adjusted mortality data pose new risks for the quality of decision making. For many procedures, including coronary artery bypass surgery, the net expected benefit of surgery is often greater for patients with higher expected operative mortality. A surgeon or hospital mindful of mortality ratings might alter indications for surgery to improve their standing. Confidence in distinguishing between actual year to year improvement and diversion of surgery away from those most at risk of death but most likely to benefit would require outcome data for all patients who are candidates for the procedure whether they get it or not.1
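
A toy simulation makes the selection effect concrete. In the sketch below, with entirely invented figures, unoperated candidates are assumed to face three times their operative risk; a surgeon who declines the riskiest third of candidates then lowers the reported operative mortality while mortality among the candidate population as a whole roughly doubles.

```python
# Invented illustration of case selection: turning away high risk candidates
# improves a surgeon's rating while worsening outcomes for the whole cohort.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
operative_risk = rng.uniform(0.01, 0.15, n)   # death risk if operated on
medical_risk = 3 * operative_risk             # assumed risk without surgery

def cohort_outcomes(offered):
    """Expected operative mortality, and mortality among all candidates."""
    operative_mortality = operative_risk[offered].mean()
    total_deaths = operative_risk[offered].sum() + medical_risk[~offered].sum()
    return operative_mortality, total_deaths / n

everyone = np.ones(n, dtype=bool)
risk_averse = operative_risk < np.quantile(operative_risk, 2 / 3)

for label, policy in [("operate on all", everyone),
                      ("decline riskiest third", risk_averse)]:
    op_mort, cohort_mort = cohort_outcomes(policy)
    print(f"{label}: operative mortality {op_mort:.3f}, "
          f"candidate mortality {cohort_mort:.3f}")
```

Only a denominator of all candidates, operated on or not, distinguishes genuine improvement from this kind of diversion.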

Fortunately the early clinical indicators chosen by the NHS Executive do not focus on conditions associated with major interventions that are made at the discretion of the provider and are therefore subject to shifts in decision making. Population based admission rates for myocardial infarction, hip fracture, or stroke show little variation. Substantial evidence also exists about the effectiveness of elements of care for these conditions, providing a basis for those willing to respond to the outcome comparisons with constructive curiosity about differences in process.

It is precisely this kind of “benchmarking” that the NHS Executive hopes to incite, obliging the profession to learn from its collective experience. When doctors respond to comparisons by using their expertise to understand the sources of variation, including clinical complexities of disease severity and comorbidity, the results can be striking. When significant differences in adjusted mortality rates became evident among the hospitals and surgeons of three New England states, the surgeons did not simply cite the real limitations of the adjustment methods. Instead, they joined with clinical and non-clinical colleagues in an extended series of visits to each other’s operating rooms and hospital wards to discover differences in processes of care. They learnt from their differences. The result was a 24% decrease in hospital mortality for their patients in one year, sustained for at least three years.11

Achieving these kinds of results is not easy. Professionals inclined to respond with constructive curiosity need help. The NHS has promised a toolkit to aid interpretation of indicators, but making clinical sense of comparisons and generating actionable insights about how to improve quality will require both sophisticated analytical skills and investments in information systems with more clinical relevance than hospital episode statistics. Measured items should reflect patients’ as well as clinicians’ perspectives.1 Local initiatives focusing on specific clinical conditions rather than procedures should be encouraged.

Perhaps most important is that all stakeholders should recognise that these comparisons do not meet a standard of evidence sufficient for judgments about quality of care. Responsible journalism can help in educating the public that measurement for improvement is not measurement for judgment.12 Given the scale, scope, and organisation of the NHS, there is great potential for its professionals to learn from their differences. The clinical indicator initiative should be viewed as a step in that direction.

References

1. Mulley AG. Outcomes research: implications for policy and practice. In: Delamothe T, ed. Outcomes into clinical practice. London: BMJ Books; 1994.
2. NHS Executive. Quality and performance in the NHS: clinical indicators. London: BMA Books; 1999.
3. Jencks SF, Williams DK, Kay TL. Assessing hospital-associated deaths from discharge data: the role of length of stay and comorbidities. JAMA 1988;260:2240–2246.
4. Greenfield S, Aronow HU, Elashoff RM, Watanabe D. Flaws in mortality data: the hazards of ignoring comorbid disease. JAMA 1988;260:2253–2255.
5. US Department of Health and Human Services. Medicare hospital mortality information, 1986. Washington, DC: US Department of Health and Human Services; 1987.
6. Smith DW, Pine M, Bailey RC, Jones B, Brewster A, Krakauer H. Using clinical variables to estimate the risk of patient mortality. Med Care 1991;29:1108–1129. doi:10.1097/00005650-199111000-00004
7. Berwick DM, Wald DL. Hospital leaders’ opinions of the HCFA mortality data. JAMA 1990;263:247–249.
8. Hannan EL, Kilburn H, Racz M, Shields E, Chassin MR. Improving the outcomes of coronary artery bypass graft surgery in New York State. JAMA 1994;271:761–766.
9. Hannan EL, Stone CC, Biddle TL, DeBuono BA. Public release of cardiac surgery outcomes data in New York: what do New York state cardiologists think of it? Am Heart J 1997;134:55–61. doi:10.1016/s0002-8703(97)70106-6
10. Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA 1998;279:1638. doi:10.1001/jama.279.20.1638
11. O’Connor GT, Plume SK, Olmstead EM, Morton JR, Maloney CT, Nugent WC, et al. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery: the Northern New England Cardiovascular Disease Study Group. JAMA 1996;275:841.
12. Berwick DM. Looking forward: the NHS: feeling well and thriving at 75. BMJ 1998;317:57–61. doi:10.1136/bmj.317.7150.57
