ABSTRACT
Understanding and learning from hospital deaths is an important component of good clinical practice, but current approaches and measures are complex, controversial and difficult to understand. Patients who die are not a homogeneous group but fall into three distinct categories; most learning will be achieved by recognising this and investigating each category of deaths in a different way, relying heavily on qualitative approaches. Numerical measures of overall hospital mortality, such as the hospital standardised mortality ratio (HSMR) or measures of ‘preventable’ deaths, are most unlikely to be helpful at a hospital level and may even give false reassurance, because the accuracy of such measurement is strongly influenced by factors other than quality of care.
KEYWORDS: Care record review, death, hospital, mortality, quality improvement
Introduction
On her return to London from the Crimean War in the 1860s, Florence Nightingale is said to have been one of the first to regard high hospital mortality as an indicator of poor care,1 but it was the US Institute of Medicine's (IOM) 1999 report To err is human that focused public attention on potentially avoidable hospital deaths resulting from medical error. The IOM report estimated that 44,000–98,000 unnecessary deaths occurred in US hospitals annually and drew a comparison with a jumbo jet crashing every day.2 This claim was controversial, but it achieved extensive public and professional attention, and the IOM report is credited with launching the modern patient safety movement.3
In the UK, the focus on measures of preventable deaths was promoted by Dr Foster Intelligence – a healthcare data company that developed and published hospital standardised mortality ratios (HSMRs) for all hospitals.4 Despite patient safety experts' concerns over the validity of such measures, they have come into widespread use in the UK, including by the regulator of NHS care in England. Proponents claim that measures of preventable deaths have helped detect serious failings in care, as at Mid-Staffordshire NHS Foundation Trust; opponents question the statistical validity of models based on administrative data and the wisdom of judging overall hospital quality on the care of the very small percentage of patients who die.5,6
While many clinicians remain confused and sceptical about mortality measures, the concept of detecting ‘preventable’ hospital deaths has an intuitive appeal to the public, policymakers and politicians. In this paper, we aim to offer clinicians and NHS leaders some practical ways to learn from deaths in acute hospital care.
What do we know about those who die in hospitals?
Of the 15 million people admitted to hospitals each year in England, 40% are day cases, 10% other elective admissions, 15% mothers and babies, and 35% emergencies.7 Just under 2% of these patients die and, unsurprisingly, deaths are distributed very unevenly among the different groups (Fig 1).
Hospital death rate is also influenced by the characteristics of the hospital (eg whether it provides specialist oncology services), the characteristics of its patients (older, sicker patients are more likely to die) and the provision of other local services (such as hospice care and nursing homes).
Those who die in hospital fall into three broad categories (Fig 2):8
Frail, older patients with multiple comorbidities, admitted as emergencies, account for most deaths (70–80%).
Some deaths occur in patients with conditions that are recognised to have significant mortality, such as stroke, heart attack, hip fracture and high-risk surgery.
A very small number of deaths occur in low-risk patient groups (eg those having low-risk surgery or in maternity care).
Patient safety issues occur in all categories, but since the issues in each category tend to be different, the best ways to investigate and learn from deaths in each category also tend to be different.
Safety issues in those who die
Because around half of all deaths in the UK still occur in hospital, many deaths in older medical patients are inevitable. However, case record review suggests that between 3% and 5% of these deaths might have been preventable.9–12 Safety problems in these patients broadly reflect those arising in the emergency pathway and in general ward care (poor clinical monitoring, inadequate response to deterioration, poor sepsis management, medication issues, acute kidney injury etc).
Deaths in those with conditions such as stroke, heart attack, cardiac surgery or hip fracture are a relatively small proportion of all hospital deaths, but excess deaths in these groups may reflect specific problems in that care pathway. We know expected mortality for many of these conditions from national audits and databases and we also have a range of data on other aspects of quality. Since safety problems tend to be condition specific, high mortality is usually mirrored by poor performance on other process or outcome measures.13
Deaths in low-risk groups are uncommon but, by definition, almost all of them are preventable; they are likely to trigger serious incident or coroner's investigations and to attract public attention. Many ‘never events’ fall into this category. Investigations tend to reveal safety issues in a pathway or setting that had not previously been recognised (such as poor monitoring in obstetrics or faulty resuscitation processes in day surgery).
Measuring mortality
Any method of measuring mortality that takes a ‘whole hospital’ approach will be influenced mostly by the care of frail, older, emergency medical patients because most deaths occur in this group.14 While it is important to understand what happens in this group, serious quality and safety issues in other clinical areas may not be detected unless specific approaches are taken for the other two categories.
Quantitative measures
‘Whole hospital’ quantitative measures
Standardised ‘whole hospital’ mortality measures, such as the HSMR or the summary hospital mortality indicator (SHMI), are presented as ratios of actual to expected mortality and are in widespread use in the UK (Table 1). They are controversial, especially when used to compare hospitals, because the administrative data from which expected mortality is calculated are very sensitive to variations in standards of clinical coding. Organisations with more detailed coding tend to have higher expected mortality (and therefore lower standardised mortality ratios); this, combined with the small proportion of deaths that are preventable, can mask poor quality care.3,15 The recent PRISM studies found no correlation between HSMR and preventable mortality detected by case record review, calling into question the longer-term use of hospital standardised mortality measures based on administrative data.9
Table 1.

Standardised mortality measures, such as the hospital standardised mortality ratio (HSMR) or the summary hospital mortality indicator (SHMI), are presented as a ratio of actual to expected mortality. An expected mortality rate is calculated for each hospital from data derived from discharge coding, using a statistical model to forecast the number of deaths the hospital would be expected to have given the characteristics of its admitted patients. Because expected mortality rates are based on discharge coding, it is important for clinicians to support accurate coding (for example, by avoiding the use of symptom diagnoses such as ‘chest pain’). The SHMI and HSMR have a number of important differences:

| | SHMI | HSMR |
|---|---|---|
| Data source | Discharge codes | Discharge codes |
| Methodology | Statistical model to forecast expected number of deaths | Statistical model to forecast expected number of deaths |
| Mortality measure | Inpatient deaths or death within 30 days of discharge | Inpatient deaths only |
| Included patients | All deaths | Exclusions for certain diagnoses |

Standardised measures are presented using statistical process control (SPC) methodology, with 100 as the reference value and upper and lower control limits calculated (analogous to confidence intervals). Control limits are usually set so that the probability of a value lying outside them by chance is less than 2 per 1,000; rates are reported as abnormal if they fall outside the control limits. Although HSMR and SHMI are often presented as a single monthly or quarterly figure, this is unhelpful without knowledge of the control limits and the pattern over time. Observing changes in mortality data over time is generally a much more useful guide to quality than a one-off measurement.

Common problems with hospital SMRs include:
- sensitivity of expected mortality to variations in the depth and accuracy of clinical coding
- the influence of casemix and of local service provision (eg hospice and nursing home availability)
- the small proportion of hospital deaths that are preventable, which means poor care can be masked
- the lack of any demonstrated correlation with preventable deaths identified by case record review.

Adapted and reproduced from RCP’s Acute care toolkit 11.15
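To make the arithmetic concrete, here is a minimal sketch in Python of how a standardised ratio and its SPC control limits might be computed from per-admission predicted risks. It is not the official HSMR or SHMI specification; the hospital size, risk figures and death count are invented for illustration.

```python
import math

def smr_with_limits(observed_deaths, expected_risks, z=3.09):
    """Return (SMR, lower limit, upper limit), scaled so that 100 = 'as expected'.

    expected_risks: per-admission predicted probabilities of death from a
    casemix model (in practice, a statistical model built on coded data).
    z=3.09 gives roughly the 2-in-1,000 false-alarm rate described above.
    """
    expected = sum(expected_risks)                # expected number of deaths
    smr = 100.0 * observed_deaths / expected      # ratio of actual to expected
    # Under a Poisson assumption for the death count, the standard deviation
    # of the scaled ratio is approximately 100 / sqrt(expected).
    half_width = z * 100.0 / math.sqrt(expected)
    return smr, 100.0 - half_width, 100.0 + half_width

# Hypothetical hospital: 10,000 admissions, each with a 2% predicted risk
# of death (expected deaths = 200), and 230 observed deaths.
risks = [0.02] * 10_000
smr, lower, upper = smr_with_limits(230, risks)
print(f"SMR = {smr:.0f}, control limits = ({lower:.0f}, {upper:.0f})")
```

In this example, an SMR of 115 lies inside control limits of roughly (78, 122), so it would not be flagged as abnormal; note that the limits narrow as expected deaths rise, which is one reason a single quarter's figure for a small hospital carries so little information.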
Table 2.

Advantages and disadvantages of the main approaches to measuring and learning from hospital mortality:

| Measure | Advantages | Disadvantages |
|---|---|---|
| Quantitative: hospital-level standardised mortality ratios (HSMR, SHMI etc) | Routinely available for all hospitals; allow tracking of trends over time using SPC methodology | Sensitive to variations in clinical coding; dominated by deaths of frail, older emergency patients; may give false reassurance; no demonstrated correlation with preventable deaths |
| Quantitative: condition- or pathway-specific standardised mortality rates | Casemix adjustment based on clinical criteria, so more accurate; interpreted alongside other process and outcome measures | Only available for conditions covered by national audits and databases; these deaths are a relatively small proportion of all hospital deaths |
| Qualitative: RCRR of all, or a sample of, deaths | Regarded as the gold standard; themes emerge after review of a small number of cases | Time consuming and labour intensive; vulnerable to hindsight bias; low inter-rater reliability for judgements of ‘preventability’ |
| Qualitative: individual case note review | Very detailed; can uncover safety issues in a pathway or setting not previously recognised | Focus on a single case may miss underlying systems issues unless combined with data from other sources |

HSMR = hospital standardised mortality ratio; RCRR = retrospective case record review; SHMI = summary hospital mortality indicator.
Condition-specific quantitative mortality measures
Standardised ‘condition-specific’ quantitative mortality measures have been developed through national clinical audits and databases, including the Royal College of Physicians’ (RCP) work on stroke, hip fracture and lung cancer.16–18 Other national clinical audits and databases have taken similar approaches for heart attack, intensive care and many surgical and other conditions. These casemix-adjustment models are based on clinical rather than administrative criteria and have been developed and tested by clinical audit teams specifically for these conditions, so they are much more accurate than HSMR. They are also not viewed in isolation but alongside other outcome and process measures.
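As an illustration of the casemix-adjustment idea, the sketch below assigns each patient a predicted risk of death from clinical variables and sums these risks to give the expected deaths against which observed deaths are compared. The variables and coefficients are invented for the example; real audit models are fitted and validated on national clinical datasets.

```python
import math

# Invented coefficients for a hypothetical hip-fracture risk model; a real
# audit model would be fitted to national clinical data, not chosen by hand.
INTERCEPT = -4.0
COEFFS = {"age_over_85": 1.1, "male": 0.5, "admitted_from_care_home": 0.7}

def predicted_risk(patient):
    """Logistic model: probability of death given clinical characteristics."""
    logit = INTERCEPT + sum(COEFFS[name] for name, present in patient.items() if present)
    return 1.0 / (1.0 + math.exp(-logit))

patients = [
    {"age_over_85": True,  "male": False, "admitted_from_care_home": True},
    {"age_over_85": False, "male": True,  "admitted_from_care_home": False},
]
# Summing per-patient risks gives the expected deaths for this cohort,
# the denominator of a condition-specific standardised mortality rate.
expected_deaths = sum(predicted_risk(p) for p in patients)
print(f"Expected deaths in cohort: {expected_deaths:.2f}")
```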
Qualitative measures
‘Whole hospital’ qualitative measures
Retrospective case record review (RCRR) of all, or a sample of, deaths is regarded as the gold standard for investigating deaths, but it is time consuming and labour intensive. It also risks being highly subjective: it is vulnerable to hindsight bias and, in some areas (eg determining the ‘preventability’ of a death), inter-rater reliability is very low.19 Most trusts in England have some form of local RCRR process, but there is wide variability in how these function, in the training of reviewers and in how hospitals act on the outputs.20
Standardised structured approaches have been developed to reduce the subjectivity of case record review, including the IHI Global Trigger Tool,21 the PRISM methodology9 and Structured Judgement Review (SJR).22 SJR is currently being used by the Yorkshire and Humber Improvement Academy, which has trained multidisciplinary teams of reviewers in 12 hospitals in England (Hutchinson, personal communication), and it will form the basis of a national mortality review programme being led by the RCP for the NHS in England and Scotland.23
RCRR is labour intensive but, as a qualitative methodology, it works well because breakdowns in processes of care are common: themes start to emerge after review of only a small number of cases, especially if data from other sources, such as incident reports, are considered. Like other ‘whole hospital’ approaches, RCRR will mostly detect problems in the care of frail, older patients using the emergency pathway and general wards.
Individual case note qualitative review
Where deaths occur in low-risk conditions or low-risk circumstances, retrospective review of the case record is usually very detailed and forms part of a wider incident investigation. The same general approaches are used in clinical specialties or conditions with very small numbers of deaths (eg ear, nose and throat surgery, routine paediatric care).
Even in circumstances where death is very unusual, opportunities for learning and improving will be lost if the investigation focuses solely on the individual case. Data from other sources (eg incident reports of near misses, patient complaints, patient-reported experience measures or staff feedback about breakdowns in a specific aspect of the pathway) may help uncover underlying systems issues that need to be addressed. Data from national bodies, such as the National Confidential Enquiry into Patient Outcome and Death (NCEPOD), may help inform these investigations, and the support of clinical experts from outside the hospital may bring a useful perspective.
What should hospitals do now?
The process of investigating and learning from hospital mortality should be led by a senior clinician with expertise in patient safety who can align mortality review programmes with the wider clinical governance system. Many hospitals appoint deputy or associate medical directors with responsibility for leading mortality committees, which ensures linkage to other clinical governance activity and provides feedback to reviewers.
Systems for investigating and learning from mortality should be based on the three categories of deaths and incorporate the approaches that exist in many departments already:
A qualitative approach to case note review of all, or a sample of, deaths using a standardised RCRR methodology will give good insight into problems in emergency care and on the general wards. In larger organisations, a screening process may be necessary to identify cases for more detailed review (eg excluding those admitted for palliative care); a simple sketch of such a screening step follows this list. The national project to use SJR for this, led by the RCP, will be fully operational in England and Scotland from 2017. Outputs from these reviews should identify themes where processes fail and link them to specific quality improvement initiatives.
A mixed quantitative and qualitative approach should be taken for specific conditions or pathways (stroke, heart attack, hip fracture, many surgical conditions, intensive care) where robust national databases exist. Most clinical teams will already have processes for assessing their own results against national standards, but these may need to be incorporated into wider clinical governance arrangements.
Rigorous individual case note review using root cause analysis or other standardised methodology is needed for deaths in low-risk conditions, patients or settings. Interpretation of outputs is most helpful if it is done in conjunction with data from similar safety incidents, near misses, complaints, other safety data for the clinical area and relevant national data.
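As a minimal sketch of the screening step mentioned in the first approach above – the record fields, exclusion rule and sample size are assumptions for illustration, not a prescribed method – cases might be selected for detailed structured review like this:

```python
import random

def select_for_review(deaths, sample_size, seed=0):
    """Screen deaths for detailed review: exclude palliative-care admissions,
    then take a reproducible random sample of the remainder.

    deaths: list of dicts with at least 'id' and 'palliative' keys
    (assumed fields for this example).
    """
    eligible = [d for d in deaths if not d["palliative"]]
    rng = random.Random(seed)  # fixed seed so the selection is auditable
    return rng.sample(eligible, min(sample_size, len(eligible)))

# Illustrative data: 100 deaths, every fifth one a palliative-care admission.
deaths = [{"id": i, "palliative": i % 5 == 0} for i in range(100)]
for case in select_for_review(deaths, sample_size=10):
    print(case["id"])
```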
A core aim of mortality review is to learn and improve as a result; this is unlikely to happen without clear linkage of the mortality review process to specific quality improvement initiatives and integration within wider clinical governance structures. Integrating reviews into existing morbidity and mortality meetings would help to ensure alignment with quality improvement and with other sources of quality and safety data.
Although most safety problems are the result of systems failures, this process will also occasionally reveal issues related to the clinical competence, behaviour or performance of individual clinicians. Policies and processes need to be in place for dealing with such issues, including those for managing training deficits, health problems, violations of known safety protocols or behavioural issues.
The proposed introduction of the medical examiner role in England and Wales in 2018 will give opportunities to align initiatives, and the medical examiner process may be helpful as a screening instrument for more detailed review.
Involving and informing families
Patients’ families frequently report receiving inadequate communication, and sometimes being actively excluded from investigation processes, when adverse events have led to a patient’s death. Such communication failures were frequently described during the Francis inquiry24 and are commonly cited in reports by the Health Service Ombudsman, Action Against Medical Accidents25 and others.
While doctors have always had a professional duty to be open and transparent with patients and families when something has gone wrong, this is now supported by a legal Duty of Candour in England26 (with other parts of the UK planning similar approaches). As well as professional and ethical reasons for increased transparency, there are also sound practical reasons: international evidence from a number of healthcare systems indicates that increased transparency at the time of adverse events, coupled with an appropriate apology, reduces the risk of subsequent complaints and legal action.27
If an adverse event is recognised as contributing to a death then good practice is to involve family members as early as possible in the root cause analysis process, as well as ensuring that an adequate explanation and apology are given.
There are few examples of how this works in practice with RCRR, where the contribution of adverse events to a death may not be recognised until many months after the patient has died. However, if significant adverse events do emerge in the course of RCRR, mechanisms need to be in place to ensure that families receive an explanation and apology where appropriate, and are given the opportunity to ask questions even if some time has passed since the death.
Conclusion
Many deaths in hospital are inevitable, but some are not. Many patients who die receive high-quality care, but some do not. We should study hospital deaths for a number of reasons, including detecting quality failures, learning from good practice, fulfilling our professional obligation to learn continuously, and meeting the public’s expectation that we will do so.
However, the concept of a single measure, or even a single approach, to understanding hospital deaths is a fallacy: it assumes that patients who die in hospital are a homogeneous group, and they are not. Each group is distinct and requires a different approach, but applying qualitative and quantitative methods to each group as we have described will allow us to start learning and improving.
Conflicts of interest
The authors have no conflicts of interest to declare.
Acknowledgements
We are grateful to Dr Mike Gill, Dr Ben Bray and Patricia Snell for helpful comments on earlier drafts of this paper.
References
- 1. Fee E, Garofalo M. Florence Nightingale and the Crimean War. Am J Public Health 2010;100:1591. doi:10.2105/AJPH.2009.188607.
- 2. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.
- 3. Shojania KG. Deaths due to medical error: jumbo jets or just small propeller planes? BMJ Qual Saf 2012;21:709–12. doi:10.1136/bmjqs-2012-001368.
- 4. Dr Foster Limited. Dr Foster Hospital Guide 2013. London: Dr Foster Limited, 2013. www.drfoster.com/wp-content/uploads/2014/07/hospital-guide-2013.pdf [Accessed 1 July 2016].
- 5. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. BMJ 2010;340:c2016. doi:10.1136/bmj.c2016.
- 6. Bottle A, Jarman B. Strengths and weaknesses of hospital standardised mortality ratios. BMJ 2011;342:c7116. doi:10.1136/bmj.c7116.
- 7. NHS Confederation. The non-executive directors’ guide to hospital data. Hospitals Forum briefing 2013:263. www.chks.co.uk/userfiles/files/PR/NEDs%20Guide%20to%20Hospital%20Data%20part%203.pdf [Accessed 1 July 2016].
- 8. Royal College of Physicians. National care of the dying audit for hospitals, England. London: RCP, 2014.
- 9. Hogan H, Healey F, Neale G, et al. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012;21:737–45. doi:10.1136/bmjqs-2011-001159.
- 10. Hayward R, Hofer T. Estimating hospital deaths due to medical error – preventability is in the eye of the reviewer. JAMA 2001;286:415–20. doi:10.1001/jama.286.4.415.
- 11. Zegers M, de Bruijne MC, Wagner C, et al. Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study. Qual Saf Health Care 2009;18:297–302. doi:10.1136/qshc.2007.025924.
- 12. Hogan H, Zipfel R, Neuburger J, et al. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ 2015;351:h3239. doi:10.1136/bmj.h3239.
- 13. National Confidential Enquiry into Patient Outcome and Death. Just Say Sepsis! A review of the process of care received by patients with sepsis. London: NCEPOD, 2015.
- 14. Donaldson L, Panesar SS, Darzi A. Patient-safety-related hospital deaths in England: thematic analysis of incidents reported to a national database, 2010–2012. PLoS Med 2014;11:e1001667. doi:10.1371/journal.pmed.1001667.
- 15. Royal College of Physicians. Acute care toolkit 11: using data to improve care. London: RCP, 2015.
- 16. Royal College of Physicians. National Lung Cancer Audit annual report 2015 (for the audit period 2014). London: RCP, 2015.
- 17. Royal College of Physicians. The Sentinel Stroke National Audit Programme (SSNAP). London: RCP, 2014. www.strokeaudit.org/results/Organisational/National-Organisational.aspx [Accessed 1 July 2016].
- 18. Royal College of Physicians. National Hip Fracture Database (NHFD) annual report 2015. London: RCP, 2015.
- 19. Hogan H. The problem with preventable deaths. BMJ Qual Saf 2016;25:320–3. doi:10.1136/bmjqs-2015-004983.
- 20. Smith N, Shotton H, Mason M. Indicator 5c mortality survey: undertaken by NCEPOD on behalf of NHS England 2014. London: NCEPOD, 2014. www.ncepod.org.uk/pdf/publications/Indicator5cMortalitySurvey_FinalForNHSE.pdf [Accessed 1 July 2016].
- 21. Griffin FA, Resar RK. IHI global trigger tool for measuring adverse events, 2nd edn. Cambridge, MA: Institute for Healthcare Improvement, 2009.
- 22. Hutchinson A, Coster JE, Cooper KL, et al. A structured judgement method to enhance mortality case note review: development and evaluation. BMJ Qual Saf 2013;22:1032–41. doi:10.1136/bmjqs-2013-001839.
- 23. Royal College of Physicians. National Mortality Case Record Review Programme. www.rcplondon.ac.uk/projects/national-mortality-case-record-review-programme [Accessed 1 July 2016].
- 24. The Mid Staffordshire NHS Foundation Trust Inquiry. Report of the Mid Staffordshire NHS Foundation Trust public inquiry. London: The Stationery Office, 2013.
- 25. Action Against Medical Accidents. Patient stories. www.avma.org.uk/patient-stories/ [Accessed 1 July 2016].
- 26. Care Quality Commission. Regulation 20: duty of candour. Newcastle upon Tyne: Care Quality Commission, 2015.
- 27. Sadler BL, Stewart K. Leading in a crisis: the power of transparency. London: The Health Foundation, 2015. www.health.org.uk/publication/leading-crisis-power-transparency [Accessed 1 July 2016].