Abstract
Objective
To explore associations between the proportion of hospital deaths that are preventable and other measures of safety.
Design
Retrospective case record review to provide estimates of preventable death proportions. Simple monotonic correlations using Spearman's rank correlation coefficient to establish the relationship with eight other measures of patient safety.
Setting
Ten English acute hospital trusts.
Participants
One thousand patients who died during 2009.
Results
The proportion of preventable deaths varied between hospitals (3–8%), but the differences were not statistically significant (P = 0.94). Only one of the eight measures of safety (Methicillin-resistant Staphylococcus aureus bacteraemia rate) showed a clinically and statistically significant association with the preventable death proportion (r = 0.73; P = 0.02). There were no significant associations with the other measures, including hospital standardized mortality ratios (r = −0.01). There was a suggestion that preventable deaths may be more strongly associated with other outcome measures than with process or structure measures.
Conclusions
The exploratory nature of this study inevitably limited its power to provide definitive results. The observed relationships between safety measures suggest that a larger, more powerful study is needed to establish the inter-relationship of different measures of safety (structure, process and outcome), in particular the widely used standardized mortality ratios.
Keywords: preventable death, patient safety measures, hospital standardized mortality ratio
Introduction
A wide variety of measures are used to assess the safety of hospitals [1]. They can be grouped into three broad categories reflecting Donabedian's typology of outcomes, processes and structures (sometimes referred to as inputs) [2]. Outcomes include preventable mortality, hospital-acquired infections and emergency readmissions. Processes include events that might result in an adverse outcome, such as patient safety incidents (e.g. falls and medication errors). Structures include aspects such as the safety culture of a hospital and the attitudes of staff towards safety. It is unclear whether or not these different dimensions of poor safety are associated with one another, an association that would suggest common cause. This is of considerable policy importance: if there is little or no association, then the choice of a primary or leading measure of safety will influence judgements about a hospital's performance.
Over the past decade, the most commonly used primary measure in many countries has been the standardized mortality ratio (SMR) for the entire hospital. In the UK, the most commonly used versions have been the hospital SMR (HSMR) and, more recently, the summary hospital-level mortality indicator (SHMI). Their use has had and continues to have an enormous influence on healthcare policy, despite the lack of any evidence of their validity either as accurate indicators of safety or as a screening tool to raise suspicions of poor safety [3–5].
Arguably, the determination of the preventability of hospital deaths by trained reviewers undertaking retrospective case record review has greater validity as an indicator of hospital safety than SMRs for two reasons: it has greater clinical credibility because it takes account of the complexity of patients' conditions and care; and it can indicate whether or not poor care was responsible for any death. In addition, clinicians identify the nature of any poor care, which helps to stimulate improvements in clinical practice. Despite these advantages, it is important to recognize that the inter-rater reliability of reviewers' judgements of preventability is moderate rather than strong. In the study on which the analyses in this paper are based, the inter-rater reliability had a κ coefficient of 0.49 [6], towards the upper end of the range of 0.24–0.69 reported in other studies [7–11].
Only four studies, all in North America, have looked at the association of preventable death proportions with other frequently used measures of safety. All four studies focused on the relationship with SMRs either for selected specialties [12, 13], specific diseases [14] or a specific intervention [15], rather than for hospital-wide deaths. Because the samples from individual hospitals were small (<50), the studies were limited to comparing aggregated data from groups of high-SMR and low-SMR hospitals. Three of the studies either found no correlation [12, 13] or a non-significant negative correlation [15]. The fourth study, which reviewed patients with one of three medical conditions, also found no association for two conditions (stroke and myocardial infarction) but did find a positive association in patients with pneumonia [14].
To date, comparing the proportion of preventable deaths with other measures of safety in the UK has been limited by the lack of an accurate estimate for the proportion of deaths in hospitals that were preventable. An opportunity to carry out an initial exploration of this key issue has arisen with the availability of data collected in a recent large retrospective case record review [6]. The rigour with which the measurements were made suggests that they provide a valid and credible indication of safety.
Our aim was to carry out an exploratory study of the associations between the proportion of deaths in a hospital that are deemed preventable and other measures of safety. Our hypothesis was that the association would be strongest with other measures of outcome (HSMR, hospital-acquired infections and emergency readmissions), less strong with measures of process (patient safety incidents, hospital cleanliness and staff hand hygiene) and weakest with structure measures (safety culture and staff sickness absence).
Method
Preventable deaths in hospital
Details of the retrospective case record review of 1000 hospital deaths in 2009 carried out in 10 randomly selected acute hospitals have been described elsewhere [6]. The method was based on previous similar studies [7, 10, 16, 17]. Record reviews were undertaken by 17 recently retired doctors, all of whom had extensive experience as generalists and received training for the task.
The judgement of preventable deaths was undertaken in two stages. First, reviewers were asked to determine whether there had been any ‘problems in care’ that had contributed to the patient's death. Problems in care were defined as patient harm resulting either from acts of omission (inactions), such as failure to diagnose and treat, or from acts of commission (affirmative actions), such as incorrect treatment or management. Problems were included both if they occurred within the index admission and if they occurred before the index admission but led to harm in that admission. For each case in which a problem in care that had contributed to death was identified, reviewers judged the preventability of death. Preventability was assessed on a 6-point Likert scale [18]. Deaths were deemed preventable if it was judged that there was a >50% chance that the death was preventable (a score of 4–6 on the 6-point scale). For deaths judged to be preventable, reviewers reported the type of problem, its timing and any associated causative or contributory factors.
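In code terms, the two-stage judgement reduces to a simple threshold rule. The following minimal sketch is purely illustrative (the field names and data structure are our assumptions, not the study's actual data collection form):

```python
# Illustrative sketch of the two-stage preventability judgement.
# Field names are hypothetical, not the study's instrument.
from dataclasses import dataclass

@dataclass
class CaseReview:
    problem_in_care: bool        # stage 1: any care problem contributing to death?
    preventability_score: int    # stage 2: 1-6 Likert judgement (6 = definitely preventable)

def is_preventable_death(review: CaseReview) -> bool:
    """A death counts as preventable only if a contributory problem in care
    was identified AND preventability was judged >50% (Likert score 4-6)."""
    return review.problem_in_care and review.preventability_score >= 4

# Example: a reviewer finds a care problem and scores preventability 5.
print(is_preventable_death(CaseReview(problem_in_care=True, preventability_score=5)))  # True
```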
Other patient safety measures
A priori, we selected measures of safety that were publicly available and reflected safety across entire hospitals rather than restricted to specific departments. The measures were as follows:
Outcomes: HSMR, obtained from Dr Foster Intelligence; Methicillin-resistant Staphylococcus aureus (MRSA) bacteraemia reports, from the Health Protection Agency; emergency readmissions within 28 days of discharge, from Hospital Episode Statistics.
Processes: patient safety incidents, reported to the National Reporting and Learning System; patients' views of hospital cleanliness and of nurses' hand cleaning, obtained from the NHS Inpatient Survey.
Structure: staff views of safety culture, obtained from the NHS Staff Survey; staff sickness absence rates, from the NHS Staff Sickness Reports.
Descriptions of these data sources and the validity of the measures are shown in Table 1. It is important to recognize that all eight measures are inevitably subject to chance variation in addition to the specific limitations mentioned in the table.
Table 1.
Sources and description of eight safety measures
| Measure | Source | Description of measure | Collection | Threats to validity |
|---|---|---|---|---|
| Outcomes | ||||
| HSMR | Dr Foster 2009/10 (http://www.drfosterhealth.co.uk/docs/hospital-guide-2010.pdf) | HSMRs are derived from Hospital Episode Statistics (for 56 conditions known to account for 80% of hospital mortality) by calculating the ratio of observed deaths to expected deaths, with adjustment for case mix | Hospital Episode Statistics are derived from Patient Administration Systems. Expected death rates are calculated using national data. Adjustments are made for age, co-morbidity, number of previous admissions and sociodemographic factors | Low sensitivity for measuring quality of care (most quality problems do not occur in patients who die); low specificity for measuring quality of care (most deaths do not reflect poor quality); artefactual variation for a range of reasons, including coding depth and patient exclusions; structural factors (e.g. local services available, admission thresholds and available technology/treatments) also cause variation |
| MRSA bacteraemia rates | Health Protection Agency: MRSA surveillance programme 2009/10 (http://www.hpa.org.uk/web/HPAweb&HPAwebStandard/HPAweb_C/1233906818165) | Hospital-apportioned MRSA bacteraemia reports per 100 000 admissions. A positive blood culture on or after the third day of admission is classified as hospital apportioned | Mandatory surveillance of MRSA bacteraemia conducted by the Health Protection Agency, based on hospital-submitted reports of positive blood cultures and accompanying demographic and clinical data. Data submitted via a web-based system | Some MRSA infections occurring prior to admission, or recurrent infections, may be included; rates are not adjusted for hospital demographics or case mix |
| Emergency readmissions | The NHS Information Centre 2009/10 (https://mqi.ic.nhs.uk/PerformanceIndicatorChapter.aspx?number=1.01) | Percentage of emergency admissions to hospital in the UK for adults that occur within 28 days of the previous discharge (indirectly standardized) | Hospital Episode Statistics derived from Patient Administration Systems | Some readmissions result from a lack of primary/community services; quality of coding; admission decisions depend on the clinical judgement of the admitting doctor |
| Processes | ||||
| Patient safety incident reports | National Reporting and Learning System 2009/10 (www.nrls.npsa.nhs.uk) | Overall reporting rate per 100 000 admissions | Voluntary self-reports are received by the NRLS via downloads from local risk management systems or web-based e-forms (including open-access e-forms) | Individual reports are not investigated or verified by the NPSA; quality/volume of data vary depending on the reporting system used and the reporter; counts are based on incidents reported, with known under-reporting; variability in reporting rates may reflect organizational culture rather than safety; some reports are not PSIs (e.g. misreporting of harm to staff or lost patient property); rates are not adjusted for hospital demographics or case mix |
| Patients’ views of hospital cleanliness | Acute Hospitals Adult Inpatient Survey 2009, Economic and Social Data Service (http://www.esds.ac.uk/findingData/snDescription.asp?sn=7034&key=Acute+Trusts:+Adult+Inpatients+Survey,+2010) | Proportion of patients at each hospital giving negative responses to a question related to the general cleanliness of the hospital | Annual survey of patient experience commissioned by the Care Quality Commission. Each hospital identifies a random sample of 850 adult and psychiatry patients with a hospital stay of at least one night | Response rate ∼50%; patient exclusions include maternity |
| Patients’ views of staff hand hygiene | Acute Hospitals Adult Inpatient Survey 2009, Economic and Social Data Service (http://www.esds.ac.uk/findingData/snDescription.asp?sn=7034&key=Acute+Trusts:+Adult+Inpatients+Survey,+2010) | Proportion of patients at each hospital giving negative responses to question related to whether nurses washed their hands between patients | Annual survey of patient experience commissioned by the Care Quality Commission. Each hospital identifies a random sample of 850 adult and psychiatry patients with a hospital stay of at least one night | Response rate ∼50%. Patient exclusions include maternity |
| Structures | ||||
| Staff views of reporting of patient safety incidents | The Staff Survey Coordination Centre, Picker Institute Europe: NHS Staff Survey 2009 (http://www.NHSStaffSurveys.com) | Proportion of staff at each hospital giving a negative response to the question: ‘The last time that you saw an error, near miss or incident that could have hurt patients/service users, did you or a colleague report it?’ | Annual survey of staff views. Random sample of staff based on the size of the institution. Analysed by the Staff Survey Coordination Centre. Hospital management does not see individuals’ completed surveys but is sent amalgamated results for its hospital following analysis | Wording of some questions is ambiguous; response rates range from 40 to 65% |
| Staff sickness absence | The NHS Information Centre: NHS Electronic Staff Records 2009/10 (http://www.ic.nhs.uk/statistics-and-data-collections/workforce/sickness-absence) | Annual staff sickness absence rate | Data collected monthly from the Electronic Staff Record System, which links to the payroll and human resource systems within hospitals and contains records for the majority of NHS staff. The rate is calculated as full-time equivalent (FTE) days lost to sickness divided by FTE days available | Gives an overall measure of sickness absence among NHS staff but does not indicate which staff or which roles are affected |
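For concreteness, the HSMR calculation described in Table 1 reduces to a case-mix-adjusted observed/expected ratio scaled so that 100 means deaths were as expected. A minimal sketch with purely illustrative figures (the real indicator aggregates expected deaths from a national case-mix model across 56 diagnosis groups):

```python
def hsmr(observed_deaths: int, expected_deaths: float) -> float:
    """Hospital standardized mortality ratio, scaled so 100 = as expected.
    expected_deaths would come from a case-mix-adjusted national model."""
    return 100.0 * observed_deaths / expected_deaths

# Illustrative only: 450 observed deaths against 465 expected gives ~96.8,
# similar in magnitude to hospital D in Table 2.
print(round(hsmr(450, 465.0), 1))  # 96.8
```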
Analyses
Median and inter-quartile ranges for each patient safety measure were calculated to show the distribution of values across the hospitals. Simple monotonic correlations between preventable death proportions and each of the other safety measures for the 10 hospitals were examined using Spearman's rank correlation coefficient. A clinically significant association was defined as a correlation coefficient of at least 0.3. Tests for significance were two-sided, with the significance level set at 0.02 to allow for the multiple comparisons being tested. The impact of deaths in which the problem in care occurred before admission was investigated by re-running the analyses with such cases excluded.
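As an illustration of this analysis, the following sketch reproduces the strongest correlation reported below (Table 4) from the Table 2 values using scipy; the software actually used in the study is not stated, so this is a reconstruction rather than the authors' code:

```python
# Sketch of the Spearman correlation analysis, using the preventable-death
# percentages and MRSA rates for hospitals A-J from Table 2.
from scipy.stats import spearmanr

preventable_pct = [4, 6, 5, 6, 5, 3, 8, 4, 6, 5]                # % preventable deaths
mrsa_rate = [1.0, 4.2, 3.4, 6.2, 2.9, 0.9, 3.3, 1.6, 4.2, 0.7]  # per 100 000 admissions

rho, p = spearmanr(preventable_pct, mrsa_rate)
# Judged clinically significant if |rho| >= 0.3, and statistically
# significant if the two-sided P-value falls below the pre-set 0.02.
print(f"rho = {rho:.2f}, P = {p:.2f}")  # rho = 0.73, P = 0.02, as in Table 4
```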
Results
Table 2 details the characteristics of the study hospitals, including size, annual admissions and teaching status, together with the performance of each hospital on the safety measures. Among the 1000 adult patients who died in acute hospitals in England, death was considered preventable in 5.2% of cases [95% confidence interval (CI) 3.8–6.6%]. The proportion varied among hospitals from 3 to 8%, but these differences were not statistically significant (P = 0.94) (Fig. 1). Table 3 shows the distribution of the safety measures across the 10 hospitals.
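The quoted interval is consistent with a simple normal approximation for a binomial proportion, assuming 52 preventable deaths among the 1000 reviewed (as implied by 5.2%); the interval method actually used in the study is not stated. A quick arithmetic check:

```python
# Check of 5.2% (95% CI 3.8-6.6%) via a normal approximation; the count
# 52/1000 is inferred from the text, not reported directly.
from math import sqrt

p, n = 52 / 1000, 1000
se = sqrt(p * (1 - p) / n)                  # standard error of the proportion
lo_ci, hi_ci = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI {100*lo_ci:.1f}-{100*hi_ci:.1f}%)")  # 5.2% (95% CI 3.8-6.6%)
```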
Table 2.
Hospital characteristics and patient safety indicator values
| PRISM trusts | A | B | C | D | E | F | G | H | I | J |
|---|---|---|---|---|---|---|---|---|---|---|
| Bed numbers | 871 | 393 | 1449 | 998 | 1108 | 693 | 999 | 628 | 483 | 417 |
| Annual admissions | 100 828 | 37 345 | 171 954 | 111 003 | 141 166 | 94 961 | 117 727 | 76 873 | 55 238 | 51 756 |
| Number of adult coronary care unit beds | 32 | 7 | 62 | 67 | 58 | 10 | 24 | 12 | 10 | 7 |
| Hospital type | Large acute | Small acute | Acute teaching | Acute teaching | Large acute | Large acute | Large acute | Medium acute | Small acute | Small acute |
| Preventable deaths (%) | 4 | 6 | 5 | 6 | 5 | 3 | 8 | 4 | 6 | 5 |
| HSMR | 107.6 | 97.8 | 79.6 | 96.8 | 96.0 | 102.1 | 107.4 | 89.4 | 90.3 | 112.0 |
| MRSA bacteraemia rates per 100 000 admissions | 1.0 | 4.2 | 3.4 | 6.2 | 2.9 | 0.9 | 3.3 | 1.6 | 4.2 | 0.7 |
| Emergency readmissions within 28 days of discharge (%) | 12.4 | 11.5 | 13.2 | 12.0 | 13.0 | 10.1 | 9.5 | 10.2 | 12.7 | 9.6 |
| Patient safety incidents per 100 000 admissions | 6298.8 | 4236.2 | 4869.3 | 6316.9 | 3536.3 | 5255.8 | 5971.4 | 3903.8 | 4764.8 | 4134.8 |
| Patients reporting hospital ‘not very clean’/‘not clean at all’ (%) | 4.1 | 4.2 | 5.2 | 2.4 | 4.6 | 2.9 | 2.9 | 1.7 | 2.5 | 5.3 |
| Patients reporting hospital not cleaning their hands between patients (%) | 2.2 | 2.6 | 4.9 | 4.2 | 2.1 | 2.6 | 2.8 | 1.5 | 3.6 | 2.8 |
| Staff indicating that patient safety incidents were not reported (%) | 33 | 34 | 34 | 40 | 36 | 34 | 35 | 36 | 34 | 35 |
| Staff sickness absence rate (%) | 3.2 | 4.4 | 2.8 | 3.5 | 4.7 | 4.5 | 4 | 3.1 | 3.7 | 4.1 |
Figure 1.
Proportion of preventable deaths across 10 English acute hospitals.
Table 3.
Distribution of patient safety measure values across 10 acute hospitals
| Safety measure | Median | Inter-quartile range |
|---|---|---|
| Preventable deaths (%) | 5.00 | 4.00–6.00 |
| Outcomes | ||
| HSMR | 97.30 | 91.73–106.10 |
| MRSA bacteraemia rates per 100 000 admissions | 3.10 | 1.15–4.0 |
| Emergency readmissions within 28 days of discharge (%) | 11.74 | 10.15–12.59 |
| Processes | ||
| Patient safety incidents per 100 000 admissions | 4817 | 4160–5793 |
| Patients reporting hospital ‘not very clean’ or ‘not at all clean’ (%) | 3.53 | 2.58–4.50 |
| Patients reporting nurses did not clean their hands between patients (%) | 2.67 | 2.30–3.37 |
| Structures | ||
| Staff indicating that patient safety incidents were not reported (%) | 34.68 | 33.84–35.85 |
| Staff sickness absence rate (%) | 3.85 | 3.28–4.33 |
The relationships between preventable deaths and the other measures of safety are shown in Table 4. Only one association was clinically and statistically significant: a positive correlation between preventable death proportion and MRSA bacteraemia rate (r = 0.73; P = 0.02) (Fig. 2). A positive association was also observed with one other measure (nurses not cleaning their hands between patients, r = 0.51), and weak positive relationships with two further measures (staff indicating that adverse events were not reported, r = 0.26; patient safety incidents, r = 0.23), but none of these was statistically significant.
Table 4.
Correlations between preventable deaths and other patient safety measures
| Safety measure | Spearman correlation coefficient | Lower confidence limit | Upper confidence limit | P-value |
|---|---|---|---|---|
| Outcomes | ||||
| HSMR | −0.012 | −0.64 | 0.62 | 0.97 |
| MRSA bacteraemia rates per 100 000 admissions | 0.73 | 0.19 | 0.93 | 0.02 |
| Emergency readmissions within 28 days of discharge (%) | −0.06 | −0.66 | 0.59 | 0.86 |
| Processes | ||||
| Patient safety incidents per 100 000 admissions | 0.23 | −0.47 | 0.75 | 0.52 |
| Patients reporting hospital ‘not very clean’/ ‘not at all clean’ (%) | −0.08 | −0.68 | 0.58 | 0.80 |
| Patients reporting nurses not cleaning hands between patients (%) | 0.51 | −0.17 | 0.86 | 0.12 |
| Structures | ||||
| Staff indicating patient safety incidents were not reported (%) | 0.26 | −0.44 | 0.76 | 0.47 |
| Staff sickness absence rate | 0.06 | −0.59 | 0.66 | 0.86 |
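The confidence limits in Table 4 appear consistent with a Fisher z-transformation of the coefficient with standard error 1/√(n − 3). The sketch below reproduces the MRSA and HSMR rows on that assumption; it is a reconstruction, not a method stated by the authors:

```python
# Approximate 95% CI for a correlation via Fisher's z-transform
# (assumed, not stated, to be the method behind Table 4).
from math import atanh, tanh, sqrt

def correlation_ci(r: float, n: int, z_crit: float = 1.96):
    """Back-transform a symmetric interval on the z scale to the r scale."""
    z, se = atanh(r), 1 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

for label, r in [("MRSA", 0.73), ("HSMR", -0.012)]:
    lo_ci, hi_ci = correlation_ci(r, n=10)
    print(f"{label}: ({lo_ci:.2f}, {hi_ci:.2f})")
# MRSA: (0.19, 0.93); HSMR: (-0.64, 0.62), matching Table 4
```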
Figure 2.
Scatter plot of hospital-apportioned MRSA bacteraemia rates and hospital preventable death proportion.
On the other four measures (HSMR, emergency readmissions, hospital cleanliness and staff sickness absence), there was no evidence of an association with preventable deaths. Given that previous studies have compared groups of hospitals with high SMRs against those with low SMRs, we did the same by aggregating data from the three hospitals with the highest SMRs and the three with the lowest. There was no significant difference: 5.6 versus 5.0%, respectively (P = 0.74). As regards our hypothesis based on Donabedian's categories of outcome, process and structure, there was a suggestion that the strength of association declined from 0.73 with another outcome measure (MRSA bacteraemia), to 0.51 with a process measure (staff hand hygiene), to 0.26 with a structure measure (poor safety culture), but the lack of statistical significance of the latter two correlations means this observation must be treated cautiously.
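The aggregated comparison can be sketched as a standard two-proportion test. The counts below are inferred, not reported: assuming 100 reviewed deaths per hospital, the three highest-HSMR hospitals (J, A, G in Table 2) give roughly 17/300 preventable deaths and the three lowest (C, H, I) give 15/300; the study's exact test is also not stated:

```python
# Sketch of the high- vs low-SMR group comparison; counts inferred from
# Table 2 under the assumption of 100 reviewed deaths per hospital.
from scipy.stats import chi2_contingency

table = [[17, 300 - 17],   # high-SMR group: preventable, not preventable
         [15, 300 - 15]]   # low-SMR group
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"P = {p:.2f}")  # ~0.72: no significant difference, consistent with the reported P = 0.74
```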
For five deaths, the only problem in care occurred before admission to hospital (in two cases, the harm originated in primary care, and in three cases, it was due to previous hospital encounters). Repeating the analyses having excluded those deaths made little difference to our findings: there was a slight increase in the positive correlation with MRSA bacteraemia rate (r = 0.77; P = 0.01) and in the negative correlation with HSMR (r = −0.2; P = 0.56).
Discussion
This exploratory study has found that only one of the eight measures of safety (MRSA bacteraemia rate) was significantly associated with the proportion of preventable deaths. In contrast, for four of the other measures, there appeared to be no association with preventable deaths (HSMR, emergency readmissions, hospital cleanliness and staff sickness absence).
Few previous studies have examined most of the safety measures we considered, but our results can be compared with the four earlier studies of the association between preventable deaths and SMRs, which found no statistically significant correlation (except for pneumonia deaths in one study [14]) [12, 13, 15]. Our results are consistent with that literature. One UK study of 173 acute hospitals considered associations between several patient safety measures but did not include preventable death proportion [19]. It found that a process measure (patient safety incident rates) had no association with several outcome measures (MRSA bacteraemia rates, SMRs, incidence of decubitus ulcers and post-operative sepsis rates). There were, however, positive associations with some structure measures (staff views of the safety culture in their hospital and risk management ratings).
Our generally null results suggest that each safety measure might have a different underlying set of causes. The one exception was a moderately strong correlation with a hospital-acquired infection (MRSA), which gives credence to the focus on hospital-acquired infections in many countries, including the UK [20]. In the study from which our data were drawn, 7% of preventable deaths were associated with hospital-acquired infection and 3.8% with MRSA septicaemia [6]. Our findings therefore support policies in the UK, USA and France aimed at reducing MRSA bacteraemia rates in order to reduce preventable deaths [21]. Although our study was insufficiently powered to detect a statistically significant association with one of the cornerstones of infection control, hand washing by healthcare staff [22], the findings do suggest a link with deaths related to hospital-acquired infection. Measuring this activity may be an effective way to assess and drive improvements in safety [23].
The lack of correlation between preventable deaths and rates of patient safety incidents may reflect a true lack of association or incomplete reporting to the National Patient Safety Agency. In addition, there is some uncertainty as to whether a high rate of reported incidents reflects poor safety or the opposite, an indication of a greater propensity to address safety issues. Staff concerns about unfair blame or fear of litigation, particularly in organizations with a poor safety culture, generally discourage reporting [24, 25].
The lack of correlation with HSMR is consistent with findings from other studies [26]. After taking account of artefactual and structural factors, it remains unclear how much of the residual variation in HSMRs represents differences in safety between organizations [4, 5]. Given that this study (and others) suggests that only about 5% of deaths are preventable, the low signal-to-noise ratio would prevent HSMRs from being valid measures of the safety of care.
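To make the signal-to-noise argument concrete, the following simulation is entirely illustrative (assumed parameters: 200 hospitals, 500 expected deaths each, a true preventable share of 3–8% and ±10% artefactual variation in the remaining mortality). Even when preventability is measured without error, an SMR-like ratio tracks it only weakly, because variation in non-preventable deaths dominates:

```python
# Illustrative simulation (not from the paper) of why SMR-like ratios are a
# noisy signal of preventability when only ~5% of deaths are preventable.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_hosp, E = 200, 500.0                        # hospitals; expected deaths each (assumed)
prev_rate = rng.uniform(0.03, 0.08, n_hosp)   # true preventable share of deaths
artefact = rng.uniform(0.9, 1.1, n_hosp)      # coding/case-mix noise in the rest
nonprev = rng.poisson(E * 0.95 * artefact)    # non-preventable deaths
prev = rng.poisson(E * prev_rate)             # preventable deaths
smr = (nonprev + prev) / E                    # SMR-like observed/expected ratio
rho, p = spearmanr(smr, prev_rate)
print(f"rho = {rho:.2f}")  # typically weak, despite perfect measurement of preventability
```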
The principal strengths of this study are the methodological rigour with which preventable deaths were determined, and the fact that it is the first study outside the USA to examine their relationship with a wide range of other safety measures at the level of individual hospitals. However, with only 10 hospitals, the principal limitation is its power to detect associations between safety measures. The small sample size may have allowed outliers to exert undue influence on the correlation coefficients, although this was minimized by using the more conservative Spearman rank correlation. Furthermore, the sample size from each hospital meant that CIs around proportions of preventable deaths were wide, which further reduced the likelihood of detecting significant associations between measures. Whilst retrospective case record review provides a comprehensive picture of patient care, it is inevitably limited by what is written in the record, by only moderate inter-rater reliability and by hindsight bias. By using senior doctor reviewers, a training package and a structured data collection form, we attempted to mitigate these biases.
Our choice of safety measures was guided by the desire to look at safety from multiple perspectives and to focus on entire hospitals. Use of other measures that are not yet publicly available, such as in-hospital cardiac arrest rates currently being collected as part of a national clinical audit in the UK [27], would have strengthened the study. Despite the selection of safety measures being limited to those with reasonable coverage and measurement properties, several of these are of uncertain accuracy (such as patient incident reports, patients' views and staff views).
Although our study of preventable deaths was conducted during calendar year 2009, the majority of safety measures used in the correlation analyses were from the financial year 2009/10. We have no reason to believe that this minor lack of concordance of data collection periods would have introduced any significant bias.
Given the lack of association between preventable death proportion and some widely used measures of safety (HSMRs, emergency readmissions), a larger study is needed to establish whether this reflects the limited power of our study or a real absence of relationships. As of spring 2014, a study of an additional 24 hospitals in the UK is underway to reduce the uncertainty that inevitably surrounds these preliminary findings [28]. This new study will examine the relationship between preventable deaths and the outcome measures HSMR and SHMI.
Finally, our findings underline the need for governments and others responsible for healthcare systems to consider a portfolio of measures of safety when assessing a hospital, until the inter-relationships between the various safety measures are better understood.
Funding
The work was supported by the National Institute for Health Research (NIHR) Research for Patient Benefit Programme (PB-PG-1207-15215). The funders had no role in study design, data collection, data analysis, data interpretation or composition of the report. The corresponding author had full access to all data in the study and had final responsibility for the decision to submit for publication. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Acknowledgements
We thank the 10 English acute hospital Trusts and the PRISM case record reviewers for supplying the background data for this study. We also thank Jenny Neuburger and Andrew Hutchings for statistical advice, and the National Institute for Health Research's Research for Patient Benefit Programme for funding.
Appendix 1 Scatter plots
[Scatter plots of the proportion of preventable deaths against each of the other safety measures.]
References
- 1. Hogan H, Olsen S, Scobie S, et al. What can we learn about patient safety from information sources within an acute hospital: a step on the ladder of integrated risk management? Qual Saf Health Care 2008;17:209–15. doi:10.1136/qshc.2006.020008.
- 2. Donabedian A. Methods for deriving criteria for assessing the quality of medical care. Med Care Rev 1980;37:653–98.
- 3. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. Can Med Assoc J 2008;179:153–7. doi:10.1503/cmaj.080010.
- 4. Scott IA, Brand CA, Phelps GE, et al. Using hospital standardised mortality ratios to assess quality of care—proceed with extreme caution. Med J Aust 2011;194:645–8. doi:10.5694/j.1326-5377.2011.tb03150.x.
- 5. van Gestel YR, Lemmens VE, Lingsma HF, et al. The hospital standardized mortality ratio fallacy: a narrative review. Med Care 2012;50:662–7. doi:10.1097/MLR.0b013e31824ebd9f.
- 6. Hogan H, Healey F, Neale G, et al. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf 2012;21:737–45. doi:10.1136/bmjqs-2011-001159.
- 7. Brennan T, Leape L, Laird N, et al. Incidence of adverse events and negligence in hospitalised patients. Results of the Harvard Medical Practice Study 1. N Engl J Med 1991;324:370–6. doi:10.1056/NEJM199102073240604.
- 8. Wilson R, Runciman W, Gibberd R, et al. Quality in Australian Health Care Study. Med J Aust 1995;163:472–5. doi:10.5694/j.1326-5377.1995.tb124691.x.
- 9. Baker R, Norton P, Flintoft V, et al. The Canadian adverse events study: the incidence of adverse events among hospital patients in Canada. Can Med Assoc J 2004;170:1678–86. doi:10.1503/cmaj.1040498.
- 10. Zegers M, de Bruijne MC, Wagner C, et al. Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study. Qual Saf Health Care 2009;18:297–302. doi:10.1136/qshc.2007.025924.
- 11. Hayward R, McMahon L, Bernard A. Evaluating the care of general medical inpatients: how good is structured implicit review? Ann Intern Med 1993;118:550–6. doi:10.7326/0003-4819-118-7-199304010-00010.
- 12. Best WR, Cowper DC. The ratio of observed-to-expected mortality as a quality of care indicator in non-surgical VA patients. Med Care 1994;32:390–400. doi:10.1097/00005650-199404000-00007.
- 13. Gibbs J, Clark K, Khuri S, et al. Validating risk-adjusted surgical outcomes: chart review of process of care. Int J Qual Health Care 2001;13:187–96. doi:10.1093/intqhc/13.3.187.
- 14. Dubois RW, Rogers WH, Moxley JH 3rd, et al. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med 1987;317:1674–80. doi:10.1056/NEJM198712243172626.
- 15. Guru V, Tu JV, Etchells E, et al. Relationship between preventability of death after coronary artery bypass graft surgery and all-cause risk-adjusted mortality rates. Circulation 2008;117:2969–76. doi:10.1161/CIRCULATIONAHA.107.722249.
- 16. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. Br Med J 2001;322:517–9. doi:10.1136/bmj.322.7285.517.
- 17. Hayward R, Hofer T. Estimating hospital deaths due to medical error—preventability is in the eye of the reviewer. J Am Med Assoc 2001;286:415–20. doi:10.1001/jama.286.4.415.
- 18. Brennan TA, Localio RJ, Laird NL. Reliability and validity of judgments concerning adverse events suffered by hospitalized patients. Med Care 1989;27:1148–58. doi:10.1097/00005650-198912000-00006.
- 19. Hutchinson A, Young TA, Cooper KL, et al. Trends in healthcare incident reporting and relationship to safety and quality data in acute hospitals: results from the national reporting and learning system. Qual Saf Health Care 2009;18:5–10. doi:10.1136/qshc.2007.022400.
- 20. National Audit Office. The Management and Control of Hospital Acquired Infection in Acute Trusts in England. London: National Audit Office, 2008.
- 21. Haustein T, Gastmeier P, Holmes A, et al. Use of benchmarking and public reporting for infection control in four high-income countries. Lancet Infect Dis 2011;11:471–81. doi:10.1016/S1473-3099(10)70315-7.
- 22. World Health Organization/Patient Safety Alliance. WHO Guidelines on Hand Hygiene in Healthcare: First Global Patient Safety Challenge: Clean Care is Safer Care. Geneva: WHO, 2009.
- 23. Kirkland KB, Homa KA, Lasky RA, et al. Impact of a hospital-wide hand hygiene initiative on healthcare-associated infections: results of an interrupted time series. BMJ Qual Saf 2012;21:1019–26. doi:10.1136/bmjqs-2012-000800.
- 24. Firth-Cozens J. Barriers to incident reporting. Qual Saf Health Care 2002;11:7. doi:10.1136/qhc.11.1.7.
- 25. Stanhope C, Crowley-Murphey M, Vincent C, et al. An evaluation of adverse incident reporting. J Eval Clin Pract 1999;5:1–4. doi:10.1046/j.1365-2753.1999.00146.x.
- 26. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007;7:91. doi:10.1186/1472-6963-7-91.
- 27. Nolan J, Gallegher L, Lloyd-Scott L, et al. National Cardiac Arrest Audit (NCAA). J Intensive Care Soc 2009;10:313–15.
- 28. Keogh B. Review Into the Quality of Care and Treatment Provided by 14 Hospital Trusts in England: Overview Report. London: Department of Health, 2013.