The Cochrane Database of Systematic Reviews
2016 Dec 2;2016(12):CD008992. doi: 10.1002/14651858.CD008992.pub3

External inspection of compliance with standards for improved healthcare outcomes

Gerd Flodgren 1,, Daniela C Gonçalves‐Bradley 2, Marie‐Pascale Pomey 3
Editor: Cochrane Effective Practice and Organisation of Care Group
PMCID: PMC6464009  PMID: 27911487

Abstract

Background

Inspection systems are used in healthcare to promote quality improvements (i.e. to achieve changes in organisational structures or processes, healthcare provider behaviour and patient outcomes). These systems are based on the assumption that externally promoted adherence to evidence‐based standards (through inspection/assessment) will result in higher quality of healthcare. However, the benefits of external inspection in terms of organisational‐, provider‐ and patient‐level outcomes are not clear. This is the first update of the original Cochrane review, published in 2011.

Objectives

To evaluate the effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes.

Search methods

We searched the following electronic databases for studies up to 1 June 2015: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Database of Abstracts of Reviews of Effectiveness, HMIC, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform. There was no language restriction and we included studies regardless of publication status. We also searched the reference lists of included studies and contacted authors of relevant papers, accreditation bodies and the International Organization for Standardization (ISO), regarding any further published or unpublished work. We also searched an online database of systematic reviews (PDQ‐evidence.org).

Selection criteria

We included randomised controlled trials (RCTs), non‐randomised trials (NRCTs), interrupted time series (ITSs) and controlled before‐after studies (CBAs) evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes in hospitals, primary healthcare organisations and other community‐based healthcare organisations.

Data collection and analysis

Two review authors independently applied eligibility criteria, extracted data and assessed the risk of bias of each included study. Since meta‐analysis was not possible, we produced a narrative results summary. We used the GRADE tool to assess the certainty of the evidence.

Main results

We did not identify any new eligible studies in this update. One cluster RCT involving 20 South African public hospitals and one ITS involving all acute hospital trusts in England met the inclusion criteria. A trust is a National Health Service hospital that has opted to withdraw from local authority control and be self-governing instead.

The cluster RCT reported mixed effects of external inspection on compliance with COHSASA (Council for Health Services Accreditation for South Africa) accreditation standards and eight indicators of hospital quality. Improved total compliance score with COHSASA accreditation standards was reported for 21/28 service elements: mean intervention effect was 30% (95% confidence interval (CI) 23% to 37%) (P < 0.001). The score increased from 48% to 78% in intervention hospitals, while remaining the same in control hospitals (43%). The median intervention effect for the indicators of hospital quality of care was 2.4% (range ‐1.9% to +11.8%).

The ITS study evaluated compliance with policies to address healthcare‐acquired infections and reported a mean reduction in MRSA (methicillin‐resistant Staphylococcus aureus) infection rates of 100 cases per quarter (95% CI ‐221.0 to 21.5, P = 0.096) at three months' follow‐up and an increase of 70 cases per quarter (95% CI ‐250.5 to 391.0; P = 0.632) at 24 months' follow‐up. Regression analysis showed similar MRSA rates before and after the external inspection (difference in slope 24.27, 95% CI ‐10.4 to 58.9; P = 0.147).

Neither included study reported data on unanticipated/adverse consequences or economic outcomes. The cluster RCT mainly reported outcomes related to healthcare organisation change, and no patient-reported outcomes other than patient satisfaction.

The certainty of the evidence from both studies was very low. It is uncertain whether external inspection accreditation programmes lead to improved compliance with accreditation standards. It is also uncertain whether external inspection programmes targeting healthcare-acquired infections lead to improved compliance with standards, and whether this in turn influences healthcare-acquired MRSA infection rates.

Authors' conclusions

The review highlights the paucity of high-quality controlled evaluations of the effectiveness and cost-effectiveness of external inspection systems. If policy makers wish to better understand the effectiveness of this type of intervention, further studies are needed across a range of settings and contexts, including studies reporting outcomes important to patients.

Plain language summary

Can third‐party inspections of whether healthcare organisations are fulfilling mandatory standards improve healthcare outcomes?

What is the aim of this review?

The aim of this Cochrane Review was to find out if external inspection of compliance with standards can improve healthcare organisation behaviour, healthcare professional behaviour and patient outcomes. Cochrane researchers collected and analysed all relevant studies to answer this question and found two studies.

Key messages

It is unclear whether third‐party inspection programmes designed to measure a healthcare organisation's compliance with standards of care can improve professional practice and healthcare outcomes. There was little information on patient outcomes. This review highlights the lack of high‐quality evaluations.

What was studied in the review?

Third-party (external) inspection programmes are used within healthcare settings (e.g. clinics and hospitals) to increase compliance with evidence-based standards of care, but very little is known about their benefits in terms of organisational performance (e.g. waiting list times, inpatient length of stay), the performance of healthcare professionals (e.g. referral rate, prescribing rate) and patient outcomes (e.g. mortality, and condition-specific outcomes such as blood glucose for diabetic patients or weight loss for overweight or obese patients), or about their cost-effectiveness. Accreditation programmes are one example of an external review system. Accreditation is a process of review that healthcare organisations participate in to demonstrate their ability to meet predetermined criteria and standards established by a professional accrediting agency. Infection control inspection programmes are another example. These use standards based on evidence and best practice to improve the quality and safety of care and to decrease rates of healthcare-acquired infections (also called hospital-acquired or nosocomial infections: infections acquired in a hospital or another healthcare setting). If the standards are not adhered to, the external inspection body can take action to enforce them.

What are the main results of the review?

The review authors searched the literature for studies evaluating the effects of external inspection of compliance with standards and found two relevant studies: one study involving 20 hospitals in the Republic of South Africa and one study providing time series data (a sequence of outcome measurements taken at successive, equally spaced points in time) involving all acute hospital trusts in England (a trust is a National Health Service hospital that has opted to withdraw from local authority control and be self-governing instead). The comparison was no inspection.

One study reported improved compliance scores with hospital accreditation standards. However, it is uncertain whether external inspection leads to improved compliance with standards because the certainty of the evidence was very low. Only one of the nine intervention hospitals achieved accreditation status during the study period.

Another study reported on the effects of an infection control inspection programme. This programme was commissioned in the UK to reduce infection rates of MRSA (methicillin-resistant Staphylococcus aureus, a bacterium that is resistant to many antibiotics). However, the inspection programme was only one element of a wider range of methods being applied to infection control in the UK National Health Service at that time. Even before the introduction of the inspection programme, infection rates appeared to be decreasing, but the introduction of the programme did not accelerate this decrease. It is uncertain whether the Healthcare Commission's Healthcare Associated Infections Inspection Programme lowers MRSA infection rates, because the certainty of the evidence was very low.

How up‐to‐date is this review?

The review authors searched for studies that had been published up to June 2015.

Summary of findings

Summary of findings for the main comparison. External inspection of compliance with COHSASA hospital accreditation standards versus no external inspection.

External inspection of compliance with COHSASA hospital accreditation standards versus no external inspection
Recipient: public hospitals
Settings: KwaZulu province, the Republic of South Africa
Intervention: external inspection of compliance with accreditation standards
Comparison: no inspection
Outcomes; Intervention effect (range); No of studies (hospitals); Certainty of the evidence (GRADE); Comments
Compliance with COHSASA accreditation standards at 2 years' follow‐up
(change in total compliance score for 21/28 service elements ‐ for 7/28 service elements, data were not available)
I: pre: 48% (not reported), post: 78% (not reported)
C: pre: 43% (not reported), post: 43% (not reported)
Mean intervention effect: 30% (23% to 37%), P < 0.001
1
(18)
⊕⊖⊖⊖
Very low 1,2
Uncertain whether external inspection leads to improved compliance with standards.
Compliance with COHSASA accreditation standards ‐
subgroup of critical criteria analysed at 2 years' follow‐up
(compliance score for 19 generic service elements, involving 424 predefined critical criteria)
I: pre: 38% (21% to 46%), post: 76% (55% to 96%)
C: pre: 37% (28% to 47%), post: 38% (25% to 49%)
Mean intervention effect: 37% (not reported), P < 0.001
1
(18)
⊕⊖⊖⊖
Very low 1,2
Uncertain whether external inspection leads to improved compliance with standards.
Indicators for hospital quality of care at 2 years' follow‐up Median intervention effect:
2.4% (‐1.9% to 11.8%)
1
(18)
⊕⊖⊖⊖
Very low 1,2
Uncertain whether external inspection improves median quality indicator scores.
Only 1 of the indicators was indicative of higher quality in accreditation hospitals.
Mortality and condition‐specific measures of outcome related to patients' health Not measured or reported.
Unanticipated/adverse consequences Not measured or reported.
Costs and cost effectiveness Not measured or reported.
High = This research provides a very good indication of the likely effect. The likelihood that the effect will be substantially different is low.
Moderate = This research provides a good indication of the likely effect. The likelihood that the effect will be substantially different is moderate.
Low = This research provides some indication of the likely effect. However, the likelihood that it will be substantially different is high.
Very low = This research does not provide a reliable indication of the likely effect. The likelihood that the effect will be substantially different is very high.
Substantially different = a large enough difference that it might affect a decision.

1 Downgraded two levels for very serious risk of bias, due to unclear blinding, differences in baseline compliance and time differences in outcome measurements.

2 Downgraded one level for serious imprecision due to the small sample size and wide confidence intervals.

C: control; COHSASA: Council for Health Services Accreditation for South Africa; I: intervention.

Summary of findings 2. External inspection of compliance with the Code of Practice and the law related to healthcare‐acquired infections.

External inspection of compliance with the Code of Practice and the law related to healthcare‐acquired infections
Recipient: all acute trusts
Settings: England
Intervention: external inspection of compliance with the Code of Practice and the Health Act 2006 related to healthcare‐acquired infections
Comparison: no control group (time series)
Outcomes; Mean intervention effect (95% CI); No of studies (trusts); Certainty of the evidence (GRADE); Comments
MRSA infection rate:
At 3 months' follow-up: -100 (-221.0 to 21.5) cases per quarter (P = 0.096)
At 6 months' follow-up: -75 (-217.2 to 66.3) cases per quarter (P = 0.259)
At 12 months' follow-up: -27 (-222.1 to 168.2) cases per quarter (P = 0.762)
At 24 months' follow-up: +70 (-250.5 to 391.0) cases per quarter (P = 0.632)
1
(168)
⊕⊖⊖⊖
Very low1,2
Uncertain whether external inspection lowers MRSA infection rates.
Regression analysis showed similar MRSA rate before and after the external inspection (difference in slope 24.27, 95% CI ‐10.4 to 58.9; P = 0.147).
Unanticipated/adverse consequences Not measured or reported.
Costs and cost effectiveness Not measured or reported.
High = This research provides a very good indication of the likely effect. The likelihood that the effect will be substantially different is low.
Moderate = This research provides a good indication of the likely effect. The likelihood that the effect will be substantially different is moderate.
Low = This research provides some indication of the likely effect. However, the likelihood that it will be substantially different is high.
Very low = This research does not provide a reliable indication of the likely effect. The likelihood that the effect will be substantially different is very high.
Substantially different = a large enough difference that it might affect a decision.

1 Downgraded one level for serious risk of bias due to the probability that the intervention was not independent of other changes.

2 Downgraded one level for serious imprecision due to wide confidence intervals.

CI: confidence interval; MRSA: methicillin‐resistant Staphylococcus aureus.

Background

Inspection or review systems are used in healthcare to promote improvements in the quality of care, that is, changes in organisational structures or processes, in healthcare provider behaviour and thereby in patient outcomes. These review systems are based on the assumption that externally promoted adherence to evidence-based standards (through inspection/assessment) will result in higher quality of healthcare (Shaw 2001; Pomey 2005). Review systems are popular among healthcare funders, who are more likely to make funding available (or less likely to withdraw funding) if standards are met, and healthcare professionals and the public can have confidence in the standards of care provided (Hart 2013). Numerous review systems are described in the literature (e.g. peer review, accreditation, audit, regulation and statutory inspection, International Organization for Standardization (ISO) certification, and evaluation against a business excellence model (Shaw 2001; Shaw 2004)). Although there is a trend towards mandatory government-driven accreditation systems (Accreditation Canada 2015), many review systems still assume that the organisation being reviewed will volunteer itself for the review process and, by so doing, will have already made some commitment to improvement. Such voluntary systems will systematically miss those organisations that are not inclined to submit themselves for review; only an externally authorised and driven process can promote change in any organisation, irrespective of its inclination to be inspected. An example of such a system is the inspection process run by the Care Quality Commission (formerly the Healthcare Commission) in the UK National Health Service (NHS) (www.cqc.org.uk/). The commission has a regular cycle of inspection and the ability to respond to concerns about the quality of healthcare in any NHS organisation, to inspect, and largely to decide the consequences of inspection (Care Quality Commission 2013).

Description of the intervention

For the purposes of this review, an external inspection is defined as "a system, process or arrangement in which some dimensions or characteristics of a healthcare provider organisation and its activities are assessed or analysed against a framework of ideas, knowledge, or measures derived or developed outside that organisation" (Walsche 2000). The process of inspection is initiated and controlled by an organisation external to the one being inspected.

ISO defines a standard as "a document, established by consensus and approved by a recognised body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context" (ISO 2004). Included in this definition is that "standards should be based on the consolidated results of science, technology and experience, and aimed at the promotion of optimum community benefits."

The external standard is a standard developed by a body external to the organisation being inspected, which distinguishes it from the standards used in, for example, audit and feedback, which are often set by the group to whom they are applied.

How the intervention might work

Inspection of performance assumes that the process of comparing performance against an explicit standard of care will lead to the closing of any identified important gaps; in this sense, the underlying process is akin to that which underpins clinical audit (Accreditation Canada 2015). However, when conducted at an organisational level, it usually encompasses a far wider range of organisational attributes than clinical audit would normally address. Inspections for assessing the quality of care within healthcare organisations are undertaken by a variety of agencies, non-governmental as well as governmental, that use different approaches and methods (Hart 2013). Thus, the inspection process can take different forms, both in the measurements and data used and in the purpose and focus of the review. Systems may also differ in when and how they are initiated, whether they are voluntary or mandatory, whether they are applied to all of an organisation or only a subsection of it (e.g. a particular clinical area or professional group), and whether they are linked to incentives or sanctions. The way external bodies use the results to bring about desired quality improvements in an organisation also differs. There may also be adverse effects, undesired change or changes that do not last long term (Walsche 2000).

Why it is important to do this review

Voluntary inspection processes are extensively used in North America, Europe and elsewhere around the world, but have rarely been evaluated in terms of impacts on the organisations reviewed (i.e. healthcare delivery, patient outcomes or cost-effectiveness) (Greenfield 2008). External inspection processes similarly lack evaluations, and it is therefore not clear what the benefits of such inspections are, or which inspection process is most successful in improving healthcare. Reviewing the available evidence on the effectiveness of external inspection is an important first step towards identifying the optimal design of such processes in terms of their ability to improve healthcare processes and outcomes. This is the first update of the original review (Flodgren 2011).

Objectives

To evaluate the effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes.

Methods

Criteria for considering studies for this review

Types of studies

We included studies evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes in hospitals, primary healthcare organisations and other community-based healthcare organisations. We considered the following study designs: randomised controlled trials (RCTs), non-randomised trials (NRCTs), interrupted time series (ITSs) and controlled before-after studies (CBAs) that included at least two sites in both control and intervention groups. As we did not expect to find many eligible randomised studies in this area of research, we considered including robust non-randomised studies complying with the Effective Practice and Organisation of Care (EPOC) study design criteria (EPOC 2015).

Types of participants

We included hospitals, primary healthcare organisations or other community‐based healthcare organisations containing health professionals.

Types of interventions

We included all processes of external inspection against external standards in a healthcare setting compared with no inspection or with another form of inspection (e.g. against internally derived standards).

Types of outcome measures

We included studies that reported one or more of the following objective outcome measures.  

Main outcomes
  • Measures of healthcare organisational change (e.g. organisational performance, waiting list times, inpatient hospital stay time).

  • Measures of healthcare professional behaviour (e.g. referral rate, prescribing rate).

  • Measures of patient outcome (e.g. mortality and condition‐specific measures of outcome related to patients' health).

Other outcomes
  • Patient satisfaction and patient involvement.

  • Unanticipated or adverse consequences.

  • Economic outcomes.

Search methods for identification of studies

We searched for studies evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes.

Electronic searches

We searched the following electronic databases for primary studies:

  • Cochrane Central Register of Controlled Trials (CENTRAL) 2015, Issue 5;

  • Database of Abstracts of Reviews of Effectiveness (DARE), the Cochrane Library, 2015, Issue 2;

  • MEDLINE Ovid (1946 to 1 June 2015);

  • Embase Ovid (1996 to 2015 week 22);

  • HMIC Ovid (1983 to 1 June 2015);

  • Clinicaltrials.gov (clinicaltrials.gov/ searched 1 June 2015);

  • World Health Organization International Clinical Trials Registry Platform (apps.who.int/trialsearch/ searched 1 June 2015).

We translated the search strategy for each database using the appropriate controlled vocabulary as applicable. There was no language restriction. We included studies regardless of publication status.

Appendix 1 shows the full search strategies.

Searching other resources

We searched the reference lists of all included studies. We contacted authors of relevant papers as well as accreditation bodies and ISO regarding any further published or unpublished work. We searched websites of organisations concerned with accreditation, such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (www.jointcommission.org/); Accreditation Canada (www.accrediation.ca); the Australian Council for Healthcare Standards International (ACHSI) (www.achs.org.au/ACHSI); and the International Society for Quality in Health Care (ISQua) (www.isquaresearch.com). We also searched an online database of systematic reviews (www.pdq-evidence.org/). We accessed these resources on 10 June 2015.

Data collection and analysis

Selection of studies

We downloaded all titles and abstracts retrieved by electronic searching to the reference management database EndNote and removed duplicates. For this update, one review author (DGB) screened the titles and abstracts found by the electronic searches to remove irrelevant citations. This was done after independently piloting the inclusion criteria with another review author (GF) against a random sample of studies. One review author (DGB) produced a list of possible eligible citations which a second review author (GF) assessed for eligibility. We excluded those studies that clearly did not meet the inclusion criteria and obtained copies of the full text of potentially relevant references. Two review authors (GF and DGB) independently assessed the eligibility of the papers retrieved in full text. A third review author (MPP) resolved any disagreements.

Data extraction and management

Two review authors (from GF, MPE, MPP and ST) independently extracted the data from each included study into a modified EPOC data extraction form (Appendix 2). We resolved disagreements by discussion, or arbitration by a third review author.

Assessment of risk of bias in included studies

Two review authors (from GF, MPE, MPP and ST) independently assessed the risk of bias of each included study. We resolved disagreements by discussion, or arbitration by a third review author. For RCTs, we used Cochrane's tool for assessing risk of bias (Higgins 2011) on six standard criteria:

  • adequate sequence generation;

  • concealment of allocation;

  • blinded or objective assessment of primary outcome(s);

  • adequately addressed incomplete outcome data;

  • free from selective reporting;

  • free of other risk of bias.

We also used three additional criteria specified by EPOC (EPOC 2009):

  • similar baseline characteristics;

  • similar baseline outcome measures;

  • adequate protection against contamination.

For the included ITS study, we used the following criteria.

  • Was the intervention independent of other changes?

  • Was the shape of the intervention effect prespecified?

  • Was the intervention unlikely to affect data collection?

  • Was knowledge of the allocated interventions adequately prevented during the study?

  • Were incomplete outcome data adequately addressed?

  • Was the study free from selective outcome reporting?

  • Was the study free from other risks of bias?

We resolved disagreements by discussion between review authors or, if needed, arbitration by a third review author. We scored risk of bias for these criteria as 'yes' (= adequate), 'no' (= inadequate) or 'unclear'. Studies achieved a 'low' risk of bias score if all risk of bias criteria were judged adequate. We assigned a moderate risk of bias score to studies judged inadequate on one to two criteria, and a high risk of bias score to studies judged inadequate on more than two criteria (Jamtvedt 2006). The risk of bias of included studies is summarised in the text and presented in the risk of bias section within the Characteristics of included studies table.
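
Expressed as a minimal sketch (an illustration, not part of the review methods), the scoring rule reads as follows. The text does not state how 'unclear' judgements count towards the overall score, so treating them as blocking a 'low' rating without counting as inadequate is an assumption.

```python
# Sketch of the overall risk-of-bias rule described above (Jamtvedt 2006).
# Assumption: 'unclear' blocks a 'low' rating but is not counted as inadequate.
def overall_risk_of_bias(judgements):
    """judgements: dict mapping criterion -> 'yes' (adequate), 'no' (inadequate) or 'unclear'."""
    inadequate = sum(1 for j in judgements.values() if j == "no")
    if inadequate == 0 and all(j == "yes" for j in judgements.values()):
        return "low"       # all criteria judged adequate
    if inadequate <= 2:
        return "moderate"  # inadequate on one to two criteria
    return "high"          # inadequate on more than two criteria
```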

Measures of treatment effect

For each study, we reported data in natural units. Where baseline results were available from RCTs, NRCTs and CBAs, we reported preintervention and postintervention means or proportions for both study and control groups and calculated the unadjusted and adjusted (for any baseline imbalance) absolute change from baseline with 95% confidence intervals (CI).
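
As a worked example using the COHSASA compliance scores reported later in this review, the unadjusted intervention effect corresponds to the change from baseline in intervention hospitals minus the change in control hospitals:

\[
(\mathrm{post}_I - \mathrm{pre}_I) - (\mathrm{post}_C - \mathrm{pre}_C) = (78\% - 48\%) - (43\% - 43\%) = 30\%,
\]

which matches the reported mean intervention effect of 30% (95% CI 23% to 37%).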

For ITS studies, we reported the main outcomes in natural units and two effect sizes: the change in the level of outcome immediately after the introduction of the intervention and the change in the slopes of the regression lines. Both estimates are necessary for interpreting the results of each comparison. For example, there could have been no change in the level immediately after the intervention, but there could have been a change in slope. We also reported level effects for six months and yearly postintervention points within the postintervention phase.
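
As context for how these two effect sizes can be estimated, the sketch below shows a minimal segmented regression in Python (plain ordinary least squares with no autocorrelation adjustment; an illustration, not the review authors' code). Fitted to the quarterly MRSA counts listed in Table 6, with the intervention placed at the first fully post-inspection quarter, it reproduces, up to rounding, the pre-intervention slope (-107.6) and the change in slope (24.27) reported for OPM 2009.

```python
# Segmented (interrupted time series) regression sketch: estimates the
# pre-intervention slope, the level change at the intervention and the
# change in slope. Plain OLS, no autocorrelation adjustment (an assumption,
# simpler than what a full ITS re-analysis might use).
import numpy as np
import statsmodels.api as sm

def segmented_regression(y, first_post_index):
    """y: one outcome value per equally spaced time point;
    first_post_index: index of the first post-intervention point."""
    y = np.asarray(y, dtype=float)
    time = np.arange(len(y))
    level = (time >= first_post_index).astype(float)               # step change
    post_trend = np.where(level == 1, time - first_post_index, 0)  # slope change
    X = sm.add_constant(np.column_stack([time, level, post_trend]))
    return sm.OLS(y, X).fit()

# Quarterly MRSA counts from Table 6 (April 2006 to June 2009); inspections
# began in June 2007, so the first fully post-intervention quarter
# (July to September 2007) has index 5.
mrsa = [1742, 1651, 1543, 1447, 1306, 1083, 1092, 970, 839, 724, 678, 694, 509]
fit = segmented_regression(mrsa, 5)
print(fit.params)  # [intercept, pre-slope, level change, change in slope]
```

Because this parameterisation is equivalent to fitting the pre- and post-intervention segments separately, both the level change at the intervention and the change in slope can be read directly from the fitted coefficients.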

We used a standard method of presentation where possible for the results. For comparisons of RCTs, NRCTs and CBAs, we reported (separately for each study design): median effect size across included studies, interquartile ranges of effect sizes across included studies and range of effect sizes across included studies.

Unit of analysis issues

We found two studies and neither had unit of analysis errors. In the cluster RCT, analysis was performed at the level of randomisation (hospital), not at the individual or record level within hospitals, thus allowing for clustering in the analysis (Salmon 2003).

Assessment of heterogeneity

We could not explore heterogeneity, due to finding too few studies.

Assessment of reporting biases

We could not assess publication bias because we found too few studies. 

Data synthesis

We did not carry out meta‐analysis. Instead, we produced a narrative results summary. In one of the included studies, we re‐analysed data on MRSA (methicillin‐resistant Staphylococcus aureus) rate as a time series (OPM 2009). We used Review Manager 5 to present and synthesise the data (RevMan 2014).

Two review authors (GF and DGB) used the GRADE tool to judge the overall certainty of the evidence for each outcome using the following domains: risk of bias, inconsistency, imprecision, indirectness and publication bias (www.gradeworkinggroup.org/). We downgraded the evidence for serious concerns about each of these domains. We resolved disagreements through discussions among review authors. We presented the grading of the evidence in 'Summary of findings' tables.

Subgroup analysis and investigation of heterogeneity

We did not perform any subgroup analysis or investigate heterogeneity.

Sensitivity analysis

We had planned to perform a sensitivity analysis excluding high risk of bias studies, but since we found so few studies, we did not perform an analysis.

Results

Description of studies

We searched for studies (RCTs, NRCTs, ITSs and CBAs) evaluating the effect of external inspection of compliance with standards on healthcare organisation change, healthcare professional behaviour or patient outcomes.

Results of the search

Figure 1 shows the study PRISMA flow chart (Moher 2009). We identified a total of 3405 non-duplicate citations from electronic database searches and 536 from other sources, totalling 3941 unique records. After screening all titles and abstracts, we identified two citations (one full-text paper and one abstract) that met the initial selection criteria, and we obtained these in full text for review. We excluded one of the studies and listed the abstract in the Characteristics of studies awaiting classification table (Browne 2015). The two previously included studies, which met the inclusion criteria, are reported in detail in the Characteristics of included studies table.

Figure 1. Study flow diagram.

Included studies

Two studies met the inclusion criteria: one cluster RCT (Salmon 2003) performed in an upper‐middle‐income country, and one before‐after study (OPM 2009) (that could be re‐analysed as an ITS) performed in a high‐income country. Both external inspections were mandatory (i.e. decided upon by somebody other than the recipient), and universal (i.e. applied at the organisational level). This review update did not identify any new eligible studies for inclusion.

Targeted behaviour

The aims of the accreditation programme were to improve the compliance with COHSASA (Council for Health Services Accreditation for South Africa) accreditation standards and to improve performance related to eight hospital quality‐of‐care indicators (Salmon 2003). The purpose of the Healthcare Commission's inspection programme was to improve trusts' compliance with the Health Act 2006 and the Code of Practice for the Prevention and Control of Healthcare Associated Infections (Department of Health 2006) related to healthcare‐acquired infections, thereby reducing the number of healthcare‐acquired infections (including MRSA infections), and increasing patients' and the public's confidence in the healthcare system (OPM 2009).

Participants and settings

The setting in the cluster RCT was 20 public hospitals in the KwaZulu province in South Africa (five urban, three peri-urban and two rural hospitals in the intervention group; two urban, two peri-urban and six rural hospitals in the control group) (Salmon 2003). The mean (standard deviation (SD)) number of beds was 435 (± 440) in the intervention hospitals and 467 (± 526) in the control hospitals. One intervention hospital dropped out of the accreditation process midway through the study, so, to retain comparability of the intervention and control groups, a similar-sized hospital was removed from the control group, leaving nine hospitals in the intervention group and nine in the control group for this part of the study. Therefore, of the 20 randomised hospitals, 18 remained for the final analyses.

For the Healthcare Commission's inspection programme, all acute hospital trusts in England were included (OPM 2009).

Standards

In Salmon and colleagues, the hospital quality‐of‐care indicators were developed by consensus of an advisory board during a workshop held in South Africa in May 1999 (Salmon 2003). Present at the workshop were South African healthcare professional leaders, the managing director of COHSASA, a representative from Joint Commission International (JCI) and the principal investigators for the research study. This process resulted in 12 indicators for the first round of data collection. However, based on preliminary analysis of data collected from the first round, the research team recommended to the study steering committee that some indicators be dropped. The steering committee (composed of representatives from the research team, the sponsors of the research, COHSASA and several South African medical experts) decided to drop four of the 12 indicators (surgical wound infections, time to surgery, neonatal mortality and financial solvency). These decisions resulted in eight quality indicators (see Table 3). The reasons for abandoning the four indicators are described in Appendix 3.

1. Quality indicators: summary of mean scores for intervention and control hospitals over time, with intervention effect.
Indicator | Intervention (n = 10): Time 1, Time 2, Change | Control (n = 10): Time 1, Time 2, Change | Intervention effect | P value
1. Nurse perceptions | 59.3, 60.8, 1.5 | 60.8, 56.5, -4.2 | 5.7 | 0.031
2. Patient satisfaction | 86.9, 91.5, 4.6 | 87.0, 90.1, 3.1 | 1.5 | 0.484
3. Patient medication education | 42.9, 43.1, 0.2 | 41.5, 40.0, -1.5 | 1.7 | 0.395
4. Medical records: accessibility | 85.4, 77.5, -7.9 | 79.4, 68.4, -11.0 | 3.1 | 0.492
5. Medical records: completeness | 47.1, 49.1, 2.0 | 48.6, 44.9, -3.7 | 5.7 | 0.114
6. Completeness of perioperative notes | 70.2, 72.7, 2.5 | 65.2, 69.6, 4.4 | -1.9 | 0.489
7. Ward stock labelling | 66.0, 81.8, 15.8 | 45.6, 49.6, 4.0 | 11.8 | 0.112
8. Hospital sanitation | 59.7, 62.8, 3.1 | 50.2, 55.7, 5.5 | -2.4 | 0.641

All scores were standardised to a 100‐point scale, with 100 as high. Positive intervention effects represent improvements in intervention hospitals that exceeded the improvements in control hospitals. P values are based on an analysis of variance (ANOVA) model with intervention group and hospital size as main effects.
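
For illustration, the kind of two-factor ANOVA described in the note above could be specified as below; the data frame, column names and values are hypothetical placeholders, not data from Salmon 2003.

```python
# Hypothetical sketch of an ANOVA with intervention group and hospital size
# as main effects; data values and column names are illustrative only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "indicator_change": [1.5, 4.6, 0.2, -7.9, -4.2, 3.1, -1.5, -11.0],
    "group": ["I", "I", "I", "I", "C", "C", "C", "C"],
    "size": ["small", "large", "small", "large"] * 2,
})
model = smf.ols("indicator_change ~ C(group) + C(size)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F tests and P values per main effect
```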

The Code of Practice and the Health Act 2006, used as standards in the Healthcare Commission's inspection programme (OPM 2009), were developed and launched by the Department of Health, which enacted the new legislation in 2006 with the aim of decreasing the number of healthcare-acquired infections.

Outcomes

In the study by Salmon and colleagues, there were two sets of outcome measures: the eight study‐derived quality indicators and the larger raft of COHSASA accreditation criteria (Salmon 2003). The eight indicators of hospital quality (i.e. those that remained after four indicators had been dropped) were: nurses' perceptions of clinical quality, participation and teamwork; patient satisfaction with care; patient medication education; medical records: accessibility and accuracy; medical records: completeness; completeness of perioperative notes; completeness of ward stock medicine labelling and hospital sanitation. All eight indicators were measured twice (see Table 3 for a description of the indicators). The COHSASA accreditation criteria were 6000 criteria (measurable elements) in 28 service elements (see additional Table 4) measuring aspects of hospital quality of care, of which 424 standards (in 19 generic service elements) were a priori judged by COHSASA to be critical criteria for the function of the service elements. These critical criteria were mainly drawn from the following service elements: obstetric and maternity inpatient services; operating theatre and anaesthetic services; resuscitation services; paediatric services and medical inpatient services. The accreditation standards required that systems and processes be established in clinical and non‐clinical activities of all services.

2. COHSASA standards compliance scores of intervention and control hospitals over time.
Service element | Intervention hospitals: No of hospitals, mean baseline, mean external, mean change | Control hospitals: No of hospitals, mean baseline, mean external, mean change | Mean intervention effect (95% CI) | P value
Management service | 9, 50, 79, 29 | 10, 42, 44, 2 | 27 (17 to 37) | < 0.001
Administrative support | 9, 57, 73, 16 | 10, 56, 52, -4 | 20 (4 to 36) | 0.038
Nursing management | 9, 55, 87, 32 | 10, 51, 50, -1 | 33 (24 to 43) | < 0.001
Health and safety | 9, 35, 75, 40 | 10, 28, 32, 4 | 36 (23 to 51) | < 0.001
Infection control | 9, 45, 88, 43 | 10, 39, 42, 3 | 40 (27 to 52) | < 0.001
Operating theatre | 9, 57, 86, 29 | 10, 50, 53, 3 | 26 (16 to 35) | < 0.001
Sterilising and disinfectant | 9, 47, 81, 34 | 10, 33, 35, 2 | 32 (22 to 41) | < 0.001
Medical inpatient | 8, 49, 78, 29 | 10, 44, 46, 2 | 27 (17 to 35) | < 0.001
Pharmaceutical | 9, 41, 75, 34 | 10, 42, 38, -4 | 38 (25 to 52) | < 0.001
Paediatric inpatient | 8, 51, 78, 27 | 10, 44, 46, 2 | 25 (17 to 33) | < 0.001
Maternity inpatient | 9, 53, 82, 29 | 10, 52, 51, -1 | 30 (23 to 36) | < 0.001
Surgical inpatient | 9, 48, 81, 33 | 10, 46, 46, 0 | 33 (25 to 42) | < 0.001
Laundry | 9, 30, 68, 38 | 10, 23, 24, 1 | 37 (26 to 47) | < 0.001
Housekeeping | 9, 37, 73, 36 | 10, 33, 32, -1 | 37 (24 to 51) | < 0.001
Maintenance | 9, 51, 74, 23 | 10, 43, 44, 1 | 22 (11 to 34) | 0.004
Resuscitation | 9, 31, 83, 52 | 10, 25, 25, 0 | 52 (43 to 61) | < 0.001
Food | 9, 41, 73, 32 | 10, 38, 38, 0 | 32 (24 to 41) | < 0.001
Diagnostic | 9, 44, 79, 35 | 10, 38, 39, 1 | 34 (22 to 46) | < 0.001
Critical care category 2 | 2, 46, 92, 46 | 4, 58, 61, 3 | 43 (15 to 70) | NA
Casual | 7, 48, 81, 33 | 5, 40, 43, 3 | 30 (17 to 44) | NA
Outpatient | 8, 46, 83, 37 | 9, 40, 43, 3 | 34 (20 to 47) | NA
Occupational | 3, 42, 85, 43 | 4, 43, 47, 4 | 39 (16 to 62) | NA
Physiotherapy | 7, 46, 84, 38 | 4, 38, 42, 4 | 34 (24 to 45) | NA
Laboratory | 9, 46, 85, 39 | 8, 43, 40, -3 | 42 (31 to 53) | < 0.001
Medical life support | 9, 37, 74, 37 | 10, 21, 23, 2 | 35 (22 to 49) | 0.001
Community health | 4, 50, 88, 38 | 8, 54, 50, -4 | 42 (28 to 56) | NA
Social work | 4, 53, 82, 29 | 5, 40, 44, 4 | 25 (6 to 41) | NA
Medical practitioner | 9, 51, 75, 24 | 10, 44, 42, -2 | 26 (13 to 40) | 0.004
Overall services score | 9, 48, 78, 30 | 10, 43, 43, 0 | 30 (23 to 37) | < 0.001

CI: confidence interval; COHSASA: Council for Health Services Accreditation for South Africa; NA: not available

In the OPM report (OPM 2009), only one of the reported outcomes was suitable for inclusion in this review (by virtue of presenting pre- and postintervention quantitative data): rates of hospital-acquired MRSA infections for one year before the initiation of the inspection process and for two years after. Trusts must report the MRSA rate every quarter, it is monitored by the Health Protection Agency, and there were sufficient measurements before and after the intervention to allow re-analysis as a short time series. The other outcomes reported in OPM 2009 involved aggregated, uncontrolled before and after data that could not be re-analysed as time series (e.g. data on trusts' compliance with the Code of Practice, and patient and public confidence in healthcare). Thus, the MRSA rate was an outcome that we considered analysable, rather than one specified as a main outcome by the authors of the report.

Data collection

In the study by Salmon and colleagues, the before and after measures of compliance with the accreditation standards were collected by COHSASA surveyors (or teams hired by COHSASA), and indicators of hospital quality were collected by research assistants hired by the independent research team composed of South African and US investigators (Salmon 2003). The time between measurements of the accreditation standards differed between intervention (19 months) and control hospitals (16 months). Due to the time it took to develop and test the indicators for hospital quality, the first round of measurements was not performed until a mean of 7.4 months after COHSASA collected the baseline survey data (but was performed at the same time point in intervention and control hospitals), which resulted in a difference in the interval between baseline survey and the first indicator survey. For both the intervention hospitals and the control hospitals only about nine months separated the first and second round of indicator data collection.

In the other study, the MRSA rate was reported quarterly by the trusts, and monitored and summarised by The Health Protection Agency (OPM 2009).

Description of the intervention

Salmon 2003: COHSASA facilitators initially assisted each participating facility to understand the accreditation standards and to perform a self-assessment (baseline survey) against the standards, which was validated by a COHSASA team. Detailed written reports on the level of compliance with the standards and the reasons for non-conformance were generated and sent to the hospitals for use in their continuous quality improvement (CQI) programme. Next, the facilitators assisted the hospitals in implementing a CQI process to enable the facilities to improve on standards identified as suboptimal in the baseline survey. Lastly, the hospital entered the accreditation (external) survey phase, in which a team of COHSASA surveyors who were not involved in the preparatory phase conducted an audit. The accreditation team usually consisted of a medical doctor, a nurse and an administrator, who spent a mean of three days evaluating the degree to which the hospital complied with the standards and recording the areas of non-compliance. Hospitals found by COHSASA's accreditation committee to comply substantially with the standards were awarded either preaccreditation or full accreditation status. Preaccreditation encouraged institutions to continue with the CQI process, in the expectation that this would help progress to eventual full accreditation status. In the control hospitals, the accreditation variables were measured as unobtrusively as possible, but none of the other components of the accreditation programme were performed, meaning no feedback of results and no technical assistance until after the research was completed. Meanwhile, a separate research team measured the eight study quality indicators in both the intervention and control hospitals.

OPM 2009: The Healthcare Commission Healthcare Acquired Infections Inspection Programme: the selected trusts were notified that they would be inspected at some point within the next three months. A pre-inspection report was produced by the assessors, using relevant data sent to them by the trusts. The assessors used the pre-inspection report to select a subset of duties described in the Code of Practice to be assessed at the next inspection. During the inspection, the inspection team looked for any breaches of the Code of Practice, and this fed into the formal inspection output: either an inspection report with recommendations or an improvement notice. The inspection report highlighted areas requiring improvement and made recommendations as to how the trust needed to improve; the trusts acted on the comments and took steps to improve practices. In contrast, an improvement notice required the trust to draw up an action plan and specify how it would remedy the material breaches of the Code that had been identified. Only once the steps to remedy the breaches of the Code of Practice had been followed was a notice lifted.

Excluded studies

In this review update, we excluded one study after obtaining and scrutinising the full text. For the previous version of the review (as for this update), the main reason for exclusion was ineligible intervention (11 studies). We excluded two papers because they were overviews, one paper due to ineligible study design and one paper could not be found. See Characteristics of excluded studies table.

Risk of bias in included studies

The risk of bias of included studies is described in the 'Risk of bias' table within the Characteristics of included studies table. Both studies had an overall high risk of bias.

Cluster randomised controlled trial

In the study by Salmon and colleagues, the allocation sequence was adequately generated: to ensure a balanced design with respect to service and care characteristics, researchers stratified the hospitals by size (number of beds) into four categories, and within each stratum a simple random sample without replacement was drawn (Salmon 2003). The allocation was made by the research team, but it was unclear if it was done by an independent statistician. The hospitals were notified about the process of inspection and could not be blinded to whether they were part of an accreditation programme. It was unclear whether the assessors were blinded. Incomplete outcome data were adequately addressed: when one of the intervention hospitals, also one of the biggest hospitals, dropped out halfway through the accreditation process, a similar-sized hospital from the control group was excluded to yield the same number of hospitals in each group. Thus, of the 20 hospitals initially included in the trial, 18 remained for the final analysis.

It was unclear if the four indicators of hospital quality of care that were dropped (see Appendix 3) should be deemed as selective reporting of results. After the first round of measurements, the research team suggested to the independent advisory board that the four indicators should be dropped due to problems with comparability between hospitals, and, therefore, only results for eight indicators were reported in the paper.

It was also unclear if the baseline characteristics of the participating hospitals were the same in the intervention as in the control group. Seemingly more rural hospitals were included in the control group, and more urban hospitals in the intervention group.

Analysis was performed at the level of randomisation (hospital), not at the individual or record level within hospitals, thus allowing for clustering in the analysis. The baseline survey of the compliance with the accreditation standards was not performed simultaneously in intervention and control hospitals, but on average three months later in control hospitals; therefore, it is unclear whether the baseline outcome measurements were similar and the control measurements represent a true baseline.

Interrupted time series study

In the OPM report, the intervention was not necessarily independent of other changes (OPM 2009). Within the UK there had been an increasing awareness of and publicity about the problem of hospital‐acquired infections. Rates of reported cases of MRSA were already showing a downward trend one year before the inspections began.

The intervention effect was not prespecified, since nothing was mentioned about what effect (a step change or change in slope) was expected for the outcome measure (MRSA infection rate); however, since the MRSA data were re‐analysed by the review authors, we considered the risk of bias for this item low.

The intervention was unlikely to affect data collection, since the sources and methods of data collection were the same before and after the intervention (the Health Protection Agency monitors the cases that trusts must report quarterly).

The only re‐analysable outcome measure (MRSA rate) was objective, and thus we scored low risk for 'knowledge of the allocated interventions'.

Since quarterly reporting of cases of MRSA is mandatory for acute trusts, there were no incomplete outcome data, missing data or selective reporting of data. The study was also free from other risks of bias.

Effects of interventions

See: Table 1; Table 2

The cluster RCT reported results for compliance scores with COHSASA accreditation standards, involving 28 service elements and performance related to eight study quality‐of‐care indicators (Salmon 2003). The results are summarised in Table 1 and Table 5.

3. Results: Salmon 2003.

Author Year Compliance with COHSASA accreditation standards (28 service elements) Hospital quality indicators (n = 8)
Salmon 2003 At 2 years' follow‐up:
Total compliance score for 21/28 service elements, for which comparisons were possible, rose from 48% to 78% in intervention hospitals, while control hospitals maintained the same compliance score throughout (43%).
Mean intervention effect was 30% (23% to 37%).
Looking at the individual scores of compliance with the accreditation standards for each service element, the results were mixed, with 21/28 service elements showing a beneficial effect of the inspections.
Mean intervention effects ranged from 20% to 52%. Data for the remaining 7 service elements were not available; some of the service elements were only evaluated in the higher-level hospitals, so comparisons between the intervention and control hospitals were not appropriate due to small sample size.
Subanalysis of the standards that a priori were deemed by the COHSASA as being 'critical' for a specific function was performed. As some of the 28 service elements evaluated in the accreditation process were not applicable for all hospitals, this left 19 generic service elements, yielding 424 critical criteria for the subanalysis. These critical criteria were mainly drawn from the following service elements: obstetric and maternity inpatient services; operating theatre and anaesthetic services; resuscitation services; paediatric services and medical inpatient services.
Subanalysis showed improved mean compliance with the critical standards in intervention hospitals: total score rose from 38% (range 21% to 46%) to 76% (range 55% to 96%).
Control hospitals maintained the same compliance score throughout: 37% (range 28% to 47%) before the intervention and 38% (range 25% to 49%) after the intervention. There was a difference in means between groups (P < 0.001).
Effects on the hospital quality indicators were mixed, with mean intervention effects ranging from -1.9 to +11.8 percentage points; only 1/8 indicators ('nurses' perception of clinical care') showed a beneficial effect of the intervention (see below).
Nurses' perception of clinical care
Intervention hospitals: pre: 59.3%, post: 60.8% (change 1.5%); control hospitals: pre: 60.8%, post: 56.5% (change ‐4.2%); intervention effect: 5.7 percentage points (P = 0.031).
Patient satisfaction with care:
Intervention hospitals: pre: 86.9%, post: 91.5% (change 4.6%); control hospitals: pre: 87.0%, post: 90.1% (change 3.1%); intervention effect: 1.5 percentage points (P = 0.484).
Patient medication education:
Intervention hospitals: pre: 42.9%, post: 43.1% (change 0.2%); control hospitals: pre: 41.5%, post: 40.0% (change ‐1.5%); intervention effect: 1.7 percentage points (P = 0.395).
Medical records: accessibility:
Intervention hospitals: pre: 85.4%, post: 77.5% (change ‐7.9%); control hospitals: pre: 79.4%, post: 68.4% (change ‐11.0%); intervention effect: 3.1 percentage points (P = 0.492).
Medical records: completeness (consisting of 2 components: admissions and discharge):
Intervention hospitals: pre: 47.1%, post: 49.1% (change 2.0%); control hospitals: pre: 48.6%, post: 44.9% (change ‐3.7%); intervention effect: 5.7 percentage points (P = 0.114).
Completeness of peri-operative notes:
Intervention hospitals: pre: 70.2%, post: 72.7% (change 2.5%); control hospitals: pre: 65.2%, post: 69.6% (change 4.4%); intervention effect: -1.9 percentage points (P = 0.489).
Ward stock labelling:
Intervention hospitals: pre: 66.0%, post: 81.8% (change 15.8%); control hospitals: pre: 45.6%, post: 49.6% (change 4.0%); intervention effect: 11.8 percentage points (P = 0.112).
Hospital sanitation*:
Intervention hospitals: pre: 59.7%, post: 62.8% (change 3.1%); control hospitals: pre: 50.2%, post: 55.7% (change 5.5%); intervention effect: -2.4 percentage points (P = 0.641).
* Consisted of the assessment of 6 items (availability of soap, water, paper towels and toilet paper and whether toilets were clean and in working order) of which a composite score was developed.

COHSASA: Council for Health Services Accreditation for South Africa.

Salmon and colleagues reported higher total mean compliance score with COHSASA accreditation standards in intervention hospitals (Salmon 2003). The total score for 21/28 service elements, for which comparisons were possible, rose from 48% (range 30% to 57%) to 78% (range 68% to 92%) in intervention hospitals, while control hospitals maintained the same score throughout: 43% (range 21% to 58%) before the intervention and 43% (range 23% to 61%) after the intervention. The mean intervention effect was 30% (95% CI 23% to 37%) (P < 0.001).

In terms of individual scores of compliance with accreditation standards, 21/28 service elements showed a beneficial effect of the inspections (mean intervention effects ranging from 20% to 52%). For the remaining seven service elements, data were not available (i.e. some of the service elements were only evaluated in some hospitals, so comparisons between the intervention and control groups were not appropriate due to small sample size).

A subanalysis of 424 a priori identified critical criteria (in 19 generic service elements) showed greater mean compliance with the critical standards in intervention hospitals: the total overall compliance score rose from 38% (range 21% to 46%) to 76% (range 55% to 96%). Control hospitals maintained the same compliance score throughout: 37% (range 28% to 47%) before the intervention and 38% (range 25% to 49%) after the intervention. There was a difference in means between groups (P < 0.001).

Only one of the nine intervention hospitals gained full accreditation status during the study period.

The same authors reported little or no effect of the accreditation system on performance related to the eight study indicators of hospital quality of care: median intervention effect 2.4 percentage points (range -1.9% to +11.8%) (Salmon 2003). Only for one of the quality indicators, 'nurses' perception of clinical quality, participation and teamwork', was the intervention effect different between groups (5.7 percentage points higher in the accreditation group, P = 0.03). The increase in intervention hospitals was 1.5 percentage points (from 59.3% to 60.8%), while there was a decrease of 4.2 percentage points (from 60.8% to 56.5%) in control hospitals. All other indicators, patient satisfaction included (intervention effect 1.5%, P = 0.484), were similar between groups (see Characteristics of included studies table).

The OPM study reported results for MRSA rates (OPM 2009). The results are detailed in Table 6 and summarised in Table 2.

4. Results: OPM 2009.

Author
Year
Infection rate
OPM 2009 Date; No of cases
 April 2006 to June 2006; 1742
 July 2006 to September 2006; 1651
 October 2006 to December 2006; 1543
 January 2007 to March 2007; 1447
 April 2007 to June 2007; 1306. In June 2007, the Healthcare Commission began a series of unannounced inspections, focused specifically on assessing compliance with the Code of Practice
 July 2007 to September 2007; 1083
 October 2007 to December 2007; 1092
 January 2008 to March 2008; 970
 April 2008 to June 2008; 839
 July 2008 to September 2008; 724
 October 2008 to December 2008; 678
 January 2009 to March 2009; 694
 April 2009 to June 2009; 509
Re‐analysis of the MRSA data, as an ITS:
Difference (24.27, 95% CI ‐10.4 to 58.9) between the preslope (‐107.6) and the postslope (‐83.32) suggested similar infection rates before and after the inspection (P = 0.147).
When the downward trend in MRSA rate before the intervention was considered, the results showed a mean (95% CI) decrease of 100 (-221.0 to 21.5) cases at 3 months' follow-up (P = 0.096), 75 (-217.2 to 66.3) cases at 6 months' follow-up (P = 0.259), 27 (-222.1 to 168.2) cases at 12 months' follow-up (P = 0.762), and an increase of 70 (-250.5 to 391) cases per quarter at 24 months' follow-up (P = 0.632).

CI: confidence interval; ITS: interrupted time series; MRSA: methicillin‐resistant Staphylococcus aureus.

Re‐analysis of the quarterly reported MRSA data, as an ITS, showed no difference in MRSA rate before and after the Healthcare Commission's inspection programme. The difference (24.27, 95% CI ‐10.4 to 58.9) between the pre slope (‐107.6) and the post slope (‐83.32) suggests similar infection rates before and after the inspection (P = 0.147). When the downward trend in MRSA rate before the intervention was taken into account, the results showed a mean decrease of 100 (95% CI ‐221.0 to 21.5) cases at three months' follow‐up (P = 0.096), 75 (95% CI ‐217.2 to 66.3) cases at six months' follow‐up (P = 0.259), 27 (95% CI ‐222.1 to 168.2) cases at 12 months' follow‐up (P = 0.762), and an increase of 70 (95% CI ‐250.5 to 391) cases per quarter at 24 months' follow‐up (P = 0.632). Data relating to patient satisfaction were not reported.
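The re‐analysis described above corresponds to a standard segmented regression for interrupted time series data. The sketch below (Python, assuming numpy and statsmodels are available) fits a generic level‐and‐slope‐change model to the quarterly counts in Table 6; because this parameterisation fits a separate line to each segment, its slope estimates match the reported pre slope (‐107.6) and post slope (‐83.32), although the confidence intervals and P values may differ from the review's re‐analysis if, for example, autocorrelation was adjusted for. It is an illustration, not the exact model used for the review.

```python
# Illustrative segmented (ITS) regression on the quarterly MRSA counts from
# Table 6. The inspection programme began in June 2007, i.e. after the fifth
# quarter in the series.
import numpy as np
import statsmodels.api as sm

cases = np.array([1742, 1651, 1543, 1447, 1306,           # pre-inspection
                  1083, 1092, 970, 839, 724, 678, 694, 509], dtype=float)
time = np.arange(1, len(cases) + 1, dtype=float)   # quarter index: 1..13
post = (time > 5).astype(float)                    # 1 from Jul-Sep 2007 onwards
time_after = np.maximum(0.0, time - 5)             # quarters since programme began

# Columns: intercept, underlying trend, level change, slope change
X = sm.add_constant(np.column_stack([time, post, time_after]))
fit = sm.OLS(cases, X).fit()

pre_slope = fit.params[1]                    # ~ -107.6 cases per quarter
post_slope = fit.params[1] + fit.params[3]   # ~ -83.3 cases per quarter
print(pre_slope, post_slope, fit.params[3])  # slope change ~ +24.3
```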

Neither included study reported data on unanticipated/adverse consequences or economic outcomes.

Discussion

Summary of main results

We did not identify any new studies for inclusion in this review update (Flodgren 2011). This review includes two studies: one cluster RCT and one study that provided time series data (Salmon 2003; OPM 2009). The cluster RCT reported an improved mean total compliance score with COHSASA standards in intervention hospitals, as well as improved compliance with predetermined critical criteria assessed in a subanalysis (Salmon 2003). A beneficial effect of the intervention was reported for only one out of eight indicators of hospital quality. However, it is uncertain whether external inspection leads to improved compliance with standards because the certainty of the evidence was very low. Only one of the intervention hospitals achieved accreditation status by the end of the study period.

It is also uncertain whether the Healthcare Commission's Healthcare Associated Infections Inspection Programme leads to lower MRSA infection rates, because the certainty of the evidence is very low. Moreover, the inspection programme was only one element of a wider range of interventions being applied to infection control in the UK NHS at that time (OPM 2009). Even before the introduction of the inspection programme there was a downward time trend (infection rates were decreasing), but the introduction of the programme did not accelerate that trend.

It is reasonable to assume that many of the issues facing low‐income countries apply to middle‐income countries too. The hospital accreditation programme was performed in an upper‐middle‐income country (Salmon 2003), while the Healthcare Commission's Healthcare Associated Infections Inspection Programme was not (OPM 2009). It is not possible, or even desirable, to compare or synthesise results from studies in which the conditions under which healthcare is provided are so different (e.g. in Salmon 2003, basic necessities such as soap and paper towels were not available in more than half of the included hospitals).

The way the assessors used data from the inspection to accomplish change (i.e. improved compliance with standards in the healthcare organisation being inspected) differed between studies. When the inspection was performed by a non‐governmental body, the assessors could recommend change but had no power to enforce it. In contrast, the Healthcare Commission, as a governmental body, had the means to enforce change and make organisations improve their compliance with standards through improvement notices. There is a trend today towards mandatory accreditation systems, for example in Canada, but still little is known about the effects of accreditation on healthcare outcomes (Pomey 2010).

Overall completeness and applicability of evidence

The evidence is limited in scale, content and generalisability. With only two studies, and very low certainty of the evidence, it is difficult to draw any clear conclusions about the effectiveness of external inspection of compliance with external standards beyond the effects observed within the two included studies.

Of the outcomes reported by Salmon and colleagues, the majority were outcomes of healthcare organisation change; only one healthcare professional outcome (nurses' perception of quality of care) and one patient outcome (patient satisfaction with care) were reported (Salmon 2003). Unfortunately, important outcomes (e.g. morbidity and mortality) had to be dropped during the research process due to problems with comparability between hospitals. Furthermore, the lack of information about the tool used to measure patient satisfaction, namely its validity and reliability, limits the interpretation of the data. In the evaluation of the Healthcare Commission's inspection programme (OPM 2009), most reported data were uncontrolled before‐after data, and only the data on MRSA rate could be re‐analysed and included in this review. Neither study reported any unintended effects of the inspection.

Although external inspection is associated with non‐negligible costs, and little evidence of its cost‐effectiveness exists to date (Shaw 2003), neither of the studies included in the review reported any cost data. Furthermore, there are indirect costs of external inspection that were not considered, namely the costs of putting measures in place to demonstrate compliance with the standards of care, and displacement costs, whereby outcomes that are known to be inspected are prioritised over those that are not (Davis 2001). Both included studies evaluated the effects of external inspection in secondary care, and the results cannot therefore be generalised to primary care.

This review raises the interesting question of what effect can reasonably be anticipated from an intervention such as external inspection. If a process of inspection identifies deficiencies, then the anticipated response would be a number of changes at an organisational level, with potential changes in care processes and thus patient outcomes. Although external inspection might be the trigger for such a series of events, the further along the causal chain one goes, the weaker its influence as a direct cause of change is likely to be. Likewise, the impact of inspection might only be observable several years after it has been conducted (Davis 2001). Therefore, the most direct outcomes should be regarded as the subsequent organisational (and probably professional behaviour) changes, with patient outcomes being regarded as more distant (and less directly connected) outcomes. The included studies illustrated this in different ways. In the study by Salmon and colleagues, the external inspection initiated a cascade of consequent events (Salmon 2003); in the OPM report, the data analysed were clearly collected and reported amid a range of other interventions (OPM 2009). However, it is not quite that simple: in the OPM report, an outcome measure that is apparently a patient outcome (infection rate) is clearly regarded as an important indicator of organisational performance. Therefore, the choice of outcomes for an intervention such as external inspection has to allow for an appropriate diversity of measures that reflect the underlying issues that may have triggered the inspection.

Quality of the evidence

The evidence that we identified has to be regarded as sparse and susceptible to bias. The ITS study generally scored 'low' on the risk of bias assessment except for the criterion on independence from other changes, for which it scored 'high' (OPM 2009). The cluster RCT scored as 'unclear' on three of the 'Risk of bias' criteria (blinding of participants, blinding of outcome assessors and time differences in outcome measurements) and was therefore judged at high risk of bias (Salmon 2003). The certainty of the evidence in both the cluster RCT and the re‐analysed ITS study was downgraded due to high risk of bias and imprecision, and judged to be very low.

Potential biases in the review process

One review author first screened all references found by the electronic searches (update search only) to remove clearly irrelevant studies, and two review authors assessed the remaining references. Two review authors independently extracted data and assessed the risk of bias of included studies.

The search was difficult to conduct as there were few specific terms that we could use. Although an experienced information scientist carefully developed the search strategy, an information scientist at the editorial base reviewed it, and we searched the home pages of many accreditation bodies, we cannot exclude the possibility that important references were missed.

There is also the risk of publication bias (i.e. that only studies showing a beneficial effect of intervention were published and not studies pointing towards little or no effect of intervention) (Hopewell 2009). Unfortunately, because we identified too few studies for inclusion in this review, we could not assess publication bias.

Agreements and disagreements with other studies or reviews

We are not aware of any other systematic reviews evaluating the effects of external inspection of compliance with standards on healthcare outcomes. We found one Norwegian retrospective exploratory review related to the topic, but it looked at how external inspecting organisations express and state their grounds for non‐compliance and how they follow up to enforce improvements (Hovlid 2015), which was not part of this review. The two studies included in this review reported very few patient outcomes. One Australian report confirmed the lack of causal inferences in studies evaluating the effectiveness of external inspection on patient outcomes (Hart 2013).

Authors' conclusions

Implications for practice.

In terms of the quality of care delivered across a whole healthcare system, external inspection (as defined for this review), as opposed to voluntary inspection, has the advantage of incorporating all organisations rather than only those that volunteer. The trend today is towards government‐mandated accreditation systems (Accreditation Canada 2015). For those running a healthcare system this is a very attractive advantage, and it is likely that external inspection will continue to be used. Situations where this occurs offer a useful opportunity to better define the effects of such processes, the optimal configuration of inspection processes and their value for money. Results of a recent survey suggest that sustainable accreditation programmes are typically complemented by regulation mechanisms and by funding or governmental commitment to quality and healthcare improvement, which offer a supportive environment (Shaw 2013).

Implications for research.

This review update identified no new eligible studies in addition to the two previously included. If policy makers wish to better understand the effectiveness of this type of intervention, further studies are needed across a range of settings and contexts. There does not seem to be any prima facie reason for not conducting a trial; however, if an experimental design cannot be used, other non‐randomised designs, such as ITS designs, could be used, as these offer a useful way of interpreting the data.

Whatever design is used, it is important to include an appropriate follow‐up period, to examine whether any improvements observed after the external inspection endure. Studies should endeavour to include outcomes important to patients, and preferably also an economic evaluation.

One of the studies experienced some problems during data collection, and had to drop some important outcomes (Salmon 2003). Researchers and inspecting bodies should ensure that inspection and data collection are conducted using standardised and validated instruments (Tuijn 2011).

What's new

Date; Event; Description
11 February 2016; New citation required but conclusions have not changed; the conclusions remain unchanged (no new eligible studies identified; the review includes two studies).
11 February 2016; New search has been performed; this is the first update of the original review (Flodgren 2011). A new search was conducted, the composition of the author team changed, and the review content was updated to comply with new Cochrane methods and MECIR standards.

Acknowledgements

We wish to acknowledge the authors of the original version of the review, Professor Martin Eccles and Sarah Taber, and information scientist Fiona Beyer, who developed the search strategy. We also want to thank senior information scientist Nia Roberts for searching the electronic databases, information scientist Paul Miller for running the update search, review editor Arash Rashidian for his input, and the editorial base for their assistance.

National Institute for Health Research (NIHR), via Cochrane Infrastructure funding to the Effective Practice and Organisation of Care (EPOC) group. The views and opinions expressed therein are those of the review authors and do not necessarily reflect those of the Systematic Reviews Programme, NIHR, National Health Service or the Department of Health.

Appendices

Appendix 1. Search strategy

The Cochrane Library (DARE, CENTRAL)

1. [mh "health personnel"/ST]

2. (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gynecologist* or gynaecologist* or hematologist* or haematologist* or internist* or nurse* or obstetrician* or occupational therapist* or paediatrician* or pediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counsel*or* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or healthcare personnel or health care personnel)

3. [mh "health facilities"/ST]

4. (hospital or hospitals or clinic or clinics or (primary near/2 care) or (health near/2 care))

5. {or #1‐#4}

6. [mh "peer review, health care"]

7. [mh benchmarking]

8. [mh accreditation]

9. [mh "management audit"]

10. [mh "clinical audit"]

11. (organi* next raid*)

12. (external* near/5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark* or measured or measurement or evaluation or evaluated or audit or audits or auditing or assessment or assessed or monitored or visitation or surveillance or (control next program*)))

13. {or #6‐#12}

14. (standards or standard or performance or criterion or criteria or indicator* or (clinical next competence) or compliance or (clinical next improvement) or (quality next (improvement or management)) or (organi* next development) or (health next care next regulation))

15. #5 and #13 and #14

MEDLINE

1. exp health personnel/st

2. (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gyn?ecologist* or h?ematologist* or internist* or nurse* or obstetrician* or occupational therapist* or p?ediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counsel?or* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or healthcare personnel or health care personnel).tw.

3. exp health facilities/st

4. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

5. or/1‐4

6. peer review, health care/

7. benchmarking/

8. exp accreditation/

9. exp management audit/

10. exp clinical audit/

11. (organi?ation* adj raid*).tw.

12. (external* adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark* or measured or measurement or evaluation or evaluated or audit or audits or auditing or assessment or assessed or monitored or visitation or surveillance or (control adj program*))).tw.

13. or/6‐12

14. (standards or standard or performance or criterion or criteria or indicator* or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj (improvement or management)) or (organi?ation* adj development) or (health adj care adj regulation)).tw.

15. 5 and 13 and 14

16. intervention?.ti. or (intervention? adj6 (clinician? or collaborat* or community or complex or design* or doctor? or educational or family doctor? or family physician? or family practitioner? or financial or GP or general practice? or hospital? or impact? or improv* or individuali?e? or individuali?ing or interdisciplin* or multicomponent or multi‐component or multidisciplin* or multi‐disciplin* or multifacet* or multi‐facet* or multimodal* or multi‐modal* or personali?e? or personali?ing or pharmacies or pharmacist? or pharmacy or physician? or practitioner? or prescrib* or prescription? or primary care or professional* or provider? or regulatory or tailor* or target* or team* or usual care)).ab.

17. (pre‐intervention? or preintervention? or "pre intervention?" or post‐intervention? or postintervention? or "post intervention?").ti,ab.

18. (hospital* or patient?).hw. and (study or studies or care or health* or practitioner? or provider? or physician? or nurse? or nursing or doctor?).ti,hw.

19. demonstration project?.ti,ab.

20. (pre‐post or "pre test*" or pretest* or posttest* or "post test*" or (pre adj5 post)).ti,ab.

21. (pre‐workshop or post‐workshop or (before adj3 workshop) or (after adj3 workshop)).ti,ab.

22. trial.ti. or ((study adj3 aim?) or "our study").ab.

23. (before adj10 (after or during)).ti,ab.

24. ("quasi‐experiment*" or quasiexperiment* or "quasi random*" or quasirandom* or "quasi control*" or quasicontrol* or ((quasi* or experimental) adj3 (method* or study or trial or design*))).ti,ab.

25. non‐randomized controlled trials as topic/

26. pilot projects/

27. pilot.ti. or (pilot adj (project? or study or trial)).ab.

28. (time points adj3 (over or multiple or three or four or five or six or seven or eight or nine or ten or eleven or twelve or month* or hour? or day? or "more than")).ab.

29. ("time series" adj2 interrupt*).ti,ab.

30. interrupted time series analysis/

31. controlled before‐after studies/

32. historically controlled study/

33. (multicentre or multicenter or multi‐centre or multi‐center).ti.

34. (control adj3 (area or cohort? or compare? or condition or design or group? or intervention? or participant? or study)).ab.

35. random*.ti,ab. or controlled.ti.

36. (control year? or experimental year? or (control period? or experimental period?)).ti,ab.

37. (utili?ation or programme or programmes).ti.

38. (during adj5 period).ti,ab.

39. ((strategy or strategies) adj2 (improv* or education*)).ti,ab.

40. (clinical trial or multicenter study).pt.

41. evaluation studies as topic/ or prospective studies/ or retrospective studies/

42. ((evaluation or prospective or retrospective) adj study).ti,ab.

43. or/16‐42

44. "comment on".cm. or review.pt. or (review not "peer review*").ti. or randomized controlled trial.pt.

45. (rat or rats or cow or cows or chicken? or horse or horses or mice or mouse or bovine or animal?).ti,hw. or veterinar*.ti,ab,hw.

46. exp animals/ not humans.sh.

47. or/44‐46

48. 43 not 47

49. exp randomized controlled trial/

50. controlled clinical trial.pt.

51. randomi#ed.ti,ab.

52. placebo.ab.

53. drug therapy.fs.

54. randomly.ti,ab.

55. trial.ab.

56. groups.ab.

57. or/49‐56

58. Clinical Trials as topic.sh.

59. trial.ti.

60. or/49‐52,54,58‐59

61. exp animals/ not humans/

62. 60 not 61

63. 48 or 62

64. 15 and 63

Embase

1. exp health care personnel/

2. exp health care facility/

3. (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gyn?ecologist* or h?ematologist* or internist* or nurse* or obstetrician* or occupational therapist* or p?ediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counsel?or* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or healthcare personnel or health care personnel).tw.

4. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

5. or/1‐4

6. exp *accreditation/

7. *"peer review"/

8. *medical audit/

9. (organi?ation* adj raid*).tw.

10. (external* adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark* or measured or measurement or evaluation or evaluated or audit or audits or auditing or assessment or assessed or monitored or visitation or surveillance or (control adj program*))).tw.

11. or/6‐10

12. (standards or standard or performance or criterion or criteria or indicator* or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj (improvement or management)) or (organi?ation* adj development) or (health adj care adj regulation)).tw.

13. 5 and 11 and 12

14. intervention?.ti. or (intervention? adj6 (clinician? or collaborat* or community or complex or design* or doctor? or educational or family doctor? or family physician? or family practitioner? or financial or GP or general practice? or hospital? or impact? or improv* or individuali?e? or individuali?ing or interdisciplin* or multicomponent or multi‐component or multidisciplin* or multi‐disciplin* or multifacet* or multi‐facet* or multimodal* or multi‐modal* or personali?e? or personali?ing or pharmacies or pharmacist? or pharmacy or physician? or practitioner? or prescrib* or prescription? or primary care or professional* or provider? or regulatory or tailor* or target* or team* or usual care)).ab.

15. (pre‐intervention? or preintervention? or "pre intervention?" or post‐intervention? or postintervention? or "post intervention?").ti,ab.

16. (hospital* or patient?).hw. and (study or studies or care or health* or practitioner? or provider? or physician? or nurse? or nursing or doctor?).ti,hw.

17. demonstration project?.ti,ab.

18. (pre‐post or "pre test*" or pretest* or posttest* or "post test*" or (pre adj5 post)).ti,ab.

19. (pre‐workshop or post‐workshop or (before adj3 workshop) or (after adj3 workshop)).ti,ab.

20. trial.ti. or ((study adj3 aim?) or "our study").ab.

21. (before adj10 (after or during)).ti,ab.

22. ("quasi‐experiment*" or quasiexperiment* or "quasi random*" or quasirandom* or "quasi control*" or quasicontrol* or ((quasi* or experimental) adj3 (method* or study or trial or design*))).ti,ab.

23. quasi experimental study/

24. *experimental design/ or *pilot study/

25. pilot.ti. or (pilot adj (project? or study or trial)).ab.

26. (time points adj3 (over or multiple or three or four or five or six or seven or eight or nine or ten or eleven or twelve or month* or hour? or day? or "more than")).ab.

27. ("time series" adj2 interrupt*).ti,ab.

28. (multicentre or multicenter or multi‐centre or multi‐center).ti.

29. (control adj3 (area or cohort? or compare? or condition or design or group? or intervention? or participant? or study)).ab.

30. random*.ti,ab. or controlled.ti.

31. (control year? or experimental year? or (control period? or experimental period?)).ti,ab.

32. (utili?ation or programme or programmes).ti.

33. (during adj5 period).ti,ab.

34. ((strategy or strategies) adj2 (improv* or education*)).ti,ab.

35. *clinical trial/ or *multicenter study/

36. *evaluation study/ or *prospective study/ or *retrospective study/

37. ((evaluation or prospective or retrospective) adj study).ti,ab.

38. or/14‐37

39. (rat or rats or cow or cows or chicken? or horse or horses or mice or mouse or bovine or animal?).ti.

40. (exp animal/ or exp invertebrate/ or animal experiment/ or animal model/ or animal tissue/ or animal cell/ or nonhuman/ or exp experimental animal/) not (human/ or normal human/ or human cell/)

41. or/39‐40

42. 38 not 41

43. random*.ti,ab.

44. factorial*.ti,ab.

45. (crossover* or cross over*).ti,ab.

46. ((doubl* or singl*) adj blind*).ti,ab.

47. (assign* or allocat* or volunteer* or placebo*).ti,ab.

48. crossover procedure/

49. single blind procedure/

50. randomized controlled trial/

51. double blind procedure/

52. or/43‐51

53. exp animal/ not human/

54. 52 not 53

55. 42 or 54

56. 13 and 55

HMIC

1. exp health service staff/

2. exp health services/

3. exp health buildings/

4. (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gyn?ecologist* or h?ematologist* or internist* or nurse* or obstetrician* or occupational therapist* or p?ediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counsel?or* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or healthcare personnel or health care personnel).tw.

5. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

6. or/1‐5

7. exp "Peer review"/

8. exp Benchmarking/

9. exp Accreditation/

10. exp Clinical audit/

11. exp Management audit/

12. (organi?ation* adj raid*).tw.

13. (external* adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark* or measured or measurement or evaluation or evaluated or audit or audits or auditing or assessment or assessed or monitored or visitation or surveillance or (control adj program*))).tw.

14. or/7‐13

15. (standards or standard or performance or criterion or criteria or indicator* or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj (improvement or management)) or (organi?ation* adj development) or (health adj care adj regulation)).tw.

16. 6 and 14 and 15

Clinical trials registries

1. accreditation AND external

2. peer review AND external

3. inspection AND external

4. regulation AND external

5. certification AND external

6. benchmark AND external

7. assessment AND external

Appendix 2. Data extraction form

Cochrane Effective Practice and Organisation of Care Group (EPOC)

Modified EPOC Group Data Abstraction Form

External inspection versus external standards for improving healthcare organisation behaviour, professional behaviour and patient outcomes

Data collection

Name of reviewer: 

Date: 

Study reference:

Is the healthcare organisation review performed (i) by an external body (independent of the organisation under review) and (ii) against external standards?

If not ‐ EXCLUDE!

1.      Inclusion criteria

1.1    Study design

1.1.1 RCT designs

1.1.2 NRCT designs

1.1.3 CBA designs

a)      Contemporaneous data collection

b)      Appropriate choice of control site/activity

c)      At least two intervention and two control sites

1.1.4 ITS designs

a)      Clearly defined point in time when the intervention occurred

b)      At least 3 data points before and 3 after the intervention

1.2    Methodological inclusion criteria

a)      The objective measurement of performance/provider behaviour or health/patient outcomes

b)      Relevant and interpretable data presented or obtainable

N.B.  A study must meet the minimum criteria for EPOC scope, design and methodology for inclusion in EPOC reviews.  If it does not, COLLECT NO FURTHER DATA.

 

2.      Interventions

2.1           Type of external inspection intervention

        (state all interventions for each comparison/study group)

Group 1:

Group 2:

Group 3:

2.2    Control(s)

3.      Type of targeted behaviour (state more than one where appropriate)

4.      Participants

 

4.1    Characteristics of participating providers

 

4.1.1 Profession

 

4.1.2 Level of training

 

4.1.3 Clinical specialty

 

4.1.4 Age

 

4.1.5 Time since graduation (or years in practice)

 

4.2   Characteristics of participating patients

 

4.2.1 Clinical problem

 

4.2.2 Other patient characteristics

a)  Age

b)  Gender 

c)  Ethnicity

d)  Other (specify)

 

4.2.3 Number of patients included in the study

a)    Episodes of care

b)    Patients

c)    Providers

d)    Practices 

e)    Hospitals

f)     Communities or regions

 

5.      Setting

5.1    Reimbursement system

5.2    Location of care

5.3    Academic status

5.4    Country

5.5    Proportion of eligible providers (or allocation units)

 

6.      Methods 

6.1    Unit of allocation

6.2    Unit of analysis  

6.3    Power calculation

6.4    Risk of bias assessment

                        (If the trial is an ITS go directly to 6.4.2 for the RoB assessment)

 

6.4.1 Risk of bias assessment for randomised controlled trials (RCTs), nonrandomised controlled trials (NRCTs) and controlled before‐after studies (CBAs)

a)      Was the allocation sequence adequately generated? (cut and paste from the paper verbatim)

Score YES: if a random component in the sequence generation process is described (e.g. referring to a random numbers table)
Score NO: if a non‐random method is used (e.g. performed by date of submission)
Score UNCLEAR: if not specified in the paper

b)      Was the allocation adequately concealed? 

Score YES: if the unit of allocation was by institution, team or professional and allocation was performed at all units at the start of the study; or if the unit of allocation was by patient or episode of care and there was some kind of centralised randomisation scheme, an on‐site computer system, or sealed opaque envelopes were used
Score NO: if none of the above‐mentioned methods were used (or if a CBA)
Score UNCLEAR: if not specified in the paper

c) Were baseline outcome measurements similar?

Score YES: if performance or patient outcomes were measured prior to the intervention, and no important differences were present across study groups
Score NO: if important differences were present and not adjusted for in the analysis**
Score UNCLEAR: if RCTs have no baseline measure of outcome**

 

d)      Were baseline characteristics similar?

Score YES: if baseline characteristics of the study and control providers are reported and similar
Score NO: if there is no report of characteristics in the text or tables, or if there are differences between control and intervention providers
Score UNCLEAR: if it is not clear in the paper (e.g. characteristics are mentioned in the text but no data were presented)

 

e)      Were incomplete outcome data adequately addressed?

Score YES: if missing outcome variables were unlikely to bias the results (e.g. the proportion of missing data was similar in the intervention and control groups, or the proportion of missing data was less than the effect size, i.e. unlikely to overturn the study results)
Score NO: if missing data were likely to bias the results
Score UNCLEAR: if not specified in the paper (do not assume 100% follow‐up unless stated explicitly)

 

f)       Was knowledge of the allocated interventions adequately addressed?*

Score YES: if the authors state explicitly that the primary outcome variables were assessed blindly, or the outcomes are objective (e.g. length of hospital stay)
Score NO: if the outcomes were not assessed blindly
Score UNCLEAR: if not specified in the paper

 

g)      Was the study adequately protected against contamination?

Score YES: if allocation was by community, institution or practice and it is unlikely that the control group received the intervention
Score NO: if it is likely that the control group received the intervention (e.g. if patients rather than professionals were randomised)
Score UNCLEAR: if professionals were allocated within a clinic or practice and it is possible that communication between intervention and control professionals could have occurred (e.g. physicians within practices were allocated to intervention or control)

 

h)      Was the study free from selective outcome reporting?

Score YES: if there is no evidence that outcomes were selectively reported (e.g. all relevant outcomes in the methods section are reported in the results section)
Score NO: if some important outcomes are subsequently omitted from the results
Score UNCLEAR: if not specified in the paper

i)        Was the study free from other risks of bias?

Score YES: if no evidence of other risks of bias
Score NO
Score UNCLEAR

 

* If some primary outcomes were imbalanced at baseline, assessed blindly or affected by missing data and others were not, each primary outcome can be scored separately.

**If 'UNCLEAR' or 'No', but there are sufficient data in the paper to do an adjusted analysis (e.g. baseline adjustment analysis or intention‐to‐treat analysis), the criterion should be re‐scored to 'Yes'.

6.4.2           Risk of bias assessment for interrupted time series (ITS) designs

Note: if the ITS study has ignored secular (trend) changes and performed a simple t‐test of the pre versus post intervention periods without further justification, the study should not be included in the review unless reanalysis is possible.

a) Was the intervention independent of other changes? (cut and paste from the paper verbatim)

Score YES: if there are compelling arguments that the intervention occurred independently of other changes over time and the outcome was not influenced by other confounding variables/historic events during the study period
Score NO: if it is reported that the intervention was not independent of other changes in time (if events/variables are identified, note what they are)
Score UNCLEAR: if not specified in the paper
 
 

 

b)  Was the shape of the intervention effects pre‐specified?

Score YES: if the point of analysis is the point of intervention OR a rational explanation for the shape of the intervention effect was given by the author(s); where appropriate, this should include an explanation if the point of analysis is NOT the point of intervention
Score NO: if it is clear that the condition above is not met
Score UNCLEAR: if not specified in the paper

 

c)   Was the intervention unlikely to affect data collection?

Score YES: if it is reported that the intervention itself was unlikely to affect data collection (for example, sources and methods of data collection were the same before and after the intervention)
Score NO: if the intervention itself was likely to affect data collection (for example, any change in source or method of data collection reported)
Score UNCLEAR: if not stated in the paper

 

d)  Was knowledge of the allocated interventions adequately prevented during the study?***

Score YES: if the authors state explicitly that the primary outcome variables were assessed blindly, or the outcomes are objective (e.g. length of hospital stay); primary outcomes are those variables that correspond to the primary hypothesis or question as defined by the authors
Score NO: if the outcomes were not assessed blindly
Score UNCLEAR: if not specified in the paper

 

e)  Were incomplete outcome data adequately addressed?***

Score YES: if missing outcome measures were unlikely to bias the results (e.g. the proportion of missing data was similar in the pre‐ and post‐intervention periods, or the proportion of missing data was less than the effect size, i.e. unlikely to overturn the study result)
Score NO: if missing data were likely to bias the results
Score UNCLEAR: if not specified in the paper (do not assume 100% follow‐up unless stated explicitly)

 

f)   Was the study free from selective outcome reporting?

Score YES: if there is no evidence that outcomes were selectively reported (e.g. all relevant outcomes in the methods section are reported in the results section)
Score NO: if some important outcomes are subsequently omitted from the results
Score UNCLEAR: if not specified in the paper

 

g)  Was the study free from other risks of bias?

Score YES: if no evidence of other risks of bias; e.g. consider whether seasonality is an issue (i.e. if January to June comprises the pre‐intervention period and July to December the post‐intervention period, could the 'seasons' have caused a spurious effect)
Score NO
Score UNCLEAR

*** If some primary outcomes were assessed blindly or affected by missing data and others were not, each primary outcome can be scored separately.

 

6.5         Consumer involvement

 

7.      Prospective identification by investigators of barriers to change

 

8.      Intervention

8.1 Description of the external inspection intervention (cut and paste from paper verbatim):

8.1.1 Voluntary or mandatory review

 

8.1.2 Universally or targeted review (applied to all of an organisation or only sub‐sections of it, e.g. to a clinical area or a professional group)

8.1.3 Purpose and focus of the review

8.1.4 Type of external standards (description and  evidence base of standards)

8.1.5 Who set the standards?

8.1.6 Who did the inspection? (governmental or non‐governmental organisation)

8.1.7 How were results used (by the external body) to bring about the desired quality improvements

 

8.1.8      Recipient

 

8.1.9      Timing

a)      Frequency/number of inspections

b)      Duration of inspection

 

9.      Outcomes

9.1         Description of the main outcome measure(s)

a)            Healthcare organisational change (e.g. organisational performance)

b)            Health professional behaviour

c)     Patient outcomes

d)      Economic variables (only if reported)

  • Costs of the intervention

  • Changes in direct health care costs as a result of the intervention

  • Changes in non‐health care costs as a result of the intervention

  • Costs associated with the intervention are linked with provider or patient outcomes in an economic evaluation

9.2    Length of post intervention follow‐up period

9.3    Identify a possible ceiling effect:

a)      Identified by investigator

b)      Identified by reviewer

 

10.         Results (use extra page if necessary)

10.1.1        For RCTs and NRCTs

10.1.2        For CBAs

 

10.1.3        For ITSs

 

Appendix 3. Dropped hospital quality indicators

Dropped hospital quality indicator; Reason
Neonatal mortality; high variation in documenting neonatal deaths between hospitals.
Surgical wound infections and time to surgery; only 9 hospitals (6 intervention and 3 control) performed surgery regularly, and many records lacked information on infections and times to surgery.
Financial solvency; strict budgeting control across all hospitals in the region, with reportedly no additional funds being assigned.

 

Characteristics of studies

Characteristics of included studies [ordered by study ID]

OPM 2009.

Methods Study design: ITS (uncontrolled before‐after study re‐analysed as a time series)
Data: data were gathered from a national survey of trusts; in‐depth case studies with 10 trusts; desk research to map the inspection process, analyse outcome indicators and analyse 80 published inspection reports; and research on the views of patients, service users and the public, involving archive research, analysis of press coverage and discussion groups/telephone interviews with 25 stakeholders (these results were not re‐analysable, and therefore not included in this review).
Data on hospital‐acquired infections (MRSA rate) 1 year before the intervention and 2 years after the intervention (results re‐analysable and included in the review).
Participants Recipients: all acute trusts in England (168 acute trusts in 2009)
Country: England
Targeted behaviour: compliance with the Code of Practice and the law (the Health Act 2006) related to HCAIs
Interventions Description of the intervention:
The Healthcare Commission's Healthcare Associated Infections Inspection Programme addressed hospital trusts' compliance with the Code of Practice (and the Health Act 2006), aimed at reducing HCAIs, including MRSA infections:
  • the selected trusts are notified that they will be inspected at some point within the subsequent 3 months. Being aware of the forthcoming inspection may encourage the trust to take steps to improve compliance with the Code of Practice before the actual inspection;

  • a pre‐inspection report is produced by the assessors, using relevant data sent to them by the trusts;

  • with the help of the pre‐inspection report, the assessors select a subset of duties described in the Code of Practice to be assessed at the inspection;

  • during the inspection, the inspection team will look for any likely breach of the Code of Practice, and this will feed into the formal inspection output: either an inspection report with recommendations or an improvement notice;

  • the inspection report highlights areas requiring improvement and makes recommendations as to how the trust needs to improve. The trust will act on the comments and take steps to improve its practices. In contrast, an improvement notice requires the trust to draw up an action plan and specify how it will remedy the material breaches of the Code that have been identified;

  • once the steps to remedy the breaches of the Code of Practice have been followed, the notice is lifted.


Type of external standard: the Code of Practice and the Health Act 2006 ‐ aimed at decreasing the number of hospital‐acquired infections.
Who developed the standards: in 2006 the Department of Health enacted new legislation, the Health Act 2006, supported by a Code of Practice aimed at decreasing the number of HCAIs.
Voluntary or mandatory review: mandatory
Universally or targeted review: universal
Who performed the review: the Healthcare Commission (now the Care Quality Commission); a governmental organisation
Purpose and focus of the review: to ensure that trusts are complying with the Code of Practice and, by doing so, to bring about reductions in morbidity related to hospital‐acquired infections, as well as to improve patient and public confidence in healthcare.
Timing:
  • frequency and number of inspections: 1 per trust

  • duration of inspection: not stated

Outcomes
Other reported (uncontrolled before‐after) outcomes, which could not be re‐analysed and were therefore not included in this review, were:
  • rate of Clostridium difficile infections;

  • patients' perceptions of hospital cleanliness and hand‐washing among doctors and nurses;

  • number and 'sentiment' of national media coverage of hospital‐acquired infections over time;

  • inspections conducted and trust performance;

  • trusts' understanding of the purpose and the aim of the inspection programme;

  • views on previsit submission of relevant documents, unannounced visits, the post reporting process and experiences of the inspection visit;

  • impact on standards of infection control, overall standards of infection prevention and control, and public satisfaction with, and confidence in, hospitals.

Notes  
Risk of bias
Bias Authors' judgement Support for judgement
Incomplete outcome data (attrition bias) 
 All outcomes Low risk There were no incomplete outcome data, since quarterly reporting of MRSA infections by trusts is mandatory.
Selective reporting (reporting bias) Low risk Results were presented for all outcomes described in the methods section.
Other bias Low risk No other risk of bias was identified.
Intervention independent of other changes High risk p. 16, para. 4.2.1
"As shown in Figure 2, reported cases of MRSA have decreased from 1,742 cases in Apr‐Jun 2006 to 509 cases in Apr‐Jun 2009 a reduction of 71 per cent. The total number of cases has shown a steady downward trend during this time, except for slight increases in cases between Jul‐Oct and Oct‐Dec 2007 and between Oct‐Dec and Jan‐Mar 09. These increases may reflect seasonal trends. The vertical line on Figure 2 indicates when the inspection programme began, in June 2007; this is included for information and we do not imply that any observed trends are being attributed to the impact of the programme." The 'Code of Practice' as well as the law related to HCAIs: The Health Act, were enacted in 2006, which may explain the downward trend MRSA rate seen from June 2006.
Prespecified shape of intervention Low risk Data re‐analysed by review authors.
Intervention unlikely to affect data collection Low risk Sources and methods of data collection were the same before and after the intervention (the Health Protection Agency monitors quarterly mandatory reports of MRSA rates made by trusts).
Knowledge of allocation adequately protected Low risk The outcome measure was objective (MRSA infection rates).

Salmon 2003.

Methods Study design: cluster RCT
Data: survey data from COHSASA accreditation programme were used, measuring hospital structures and processes, along with 8 hospital quality indicators.
Participants Recipients: 20 randomly selected public hospitals: 10 intervention and 10 control. 1 of the hospitals dropped out half‐way through the accreditation process, and 1 similar‐sized hospital in the control group was therefore excluded, leaving 9 intervention and 9 control hospitals for the final analysis.
Characteristics of included hospitals:
Setting: intervention: 5 urban, 3 peri‐urban and 2 rural hospitals; control: 2 urban, 2 peri‐urban and 6 rural hospitals.
Mean number of beds (SD): intervention hospitals: 435 (± 440); control hospitals: 467 (± 526)
Country: KwaZulu‐Natal Province, Republic of South Africa
Targeted behaviour: compliance with COHSASA accreditation standards, performance related to the hospital quality of care indicators
Interventions Description of the intervention: p. 4, col. 1, para. 2, and col. 2, para. 1
The accreditation process:
During the 2‐year study, COHSASA measured the accreditation variables twice and performed the remainder of the programme as normal in the 9 intervention hospitals.
  • COHSASA facilitators initially assisted each participating facility to understand the accreditation standards and to perform a self‐assessment (baseline survey) against the standards (that was validated by COHSASA surveyors).

  • Detailed written reports on the level of compliance with the standards and reasons for non‐conformance were generated and sent to the hospitals for use in their quality improvement programme.

  • Next, the facilitators assisted the hospitals in implementing a CQI process to enable the facilities to improve on standards identified as suboptimal in the baseline survey.

  • Lastly, the hospital entered the accreditation (external) survey phase, when a team of COHSASA surveyors who were not involved in the preparatory phase conducted an audit. The accreditation team usually consists of a medical doctor, a nurse and an administrator who spend a mean of 3 days evaluating the degree to which the hospital complies with the standards and recording the areas of non‐compliance.

  • Hospitals found by COHSASA's accreditation committee to comply substantially with the standards were awarded either preaccreditation or full accreditation status. The preaccreditation status encourages the respective institution to continue with the CQI process, which should help it stay on the path to eventual full accreditation status.


Control condition:
The accreditation variables were measured as unobtrusively as possible in the 9 control hospitals. None of the other components of the accreditation programme were performed, meaning no feedback of results and no technical assistance, until after the research was completed. Meanwhile, a separate research team measured the research indicators in both the intervention and control hospitals.
Type of external standard: COHSASA accreditation standards and 8 indicators of hospital quality that had been developed by consensus of an advisory committee in South Africa.
Who developed the standards: to develop indicators for hospital quality, a workshop was held in South Africa in May 1999. Present at the workshop were South African healthcare professional leaders, the managing director of COHSASA, a representative from JCI, and the principal investigators for the research study.
Voluntary or mandatory inspection: mandatory
The KZN province signed a contract with COHSASA for the first province‐wide public hospital accreditation activity in the country (p. 4, col. 2, para. 1). The hospitals did not volunteer to participate.
Universally or targeted inspection: universal (i.e. all groups of health professionals were involved).
Who performed the inspection: before and after measures of compliance with COHSASA accreditation standards were collected by COHSASA surveyors or teams hired by COHSASA, and indicators of hospital quality were collected by research assistants hired by the independent research team composed of South African and American investigators (p. 6, col. 3, para. 2).
Purpose and focus of the inspection: to improve compliance with COHSASA accreditation standards and performance related to the hospital quality of care indicators.
Timing:
  • Frequency and number of inspections: 2: 1 before the start of the accreditation programme (a self‐assessment of compliance with accreditation standards that were validated by a COHSASA team of surveyors or a team hired by COHSASA), and 1 inspection at the end of the 2‐year period.

  • Duration of inspection: approximately 3 days for the on‐site inspection (the time elapsed between inspections is unclear).

Outcomes
  • Compliance with COHSASA accreditation standards (6000 standards, and 28 service elements, see additional Table 4).

  • 8 indicators of hospital quality of care (see Table 5) were measured:

    • nurses' perception of clinical care;

    • patient satisfaction with care (assessed using an 18‐item questionnaire for up to 50 patients (inpatients and outpatients) and a 4‐point Likert scale (agree a lot, agree some, disagree some, disagree a lot));

    • patient medication education;

    • medical records: accessibility and accuracy;

    • medical records: completeness;

    • completeness of peri‐operative notes;

    • completeness of ward stock medicine labelling;

    • hospital sanitation.


Note: initially there were 12 indicators, but after the first measurement 4 of them were dropped. Indicators of neonatal mortality, surgical wound infections and time to surgery, and financial solvency were dropped due to difficulties in achieving comparability between hospitals (see Appendix 3 for reasons).
Notes  
Risk of bias
Bias Authors' judgement Support for judgement
Random sequence generation (selection bias) Low risk p. 5, col. 2, para. 1
"To ensure a balanced design with respect to service and care characteristics, researchers stratified the hospitals by size (number of beds) into four categories. Within each stratum a simple random sample without replacement was drawn."
Allocation concealment (selection bias) Low risk The allocation was made by the research team (i.e. centrally), but it is unclear if it was done by an independent statistician.
Blinding (performance bias and detection bias) 
 All outcomes High risk The hospitals could not be blinded to whether or not they were part of a hospital accreditation programme, and it was not stated whether the assessors were blinded.
Incomplete outcome data (attrition bias) 
 All outcomes Low risk p. 1, col. 1, para. 3
"One of the intervention hospitals dropped out of the accreditation midway through the study, and so to retain comparability of the intervention and control groups, a similar sized hospital was removed from the control group, leaving nine hospitals in the intervention group and nine in the control for this part of the study. Of the 20 randomised hospitals, 18 remained from the final analyses."
Selective reporting (reporting bias) Unclear risk p. 7, col. 1, last para. and col. 2
"This process resulted in 12 indicators for the first round of data collection. However, based on preliminary analysis of data collected from the first round, the research team recommended to the steering committee that some indicators be dropped. The steering committee (composed of representatives from the research team, the sponsors of the research, COHSASA, and several South African medical experts) decided to drop the two indicators relating to surgical wound infections and time to surgery because only nine hospitals (six intervention and three control) performed surgery regularly and many of the records lacked information on infections and times. Despite its limitations, the committee did retain the indicator on completeness of peri‐operative notes and extended this to include any form of significant incision or anaesthesia. The committee also dropped the indicator of neonatal mortality rate, because the research assistants had great difficulty in finding reliable data due to the high variation in approaches to documenting neonatal deaths among the various hospitals. Transferring newborns soon after birth was common, but sometimes hospitals reported transfers even when they recorded deaths. Finally, the indicator of financial solvency was discarded because the KZN provincial government had implemented strict budgeting controls across all hospitals in the region with reportedly no additional funds being assigned. Hence, it was unlikely that the COHSASA process would affect this indicator. These decisions resulted in eight quality indicators (see Table 3)."
Other bias Unclear risk The baseline survey of compliance with the accreditation standards was not performed simultaneously in intervention and control hospitals, but a mean of 3 months later in control hospitals, so it is unclear whether the control measurements represent a true baseline. This is especially relevant since the COHSASA survey team did not only consult hospital records, but also interviewed hospital staff and observed procedures and operations to determine the degree to which the service elements met the requirements of the standards. However, the authors suggested that the intervention hospitals generally waited to start working on improving the standards until they had seen the baseline survey report, about 2 months after the baseline survey was conducted.
Similar baseline characteristics Unclear risk The stratification of hospitals into intervention and control groups was by hospital size ‐ but neither hospital size nor any of the other hospital characteristics reported at p. 19, Table A, were tested for statistically significant differences between groups. See also comment under other risk of bias.
Similar baseline outcome measures Low risk p. 10, col. 1, para. 1, lines 14 to 17
"At baseline the intervention and control hospitals showed similar levels of compliance to the critical standards."
Adequate protection against contamination Low risk Allocation was by hospital.

COHSASA: Council for Health Services Accreditation for South Africa; col.: column; CQI: continuous quality improvement; HCAIs: healthcare‐acquired infections; ITS: interrupted time series; JCI: Joint Commission International; KZN: Kwa‐Zulu Natal; MRSA: methicillin‐resistant Staphylococcus aureus; p.: page; para.: paragraph; RCT: randomised controlled trial; SD: standard deviation.

Characteristics of excluded studies [ordered by study ID]

Study Reason for exclusion
Al Tehewy 2009 Not an external inspection intervention, but a natural experiment comparing clusters submitted for accreditation with control clusters on compliance with standards (monitoring indicators set by the General Directorate of Quality in the Ministry of Health and Population, Egypt). Ineligible study design (survey).
Brooke 2008 Not an external inspection intervention, but evaluated compliance with standards (Leapfrog evidence‐based standards for abdominal aortic aneurysm repair set by the Leapfrog Group). Ineligible study design (questionnaire survey).
Frasco 2005 Not an external inspection intervention, but evaluated the implementation of the JCAHO pain initiative. Ineligible study design (uncontrolled BA study).
Kowiatek 2002 Not an external inspection intervention, but evaluated a new medication control review tool for monitoring compliance with JCAHO standards. Ineligible study design (case studies).
Laselle 2006 Not an external inspection intervention, but evaluated the implementation of standards (JCAHO medication management standards). Ineligible study design (retrospective survey).
Mattes 1987 Not an external inspection intervention, evaluated only 2 different treatment planning conference ratings. Ineligible study design (uncontrolled BA study).
OPM 2007 External inspection intervention, but ineligible study design (questionnaire survey).
OPM 2008a Not an external inspection intervention, but an evaluation of the Healthcare Commission's assessment process. Ineligible study design (case studies and surveys).
OPM 2008b Not an external inspection intervention, but an evaluation of the Healthcare Commission's inspection programme. Ineligible study design (case studies and surveys).
Russell 2014 Not an external inspection intervention, but a programme of reciprocal peer‐to‐peer review visits with supported quality improvement.
Shaw 2003 Not an intervention (overview paper).
Shonka 2009 Not an external inspection intervention. Evaluated compliance with work hour regulations. Ineligible study design (retrospective review).
Walsh 1998 Neither the report nor the authors' contact details could be located.
Winchester 2008 Not an intervention (overview paper).

BA: before‐after study; JCAHO: Joint Commission on Accreditation of Healthcare Organizations.

Characteristics of studies awaiting assessment [ordered by study ID]

Browne 2015.

Methods Study design: unclear
Data collection: previsit information requested from each CCG; standardised interviews with clinicians, managers and patients; audits of clinical notes; compliance with NICE standards assessed by the review team during the visits.
Participants Recipients: 13 CCGs
Country: England (the Southwest)
Targeted behaviour: diabetic foot care (decrease amputation rates)
Interventions Aim: a structured diabetic foot care peer review programme, commissioned by the Strategic Clinical Network, to assess clinical pathways across each CCG in southwest England and reduce diabetes amputation rates.
Description of the intervention:
Previsit information was requested from each CCG: this included the service specification, pathways, practice‐based 5‐year amputation rates and compliance with the 9 key care processes.
Visits: external multiprofessional review teams visited each acute trust to conduct standardised interviews with clinicians and managers invited from community and acute providers regarding the local diabetic foot care pathways. Structured patient interviews were conducted, along with a standardised audit of randomly selected clinical notes.
Compliance with guidelines: compliance with NICE CG119 on diabetic foot care was assessed by the review team.
Feedback and recommendations: at the conclusion of each visit, preliminary findings were presented to the CCGs and providers, and anonymous feedback forms on the value of the visit were collected from attendees. A standardised report was sent to the chief executives of CCGs and relevant providers within 21 days of each visit. Each report included summary findings, areas of excellence and areas requiring improvement, with clear recommendations, suggested time frames and the responsible organisation.
Outcomes Main outcome: amputation rates
Notes This study was completed, and a draft manuscript produced, at almost the same time as our review was submitted for publication.

CCG: clinical commissioning group; NICE: National Institute for Health and Care Excellence.

Differences between protocol and review

Two of the authors on the original version of the review (Professor Martin Eccles and Sarah Taber) did not take part in this update. One new review author (Daniela Gonçalves‐Bradley) joined the review team. In the previous version of the review, two review authors independently screened all references. Because most records retrieved by the previous search were irrelevant, for this update one review author screened all references to remove the clearly irrelevant records and produced a long list that two review authors then independently screened. We used updated Cochrane/EPOC methods, including the GRADE approach. We did not search some of the databases that were searched for the original version of the review (i.e. Science Citation Index, Social Science Citation Index, ISI Conference Proceedings and Intute), as the previous searches identified no unique relevant records.

Contributions of authors

DGB screened the update search and produced a long list of possibly eligible studies. GF assessed the eligibility of the citations in the long list and produced a short list for reconciliation. GF updated the text of the review, and DGB added search information and a PRISMA table. All review authors read and approved the final version.

For the previous version of the review: GF and MP Eccles (MPE) sifted all the titles and abstracts and applied the eligibility criteria. GF, MPE, MPP and SA Taber (ST) extracted data and assessed the risk of bias of included studies. GF drafted the review. MPE, MPP and ST commented on drafts and approved the final version.

Sources of support

Internal sources

  • Newcastle University, UK.

  • Oxford University, UK.

  • NIHR Cochrane Programme Grant, UK.

External sources

  • NIHR Cochrane Programme Grant, UK.

Declarations of interest

GF states no conflict of interest.

DGB states no conflict of interest.

MPP states no conflict of interest.

New search for studies and content updated (no change to conclusions)

References

References to studies included in this review

OPM 2009 {published data only}

  1. OPM evaluation team. Evaluation of the Healthcare Commission's Healthcare Associated Infections Inspection Programme. OPM Report 2009:1‐23.

Salmon 2003 {published data only}

  1. Salmon JW, Heavens J, Lombard C, Tavrow P. The impact of accreditation on the quality of hospital care: KwaZulu‐Natal Province, Republic of South Africa. Operations Research Results. Bethesda: U.S. Agency for International Development (USAID), Quality Assurance Project, University Research Co., LLC, 2003; Vol. 2, issue 17:1‐49.

References to studies excluded from this review

Al Tehewy 2009 {published data only}

  1. Al Tehewy N, Salem B, Habil I, Okda S. Evaluation of accreditation program in non‐governmental organisations' health units in Egypt: short term outcomes. International Journal for Quality in Health Care 2009;21(3):183‐9.

Brooke 2008 {published data only}

  1. Brooke BS, Perler BA, Dominici F, Makary MA, Pronovost PJ. Reduction of in‐hospital mortality among California hospitals meeting Leapfrog evidence‐based standards for abdominal aortic aneurysm repair. Journal of Vascular Surgery 2008;47(6):1155‐6; 1163‐4.

Frasco 2005 {published data only}

  1. Frasco PE, Sprung J, Trentman TL. The impact of the joint commission for accreditation of healthcare organizations pain initiative on perioperative opiate consumption and recovery room length of stay. Anesthesia & Analgesia 2005;100:162‐8.

Kowiatek 2002 {published data only}

  1. Kowiatek JG, Weber RJ, Schilling DE, McKaveney TP. Monitoring compliance with JCAHO standards using a medication‐control review tool. American Journal of Health System Pharmacy 2002;59(18):1763‐7.

Laselle 2006 {published data only}

  1. Laselle TJ, May SK. Medication orders are written clearly and transcribed accurately? Implementing Medication Management Standard 3.20 and National Patient Safety Goal 2b. Hospital Pharmacy 2006;41:82‐7.

Mattes 1987 {published data only}

  1. Mattes JA. A controlled evaluation of a JCAH regulation. Psychiatric Hospital 1987;18(3):131‐3.

OPM 2007 {published data only}

  1. Office of Public Management (OPM). Evaluation of a national audit of specialist inpatient healthcare services for people with learning difficulties in England. OPM Reports 2007.

OPM 2008a {published data only}

  1. Office of Public Management (OPM). Evaluation of the Healthcare Commission's Assessment Process 2006‐2007. OPM Reports 2008.

OPM 2008b {published data only}

  1. Office of Public Management (OPM). Evaluation of the Healthcare Commission investigation function. OPM Reports 2008.

Russell 2014 {published data only}

  1. Russell GK, Jimenez S, Martin L, Stanley R, Peake MD, Woolhouse I. A multicentre randomised controlled trial of reciprocal lung cancer peer review and supported quality improvement: results from the improving lung cancer outcomes project. British Journal of Cancer 2014;110(8):1936‐42. [DOI: 10.1038/bjc.2014.146]

Shaw 2003 {published data only}

  1. Shaw CD. Measuring against clinical standards. Clinica Chimica Acta 2003;333(2):115‐24.

Shonka 2009 {published data only}

  1. Shonka DC, Ghanem TA, Hubbard MA, Barker DA, Kesser BW. Four years of accreditation council of graduate medical education duty hour regulations: have they made a difference? Laryngoscope 2009;119(4):635‐9.

Walsh 1998 {published data only}

  1. Walsh N, Walshe K. Accreditation in primary care: an evaluation of the Royal College of General Practitioners' team based practice accreditation programme. University of Birmingham Health Services Management Centre, 1998.

Winchester 2008 {published data only}

  1. Winchester DP, Kaufman C, Anderson B, El‐Tamer M, Kurtzman SH, Masood S, et al. The National Accreditation Program for Breast Centers: quality improvement through interdisciplinary evaluation and management. Bulletin of the American College of Surgeons 2008;93(10):13‐7.

References to studies awaiting assessment

Browne 2015 {published data only}

  1. Browne DL, Bamford M, Roe M, Harrington A, Paisey RB. A structured diabetic foot care peer review programme commissioned by the Strategic Clinical Network to assess clinical pathways across each clinical commissioning group (CCG) in southwest England and reduce diabetes amputation rates. Diabetic Medicine 2015;32(Suppl 1):197.

Additional references

Accreditation Canada 2015

  1. Nicklin W. The value and impact of healthcare accreditation: a literature review. Accreditation Canada 2015.

Care Quality Commission 2013

  1. Raising standards: putting people first. Our strategy for 2013‐2016. Care Quality Commission 2013.

Davis 2001

  1. Davis D, Downe J, Martin S. External inspection of local government: driving improvement or drowning in detail? Joseph Rowntree Foundation 2001.

Department of Health 2006

  1. Department of Health. Code of practice for the prevention and control of healthcare associated infections. The Health Act 2006:1‐41.

EPOC 2009

  1. EPOC. Risk of bias tool. Available from epoc.cochrane.org/epoc‐resources‐review‐authors 2009.

EPOC 2015

  1. Effective Practice and Organisation of Care (EPOC). EPOC Resources for review authors. Available at: epoc.cochrane.org/epoc‐specific‐resources‐review‐authors. Oslo: Norwegian Knowledge Centre for the Health Services, 2015.

Greenfield 2008

  1. Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. International Journal for Quality in Health Care 2008;20(3):172‐83.

Hart 2013

  1. Hart K, Djasri H, Utarini A. Regulating the quality of healthcare: lessons from hospital accreditation in Australia and Indonesia. Health Policy & Health Finance Hub. Working papers. 2013;28:1‐22.

Higgins 2011

  1. Higgins JPT, Altman DG (editors). Chapter 8: Assessing risk of bias in included studies. In: Higgins JP, Green S, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available from www.cochrane‐handbook.org.

Hopewell 2009

  1. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews 2009, Issue 1. [DOI: 10.1002/14651858.MR000006.pub3]

Hovlid 2015

  1. Hovlid E, Høifødt H, Smedbråten B, Braut GS. A retrospective review of how nonconformities are expressed and finalized in external inspections of health‐care facilities. BMC Health Services Research 2015;15(405):1‐11.

ISO 2004

  1. International Organization for Standardization/International Electrotechnical Commission. Standardization and related activities. ISO/IEC Guide 2004; Vol. 2.

Jamtvedt 2006

  1. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2006, Issue 2. [DOI: 10.1002/14651858.CD000259.pub2]

Moher 2009

  1. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta‐Analyses: the PRISMA Statement. PLoS Medicine 2009;6(6):e1000097.

Pomey 2005

  1. Pomey MP, François P, Contandriopoulos AP, Tosh A, Bertrand D. Paradoxes in French Accreditation. Quality and Safety in Health Care 2005;14(1):51‐5.

Pomey 2010

  1. Pomey MP, Lemieux‐Charles L, Champagne F, Angus D, Shabah A, Contandriopoulos AP. Does accreditation stimulate change? A study of the impact of the accreditation process on Canadian healthcare organizations. Implementation Science 2010;5:31.

RevMan 2014 [Computer program]

  1. The Nordic Cochrane Centre, The Cochrane Collaboration. Review Manager (RevMan). Version 5.3. Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014.

Shaw 2001

  1. Shaw C. External assessment of healthcare. British Medical Journal 2001;322:851‐4.

Shaw 2004

  1. Shaw C. The external assessment of health services. World Hospitals and Health Services 2004;40(1):24‐7.

Shaw 2013

  1. Shaw CD, Braithwaite J, Moldovan M, Nicklin W, Grgic I, Fortune T, et al. Profiling health‐care accreditation organizations: an international survey. International Journal for Quality in Health Care 2013;25(3):222‐31.

Tuijn 2011

  1. Tuijn SM, Robben PB, Janssens FJ, Bergh H. Evaluating instruments for regulation of health care in the Netherlands. Journal of Evaluation in Clinical Practice 2011;17(3):411‐9.

Walsche 2000

  1. Walshe K, Freeman T, Latham L, Wallace L, Spurgeon P. Chapter 6. The development of external reviews of clinical governance. Clinical Governance ‐ from Policy to Practice. Birmingham, UK: University of Birmingham, Health Services Management Centre, 2000.

References to other published versions of this review

Flodgren 2011

  1. Flodgren G, Pomey M‐P, Taber SA, Eccles MP. Effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour or patient outcomes. Cochrane Database of Systematic Reviews 2011, Issue 11. [DOI: 10.1002/14651858.CD008992.pub2]
