UKPMC Funders Author Manuscripts
Author manuscript; available in PMC: 2014 Sep 15.
Published in final edited form as: Cochrane Database Syst Rev. 2011 Nov 9;(11):CD008992. doi: 10.1002/14651858.CD008992.pub2

Effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour or patient outcomes

Gerd Flodgren 2, Marie-Pascale Pomey 3, Sarah A Taber 4, Martin P Eccles 1
PMCID: PMC4164461  EMSID: EMS58367  PMID: 22071861

Abstract

Background

Inspection systems are used in health care to promote quality improvements, i.e. to achieve changes in organisational structures or processes, healthcare provider behaviour and patient outcomes. These systems are based on the assumption that externally promoted adherence to evidence-based standards (through inspection/assessment) will result in higher quality of health care. However, the benefits of external inspection in terms of organisational, provider and patient level outcomes are not clear.

Objectives

To evaluate the effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes.

Search methods

We searched the following electronic databases for studies: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, CINAHL, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness, Scopus, HMIC, Index to Theses and Intute from their inception dates up to May 2011. There was no language restriction and studies were included regardless of publication status. We searched the reference lists of included studies and contacted authors of relevant papers, accreditation bodies and the International Organization for Standardisation (ISO), regarding any further published or unpublished work.

Selection criteria

We included randomised controlled trials (RCTs), controlled clinical trials (CCTs), interrupted time-series (ITSs) and controlled before and after studies (CBAs) evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes in hospitals, primary healthcare organisations and other community-based healthcare organisations.

Data collection and analysis

Two review authors independently applied eligibility criteria, extracted data and assessed the risk of bias of each included study. Since meta-analysis was not possible, we produced a narrative results summary.

Main results

We identified one cluster-RCT involving 20 South African public hospitals (Salmon 2003) and one ITS involving all acute trusts in England (OPM 2009) for inclusion in this review.

Salmon and colleagues (Salmon 2003) showed mixed effects of a hospital accreditation system on compliance with COHSASA (the Council for Health Services Accreditation for South Africa) accreditation standards and eight indicators of hospital quality. Significantly improved total mean compliance with COHSASA accreditation standards was found for 21/28 service elements: the mean intervention effect (95% confidence interval (CI)) was 30% (23% to 57%) (P < 0.001). The score increased from 48% to 78% in intervention hospitals, while remaining the same in control hospitals (43%). A sub-analysis of 424 a priori identified critical criteria (19 service elements) showed significantly improved compliance with the critical standards (P < 0.001): the score increased from 41% (21% to 46%) to 75% (55% to 96%) in intervention hospitals, but was unchanged in control hospitals (37%). Only one of the nine intervention hospitals gained full accreditation status at the end of the study period, with two others reaching pre-accreditation status. The median intervention effect (range) for the indicators of hospital quality of care was 2.4 (−1.9 to +11.8), and only one of the eight indicators, ‘nurses’ perception of clinical quality, participation and teamwork’, was significantly improved (mean intervention effect 5.7, P = 0.03).

Re-analysis of the MRSA (methicillin-resistant Staphylococcus aureus) data showed statistically non-significant effects of the Healthcare Commission’s Infection Inspection programme.

Authors’ conclusions

We only identified two studies for inclusion in this review, which highlights the paucity of high-quality controlled evaluations of the effectiveness of external inspection systems. No firm conclusions could therefore be drawn about the effectiveness of external inspection on compliance with standards.

Medical Subject Headings (MeSH): Accreditation [*standards]; Commission on Professional and Hospital Activities [*standards]; Cross Infection [epidemiology]; England; Guideline Adherence [standards]; Hospitals [*standards]; Methicillin-Resistant Staphylococcus aureus; Organizational Culture; Outcome Assessment (Health Care) [standards]; Professional Practice [*standards]; Quality Assurance, Health Care [*standards]; Quality Improvement [*standards]; Randomized Controlled Trials as Topic; South Africa; Staphylococcal Infections [epidemiology]

BACKGROUND

Inspection or review systems are used in health care to promote improvements in the quality of care, i.e. to promote changes in organisational structures or processes, in healthcare provider behaviour and thereby in patient outcomes. These review systems are based on the assumption that externally promoted adherence to evidence-based standards (through inspection/assessment) will result in higher quality of health care. Review systems are popular among healthcare funders, who are more likely to make funding available (or less likely to withdraw funding) if standards are met and healthcare professionals and the public can have confidence in the standards of care provided. Numerous review systems are described in the literature (e.g. peer review, accreditation, audit, regulation and statutory inspection, International Organization for Standardisation (ISO) certification, and evaluation against a business excellence model (Shaw 2004)). However, many of these assume that the organisation being reviewed will volunteer itself for the review process, and by so doing will have already made some commitment to improvement. Such voluntary systems will systematically miss those organisations that are not inclined to submit themselves for review; only an externally authorised and driven process can promote change in any organisation, irrespective of its inclination to be inspected. An example of such a system is the inspection process run by the Care Quality Commission (formerly the Healthcare Commission) in the UK National Health Service (NHS) (http://www.cqc.org.uk/). The Commission has a regular cycle of inspection, the ability to respond to concerns about the quality of health care in any NHS organisation, and the power both to inspect and, largely, to decide the consequences of inspection.

Definitions

For the purposes of this review, an external inspection is defined as “a system, process or arrangement in which some dimensions or characteristics of a healthcare provider organisation and its activities are assessed or analysed against a framework of ideas, knowledge, or measures derived or developed outside that organisation” (Walshe 2004). The process of inspection is initiated and controlled by an organisation external to the one being inspected.

The International Organization for Standardisation (ISO) (ISO 2004) defines a standard as “a document, established by consensus and approved by a recognized body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context”. Included in this definition is that “standards should be based on the consolidated results of science, technology and experience, and aimed at the promotion of optimum community benefits.”

An external standard is a standard developed by a body external to the organisation being inspected; this distinguishes it from the standards used in, for example, audit and feedback, which are often set by the group to which they are applied.

How the intervention might work

Inspection of performance assumes that the process of comparing performance against an explicit standard of care will lead to the closing of any identified important gaps; in this sense the underlying process is akin to that which underpins clinical audit. However, when conducted at an organisational level, it usually encompasses a far wider range of organisational attributes than clinical audit would normally address. Inspections assessing the quality of care within healthcare organisations are undertaken by a variety of agencies, non-governmental as well as governmental, that use different approaches and methods. Thus, the inspection process can take different forms, both in the measurements and data used and in the purpose and focus of the review. Systems may also differ in when and how they are initiated, whether they are voluntary or mandatory, whether they are applied to all of an organisation or only a sub-section of it (e.g. a particular clinical area or professional group), and whether or not they are linked to incentives or sanctions. The way external bodies use the results to bring about desired quality improvements in an organisation also differs. There may, in addition, be adverse effects, undesired change or changes that do not last long term (Walshe 2004).

Why it is important to do this review

Voluntary inspection processes are extensively used in North America, Europe and elsewhere around the world, but have rarely been evaluated in terms of their impact on the organisations reviewed, i.e. on healthcare delivery, patient outcomes or cost-effectiveness (Greenfield 2008). External inspection processes similarly lack evaluations, and it is therefore not clear what the benefits of such inspections are, or which inspection process is most successful in improving health care. Reviewing the available evidence on the effectiveness of external inspection is an important first step towards identifying the optimal design of such processes in terms of their ability to improve healthcare processes and outcomes.

OBJECTIVES

To evaluate the effectiveness of external inspection of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour and patient outcomes.

METHODS

Criteria for considering studies for this review

Types of studies

We included studies evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes in hospitals, primary healthcare organisations and other community-based healthcare organisations. We considered the following study designs: randomised controlled trials (RCTs), controlled clinical trials (CCTs), interrupted time-series (ITSs) and controlled before and after studies (CBAs) that included at least two sites in both control and intervention groups.

Types of participants

We included hospitals, primary healthcare organisations or other community-based healthcare organisations containing health professionals.

Types of interventions

We included all processes of external inspection against external standards in a healthcare setting compared with no inspection or with another form of inspection (e.g. against internally-derived standards).

Types of outcome measures

We included studies that reported one or more of the following objective outcome measures.

Main outcomes
  • Measures of healthcare organisational change (e.g. organisational performance, waiting list times, inpatient hospital stay time)

  • Measures of healthcare professional behaviour (e.g. referral rate, prescribing rate)

  • Measures of patient outcome (e.g. mortality and condition-specific measures of outcome related to patients’ health)

Other outcomes
  • Patient’s satisfaction and patient involvement

  • Unanticipated or adverse consequences

  • Economic outcomes

Search methods for identification of studies

We searched for studies evaluating the effect of external inspection against external standards on healthcare organisation change, healthcare professional behaviour or patient outcomes.

Electronic searches

We searched the following electronic databases for primary studies:

  • Cochrane Central Register of Controlled Trials (CENTRAL) Cochrane Library 2011 issue 1, May 2011

  • Cochrane Database of Systematic Reviews (CDSR) Cochrane Library 2010 issue 2, May 2011

  • Database of Abstracts of Reviews of Effectiveness (DARE) Cochrane Library 2010 issue 2, May 2011

  • MEDLINE, Ovid (1950 to May 2011)

  • EMBASE, Ovid (1980 to May 2011)

  • CINAHL, EBSCO (1980 to May 2011)

  • Science Citation Index, Web of Knowledge (1970 to May 2011)

  • Social Science Citation Index, Web of Knowledge (1970 to May 2011)

  • ISI Conference Proceedings, Web of Knowledge (1970 to May 2011)

  • PsycINFO, Ovid (1806 to May 2011)

  • HMIC, Ovid (1983 to May 2011)

  • Intute (www.intute.ac.uk) (searched May 2011)

  • Electronic Theses Online (EThOS) (www.ethos.ac.uk) (searched May 2011)

We translated the search strategy for each database using the appropriate controlled vocabulary as applicable. There was no language restriction. We included studies regardless of publication status.

Full search strategies are reported in Appendix 1.

Searching other resources

We searched the reference lists of all included studies. We contacted authors of relevant papers, as well as accreditation bodies and ISO, regarding any further published or unpublished work. We searched the websites of organisations concerned with accreditation, such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) (http://www.jointcommission.org/); Accreditation Canada (www.accrediation.ca); the Australian Council for Healthcare Standards International (ACHSI) (www.achs.org.au/ACHSI); and the International Society for Quality in Health Care (ISQua) (www.isquaresearch.com).

Data collection and analysis

Selection of studies

We downloaded all titles and abstracts retrieved by electronic searching to the reference management database EndNote and removed duplicates. Two review authors (GF and MPE) independently screened the titles and abstracts found by the electronic searches. We excluded those studies that clearly did not meet the inclusion criteria and we obtained copies of the full text of potentially relevant references. Two review authors (GF and MPE) independently assessed the eligibility of retrieved papers. Disagreements were resolved by discussion between review authors.

Data extraction and management

Two review authors (from GF, MPE, MPP and ST) independently extracted the data from each included study into a modified EPOC (Effective Practice and Organisation of Care Group) data extraction form (Appendix 2). Disagreements were resolved by discussion, or arbitration by a third person.

Assessment of risk of bias in included studies

Two review authors (from GF, MPE, MPP and ST) independently assessed the risk of bias of each included study. Disagreements were resolved by discussion, or arbitration by a third person. For randomised controlled trials, we used The Cochrane Collaboration’s tool for assessing risk of bias (Higgins 2008) on six standard criteria: (i) adequate sequence generation; (ii) concealment of allocation; (iii) blinded or objective assessment of primary outcome(s); (iv) adequately addressed incomplete outcome data; (v) free from selective reporting; (vi) free of other risk of bias. We also used three additional criteria specified by EPOC (EPOC 2009): (vii) similar baseline characteristics; (viii) similar baseline outcome measures; (ix) adequate protection against contamination. For the included ITS study the following criteria were used: (a) was the intervention independent of other changes? (b) was the shape of the intervention effect pre-specified? (c) was the intervention unlikely to affect data collection? (d) was knowledge of the allocated interventions adequately prevented during the study? (e) were incomplete outcome data adequately addressed? (f) was the study free from selective outcome reporting? (g) was the study free from other risks of bias? We scored each criterion as ‘yes’ (= adequate), ‘no’ (= inadequate) or ‘unclear’. Studies achieved a ‘low’ risk of bias score if all risk of bias criteria were judged adequate. We assigned a score of moderate or high risk of bias to studies that scored inadequate on one to two, or more than two, criteria respectively (Jamtvedt 2006). The risk of bias of included studies is summarised in the text and presented in the risk of bias section within the Characteristics of included studies table.
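The overall scoring rule (Jamtvedt 2006) can be sketched as a small function. This is an illustration by us, not software used in the review; in particular, the handling of ‘unclear’ judgements (treated as not adequate, so a study with only ‘unclear’ judgements scores at best moderate) is our assumption.

```python
def overall_risk_of_bias(judgements):
    """Classify a study's overall risk of bias from per-criterion judgements.

    judgements: dict mapping each criterion to 'adequate', 'inadequate'
    or 'unclear'. Rule: all adequate -> 'low'; inadequate on one to two
    criteria -> 'moderate'; more than two -> 'high'. 'unclear' is
    conservatively treated as not adequate (our assumption).
    """
    n_inadequate = sum(1 for j in judgements.values() if j == "inadequate")
    if n_inadequate > 2:
        return "high"
    if all(j == "adequate" for j in judgements.values()):
        return "low"
    return "moderate"
```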

Measures of treatment effect

For each study, we reported data in natural units. Where baseline results were available from RCTs, CCTs and CBAs, we reported pre intervention and post intervention means or proportions for both study and control groups and calculated the unadjusted and adjusted (for any baseline imbalance) absolute change from baseline with 95% confidence limits.
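For proportions, the unadjusted absolute change from baseline (a difference-in-differences) and an approximate 95% confidence interval can be computed as in the following sketch. The function and its independence assumption are ours, not the review's analysis code.

```python
import math

def change_from_baseline(pre_i, post_i, n_i, pre_c, post_c, n_c):
    """Unadjusted absolute change from baseline for proportions.

    Effect = (post - pre) in intervention sites minus (post - pre) in
    control sites, with an approximate 95% CI that treats the four
    proportions as independent (a simplifying assumption; it ignores
    clustering and any pre/post correlation).
    """
    effect = (post_i - pre_i) - (post_c - pre_c)
    variance = sum(p * (1 - p) / n for p, n in
                   [(pre_i, n_i), (post_i, n_i), (pre_c, n_c), (post_c, n_c)])
    half_width = 1.96 * math.sqrt(variance)
    return effect, (effect - half_width, effect + half_width)
```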

For ITS studies, we reported the main outcomes in natural units and two effect sizes: the change in the level of outcome immediately after the introduction of the intervention and the change in the slopes of the regression lines. Both of these estimates are necessary for interpreting the results of each comparison. For example, there could have been no change in the level immediately after the intervention, but there could have been a significant change in slope. We also reported level effects for six months and yearly post intervention points within the post intervention phase.
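The two ITS effect sizes correspond to two coefficients of a segmented linear regression. A minimal sketch (our illustration, fitted by ordinary least squares; a full re-analysis would also need to address autocorrelation in the series):

```python
import numpy as np

def segmented_regression(y, intervention_at):
    """Fit y_t = b0 + b1*t + b2*step_t + b3*(t - T)*step_t.

    step_t is 0 before the intervention point T and 1 from T onwards.
    b2 estimates the immediate change in level at the intervention;
    b3 estimates the change in the slope of the regression line.
    """
    t = np.arange(len(y), dtype=float)
    step = (t >= intervention_at).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - intervention_at) * step])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef[2], coef[3]  # (level change, slope change)
```

For example, a series with a pre-intervention slope of 1, an immediate jump of 5 at the intervention and a post-intervention slope of 3 yields a level change of 5 and a slope change of 2, which illustrates why both estimates are needed to interpret a comparison.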

The results for all comparisons were presented using a standard method of presentation where possible. For comparisons of RCTs, CCTs and CBAs we reported (separately for each study design): median effect size across included studies; inter-quartile ranges of effect sizes across included studies; range of effect sizes across included studies.
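Had more studies been available, the summary statistics named above could have been computed as in this sketch (our illustration, using Python's standard library):

```python
import statistics

def summarise_effects(effects):
    """Median, inter-quartile range and range of effect sizes across studies."""
    ordered = sorted(effects)
    q1, median, q3 = statistics.quantiles(ordered, n=4)  # exclusive method
    return {"median": median, "iqr": (q1, q3), "range": (ordered[0], ordered[-1])}
```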

Unit of analysis issues

Neither of the included studies had unit of analysis errors.

Assessment of heterogeneity

We could not explore heterogeneity, due to too few studies being identified.

Assessment of reporting biases

We could not assess publication bias because too few studies were identified.

Data synthesis

We did not carry out meta-analysis. Instead, we produced a narrative results summary. In one of the included studies (OPM 2009), we re-analysed data on MRSA (methicillin-resistant Staphylococcus aureus) rate as a time series. We used Review Manager 5 (RevMan) (Review Manager 2008) to present and synthesise the data.

The results of the review are summarised in the ‘Summary of findings’ table.

Subgroup analysis and investigation of heterogeneity

We did not perform any subgroup analysis or investigate heterogeneity.

Sensitivity analysis

We had planned to perform a sensitivity analysis, excluding studies at high risk of bias, but since so few studies were found we did not perform any such analysis.

RESULTS

Description of studies

See: Characteristics of included studies; Characteristics of excluded studies.

We searched for studies (RCTs, CCTs, ITSs and CBAs) evaluating the effect of external inspection of compliance with standards on healthcare organisation change, healthcare professional behaviour or patient outcomes.

Results of the search

Figure 1 shows the study PRISMA flow chart (Moher 2009). We identified a total of 9901 non-duplicate citations from electronic database searches. After screening of all titles and abstracts, 10 citations met the initial selection criteria and we obtained the full text for review. We identified an additional five papers through contacts with authors and accreditation bodies. Of these 15 studies, we excluded 13 for reasons presented in the Characteristics of excluded studies table. The remaining two studies, which met the inclusion criteria, are reported in detail in the Characteristics of included studies table.

Figure 1. Study flow diagram.

Included studies

Two studies met the inclusion criteria: one cluster-RCT (Salmon 2003) and one before and after study (that could be re-analysed as an ITS) (OPM 2009), performed in an upper-middle-income and a high-income country, respectively. Both external inspections were mandatory, i.e. decided upon by somebody other than the recipient, and universal, i.e. applied at the organisational level.

Targeted behaviour

The aims of the accreditation programme (Salmon 2003) were to improve the compliance with COHSASA (the Council for Health Services Accreditation for South Africa) accreditation standards and to improve performance related to eight hospital quality of care indicators. The purpose of the Healthcare Commission’s inspection programme (OPM 2009) was to improve trusts’ compliance with the Health Act and the Code of Practice (Department of Health 2006) related to healthcare-acquired infections, thereby reducing the number of healthcare-acquired infections (including MRSA infections), and increasing patients’ and the public’s confidence in the healthcare system.

Participants and settings

The setting in Salmon et al (Salmon 2003) was 20 public hospitals in the KwaZulu-Natal province in South Africa (five urban, three peri-urban and two rural hospitals in the intervention group; two urban, two peri-urban and six rural hospitals in the control group). The mean (standard deviation (SD)) number of beds was 435 (± 440) and 467 (± 526) in intervention and control hospitals, respectively. One intervention hospital dropped out of the accreditation process midway through the study; to retain comparability of the intervention and control groups, a similar-sized hospital was removed from the control group, leaving nine hospitals in each group for this part of the study. Therefore, of the 20 randomised hospitals, 18 remained for the final analyses.

For the Healthcare Commission’s Inspection Programme (OPM 2009), all acute hospital trusts in England were included.

Standards

In Salmon et al (Salmon 2003), the hospital quality of care indicators were developed by consensus of an advisory board during a workshop held in South Africa in May 1999. Present at the workshop were South African healthcare professional leaders, the managing director of COHSASA, a representative from Joint Commission International (JCI) and the principal investigators of the research study. This process resulted in 12 indicators for the first round of data collection. However, based on preliminary analysis of data collected in the first round, the research team recommended to the study steering committee that some indicators be dropped. The steering committee (composed of representatives from the research team, the sponsors of the research, COHSASA and several South African medical experts) decided to drop four of the 12 indicators (surgical wound infections, time to surgery, neonatal mortality and financial solvency). This left eight quality indicators (see Figure 2). The reasons for abandoning the four indicators are described in Appendix 3.

The Code of Practice and the Health Act 2006, used as standards in the Healthcare Commission’s Inspection Programme (OPM 2009), were developed and launched by the Department of Health, which in 2006 enacted the new legislation with the aim of decreasing the number of healthcare-acquired infections.

Outcomes

In the study by Salmon et al (Salmon 2003) there were two sets of outcome measures: the eight study-derived quality indicators and the larger raft of COHSASA accreditation criteria. The eight indicators of hospital quality were: (i) nurses’ perception of clinical quality, participation and teamwork; (ii) patient satisfaction with care; (iii) patient medication education; (iv) medical records: accessibility and accuracy; (v) medical records: completeness; (vi) completeness of peri-operative notes; (vii) completeness of ward stock medicine labelling; and (viii) hospital sanitation. All eight indicators were measured twice (see Figure 3 for a description of the indicators). The COHSASA accreditation criteria comprised 6000 criteria (measurable elements) in 28 service elements (see the list of service elements in Figure 4) measuring aspects of hospital quality of care, of which 424 standards (in 19 generic service elements) were a priori judged by COHSASA to be critical criteria for the function of the service elements. These critical criteria were mainly drawn from the following service elements: obstetric and maternity inpatient services; operating theatre and anaesthetic services; resuscitation services; paediatric services; and medical inpatient services. The accreditation standards required that systems and processes be established in the clinical and non-clinical activities of all services.

In the OPM report (OPM 2009), only one of the reported outcomes was suitable for inclusion in this review (by virtue of presenting pre- and post-intervention quantitative data): rates of hospital-acquired MRSA infections for one year before the initiation of the inspection process and for two years after. Trusts must report the MRSA rate each quarter, it is monitored by The Health Protection Agency, and it had a sufficient number of measurements before and after the intervention to allow re-analysis as a short time series. The other outcomes reported in OPM 2009 involved aggregated uncontrolled before and after data that could not be re-analysed as time series (e.g. data on trusts’ compliance with the Code of Practice, and patient and public confidence in health care). Thus, the MRSA rate was an outcome that we considered we could analyse, rather than one specified as an appropriate main outcome by the authors of the report.

Data collection

In the study by Salmon and colleagues (Salmon 2003), the before and after measures of compliance with the accreditation standards were collected by COHSASA surveyors (or teams hired by COHSASA), and the indicators of hospital quality were collected by research assistants hired by the independent research team, composed of South African and US investigators. The time between measurements of the accreditation standards differed between intervention (19 months) and control hospitals (16 months). Owing to the time it took to develop and test the indicators of hospital quality, the first round of measurements was not performed until an average of 7.4 months after COHSASA collected the baseline survey data (but was performed at the same time point in intervention and control hospitals), which resulted in a statistically significant difference in the interval between the baseline survey and the first indicator survey. For both the intervention and the control hospitals, only about nine months separated the first and second rounds of indicator data collection.

In the other study (OPM 2009) the MRSA rate was reported quarterly by the trusts, and monitored and summarised by The Health Protection Agency.

Description of the intervention

Salmon 2003: COHSASA facilitators initially assisted each participating facility to understand the accreditation standards and to perform a self-assessment (baseline survey) against the standards, which was validated by a COHSASA team. Detailed written reports on the level of compliance with the standards, and the reasons for non-conformance, were generated and sent to the hospitals for use in their continuous quality improvement (CQI) programme. Next, the facilitators assisted the hospitals in implementing a CQI process to enable the facilities to improve on standards identified as sub-optimal in the baseline survey. Lastly, the hospital entered the (external) accreditation survey phase, in which a team of COHSASA surveyors who were not involved in the preparatory phase conducted an audit. The accreditation team usually consisted of a medical doctor, a nurse and an administrator, who spent an average of three days evaluating the degree to which the hospital complied with the standards and recording the areas of non-compliance. Hospitals found by COHSASA’s accreditation committee to comply substantially with the standards were awarded either pre-accreditation or full accreditation status. Pre-accreditation encouraged institutions to continue with the CQI process, in the expectation that this would help progress to eventual full accreditation status. In the control hospitals the accreditation variables were measured as unobtrusively as possible, but none of the other components of the accreditation programme were performed, meaning no feedback of results and no technical assistance until after the research was completed. Meanwhile, a separate research team measured the eight study quality indicators in both the intervention and control hospitals.

OPM 2009: The Healthcare Commission Healthcare Acquired Infection Inspection Programme: the selected trusts were notified that they would be inspected at some point within the next three months. A pre-inspection report was produced by the assessors, using relevant data sent to them by the trusts. The assessors used the pre-inspection report to select a subset of duties described in the Code of Practice to be assessed at the next inspection. During the inspection, the inspection team looked for any breaches of the Code of Practice, and this fed into the formal inspection output: either an inspection report with recommendations or an improvement notice. An inspection report highlighted areas requiring improvement and made recommendations as to how the trust needed to improve; the trust then acted on the comments and took steps to improve practices. An improvement notice, on the other hand, required the trust to draw up an action plan specifying how it would remedy the material breaches of the Code that had been identified. Only once the steps to remedy the breaches of the Code of Practice had been followed was a notice lifted.

Excluded studies

We excluded 13 studies after full copies of the papers were obtained and scrutinised. The main reason for exclusion was ineligible intervention (10 studies). We excluded two papers because they were overviews, one paper was excluded due to ineligible study design, and one paper could not be found. See Characteristics of excluded studies.

Risk of bias in included studies

The risk of bias of included studies is described in the ‘Risk of bias’ table within the Characteristics of included studies table.

In the study by Salmon et al (Salmon 2003), the allocation sequence was adequately generated: to ensure a balanced design with respect to service and care characteristics, the researchers stratified the hospitals by size (number of beds) into four categories, and within each stratum a simple random sample without replacement was drawn. The allocation was made by the research team, but it is unclear whether it was done by an independent statistician. The hospitals were notified about the process of inspection, and could not be blinded to whether they were part of an accreditation programme or not. It was unclear whether the assessors were blinded. Incomplete outcome data were adequately addressed: when one of the intervention hospitals (also one of the biggest hospitals) dropped out halfway through the accreditation process, a similar-sized hospital from the control group was excluded to yield the same number of hospitals in each group. Thus, of the 20 hospitals initially included in the trial, 18 remained for the final analysis.

It is unclear if the four indicators of hospital quality of care that were dropped (see Appendix 3) should be deemed as selective reporting of results. After the first round of measurements, the research team suggested to the independent advisory board that the four indicators should be dropped due to problems with comparability between hospitals, and only results for eight indicators were therefore reported in the paper.

It was also unclear whether the baseline characteristics of the participating hospitals were the same in the intervention and control groups. Seemingly, more rural hospitals were included in the control group and more urban hospitals in the intervention group; however, the data were not tested for statistical differences in characteristics between intervention and control hospitals.

Analysis was performed at the level of randomisation (hospital), not at the individual or record level within hospitals, thus allowing for clustering in the analysis. The baseline survey of the compliance with the accreditation standards was not performed simultaneously in intervention and control hospitals, but on average three months later in control hospitals; therefore it is unclear whether or not the baseline outcome measurements were similar and the control measurements represent a true baseline.

In the OPM report (OPM 2009), the intervention was not necessarily independent of other changes. Within the UK there had been an increasing awareness of and publicity about the problem of hospital-acquired infections. Rates of reported cases of MRSA were already showing a downward trend one year before the inspections began.

The intervention effect was not pre-specified, since nothing was mentioned about what effect (a step change or a change in slope) was expected for the outcome measure (MRSA infection rate); however, since the MRSA data were re-analysed by the review authors, we considered the risk of bias for this item low.

The intervention was unlikely to affect data collection, since sources and methods of data collection were the same before and after the intervention (the Health Protection Agency monitors quarterly mandatory reported cases by trusts).

The only re-analysable outcome measure (MRSA rate) was objective, and thus we scored ‘knowledge of the allocated interventions’ as being adequately prevented.

Since quarterly reporting of cases of MRSA is mandatory for acute trusts, there were no incomplete outcome data, missing data or selective reporting of data. The study was also free from other risks of bias.

Effects of interventions

See: Summary of findings for the main comparison Salmon 2003; Summary of findings 2 OPM 2009

Salmon and colleagues (Salmon 2003) reported results for compliance scores with COHSASA accreditation standards, involving 28 service elements and performance related to the eight study quality of care indicators. The results are summarised in Summary of findings for the main comparison.

Compliance scores with the COHSASA accreditation standards are shown in Figure 4.

The results showed significantly improved total mean compliance score with COHSASA accreditation standards in intervention hospitals. The total score for 21 of the 28 service elements, for which comparisons were possible, rose from 48% (range 30% to 57%) to 78% (range 68% to 92%) in intervention hospitals, while control hospitals maintained the same score throughout: 43% (range 21% to 58%) before the intervention and 43% (range 23% to 61%) after the intervention. The mean intervention effect (95% confidence interval (CI)) was 30% (23% to 37%) (P < 0.001).
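The mean intervention effect quoted above corresponds to a simple difference-in-differences of the group mean compliance scores. As a minimal illustrative sketch, using only the rounded percentages reported in the text (this reproduces the 30-percentage-point point estimate; the 95% CI requires the hospital-level data):

```python
# Difference-in-differences of mean compliance scores (Salmon 2003).
# Group means taken from the text; rounded percentages only.
intervention_before, intervention_after = 48, 78   # mean compliance score (%)
control_before, control_after = 43, 43             # mean compliance score (%)

effect = (intervention_after - intervention_before) - (control_after - control_before)
print(effect)  # 30 (percentage points)
```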

In terms of individual scores of compliance with accreditation standards, 21/28 service elements showed a significant effect of the inspections (mean intervention effects ranging from 20% to 52%). For the remaining seven service elements, data were not available, i.e. some of the service elements were only evaluated in some hospitals, so comparisons between the intervention and control arms were not appropriate due to small sample size.

A sub-analysis of 424 a priori identified critical criteria (in 19 generic service elements) showed significantly improved mean compliance with the critical standards in intervention hospitals: the total overall compliance score rose from 38% (range 21% to 46%) to 76% (range 55% to 96%). Control hospitals maintained the same compliance score throughout: 37% (range 28% to 47%) before the intervention and 38% (range 25% to 49%) after the intervention. The difference in means between groups was statistically significant (P < 0.001).

Only one of the nine intervention hospitals gained full accreditation status at the end of the study period, with two others reaching pre-accreditation status.

Mean hospital quality indicator scores are shown in Figure 2.

The effects on the eight study indicators of hospital quality of care were mostly non-significant, with a median intervention effect (range) of 2.4 percentage points (−1.9 to +11.8). The quality indicator “nurses perception of clinical quality, participation and teamwork” showed a significant mean intervention effect (5.7 percentage points, P = 0.03): the score increased by 1.5 percentage points in intervention hospitals (from 59.3% to 60.8%), while it decreased by 4.2 percentage points in control hospitals (from 60.8% to 56.5%). All other indicators, patient satisfaction included (intervention effect 1.5, P = 0.484), showed non-significant effects of the inspections (see Characteristics of included studies table).

In the OPM report (OPM 2009) results for MRSA rates are reported. The results are summarised in Summary of findings 2.

Re-analysis of the quarterly reported MRSA data, as an ITS, showed statistically non-significant effects of the Healthcare Commission’s Infection Inspection Programme. The difference (24.27, 95% CI −10.4 to 58.9) between the pre-intervention slope (−107.6) and the post-intervention slope (−83.32) was not statistically significant (P = 0.147). When the downward trend in MRSA rate before the intervention was taken into account, the results showed a mean (CI) decrease of 100 (−221.0 to 21.5) cases at three months (P = 0.096), 75 (−217.2 to 66.3) cases at six months (P = 0.259), and 27 (−222.1 to 168.2) cases at 12 months (P = 0.762), and an increase of 70 (−250.5 to 391) cases per quarter at 24 months follow-up (P = 0.632).
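The pre- and post-inspection slopes quoted above can be reproduced, to a close approximation, by fitting separate ordinary least-squares trend lines to the quarterly MRSA counts tabulated in the Characteristics of included studies (five pre-inspection and eight post-inspection quarters). The following Python sketch illustrates this; it is not the review authors’ full ITS model, which also estimated the level change and confidence intervals:

```python
# Illustrative sketch: two-segment OLS fit to the quarterly MRSA counts (OPM 2009).
def ols_slope(y):
    """Ordinary least-squares slope of y against x = 0, 1, 2, ..."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    sxy = sum((i - x_mean) * (yi - y_mean) for i, yi in enumerate(y))
    sxx = sum((i - x_mean) ** 2 for i in range(n))
    return sxy / sxx

# Quarterly reported MRSA cases, Apr-Jun 2006 to Apr-Jun 2009;
# inspections began in June 2007, after the first five quarters.
cases = [1742, 1651, 1543, 1447, 1306,                 # pre-inspection
         1083, 1092, 970, 839, 724, 678, 694, 509]     # post-inspection

pre_slope = ols_slope(cases[:5])    # ≈ -107.6 cases per quarter
post_slope = ols_slope(cases[5:])   # ≈ -83.32 cases per quarter
print(round(pre_slope, 1), round(post_slope, 2))
```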

Neither included study reported data on unanticipated/adverse consequences or economic outcomes.

DISCUSSION

Summary of main results

Two studies (one cluster-randomised controlled trial (RCT) and one interrupted time-series (ITS) study) met our inclusion criteria and we included both in this review (OPM 2009; Salmon 2003). The RCT (Salmon 2003) reported a significantly improved mean total compliance score with COHSASA (the Council for Health Service Accreditation of Southern Africa) standards in intervention hospitals, as well as significantly improved compliance with pre-determined critical criteria assessed in a sub-analysis. The effects on the indicators of hospital quality, however, were mostly non-significant. Only one of the intervention hospitals achieved accreditation status at the end of the study period, while two others achieved pre-accreditation status.

The evaluation of the effectiveness of the Healthcare Commission’s Healthcare Associated Infections Inspections Programme (OPM 2009) showed no significant effect of the inspection programme on the number of hospital-acquired MRSA cases. However, the inspection programme was only one element of a wider range of interventions being applied to infection control in the UK National Health Service (NHS) at that time. Even before the introduction of the inspection programme there was a significant negative time trend (rates were decreasing), but the introduction of the inspection programme did not accelerate that trend.

The emphasis on external inspection differs greatly between high- and low-income countries. In high-income countries, the main aims of external inspection are the evaluation and improvement of safety, clinical effectiveness, consumer information, staff development etc., while in low-income countries the emphasis is on establishing basic facilities and information, and improving access to healthcare services (Shaw 2003). It is reasonable to assume that many of the low-income countries’ issues apply to middle-income countries too. The hospital accreditation programme was performed in an upper-middle-income country (Salmon 2003), while the Healthcare Commission’s Healthcare Acquired Infections Inspections Programme was performed in a high-income country (OPM 2009). It is not possible, or even desirable, to compare or synthesise results from studies in which the conditions under which health care is provided are so different; e.g. in Salmon et al (Salmon 2003), basic necessities like soap and paper towels were not available in more than half of the included hospitals.

The way the assessors used data from the inspection to accomplish change, i.e. improved compliance with standards in the healthcare organisation being inspected, differed between studies. When the inspection was performed by a non-governmental body, the assessors could recommend change but had no power to enforce it. The Healthcare Commission, on the other hand, as a governmental body, had the means to enforce change and make organisations improve their compliance with standards through ‘improvement notices’.

Overall completeness and applicability of evidence

The evidence is limited in scale, content and generalisability. With only two studies, it is difficult to draw any clear conclusions about the effectiveness of external inspection of compliance with external standards beyond their effects within the two included studies. Of the outcomes reported by Salmon and colleagues (Salmon 2003), the majority were structural or process outcomes, and only one patient outcome was reported (patient satisfaction with care). Unfortunately, important outcomes, e.g. morbidity and mortality, had to be dropped during the research process due to problems with comparability between hospitals. In the evaluation of the Healthcare Commission’s Inspections Programme (OPM 2009), most reported data were uncontrolled before and after data, and only data on MRSA rate could be re-analysed and included in this review. Neither study reported any unintended effects of the inspection.

Even though external inspection is associated with non-negligible costs, and little evidence of its cost-effectiveness exists to date (Shaw 2003), neither of the studies included in the review reported any cost data. Both included studies evaluated the effects of external inspection in secondary care, and the results cannot therefore be generalised to primary care.

This review raises the interesting question of the reasonable anticipated effect of an intervention such as external inspection. If a process of inspection identifies any deficiencies, then the anticipated response would be a number of changes at an organisational level, with potential changes in care processes and thus patient outcomes. Although external inspection might be the trigger for such a series of events, the further along the causal chain one goes, the less likely it is to be a direct cause of changes. Therefore, the most direct outcomes should be regarded as the subsequent organisational (and probably professional behaviour) changes, with patient outcomes being regarded as a more distant (and less directly connected) outcome. Both included studies illustrate this in different ways. In the study by Salmon, the external inspection initiated a cascade of consequent events; in the OPM report, the data analysed were clearly collected and reported in a milieu of a range of other interventions. However, it is not quite that simple: in the OPM report, an outcome measure that is apparently a patient outcome (infection rate) is clearly regarded as an important indicator of organisational performance. Therefore, the choice of outcomes for an intervention such as external inspection has to be made in a way that allows for an appropriate diversity of measures that reflect the underlying issues that may have triggered the inspection.

Quality of the evidence

The evidence that we identified has to be regarded as sparse and susceptible to bias. The ITS generally scored “low” on the risk of bias assessment except for the criterion on independence from other changes. The cluster-RCT was scored as ‘unclear’ on several of the ‘Risk of bias’ criteria.

Potential biases in the review process

All references found by the electronic searches were sifted and two review authors independently extracted data. Two review authors also independently assessed the risk of bias of included studies.

The search was difficult to conduct as there were few specific terms that we could use. Although the search strategy was carefully developed by an experienced information technologist, and reviewed by an information technologist at the editorial base, and we searched the home pages of many accreditation bodies, we cannot exclude the possibility that important references may have been missed.

There is also the risk of publication bias, i.e. that only studies showing a beneficial effect of intervention are published and not studies pointing towards little or no effect of intervention (Hopewell 2009). Unfortunately, because too few studies were identified for inclusion in this review, we could not assess publication bias.

Agreements and disagreements with other studies or reviews

We are not aware of any other systematic reviews evaluating the effects of external inspection of compliance with standards on healthcare organisational behaviour, healthcare professional behaviour or patient outcomes.

AUTHORS’ CONCLUSIONS

Implications for practice

In terms of the quality of care delivered across a whole healthcare system, external inspection (as defined for this review), as opposed to voluntary inspection, has the advantage of incorporating all organisations rather than only those that volunteer. For those running a healthcare system this is a very attractive advantage, and it is likely that external inspection will continue to be used. Situations where this occurs offer a useful opportunity to better define the effects of such processes, the optimal configuration of inspection processes and their value for money. If randomised studies are not possible, then interrupted time-series designs offer a useful way of interpreting such data.

Implications for research

The review identified only two eligible studies. If policy makers wish to better understand the effectiveness of this type of intervention, then further studies are needed across a range of settings and contexts. There does not seem to be any prima facie reason for not conducting a trial; however, if it is felt that an experimental design cannot be used, then other non-randomised designs (such as interrupted time-series designs) could be used.

Whatever design is used, including an appropriate follow-up period is important to examine whether any improvements observed after the external inspection endure. Any studies should also include an economic evaluation.

PLAIN LANGUAGE SUMMARY

Can third party inspections of whether or not healthcare organisations are fulfilling mandatory standards improve care processes, professional practice and patient recovery?

Third party (external) review systems are used within healthcare settings as a way to increase compliance with evidence-based standards, but very little is known of their benefits in terms of organisational, provider and patient level outcomes, or of their cost-effectiveness.

We searched the literature for evaluations of the effectiveness of external inspection of compliance with standards, including: randomised controlled trials (RCTs), controlled clinical trials (CCTs), interrupted time-series (ITSs) and controlled before and after studies (CBAs). We found only two relevant studies: one RCT involving 20 hospitals in the Republic of South Africa and one uncontrolled before and after study (that required re-analysis as an ITS) involving all acute trusts in England.

The RCT evaluating the effectiveness of a hospital accreditation system showed improved compliance with COHSASA (the Council for Health Service Accreditation of Southern Africa) accreditation standards (involving 28 service elements), but little effect on eight indicators of hospital quality. Significantly improved compliance scores with COHSASA accreditation standards were reported for 21/28 service elements: mean intervention effect (95% confidence interval (CI)) 30% (23% to 37%, P < 0.001). The score increased from 48% to 78% in intervention hospitals, while remaining the same in control hospitals (43%). A sub-analysis of 424 critical criteria showed significantly improved mean compliance with the critical standards (P < 0.001): the score increased from 38% (21% to 46%) to 76% (55% to 96%) in hospitals with the accreditation programme, but was unchanged in control hospitals. However, only one of the nine intervention hospitals gained full accreditation status at the end of the study period, with two others reaching pre-accreditation status. Only one of eight indicators of hospital quality, ‘nurses perception of clinical quality, participation and teamwork’, was significantly improved (mean intervention effect 5.7%, P = 0.03).

Re-analysis of the patient MRSA (methicillin-resistant Staphylococcus aureus) infection data showed statistically non-significant effects of the Healthcare Commission’s Infection Inspection Programme.

Too few studies were identified for inclusion in this review to draw any firm conclusions about the effectiveness of external review of compliance with standards in improving healthcare organisation behaviour, healthcare professional behaviour or patient outcomes. Instead, this review highlights the lack of high-quality studies evaluating the effectiveness of external inspections on compliance with standards in healthcare organisations.

ACKNOWLEDGEMENTS

We wish to acknowledge information technologist Fiona Beyer for developing and running the search strategy.

SOURCES OF SUPPORT

Internal sources

  • Newcastle University, UK.

External sources

  • NIHR Cochrane Programme Grant, UK.

CHARACTERISTICS OF STUDIES

Characteristics of included studies [ordered by study ID]

OPM 2009

Methods Study design: ITS (uncontrolled before and after study re-analysed as a time series)
Data: data were gathered from: a national survey of trusts; in-depth case studies with 10 trusts; desk-research to map the inspection process, analyse outcome indicators and analyse 80 published inspection reports; and data on the views of patients, service users and the public, involving archive research, analysis of press coverage and discussion groups/telephone interviews with 25 stakeholders (these results were not re-analysable and therefore not included in this review)
Data on hospital-acquired infections (MRSA rate) 1 year before the intervention and 2 years after the intervention (results re-analysable and included in the review)
Participants Recipients: all acute trusts in England (168 acute trusts in 2009)
Country: England
Targeted behaviour: compliance with the Code of Practice and the law (the Healthcare Act, 2006) related to HCAIs
Interventions Description of the intervention:
The Healthcare Commissions Healthcare Associated Infections Inspections Programme - addressed hospital trusts’ compliance with the Code of Practice (and the Healthcare Act) aiming at reducing HCAIs, including MRSA infections:
i) The selected trusts are notified that they will be inspected at some point within the subsequent 3 months. Being aware of the forthcoming inspection may encourage the trust to take steps to improve compliance with the Code of Practice before the actual inspection
ii) A pre-inspection report is produced by the assessors, using relevant data sent to them by the trusts
iii) With the help of the pre-inspection report the assessors select a subset of duties described in the Code of Practice to be assessed at the inspection
iv) During the inspection the inspection team will look for any breaches of the Code of Practice, and this will feed into the formal inspection output, either an inspection report with recommendations or an improvement notice
v) The inspection report highlights areas requiring improvement and makes recommendations as to how the trust needs to improve. The trust will act on the comments and take steps to improve its practices. An improvement notice, on the other hand, requires the trust to draw up an action plan and specify how it will remedy the material breaches of the code that have been identified
vi) Once the steps to remedy the breaches of the Code of Practice have been followed the notice is lifted
Type of external standard: the ‘Code of Practice’ and the ‘Healthcare Act, 2006’ - aimed at decreasing the number of hospital-acquired infections
Who developed the standards: in 2006 the Department of Health enacted new legislation, ‘the Healthcare Act’, supported by a ‘Code of Practice’ aiming at decreasing the number of HCAIs
Voluntary or mandatory review: mandatory
Universally or targeted review: universal
Who performed the review: the Healthcare Commission (now the Care Quality Commission); a governmental organisation
Purpose and focus of the review: to ensure that trusts are complying with the Code of Practice and by doing so bring about reductions in morbidity related to hospital-acquired infections, as well as to improve patient and public confidence in health care
Timing:
a) Frequency and number of inspections: one per trust
b) Duration of inspection: not stated
Outcomes MRSA infection rate was retrieved from http://www.hpa.org.uk/web/HPAwebFile/HPAweb.C/1229502459877, and was re-analysed as a time series
All other data provided in the report were uncontrolled before and after data, with fewer than 3 data points before and 3 data points after the intervention, which prevented re-analysis being undertaken
Results:
Date; No of cases
April 2006 to June 2006; 1742
July 2006 to September 2006; 1651
October 2006 to December 2006; 1543
January 2007 to March 2007; 1447
April 2007 to June 2007; 1306. In June 2007, the Healthcare Commission began a series of unannounced inspections, focused specifically on assessing compliance with the ‘Code of Practice’
July 2007 to September 2007; 1083
October 2007 to December 2007; 1092
January 2008 to March 2008; 970
April 2008 to June 2008; 839
July 2008 to September 2008; 724
October 2008 to December 2008; 678
January 2009 to March 2009; 694
April 2009 to June 2009; 509
Re-analysis of the MRSA data, as an ITS, showed statistically non-significant effects of the intervention. The difference (24.27, 95% CI −10.4 to 58.9) between the pre-intervention slope (−107.6) and the post-intervention slope (−83.32) was not statistically significant (P = 0.147). When the downward trend in MRSA rate before the intervention was taken into account, the results showed a mean (CI) decrease of 100 (−221.0 to 21.5) cases at 3 months (P = 0.096), 75 (−217.2 to 66.3) cases at 6 months (P = 0.259), and 27 (−222.1 to 168.2) cases at 12 months (P = 0.762), and an increase of 70 (−250.5 to 391) cases per quarter at 24 months follow-up (P = 0.632)
The results of the MRSA data are summarised in Summary of findings 2.
Other reported outcomes that could not be re-analysed and therefore not included in this review were:
  • rate of Clostridium difficile infections

  • patients’ perceptions of hospital cleanliness and hand-washing among doctors and nurses

  • number and ‘sentiment’ of national media coverage of hospital-acquired infections over time

  • inspections conducted and trust performance

  • trusts’ understanding of the purpose and the aim of the inspection programme

  • views on (i) pre-visit submission of relevant documents, (ii) unannounced visits (iii) the post reporting process, (iv) and experiences of the inspection visit

  • impact on (i) standards of infection control and (ii) overall standards of infection prevention and control, (iii) public satisfaction with, and confidence in, hospitals

Notes
Risk of bias
Bias Authors’ judgement Support for judgement
Incomplete outcome data (attrition bias)
All outcomes
Low risk There were no incomplete outcome data, since quarterly reporting of MRSA infections by trusts is mandatory
Selective reporting (reporting bias) Low risk Results were presented for all outcomes described in the methods section
Other bias Low risk No other risk of bias was identified
Intervention independent of other changes High risk p.16, para 4.2.1
“As shown in Figure 2, reported cases of MRSA have decreased from 1,742 cases in Apr-Jun 2006 to 509 cases in Apr-Jun 2009 a reduction of 71 per cent. The total number of cases has shown a steady downward trend during this time, except for slight increases in cases between Jul-Oct and Oct-Dec 2007 and between Oct-Dec and Jan-Mar 09. These increases may reflect seasonal trends.The vertical line on Figure 2 indicates when the inspection programme began, in June 2007; this is included for information and we do not imply that any observed trends are being attributed to the impact of the programme.”
The ‘Code of Practice’ and the law related to HCAIs (the Healthcare Act) were enacted in 2006, which may explain the downward trend in MRSA rate seen from June 2006
Pre-specified shape of intervention Low risk Data re-analysed by review authors
Intervention unlikely to affect data collection Low risk Sources and methods of data collection were the same before and after the intervention (The Health Protection Agency monitor quarterly mandatory reports of MRSA rates made by trusts)
Knowledge of allocation adequately protected Low risk The outcome measure was objective (MRSA infection rates)

Salmon 2003

Methods Study design: cluster-RCT
Data: survey data from COHSASA accreditation programme were used, measuring hospital structures and processes, along with 8 hospital quality indicators
Participants Recipients: 20 randomly selected public hospitals: 10 intervention and 10 control. One of the hospitals dropped out half-way through the accreditation process, and a similar size hospital in the control group was therefore excluded, leaving 9 intervention and 9 control hospitals for the final analysis
Characteristics of included hospitals:
Setting: Intervention: 5 urban, 3 peri-urban and 2 rural hospitals; Control: 2 urban, 2 peri-urban and 6 rural hospitals
Mean number of beds (SD): Intervention hospitals: 435 (± 440); Control hospitals: 467 (± 526)
Country: KwaZulu-Natal Province, the Republic of South Africa
Targeted behaviour: compliance with COHSASA accreditation standards (p.20, Table B), performance related to the hospital quality of care indicators (p. 8, Table 2)
Interventions Description of the intervention: p. 4, col1, para 2, and col 2, para 1.
The accreditation process:
During the 2-year study, COHSASA measured the accreditation variables twice and performed the rest of the programme as normal in the 9 intervention hospitals
(i) COHSASA facilitators initially assisted each participating facility to understand the accreditation standards and to perform a self assessment (baseline survey) against the standards (that was validated by COHSASA surveyors)
(ii) Detailed written reports on the level of compliance with the standards and reasons for non-conformance were generated and sent to the hospitals for use in their quality improvement programme
(iii) Next, the facilitators assisted the hospitals in implementing a continuous quality improvement (CQI) process to enable the facilities to improve on standards identified as sub-optimal in the baseline survey
(iv) Lastly, the hospital entered the accreditation (external) survey phase, when a team of COHSASA surveyors who were not involved in the preparatory phase conducted an audit. The accreditation team usually consists of a medical doctor, a nurse and an administrator, who spend an average of 3 days evaluating the degree to which the hospital complies with the standards and recording the areas of non-compliance
(v) Hospitals found by COHSASA’s accreditation committee to comply substantially with the standards were awarded either pre-accreditation or full accreditation status. The former status encourages the respective institution to continue with the CQI process, which should help it stay on the path to eventual full accreditation status
Control condition:
The accreditation variables were measured as unobtrusively as possible in the 9 control hospitals. None of the other components of the accreditation programme were performed, meaning no feedback of results and no technical assistance, until after the research was completed. Meanwhile, a separate research team measured the research indicators in both the intervention and control hospitals
Type of external standard: COHSASA accreditation standards and 8 indicators of hospital quality that had been developed by consensus of an advisory committee in South Africa
Who developed the standards: to develop indicators for hospital quality, a workshop was held in South Africa in May 1999. Present at the workshop were South African healthcare professional leaders, the managing director of COHSASA, a representative from JCI, and the principal investigators for the research study
Voluntary or mandatory inspection: mandatory
The Kwa-Zulu Natal (KZN) province signed a contract with COHSASA for the first province-wide public hospital accreditation activity in the country (p. 4, col 2, para 1). The hospitals did not volunteer to participate
Universally or targeted inspection: universal (i.e. all groups of health professionals were involved)
Who performed the inspection: before and after measures of compliance with COHSASA accreditation standards were collected by COHSASA surveyors or teams hired by COHSASA, and indicators of hospital quality were collected by research assistants hired by the independent research team composed of South African and American investigators (p. 6, col 3, para 2)
Purpose and focus of the inspection: to improve compliance with COHSASA accreditation standards and performance related to the hospital quality of care indicators
Timing:
a) Frequency and number of inspections: 2: 1 before the start of the accreditation programme (a self assessment of compliance with accreditation standards that were validated by a COHSASA team of surveyors or a team hired by COHSASA), and one inspection at the end of the 2-year period
b) Duration of inspection: approximately 3 days for the on-site inspection (the time put down in between inspections is unclear)
Outcomes Compliance with COHSASA accreditation standards (6000 standards, and 28 service elements, see Figure 4 for details on the service elements) and 8 indicators of hospital quality of care (see Figure 2) were measured. See Figure 3 for details on how these outcomes were measured
Initially there were 12 indicators, but after the first measurement 4 of them were dropped. Indicators of neonatal mortality, surgical wound infections and time to surgery, and financial solvency were dropped due to difficulties in achieving comparability between hospitals (see Appendix 3 for reasons). The 8 indicators of hospital quality of care that remained for the final analysis were: i) nurses perception of clinical care; ii) patient satisfaction with care; iii) patient medication education; iv) medical records: accessibility and accuracy; v) medical records: completeness; vi) completeness of peri-operative notes; vii) completeness of ward stock medicine labelling, and viii) hospital sanitation
Results:
Compliance with COHSASA accreditation standards:
Results were reported for compliance with COHSASA accreditation standards (represented by 28 service elements), and 8 quality of care indicators
The results showed that after the 2-year accreditation period, total mean compliance with COHSASA accreditation standards had improved significantly in intervention hospitals. The total compliance score for the 21/28 service elements for which comparisons were possible rose from 48% to 78% in intervention hospitals, while control hospitals maintained the same compliance score throughout (43%). The mean intervention effect was 30 percentage points (23% to 37%)
Looking at the individual compliance scores for each service element, the results were mixed: 21/28 service elements showed a significant effect of the inspections (mean intervention effects ranged from 20% to 52%). For the remaining 7 service elements data were not available, i.e. some service elements were evaluated only in the higher-level hospitals, so comparisons between the intervention and control arms were not appropriate due to the small sample size
A sub-analysis was performed of the standards deemed a priori by COHSASA to be 'critical' for a specific function. As some of the 28 service elements evaluated in the accreditation process were not applicable to all hospitals, this left 19 generic service elements, yielding 424 critical criteria for the sub-analysis
These critical criteria were mainly drawn from the following service elements: obstetric and maternity inpatient services; operating theatre and anaesthetic services; resuscitation services; paediatric services and medical inpatient services
The sub-analysis showed significantly improved mean compliance with the critical standards in intervention hospitals: the total score rose from 38% (range 21% to 46%) to 76% (range 55% to 96%). Control hospitals maintained the same compliance score throughout: 37% (range 28% to 47%) before the intervention and 38% (range 25% to 49%) after the intervention. The difference in means between groups was statistically significant (P < 0.001)
Only 1 of the 9 intervention hospitals gained full accreditation status at the end of the study period, with 2 others reaching pre-accreditation status
Hospital quality indicators:
The effects on the hospital quality indicators were mixed, with mean intervention effects ranging from −1.9 to +11.8 percentage points; only 1 of the 8 indicators ('nurses' perception of clinical care') showed a significant effect of the intervention (see below)
Nurses' perception of clinical care:
Intervention hospitals: Pre: 59.3%, Post: 60.8% (Change: 1.5); Control hospitals: Pre: 60.8%, Post: 56.5% (Change: −4.2), Intervention effect: 5.7 percentage points (P = 0.031)
Patient satisfaction with care:
Intervention hospitals: Pre: 86.9%, Post: 91.5% (Change: 4.6); Control hospitals: Pre: 87.0%, Post: 90.1% (Change: 3.1), Intervention effect: 1.5 percentage points (P = 0.484)
Patient medication education:
Intervention hospitals: Pre: 42.9%, Post: 43.1% (Change: 0.2); Control hospitals: Pre: 41.5%, Post: 40.0% (Change: −1.5), Intervention effect: 1.7 percentage points (P = 0.395)
Medical records: accessibility:
Intervention hospitals: Pre: 85.4%, Post: 77.5% (Change: −7.9); Control hospitals: Pre: 79.4%, Post: 68.4% (Change: −11.0), Intervention effect: 3.1 percentage points (P = 0.492)
Medical records: completeness (consisting of 2 components: admissions and discharge): Intervention hospitals: Pre: 47.1%, Post: 49.1% (Change: 2.0); Control hospitals: Pre: 48.6%, Post: 44.9% (Change: −3.7), Intervention effect: 5.7 percentage points (P = 0.114)
Completeness of peri-operative notes:
Intervention hospitals: Pre: 70.2%, Post: 72.7% (Change: 2.5); Control hospitals: Pre: 65.2%, Post: 69.6% (Change: 4.4), Intervention effect: −1.9 percentage points (P = 0.489)
Ward stock labelling:
Intervention hospitals: Pre: 66.0%, Post: 81.8% (Change: 15.8); Control hospitals: Pre: 45.6%, Post: 49.6% (Change: 4.0), Intervention effect: 11.8 percentage points (P = 0.112)
Hospital sanitation*:
Intervention hospitals: Pre: 59.7%, Post: 62.8% (Change: 3.1); Control hospitals: Pre: 50.2%, Post: 55.7% (Change: 5.5), Intervention effect: −2.4 percentage points (P = 0.641)
* Consisted of the assessment of 6 items (availability of soap, water, paper towels and toilet paper, and whether toilets were clean and in working order), from which a composite score was developed
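The intervention effects reported above are consistent with a simple difference of pre/post change scores (change in intervention hospitals minus change in control hospitals, in percentage points). As a minimal sketch, not taken from the review itself, this arithmetic can be written as:

```python
def intervention_effect(pre_i: float, post_i: float,
                        pre_c: float, post_c: float) -> float:
    """Change in the intervention group minus change in the control group,
    in percentage points (a simple difference-in-differences)."""
    return (post_i - pre_i) - (post_c - pre_c)


# Example using the 'ward stock labelling' figures reported above:
# intervention change 15.8, control change 4.0
effect = intervention_effect(66.0, 81.8, 45.6, 49.6)
print(round(effect, 1))  # 11.8
```

The same calculation reproduces the other reported effects, e.g. 5.7 for nurses' perception of clinical care (1.5 − (−4.2)).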
Notes
Risk of bias
Bias Authors’ judgement Support for judgement
Random sequence generation (selection bias) Low risk p.5, col 2, para. 1
“To ensure a balanced design with respect to service and care characteristics, researchers stratified the hospitals by size (number of beds) into four categories. Within each stratum a simple random sample without replacement was drawn.”
Allocation concealment (selection bias) Low risk The allocation was made by the research team (i.e. centrally), but it is unclear if it was done by an independent statistician or not
Blinding (performance bias and detection bias)
All outcomes
High risk The hospitals could not be blinded to whether or not they were part of a hospital accreditation programme, and it was not stated whether or not the assessors were blinded
Incomplete outcome data (attrition bias)
All outcomes
Low risk p.1, col 1, para 3
“One of the intervention hospitals dropped out of the accreditation midway through the study, and so to retain comparability of the intervention and control groups, a similar sized hospital was removed from the control group, leaving nine hospitals in the intervention group and nine in the control for this part of the study. Of the 20 randomised hospitals, 18 remained for the final analyses.”
Selective reporting (reporting bias) Unclear risk p.7, col 1, last paragraph and col 2
“This process resulted in 12 indicators for the first round of data collection. However, based on preliminary analysis of data collected from the first round, the research team recommended to the steering committee that some indicators be dropped. The steering committee (composed of representatives from the research team, the sponsors of the research, COHSASA, and several South African medical experts) decided to drop the two indicators relating to surgical wound infections and time to surgery because only nine hospitals (six intervention and three control) performed surgery regularly and many of the records lacked information on infections and times. Despite its limitations, the committee did retain the indicator on completeness of peri-operative notes and extended this to include any form of significant incision or anaesthesia.
The committee also dropped the indicator of neonatal mortality rate, because the research assistants had great difficulty in finding reliable data due to the high variation in approaches to documenting neonatal deaths among the various hospitals. Transferring newborns soon after birth was common, but sometimes hospitals reported transfers even when they recorded deaths. Finally, the indicator of financial solvency was discarded because the KZN provincial government had implemented strict budgeting controls across all hospitals in the region with reportedly no additional funds being assigned. Hence, it was unlikely that the COHSASA process would affect this indicator. These decisions resulted in eight quality indicators (see Figure 2).”
Other bias Unclear risk The baseline survey of compliance with the accreditation standards was not performed simultaneously in intervention and control hospitals, but on average 3 months later in control hospitals, so it is unclear whether the control measurements represent a true baseline. This is especially so since the COHSASA survey team not only consulted hospital records, but also interviewed hospital staff and observed procedures and operations to determine the degree to which the service elements met the requirements of the standards. The authors suggested, however, that intervention hospitals generally waited to start working on improving the standards until they had seen the baseline survey report, about 2 months after the baseline survey was conducted
Similar baseline characteristics Unclear risk Hospitals were stratified by size before allocation to intervention and control groups, but neither hospital size nor any of the other hospital characteristics reported at p.19, Table A, were tested for statistically significant differences between groups. See also comment under 'Other bias'
Similar baseline outcome measures Low risk p.10, col 1, para 1, line 14-17
“At baseline the intervention and control hospitals showed similar levels of compliance to the critical standards.”
Adequate protection against contamination Low risk Allocation was by hospital

COHSASA: Council for Health Services Accreditation for South Africa

CI: confidence interval

CQI: continuous quality improvement

ITS: interrupted time series

JCI: Joint Commission International

HCAIs: healthcare-acquired infections

MRSA: methicillin-resistant Staphylococcus aureus

N/A: not applicable

RCT: randomised controlled trial

SD: standard deviation

Characteristics of excluded studies [ordered by study ID]

Study Reason for exclusion
Al Tehewy 2009 Not an external inspection intervention, but a comparison of compliance with standards (monitoring indicators set by the General Directorate of Quality in the Ministry of Health and Population, Egypt) between natural-experiment clusters (submitted for accreditation) and control clusters. Ineligible study design (survey)
Brooke 2008 Not an external inspection intervention, but evaluates compliance with standards (Leapfrog evidence-based standards for abdominal aortic aneurysm repair set by the Leapfrog Group). Ineligible study design (questionnaire survey)
Frasco 2005 Not an external inspection intervention, but evaluates the implementation of the JCAHO pain initiative. Ineligible study design (uncontrolled BA study)
Kowiatek 2002 Not an external inspection intervention, but evaluates a new medication control review tool for monitoring compliance with JCAHO standards. Ineligible study design (case studies)
Lasalle 2002 Not an external inspection intervention, but evaluates the implementation of standards (JCAHO medication management standards). Ineligible study design (retrospective survey)
Mattes 1987 Not an external inspection intervention, evaluates only 2 different treatment planning conference ratings. Ineligible study design (uncontrolled BA study)
OPM 2007 External inspection intervention, but ineligible study design (questionnaire survey)
OPM 2008a Not an external inspection intervention, but an evaluation of the Healthcare Commission’s assessment process. Ineligible study design (case studies and surveys)
OPM 2008b Not an external inspection intervention, but an evaluation of the Healthcare Commission’s inspection process. Ineligible study design (case studies and surveys)
Shaw 2003 Not an intervention (overview paper)
Shonka 2009 Not an external inspection intervention. Evaluates compliance with work hour regulations. Ineligible study design (retrospective review)
Walsh and Walshe Neither the report nor the contact details of the authors could be found
Winchester 2008 Not an intervention (overview paper)

BA: before and after study

JCAHO: Joint Commission on Accreditation of Healthcare Organisations

DATA AND ANALYSES

This review has no analyses.

Appendix 1. Search strategy

The Cochrane Library (CDSR, DARE, CENTRAL)

1. exp Health Personnel/

2. (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gynecologist* or gynaecologist* or hematologist* or haematologist* or internist* or nurse* or obstetrician* or occupational therapist* or pediatrician* or paediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counselor* or counsellor* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or health care personnel or healthcare personnel):ti,ab

3. exp Health Facilities/

4. (hospital or hospitals or clinic or clinics or (primary NEAR/2 care) or (health NEAR/2 care)):ti,ab

5. or/1-4

6. Peer Review, Health Care/

7. Benchmarking/

8. exp Accreditation/

9. exp Management Audit/ or exp Clinical Audit/

10. (“organisation* raid*” or “organization* raid*”):ti,ab

11. ((external* NEAR/5 accreditation) or (external* NEAR/5 accredited) or (external* NEAR/5 peer review) or (external* NEAR/5 inspection) or (external* NEAR/5 inspected) or (external* NEAR/5 regulation) or (external* NEAR/5 regulated) or (external* NEAR/5 certified) or (external* NEAR/5 certification) or (external* NEAR/5 benchmark*) or (external* NEAR/5 measured) or (external* NEAR/5 measurement) or (external* NEAR/5 (audit or audits or auditing)) or (external* NEAR/5 evaluation) or (external* NEAR/5 evaluated) or (external* NEAR/5 assessment) or (external* NEAR/5 assessed) or (external* NEAR/5 monitored) or (external* NEAR/5 visitation) or (external* NEAR/5 surveillance)):ti,ab

12. or/6-11

13. st.fs.

14. (standards or standard or performance or criterion or criteria or indicator* or “clinical competence” or compliance or “clinical improvement” or “quality improvement” or “organisation* development” or “organization* development” or “health care regulation”): ti,ab

15. or/13-14

16. 5 and 12 and 15

MEDLINE

1. exp Health Personnel/

2. (clinician$ or consultant$ or dentist$ or doctor$ or family practition$ or general practition$ or gyn?ecologist$ or h?ematologist$ or internist$ or nurse$ or obstetrician$ or occupational therapist$ or p?ediatrician$ or pharmacist$ or physician$ or physiotherapist$ or psychiatrist$ or psychologist$ or radiologist$ or surgeon$ or surgery or therapist$ or counsel?or$ or neurologist$ or optometrist$ or paramedic$ or social worker$ or health professional$ or health personnel or healthcare personnel or health care personnel).tw.

3. exp Health Facilities/

4. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

5. or/1-4

6. Peer Review, Health Care/

7. Benchmarking/

8. exp Accreditation/

9. exp Management Audit/ or exp Clinical Audit/

10. (organi?ation$ adj raid$).tw.

11. (external$ adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark$ or measured or measurement or evaluation or evaluated or audit or audits or auditing or assessment or assessed or monitored or visitation or surveillance or (control adj program$))).tw.

12. or/6-11

13. st.fs.

14. (standards or standard or performance or criterion or criteria or indicator$ or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj improvement) or (organi?ation$ adj development) or (health adj care adj regulation)).tw.

15. or/13-14

16. randomized controlled trial.pt.

17. random$.tw.

18. intervention$.tw.

19. control$.tw.

20. evaluat$.tw.

21. or/16-20

22. Animals/

23. Humans/

24. 22 not (22 and 23)

25. 21 not 24

26. 5 and 12 and 15 and 25

27. “audit and feedback”.mp

28. 26 not 27

EMBASE

1. exp health care personnel/

2. exp health care facility/

3. (clinician$ or consultant$ or dentist$ or doctor$ or family practition$ or general practition$ or gyn?ecologist$ or h?ematologist$ or internist$ or nurse$ or obstetrician$ or occupational therapist$ or p?ediatrician$ or pharmacist$ or physician$ or physiotherapist$ or psychiatrist$ or psychologist$ or radiologist$ or surgeon$ or surgery or therapist$ or counsel?or$ or neurologist$ or optometrist$ or paramedic$ or social worker$ or health professional$ or health personnel or healthcare personnel or health care personnel).tw.

4. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

5. or/1-4

6. “peer review”/

7. exp accreditation/

8. Clinical audit/

9. (organi?ation$ adj raid$).tw.

10. (external$ adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or audit or audits or auditing or benchmark$ or measured or measurement or evaluation or evaluated or assessment or assessed or monitored or visitation or surveillance or (control adj program$))).tw.

11. or/6-10

12. (standards or standard or performance or criterion or criteria or indicator$ or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj improvement) or (organi?ation$ adj development) or (health adj care adj regulation)).tw.

13. 5 and 11 and 12

14. randomized controlled trial/

15. (randomised or randomized).tw.

16. experiment$.tw.

17. (time adj series).tw.

18. (pre test or pretest or post test or posttest).tw.

19. impact.tw.

20. intervention?.tw.

21. chang$.tw.

22. evaluat$.tw.

23. effect?.tw.

24. compar$.tw.

25. or/14-24

26. nonhuman/

27. 25 not 26

28. 13 and 27

CINAHL

1. (MH “Health Personnel+”)

2. TI (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gynecologist* or gynaecologist* or hematologist* or haematologist* or internist* or nurse* or obstetrician* or occupational therapist* or pediatrician* or paediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counselor* or counsellor* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or health care personnel or healthcare personnel) OR AB (clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gynecologist* or gynaecologist* or hematologist* or haematologist* or internist* or nurse* or obstetrician* or occupational therapist* or pediatrician* or paediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counselor* or counsellor* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or health care personnel or healthcare personnel)

3. (MH “Health Facilities+”)

4. TI (hospital or hospitals or clinic or clinics or (primary N2 care) or (health N2 care)) or AB (hospital or hospitals or clinic or clinics or (primary N2 care) or (health N2 care))

5. S1 or S2 or S3 or S4

6. (MH “Peer Review”)

7. (MH “Benchmarking”)

8. (MH “Accreditation+”)

9. (MH “Nursing Audit”)

10. TI (organization* raid* or organisation* raid*) or AB (organization* raid* or organisation* raid*)

11. TI ((external* N5 accreditation) or (external* N5 accredited) or (external* N5 peer review) or (external* N5 inspection) or (external* N5 inspected) or (external* N5 regulation) or (external* N5 regulated) or (external* N5 certified) or (external* N5 certification) or (external* N5 benchmark*) or (external* N5 measured) or (external* N5 measurement) or (external* N5 evaluation) or (external* N5 evaluated) or (external* N5 assessment) or (external* N5 assessed) or (external* N5 monitored) or (external* N5 visitation) or (external* N5 surveillance) or (external* N5 audit) or (external* N5 audits) or (external* N5 auditing)) OR AB ((external* N5 accreditation) or (external* N5 accredited) or (external* N5 peer review) or (external* N5 inspection) or (external* N5 inspected) or (external* N5 regulation) or (external* N5 regulated) or (external* N5 certified) or (external* N5 certification) or (external* N5 benchmark*) or (external* N5 measured) or (external* N5 measurement) or (external* N5 evaluation) or (external* N5 evaluated) or (external* N5 assessment) or (external* N5 assessed) or (external* N5 monitored) or (external* N5 visitation) or (external* N5 surveillance) or (external* N5 audit) or (external* N5 audits) or (external* N5 auditing))

12. S6 or S7 or S8 or S9 or S10 or S11

13. TI (standards or standard or performance or criterion or criteria or indicator* or “clinical competence” or compliance or “clinical improvement” or “quality improvement” or “organisation* development” or “organization* development” or “health care regulation”) or AB (standards or standard or performance or criterion or criteria or indicator* or “clinical competence” or compliance or “clinical improvement” or “quality improvement” or “organisation* development” or “organization* development” or “health care regulation”)

14. S5 and S12 and S13

15. (MH “Outcome Assessment/ST”) or (MH “Process Assessment (Health Care)+/ST”)

16. (MH “Peer Review+/ST”) or (MH “Benchmarking/ST”) or (MH “Accreditation+/ST”)

17. (MH “Quality Assurance+/ST”) or (MH “Clinical Indicators/ST”) or (MH “Clinical Competence+/ST”)

18. S15 or S16 or S17

19. (MH “Clinical Trials+”) or (MH “Comparative Studies”) or (MH “Pretest-Posttest Design”) or (MH “Quasi-Experimental Studies+”)

20. TI (control* or random* or experiment or time series or impact or intervention? or evaluat* or effect?) or AB (control* or random* or experiment or time series or impact or intervention? or evaluat* or effect?)

21. S19 or S20

22. (S14 and S21) or (S18 and S21)

Web of Knowledge (SCI, SSCI, Conference Proceedings)

1. TS=(clinician* or consultant* or dentist* or doctor* or family practition* or general practition* or gynecologist* or gynaecologist* or hematologist* or haematologist* or internist* or nurse* or obstetrician* or occupational therapist* or pediatrician* or paediatrician* or pharmacist* or physician* or physiotherapist* or psychiatrist* or psychologist* or radiologist* or surgeon* or surgery or therapist* or counselor* or counsellor* or neurologist* or optometrist* or paramedic* or social worker* or health professional* or health personnel or health care personnel or healthcare personnel)

2. TS=((external* SAME audit) or (external* SAME accreditation) or (external* SAME accredited) or (external* SAME peer review) or (external* SAME inspection) or (external* SAME inspected) or (external* SAME regulation) or (external* SAME regulated) or (external* SAME certified) or (external* SAME certification) or (external* SAME benchmark*) or (external* SAME measured) or (external* SAME measurement) or (external* SAME evaluation) or (external* SAME evaluated) or (external* SAME assessment) or (external* SAME assessed) or (external* SAME monitored) or (external* SAME visitation) or (external* SAME surveillance))

3. TS=(standards or standard or performance or criterion or criteria or indicator* or “clinical competence” or compliance or “clinical improvement” or “quality improvement” or “organisation* development” or “organization* development” or “health care regulation”)

4. TS=(randomi?ed or experiment* or impact* or intervention* or evaluat* or effect* or comparative or “time series”)

5. TS=(random* SAME allocat*) or TS=(random* SAME assign*) or TS=(controlled SAME trial*) or TS=(controlled SAME study)

6. 4 or 5

7. 1 and 2 and 3 and 6

PsycINFO

1. (randomi?ed or experiment* or impact* or intervention* or evaluat* or effect* or comparative or pre test or pretest or posttest or post test).tw.

2. ((time adj2 series) or (random* adj2 allocat*) or (random* adj2 assign*) or (controlled adj2 trial*) or (controlled adj2 study) or (clinical adj2 trial*) or (clinical adj2 study)).tw.

3. 2 or 1

4. exp Health Personnel/

5. (clinician$ or consultant$ or dentist$ or doctor$ or family practition$ or general practition$ or gyn?ecologist$ or h?ematologist$ or internist$ or nurse$ or obstetrician$ or occupational therapist$ or p?ediatrician$ or pharmacist$ or physician$ or physiotherapist$ or psychiatrist$ or psychologist$ or radiologist$ or surgeon$ or surgery or therapist$ or counsel?or$ or neurologist$ or optometrist$ or paramedic$ or social worker$ or health professional$ or health care personnel or healthcare personnel).tw.

6. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

7. or/4-6

8. Peer Evaluation/

9. Hospital Accreditation/

10. Clinical audits/

11. (organi?ation* adj raid*).tw.

12. (external$ adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark$ or measured or measurement or evaluation or evaluated or assessment or assessed or audit or audits or auditing or monitored or visitation or surveillance or (control adj program$))).tw.

13. or/8-12

14. (standards or standard or performance or criterion or criteria or indicator$ or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj improvement) or (organi?ation* adj development) or (health adj care adj regulation)).tw.

15. 7 and 13 and 14

16. 3 and 15

HMIC

1. exp Health Service Staff/

2. (clinician$ or consultant$ or dentist$ or doctor$ or family practition$ or general practition$ or gyn?ecologist$ or h?ematologist$ or internist$ or nurse$ or obstetrician$ or occupational therapist$ or p?ediatrician$ or pharmacist$ or physician$ or physiotherapist$ or psychiatrist$ or psychologist$ or radiologist$ or surgeon$ or surgery or therapist$ or counsel?or$ or neurologist$ or optometrist$ or paramedic$ or social worker$ or health professional$ or health care personnel or healthcare personnel).tw.

3. exp Health Services/

4. exp Health Buildings/

5. (hospital or hospitals or clinic or clinics or (primary adj2 care) or (health adj2 care)).tw.

6. or/1-5

7. Peer Review/

8. Benchmarking/

9. exp Accreditation/

10. exp Clinical audit/ or exp Management audit/

11. (organi?ation* adj raid*).tw.

12. (external$ adj5 (accreditation or accredited or peer review or inspection or inspected or regulation or regulated or certified or certification or benchmark$ or measured or measurement or evaluation or evaluated or assessment or assessed or audit or audits or auditing or monitored or visitation or surveillance or (control adj program$))).tw.

13. or/7-12

14. (standards or standard or performance or criterion or criteria or indicator$ or (clinical adj competence) or compliance or (clinical adj improvement) or (quality adj improvement) or (organi?ation* adj development) or (health adj care adj regulation)).tw.

15. 6 and 13 and 14

16. (randomi?ed or experiment* or impact* or intervention* or evaluat* or effect* or comparative or pre test or pretest or posttest or post test).tw.

17. ((time adj2 series) or (random* adj2 allocat*) or (random* adj2 assign*) or (controlled adj2 trial*) or (controlled adj2 study) or (clinical adj2 trial*) or (clinical adj2 study)).tw.

18. 16 or 17

19. 15 and 18

Intute

(accreditation or peer review or inspection or regulation or certification or benchmark or evaluation or assessment or visitation) and external

Results:

National Cancer Peer Review Programme http://www.cquins.nhs.uk/

National Cancer Peer Review (NCPR) is a national quality assurance programme for NHS cancer services.

The programme involves both self-assessment by cancer service teams and external reviews of teams conducted by professional peers, against nationally agreed “quality measures”.

Clinical Pathology Accreditation (UK) Ltd http://www.cpa-uk.co.uk/

The Clinical Pathology Accreditation (UK) Ltd provides a means to accredit Clinical Pathology Services and External Quality Assessment Schemes (EQA) through a system of specialist advisory committees. By declaring a defined standard of practice and having this independently confirmed, by onsite visits and documentation, accredited organisations are able to attain a hallmark of performance and offer reassurance to users of their service. Within this site are current lists of CPA approved Clinical Pathology Services and External Quality Assessment Schemes (EQA).

Appendix 2. Data extraction form

Cochrane Effective Practice and Organisation of Care Group (EPOC)[1]

Modified EPOC Group Data Abstraction Form

External inspection versus external standards for improving healthcare organisation behaviour, professional behaviour and patient outcomes

Data collection

Name of reviewer:

Date:

Study reference:

Is the healthcare organisation review performed (i) by an external body (independent of the organisation under review) and (ii) against external standards?

If not - EXCLUDE!

1. Inclusion criteria

1.1 Study design

1.1.1 RCT designs

1.1.2 CCT designs

1.1.3 CBA designs

a) Contemporaneous data collection

b) Appropriate choice of control site/activity

c) At least two intervention and two control sites

1.1.4 ITS designs

a) Clearly defined point in time when the intervention occurred

b) At least 3 data points before and 3 after the intervention

1.2 Methodological inclusion criteria

a) The objective measurement of performance/provider behaviour or health/patient outcomes

b) Relevant and interpretable data presented or obtainable

N.B. A study must meet the minimum criteria for EPOC scope, design and methodology for inclusion in EPOC reviews. If it does not, COLLECT NO FURTHER DATA.

2. Interventions

2.1 Type of external inspection intervention

(state all interventions for each comparison/study group)

Group 1:

Group 2:

Group 3:

2.2 Control(s)

3. Type of targeted behaviour (state more than one where appropriate)

4. Participants

4.1 Characteristics of participating providers

4.1.1 Profession

4.1.2 Level of training

4.1.3 Clinical specialty

4.1.4 Age

4.1.5 Time since graduation (or years in practice)

4.2 Characteristics of participating patients

4.2.1 Clinical problem

4.2.2 Other patient characteristics

a) Age

b) Gender

c) Ethnicity

d) Other (specify)

4.2.3 Number of patients included in the study

a) Episodes of care

b) Patients

c) Providers

d) Practices

e) Hospitals

f) Communities or regions

5. Setting

5.1 Reimbursement system

5.2 Location of care

5.3 Academic status

5.4 Country

5.5 Proportion of eligible providers (or allocation units)

6. Methods

6.1 Unit of allocation

6.2 Unit of analysis

6.3 Power calculation

6.4 Risk of bias assessment

(If the trial is an ITS go directly to 6.4.2 for the RoB assessment)

6.4.1 Risk of bias assessment for randomised controlled trials (RCTs), controlled clinical trials (CCTs) and controlled before and after studies (CBAs)

a) Was the allocation sequence adequately generated? (cut and paste from the paper verbatim)

Score
YES
If a random component in the sequence generation process is described (e.g. referring to a random numbers table)
Score
NO
If a non-random method is used (e.g. performed by date of submission)
Score
UNCLEAR
If not specified in the paper

b) Was the allocation adequately concealed?

Score
YES
If the unit of allocation was by institution, team or professional and allocation was performed at all units at the start of the study; or if the unit of allocation was by patient or episode of care and there was some kind of centralised randomisation scheme, an on-site computer system, or sealed opaque envelopes were used
Score
NO
If none of the above mentioned methods were used (or if a CBA)
Score
UNCLEAR
If not specified in the paper

c) Were baseline outcome measurements similar?

Score
YES
If performance or patient outcomes were measured prior to the intervention, and no important differences were present across study groups
Score
NO
If important differences were present and not adjusted for in analysis.**
Score
UNCLEAR
If RCTs have no baseline measure of outcome**

d) Were baseline characteristics similar?

Score
YES
If baseline characteristics of the study and control providers are reported and similar
Score
NO
If there is no report of characteristics in the text or tables or if there are differences between control and intervention providers
Score
UNCLEAR
If it is not clear in the paper (e.g. characteristics are mentioned in the text but no data were presented)

e) Were incomplete outcome data adequately addressed?

Score
YES
If missing outcome variables were unlikely to bias the results (e.g. the proportion of missing data was similar in the intervention and control groups, or the proportion of missing data was less than the effect size, i.e. unlikely to overturn the study results)
Score
NO
If missing data were likely to bias the results
Score
UNCLEAR
If not specified in the paper (do not assume 100% follow-up unless stated explicitly)

f) Was knowledge of the allocated interventions adequately addressed?*

Score
YES
If the authors state explicitly that the primary outcome variables were assessed blindly, or the outcomes are objective, e.g. length of hospital stay
Score
NO
If the outcomes were not assessed blindly
Score
UNCLEAR
If not specified in the paper

g) Was the study adequately protected against contamination?

Score
YES
If allocation was by community, institution or practice and it is unlikely that the control group received the intervention
Score
NO
If it is likely that the control group received the intervention (e.g. if patients rather than professionals were randomised)
Score
UNCLEAR
If professionals were allocated within a clinic or practice and it is possible that communication between intervention and control professionals could have occurred (e.g. physicians within practices were allocated to intervention or control)

h) Was the study free from selective outcome reporting?

Score
YES
If there is no evidence that outcomes were selectively reported (e.g. all relevant outcomes in the methods section are reported in the results section)
Score
NO
If some important outcomes are subsequently omitted from the results
Score
UNCLEAR
If not specified in the paper

i) Was the study free from other risks of bias?

Score
YES
If no evidence of other risks of bias
Score
NO
Score
UNCLEAR

* If some primary outcomes were imbalanced at baseline, assessed blindly or affected by missing data and others were not, each primary outcome can be scored separately.

**If ‘UNCLEAR’ or ‘NO’, but there are sufficient data in the paper to do an adjusted analysis (e.g. baseline adjustment analysis or intention-to-treat analysis), the criterion should be re-scored ‘YES’.

6.4.2 Risk of bias assessment for interrupted time series (ITS) designs

Note: if the ITS study has ignored secular (trend) changes and performed a simple t-test of the pre versus post intervention periods without further justification, the study should not be included in the review unless reanalysis is possible.
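The point of the note above is that a simple pre versus post t-test conflates the intervention effect with any pre-existing secular trend, whereas a segmented (interrupted time series) regression separates an immediate change in level from a change in slope at the intervention point. The following is a minimal illustrative sketch in Python with NumPy, using entirely hypothetical quarterly counts (not data from any study in this review):

```python
import numpy as np

# Hypothetical quarterly infection counts: 8 pre- and 8 post-intervention
# quarters (illustrative values only, not data from any included study).
y = np.array([120, 118, 115, 113, 110, 109, 107, 105,   # pre-intervention
               98,  95,  93,  90,  88,  86,  84,  82])  # post-intervention
n = len(y)
t = np.arange(n)                     # time index (quarters)
post = (t >= 8).astype(float)        # 1 from the intervention quarter onwards
t_post = post * (t - 8)              # quarters elapsed since the intervention

# Segmented regression: y = b0 + b1*t + b2*post + b3*t_post
# b2 estimates the immediate change in level at the intervention point;
# b3 estimates the change in the underlying (secular) trend.
X = np.column_stack([np.ones(n), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta
print(f"level change: {b2:.2f}, trend change: {b3:.2f}")
```

With these illustrative numbers the fit attributes only a modest level drop (about 5 cases) and a slightly steeper downward slope to the intervention, whereas a naive t-test of pre versus post means would credit the whole pre-existing secular decline to the intervention.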

a) Was the intervention independent of other changes? (cut and paste from the paper verbatim)

Score
YES
If there are compelling arguments that the intervention occurred independently of other changes over time and the outcome was not influenced by other confounding variables/historic events during the study period
Score
NO
If reported that the intervention was not independent of other changes over time
If events/variables are identified, note what they are
Score
UNCLEAR
If not specified in the paper

b) Was the shape of the intervention effects pre-specified?

Score
YES
If point of analysis is the point of intervention OR a rational explanation for the shape of intervention effect was given by the author(s). Where appropriate, this should include an explanation if the point of analysis is NOT the point of intervention
Score
NO
If it is clear that the condition above is not met
Score
UNCLEAR
If not specified in the paper

c) Was the intervention unlikely to affect data collection?

Score
YES
If reported that intervention itself was unlikely to affect data collection (for example, sources and methods of data collection were the same before and after the intervention)
Score
NO
If the intervention itself was likely to affect data collection (for example, any change in source or method of data collection reported)
Score
UNCLEAR
If not stated in the paper

d) Was knowledge of the allocated interventions adequately prevented during the study?***

Score
YES
If the authors state explicitly that the primary outcome variables were assessed blindly, or the outcomes are objective, e.g. length of hospital stay. Primary outcomes are those variables that correspond to the primary hypothesis or question as defined by the authors
Score
NO
If the outcomes were not assessed blindly
Score
UNCLEAR
If not specified in the paper

e) Were incomplete outcome data adequately addressed?***

Score
YES
If missing outcome measures were unlikely to bias the results (e.g. the proportion of missing data was similar in the pre and post intervention periods or the proportion of missing data was less than the effect size, i.e. unlikely to overturn the study result)
Score
NO
If missing data were likely to bias the results
Score
UNCLEAR
If not specified in the paper (do not assume 100% follow-up unless stated explicitly)

f) Was the study free from selective outcome reporting?

Score
YES
If there is no evidence that outcomes were selectively reported (e.g. all relevant outcomes in the methods section are reported in the results section)
Score
NO
If some important outcomes are subsequently omitted from the results
Score
UNCLEAR
If not specified in the paper

g) Was the study free from other risks of bias?

Score
YES
If no evidence of other risks of bias; consider, e.g., whether seasonality is an issue (i.e. if January to June comprises the pre-intervention period and July to December the post-intervention period, could the ‘seasons’ have caused a spurious effect?)
Score
NO
Score
UNCLEAR

*** If some primary outcomes were assessed blindly or affected by missing data and others were not, each primary outcome can be scored separately.

6.5 Consumer involvement

7. Prospective identification by investigators of barriers to change

8. Intervention

8.1 Description of the external inspection intervention (cut and paste from paper verbatim):

8.1.1 Voluntary or mandatory review

8.1.2 Universal or targeted review (applied to all of an organisation or only to sub-sections of it, e.g. to a clinical area or a professional group)

8.1.3 Purpose and focus of the review

8.1.4 Type of external standards (description and evidence base of standards)

8.1.5 Who set the standards?

8.1.6 Who did the inspection? (governmental or non-governmental organisation)

8.1.7 How were the results used (by the external body) to bring about the desired quality improvements?

8.1.8 Recipient

8.1.9 Timing

a) Frequency/number of inspections

b) Duration of inspection

9. Outcomes

9.1 Description of the main outcome measure(s)

a) Healthcare organisational change (e.g. organisational performance)

b) Health professional behaviour

c) Patient outcomes

d) Economic variables (only if reported)

  • Costs of the intervention

  • Changes in direct health care costs as a result of the intervention

  • Changes in non-health care costs as a result of the intervention

  • Costs associated with the intervention are linked with provider or patient outcomes in an economic evaluation

9.2 Length of post intervention follow-up period

9.3 Identify a possible ceiling effect:

a) Identified by investigator

b) Identified by reviewer

10. Results (use extra page if necessary)

10.1.1 For RCTs and CCTs

10.1.2 For CBAs

10.1.3 For ITSs

[1]EPOC Editorial Base: Alain Mayhew, Managing Editor, Cochrane Effective Practice and Organisation of Care Group, Institute of Population Health, University of Ottawa, 1 Stewart Street, Suite 205, Ottawa, Ontario, K1N 6N5. Tel: +1 613 562 5800 x2361. Fax: +1 613 562 5659. al.mayhew@uottawa.ca

Appendix 3. Dropped hospital quality indicators

Dropped hospital quality indicators, with reasons:

  • Neonatal mortality: high variation in documenting neonatal births between hospitals

  • Surgical wound infections and time to surgery: only 9 hospitals (6 intervention and 3 control) performed surgery regularly, and many records lacked information on infections and times

  • Financial solvency: strict budgeting control across all hospitals in the region, with reportedly no additional funds being assigned

SUMMARY OF FINDINGS FOR THE MAIN COMPARISON

External inspection of compliance with COHSASA hospital accreditation standards for improving healthcare organisation behaviour, healthcare professional behaviour or patient outcomes

Patient or population: hospitals, primary healthcare organisations and/or other community-based healthcare organisations containing health professionals

Settings: secondary care

Intervention: external inspection of compliance with accreditation standards and performance related to indicators for hospital quality of care

Comparison: no intervention

Outcome: Compliance with COHSASA accreditation standards
Intervention effect: mean intervention effect (95% CI): 30%1 (23% to 37%), P < 0.001
No of hospitals/service elements: 18 hospitals (9 intervention and 9 control); 21/28 service elements included in analysis
No of studies: 1
Quality of the evidence (GRADE): +000
Comments: The total compliance score with the accreditation standards was significantly greater in intervention hospitals than in control hospitals (P < 0.001)

Outcome: Compliance with COHSASA accreditation standards - subgroup of critical criteria analysed2
Intervention effect: I: pre 38% (21% to 46%), post 76% (55% to 96%); C: pre 37% (28% to 47%), post 38% (25% to 49%); mean intervention effect: 37% (P < 0.001)
No of hospitals/service elements: unclear number of hospitals included in analysis; 19/28 service elements (and 426 pre-defined critical criteria) included in analysis
No of studies: 1
Quality of the evidence (GRADE): +000
Comments: The compliance score for a sub-section of predefined critical criteria, deemed crucial for the function of the service elements, was significantly greater in intervention hospitals than in control hospitals (P < 0.001)

Outcome: Indicators of hospital quality of care
Intervention effect: median intervention effect (range): 2.4% (−1.9% to 11.8%)
No of hospitals/indicators: 18 hospitals (9 intervention and 9 control); 8 indicators of hospital quality included in analysis
No of studies: 1
Quality of the evidence (GRADE): +000
Comments: The median intervention effect on the quality indicators was non-significant; all but one of the indicators of hospital quality of care were non-significant
1 Change in total compliance score for 21/28 service elements; for 7 of the 28 service elements data were not available.

2 Sub-analysis of the compliance score for 19 generic service elements (involving 426 predefined critical criteria).

We downgraded the evidence on the basis of imprecision and publication bias

++++ High. Further research is very unlikely to change our confidence in the estimate of effect or accuracy.

+++0 Moderate. Further research is likely to have an important impact on our confidence in the estimate of effect or accuracy and may change the estimate.

++00 Low. Further research is very likely to have an important impact on our confidence in the estimate of effect or accuracy and is likely to change the estimate.

+000 Very low. Any estimate of effect or accuracy is very uncertain.

C: control

CI: confidence interval

COHSASA: Council for Health Service Accreditation of Southern Africa

I: intervention

ADDITIONAL SUMMARY OF FINDINGS

External inspection of compliance with the ‘Code of Practice’ and the law related to healthcare-acquired infections for improving healthcare organisation behaviour, healthcare professional behaviour or patient outcomes

Patient or population: hospitals, primary healthcare organisations and/or other community-based healthcare organisations containing health professionals

Settings: secondary care

Intervention: external inspection of compliance with the Code of Practice and the Health Care Act related to healthcare-acquired infections

Comparison: no control group (time series)

Outcome: MRSA infection rate
Mean intervention effect (95% CI): at 3 months: −100 (−221.0 to 21.5) cases per quarter (P = 0.096); at 6 months: −75 (−217.2 to 66.3) cases per quarter (P = 0.259); at 12 months: −27 (−222.1 to 168.2) cases per quarter (P = 0.762); at 24 months: +70 (−250.5 to 391.0) cases per quarter (P = 0.632)
No of acute trusts: 168 (in 2009)
No of studies: 1
Quality of the evidence (GRADE): +000
Comments: Re-analysis of the quarterly reported rate of MRSA cases, as an interrupted time series, showed statistically non-significant effects of the Healthcare Commission’s Infection Inspection Programme

We downgraded the evidence on the basis of imprecision and publication bias


MRSA: methicillin-resistant Staphylococcus aureus

HISTORY

Protocol first published: Issue 2, 2011

Review first published: Issue 11, 2011

Footnotes

DECLARATIONS OF INTEREST None known.

References to studies included in this review

  • OPM 2009 {published data only}. OPM evaluation team. Evaluation of the Healthcare Commission’s healthcare associated infections inspection programme. OPM report. 2009:1–23.
  • Salmon 2003 {published data only}. Salmon JW, Heavens J, Lombard C, Tavrow P. The impact of accreditation on the quality of hospital care: KwaZulu-Natal Province, Republic of South Africa. Operations Research Results, issue 17, Vol. 2. Bethesda: U.S. Agency for International Development (USAID), Quality Assurance Project, University Research Co., LLC; 2003:1–49.

References to studies excluded from this review

  • Al Tehewy 2009 {published data only}. Al Tehewy N, Salem B, Habil I, El Okda S. Evaluation of accreditation program in non-governmental organisations’ health units in Egypt: short term outcomes. International Journal for Quality in Health Care. 2009;21(3):183–9. doi: 10.1093/intqhc/mzp014.
  • Brooke 2008 {published data only}. Brooke BS, Perler BA, Dominici F, Makary MA, Pronovost PJ. Reduction of in-hospital mortality among California hospitals meeting Leapfrog evidence-based standards for abdominal aortic aneurysm repair. Journal of Vascular Surgery. 2008;47(6):1155–6, 1163–4. doi: 10.1016/j.jvs.2008.01.021.
  • Frasco 2005 {published data only}. Frasco PE, Sprung J, Trentman TL. The impact of the Joint Commission for Accreditation of Healthcare Organizations pain initiative on perioperative opiate consumption and recovery room length of stay. Anesthesia & Analgesia. 2005;100:162–8. doi: 10.1213/01.ANE.0000139354.26208.1C.
  • Kowiatek 2002 {published data only}. Kowiatek JG, Weber RJ, Schilling DE, McKaveney TP. Monitoring compliance with JCAHO standards using a medication-control review tool. American Journal of Health System Pharmacy. 2002;59(18):1763–7. doi: 10.1093/ajhp/59.18.1763.
  • Lasalle 2002 {published data only}. Laselle TJ, May SK. Medication orders are written clearly and transcribed accurately? Implementing Medication Management Standard 3.20 and National Patient Safety Goal 2b. Hospital Pharmacy. 2006;41:82–7.
  • Mattes 1987 {published data only}. Mattes JA. A controlled evaluation of a JCAH regulation. Psychiatric Hospital. 1987;18(3):131–3.
  • OPM 2007 {published data only}. Office of Public Management (OPM). Evaluation of a national audit of specialist in healthcare services for people with learning difficulties in England. OPM Reports. 2007.
  • OPM 2008a {published data only}. Office of Public Management (OPM). Evaluation of the Healthcare Commission’s Assessment Process 2006-2007. OPM Reports. 2008.
  • OPM 2008b {published data only}. Office of Public Management (OPM). Evaluation of the Healthcare Commission investigation function. OPM Reports. 2008.
  • Shaw 2003 {published data only}. Shaw CD. Measuring against clinical standards. Clinica Chimica Acta. 2003;333(2):115–24. doi: 10.1016/s0009-8981(03)00175-x.
  • Shonka 2009 {published data only}. Shonka DC, Ghanem TA, Hubbard MA, Barker DA, Kesser BW. Four years of Accreditation Council for Graduate Medical Education duty hour regulations: have they made a difference? Laryngoscope. 2009;119(4):635–9. doi: 10.1002/lary.20144.
  • Walsh and Walshe {published data only}. Walsh N, Walshe K. Accreditation in primary care: an evaluation of the Royal College of General Practitioners’ team based practice accreditation programme.
  • Winchester 2008 {published data only}. Winchester DP, Kaufman C, Anderson B, El-Tamer M, Kurtzman SH, Masood S, et al. The National Accreditation Program for Breast Centers: quality improvement through interdisciplinary evaluation and management. Bulletin of the American College of Surgeons. 2008;93(10):13–7.

Additional references

  • Department of Health 2006. Department of Health. Code of practice for the prevention and control of healthcare associated infections. The Health Act. 2006:1–41.
  • EPOC 2009. EPOC. Risk of bias tool. 2009. Available from http://epoc.cochrane.org/epoc-resources-review-authors.
  • Greenfield 2008. Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. International Journal for Quality in Health Care. 2008;20(3):172–83. doi: 10.1093/intqhc/mzn005.
  • Higgins 2008. Higgins JPT, Altman DG. Chapter 8: Assessing risk of bias in included studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions, version 5.0.1 (updated September 2008). The Cochrane Collaboration; 2008. Available from www.cochrane-handbook.org.
  • Hopewell 2009. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K. Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews. 2009, Issue 1. doi: 10.1002/14651858.MR000006.pub3.
  • ISO 2004. ISO/IEC. Standardization and related activities. ISO/IEC Guide 2. 2004.
  • Jamtvedt 2006. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2006, Issue 2. doi: 10.1002/14651858.CD000259.pub2.
  • Moher 2009. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. PLoS Medicine. 2009;6(6):e1000097. doi: 10.1371/journal.pmed.1000097.
  • Review Manager 2008. The Nordic Cochrane Centre, The Cochrane Collaboration. Review Manager (RevMan), version 5.0. Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration; 2008.
  • Shaw 2004. Shaw C. The external assessment of health services. World Hospitals and Health Services. 2004;40(1):24–7.
  • Walsche 2004. Walshe K, Freeman T, Latham L, Wallace L, Spurgeon P. Chapter 6: The development of external reviews of clinical governance. In: Clinical governance - from policy to practice. Birmingham, UK: University of Birmingham, Health Services Management Centre; 2000.
  • * Indicates the major publication for the study
