Implementation Science
2011 Aug 23;6:97. doi: 10.1186/1748-5908-6-97

What is the value and impact of quality and safety teams? A scoping review

Deborah E White 1, Sharon E Straus 2, H Tom Stelfox 3, Jayna M Holroyd-Leduc 3, Chaim M Bell 2, Karen Jackson 4, Jill M Norris 1, W Ward Flemons 3, Michael E Moffatt 5, Alan J Forster 6
PMCID: PMC3189393  PMID: 21861911

Abstract

Background

The purpose of this study was to conduct a scoping review of the literature about the establishment and impact of quality and safety team initiatives in acute care.

Methods

Studies were identified through electronic searches of MEDLINE, EMBASE, CINAHL, PsycINFO, ABI Inform, and Cochrane databases. Grey literature and bibliographies were also searched. Qualitative or quantitative studies conducted in acute care that described how quality and safety teams were established or implemented, the impact of teams, or the barriers and/or facilitators of teams were included. Two reviewers independently extracted data on study design, sample, interventions, and outcomes. Quality assessment of full-text articles was done independently by two reviewers. Studies were categorized according to dimensions of quality.

Results

Of 6,674 articles identified, 99 were included in the study. The heterogeneity of studies and results reported precluded quantitative data analyses. Findings revealed limited information about attributes of successful and unsuccessful team initiatives, barriers and facilitators to team initiatives, unique or combined contribution of selected interventions, or how to effectively establish these teams.

Conclusions

Not unlike systematic reviews of quality improvement collaboratives, this broad review revealed that while teams reported a number of positive results, there were many methodological issues. This study is unique in using both traditional quality assessment and a more novel framework for appraising the quality of reporting (SQUIRE). Rigorous design, evaluation, and reporting of quality and safety team initiatives are required.

Background

Over the last four decades, there has been a growing interest in improving the quality of care provided to patients. Recipients of care, providers, and healthcare leaders acknowledge that patient harm resulting from the delivery of healthcare is far more common and serious than they would like. For example, studies indicate that between 5% and 20% of patients admitted to hospital experience adverse events (AEs). AEs cost healthcare systems billions of dollars in additional hospital stays; retrospective reviews judge that between 36% and 50% of these AEs could have been avoided under different circumstances [1-4]. Building a culture of safety is cited as one of the most important aspects of improving patient safety and quality of care [5]. This requires an environment in which staff can speak freely about the lack of quality in the delivery of care, report errors, close calls, and hazardous situations that occur in the system, and feel empowered to implement changes that impact patient, provider, and system outcomes [6-8].

Quality and safety teams have been proposed as one strategy for professionals to discuss threats to quality and patient safety, and to identify and implement actions towards building safer systems [7,9]. These teams (often called quality improvement teams, quality collaboratives, clinical networks, or safety teams) are groups of individuals brought together to undertake specific initiatives to improve the quality of care [10]; care that is timely, effective, patient centred, efficient, equitable, and safe [11]. These team initiatives are often focused on designing and redesigning structures and/or processes of care at the local and system level, to yield better results for not only patients, but also providers and the broader health system [12]. If health organizations are to improve the quality of care and enhance patient safety, it is essential that there is a more in-depth understanding of how these teams are established, the barriers and facilitators to establishing and implementing teams and team initiatives, as well as the strength of the evidence about the impact of team initiatives.

Before embarking on a national study to survey and interview senior leaders and team members of quality and safety teams across Canada, a scoping review of the literature was undertaken to understand the types of quality and safety team initiatives, the evidence about their impact, and the barriers and facilitators to establishing teams and team initiatives.

Methods

Data sources and searches

We searched MEDLINE (1980-November 2007), EMBASE (1980-November 2007), CINAHL (1982-November 2007), the Cochrane Effective Practice and Organisation of Care (EPOC) database, PsycINFO, and ABI Inform (1980 to November 2007). Grey literature and websites were also searched. If a publication area could be identified on a website, this area was searched rather than the entire site.

Combinations of the following search terms were used: patient safety, quality improvement, safety, quality, collaborative, team, committee, model, initiative, and clinical microsystems. Appropriate wildcards were used. Additional articles were identified through review of reference lists (see Additional file 1, Tables S1 and S2).

Study selection

All abstracts were reviewed independently by multidisciplinary teams of two reviewers using the following inclusion criteria: qualitative or quantitative study; study occurred in an acute care centre; English language publication; description of how quality and safety teams were established, implemented, and/or the impact of teams and their initiatives on provider, patient, and/or system outcomes; or description of barriers and/or facilitators to the establishment and implementation of quality and safety teams. Disagreements about inclusion were reviewed by two independent reviewers. Full-text articles were retrieved and further reviewed by two independent investigators. Disagreements between a pair of reviewers were reviewed and resolved through consensus by SES and DEW. Inter-rater agreement between reviewers was assessed using Cohen's κ coefficient.
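For reference, Cohen's κ expresses agreement beyond chance by relating the observed agreement $p_o$ to the expected chance agreement $p_e$. As a worked check against the figures reported in the Results (76.0% observed agreement, κ = 0.50), the implied chance agreement is:

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad 0.50 = \frac{0.76 - p_e}{1 - p_e} \;\Rightarrow\; p_e = 0.52.$$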

Data abstraction and quality assessment

Initial data abstraction was performed by two independent reviewers, using a standardized data abstraction form (see Additional file 1, Table S3). Differences in abstraction between reviewers were resolved by a third reviewer.

The scoping review was designed according to recognized methodology [13], including a thorough documentation of the process for selection and inclusion of studies, data abstraction methods, traditional methodological critique [14], and assessment of other threats to internal and external validity. For randomized controlled trials (RCTs), criteria included method of randomization, allocation concealment, blinding, protection from bias, assessment of outcomes, and description of sites. For observational studies, assessment included description of cohorts and assessment of outcomes, among other items. Qualitative studies were assessed for evidence of appropriate sampling, adequate description, data quality, and theoretical and conceptual adequacy [15].

The Cochrane Effective Practice and Organisation of Care (EPOC) taxonomy for quality interventions [16] was adopted to aid in documenting quality improvement efforts undertaken by teams, and to explore which techniques lead to improved outcomes. Additionally, the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines, described elsewhere [17], were used to enhance the critique and to capture rigor across the variations in reporting in published studies. Frequencies of the items and corresponding sections within the SQUIRE checklist (see Additional file 1, Table S3) were used to determine coverage (i.e., yes or no) and thoroughness in the reporting of those items (i.e., good, fair, poor).
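As a minimal illustration of this frequency-based appraisal (a sketch with hypothetical abstraction data, not the authors' actual abstraction form), checklist coverage and thoroughness could be tallied as follows:

```python
from collections import Counter

# Hypothetical abstraction results: for each study, each SQUIRE item is recorded
# as covered ("yes"/"no") and, when covered, rated for thoroughness.
studies = {
    "study_01": {"aim": ("yes", "good"), "planning": ("no", None)},
    "study_02": {"aim": ("yes", "fair"), "planning": ("yes", "poor")},
    "study_03": {"aim": ("yes", "good"), "planning": ("no", None)},
}

for item in ["aim", "planning"]:
    coverage = Counter(ratings[item][0] for ratings in studies.values())
    thoroughness = Counter(
        ratings[item][1] for ratings in studies.values() if ratings[item][0] == "yes"
    )
    n = len(studies)
    print(f"{item}: covered in {coverage['yes']}/{n} studies "
          f"({100 * coverage['yes'] / n:.0f}%), thoroughness {dict(thoroughness)}")
```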

Results

Data synthesis

After duplicates were removed from the 7,994 citations retrieved, 6,674 abstracts were identified for review. Of these, 6,400 papers were excluded because they did not meet one or more of the inclusion criteria (Figure 1). Abstracts were excluded if they did not describe teams in hospital settings, described teams that did not undertake quality or safety work, or did not report a quantitative or qualitative study. A total of 274 full-text papers were reviewed, and 99 papers were included in this review. Final inter-rater agreement reached 76.0% (Cohen's κ coefficient = 0.50). The heterogeneity of studies and outcomes/results reported precluded quantitative data analyses. Instead, a descriptive summary is presented [13,18].

Figure 1. Study selection process.

Summary of research on quality and safety teams in acute care

To assist in the description and analysis, papers were categorized according to selected dimensions of quality defined by the IOM [11] (effectiveness, efficiency, timeliness, patient-centredness, safety, and equity; see Additional file 1, Table S4). Of the 99 papers included in our study, 45 primarily addressed dimensions of effectiveness, 15 addressed aspects of efficiency, 16 focused on timeliness, 8 focused on patient centredness, and 15 focused on safety. No papers focused on equitable care.

Effectiveness papers

In 45 studies, the intent was to develop or utilize evidence about the impact of quality and safety teams and their initiatives. Quality initiatives were often focused on changes directed at clinical care processes for patient populations (i.e., maternity, cardiac, infection processes, asthma, and diabetes management) [19-44], exploration of effectiveness of quality and safety programs [45-49], and descriptions of team characteristics and leadership as important to the establishment, implementation, and/or outcome of initiatives [50-63].

Sixteen of the 45 quality initiatives [20-24,26-28,32-34,36,39,40,43,44] utilized best practice or national guidelines. Nine controlled studies reported statistically significant results [20,21,23,26,40,42,43,56,63], but only three studies reported statistically significant differences over a sustained period of time [20,23,56]. There were methodological flaws within the controlled studies, such as a greater dropout rate in the control group [56], and no description of case mix [20]. Horbar et al. [23] demonstrated the strongest design amongst the effectiveness papers. In a randomized trial, investigators tested whether teams in neonatal intensive care units exposed to a multifaceted collaborative QI intervention would decrease time to surfactant use after birth, and achieve improved patient outcomes for preterm infants of 23 to 29 weeks gestation. They reported a reduction in nosocomial infection (26% to 22%; p = 0.007) and coagulase-negative staphylococcus infections (22% to 16.6%; p = 0.007) in neonates. Reduced rates were maintained over a four-year period.

Patient-centred papers

Eight studies focused on improving, and eliciting feedback about, patients' experiences with programs and transitions in health systems (i.e., pain management programs, admission, and discharge processes). Bookbinder et al. [64], the only controlled study in this group, implemented a number of clinical care processes to improve palliative care for inpatients who were expected to die from advanced disease. Patients in intervention units were more likely than those in comparison units to have a comfort plan in place (p < 0.0001) and do-not-resuscitate orders (p < 0.0001). Six studies were descriptive and did not have a control group [65-70]; each reported positive improvements over time (i.e., facilitated patient-centred care and assessment, patient satisfaction, excellent ratings of new discharge processes). Two studies reported statistically significant improvements from baseline [64,65], one of which maintained the desired outcomes over a period of six months or more [65].

Safety papers

Of the safety papers (n = 15), many focused on the reduction of AEs and/or errors (n = 12). Initiatives focused on medication concerns [12,71-77], decreasing prescribing and administration errors [12,71,73-75,78,79], reducing medical error, increasing overall error and/or near-miss reporting [12,71,72,75,77,80,81], among other issues [82,83]. Four studies employed statistical testing, and all reported statistically significant findings for desired outcomes when compared with baseline (i.e., increased reporting, decreased errors, and reduction of preventable adverse drug events) [12,72,73,75]. Common interventions included education sessions and audit/feedback. With the exception of Carey et al. [75], who utilized an interrupted time series design, the remaining study designs were descriptive or before-and-after case series.

Timeliness papers

Sixteen papers were directed at improving structural and care processes such as time to treatment, waiting times, length of stay [84-98], overcrowding, and patient flow [99]. While the majority of authors suggested positive improvement [85-100], only six studies used tests of significance [84,86-88,90,92]. Statistically significant improvements from baseline (i.e., decreased delay of treatment [28,84,86,87,92], timely diagnosis [86-88,92]) were found in all six studies, but there were no reports of sustainability of outcomes. With the exception of Horbar et al. [84], the study designs were weak (before-and-after case series or historically controlled).

Efficiency papers

Fifteen studies were directed at changing clinical practice patterns, outcomes, and system processes to address costs [100-107] and/or resource utilization (i.e., people and services) [102,105,106,108-114]. Three of the studies reported significant outcomes (i.e., decreased length of stay, reduced number of non-clinically indicated tests, decreased costs associated with personnel) when compared with baseline [102,103,112] or a control inpatient unit [102].

Barriers and facilitators

Few papers (n = 6) [25,51-53,57,59] focused specifically on barriers and facilitators to establishing, implementing, and measuring the impact of quality and safety team initiatives. However, regardless of study aim, the roles of leadership, organizational culture, and access to resources in supporting quality and safety were consistent messages across the studies. A selection of team attributes, processes, and structures were also identified as important to the implementation of initiatives (e.g., physician champions, expertise, understanding of roles on the team, time for meetings).

General description of teams and their initiatives

Various professionals were represented on the teams, including nurses, physicians, and pharmacists. Approximately one-third of the teams also had representation from administrative and clinical leadership positions, as well as quality improvement experts. Statistical expertise was reported in only four studies. Twenty-one studies reported participation in a formal collaborative such as the IHI Breakthrough Series [12,20-22,44,45,57,65,72,85] and the Vermont Oxford Network [23,46,58,84].

A diverse range of quality improvement techniques/interventions was used in improvement initiatives. Teams used a mix of professional, financial, organizational, and regulatory quality interventions (see Table 1). Educational meetings (n = 59), audit and feedback (n = 30), and other quality improvement methods (n = 54), such as plan-do-study-act cycles (PDSA; n = 15), were frequently used. In addition to these professional interventions, teams often reported structural changes within organizations and provider-oriented interventions.

Table 1. EPOC quality improvement strategies

N %
Professional interventions
 Educational meetings 59 59.6
 Other quality improvement techniques (i.e., PDSA, process mapping flowcharts) 54 54.5
 Audit and feedback 30 30.3
 Distribution of educational materials 18 18.2
 Educational outreach visits 12 12.1
 Reminders 11 11.1
 Marketing 10 10.1
 Patient mediated interventions 5 5.1
 Local consensus processes 4 4.0
 Local opinion leaders 1 1.0
Financial interventions
 Provider oriented 9 9.1
  Provider salaried service 4 4.0
  Provider incentives 3 3.0
  Fee-for-service 1 1.0
  Institution grant/allowance 1 1.0
 Patient oriented 0 0.0
 Other 3 3.0
Organisational interventions
 Provider oriented
  Clinical multidisciplinary teams 99 100.0
  Case management 17 17.2
  Continuity of care 16 16.2
  Communication and case discussion between distant health professionals 12 12.1
  Revision of professional roles 11 11.1
  Satisfaction of providers with the conditions of work and its material and psychic rewards 11 11.1
  Skill mix changes 10 10.1
  Formal integration of services 6 6.1
  Arrangements for follow-up 5 5.1
 Patient oriented
  Presence and functioning of adequate mechanisms for dealing with client suggestions and complaints 12 12.1
  Consumer participation in governance of healthcare organisation 1 1.0
 Structural interventions
  Changes in physical structure, facilities and equipment 23 23.2
  Changes in scope and nature of benefits and services 19 19.2
  Changes in medical record systems 16 16.2
  Presence and organisation of quality monitoring mechanisms 15 15.2
  Staff organisation 9 9.1
  Other 4 4.0
  Changes in the setting/site of service delivery 2 2.0
  Ownership, accreditation, and affiliation status of hospitals and other facilities 1 1.0
Regulatory interventions
 Management of patient complaints 4 4.0
 Peer review 1 1.0

Critical appraisal of methodological quality and reporting of studies

A controlled study design was used in twenty-three studies: interrupted time series (n = 7) [20,24,37,38,75,82,85], controlled before-and-after (n = 9) [19,21,23,26,27,56,64,112,113], RCT (n = 2) [84,102], cohort (n = 2) [39,40], and case-control studies (n = 3) [41-43]. Twelve controlled studies utilized patient charts and administrative databases to measure outcomes. Limitations in the reporting of these studies included sparse information about the control sites, potential differences in baseline measurement, and a lack of information about data collection processes and tools. Most studies used uncontrolled study designs (n = 76): before-and-after case series (n = 29) [12,22,28-32,57,63,65,71-74,79,80,87-91,98,99,103-106,109,115], historically controlled (n = 6) [33-36,86,92], and descriptive (i.e., cross-sectional, correlational, survey, case-report; n = 36) [44-49,52-55,58,60-62,66-68,70,76-78,81,83,93-97,100,101,107,108,110,111,114,116]. Five were qualitative-descriptive or mixed-methods studies [25,50,51,59,69].

The overall methodological quality of the studies was weak, with many subject to a number of single-group threats to internal validity (see Table 2). In particular, there were concerns about selection bias arising from the few details provided about the patient populations, patient care units, and/or individual organizations involved in collaboratives. Other weaknesses included a lack of description of methods to ensure data quality and accuracy, reliance on team self-report measures, and a lack of documented questionnaire reliability and validity. While most studies reported 'significant' or 'very positive' improvements as a result of the intervention(s), only one-third employed appropriate statistical tests to determine whether the interventions made a difference.
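For the common before-and-after comparison of event rates, an appropriate test of the kind used by that minority of studies might resemble the following sketch (the counts are made up; it uses the two-proportion z-test from statsmodels):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: adverse events before and after an intervention.
events = [52, 31]        # adverse events in each period
admissions = [400, 410]  # admissions (denominators) in each period

# Two-sided z-test for a difference in proportions between the two periods.
stat, p_value = proportions_ztest(count=events, nobs=admissions)
print(f"rates: {events[0] / admissions[0]:.1%} vs {events[1] / admissions[1]:.1%}, "
      f"z = {stat:.2f}, p = {p_value:.4f}")
```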

Table 2. Methodological status of controlled studies

Study Design Methodological status Commentary on potential bias
Horbar et al. [84] (2004) Randomized controlled Randomization (computer generated), allocation concealment (investigators, prior to intervention), baseline (13 of 14 measures similar, no statistical testing), blinding (statistician), ITT (done), follow-up (100%) Voluntary participation in collaborative: 114/178 hospitals eligible participated.
Curley et al. [102] (1998) Randomized controlled Randomization (blocked), allocation concealment (NS), baseline (18 of 19 similar), blinding (NS), ITT (NS), follow-up (NS) Used a convenience sample for one measure; controlled for potential covariates in analyses; questionable construct validity for provider satisfaction.
Carlhed et al. [26] (2006) Controlled before-after Allocation (matched then randomized), allocation concealment (controls), baseline (7 of 7 similar), blinding (controls), ITT (NS), follow-up (NS) Intervention group hospitals self-selected, whereas control hospitals were hospitals that did not self-select; no group differences at baseline; registry had continuous monitoring; no reason to believe the proportion of patients with contraindications systematically differed.
Doran et al. [56] (2002) Controlled before-after Allocation (participant preference, attempts to randomize), allocation concealment (NS), baseline (NS), blinding (external reviewers), ITT (NS), follow-up (time 1: 85%, time 2: 74%; higher control group attrition) Selection: sample may be biased towards those who responded most quickly; measurement: unlikely, external reviewers blinded to group allocation and not part of study, reported methods to avoid bias; attrition/exclusion: differences between intervention group and those who withdrew, greater drop-out in the control group; gave description of sample, but did not compare group characteristics; performance: unlikely, analyses at team level.
Hermida and Robalino [19] (2002) Controlled before-after Allocation (matched then randomized), allocation concealment (NS), baseline (higher outcomes in intervention group), blinding (NS), ITT (NS), follow-up (NS)
Howard et al. [21] (2007) Controlled before-after Allocation (matched, wait-list control), allocation concealment (NS), baseline (2 of 6 similar - controls, 5 of 6 similar - delayed comparison), blinding (NS), ITT (NS), follow-up (NS) Provided information on non-responders; selection: self-selection, 43/58 participated, group differences at baseline; provided evidence against regression to the mean and selection bias in the wait-list controls; no information on quality of the data source.
Bookbinder et al. [64] (2005) Controlled before-after Allocation (location - unit type), allocation concealment (NS), baseline (3 of 21 similar), blinding (NS), ITT (NS), follow-up (NS) Measurement: no baseline data; developed tools with interrater reliability; attrition bias: short survival of patients on the oncology unit; one tool could not be completed: use was limited to 50 patients on intervention unit; selection: loss to follow-up on comparison unit; performance: not possible to control for extraneous variables such as referral to consultation team, exposure of staff to other educational offerings, and cultural and leadership styles.
Brickman et al. [27] (1998) Controlled before-after Allocation (location - hospital, unclear if 'randomization' occurred), allocation concealment (NS), baseline (NS), blinding (NS), ITT (NS), follow-up (NS) Performance: changing processes.
Horbar et al. [23] (2001) Controlled before-after Allocation (project participation), allocation concealment (NS), baseline (9 of 9 similar), blinding (NS), ITT (NS), follow-up (attrition in control) Selection: self-selection of institutions.
Wang et al. [113] (2003) Controlled before-after Allocation (location - unit type), allocation concealment (NS), baseline (10 of 12 similar), blinding (NS), analyses (covariates), ITT (NS), follow-up (NS) Selection: allocated by unit type, differences between groups on baseline characteristics and outcome measures, controlled for characteristics in analyses; clinical significance of differences in question; no attrition bias; performance: likely, with different unit types being compared; quality of the inventory data source is not known.
Isouard [112] (1999) Controlled before-after Allocation (location - hospital), allocation concealment (NS), baseline (3 of 3 similar), blinding (NS), analyses (no covariates), ITT (NS), follow-up (NS) Selection: well-defined criteria for selection for AMI.
Cable [37] (2001) Interrupted time series Data points (pre - 42-47 months/data points, post 22 to 27 months/data points), blinding (NS), analyses (ARIMA, switching replication), ITT (NS), follow-up (100%) Measurement: change in catheterization tray, which affected catheterization events.
Berriel-Cass et al. [20] (2006) Interrupted time series Baseline (retrospective, NS case mix; pre - 7/8 months/data points, post - 23/24 months/data points), blinding (NS), analyses (pre-post comparisons), ITT (NS), follow-up (NS)
Carey and Teeters [75] (1995) Interrupted time series Baseline (pre - 6 months/data points, post - 15 months/data points), blinding (NS), analyses (np charts, no inferential statistics), ITT (NS), follow-up (NS) Selection/attrition: NA; performance/measurement: nurses may have increased reporting after training program, rather than the intervention being efficacious; unclear as to whether there was a change in intervention midway or after training program.
Harris et al. [38] (2000) Interrupted time series Baseline (pre - 3 years/6 data points, post - 3 years/6 data points), blinding (NS), analyses (no inferential statistics), ITT (NS), follow-up (NS) Performance: physicians were already beginning to establish criteria before implementation; selection: no information about the sample.
Bartlett et al. [85] (2002) Interrupted time series Baseline (1. pre - 20 weeks/data points, post - 20 weeks/data points; 2. pre - 10 weeks/6 data points, post - 25 weeks/14 data points), blinding (NS), analyses (no inferential statistics), ITT (NS), follow-up (100%) Selection/attrition: unlikely; measurement/performance: team-self and director-reported 'significant improvements', attempts to blind director to team identity.
Fox et al. [24] (2006) Interrupted time series Baseline (pre - 15 months/5 data points, post - 27 months/9 data points), blinding (NS), analyses (no inferential statistics), ITT (NS), follow-up (100%) Time series controls for selection, but not for history, instrumentation, and testing; no testing, and instruments relied on review of charts; difficult to determine if there were any historical events that may have influenced results.
Allison and Toy [82] (1996) Interrupted time series Baseline (pre - 6 years/data points, post - 5 years/data points), blinding (NS), analyses (no inferential statistics), ITT (NS), follow-up (NS) Measurement/instrumentation: unclear as to how some of the data was collected.
Halm et al. [40] (2004) Cohort Cohort (matched, separate pre- post cohorts, 30 of 37 similar), blinding (NS), ITT (NS), follow-up (NS) Selection: acknowledges pre-post comparison of separate groupings of patients who met criteria of CAP; samples matched for age, race, sex, severity of diseases, co-morbidities, etc.
Berenholtz et al. [39] (2004) Cohort Cohort (different ICU types, baseline NS), blinding (NS), ITT (NS), follow-up (NS) Selection: no description of population; may not have accounted for other confounding factors such as antibiotic use and location of catheter insertion.
Brown et al. [42] (2006) Case-control Cohort (prospective, case mix 3 of 4 similar, before-after comparisons), blinding (NS), analyses (regression) Participants matched on post-data; performance: defined eras and care; selection bias: no loss to follow up, matched on most confounding variables; no masking regarding exposure and outcome.
Houston et al. [43] (2003) Case-control Cohort (matched - chart review, NS case mix), blinding (NS), analyses (no inferential statistics)
Bromenshenkel et al. [41] (2000) Case-control Cohort (chart review, NS case mix; pre-post comparisons), blinding (NS), analyses (no inferential statistics) No information on comparability of cases and controls for confounding variables, or if data collection was masked with regard to disease status of participant.

Abbreviations: NS = not specified, ITT = intention to treat, ARIMA = Autoregressive integrated moving average, ICU = intensive care unit.

Qualitative studies provided a description of purposive sampling of key informants and efforts to assure sampling adequacy. Only two authors [25,51] provided descriptions of the method of analysis. There was limited discussion of how researchers assured rigor; one author discussed member checking [33]. None of the qualitative studies addressed more than three methods to improve validity [117].

The EPOC classification of quality interventions [16] was utilized to examine whether specific types of improvement interventions lead to positive outcomes. All studies used two or more interventions in their initiatives; thus, it was difficult to make judgements regarding the unique or combined contribution of selected interventions on positive outcomes. Furthermore, within the studies there was a mix of improved outcomes and no change in the identified outcome. Papers seldom provided sufficient information to determine the mechanism of change, or details regarding the robustness of interventions. Beyond a narrative account of quality improvement efforts, additional inquiry regarding the weight of evidence for a particular technique was precluded by the heterogeneity in outcomes, design, and topics that quality and safety teams addressed in this scoping review.

Across the studies, authors seldom provided essential elements of SQUIRE reporting. More specifically, efforts to address a number of issues related to internal and external validity, or the validity and reliability of assessment instruments, were documented in fewer than one-quarter of studies. Detailed information about the training of data collectors and interviewers, or about data quality and accuracy, was infrequently discussed. Few authors reported analyses that included effect size and power (n = 14) or the distribution and management of missing data (n = 10). Only half of the authors contextualized findings within the existing literature. The weakest section of reporting across studies was planning of the interventions, with less than half of studies including any of the five elements outlined by SQUIRE. The study aim, abstract, background knowledge, and description of the local problem were uniformly addressed across all studies. Six exemplar studies reported at least three-quarters of all SQUIRE elements [33,39,40,56,65,69].

Discussion

Over the past twenty years, there has been substantial growth in the number of quality improvement teams [7,8,59]. Under the direction of clinical or administrative leadership, teams have collectively directed their efforts at changing clinical and/or system processes and structures with the goal of improving patient, provider, and system outcomes. This review revealed that the foci within each of the dimensions of quality, the interventions implemented by teams, the composition of teams, and the contexts in which initiatives occurred were diverse. It was surprising to find that best evidence (i.e., best practice guidelines or national guidelines) or research-based evidence was not always utilized in these initiatives.

Few papers focused on barriers and facilitators to establishing and measuring the impact of quality and safety team initiatives; however, most researchers reported factors that they believed influenced the success of the teams. Many factors that were identified as facilitators (i.e., senior leadership support, supportive organizational cultures, resources, ability to work as a team, physician 'opinion' leaders) are attributes of effective teams [118]. Often, these same factors were identified as barriers when they were absent. Teams' perceptions of their success or failure often revolved around these factors. These findings are consistent with those of other authors [119-121] who have emphasized that the strategic direction and vision of senior leadership, organizational culture, and the support of leadership in removing barriers for teams are key to making a difference in quality and safety in organizations.

We found a lack of evidence about the attributes of successful and unsuccessful team initiatives, descriptions of how to establish and implement the teams, the unique or combined contribution of selected interventions, and the cost-benefit of such initiatives. Future research could focus on the behaviours and actions of participants themselves, such as what actions senior leaders took to ensure the team was successful and what role physician and nurse champions played in winning the support of their colleagues [18].

We noted few methodologically strong studies. As a result, it is difficult to know whether the 'success' or 'failure' of quality and safety team initiatives is the result of the attributes and ideal mix of team members, team processes, the period over which the initiative occurs, certain clinical conditions and system processes, selected or combined interventions, the outcomes measured, or the context in which the interventions occur. Understanding the unique and combined contributions of quality improvement interventions will require the use of rigorous designs and synthesis of study results through a systematic review; a broad-based scoping review does not seek to synthesize or weight evidence from various studies [13].

Despite this lack of evidence about the mechanisms by which intervention components and contextual factors may influence study outcomes, quality improvement methodologies and quality collaboratives are popular methods for understanding and organizing quality improvement and safety efforts in hospitals. The nature of quality improvement is pragmatic; it is an examination of the 'real world.' Health systems are living laboratories that are complex and frequently unpredictable, and change is often multifaceted. RCTs are often not an option, and control groups may not be feasible for understanding localized microsystem or mesosystem change. However, moving away from weaker study designs (e.g., before-and-after designs) toward evaluations of change initiatives that utilize more robust designs (e.g., interrupted time series or stepped-wedge designs) would enhance the science of quality improvement and strengthen the evidence about the actual effectiveness of methods used in initiatives.
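To make the contrast concrete, a segmented regression for an interrupted time series estimates both the immediate level change and the trend change at the intervention point. The sketch below uses simulated monthly data, not data from any reviewed study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated monthly outcome: 24 points before and 24 after an intervention.
n_pre, n_post = 24, 24
time = np.arange(n_pre + n_post)
post = (time >= n_pre).astype(int)                     # level-change indicator
time_since = np.where(post == 1, time - n_pre + 1, 0)  # post-intervention slope term

# Underlying process: gentle upward drift, then a drop in level after the change.
y = 50 + 0.2 * time - 6 * post - 0.1 * time_since + rng.normal(0, 2, time.size)

df = pd.DataFrame({"y": y, "time": time, "post": post, "time_since": time_since})

# Segmented regression: the coefficient on `post` estimates the immediate level
# change; the coefficient on `time_since` estimates the change in trend.
model = smf.ols("y ~ time + post + time_since", data=df).fit()
print(model.summary().tables[1])
```

In practice, autocorrelation in the series would also need to be addressed, for example with ARIMA models, as Cable [37] did.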

Healthcare providers, senior leaders, and boards strongly affirm the importance of improving processes for assuring quality and safety, and require access to the best evidence to help achieve that goal. We observed that many documented improvements and identified 'successes' were reported as percentage changes over time, without comparison to control groups or statistical testing. More rigorous evaluation of these interventions is needed before they can legitimately be proposed as 'evidence-based' practices. Considerable resources are allocated to the changes associated with these initiatives; the time has come to decide whether this investment is justified.

Mittman [122] proposes that researchers, users, and stakeholders engage in rigorous evaluation and creation of a valid, useful knowledge and evidence base for quality and safety. This will require improved conceptions of the nature of quality and safety issues, an understanding of the mechanisms by which various structures and processes (e.g., quality improvement interventions) impact outcomes, stronger designed studies (i.e., time series), reliable and valid measurements, data quality control, and statistical processes to evaluate the impact of initiatives [123].

A strength of this review was the quality appraisal of reporting excellence using the newly established SQUIRE guidelines. Ogrinc et al. [17] have called for excellence in reporting as a means to share organizational learning and benefit care delivery. Our review revealed that the quality of current reporting varies widely. Improving the rigor of study methods and the reporting of study findings will build a stronger foundation and more convincing argument for future studies and the practice of quality improvement and safety in healthcare.

Limitations should be considered in interpreting the results of this review. First, the search was broad and included studies of quality and safety team initiatives without operational definitions of quality and safety. This may have introduced misclassification of the studies. However, we believe our selection process, in which two investigators reviewed independently and unresolved disagreements about inclusion were referred to a second team of two reviewers, strengthened our classification. Second, this review only addressed studies conducted in acute care settings; thus, results may not be applicable to outpatient and community settings.

Conclusions

Clearly, there is much needed improvement in the design and reporting of quality and safety initiatives. If readers are to judge the internal and external validity of a study, investigators must provide enough information for critical appraisal of the intervention procedures, measurements, subject selection, analysis, and the context of the individual, group, organization, and system characteristics in which the intervention occurs. Knowing how the contextual factors compare to one's own circumstances is key to determining the generalisability and relevance of the results [124].

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

DEW is the guarantor for the paper. DEW led the review, obtained funding for the study, and identified the research question. DEW and SES designed the search strategy. DEW, SES, HTS, JMH, CMB, KJ, WWF, MEM, and AJF screened search results and reviewed papers against the inclusion criteria. DEW, SES, and JMN extracted data and assessed papers for methodological and reporting quality. DEW and JMN synthesized the results, analysed the findings, and drafted the manuscript. All authors made critical revisions of the manuscript for intellectual content and approved the final version.

Supplementary Material

Additional file 1

Tables S1 to S4. Table S1- Search strategies by database; Table S2- Distribution of references by electronic bibliographic source; Table S3- Data abstraction form; Table S4- Reviewed studies, differentiated by quality dimension.


Contributor Information

Deborah E White, Email: dwhit@ucalgary.ca.

Sharon E Straus, Email: sharon.straus@utoronto.ca.

H Tom Stelfox, Email: Tom.Stelfox@albertahealthservices.ca.

Jayna M Holroyd-Leduc, Email: Jayna.Holroyd-Leduc@albertahealthservices.ca.

Chaim M Bell, Email: Bellc@smh.toronto.on.ca.

Karen Jackson, Email: karen.jackson@albertahealthservices.ca.

Jill M Norris, Email: jmnorris@ucalgary.ca.

W Ward Flemons, Email: flemons@ucalgary.ca.

Michael E Moffatt, Email: MMoffatt@wrha.mb.ca.

Alan J Forster, Email: aforster@ohri.ca.

Acknowledgements

This work was supported by grant funding from the Canadian Institutes of Health Research and Alberta Innovates-Health Solutions. We gratefully acknowledge the contributions of Laure Perrier (Information Specialist, University of Toronto) for carrying out the literature searches, Dr. Joshua Tepper (Vice President, Education, Sunnybrook Health Sciences Centre, Toronto, Ontario) for his valuable guidance, and the administrative and technical support of Fatima Chatur and Navjot Virk. We also acknowledge in-kind and/or cash contributions from the Faculty of Nursing, University of Calgary, the Winnipeg Regional Health Authority, the Saskatoon Health Region, Alberta Health Services, and the Canadian Patient Safety Institute. Results expressed in this report are those of the investigators and do not necessarily reflect the opinions or policies of the Winnipeg Regional Health Authority, the Saskatoon Health Region, Alberta Health Services, or the Canadian Patient Safety Institute.

References

1. Baker G, Norton P, Flintoft V, Blais R, Brown A, Cox J, Etchells E, Ghali W, Hebert P, Majumdar S, et al. The Canadian adverse events study: The incidence of adverse events among hospital patients in Canada. Canadian Medical Association Journal. 2004;170(11):1678–1686. doi: 10.1503/cmaj.1040498.
2. Forster A, Asmis T, Clark H, Al Saied G, Code C, Caughey S. Ottawa hospital patient safety study: incidence and timing of adverse events in patients admitted to a Canadian teaching hospital. Canadian Medical Association Journal. 2004;170(8):1235–1240. doi: 10.1503/cmaj.1030683.
3. Nieva V, Sorra J. Safety culture assessment: A tool for improving patient safety in healthcare organizations. Quality and Safety in Health Care. 2003;12:17–23. doi: 10.1136/qhc.12.suppl_2.ii17.
4. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. British Medical Journal. 2001;322(7285):517–519. doi: 10.1136/bmj.322.7285.517.
5. Cranfill L. Approaches for improving patient safety through a safety clearing house. Journal for Health Care Quality. 2003;25(1):43–47. doi: 10.1111/j.1945-1474.2003.tb01032.x.
6. Gherardi S, Nicolini D. The organizational learning of safety in communities of practice. Journal of Management Inquiry. 2000;9:7–18. doi: 10.1177/105649260091002.
7. Ketring S, White J. Developing a system wide approach to patient safety: The first year. Joint Commission Journal on Quality Improvement. 2002;28(6):287–295. doi: 10.1016/s1070-3241(02)28028-1.
8. Morath J, Leary M. Creating safe spaces in organization to talk about safety. Nursing Economics. 2004;22(3):334–351.
9. Akins R. A process centered tool for evaluating patient safety performance and guiding strategic improvement. Advances in Patient Safety. 2005;4:109–126.
10. Mohr J, Baltalden P, Barach P. Inquiring into the quality and safety of care in academic clinical microsystems. In: McLaughlin C, Kaluzny A, editors. Continuous Quality Improvement in Health Care: Theory, Implementations and Applications. 3rd ed. Toronto, ON: Jones and Bartlett Publishers; 2001. pp. 407–445.
11. Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy of Sciences; 2001.
12. Silver MP, Antonow JA. Reducing medication errors in hospitals: a peer review organization collaboration. Jt Comm J Qual Patient Saf. 2000;26(6):332–340. doi: 10.1016/s1070-3241(00)26027-6.
13. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Social Research Methodology. 2005;8(1):19–32. doi: 10.1080/1364557032000119616.
14. Khan K, Riet R, Glanville J, Sowden A, Kleijnen J. Undertaking systematic reviews of research on effectiveness: CRD's guidance for those carrying out or commissioning reviews. CRD Report 4. York: York Publishing Services Ltd; 2001.
15. Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health service research. Qual Health Res. 1998;8:341–351. doi: 10.1177/104973239800800305.
16. Effective Practice and Organisation of Care Group. Data Collection Checklist. http://epoc.cochrane.org/sites/epoc.cochrane.org/files/uploads/datacollectionchecklist.pdf (Accessed 27 June 2011).
17. Ogrinc G, Mooney SE, Estrada C, Foster T, Goldmann D, Hall LW, Huizinga MM, Liu SK, Mills P, Neily J, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17:i13–i32. doi: 10.1136/qshc.2008.029058.
18. Lindenauer PK. Effects of quality improvement collaboratives. British Medical Journal. 2008;336(7659):1448–1449. doi: 10.1136/bmj.a216.
19. Hermida J, Robalino ME. Increasing compliance with maternal and child care quality standards in Ecuador. Int J Qual Health Care. 2002;14:25. doi: 10.1093/intqhc/14.suppl_1.25.
20. Berriel-Cass D, Adkins FW, Jones P, Fakih MG. Eliminating nosocomial infections at Ascension Health. Jt Comm J Qual Patient Saf. 2006;32(11):612–620. doi: 10.1016/s1553-7250(06)32079-x.
21. Howard DH, Siminoff LA, McBride V, Lin M. Does quality improvement work? Evaluation of the Organ Donation Breakthrough Collaborative. Health Serv Res. 2007;42(6p1):2160–2173. doi: 10.1111/j.1475-6773.2007.00732.x.
22. Wagner EH, Glasgow RE, Davis C, Bonomi AE, Provost L, McCulloch D, Carver P, Sixta C. Quality improvement in chronic illness care: a collaborative approach. Jt Comm J Qual Improv. 2001;27(2):63–80. doi: 10.1016/s1070-3241(01)27007-2.
23. Horbar JD, Rogowski J, Plsek PE, Delmore P, Edwards WH, Hocker J, Kantak AD, Lewallen P, Lewis W, Lewit E. Collaborative quality improvement for neonatal intensive care. Pediatrics. 2001;107(1):14–22. doi: 10.1542/peds.107.1.14.
24. Fox J, Hendrickson S, Miller N, Parry C, Youngman D. A cooperative approach to standardizing care for patients with AMI or heart failure. Jt Comm J Qual Patient Saf. 2006;32(12):682–687. doi: 10.1016/s1553-7250(06)32090-9.
25. Newton PJ, Halcomb EJ, Davidson PM, Denniss AR. Barriers and facilitators to the implementation of the collaborative method: reflections from a single site. Qual Saf Health Care. 2007;16(6):409–414. doi: 10.1136/qshc.2006.019125.
26. Carlhed R, Bojestig M, Wallentin L, Lindstrom G, Peterson A, Aberg C, Lindahl B. Improved adherence to Swedish national guidelines for acute myocardial infarction: The Quality Improvement in Coronary Care (QUICC) study. Am Heart J. 2006;152(6):1175. doi: 10.1016/j.ahj.2006.07.028.
27. Brickman R, Axelrod R, Roberson D, Flanagan C. Clinical process improvement as a means of facilitating health care system integration. Jt Comm J Qual Improv. 1998;24(3):143–153. doi: 10.1016/s1070-3241(16)30368-6.
28. Brush JE, Balakrishnan SA, Brough J, Hartman C, Hines G, Liverman DP, Parker JP, Rich J, Tindall N. Implementation of a continuous quality improvement program for percutaneous coronary intervention and cardiac surgery at a large community hospital. Am Heart J. 2006;152(2):379–385. doi: 10.1016/j.ahj.2005.12.014.
29. Cerulli J, Malone M. Can changes to a total parenteral nutrition order form improve prescribing? Nutr Clin Pract. 2000;15(3):143–151. doi: 10.1177/088453360001500306.
30. Feldman AM, Weitz H, Merli G, DeCaro M, Brechbill AL, Adams S, Bischoff L, Richardson R, Williams MJ, Wenneker M. The physician-hospital team: a successful approach to improving care in a large academic medical center. Acad Med. 2006;81(1):35. doi: 10.1097/00001888-200601000-00009.
31. Pierre JS. CE delirium: a process improvement approach to changing prescribing practices in a community teaching hospital. J Nurs Care Qual. 2005;20(3):244. doi: 10.1097/00001786-200507000-00009.
32. Skupski DW, Lowenwirt IP, Weinbaum FI, Brodsky D, Danek M, Eglinton GS. Improving hospital systems for the care of women with major obstetric hemorrhage. Obstet Gynecol. 2006;107(5):977–983. doi: 10.1097/01.AOG.0000215561.68257.c5.
33. Bédard D, Purden MA, Sauvé-Larose N, Certosini C, Schein C. The pain experience of post surgical patients following the implementation of an evidence-based approach. Pain Manag Nurs. 2006;7(3):80–92. doi: 10.1016/j.pmn.2006.06.001.
34. Cheah J. Clinical pathways-an evaluation of its impact on the quality care in an acute care general hospital in Singapore. Singapore Med J. 2000;41(7):335–346.
35. Blaylock B. Solving the problem of pressure ulcers resulting from cervical collars. Ostomy Wound Manage. 1996;42(2):26–28, 30, 32–33.
36. Mayo PH. Results of a program to improve the process of inpatient care of adult asthmatics. Chest. 1996;110(1):48–52. doi: 10.1378/chest.110.1.48.
37. Cable G. Enhancing causal interpretations of quality improvement interventions. Qual Health Care. 2001;10(3):179–186. doi: 10.1136/qhc.0100179.
38. Harris S, Buchinski B, Gryzbowski S, Janssen P, Mitchell GWE, Farquharson D. Induction of labour: a continuous quality improvement and peer review program to improve the quality of care. Can Med Assoc J. 2000;163(9):1163–1166.
39. Berenholtz SM, Pronovost PJ, Lipsett PA, Hobson D, Earsing K, Farley JE, Milanovich S, Garrett-Mayer E, Winters BD, Rubin HR. Eliminating catheter-related bloodstream infections in the intensive care unit. Crit Care Med. 2004;32(10):2014. doi: 10.1097/01.CCM.0000142399.70913.2F.
40. Halm EA, Horowitz C, Silver A, Fein A, Dlugacz YD, Hirsch B, Chassin MR. Limited impact of a multicenter intervention to improve the quality and efficiency of pneumonia care. Chest. 2004;126(1):100–107. doi: 10.1378/chest.126.1.100.
41. Bromenshenkel J, Newcomb M, Thompson J. Continuous quality improvement efforts decrease postoperative ileus rates. J Healthc Qual. 2000;22(2):4–7. doi: 10.1111/j.1945-1474.2000.tb00107.x.
42. Brown KL, Ridout DA, Shaw M, Dodkins I, Smith LC, O'Callaghan MA, Goldman AP, Macqueen S, Hartley JC. Healthcare-associated infection in pediatric patients on extracorporeal life support: The role of multidisciplinary surveillance. Pediatr Crit Care Med. 2006;7(6):546. doi: 10.1097/01.PCC.0000243748.74264.CE.
43. Houston S, Gentry LO, Pruitt V, Dao T, Zabaneh F, Sabo J. Reducing the incidence of nosocomial pneumonia in cardiovascular surgery patients. Qual Manag Health Care. 2003;12(1):28. doi: 10.1097/00019514-200301000-00006.
44. Baker DW, Asch SM, Keesey JW, Brown JA, Chan KS, Joyce G, Keeler EB. Differences in education, knowledge, self-management activities, and health outcomes for patients with heart failure cared for under the chronic disease model: the improving chronic illness care evaluation. J Card Fail. 2005;11(6):405–413. doi: 10.1016/j.cardfail.2005.03.010.
45. Pronovost PJ, Berenholtz SM, Ngo K, McDowell M, Holzmueller C, Haraden C, Resar R, Rainey T, Nolan T, Dorman T. Developing and pilot testing quality indicators in the intensive care unit. J Crit Care. 2003;18(3):145–155. doi: 10.1016/j.jcrc.2003.08.003.
46. Horbar JD, Plsek PE, Leahy K. NIC/Q 2000: establishing habits for improvement in neonatal intensive care units. Pediatrics. 2003;111(4):e397.
47. Bouchet B, Francisco M, Ovretveit J. The Zambia Quality Assurance Program: successes and challenges. Int J Qual Health Care. 2002;14:89. doi: 10.1093/intqhc/14.suppl_1.89.
48. Catsambas TT, Kelley ED, Legros S, Massoud R, Bouchet B. The evaluation of quality assurance: developing and testing practical methods for managers. Int J Qual Health Care. 2002;14:75. doi: 10.1093/intqhc/14.suppl_1.75.
49. Gandhi TK, Graydon-Baker E, Barnes JN, Neppl C, Stapinski C, Silverman J, Churchill W, Johnson P, Gustafson M. Creating an integrated patient safety team. Jt Comm J Qual Patient Saf. 2003;29(8):383–390. doi: 10.1016/s1549-3741(03)29046-8.
50. Price M, Fitzgerald L, Kinsman L. Quality improvement: the divergent views of managers and clinicians. J Nurs Manag. 2007;15(1):43. doi: 10.1111/j.1365-2934.2006.00664.x.
51. Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. The roles of senior management in quality improvement efforts: what are the key components? J Healthc Manag. 2003;48(1):15–28. discussion 29.
52. Weiner BJ, Shortell SM, Alexander J. Promoting clinical involvement in hospital quality improvement efforts: the effects of top management, board, and physician leadership. Health Serv Res. 1997;32(4):491.
53. Thor J, Wittlöv K, Herrlin B, Brommels M, Svensson O, Skår J, Øvretveit J. Learning helpers: how they facilitated improvement and improved facilitation-lessons from a hospital-wide quality improvement initiative. Qual Manag Health Care. 2004;13(1):60. doi: 10.1097/00019514-200401000-00006.
54. Branowicki PA, Shermont H, Rogers J, Melchiono M. Improving systems related to clinical practice: an interdisciplinary team approach. Semin Nurse Manag. 2001;9(2):110–114.
55. Marsteller JA, Shortell SM, Lin M, Mendel P, Dell E, Wang S, Cretin S, Pearson ML, Wu SY, Rosen M. How do teams in quality improvement collaboratives interact? Jt Comm J Qual Patient Saf. 2007;33(5):267–276. doi: 10.1016/s1553-7250(07)33031-6.
56. Doran DMI, Baker GR, Murray M, Bohnen J, Zahn C, Sidani S, Carryer J. Achieving clinical improvement: an interdisciplinary intervention. Health Care Manage Rev. 2002;27(4):42. doi: 10.1097/00004010-200210000-00005.
57. Mills PD, Weeks WB. Characteristics of successful quality improvement teams: lessons from five collaborative projects in the VHA. Jt Comm J Qual Patient Saf. 2004;30(3):152–162. doi: 10.1016/s1549-3741(04)30017-1.
58. Brown MS, Ohlinger J, Rusk C, Delmore P, Ittmann P. Implementing potentially better practices for multidisciplinary team building: creating a neonatal intensive care unit culture of collaboration. Pediatrics. 2003;111(4):e482.
59. Ayers LR, Beyea SC, Godfrey MM, Harper DC, Nelson EC, Batalden PB. Quality improvement learning collaboratives. Qual Manag Health Care. 2005;14(4):234. doi: 10.1136/qshc.2004.011924.
60. Kollberg B, Elg M, Lindmark J. Design and implementation of a performance measurement system in Swedish health care services: a multiple case study of 6 development teams. Qual Manag Health Care. 2005;14(2):95. doi: 10.1097/00019514-200504000-00005.
61. Lammers JC, Cretin S, Gilman S, Calingo E. Total quality management in hospitals: the contributions of commitment, quality councils, teams, budgets, and training to perceived improvement at Veterans Health Administration hospitals. Med Care. 1996;34(5):463. doi: 10.1097/00005650-199605000-00008.
62. Brewer BB. Relationships among teams, culture, safety, and cost outcomes. West J Nurs Res. 2006;28(6):641. doi: 10.1177/0193945905282303.
63. Irvine DM, Leatt P, Evans MG, Baker GR. The behavioural outcomes of quality improvement teams: the role of team success and team identification. Health Serv Manage Res. 2000;13(2):78–89. doi: 10.1177/095148480001300202.
64. Bookbinder M, Blank AE, Arney E, Wollner D, Lesage P, McHugh M, Indelicato RA, Harding S, Barenboim A, Mirozyev T. Improving end-of-life care: development and pilot-test of a clinical pathway. J Pain Symptom Manage. 2005;29(6):529–543. doi: 10.1016/j.jpainsymman.2004.05.011.
65. Cleeland CS, Reyes-Gibby CC, Schall M, Nolan K, Paice J, Rosenberg JM, Tollett JH, Kerns RD. Rapid improvement in pain management: the Veterans Health Administration and the Institute for Healthcare Improvement Collaborative. Clin J Pain. 2003;19(5):298. doi: 10.1097/00002508-200309000-00003.
66. Briscoe G, Arthur G. CQI teamwork: reevaluate, restructure, renew. Nurse Manag. 1998;29(10):73–78.
67. Carter JH, Meridy H. Making a performance improvement plan work. Jt Comm J Qual Improv. 1996;22(2):104–113. doi: 10.1016/s1070-3241(16)30212-7.
68. Hickey ML, Kleefield SF, Pearson SD, Hassan SM, Harding M, Haughie P, Lee TH, Brennan TA. Payer-hospital collaboration to improve patient satisfaction with hospital discharge. Jt Comm J Qual Improv. 1996;22(5):336–344. doi: 10.1016/s1070-3241(16)30237-1.
69. Elf M, Putilova M, von Koch L, Ohrn K. Using system dynamics for collaborative design: a case study. BMC Health Serv Res. 2007;7:123. doi: 10.1186/1472-6963-7-123.
70. Campese C. Development and implementation of a pain management program. AORN J. 1996;64(6):931–940. doi: 10.1016/S0001-2092(06)63604-1.
71. Costello JL, Torowicz DL, Yeh TS. Effects of a pharmacist-led pediatrics medication safety team on medication-error reporting. Am J Health-Syst Ph. 2007;64(13):1422. doi: 10.2146/ajhp060296.
72. Weeks WB, Mills PD, Dittus RS, Aron DC, Batalden PB. Using an improvement model to reduce adverse drug events in VA facilities. Jt Comm J Qual Improv. 2001;27(5):243–254. doi: 10.1016/s1070-3241(01)27021-7.
73. Cimino MA, Kirschbaum MS, Brodsky L, Shaha SH. Assessing medication prescribing errors in pediatric intensive care units. Pediatr Crit Care Med. 2004;5(2):124. doi: 10.1097/01.PCC.0000112371.26138.E8.
74. Adachi W, Lodolce A. Use of failure mode and effects analysis in improving the safety of iv drug administration. Am J Health-Syst Ph. 2005;62(9):917–920. doi: 10.1093/ajhp/62.9.917.
75. Carey RG, Teeters JL. CQI case study: reducing medication errors. Jt Comm J Qual Improv. 1995;21(5):232–237. doi: 10.1016/s1070-3241(16)30144-4.
76. Apkon M, Leonard J, Probst L, DeLizio L, Vitale R. Design of a safer approach to intravenous drug infusions: failure mode effects analysis. Qual Saf Health Care. 2004;13(4):265–271. doi: 10.1136/qshc.2003.007443.
77. Sim TA, Joyner J. A multidisciplinary team approach to reducing medication variance. Jt Comm J Qual Patient Saf. 2002;28(7):403–409. doi: 10.1016/s1070-3241(02)28040-2.
78. Hasler S, McNutt R, Abrams R, Dimou C, Brill J, Rosen R, Reiner Y, Korla V, Buzyna L, Levin S. Characterizing adverse events as errors: example in a patient using steroids daily. Endocrinologist. 2001;11(6):451–455. doi: 10.1097/00019616-200111000-00004.
79. Farbstein K, Clough J. Improving medication safety across a multihospital system. Jt Comm J Qual Patient Saf. 2001;27(3):123–137. doi: 10.1016/s1070-3241(01)27012-6.
80. Rask KJ, Schuessler LD, Naylor V. A statewide voluntary patient safety initiative: the Georgia experience. Jt Comm J Qual Patient Saf. 2006;32(10):564–572. doi: 10.1016/s1553-7250(06)32074-0.
81. Frankel A, Gandhi TK, Bates DW. Improving patient safety across a large integrated health care delivery system. Int J Qual Health Care. 2003;15(Suppl 1):131–140. doi: 10.1093/intqhc/mzg075.
82. Allison MJ, Toy P. A quality improvement team on autologous and directed-donor blood availability. Jt Comm J Qual Improv. 1996;22(12):801–810. doi: 10.1016/s1070-3241(16)30284-x.
83. Korytkowski M, DiNardo M, Donihi AC, Bigi L, DeVita M. Evolution of a diabetes inpatient safety committee. Endocr Pract. 2006;12:91–99. doi: 10.4158/EP.12.S3.91.
84. Horbar JD, Carpenter JH, Buzas J, Soll RF, Suresh G, Bracken MB, Leviton LC, Plsek PE, Sinclair JC. Collaborative quality improvement to promote evidence based surfactant for preterm infants: a cluster randomised trial. Brit Med J. 2004;329(7473):1004. doi: 10.1136/bmj.329.7473.1004.
  85. Bartlett J, Cameron P, Cisera M. The Victorian emergency department collaboration. Int J Qual Health Care. 2002;14(6):463. doi: 10.1093/intqhc/14.6.463. [DOI] [PubMed] [Google Scholar]
  86. Berry BB, Geary DL, Jaff MR. A model for collaboration in quality improvement projects: implementing a weight-based heparin dosing nomogram across an integrated health care delivery system. Jt Comm J Qual Improv. 1998;24(9):459–469. doi: 10.1016/s1070-3241(16)30395-9. [DOI] [PubMed] [Google Scholar]
  87. Alberts KA, Bellander BM, Modin G. Improved trauma care after reorganisation: a retrospective analysis. Eur J Surg. 1999;165(5):426–430. doi: 10.1080/110241599750006640. [DOI] [PubMed] [Google Scholar]
  88. Bluth EI, Havrilla M, Blakeman C. Quality improvement techniques: value to improve the timeliness of preoperative chest radiographic reports. Am J Roentgenol. 1993;160(5):995–998. doi: 10.2214/ajr.160.5.8470615. [DOI] [PubMed] [Google Scholar]
  89. Gall K. Improving admission and discharge: quality improvement teams. Nurs Manage. 1996;27(4):46–47. [PubMed] [Google Scholar]
  90. Gering J, Schmitt B, Coe A, Leslie D, Pitts J, Ward T, Desai P. Taking a patient safety approach to an integration of two hospitals. Jt Comm J Qual Patient Saf. 2005;31(5):258–266. doi: 10.1016/s1553-7250(05)31033-6. [DOI] [PubMed] [Google Scholar]
  91. Tunick PA, Etkin S, Horrocks A, Jeglinski G, Kelly J, Sutton P. Reengineering a cardiovascular surgery service. Jt Comm J Qual Improv. 1997;23(4):203–216. doi: 10.1016/s1070-3241(16)30310-8. [DOI] [PubMed] [Google Scholar]
  92. Gilutz H, Battler A, Rabinowitz I, Snir Y, Porath A, Rabinowitz G. The" door-to-needle blitz" in acute myocardial infarction: the impact of a CQI project. Jt Comm J Qual Improv. 1998;24(6):323–333. doi: 10.1016/s1070-3241(16)30384-4. [DOI] [PubMed] [Google Scholar]
  93. Benson R, Harp N. Using systems thinking to extend continuous quality improvement. Qual Lett Healthc Lead. 1994;6(6):17–24. [PubMed] [Google Scholar]
  94. Carboneau CE. Achieving faster quality improvement through the 24-hour team. J Healthc Qual. 1999;21(4):4–10. doi: 10.1111/j.1945-1474.1999.tb00969.x. [DOI] [PubMed] [Google Scholar]
  95. Cooperative-Cardiovascular-Project-Best-Practices-Working-Group. Improving care for acute myocardial infarction: experience from the Cooperative Cardiovascular Project. Jt Comm J Qual Improv. 1998;24(9):480–490. doi: 10.1016/s1070-3241(16)30397-2. [DOI] [PubMed] [Google Scholar]
  96. Heilig S. The team approach to change. Quality case study. Healthc Forum J. 1990;33(4):19–22. [PubMed] [Google Scholar]
  97. Isouard G. The key elements in the development of a quality management environment for pathology services. J Qual Clin Pract. 1999;19(4):202–207. doi: 10.1046/j.1440-1762.1999.00329.x. [DOI] [PubMed] [Google Scholar]
  98. Kallenbach AM, Rosenblum CJ. Carotid endarterectomy: creating the pathway to 1-day stay. Crit Care Nurse. 2000;20(4):23–26. [PubMed] [Google Scholar]
  99. Yancer DA, Foshee D, Cole H, Beauchamp R, de la Pena W, Keefe T, Smith W, Zimmerman K, Lavine M, Toops B. Managing capacity to reduce emergency department overcrowding and ambulance diversions. Jt Comm J Qual Patient Saf. 2006;32(5):239–245. doi: 10.1016/s1553-7250(06)32031-4. [DOI] [PubMed] [Google Scholar]
  100. Hobde BL, Hoffman PB, Makens PK, Tecca MB. Pursuing clinical and operational improvement in an academic medical center. Jt Comm J Qual Improv. 1997;23(9):468–484. doi: 10.1016/s1070-3241(16)30333-9. [DOI] [PubMed] [Google Scholar]
  101. Eavy ER, Conlon PF. Health center-supplier team approach to solving iv equipment problems. Am J Health-Syst Ph. 1993;50(2):275–279. [PubMed] [Google Scholar]
  102. Curley C, McEachern JE, Speroff T. A firm trial of interdisciplinary rounds on the inpatient medical wards: an intervention designed using continuous quality improvement. Med Care. 1998;36(8 Suppl):12. doi: 10.1097/00005650-199808001-00002. [DOI] [PubMed] [Google Scholar]
  103. Clemmer TP, Spuhler VJ, Oniki TA, Horn SD. Results of a collaborative quality improvement program on outcomes and costs in a tertiary critical care unit. Crit Care Med. 1999;27(9):1768. doi: 10.1097/00003246-199909000-00011. [DOI] [PubMed] [Google Scholar]
  104. Blackburn K, Neaton ME. Redesigning the care of carotid endarterectomy patients. J Vasc Nurs. 1997;15(1):8–12. doi: 10.1016/S1062-0303(97)90047-9. [DOI] [PubMed] [Google Scholar]
  105. Beesley J, Helton HD, Merkley A, Swalberg ED. Quality management series. How we implemented TQM in our laboratory and our blood bank. Clin Lab Manage Rev. 1993;7(3):217–227. [PubMed] [Google Scholar]
  106. Mazur L, Miller J, Fox L, Howland R. Variation in the process of pediatric asthma care. J Healthc Qual. 1996;18(3):11–17. doi: 10.1111/j.1945-1474.1996.tb00837.x. [DOI] [PubMed] [Google Scholar]
  107. Dugar B. Implementing CQI on a budget: a small hospital's story. Jt Comm J Qual Improv. 1995;21(2):57–69. doi: 10.1016/s1070-3241(16)30128-6. [DOI] [PubMed] [Google Scholar]
  108. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual. 2004;17(6):349–358. doi: 10.1108/09526860410557606. [DOI] [PubMed] [Google Scholar]
  109. Sanborn MD, Braman KS, Weinhold FE. Using multidisciplinary quality focus teams to develop 5-ht antagonist guidelines. Formulary. 1996;31(1):49–61. [Google Scholar]
  110. Ziegenfuss JT, Munzenrider RF, Fisher K, Noll S, Poss LK, Lartin-Drake J. Engineering quality through organization change: a study of patient care initiatives by teams. Am J Med Qual. 1998;13(1):44. doi: 10.1177/106286069801300106. [DOI] [PubMed] [Google Scholar]
  111. Cholewka PA. Reengineering the Lithuanian healthcare system: a hospital quality improvement initiative. J Healthc Qual. 1999;21(4):26–27, 30-23, 37. doi: 10.1111/j.1945-1474.1999.tb00973.x. [DOI] [PubMed] [Google Scholar]
  112. Isouard G. A quality management intervention to improve clinical laboratory use in acute myocardial infarction. Med J Aust. 1999;170(1):11–14. [PubMed] [Google Scholar]
  113. Wang FL, Lee LC, Lee SH, Wu SL, Wong CS. Performance evaluation of quality improvement team in an anesthesiology department. Acta Anaesthesiol Sin. 2003;41(1):13–19. [PubMed] [Google Scholar]
  114. Alemi F, Safaie FK, Neuhauser D. A survey of 92 quality improvement projects. Jt Comm J Qual Patient Saf. 2001;27(11):619–632. doi: 10.1016/s1070-3241(01)27053-9. [DOI] [PubMed] [Google Scholar]
  115. Reiley P, Pike A, Phipps M, Weiner M, Miller N, Stengrevics SS, Clark L, Wandel J. Learning from patients: a discharge planning improvement project. Jt Comm J Qual Improv. 1996;22(5):311–322. doi: 10.1016/s1070-3241(16)30235-8. [DOI] [PubMed] [Google Scholar]
  116. Frush KS, Alton M, Frush DP. Development and implementation of a hospital-based patient safety program. Pediatr Radiol. 2006;36(4):291–298. doi: 10.1007/s00247-006-0120-7. [DOI] [PubMed] [Google Scholar]
  117. Mays N, Pope C. Qualitative research in health care: Assessing quality in qualitative research. BMJ. 2000;320:50–52. doi: 10.1136/bmj.320.7226.50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  118. Fried B, Carpenter W. In: Continuous Quality Improvement in Health Care: Theory, Implementations and Applications. 3. McLaughlin C, Kaluzny A, editor. Toronto, ON: Jones and Bartlett Publishers; 2006. Understanding and improving team effectiveness in quality improvement; pp. 154–188. [Google Scholar]
  119. Carlow D. Can healthcare boards really make a difference? Healthcare Quarterly. 2010;13(1):46–54. doi: 10.12927/hcq.2013.21582. [DOI] [PubMed] [Google Scholar]
  120. Jiang H, Lockee C, Bass K. Board oversight of quality: Any difference in the process of care and mortality. J Healthc Manag. 2009;54(1):15–29. [PubMed] [Google Scholar]
  121. Øvretveit J, Bate P, Cleary P, Cretin S, Gustafson D, McInnes K, McLeod H, Molfenter T, Plsek P, Robert G. et al. Quality collaboratives: lessons from research. Qual Saf Health Care. 2002;11:345–351. doi: 10.1136/qhc.11.4.345. [DOI] [PMC free article] [PubMed] [Google Scholar]
  122. Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. 2004;140(11):897–901. doi: 10.7326/0003-4819-140-11-200406010-00011. [DOI] [PubMed] [Google Scholar]
  123. Needham D, Sinopoli D, Dinglas V, Bernholtz S, Korupolu R, Watson S, Lubomski L, Goeschedl C, Pronovost P. Improving data quality control in quality improvement projects. Int J Qual Health Care. 2009. pp. 1–6. [DOI] [PMC free article] [PubMed]
  124. Speroff T, James B, Nelson E, Headerick L, Brommels M. Guidelines for appraisal and publication of PDSA quality improvement. Qual Manag Health Care. 2004;13(1):33–39. doi: 10.1136/qshc.2004.009936. [DOI] [PubMed] [Google Scholar]


Supplementary Materials

Additional file 1

Tables S1 to S4. Table S1: Search strategies by database; Table S2: Distribution of references by electronic bibliographic source; Table S3: Data abstraction form; Table S4: Reviewed studies, differentiated by quality dimension.

(DOC, 430.5 KB)
