Abstract
PURPOSE
The purpose of this study was to evaluate a primary care practice–based quality improvement (QI) intervention aimed at improving colorectal cancer screening rates.
METHODS
The Supporting Colorectal Cancer Outcomes through Participatory Enhancements (SCOPE) study was a cluster randomized trial of New Jersey primary care practices. On-site facilitation and learning collaboratives were used to engage multiple stakeholders throughout the change process to identify and implement strategies to enhance colorectal cancer screening. Practices were assessed using quantitative data (medical records, surveys) and qualitative data (observations, interviews, and audio recordings) at baseline and a 12-month follow-up.
RESULTS
Comparison of the intervention and control arms of the 23 participating practices did not show statistically significant improvement in patients’ colorectal cancer screening rates. Qualitative analyses provide insights into practices’ QI implementation, including associations between how well leaders fostered team development and the extent to which team members felt psychologically safe. Successful QI implementation did not always translate into improved screening rates.
CONCLUSIONS
Although single-target, incremental QI interventions can be effective, practice transformation requires enhanced organizational learning and change capacities. The SCOPE model of QI may not be an optimal strategy if short-term, guideline-concordant numerical gains are the goal. Advancing the knowledge base of QI interventions requires future reports to address how and why QI interventions work rather than simply measuring whether they work.
Keywords: quality improvement, primary health care, cancer screening, facilitation, learning collaboratives
INTRODUCTION
Quality improvement (QI) approaches vary in the extent to which specific objectives, tools, resources, and change processes are provided and orchestrated by health systems or researchers. On one end of the spectrum, these features are externally imposed on participating organizations/subjects, such as providing physicians with flow sheets,1,2 checklists,3,4 or computer-based reminders,5–8 or distributing patient educational materials.6,9,10 Although such approaches can provide straightforward change mechanisms that ensure generalizability and treatment fidelity, they can pose problems when contextual factors undermine intervention fidelity11 or when motivation to sustain changes wanes once the researchers leave.12
On the other end of the spectrum are approaches where organizations/subjects engage in their own problem identification, and the processes for change emerge internally. These approaches move beyond filling a knowledge deficit on the part of patients or clinicians to enhancing the organization’s capacity and resources for change.13–17 Research on stakeholders—those individuals and groups who have an interest in and are influenced by the organization18—suggests that when stakeholders identify problems and generate their own solutions, they are more likely to engage in and sustain change processes.19 Without the engagement, motivation, and commitment of key stakeholders within an organization, even meritorious innovations may be abandoned before they have had the chance to be effective.20
We report the results of the Supporting Colorectal Cancer Outcomes through Participatory Enhancements (SCOPE) study, which combined features on both ends of this spectrum. The study imposed on participating primary care practices a specific goal—to improve colorectal cancer (CRC) screening rates—and a change process—a series of facilitated team meetings and learning collaboratives. The use of practice facilitators to guide QI efforts21 and of learning collaboratives to stimulate cross-practice learning22–26 has received growing attention as a robust method for translating evidence-based guidelines into practice. Within these parameters, the study tailored the change process, allowing practice members to generate their own QI objectives and strategies in hopes of enhancing practices’ capacity for change.
METHODS
SCOPE was a cluster randomized trial designed to evaluate the effectiveness of a tailored intervention on CRC screening rates in primary care practices. The study design incorporated a mixed-methods evaluation to assess practice-level variation in intervention fidelity and experiences.27 CRC screening was selected because of its documented benefits for reducing morbidity and mortality, and its proven cost-effectiveness.28–31
The unit of randomization and intervention was the practice, whereas the unit of observation of outcomes was patients within each practice. SQUIRE32,33 and CONSORT34 guidelines served as a framework for implementing the intervention and reporting findings. The study was approved by the University of Medicine and Dentistry of New Jersey Institutional Review Board, and informed consent was obtained from participating practice members and patients.
Intervention
The 6-month intervention included 3 integrated components: a multimethod assessment process (MAP),27,35 a reflective adaptive process (RAP),35–37 and learning collaboratives.23–26,38–40 Key study personnel included 6 doctoral- or masters-level professionals who served as both qualitative researchers and QI facilitators. Most had experience in qualitative data collection methodologies and received facilitation training to ensure consistent implementation of the intervention. Most did not have expertise in cancer screening.
During the 3-day assessment, study personnel systematically observed practices and conducted interviews with clinicians and staff.27 Study personnel used an observation template to guide data collection and ensure consistency. After the MAP, study personnel shifted into facilitator mode and prompted the formation of a RAP team in each practice, which drove the practice’s CRC screening improvement efforts. RAP teams engaged in 2 cycles of meetings, with each cycle consisting of approximately 4 to 6 meetings. Although the facilitators guided the teams through the change process,37,41,42 decision making and QI work rested with the practice members.
The intervention also included 2 day-long learning collaboratives held after the first and second RAP cycles to foster cross-practice learning.25 Two representatives from each practice, including at least 1 physician, were requested to attend. The curriculum included a mix of didactic presentations from experts on cancer screening, cancer survivorship, and organizational change, followed by reflective discussions. Key points included the value of all recommended screening modalities, colonoscopy as the only method that can prevent CRC, and barriers to CRC screening.
Practice and Patient Sample
Power calculations indicated that for a 2-group t test of follow-up screening rates conducted at the .05 significance level, a sample of 24 practices evenly split between control and intervention groups with 30 patients per practice would give 90% power to detect an increase in screening rates from 31% to 54% (a relative increase of approximately 75%). These calculations were based on estimates from previous data, with an average baseline screening rate of 31% and an intracluster correlation coefficient (ICC) of 0.38.35
Practices were recruited from the New Jersey Primary Care Research Network, as well as the general population of primary care practices in New Jersey. Practices that agreed to participate were randomized to either the intervention arm or control arm of the study.
A consecutive sample (a type of nonprobability sampling that seeks to include all accessible and eligible subjects as part of the sample)43 of 30 patients aged 50 years or older was recruited from waiting rooms of each practice at baseline and the 12-month follow-up, constituting independent samples of patients. Descriptions of recruitment are published elsewhere.44 New patients and those who could not read or write English or Spanish were excluded. Patients were surveyed, and screening information on various cancers was extracted from their medical records.
Data for Quantitative Outcomes
CRC screening rates and physician recommendation for CRC screening were determined by medical record review.45 Trained chart auditors used a standardized chart abstraction tool, and interrater reliability analyses were conducted as part of ongoing quality checks. Patients were considered to be up-to-date on CRC screening if there was documentation of any of the following tests within the recommended interval, based on 2005 recommendations from the American Cancer Society: fecal occult blood test (FOBT) within 1 year, sigmoidoscopy or barium enema within 5 years, or colonoscopy within 10 years.46 Information was not collected on whether the tests were done for screening or for diagnosis of symptoms or abnormal physical findings. Because patients and practice members were not blinded to the focus of the study, we excluded data from the day of patient recruitment to minimize potential Hawthorne effects.
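The up-to-date determination above is essentially a lookup of documented test dates against fixed intervals. The sketch below is a minimal illustration of that rule, not the study’s chart abstraction tool; the record layout, field names, and date handling are assumptions made for the example.

```python
from datetime import date
from typing import Dict, Optional

# Recommended intervals (years) from the 2005 American Cancer Society
# guidance cited above; test names and record layout are illustrative only.
SCREENING_INTERVALS_YEARS = {
    "fobt": 1,            # fecal occult blood test
    "sigmoidoscopy": 5,
    "barium_enema": 5,
    "colonoscopy": 10,
}

def up_to_date(last_test_dates: Dict[str, Optional[date]], reference_date: date) -> bool:
    """Return True if any documented test falls within its recommended interval."""
    for test, interval_years in SCREENING_INTERVALS_YEARS.items():
        last = last_test_dates.get(test)
        if last is None:
            continue
        # Approximate "within N years" as N * 365.25 days.
        if (reference_date - last).days <= interval_years * 365.25:
            return True
    return False

# Example: a colonoscopy 8 years before the audit date keeps this patient up-to-date.
print(up_to_date({"colonoscopy": date(2004, 6, 1)}, reference_date=date(2012, 3, 15)))  # True
```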
Statistical Analysis
Percentages summarized the distributions of patient and practice characteristics. The percentages of (1) patients for whom the practice met screening guidelines and (2) patients with appropriate screening or recommendation for screening in the medical chart were calculated at baseline and follow-up for each group (intervention and control). ICCs for these outcomes at baseline were calculated. An intent-to-treat analysis assessed the main effect of the intervention by comparing the odds of improvement for intervention practices with those for control practices. Specifically, within each group a Mantel-Haenszel common odds ratio was estimated stratifying by practice, thus accounting for clustering of responses within practices. A Z test was then used to assess whether the log-odds of improvement differed significantly between groups. A Breslow-Day test assessed homogeneity across practices in the odds of improvement within each group. Sensitivity analyses included 2-group t tests comparing the average improvement in screening rates between groups, measured within practice as a difference in proportion screened at follow-up minus that at baseline. This approach follows in principle that described by Donner and Klar47 for follow-up screening rates when not controlling for baseline. All analyses were conducted using SAS software (SAS Institute Inc).
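As a rough sketch of the analytic steps described above (not the authors’ SAS implementation), the fragment below computes a Mantel-Haenszel common odds ratio of improvement stratified by practice, a Breslow-Day-type homogeneity test, and a Z test comparing the log-odds of improvement between arms. The per-practice count layout and function names are assumptions, and statsmodels’ StratifiedTable is used in place of SAS procedures.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import StratifiedTable

def odds_of_improvement(per_practice_counts):
    """per_practice_counts: list of (screened_fu, n_fu, screened_base, n_base)
    tuples, one per practice in one study arm (hypothetical layout).
    Returns the Mantel-Haenszel common OR (follow-up vs baseline),
    its 95% CI, and a Breslow-Day-type homogeneity P value."""
    tables = []
    for s_fu, n_fu, s_b, n_b in per_practice_counts:
        # 2x2 stratum: rows = (follow-up, baseline), columns = (screened, not screened)
        tables.append(np.array([[s_fu, n_fu - s_fu],
                                [s_b, n_b - s_b]]))
    st = StratifiedTable(tables)
    lcb, ucb = st.oddsratio_pooled_confint()
    return st.oddsratio_pooled, (lcb, ucb), st.test_equal_odds().pvalue

def intervention_effect_p(or_ctrl, ci_ctrl, or_int, ci_int):
    """Z test on the difference in log-odds of improvement between arms,
    with standard errors recovered from the 95% CIs."""
    se_c = (np.log(ci_ctrl[1]) - np.log(ci_ctrl[0])) / (2 * 1.96)
    se_i = (np.log(ci_int[1]) - np.log(ci_int[0])) / (2 * 1.96)
    z = (np.log(or_int) - np.log(or_ctrl)) / np.sqrt(se_c ** 2 + se_i ** 2)
    return 2 * stats.norm.sf(abs(z))

# Sensitivity analysis: a 2-group t test on within-practice changes in the
# proportion screened (follow-up minus baseline), eg,
# stats.ttest_ind(changes_intervention, changes_control)
```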
Data for Qualitative Assessments
Qualitative data included MAP field notes and audio-taped RAP and learning collaborative meetings. Field notes of RAP meetings and learning collaboratives were written to capture elements not available from audio-recordings, such as group dynamics. Six- and 12-month follow-up visits were completed to assess longer term effects of the intervention. Data were de-identified to ensure confidentiality.
Qualitative Analysis
An immersion/crystallization technique was used to analyze the qualitative data.48 Descriptive case summaries were written for each practice and discussed in detail with the coauthors to identify initial patterns and themes. During this analytic process, 6 characteristics emerged as key contributing factors for the teams’ QI implementation: (1) team structure, defined as consistency of RAP team membership; (2) leadership, defined as how well formal practice leaders fostered team development and participated in QI efforts; (3) engagement, defined as participation by team members in the RAP meeting discussions and QI efforts; (4) psychological safety, defined as evidence of interpersonal risk-taking, such as voicing dissenting opinions or critical perspectives on QI efforts; (5) intracommunication, defined as communication among RAP team members regarding QI efforts; and (6) intercommunication, defined as communication between the RAP team and the rest of the practice regarding QI efforts. Each practice was then ranked along a continuum of strong, moderate, or weak on each characteristic. Implementation characteristics were explored using a comparative case study analysis. Any discrepancies in how the coauthors interpreted the findings were discussed to reach consensus.
RESULTS
Twenty-five practices consented to participate and were randomized to either the intervention arm (n = 12) or control arm (n = 13) (Supplemental Figure 1, available at http://annfammed.org/content/11/3/220/suppl/DC1). Early on, 2 practices closed (1 intervention, 1 control). To ensure an adequate intervention group sample, 1 control practice was randomly selected to be in the intervention group, thus providing a final sample size of 23 practices (12 intervention, 11 control). Across the 23 practices, the average number of physicians was 4 (range, 1–11). All were family or internal medicine practices, and only 1 was a residency practice (P16); 83% of practices were located in suburban settings. On average, practices had been in existence for 11.7 years.
Of the 12 intervention practices, 7 fully engaged in the intervention, 2 practices (P17 and P21) failed to participate in the intervention, and 3 others never fully engaged in developing collaborative processes as intended by the study (P7, P11, and P15) (Supplemental Table 1, available at http://annfammed.org/content/11/3/220/suppl/DC1).
At baseline, 80% (N = 791) of eligible patients consented to participate in the study; 67% (n = 723) of eligible patients participated at the 12-month follow-up (Supplemental Figure 2, available at http://annfammed.org/content/11/3/220/suppl/DC1). On average, 37% of patients had Medicare or Medicaid insurance. A total of 1,315 charts were audited for this study. Patient characteristics are presented in Table 1.
Table 1.
Patient Characteristics, Baseline and 12-Month Follow-up
Patient Characteristics | Baseline: Control No. (%) | Baseline: Intervention No. (%) | 12-Month Follow-up: Control No. (%) | 12-Month Follow-up: Intervention No. (%)
---|---|---|---|---
Age, y | ||||
50–59 | 133 (42) | 148 (42) | 128 (44) | 124 (36) |
60–69 | 98 (31) | 104 (29) | 109 (37) | 115 (33) |
≥70 | 89 (28) | 101 (29) | 57 (19) | 109 (31) |
Sex | ||||
Male | 118 (37) | 136 (39) | 112 (38) | 152 (44) |
Female | 202 (63) | 217 (61) | 182 (62) | 196 (56) |
Race | ||||
White | 189 (59) | 269 (76) | 187 (64) | 280 (80) |
Black | 96 (30) | 27 (8) | 75 (26) | 32 (9) |
Hispanic | 20 (6) | 41 (12) | 17 (6) | 22 (6) |
Other | 15 (5) | 16 (5) | 15 (5) | 14 (4) |
Insurance | ||||
Commercial | 135 (42) | 172 (49) | 154 (52) | 169 (49) |
Medicare | 123 (38) | 129 (37) | 92 (31) | 135 (39) |
Other | 62 (19) | 52 (15) | 48 (16) | 44 (13) |
Education level | ||||
Less than high school | 50 (16) | 34 (10) | 35 (12) | 27 (8) |
High school diploma or some college | 147 (46) | 171 (49) | 135 (46) | 189 (55) |
College or graduate school degree | 123 (38) | 143 (41) | 123 (42) | 129 (37) |
Self-rated health | ||||
Excellent-good | 184 (58) | 227 (65) | 181 (63) | 221 (64) |
Fair-poor | 134 (42) | 123 (35) | 106 (37) | 122 (36) |
Smoking status | ||||
Current | 45 (14) | 27 (8) | 33 (11) | 37 (11) |
Never | 185 (58) | 216 (61) | 175 (60) | 200 (58) |
Former | 88 (28) | 109 (31) | 84 (29) | 109 (32) |
Body mass index | ||||
Underweight | 2 (0.7) | 2 (0.6) | 5 (2) | 2 (0.6) |
Normal | 75 (25) | 87 (26) | 67 (24) | 71 (21) |
Overweight | 94 (31) | 120 (36) | 98 (35) | 129 (38) |
Obese | 135 (44) | 126 (38) | 112 (40) | 134 (40) |
Years enrolled in practice | ||||
≤1 | 84 (26) | 79 (22) | 53 (18) | 58 (17) |
2–4.9 | 90 (28) | 88 (25) | 99 (34) | 102 (29) |
5–9.9 | 109 (34) | 122 (35) | 104 (35) | 117 (34) |
≥10 | 37 (12) | 64 (18) | 38 (13) | 71 (20) |
Visits in last 24 months | ||||
<5 | 121 (38) | 96 (27) | 108 (37) | 96 (28) |
5–8 | 103 (32) | 109 (31) | 94 (32) | 130 (37) |
9–12 | 61 (19) | 84 (24) | 47 (16) | 67 (19) |
≥13 | 35 (11) | 64 (18) | 45 (15) | 55 (16) |
Quantitative Findings
Baseline CRC screening rates by practice ranged from 14% to 93%, with the average being 46%. At baseline, the outcomes (whether patients were appropriately screened or whether they received a screening recommendation and screening) had ICCs of 0.18 and 0.19, respectively.
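The ICCs quantify how strongly screening status clusters within practices. The study does not specify its estimator; as one common possibility, a one-way ANOVA estimator for a clustered binary outcome is sketched below, with per-practice outcome arrays as a hypothetical input format.

```python
import numpy as np

def anova_icc(practice_outcomes):
    """One-way ANOVA estimator of the intracluster correlation for a 0/1 outcome.
    practice_outcomes: list of arrays, one per practice, of patient-level indicators.
    Shown for illustration only; the study does not report which estimator it used."""
    k = len(practice_outcomes)
    sizes = np.array([len(g) for g in practice_outcomes], dtype=float)
    total_n = sizes.sum()
    grand_mean = np.concatenate(practice_outcomes).mean()
    ss_between = sum(n * (np.mean(g) - grand_mean) ** 2
                     for n, g in zip(sizes, practice_outcomes))
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in practice_outcomes)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (total_n - k)
    # Effective cluster size, adjusted for unequal practice samples.
    n0 = (total_n - (sizes ** 2).sum() / total_n) / (k - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Example with two small hypothetical practices:
print(anova_icc([np.array([1, 1, 0, 1]), np.array([0, 0, 1, 0])]))  # 0.20
```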
The percentage of patients appropriately screened for CRC decreased among control practices (43% to 38%) and increased among intervention practices (49% to 53%). The percentage of patients screened or receiving physician recommendations decreased from 62% to 58% in control practices and increased from 67% to 71% among intervention practices. These differences were not statistically significant, however (Tables 2 and 3).
Table 2.
Patients Screened, Chart Audit Data
Measure | Control Practices (n=11) | Intervention Practices (n=12) |
---|---|---|
Odds of improvement | ||
OR (95% CI)a | 0.80 (0.58–1.12) | 1.17 (0.86–1.59) |
OR=1, P valueb | .18 | .32 |
Breslow-Day, P value | <.001 | .001 |
Ratio of ORsc | – | 1.45 |
Equal ORs, P value | – | .10 |
Change in performance | ||
Average difference of proportions (95% CI)d | −0.05 (−0.20 to 0.09) | 0.04 (−0.09 to 0.17) |
Change within group, P value | .44 | .56 |
Intervention effect, P value | – | .33 |
a OR = odds ratio, interpreted as the odds of screening at follow-up relative to the odds of screening at baseline.
b P value testing the null hypothesis of no improvement within group.
Breslow-Day P value tests whether the odds of improvement are homogeneous across practices within a group.
c Calculated as the odds of improvement under the intervention relative to the odds of improvement for the control group.
d For each practice, change was measured as the proportion screened at follow-up minus the proportion screened at baseline.
Table 3.
Patients Screened or Screening Recommended, Chart Audit Data
Measure | Control Practices (n=11) | Intervention Practices (n=12) |
---|---|---|
Odds of improvement | ||
OR (95% CI)a | 0.82 (0.59 to 1.15) | 1.21 (0.87 to 1.68) |
OR=1, P valueb | .24 | .25 |
Breslow-Day, P value | <.001 | .008 |
Ratio of ORsc | – | 1.47 |
Equal ORs, P value | – | .11 |
Change in performance | ||
Average difference of proportions (95% CI)d | −0.05 (−0.23 to 0.13) | 0.04 (−0.08 to 0.15) |
Change within group, P value | .59 | .48 |
Intervention effect, P value | – | .40 |
CI = confidence interval.
a OR = odds ratio, interpreted as the odds of screening at follow-up relative to the odds of screening at baseline.
b P value testing the null hypothesis of no improvement within group.
Breslow-Day P value tests whether the odds of improvement are homogeneous across practices within a group.
c Calculated as the odds of improvement under the intervention relative to the odds of improvement for the control group.
d For each practice, change was measured as the proportion screened at follow-up minus the proportion screened at baseline.
Within each arm, practices were heterogeneous with respect to changes in the odds of screening (Breslow-Day test P <.001 for control and P = .001 for intervention practices). When examining screening modalities, FOBT use decreased substantially among the intervention practices (Table 4).
Table 4.
All Patients, Chart Audit Data, Breakdown of Screening Modalities
Screening | Baseline: Control No. (%)a | Baseline: Intervention No. (%)b | Follow-up: Control No. (%)c | Follow-up: Intervention No. (%)d
---|---|---|---|---
Total screened | 136 (43) | 174 (49) | 111 (38) | 183 (53) |
Colonoscopy only | 114 (84) | 139 (80) | 95 (86) | 164 (80) |
FOBT only | 5 (4) | 22 (13) | 6 (5) | 6 (3) |
Colonoscopy + FOBT | 11 (8) | 11 (6) | 10 (9) | 13 (7) |
Sigmoidoscopy only | 6 (4) | 2 (1) | 0 (0) | 0 (0) |
Screened or recommended | 197 (62) | 236 (67) | 170 (58) | 246 (71) |
FOBT = fecal occult blood test.
a 11 practices, 320 patients.
b 12 practices, 353 patients.
c 11 practices, 294 patients.
d 12 practices, 348 patients.
This change was largely due to a single practice (P10) that increased its colonoscopy rates but dramatically reduced its use of FOBT.
Qualitative Findings
The quantitative analysis revealed considerable variation in screening rate changes across practices; therefore, we conducted a qualitative analysis to understand the context of practices’ QI implementation to shed light on factors contributing to this variation. Of the 12 intervention practices, 7 were high performers based on rankings of moderate to strong on all or most of the QI implementation characteristics (Table 5).
Table 5.
Qualitative Assessment of Quality Improvement Implementation (Intervention Practices)
Practice | Team Structure | Leadership | Engagement | Psychological Safety | Intra-communication | Inter-communication | Baseline CRC Screening Rate (%) | 12-Month Follow-up CRC Screening Rate (%)
---|---|---|---|---|---|---|---|---
P2a | Strong | Moderate | Strong | Strong | Strong | Moderate | 14 | 30 |
P7 | Strong | Weak | Moderate | Weak | Moderate | Weak | 53 | 73 |
P8a | Strong | Moderate | Strong | Moderate | Moderate | Weak | 37 | 52 |
P10a | Strong | Moderate | Moderate | Moderate | Strong | Strong | 71 | 33 |
P11 | Weak | Weak | Moderate | Weak | Moderate | NA | 54 | 66 |
P15 | Moderate | Weak | Moderate | Weak | Moderate | Weak | 50 | 67 |
P16a | Strong | Strong | Strong | Strong | Strong | Weak | 43 | 48 |
P17 | – | – | – | – | – | – | 41 | 10 |
P19a | Strong | Strong | Strong | Strong | Strong | NA | 52 | 44 |
P21 | – | – | – | – | – | – | 38 | 56 |
P22a | Strong | Weak | Moderate | Moderate | Moderate | Weak | 47 | 71 |
P23a | Strong | Moderate | Strong | Strong | Strong | Weak | 93 | 86 |
CRC = colorectal cancer; NA = not applicable.
a High-performing practice.
Three practices (P7, P11, and P15) were low performers based on the ranking of weak to moderate on most of the QI implementation characteristics. Overall, most had moderate to strong team structure, engagement, and intracommunication; most practices also evidenced weak intercommunication. Despite repeated attempts by study personnel to address participation challenges, 2 practices (P17, P21) failed to engage in the intervention at all. In both cases there was evidence that poor communication between practice leaders and other members led to misunderstandings about their participation. Also, practice members reported being overwhelmed with co-occurring events in the practice, such as electronic health record implementation or practice ownership changes.
One pattern was evident across the high- and low-performing practices. The high-performing practices had moderate to strong leadership (except for P22) and psychological safety for this QI intervention, whereas all 3 of the low-performing practices evidenced weak leadership and psychological safety. Although this finding does not signify a causal relationship, it suggests an association between how well leaders fostered team development and the extent to which team members felt safe to engage in the change process.
Using the qualitative 12-month follow-up data, we also found evidence suggesting that the high-performing practices improved their capacity for change more so than the low-performing practices. Three of the high-performers continued to use the team-based RAP model in an adapted form (eg, RAP meetings integrated into practice meetings), and there was evidence that 2 of these practices applied this model to other (non–CRC-focused) QI efforts. In contrast, none of the low performers continued a reflective adaptive process in any form or used the model for other improvements. Major practice changes (such as ownership changes and practice leader turnover) that had occurred by the 12-month follow-up in several practices may have affected their use of the SCOPE model after the intervention ended.
While the preceding results speak to variation across practices, we also explored the within-practice congruity of qualitative and quantitative results. One anomaly was evident in practices that did well on the QI implementation characteristics but poorly on their CRC screening rates; the converse (practices that implemented poorly yet improved their screening rates) constituted a second anomaly. We therefore selected 3 case studies to further explicate connections between practices’ implementation processes and their changes in screening rates. P2 illustrates what we hoped for in an intervention study: a practice with excellent implementation characteristics and an increase in its CRC screening rate (Supplemental Appendix 1, available at http://annfammed.org/content/11/3/220/suppl/DC1). This practice had strong relationships, as evidenced by a cohesive team, open discussions of proposed QI changes, and a psychologically safe environment where practice members felt comfortable critically reflecting on the current state of the practice. Data and peer stimulus proved to be powerful motivators for its improvement.
In contrast, P10 had a moderate to strong QI implementation yet experienced a dramatic decrease in their CRC screening rates, from 71% to 33% (Supplemental Appendix 2, http://annfammed.org/content/11/3/220/suppl/DC1). For most of the intervention period, the RAP team addressed practice “chaos” and communication issues, and little time was devoted to direct CRC improvement efforts. Although there are likely multiple factors contributing to this decrease, it is plausible that the intervention had an unintended effect on the practice’s screening rates, suggesting that this intervention may have had differing effects—beneficial and adverse—on different types of practices.
Lastly, P15 illustrates a practice that was ranked as weak to moderate on QI implementation but experienced an improvement in CRC screening rates, from 50% to 67% (Supplemental Appendix 3, http://annfammed.org/content/11/3/220/suppl/DC1). The primary physician in the practice acknowledged that being involved in this project increased his diligence in screening for CRC. Ultimately, the primary physician’s concerted efforts to screen better seemed sufficient to positively affect the practice’s screening rates.
DISCUSSION
Project SCOPE tested an intervention model that used a facilitated, team-based approach to improve CRC screening rates in primary care settings. Facilitators tailored the change efforts according to the particular culture and perceived needs of each practice. Although CRC screening rates were emphasized as the focus of the intervention, specific QI objectives and plans rested with the practice members. A central assumption was that getting multiple stakeholder buy-in through this approach would enhance motivation and commitment to the change process. An explicit goal of the study was to develop a practice change model (using CRC screening as an initial focus) that could then be replicated for ongoing change efforts.
Most SCOPE practices were successful in several QI implementation characteristics, including team structure, team member engagement, and intrateam communication. Except for 2 practices that opted not to participate in the intervention, all others formed a RAP team, sent representatives to the learning collaboratives, and worked on 1 or more QI plans, suggesting that this type of intervention model is viable in primary care settings of varying size and structure. Variation between high- and low-performing practices, however, was evident in how well leaders fostered team development and the extent to which team members felt psychologically safe to take risks during the change process. Most teams were not adept at communicating their QI plans to the rest of the practice, regardless of practice size. Moreover, only a few practices adapted the RAP model for use as an ongoing method to identify and work on continuous QI efforts. Organizational disruptions likely affected the progression of several practices’ change capacity. Previous analyses have explored, in depth, additional aspects of QI implementation from the SCOPE trial.25,49
Despite certain successes regarding practices’ QI implementation, overall SCOPE did not yield statistically significant improvements in CRC screening rates. Importantly, the integration of qualitative methods into the study design allowed us to answer recent calls to explore the implementation context of null trials.21 Several lessons learned from SCOPE are important to consider for future interventions.
One lesson was that allowing RAP teams to choose their own QI objectives and plans meant that some practices chose issues that were not directly related to CRC screening. RAP teams that focused on poor communication or chaos in the practice viewed these issues to be of sufficient priority that they needed to be addressed before the teams could delve into concrete clinical improvements. Facilitators prompted teams to keep CRC screening in the foreground, but their discussions often maintained a broader focus on practice dynamics and operations. Although potentially beneficial for the organization in other ways, this aspect of tailoring likely diminished their CRC screening improvements in the time frame of the study.
Another lesson pertained to the notion of spread. RAP teams typically included 1 clinician, and there was variability in how well these leaders fostered a climate of change for the entire practice. Other clinicians in a practice tended not to be aware of or engaged in the CRC improvement efforts, and RAP teams tended to communicate poorly with the rest of the practice regarding QI plans. Even though facilitators emphasized the importance of practice-wide communication regarding their QI efforts, ultimately this responsibility rested with the teams. As a result, segments of a practice improved their capacity for change and CRC screening efforts, but most practices were unsuccessful in effecting organization-wide improvements.
Additionally, because RAP teams were made up of diverse practice members where differing levels of administrative power were evident, teams needed a sense of psychological safety and trust50,51 that supported critical reflection of the change process.49 Facilitators helped foster a safe environment, but it often hinged on the role of practice leaders. High-performing practices had strong to moderate leadership and psychological safety, whereas low-performing practices were weak in both of these areas. Future interventions must pay attention to the role of practice leaders given their influence on team dynamics and the change process. Interventions using a team-based approach may benefit from incorporating instructional components for practice leaders to enhance their knowledge and skills in leading QI teams.
Lastly, SCOPE employed generalist facilitators who had expertise in organizational change and group process but not in cancer screening. As such, facilitators were not relied upon for giving practices CRC solutions. Instead, practices were encouraged to develop their organizational learning capacity to identify and implement their own solutions. Although the facilitators were well-suited to concentrate on the change process—eg, prompting teams to confront and deal with barriers to change—having facilitators who also had expertise in the target condition likely could have had more direct benefits on practices’ CRC screening efforts.
We recognize several limitations to our study. Power to detect an intervention effect was limited by the small number of participating practices, which was compounded by the failure of 2 practices to participate in the intervention as prescribed. This lack of fidelity would lead to attenuation of the intervention effect and, thus, reduced power. Moreover, the higher-than-expected average baseline CRC screening rates limited our ability to detect a significant increase in screening. We also would expect that the volunteer practices in our sample were more motivated to improve CRC screening rates than practices in general. Lastly, based on previous work showing that the level of uncertainty associated with a disease is a critical factor for intervention design,52 we acknowledge that CRC screening, as a relatively well-defined target, may not require an intensive, team-based model of practice improvement53–55; this mismatch may have affected the change process and, consequently, the intended goal of extrapolating the change model to other diseases or areas of improvement.
Various QI approaches and methods can be effective in achieving targeted outcomes. Yet practice transformation, such as that envisioned by the patient-centered medical home, cannot be realized through only a series of incremental QI projects. Developing greater organizational learning and change capacities is required. The SCOPE intervention sought to bridge the gap between an externally orchestrated, single-target intervention and full-scale, emergent practice transformation. The response of practices to the SCOPE intervention suggests that this QI approach (ie, MAP/RAP, including facilitated team meetings and learning collaboratives) may not be an optimal strategy for single-target interventions, particularly if short-term, guideline-concordant numerical gains are the goal. The MAP/RAP approach provides considerable flexibility in the improvement focus a practice can take, as well as the strategies to get there. If improving performance measures for a preselected target, such as CRC screening rates, is the focus, perhaps a more traditional targeted continuous QI approach would be more appropriate. Nevertheless, because there are so many potential disease-specific and patient-centered targets in need of improvement in primary care, relying on a series of single-target QI interventions may not be realistic.
Methodologically, the SCOPE study shows that quantitative and qualitative findings should not be seen as a way to merely confirm or disconfirm each other. In some cases, SCOPE results reveal discordance in the 2 types of data, which might tempt us to think one or the other is wrong. Rather, integrating both views into an overarching analysis of the study provides a richer understanding of the intervention. Advancing the knowledge base of QI interventions requires future reports to address how and why QI interventions work rather than simply measuring whether they work.
Footnotes
Conflicts of interest: authors report none.
Funding support: This research was supported by a grant from the National Cancer Institute (R01 CA112387-01). Dr Crabtree’s time was supported in part by a senior investigator grant from the National Cancer Institute (K05 CA140237).
References
- 1. Frame PS, Kowulich BA, Llewellyn AM. Improving physician compliance with a health maintenance protocol. J Fam Pract. 1984;19(3):341–344.
- 2. Madlon-Kay DJ. Improving the periodic health examination: use of a screening flow chart for patients and physicians. J Fam Pract. 1987;25(5):470–473.
- 3. Cheney C, Ramsdell JW. Effect of medical records’ checklists on implementation of periodic health measures. Am J Med. 1987;83(1):129–136.
- 4. Shannon KC, Sinacore JM, Bennett SG, Joshi AM, Sherin KM, Deitrich A. Improving delivery of preventive health care with the comprehensive annotated reminder tool (CART). J Fam Pract. 2001;50(9):767–771.
- 5. Ornstein SM, Garr DR, Jenkins RG, Musham C, Hamadeh G, Lancaster C. Implementation and evaluation of a computer-based preventive services system. Fam Med. 1995;27(4):260–266.
- 6. Sequist TD, Zaslavsky AM, Marshall R, Fletcher RH, Ayanian JZ. Patient and physician reminders to promote colorectal cancer screening: a randomized controlled trial. Arch Intern Med. 2009;169(4):364–371.
- 7. Litzelman DK, Dittus RS, Miller ME, Tierney WM. Requiring physicians to respond to computerized reminders improves their compliance with preventive care protocols. J Gen Intern Med. 1993;8(6):311–317.
- 8. Overhage JM, Tierney WM, McDonald CJ. Computer reminders to implement preventive care guidelines for hospitalized patients. Arch Intern Med. 1996;156(14):1551–1556.
- 9. Pignone M, Harris R, Kinsinger L. Videotape-based decision aid for colon cancer screening. A randomized, controlled trial. Ann Intern Med. 2000;133(10):761–769.
- 10. Pye G, Christie M, Chamberlain JO, Moss SM, Hardcastle JD. A comparison of methods for increasing compliance within a general practitioner based screening project for colorectal cancer and the effect on practitioner workload. J Epidemiol Community Health. 1988;42(1):66–71.
- 11. Cohen DJ, Crabtree BF, Etz RS, et al. Fidelity versus flexibility: translating evidence-based research into practice. Am J Prev Med. 2008;35(5 Suppl):S381–S389.
- 12. Pluye P, Potvin L, Denis JL. Making public health programs last: conceptualizing sustainability. Eval Program Plann. 2004;27(4):453–453.
- 13. Cohen D, McDaniel RR Jr, Crabtree BF, et al. A practice change model for quality improvement in primary care practice. J Healthc Manag. 2004;49(3):155–168; discussion 169–170.
- 14. Plsek PE, Greenhalgh T. Complexity science: the challenge of complexity in health care. BMJ. 2001;323(7313):625–628.
- 15. Soubhi H, Bayliss EA, Fortin M, et al. Learning and caring in communities of practice: using relationships and collective learning to improve primary care for patients with multimorbidity. Ann Fam Med. 2010;8(2):170–177.
- 16. Soubhi H, Colet NR, Gilbert JH, et al. Interprofessional learning in the trenches: fostering collective capability. J Interprof Care. 2009;23(1):52–57.
- 17. Wenger E, McDermott R, Snyder WM. Cultivating Communities of Practice. Boston, MA: Harvard Business School Publishing; 2002.
- 18. Blair JD, Fottler MD. Challenges in Health Care Management: Strategic Perspectives for Managing Key Stakeholders. San Francisco, CA: Jossey-Bass; 1990.
- 19. Drach-Zahavy A, Somech A. Understanding team innovation: the role of team processes and structures. Group Dyn. 2001;5(2):111–123.
- 20. Agrell A, Gustafson R. Innovation and creativity in work groups. In: West M, ed. Handbook of Work Group Psychology. London, UK: Wiley; 1996:314–343.
- 21. Baskerville NB, Liddy C, Hogg W. Systematic review and meta-analysis of practice facilitation within primary care settings. Ann Fam Med. 2012;10(1):63–74.
- 22. Dückers ML, Spreeuwenberg P, Wagner C, Groenewegen PP. Exploring the black box of quality improvement collaboratives: modelling relations between conditions, applied changes and outcomes. Implement Sci. 2009;4:74.
- 23. Lindenauer PK. Effects of quality improvement collaboratives. BMJ. 2008;336(7659):1448–1449.
- 24. Schouten LM, Hulscher ME, van Everdingen JJ, Huijsman R, Grol RP. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336(7659):1491–1494.
- 25. Shaw EK, Chase SM, Howard J, Nutting PA, Crabtree BF. More black box to explore: how quality improvement collaboratives shape practice change. J Am Board Fam Med. 2012;25(2):149–157.
- 26. Vos L, Dückers ML, Wagner C, van Merode GG. Applying the quality improvement collaborative method to process redesign: a multiple case study. Implement Sci. 2010;5:19.
- 27. Crabtree BF, Miller WL, Stange KC. Understanding practice from the ground up. J Fam Pract. 2001;50(10):881–887.
- 28. Maciosek MV, Coffield AB, Edwards NM, Flottemesch TJ, Goodman MJ, Solberg LI. Priorities among effective clinical preventive services: results of a systematic review and analysis. Am J Prev Med. 2006;31(1):52–61.
- 29. Pinkowish MD. Promoting colorectal cancer screening: which interventions work? CA Cancer J Clin. 2009;59(4):215–217.
- 30. Shires DA, Divine G, Schum M, et al. Colorectal cancer screening use among insured primary care patients. Am J Manag Care. 2011;17(7):480–488.
- 31. Siegel R, Ward E, Brawley O, Jemal A. Cancer statistics, 2011: the impact of eliminating socioeconomic and racial disparities on premature cancer deaths. CA Cancer J Clin. 2011;61(4):212–236.
- 32. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE; SQUIRE development group. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ. 2009;338:a3152.
- 33. Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17(Suppl 1):i13–i32.
- 34. Campbell MK, Elbourne DR, Altman DG; CONSORT group. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328(7441):702–708.
- 35. Balasubramanian BA, Chase SM, Nutting PA, et al; ULTRA Study Team. Using Learning Teams for Reflective Adaptation (ULTRA): insights from a team-based change management strategy in primary care. Ann Fam Med. 2010;8(5):425–432.
- 36. Stroebel CK, McDaniel RR Jr, Crabtree BF, Miller WL, Nutting PA, Stange KC. How complexity science can inform a reflective process for improvement in primary care practices. Jt Comm J Qual Patient Saf. 2005;31(8):438–446.
- 37. Chase SM, Nutting PA, Crabtree BF. How to solve problems in your practice with a new meeting approach. Fam Pract Manag. 2010;17(2):31–34.
- 38. Ayers LR, Beyea SC, Godfrey MM, Harper DC, Nelson EC, Batalden PB. Quality improvement learning collaboratives. Qual Manag Health Care. 2005;14(4):234–247.
- 39. Mittman BS. Creating the evidence base for quality improvement collaboratives. Ann Intern Med. 2004;140(11):897–901.
- 40. Wilson T, Berwick DM, Cleary PD. What do collaborative improvement projects do? Experience from seven countries. Jt Comm J Qual Saf. 2003;29(2):85–93.
- 41. Shaw EK, Looney J, Chase S, et al. “In the moment”: the impact of intentional facilitators on group processes. Group Facilitation. 2011;10:4–16.
- 42. Looney JA, Shaw EK, Crabtree BF. Passing the baton: sustaining organizational change after the facilitator leaves. Group Facilitation. 2011;11:15–23.
- 43. Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB. Designing Clinical Research. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2007.
- 44. Felsen CB, Shaw EK, Ferrante JM, Lacroix LJ, Crabtree BF. Strategies for in-person recruitment: lessons learned from a New Jersey primary care research network (NJPCRN) study. J Am Board Fam Med. 2010;23(4):523–533.
- 45. Ferrante JM, Ohman-Strickland P, Hahn KA, et al. Self-report versus medical records for assessing cancer-preventive services delivery. Cancer Epidemiol Biomarkers Prev. 2008;17(11):2987–2994.
- 46. Nadel MR, Shapiro JA, Klabunde CN, et al. A national survey of primary care physicians’ methods for screening for fecal occult blood. Ann Intern Med. 2005;142(2):86–94.
- 47. Donner A, Klar N. Design and Analysis of Cluster Randomization Trials in Health Research. London, UK: Arnold; 2000.
- 48. Borkan J. Immersion/Crystallization. In: Crabtree B, Miller W, eds. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1999:179–194.
- 49. Shaw EK, Howard J, Etz RS, Hudson SV, Crabtree BF. How team-based reflection affects quality improvement implementation: a qualitative study. Qual Manag Health Care. 2012;21(2):104–113.
- 50. Edmondson AC. Learning from failure in health care: frequent opportunities, pervasive barriers. Qual Saf Health Care. 2004;13(Suppl 2):ii3–ii9.
- 51. Edmondson A. Speaking up in the operating room: how team leaders promote learning in interdisciplinary action teams. J Manage Stud. 2003;40(6):1419–1452.
- 52. Leykum LK, Parchman M, Pugh J, Lawrence V, Noël PH, McDaniel RR Jr. The importance of organizational characteristics for improving outcomes in patients with chronic disease: a systematic review of congestive heart failure. Implement Sci. 2010;5:66.
- 53. Jerant A, Kravitz RL, Rooney M, Amerson S, Kreuter M, Franks P. Effects of a tailored interactive multimedia computer program on determinants of colorectal cancer screening: a randomized controlled pilot study in physician offices. Patient Educ Couns. 2007;66(1):67–74.
- 54. Sarfaty M, Wender R. How to increase colorectal cancer screening rates in practice. CA Cancer J Clin. 2007;57(6):354–366.
- 55. Pignone MP, Lewis CL. Using quality improvement techniques to increase colon cancer screening. Am J Med. 2009;122(5):419–420.