Abstract
Colorectal cancer (CRC) screening is highly effective at reducing cancer-related morbidity and mortality, yet screening rates remain suboptimal. Evidence-based interventions can increase screening rates, particularly when they target multiple levels (e.g., patients, providers, health care systems). However, effective interventions remain underutilized. Thus, there is a pressing need to build capacity to select and implement multilevel CRC screening interventions. We report on formative research aimed at understanding how Federally Qualified Health Center (FQHC) staff select and implement CRC screening interventions, which will inform development of capacity-building strategies. We report qualitative findings from a study that used an explanatory, sequential, mixed methods design: a quantitative survey followed by a qualitative interview study. In-depth interviews were conducted with 28 staff from 14 FQHCs in 8 states. The Consolidated Framework for Implementation Research (CFIR) guided interview questions and data analysis. Related to the CFIR process domain, few respondents described conducting formal assessments of factors contributing to low screening rates prior to planning their interventions. Many described engaging champions, implementation leaders, and external change agents. Few described a systematic approach to executing implementation plans beyond conducting plan-do-study-act cycles. Reflection and evaluation consisted primarily of reviewing Uniform Data System performance measures. Findings also include themes related to factors influencing these implementation processes. Although FQHCs are implementing CRC screening interventions, they are not actively targeting the multilevel factors influencing their CRC screening rates. Our findings on gaps in FQHCs’ implementation processes will inform development of strategies to build capacity to select and implement multilevel CRC screening interventions.
Keywords: Consolidated Framework for Implementation Research, Colorectal cancer screening, Evidence-based interventions, Evidence-based decision-making
Community clinics are not using systematic processes to improve their colorectal cancer screening rates and would benefit from additional training, tools, and technical assistance.
Implications
Practice: The American Cancer Society, Centers for Disease Control and Prevention, and others are working to build Federally Qualified Health Centers’ (FQHCs’) capacity to adopt and implement multilevel colorectal cancer screening interventions. These efforts will be most effective if they target the gaps in FQHCs’ capacity and leverage existing partnerships and other capacity-building strategies identified in this study (see Table 3).
Policy: Funders who want to increase rates of colorectal cancer screening should provide funding to support FQHCs in conducting a comprehensive implementation process that begins with a needs assessment that then guides intervention selection, execution, and evaluation.
Research: Further research is needed to develop and test strategies to build FQHCs’ capacity to select and implement multilevel interventions. This study’s formative work can inform the development of capacity-building strategies.
INTRODUCTION
Colorectal cancer (CRC) screening is highly effective at reducing CRC-related morbidity and mortality [1,2], and yet screening rates remain suboptimal, particularly in underserved populations [3]. Fewer than two thirds of U.S. adults aged 50–75 are up to date with recommended CRC screening, and these rates are even lower among uninsured, racial/ethnic minority, and low socioeconomic status populations [3,4]. Research has documented factors that contribute to low screening rates at multiple levels including the patient (e.g., CRC knowledge, risk perception), provider (e.g., CRC screening knowledge), organization (e.g., electronic medical record (EMR) reminders), and community (e.g., access to diagnostic colonoscopies) [5,6]. Interventions have been tested and found to be effective at increasing CRC screening rates, particularly interventions that target multiple levels [7,8]. Many of these interventions are disseminated on the Centers for Disease Control and Prevention's (CDC) Community Guide and the National Cancer Institute’s Research-Tested Intervention Programs websites [9,10]. Despite the availability of evidence-based interventions, they remain underutilized [11–13]. Thus, a pressing need exists to build health care practitioners’ capacity to select and implement CRC screening interventions that target multiple levels. This article reports on qualitative research conducted by the Cancer Prevention and Control Research Network (CPCRN) to understand the process that Federally Qualified Health Center (FQHC) staff use to select and implement CRC screening interventions, with the goal of developing tailored capacity-building training and tools.
The CPCRN is a national network funded by the CDC and the National Cancer Institute whose member and affiliated centers collaborate in cross-center workgroups to accelerate the adoption and implementation of cancer prevention and control interventions [14,15]. The CPCRN created a cross-center workgroup to study the implementation of CRC screening interventions in FQHCs because FQHCs have broad reach into underserved populations, and fewer than 40% of their patients are up to date with recommended CRC screening (38.9%) [16]. The workgroup conducted a multistate survey to assess FQHCs’ use of available CRC screening interventions and implementation strategies [13]. In this study, workgroup members conducted in-depth interviews to explore the processes FQHC providers, administrators, and staff (referred to hereafter as “staff”) use to select and implement CRC screening interventions, and the factors that influence those processes. Study findings will inform efforts to increase FQHC staff’s capacity to select and implement CRC screening interventions that target multiple levels.
Conceptual framework
This study was guided by the Consolidated Framework for Implementation Research (CFIR), which identifies 39 factors within 5 domains (intervention characteristics, outer setting, inner setting, characteristics of individuals, and process) that influence implementation of interventions into practice [17]. The CFIR has been widely used as a framework for assessing factors that influence implementation in general and, more specifically, the implementation of cancer screening interventions in FQHCs and other types of community clinics [18–20]. These studies have documented the presence or absence of constructs in the CFIR process domain and their influence on the implementation of cancer screening interventions. This study is distinct in the depth of its focus on the CFIR process domain with the goal of understanding how FQHC staff plan, execute, and evaluate the implementation of CRC screening interventions and the factors that influence those processes. This study further contributes to existing research on the implementation of CRC screening in FQHCs through its inclusion of FQHCs in multiple states.
METHODS
Design
The qualitative research reported here was conducted as part of a study that used an explanatory, sequential, mixed methods design [21], starting with a quantitative online survey (findings previously reported [13]), followed by a qualitative in-depth interview study (findings reported here). The study team included members from CPCRN-funded centers in seven states (IA, KY, NC, OH, PA, SC, and WA) and an affiliated center in Florida. The study was approved by the institutional review board of each of the collaborating CPCRN centers’ universities.
Sample
In this study, 14 of 33 invited FQHCs agreed to participate (42.4% response rate). FQHCs were selected purposively from the 56 that participated in the prior survey [13], with the goal of selecting FQHCs whose survey findings indicated they were fully or partially implementing at least 1 CRC screening intervention at both the individual and organizational levels. Two additional FQHCs from a state that did not participate in the survey were included based on investigators’ knowledge of the high quality of their CRC screening implementation. For each selected FQHC, research team members contacted the Chief Executive Officer (CEO) or Medical Director, invited them to participate in the study, and asked them to identify one other individual with direct experience implementing CRC screening in their FQHC. A total of 28 participants including physicians, nurses, and administrators (e.g., Medical Director, Clinical Manager, CEO, Quality Improvement Director, Nurse Manager, Colon Cancer Prevention Coordinator) were interviewed after informed consent was given. Interviewees received a $100 gift card.
Data collection
Trained interviewers from each CPCRN center conducted phone or in-person interviews between June and October 2017. Interviewers followed a semi-structured interview guide that asked participants to describe their FQHC’s approach to each construct in the CFIR’s process domain: planning, engaging, executing, and reflecting and evaluating [17]. For each construct, participants also were asked to describe any challenges encountered and strategies taken to overcome those challenges. Sample questions included: “Describe the steps taken to implement your most successful CRC screening intervention.” “Describe any monitoring or evaluation to determine if the intervention was implemented as intended.” “What were some of the challenges encountered when implementing the intervention?” The interview guide (see Appendix) was pilot-tested with several staff from FQHCs other than those included in this study and was refined prior to initiating data collection.
Data analysis
Interviews were digitally recorded, transcribed verbatim, and imported into ATLAS.ti. We used a two-pronged data analytic process that included a core team of four qualitative researchers in North Carolina and a cross-center analysis team of researchers from five CPCRN-funded centers (IA, KY, NC, PA, and WA). Initial coding was done using directed content analysis [22]. The core team created codes for the interventions and the five CFIR domains. Intervention codes included interventions recommended by the Community Guide [10] and identified through a review of the literature [13]. New codes were inductively generated as needed to fully capture all interventions. The codes, based on CFIR domains, were sufficiently broad not to require inductive coding. The core team created a codebook, piloted the codebook with two randomly selected transcripts, reconciled coding differences, and fine-tuned the codebook’s definitions and decision rules. The core team then used the finalized codebook to code the remaining transcripts, with two researchers independently coding each transcript and meeting to discuss and reconcile differences via consensus.
Next, code reports were generated from ATLAS.ti for each of the CFIR domains and assigned to members of the cross-center analysis team for additional coding (using constructs within each of CFIR’s five domains). New codes were generated inductively to capture items not covered by the CFIR constructs. Two team members independently coded their assigned reports, reconciled coding differences via consensus, developed a summary report of themes that emerged within each code, and identified quotes to illustrate the findings. The cross-center analysis team, almost all of whom had participated in data collection, reviewed the themes. This provided an opportunity for the full team to confirm the coherence and credibility of identified themes.
RESULTS
Participants included 28 staff working in 14 FQHCs across 8 states, and most were either Medical Directors, CEOs, Chief Quality Officers, or Chief Nursing Officers of their respective centers. As summarized in Table 1, most FQHCs were implementing Community Guide interventions that targeted the patient level (one-on-one education, patient reminders, and small media), provider level (assessment and feedback), and organization level (patient navigators, reminder and recall systems) [10]. Participants from six FQHCs reported implementing mailed FIT (fecal immunochemical test) and FluFIT (combining FIT with annual flu shot) interventions, in addition to those recommended by the Community Guide.
Table 1.
Community Guide-recommended interventions | No. of FQHCs using intervention |
---|---|
Patient level | |
One-on-one patient education | 14 |
Patient reminders | 13 |
Small media | 11 |
Group education | 1 |
Provider level | |
Provider assessment and feedback | 9 |
Organizational level | |
Patient navigators | 9 |
Reminder and recall systems | 8 |
Other interventions | |
FluFIT and mailed FIT | 6 |
CRC colorectal cancer; FIT fecal immunochemical test; FQHC Federally Qualified Health Center.
As summarized in Table 2 and detailed below, thematic findings are organized within each of the five CFIR domains. Findings related to the CFIR process domain are described first, followed by factors that influenced those processes within the CFIR’s intervention, outer setting, inner setting, and individual domains.
Table 2.
CFIR domain and construct | Exemplar quotations |
---|---|
Process | |
Planning: Clinic #1, Respondent A | Well, it was pretty much Dr. [name removed] came here and said, “We’d like to do this. What do you think?” I talked to Dr. [name removed], and to the nurse manager, and the front desk, and the management team, and said, “Think we can implement this? Does this look like where the issues in there? Let’s do it.” |
Engaging: Clinic #1, Respondent A | I think maybe it was a push at the American Cancer Society to say, “Let’s work on colorectal.” She [referring to the ACS care manager] came to us with the FluFIT idea and we said, “Sounds good, let’s try it because we’re not doing good where we are.” That’s how we got started on that. |
Executing: Clinic #2, Respondent B | We did sort of a small batch for testing and then did the full-fledged mailed FIT in February of 2016. It was a long rollout process . . . I think that we did about a hundred mailings that first month. We then kind of saw how it went, and then I know that, my colleague before me, made some adjustments to the process following that mailing . . . |
Reflecting and evaluating: Clinic #3, Respondent C | I do look at county health rankings, I use UDS a lot, because I know we can compare. I look at us in comparison to the state or nationally. |
Characteristics of evidence-based interventions | |
Relative advantage: Clinic #4, Respondent D | We’ll usually select our projects based on what are going to be requirements in terms of resources, and then the overall value to our patients from it. |
Complexity: Clinic #1, Respondent A | We’ve got this simple process here [for FIT]. We’ll give this to you. You can take it home, follow the instructions, send it back in, and we’ll call you with the results. The problem is they take it home, and then we have a process…we call them after two weeks, we send a postcard after a month . . . and try to follow-up with them . . . so we kind of push them along. |
Outer setting | |
Patient needs and resources: Clinic #2, Respondent B | Everyone knows that it’s important, but a lot of patients at [FQHC] have really complex medical histories. A lot of our patients are refugees, or are just dealing with other health issues, especially within the fifty (50) to seventy-five (75)-year-olds. |
External policy and incentives: Clinic #5, Respondent E | And then, again, very generically speaking, there’s a lot of measures that people want us to track. Health Resources and Services Administration, there’s a ton of them, and then we have the Managed Care Organizations want us to do something different. And so there’s just sometimes a lot of fatigue about everything’s important, which means that sometimes nothing gets done the way we want it to get done. . . |
Cosmopolitanism: Clinic #6, Respondent F | I mean I’m always looking in multiple—like, I belong with, what is it, the Physicians Working Together Network on Facebook and Doximity, which is another kind of provider/physician-led organization. I listen to people throw out different ideas in medical directors’ meetings and things like that. |
Inner setting | |
Structural characteristics: Clinic #5, Respondent E | You know I think, generally speaking, one of our challenges is we have a large population, we have multiple clinics. That is certainly one of our challenges when we have six different locations. The processes have not always been consistent, in our six locations. So really trying to get the protocol down about what we’re going to do, so that when staff travel in between locations, there’s some consistency. That’s certainly been one of the barriers. |
Networks and communication: Clinic #7, Respondent G | The staff meetings in each clinic, all staff members—front line and clinical staff—are involved. The practice manager and the lead nurse head that meeting. . . . It’s an opportunity to provide everyone with updated clinic information, policies, procedures, and also to gain some—to answer any questions or gain insight on any suggestions or concerns and the providers meet with me during that monthly meeting in a different location. |
Implementation climate (relative priority; goals and feedback): Clinic #5, Respondent H, and Clinic #8, Respondent I | But now that we’re asking the nurses to present this colorectal cancer screening, we do HIV screening, we do safety screenings. So I think that some frustrations for all of these screenings comes because we’re asking them to do them all in a 15-minute time period. We have PDSAs running to monitor them every month, and then every month we review it and say “You know is this change making a difference, or is it not?” |
Readiness for implementation (leadership engagement; available resources): Clinic #9, Respondent J, and Clinic #3, Respondent C | Having Dr. [name removed], our medical director, so involved. She’s very quality minded. She’s very good at getting by and involving key staff and her providers, which is great. And ultimately it ends up being a money thing too, you know for us, none of these preventative screening measures generate revenue. |
Characteristics of individuals | |
Knowledge, self-efficacy: Clinic #10, Respondent K | The willingness of them [staff] to see the importance of this, to see that this was a relatively easy test for the clients to perform. I think that really helped us. |
Other personal attributes: Clinic #8, Respondent L | We also have one location that had a little more success than some of the others, and they have one nurse in particular that works there and really spends time and said “You know this is why you do this. You can get colon cancer. We can find it early,” and does more education with the patients. |
ACS American Cancer Society; CFIR Consolidated Framework for Implementation Research; FIT fecal immunochemical test; FQHC Federally Qualified Health Center; PDSA plan-do-study-act.
CFIR process domain
The processes participants reported using are summarized according to the following process domain constructs: planning, engaging, executing, and reflecting and evaluating.
Planning
Few participants described any formal assessment of factors contributing to low screening rates prior to selecting and implementing interventions. Instead, FQHCs’ selection process was driven by FQHC leadership preferences, grant funding, and/or the influence of external change agents.
Engaging
Participants described three of four stakeholder types that the CFIR “engaging” construct identifies as central to implementation: champions, formally appointed implementation leaders, and external change agents. Many participants described the importance of having a CRC screening champion who engaged staff and motivated them to commit to implementation. Other participants described a formally appointed implementation leader, often someone hired with grant funding (e.g., patient navigator or nurse coordinator). Implementation leaders identified patients who were due for CRC screening, distributed FIT kits, and monitored FIT kit return, while often also serving as patient health coaches and educators.
Participants also talked about the importance of external change agents (i.e., external organizations that promoted and supported CRC screening implementation). The most commonly referenced external change agents included the American Cancer Society, the National Colorectal Cancer Roundtable, state and local health departments, quality improvement (QI) organizations, state Primary Care Associations, and universities.
Executing
Many participants described conducting plan-do-study-act cycles, or small tests of a planned improvement. Other participants described less systematic “trial-and-error” approaches to testing new interventions. When participants described a systematic approach to executing the implementation of an intervention, they almost always did so in relation to either FluFIT, mailed FIT, or patient navigation strategies that were coupled with external funding or support.
Reflecting and evaluating
Reflection and evaluation were predominantly based on review and reporting of FQHCs’ annual Uniform Data System (UDS) performance measures. Some participants described using data other than UDS measures, including conducting queries or running reports from the EMR or maintaining a manual log to track FIT kit distribution and results (e.g., who received one, who returned it, whether they completed it correctly, and whether diagnostic testing was completed as needed). Some sites also pulled paper charts to check the accuracy of their EMR data. A few participants reported requesting feedback from patients on their CRC screening process, and only one reported a systematic effort to get patient input using qualitative approaches. Although all FQHCs were collecting data, only a few participants described how they were using data to improve their CRC screening efforts.
CFIR characteristics of the intervention domain
The selection and implementation of CRC screening tests and interventions were influenced by two of the eight constructs within CFIR’s “characteristics of the intervention” domain: relative advantage and complexity. Stool tests were viewed as having a relative advantage over colonoscopy because they were more affordable, especially for patients lacking health insurance. The FIT emerged as the preferred stool test for most FQHCs. The FluFIT intervention was viewed as having the advantage of bundling the annual FIT with the annual flu shot program. Although participants saw the advantages of FIT and FluFIT, implementation was constrained by the complexity involved in getting patients to complete and return the FIT.
CFIR outer setting domain
Three of four constructs within the CFIR’s “outer setting” domain influenced FQHCs’ adoption and implementation of CRC screening interventions: patient needs and resources, external policy and incentives, and cosmopolitanism.
Patient needs and resources
The implementation of CRC screening interventions was constrained by patients’ lack of insurance coverage to pay for follow-up diagnostic testing and by communication barriers such as not speaking English, low health literacy, or difficulty hearing automated calls. Other patient-level barriers included limited access to transportation, negative attitudes toward stool testing, and the low priority given to screening relative to other medical needs.
External policy and incentives
FQHCs’ implementation of CRC screening interventions was influenced by grant funding and financial incentives related to reimbursement (e.g., pay for performance). Several FQHCs were implementing interventions (e.g., navigators and FluFIT) in response to funding that specifically supported those interventions. Although participants mentioned the influence of pay-for-performance incentives, some reported being overwhelmed and experiencing “fatigue” related to all the potential financial incentives.
Cosmopolitanism
Cosmopolitanism refers to the extent to which an organization is networked or connected with external organizations. The majority of FQHCs reported that they were engaged in various types of networks and participated in network-related meetings and activities (e.g., statewide QI networks). Many participants noted the value of networking with their FQHC peers, particularly those with high CRC screening rates, because it provided an opportunity for them to learn about what was working well for others.
CFIR inner setting domain
Participants identified the following constructs within CFIR’s inner setting domain that influenced CRC screening intervention adoption and implementation: structural characteristics, networks and communications, implementation climate, and readiness for implementation.
Structural characteristics
The size of FQHCs (i.e., number of clinic sites, number of providers), the presence of medical residents, and designation as a Patient-Centered Medical Home all influenced the implementation of CRC screening interventions. Large FQHCs with numerous clinic sites and providers, and those that trained residents, experienced challenges implementing interventions.
Networks and communications
All FQHCs were required to have a QI Committee that reported to a board of directors. In most FQHCs, communication about CRC screening improvement flowed vertically from the QI Committee to staff via staff meetings, e-mail, newsletters, and team huddles. In many of the FQHCs, information was communicated to providers and nonprovider staff in separate meetings, limiting opportunities for interdisciplinary exchange. Few FQHCs facilitated staff engagement in workgroups that reviewed data, set goals, and proposed solutions to the QI Committee.
Implementation climate
Two of the CFIR’s implementation climate constructs were salient: relative priority and goals and feedback. FQHCs’ efforts to improve CRC screening were influenced by the relative priority accorded to CRC screening compared to other areas in need of improvement. Although many FQHCs monitored 40 or more QI indicators, most selected 3–6 foci per year; CRC screening had to compete for attention with other high-priority concerns. Most participants reported that their FQHC set goals for CRC screening and provided monthly or quarterly performance feedback (e.g., clinic or provider specific). The data provided were typically UDS measures and were presented via a graph, report, or scorecard, often during regularly scheduled meetings.
Readiness for implementation
The CFIR readiness for implementation construct includes three factors that influenced implementation: leadership engagement, available resources, and access to knowledge and information. Leadership engagement played a central role in driving implementation and holding providers and staff accountable. This typically referred to the CEO’s and/or Medical Director’s public support for a CRC screening intervention. Available resources also were critical to determining whether an intervention would be implemented and sustained. Participants referenced the importance of the following resources: funding, new staff (e.g., patient navigators), changes to their EMR systems, and educational materials for providers, staff, and patients. Participants also discussed the importance of providing clinicians, staff, and patients with access to knowledge and information about interventions in an easy-to-understand format through patient information sheets, team huddles, and trainings.
CFIR characteristics of individuals domain
Most participants indicated that staff and providers were knowledgeable about CRC screening and recognized its importance, although a few providers preferred colonoscopy. Participants noted that implementation was limited by providers’ and staff’s lack of confidence (i.e., self-efficacy) in their ability to translate their knowledge of CRC screening into action. Participants highlighted the importance of other personal attributes to increasing screening rates, such as staff’s and providers’ empathy, interpersonal skills, and ability to communicate with patients in a caring manner.
DISCUSSION
This study explored the processes FQHC staff used to select and implement CRC screening interventions. The findings show that although FQHC staff are implementing interventions that target multiple levels, they are not using a systematic process to select those interventions. Furthermore, FQHC staff are not assessing local factors that influence CRC screening rates, and therefore, are not purposefully targeting the factors that influence screening rates in their settings or populations. Once interventions were selected, some FQHCs implemented them using a top-down communication process. Other FQHCs used more active implementation processes such as champions, formally appointed implementation leaders, or plan-do-study-act cycles. Only a small number of FQHCs engaged QI workgroups in a systematic improvement process. FQHCs’ monitoring and evaluation processes also varied. Most FQHCs collected data for UDS reporting; fewer reported collecting or using other forms of evaluation data, and a minority of participants reported using data to improve their CRC screening processes.
This study also identified multiple contextual factors that influenced FQHCs’ selection and implementation processes within the CFIR domains of the intervention, outer setting, inner setting, and individual practitioner. This study’s use of the CFIR framework facilitates synthesis of its findings with those of other studies that have used CFIR and thus contributes to the evidence base on factors that influence CRC screening implementation in FQHCs. For example, similar to this study, other studies of CRC screening implementation in FQHCs have identified the importance of having both organizational leaders and formally appointed implementation leaders to support the planning and execution of CRC screening interventions [18,20], and have found that structural characteristics, such as the size of an FQHC’s patient population and physician staffing, are also key to implementation success [19,20]. Although prior studies have reported the importance of implementation processes [18–20], this study is distinct in its focus on the CFIR process domain to guide a more in-depth understanding of how FQHCs plan, execute, and evaluate the implementation of CRC screening interventions and the factors that influence those processes.
Limitations of the study were its focus on relatively high-performing FQHCs and limited response rate. Caution therefore should be taken in generalizing findings to all FQHCs. Strengths of the study are its inclusion of 14 FQHCs from 8 geographically dispersed states and in-depth focus on the processes FQHCs use to select and implement interventions.
Implications for the development of capacity-building training and tools
More effective approaches to building practice-level capacity are essential to accelerating the implementation of evidence-based interventions [23]. Study findings have implications for the design of training, tools, and other approaches to building FQHCs’ capacity to select and implement CRC screening interventions that target multiple levels. Table 3 provides an overview of the specific gaps in capacity that training efforts should target. Several of the identified gaps are similar to those identified in a survey of CDC-funded Colorectal Cancer Control Program grantees, in which practitioners reported limited capacity to implement interventions and to conduct process and outcome evaluations [11].
Table 3.
CFIR domain and construct | Target gaps in FQHC staffs’ capacity to . . . | Leverage existing partnerships and other capacity-building strategies |
---|---|---|
Process | | |
Planning | Assess local factors that contribute to low screening rates | |
Engaging | Identify and prepare champions and implementation leaders | Align with other external change agents that build FQHC capacity |
Executing | Develop and execute implementation plans | |
Reflecting and evaluating | • Use existing sources of data • Collect qualitative and quantitative data • Use data to improve processes and outcomes | |
Outer setting | | |
Patient needs and resources | • Assess patient-level factors that contribute to low screening rates • Identify and partner with colonoscopy providers | Refer to sources of tailored patient education materials (e.g., Make it Your Own [28]) |
External policy and incentives | | Provide grant funding for CRC screening intervention selection in addition to implementation |
Cosmopolitanism | | Work within existing regional and state QI networks |
Inner setting | | |
Networks and communication | • Leverage FQHC’s existing QI and communication infrastructure • Strengthen QI and communication infrastructure | Establish minimum QI infrastructure as a criterion for FQHCs to participate in training |
Implementation climate | Use data feedback and other strategies to sustain investment in CRC screening as a priority | Provide grant funding so that FQHCs will prioritize CRC screening |
Readiness for implementation | • Engage leadership support • Educate and motivate staff | Provide grant funding to hire additional staff and purchase resources |
CRC colorectal cancer; FQHC Federally Qualified Health Center; QI quality improvement.
Table 3 also summarizes recommendations for how those providing training might leverage existing partnerships and provide resources. Participants reported that their FQHCs were partnering with external change agents (e.g., the American Cancer Society) and state and regional networks. To avoid confusion and cognitive overload, training and tools should be designed to align with the models and guidance being provided by other external change agents. This may, for example, involve designing training to align with the Institute for Healthcare Improvement’s approach to QI [24] and/or the National Colorectal Cancer Roundtable’s screening manual for community health centers [25]. Training also may be designed to be delivered within an existing network to maximize peer networking and support.
Participants identified grant funding as a primary driver of their CRC screening efforts. As detailed in Table 3, grant funding is related to four CFIR constructs: “planning,” “external policy and incentives,” “implementation climate,” and “readiness for implementation.” Of note, participants reported that grant funding often was linked to specific, preselected interventions, which may have the unintended consequence of undermining efforts to build FQHCs’ capacity to select interventions to target the specific factors that contribute to low CRC screening rates in their settings and populations. To address this concern, organizations providing grant funding should consider linking that funding to an overall process that supports FQHCs’ conducting assessments and selecting interventions in addition to implementation.
In this study, FQHC QI and communication structures varied greatly, suggesting that some FQHCs may need to build their general QI capacity before they are ready to focus on improving CRC screening [26]. In cases where QI structures are underdeveloped, training may need to focus on building general QI capacity prior to building CRC screening-specific capacity.
CONCLUSIONS
FQHCs are making significant strides in improving CRC screening rates through a combination of internal QI initiatives and external resources and guidance. This study found that many FQHCs are in fact implementing interventions that target factors at the level of the patient, provider, and organization. These results are similar to what our prior survey [13] and other research studies have found [19,27]. The full impact of evidence-based interventions may be realized if FQHCs are provided with support to assess local factors that influence CRC screening rates, select interventions to address local factors, combine interventions to target multiple levels, evaluate CRC screening processes and outcomes, and apply evaluation data to continuously improve their screening rates. This study provides evidence to inform the development of training and tools needed to address these gaps in FQHCs’ capacity.
Acknowledgements
Appreciation is expressed to Maihan Vu and Randall Teal for guidance, support, and oversight of the qualitative analysis; to Genevieve Birkby for her assistance with data collection; and to Kathleen Knocke for her assistance with manuscript preparation. Funding: This work was supported by Cooperative Agreement Numbers 3 U48 DP005030-01S5, 3 U48 DP005021-01S4, 3 U48 DP005014-01S2, 3 U48 DP005053-01S1, 3 U48 DP005000-01S2, 3 U48 DP005013-01S1A3, and 3 U48 DP005017-01S8 from the Centers for Disease Control and Prevention’s Prevention Research Centers Program and the National Cancer Institute. The findings and conclusions in this work are those of the authors and do not necessarily represent the official position of the funders.
APPENDIX. FINAL INTERVIEW GUIDE
EACH SITE TO INSERT IRB LANGUAGE FOR ELEMENTS OF CONSENT AND PERMISSION TO AUDIO RECORD
First, we would like to hear from you about what types of colorectal cancer screening approaches are in place at your clinic, and how decisions were made about which ones to implement.
Can you describe what approaches your clinic is implementing to increase CRC screening rates?
In addition to what you just told me, are you using any of these approaches? Here is a list of some approaches…HAND THEM THE CARD.
What was the decision-making process for choosing these approaches? Can you please describe that process for me?
Who in the organization was involved in the decision-making process for choosing these approaches?
PROBE: Were you a part of the decision-making process? Can you describe your role in the decision-making process for me?
What influenced your clinic’s decision to select these approaches?
PROBE: What, if any, data about your clinic and patient population influenced the decision? How about peer clinics, national/state initiatives, health care reform, or other external influences (e.g., from your PCA or a presentation from a researcher or community partner)?
Overall, how do the providers and staff in your FQHC perceive the need to increase colorectal cancer screening rates? Why do you think this is true?
Next, we would like to know more about how you rolled out your colorectal cancer screening interventions.
Can you please describe the steps taken to implement your most successful approach?
Was there a written implementation plan? Who was responsible for developing and writing the implementation plan?
Was the approach piloted on a small scale first, such as testing it in a Plan-Do-Study-Act cycle?
IF YES: Please describe what was done and what was learned.
Was there someone formally assigned to lead or oversee implementation of this approach?
IF YES: What was that person’s job title?
IF NO: Did anyone informally take responsibility for overseeing implementation? If so, what was that person’s job title?
What steps were taken to communicate with providers and other staff about the new approach?
Can you describe any monitoring or evaluation to determine if the approach was implemented as intended?
PROBE: Did you monitor CRC rates specifically?
PROBE: Did you provide any feedback or reporting to staff on the findings from your monitoring and evaluation? If yes, please describe.
PROBE: Did you gather any kind of qualitative or anecdotal feedback from either patients or clinic staff on the approaches? If yes, please describe.
PROBE: What actions, if any, did you take based on your monitoring or evaluation findings?
What were some of the challenges encountered to implementing the approach?
What steps were taken to overcome the challenges?
What were some factors that made it easier to implement the approach?
We are at the last section of the interview. We are also interested in your overall approach to quality improvement.
Please tell me about how quality improvement works at your clinic.
Do you have quality improvement meetings at your clinic?
IF YES: Can you tell us about those meetings?
PROBE: How often are they held? Who leads them? Do you typically attend? What proportion of staff typically attend?
PROBE: Have the efforts to improve colorectal cancer screening rates been included on the meeting agenda? Why or why not?
PROBE: What kind of feedback about your CRC strategies is solicited at these meetings, if any?
We are also interested in where you and your organization turn for assistance when thinking about how to improve the quality of care in your FQHC.
There are many organizations, programs or resources that are intended to help clinics improve patient care. What, if any, organizations, programs, or resources does your clinic use?
Which of those organizations, programs, and/or resources are most helpful and why?
Thinking specifically about colorectal cancer screening, what, if any, organizations, programs, or resources does your clinic go to for help?
Which of those organizations, programs, and/or resources are most helpful and why?
In what areas do you feel you need additional support to improve colorectal cancer screening?
Thank you so much for your time. Is there anything else we should know? Great! If you think of something, do not hesitate to get in touch with me.
Possible colorectal cancer screening approaches
One-on-one education
Group education
Patient navigators
Patient reminders
Provider assessment and feedback
Small media
Reminder and recall systems
Compliance with Ethical Standards
Conflict of Interest: All authors declare that they have no conflicts of interest.
Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Study protocols were approved by the institutional review boards of the University of North Carolina at Chapel Hill, University of Iowa, University of Kentucky, University of Washington, University of Pennsylvania, University of South Florida, and University of South Carolina. Consent was obtained from all study participants prior to data collection.
Informed Consent: Informed consent was obtained from all individual participants included in the study.
Welfare of Animals: This article does not contain any studies with animals performed by any of the authors.
References
- 1. Knudsen AB, Zauber AG, Rutter CM, et al. Estimation of benefits, burden, and harms of colorectal cancer screening strategies: Modeling study for the US Preventive Services Task Force. JAMA. 2016;315(23):2595–2609.
- 2. Guy GP Jr, Richardson LC, Pignone MP, Plescia M. Costs and benefits of an organized fecal immunochemical test-based colorectal cancer screening program in the United States. Cancer. 2014;120(15):2308–2315.
- 3. White A, Thompson TD, White MC, et al. Cancer screening test use—United States, 2015. MMWR Morb Mortal Wkly Rep. 2017;66(8):201–206.
- 4. Centers for Disease Control and Prevention. Vital signs: Colorectal cancer screening test use—United States, 2012. MMWR Morb Mortal Wkly Rep. 2013;62(44):881–888.
- 5. Zapka J, Taplin SH, Ganz P, Grunfeld E, Sterba K. Multilevel factors affecting quality: Examples from the cancer care continuum. J Natl Cancer Inst Monogr. 2012;2012(44):11–19.
- 6. Yano EM, Green LW, Glanz K, et al. Implementation and spread of interventions into the multilevel context of routine practice and policy: Implications for the cancer care continuum. J Natl Cancer Inst Monogr. 2012;2012(44):86–99.
- 7. Sabatino SA, Lawrence B, Elder R, et al.; Community Preventive Services Task Force. Effectiveness of interventions to increase screening for breast, cervical, and colorectal cancers: Nine updated systematic reviews for the Guide to Community Preventive Services. Am J Prev Med. 2012;43(1):97–118.
- 8. Community Preventive Services Task Force. 2016. Cancer screening: Multicomponent interventions—colorectal cancer. Available at https://www.thecommunityguide.org/findings/cancer-screening-multicomponent-interventions-colorectal-cancer. Accessibility verified May 17, 2018.
- 9. National Cancer Institute. 2018. Research-Tested Intervention Programs (RTIPs). Available at https://rtips.cancer.gov/rtips/index.do. Accessibility verified May 17, 2018.
- 10. Community Preventive Services Task Force. 2018. The Guide to Community Preventive Services. Available at https://www.thecommunityguide.org. Accessibility verified May 17, 2018.
- 11. Escoffery C, Hannon P, Maxwell AE, et al. Assessment of training and technical assistance needs of Colorectal Cancer Control Program grantees in the U.S. BMC Public Health. 2015;15:49.
- 12. Hannon PA, Maxwell AE, Escoffery C, et al. Colorectal Cancer Control Program grantees’ use of evidence-based interventions. Am J Prev Med. 2013;45(5):644–648.
- 13. Adams SA, Rohweder CL, Leeman J, et al. Use of evidence-based interventions and implementation strategies to increase colorectal cancer screening in Federally Qualified Health Centers. J Community Health. 2018. doi: 10.1007/s10900-018-0520-2. [Epub ahead of print]
- 14. Fernandez ME, Melvin CL, Leeman J, et al. The Cancer Prevention and Control Research Network: An interactive systems approach to advancing cancer control implementation research and practice. Cancer Epidemiol Biomarkers Prev. 2014;23(11):2512–2521.
- 15. Ribisl KM, Fernandez ME, Friedman DB, et al. Impact of the Cancer Prevention and Control Research Network: Accelerating the translation of research into practice. Am J Prev Med. 2017;52(3S3):S233–S240.
- 16. US Department of Health and Human Services. 2016. Health Center Profile. Available at https://bphc.hrsa.gov/uds/datacenter.aspx?q=t6b&year=2016&state=&fd=. Accessibility verified May 17, 2018.
- 17. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
- 18. Liang S, Kegler MC, Cotter M, et al. Integrating evidence-based practices for increasing cancer screenings in safety net health systems: A multiple case study using the Consolidated Framework for Implementation Research. Implement Sci. 2016;11:109.
- 19. Walker TJ, Risendal B, Kegler MC, et al. Assessing levels and correlates of implementation of evidence-based approaches for colorectal cancer screening: A cross-sectional study with Federally Qualified Health Centers. Health Educ Behav. 2018;45(6):1008–1015.
- 20. Kegler MC, Liang S, Weiner BJ, et al. Measuring constructs of the Consolidated Framework for Implementation Research in the context of increasing colorectal cancer screening in Federally Qualified Health Centers. Health Serv Res. 2018;53(6):4178–4203. doi: 10.1111/1475-6773.13035.
- 21. Creswell JW, Plano Clark VL. Choosing a mixed methods design. In: Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage; 2011:53–106.
- 22. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288.
- 23. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: Reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53.
- 24. Institute for Healthcare Improvement. 2018. Open School. Available at http://www.ihi.org/education/ihiopenschool/Pages/default.aspx. Accessibility verified May 17, 2018.
- 25. National Colorectal Cancer Roundtable. 2014. Steps for increasing colorectal cancer screening rates: A manual for community health centers. Available at http://nccrt.org/resource/steps-increasing-colorectal-cancer-screening-rates-manual-community-health-centers-2/. Accessibility verified May 17, 2018.
- 26. Wandersman A, Duffy J, Flaspohler P, et al. Bridging the gap between prevention research and practice: The Interactive Systems Framework for Dissemination and Implementation. Am J Community Psychol. 2008;41(3–4):171–181.
- 27. Maxwell AE, Crespi CM, Arce AA, Bastani R. Exploring the effects of longstanding academic–community partnerships on study outcomes: A case study. Prev Med Rep. 2017;8:101–107.
- 28. Health Communication Research Laboratory, Washington University, St. Louis, MO. MIYO: Make It Your Own. No date. Available at http://www.miyoworks.org/login/auth. Accessibility verified February 7, 2019.