Abstract
Little is known about the challenges faced by community clinics, which must address clinical priorities first, when participating in pragmatic studies. We report on implementation challenges faced by the eight community health centers that participated in Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), a large comparative effectiveness cluster-randomized trial evaluating a direct-mail program to increase rates of colorectal cancer (CRC) screening. We interviewed center leaders at the onset of implementation and 1 year later to identify challenges with implementing and sustaining an electronic medical record (EMR)-driven mailed program to increase CRC screening rates. We used the Consolidated Framework for Implementation Research to thematically analyze the interview content and identify anticipated and experienced challenges. Common early concerns were patients’ access to colonoscopy, patients’ low awareness of CRC screening, the time burden on clinic staff to carry out the STOP CRC program, inability to accurately identify eligible patients, and incompatibility of the program’s approach with the patient population or organizational culture. Once the program was rolled out, time burden remained a primary concern, and new organizational capacity and EMR issues were raised (e.g., EMR staffing resources and turnover in key leadership positions). Cited program successes were improved CRC screening processes and rates, more patients reached, reduced costs, and improved patient awareness, engagement, or satisfaction. These findings may inform any clinic considering mailed fecal testing programs and future pragmatic research efforts in community health centers.
Keywords: Pragmatic trial, Colorectal cancer screening, Federally Qualified Health Center, Safety net clinics, Implementation challenges
BACKGROUND
Relatively little is known about the successes and challenges of implementing a large pragmatic study in health care systems, including community health centers. Focused on the generalizability of findings across similar settings, pragmatic studies are generally embedded into standard clinical processes and rely on clinic staff to deliver the intervention. While research in real-world settings is becoming more prominent at the National Institutes of Health, one of the hallmarks of those settings is their commitment to clinical care. The effect of trial implementation and the challenges faced are therefore important to both researchers and clinicians.
Research on the challenges faced in pragmatic studies is scarce. Murray and colleagues noted specific challenges in partnering with primary care clinics, which may shut down unexpectedly or fail to implement electronic medical records (EMRs) or other tools on a timeline needed to fulfill the research requirements [1]. Dietrich et al. showed that leadership instability threatened the success of a colorectal cancer (CRC) screening program at a community health center [2]. These factors and others may distract focus from research initiatives, which can influence the consistency with which a pragmatic intervention is delivered. Nevertheless, few studies have reported specifically on the implementation successes and challenges of participating in pragmatic research from the perspective of health system leaders.
Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC) is a large, cluster-randomized pragmatic trial that tests the effectiveness of an EMR-leveraged, automated, mailed fecal immunochemical test (FIT) program to increase CRC screening rates in community health center clinics [3]. This project is funded by the National Institutes of Health, Health Care Systems Research Collaboratory program (UH2AT007782/4UH3CA18864002), whose aim is to provide a framework of implementation methods and best practices that will foster participation of health systems in clinical research [4].
As part of STOP CRC, we enrolled 26 community health center clinics to participate in a large multisite pragmatic study aimed at increasing CRC screening rates. We report qualitative data from interviews of health center leadership and used the Consolidated Framework for Implementation Research (CFIR) to guide our coding and interpretation. These findings fill an important research gap and may inform future efforts to partner with health systems to conduct collaborative, pragmatic studies. Our hope is that these findings can advance the science of pragmatic research and that future population-based CRC screening efforts can benefit from the challenges and success we identified.
METHODS
Clinic recruitment
We worked with our advisory board to define health center-level eligibility criteria and systematically applied these criteria to health centers affiliated with OCHIN (formerly Oregon Community Health Information Network), a third-party EMR vendor and practice-based research network headquartered in Portland, Oregon. Our health center recruitment activities are reported previously [5]. Briefly, we identified 11 health centers that met our inclusion criteria and met with leadership to explain the study and solicit their participation. Eight of the 11 health centers agreed to participate; the primary reason for participating was the perception by center leadership that the project was an opportunity to increase CRC screening rates and to use EMR tools for population management. Nevertheless, participating health center leaders expressed concerns about their clinics’ ability to provide fecal testing to, and assure follow-up colonoscopy for, uninsured patients, and about limited clinic capacity to prepare the mailings required by the study protocol. All participating health centers were affiliated with OCHIN and used the same EMR platform: Epic (Verona, WI). In the first year of the project, we conducted a pilot study in collaboration with Virginia Garcia Memorial Health Center, which showed a 38% boost in CRC screening in two pilot clinics compared with a usual care clinic [6]. Two additional Virginia Garcia clinics participated in the full study. Leadership from a total of 26 clinics at eight of these health centers agreed to participate in the program after the pilot phase. The study was approved by the Institutional Review Board of Kaiser Permanente Northwest; the trial is registered at ClinicalTrials.gov (NCT01742065). All interview participants provided verbal assent.
STOP CRC pragmatic study
We used a collaborative learning approach to engage health center leadership and train health center staff to implement the STOP CRC program [7]. This approach involved an annual in-person meeting at each participating center among the project’s principal investigators and staff and center leadership; a 1-day annual in-person meeting in Portland, Oregon; and quarterly telephone meetings with the project’s advisory board (including clinic leadership, policy-makers, representatives of advocacy organizations, and patient advocates). In-person trainings were conducted annually (once for intervention clinics and once for usual care clinics) at each health center and attended by multiple clinic staff. In addition, EMR site specialists and project leads from each center attended a monthly phone meeting led by the project director to report on progress and troubleshoot issues. At the end of the first year, project staff held an EMR work session in which we used affinity mapping to identify additional refinements to the EMR tools for the program.
EMR tools
The EMR tools developed for the project were described previously [8]. Briefly, Reporting Workbench (a registry embedded within the EMR) was customized to identify patients eligible for CRC screening, using codes in the EMR and a preventive health tracking tool (Health Maintenance in Epic, Verona, WI). Customized lists were created to identify patients eligible for each mailed component of the intervention: the introductory letter, the fecal immunochemical test (FIT) kit, and the reminder postcard. Patients were excluded if they had an invalid address, reported recent CRC screening, were poor candidates for screening based on health status (e.g., end-stage renal failure), or had recently completed the FIT and needed no further reminders. To fully use the tools, EMR site specialists at each health center needed to complete an iterative, multi-step testing process to properly assign the printer and letter formats, print labels in a given format, ensure the electronic interface with outside labs could read in FIT results, and assign procedure codes to the SmartSets used in ordering the FIT. At least six work orders were required to fully activate the tools.
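As a rough illustration only, and not the actual Epic/Reporting Workbench configuration (which is vendor-specific and was activated through work orders), the exclusion logic described above could be sketched in Python as follows; the field names (e.g., `valid_address`, `last_colonoscopy`) and look-back intervals are hypothetical.

```python
from datetime import date

# Hypothetical patient records; in STOP CRC the data lived in Epic's Reporting Workbench.
patients = [
    {"id": 1, "valid_address": True, "poor_candidate": False,
     "last_colonoscopy": None, "last_fit_date": None},
    {"id": 2, "valid_address": False, "poor_candidate": False,
     "last_colonoscopy": None, "last_fit_date": None},
    {"id": 3, "valid_address": True, "poor_candidate": False,
     "last_colonoscopy": date(2014, 5, 1), "last_fit_date": None},
]

def eligible_for_mailing(p, today=date(2015, 6, 1)):
    """Apply the exclusion rules described above (illustrative thresholds only)."""
    if not p["valid_address"]:
        return False  # invalid address
    if p["poor_candidate"]:
        return False  # poor candidate for screening, e.g., end-stage renal failure
    if p["last_colonoscopy"] and (today - p["last_colonoscopy"]).days < 10 * 365:
        return False  # recent CRC screening documented
    if p["last_fit_date"] and (today - p["last_fit_date"]).days < 365:
        return False  # FIT already completed; no further reminders needed
    return True

mailing_list = [p["id"] for p in patients if eligible_for_mailing(p)]
print(mailing_list)  # -> [1]
```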
Once the tools were fully activated, the Reporting Workbench lists could be used to print an introductory letter for all patients on the list (bulk mailing); a year into the project, functions were enabled in Reporting Workbench so that laboratory FIT orders could be placed in bulk for all patients on a given list (bulk ordering). Once the introductory letters were printed, clinic staff engaged in a series of timed activities, including stuffing, labeling, and mailing the initial introductory letters; mailing FIT kits; and sending reminder letters. Additional lists were created for patient management, tracking of abnormal FIT results, excluded patients, and patients who had not returned FIT kits. The EMR-generated lists were updated nightly.
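The sequence of timed mailings can be sketched as a simple schedule keyed off the introductory-letter date; this is a minimal illustration in Python, and the intervals shown are assumed placeholders rather than the study protocol’s actual timing.

```python
from datetime import date, timedelta

# Illustrative mailing steps and day offsets (assumed values, not the STOP CRC protocol).
MAILING_STEPS = [
    ("introductory letter", 0),   # day 0: bulk-printed from the Reporting Workbench list
    ("FIT kit", 14),              # assumed ~2 weeks later
    ("reminder postcard", 42),    # assumed ~4 weeks after the kit, if no FIT returned
]

def build_schedule(start, fit_returned=False):
    """Return (step, due_date) pairs, skipping the reminder once a FIT is returned."""
    schedule = []
    for step, offset in MAILING_STEPS:
        if step == "reminder postcard" and fit_returned:
            continue  # patients who completed the FIT need no further reminders
        schedule.append((step, start + timedelta(days=offset)))
    return schedule

for step, due in build_schedule(date(2015, 3, 2)):
    print(f"{due:%Y-%m-%d}: mail {step}")
```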
Data collection instruments and survey procedures
For these analyses, we used data gathered from surveys and interviews with clinic leadership. We used the CFIR to guide the selection of items for interview guides [9]. The CFIR posits that the implementation process can be organized into five core domains: intervention characteristics, outer setting, inner setting, characteristics of individuals, and process. We further refined the surveys and interview guides by reviewing relevant literature and consulting experts in the areas of clinic readiness, adaptive reserve, implementation climate, and community health center settings [10–12]. The pre- and 1-year post-implementation assessments included an organizational survey, an in-depth leadership interview, and a provider survey. For this analysis, we report on data from the leadership interviews.
Leadership interview
The leadership interview was composed primarily of open-ended, qualitative questions exploring the motivations of health centers to participate in the STOP CRC program, the centers’ priorities, the importance of CRC screening to the centers, prior CRC screening strategies, beliefs about different screening tests, patient and system barriers to CRC screening, and early concerns about implementing the program. We sought to complete approximately three in-depth interviews per center with individuals representing a range of key leadership roles, including the medical director, operations manager, quality improvement specialist, EMR specialist, and anyone else deemed important to interview. These participants were sent an email invitation to be interviewed. The interviews, which lasted 45 to 60 min, were typically conducted over the phone by trained qualitative researchers (JLS, JSR) and were audio-recorded and transcribed for analysis [13, 14].
Approximately 6 to 9 months after full STOP CRC implementation, we conducted a second round of in-depth interviews with leadership and staff. This round contained most of the same questions but focused on implementation challenges and solutions as well as possible plans for maintaining the program. Questions were modified or added based on findings from the first interviews, observations about early and new concerns, and reported barriers and facilitators to implementation. These interviews, which lasted 45 to 60 min, were conducted by phone and typically included the project lead, quality improvement specialist, operations manager, and/or medical director. Interviews occurred June–September 2015.
Data analysis
All interviews were coded, and the content was analyzed using standard qualitative techniques [13, 15–17] aided by a qualitative software program (ATLAS.ti) for coding, organizing, and retrieving transcribed data. Reports of coded data were generated, reviewed, and further summarized into themes. Identified challenges were further reviewed and organized using the CFIR as a guide. Successes of the STOP CRC program were organized using an open-coding, theme-based approach [17–19].
As a “member check” of our interpretations [16, 20], we gathered feedback on the summarized results of the themes derived from our qualitative analysis. We presented the findings to clinic leadership during meetings with each health center 2 years post-implementation. We asked both center leaders and staff to reflect on the credibility of the themes we generated, including expanding upon them or identifying any missing topics. The refined themes from this process are presented below.
RESULTS
Sixty-one interviews were conducted: 39 with health center leaders at baseline (range = 4–6 per health center) and 22 one year after implementation (range = 2–4 per health center). Ten participants who completed the baseline interview also completed the 1-year post-implementation interview. Reasons that the person originally interviewed was not re-interviewed at follow-up were that (1) the original staff member was no longer there, (2) the original staff member had a change in role, or (3) new staff had taken on program responsibilities. We noted any challenge or success raised by one or more participating health center leaders. For these analyses, data are aggregated at the health center level; in this way, a challenge or success was noted if it was raised by one or more respondents within a given health center.
Anticipated and experienced implementation challenges (Table 1)
Table 1.
Consolidated Framework for Implementation Research construct/theme | Anticipated: health centers (N = 8) | Experienced: health centers (N = 8) | Representative quote |
---|---|---|---|
External setting (outer environment) | |||
Impact on colonoscopy access for higher number of positive FITs^a | 7 | – | “The initial concern about follow up colonoscopy access … went away with the ACA because we went from having over 20% of our patients under sliding fee scales to now around 7%...[The] increase in coverage … has made a big, big difference.” |
Cumbersome process of EMR vendor to activate EMR tools creating time delays for execution of work^a | – | 6 | “We had to do multiple [work requests] to get it up and running, to get all the EMR network and semantics set up for it… And not getting the timely response that we needed from [EMR Vendor] just bogged down the process quite a bit.” |
Burdensome interface with outside labs processing kits created delays/extra work | – | 2 | “The other challenge was in getting the kits to us from lab vendor. There was [sic] some delays... there would be weeks … where they [staff mailing kits] weren’t able to do anything.” |
Low patient awareness about CRC in general/confusion with a mailed program^a | 5 | 3 | “I think it’s difficult for people to figure out how to do those kits at home without having somebody having instructed them in advance at the clinic. They get the kit and … don’t know what to do with it, or else they complete it but then it’s not correct.” |
Internal setting (organizational environment) | |||
Time burden on staff to implement program components competing with other work demands^a | 8 | 8 | “It was time consuming because sometimes we’d get like 40 of them back in a week. One staff person would … make each person an appointment and ‘check’ them in [when FIT was returned]. And it would take her quite a while to get that done… then I would … go into their chart and document everything and result the test. So if I had 30 tests, it would take me a good couple of hours to get it all done at once.” |
Staff roles unclearly defined, staff not fully in place or trained in new work | 3 | 3 | “One of the biggest, early challenges that we had was identifying a workforce that can actually do the work. Because the workflow was in process, I wasn’t ready to hand it over to the clinical teams where I wanted it eventually. And the front office team that ended up doing it, they were, of course, always really busy. So I had some part-time staff that worked on it.” |
High leadership/staff turnover or restructuring | 2 | 5 | “Our organization has changed a lot with role changes and turnover. We have also grown so much that we’ve had to kind of put the brakes a little bit on some of the more innovative work that we’re doing like STOP in order to build our foundation a little stronger, and create more solid workflows and ensure that we’re more standardized across the whole organization.” |
Inadequate EMR staffing resources/technology challenges within the electronic health record | – | 6 | “Another issue is that our EMR staff are really stretched beyond their FTE. There is 1.5 EMR staff available to the whole organization of 300+ people. And one of them… is not only in transition to her job, but she is in training still. And so having access for problems that might come up [for STOP] in … issues that need to be addressed from [EMR vendor], those were really problematic to resolve.” |
Batch printing of materials was a new workflow creating time consuming challenges for execution | – | 5 | “We also had a lot of work that we had to do with [EMR vendor], to determine which of our printers could handle the capacity of printing batch letters. We hadn’t done that before, so this was our first experiment with batch letter printing… it was just a lot of time consuming work to figure out.” |
Limited coordination between participating clinic sites, undefined process for resolving issues as they arise | – | 2 | “…one of the issues that I’ve been having is the communication with all the clinics… our own internal communication and, what that workflow is when there is retesting [of a FIT kit] that is required—does that become the work of their clinical team or does that become the work of the person who’s doing the central approach and sending out those kits centrally?” |
Intervention characteristics | |||
Cannot accurately identify target population because screening history inconsistently documented in health record | 5 | – | “One of the things that has come up as a concern is about patients who receive the kits unnecessarily. And it’s because the health maintenance alert in Epic hasn’t been updated because we’ve struggled with that. So there may be a patient who’s had a colonoscopy, [but s/he is] … on the list to receive a kit.” |
Direct-mail approach may be incompatible with patient population (e.g., homelessness, low literacy) | 6 | 2 | “A direct mail approach may not end up being the best approach for some of the population you serve if they are homeless or have sort of transitional housing and don’t have a stable address.” |
Direct-mail approach is incongruous with how organization likes to work | 5 | 1 | “Our new CMO is not fully supportive of mailed screening kits. He prefers face-to-face conversations and directly handing kits to patients.” |
Direct-mail approach may be too impersonal/inferior to face-to-face encounters | 4 | – | “I think the thing that is lacking… is the provider discussion with the patient and getting the patient to really understand that this is something that they want them to do, it keeps them healthy… I think it impersonalizes everything when you just get this kit in the mail and it’s a letter from your doctor.” |
Process | |||
Lack of timely or accessible data to show benefit worth the effort^a | – | 8 | “I’ve struggled … to determine whether or not this particular type of intervention was really making a difference. And the data I’ve received from the project so far, though helpful, I don’t think really captures that piece very well…. Are we doing a better job at screening for and preventing CRC? And it’s been hard for me as the project manager to gather the data that I need in order to make that determination.” |
Incorrect postage on kits | – | 3 | “The postage for mailing the kit was a hurdle, because we are putting it on the envelopes to go to the lab as well—we had figured out the cost and the weight for each one and then mailed a bunch, and I guess the postage cost increased so half came back. So now we are going to go back and add more postage for the next batch.” |
Wasted samples / no collection date | – | 1 | “We had discovered that we had a very large percentage of returned samples coming back without the collection date on the label… We want to make sure that doesn’t happen anymore than we absolutely have to because it’s hard to do that kit… so wasted samples is something we need to resolve.” |
^a Also cited as a possible barrier to sustaining the STOP CRC program
Based on our thematic analysis, we organized barriers to implementation by four constructs outlined in the CFIR: external setting, internal setting, intervention characteristics, and process. These challenges reflect both early, anticipated concerns and concrete issues experienced during active implementation of STOP CRC.
External setting
More FIT testing may exacerbate colonoscopy access issues
Several health center leaders initially expressed concern that increasing fecal testing for CRC would increase the demand for follow-up colonoscopy and amplify the challenges of obtaining colonoscopy for patients, especially those who lack health care coverage. However, once the program was implemented, no health center leaders raised this issue: some cited the Affordable Care Act as providing insurance for a substantial number of previously uninsured patients, and others mentioned that the increased rate of FIT tests with positive findings was offset by fewer patients being referred for screening colonoscopy.
Cumbersome process of EMR vendor delays tool activation
None of the health center leaders anticipated issues with activating the EMR tools for delivering the intervention; these challenges arose once implementation began. Most health center leaders and staff described the activation process as cumbersome and sometimes confusing, noting that it could take up to a few months to complete.
Burdensome interface with outside labs processing FIT kits created delays and extra work
Some health center leaders/staff cited issues with outside labs that processed the returned FIT kits. These issues included delays in getting enough kits to mail to patients, issues with the post office returning kits, non-working electronic interfaces that required clinic staff to search a Web site to locate FIT test results for a given patient, and labs unresponsive to requests for changes to the address on the return envelope (kits were required to be returned to the health center to enable the bulk ordering of FIT tests). In one case, a health center noticed that some kits could not be processed because the patient had failed to provide the collection date.
Low patient awareness about CRC screening
Leaders from five health centers anticipated that patients’ low awareness of CRC screening in general would generate confusion about fecal testing, causing many patients to call their clinic with questions or to feel “turned off” by being offered an unfamiliar test. Once the program was launched, leaders from three health centers remained concerned about this issue, while the other two felt less concerned.
Internal setting
Time burden on staff to implement program components while competing with other work demands
All health center leaders anticipated concerns with the project-imposed time burden on clinic staff, and all leaders and implementation staff cited this challenge once the program was launched. Project staff consistently described how the multi-step direct mail process was time-consuming during a busy clinic shift. This burden was cited as a key challenge in sustaining the program over time.
Staff roles unclearly defined or staff not fully in place or trained in new work
Three of the eight health centers expressed concerns about staff roles and training at the onset of the project, and an equal number expressed this concern once the program was implemented. This was particularly problematic for health centers that relied on existing staff to deliver the program and/or opted to delegate project tasks across multiple staff roles. Health centers that had a sole staff member dedicated to overseeing the project and had centralized their process did not express this concern.
High leadership/staff turnover or restructuring
At the onset of the program, leaders at two health centers anticipated that high leadership and staff turnover would be an issue, and leaders at five health centers cited this challenge once the program was launched. Moreover, several health centers opened new clinics during the program’s implementation, which resulted in the de-prioritization of newly implemented and innovative programs, such as STOP CRC.
Inadequate EMR staffing resources and technology challenges within the EMR
Inadequate EMR staffing and technology were not anticipated to be challenges at any of the health centers. Nonetheless, once the program was launched, six health centers described being understaffed or having EMR specialists who were stretched thin by other clinical priorities. These issues made troubleshooting EMR challenges or workflows for STOP CRC burdensome for the EMR staff.
Batch printing of materials was a new workflow and created time-consuming challenges
Prior to STOP CRC, none of the health centers had used batch printing. Once the health centers implemented the process, five centers reported challenges with this function.
Challenges with internal coordination
After the program was launched, leaders at two health centers expressed concerns about limited coordination between participating clinic sites. Chiefly, they described having undefined internal processes and lacking regular communication for aligning workflows or resolving issues.
Process
None of the health center leaders anticipated concerns about the process by which the intervention would be delivered. Once the program was implemented, all eight centers desired timely and accessible data that could show the benefits of the program; however, not all felt they received data from the study in a form that clearly demonstrated results or could guide future implementation decisions. Health center leaders desired data showing that the program was worth the extra costs in staff time and other resources, compared to in-clinic distribution of FIT kits. A minority of health center leaders also expressed challenges related to affixing the correct postage on the mailed kits and issues with patients omitting the collection date from completed tests.
Intervention characteristics
Inability to accurately identify target population due to inconsistently documented screening history in the health record
Several health center leaders raised concerns about inaccurate identification of patients due for CRC screening. This concern may have been driven by the lack of a discrete field in the health record to document a colonoscopy or by providers inconsistently reporting this information in the preventive health tracking tool (Health Maintenance). Notably, none of these leaders cited this as a concern once the program was implemented. Instead, staff cited improvements in provider use of Health Maintenance for CRC screening as a program benefit.
Incompatibility with the population served by a given health center
The concern that a direct-mail approach may be incompatible with the population served by a given health center was anticipated by six of the eight participating health centers; only two centers expressed this concern after the program was implemented. Leaders in five centers initially felt that the direct-mail approach was incongruous with how the organization liked to work, but only one expressed this concern once the program was launched. The concern that the direct-mail approach may be too impersonal to motivate patients to get screened was held by four health centers initially but by none after implementation.
Successes of the direct-mail program (Table 2)
Table 2.
Theme/subtheme | Illustrative quote |
---|---|
Improved internal processes | |
Improved standardization of CRC screening process | “Once you get the hang of the process, just carrying on with it has been pretty straightforward … this project has prepared us for implementing at the larger clinic. We know what we can and can’t do and how much time things take and what we’re going to need to prepare for to get things going here. So I think that’s been beneficial.” |
Refined workflow for future mail out, population-based screening efforts | “It’s opened up a whole new way of using the [EMR] for screening and as mentioned earlier it has helped guide how we do outreach for … different preventive measures.” |
Improved EMR accuracy and standard use of screening tools within EMR | “… we’ve been working very hard over the last year to move our health maintenance Epic field to better capture where we are with CRC screening for all of our patients. So [STOP CRC] is a big part of that… that’s been a major change for us and a real positive.” |
Program has positive attributes | |
Reached more patients for CRC screening | “More patients are being screened than were previously from … patients coming to us and getting it from their providers. And that sending it out in the mail based on the criteria that we are using is a much more effective way to get it to patients.” |
Increased CRC screening rate | “I think that STOP CRC has actually helped us to be able to better meet the [CRC screening] measure, because we have been able to get tests out to more people. So it’s impacted our [screening] scores in a positive way.” |
Saved resources | “I was actually surprised at the amount of negative [normal] results we had [from completed mailed out FITs]. So we didn’t have enough positive results to impact the [colonoscopy] resources at all—which is great.” |
Aligned with organizational values and culture | “I think STOP CRC is a good fit. And it’s really good timing, because we are moving more to that type of approach. Earlier I was trying to describe this transition from the provider being the center of how the team functions to spreading … other staff within the clinical team and other processes outside of the exam room of being able to serve patients. We’re transitioning to that…so this matches that really nicely.” |
Improved patients’ awareness, engagement and satisfaction | |
Increased patient engagement in prevention and screening | “It creates a channel or a door to have certain communications with patients that maybe we haven’t necessarily had before. Then we’re actually able to get patients back in that were staying away for one reason or another. So, I’m really optimistic about it, [as it] encourages patient care.” |
Increased patient awareness, knowledge, and skills about CRC screening in general | “Just the fact that we’re getting the information out to the community and it’s becoming less and less of a negative thing to talk about, less taboo…if one family member gets one, they explain it to the other family members and so on and so forth—they’ve been introduced to it already so if someone approaches them [about CRC screening] they’re a little less likely to say no.” |
Experienced patient satisfaction and gratitude | “I’ve actually had people thank me for educating them on it. And then they’re more willing to do it... I’ve had patients come say, ‘Oh, I got a FIT kit and got checked out,’ and they’re very thankful.” |
Providers/staff had positive reactions to/ benefited from program | |
Providers supportive and appreciative | “I think there is recognition from our providers that getting the kits to patients based off of a conversation in the exam room, that our rates of return were really low. And I think our physicians are realizing that that’s just something that they weren’t able to keep up with… so it is a successful approach, for sure, in terms of provider perspective—I’ve never heard a complaint from a provider about [STOP CRC] reminding patients.” |
Increased staff awareness, knowledge, and skills | “I do think it’s [STOP CRC] is worth it. I think that when staff have been working on the project they have enjoyed that part of it and learned a lot.” |
Based on our thematic analysis, the successes of the program related to three main areas: improving the CRC screening process, positive patient experiences with the program, and positive provider experiences with the program. Leaders from all eight health centers noted that a key strength of the program was the improved standardization of their CRC screening process and a refined CRC screening workflow. They also noted that the program reached more patients than would have been reached through in-clinic distribution alone, improved the accuracy of CRC screening data in the EMR, and encouraged the standard use of screening tools in the health record (such as Health Maintenance). Moreover, the program was perceived as increasing CRC screening rates and potentially saving resources (by reducing the need for screening colonoscopy). Other successes related to patients’ reactions to the program, which included increased patient engagement in prevention and screening; increased patient awareness, knowledge, and skills related to CRC screening in general; and patients’ satisfaction and gratitude about having received an invitation to be screened. Notably, health center providers supported and appreciated the STOP CRC program, and staff increased their awareness, knowledge, and skills about CRC screening in general and about the use of EMR tools for tracking screening events. Six of the health centers noted that the program aligned with their future goals and an organizational culture of “trying new things.”
DISCUSSION
We explored lessons learned from 26 clinics within eight community health centers in a pragmatic study to evaluate a clinic-based intervention designed to raise the rates of CRC screening. Common early concerns were patients’ access to colonoscopy, low patient awareness of CRC screening (external setting), time burden on clinic staff to carry out the program, inability to accurately identify eligible patients due to inconsistent EMR documentation (inner setting), and incompatibility of the approach with the patient population or organizational culture (intervention characteristics). Once the program was rolled out, the time burden on clinical staff remained a primary concern. Additional concerns focused on the cumbersome process for activating the EMR tools (external setting), inadequate EMR staffing resources, challenges in printing letters and labels in batches, high turnover in leadership or key staffing positions (inner setting), and lack of data to show that the benefit was worth the effort (process). Commonly cited program successes were improved CRC screening processes, increased patient awareness and engagement with screening, and positive patient and provider experiences with the program.
Health center readiness for study participation
Health center leadership and staff took steps to prepare their clinics for participating in the study, and these steps took time to execute. Although the STOP CRC inclusion criteria did not require centers to switch from a gFOBT test to FIT, all but one health center, which was already using FIT, took the opportunity to switch. Health center leadership perceived it as a valuable benefit to rely on the expertise of scientific staff in deciding which FIT to use. This process typically took several months as health center leadership and labs weighed the advantages and disadvantages of the tests, considering such factors as existing relationships with external labs, cost, where a given test was processed (on site or at an outside laboratory), and whether patients would be able to understand how to do the test. Leadership and staff at several health centers engaged in other readiness activities, including soliciting provider buy-in for the program (which sometimes involved coordinating with the project co-PI to present at provider meetings); others focused on updating their EMR with historical screenings, particularly colonoscopy. This investment in readiness was an essential and unmeasured aspect of the trial. A key lesson was that even though our study criteria did not require the health centers to take a given set of steps to improve readiness, several health centers leveraged the project as an opportunity to make additional improvements. Further efforts are needed to identify and anticipate these readiness steps and to consider a validated tool to assess the readiness of a health center to participate in a pragmatic trial. A high-value readiness assessment may capture aspects of the current setting as well as additional changes health centers may want to make prior to implementation.
Internal setting
Clinical practices are busy, and they may lack the capacity (due to limited staffing, time constraints, or frequent staff and leadership turnover) to implement an intervention that involves substantial effort on the part of clinical staff. Clinical infrastructure has been shown to be important in other clinical improvement efforts: Frosch et al. gathered qualitative field notes from observations of 12 community-based primary care practices that implemented a CRC and prostate cancer screening program using pre-visit brochures and videos; they reported that the practices better able to integrate the project into their clinical workflows had (1) adequate clinic infrastructure, (2) a relatively high number of patients eligible for the intervention, and (3) positive work and patient care environments [21].
The challenges identified by health center leadership emphasize that significant leadership and provider turnover may destabilize an organization to the extent that initiatives previously endorsed are less prioritized. Clinics that adopted a centralized process, involving staff with allocated and protected time on the project, had some protection against staffing turnover and time burden (related to clinic staff being too busy to fit it into their day-to-day work). Aligning health center champions across all levels of the organization is critical for implementing the program. Moreover, implementing a program in a way that demonstrates early success to health center leaders and staff may help maintain buy-in once the program is rolled out.
Clinic leadership and those conducting the intervention may also be skeptical about whether the program is better than standard care. In addition, given the desire to offer consistent, quality programs across clinic sites, health center leaders may introduce related initiatives (e.g., boosting in-clinic FIT kit distribution) in usual care clinics. Limited buy-in may hinder the full implementation of the intervention or may lessen resilience to overcome challenges as they arise. On the other hand, relying on clinic staff to carry out the intervention can offer key advantages, such as identifying the workflows and processes critical for successful implementation and maintenance.
External setting
The STOP CRC trial was conducted amid important external transformations in health care. Medicaid expansion resulted in large increases in the number of new patients at STOP CRC health centers. Several of the health centers that participated in the trial opened new clinics during the course of the intervention. Additionally, the state of Oregon adopted legislation in 2011 to form Coordinated Care Organizations (CCOs) to provide high-quality coordinated care for patients served by Oregon’s Medicaid health plan. The legislation required definitions of quality metrics for the CCOs, and in 2012, CRC screening became an incentivized metric for CCOs. Moreover, in 2014, Oregon passed legislation requiring insurance companies to cover the cost of a screening colonoscopy, even when polyps are removed. These changes had the effect of reducing structural barriers affecting patients and incentivizing health centers to raise their rates of CRC screening. At the same time, the inclusion of CRC screening as a CCO metric also motivated Medicaid health plans to support health centers’ screening efforts. Some of those plans used the direct-mail approach and mailed FIT kits to Medicaid patients identified as eligible based on their claims data. All of these factors diminished the stability of clinics assigned to usual care. These external changes, together with internal ones, sometimes distracted from the focus needed to fully implement the STOP CRC program.
Program successes
A positive impact of this research, attributed to the pragmatic way it was delivered, was that participation in STOP CRC prompted clinic leadership to prioritize and organize their screening process into a more systematic and population-focused effort. A positive unintended consequence was that the improved processes, EMR tools, and workflows implemented in the STOP CRC program were thought by leadership to be transferrable to other quality improvement programs, such as breast cancer screening, cervical cancer screening, and immunizations. This opportunity to improve the delivery of care may not have been possible had we relied on research staff and a conventional trial design to deliver the intervention. Notably, our trial also used plan-do-study-act (PDSA) cycles to assist health centers in further refining their workflows or testing enhancements to the direct-mail program (Coury et al. submitted). Despite the stated challenges, data from the organizational survey conducted from November 2015 to January 2016 showed that six of the eight participating health centers agreed or somewhat agreed that the benefits of STOP CRC were worth the extra work (data not shown).
Few investigations have reported on the challenges of engaging clinics to participate in research. The experience of Murray et al., whose partner clinics sometimes shut down unexpectedly, was not shared by our participating health centers [1]. However, like Murray et al., who found that clinics may fail to implement EMRs or other tools on a timeline needed to fulfill research requirements, we found that implementation at some health centers was substantially delayed due to technological issues with their tools, competing priorities, staffing issues, and other reasons. In addition, consistent with Dietrich et al., who showed that leadership stability was key to the success of CRC screening at community health centers, we found that several STOP CRC health centers experienced turnover in key staff positions within the first year of implementation [2]. These findings reinforce the need for health center- and clinic-level data when conducting pragmatic trials and suggest that characteristics of leadership, practice experience, population served, and overall stability need to be recorded. Indeed, over one-half of the health centers in the STOP CRC trial experienced turnover of leadership or key staff roles, although only two health centers anticipated this challenge at the pre-implementation interview.
Limitations and strengths
This study’s limitations are consistent with those inherent to qualitative data collection and analysis, including a relatively small sample size, which may have resulted in a limited set of anticipated and experienced challenges. Moreover, reports from leaders and staff may have been positively biased because of their participation in STOP CRC (social desirability bias). However, we used several techniques to lessen these limitations, including conducting interviews with leaders and staff representing a range of roles at each health center, collecting data at multiple time-points, using interview guides, having staff formally trained in qualitative methods collect and analyze the data, and analyzing the data through formal coding and iterative review. Additional strengths include our collaboration with health center leaders and staff to review and interpret the findings, and our use of the well-validated CFIR framework to guide our efforts.
CONCLUSIONS
We present lessons learned from a multi-site pragmatic study of an intervention intended to raise CRC screening rates. Our findings underscore aspects of the external and internal environment that can offer advantages for clinical care and pose challenges to conducting pragmatic research. Future pragmatic research may benefit from anticipating the kinds of benefits and challenges faced by our health centers.
Compliance with ethical standards
Acknowledgments
Grant Support: Research reported in this publication was supported by the National Institutes of Health through the National Center for Complementary and Alternative Medicine under Award Number UH2AT007782 and the National Cancer Institute under Award Number 4UH3CA18864002. The work has not been published before nor is it under consideration for publication anywhere else. The manuscript has been approved by all co-authors.
Conflict of interest
The authors declare they have no conflicts of interest.
Research involving human participants and/or animals
The study was approved by the Institutional Review Board of Kaiser Permanente Northwest (Protocol # 4364; Trial Registration: ClinicalTrials.gov NCT01742065). All procedures performed in studies involving human participants were in accordance with the ethical standards of the responsible committee on human experimentation at Kaiser Permanente Northwest and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Footnotes
Implications
Practice: Electronic health record tools can facilitate improvements in colorectal cancer screening processes and rates, but their implementation can be challenging.
Policy: Effective and sustainable colorectal cancer screening programs must consider organizational capacity and on-site electronic health record expertise as well as patients’ awareness of colorectal cancer screenings and access to follow-up colonoscopy.
Research: Future research is needed to identify external and internal factors that drive STOP CRC implementation using quantitative data.
Contributor Information
Gloria D Coronado, Email: Gloria.d.coronado@kpchr.org.
Jennifer L Schneider, Email: Jennifer.L.Schneider@kpchr.org.
Amanda Petrik, Email: Amanda.F.Petrik@kpchr.org.
Jennifer Rivelli, Email: Jennifer.S.Rivelli@kpchr.org.
Stephen Taplin, Email: taplins@mail.nih.gov.
Beverly B Green, Email: green.b@ghc.org.
References
- 1.Murray DM, Katz ML, Post DM, et al. Enhancing cancer screening in primary care: rationale, design, analysis plan, and recruitment results. Contemp Clin Trials. 2013;34(2):356–363. doi: 10.1016/j.cct.2013.01.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Dietrich AJ, Tobin JN, Robinson CM, et al. Telephone outreach to increase colon cancer screening in medicaid managed care organizations: a randomized controlled trial. Ann Fam Med. 2013;11(4):335–343. doi: 10.1370/afm.1469. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Coronado GD, Vollmer WM, Petrik A, et al. Strategies and opportunities to STOP Colon cancer in priority populations: design of a cluster-randomized pragmatic trial. Contemp Clin Trials. 2014;38(2):344–349. doi: 10.1016/j.cct.2014.06.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.NIH Common Fund. National Health Care Systems Research Collaboratory. 2013; http://commonfund.nih.gov/hcscollaboratory/index. Accessed July 28, 2016.
- 5.Coronado GD, Retecki S, Schneider J, Taplin SH, Burdick T, Green BB. Recruiting community health centers into pragmatic research: findings from STOP CRC. Clin Trials. 2016;13(2):214–222. doi: 10.1177/1740774515608122. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Coronado GD, Vollmer WM, Petrik A, et al. Strategies and opportunities to STOP colon cancer in priority populations: pragmatic pilot study design and outcomes. BMC Cancer. 2014;14:55. doi: 10.1186/1471-2407-14-55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Daniels SE, Walker GB. Working through environmental conflict: the collaborative learning approach. Westport, CT: Praeger; 2001. [Google Scholar]
- 8.Coronado G, Burdick T, Petrik A, Kapka T, Retecki S, Green B. Using an automated, data-driven, EMR-embedded program for mailing FIT kits: lessons learned from the STOP CRC pilot study. Journal of General Practice. 2013;2:141. doi: 10.4172/2329-9126.1000141. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Brown T, Lee JY, Park J, et al. Prev Med Rep. 2015;2:886–891. doi: 10.1016/j.pmedr.2015.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Emmons KM, Weiner B, Fernandez ME, Tu SP. Systems antecedents for dissemination and implementation: a review and analysis of measures. Health Educ Behav. 2012;39(1):87–105. doi: 10.1177/1090198111409748. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Tu SP, Young VM, Coombs LJ, et al. Practice adaptive reserve and colorectal cancer screening best practices at community health center clinics in 7 states. Cancer. 2015;121(8):1241–1248. doi: 10.1002/cncr.29176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Patton MQ. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks, CA: Sage Publications, Inc.; 2002. [Google Scholar]
- 14.Bernard HR, Ryan GW. Analyzing qualitative data: systematic approaches. Los Angeles [Calif.]: SAGE; 2010. [Google Scholar]
- 15.Alkelai A, Greenbaum L, Lupoli S, et al. Association of the type 2 diabetes mellitus susceptibility gene, TCF7L2, with schizophrenia in an Arab-Israeli family sample. PLoS ONE. 2012;7(1):e29228. doi: 10.1371/journal.pone.0029228. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Denzin N, Lincoln Y. The sage handbook of qualitative research. Thousand Oaks, CA: Sage Publications, Inc.; 2011. [Google Scholar]
- 17.Strauss A, Corbin J. Basics of qualitative research: techniques and procedures for developing grounded theory. 3rd ed. Thousand Oaks, CA: Sage Publications, Inc.; 2008. [Google Scholar]
- 18.Lofland L, Lofland J. Analyzing social settings: a guide to qualitative observation and analysis. 3rd ed. San Francisco, CA: Wadsworth Publishing, Inc.; 1995. [Google Scholar]
- 19.Riessman C. Narrative analysis: qualitative research methods series 30. Newbury Park, CA: Sage Publications; 1993. [Google Scholar]
- 20.Silverman D. Qualitative research: a practical handbook. 3rd ed. Thousand Oaks, CA: Sage Publications, Inc.; 2010. [Google Scholar]
- 21.Frosch DL, Singer KJ, Timmermans S. Conducting implementation research in community-based primary care: a qualitative study on integrating patient decision support interventions for cancer screening into routine practice. Health Expect. 2011;14(Suppl 1):73–84. doi: 10.1111/j.1369-7625.2009.00579.x. [DOI] [PMC free article] [PubMed] [Google Scholar]