This paper presents methods for identifying what makes programs work (i.e., their core functions and forms) so that implementers do not alter those essential elements as they adapt programs to their organization or setting.
Keywords: training, quality assurance, fidelity, effective strategies, HIV/AIDS, implementation
Abstract
High-quality implementation of evidence-based interventions (EBIs) is important for program effectiveness and is influenced by training and quality assurance (QA). However, gaps in the literature contribute to a lack of guidance on training and supervision in practice settings, particularly when significant adaptations in programs occur. We examine training and QA in relationship to program fidelity among organizations delivering a widely disseminated HIV counseling and testing EBI (RESPECT) in which significant adaptations occurred due to new testing technology. Using a maximum variation case study approach, we examined training and QA in organizations delivering the program with high and low fidelity (three high-fidelity and three low-fidelity agencies). We identified themes that distinguished high- and low-fidelity agencies. For example, high-fidelity agencies more often employed a team approach to training; demonstrated use of effective QA strategies; leveraged training and QA to identify and adjust for fit problems, including challenges related to adaptations; and understood the distinctions between RESPECT and other testing programs. The associations between QA and fidelity were strong and straightforward, whereas the relationship between training and fidelity was more complex. Public health needs high-quality training and QA approaches that can address program fit and program adaptations. The study findings reinforced the value of using effective QA strategies. Future work should address methods of increasing program fit through training and QA, identify a set of QA strategies that maximizes program fidelity and is feasible to implement, and identify low-cost supplemental training options.
Implications.
Practice: Training and QA should be flexible enough to address program fit and adaptations that occur when programs are delivered in practice settings in order to better support high-quality implementation.
Policy: Consideration should be given to the development of policies that address standards for training and QA in public health practice settings.
Research: Research should focus on identifying training and QA strategies that maximize program fidelity, address the varied needs of programs, staff, and organizations, and are feasible to implement.
INTRODUCTION
There is a longstanding practice in HIV/AIDS prevention and treatment emphasizing dissemination and implementation of evidence-based interventions (EBIs) [1, 2]. High-quality implementation is important to maintaining program efficacy and is influenced by a host of factors, including practitioner training and ongoing quality assurance (QA) [3, 4]. Large-scale reviews consistently point to the value of training and QA in achieving program fidelity [3, 4]. In particular, the impact of effective QA on improving and maintaining program fidelity is strong [3, 5].
Despite its foundational importance, significant gaps in the literature have contributed to a lack of guidance on how best to train and supervise practitioners delivering EBIs [6–8]. Furthermore, there are no standards to guide training and QA to address adaptations carried out to create a better fit within an agency or as a result of technological changes that alter the program. In the absence of guidelines, organizations may carry out training and QA in ways that enhance or detract from implementation fidelity. To add to our knowledge of how agencies conduct training and QA and their relationship to implementation fidelity, we examined these issues in agencies delivering an HIV counseling and testing program, RESPECT [9]. RESPECT provides a highly instructive example because: (1) it was widely disseminated through a national effort that included standard training until 2014; (2) QA is identified as a core component of this program; (3) RESPECT was one of several testing and counseling programs supported by state and federal sources (e.g., CTR, CRCS), contributing to the likelihood that agency staff might have experience with a similar program; and (4) changes in HIV testing technology (e.g., rapid test) resulted in a major adaptation of RESPECT from a two-session program to a single session that was offered in many agencies. We draw on Chambers and Norton [10] and the Centers for Disease Control and Prevention (CDC) designation in categorizing the change to a single-session RESPECT as an adaptation.
Characteristics of effective training
EBI training is best conceptualized as an ongoing process, which involves both initial training and QA. All staff responsible for delivering a program and those who supervise staff should be trained to apply the EBI in practice. Initial training, often delivered in a single intensive workshop, typically includes background information (e.g., theory, philosophy), introduction to program core components, and may also include practice (e.g., role plays) with feedback [3, 11, 12]. However, initial training is not sufficient for achieving mastery of new skills [6, 12–14]. Practitioners need opportunities to obtain feedback, to practice new skills under the direction of a competent supervisor, and to use tools that enhance program delivery (i.e., QA). Effective QA learning strategies include, for example, modeling, practice, and feedback [12, 15]. Practitioner competence and high-quality program delivery are facilitated by trained supervisors who engage in observation and feedback, staff engagement in problem-based learning (PBL; e.g., case studies), and the use of written QA plans [3, 4, 13]. Quality of supervision is important to effective QA [14, 16–18]; supervisors trained on a specific EBI are more apt to understand its implementation challenges. Further, supervisor participation in EBI training conveys a commitment to working together to incorporate the EBI into the organization [19, 20]. Indeed, team-based learning has been shown to enhance program delivery, especially when institutional barriers exist [15, 19].
Contextual factors that impact effectiveness of training
Initial training and supervision do not occur in a vacuum. Their impact on program delivery may be influenced by contextual factors such as practitioner characteristics and skills, program adaptations, and organizational fit [12, 13, 21]. Practitioners enter training with an existing set of skills, some of which they are actively using in their current work. When training is focused on a program (i.e., EBI) that is well outside the practitioner’s current skill set, motivation to master the new skills is reduced in the absence of ongoing support [13, 22]. Interestingly, challenges also emerge when a new program is very similar to existing programs [13]. In the latter case, old information may interfere with learning new information (e.g., proactive inhibition, proactive interference) [23, 24]. That is, a practitioner may view the training as redundant and fail to learn the new program or skill set. Thus, challenges that lead to lower quality program implementation can occur when training for a new EBI requires skills that are either very different from or very similar to those practitioners currently hold.
Another contextual factor to consider is program adaptation and its impact on fidelity. Fidelity is associated with program effectiveness [4, 12], yet it is widely accepted that adaptations occur when EBIs are implemented in practice settings [25, 26]. Adaptations may occur, for example, when organizations adjust programs to meet client needs or when a technological change leads to different ways of achieving program goals (e.g., rapid testing replaces traditional testing methods). There is a paucity of research on how the value of training is impacted by adaptations. When staff training is based on the original EBI, but the EBI is significantly adapted in practice, staff members may be ill-prepared to deliver the adapted program. This, in turn, may impact program fidelity.
Organizational level factors can also impact program implementation [21, 27, 28]. Program fit refers to how well a program meshes with, for example, an organization’s structure (e.g., time allotted for clients vs. time needed to deliver the program), the demands placed on staff (e.g., client load), and/or staff skills (e.g., trained vs. not trained) [29, 30]. Even with high-quality training and QA, if there is poor fit, programs may not be implemented with fidelity. Fit challenges may emerge during training, early on in program implementation (e.g., pilot runs), or during full implementation [20]. For example, challenges may come to light during QA when a supervisor observes that counselors cannot deliver the program in the allotted time, which leads to a client backlog. Ongoing QA serves as a mechanism for addressing fit problems, and may lead to modifications. However, staff may make adaptations to the EBI to enhance fit but may not have the technical assistance necessary to help minimize threats to fidelity.
Disseminated HIV counseling and testing programs
RESPECT [9] is one of several counseling and testing programs that was disseminated by the CDC [31], and for which the CDC provided standardized training. RESPECT was initially implemented as a two-session program with a 2-week interval between sessions. Typically, a client participated in a 20-min counseling session that included establishing short-term goals for reducing risk and a blood draw for the HIV test. The client returned for a second session and test results; the second session offered an opportunity to reflect on successes and failures with goals, to re-set goals, and to prepare for medical follow-up if HIV-positive. The uncertainty present during the wait period (i.e., unknown HIV status) may have created conditions that increased the likelihood of executing risk reduction goals. Although RESPECT was initially disseminated as a two-session program, the number of sessions was not a core component [32]. Several factors led to changes in the number of sessions. Technological advancements led to the availability of rapid testing [33], and evidence suggested that an adaptation to a single session was largely comparable to the two-session program [34]. Following this, the CDC provided updated guidance regarding adaptations to RESPECT [32]. In the adapted format, RESPECT was delivered in a ‘single session’ with brief counseling before and after test results (e.g., rapid test results were available in less than 30 min).
The CDC also supported Comprehensive Risk Counseling and Services (CRCS) and Counseling, Testing and Referral (CTR). CRCS is an intensive individual-level client-centered intervention delivered to high-risk clients. Counselors help clients identify risks and challenges and then develop solutions to address these [35]. CTR is a collection of prevention strategies that can be used separately or together to help clients learn their HIV status and/or to reduce their risks. Although both programs share similarities with RESPECT (e.g., individual level, client-centered), they are distinct programs (e.g., CTR elements can be delivered separately whereas RESPECT elements are delivered as a package; CRCS is intensive counseling and can involve many sessions whereas RESPECT is brief and limited to two sessions).
In the current study, we used a case study design to examine the relationship between training and QA and program fidelity in six agencies delivering RESPECT. This qualitative study employs a maximum variation case study approach [36], with a focus on agencies at two ends of the fidelity spectrum (i.e., high-fidelity and low-fidelity) using data from a larger mixed-methods investigation, the Translation into Practice (TIP) Study [37].
METHODS
Overview
Case studies are effective for determining differences and similarities between separate entities, allowing data to be examined in light of each entity’s distinct context and environment. The present paper used staff interviews to assess training and QA and client exit surveys to assess program fidelity.
Sampling and data collection
Agencies
A subset of the agencies from the TIP study, three low-fidelity agencies and three high-fidelity agencies (defined below), was the focus of the case analysis. Within these six agencies, we interviewed 25 staff members and obtained 194 client surveys.
Clients
We obtained anonymous exit surveys from an opportunistic sample of clients. Clients were eligible to participate if they were 18 years of age or older and had received the first session of RESPECT. Client surveys were obtained at participating agencies where staff had been trained in study procedures (see Dolcini et al. [37] for details). The exit survey was completed after the participant’s first RESPECT session under private conditions, following informed consent.
Agency staff
We interviewed executive directors (ED), supervisors (SUP), and counselors (CN) at participating agencies (see Catania et al. [38] for selection procedures). In some agencies, staff performed dual roles, most commonly as an executive director and supervisor (i.e., ED/SUP) and occasionally as a supervisor and a counselor. Following informed consent, we conducted semi-structured, telephone and in-person interviews (45–60 min) under private conditions. Interviews were digitally recorded, transcribed, and checked for accuracy. Small incentives were provided.
Measures
Client survey
The self-administered client survey was brief (5 min) in order to accommodate clinic flow and client schedules. The survey assessed exposure to RESPECT counseling and was used to determine program delivery. Client exit indices have been found to reliably capture what transpires in related service settings and are widely used in health services research [39–42].
Determining agency-level fidelity
Based on the client survey, a fidelity index was designed to assess whether three fundamental program components reflecting the primary objectives of RESPECT [43] were conducted during the counseling session. Items and rationale for determining agency-level fidelity are described in Catania et al. [38]. We relied on the literature (e.g., ref. [4]), as well as how agency-level fidelity scores clustered in our sample, to guide cutoffs for the present analyses. Agencies with a score below 60 were considered low-fidelity and those with scores of 79 or higher were labeled high-fidelity (TIP full-sample agency fidelity scores ranged from 26 to 88 out of 100). In agencies with scores below 60, approximately 40% or more of the clients had not received the program as designed.
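To make the classification rule concrete, the sketch below applies the cutoffs described above to agency-level scores. Only the cutoffs and the six case-study scores (from Table 2) come from the study; the function and variable names are illustrative, not the study's analysis code.

```python
# Illustrative sketch of the agency-level fidelity classification described
# above. The cutoffs (<60 = low; >=79 = high) and the six case-study scores
# are taken from the text and Table 2; everything else is hypothetical.

def classify_fidelity(score: float) -> str:
    """Label an agency by its fidelity score (0-100) using the study's cutoffs."""
    if score < 60:
        # Below 60, approximately 40% or more of clients did not receive
        # the program as designed.
        return "low"
    if score >= 79:
        return "high"
    # Agencies falling between the cutoffs were not selected as cases.
    return "intermediate"

agency_scores = {"A": 80, "B": 87, "C": 81, "D": 27, "E": 44, "F": 52}
for agency, score in agency_scores.items():
    print(f"Agency {agency}: score={score}, fidelity={classify_fidelity(score)}")
```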
Staff interviews
Staff interviews covered a range of topics related to program adoption and implementation. For the present analyses, we focused on information related to training and QA. We obtained information on whether staff received formal training on RESPECT, who delivered the training (i.e., internal vs. external entity), and what training strategies were used (e.g., didactic, active rehearsal and feedback). We also assessed whether and how the agency conducted QA, as well as whether QA was formalized through a written protocol. We focused on several types of training strategies identified as important to mastering EBIs, including didactic training, PBL, active rehearsal strategies (ARS), and coaching, observation, and feedback (COF) (see Table 1 [4, 12–15, 44, 45]). We identified didactic training as necessary and identified PBL, ARS, and COF as effective strategies.
Table 1.
| Strategies | Examples |
| --- | --- |
| Didactic^a | Lecture, readings |
| Active learning | Role play/behavioral rehearsal, reverse role play, behavior modeling |
| Observation and feedback | In-person, video, or audio recording of staff working with clients (observation); reinforcement, correction of behavior |
| Problem-based learning | Self-reflection, case studies, written scenarios |

^a Didactic strategies are necessary for training, but should be coupled with active learning.
Analytic plan
We selected cases from each stratum for a comparative case study analysis (n = 6; low-fidelity = 3, high-fidelity = 3). We selected cases based on fidelity scores, inclusion of both CBOs and departments of public health (DPHs), as well as on the availability of sufficiently rich material on training and QA for analyses. We used multiple sources of data, including semi-structured interviews conducted with executive directors, supervisors, and counselors, and field notes. Each transcript was read in full multiple times by coders (M.M.D., J.H., R.S.). We conducted structural and descriptive coding [46] to characterize training and QA. We extracted all material related to training and QA from the transcripts. Using an iterative process, team members reviewed extractions, developed initial codes and a codebook, applied codes to a subset of interviews, and revised the codes and codebook. Two reviewers completed the coding (R.S., M.D.R.), meeting regularly with a third team member (M.M.D.). Reliability coding (20% of transcripts) showed strong agreement; discrepancies were resolved by consensus. We then conducted magnitude coding through a transfer of codes to charts summarizing training and QA activity at each agency (e.g., types of training strategies, who received the training). Following this, we conducted pattern coding to identify themes [46].
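The reliability check reported above ("strong agreement" on 20% of transcripts) is not tied to a named statistic in the text. As a minimal, hypothetical sketch, chance-corrected agreement between two coders' binary code applications could be computed with Cohen's kappa; all names and data below are illustrative, not the study's.

```python
# Hypothetical sketch of quantifying intercoder agreement during reliability
# coding. Two coders independently decide whether a code (e.g., "written QA
# protocol") applies to each extracted segment; 1 = applied, 0 = not applied.
# The study reports "strong agreement" without naming a statistic; Cohen's
# kappa is one common chance-corrected choice.

def cohens_kappa(coder1: list[int], coder2: list[int]) -> float:
    """Cohen's kappa for two coders' binary decisions on the same segments."""
    assert len(coder1) == len(coder2) and coder1, "need paired decisions"
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement under independence, from each coder's marginal rates.
    p1, p2 = sum(coder1) / n, sum(coder2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Illustrative decisions for ten segments from a reliability subsample.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # 0.80 for these data
```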
RESULTS
Agency characteristics
High-fidelity agencies
The high-fidelity agencies consisted of two CBOs (A, B) and a DPH (C; see Table 2). Agency A had prior experience delivering EBIs. Three full-time staff members were responsible for implementing RESPECT. The agency primarily served high-risk women, injection drug users, and men who have sex with men (MSM). Agency B had two full-time staff members dedicated to HIV/AIDS, including RESPECT. RESPECT was supported by funds provided to the state by the CDC. In addition to RESPECT, Agency B also delivered CTR. The agency conducted outreach to engage their clientele, who were primarily low-income, involved in the legal system, male, and white. Agency C delivered a state-run model of RESPECT at two clinical sites, as well as at outreach sites including correctional facilities and community treatment centers. Funding for the program was provided by the state and the county. Agency C served male and female high-risk clients from a variety of racial/ethnic backgrounds, including injection drug users and MSM.
Table 2.
| High-fidelity agencies | Fidelity score | Agency type | Geographic characteristics | Agency size | Counseling and testing^a | Number of staff members interviewed | Number of client surveys |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A | 80 | CBO | Non-urban; East | 1 site; 12–19 staff | Conventional testing; two sessions | 4 | 31 |
| B | 87 | CBO | Non-urban; East | 1 site; 20–30 staff | Conventional and rapid testing; two sessions | 3 | 30 |
| C | 81 | DPH | Urban; South | Multiple sites; >300 staff | Rapid testing; two sessions | 7 | 49 |
| Low-fidelity agencies | | | | | | | |
| D | 27 | DPH | Urban; West | 1 site^b; 3,000 staff | Rapid testing; single session | 5 | 27 |
| E | 44 | DPH | Non-urban; Midwest | 1 site; total number of staff unknown^c | Rapid testing; single session | 2 | 30 |
| F | 52 | CBO | Urban; South | 1 site; 10 staff | Conventional and rapid testing; single and two sessions | 4 | 27 |

^a Regardless of the typical approach to HIV testing at the agency, conventional testing was used for preliminary positive cases.
^b The department of health is structured as a single site; RESPECT was also being delivered through a mobile van unit.
^c The total number of staff members in the agency is unknown; however, there are 40 full-time staff in the HIV & STI Services Department.
Low-fidelity agencies
The low-fidelity agencies consisted of one CBO (F) and two DPHs (D, E; see Table 2). Although Agency D is a large urban department of public health, the STD/HIV services unit was relatively small (approximately 11 part-time staff). The agency received state and county funding for RESPECT, which was delivered at the clinic and through mobile vans. About two-thirds of clients were male, with mixed racial/ethnic backgrounds. Agency E had 40 full-time staff members in the HIV/AIDS and STI department. In addition to RESPECT, the agency also delivered CRCS. A single employee was responsible for implementing RESPECT both at the clinic and through outreach. In the HIV program, the primary clients were HIV-positive individuals including injection drug users, MSM and homeless individuals from a wide range of ethnic/racial backgrounds. Agency F delivered RESPECT as part of an ongoing counseling and testing program. The agency integrated RESPECT into counseling and testing on an ad hoc basis after being offered the opportunity to obtain training on the program. Both standard and rapid testing were offered. RESPECT was delivered at the agency and through multiple outreach sites (e.g., correctional facilities, substance abuse centers, homeless shelters, and other CBOs). The agency served low-income clients, the majority of whom were white.
Themes
The case analysis revealed four themes related to training, QA, and fidelity: (1) agency approach to training: “team” versus individual; (2) QA strategies; (3) impact of proactive inhibition; and (4) training and QA reveal fit issues. Below we discuss how these themes are reflected in high- and low-fidelity agencies; representative quotes are provided in Tables 3 and 4.
Table 3.
| Theme | High-fidelity agencies | Low-fidelity agencies |
| --- | --- | --- |
| Approach to training | . . . when we were offered the RESPECT training, we went as a whole department, and we all agreed this was a real easy fit. [Agency A, SUP] | I guess just a clarification that we [executive director and supervisor] weren’t participating in the training. [Agency E, ED/SUP] |
| | We met as a team and we discussed—before we even went [to the training], we discussed would this be something that would work well with what our agency does. . . it should be a team decision. We all do interventions, so any decision on what trainings or what diffusion of effective behavioral interventions we implement should be a team decision. [Agency A, SUP] | . . . we sent about half of our staff through it [training] . . . and it was just kind of whoever’s available to go went. [Agency F, ED] |
| | . . . we discussed it as a team, because we do work as a team, we bounced ideas off each other how the individual counselor reacts to said situation, and we use each others’ input. . . [Agency A, CN1] | |
| | We [the agency] were proactive and contacted the people in [city] that did the training . . . and we were trying to get the RESPECT manual before the training . . . So it was a lot, all of us studying everything. [Agency B, SUP] | |
| | Yes. We tried it out [pilot tested] with the first group of staff that were trained before anybody else was . . . [Agency C, ED] | |
| | . . . [staff] talk about how . . . it was bumpy in the beginning . . . they piloted [the protocol] [Agency C, CN2] | |
| QA strategies | We actually do everything you just said [QA strategies]. Ongoing training, each client has their individual file . . . we have observation. So, we do all of that and, always continuing education. [Agency A, SUP] | Well, you know, maybe having something written down [QA protocol] that having set measures that we are looking at, that would be great. [Agency D, ED/SUP] |
| | I use the Provider Cards, the Provider Cards are pretty much an outline. The QA form is an outline of what’s in the cards . . . [Agency A, CN2] | I: Do you have a written quality assurance protocol for RESPECT? R: Not that I’m aware of. No. [Agency E, CN1] |
| | Weekly individual supervision with each of the assigned staff. Twice monthly supervision with all of the participants in the program, all the staff. [Agency B, ED] | I: Does your quality assurance protocol for RESPECT include any of the following activities [case conferences, observation & feedback]? R: Hm, no. We have done none of that [case conferences] . . . no. I’ve never had anybody come out and observe me in the field [conduct COF]. [Agency E, CN1] |
| | [QA strategies] They all sit in on the sessions and conduct audits. They’ll audit a negative or a positive [HIV test] . . . and they’ll review that information with the employee. [Agency C, ED1] | |
| Impact of proactive inhibition | . . . we have RESPECT and CTR as two different things, you know. . . [Agency B, SUP] | I: And how did the RESPECT model modify the existing program? R: I don’t know. I don’t know [later in the interview] . . . so our protocols of how we run the clinic. And it’s certainly not specific of RESPECT. Again, in our environment we utilize a RESPECT intervention, but we do not call it RESPECT. [Agency D, ED/SUP] |
| | With RESPECT it’s just easy to implement because it was close to what we were doing . . . the only thing that I had to do differently, is the part to heighten their anxiety about whatever risk . . . because I wasn’t previously doing that. [Agency A, CN1] | The overlap between the RESPECT intervention itself, and what’s just a regular counseling and testing session and the training for the RESPECT intervention and the training that we locally provide for the HIV prevention and behavior change course are pretty similar, and it’s kind of confusing to provide, to separate the two. [Agency E, ED/SUP] |
| | It [the training] handheld us through the process of understanding what protocol-based counseling was because people had previous counseling experience, and so they wanted to let us understand what . . . portion of it did not have to do with our past experience [counseling] . . . [Agency C, CN1] | So honestly, with some of the clients, I take it or leave it . . . I go back to an older, more very client focused model whenever it’s someone who’s not major risk oriented or have a lot of risk areas, and if they are giving me cues, like they’re not really responding well to having their assumptions challenged, and just asking them a lot more questions. [Agency F, CN] |
Table 4.
| Theme | High-fidelity agencies | Low-fidelity agencies |
| --- | --- | --- |
| Time | . . . if I sit in on a session and use direct observation, which is a QA component . . . An example could be, when we first started, the staff were very focused on time and having to get everything in within that specific time period. Now, we don’t like [to] cut corners. We don’t cut things short based on time. Some people just talk and you can’t just hold them to that, ‘18 to 20 min’ type of thing . . . [Agency B, SUP] | We adjust our time in terms of what’s working, what’s not working with the program, and . . . we change it [the protocol], you know, based upon things that we learn from the clients, based upon things we learn from the staff that. . . implementing interventions. [Agency D, ED/SUP] |
| Client load | . . . well going through . . . supervisory trainings, we’ve learned the things that we needed and the different QA things that we needed to do now that we have more clients. [Agency B, SUP] | |
| Early assessments of capacity and pilot runs | We tried it out [pilot runs] with the first group of staff that were trained before anybody else was . . . [Agency C, ED] | . . . we didn’t have a lot of prep time and we weren’t familiar with [RESPECT]. . . [Agency F, ED] |
| Lack of adaptations to address mismatch between training and delivery | I had to hire a 40-hr staff person who would be the day to day staff Prevention Specialist for the RESPECT program. I also had to appoint the current supervisor of the counseling program. I had to increase her hours to provide the supervision for the program, and also identified a third staff person who would be trained in the absence of the primary staff. [Agency B, ED] | The person implementing RESPECT has just begun doing testing as a part of RESPECT. That’s brand new. Prior to that, none of our staff had done any HIV testing but we contract agencies to provide testing . . . [counselor name] just got trained in rapid testing maybe a month ago . . . [Agency E, ED/SUP] |
| | | . . . I think for me it was like a leap—a big leap when—when I started actually providing the Rapid test and looking at one session . . . I think that was the most difficult thing to conceptualize . . . because I don’t remember in the training from [the external site] where they talked about [one session]—the training was for two sessions . . . so this whole thing about—doing the rapid testing—the one session was kind of just kind of remarkably surprising to me. I mean, you know, it—it’s WORKING but it’s, ‘Hum, how well?’ [Agency E, CN] |
| | | I think it was that transition to be—being trained in two sessions. Now it’s, you know, being utilized in one session for rapid testing. [Agency E, CN1] |
Approach to training: “team” versus individual
Broad differences in approach were observed between high- and low-fidelity agencies. High-fidelity agency personnel worked collectively to create an environment in which appropriate staff members were well trained to deliver the program, whereas low-fidelity agencies took a less collaborative approach. Agency A, a high-fidelity agency, utilized a “team” approach in which counselors and supervisors attended RESPECT training together at a designated training site after making a group decision to adopt and train for the program. The philosophy of working as a team permeated the organizational culture, including the agency’s approach to training new staff. When a new counselor was hired, she was trained by the supervisor using the RESPECT manual with a focus on counseling skills and reinforcement through COF. Problem-solving RESPECT issues was also accomplished through a team approach. Similarly, in Agency B, there was an active interest by the staff at the organization in learning as much as possible about RESPECT prior to training. Finally, in Agency C, specific personnel were selected to attend the initial training, enabling them to obtain exposure to the program, while preparing to send the rest of the staff to a later training. Collectively, agency staff worked toward obtaining training and conducting preliminary runs of the program to ensure a good fit.
In contrast, low-fidelity agencies approached initial training in a more piecemeal way, and not all personnel were trained. In Agency E, the supervisory staff was not formally trained in RESPECT. In Agency D, one of two supervisory positions was eliminated due to insufficient funding. The remaining supervisor was trained and had a clear conceptualization of RESPECT; however, there was no effort for staff to train together. In this large agency, some program delivery staff received training internally, while others attended external training. In the third low-fidelity agency (F), about half the staff received the training and there was no coordination around which staff members were appropriate to train.
QA strategies
The three high-fidelity agencies incorporated effective strategies, including written QA and formal ongoing supervision (e.g., COF, PBL). In contrast, no low-fidelity agency had a written QA protocol, and QA strategies were implemented inconsistently. In Agency A, a high-fidelity organization, QA included ongoing training and education, evaluation of program delivery staff with feedback mechanisms, and intermittent case conferences between supervisors and counselors. Record keeping included a risk reduction form listing client information to ensure continuity. The agency’s QA also included case and program record reviews. Although both counselors stated there was a written protocol, one noted that he/she had not been using the protocol but was using a shortcut approach to ensuring good delivery (see Table 3). The effectiveness of the modification is unknown, but the act of using a written standard to document delivery of program components may in and of itself have benefits. In Agency B, in addition to written QA, counselors engaged in practice and feedback immediately after training, and in ongoing individual and group supervision. In Agency C, supervisors and the counselors provided consistent descriptions of effective QA strategies in use, including COF.
As noted above, no low-fidelity agency had a written QA protocol. The ED/SUP for Agency D noted that the lack of a QA protocol was a weakness. In the other two low-fidelity agencies, the supervisors and counselors made contradictory statements about the presence of QA protocols and the use of specific strategies. For example, in Agency E higher-level personnel (ED/SUP) discussed an unwritten QA protocol, but a counselor was not aware of a written or unwritten protocol. Further, the ED/SUP said the unwritten plan contained training and education for delivery staff and supervisors, and case conferences between supervisors and counselors, but the counselor reported that none of those activities were taking place. This counselor noted that case and program records were reviewed, but that there was no COF. It is possible that the inconsistencies stem from memory difficulties for the counselor or the ED/SUP. However, because individuals are likely to recall activities in which they have been engaged (e.g., people would remember having been observed), it is more likely that the data reflect the absence of QA.
Impact of proactive inhibition
In some agencies, existing HIV testing and counseling programs shared similarities with RESPECT; in such cases, it is important that staff are able to distinguish among the various programs. In high-fidelity agencies, staff recognized how RESPECT differed from other ongoing programs, whereas in low-fidelity agencies staff had difficulty making distinctions. Thus, in high-fidelity agencies, there was no evidence to suggest that proactive inhibition was occurring. Staff explicitly noted the differences between RESPECT and another program they were offering and noted how the training helped them identify the unique aspects of RESPECT (see Table 3).
In contrast, challenges with distinguishing between programs were pronounced at the low-fidelity agencies. At one agency (D), staff perceived RESPECT to be synonymous with a testing program already being implemented, while at another agency (E), staff expressed confusion about the differences. In Agency F, RESPECT was being utilized as a part of the agency’s larger counseling and testing strategy. The expectation within the agency was that only certain components of RESPECT would be incorporated, and the decision of what to incorporate was left up to the counselors. It appeared as though the staff were relying more heavily on the existing counseling and testing strategies in their delivery.
Training and QA reveal fit issues
Training and QA assisted high-fidelity agencies in determining fit and in identifying solutions to address problems, while in low-fidelity agencies there was less evidence of efforts to address fit. Specifically, as described below, some high-fidelity agencies mitigated challenges with time and client load through QA and used pilot runs to adapt RESPECT to better fit agency demands. But these types of strategies were not found in low-fidelity agencies. Further, in low-fidelity agencies, we identified a mismatch between training protocols and how RESPECT was being delivered that led to challenges for program staff (see Table 4).
Time.
A challenge with time was directly mentioned by two agencies. For example, in Agency B, a high-fidelity agency, QA activities led to the recognition that staff members were focused on delivering RESPECT in 20 min, but that the standard time constraint did not meet the needs of some clients or align with the philosophy of the agency. A supervisor noted how adaptations strengthened counselors’ ability to deliver RESPECT faithfully, even when clients needed more time (see Table 4). Only one of the three low-fidelity agencies mentioned time issues and efforts to address them. In Agency D, the executive director/supervisor noted some approaches to dealing with time constraints, but the comments were quite general, making it difficult to pinpoint specific strategies.
Client load.
Difficulty meeting client demand is another fit problem that agencies encounter. In one high-fidelity agency (B), the supervisor noted that the training that she had received helped her support counselors when the client load increased. After attending supervisor training, she instituted new QA strategies and increased the frequency of QA. Additionally, COF was retained as a practice and weekly case conferences were added. Increases in client load required other adaptations, including adding program delivery to the job duties of the trained supervisor. Later, the agency hired a new counselor and sought out formal training for that person, which increased the agency’s capacity to serve more clients effectively. This agency was the only one among the six that explicitly addressed the issue of client load, yet it provides a compelling illustration of how training and QA can be leveraged to address fit issues.
Early assessments of capacity and pilot runs.
As noted earlier, the decision to adopt RESPECT in Agency A, a high-fidelity agency, was made by the team after determining that it was a good fit. Staff members used the initial training to further assess whether RESPECT aligned with their organization and clients. Immediately following training, the agency embarked on a pilot phase to assess the level of fit. In another high-fidelity agency (B), there was not a formal pilot, but counselors engaged in practice runs before working independently with clients. Agency C initially sent a small group of staff to be trained in RESPECT, which may be indicative of exploring the fit of the program to the agency. Similar to Agency A, Agency C adopted RESPECT and proactively implemented it, prior to an impending mandate.
None of the low-fidelity agencies conducted pilot testing. In fact, in Agency D, the only practice that counselors received was at the formal training; staff began delivering the program with clients immediately post-training. The ED/SUP for Agency D also noted ongoing concerns with maintaining fidelity with single-session delivery; strategies to address capacity to deliver the single-session program could have been identified through pilot testing. Finally, in Agency F there was limited time to prepare for RESPECT, which precluded pilot testing.
Lack of adaptations to address mismatches between training and delivery.
In some agencies using single-session RESPECT, there was a mismatch between training and how the program was being delivered in the field. The high-fidelity agencies that encountered this issue made adaptations to address it, but this was not the case in low-fidelity agencies. Two of the high-fidelity agencies were conducting standard testing with two sessions, so there was no obvious need to adapt RESPECT. The third high-fidelity agency was using a combination of rapid and standard testing. In this agency (C), training included assistance for staff with the delivery of the program in a single session, thus decreasing mismatches between training and implementation practices. It is noteworthy that Agency B, which was using the two-session approach, instituted structural adaptations to meet needs that emerged through QA, including increasing supervision of counselors to support RESPECT delivery.
In contrast, the approach to delivering RESPECT at two low-fidelity agencies (D, E) did not align well with the training. The training focused on the two-session RESPECT, but these agencies were delivering the program in a single session (i.e., rapid test). The third low-fidelity agency (F) was conducting both rapid and standard testing sessions. Staff members noted the challenge of delivering RESPECT in a one-session format and that some components had to be omitted. None of these agencies had high levels of QA, which compounded the challenges. Further, Agency D had adapted the program such that test results were delivered in a phone session lasting only a few minutes. The challenges encountered by Agency E, as reflected by both supervisor and counselors, further illustrate mismatches between the training and delivery (see Table 4).
DISCUSSION
Through our case study analysis, we explored the inter-relationships among training, QA, intervention fit, and intervention fidelity. Training and QA differed substantially among organizations with high and low fidelity to the RESPECT intervention, including with regard to the use of written QA protocols, the agency approach to training, and attention to program fit. We also demonstrated how standard training may help or hinder fidelity depending on program fit, trainees’ understanding of the RESPECT program, and the similarity of a new program to existing services within an organization.
Our results reinforce the value of QA for achieving EBI fidelity [3, 4, 13]. QA is a core component of RESPECT [32], but two low-fidelity organizations failed to implement this component. The third low-fidelity agency reported an informal QA protocol, which was inconsistently executed. This finding suggests that informal QA processes may lead to weak QA or even to quiet neglect of QA altogether. The formal QA protocols found in high-fidelity organizations presented a sharp contrast and indicate a commitment to ensuring the quality of services. Furthermore, high-fidelity agencies used multiple QA strategies, including case conferences, observation and feedback, and the use of checklists. It is unclear, based on our analyses, how the variety of strategies or the level of consistency with which specific strategies were used impacted program fidelity. Future studies that examine the relative value of one QA strategy over another, including considering which approaches work well for what types of EBIs and practitioners, may lead to guidance to streamline QA, ultimately reducing costs (e.g., supervisor time).
In contrast to the relatively straightforward relationship between QA and RESPECT fidelity, our results reveal a complex picture of how training impacted fidelity. Our findings reinforce conclusions in the broader literature that initial training alone is insufficient [12–14]. All organizations in our study obtained standard RESPECT training, but the extent to which staff and supervisors participated in trainings differed between high- and low-fidelity organizations. In high-fidelity organizations it was common for the entire staff, including supervisors, to receive training. The converse was found in low-fidelity organizations. The piecemeal approach to training in these agencies resulted in inconsistent staff knowledge and skill and made it difficult for supervisors to monitor fidelity. Ideally, the entire staff would be trained on a new EBI, but this may stretch resources for some agencies. At a minimum, in addition to training service delivery staff, supervisors should have sufficient EBI-specific knowledge to support high-quality QA. Supervisor involvement can also set norms for EBI implementation at the organization [47].
In our study, the utility of training was impacted by adaptations that occurred both prior to and after training. For example, technological innovations led to a significant adaptation of RESPECT that created an opportunity to deliver the program in a single session. Yet, the most widely available training on RESPECT was for the two-session program. We found that formal training did not always prepare staff for the way RESPECT was being delivered at their agency. Even in the absence of significant adaptations, it is well documented that organizations alter components of innovations to create better fit [48], as we also observed in our study. High-fidelity agencies more often demonstrated an ability to address challenges around staff time (e.g., length of program sessions, managing client load) that emerged through QA, as well as organizational fit issues that emerged from pilot runs of the program.
Two approaches, contextual assessments and supplemental training, may help address challenges identified in the current study. The first approach involves incorporating contextual or needs assessments into training to help organizations anticipate adaptations needed to improve fit. Such assessments are fairly common when interventions are adapted for contexts or topics that differ significantly from the original program (e.g., moving from an HIV prevention setting to a substance abuse setting [48, 49]), but may also prove valuable when an EBI is being implemented in what scientists and trainers may consider a “standard” context [2, 20]. Contextual assessment is currently being incorporated into training approaches [50], but it is unclear whether this approach will be sufficient to address barriers to EBI fidelity in light of technological innovations (e.g., rapid testing) that alter the mode of delivery of programs.
Supplemental training, either before or after formal training, also holds promise for improving implementation. At least one organization incorporated rapid testing as part of service delivery after receiving training in RESPECT. In such instances, supplemental training that addresses the adaptation may be valuable. Presumably, if staff members are already trained in the fundamentals of an EBI, it will be easier to provide support in the form of lower-cost training approaches such as webinars. Supplemental training may also be a useful tool prior to training, in part because pre-training may increase practitioner engagement. As seen in our study, some high-fidelity agencies reported considerable investment in learning about RESPECT prior to training. Providers themselves suggest that training that requires less investment is preferable to in-person training [26], and consideration of practitioner preferences may increase motivation, an important factor in engagement [13]. Thus, lower-cost supplemental trainings may offer an avenue for achieving program fidelity when innovations occur and EBIs are revised, or as a means of increasing practitioner motivation to learn about new EBIs [26].
As noted earlier, trainers and supervisors need to take into account practitioners’ existing skill sets when new programs are introduced. In this study, staff from low-fidelity organizations sometimes showed difficulty distinguishing between various counseling and testing EBIs. This phenomenon, referred to as proactive inhibition or proactive interference in the psychological literature [23, 24], is not well described in the implementation literature. Our findings highlight the need for trainers to understand the variety of services staff members provide at their organizations and to highlight the distinctions among programs as part of the training. Similar to what we observed in our study with regard to risk reduction counseling approaches (i.e., RESPECT, CTR, CRCS), there are a number of EBIs in related areas of HIV prevention and treatment that focus, for example, on improving linkage-to-care and engagement for HIV-positive individuals (e.g., ARTAS, Connect HIP; see effectiveinterventions.com). To facilitate high-fidelity delivery, it will be essential for staff members to understand how a new program differs from an existing service and the distinctive evidence, theory, and activities that are central to the new program.
Limitations
The findings from our case study are limited in their generalizability to other populations but are valuable in informing theory and hypothesis building [36]. The intent of this analysis was not to enumerate all the factors that constitute adequate training and QA, but rather to examine how training and QA impact fidelity and to identify patterns to be tested in future studies. Our study did not include a direct examination of staff turnover, although we recognize that turnover is common and impacts agencies [51].
CONCLUSIONS
We identified key differences between organizations, the training they received, QA procedures, and fidelity to an evidence-based HIV prevention program. The value of high-quality training and QA that is flexible enough to address program fit and program adaptations cannot be overstated. Future research should extend this work by addressing methods of increasing program fit through training and QA, identifying a set of QA strategies that maximizes program fidelity and is feasible to implement, and identifying and testing low-cost supplemental training approaches. Collectively, such work will contribute to the development of training and QA frameworks that can guide agencies as they undertake new programs or adapt existing evidence-based programs.
Acknowledgments
This research was supported by National Institute of Mental Health grant R01MH085502 awarded to M.M.D. Data analyses and manuscript preparation were supported by National Institute of Mental Health grant R21MH105180 awarded to J.A.C., National Institute of Mental Health grant K01MH096611 awarded to M.D.-R., and an Oregon State University Thurgood Marshall Graduate Scholarship awarded to R.S. We thank Westat Corporation for contributions to data collection and staff and clients at participating agencies for sharing their time and insights.
Compliance with Ethical Standards
Conflict of interest: none declared.
Authors’ Contributions: M.M.D. and J.A.C. designed the study, and along with V.N. and A.A.G. executed the study. M.D.-R., R.S., J.H., and M.M.D. conducted data analysis and interpreted findings. M.M.D. wrote initial draft and all authors contributed to revisions and approved the final manuscript.
Human Rights: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
Informed Consent: Informed consent was obtained from all individual participants included in the study.
References
- 1. Collins CB Jr, Wilson KM. CDC’s dissemination of evidence-based behavioral HIV prevention interventions. Transl Behav Med. 2011;1(2):203–204.
- 2. McKleroy VS, Galbraith JS, Cummings B, et al.; ADAPT Team. Adapting evidence-based behavioral interventions for new settings and target populations. AIDS Educ Prev. 2006;18(4 Suppl A):59–73.
- 3. Bertram RM, Blase KA, Fixsen DL. Improving programs and outcomes: implementation frameworks and organization change. Res Soc Work Pract. 2015;25(4):477–487.
- 4. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3–4):327–350.
- 5. Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50(3–4):445–459.
- 6. Edmunds JM, Beidas RS, Kendall PC. Dissemination and implementation of evidence-based practices: training and consultation as implementation strategies. Clin Psychol (New York). 2013;20(2):152–165.
- 7. Kazdin AE. Evidence-based treatment and practice: new opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. Am Psychol. 2008;63(3):146–159.
- 8. NIMH Multisite HIV/STD Prevention Trial for African American Couples Group. Supervision of facilitators in a multisite study: goals, process, and outcomes. J Acquir Immune Defic Syndr. 2008;49(suppl 1):S59–S67.
- 9. Kamb ML, Fishbein M, Douglas JM Jr, et al. Efficacy of risk-reduction counseling to prevent human immunodeficiency virus and sexually transmitted diseases: a randomized controlled trial. Project RESPECT Study Group. JAMA. 1998;280(13):1161–1167.
- 10. Chambers DA, Norton WE. The adaptome: advancing the science of intervention adaptation. Am J Prev Med. 2016;51(4 suppl 2):S124–S131.
- 11. Fixsen DL, Blase KA, Naoom SF, Wallace F. Core implementation components. Res Soc Work Pract. 2009;19(5):531–540.
- 12. Fixsen D, Naoom SF, Blase KA, Friedman RM. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. Available at https://www.popline.org/node/266329. Accessibility verified May 3, 2019.
- 13. Lyon AR, Stirman SW, Kerns SE, Bruns EJ. Developing the mental health workforce: review and application of training approaches from multiple disciplines. Adm Policy Ment Health. 2011;38(4):238–253.
- 14. Sheidow AJ, Donohue BC, Hill HH, Henggeler SW, Ford JD. Development of an audio-tape review system for supporting adherence to an evidence-based treatment. Prof Psychol Res Pr. 2008;39(5):553–560.
- 15. Burke LA, Hutchins HM. Training transfer: an integrative literature review. Hum Resour Dev Rev. 2007;6(3):263–296.
- 16. Chinman M, Hunter SB, Ebener P, et al. The getting to outcomes demonstration and evaluation: an illustration of the prevention support system. Am J Community Psychol. 2008;41(3–4):206–224.
- 17. Saldana L, Chamberlain P, Chapman J. A supervisor-targeted implementation approach to promote system change: the R3 model. Adm Policy Ment Health. 2016;43(6):879–892.
- 18. Schoenwald SK, Mehta TG, Frazier SL, Shernoff ES. Clinical supervision in effectiveness and implementation research. Clin Psychol Sci Pract. 2013;20(1):44–59.
- 19. Culyba RJ, McGee BT, Weyer D. Changing HIV clinical knowledge and skill in context: the impact of longitudinal training in the Southeast United States. J Assoc Nurses AIDS Care. 2011;22(2):128–139.
- 20. Dolcini MM, Gandelman AA, Vogan SA, et al. Translating HIV interventions into practice: community-based organizations’ experiences with the diffusion of effective behavioral interventions (DEBIs). Soc Sci Med. 2010;71(10):1839–1846.
- 21. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
- 22. Joyce B, Showers B. Designing training and peer coaching: our needs for learning. In: Student Achievement Through Staff Development. 3rd ed. Alexandria, VA: Association for Supervision and Curriculum Development; 2002.
- 23. Anderson MC, Neely JH. Interference and inhibition in memory retrieval. In: Bjork EL, Bjork RA, eds. Memory. San Diego, CA: Academic Press; 1996:237–313. Available at http://www.sciencedirect.com/science/article/pii/B978012102570050010. Accessibility verified May 3, 2019.
- 24. Neath I, Surprenant AM. Proactive interference. In: Wright JD, ed. International Encyclopedia of the Social & Behavioral Sciences. 2nd ed. Oxford: Elsevier; 2015:1–8. Available at http://www.sciencedirect.com/science/article/pii/B978008097086851054X
- 25. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
- 26. Jacob RR, Allen PM, Ahrendt LJ, Brownson RC. Learning about and using research evidence among public health practitioners. Am J Prev Med. 2017;52(3 suppl 3):S304–S308.
- 27. Aarons GA, Horowitz JD, Dlugosz LR, Ehrhart MG. The role of organizational processes in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. New York: Oxford University Press; 2012:128–153.
- 28. Colditz GA. The promise and challenges of dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health. New York: Oxford University Press; 2012.
- 29. Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18(1):23–45.
- 30. Mowbray CT, Holter MC, Teague GB, Bybee D. Fidelity criteria: development, measurement, and validation. Am J Eval. 2003;24(3):315–340.
- 31. Johns DM, Bayer R, Fairchild AL. Evidence and the politics of deimplementation: the rise and decline of the “Counseling and Testing” paradigm for HIV prevention at the US Centers for Disease Control and Prevention. Milbank Q. 2016;94(1):126–162.
- 32. CDC. RESPECT Implementation Guide. Centers for Disease Control and Prevention; August 2012. Available at https://effectiveinterventions.cdc.gov/files/RESPECT_Procedural_Guide_8-09.pdf
- 33. CDC. Approval of a new rapid test for HIV antibody. MMWR Morb Mortal Wkly Rep. 2002;51(46):1051–1052.
- 34. Metcalf CA, Douglas JM Jr, Malotte CK, et al.; RESPECT-2 Study Group. Relative efficacy of prevention counseling with rapid and standard HIV testing: a randomized, controlled trial (RESPECT-2). Sex Transm Dis. 2005;32(2):130–138.
- 35. Centers for Disease Control and Prevention. Comprehensive Risk Counseling and Services (CRCS). CDC.gov. 2018. Available at https://www.cdc.gov/hiv/programresources/guidance/crcs/index.html. Accessibility verified October 18, 2018.
- 36. Yin RK. Case Study Research: Designs and Methods. 5th ed. Los Angeles, CA: Sage; 2014.
- 37. Dolcini MM, Catania JA, Gandelman A, Ozer EM. Implementing a brief evidence-based HIV intervention: a mixed methods examination of compliance fidelity. Transl Behav Med. 2014;4(4):424–433.
- 38. Catania JA, Dolcini MM, Gandelman A, Narayanan V, McKay VR. Fiscal loss and program fidelity: impact of the economic downturn on HIV/STI prevention program fidelity. Transl Behav Med. 2014;4(1):34–45.
- 39. Iverson EF, Balasuriya D, García GP, et al. The challenges of assessing fidelity to physician-driven HIV prevention interventions: lessons learned implementing Partnership for Health in a Los Angeles HIV clinic. AIDS Behav. 2008;12(6):978–988.
- 40. Klein JD, Allan MJ, Elster AB, et al. Improving adolescent preventive care in community health centers. Pediatrics. 2001;107(2):318–327.
- 41. Lau JS, Adams SH, Irwin CE Jr, Ozer EM. Receipt of preventive health services in young adults. J Adolesc Health. 2013;52(1):42–49.
- 42. Ozer EM, Adams SH, Lustig JL, et al. Increasing the screening and counseling of adolescents for risky health behaviors: a primary care intervention. Pediatrics. 2005;115(4):960–968.
- 43. Kamb ML, Dillon BA, Fishbein M, Willis KL. Quality assurance of HIV prevention counseling in a multi-center randomized controlled trial. Project RESPECT Study Group. Public Health Rep. 1996;111(suppl 1):99–107.
- 44. Lyon AR, Dorsey S, Pullmann M, Silbaugh-Cowdin J, Berliner L. Clinician use of standardized assessments following a common elements psychotherapy training and consultation program. Adm Policy Ment Health. 2015;42(1):47–60.
- 45. Whitmer K, Sweeney C, Slivjak A, Sumner C, Barsevick A. Strategies for maintaining integrity of a behavioral intervention. West J Nurs Res. 2005;27(3):338–345.
- 46. Saldaña J. The Coding Manual for Qualitative Researchers. 2nd ed. Los Angeles, CA: Sage; 2013.
- 47. Birken S, Clary A, Tabriz AA, et al. Middle managers’ role in implementing evidence-based practices in healthcare: a systematic review. Implement Sci. 2018;13(1):149.
- 48. Baumann A, Cabassa L, Stirman SW. Adaptation in dissemination and implementation science. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. New York: Oxford University Press; 2017:286–300.
- 49. Cabassa LJ, Gomes AP, Meyreles Q, et al. Using the collaborative intervention planning framework to adapt a health care manager intervention to a new population and provider group to improve the health of people with serious mental illness. Implement Sci. 2014;9:178.
- 50. Proctor E, Ramsey AT, Brown MT, Malone S, Hooley C, McKay V. Training in Implementation Practice Leadership (TRIPLE): evaluation of a novel practice change strategy in behavioral health organizations. Implement Sci. 2019;14(1):66.
- 51. McKay VR, Dolcini MM, Catania JA. Impact of human resources on implementing an evidence-based HIV prevention intervention. AIDS Behav. 2017;21(5):1394–1406.