Abstract
In the emerging field of implementation science, measuring the extent to which a new or modified healthcare program or practice is successfully implemented following an intervention is a critical component in understanding how evidence-based treatments become part of regular practice. This paper is intended to expand our understanding of factors that influence the successful adoption of new or modified HIV services in correctional settings. The nine-site project developed and directed an organization-level intervention designed to implement improvements in preventing, detecting, and treating HIV for persons under correctional supervision. Using semi-structured interviews to elicit perceptions from Senior Researchers and Executive Sponsors at each of the nine sites, this paper presents their views and observations regarding the success of the experimental intervention in their criminal justice setting. Within the areas of focus for implementation (HIV prevention, testing, or linkage to community treatment), the complexity of programmatic needs strongly influenced perceptions of success. An organization’s pre-existing characteristics, staffing, funding, and interorganizational relationships contributed to either the ease or difficulty of programmatic implementation. Results are discussed pertaining to furthering our understanding of why new or modified healthcare interventions achieve success, including whether the intervention is a modification of existing practice or a new intervention, and the choice of implementation strategy.
Implementation science is an evolving field dedicated to developing and testing strategies to bridge the gap between evidence-based research and clinical practice (Brownson, Colditz, & Proctor, 2012). Conventionally in health services and other intervention research, success is measured at the level of patient outcomes. In implementation science, however, the definition of success is more nuanced: outcomes related to implementation effectiveness are distinct from outcomes related to intervention effectiveness (Proctor & Brownson, 2012; Proctor et al., 2011; Rabin & Brownson, 2012). According to Rabin and Brownson (2012), it is important to distinguish between these levels of outcomes in order to understand where success and failure happen in the process. In other words, did an intervention fail at the patient level because it did not work (i.e., it was ineffective) in that particular context, or did patient outcomes not improve because the intervention was implemented incorrectly or inadequately?
Proctor and colleagues (2009) had previously described a range of outcomes related to implementation research, including implementation outcomes (e.g., penetration, acceptability), service outcomes (e.g., effectiveness, patient-centeredness), and patient outcomes (e.g., satisfaction, symptomatology). Each of these outcomes can be measured across multiple types of stakeholders at multiple levels (e.g., providers, patients, administrators) and encompass outcomes that could be easily quantified (e.g., effectiveness; penetration) as well as more qualitative outcomes (e.g., acceptability; patient-centeredness). Thus, defining implementation success depends on the focus of the implementation outcome and perceptions of success, which could vary based on the point of view of the stakeholder being asked to evaluate the success of the intervention’s implementation.
CRIMINAL JUSTICE DRUG ABUSE TREATMENT STUDIES (CJ-DATS)
As evidence-based strategies have developed and improved across the HIV service continuum (i.e., prevention, testing, and linkage to treatment), the implementation of such strategies has failed to keep pace (D’Aunno, Pollack, Jiang, Metsch, & Friedmann, 2014; Ducharme, Chandler, & Wiley, 2013). This disparity between research and practice is particularly acute when it comes to criminal justice populations (Belenko et al., 2013). Individuals in this population are at increased risk for HIV infection, with 1.5% of the total prison population infected with HIV or AIDS, a rate approximately four times greater than that of the general population (Belenko, Langley, Crimmins, & Chaple, 2004; Centers for Disease Control and Prevention, 2012; Ducharme et al., 2013; Martin, O’Connell, Inciardi, Surratt, & Beard, 2003; Maruschak, 2012). They also experience disproportionately higher rates of other behavioral health issues, particularly substance use disorders (Taxman & Ressler, 2010). To address these disparities, the National Institute on Drug Abuse launched the Criminal Justice Drug Abuse Treatment Studies (CJ-DATS), a national collaborative of investigators and criminal justice practitioners, to test strategies for closing the gap between behavioral health services evidence and practice in criminal justice settings, focusing on prisons and jails.
Studies conducted under the CJ-DATS cooperative focused on the experimental testing of organizational change or process improvement interventions and their impact on various implementation, organizational, and staff outcomes. A variety of service improvement strategies are available that could be of use in implementing evidence-based services in criminal justice settings. One such widely used performance improvement strategy is the Network for the Improvement of Addiction Treatment (NIATx) strategy, which was developed for use in substance abuse treatment facilities and has been implemented widely throughout the United States (McCarty et al., 2007; Roosa, Scripa, Zastowny, & Ford, 2011). Based on the work of those in the manufacturing sector who sought to improve quality and reduce product variation (e.g., Deming, 1986), NIATx uses performance improvement techniques such as Plan-Do-Study-Act (PDSA) cycles in which the organization first identifies problems and develops plans for correcting the issues identified, then implements the new process for a limited amount of time in order to assess improvement before either making additional modifications or institutionalizing the changes on a broader scale (McCarty et al., 2007). Targeted outcomes need to be measurable in order for improvement to be assessed, with Roosa and colleagues (2011) noting that the use of data can also help increase participation at the organizational level. However, as noted by Wisdom and colleagues (2006), treatment agencies may not be accustomed to using data for clinical decision making and view it as more of a bureaucratic hassle than a tool for performance improvement purposes. 
This finding may not be idiosyncratic to the drug treatment systems, as larger health care systems also accumulate a significant amount of data (e.g., The Healthcare Effectiveness Data and Information Set which is used by the majority of America’s health plans to measure performance on important dimensions of care and service) that is reportable and comparable to other systems but whose metrics may not be sufficiently meaningful to the organization or represent actual patient-oriented outcomes of improvement (Institute of Medicine, 2009; Lim et al., 2008).
THE HIV SERVICES AND TREATMENT IMPLEMENTATION IN CORRECTIONS STUDY (HIV-STIC)
Under the support of the CJ-DATS collaborative, the HIV Services and Treatment Implementation in Corrections (HIV-STIC) study was a multisite, cluster randomized trial. HIV-STIC tested an organizational process improvement strategy for promoting the implementation of evidence-based HIV services in participating correctional facilities (Belenko et al., 2013). A total of nine research centers across the U.S., each partnered with a criminal justice agency (generally a state- or county-level jail or prison agency), participated in HIV-STIC, forming a site. The nine sites identified two to four facilities to participate in the study; one or two of these were randomized to the experimental condition and the others to the control condition, forming an experimental vs. control cluster. The 14 experimental facilities each initiated improvements in evidence-based HIV services using a local change team approach, which was modified from the standard NIATx strategy (McCarty et al., 2007). Unlike standard NIATx, the HIV-STIC implementation intervention strategy involved staff from multiple agencies, including correctional, medical, and community health agencies (Pearson et al., 2014). Each Principal Investigator and their criminal justice partner identified an agency executive, the Executive Sponsor, who chose the substantive service focus: either HIV testing, prevention, or linkage to care. The Executive Sponsor then selected a Facility Sponsor, a senior staff member at the prison or jail facility, to oversee and lead the implementation intervention at the specific correctional facility. Each experimental facility also worked with a Change Team Coach (an external consultant trained in NIATx process improvement strategies) to help identify roadblocks and other issues that could impede effective implementation and coach the team through the intervention using the modified NIATx process improvement strategy.
The HIV-STIC site-specific areas of focus were increasing HIV testing rates (facilities from one site), improving the linkage to HIV care upon release into the community (facilities from five sites), and building HIV prevention awareness and skills to improve healthy behaviors after release into the community (facilities from three sites). Staff in the 14 control condition facilities were given a directive to improve HIV services at their facilities, but did not utilize the HIV-STIC intervention strategy to facilitate change.
The HIV-STIC study found statistically significantly greater service penetration in the experimental facilities than in the control facilities, indicating that a larger proportion of eligible persons at the experimental facilities received the targeted HIV services (Pearson et al., 2014). The majority of the group differences were attributable to those sites that implemented an HIV prevention intervention, but overall increases in HIV testing were found as well. Fidelity to the modified NIATx strategy was not a significant moderator of the main study outcomes (Pearson et al., 2014). In addition, staff in the facilities that implemented the HIV-STIC intervention increased their perceptions of the value of HIV services as compared to staff in the control facilities (Visher et al., 2014). Staff in the facilities that participated in the change team activities rated implementing HIV services in their facility as more acceptable and feasible than did staff in the control facilities.
The present study used qualitative methodology to examine implementation success across the participating HIV-STIC sites. The goal of the current study was to better understand the organizational characteristics and processes that led to, or were conceptualized as constituting, successful and unsuccessful implementation efforts from the perspectives of different key stakeholders.
METHODS
PARTICIPANTS
Senior Researcher (either the Principal Investigator or the Project Manager) and Executive Sponsor dyads were selected from each of the nine participating sites, with the exception of one site whose Executive Sponsor was no longer with the institution. Interview participants were selected for their in-depth knowledge of the CJ-DATS HIV-STIC study and their unique perspectives regarding study conduct and outcomes, which accounts for some sites selecting their PI and others their Project Manager as the research representative for their site. A total of 17 interviews were completed from October through December 2013.
SEMISTRUCTURED INTERVIEWS
All participants completed an independent one-hour, semistructured interview with the lead author approximately six to nine months after the end of the 10-month implementation timeframe. Interviews were conducted by phone, audio recorded, and transcribed for accuracy and analysis. The interview guide was constructed to assess: (a) organizational systems change and sustainability of improvements in HIV services in correctional agencies, and (b) organizational characteristics and processes that lead to successful and unsuccessful implementation.
ANALYSES
A constant comparative analytic approach was used in our analyses with the aid of ATLAS.ti software. Comparisons were made across types of respondents, types of services being implemented, and the level of complexity of implementing the evidence-based practice into the particular facility, regardless of whether the facility was in the control or experimental condition.
Main-level coding involved two co-authors (Y.Y. and Y.P.), supervised by the paper’s lead author (S.G.M., a senior qualitative researcher and the study’s interviewer). During this phase of analysis the two coders first coded four transcripts independently, then discussed the discrepancies with the lead author and consolidated the codebook. Afterwards, the coders divided the coding work but continued the discussion and debriefing throughout the coding process. Secondary coding involved another co-author (J.W.) working in concert with the paper’s lead author to refine secondary codes and contrast emergent ideas with HIV-STIC’s structural components, comparing and contrasting across types of services implemented as well as type of respondent.
RESULTS
Within the areas of focus for services that were implemented (i.e., HIV prevention, testing, or linkage to care), the complexity of programmatic needs strongly influenced perceptions of success. An organization’s pre-existing characteristics, staffing, funding, and inter-organizational linkage needs contributed to either the ease or difficulty of programmatic implementation. While both Senior Researchers and Executive Sponsors mentioned aspects of the HIV-STIC intervention strategy that did not go as planned or were difficult to assess, numerous aspects of the HIV-STIC intervention were perceived as successful.
MODIFICATION OF AN EXISTING PROGRAM
Many sites focused on modifying existing services, an approach that proved less complex, more feasible, and more commonly defined as successful by interviewees than initiating a new service. For example, five sites had linkages to HIV care between the correctional institution and community healthcare services, but these existing processes and relationships were not formalized. The experimental facilities at these sites were able to use the HIV-STIC intervention to bring the relevant players together on the local change team (LCT), including both Department of Corrections employees and employees of relevant community-based organizations. Together they built relationships, created mutually beneficial solutions, and codified these solutions into policies. When discussing a provider, one Executive Sponsor described success as:
They have been one of our providers but because of this study it formed a relationship and a bond. It was a chance for people to see faces, to get down and dirty, and have conversations about where the gap was. And when you get those people in a room together, wow, the ideas and coming up with different things and they just start flying all over the room.
Likewise, all correctional facilities had testing procedures in place prior to the initiation of the study. Thus, the experimental facility at the one site focused on improving testing rates modified existing procedures and made small changes to support the work of the HIV counselors. Changes were feasible and were implemented incrementally, allowing the success of each change to be assessed.
Both Senior Researchers and Executive Sponsors felt that the HIV-STIC intervention strategy fit well with their processes of modifying existing programs and the timeframe set by the study. Because the HIV-STIC intervention used NIATx’s multiple cycles of Plan-Do-Study-Act, it lent itself to tweaking processes and programs that were already occurring, using data collected during rapid cycle testing periods to assess the impact of implemented changes. Indeed, Executive Sponsors explicitly said success was due to the structure of the modified NIATx strategy used in the study. When explaining the appropriateness of the HIV-STIC intervention strategy an Executive Sponsor said:
It gave a guiding force on how we were to approach and what we were looking for. It was very helpful.
Many explained that the intervention worked well because it helped the administration identify the relevant players who worked with these issues on a daily basis and who were able to make important contributions to developing solutions. Others explained that the coaching support in the HIV-STIC intervention invigorated and empowered the workers to create change. The HIV-STIC intervention fit so well in many sites that several adopted it to address other performance issues after the HIV-STIC study was completed.
Interviewees who focused on modifying their existing procedures generally described themselves as very successful. However, these same interviewees described difficulties with seeing measurable improvement in the a priori outcomes related to their focus area due to various problems with the data. For example, two of the sites that chose to improve their existing linkage to care services for HIV-positive inmates both reported that so few HIV-positive inmates were released back into the community during the course of the study that they could not see statistically significant changes from pre- to post-intervention. Rather than dwelling on these shortcomings, interviewees preferred to refer to broader indications of success. A Senior Researcher whose site focused on reducing barriers to testing explained that, because they were already testing a certain percentage of inmates, it was difficult to demonstrate quantitative improvement in testing, likely because of a ceiling effect:
We tried very hard to move in that direction and there was some improvement but it was so modest that it didn’t really say much for the implementation or the intervention that was used.
Instead, the intervention was successful in that it empowered the HIV counselor to voice complaints about the complexities of her job and made her work easier through small, responsive changes, such as reducing her data entry requirements.
IMPLEMENTATION OF A NEW PROGRAM
In contrast to modifying an existing program, three sites implemented a new prevention-focused service. Because these sites deemed their testing rates high and their linkages to care strong, they set ambitious goals within their focus area. One site studied the implementation of one of the first prison-based prevention programs derived from an evidence-based community network intervention. In hindsight, it was clear that although the prevention services were needed, their a priori measures of success may have been overly ambitious to accomplish within the study’s time frame and parameters.
Of particular difficulty was setting up relationships across departments and agencies. Unlike the sites that focused on modifying a program and had prior relationships in need of formalization, these sites had to develop partnerships from the ground up, a time-consuming and challenging process. They struggled with a lack of cooperation from key players (including custody staff, who had to be involved in getting inmates to and from their prevention intervention groups), over-involvement by too many departments (which created a chaotic atmosphere), and an inability to find strong partners within the allotted time frame. A Senior Researcher’s challenges began with the onset of the study:
Boy, I will tell you we had difficulty identifying partners for the HIV study in our region. We originally had attempted to develop a partnership with our state Department of Corrections and for a variety of reasons, they refused to participate. Likewise our second choice then was to try to engage with the two largest urban communities. They too refused to participate. And so we really kind of went to our third option.
Problems continued at this particular site with a poor relationship between the researchers and the correctional partners, a lack of clear leadership from their identified Executive Sponsor, and change team members who did not adhere to the HIV-STIC intervention strategy. Because of delays in project implementation due to the time spent finding partners and the significant amount of time needed to design and implement a new program, these facilities were unable to perform multiple cycles of the Plan-Do-Study-Act process. In fact, these facilities never fully moved past the Plan-Do stages. Despite the significant work that they did accomplish in setting up a new program, their struggles were reflected in the interviewees’ perceptions of success, which were often muted or negative. Indeed, one interviewee flatly said there was no success because the local change team did not achieve sustainable systemic changes, an end goal of the HIV-STIC intervention. One Executive Sponsor described confusion over success because the change team was not able to complete multiple Plan-Do-Study-Act cycles:
We were only able to do one cycle. And so the outcomes that they looked at were things like, how many people in the facility got HIV tested? And truthfully I am not sure that has any relationship to what was done and I’m not sure, I mean I don’t necessarily even think that any of our facilities effectively implemented having a change team because I don’t think it worked for the purpose in that there was nothing to really fix.
In a similar vein, two Senior Researchers mentioned a lack of congruence between the HIV-STIC intervention strategy (i.e., the modified NIATx strategy) and setting up a new program. They explained that, after attempting to utilize the HIV-STIC intervention strategy, they concluded that the implementation strategy was strictly for process improvement and could not support the implementation of new evidence-based programs. Indeed, interviewees from both of these sites concluded that the HIV-STIC intervention strategy was not conducive to reaching success. Rather than supporting their work, they felt the implementation intervention strategy got in the way and took up resources, likely because they were only able to implement the Plan-Do stages, which lack the distinctive elements of the strategy that support systemic change.
In addition to the complexities of implementing a new program, the focus area of prevention, pursued by three sites, proved the most challenging with regard to achieving implementation success, despite the fact that these same sites showed significant improvement in patient participation relative to their control sites. The longitudinal nature of measuring prevention outcomes inherently conflicted with the short time frame set by the research study. Further, study participants were trained in the HIV-STIC intervention strategy to collect data to assess their programs, but this training focused on the collection and analysis of simple quantitative descriptors, such as the number of released inmates who went to a community follow-up appointment. Other options for measuring prevention success, such as substituting a proxy measure of attitudinal changes toward HIV, were beyond the scope of the training for the local change team. Interviewees at these sites spoke of unachieved goals of success and anecdotal successes outside the bounds of measured outcomes.
LIMITATIONS OF QUANTITATIVELY DEFINING IMPLEMENTATION SUCCESS
Across all focus areas and differing levels of intervention complexity, quantitative definitions of success were limited by a variety of factors regarding the availability and usefulness of the quantitative data. The difficulty of perceiving and defining success quantitatively was mentioned by both Senior Researchers and Executive Sponsors across all sites.
In general, facilities had difficulty detecting measurable changes. Some participants described small sample sizes due to a low base-rate phenomenon, such as releasing very few HIV-positive inmates over the course of the implementation period. Facilities that modified an existing program sometimes struggled to measure change, since their services were being delivered at relatively high rates prior to participating in the study and they were unable to significantly improve upon those service delivery numbers. Still others could not detect changes because of the longitudinal nature of their programs and the lack of a sufficient time frame to detect measurable outcomes. The prevention-based programs struggled with this in particular.
Sites in both the experimental and control conditions were hindered by a lack of proper data collection processes for measuring change. A Senior Researcher explained:
The numbers that were available from them…, some months they had access. Some months they didn’t, depending how busy they were. This was tracked or they could find this or that. And then other months, maybe not. And even then if you’re able to get the numbers, the numbers are relatively small and there’s not a lot of room for change.
Senior Researchers described a lack of valid metrics when facilities collected data, which made assessment of the programs difficult. Facilities also did not collect data at enough (or any) points over time, which did not allow for a longitudinal evaluation of the efficacy of the program. Many stated that the numbers weren’t there or that the results could not be determined. One Senior Researcher could only imagine that the intervention fell short in achieving outcomes.
Another Senior Researcher did define success wholly quantitatively. Regarding the implementation of a prevention program, he defined the program as mildly successful because 10% more inmates received the prevention intervention services in the experimental condition than in the control condition. However, because of the complexity of measuring prevention outcomes described earlier, this 10% increase may lack meaning in describing processes and end results.
The difficulties with defining success through a quantitative standpoint were at odds with the HIV-STIC intervention strategy’s Plan-Do-Study-Act cycles, which depended on the rapid and accurate collection and analysis of quantitative data related to the implemented program or program change. This fact may help explain why, during the interviews, the Senior Researchers were more likely to try to use quantitative data when defining and assessing the level of success achieved at their sites. Even so, because of these complications with utilizing quantitative data, most interviewees in this study described successes in terms of qualitative rather than quantitative improvements.
QUALITATIVELY DEFINING IMPLEMENTATION SUCCESS
When discussing success, most Senior Researchers and Executive Sponsors moved away from quantitative measures and highlighted their qualitative successes. Qualitative definitions of success were highly nuanced but generally focused on successes related to the HIV-STIC intervention strategy’s process as well as on programmatic outcomes. An Executive Sponsor explained their perceptions of success regarding their prevention program, which included information packets and educational sessions:
We didn’t feel that there was any way to measure who it was reaching, if it was reaching, if they were just tossing it in the trash as soon as they got to their assigned unit or you know we had no way of measurement. So we liked what we were doing where we could see their responses, see their reactions and answer questions and elaborate on things that they didn’t understand to take the opportunity for teaching moments.
The HIV-STIC Intervention Process.
Teamwork successes within the HIV-STIC intervention process were common. These included getting all the players at the table, creating a group of people willing to work with each other, and building relationships within and across agencies and service systems. The HIV-STIC intervention process often formalized and solidified existing teams, which included teams within corrections as well as between DOC and community-based organizations. Interviewees felt this newfound teamwork would lead to better outcomes for their inmates over time, even if outcomes could not be measured by the end of the study. A Senior Researcher explained success as:
I think it was successful in bringing a community organization and a prison organization together and the reason I think it was successful is that once the project was over, that relationship still exists. And our change team leader actually got promoted and she took that relationship with her and is now training her staff at the prisons to work with that community provider and go visit the community facility and get to know those people. So I think the project was successful at bringing different groups together to create sustained relationships.
Interviewees also described communication successes. These included opening up conversations and breaking down communication barriers. Commonly, staff members did not know who to call for assistance and support even when they wanted to achieve certain outcomes for their inmates; by the end of the project, staff members not only overcame these communication barriers but met like-minded allies with whom they could do further work. These communication successes were also broad and wide-reaching, spanning communication between healthcare employees in the correctional setting and Department of Corrections, between healthcare employees and the community-based organizations, and between frontline workers and administration. Another Senior Researcher explained:
I think that definitely, definitely the organizational connections are the most important outcome of this study… I truly think that this being able to come together at the table and sort out how they could improve their communication, corrections being able to let the correctional health people know when somebody is going to be released so that they could take whatever appropriate actions are required to transfer health information and to plan on transition to community care. That was an achievement.
Like the teamwork successes, interviewees felt the communications successes would lead to better outcomes for their inmates over time.
Change-oriented successes were also mentioned, including feelings of excitement about change and pride in the local change team’s work. Interviewees specifically discussed that staff liked feeling that they could improve their work environment, despite not having the ability to quantify and measure these changes. Many interviewees also reported broader system-wide buy-in to the HIV-STIC intervention process for creating other changes because it had worked so well in their setting. A Senior Researcher described the most essential part of the intervention:
I think that there is something a little more grassroots going on as a result of the intervention and that was to do with fortifying or strengthening people in their jobs and this kind of substantial value they have and who they are now and what they’re contributing to the process. They seem to have a different view of themselves in their context.
Even though this study largely focused on the corrections environment, one Senior Researcher reported that a community-based organization involved in the study was using the HIV-STIC intervention approach to address internal issues as well.
Broader HIV-STIC Outcomes.
Beyond the qualitative successes of the HIV-STIC intervention process, many described the successful qualitative outcomes of their new or modified programs. Generally, interviewees described a growing awareness of HIV and their focus area, defining the focus area (prevention, testing, or linkage) as a problem, and reduced stigma around HIV for both inmates and staff members. One Executive Sponsor described success as including increased morale among staff members working in the HIV continuum of services:
Many of them, once they started being recognized, it increased their overall positive attitude. To say that, somebody’s seen what I’m doing as important, it is important. And it even increased their productivity.
Through this growing awareness, HIV also became an issue of attention for the upper levels of administration, a source of support that most interviewees valued when advocating for systemic changes.
Many interviewees focused particularly on patient satisfaction when describing outcome-related successes, drawing on anecdotal recollections as well as direct observations. Inmates reported to staff members that they were happy they had participated in the program. Incarcerated individuals were also described as happy to know they would have support upon release. Interviewees cited the empowerment of inmates as a success, which included incarcerated individuals opening up about their HIV risk, an act that many had not done before.
Likewise, some described success as empowerment of their employees. One of the Senior Researchers explained that success was:
Giving staff a sense that they can actually help with something. They can actually make changes in a complicated system and sometimes those changes are pretty simple. It makes them feel good about it and feel empowered and energized to do things.
Senior Researchers and Executive Sponsors reported they felt their local change team members were happy they participated in the study. Others reported that the study successfully resulted in HIV staff members feeling more appreciated and less isolated in their work.
Overall, many felt that the service program that was implemented worked and that the process was successful. Outcomes were positive, despite the absence of quantitative data to support the efficacy of these changes. Thus the HIV-STIC intervention was considered successful by those who subscribed to this definition of success. As one Senior Researcher explained:
I mean, you can just tell by the excitement on the part of both the community providers and the institutional folks and even probation and parole that they feel the system is better and the changes that were made, made sense.
DISCUSSION
While the CJ-DATS HIV-STIC study focused primarily on increases in HIV-related services delivery to inmates and positive changes in staff attitudes toward HIV services as the outcomes indicative of study success, the qualitative interviews conducted for the present study reveal the benefits of examining other program aspects when defining success in implementation research. As Proctor and colleagues (2011) outlined, there are numerous implementation outcomes that might be appropriate to examine and, “Many of the implementation outcomes can be inferred or measured in terms of expressed attitudes and opinions, intentions, or reported or observed behaviors” (Proctor et al., 2011, p. 67). Our study sought not to determine whether or not any or all of the HIV-STIC sites were successful in terms of our own pre-determined definition of success, but rather to examine how differing key study stakeholders defined success and which implementation components may have been deemed successful using those definitions.
What was viewed as successful or unsuccessful was as informative as why it was deemed to be so. Programs and processes that were modified and improved upon through the HIV-STIC intervention were less complex and more likely to be viewed as successful than were newly developed programs. Programs that predated the study had established relationships or partnerships and often included a system for tracking activities, so adherence to the Plan-Do-Study-Act process was more likely to occur. New programs, such as the three prevention interventions, were extremely time-consuming and complicated to launch, so adherence to the implementation process was a challenge. Ironically, it was these same prevention interventions that were most strongly associated with significant improvements in receipt of HIV services in the experimental sites. It may be that these programs were so complex that the inclusion of any additional support structure made a large difference in terms of successful program delivery, even if fidelity to the implementation model was lacking. It also may be that because prevention programs can serve large numbers of inmates at a time, any implementation of a new program would yield significant impacts on service receipt, whether or not the prevention intervention was effective.
The constraints of the research project may also have been too restrictive, given the time-consuming and complex nature of implementing a new program or service within a complex existing structure and culture, often involving multiple organizations and systems. The HIV-STIC study took a considerable length of time to set up, leaving only 10 months for the implementation phase of the study. In settings that are resistant to change, for whatever reason (e.g., conflicting cultural paradigms, a lack of buy-in from people in powerful roles, lack of resources), overcoming that resistance may require more time than is typically allocated for many research projects, even those spanning two or more years. Successful adherence to the HIV-STIC intervention protocol could also have been more challenging for those sites that needed more flexibility in how the programs were implemented. For example, rapid Plan-Do-Study-Act cycles were not feasible for some sites because things were not moving rapidly; as such, these sites were not successful in adhering to the implementation process as defined by the protocol (i.e., completing Plan-Do-Study-Act cycles). Perhaps if they had been given more time and/or other ways to evaluate the changes they were making, they would have perceived the activities that took place at their sites as more successful. Likewise, other sites found the emphasis on quantitative evaluations of success to be less valuable than more qualitative evaluations of the process and staff satisfaction. Again, success in this context was being evaluated under the constraints of the research design. These findings reflect current discussions among implementation researchers around the need for more flexibility in research designs and methods and more recognition of the merits of study designs other than randomized trials (Glasgow & Chambers, 2012; Landsverk et al., 2012).
Ultimately, this study shows that understanding and defining success in implementation research requires more than quantitative evaluations. Mixed methods approaches can be useful for understanding both objective measures of success and more subjective perceptions of success, from the points of view of the various stakeholders involved in the process of implementing change. More research is needed to help detect and describe the subtle nuances of implementation success, whether in terms of the type of program or service being implemented, the perspective of who is evaluating success (e.g., administrators, front-line staff, or researchers), or the level of analysis at which success is being examined (i.e., implementation, service, or patient outcomes). In particular, several respondents noted improvements in interagency collaboration and communication. Although analyses are underway to examine effects of the HIV-STIC experimental intervention on interagency collaboration using a brief attitudinal scale, new and more refined measures of the quality and impact of staff collaboration and communication are needed to advance implementation science. In addition, researchers should be cognizant of their use of the term success and be explicit in how they define, measure, and evaluate success in the context of their studies.
Acknowledgments
This study is funded under a cooperative agreement from the U.S. Department of Health and Human Services, National Institutes of Health, National Institute on Drug Abuse (NIH/NIDA), with support from the Substance Abuse and Mental Health Services Administration (SAMHSA) and the Bureau of Justice Assistance, U.S. Department of Justice. The authors gratefully acknowledge the collaborative contributions by NIDA; the Coordinating Center, AMAR International, Inc.; and the Research Centers participating in CJ-DATS. The Research Centers include: Arizona State University and Maricopa County Adult Probation (U01DA025307); University of Connecticut and the Connecticut Department of Correction (U01DA016194); University of Delaware and the Delaware Department of Corrections (U01DA016230); Friends Research Institute and the Maryland Department of Public Safety Correctional Services’ Division of Parole and Probation (U01DA025233); University of Kentucky and the Kentucky Department of Corrections (U01DA016205); National Development and Research Institutes, Inc. and the Colorado Department of Corrections (U01DA016200); University of Rhode Island, Rhode Island Hospital and the Rhode Island Department of Corrections (U01DA016191); Texas Christian University and the Illinois Department of Corrections (U01DA016190); Temple University and the Pennsylvania Department of Corrections (U01DA025284); and the University of California at Los Angeles and the Washington State Department of Corrections (U01DA016211). The views and opinions expressed in this report are those of the authors and should not be construed to represent the views of NIDA nor any of the sponsoring organizations, agencies, CJ-DATS partner sites, or the U.S. government.
Contributor Information
Shannon Gwin Mitchell, Friends Research Institute, Inc., Baltimore, Maryland.
Jennifer Willett, University of Connecticut, West Hartford, Connecticut.
Holly Swan, Abt Associates, Inc., Cambridge, Massachusetts.
Laura B. Monico, Friends Research Institute, Inc., Baltimore, Maryland.
Yang Yang, University of Louisiana at Lafayette, Lafayette, Louisiana.
Yvonne O. Patterson, University of Connecticut, West Hartford, Connecticut.
Steven Belenko, Temple University, Philadelphia, Pennsylvania.
Robert P. Schwartz, Friends Research Institute, Inc., Baltimore, Maryland.
Christy A. Visher, University of Delaware, Newark, Delaware.
REFERENCES
- Belenko S, Langley S, Crimmins S, & Chaple M (2004). HIV risk behaviors, knowledge, and prevention among offenders under community supervision: A hidden risk group. AIDS Education and Prevention, 16, 367–385.
- Belenko S, Hiller M, Visher C, Copenhaver M, O’Connell D, Burdon W, . . . Oser C (2013). Policies and practices in the delivery of HIV services in correctional agencies and facilities: Results from a multisite survey. Journal of Correctional Health Care, 19, 293–310.
- Brownson RC, Colditz GA, & Proctor EK (Eds.). (2012). Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
- Centers for Disease Control and Prevention. (2012). HIV in correctional settings. Retrieved January 7, 2015, from http://www.cdc.gov/hiv/resources/factsheets/pdf/correctional.pdf
- D’Aunno T, Pollack HA, Jiang L, Metsch LR, & Friedmann PD (2014). HIV testing in the nation’s opioid treatment programs, 2005–2011: The role of state regulations. Health Services Research, 49, 230–248.
- Deming WE (1986). Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study.
- Ducharme LJ, Chandler RK, & Wiley TRA (2013). Implementing drug abuse treatment services in criminal justice settings: Introduction to the CJ-DATS study protocol series. Health & Justice, 1, 5. http://www.healthandjusticejournal.com/content/1/1/5
- Glasgow RE, & Chambers D (2012). Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clinical and Translational Science, 5, 48–55.
- Institute of Medicine. (2006). Performance measurement: Accelerating improvement. Washington, DC: National Academies Press.
- Landsverk J, Brown CH, Chamberlain P, Palinkas L, Ogihara M, Czaja S, . . . Horwitz SM (2012). Design and analysis in dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
- Lim KG, Patel AM, Naessens JM, Li JT, Volcheck GW, Wagie AE, Enders FB, & Beebe TJ (2008). Flunking asthma? When HEDIS takes the ACT. American Journal of Managed Care, 14, 487–494.
- Martin SS, O’Connell DJ, Inciardi JA, Surratt HL, & Beard RA (2003). HIV/AIDS among probationers: An assessment of risk and results from a brief intervention. Journal of Psychoactive Drugs, 35, 435–443.
- Maruschak LM (2012). HIV in Prisons, 2001–2010. DOJ BJS September 2012, NCJ 238877. Retrieved January 26, 2015, from http://www.bjs.gov/content/pub/pdf/hivp10.pdf
- McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, Capoccia V, & Cotter F (2007). The Network for the Improvement of Addiction Treatment (NIATx): Enhancing access and retention. Drug and Alcohol Dependence, 88, 138–145.
- Pearson FS, Shafer MS, Dembo R, del Mar Vega-Debien G, Pankow J, Duvall JL, . . . Patterson Y (2014). Efficacy of a process improvement intervention on delivery of HIV services to offenders: A multisite trial. American Journal of Public Health, 104, 2385–2391.
- Proctor EK, & Brownson RC (2012). Measurement issues in dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
- Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, & Mittman B (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health, 36, 24–34.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, & Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health, 38, 65–76.
- Rabin BA, & Brownson RC (2012). Developing the terminology for dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
- Roosa M, Scripa JS, Zastowny TR, & Ford JH III. (2011). Using a NIATx based local learning collaborative for performance improvement. Evaluation and Program Planning, 34, 390–398.
- Taxman F, & Ressler L (2010). Public health is public safety: Revamping the correctional mission. In Frost N, Freilich J, & Clear T (Eds.), Contemporary issues in criminal justice policy: Policy proposals from the American Society of Criminology Conference. Belmont, CA: Wadsworth.
- Visher CA, Hiller M, Belenko S, Pankow J, Dembo R, Frisman LK, . . . Wiley TRA (2014). The effect of a local change team intervention on staff attitudes towards HIV service delivery in correctional settings: A randomized trial. AIDS Education and Prevention, 26, 411–428.
- Wisdom JP, Ford JH III, Hayes RA, Edmundson E, Hoffman K, & McCarty D (2006). Addiction treatment agencies’ use of data: A qualitative assessment. Journal of Behavioral Health Services & Research, 33, 394–407.
