Author manuscript; available in PMC: 2021 Sep 21.
Published in final edited form as: Fed Probat. 2019 Sep;83(2):45–51.

Fidelity in Evidence-based Practices in Jail Settings

Robert Walker 1, Michele Staton 2, Grant Victor III 3, Kirsten Smith 4, Theodore Godlaski 5
PMCID: PMC8455082  NIHMSID: NIHMS1619279  PMID: 34552275

Interventions for substance use disorders (SUDs) occur across a wide range of settings, including outpatient, intensive outpatient, short- and long-term residential, inpatient, and corrections. In recent decades, a growing body of addiction research has led to more effective interventions, which have been termed Evidence-Based Practices (EBPs). In fact, federal and state governments, in funding intervention programs and research, often require the use of EBPs. In the U.S., the Single State Authorities (SSAs) allocate federal block grants and state general funds to programs with specific requirements for EBPs in their contracts (Torrey, Lynde, & Gorman, 2005; Riekmann, Kovas, Cassidy, & McCarty, 2011). Commitments to implementing EBPs vary from state to state, and SSAs face challenges in realizing the adoption of EBPs due to unintended consequences of policy mandates as well as insufficient support structures during and following implementation (Mueser, Torrey, Lynde, Singer, & Drake, 2003; Goldman, Morrissey, & Ridgely, 1994; Goldman et al., 2001; McHugh & Barlow, 2010). In addition, there is much variation in how funding sources monitor the implementation and fidelity of EBPs and how they evaluate outcomes against their expectations (D’Aunno, 2006; Rapp et al., 2005).

With the increased focus on EBPs, there is also concern over the degree to which any substance abuse intervention provider can implement an EBP under real-world conditions (Aarons, Hurlburt, & Horwitz, 2011; Garner, 2009; Glisson et al., 2008; Hennessy, Finkbiner, & Hill, 2006; Hennessy & Green-Hennessy, 2011; McHugo et al., 2007; Mendel, Meredith, Schoenbaum, Sherbourne, & Wells, 2008; Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001; Steinberg & Luce, 2005). In addition, questions remain regarding how portable even the most promising research-based interventions are: disparities in interventionist training and expertise, complex client comorbidities, non-traditional intervention settings, and consumer choices about their own care can all result in discrepancies in EBP fidelity and, by extension, in expected outcomes (Bond, Salyers, Rollins, Rapp, & Zipple, 2004; Garner, 2009).

Given the demand for demonstrably effective treatment outcomes and the high importance attached to the implementation and outcomes of EBPs, a thoughtful exploration is needed to identify and analyze how—and to what extent—EBPs can be implemented in real-world practice environments with a reasonable degree of fidelity. While the literature has examined the fidelity of certain EBPs in purely clinical settings, little has been done to determine exactly how the fidelity of EBPs can be measured and ensured in the messy world outside of controlled clinical settings, such as in jails or prisons, where an increasing number of inmates receive substance abuse interventions.

One EBP with extensive empirical support for its effectiveness with substance-using populations is Motivational Interviewing (MI) (Miller & Rollnick, 1992; 2002; Carroll et al., 2006; Vader, Walters, Prabhu, Houck, & Field, 2010). However, the effectiveness and efficacy of MI have been examined largely among voluntary clinical participants. MI is an open communication style rather than a specific treatment protocol or fixed set of topics, and its association with change talk and open-ended questioning is well established (Miller & Rollnick, 2009; Morgenstern et al., 2012). Thus, research on the implementation fidelity of MI should have important implications for its dissemination into various correctional settings and non-traditional intervention environments. While recent studies have examined the fidelity of MI implementation among probationers, it remains an open question how and whether EBPs can be successfully delivered in correctional settings (Spohr, Taxman, Rodriguez, & Walters, 2015).

Delivery of EBPs in real-world settings with disenfranchised populations is particularly relevant in areas where treatment resources are limited, such as rural Appalachia. The Appalachian region of the U.S. has some of the highest rates of health disparities and service limitations in the nation (America’s Health Rankings, 2015). The region also ranks highest in the country for prescription opiate abuse (Appalachian Regional Commission, 2017). Despite the prevalence of substance abuse, SAMHSA’s Treatment Episode Data Set indicates that only 7 percent of all substance abuse treatment admissions take place in rural areas, and that rural admissions are more likely than urban admissions to be referred from the criminal justice system (SAMHSA, 2012). Therefore, research on the effectiveness of substance use interventions in jails as venues to reach high-risk drug users is critical—not only because jails typically house a high volume of drug users (Karberg & James, 2005), but also because many of these individuals will never be referred for treatment or engaged in an intervention.

This study is part of a larger NIDA-funded grant (R01DA033866; Staton et al., 2018) that examines the effectiveness of an evidence-based motivational interviewing (MI) program targeting high-risk drug use and risky sexual practices (Weir et al., 2009) compared to usual jail-based health information services for high-risk behavior among incarcerated women (Staton et al., 2018). This study examines the steps to validate the delivery of MI in a challenging real-world environment with rural drug-using women. MI was selected because it allows for a tailored approach to individualized risk behaviors that are driven by clients. The intervention was also selected because MI has been successful in reducing high-risk sexual practices among women offenders, and MI is considered one of the most supported EBPs (Seng & Lovejoy, 2013; Weir et al., 2009). Specifically, this study will (1) describe and examine the fidelity in the use of 10 MI components; (2) describe the characteristics of participant collaboration; and (3) examine the correlation between interventionist statements and participant collaboration. The overall goal of this article is to illustrate the feasibility of attaining a sufficient degree of EBP fidelity in a real-world, non-therapeutic environment of a rural jail.

Method

Participants

As part of the larger parent project (Staton et al., 2018), potential participants were randomly selected from the jail population and provided informed consent to participate in a study that included random assignment to an intervention group or a comparison group. All participants were screened for substance use using the NIDA-modified Alcohol, Smoking and Substance Involvement Screening Test (ASSIST; NIDA, 2009). During the study period, rural drug-using women (N=400) entered the trial; after completing a baseline interview, 199 were randomly assigned to the MI intervention group and 201 to an education session. Of those in the MI condition, 20 percent (n=40) were randomly selected for fidelity assessment in this study.
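The two-stage selection described above (random assignment to condition, then a 20 percent draw from the MI arm for fidelity review) can be sketched as follows; the seed and participant-ID scheme are illustrative assumptions, not details from the study.

```python
import random

def select_fidelity_subsample(participant_ids, intervention_n, audit_fraction=0.20, seed=42):
    """Randomly assign participants to the MI vs. education arms, then draw a
    fidelity-audit sub-sample from the MI arm (illustrative sketch only)."""
    rng = random.Random(seed)          # fixed seed so the sketch is reproducible
    shuffled = participant_ids[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    mi_group = shuffled[:intervention_n]          # e.g., 199 women to MI
    education_group = shuffled[intervention_n:]   # e.g., 201 women to education
    audit_n = round(len(mi_group) * audit_fraction)
    fidelity_sample = rng.sample(mi_group, audit_n)
    return mi_group, education_group, fidelity_sample

# Hypothetical IDs 1..400 standing in for the N=400 enrolled women
ids = list(range(1, 401))
mi, edu, audit = select_fidelity_subsample(ids, intervention_n=199)
```

With these numbers the audit draw yields 40 sessions for review, matching the 20 percent (n=40) sub-sample described above.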

Materials

The baseline clinical assessment instruments for the women covered socio-demographics, drug use and related risk behaviors, stage of change, and use of services. For the fidelity measurement in this study, three coding tools were developed by modifying the Motivational Interviewing Skill Code (MISC 2.1) (Miller, Moyers, Ernst, & Amrhein, 2008; Moyers, Manuel, & Ernst, 2014) for use with a rural incarcerated sample. Coding focused on two primary scales from the MISC 2.1: 1) the Global Facilitator Rating Scale and 2) the Global Interaction Rating Scale. The Global Facilitator Rating Scale was expanded from 6 original items (acceptance, egalitarianism, genuineness/congruence, empathy/understanding, warmth, and spirit) to 10 items by separating the measures for empathy and understanding and by adding interactiveness, narrative, and summarizing to capture additional components of motivational interviewing. The scale was used to assess interaction between the interventionist and the participant along a 7-point Likert scale, with higher scores indicating greater adherence to the traditional motivational interviewing approach.

The Global Interaction Rating Scale was adapted to include the original measure of “collaboration” between interventionist and participant, but also expanded to include the level of participant “cognitive” capability, and the level of participant “interaction” and engagement with the interventionist. The addition of cognitive capability was critical following pilot testing due to several participants apparently lacking the basic cognitive skills and introspection to fully engage in the intervention process.

Procedures

As part of the parent study, all participating women agreed to all research and clinical procedures through informed consent approved by the University of Kentucky Medical IRB. In addition, a Certificate of Confidentiality was obtained from the Office of Health and Human Services due to the sensitive and confidential nature of the questions and intervention activities in a jail setting.

For purposes of fidelity monitoring, participants in the MI group were asked for permission to audiotape the sessions. For this analysis, 40 participants (20 percent) in the MI group were randomly selected for their audio and transcribed records to be evaluated by the reviewers. Participants in the sub-study sample attended an average of 3.1 MI intervention sessions. MI was used throughout all sessions as the standardized intervention approach. Using an established, manualized approach (Weir et al., 2009), we intended to use MI to facilitate change in high-risk drug use and risky sexual practices following the women’s release from jail.

All audio-recorded sessions were entered into a voice record data file on a secure, encrypted-access server and later transcribed into a Word document. Since each woman could participate in up to four sessions prior to release, only one recorded session per participant was randomly selected for fidelity assessment. Two independent raters rated these sessions using the modified MISC scales. The raters were trained in MI and given refresher training sessions following their initial coding. Examples of MI-congruent and MI-incongruent statements were used in the training to calibrate rater decisions about what might or might not fit within MI specifications.

Rating scores were entered into a database and reviewed by the investigators and the interventionist. In addition, the data were shared with the raters early in the study to probe for variations in interpretation when major differences in ratings were found. In some instances, the differences were traced to misunderstandings of a variable’s intent; in those cases, raters were given clarified definitions of the variable’s meaning and intent, which were applied to adjusted ratings and to future ratings. Given that MI is characterized not by specific treatment content items but by relational style, detailed interview audio recordings and transcriptions were used during training to capture “soft” elements of the interviews, such as the context of sentences and discussion flow. This approach was used despite observations that it is labor intensive (Essock et al., 2015).

Interventionist and Rater Preparation

An interventionist with extensive case management experience was recruited from the Appalachian area to aim for cultural congruence. The interventionist held a master’s degree in social work and had over four years of supervised practice before her three years with this project. All intervention sessions were provided by the same interventionist, who received 20 hours of clinical supervision on MI coupled with over 90 hours of other case supervision with the PI. She also obtained a certificate from the Institute of Family Development (http://www.institutefamily.org/) for participating in 40 hours of clinical training on family interventions, and she had intern experience in a rural domestic violence center. The study team included two members with considerable clinical experience and experience as clinical supervisors. Clinical supervision used audio-recorded sessions with feedback to the interventionist, information about diagnostic possibilities, and modeling of MI-consistent ways of communicating with participants. The interventionist also received biweekly supervision with the PI to review cases and self-identified questions about the MI approach, as well as quarterly case conferences and clinical supervision sessions during each year of the project.

Similar to the interventionist, all raters were trained in MI by study investigators in seminar settings with lecture and question and answer format. Examples of MI-congruent and non-congruent statements were presented for the raters to evaluate. The six interview raters were trained to rate audio and transcribed interview verbatims on 10 MI interventionist characteristics and three characteristics pertaining to participant responses.

Results

Participant Demographics

Participants selected for the fidelity sub-group analysis did not differ significantly from participants in the larger parent study. On average, women were 32.8 years old, white (98 percent), and had approximately 11.1 years of education. Less than one-quarter of women (22.8 percent) had been employed in the six months before incarceration, and 32 percent were married at the time of the interview. Women reported an average of 5.9 adult incarcerations and a lifetime average of 16.2 months of incarceration.

Mental health problems were common among women in the study, with self-reported depression affecting 68.5 percent of the sample; self-reported symptoms of anxiety and post-traumatic stress affected 45.3 percent and 67.4 percent, respectively. Women were recruited into the sample as drug users, with the most commonly used drugs including illicit prescription opioids (70.9 percent of women in the 30 days before jail) and benzodiazepines such as Xanax® and Valium® (55.8 percent in the 30 days before jail). The majority of women (80.9 percent) reported using multiple substances per day during the six months before jail, about 75 percent had a history of IV drug use, and women reported being high on most days during that period (an average of 135.3 days).

MI Component Ratings

The six independent raters scored an average of 14 cases each (range of 4–25 cases). Each rater scored at least two cases for training purposes, and two raters scored three. A total of 40 participants were evaluated in 83 separate reviews by the six members of the rating team. Table 1 shows the mean rating scores for the 10 MI interventionist characteristics measured by the team of six raters. The mean scores are derived from the 7-point Likert values, with 7 being the highest value. Rater 2 had the lowest mean rating of the MI characteristics across all interviews, and rater 6 had the highest, although rater 6 also scored the fewest cases (4).

TABLE 1.

Scores on the Global Facilitator Rating Scale across all six reviewers (n=40)

Characteristic    Rater 1  Rater 2  Rater 3  Rater 4  Rater 5  Rater 6  Range     Team Mean
Acceptance          5.9      6.0      6.0      6.7      7.0      6.8    5.9–7.0      6.4
Egalitarianism      5.9      5.7      5.8      6.2      6.1      6.0    5.7–6.2      6.0
Empathy             5.7      5.5      6.1      6.1      5.7      6.8    5.5–6.8      6.0
Understanding       6.2      5.8      5.6      6.1      6.3      5.5    5.5–6.3      5.9
Genuine             5.7      5.7      5.8      5.9      6.1      6.8    5.7–6.8      6.0
Warmth              6.6      5.8      6.1      6.4      6.7      6.8    5.8–6.8      6.4
Spirit              5.5      4.0      5.2      6.0      5.0      5.8    4.0–5.8      5.3
Interactive         5.8      4.5      5.8      6.2      5.4      6.3    5.4–6.3      5.7
Narrative           6.5      5.1      6.0      6.2      5.2      6.3    5.2–6.5      5.9
Summarizing         5.3      4.5      5.9      5.9      5.8      6.0    4.5–6.0      5.6
Overall mean        5.9      5.3      5.8      6.2      5.9      6.3
Range             5.3–6.6  4.0–6.0  5.2–6.1  5.9–6.7  5.2–7.0  5.5–6.8  5.2–7.0

The MI interventionist characteristic with the highest scores was acceptance: four of the six raters gave ratings between 6.0 and 7.0 on the 7-point scale, and the mean score was 6.4. Interventionist warmth received the next largest number of high-end ratings (three raters), with a range of 5.8 to 6.8 and a mean rating of 6.4. The third-highest-rated characteristic was empathy, with raters 3 and 6 giving 6.1 and 6.8 respectively; across all six raters, empathy ranged from 5.5 to 6.8, with a mean of 6.0. The characteristics receiving lower-end ratings were spirit and summarizing, with overall score ranges of 4.0 to 5.8 and 4.5 to 6.0 and overall means of 5.3 and 5.6, respectively. Five of the interventionist characteristics had overall mean ratings of 6 or better, and none had a mean rating under 5.
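The Range and Team Mean columns in Table 1 are simple aggregates of the six raters’ mean scores for each characteristic; a minimal sketch of that summary, using the acceptance row as the worked example:

```python
def summarize_ratings(rater_means):
    """Return the min-max range and team mean across raters for one
    MI characteristic, rounded to one decimal as in Table 1."""
    team_mean = round(sum(rater_means) / len(rater_means), 1)
    return min(rater_means), max(rater_means), team_mean

# Acceptance row from Table 1: six raters' mean Likert scores (1-7 scale)
low, high, mean = summarize_ratings([5.9, 6.0, 6.0, 6.7, 7.0, 6.8])
# low and high reproduce the Range column (5.9-7.0); mean reproduces
# the Team Mean column (6.4)
```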

Ratings of Participant Interaction and Effects on MI

Table 2 presents three additional measures of MI fidelity (collaboration, cognition, and interaction) aimed at assessing participant characteristics and engagement in the intervention sessions. Interaction had the highest rating across the six raters, and participant collaboration was also rated relatively high on the 7-point Likert scale. All three components received positive ratings, although cognition received the lowest rating of the three; participation appeared to be high even where cognitive ability was rated somewhat lower.

TABLE 2.

Scores on the Global Interaction Rating Scale by reviewers (n=40)

Measure          Rater 1  Rater 2  Rater 3  Rater 4  Rater 5  Rater 6  Range     Team Mean
Collaboration      5.4      5.3      4.6      6.1      5.9      6.0    4.6–6.1      5.5
Cognition          5.6      4.9      4.3      5.9      4.9      6.8    4.3–6.8      5.4
Interaction        6.3      5.3      4.4      6.5      5.7      6.3    4.4–6.5      5.8
Overall mean       5.8      5.2      4.4      6.2      5.5      6.4    4.4–6.4      5.6
Range            5.4–6.3  4.9–5.3  4.3–4.6  5.9–6.5  4.9–5.9  6.0–6.8

Correlations between Interventionist and Participant Interaction

To further examine the interventionist’s fidelity to MI approaches while considering participant interaction, bivariate correlations were examined to assess whether participants’ rated interaction or cognitive abilities made adherence to MI components more challenging. As shown in Table 3, findings support an overall positive relationship between ratings of MI delivery and participants’ degree of engagement. Specifically, ratings of acceptance were significantly and positively related to ratings of collaboration (r=.556, p<.001), cognition (r=.522, p<.01), and interaction (r=.434, p<.01). In addition, ratings of understanding were positively and significantly associated with collaboration (r=.389, p<.05) and cognition (r=.448, p<.01).

TABLE 3.

Correlations between interventionist MI ratings and participant engagement

Primary MI components   Collaboration   Cognition   Interaction
Acceptance                 .556***        .522**       .434**
Egalitarianism             .260           .346*        .206
Empathy                    .011           .008         .012
Understanding              .389*          .448**       .311
Genuine                    .256           .261         .173
Warmth                     .228           .168         .127
Spirit                     .261           .239         .155
Interactive                .029           .222         .135
Narrative                  .068           .239         .251
Summarizing                .231           .193         .124

* p<.05, ** p<.01, *** p<.001
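The coefficients in Table 3 are standard bivariate (Pearson) correlations between rater scores on the MI components and the participant-engagement measures. A self-contained sketch of the computation follows; the rating vectors here are synthetic illustrations, not the study’s data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two rating vectors."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical rating vectors on the 1-7 Likert scale (not study data):
acceptance    = [5.0, 5.5, 6.0, 6.5, 7.0]
collaboration = [4.5, 5.0, 5.5, 6.0, 6.5]
r = pearson_r(acceptance, collaboration)  # perfectly linear, so r is 1.0
```

With the study’s n=40 cases, the significance stars in Table 3 would typically come from testing r against the t distribution with n−2 degrees of freedom (for example, via scipy.stats.pearsonr).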

Discussion

This study examined fidelity in delivering core MI components in a challenging, real-world correctional environment with a largely treatment-resistant population. The MI approach was used in a difficult environment (jail) to evaluate the portability of this extensively studied EBP to a non-therapeutic setting. A by-product of the study was information about the care and supervision needed for clinicians to implement EBPs in community practice. This study highlights the steps that must be taken to ensure faithful implementation of EBPs outside of carefully controlled study conditions, in settings such as jails and prisons. It shows that MI can be used in challenging environments like jails, but considerable training, support, and feedback may be necessary for faithful implementation of this EBP.

This study was something of an acid test of the implementation of an EBP in a challenging correctional environment with a difficult-to-serve population. In examining the study findings on the congruence of the interventionist’s language with the 10 MI components, average ratings were well over 5 and, in most cases, closer to 6 on a 7-point scale. These findings are consistent with MISC or MITI ratings noted in other studies using MI in more controlled settings (Bertholet, Palfai, Gaume, Daeppen, & Saitz, 2014; Moyers, Martin, Manuel, Hendrickson, & Miller, 2005; Spohr, Taxman, Rodriguez, & Walters, 2015). Thus, findings suggest that a sufficient threshold of fidelity was achieved in this application of MI in a non-therapeutic setting.

Findings also suggest that MI fidelity was not significantly affected by participants’ level of engagement in the intervention sessions. An initial concern with this non-treatment-seeking jail population was that lower levels of cognition and/or interest in the intervention could make MI implementation more challenging. Findings demonstrated that performance on the 10 MI components remained high even with limited active engagement from participants in a number of cases. Findings also supported a positive relationship between measures of collaboration and cognition when examined alongside the primary MI components. To our knowledge, this relationship has not been examined in other studies of MI fidelity. However, the relationship between participant-level factors, particularly cognition, should be examined further in future research on MI fidelity, even though this factor lies outside the usual scope of MI fidelity evaluation. These findings suggest that when working with challenging participant populations (such as incarcerated rural women who use drugs), being able to tailor the approach in a way that is congruent with the culture may be critical to the success of MI. This also raises an important question for providers considering EBPs in clinical settings: what characterizes a good candidate client for MI?

Fidelity in Difficult Pre-treatment Settings

This study examined MI implementation in rural jails among a pre-treatment population of rural women facing multiple challenges to positive outcomes. Compared to urban areas, the rural areas where this study was conducted have more structural constraints on health and wellbeing, such as lower education rates, lower incomes, higher rates of unemployment and disability, more limited services, and more barriers for women who need protection from partner violence—all of which tend to work against responsiveness to interventions (ARC, 2008; Eastman & Bunch, 2007; Harrington, 1997; Iceland, 2003; Pruitt, 2008a; 2008b; Porter, 1993). In addition to this regional context, rural jails afford few health-promoting opportunities.

This study clearly demonstrated that preparation, training, and watchfulness over the intervention process are critical to implementation. It suggests that any notion that an EBP can be lifted off the shelf, covered in a brief training for qualified providers, and then confidently implemented is seriously questionable. Moreover, doubts regarding the fidelity of EBP implementation are compounded when considering a difficult-to-serve target population, such as the one in this study. This project did not face institutional reluctance or systemic barriers beyond what might be expected in any detention facility; the detention staff members were facilitative and not resistant to the project.

Implications for Policy-makers and Practice Professionals

Indicators of MI fidelity for this project were high, but the implication for the practice environment is that considerable clinical support is not only desirable but essential. The idea that simply attending a training session will lead to greater use of EBPs appears naïve in the extreme. This study suggests that investment in training and guidance is critical not only at the outset but throughout implementation. Real-world EBP implementation might be likened to a child’s gyroscope that, once wound up, does very well at first and then gradually drifts into wider wobbles as client characteristics and clinical practice habits intrude on the plan. Episodic clinical supervision and feedback may have proven critical to the delivery of this MI intervention. This study also suggests that, particularly when delivering an EBP in real-world settings, fidelity should not be judged solely by scores on rating scales, but should take into consideration a varying “threshold” of acceptance of MI approaches that is congruent with the client population.

Limitations

This study had several limitations. First, raters inevitably interpret in order to categorize and score the verbal actions of both the interventionist and participants. Fidelity measurement of MI from audio records has an unavoidable limit of subjectivity: raters must weigh voice tone and the specific semantic and associational meanings of words and sentences against the overall conative sense of the communications by both parties. These limitations were somewhat mitigated in this study by using a single study interventionist who was born and raised in the same culture as the participants and whose voice and manner became familiar to the reviewers. However, the raters were not all from the same region, and thus interpretations of language may be biased in subtle ways. Second, attempts were made to classify the interventionist’s statements as open-ended or close-ended, but the flow of conversation and cultural idiom produced many statements that required too much interpretation to be reliably placed in either category. Many statements were grammatically close-ended but, in the context of the conversation, had the intent and effect of open-ended statements.

Implications for Future Research

Despite these limitations, this study contributes to the literature on how evidence-based practices can be delivered by practitioners in real-world settings with a high degree of fidelity. One conclusion from this study is that the training and skill-sustaining process of clinical supervision is essential in research projects using EBPs. The implications are serious for the successful delivery of EBPs, as well as for the ethical principle of practitioner competence. For example, a program may state that its practitioners use EBPs when those practitioners actually lack competence with the EBP. Absent sustained effort to assess EBP implementation, managed care organizations and funding sources remain dependent on providers merely asserting that they use certain EBPs.

This study calls for further research that considers fidelity along a continuum. Many institutional factors make EBP implementation possible, including professional investment in fidelity training, supervision, and monitoring, as well as interventionist characteristics and client-level factors that may significantly affect intervention adherence. This study suggests there are no short-cuts; if anything, it suggests that substantial institutional support is critical to implementing any EBP.

More importantly, substance abuse treatment and research funding often require the use of an EBP, often with no stipulations about what should be incorporated to ensure fidelity of implementation. Future funding of intervention programs with an EBP requirement should also require evidence of at least some effort at fidelity evaluation. And unless research projects undertake serious fidelity measurement, their findings about the effects of EBPs should be taken with a large grain of salt.

Acknowledgments

Research reported in this manuscript was supported by the National Institute On Drug Abuse of the National Institutes of Health under Award R01DA033866. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We would also like to recognize the cooperation and partnership with the Kentucky Department of Corrections and the local jails participating in this study.

Contributor Information

Robert Walker, University of Kentucky (Retired).

Michele Staton, University of Kentucky.

Grant Victor, III, Wayne State University.

Kirsten Smith, National Institute on Drug Abuse.

Theodore Godlaski, University of Kentucky (Retired).

References

  1. Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23.
  2. America’s Health Rankings. (2015). 2015 Annual report. United Health Foundation. Retrieved February 23, 2016, from http://cdnfiles.americashealthrankings.org/SiteFiles/StateSummaries/Kentucky-Health-Summary-2015.pdf.
  3. [ARC] Appalachian Regional Commission (2008). An analysis of mental health and substance abuse disparities & access to treatment services in the Appalachian region. Retrieved January 4, 2016, from http://www.arc.gov/research/researchreportdetails.asp?REPORT_ID=71.
  4. [ARC] Appalachian Regional Commission (2017). Health disparities in Appalachia. PDA, Inc. Retrieved August 25, 2018, from https://www.arc.gov/assets/research_reports/Health_Disparities_in_Appalachia_August_2017.pdf.
  5. Bertholet N, Palfai T, Gaume J, Daeppen J-B, & Saitz R (2014). Do brief alcohol motivational interventions work like we think they do? Alcoholism: Clinical and Experimental Research, 38, 853–859. doi: 10.1111/acer.12274.
  6. Bond G, Salyers MP, Rollins AL, Rapp CA, & Zipple AM (2004). How evidence-based practices contribute to community integration. Community Mental Health Journal, 40(6), 569–588.
  7. Carroll KM, Ball SA, Nich C, Martino S, Frankforter TL, Farentinos C, … & Woody GE (2006). Motivational interviewing to improve treatment engagement and outcome in individuals seeking treatment for substance abuse: A multisite effectiveness study. Drug and Alcohol Dependence, 81(3), 301–312.
  8. D’Aunno T (2006). The role of organization and management in substance abuse treatment: Review and roadmap. Journal of Substance Abuse Treatment, 31(3), 221–233.
  9. Eastman B, & Bunch SG (2007). Providing services to survivors of domestic violence: A comparison of rural and urban service provider perceptions. Journal of Interpersonal Violence, 22(4), 465–473.
  10. Essock SM, Nossel IR, McNamara K, Bennett ME, Buchanan RW, Kreyenbuhl, … Dixon LB (2015). Practical monitoring of treatment fidelity: Examples from a team-based intervention for people with early psychosis. Psychiatric Services, PS in Advance. doi: 10.1176/appi.ps.201400531.
  11. Garner BR (2009). Research on the diffusion of evidence-based treatments within substance abuse treatment: A systematic review. Journal of Substance Abuse Treatment, 36(4), 376–399.
  12. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, … & Research Network on Youth Mental Health (2008). Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research, 35(1–2), 98–113.
  13. Goldman HH, Ganju V, Drake RE, Gorman P, Hogan M, Hyde PS, & Morgan O (2001). Policy implications for implementing evidence-based practices. Psychiatric Services, 52(12), 1591–1597.
  14. Goldman HH, Morrissey JP, & Ridgely MS (1994). Evaluating the Robert Wood Johnson Foundation program on chronic mental illness. The Milbank Quarterly, 37–47.
  15. Harrington M (1997). The other America. New York, NY: Scribner.
  16. Hennessy KD, Finkbiner R, & Hill G (2006). The National Registry of Evidence-Based Programs and Practices: A decision-support tool to advance the use of evidence-based services. International Journal of Mental Health, 35(2), 21–34.
  17. Hennessy KD, & Green-Hennessy S (2011). A review of mental health interventions in SAMHSA’s National Registry of Evidence-Based Programs and Practices. Psychiatric Services, 62(3), 303–305.
  18. Hoagwood K, Burns BJ, Kiser L, Ringeisen H, & Schoenwald SK (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179–1189.
  19. Iceland J (2003). Poverty in America. Berkeley: University of California Press.
  20. Karberg JC, & James DJ (2005). Substance dependence, abuse, and treatment of jail inmates, 2002. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics.
  21. McHugh RK, & Barlow DH (2010). The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist, 65(2), 73.
  22. McHugo GJ, Drake RE, Whitley R, Bond GR, Campbell K, Rapp CA, … & Finnerty MT (2007). Fidelity outcomes in the national implementing evidence-based practices project. Psychiatric Services, 58(10), 1279–1284.
  23. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, & Wells KB (2008). Interventions in organizational and community context: A framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health and Mental Health Services Research, 35(1–2), 21–37.
  24. Miller WR, Moyers TB, Ernst D, & Amrhein P (2008). Manual for the Motivational Interviewing Skill Code (MISC), Version 2.1. Retrieved from http://casaa.unm.edu/download/misc.pdf
  25. Miller WR, & Rollnick S (2009). Ten things that Motivational Interviewing is not. Behavioural and Cognitive Psychotherapy, 37, 129–140.
  26. Miller WR, & Rollnick S (1992). Motivational Interviewing: Preparing people to change. New York, NY: Guilford Press.
  27. Miller WR, & Rollnick S (2002). Motivational Interviewing: Preparing people to change (2nd ed.). New York, NY: Guilford Press.
  28. Morgenstern J, Kuerbis A, Amrhein P, Hail L, Lynch K, & McKay JR (2012). Motivational interviewing: A pilot test of active ingredients and mechanisms of change. Psychology of Addictive Behaviors, 26(4), 859.
  29. Moyers TB, Manuel JK, & Ernst D (2014). Motivational Interviewing Treatment Integrity coding manual 4.1. Unpublished manual. Retrieved June 10, 2015, from http://casaa.unm.edu/codinginst.html
  30. Moyers TB, Martin T, Manuel JK, Hendrickson SL, & Miller WR (2005). Assessing competence in the use of motivational interviewing. Journal of Substance Abuse Treatment, 28(1), 19–26. doi:10.1016/j.jsat.2004.11.001
  31. Mueser KT, Torrey WC, Lynde D, Singer P, & Drake RE (2003). Implementing evidence-based practices for people with severe mental illness. Behavior Modification, 27(3), 387–411.
  32. [NIDA] National Institute on Drug Abuse. (2009). NIDA Modified-ASSIST. Retrieved February 1, 2019, from http://www.drugabuse.gov/nidamed/screening/
  33. Porter K (1993). Poverty in rural America: A national overview. In Sociocultural and service issues in working with rural clients. Albany, NY: State University of New York at Albany.
  34. Pruitt L (2008a). Place matters: Domestic violence and rural difference. Wisconsin Journal of Law, Gender, and Society, 23(2), 349–416.
  35. Pruitt L (2008b). Gender, geography & rural justice. Berkeley Journal of Gender, Law, & Justice, 23, 338–391.
  36. Rapp CA, Bond GR, Becker DR, Carpinello SE, Nikkel RE, & Gintoli G (2005). The role of state mental health authorities in promoting improved client outcomes through evidence-based practice. Community Mental Health Journal, 41(3), 347–363.
  37. Rieckmann TR, Kovas AE, Cassidy EF, & McCarty D (2011). Employing policy and purchasing levers to increase the use of evidence-based practices in community-based substance abuse treatment settings: Reports from single state authorities. Evaluation and Program Planning, 34(4), 366–374.
  38. [SAMHSA] Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (2012). The TEDS Report: A comparison of rural and urban substance abuse treatment admissions. Rockville, MD. Retrieved November 15, 2015, from http://www.samhsa.gov/sites/default/files/teds-short-report043-urban-rural-admissions-2012.pdf
  39. Seng EK, & Lovejoy TI (2013). Reliability and validity of a treatment fidelity assessment for motivational interviewing targeting sexual risk behaviors in people living with HIV/AIDS. Journal of Clinical Psychology in Medical Settings, 20, 440–448. doi:10.1007/s10880-012-9343-y
  40. Spohr SA, Taxman FS, Rodriguez M, & Walters ST (2015). Motivational interviewing fidelity in a community corrections setting: Treatment initiation and subsequent drug use. Journal of Substance Abuse Treatment. Advance online publication. doi:10.1016/j.jsat.2015.07.012
  41. Staton M, Ciciurkaite G, Oser C, Tillson M, Leukefeld C, Webster M, & Havens JR (2018). Drug use and incarceration among rural Appalachian women: Findings from a jail sample. Substance Use & Misuse, 53(6), 931–941. doi:10.1080/10826084.2017.1385631
  42. Steinberg EP, & Luce BR (2005). Evidence based? Caveat emptor! Health Affairs, 24(1), 80–92.
  43. Torrey WC, Lynde DW, & Gorman P (2005). Promoting the implementation of practices that are supported by research: The National Implementing Evidence-Based Practice Project. Child and Adolescent Psychiatric Clinics of North America, 14(2), 297–306.
  44. Vader AM, Walters ST, Prabhu GC, Houck JM, & Field CA (2010). The language of motivational interviewing and feedback: Counselor language, client language, and client drinking outcomes. Psychology of Addictive Behaviors, 24(2), 190–197.
  45. Weir BW, O’Brien K, Bard RS, Casciato CJ, Maher JE, Dent CW, Stark MJ (2009). Reducing HIV and partner violence risk among women with criminal justice system involvement: A randomized controlled trial of two motivational interviewing-based interventions. AIDS and Behavior, 13, 509–522.
