Author manuscript; available in PMC: 2013 Sep 1.
Published in final edited form as: J Subst Abuse Treat. 2011 Dec 29;43(2):168–177. doi: 10.1016/j.jsat.2011.11.004

Adaptability of Contingency Management in Justice Settings: Survey Findings on Attitudes Towards Using Rewards

Amy Murphy 1, Anne Giuranna Rhodes 2, Faye S Taxman 3
PMCID: PMC3340513  NIHMSID: NIHMS340889  PMID: 22209658

Abstract

Contingency management (CM) is widely recognized as an evidence-based practice, but it is not widely used in either treatment or justice settings. CM is perceived as adaptable to justice settings, given the natural inclination there to use contingencies to improve compliance with desired behaviors. In the Justice Steps implementation study, five federal district court jurisdictions agreed to consider implementing CM in specialized problem-solving courts or probation settings. A baseline survey (n=186) examined the acceptance and feasibility of using rewards as a tool to manage offender compliance. The results revealed that the majority of respondents believe rewards are acceptable, with little difference between social and material rewards. Survey findings also showed that female justice workers, and those who were not Probation Officers, were more accepting of material rewards than their counterparts. These findings are consistent with prior research in drug treatment settings, where there is little concern about using rewards.

1.0 Introduction

With renewed interest in closing the gap between science and practice (Institute of Medicine [IOM], 1992), the field of evidence-based practices (EBPs) has focused not only on defining the techniques that “work” but also on understanding which aspects of those techniques are transferable to real-world settings. Contingency management (CM), an EBP that promotes behavior change through systematic reinforcement of desired behaviors with rewards, has received attention in the scientific community given its efficacy in the laboratory (Griffith, Rowan-Szal, Roark, & Simpson, 2000; Higgins, Alessi, & Dantona, 2002; Higgins, Heil, & Lussier, 2004; Prendergast, Podus, Finney, & Roll, 2006; Stitzer, Petry, & Peirce, 2010). Surveys of the field have documented, though, that substance abuse and mental health counselors fail to use CM in routine practice (Kirby, Benishek, Dugosh, & Kerwin, 2006; McGovern, Fox, Xie, & Drake, 2004).

In other settings, such as justice agencies, where the normal process includes inherent contingencies like probation conditions (Marlowe, Festinger, Dugosh, Arabia, & Kirby, 2008; Taxman, Soule, & Gelb, 1999), little is actually known about how CM can be used. Researchers have recently found, however, that the concept of rewards is acceptable to the field (Rudes et al., 2011) and that group processes for using CM practices have had limited success (Trotman & Taxman, 2011). We theorize that CM can be a good fit for justice agencies, given the prevalence of substance abuse among criminal justice clients and the fact that clients are often heavily monitored and must comply with a number of conditions. Many of these conditions are easily verifiable (e.g., testing negative for drugs, attending court-ordered treatment) and can readily be assigned objective point values. CM could be incorporated into justice settings at various points along the continuum, such as in general probation, during court hearings, in correctional facilities, and in mandated treatment settings.

1.1 Effectiveness of CM

Contingency Management (CM) protocols create a system where the use of incentives disrupts the reinforcing influence of drugs or other negative behaviors by providing rewards for abstinence, which makes abstinence more attractive (Higgins & Petry, 1999; Griffith et al., 2000). CM has been studied extensively in substance abuse treatment populations, where incentives have been used primarily to reinforce abstinence (see Stitzer et al., 2010) and behaviors related to sustaining sobriety, such as medication adherence and treatment attendance.

Research has shown that CM is useful in treatment programs for alcohol and drug abuse, although it appears more effective with individuals using a single drug rather than multiple drugs (Griffith et al., 2000). CM produces more positive results than standard case management (Higgins & Petry, 1999; Petry, Martin, Coonan, & Kranzler, 2000). Petry and colleagues (2000), in a study of alcohol-dependent subjects, found that 84 percent of participants receiving CM plus standard treatment remained in treatment over 8 weeks, compared with only 22 percent of participants receiving standard treatment alone; the CM group also had higher rates of abstinence (69% versus 39%). Studies have also shown that CM is effective when used in combination with medication-assisted treatment such as methadone (Griffith et al., 2000). In a meta-analysis of 30 studies, Griffith and colleagues (2000) found significant positive effects for CM on its own, and a meta-analysis conducted by Schumacher and colleagues (2007) found that CM, both alone and in conjunction with day treatment, showed better results than day treatment alone.

1.2 Resistance to CM

Researchers have examined why CM is not widely implemented despite being regarded as an effective intervention. Organizational constraints appear to be a major impediment to the uptake of CM in routine practice in substance abuse treatment settings. In addition to issues of compatibility with the organization’s values, contributing factors to nonuse include cost (Amass & Kamien, 2004; Kirby et al., 2006), lack of awareness of the technique or of the research findings (Herbeck, Hser, & Teruya, 2008; Kirby et al., 2006; Willenbring et al., 2004), lack of administrative support for CM, lack of skills or knowledge (Kirby et al., 2006; Willenbring et al., 2004), lack of staff time, and low demand for the practice (Kirby et al., 2006; Willenbring et al., 2004).

Ducharme and colleagues (2007) note that organizations may be deterred by the “financial and administrative burdens” associated with using financial rewards as part of a CM protocol, a sentiment echoed by others (Amass & Kamien, 2004; Kirby et al., 2006). Some providers are also skeptical about the real effectiveness of CM used as part of routine practice. Herbeck, Hser, and Teruya (2008) surveyed treatment providers about their perceptions of the effectiveness of using financial rewards with clients. On an effectiveness scale of 1 (not at all effective) to 5 (very effective), program directors rated voucher incentive programs 2.65 and program staff rated them 2.79. These ratings were lower than those for behavioral interventions such as Motivational Enhancement Therapy. In addition, roughly one-third (32%) of program directors and 43 percent of staff said that they did not know whether vouchers were effective. Willenbring and colleagues (2004) found low levels of knowledge about CM among substance abuse counselors: 23 percent of respondents reported that they did not know whether it was effective.

1.3 Organizations and Adoption of CM

Researchers have observed that successful adoption is more likely when organizational values support the innovation. Proctor and colleagues (2009) developed a conceptual model of implementation that examines stages such as acceptability, feasibility, alignment, and penetration. This model outlines how important it is for organizations, in the early phases, to find the EBP (here, CM) to be of value, either by demonstrating that the idea has merit (acceptability) or that it can be launched in the given environment (feasibility). In a prior study of CM, Kirby and colleagues (2006) measured the degree to which treatment providers held positive beliefs about CM, as well as the misgivings providers had regarding the costs and limitations of the model. In other work, researchers have highlighted the characteristics and values of organizations that opted to implement CM in addiction treatment settings. Nelson and Steele (2007), in a national online survey of mental health practitioners, found that positive attitudes toward research at the individual level and openness to research at the organizational level were predictors of adoption of EBPs. Similarly, organizations with prior research experience using EBPs were more likely to use CM than those without (Bride et al., 2011; Pinto et al., 2010). Walker and colleagues (2010), in a pre-post study of prize-based CM, highlighted the need to observe organizational culture in order to determine appropriate intervention strategies.

Ducharme, Knudsen, Roman, and Johnson (2007) found that adoption of the motivational incentives innovation was influenced solely by organizational factors. Factors associated with adoption of CM included receiving public revenue for operations, nonprofit status, and offering multiple types of services. In contrast, adoption of medications (e.g., buprenorphine) in treatment was influenced both by organizational characteristics (such as having medical personnel on staff) and by exposure to clinical trials (Ducharme et al., 2007).

1.4 Organization Characteristics and Implementation of EBPs

Technology transfer is a complex process, and there is a dearth of knowledge on how to effectively implement a CM protocol in service delivery settings (Ritter & Cameron, 2007). To understand more about implementing CM, it is vital to examine the wider literature on implementing EBPs in different settings. In a systematic review of attitudes toward, adoption of, and implementation of EBPs, Garner (2009) found only a small number of studies addressing the implementation of psychosocial EBPs. Researchers have begun to examine which organizational characteristics lend themselves to the adoption of evidence-based practices generally and CM specifically. Knudsen and Roman (2004) found that an organization’s “absorptive capacity” was a predictor of adoption: the degree of workforce professionalism, environmental scanning for effective strategies, and overall customer satisfaction with services delivered were all factors that improved utilization of EBPs. The researchers measured absorptive capacity using six indicators: percentage of counselors with master’s degrees or higher; percentage of certified or licensed counselors; percentage of counselors in recovery from substance abuse; presence of at least one staff physician; estimated staff acquisition of knowledge of treatment techniques; and whether the organization collected client satisfaction data (Knudsen & Roman, 2004).

1.5 Additional Factors Associated with Adoption of EBPs

Several studies have found that clinicians in organizations that are “research-friendly” (such as participating in studies or working with researchers) are more open to adopting EBPs (Bride, Abraham, & Roman, 2011; Pinto, Yu, Spector, Gorroochurn, & McCarty, 2010). McCarty and colleagues (2007), studying the impact of individual support for using EBPs (including but not limited to CM), found that job category and level of education were influential factors: managers and respondents with graduate degrees were more positive toward EBPs than those without. Other characteristics of organizations that have been shown to be associated with adoption of EBPs include a supportive approach to counseling, high levels of education (Aarons, 2004; Bride et al., 2011; Nelson & Steele, 2007), positive attitudes toward research (Bride et al., 2011) and researchers, knowledge of and positive attitudes toward EBPs (Friedmann, Taxman, & Henderson, 2007; Nelson & Steele, 2007), and working with a targeted clientele such as drug court clients (Bride et al., 2011).

1.6 Kirby’s Provider Survey of Incentives

Kirby and colleagues (2006) developed the Provider Survey of Incentives (PSI) to determine the beliefs that community treatment providers held regarding the use of CM procedures. The survey examined counselors’ beliefs about using social incentives (reinforcing behaviors delivered by counselors) as compared with financial or material incentives. In this study of treatment provider agencies, the researchers administered a 44-item survey, including 28 items that were parallel across material and social incentives. Kirby and colleagues (2006) found that respondents agreed or strongly agreed with the use of CM, endorsing a mean of 5.8 positive statements about material incentives and 6.4 positive statements about social incentives. When the researchers examined characteristics of the treatment providers, they found that supervisors were significantly more positive about using incentives than counselors or support staff, and that staff with a master’s degree or higher were more supportive of incentives. They also found that respondents with more than 12 years of experience were significantly more positive toward material incentives than those with less than two years of experience, while the latter group was significantly more positive toward social incentives than those with mid-level experience of two to seven years. The majority of respondents agreed with at least one positive statement about rewards, and respondents were overall more positive toward social incentives than material ones (Kirby et al., 2006).

The transportability of an evidence-based practice like CM from one setting to another is a major issue that needs more attention (Taxman & Belenko, 2011), and it is especially critical where the systems involved have different goals, as addiction treatment providers (drug use reduction) and probation (public safety) do. In the study discussed here, we explore issues regarding the transportability of CM into a justice setting in order to assess the degree to which staff beliefs and values affect the acceptance of CM. To examine these issues, we used the Kirby PSI scale with criminal justice front-line staff (stakeholders)—judges, prosecutors, defenders, probation officers, and treatment providers—working on the use of CM principles in their settings (either probation or a drug court). This paper reports on the findings of this survey of staff associated with a research study on the use of CM in federal justice settings.

2.0 Methods

Justice Steps (JSTEPS) is an implementation study funded by the National Institute on Drug Abuse (NIDA) to understand the issues surrounding the acceptance and feasibility of using CM in five federal districts (geographical areas devoted to processing cases and managing offenders). The study sites were selected based on their use of a screening tool that identifies offenders at the highest risk for violations of probation or parole (Rudes et al., 2011) and their willingness to consider alternative strategies for dealing with compliance-related issues. Each participating site had to indicate that it was willing to adopt EBPs in order to improve client compliance. These jurisdictions agreed to participate in a study examining the factors that affect the adoption and implementation of a CM protocol with probation clients. The research team established Memoranda of Understanding (MOUs) with the five sites, whose teams consisted of staff from the Judiciary, the US Attorney’s office, the Office of the Federal Defender, and/or the US Probation Agency. In two jurisdictions, treatment providers were also part of the local teams.

The teams agreed to provide access to administrative data on clients, distribute organizational surveys to staff, allow qualitative researchers to conduct site visits and extensive research, and send several team members to two learning sessions devoted to information on CM. The first learning session (January 2010) focused on understanding the concepts of CM and participating in a strategic process to develop a CM procedure for their jurisdictions. After meeting these requirements, sites were given discretion in determining whether and how they wanted to implement JSTEPS. The second learning session (May 2011) was devoted to a quality improvement process to integrate CM into the procedures of the agencies for purposes of sustainability.

The Proctor and colleagues (2009) model for studying implementation strategies specifies that outcomes depend on related proximal measures: the perceived feasibility of the EBP in the given setting, fidelity to the model, penetration within the study site, acceptability of the EBP in the study site, sustainability, and uptake among stakeholders. Given that CM is a new concept to many justice system stakeholders, the JSTEPS study team recognized the importance of gauging the acceptance of CM prior to full implementation. Using the survey developed by Kirby and colleagues (2006), we administered a baseline survey to staff in offices taking part in the study. Rather than select a sample of staff or limit participation to those involved in specialized programming, we invited all staff in the probation, defender, prosecutorial, and judicial offices to participate. The survey instrument asked a series of questions on the acceptability and feasibility of the use of rewards in the respondents’ districts. These data helped us understand whether participants were familiar with CM, whether they had favorable attitudes toward rewards, and whether they preferred material incentives (i.e., tangible items with a monetary value, such as gift cards) or social incentives (i.e., items without a monetary value, ranging from verbal or written praise to a decreased probation sentence). The research team, which conducted extensive qualitative research in this study, did not observe major shifts in criminal justice staff attitudes toward CM following the learning sessions.

2.1 Survey Instrument

The survey was divided into two sections: in the first, the study team addressed the characteristics of the court district, respondents’ attitudes toward rehabilitation and punishment, degree of inter- and intra-agency collaboration, and attitudes toward incentives. We also included questions on the respondents’ backgrounds, including level of education, area of concentration in education, position, social demographics, years of experience, and other fields in which respondents have worked. In the second section of the survey, we addressed the operations of the specialty courts and solicited opinions on components of CM. We also included questions intended to measure respondents’ degrees of familiarity with incentives.

The baseline JSTEPS organizational survey drew from a number of sources; since we are primarily concerned with the acceptability of rewards in justice settings, we will focus on those measures in this study. We utilized Kirby’s Provider Survey of Incentives, a 44-item instrument that was designed to solicit treatment provider opinions on incentives (Kirby et al., 2006). Researchers have hypothesized why CM is not more widespread, with reasons including cost, workload, difficulty of implementation, lack of fit with current interventions, and philosophical objections (Kirby, Marlowe, Festinger, Lamb, & Platt, 1999; McGovern et al., 2004; Petry & Simcic, 2002, as cited in Kirby et al., 2006). As discussed above, Kirby and colleagues developed the PSI to understand clinicians’ attitudes towards CM. Similar understanding was needed in justice settings, and survey questions were modified to assess the probationer as the client of interest. Since the sample consisted primarily of criminal justice personnel, we modified the PSI for this population, and a number of items that were not applicable to corrections settings were dropped from the survey. Our survey had 16 parallel items (used for both social and material incentives), culled from the 28 parallel items of the original instrument. The 12 dropped items were relevant to counseling situations only. Examples of these included: “If the client is abstinent just to get the incentive, it could hurt the treatment process” and “Consistently providing the client with incentives is likely to push the client back into denial.”

Items use a five-point Likert scale (1=Disagree Strongly, 2=Disagree, 3=Neutral, 4=Agree, 5=Agree Strongly), and respondents are asked to state whether they agree or disagree with each statement provided. Material incentive statements include: “Giving a material incentive to offenders who earned it will result in arguing about rewards”1 and “Material incentives are worth the expense considering their effectiveness.” Social incentive statements include: “Giving social praise and reinforcement in a structured way will become artificial and harm the client/PO relationship” and “Positive incentives are more effective than verbal warnings and removal of privileges in getting offenders to achieve abstinence.”

2.2 Survey Participants

Survey participants included staff from the judiciary, US Probation (probation officers and supervisors), Federal Defenders offices, US Attorneys offices, and treatment/counseling providers. These staff were chosen because they are usually the key players in decisions regarding the use of specialized programs for offenders. For each office, we designated a staff member to serve as point person and mailed all the surveys to that person, asking him or her to distribute the hard copies. A business reply envelope was provided for returning each survey. We sent a total of 286 surveys in December 2009 and asked participants to return them within a month. Each survey was assigned a tracking number to facilitate pre/post comparisons. We had a 70 percent participation rate2, with probation staff most likely to complete and return their surveys and prosecutors least likely. Participation rates across the sites ranged from 53 to 83 percent. Attorneys in the US Attorney’s office had low completion rates due to concerns from central administration about prosecutors participating in surveys. Because such a large percentage of respondents were probation staff, we compared probation staff to non-probation staff to see where differences lay and to determine whether findings can be generalized to the other categories of criminal justice stakeholders.

2.3 Analysis Plan

Characteristics of the staff were examined, including gender, race/ethnicity, age, education level, fields of study, years of experience, areas of work experience, and current positions. To examine staff attitudes towards CM, a number of analyses were conducted. The psychometric properties of the modified PSI were examined to ensure that the scales were valid and reliable. To assess the construct validity of the PSI scales, we used an item-level confirmatory factor analysis (CFA) to determine whether the two-scale structure of the original PSI developed by Kirby (social and material) fit the data from this sample of criminal justice staff. Because the item responses were on a five-point Likert scale, we ran the CFA twice: with a maximum likelihood estimator, treating the responses as continuous, and with an asymptotically distribution-free estimator (diagonally weighted least squares), treating them as ordinal. Likert responses are often considered ordinal data, but the previous data used by Kirby were treated as continuous, and item means and standard deviations were examined. Model fit was assessed using the RMSEA, the Comparative Fit Index (CFI), and the Tucker-Lewis Index (TLI).

Items that did not load onto the two-scale structure were removed from the average score calculation. The average score was calculated for both the material and social incentives subscales using the methods outlined by Kirby and colleagues (2006). Cronbach’s alpha was calculated for the social and material subscales as a measure of internal consistency and reliability. The average scores were used as the indication of attitudes towards CM.

Differences in attitudes were examined by a number of covariates, including gender, age, years of experience, and level of education, all of which have been shown in the literature to affect uptake of EBPs. We also examined differences by study site. For this analysis, sites were grouped by whether or not they had an established specialty court prior to the implementation of JSTEPS: two sites had a court that had been in operation for more than a year, and three did not. We ran ANOVA tests to examine differences in the average scores for both social and material incentives by the covariates. The choice of covariates was driven by the contingency management literature and previous work by Kirby on the PSI: Kirby and colleagues (2006) found differences in attitudes according to education level, position, and years of experience, while other literature has found differences in attitudes according to gender (McCarty et al., 2007) and age (Bride et al., 2011).

The variables were coded for the bivariate and regression analysis to be consistent with previous findings. For the bivariate analysis, age was categorized into those 35 and under and those over 35, while education was categorized into a bachelor’s degree or less and more than a bachelor’s degree. Experience was captured by two different variables: 1) area of study, which was divided into those with any criminal justice background and all others; and 2) work experience, which was divided into those who had any social work experience and all others.

The study included a multivariate linear regression model to test the influence of these demographics together on a person’s attitudes towards material and social incentives as part of federal probation. All variables that showed significance in the bivariate analysis for either material or social incentives were used in the equation for each type of incentive. In the regression models, variables that were originally continuous (e.g., age) were entered as continuous rather than in their dichotomous forms, in order to capture the full range of available data.
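As a sketch of the kind of multivariate linear model described above (an attitude score regressed on continuous and dichotomous covariates), the following fits ordinary least squares on fabricated data. The variable names and coefficients are illustrative only, and numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated data: an attitude score (1-5 scale) predicted from age and
# gender; the true coefficients (0.01 for age, 0.3 for female) are made up.
n = 200
age = rng.uniform(21, 63, n)
female = rng.integers(0, 2, n).astype(float)  # 0 = male, 1 = female
attitude = 2.5 + 0.01 * age + 0.3 * female + rng.normal(0, 0.3, n)

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(n), age, female])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
intercept, b_age, b_female = beta
```

A dedicated statistics package (e.g., statsmodels' `OLS`) would additionally report standard errors and p-values; the least-squares solve above recovers only the point estimates of the coefficients.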

3.0 Results

Table 1 presents the demographics of the sample. Of the surveys received (n=186), the majority (83%) were completed by Probation Officers (POs), followed by Federal Defenders (10%), judges (4%), and Assistant US Attorneys (2%). Criminal justice staff (judges, prosecutors, defenders) were more likely to complete the survey if they were affiliated with a specialty court. Respondents overall had higher levels of education (99 percent reporting a bachelor’s degree or higher) than the substance abuse counselors in the Kirby and colleagues (2006) study. Typical educational areas included criminal justice, law, and psychology. Most respondents also had extensive work experience, with an average of 11 years at their current agency (range: 5 months to 31 years), and reported prior work in law enforcement, the judiciary, social work, and juvenile justice. Respondents averaged 43 years of age (range: 21 to 63). The majority were male (57%) and white (77%), and eight percent were Hispanic.

Table 1.

Characteristics of sample

Number Percentage
Position in the Court
 Probation Officer 155 83%
 Defender 18 10%
 Judge 7 4%
 Prosecutor 5 3%
 Treatment Counselor 1 1%
Level of Education
 AA 2 1%
 BA/BS 61 34%
 Some graduate studies 18 10%
 Advanced degree 99 55%
Area of Study
 Criminal Justice 70 38%
 Law 35 19%
 Political Science 11 6%
 Psychology 20 11%
 Sociology 14 8%
 Social Work 15 8%
Years in Field
 Under 2 years 10 6%
 2–4 years 36 20%
 5–9 years 27 15%
 10–14 years 59 33%
 15–19 years 26 14%
 20 years or more 23 13%
Gender
 Male 104 57%
 Female 77 43%
Work experience
 Law enforcement 81 49%
 Prosecution 16 10%
 Defense 23 14%
 Judiciary 43 23%
 Juvenile justice 38 23%
 Substance abuse treatment 23 14%
 Social work 46 28%
Age range
 21–35 26 15%
 36–44 79 44%
 45–54 59 33%
 55 and older 16 9%
Site
 Had Established problem solving court 69 37%
 No established court 117 63%

Approximately 37 percent of respondents were from a site where a problem-solving court had been established and operating for longer than a year. The representation of the various disciplines was similar for sites with and without established courts; in both groups, the largest share of respondents was POs (80% and 85%, respectively), followed by defenders, judges, and prosecutors. Fewer respondents from the established-court group had a background in law enforcement (43%) than in the non-established-court group (52%), and more had experience in juvenile justice (29%, compared with 19%) and social work (44%, compared with 17%). The two groups were fairly similar with regard to age, with a mean of 43 in both, though the sites without an established court had a slightly greater proportion of female respondents (44 percent, compared with 40 percent).

The confirmatory factor analysis of the PSI scale (the 16 items used for both types of incentives) showed that 4 items did not load onto a common factor for either social or material incentives. The two-scale structure, with one scale for social and one for material incentives, had the best fit: an RMSEA of .108 (compared with .111 for a four-factor solution and .117 for a one-factor solution), a CFI of .871 (compared with .855 and .849), and a TLI of .911 (compared with .903 and .889). There were no significant differences between the CFAs run with the continuous and ordinal specifications.
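For reference, the RMSEA reported above is derived from a model's chi-square statistic, its degrees of freedom, and the sample size. The sketch below shows the standard computation; the chi-square and degrees-of-freedom values used are illustrative, not those of the study's models:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation.

    rmsea = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    Values closer to 0 indicate better fit; a model whose chi-square does
    not exceed its degrees of freedom yields an RMSEA of 0.
    """
    return sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# Illustrative values only (not taken from the study's CFA output):
fit = rmsea(chi2=250.0, df=103, n=186)
```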

The four items that did not load were not used in the calculation of the average score. Table 2 presents the average scores for the 16 parallel items for both the social and material scales and indicates which items did not load onto a common factor. One of these items, “Giving incentives for drug-free urine samples helps the offender to become abstinent,” was problematic both in the original Kirby study (Kirby et al., 2006) and in this study. Reliability analysis produced a Cronbach’s alpha of .88 for the material incentives scale and .84 for the social incentives scale. The other items that did not load on either scale were: “Overall, I would be in favor of adding an incentive program to the court;” “Incentives help offenders achieve sobriety, allowing the counselor to focus on helping them make other life changes;” and “Incentives can be useful whether or not they address the underlying issues of addiction.”

Table 2.

Responses for Kirby’s Provider Survey of Incentives (Modified)

Survey Question Mean Item Score for Material Incentives (± SD)¥ Mean Item Score for Social Incentives (± SD)¥
Q1.Overall, I would be in favor of adding an incentive program to the court* 3.48 (±1.05) 4.19(±1.85)
Q2.Incentives are useful if they reward offenders for fulfilling treatment goals other than just providing a clean urine sample, such as regular attendance 3.63(±0.90) 3.78(±0.88)
Q3.Incentives help offenders achieve sobriety, allowing the counselor to focus on helping them make other life changes* 3.52(±0.81) 3.72(±0.77)
Q4.Giving incentives for drug-free urine samples helps the offender to become abstinent* 3.18 (±0.92) 3.50(±0.84)
Q5.An advantage of incentive programs is that they focus on what is good in the offender’s behavior (i.e., the ability to become abstinent), not what went wrong in their recovery 3.76(±0.80) 3.94(±0.79)
Q6.Any source of abstinence motivation, not just internal motivation, is a good thing for treatment 3.72(±0.82) 3.95(±0.73)
Q7.Incentives can be useful whether or not they address the underlying issues of addiction* 3.58(±0.87) 3.87(±0.67)
Q8.Many offenders will see rewards for abstinence as cheesy or artificial ® 3.44(±0.89) 3.56(±0.86)
Q9.Incentives are just not right because they are rewarding the offender for what he/she should be doing in the first place ® 3.49(±1.02) 3.83(±0.83)
Q10.It wouldn’t be right to give incentives to offenders for goals such as attendance if they aren’t testing negative (clean) ® 2.73(±1.09) 2.99(± 1.14)
Q11.Incentive programs are not consistent with my philosophy of treatment ® 3.54(±0.99) 3.87(±0.78)
Q12.Incentives are a bribe ® 3.67(±1.04) 3.91(±0.79)
Q13.The problem with incentives is that abstinence will only last for as long as the incentives are given ® 3.50(±1.01) 3.79(±0.80)
Q14.Giving incentives for treatment attendance will not improve attendance ® 3.57(±0.80) 3.72(± 0.72)
Q15.There are enough rewards in being clean; incentives aren’t necessary ® 3.54(±0.95) 3.82(±0.79)
Q16.Incentives don’t address the underlying issues of addiction ® 2.56(±0.97) 2.76(±1.06)
*

Items that did not load during factor analysis and were not used in calculation of overall scale means. These items were not used in the scales for Tables 3 and 4.

¥

Means are on a five-point scale (1=Disagree Strongly, 2=Disagree, 3=Neutral, 4=Agree, 5=Agree Strongly)

®

Reverse coded.

Tables 3a and 3b present the results of the bivariate analyses examining support for material and social incentives. For material incentives, the overall average (n=179) was 3.39 on the five-point scale, indicating a slightly positive attitude towards using this type of incentive. This average was very similar to that found by Kirby in her study of treatment providers (mean=3.38), and it corresponded with our expectation that the groups would show an overall positive response, given that the sites were selected for their willingness to consider the use of CM. For social incentives (n=126), the average was 3.61, compared to 3.55 found by Kirby. Support for material incentives differed by gender, with women having more favorable attitudes than men, but this difference was not significant for social incentives. Staff aged 35 or younger rated social incentives more favorably than older respondents. Respondents with education beyond a college degree gave higher scores to social incentives than those without, which is unsurprising given that other studies have found that possession of a master's degree was associated with greater adoption of CM (McCarty et al., 2007). Respondents with a criminal justice educational background had less favorable attitudes towards material incentives than others, while those who worked at a site with an established specialty court had more favorable attitudes.
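The group comparisons reported in Tables 3a and 3b (F-tests with accompanying effect sizes) can be sketched for the two-group case as follows. The input vectors below are hypothetical illustrations, not study data, and computing the effect size as a pooled-SD standardized mean difference (Cohen's d) is an assumption about how the tables' effect sizes were derived.

```python
import numpy as np

def two_group_f_and_d(a, b):
    """One-way ANOVA F statistic and Cohen's d for a two-group comparison."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    grand = np.concatenate([a, b]).mean()
    ss_between = na * (a.mean() - grand) ** 2 + nb * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    ms_within = ss_within / (na + nb - 2)            # denominator df = n - 2
    f = ss_between / ms_within                       # numerator df = 1 for two groups
    d = (a.mean() - b.mean()) / np.sqrt(ms_within)   # pooled-SD effect size
    return f, d

# Hypothetical incentive-attitude scores for two respondent groups
group_a = [3.2, 2.8, 3.5, 3.0, 3.4, 2.9]
group_b = [3.6, 3.8, 3.3, 3.7, 3.5]
f_stat, effect = two_group_f_and_d(group_a, group_b)
```

For two groups the F statistic is the square of the familiar two-sample t statistic, so the two tests are equivalent here.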

Table 3a. Attitudes About Material Incentives by Respondent Characteristics

Characteristic   Mean Score (±SD)   F-Value   Denominator DF   Effect Size
Overall   3.39 (±.70)
Gender
  Male   3.26 (±.77)   8.78**   175   .47
  Female   3.57 (±.55)
Age
  ≤ 35   3.28 (±.54)   1.12   177   .22
  Over 35   3.42 (±.74)
Education
  Bachelor degree or less   3.31 (±.47)   1.17   175   .18
  More than Bachelor degree   3.43 (±.80)
Area of Study
  Criminal Justice   3.24 (±.64)   2.47**   176   .40
  Others   3.49 (±.73)
Work Experience
  Social Work   3.71 (±.50)   3.46*   176   .58
  Others   3.36 (±.71)
Site
  Did not have established court   3.30 (±.72)   4.75**   177   .35
  Had established court   3.54 (±.66)

** p < .05; * p < .1

Table 3b. Attitudes About Social Incentives by Respondent Characteristics

Characteristic   Mean Score (±SD)   F-Value   Denominator DF   Effect Size
Overall   3.61 (±.58)
Gender
  Male   3.54 (±.61)   3.11*   124   .34
  Female   3.73 (±.51)
Age
  ≤ 35   3.68 (±.70)   7.36**   124   .55
  Over 35   3.34 (±.53)
Education
  Bachelor degree or less   3.47 (±.47)   4.74**   123   .41
  More than Bachelor degree   3.70 (±.62)
Area of Study
  Criminal Justice   3.51 (±.59)   2.15   124   .28
  Others   3.67 (±.57)
Work Experience
  Social Work   3.74 (±.54)   0.47   124   .25
  Others   3.60 (±.58)
Site
  Did not have established court   3.60 (±.54)   0.08   124   .05
  Had established court   3.63 (±.63)

** p < .05; * p < .1

The results of the multivariate regressions are given in Table 4. For both types of incentives, working at a site with a problem-solving court was not significantly associated with attitudes towards incentives. Being female was positively associated with attitudes towards both types of incentives, as were being older, having more years of education, and having a criminal justice education.
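The multivariate models in Table 4 are ordinary least squares regressions of attitude scores on respondent characteristics. As a rough sketch of such a model, with hypothetical predictor values and coefficients rather than the authors' actual data or estimation code, the coefficients and classical standard errors can be computed as:

```python
import numpy as np

def ols_with_se(X, y):
    """OLS fit with an intercept; returns coefficient estimates and standard errors."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])            # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - p - 1)             # residual variance
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)          # classical (homoskedastic) covariance
    return beta, np.sqrt(np.diag(cov))

# Hypothetical rows: [female (0/1), age in years, education in years]
X = [[0, 30, 16], [1, 40, 18], [0, 50, 14], [1, 25, 20], [0, 45, 17], [1, 35, 15]]
# Scores built from known coefficients so the fit is easy to check
y = [1.0 + 0.5 * f + 0.04 * a + 0.3 * e for f, a, e in X]
beta, se = ols_with_se(X, y)   # beta recovers [1.0, 0.5, 0.04, 0.3]
```

Because the toy outcome is an exact linear function of the predictors, the residual variance (and hence each standard error) is essentially zero; with real survey data the standard errors would be positive, as in Table 4.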

Table 4. Multivariate Regression Results for Material and Social Incentives

Characteristic   Material Incentives (n=174): Beta (SE)   Social Incentives (n=124): Beta (SE)
Female Gender   .50** (.12)   .38** (.14)
Age   .04** (.01)   .05** (.01)
Education Level (years)   .30** (.05)   .29** (.06)
Criminal Justice Education   .30* (.13)   .42** (.14)
Work in problem-solving court   .24 (.13)   .094 (.13)
Adjusted R-squared: .946 (material); .962 (social)

* p < .05; ** p < .01

4.0 Discussion

Drawing on Proctor and colleagues' (2009) conceptual model for implementation research, we developed the organizational survey to probe systems issues and assess the fit of the CM innovation to the organization, including the "characteristics, readiness, and attitudes" of actors (McGovern et al., 2004). McGovern and colleagues (2004) assert that studies exploring what actors think about an EBP, such as CM, enable the implementation team to predict and address barriers. Because JSTEPS is an implementation study whose research questions address the acceptance and feasibility of using CM in probation settings, we are especially interested in the factors that influence adherence at the site, agency, and individual levels.

In addition to showing a moderate to high level of acceptance of incentives and their use in treating substance abuse, respondents showed high levels of knowledge about CM, as demonstrated by their responses in other sections of the survey (e.g., the sections titled "Definition of Contingency Management" and "Measuring Implementation of Contingency Management"). Responses to the PSI were similar to those found in a survey of treatment providers (Kirby et al., 2006), even though the organizations we surveyed consisted mainly of criminal justice personnel with an average of more than 10 years at their agency, many of whom did not have a treatment background. The factor that appeared to have the largest influence on attitudes towards incentives was being a probation officer (as opposed to other positions, such as judge, defender, or prosecutor), which may indicate that POs' experience working on compliance issues shapes their openness to the practice. It should also be noted that within the justice system, and probation specifically, officers often use contingency approaches to encourage compliance (see Taxman, Soule, & Gelb, 1999). This is an encouraging finding, as POs bear the bulk of the responsibility for implementing CM in this study, and most likely in most criminal justice practice settings.

Other important findings included gender as a significant influence on attitudes towards incentives, with women holding more positive attitudes than men. This may provide an opportunity to initiate discussions about gender roles and how they are interpreted in the criminal justice system as we move through the implementation phase of the project and analyze how different people choose to operationalize CM at their agencies. Age and education also influenced attitudes, findings that are consistent with previous work (Kirby et al., 2006). Most importantly, the findings indicate nearly equal support for material and social incentives.

The survey findings helped inform the study team’s approaches as we moved from the early to later stages of implementation. Understanding that the criminal justice stakeholders in the study held generally positive attitudes toward incentives allowed us to focus the plan-do-study-act process and training and technical assistance on the nuances of CM, rather than the basics regarding accepting rewards as a tool in behavior change. We were also able to emphasize those principles with which sites appeared to be struggling (e.g., several sites selected a greater number of behaviors than is generally recommended with CM).

The issues surrounding transportability are significant given that court stakeholders operate within a context influenced by social and political forces (Ward & Kupchik, 2010) and that the criminal justice system tends to focus on punitive reinforcers (Rudes et al., 2011). The justice system consists of multiple layers of stakeholders from diverse agencies with different goals and objectives. Behind each individual player is at least one agency or organization to which the player is responsible (Hiller et al., 2010). A probation officer on a drug court does not merely represent the client's and the team's interests; he or she also reports to a probation supervisor and to the Chief Probation Officer. In addition, all of these parties fall under the supervision of the Administrative Office of the Courts (AO). This study suggests some critical issues regarding transportability. First, justice stakeholders are broad-minded in their approaches: support for CM was moderate to high and widespread among the responding subjects. Second, as other studies have suggested, the problem-solving court model contributes to more open-minded approaches to dealing with offender compliance issues. Problem-solving courts, with their team approaches and use of integrated services, help advance the justice system's openness to evidence-based practices (Taxman, Henderson, & Belenko, 2009).

We also found that material incentives are as acceptable as social incentives. This is an important finding because it suggests that, with proper support from the larger community, the justice system may be hospitable to using treatment-oriented tools to address offender behavior. While support was nearly equal for material and social incentives, justice agencies interested in implementing CM may find social incentives easier to implement. Many organizations already use some types of social reinforcers, such as positive affirmations by officers, offender access to office phones and computers, and recognition by supervisors of positive behaviors. Many drug court programs decrease reporting requirements for clients who test negative several weeks in a row; this type of reward is consistent with CM. Such rewards have the added advantage of not only costing nothing but potentially saving money through decreased time demands on the PO and other staff. CM is acceptable to these study sites, and the notion of rewarding does not appear to pose a conceptual problem, which suggests that implementation is likely to proceed. The JSTEPS study will further document the types of rewards, the frequency of providing rewards, and the consistency in providing rewards as part of a greater understanding of how rewards can be used in real-world justice settings.

Footnotes

1. Items with negative statements were reverse coded for the analysis.

2. Eleven surveys had to be discarded because respondents did not sign the consent form.


Contributor Information

Amy Murphy, George Mason University.

Anne Giuranna Rhodes, George Mason University.

Faye S. Taxman, George Mason University

References

  1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6(2):61–74. doi:10.1023/b:mhsr.0000024351.12294.65.
  2. Amass L, Kamien J. A tale of two cities: Financing two voucher programs for substance abusers through community donations. Experimental and Clinical Psychopharmacology. 2004;12(2):147–155. doi:10.1037/1064-1297.12.2.147.
  3. Bride BE, Abraham AJ, Roman PM. Organizational factors associated with the use of contingency management in publicly funded substance abuse treatment centers. Journal of Substance Abuse Treatment. 2011;40(1):87–94. doi:10.1016/j.jsat.2010.08.001.
  4. Ducharme LJ, Knudsen HK, Roman PM, Johnson JA. Innovation adoption in substance abuse treatment: Exposure, trialability, and the Clinical Trials Network. Journal of Substance Abuse Treatment. 2007;32(4):321–329. doi:10.1016/j.jsat.2006.05.021.
  5. Friedmann PD, Taxman FS, Henderson CE. Evidence-based treatment practices for drug-involved adults in the criminal justice system. Journal of Substance Abuse Treatment. 2007;32(3):267–277. doi:10.1016/j.jsat.2006.12.020.
  6. Garner BR. Research on the diffusion of evidence-based treatments within substance abuse treatment: A systematic review. Journal of Substance Abuse Treatment. 2009;36(4):376–399. doi:10.1016/j.jsat.2008.08.004.
  7. Griffith JD, Rowan-Szal GA, Roark RR, Simpson DD. Contingency management in outpatient methadone treatment: A meta-analysis. Drug and Alcohol Dependence. 2000;58(1–2):55–66. doi:10.1016/s0376-8716(99)00068-x.
  8. Herbeck DM, Hser YI, Teruya C. Empirically supported substance abuse treatment approaches: A survey of treatment providers' perspectives and practices. Addictive Behaviors. 2008;33(5):699–712. doi:10.1016/j.addbeh.2007.12.003.
  9. Higgins ST, Petry NM. Contingency management: Incentives for sobriety. Alcohol Research & Health: The Journal of the National Institute on Alcohol Abuse and Alcoholism. 1999;23(2):122–127.
  10. Higgins ST, Alessi SM, Dantona RL. Voucher-based incentives: A substance abuse treatment innovation. Addictive Behaviors. 2002;27(6):887–910. doi:10.1016/s0306-4603(02)00297-6.
  11. Higgins ST, Heil SH, Lussier JP. Clinical implications of reinforcement as a determinant of substance use disorders. Annual Review of Psychology. 2004;55(1):431–461. doi:10.1146/annurev.psych.55.090902.142033.
  12. Hiller M, Belenko S, Taxman FS, Young DW, Perdoni M, Saum C. Measuring drug court structure and operations: Key components and beyond. Criminal Justice & Behavior. 2010;37(9):933–950.
  13. Institute of Medicine. Guidelines for clinical practice: From development to use. Washington, DC: Institute of Medicine; 1992.
  14. Kirby KC, Marlowe DB, Festinger DS, Lamb RJ, Platt JJ. Schedule of voucher delivery influences initiation of cocaine abstinence. Journal of Consulting and Clinical Psychology. 1998;66(5):761–767. doi:10.1037//0022-006x.66.5.761.
  15. Kirby KC, Benishek LA, Dugosh KL, Kerwin ME. Substance abuse treatment providers' beliefs and objections regarding contingency management: Implications for dissemination. Drug and Alcohol Dependence. 2006;85(1):19–27. doi:10.1016/j.drugalcdep.2006.03.010.
  16. Knudsen HK, Roman PM. Modeling the use of innovations in private treatment organizations: The role of absorptive capacity. Journal of Substance Abuse Treatment. 2004;26(1):51–59. doi:10.1016/s0740-5472(03)00158-2.
  17. Marlowe DB, Festinger DS, Dugosh KL, Arabia P, Kirby KC. An effectiveness trial of contingency management in a felony preadjudication drug court. Journal of Applied Behavior Analysis. 2008;41(4):565. doi:10.1901/jaba.2008.41-565.
  18. McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, Capoccia V, et al. The Network for the Improvement of Addiction Treatment (NIATx): Enhancing access and retention. Drug and Alcohol Dependence. 2007;88(2–3):138–145. doi:10.1016/j.drugalcdep.2006.10.009.
  19. McGovern MP, Fox TS, Xie H, Drake RE. A survey of clinical practices and readiness to adopt evidence-based practices: Dissemination research in an addiction treatment system. Journal of Substance Abuse Treatment. 2004;26(4):305–312. doi:10.1016/j.jsat.2004.03.003.
  20. Nelson TD, Steele RG. Predictors of practitioner self-reported use of evidence-based practices: Practitioner training, clinical setting, and attitudes toward research. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34(4):319–330. doi:10.1007/s10488-006-0111-x.
  21. Petry NM, Simcic F. Recent advances in the dissemination of contingency management techniques: Clinical and research perspectives. Journal of Substance Abuse Treatment. 2002;23(2):81–86. doi:10.1016/s0740-5472(02)00251-9.
  22. Petry NM, Martin B, Cooney JL, Kranzler HR. Give them prizes and they will come: Contingency management for treatment of alcohol dependence. Journal of Consulting and Clinical Psychology. 2000;68(2):250–257. doi:10.1037//0022-006x.68.2.250.
  23. Pinto RM, Yu G, Spector AY, Gorroochurn P, McCarty D. Substance abuse treatment providers' involvement in research is associated with willingness to use findings in practice. Journal of Substance Abuse Treatment. 2010;39(2):188–194. doi:10.1016/j.jsat.2010.05.006.
  24. Prendergast M, Podus D, Finney J, Greenwell L, Roll J. Contingency management for treatment of substance use disorders: A meta-analysis. Addiction. 2006;101(11):1546–1560. doi:10.1111/j.1360-0443.2006.01581.x.
  25. Proctor E, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36(1):24–34. doi:10.1007/s10488-008-0197-4.
  26. Ritter A, Cameron J. Australian clinician attitudes towards contingency management: Comparing down under with America. Drug and Alcohol Dependence. 2007;87(2–3):312–315. doi:10.1016/j.drugalcdep.2006.08.011.
  27. Rudes DS, Portillo S, Murphy A, Rhodes A, Stitzer M, Loungo P, Taxman FS. Adding positive reinforcements in a criminal justice setting: Acceptability and feasibility. Journal of Substance Abuse Treatment. (in press). doi:10.1016/j.jsat.2011.08.002.
  28. Schumacher JE, Milby JB, Wallace D, Meehan DC, Kertesz S, Vuchinich R, Dunning J, et al. Meta-analysis of day treatment and contingency-management dismantling research: Birmingham Homeless Cocaine Studies (1990–2006). Journal of Consulting and Clinical Psychology. 2007;75(5):823–828. doi:10.1037/0022-006X.75.5.823.
  29. Stitzer ML, Petry NM, Peirce J. Motivational incentives research in the National Drug Abuse Treatment Clinical Trials Network. Journal of Substance Abuse Treatment. 2010;38(Supp 1):S61–S69. doi:10.1016/j.jsat.2009.12.010.
  30. Taxman FS, Belenko S. Implementing Evidence-based Practices in Community Corrections and Addiction Treatment. New York: Springer; 2011 (in press).
  31. Taxman FS, Henderson CE, Belenko S. Organizational context, systems change, and adopting treatment delivery systems in the criminal justice system. Drug and Alcohol Dependence. 2009;103(Supplement 1):S1–S6. doi:10.1016/j.drugalcdep.2009.03.003.
  32. Taxman FS, Soule D, Gelb A. Graduated sanctions: Stepping into accountable systems and offenders. The Prison Journal. 1999;79(2):182–204.
  33. Trotman A, Taxman FS. Implementation of a contingency management-based intervention in a community supervision setting: Clinical issues and recommendations. Journal of Offender Rehabilitation. (in press). doi:10.1080/10509674.2011.585924.
  34. Walker R, et al. Disseminating contingency management to increase attendance in two community substance abuse treatment centers: Lessons learned. Journal of Substance Abuse Treatment. 2010;39(3):202–209. doi:10.1016/j.jsat.2010.05.010.
  35. Ward G, Kupchik A. What drives juvenile probation officers? Crime & Delinquency. 2010;56(1):35–69.
  36. Willenbring ML, Kivlahan D, Kenny M, Grillo M, Hagedorn H, Postier A. Beliefs about evidence-based practices in addiction treatment: A survey of Veterans Administration program leaders. Journal of Substance Abuse Treatment. 2004;26(2):79–85. doi:10.1016/S0740-5472(03)00161-2.
