Implementation Science Communications
2025 Dec 15;7:9. doi: 10.1186/s43058-025-00841-7

Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics): preparation phase outcomes of a hybrid type 3 trial

Sara J Becker 1, Tim Janssen 2, Cara M Murphy 2, Kelli Scott 1, Kira DiClemente-Bosco 1, Tim Souza 2, Bryan R Garner 3
PMCID: PMC12821900  PMID: 41398966

Abstract

Background

According to phasic models of implementation, a Preparation phase designed to enhance the implementation climate should be completed prior to the Implementation phase. Yet preparatory activities and outcomes are rarely reported or assessed in implementation research. Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics) was a hybrid type 3 effectiveness-implementation trial that compared two multi-component, phasic strategies to implement contingency management (CM) in opioid treatment programs. The current secondary analysis assessed the comparative effectiveness of the two strategies on 5-month Preparation phase outcomes: attainment of knowledge and fidelity benchmarks, implementation climate at the end of the Preparation phase, and time required for providers to complete the final preparatory/pre-implementation activity of enrolling and scheduling their first CM patient.

Methods

Twenty-eight opioid treatment programs and 186 staff were cluster-randomized to receive the Addiction Technology Transfer Center (ATTC) control strategy (didactic workshop + performance feedback + consultation) or the theory-driven Enhanced-ATTC (E-ATTC) experimental strategy. During the Preparation phase, the E-ATTC strategy consisted of the ATTC strategy plus monthly Implementation Sustainment Facilitation sessions rooted in principles of team-based motivational interviewing to cultivate a strong implementation climate and accelerate successful completion of the Preparation phase.

Results

Across the 28 OTPs and 186 staff, attainment of knowledge and fidelity benchmarks favored the E-ATTC strategy but did not differ significantly by condition. Implementation climate ratings after the Preparation phase were high in both conditions, with no differences by condition. Providers randomized to E-ATTC completed their final preparatory activity at significantly higher rates than those randomized to ATTC. Cox regression revealed that receipt of the E-ATTC strategy was also associated with significantly faster completion of the final preparatory activity.

Conclusions

Consistent with hypotheses, the theory-driven implementation strategy was associated with higher rates of and faster time to completion of preparatory activities, a key indicator of readiness for implementation. Counter to expectations, this effect was not driven by differences in implementation climate. High ratings of implementation climate at baseline limited our ability to detect change over time, highlighting a need for alternate strategies to measure putative mechanisms of change. This analysis adds to the scant literature reporting Preparation phase strategies and outcomes, which are strong predictors of successful implementation.

Trial registration

This study is registered on ClinicalTrials.gov (NCT03931174).

Supplementary Information

The online version contains supplementary material available at 10.1186/s43058-025-00841-7.

Keywords: Preparation phase, Hybrid trial, Addiction Technology Transfer Center, Contingency management, Opioid use disorder, Substance use


Contributions to the literature.

  • Phasic implementation strategies are common, yet Preparation phase strategies and outcomes are rarely reported.

  • This secondary analysis of a hybrid type 3 trial compared two phasic implementation strategies on key Preparation phase outcomes: attainment of benchmarks, implementation climate, and time required for providers to complete the final preparatory/pre-implementation activity of enrolling their first patient.

  • The enhanced implementation strategy significantly increased and accelerated completion of the final preparatory activity but did not improve implementation climate, the theorized mechanism of change.

  • Theory-driven Preparation phase strategies can accelerate readiness for implementation, but more sensitive measures may be needed to elucidate mechanisms of change.

Background

The robust evidence base for contingency management (CM) [1–6], a behavioral intervention in which patients earn incentives for meeting their substance-related treatment goals, has led to widespread calls to implement the intervention more widely [7–9]. CM is highly effective when delivered as a standalone treatment [1–4] and as an adjunct to medication for opioid use disorder [5, 6]. To address well-documented barriers to implementation of this evidence-based intervention, recent efforts to implement CM have relied on multi-component strategies that simultaneously address barriers to change at both the provider- and organization-levels [10–12]. Increasingly, such efforts have used a phasic approach [10, 11, 13], guided by Aarons and colleagues’ Exploration-Preparation-Implementation-Sustainment (EPIS) framework [14, 15]. According to the EPIS framework, implementation initiatives should include preparatory strategies to enhance the implementation climate (i.e., the extent to which the evidence-based practice is expected, supported, and rewarded) prior to active implementation [16]. Yet preparatory activities are rarely reported or assessed in phasic implementation research. Indeed, a recent synthesis of articles using the EPIS framework [16] found that only 18 of 49 articles (37%) explicitly contained a Preparation phase and only nine studies (19%) measured indicators of the implementation climate during this phase. Attention to preparatory strategies during the Preparation phase, and the effect of those strategies on key preparatory outcomes, is an understudied area of implementation science.

Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics) was a hybrid type 3 effectiveness-implementation trial, guided by the EPIS model, that compared two multi-component, phasic strategies to advance CM implementation in opioid treatment programs (OTPs) [17]. The first strategy mirrored the approach employed by the SAMHSA-funded Addiction Technology Transfer Center (ATTC) Network, the largest network of intermediary purveyor organizations dedicated to training the addiction workforce [18]. The ATTC strategy consisted of three components supported by the robust literature on provider training [19, 20]: didactic workshop + performance feedback + consultation. These components were designed to address provider knowledge and fidelity as key determinants of implementation. The second strategy was an Enhanced-ATTC (E-ATTC) strategy that layered in two components at different timepoints to enhance the implementation climate: facilitation and incentivization. Facilitation started in the Preparation phase and targeted the expected and supported components of implementation climate, whereas incentivization started in the Implementation phase and targeted the reward component. By targeting implementation climate, we expected to accelerate CM adoption and implementation.

To augment the Project MIMIC primary outcomes analysis [21], which focused on the 9-month Implementation phase, we assessed the comparative effectiveness of the E-ATTC and ATTC strategies on key outcomes related to the 5-month Preparation phase. Specifically, we assessed the effect of the two multi-component strategies on: attainment of Preparation phase knowledge and fidelity benchmarks, implementation climate at the end of the Preparation phase, and time required for providers to complete their final preparatory/pre-implementation activity and demonstrate readiness for implementation. We expected the E-ATTC strategy to be associated with superior implementation climate at the end of the Preparation phase and both higher level of and faster time to completion of preparatory activities, given the strategy’s explicit focus on enhancing the implementation climate to catalyze and accelerate CM delivery. Because the ATTC strategy (received by both conditions) explicitly targeted provider-level knowledge and fidelity, and these benchmarks were assessed early in the Preparation phase shortly after facilitation commenced, we did not expect the strategies to have differential effects on the attainment of the knowledge and fidelity benchmarks.

Methods

The methods for this study were described in detail in a published protocol [17] and registered on ClinicalTrials.gov (NCT03931174).

Organizational partners

Project MIMIC recruited 28 OTPs across three staggered cohorts. OTPs were recruited via informational flyers and brief PowerPoint presentations, distributed via state Department of Health administrators and cold calls from the research team to OTP leaders. The informational materials emphasized the opportunity for OTPs to receive state-of-the-art training and implementation support in CM in partnership with the regional ATTC. To participate, OTPs had to dispense methadone and/or buprenorphine, admit at least four new patients per month, have at least two full-time providers willing to deliver CM and receive intensive CM implementation support, and have at least one provider willing to serve in a supervisory role. OTPs also had to agree to enroll and track CM delivery to at least 25 patients who had been inducted on methadone and/or buprenorphine within the past 30 days. Any OTP provider interested in having their OTP participate could contact the study hotline, from which research staff confirmed eligibility. Eligible OTPs were given a non-binding Organizational Agreement Form that detailed the study procedures, organizational support to be provided, and expected time commitment. OTPs that returned the Organizational Agreement Forms signed by institutional leaders or administrators were enrolled in the order that forms were received until each cohort was filled (maximum of 12 OTPs per cohort).

Randomization to condition

OTPs that enrolled in the study were asked to complete a Baseline Organizational Assessment form, which collected organization-level background data. The study statistician employed urn randomization [22] to assign OTPs to either the ATTC or E-ATTC implementation strategy. Within each cohort, the urn balanced on up to three of the following variables: patient census; state in which licensed to dispense medication; number of providers; and non-profit vs. for-profit status. Specific variables and cutoffs included in the urn depended on the level of heterogeneity in each cohort. For census and number of providers per OTP, the median value in each cohort was used as the cutoff to designate high and low levels.

Provider recruitment

Following randomization to condition, each OTP was asked to nominate up to 7 providers, which could include 2–5 front-line counselors delivering CM and 1–2 leaders supervising CM, to participate in CM implementation activities. To be eligible to participate, counselors had to have active caseloads of patients receiving methadone and/or buprenorphine and be willing to receive 14 months of implementation support; leaders had to supervise counselors meeting these criteria. Nominated providers were emailed condition-specific electronic consent forms that outlined study procedures with clear delineation of roles for counselors and leaders. Research staff contacted the nominated providers via phone and in-person site visits to confirm eligibility and ensure they had an ample opportunity to ask questions about the risks and benefits of participation, prior to obtaining informed consent.

Evidence-based intervention: prize-based contingency management

The CM protocol used in Project MIMIC has been described elsewhere [17, 21], was based on the prize-based CM protocol used in both the Veterans Administration national rollout [12] and the NIDA Clinical Trials Network [23], and was informed by qualitative feedback from OTP staff collected during the Exploration phase [24, 25]. Briefly, the protocol was prize-based CM using the “fishbowl” method and consisted of the following elements: (a) development of objectively verifiable OTP-specific engagement targets (e.g., attendance at weekly group or individual counseling, receipt of medication doses, etc.); (b) weekly monitoring of patient attainment of the OTP-specific engagement target; (c) immediate provision of reinforcement; (d) escalation of draws by 1 weekly for 12 weeks; and (e) expected value of $3 per draw for a maximum total expected value of approximately $250.
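Under one plausible reading of the schedule above (a patient attains the engagement target every week, earning 1 draw in week 1 escalating to 12 draws in week 12), the arithmetic behind the stated maximum can be sketched as follows. The figures here are illustrative assumptions, not taken from the protocol itself; the result ($234) is slightly below the stated approximate maximum of $250, which may reflect bonus draws or rounding in the protocol.

```python
# Back-of-envelope check of the escalating draw schedule, assuming
# perfect weekly attainment of the engagement target for 12 weeks.
weekly_draws = list(range(1, 13))      # 1 draw in week 1, ..., 12 in week 12
total_draws = sum(weekly_draws)        # 78 draws across the 12-week protocol
expected_value = 3 * total_draws       # at $3 expected value per draw: $234
print(total_draws, expected_value)     # 78 234
```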

Preparation phase implementation strategies

The Project MIMIC protocol contained a comprehensive accounting of strategies spanning the Preparation, Implementation, and Sustainment phases [17]. During the 5-month Preparation phase, all providers (counselors and leaders) in both conditions received the three components of the ATTC strategy: didactic workshop + performance feedback + consultation. Providers randomized to E-ATTC additionally received monthly facilitation sessions throughout the Preparation phase. The E-ATTC strategy also contained incentivization (see [21]), but this component did not commence until all preparatory/pre-implementation activities were complete and active CM implementation had begun, and it is therefore not discussed here. The timing and components of strategies delivered during the Preparation phase are depicted in Fig. 1 and elaborated below. As shown in Fig. 1, the day after the 5-month Preparation phase ended was designated “CM Launch Day,” after which providers could begin enrolling CM patients.

Fig. 1.

Fig. 1

Implementation strategies delivered in the Preparation phase of the Project MIMIC cluster-randomized trial. CM = contingency management; ATTC = Addiction Technology Transfer Center strategy; E-ATTC = Enhanced Addiction Technology Transfer Center strategy

Didactic workshop (ATTC and E-ATTC conditions)

Providers in both conditions who planned to deliver CM (all counselors and any leaders planning to offer back-up coverage) were expected to attend a didactic workshop, which was offered in-person prior to March 2020 (start of the COVID pandemic) and virtually thereafter (see [26] for details). The workshop augmented didactic instruction from a CM subject matter expert with active learning strategies such as observation of CM demonstrations; practice rating and delivering CM sessions using a skills-based fidelity criterion; and small-group discussion of key CM workflow decisions. Immediately following the didactic workshop, OTP providers completed a 20-item knowledge test, adapted from the test used in the VA rollout of CM [12]. Providers who scored below 70% on the knowledge test were given corrective feedback and asked to resubmit the test until they earned a passing score.

Performance feedback (ATTC and E-ATTC conditions)

Providers in both implementation strategy conditions who planned to deliver CM (all counselors and any leaders interested in providing back-up coverage) were also required to complete a role play within 30 days of the didactic workshop. Role plays were rated by coders blind to condition using the CM Competence Scale [27, 28], which contained six CM-specific skills (e.g., specifying the engagement target, specifying number of prize draws earned, assessing desire for prizes) and three general counseling skills (e.g., therapeutic effectiveness, maintaining session structure, empathy). Role plays could be conducted with a research staff member who acted as a standardized patient or with OTP staff based on standardized case examples. All role plays were recorded and submitted for performance feedback. Based on prior benchmarks documented in effectiveness trials, providers needed to receive an average score of 4.0 out of 7.0 on the 9-item CM Competence Scale to be deemed as having “adequate” fidelity. Providers who received an average score less than 4.0 were asked to resubmit audio recordings until they earned an adequate score.

Consultation (ATTC and E-ATTC conditions)

All providers, regardless of whether they intended to deliver CM or not, were invited to receive consultation focused on preparing for implementation. Consultation was provided by the CM subject matter expert who led the didactic training and a second expert in the provision of implementation support.

The goal of consultation was to provide OTPs and providers with the technical assistance needed to complete their preparatory activities and demonstrate readiness for implementation. To successfully complete preparatory activities, OTPs needed to have at least two providers pass the knowledge test and demonstrate CM competence. As soon as two providers passed the CM knowledge test, the CM expert helped each OTP to select a set of locally meaningful prizes. Once at least two providers demonstrated CM competence, the implementation expert conducted a half-day site visit with each OTP to stock the prize cabinet and review site-specific procedures for integrating CM into the OTP workflow. During this half-day site visit, providers were informed that in addition to demonstrating knowledge and fidelity, they needed to complete one final preparatory/pre-implementation activity to be considered successful completers of all preparatory activities and deemed ready for implementation: specifically, providers had to enroll at least one patient in CM, indicated by scheduling their first session in an online CM Tracker (see [17, 21]), as soon after “CM Launch Day” as feasible. Enrolling at least one patient was deemed an essential precursor to the start of active CM implementation, yet due to the inherent challenges engaging OTP patients, enrolled patients were not necessarily expected to receive CM.

Facilitation (E-ATTC condition only)

All OTP providers randomized to E-ATTC, regardless of whether they intended to deliver CM or not, were invited to participate in monthly hour-long virtual facilitation sessions led by an expert external to the OTP. Sessions began the month before the didactic workshop and were guided by the Implementation & Sustainment Facilitation protocol [2931], which is rooted in principles of team-based motivational interviewing [20, 32] and seeks to increase the team’s motivation to optimize implementation climate.

In each session, the expert facilitator strived to complete four key tasks: (a) Engage the providers on the CM implementation team; (b) Focus the team on two key principles of CM implementation effectiveness (quality and consistency of CM delivery); (c) Evoke from the team thoughts about expected CM implementation effectiveness; and (d) Plan how to optimize the level of CM implementation effectiveness. Each session consisted of team-based exercises designed to establish a shared vision and increase collective motivation for CM implementation. Example exercises included discussing the pros and cons of implementing CM, identifying the OTP’s change-related strengths and weaknesses, and examining success factors in prior implementation efforts. OTPs were offered 3–4 facilitation sessions during the Preparation phase (depending on the timing of the didactic workshop and OTP availability): these monthly sessions continued during the Implementation phase. Importantly, facilitation sessions and the associated team-based exercises were hypothesized to cultivate a climate in which attainment of key CM preparatory outcomes (e.g., passing the knowledge test, demonstrating adequate CM competence, enrolling the first CM patient) was expected, supported, and rewarded [33]. Central to cultivating a strong implementation climate was encouraging the efficient and timely completion of Preparation phase activities to build shared momentum for change: facilitators therefore encouraged CM providers to enroll their first patient as soon after the “CM Launch Day” as possible.

Preparation outcomes

Benchmark attainment

For each condition, we calculated the proportion of providers attaining a passing score of 70% or greater on the post-workshop CM knowledge test and attaining an adequate score of 4.0 or greater on the CM Competence Scale for the CM role play. Each proportion was calculated among both the full sample of providers randomized to condition and the subsample of active providers, excluding providers for whom employment-related changes resulted in ineligibility (e.g., left the OTP/unit/division implementing CM, leave of absence) or for whom participation was optional (e.g., leaders providing supervision only).

Implementation climate

We examined implementation climate at baseline and the end of the Preparation phase. Implementation climate was measured at the provider-level in two ways. First, we used Jacobs and colleagues’ 6-item scale [34] that assessed the extent to which providers agreed that use of CM was expected, supported, and rewarded within their OTP. Second, we used a 4-item leadership engagement scale developed by Garner and colleagues that asked providers to rate the extent to which leadership was supportive of CM implementation. This scale was included as a proxy of implementation climate due to the well-documented relationship between leadership engagement and implementation climate [33, 35]. Both scales were rated on 5-point Likert scales and averaged to obtain overall scores, with higher scores indicating a more favorable climate.

Completion of the final Preparation activity

At the provider-level, we calculated the proportion of providers in each condition that completed the final Preparation activity of enrolling at least one patient in CM. We also calculated the number of days from the “CM Launch Day” that each provider needed to complete this activity, as indicated by the date on which the provider logged their first patient in the online CM Tracker.

Analysis plan

Analytical procedures varied by the outcome of interest. For the indicators of provider-level benchmark attainment, we conducted chi-square tests of independence to evaluate whether the proportion of providers demonstrating knowledge and fidelity varied as a function of condition assignment. To use as stringent a comparison as possible, we only compared scores on the initial knowledge test and role play submission. For implementation climate, we investigated whether condition predicted change in provider-level scores (on both the implementation climate and leadership engagement scales) via multiple regression controlling for baseline scores.

For completion of the final preparatory activity, we first used a chi-square test to compare the proportion of providers completing the task. We then employed Cox regression, in which a cumulative incremental hazard of meeting the “event” (enrolling the first CM patient) was estimated over time for each condition, and the extent to which membership in the E-ATTC condition predicted this hazard was estimated as a hazard ratio. The hazard rate, also known as the instantaneous event rate, was defined as the risk of a provider enrolling their first CM patient at a specific timepoint, given that the event had not occurred earlier. The hazard ratio was then calculated as the ratio of the hazard rates in the two conditions (i.e., the hazard rate in the E-ATTC condition divided by the hazard rate in the ATTC condition): a hazard ratio greater than 1.0 indicated that the event was more likely in the E-ATTC condition, whereas a hazard ratio less than 1.0 indicated that it was more likely in the ATTC condition. Prior to estimating the Cox regression, we coded two variables for each provider: (a) the number of days between “CM Launch Day” and the day on which the first CM patient was enrolled, capped at 270 days (the duration of the Implementation phase); and (b) whether the event had (coded 1) or had not (right-censored; coded 0) occurred. All 186 providers were included in this analysis, consistent with an intent-to-treat approach. Cox regression models were run in Mplus 8.10 [36] using a maximum-likelihood estimator with robust standard errors [37] that controlled for clustering within organizations.
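The model described above can be summarized in standard proportional-hazards notation; the symbols below (h_0, beta, x_i) are generic expository choices, not taken from the study protocol:

```latex
% Hazard of provider i enrolling their first CM patient at time t,
% with x_i an indicator for condition assignment:
h_i(t) = h_0(t)\,\exp(\beta x_i), \qquad
x_i = \begin{cases} 1 & \text{provider } i \text{ in E-ATTC} \\
                    0 & \text{provider } i \text{ in ATTC} \end{cases}
% The hazard ratio comparing conditions is the exponentiated coefficient;
% HR > 1 implies faster completion under E-ATTC:
\mathrm{HR} = \frac{h_{\text{E-ATTC}}(t)}{h_{\text{ATTC}}(t)} = e^{\beta}
```

Under this formulation, an estimated HR of 1.86 corresponds to an 86% higher instantaneous rate of completing the final preparatory activity at any given time in the E-ATTC condition.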

Results

Provider progress and characteristics

Figure 2 presents a flow chart of provider progress through the study and engagement in Preparation activities. In total, 28 OTPs participated from which 186 providers were randomized to condition, 95 to ATTC and 91 to E-ATTC. As detailed in the primary outcomes paper [21], demographics mirrored the demographics of the behavioral health workforce in New England [38, 39], with providers identifying as predominantly female (81.7%) and Non-Hispanic White (76.3%) with most having a Master’s degree or higher (62.4%).

Fig. 2.

Fig. 2

Flow chart of provider progress through the Preparation phase milestones; CM = contingency management; ATTC = Addiction Technology Transfer Center strategy; E-ATTC = Enhanced Addiction Technology Transfer Center strategy

At the OTP-level, all 28 OTPs had at least two providers attain the knowledge and fidelity benchmarks, and all completed the Preparation phase. Among the 14 OTPs randomized to the E-ATTC condition, the number of facilitation sessions received ranged from 2–4 (M = 3.3, SD = 0.7), with attendance ranging from 2–7 providers per session (M = 4.7, SD = 1.3). At the provider-level, completion of specific preparatory activities varied substantially, as shown in Fig. 3 and elaborated in the following sections.

Fig. 3.

Fig. 3

Cox regression showing the number of days required for different proportions of providers to complete their final preparatory activity and enroll their first contingency management (CM) patient. ATTC = Addiction Technology Transfer Center strategy; E-ATTC = Enhanced Addiction Technology Transfer Center strategy

Attainment of preparatory phase benchmarks

Passing the CM knowledge test

At the time of the didactic training and post-training knowledge test, 172 providers were deemed “active” (14 inactive: 9 left the OTP/were on leave; 5 leaders opted not to complete). A total of 149 providers (86.6% of active providers, 80.1% of full sample) attended the didactic workshop; attendance rates did not vary by condition. Of these 149 providers, 122 (81.9% of those attending the didactic workshop, 70.9% of active providers, 65.5% of full sample) passed the knowledge test on the first submission. Rates of passing the CM knowledge test favored E-ATTC but did not significantly differ between conditions among those actively employed (E-ATTC 73.3% vs. ATTC 68.6%, χ²(1, N = 172) = 0.46, p = 0.50) or the full sample (E-ATTC 69.2% vs. ATTC 62.1%, χ²(1, N = 186) = 1.05, p = 0.31).

Demonstrating adequate CM competence

At the time of the CM role play submission and competence evaluation, 135 providers were deemed “active” (51 inactive: 20 left the OTP/were on leave; 31 leaders opted not to complete). As shown in Fig. 2, there was a marked decrease in the number of active providers because over 30 leaders who had attended the didactic workshop opted not to submit a role play. A total of 118 providers (91.5% of active providers, 63.4% of full sample) submitted a role play; role play submission rates did not vary by condition. Of the 118 providers, 103 (87.3% of those submitting a role play, 76.3% of active providers, 55.4% of full sample) demonstrated adequate CM competence on their first role play. Passing rates favored E-ATTC by over 12 percentage points but did not significantly differ between conditions among active providers (E-ATTC 82.8% vs. ATTC 70.4%, χ²(1, N = 135) = 2.84, p = 0.09) or the full sample (E-ATTC 58.2% vs. ATTC 52.6%, χ²(1, N = 186) = 0.59, p = 0.44).

Implementation climate

Average ratings on the implementation climate scale were generally high across providers in both conditions at baseline (E-ATTC: M = 4.29, SD = 0.81 vs. ATTC: M = 4.33, SD = 0.86) and among active providers at the end of the Preparation phase (E-ATTC: M = 4.19, SD = 0.84 vs. ATTC: M = 4.07, SD = 0.94). The same pattern was observed on the leadership engagement scale, with high scores observed at both timepoints (baseline: E-ATTC: M = 4.14, SD = 1.02 vs. ATTC: M = 4.03, SD = 1.11; follow-up: E-ATTC: M = 3.74, SD = 1.15 vs. ATTC: M = 3.83, SD = 1.24). Both scales demonstrated high internal consistency (alphas 0.89–0.90) across conditions and timepoints, with a substantial proportion of providers scoring near the top end of the scale (see Supplemental Material 1). The effect of condition was not significant in predicting either implementation climate (b = 0.17, p = 0.27) or leadership engagement (b = −0.10, p = 0.68) at the end of the Preparation phase, when controlling for ratings prior to the Preparation phase.

Days required to complete final preparatory activity

The proportion of providers in the full sample completing the final preparatory activity was significantly higher in the E-ATTC condition (51%) than the ATTC condition (33%), χ²(1, N = 185) = 6.16, p = 0.01. Cox regression (see Fig. 3) predicting change in the hazard of providers completing the final preparatory task and demonstrating readiness for implementation indicated that the hazard ratio was significantly greater in the E-ATTC condition than in the ATTC condition (HR: 1.86; 95% CI: 1.18, 2.93). A hazard ratio of 1.86 indicates that, at any given point in time, the rate of providers having completed their final preparatory activity was 86% higher in the E-ATTC condition than in the ATTC condition. Because each program was expected to have at least 2 of the 7 enrolled providers deliver CM, we examined the days required to have 30% of providers complete the final preparatory milestone as an indicator of the size of the effect: in this sample, this milestone was attained approximately 119 days faster in the E-ATTC condition (40 days) than in the ATTC condition (159 days).

Discussion

This study focused on the Preparation phase of a CM implementation trial, a crucial but less commonly studied phase in the implementation process. As highlighted in a systematic review by Moullin and colleagues [16], the Preparation phase is not routinely assessed in implementation studies, despite its importance for planning implementation supports and determining when an organization is ready for implementation. This study specifically compared the effectiveness of two multi-component implementation strategies, ATTC and E-ATTC, on the attainment of key Preparation phase milestones and outcomes across 28 OTPs. Specifically, we examined differences between implementation strategies (ATTC and E-ATTC) on provider knowledge and fidelity benchmarks, on provider-reported perceptions of implementation climate, and on both the level and speed with which providers completed the final preparatory activity and demonstrated readiness for implementation.

Results were generally consistent with our hypotheses. The two implementation strategy conditions yielded comparable attainment of the knowledge and fidelity benchmarks, which was expected given the extensive focus of both strategies on provider knowledge (via didactic training) and fidelity to the CM protocol (via performance feedback), and the measurement of these constructs early in the Preparation phase, shortly after facilitation commenced. Additionally, the theory-driven E-ATTC implementation strategy was associated with a higher level of and significantly faster time to successful completion of the final preparatory activity: at any given point in time, providers in the E-ATTC condition had an 86% higher rate of completing the final preparatory activity than providers in the ATTC condition. Estimates of the size of the effect indicated that the number of days required for 30% of providers to complete the activity was 119 days (approximately 4 months) shorter in the E-ATTC condition than the ATTC condition. Completion of preparatory/pre-implementation activities is a meaningful indicator of implementation readiness and creates a strong foundation for implementation, as preparatory activities are essential for gaining buy-in, planning for effective and efficient resource utilization, and maintaining momentum for change [40, 41]. Indeed, prior research has found that organizations that effectively execute a pre-implementation/Preparation phase are more likely to successfully implement new programs [42]. When combined with the results of our primary outcomes analysis, the current findings suggest that the E-ATTC strategy enabled more providers to move toward implementation more quickly, and that OTPs receiving the E-ATTC went on to have higher quality and consistency of CM delivery, as well as superior patient outcomes [21].

Contrary to our hypotheses, the expedited completion of preparatory activities was not driven by differences in implementation climate, the theorized mechanism of change. Beginning in the Preparation phase and continuing in the Implementation phase, participating opioid treatment programs received monthly team-based facilitation sessions rooted in principles of motivational interviewing [32] to foster a shared vision for change and promote a collective sense of commitment to the change effort. Results demonstrate that these sessions were effective in increasing the level and accelerating the pace at which providers completed preparatory/pre-implementation activities, but they did not change provider perceptions of the extent to which CM implementation was expected, supported, and rewarded.

The current results should be considered in the context of the scant literature examining change in implementation climate and leadership engagement over time, particularly during the Preparation phase. One study by Williams and colleagues (2020) [35] examined change in implementation climate and implementation leadership across 30 organizations over 5 years and found that average ratings of both constructs were unchanged across the three waves. Consistent with the current trial, average ratings of implementation leadership decreased directionally from wave 1 to wave 2. While other studies have tracked implementation climate across multiple timepoints using data from individual providers [43, 44] or aggregated across teams [45, 46], none of these studies reported average implementation climate scores over time to enable longitudinal comparison. Perhaps surprisingly, we could not find any trials demonstrating a significant increase in implementation climate over time in response to implementation strategies. Notably, a recent systematic review by Powell and colleagues [47] highlighted the paucity of prior trials in this area, noting that “far fewer measures of implementation climate” were identified than measures of other organizational constructs, and concluding that “nearly half the measures of implementation climate and related subconstructs were used only once” and that “psychometric evidence for implementation climate and its related subconstructs was less readily available.”

Given the scant prior literature, we posit several possible reasons why E-ATTC may have increased and accelerated the pace of activity completion without producing demonstrable changes in implementation climate, all of which reflect potential limitations of the trial. Most importantly, the measures of implementation climate relied on provider self-report via Likert scales, which demonstrated ceiling effects at baseline. Ceiling effects have frequently been observed on brief provider self-report scales measuring implementation constructs such as acceptability, appropriateness, and feasibility [48, 49]. Relatedly, it was noteworthy that perceptions of implementation climate and leadership engagement were both higher at baseline than at the end of the Preparation phase. This observation is conceptually akin to our team’s finding in a recent study that OTP teams’ perceptions of CM barriers were lower at baseline than after 6 months of active implementation [50], suggesting that OTP providers may overestimate the level of implementation support and underestimate the difficulty of implementation before active work begins. Taken together, these findings suggest that alternate measures, such as self-report scales with stronger psychometric support (e.g., Ehrhart et al.’s Implementation Climate Scale [51]), facilitator-rated scales, or latent constructs, along with more frequent data collection, may be needed to better understand change in implementation climate over time.

Other possible reasons for the lack of significant change in implementation climate include the fact that both the ATTC and E-ATTC strategies were multi-component, multi-level strategies that offered providers and opioid treatment programs substantial support. The high level of support provided in the “control” condition may have limited our ability to detect the additive effects of the E-ATTC strategy. This hypothesis is supported by the fact that both conditions had relatively high rates of attaining the provider knowledge and competence benchmarks. Additionally, it is possible that speed of completion was driven by unmeasured factors such as project management, accountability, or the strength of the working alliance with the facilitator [52, 53], rather than by implementation climate. Finally, it is possible that the 5-month time horizon was too brief to observe meaningful changes in implementation climate, particularly since programs had received only 2–4 facilitation sessions before this construct was measured. Future research should seek to develop and psychometrically validate alternate measures of implementation climate and administer such measures with greater frequency to elucidate how perceptions of implementation climate evolve during the Implementation and Sustainment phases.

Conclusions

This study extends prior literature on the importance and impact of the Preparation phase for supporting the successful implementation of evidence-based practices like CM. Providers receiving the theory-driven E-ATTC strategy completed their final preparatory activity far more rapidly than those receiving the standard ATTC strategy, with speed of completion accelerated by approximately 4 months in the E-ATTC versus the ATTC condition. Offering OTPs facilitation sessions rooted in motivational interviewing beginning in the Preparation phase, which required teams to invest only one hour per month, had a potent effect on the extent and speed with which providers moved through the Preparation phase and completed their preparatory activities.

The current work extends the primary outcomes analysis of the Project MIMIC trial, which found that the E-ATTC strategy was associated with superior CM consistency and quality during the Implementation phase, by finding that the E-ATTC strategy had significant effects on the speed of completing preparatory activities and demonstrating readiness for implementation. These results add to the limited yet growing body of literature on the utility of Preparation phase strategies to ensure that agencies have the necessary support for success. Systematic selection of Preparation phase strategies and careful measurement of Preparation phase outcomes can help to ensure that sites have sufficient support to proceed to implementation, while fostering a more robust understanding of the implementation process. To advance the identification of mechanisms that explain how and whether Preparation phase strategies improve outcomes, more sensitive measures of implementation climate may be required.

Supplementary Information

Acknowledgements

The authors are deeply saddened to recognize the contribution of Bryan R. Garner posthumously. The MIMIC trial is a testament to his scientific innovation, ambition, and leadership, as well as his incredible collaborative spirit, creativity, and mentorship. The authors would also like to acknowledge Samantha Moul, Kimberly Yap, Sharon Lang, Julia Yermash, Nick Correia, Sarah Salino, Fariha Hasan, Bryan Hartzler, and Carla Rash for their contributions to Project MIMIC. The authors are also grateful to the opioid treatment program leaders, counselors, and patients who participated in this trial, without whom this work would not have been possible.

Authors’ contributions

SB and BG designed and conceptualized the study and jointly oversaw study execution. TJ was the trial data analyst and led all analyses. TS was the trial data manager and led all data management, cleaning, and preparation of syntax throughout the trial. CM and KS were study Co-Investigators, contributed to all Preparation phase activities, and led the contingency management competence coding. KDB joined the team as a project director during the third cohort and oversaw day-to-day trial execution for the final two years of the study. SB and TJ created the first draft of the manuscript; KDB and TS prepared all study figures. All authors substantively reviewed and revised the manuscript.

Funding

This research is supported by the National Institute on Drug Abuse (NIDA) under R01DA046941 (Multiple Principal Investigators: Becker and Garner). The content is solely the responsibility of the authors and does not necessarily represent the official views of NIDA. The funder had no direct role in the study design, analysis plan, interpretation of data, or presentation of findings in manuscripts.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate

Ethical approval was provided by Brown University’s Institutional Review Board (Protocol 1811002260).

Consent for publication

Not applicable.

Competing interests

The authors have no competing interests to declare.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Ginley MK, Pfund RA, Rash CJ, Zajac K. Long-term efficacy of contingency management treatment based on objective indicators of abstinence from illicit substance use up to 1 year following treatment: a meta-analysis. J Consult Clin Psychol. 2021;89(1):58–71.
2. Prendergast M, Podus D, Finney J, Greenwell L, Roll J. Contingency management for treatment of substance use disorders: a meta-analysis. Addiction. 2006;101(11):1546–60.
3. Pfund RA, Ginley MK, Rash CJ, Zajac K. Contingency management for treatment attendance: a meta-analysis. J Subst Abuse Treat. 2022;133:108556.
4. Benishek LA, Dugosh KL, Kirby KC, Matejkowski J, Clements NT, Seymour BL, et al. Prize-based contingency management for the treatment of substance abusers: a meta-analysis. Addiction. 2014;109(9):1426–36.
5. Bolívar HA, Klemperer EM, Coleman SRM, DeSarno M, Skelly JM, Higgins ST. Contingency management for patients receiving medication for opioid use disorder: a systematic review and meta-analysis. JAMA Psychiatry. 2021;78(10):1092–102.
6. Griffith JD, Rowan-Szal GA, Roark RR, Simpson DD. Contingency management in outpatient methadone treatment: a meta-analysis. Drug Alcohol Depend. 2000;58(1–2):55–66.
7. Khazanov GK, McKay JR, Rawson R. Should contingency management protocols and dissemination practices be modified to accommodate rising stimulant use and harm reduction frameworks? Addiction. 2024;119(9):1505–14.
8. Proctor SL. Rewarding recovery: the time is now for contingency management for opioid use disorder. Ann Med. 2022;54(1):1178–87.
9. Becker SJ. Contingency management needs implementation science. Addiction. 2024;119(9):1522–4.
10. Freese TE, Rutkowski BA, Peck JA, Padwa H, Thompson C, Datrice A, et al. California’s recovery incentives program: implementation strategies. J Subst Use Addict Treat. 2024;167:209513.
11. Freese TE, Rutkowski BA, Peck JA, Urada D, Clark HW, Bland AN, et al. Recovery incentives program: California’s contingency management benefit. Prev Med. 2023;176:107703.
12. Petry NM, DePhilippis D, Rash CJ, Drapkin M, McKay JR. Nationwide dissemination of contingency management: the Veterans Administration initiative. Am J Addict. 2014;23(3):205–10.
13. Hartzler B, Gray K, Marx M, Kirk-Lewis K, Payne-Smith K, McIlveen JW. Community implementation of contingency management to address stimulant use. J Subst Use Addict Treat. 2023;151:208941.
14. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
15. Moullin JC, Aarons GA. Exploration, preparation, implementation, sustainment (EPIS) framework. In: Implementation Science. Routledge; 2022. p. 51–5.
16. Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the exploration, preparation, implementation, sustainment (EPIS) framework. Implement Sci. 2019;14(1):1.
17. Becker SJ, Murphy CM, Hartzler B, Rash CJ, Janssen T, Roosa M, et al. Project MIMIC (maximizing implementation of motivational incentives in clinics): a cluster-randomized type 3 hybrid effectiveness-implementation trial. Addict Sci Clin Pract. 2021;16(1):61.
18. McCance-Katz EF. The substance abuse and mental health services administration (SAMHSA): new directions. Psychiatr Serv. 2018;69(10):1046–8.
19. Downs A, Downs RC, Rau K. Effects of training and feedback on discrete trial teaching skills and student performance. Res Dev Disabil. 2008;29(3):235–46.
20. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. J Consult Clin Psychol. 2004;72(6):1050–62.
21. Becker SJ, Janssen T, Souza T, Hartzler B, Rash CJ, DiClemente-Bosco K, et al. Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics): results of a 28-site cluster-randomized type 3 hybrid trial. Implement Sci. 2025. In press.
22. Stout RL, Wirtz PW, Carbonari JP, Del Boca FK. Ensuring balanced distribution of prognostic factors in treatment outcome research. J Stud Alcohol Suppl. 1994;12:70–5.
23. Petry NM, Peirce JM, Stitzer ML, Blaine J, Roll JM, Cohen A, et al. Effect of prize-based incentives on outcomes in stimulant abusers in outpatient psychosocial treatment programs: a national drug abuse treatment clinical trials network study. Arch Gen Psychiatry. 2005;62(10):1148–56.
24. Becker SJ, Scott K, Murphy CM, Pielech M, Moul SA, Yap KR, et al. User-centered design of contingency management for implementation in opioid treatment programs: a qualitative study. BMC Health Serv Res. 2019;19(1):466.
25. Scott K, Jarman S, Moul S, Murphy CM, Yap K, Garner BR, et al. Implementation support for contingency management: preferences of opioid treatment program leaders and staff. Implement Sci Commun. 2021;2(1):47.
26. Hartzler B, Hinde J, Lang S, et al. Virtual training is more cost-effective than in-person training for preparing staff to implement contingency management. J Technol Behav Sci. 2023;8:255–64. 10.1007/s41347-022-00283-1.
27. Petry NM, Ledgerwood DM. The contingency management competence scale for reinforcing attendance. Farmington, CT: University of Connecticut Health Center; 2010. https://health.uconn.edu/contingency-management/wpcontent/uploads/sites/119/2017/04/CMCS_rating_manual_for_attendance_2010_1.pdf.
28. Petry NM, Alessi SM, Ledgerwood DM, Sierra S. Psychometric properties of the contingency management competence scale. Drug Alcohol Depend. 2010;109(1–3):167–74.
29. Garner BR, Gotham HJ, Chaple M, Martino S, Ford JH, Roosa MR, et al. The implementation and sustainment facilitation strategy improved implementation effectiveness and intervention effectiveness: results from a cluster-randomized, type 2 hybrid trial. Implement Res Pract. 2020;1. 10.1177/2633489520948073.
30. Hinde JM, Garner BR, Watson CJ, Ramanan R, Ball EL, Tueller SJ. The implementation & sustainment facilitation (ISF) strategy: cost and cost-effectiveness results from a 39-site cluster randomized trial integrating substance use services in community-based HIV service organizations. Implement Res Pract. 2022;3:26334895221089264.
31. Becker SJ, Ford JH II, Glass JE, Gotham H, Knudsen H, Quanbeck A, et al. Enduring contributions to implementation science: honoring the legacy of Bryan R. Garner. Front Health Serv. 2025;5:1698749.
32. Wagner CC, Ingersoll KS. Motivational interviewing in groups. New York: Guilford Press; 2012. https://www.google.com/books/edition/Motivational_Interviewing_in_Groups/GfTyuwGbk4AC?hl=en&gbpv=0.
33. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annu Rev Public Health. 2014;35:255–74.
34. Jacobs SR, Weiner BJ, Bunger AC. Context matters: measuring implementation climate among individuals and groups. Implement Sci. 2014;9:46.
35. Williams NJ, Wolk CB, Becker-Haimes EM, Beidas RS. Testing a theory of strategic implementation leadership, implementation climate, and clinicians’ use of evidence-based practice: a 5-year panel analysis. Implement Sci. 2020;15(1):10.
36. Muthén LK, Muthén BO. Mplus user’s guide. 8th ed. Los Angeles, CA: Muthén & Muthén; 1998–2017. https://www.statmodel.com/HTML_UG/introV8.htm.
37. Skinner C, Holt D, Smith T. Analysis of complex surveys. New York, NY: John Wiley & Sons Ltd.; 1989.
38. New Hampshire Division of Public Health Services. Annual report on the New Hampshire health care workforce and data collection. Department of Health and Human Services; 2024. https://www.dhhs.nh.gov/sites/g/files/ehbemt476/files/documents2/2024-legislative-report-on-the-new-hampshire-health-careworkforce-and-data-collection.pdf.
39. Scott K, Salas MDH, Bayles D, Sanchez R, Martin RA, Becker SJ. Substance use workforce training needs during intersecting epidemics: an analysis of events offered by a regional training center from 2017 to 2020. BMC Public Health. 2022;22(1):1063.
40. Ferrari RM, Leeman J, Brenner AT, Correa SY, Malo TL, Moore AA, et al. Implementation strategies in the exploration and preparation phases of a colorectal cancer screening intervention in community health centers. Implement Sci Commun. 2023;4(1):118.
41. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):iii–iv, 1–72.
42. Alley ZM, Chapman JE, Schaper H, Saldana L. The relative value of pre-implementation stages for successful implementation of evidence-informed programs. Implement Sci. 2023;18(1):30.
43. Egeland KM, Borge RH, Peters N, Bækkelund H, Braathu N, Sklar M, et al. Individual-level associations between implementation leadership, climate, and anticipated outcomes: a time-lagged mediation analysis. Implement Sci Commun. 2023;4(1):75.
44. Beidas RS, Williams NJ, Green PD, Aarons GA, Becker-Haimes EM, Evans AC, et al. Concordance between administrator and clinician ratings of organizational culture and climate. Adm Policy Ment Health. 2018;45(1):142–51.
45. Williams NJ, Ehrhart MG, Aarons GA, Marcus SC, Beidas RS. Linking molar organizational climate and strategic implementation climate to clinicians’ use of evidence-based psychotherapy techniques: cross-sectional and lagged analyses from a 2-year observational study. Implement Sci. 2018;13(1):85.
46. Williams NJ, Hugh ML, Cooney DJ, Worley JA, Locke J. Testing a theory of implementation leadership and climate across autism evidence-based interventions of varying complexity. Behav Ther. 2022;53(5):900–12.
47. Powell BJ, Mettert KD, Dorsey CN, Weiner BJ, Stanick CF, Lengnick-Hall R, et al. Measures of organizational culture, organizational climate, and implementation climate in behavioral health: a systematic review. Implement Res Pract. 2021;2:26334895211018864.
48. Fioratti I, Santos VS, Fernandes LG, Rodrigues KA, Soares RJ, Saragiotto BT. Translation, cross-cultural adaptation and measurement properties of three implementation measures into Brazilian-Portuguese. Arch Physiother. 2023;13(1):7.
49. Bourdeau B, Guzé MA, Rebchook GM, Shade SB, Psihopaidas D, Chavis NS, et al. Measuring implementation outcomes change over time using an adapted Checklist for Assessing Readiness to Implement (CARI). AIDS Behav. 2025. 10.1007/s10461-025-04614-0.
50. Dir AL, Patrick BM, Salino S, DiClemente-Bosco K, Becker SJ. Tracking implementation determinants over time using the IFASIS: multi-site analysis of opioid treatment programs implementing a digital contingency management platform. Implement Sci Commun. 2025;6(1):103.
51. Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the implementation climate scale (ICS). Implement Sci. 2014;9:157.
52. Yakovchenko V, Merante M, Chinman MJ, Neely B, Lamorte C, Gibson S, et al. The “good enough” facilitator: elucidating the role of working alliance in the mechanism of facilitation. Implement Sci Commun. 2025;6(1):22.
53. Ritchie MJ, Parker LE, Kirchner JE. From novice to expert: a qualitative study of implementation facilitation skills. Implement Sci Commun. 2020;1:25.
