Author manuscript; available in PMC 2017 Sep 1.
Published in final edited form as: Adm Policy Ment Health. 2016 Sep;43(5):783–798. doi:10.1007/s10488-015-0693-2

Multilevel Mechanisms of Implementation Strategies in Mental Health: Integrating Theory, Research, and Practice

Nathaniel J. Williams
PMCID: PMC4834058  NIHMSID: NIHMS731479  PMID: 26474761

Abstract

A step toward the development of optimally effective, efficient, and feasible implementation strategies that increase evidence-based treatment integration in mental health services involves identification of the multilevel mechanisms through which these strategies influence implementation outcomes. This article (a) provides an orientation to, and rationale for, consideration of multilevel mediating mechanisms in implementation trials, and (b) systematically reviews randomized controlled trials that examined mediators of implementation strategies in mental health. Nine trials were located. Mediation-related methodological deficiencies were prevalent and no trials supported a hypothesized mediator. The most common reason was failure to engage the mediation target. Discussion focuses on directions to accelerate implementation strategy development in mental health.

Keywords: Mechanism, Mediation, Multilevel, Implementation strategy, Mental health, Systematic review

Introduction

Studies of mental health service systems reveal widespread deficits in the delivery of effective, evidence-based treatments (EBTs) shown to improve the outcomes of clinical care (Collins et al. 2011; Garland et al. 2013; McHugh and Barlow 2012; Saloner et al. 2014; Weisz et al. 2013). These deficits result in unnecessary disease burden for millions of youth and adults who experience mental illnesses each year and waste limited resources that could otherwise be allocated to effective care (Kessler et al. 2009; Steel et al. 2014). In response, the National Institute of Mental Health and the Institute of Medicine have prioritized research on implementation strategies designed to increase the adoption and integration of EBTs into mental health service systems (Insel 2009; Institute of Medicine 2001). Investigators have responded to these calls with hundreds of studies that describe barriers and facilitators to EBT implementation as well as scores of randomized controlled trials (RCTs) testing implementation strategies in mental health settings (Chaudoir et al. 2013; Greenhalgh et al. 2004; Novins et al. 2013). Randomized trials have tested a variety of implementation strategies with multiple components and multiple targeted outcomes at multiple system levels (Powell et al. 2014). However, despite the proliferation of research, the accumulating body of evidence offers little information regarding how and why effective implementation strategies facilitate EBT adoption and integration. Although numerous candidate mechanisms are suggested by implementation theories (Aarons et al. 2011; Tabak et al. 2012), as well as by primary theories of organizational and individual behavior change (Grol et al. 2007; Michie et al. 2011), recent reviews suggest implementation strategies rarely invoke, much less test, these theory-based constructs (Davies et al. 2010; Novins et al. 2013). 
As a result, little is known about the mutable, generalizable, and causal change mechanisms through which implementation strategies improve care. This is an important deficit because of the high rate of failed implementation strategies in mental health settings, the complexity and expense of the most promising strategies, and the highly context-dependent outcomes of implementation trials (Berwick 2008; Powell et al. 2014; Novins et al. 2013). These knowledge gaps restrict efforts to improve the efficiency, effectiveness, and targeting of implementation strategies and perpetuate quality and effectiveness deficiencies in mental health service systems.

This article aims to present evidence for the utility of testing multilevel change mechanisms of implementation strategies in mental health. The large number of theories and empirical studies describing antecedents to EBT implementation at multiple levels and the burgeoning number of RCTs provide unprecedented opportunity to advance our understanding of how to improve the adoption and integration of EBTs into mental health systems. However, achieving this goal requires the integration of these two streams of research. Toward this end, the specific goals of this article are to: (a) provide an orientation to conceptualizing multilevel mediators and mechanisms relevant to EBT implementation in mental health services, (b) offer a rationale for studying multilevel mediators of implementation strategies as well as suggestions for selecting and testing mediators, (c) systematically review the empirical status of candidate mediators of implementation strategies in mental health, and (d) offer recommendations for future research. Readers are reminded that the review reported in this article does not include all RCTs of implementation strategies in mental health, only those that focus on testing mediators of implementation or clinical outcomes.

Mediators and Mechanisms in Implementation Science

A mediator (M) is an intervening variable hypothesized to transmit the effect of an independent variable X on a dependent variable Y via an implied causal chain (MacKinnon, 2008). Evidence for the hypothesized causal pathway is provided by demonstrating that X influences M and that all or part of X’s total effect on Y is indirect, or mediated, through M (MacKinnon et al. 2007). In implementation trials, mediators represent proximal targets believed to influence more distal endpoints such as implementation outcomes (e.g., EBT acceptability, adoption, fidelity, penetration, or sustainment) or clinical outcomes (e.g., reduced symptoms or improved functioning; Proctor et al. 2011). Mediators are distinct from other types of third variable effects such as moderators, confounds, or covariates because they assume a causal sequence in which X precedes and causes M which precedes and causes Y (MacKinnon et al. 2007). Moderators, for example, do not intervene in a causal chain but instead represent variables that change the relationship between X and Y such that the direction or magnitude of the relation depends on the level of the moderator.
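The causal chain described above can be made concrete with a small simulation. The sketch below is purely illustrative: the data, variable names, and effect sizes are hypothetical, and ordinary least squares is used only to show the two regressions that underlie a mediation claim, namely the effect of X on M and the effect of M on Y adjusted for X.

```python
import numpy as np

# Minimal simulated illustration of the X -> M -> Y chain.
# All variables and effect sizes here are hypothetical.
rng = np.random.default_rng(0)
n = 5000
x = rng.binomial(1, 0.5, n).astype(float)    # implementation strategy vs. control
m = 0.5 * x + rng.normal(0, 1, n)            # X influences the mediator (a path)
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)  # M influences Y, plus a direct effect of X

def ols(predictors, outcome):
    """Least-squares coefficients for outcome regressed on predictors (with intercept)."""
    design = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

a = ols([x], m)[1]           # effect of X on M
b = ols([x, m], y)[2]        # effect of M on Y, adjusted for X
c_prime = ols([x, m], y)[1]  # direct effect of X on Y, adjusted for M
print(a * b, c_prime)        # indirect (mediated) effect vs. direct effect
```

With this setup the product a*b recovers the indirect effect (here about 0.5 × 0.4 = 0.2) while c_prime recovers the direct effect, matching the decomposition described in the text.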

Mediators differ from mechanisms, which invoke a higher level of specificity and describe the precise sequence of operations or underlying causal processes through which an effect occurs (Kraemer et al. 1997). Whereas mediators represent measured variables that explain the statistical relationship between X and Y but might not capture the basis of the observed effect, mechanisms describe the exact series of steps through which the change came about (Kazdin 2007). In implementation science, mechanisms may cross levels in a cascading sequence that links change in a causal antecedent at one level to change in a causal antecedent at another level before influencing the final outcome. Although the identification of mediators leaves many questions unanswered, mediation analysis is often a first step in identifying mechanisms of change (Doss 2004; Kazdin 2007). A sustained program of research that systematically tests mediators within experimental, longitudinal designs can provide robust support for a hypothesized mechanism that contributes to effective EBT implementation and improved clinical outcomes.

Conceptualizing Multilevel Mediators of Implementation Strategies

Theories of EBT implementation invoke antecedents and barriers at multiple system levels including individuals, organizations, communities, and sociopolitical environments (Aarons et al. 2011; Tabak et al. 2012). As a result, multilevel theory, constructs, and analysis are frequently necessary when conceptualizing and testing mediators and mechanisms of implementation strategies in mental health (Kozlowski and Klein 2000). Two types of multilevel theory are particularly salient to mediation models in implementation research. Composition theory provides a theoretical basis for higher-level variables that emerge from processes of coalescence or convergence and are measured through the aggregation of individuals’ responses (Chan 1998; Rousseau 1985). Investigators use composition theory and related validity evidence to support the construct validity of aggregate or compositional variables in organizational and implementation research (LeBreton and Senter 2008; Rousseau 1985). Compositional variables are common in implementation research and include such variables as organizational culture, EBT implementation climate, and organizational readiness for change (Aarons et al. 2011; Williams and Glisson 2014).

Cross-level theory describes how independent variables (X) conceptualized at one level influence dependent variables (Y) conceptualized at a different level (Klein et al. 1994; Kozlowski and Klein 2000; Raudenbush and Bryk 2002). Investigators use these theoretical models to describe cross-level mediation processes in which change in properties at one system level influences properties at another level. For example, a cross-level mediation model may describe how change in higher-level organizational characteristics, such as organizational culture and climate, influences and homogenizes lower-level characteristics, such as clinicians' intentions (motivation) to adopt EBTs or their EBT adoption behavior (Williams 2015). Because the mediation effect in these models crosses levels, multilevel theory must be invoked to explain how the process unfolds, and specialized data analytic procedures such as multilevel modeling or multilevel structural equation modeling (MSEM) must be used for statistical tests of the mediated effect (Bauer et al. 2006; Krull and MacKinnon 2001; Mathieu and Taylor 2007; Preacher et al. 2010).

Although cross-level mediation models most often describe top-down mediation effects in which a higher-level implementation strategy influences a lower-level outcome through mediating variables at the same or lower levels (Mathieu and Taylor 2007), recent developments in MSEM permit investigators to specify and test a wide range of models including those that incorporate bottom-up mediation effects (Preacher et al. 2010). Bottom-up mediation occurs when an implementation strategy targeting a lower level unit (e.g., individual clinicians) influences lower- or upper-level mediators that subsequently shape the emergence of higher-level implementation or clinical outcomes. For example, an individual-level implementation strategy in which clinicians who attend an EBT training identify personal barriers to implementing the EBT and develop contingency plans to address those barriers (i.e., implementation intentions) may be expected to increase clinicians’ individual self-efficacy for EBT implementation (a lower level mediator) and consequently increase the proportion of cases in a clinical team that meet a targeted fidelity benchmark (a higher level, population-related outcome). These types of cross-level mediation models provide a means of examining the potential population-level impacts of implementation strategies that focus on lower levels.

The use of multilevel mediation analysis also permits investigators to posit and test heterogeneous mediation processes that differ across higher-level units, sometimes referred to as moderated mediation (Edwards and Lambert 2007; Preacher et al. 2007). Such models acknowledge the inherent variability in change that typifies real-world practice settings by formally testing the extent to which mediation processes vary across higher-level units and potentially incorporating predictors of this variation (Kenny et al. 2003). For example, clinicians who are nested within agencies but randomly assigned as individuals to receive a motivational implementation strategy (X) prior to EBT training may be expected to exhibit more positive attitudes toward the EBT (M), which may in turn be expected to increase EBT adoption (Y). However, this mediation chain may not apply equally for clinicians in all agencies because of differences in the agencies' cultures or leadership support for EBTs. Through the use of multilevel mediation modeling, investigators can specify and test the extent to which the relationship between the implementation strategy, mediator, and outcome varies across agencies (Bauer et al. 2006).

Rationale for Studying Multilevel Mediators of Implementation Strategies

Incorporating tests of mediation into implementation trials permits investigators to move beyond simple tests of whether the strategy achieved its hypothesized main effects to deeper questions regarding how and why those effects were (or were not) achieved (Judd and Kenny 1981). One reason such questions are important is to advance general scientific theory regarding the causal processes that explain individual, organizational, and system-level change. Although longitudinal studies can establish temporal association between a presumed cause and effect, experimental studies are necessary to assess the malleability of the presumed cause as well as the extent to which manipulation of the cause contributes to subsequent change in the outcome of interest (Kraemer et al. 1997). Tests of mediation in randomized trials provide a framework for such tests. Identifying which of the many theorized antecedents to EBT implementation represent mutable, generalizable, and effective causes is especially important in implementation research because of the cost of implementation RCTs and their highly context-dependent results (Berwick 2008).

A second reason to incorporate mediation analyses into RCTs of implementation strategies is to expedite the development of more effective, efficient, and feasible strategies (Chen 1990; Doss 2004; Kazdin 2007). Recent reviews indicate nearly half of the implementation strategies tested in mental health to date produce no discernible effects on any targeted implementation, services, or clinical outcome (Powell et al. 2014) and the most promising strategies are also the most resource-intensive, expensive, and consequently the least feasible (Novins et al. 2013). Investigators can use mediation analyses in RCTs to better understand why unsuccessful strategies failed, to identify key ingredients of effective strategies, and to refine complex multicomponent strategies to include only their most critical elements. For example, if an implementation strategy fails to influence the targeted implementation or clinical endpoint, mediation analyses can provide clues regarding the extent to which this occurred because of failure to activate the targeted mediating mechanism or failure of the mediating mechanism to influence the outcome. Conversely, if a strategy successfully influences an outcome, mediation analyses provide information regarding the most salient mechanisms of action, thereby informing subsequent strategy development. Once studies have identified change mechanisms that causally facilitate EBT implementation, investigators can develop better, stronger, or different implementation strategies to engage those mechanisms (Kazdin 2007; Williams and Glisson 2014). Although mediation tests are not always informative with respect to an intervention’s causal mechanisms (e.g., competing explanations such as poor timing of measurement may undermine inferences), a mediation approach couched within an experimental framework offers significant cost and feasibility advantages over other methods such as treatment dismantling studies.

A third reason for studying mediators is that an understanding of how and why implementation strategies cause change facilitates their targeted application to other settings, populations, and EBTs. Once investigators understand how an implementation strategy works they can match it to those situations in which it will be most beneficial and avoid applying it in situations where it may be ineffective. This is important because of the resource constraints that characterize most routine care settings and the need to avoid burdening staff with excessive or ineffectual change initiatives.

Approaches to Studying Multilevel Mediators

The study of mediators of implementation strategies in mental health requires specification of the social, psychological, biological, or technical domains believed to influence EBT implementation and incorporation of targets (i.e., mediators) from within those domains into the RCT measurement and design. The aims of the study are to (a) assess whether the implementation strategy engaged the targeted mediator (i.e., X to M), and (b) examine the extent to which engagement of the mediator explained the strategy’s effect on the targeted outcome (i.e., M to Y adjusted for X). Of course, the strength of causal inferences from any mediation study depends on the soundness of its guiding theory, research design and measurement, and data analysis (Mathieu et al. 2008). Ideal design features that support strong causal inference include random assignment of individuals, organizations, or other units to implementation conditions and repeated measurement of mediators and outcomes at theoretically-meaningful points in time. These features help establish a causal timeline from X to M to Y and, in concert with well-articulated theory, enable investigators to build a strong case that the implementation strategy activated the mediator which in turn influenced the implementation or clinical outcome of interest (Mathieu et al. 2008).

Because of the nature of the mediating mechanisms involved in implementation studies, direct manipulation of mediators is rarely feasible. Instead, investigators can develop a strong empirical case for a theorized mediator through a sustained program of research that develops converging lines of evidence (Kazdin 2007). Repeated validation of a mediator in RCTs that incorporate numerous settings, populations, and comparison groups, confirmation of a causal timeline, and the ruling out of competing mediation processes all advance the case for a hypothesized mediator. In addition, evidence for a gradient relationship between M and Y in which increased activation of M contributes to increased change in Y provides further evidence to support the hypothesized mediation process. Mediation “knockout” studies in which the mediator is manipulated but its theorized action is blocked in one condition can provide especially strong evidence to support a hypothesized mediator (Kazdin 2007).

To the extent that an implementation strategy incorporates multiple intervention components or a well-elaborated theoretical basis, investigators must make choices regarding which causal chains and which facets of the causal chains constitute the study’s focus. Idealized scenarios in which investigators randomly assign participants to numerous conditions and test all possible combinations of a strategy’s components (e.g., fractional factorial designs) are typically not feasible given difficulty in securing adequate sample sizes (especially of higher level units) and the resource constraints that characterize service systems. Instead, a useful approach is to target multiple mediating variables in a single condition and test the mediators singly and in combination to develop a more comprehensive mediation model (MacKinnon 2008). Methods for testing simultaneous or serial mediation models are well-developed for single-level models (e.g., Preacher and Hayes 2004; Taylor et al. 2008) and developments in multilevel modeling and multilevel SEM are expanding these methods to multilevel mediation studies as well (Krull and MacKinnon 2001; Pituch et al. 2010; Preacher et al. 2010).

Statistical Mediation Analysis

Data analytic methods for mediation analysis include causal steps approaches such as the frequently cited Baron and Kenny (1986) steps, the difference in coefficients approach, and the product of coefficients approach (MacKinnon et al. 2002). Research on the comparative performance of these methods with respect to Type I error rates, statistical power, and accuracy of confidence interval coverage consistently indicates the product of coefficients method, in combination with computationally intensive asymmetric confidence limits, is preferred over other approaches (Biesanz et al. 2010; Hayes and Scharkow 2013; MacKinnon et al. 2002; MacKinnon et al. 2004). Both the Baron and Kenny (1986) steps and the frequently used Sobel test (Sobel 1982) are statistically underpowered. The Baron and Kenny steps require a significant total effect of X on Y which may have lower power than the tests of X to M and M to Y adjusted for X (MacKinnon et al. 2002). The Sobel test relies on the inaccurate assumption that the sampling distribution of the mediated effect is normal and as a result produces overly conservative tests of significance and poor confidence interval coverage (Hayes and Scharkow 2013; MacKinnon et al. 2004). Causal steps approaches, including the joint significance test, have also been criticized for not directly quantifying the effect of interest in a mediation analysis or providing confidence intervals (MacKinnon 2008).

The product of coefficients approach represents a highly general method that is applicable to both single- and multilevel mediation models (MacKinnon et al. 2002; Pituch et al. 2010). Under this approach, equations are fit to the data that partition X's total effect on Y into direct and indirect (mediated) effects. In the simplest case, the estimate of the mediated effect is calculated as the product of (a) the effect of X on M, and (b) the effect of M on Y, adjusted for X (Krull and MacKinnon 2001; MacKinnon et al. 2007). Several procedures are available for testing the statistical significance of this mediated effect estimate (Biesanz et al. 2010; Hayes and Scharkow 2013; MacKinnon et al. 2002, 2004). The most powerful and accurate approach is to form asymmetric confidence limits around the mediated effect using computationally intensive methods such as bootstrapping, the distribution of the product, or Monte Carlo methods (Hayes and Scharkow 2013; Preacher and Selig 2012). In these applications, the mediated effect is statistically significant at α = .05 if the confidence limits do not span zero (Preacher and Hayes 2004). The metric of the mediated effect is the difference in the dependent variable between treatment and control conditions that is attributable to the mediator; an effect size similar to Cohen's d can be derived by dividing this effect by the standard deviation of the outcome variable (MacKinnon 2008). Complete mediation implies that X has no significant effect on Y independent of its indirect effect through M; this pattern holds when the mediated effect is statistically significant and the direct effect is not (MacKinnon et al. 2007). Partial mediation implies that, in addition to a significant indirect effect through M, X retains a significant direct effect on Y independent of M; this pattern holds when both the mediated effect and the direct effect are statistically significant.
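The product of coefficients test with bootstrap asymmetric confidence limits can be sketched as follows. This is a minimal single-level illustration on simulated data; the variable names and effect sizes are hypothetical and do not come from any of the cited trials.

```python
import numpy as np

# Hedged sketch: product of coefficients with percentile-bootstrap
# asymmetric confidence limits. Data are simulated and illustrative.
rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)  # implementation condition
m = 0.4 * x + rng.normal(0, 1, n)          # targeted mediator
y = 0.5 * m + rng.normal(0, 1, n)          # implementation outcome

def indirect_effect(x, m, y):
    """a*b: (X -> M slope) times (M -> Y slope adjusted for X)."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

# Resample cases with replacement and recompute a*b in each bootstrap sample.
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])  # asymmetric 95% limits
significant = not (lo <= 0 <= hi)          # significant if the CI excludes zero
print(indirect_effect(x, m, y), (lo, hi), significant)
```

Because the sampling distribution of a*b is skewed, the percentile limits are generally not symmetric around the point estimate, which is exactly the property that makes this approach preferable to the normal-theory Sobel test.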

Although the product of coefficients method provides a flexible approach to testing mediation, its use with multilevel research designs requires careful specification and analysis procedures in order to obtain unbiased estimates of the mediated effect and accurate statistical tests (Zhang et al. 2009). Bias occurs in estimating cross-level mediation effects when the analysis fails to address the potentially distinct between-group and within-group regression slopes that link lower-level variables in multilevel models. In a multilevel model, variables at the lowest level (i.e., level 1) consist of unique (orthogonal) within-group and between-group variance (Enders 2013; Kreft et al. 1995). As a result, two potentially distinct regression slopes characterize the relation between these variables and these slopes may differ in magnitude or sign (Neuhaus and Kalbfleisch 1998; Zhang et al. 2009). Mediation procedures that do not tease apart these potentially different slopes conflate them into a single slope that produces a biased estimate of the relationship between the two variables and consequently a biased estimate of the mediated effect in 2–1–1 and 1–1–1 mediation models (Preacher et al. 2010; Zhang et al. 2009).

Conceptually, a higher-level antecedent (or the between-group variance of a lower-level variable) can only influence the between-group variance of a lower-level consequent (Enders 2013; Zhang et al. 2009). In multilevel mediation models, this implies that the effect of a higher level implementation strategy on a lower level outcome can only be transmitted through the between-group variance of a lower-level mediator (Zhang et al. 2009). If the slope relating the lower-level mediator and the lower-level outcome conflates the between-group and within-group relationships between these two variables, the mediated effect is biased. Procedures for addressing this issue include the centered within context with means reintroduced approach explained by Zhang et al. (2009), multilevel SEM (Preacher et al. 2010), and specialized procedures for completely lower-level (1–1–1) mediation models (Bauer et al. 2006; Kenny et al. 2003).
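The consequence of conflating between-group and within-group slopes can be demonstrated with simulated clustered data. The sketch below, loosely patterned on the centering-with-means-reintroduced logic attributed to Zhang et al. (2009), uses hypothetical agencies and effect sizes; it shows that a single conflated slope falls between two genuinely different between- and within-group slopes, whereas entering the group means and group-mean-centered deviations as separate predictors recovers both.

```python
import numpy as np

# Illustrative sketch: separating between-group and within-group slopes.
# Group structure and effect sizes are hypothetical.
rng = np.random.default_rng(2)
groups = np.repeat(np.arange(40), 25)  # 40 agencies, 25 clinicians each
group_effect = rng.normal(0, 1, 40)
m = group_effect[groups] + rng.normal(0, 1, len(groups))  # level-1 mediator
m_bar = np.array([m[groups == g].mean() for g in range(40)])[groups]

# Construct Y so the between-group slope (1.0) differs from the within (0.2).
y = 1.0 * m_bar + 0.2 * (m - m_bar) + rng.normal(0, 1, len(groups))

# Conflated model: one slope mixes the two distinct relationships.
conflated = np.linalg.lstsq(
    np.column_stack([np.ones_like(m), m]), y, rcond=None)[0][1]

# Decomposed model: group means plus group-mean-centered deviations
# recover the separate between- and within-group slopes.
X = np.column_stack([np.ones_like(m), m_bar, m - m_bar])
_, between, within = np.linalg.lstsq(X, y, rcond=None)[0]
print(conflated, between, within)
```

In a 2–1–1 mediation model, only the between component (here, the slope on the group means) can carry the effect of a higher-level implementation strategy, so using the conflated slope would bias the estimated mediated effect.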

Method

Procedure

Study Eligibility Criteria

This study aimed to review RCTs of implementation strategies in mental health that tested specific mediation targets. Studies were included that (a) used a RCT to test an implementation strategy for increasing EBT exploration, adoption, implementation, or sustainment in mental health service settings (Aarons et al. 2011), and (b) included some minimal test of one or more mediation targets. Implementation strategies were defined as any intervention or systematic process at any level designed to increase the adoption and integration of EBTs into routine care (Powell et al. 2014). Definitional criteria for EBTs were intentionally broad and included any clearly defined clinical treatment or psychosocial intervention designed to address mental disorders that incorporated formal guidelines for use and evidence of effectiveness from research. With respect to mediation, the trial had to provide a rationale for the hypothesized mediator and demonstrate temporal precedence of the implementation strategy to the mediator such that a causal effect was plausible. Although initial eligibility criteria included the requirement that the analytic approach test both (a) the relationship between X and M, and (b) the relationship between M and Y adjusted for X, this criterion was abandoned because no such trials were located. Trials that did not randomly assign participants to implementation strategies but instead tested the effects of an EBT on clinical outcomes when implemented under "usual care" conditions were excluded from the review. Studies that sought to increase EBT implementation by modifying the EBT itself were also excluded. Although these studies could potentially measure mediators from the technical domain (e.g., relative advantage, trialability, adaptability), most focused on increasing the accessibility of EBTs via a computerized delivery format and consequently were excluded.

Information Sources, Search, and Study Selection

Eligible trials were identified in three steps. First, studies identified through four previous reviews of implementation strategies in mental health (Barwick et al. 2012; Landsverk et al. 2011; Novins et al. 2013; Powell et al. 2014) were compiled and unique RCTs that reported quantitative outcomes were downloaded for full-text review. Second, with the help of an electronic resources librarian, an electronic keyword search was conducted in PubMed and PsycINFO for implementation trials published in any year up to March 2015. Consistent with earlier reviews, keyword searches focused on combinations of four concepts: (a) dissemination/implementation, (b) EBT, (c) mental health service settings, and (d) RCTs. Following prior reviews and in accordance with the inclusive definitions described above, search terms included any psychosocial intervention that was identified as an EBT (e.g., "evidence-based treatment", "empirically supported treatment") or classified as a "best practice", innovation ("innovate*"), promising ("promis*"), recommended ("recommend*") or associated with a guideline ("guideline*"). Based on a review of the resulting 1309 unique articles, a total of 72 studies that potentially met inclusion criteria were downloaded for full-text review. A total of 1237 studies were excluded during this screening step due to (a) failure to report on an RCT, (b) failure to report quantitative outcomes, (c) failure to report on an implementation strategy or include a comparison condition for the implementation strategy, or (d) exclusive incorporation of a computerized delivery system. Combined with the trials identified through the four prior reviews, this resulted in 88 unique trials. Third, eligibility for the second criterion (i.e., some minimal test of mediation) was assessed in each of the 88 full-text articles via a two-step process.
In the first step, the article’s abstract was examined for evidence that the trial tested mediation or assessed any type of explanatory variable relating the implementation strategy to implementation or clinical outcomes. In the second step, each article full text was searched using electronic keyword search functions to locate key words in the article related to mediation (i.e., “mechanism”, “mediat”, “interven”, “dose”). All trials that included any test of the X to M, M to Y, or M to Y adjusted for X relationships were retained in the review. A total of 79 studies were excluded for failing to test an intervening variable effect, yielding a total of nine separate trials that met study inclusion criteria.

Results

Characterization of Trials

Table 1 describes the nine RCTs of implementation strategies for mental health EBTs that included some minimal examination of mediators. Inspection of the table reveals several important observations regarding the nature of the trials. First, despite the minimally stringent criteria employed in this review, very few RCTs of implementation strategies in mental health settings incorporated tests of mediation. Of the 88 unique trials, only 9 (10 %) employed quantitative analyses that examined potential mediators. Furthermore, zero studies met minimum criteria necessary for testing the mediation hypothesis (i.e., a test of X to M and M to Y adjusted for X). Although the review may have missed some studies, these findings suggest consideration of mediators is rare in mental health implementation trials, particularly relative to the number of theories enumerating hypothesized implementation antecedents and the number of empirical studies identifying implementation barriers and facilitators. Incorporation of such tests therefore represents a significant opportunity for increasing the scientific value of implementation trials in mental health research.

Table 1.

Summary of randomized controlled trials testing mediators of implementation strategies in mental health

Kauth et al. (2010)
  Sample: k = 20 VA primary care and mental health clinics; n = 23 clinicians
  EBT: CBT
  Implementation strategy: (1) External facilitation + workshop; (2) Workshop only
  Mediators: (1) Job-related barriers; (2) # of contacts and time spent in facilitation
  Main effects analyses: ANOVA
  Outcomes: IO: % of clinical time spent conducting CBT
  Mediation analyses: X to M: t test comparing barriers in facilitation versus control at follow-up. M to Y: correlation between time in facilitation and change in EBT use.
  Results: Facilitation appeared to increase CBT use but had no effect on therapists’ reports of barriers to EBT adoption; # of facilitation contacts and time spent in facilitation were not correlated with change in EBT adoption.

Baer et al. (2009)
  Sample: k = 6 community substance abuse treatment agencies; n = 144 clinicians
  EBT: Motivational interviewing (MI)
  Implementation strategy: (1) Context-tailored training (CTT) customized to sites and focused on increasing organizational support; (2) Two-day workshop
  Mediators: (1) Agency supportive practices for MI use
  Main effects analyses: 3-level HLM
  Outcomes: IOs: (1) MI fidelity (rater-coded, standardized interview); (2) Reflection-to-question ratio (rater-coded, standardized interview)
  Mediation analyses: X to M and M to Y controlling for X; product of coefficients method based on Krull and MacKinnon (2001) to test the mediated effect (unrelated to treatment condition).
  Results: CTT did not impact IOs or agency supportive practices; regardless of condition, baseline organizational climate for change predicted increased fidelity on both IOs; increased agency supportive practices mediated climate’s effects on both MI fidelity indicators.

Holth et al. (2011)
  Sample: k = 8 MST teams in the Norwegian Ministry of Child & Family Affairs; n = 21 therapists; n = 41 youth
  EBT: Contingency management (CM) and CBT for SA
  Implementation strategy: (1) Intensive quality assurance + two-day workshop + manual (IQA); (2) Two-day workshop + manual (WSO)
  Mediators: (1) Therapist adherence to CM/CBT
  Main effects analyses: 3-level mixed models
  Outcomes: CO: Youths’ cannabis use as assessed by drug screens and self-report
  Mediation analyses: X to M: 3-level mixed effects growth models. M to Y (not controlling for X): mixed effects models.
  Results: IQA increased fidelity to CBT relative to WSO; no difference on CM fidelity (both groups increased); increased CBT and CM fidelity predicted decreased cannabis use; did not test the mediation model; IQA unrelated to youths’ cannabis use.

Williams et al. (2014)
  Sample: k = 92 community health and behavioral health centers; n = 311 directors and staff
  EBT: Motivational interviewing
  Implementation strategy: (1) Six MI implementation webinars with an MI expert + information packets; (2) Information packets only
  Mediators: (1) Attitudes toward EBPs; (2) Pressure for change; (3) Barriers to EBPs; (4) Resources; (5) Organizational climate; (6) Management support
  Main effects analyses: 3-level HLM
  Outcomes: IO: Self-reported stage of MI adoption
  Mediation analyses: X to M: 3-level HLMs examined change in mediators over time.
  Results: Webinar condition exhibited higher adoption and slower improvement in attitudes toward EBPs; all other X to M relations non-significant.

Lochman et al. (2009)
  Sample: k = 57 public elementary schools in Alabama; n = 49 counselors; n = 531 youth
  EBT: Coping Power Program for youth aggression
  Implementation strategy: (1) Intensive training and feedback; (2) Basic training only; (3) No-intervention control group
  Mediators: (1) # of sessions attended; (2) # of objectives completed; (3) # of contacts with trainers; (4) Counselor engagement with clients
  Main effects analyses: 2-level HLMs
  Outcomes: COs: (1) Externalizing behaviors; (2) Social skills; (3) Study skills; (4) Expectancies re: aggression; (5) Consistent parenting; (6) Assaultive acts
  Mediation analyses: X to M: 2-level HLMs with children nested in counselors.
  Results: The only IO significantly affected by intensive training (relative to comparison groups) was counselor engagement in the child groups; however, it was not tested as a mediator; intensive training improved clinical outcomes.

Garner et al. (2011)
  Sample: k = 29 outpatient substance abuse service agencies for youth; n = 95 therapists
  EBT: Adolescent community reinforcement approach; or Assertive Continuing Care
  Implementation strategy: (1) Pay for performance (P4P) + training; (2) Training only
  Mediators: (1) Attitudes re: two quality targets; (2) Subjective norms re: two quality targets; (3) Perceived control over two quality targets
  Main effects analyses: 2-level HLMs
  Outcomes: IOs: clinicians’ intentions to (1) demonstrate competence on randomly selected tapes each month, and (2) deliver a targeted threshold of treatment
  Mediation analyses: M to Y controlling for X: added mediator variables to the main effects analyses. Tested bivariate M to Y relationships not controlling for X.
  Results: P4P increased intentions for both IOs; attitudes predicted higher competence intentions in bivariate analyses but not after controlling for P4P; attitudes and subjective norms predicted higher threshold intentions in bivariate analyses but not when entered with P4P.

Glisson et al. (2010)
  Sample: k = 14 rural Appalachian counties; n = 615 youth court-ordered into mental health treatment
  EBT: MST
  Implementation strategy: (1) ARC + MST quality assurance process; (2) MST quality assurance process only
  Mediators: (1) MST therapist fidelity; (2) MST supervisor fidelity
  Main effects analyses: 2- and 3-level HLMs
  Outcomes: COs: (1) Rate of change in CBCL; (2) Out-of-home placements
  Mediation analyses: X to M: 2- and 3-level HLMs with youths nested in therapists (included only youths assigned to MST in ARC and non-ARC counties).
  Results: ARC had no significant effects on therapist or supervisor fidelity; ARC + MST condition improved youth CBCL change trajectories; ARC condition reduced out-of-home placements.

Atkins et al. (2008)
  Sample: k = 10 low-income, urban elementary schools; n = 115 teachers
  EBT: ADHD best practices for the classroom setting
  Implementation strategy: (1) Trained key opinion leader teachers (KOL) + mental health professional (MHP) consults + workshop; (2) MHP consults + workshop
  Mediators: (1) KOL support; (2) MHP support
  Main effects analyses: 2-level mixed effects regression analysis
  Outcomes: IO: Teachers’ self-reported adoption of 11 EBT strategies during the preceding month
  Mediation analyses: M to Y controlling for X: added mediator variables simultaneously to the main effects analysis.
  Results: KOL condition increased EBT adoption; increased KOL support, but not MHP support, predicted greater EBT adoption controlling for KOL condition, and rendered the condition effect non-significant.

Rohrbach et al. (1993)
  Sample: k = 4 school districts; n = 60 elementary school teachers; n = 25 elementary school principals; n = 1147 students
  EBT: Adolescent Alcohol Prevention Trial curriculum
  Implementation strategy: (1) Intensive teacher training (full-day, theory-guided workshop); (2) Brief teacher training (2-hr meeting to describe curriculum); (3) Principal support (encouraged to support implementation); (4) No principal support
  Mediators: For IOs: (1) Teacher self-efficacy; (2) Teacher preparedness; (3) Teacher enthusiasm; (4) Principal support; (5) Principal’s beliefs re: intervention. For COs: (1) # of lessons; (2) Teacher fidelity
  Main effects analyses: Mixed models analyses
  Outcomes: IOs: (1) Initial implementation (% lessons); (2) Maintenance (% lessons); (3) Observer-rated fidelity. COs: (1) Resistance skills; (2) SA intentions; (3) SA norms; (4) SA attitudes; (5) Program knowledge; (6) Program acceptance
  Mediation analyses: X to M and M to Y analyses: mixed effects models.
  Results: Intensive training had no effect on initial implementation, teacher fidelity, self-efficacy, enthusiasm, or preparedness; the principal intervention increased initial implementation but not principal encouragement or principals’ beliefs; higher teacher fidelity predicted greater student resistance skills, program knowledge, and program acceptance.

IO = implementation outcome; CO = clinical outcome

Second, the wide range of EBTs and settings sampled in these RCTs reflects the broad-based interest in, and need for, implementation strategies in mental health, as well as the importance of identifying generalizable mechanisms of change. Settings sampled included Veterans Affairs clinics, elementary schools, outpatient substance abuse treatment centers, children’s outpatient specialty mental health clinics, and court-ordered home- and community-based services, among others. Only one EBT, motivational interviewing, was examined in multiple trials. The remaining trials focused on a range of clinical interventions, including school-based procedures for managing children’s symptoms of attention deficit/hyperactivity disorder, cognitive behavioral psychotherapy, multisystemic therapy, and contingency management for substance abuse. This heterogeneity raises a central question: do a small number of generalizable mechanisms contribute to change in practice routines and behaviors across all settings and EBTs, or do different populations of settings and EBTs require unique antecedents to facilitate their adoption and integration? Such questions can only be answered by systematically incorporating tests of mediation into RCTs and comparing findings across studies.

Third, multilevel designs and analyses represent the normative approach to testing implementation strategies in mental health. All nine trials in this review incorporated a multilevel sampling design, and all but one relied on multilevel or mixed effects analyses to model the dependence of observations nested within individuals (i.e., over time), within larger contexts (e.g., agencies), or both. Random assignment typically occurred at the unit level (e.g., schools, clinics) or at multiple levels (e.g., counties and youth; school districts and schools). The emphasis on multilevel sampling and design is not surprising given the multilevel contextual factors believed to influence EBT implementation and the interest in studying change in outcomes over time. Assuming adequate sampling, randomization by site or unit permits investigators to assess the influence of contextual variables either as controls or as active factors. Furthermore, unit-level randomization minimizes threats to internal validity, such as treatment diffusion, which may occur in naturalistic treatment settings.

Fourth, the RCTs incorporated in this review addressed both types of mediation processes relevant to implementation research in mental health services—those incorporating implementation outcomes as the endpoint and those incorporating clinical outcomes as the endpoint. Four of the studies (44 %) tested implementation outcomes (e.g., EBT fidelity) as mediators of clinical outcomes. These trials implicitly acknowledged the ultimate purpose of EBT implementation—to improve the clinical outcomes of routine care—by directly testing the hypothesis that improved EBT implementation contributes to increased clinical effectiveness (Proctor et al. 2011). The remaining studies focused on mediation processes linking implementation strategies to practice-related behavior change such as increased EBT adoption, fidelity, or sustainment.

Multilevel Mediators of Implementation Strategies in Mental Health

Given the small number of trials and the failure of any trial to conduct the minimum statistical tests necessary to evaluate mediation, firm conclusions regarding the empirical status of proposed mediators of implementation strategies in mental health are not possible. However, the nascent database suggests several useful hypotheses for future research. This section summarizes the mediation findings in two parts. The first discusses results from the six trials that tested mediators of implementation strategies’ effects on implementation outcomes. Results from these trials are organized using three domains from the Consolidated Framework for Implementation Research (Damschroder et al. 2009): characteristics of the inner setting, characteristics of individuals, and features of the implementation process. The second summarizes results from the four trials that examined implementation strategies’ effects on clinical outcomes and tested implementation outcomes as mediators of these effects (one trial examined mediators of both implementation and clinical outcomes).

Mediators of Implementation Outcomes

Table 2 details the relations between implementation strategies, candidate mediators, and implementation outcomes reported in the six eligible trials. Four studies tested characteristics of organizations’ inner setting as potential mediators of implementation outcomes (Baer et al. 2009; Kauth et al. 2010; Rohrbach et al. 1993; Williams et al. 2014). Candidate mediators from the inner setting domain included leadership support, job-related barriers, organizational climate, agency supportive practices, and resources. Two of these mediators (leadership support and job-related barriers) were tested in two independent trials; the remainder were tested in only a single trial. None of the trials supported the mediational role of any construct from the inner setting. The most common reason was that the implementation strategy failed to influence the mediator: across seven unique tests, none of the implementation strategies successfully manipulated mediators from the inner setting domain. Two of the inner setting mediators (leadership support and agency supportive practices) were significantly predictive of implementation outcomes in a bivariate sense (i.e., when not controlling for the implementation strategy), suggesting these constructs may have value for influencing implementation even though they were not activated by the strategies in these trials. Only one mediator from this group was tested as a predictor of implementation outcomes in a model that included the implementation strategy: agency supportive practices significantly predicted implementation fidelity in a trial of motivational interviewing even after controlling for the effect of the implementation strategy; however, because the strategy did not influence supportive practices, they did not function as a mediator (Baer et al. 2009).

Table 2.

Relations between implementation strategies, candidate mediators, and implementation outcomes in RCTs

Mediator | Study | Successfully manipulated? (X to M) | Bivariate relation with IO? (M to Y) | Related to IO after controlling for strategy? (M to Y, controlling for X)

Characteristics of the inner setting
  Leadership support | Rohrbach et al. (1993) | No | Yes | -
  Leadership support | Williams et al. (2014) | No | - | -
  Job-related barriers | Kauth et al. (2010) | No | No | -
  Job-related barriers | Williams et al. (2014) | No | - | -
  Organizational climate | Williams et al. (2014) | No | - | -
  Agency supportive practices | Baer et al. (2009) | No | Yes | Yes
  Resources | Williams et al. (2014) | No | - | -

Characteristics of individuals
  Attitudes | Garner et al. (2011) | - | Yes | No
  Attitudes | Rohrbach et al. (1993) | No | Yes | -
  Attitudes | Williams et al. (2014) | No | - | -
  Readiness to change | Rohrbach et al. (1993) | No | Yes | -
  Readiness to change | Williams et al. (2014) | No | - | -
  Self-efficacy | Rohrbach et al. (1993) | No | Yes | -
  Perceived behavioral control | Garner et al. (2011) | - | No | No
  Subjective norm | Garner et al. (2011) | - | Yes | No

Implementation process
  External change agent contact/support | Atkins et al. (2008) | - | No | No
  External change agent contact/support | Kauth et al. (2010) | - | No | -
  Key opinion leader support (peer) | Atkins et al. (2008) | - | Yes | Yes

(-) indicates the relationship was not tested

Three studies tested characteristics of individuals as potential mediators of implementation strategies’ effects on implementation outcomes (Garner et al. 2011; Rohrbach et al. 1993; Williams et al. 2014). Candidate mediators in this group included mental health service providers’ attitudes toward the EBT (or EBTs more broadly), readiness to change, self-efficacy, subjective norm, and perceived behavioral control. Attitudes was the most frequently addressed construct, tested in three separate trials. Variables reflecting readiness to change were tested in two trials; the remaining constructs were tested only once. None of these five mediators was successfully manipulated by an implementation strategy (one study did not report this relationship). Four of the five mediators (attitudes, readiness to change, self-efficacy, and subjective norm) were significantly related to implementation outcomes in models that did not control for the implementation strategy. Only one study tested individual characteristics as predictors of implementation outcomes while controlling for the experimental manipulation. Garner et al. (2011) entered attitudes, perceived control, and subjective norm simultaneously as predictors of implementation outcomes along with the experimental condition variable. These variables were unrelated to implementation outcomes in this model (the p value for attitudes was .055, raising the possibility that a different analytic approach might have produced different results). However, the simultaneous-entry design made it impossible to assess the separate role of each of the three candidate mediators on its own.

Two studies tested features of the implementation process as mediators of implementation outcomes (Atkins et al. 2008; Kauth et al. 2010). Both focused on the role of internal or external change agents who served as potential catalysts for EBT implementation. Although neither study provided sufficient information to fully test the mediation hypothesis, this group includes the one variable, peer key opinion leader support, that may have served as a mediator of an implementation strategy’s effects on implementation outcomes. Drawing on diffusion of innovations theory, Atkins et al. (2008) showed that the significant effect of an implementation strategy that trained teacher key opinion leaders was reduced to non-significance once a variable representing opinion leader support was included in the model. However, because the study did not report the effect of the implementation strategy on opinion leader support, the mediation hypothesis cannot be fully assessed.

Mediators of Clinical Outcomes

Table 3 presents results from the four implementation trials that examined implementation outcomes as mediators of clinical outcomes (Glisson et al. 2010; Holth et al. 2011; Lochman et al. 2009; Rohrbach et al. 1993). Despite the small sample of studies, two important observations emerge. First, two of the three trials that experimentally tested the impact of implementation strategies on clinical outcomes provided evidence to support this relationship; moreover, the fourth trial showed that increased fidelity was significantly correlated with improved clinical outcomes. These results support the hypothesis that implementation strategies can be used to improve clinical outcomes in real-world service systems.

Table 3.

Relations between implementation strategies, implementation outcomes, and clinical outcomes in RCTs

Study | Did the strategy improve clinical outcomes? (X to Y) | Implementation outcome tested as potential mediator (M) | Was M successfully manipulated? (X to M) | Was M correlated with COs? (M to Y) | Was M related to COs, controlling for X? (M to Y, controlling for X)

Glisson et al. (2010) | Yes | Therapist fidelity | No | - | -
Glisson et al. (2010) | Yes | Supervisor fidelity | No | - | -
Lochman et al. (2009) | Yes | # of sessions attended | No | - | -
Lochman et al. (2009) | Yes | # of intervention objectives completed | No | - | -
Lochman et al. (2009) | Yes | Counselor engagement with clients | Yes | - | -
Holth et al. (2011) | No | Therapist fidelity to CBT | Yes | Yes | -
Holth et al. (2011) | No | Therapist fidelity to CM | No | Yes | -
Rohrbach et al. (1993) | Not tested | Teacher fidelity | No | Yes | -
Rohrbach et al. (1993) | Not tested | # of sessions delivered | Yes | - | -

(-) indicates the relationship was not tested

CO = clinical outcome

However, enthusiasm for this finding is tempered by the second observation: none of the trials provided evidence that improved EBT implementation mediated the effect of an implementation strategy on clinical outcomes. The primary reason was that most of the strategies failed to influence implementation outcomes. Only 3 of the 9 (33 %) implementation outcomes tested in the four trials were positively influenced by the implementation strategy, and none was tested as a mediator. Taken together, findings from these four trials suggest implementation strategies contribute to improved mental health treatment outcomes for youth; however, much work remains to determine how and why these effects occur.

Discussion

The nine trials identified by this review are notable in that they represent early attempts to integrate implementation theory, research, and practice by examining how and why implementation strategies contribute to change in mental health services. Although all of these trials provide useful information for illuminating change mechanisms, in many cases additional value could have been generated through more accurate and powerful approaches to conceptualizing and testing multilevel mediators. Many studies were characterized by suboptimal tests of mediation. Common problems included simply correlating change in M with change in Y, or asserting that a decrease in the statistical significance of the X to Y relationship after inclusion of the mediator in the statistical model constituted sufficient evidence of mediation. Although consistent with mediation, these analyses are not sufficient, alone or in combination, to support a mediation effect. Determining the extent to which X’s effect on Y is mediated requires a statistical test of the mediated (indirect) effect itself.
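To make this distinction concrete, the sketch below uses simulated, single-level data with illustrative variable names (the reviewed trials would require multilevel extensions such as the Krull and MacKinnon product-of-coefficients approach) to show a direct test of the mediated effect a*b with a percentile bootstrap confidence interval, rather than merely correlating M with Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated trial: X = implementation strategy (0/1),
# M = candidate mediator, Y = implementation outcome.
X = rng.integers(0, 2, n).astype(float)
M = 0.5 * X + rng.normal(0, 1, n)            # true a = 0.5
Y = 0.4 * M + 0.2 * X + rng.normal(0, 1, n)  # true b = 0.4, direct effect c' = 0.2

def ols(y, *preds):
    """Slope coefficients from an OLS fit with an intercept."""
    Z = np.column_stack([np.ones(len(y))] + list(preds))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

def indirect(X, M, Y):
    a = ols(M, X)[0]     # X to M path
    b = ols(Y, M, X)[0]  # M to Y path, adjusted for X
    return a * b         # product-of-coefficients estimate of the mediated effect

# Percentile bootstrap CI for the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that a significant bivariate M–Y correlation, or a drop in the significance of the X–Y coefficient, would not by itself license the conclusion that the interval for a*b excludes zero.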

Another common problem was insufficient theoretical development, operationalization, and analysis of higher-level compositional constructs. Several studies analyzed individuals’ reports of compositional variables (e.g., organizational climate, agency resources, leadership support) at the individual level of analysis with no discussion or theoretical rationale given to justify this choice. Theoretical and empirical work indicates compositional variables have potentially distinct meanings and empirical relationships with outcome variables at different levels of analysis and therefore must be validated and tested in analytic models using appropriate procedures (Klein et al. 1994; LeBreton and Senter 2008; Raudenbush and Bryk 2002). Failure to adequately address these multilevel conceptual and analytic issues detracts from the study’s scientific value by failing to address the roles of these variables at their theorized level; in essence, the level of analysis is not aligned with the level of theory. Of the nine trials, only one provided a clear theoretical rationale for how a compositional variable (i.e., organizational climate) was operationalized in the study design (Baer et al. 2009). This study highlighted important differences in the effects of climate at the individual (i.e., psychological climate) and organizational (i.e., organizational climate) levels thereby underscoring the importance of accurate multilevel model specification and testing.
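For instance, whether individuals’ reports can justifiably be aggregated to the organizational level is typically evaluated with agreement and reliability indices before a compositional construct is analyzed at that level. A minimal sketch (simulated ratings, balanced groups assumed; not an analysis from any of the reviewed trials) of ICC(1) from a one-way random-effects ANOVA:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agencies, n_per = 30, 10

# Simulated climate ratings: clinicians nested in agencies, with true
# between-agency variance 0.5 and within-agency variance 1.0.
agency_means = rng.normal(0, np.sqrt(0.5), n_agencies)
ratings = agency_means[:, None] + rng.normal(0, 1.0, (n_agencies, n_per))

def icc1(groups):
    """ICC(1) from a balanced one-way random-effects ANOVA.

    groups: 2-D array with one row per group (e.g., agency)."""
    g, k = groups.shape
    group_means = groups.mean(axis=1)
    grand_mean = groups.mean()
    msb = k * np.sum((group_means - grand_mean) ** 2) / (g - 1)          # between
    msw = np.sum((groups - group_means[:, None]) ** 2) / (groups.size - g)  # within
    return (msb - msw) / (msb + (k - 1) * msw)

print(f"ICC(1) = {icc1(ratings):.2f}")
```

Here the population value is 0.5 / (0.5 + 1.0), roughly .33; a near-zero ICC(1) would argue against treating the aggregate as an organization-level construct.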

The State of the Science: Multilevel Mediators of Implementation Strategies in Mental Health

Examination of the trials in this review suggests the field has far to go in understanding how and why implementation strategies contribute to change in mental health services. Only one of the six trials that examined mediators of implementation outcomes identified a potential mediation effect. Atkins et al. (2008) found that support from peer key opinion leader teachers potentially mediated the effect of an experimental manipulation on teachers’ adoption of evidence-based classroom behavior management strategies. Although several additional variables representing characteristics of the inner setting and characteristics of the individuals involved in implementation were significantly related to implementation outcomes, these were not influenced by the implementation strategies and therefore did not mediate the strategies’ effects. Clearly, much work remains to be done to understand how and why implementation strategies contribute to improved EBT implementation.

Assuming these numerous failed X to M relations were not the result of Type II errors, two explanations seem likely. First, features of these studies’ designs may have prevented the detection of a true X to M effect. Second, the implementation strategies tested here may not affect implementation outcomes through the mechanisms investigators anticipated. The implications of this second possibility are twofold. First, strategies that do activate the candidate mediators in this review may in fact lead to change in implementation outcomes; future research should focus on developing implementation strategies with strong theoretical links to these constructs. Second, additional theoretical development and testing are needed to discern which mechanisms accounted for the changes produced by the strategies in these trials. Given the early developmental stage of this line of research, efforts in both directions seem likely to yield fruitful information.

A second issue addressed by this review involved the extent to which implementation outcomes such as EBT fidelity mediated the effects of implementation strategies on clinical outcomes. Three of the four trials that examined this relationship demonstrated a positive association between the implementation strategy and improved clinical outcomes; however, none of the studies supported the hypothesis that improved EBT implementation mediated this relationship. In all four trials the implementation strategies failed to influence at least half of the EBT fidelity indicators, and the few variables that were positively influenced were not tested as mediators. Potential explanations for this pattern of results include Type I or Type II errors, insufficient measurement and design, or an incorrect theoretical model linking the implementation strategies to clinical outcomes. The pattern is inconsistent with conceptual models that depict a linear relationship among implementation strategies, implementation outcomes, and clinical outcomes (Proctor et al. 2011) and suggests that implementation strategies may activate other factors that contribute to improved client well-being. Theoretical and empirical work on service system characteristics such as organizational culture and climate may be relevant for explaining how implementation strategies influence clinical outcomes if not through some dimension of increased EBT fidelity (Glisson and Williams 2015).

Directions for Future Research

The nine trials identified by this review highlight several directions for future research. First, it is clear that more trials are needed to test multilevel mediators of implementation strategies in mental health. A significant priority in this regard is expanding the types of mediators tested. Key theory-informed constructs such as organizational culture (Williams and Glisson 2014), strategic organizational climate (Aarons et al. 2014), clinicians’ behavioral intentions (Godin et al. 2008), and a host of additional characteristics of the inner setting, outer setting, and individuals involved in the EBT implementation process have yet to be tested as mediators (Damschroder et al. 2009; Greenhalgh et al. 2004).

Second, stronger theoretical links between implementation strategies and their hypothesized mediators must be developed and incorporated into trials. The failure of several trials to engage hypothesized mediators suggests investigators may have given insufficient attention to building implementation strategies around theorized processes of change. The development of more effective and efficient implementation strategies requires strong theoretical alignment between the change processes incorporated into the implementation strategy, the change mechanisms believed to cause change in the outcomes of interest, and the endpoint outcomes themselves.

Third, implementation scientists in mental health need to improve the design and analysis of multilevel mediation models in randomized trials. Although multilevel sampling, design, and data analysis were the norm in these nine studies, insufficient specification of multilevel theory and inadequate analysis of multilevel constructs were common. Increased care is needed to explicate and align each study’s level of theory, constructs, and analysis in order to avoid fallacies of the wrong level (Klein et al. 1994).
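One concrete way to keep the level of analysis aligned with the level of theory (an illustrative sketch on simulated data, not an analysis from any reviewed trial) is to decompose an individual-level predictor into a between-group component (group means) and a group-mean-centered within-group component, so that each slope is estimated at its theorized level rather than conflated in a single individual-level coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_per = 40, 8

# Simulated data in which the between-agency effect of a climate rating on an
# outcome (0.8) differs from the within-agency effect (0.2).
group_climate = rng.normal(0, 1, n_groups)
x = group_climate[:, None] + rng.normal(0, 1, (n_groups, n_per))
y = (0.8 * group_climate[:, None]
     + 0.2 * (x - group_climate[:, None])
     + rng.normal(0, 1, (n_groups, n_per)))

# Decompose x into group means (between) and group-mean-centered scores (within).
x_bar = x.mean(axis=1, keepdims=True)
x_between = np.broadcast_to(x_bar, x.shape)
x_within = x - x_bar

# Pooled OLS with separate between- and within-group slopes.
Z = np.column_stack([np.ones(x.size), x_between.ravel(), x_within.ravel()])
beta, *_ = np.linalg.lstsq(Z, y.ravel(), rcond=None)
print(f"between-group slope = {beta[1]:.2f}, within-group slope = {beta[2]:.2f}")
```

An analysis that regressed y on raw x at the individual level would blend these two slopes, which is precisely the fallacy of the wrong level that careful multilevel specification avoids.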

Fourth, as investigators move toward identifying change mechanisms, a critical next step will entail examination of the specific change processes or unique intervention components of implementation strategies that contribute most to improvement in the active change mechanisms (Doss 2004; Michie et al. 2011). Such studies promise to inform the development of optimally efficient, effective, and feasible implementation strategies in mental health.

Conclusion

The development of mental health service systems that produce optimal clinical outcomes through the ongoing adoption and integration of EBTs into routine care represents a complex endeavor shaped by numerous factors at multiple system levels. This article argues that the science of EBT implementation in mental health can be most effectively advanced through the integration of theories and research on implementation antecedents and the testing of these antecedents as multilevel mechanisms in RCTs of implementation strategies. Although the inclusion of mediation tests in implementation trials further complicates an already challenging enterprise, this article argues that the scientific yield is well worth the inconvenience. By integrating implementation theory, research, and practice, investigators can develop more effective and efficient implementation strategies that improve the lives of those who face mental illness and their families.

Acknowledgments

This work was supported by funding from the National Institute of Mental Health under Award Number F31MH099846 to Nathaniel J. Williams. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.

References

  1. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annual Review of Public Health. 2014;35:255–274. doi: 10.1146/annurev-publhealth-032013-182447. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public sectors. Administration and Policy in Mental Health. 2011;38:4–23. doi: 10.1007/s10488-010-0327-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Atkins MS, Frazier SL, Leathers SJ, Graczyk PA, Talbott E, Jakobsons L, et al. Teacher key opinion leaders and mental health consultation in low-income urban schools. Journal of Consulting and Clinical Psychology. 2008;76:905–908. doi: 10.1037/a0013036.
  4. Baer JS, Wells EA, Rosengren DB, Hartzler B, Beadnell B, Dunn C. Agency context and tailored training in technology transfer: A pilot evaluation of motivational interviewing training for community counselors. Journal of Substance Abuse Treatment. 2009;37:191–202. doi: 10.1016/j.jsat.2009.01.003.
  5. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51:1173–1182. doi: 10.1037//0022-3514.51.6.1173.
  6. Barwick MA, Schachter HM, Bennett LM, McGowan J, Ly M, Wilson A, et al. Knowledge translation efforts in child and youth mental health: A systematic review. Journal of Evidence Based Social Work. 2012;9:369–395. doi: 10.1080/15433714.2012.663667.
  7. Bauer DJ, Preacher KJ, Gil KM. Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: New procedures and recommendations. Psychological Methods. 2006;11:142–163. doi: 10.1037/1082-989X.11.2.142.
  8. Berwick DM. The science of improvement. Journal of the American Medical Association. 2008;299:1182–1184. doi: 10.1001/jama.299.10.1182.
  9. Biesanz JC, Falk CF, Savalei V. Assessing mediational models: Testing and interval estimation for indirect effects. Multivariate Behavioral Research. 2010;45:661–701. doi: 10.1080/00273171.2010.498292.
  10. Chan D. Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology. 1998;83:234–246.
  11. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science. 2013;8:22. doi: 10.1186/1748-5908-8-22.
  12. Chen H. Theory-driven evaluations. Newbury Park, CA: Sage; 1990.
  13. Collins PY, Patel V, Joestl SS, March D, Insel TR, Daar AS. Grand challenges in global mental health. Nature. 2011;475:27–30. doi: 10.1038/475027a.
  14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4:50. doi: 10.1186/1748-5908-4-50.
  15. Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science. 2010;5:14. doi: 10.1186/1748-5908-5-14.
  16. Doss BD. Changing the way we study change in psychotherapy. Clinical Psychology: Science and Practice. 2004;11:368–386.
  17. Edwards JR, Lambert LS. Methods for integrating moderation and mediation: A general analytical framework using moderated path analysis. Psychological Methods. 2007;12:1–22. doi: 10.1037/1082-989X.12.1.1.
  18. Enders C. Centering predictors and contextual effects. In: Scott MA, Simonoff JS, Marx BD, editors. The SAGE handbook of multilevel modeling. Thousand Oaks, CA: Sage Publications Inc; 2013. pp. 89–108.
  19. Garland AF, Haine-Schlagel R, Brookman-Frazee L, Baker-Ericzen M, Trask E, Fawley-King K. Improving community-based mental health care for children: Translating knowledge into action. Administration and Policy in Mental Health. 2013;40:6–22. doi: 10.1007/s10488-012-0450-8.
  20. Garner BR, Godley SH, Bair CML. The impact of pay-for-performance on therapists’ intentions to deliver high quality treatment. Journal of Substance Abuse Treatment. 2011;41:97–103. doi: 10.1016/j.jsat.2011.01.012.
  21. Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, Chapman JE. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology. 2010;78:537–550. doi: 10.1037/a0019160.
  22. Glisson C, Williams NJ. Assessing and changing organizational social contexts for effective mental health services. Annual Review of Public Health. 2015;36:507–523. doi: 10.1146/annurev-publhealth-031914-122435.
  23. Godin G, Belanger-Gravel A, Eccles M, Grimshaw J. Healthcare professionals’ intentions and behaviors: A systematic review of studies based on social cognitive theories. Implementation Science. 2008;3:36. doi: 10.1186/1748-5908-3-36.
  24. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: A systematic review and recommendations. Milbank Quarterly. 2004;82:581–629. doi: 10.1111/j.0887-378X.2004.00325.x.
  25. Grol RP, Bosch MC, Hulscher ME, Eccles MP, Wensing M. Planning and studying improvement in patient care: The use of theoretical perspectives. Milbank Quarterly. 2007;85:93–138. doi: 10.1111/j.1468-0009.2007.00478.x.
  26. Hayes AF, Scharkow M. The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science. 2013;24:1918–1927. doi: 10.1177/0956797613480187.
  27. Holth P, Torsheim T, Sheidow AJ, Ogden T, Henggeler SW. Intensive quality assurance of therapist adherence to behavioral interventions for adolescent substance use problems. Journal of Child and Adolescent Substance Abuse. 2011;20:289–313. doi: 10.1080/1067828X.2011.581974.
  28. Insel TR. Translating scientific opportunity into public health impact: A strategic plan for research on mental illness. Archives of General Psychiatry. 2009;66:128–133. doi: 10.1001/archgenpsychiatry.2008.540.
  29. Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. Washington, DC: Author; 2001.
  30. Judd CM, Kenny DA. Process analysis: Estimating mediation in treatment evaluations. Evaluation Review. 1981;5:602–619.
  31. Kauth MR, Sullivan G, Blevins D, Cully JA, Landes RD, Said Q, Teasdale TA. Employing external facilitation to implement cognitive behavioral therapy in VA clinics: A pilot study. Implementation Science. 2010;5:75. doi: 10.1186/1748-5908-5-75.
  32. Kazdin AE. Mediators and mechanisms of change in psychotherapy research. Annual Review of Clinical Psychology. 2007;3:1–27. doi: 10.1146/annurev.clinpsy.3.022806.091432.
  33. Kenny DA, Korchmaros JD, Bolger N. Lower level mediation in multilevel models. Psychological Methods. 2003;8:115–128. doi: 10.1037/1082-989x.8.2.115.
  34. Kessler RC, Aguilar-Gaxiola S, Alonso J, Chatterji S, Lee S, Ormel J, et al. The global burden of mental disorders: An update from the WHO World Mental Health (WMH) Surveys. Epidemiologia e Psichiatria Sociale. 2009;18(1):23–33. doi: 10.1017/s1121189x00001421.
  35. Klein KJ, Dansereau F, Hall RJ. Levels issues in theory development, data collection, and analysis. Academy of Management Review. 1994;19:195–229.
  36. Kozlowski SWJ, Klein KJ. A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent properties. In: Klein KJ, Kozlowski SWJ, editors. Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions. San Francisco: Jossey-Bass; 2000. pp. 3–90.
  37. Kraemer HC, Kazdin AE, Offord DR, Kessler RC, Jensen PS, Kupfer DJ. Coming to terms with the terms of risk. Archives of General Psychiatry. 1997;54:337–343. doi: 10.1001/archpsyc.1997.01830160065009.
  38. Kreft IGG, de Leeuw J, Aiken LS. The effects of different forms of centering in hierarchical linear models. Multivariate Behavioral Research. 1995;30:1–21. doi: 10.1207/s15327906mbr3001_1.
  39. Krull JL, MacKinnon DP. Multilevel modeling of individual and group level mediated effects. Multivariate Behavioral Research. 2001;36:249–277. doi: 10.1207/S15327906MBR3602_06.
  40. Landsverk J, Brown CH, Reutz JR, Palinkas L, Horwitz SM. Design elements in implementation research: A structured review of child welfare and mental health studies. Administration and Policy in Mental Health. 2011;38:54–63. doi: 10.1007/s10488-010-0315-y.
  41. LeBreton JM, Senter JL. Answers to 20 questions about interrater reliability and interrater agreement. Organizational Research Methods. 2008;11:815–852.
  42. Lochman JE, Boxmeyer C, Powell N, Qu L, Wells K, Windle M. Dissemination of the coping power program: Importance of intensity of counselor training. Journal of Consulting and Clinical Psychology. 2009;77:397–409. doi: 10.1037/a0014514.
  43. MacKinnon DP. Introduction to statistical mediation analysis. New York: Lawrence Erlbaum Associates; 2008.
  44. MacKinnon DP, Fairchild AJ, Fritz MS. Mediation analysis. Annual Review of Psychology. 2007;58:593–614. doi: 10.1146/annurev.psych.58.110405.085542.
  45. MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychological Methods. 2002;7:83–104. doi: 10.1037/1082-989x.7.1.83.
  46. MacKinnon DP, Lockwood CM, Williams J. Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research. 2004;39:99–128. doi: 10.1207/s15327906mbr3901_4.
  47. Mathieu JE, DeShon RP, Bergh DD. Mediational inferences in organizational research: Then, now, and beyond. Organizational Research Methods. 2008;11:203–223.
  48. Mathieu JE, Taylor SR. A framework for testing meso-mediational relationships in organizational behavior. Journal of Organizational Behavior. 2007;28:141–172.
  49. McHugh RK, Barlow DH. The reach of evidence-based psychological interventions. In: McHugh RK, Barlow DH, editors. Dissemination and implementation of evidence-based psychological interventions. New York: Oxford University Press; 2012. pp. 3–15.
  50. Michie S, van Stralen MM, West R. The behavior change wheel: A new method for characterizing and designing behavior change interventions. Implementation Science. 2011;6:42. doi: 10.1186/1748-5908-6-42.
  51. Neuhaus JM, Kalbfleisch JD. Between- and within-cluster covariate effects in the analysis of clustered data. Biometrics. 1998;54:638–645.
  52. Novins DK, Green AE, Legha RK, Aarons GA. Dissemination and implementation of evidence-based practices for child and adolescent mental health: A systematic review. Journal of the American Academy of Child and Adolescent Psychiatry. 2013;52:1009–1025. doi: 10.1016/j.jaac.2013.07.012.
  53. Pituch KA, Murphy DL, Tate RL. Three-level models for indirect effects in school- and class-randomized experiments in education. Journal of Experimental Education. 2010;78:60–95.
  54. Powell BJ, Proctor EK, Glass JE. A systematic review of strategies for implementing empirically supported mental health interventions. Research on Social Work Practice. 2014;24:192–212. doi: 10.1177/1049731513505778.
  55. Preacher KJ, Hayes AF. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods. 2008;40:879–891. doi: 10.3758/brm.40.3.879.
  56. Preacher KJ, Rucker DD, Hayes AF. Addressing moderated mediation hypotheses: Theory, methods, and prescriptions. Multivariate Behavioral Research. 2007;42:185–227. doi: 10.1080/00273170701341316.
  57. Preacher KJ, Selig JP. Advantages of Monte Carlo confidence intervals for indirect effects. Communication Methods and Measures. 2012;6:77–98.
  58. Preacher KJ, Zyphur MJ, Zhang Z. A general multilevel SEM framework for assessing multilevel mediation. Psychological Methods. 2010;15:209–233. doi: 10.1037/a0020141.
  59. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
  60. Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Thousand Oaks, CA: Sage Publications; 2002.
  61. Rohrbach LA, Graham JW, Hansen WB. Diffusion of a school-based substance abuse prevention program: Predictors of program implementation. Preventive Medicine. 1993;22:237–260. doi: 10.1006/pmed.1993.1020.
  62. Rousseau DM. Issues of level in organizational research: Multi-level and cross-level perspectives. In: Cummings LL, Staw BM, editors. Research in organizational behavior. Vol. 7. Greenwich, CT: JAI Press; 1985. pp. 1–37.
  63. Saloner B, Carson N, Le Cook B. Episodes of mental health treatment among a nationally representative sample of children and adolescents. Medical Care Research and Review. 2014;71:261–279. doi: 10.1177/1077558713518347.
  64. Sobel ME. Asymptotic confidence intervals for indirect effects in structural equation models. Sociological Methodology. 1982;13:290–312.
  65. Steel Z, Marnane C, Iranpour C, Chey T, Jackson JW, Patel V, Silove D. The global prevalence of common mental disorders: A systematic review and meta-analysis 1980–2013. International Journal of Epidemiology. 2014;43:476–493. doi: 10.1093/ije/dyu038.
  66. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine. 2012;43:337–350. doi: 10.1016/j.amepre.2012.05.024.
  67. Taylor AB, MacKinnon DP, Tein JY. Tests of the three-path mediated effect. Organizational Research Methods. 2008;11:241–269.
  68. Weisz JR, Kuppens S, Eckshtain D, Ugueto AM, Hawley KM, Jensen-Doss A. Performance of evidence-based youth psychotherapies compared with usual clinical care: A multilevel meta-analysis. JAMA Psychiatry. 2013;70:750–761. doi: 10.1001/jamapsychiatry.2013.1176.
  69. Williams NJ. Mechanisms of change in an organizational culture and climate intervention for increasing clinicians’ evidence-based practice adoption in mental health (Unpublished doctoral dissertation). Knoxville: University of Tennessee; 2015. http://trace.tennessee.edu/utk_graddiss/3482.
  70. Williams NJ, Glisson C. The role of organizational culture and climate in the dissemination and implementation of empirically-supported treatments for youth. In: Beidas RS, Kendall PC, editors. Dissemination and implementation of evidence based practices in child and adolescent mental health. New York: Oxford University Press; 2014. pp. 61–81.
  71. Williams JR, Williams WO, Dusablon T, Blais MP, Tregear SJ, Banks D, Hennessy KD. Evaluation of a randomized intervention to increase adoption of comparative effectiveness research by community health organizations. Journal of Behavioral Health Services and Research. 2014;41:308–323. doi: 10.1007/s11414-013-9369-4.
  72. Zhang Z, Zyphur MJ, Preacher KJ. Testing multilevel mediation using hierarchical linear models: Problems and solutions. Organizational Research Methods. 2009;12:695–719.
