Author manuscript; available in PMC: 2020 Oct 1.
Published in final edited form as: Psychiatry Res. 2019 Aug 9;280:112513. doi: 10.1016/j.psychres.2019.112513

An Introduction to Effectiveness-Implementation Hybrid Designs

Sara J Landes a,b,c,*, Sacha A McBain b,c, Geoffrey M Curran b,c,d
PMCID: PMC6779135  NIHMSID: NIHMS1538931  PMID: 31434011

Abstract

The traditional research pipeline that encourages a staged approach to moving an intervention from efficacy trials to the real world can take a long time. To address this issue, hybrid effectiveness-implementation designs were codified to promote examination of both effectiveness and implementation outcomes within a study. There are three types of hybrid designs and they vary based on their primary focus and the amount of emphasis on effectiveness versus implementation outcomes. A type 1 hybrid focuses primarily on the effectiveness outcomes of an intervention while exploring the “implementability” of the intervention. A type 2 hybrid has a dual focus on effectiveness and implementation outcomes; these designs allow for the simultaneous testing or piloting of implementation strategies during an effectiveness trial. A type 3 hybrid focuses primarily on implementation outcomes while also collecting effectiveness outcomes as they relate to uptake or fidelity of the intervention. This paper provides an introduction to these designs and describes each of the three types, design considerations, and examples for each.

Keywords: implementation, hybrid effectiveness-implementation design

1. Introduction

The traditional research pipeline has encouraged a staged approach to evidence-based intervention development that historically has focused on ensuring an intervention works under ideal conditions (i.e., it performs well in efficacy and subsequent effectiveness trials) before considering translation into the real world. As a result, the time lag between development of an evidence-based intervention and its routine uptake in the community can be substantial. The field of implementation science evolved to address this issue, as described in the introduction to this special issue (Bauer et al., 2019). However, even with a field focused on how to get practices implemented in the real world, there is still a delay if we follow the traditional route of efficacy – effectiveness – implementation (see Figure 1).

Figure 1. Traditional research pipeline.

At this point in the maturation of the field, we expect that most agree that efficacy, effectiveness, and implementation research should not be so separate and sequential, and that keeping them separate overlooks complexity and slows the field down. Indeed, many others have supported the notion of blurring across efficacy-effectiveness-implementation silos and/or bridging the practice-based/research-based divide (Brownson et al., 2013; Chambers et al., 2013; Chambers & Norton, 2016; Curran et al., 2012; Glasgow et al., 2003; Green, 2006; Owen et al., 2012; Wells, 1999; Wiltsey Stirman et al., 2013). Here, we argue that the speed of moving research findings into routine adoption can be improved by considering hybrid designs that combine elements of effectiveness and implementation research. We contend that not collecting implementation data, such as information on barriers and facilitators, is a lost opportunity.

Hybrid effectiveness-implementation designs (Curran et al., 2012) offer a potential solution to this problem, as they were codified to promote examination of both effectiveness and implementation outcomes within a study. Broadly, the word “design” refers to the planned set of procedures to 1) select units/participants for study; 2) assign these to or measure their naturally chosen conditions; and 3) evaluate before, during, and after assignment in the conduct of the study. Hybrid design refers to a specific aspect of these procedures pertaining to the relative focus on both effectiveness of the clinical intervention and its implementation, but the type of trial (e.g., stepped wedge, cluster randomized, pilot), which is often referred to as a design, is not necessarily yoked to the type of hybrid. Thus, many types of randomized and non-randomized trial designs can be used in the context of a hybrid depending on the specific aims. In regard to Figure 1, this design would span effectiveness and implementation research in the pipeline. There are three types of hybrid designs, described here briefly and in more detail below. The current manuscript seeks to summarize this type of design and provide examples for those interested in adding a focus on implementation to their work. First, for the sake of clarity, we define here a set of terms. In this paper, we use the term “intervention” to refer to the clinical practice or program that we have an interest in exploring (e.g., a psychotherapy or pharmacotherapy). We use the term “strategy” to refer to the implementation support activities or tools that we have an interest in exploring (e.g., supervision, academic detailing). Both are “interventions” whose effectiveness we are interested in, but using the same term for both can be confusing in the hybrid design context. We will use “effectiveness” only when referring to the clinical intervention outcomes.

The three types of hybrid designs vary based on their primary focus and the amount of emphasis on effectiveness versus implementation outcomes; see Table 1. A type 1 hybrid focuses primarily on the effectiveness outcomes of a clinical or prevention intervention while exploring the “implementability” of the intervention. This allows for identification of what is needed to support implementation in the real world and of the barriers and facilitators to implementation that will inform the selection of appropriate implementation strategies. A type 2 hybrid has a dual focus on effectiveness and implementation outcomes; these designs allow for the simultaneous testing or piloting of implementation strategies during what is otherwise an effectiveness trial. Finally, a type 3 hybrid focuses primarily on implementation outcomes (e.g., testing of implementation strategies) while also collecting effectiveness outcomes as they relate to uptake or fidelity of the intervention.

Table 1.

Types of hybrid designs and the associated research aims.

Hybrid Type 1
  Primary Aim: Determine effectiveness of an intervention
  Secondary Aim: Better understand context for implementation

Hybrid Type 2
  Primary Aim: Determine effectiveness of an intervention
  Co-Primary* Aim: Determine feasibility and/or (potential) impact of an implementation strategy

Hybrid Type 3
  Primary Aim: Determine impact of an implementation strategy
  Secondary Aim: Assess clinical outcomes associated with implementation

* or Secondary Aim

2. Type 1

In a hybrid type 1 design, the primary focus is on testing the clinical intervention. The secondary focus is to explore implementation-related factors. This design could be a traditional effectiveness study plus a “process evaluation” to provide description of the implementation experience (what worked or did not work), identification of how the intervention needs to be adapted for the setting, and/or what is needed to support the people and place implementing the intervention. Generally, this design results in identification of barriers and facilitators to implementation. Implementation-focused data can be collected via interviews, surveys, and/or observation of participants.

A hybrid type 1 design is indicated when the clinical effectiveness evidence remains limited, such that studying implementation alone would be premature. At the same time, effectiveness study conditions offer an ideal opportunity to explore implementation issues and plan implementation strategies for the next stage. For example, Beidas and colleagues (2014) used a hybrid type 1 design to evaluate the effectiveness and safety of a physical activity intervention that had demonstrated efficacy for breast cancer survivors. The study focused on the effectiveness and safety of the intervention when it was implemented in a community-based setting, which provided an opportunity to simultaneously assess barriers to implementation.

Hybrid type 1 designs do not include examination of implementation strategies, as the primary focus is on clinical effectiveness. It should be noted, however, that all effectiveness trials use “implementation strategies” of a sort to support the delivery of the intervention; we simply do not usually call them that. They are normally resource-intensive supports to conduct the trial, such as paying clinics, paying interventionists, paying for care, and conducting frequent fidelity checks with intervention when needed. It is acknowledged that many of the strategies used in effectiveness trials may not be feasible for supporting widespread adoption. However, we can learn about potential implementation challenges from evaluating the delivery of the intervention during the trial (see the Curran et al. (2012) example described below).

2.1. Design considerations.

When designing a hybrid study of any type, it is important to use a framework (e.g., CFIR, PRECIS-2) to guide the focus of the study and reporting of outcomes (e.g., RE-AIM). For additional guidance on implementation theories, models, and frameworks, see Damschroder et al. (2019) in this issue. When examining factors that impact implementation, one should consider barriers and facilitators at multiple levels of an organization, such as consumer, staff/provider, clinic, and organization levels. It is also possible to add implementation components to currently running effectiveness studies. In hybrid type 1 studies it is common for process evaluations to occur towards the end of the effectiveness trial period, so it is possible to design and conduct a relatively brief process evaluation focused on implementation during the trial and implementation needs going forward. This was the approach that Curran et al. (2012; described below in more detail) took when adding on a process evaluation to the large effectiveness trial of the CALM intervention for identifying and treating a range of anxiety disorders in primary care (Roy-Byrne et al., 2010).

The original definition of a type 1 emphasized secondary aims/questions and exploratory data collection and analysis preparatory to implementation activity. However, some type 1 studies place a more intense focus on implementability, developing or adapting an intervention before the effectiveness trial begins by including dissemination and implementation components in the initial study design process (Brownson et al., 2013).

If a small number of sites are participating in the effectiveness trial, one could consider expanding data collection to naïve sites (e.g., clinics not yet delivering the intervention). In this case, the implementation-focused research is conducted in the types of sites or clinics where the intervention would be implemented if proven effective. Using qualitative methodology, researchers can introduce clinic staff to the intervention components and seek feedback from them on its implementation potential in their setting, adaptation needs for better fit in their context, and potential implementation strategies needed to support uptake. For example, Baloh et al. (2019) recently used this methodology in a hybrid type 1 “add-on” process evaluation of an ongoing NIAAA-funded effectiveness trial of Al-Anon Intensive Referral (AIR; NIAAA R01AA024136). In parallel with the randomized controlled trial (RCT) of AIR, this process evaluation consists of conducting ongoing qualitative interviews with key informants (e.g., directors, staff) from substance use disorder programs in the AIR RCT. Preliminary findings indicated different levels of capacity and readiness for implementing AIR and highlighted key barriers (e.g., limited staff time, high staff turnover, the length of AIR being longer than typical client stays) and facilitators (e.g., strong face validity, adaptability, alignment with programs’ values). This evaluation will result in recommendations for adaptations that may need to occur to implement AIR, should it be found effective (Baloh et al., 2019).

2.2. Type 1 Example.

Here we summarize a paper by Curran et al. (2012) as an example of a type 1 hybrid. The study included a large effectiveness trial of an anxiety intervention (CALM; Roy-Byrne et al., 2010) in primary care that spanned four cities and included 17 clinics and 1,004 patients. The intervention required care managers to use a software tool with patients to navigate the treatment manual. The care managers were mostly local nurses and social workers already working in the clinics. The intervention was designed with “future implementation in mind” by using a software tool to guide the clinician, with the intent of assisting with fidelity. The implementation-focused aspect of the trial was a qualitative process evaluation near the close of the effectiveness trial. An interview guide was created and informed by an implementation framework (PARIHS; Kitson et al., 1998); see Table 2. Interviews were conducted with providers, nurses, front office staff, and anxiety care managers (N = 47), and most were conducted by telephone.

Table 2.

Interview guide focused on implementation-related processes (abridged version).

1. What worked and what didn’t work?
2. How did CALM operate in your clinic? Adaptations?
3. How did CALM affect workload, burden, and space?
4. How was CALM received by you and others in your site and how did that change over time?
5. Were there “champions” or “opinion leaders” for CALM and if so, what happened with them?
6. How did the communication between the care manager, the external psychiatrist, and local PCPs work?
7. What outcomes are/were you seeing?
8. What changes should be made to CALM?
9. What are the prospects for CALM being sustained in your clinic and why/why not?

Significant knowledge was gained that illustrates the value of a type 1 hybrid design. Through the process evaluation, the investigators learned that many providers were minimally engaged and did not refer their patients for the trial. The providers who did refer had an existing interest in mental health. The providers who did not refer had not been effectively persuaded during site trainings that the intervention was valuable and worth their participation. The providers’ attitudes towards the intervention resulted in poor implementation outcomes of uptake and reach despite the investigators’ best efforts to engage providers in referring their patients. Without the process evaluation, they may not have identified that a key barrier to future implementation was provider buy-in and engagement, as standard strategies to increase engagement were largely ineffective. If this work had been done sequentially (e.g., complete the effectiveness trial, then conduct implementation-focused research), the investigators would have learned about this barrier later, likely during a needs assessment prior to developing and pilot testing an implementation strategy, or during the pilot study itself, essentially losing time in the move towards implementation of their intervention.

3. Type 2

In a hybrid type 2 design, there is a dual focus on the clinical intervention and implementation-related factors. There is no clear designation of how the focus should be allotted to each aspect (e.g., 50/50, 60/40). In hybrid type 2 designs, it is important to have an explicitly described implementation strategy that is thought to be plausible in the real world. This is a clear distinction from a type 1 design. Another difference is that type 2 designs always need explicit measurement of implementation outcomes (e.g., adoption, fidelity; see Smith et al. (2019) in this issue for more on implementation outcomes). It is important to be clear about the intervention components versus the implementation strategy components. This can be difficult, as the distinction is not always easy to draw or describe. For example, in a study where an evidence-based practice is delivered over the phone, phone delivery is a component of the intervention that will require implementation strategies to support its adoption, rather than the mode of delivery being an implementation strategy in itself.

This type of hybrid could consist of an effectiveness trial paired with an implementation trial. For example, Patterson et al. (2012) simultaneously tested a clinical intervention and the impact of a set of a priori, non-randomized implementation strategies on the intervention’s effectiveness. The first level of the study was a multi-site randomized controlled trial of an efficacious safer-sex intervention for female sex workers to reduce the transmission of HIV. The second level of the study assessed organizational and provider factors that could impact implementation and effectiveness of the intervention. They also actively tested implementation strategies (e.g., fidelity checks, train the trainer model) with the intervention group to assess their impact on fidelity and maintenance of the clinical intervention (Pitpitan et al., 2018). The study utilized both qualitative interviews and quantitative assessment to explore providers’ attitudes and beliefs towards their work, the organization, and the intervention and their impact on program effectiveness. They also utilized qualitative data to culturally adapt the intervention and mixed methods to measure the social influences within the organization that may impact attitudes towards the intervention. By utilizing both qualitative and quantitative methods, the investigators were able to corroborate, compare, and expand their findings and identify barriers to and facilitators of intervention fidelity. To more directly impact fidelity, they utilized a “train-the-trainer” approach that began with an intensive training schedule comprised of in vivo coaching, direct observation, and fidelity checklists completed by the participants and counselors that was gradually tapered over time. Patterson’s design is representative of a hybrid type 2 design because it includes explicit measurement of fidelity to the intervention resulting from the use of pre-selected implementation strategies (Patterson et al., 2012; Pitpitan et al., 2018).

Another version could consist of an effectiveness trial paired with a pilot study of an implementation strategy. For example, Cully et al. (2014) used a hybrid type 2 design to evaluate an intervention that addresses diabetes and depression. Diabetes and depression often co-occur and while effective interventions exist for both, these interventions do not do as well when the diseases co-occur. An intervention to address the co-occurrence was developed based on evidence-based principles and interventions; a pilot trial indicated good results. The aims of the hybrid type 2 trial were to 1) examine the effectiveness of the intervention on diabetes and depression and 2) pilot test an implementation strategy aimed at increasing use and fidelity of the intervention.

A hybrid type 2 design is ideal when studying interventions that already have evidence of effectiveness in other settings or populations, but not in the context or population of the current trial, resulting in uncertainty about whether a similar clinical benefit would be observed (Aarons et al., 2017). For example, there may be data to support the effectiveness of CBT for insomnia in mental health settings, but not yet in primary care settings. A hybrid type 2 design assumes that data on barriers and facilitators to implementation are available. This design is also ideal when there is momentum for implementation in terms of system or policy demands. In recent years, healthcare systems have increasingly implemented interventions with only preliminary effectiveness data because of a government policy or mandate. Hybrid type 2 designs provide an opportunity to capitalize on the momentum of a mandate by continuing to study the intervention’s effectiveness while gathering information about how to successfully implement the intervention. For example, a healthcare organization may want to roll out a method for identifying patients at high risk for suicide given preliminary effectiveness data and a promising way to intervene with a high-risk group. A hybrid type 2 design would allow for continued evaluation of the effectiveness of the intervention and could capitalize on the implementation occurring to evaluate the impact of implementation strategies.

3.1. Design considerations.

A key potential drawback of a type 2 design is that poor adoption and poor fidelity resulting from the implementation strategy can compromise the effectiveness trial. Utilizing implementation strategies with a relevant evidence base, identifying adoption/fidelity benchmarks, building in appropriate measurement and plans to address poor adoption and/or fidelity, and preemptively allotting time to deal with this possibility can reduce the risk of a compromised effectiveness trial. See Kirchner et al. (2019) and Miller et al. (2019) in this issue for more detailed information on implementation strategies and implementation study design, respectively.
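To make the benchmark idea concrete, the following minimal Python sketch (our illustration, not a procedure from any cited study; the site names, thresholds, and counts are hypothetical) flags trial sites whose interim adoption or fidelity falls below pre-specified benchmarks so that corrective implementation support can be triggered before the effectiveness trial is compromised.

# Minimal sketch: interim monitoring of adoption and fidelity against
# pre-specified benchmarks in a hybrid type 2 trial.
# All site names, thresholds, and counts below are hypothetical.
from dataclasses import dataclass

@dataclass
class SiteStatus:
    site_id: str
    eligible_patients: int    # patients who could have received the intervention
    treated_patients: int     # patients who actually received it (adoption)
    rated_sessions: int       # sessions rated for fidelity
    adherent_sessions: int    # rated sessions meeting the fidelity criterion

ADOPTION_BENCHMARK = 0.60    # hypothetical a priori benchmark
FIDELITY_BENCHMARK = 0.80    # hypothetical a priori benchmark

def flag_sites(sites):
    """Return (site, adoption, fidelity) tuples for sites below either benchmark."""
    flagged = []
    for s in sites:
        adoption = s.treated_patients / s.eligible_patients if s.eligible_patients else 0.0
        fidelity = s.adherent_sessions / s.rated_sessions if s.rated_sessions else 0.0
        if adoption < ADOPTION_BENCHMARK or fidelity < FIDELITY_BENCHMARK:
            flagged.append((s.site_id, round(adoption, 2), round(fidelity, 2)))
    return flagged

if __name__ == "__main__":
    sites = [
        SiteStatus("clinic_A", eligible_patients=50, treated_patients=35,
                   rated_sessions=20, adherent_sessions=18),
        SiteStatus("clinic_B", eligible_patients=40, treated_patients=18,
                   rated_sessions=15, adherent_sessions=10),
    ]
    print(flag_sites(sites))  # clinic_B falls below both benchmarks

In practice, the benchmarks, the monitoring interval, and the corrective actions (e.g., additional facilitation or booster training) would be specified a priori in the study protocol.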

3.2. Type 2 Example.

Here we present a paper by Cully et al. (2012) as an example of a type 2 hybrid. This study included a clinical trial of brief cognitive behavioral therapy for treating depression and anxiety and piloting of an implementation strategy. Patients were randomized to treatment condition. Both sites involved in the trial received the same bundle of implementation strategies. To measure effectiveness, Cully et al. conducted an intent-to-treat analysis of clinical outcomes (N = 320). They concurrently collected implementation data including the feasibility and acceptability of the clinical intervention as well as data on the impact of the implementation strategy package (i.e., online clinician training, enhanced referral and scheduling, clinician audit and feedback, and internal and external facilitation). To gather information about the impact of the implementation strategies, Cully et al. measured clinician knowledge acquisition and fidelity to the model. They also conducted qualitative interviews to gather information about the clinicians’ attitudes (e.g., perception of intervention’s importance) that may impact the intervention’s implementability. By utilizing the posttreatment follow-up, Cully et al. were able to measure the provision of the intervention after the trial was completed in order to assess the intervention’s sustainability. By collecting data on the impact of the implementation strategies, Cully et al. (2012) were able to gather information to prepare for an implementation trial of the implementation strategies in which the tested strategies would be randomized and compared across intervention sites. This differentiates the study from a hybrid type 1 design due to the explicit measurement of the strategies’ impact on the intervention’s feasibility and acceptability.

While rarer, it is also possible to engage in a dual-randomized trial, in which randomization occurs at two levels, for example sites to implementation strategies and clients to treatment condition. Garner et al. (2017) provide an example of dual randomization at both the organizational and client level. This study included dual aims of 1) testing the effectiveness of a motivational interviewing-based brief intervention (MIBI) for substance use as an adjunct to usual care (UC) in AIDS service organizations and 2) testing the impact of an implementation strategy, implementation and sustainment facilitation (ISF), as an adjunct to training. To test the effectiveness of the MIBI, Garner et al. used a multisite randomized controlled two-group design (UC vs. UC + MIBI) in which clients were assigned at the individual level to the UC + MIBI condition. To measure the impact of the implementation strategy, ISF, they used a cluster randomized design at the organizational level. In this scenario, whole organizations were randomized to receive the implementation strategy rather than randomizing staff within the same organization.
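As an illustration of how the two levels of randomization in such a dual-randomized design fit together, the Python sketch below (a hypothetical simplification, not the Garner et al. (2017) randomization procedure; the organization and client identifiers and arm labels are invented) cluster-randomizes whole organizations to an implementation strategy arm and then randomizes clients to the clinical intervention arm within every organization.

# Minimal sketch: dual randomization with hypothetical identifiers.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

organizations = ["org_%02d" % i for i in range(1, 9)]
clients = {org: ["%s_client_%02d" % (org, j) for j in range(1, 21)]
           for org in organizations}

# Level 1: cluster randomization of whole organizations to the implementation
# strategy arm (e.g., training only vs. training plus facilitation).
shuffled_orgs = organizations[:]
random.shuffle(shuffled_orgs)
strategy_arm = {org: ("training_plus_facilitation" if k < len(shuffled_orgs) // 2
                      else "training_only")
                for k, org in enumerate(shuffled_orgs)}

# Level 2: client-level randomization to the clinical intervention arm
# (usual care vs. usual care plus the brief intervention), carried out
# within every organization regardless of its strategy arm.
client_arm = {}
for org, ids in clients.items():
    shuffled_clients = ids[:]
    random.shuffle(shuffled_clients)
    for j, cid in enumerate(shuffled_clients):
        client_arm[cid] = "UC_plus_MIBI" if j % 2 == 0 else "UC"

print(strategy_arm["org_01"], client_arm["org_01_client_01"])

The organizational-level randomization supports inference about the implementation strategy, while the client-level randomization supports inference about the clinical intervention; the analysis would need to account for the clustering of clients within organizations.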

4. Type 3

In a hybrid type 3 design, the primary focus is on implementation outcomes. The secondary focus is to observe or gather information on intervention outcomes. A hybrid type 3 is essentially an implementation trial plus an evaluation of patient outcomes. The primary outcomes are implementation outcomes such as adoption, fidelity, and sustainability; see Smith et al. (2019) in this issue or Proctor et al. (2011) for more on implementation outcomes. These designs primarily compare implementation strategies, and when studied in healthcare settings, the strategies usually target provider, clinic, and/or system levels and their impact on implementation outcomes. Depending on the level of patient engagement needed, implementation strategies can also target patients. When random assignment is used in hybrid type 3 studies in healthcare settings, it usually occurs at the provider, clinic, or system level. For example, clinics might be randomized to receive different sets of implementation strategies, perhaps varying in strategy types, targets, and/or intensities, which are then compared on their impact on implementation outcomes. The secondary clinical intervention outcomes are examined observationally, without patient-level randomization, and are related to the level of adoption and fidelity produced by the implementation strategies.

4.1. Design considerations.

Type 3 hybrids are indicated in several situations. When hybrid type 3 designs were first proposed, these studies were indicated when a case could be made that collecting effectiveness data was still warranted, or when there were concerns that the effectiveness outcomes were especially vulnerable to the fidelity of delivery (i.e., it was unclear whether the intervention could be delivered with fidelity in real-world settings). However, with increased recognition of the need to adapt interventions for context and given vast potential contextual differences, we suggest that hybrid type 3 designs should perhaps become more of the “default” option, as opposed to a stand-alone implementation trial. This design option is also appropriate when there is a high-level need or call for implementation despite a limited evidence base (e.g., strong momentum within a healthcare system or a formal mandate). It should be noted here as well that in seeking research funding for a large-scale hybrid type 3 design (e.g., in an NIH R01 mechanism), it is usually expected, as with any other intervention study, that the strategy or strategies under investigation have already demonstrated feasibility in pilot testing.

This type of hybrid works best with easily accessible clinical outcomes (e.g., those that can be passively assessed through the medical record). It is less well suited to outcomes that usually require primary data collection (e.g., patient-reported depression scores). However, investigators may be able to use more readily available data in the electronic medical record as a proxy (e.g., using depression diagnostic data as a proxy for patient-reported depression scores). Further, limited or no pilot data on the implementation strategy investigators intend to use can be a significant barrier when designing a type 3 hybrid study proposal.
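As a simple illustration of the proxy-outcome idea, the Python sketch below (a hypothetical example with an invented record layout; the column names and the small set of ICD-10 codes used to define the proxy are our assumptions, not drawn from any cited study) derives a binary “any depression diagnosis during follow-up” outcome from routinely recorded encounter data.

# Minimal sketch: deriving a proxy clinical outcome from routinely collected
# medical-record data when patient-reported measures are not feasible.
import pandas as pd

# Illustrative (not exhaustive) set of depression diagnosis codes
DEPRESSION_CODES = {"F32.0", "F32.1", "F32.9", "F33.1"}

# Hypothetical extract of encounter-level records from an electronic medical record
encounters = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "encounter_date": pd.to_datetime(
        ["2024-01-05", "2024-03-10", "2024-02-01", "2024-01-20", "2024-04-02"]),
    "icd10_code": ["F32.9", "Z00.00", "F33.1", "I10", "F32.1"],
})

# Proxy outcome: any depression diagnosis recorded during a defined follow-up window
in_window = encounters["encounter_date"] < "2024-04-01"
proxy_outcome = (encounters[in_window]
                 .assign(depression=lambda d: d["icd10_code"].isin(DEPRESSION_CODES))
                 .groupby("patient_id")["depression"]
                 .any())
print(proxy_outcome)  # True for patients 1 and 2; False for patient 3 in this window

Such a proxy is coarser than a patient-reported measure, so investigators would typically report its limitations alongside the primary implementation outcomes.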

4.2. Type 3 Example.

Spoelstra, Schueller, and Sikorskii (2019) provide an example of a type 3 hybrid designed to examine the impact of a bundle of implementation strategies on adoption and sustainability of an intervention to improve physical function in community-dwelling disabled and older adults. Spoelstra et al. (2019) conducted a 2-arm mixed-methods randomized trial to test the impact of a bundle of implementation strategies including: relationship, coalition, and team building; assessment of readiness to implement, leadership, and clinician attitudes toward evidence; intervention and facilitation training; interdisciplinary coordination; internal and external facilitation; and audit and feedback. To compare the impact of the implementation strategies, Spoelstra et al. randomized Medicaid home- and community-based waiver sites to deliver the clinical intervention with either the implementation bundle with internal facilitation or an enhanced version of the bundle with the addition of external facilitation. In this study, all sites conducted the same clinical intervention and clinical outcomes were gathered at the patient level (e.g., measures of pain, depression, falls), while each site was randomized to one of the two bundles of implementation strategies to measure the impact on adoption and sustainability of the intervention. This study exemplifies a type 3 design because the primary outcomes of interest are implementation-based rather than effectiveness-based.

5. Recommendations

For clinical trialists who are interested in using hybrid designs, it is important to design effectiveness trials with dissemination and implementation components in the first stages of study development. It is also possible to start with an implementation study in a new effort to translate an evidence-based intervention. The use of established implementation frameworks (Nilsen, 2015; Tabak et al., 2012) can aid in designing effectiveness trials for dissemination and implementation. It is not required to wait for “perfect” effectiveness data before moving to implementation research as additional effectiveness data can be gathered while testing implementation strategies (i.e., type 2 hybrid design). When building for implementability in an effectiveness study, gathering stakeholder input and developing partnerships before the trial is crucial to developing better, culturally-adapted interventions that increase stakeholder buy-in (Frank et al., 2014). Hybrid designs can also provide insight into how clinical outcomes are related to implementation outcomes such as levels of adoption and fidelity. By not concurrently gathering both effectiveness and implementation data, investigators may be unaware of crucial contextual factors related to the success of their interventions and risk significant delays in effective implementation of clinical interventions.

For clinical trialists developing a hybrid design study for the first time, engaging stakeholders early and often can help to expand traditional effectiveness studies. Stakeholder input can be used to inform the clinical intervention, implementation strategies, and project development (Johnson et al., 2018). Introduction to hybrid designs could begin simply by conducting stakeholder interviews to assess the barriers to and facilitators of conducting the effectiveness study. Another approach is to explicitly measure and report on the efforts already taken to ensure fidelity in a traditional effectiveness trial, consistent with a hybrid type 1 design. For those interested in a hybrid type 2 design, it is possible to randomize implementation strategies (e.g., fidelity checks, clinician training) and measure their impact on implementation outcomes that may already be familiar from effectiveness designs (e.g., adherence to the protocol, clinical competence). For those interested in conducting a hybrid type 3, it would be important to partner with experienced implementation scientists, as the main focus is on the performance of implementation strategies in driving uptake of and fidelity to an intervention. As noted above, it is crucial at this stage to already have knowledge of the implementation context and a sense of how feasible the strategies under consideration for testing are.

Engagement in hybrid designs and implementation research is not without its challenges. Common problems within the field include disagreements over the quantity of evidence needed to begin including an implementation focus. It is important for researchers to adequately describe the evidence base for the intervention being considered for implementation work, and to make the case for introducing implementation science questions and for the stage of the implementation continuum at which to introduce them (i.e., initial questions about barriers and facilitators to uptake, development and pilot testing of implementation strategies, or comparative tests of strategies). It can also be challenging to determine the acceptable amount of variability between the original effectiveness trials and the current context in which one intends to conduct an implementation study (i.e., how different is too different to conduct a hybrid study?). As implementation science has advanced, it is no longer sufficient to haphazardly select implementation strategies to include in implementation studies. More emphasis is being placed on selecting evidence-based implementation strategies and providing a rationale for the selection of strategies based on factors of the context, recipient, and innovation as well as theory and stakeholder input. This can be difficult for investigators when there are not enough data on the barriers to and facilitators of implementation to adequately inform the selection of proposed implementation strategies. Further, there are growing expectations to hypothesize about and measure expected mechanisms of action for the implementation strategies (Lewis et al., 2018; Pintello, 2019; Williams and Beidas, 2019). The most sophisticated designs are those that allow for the demonstration of why specific strategies work (or not), as opposed to just demonstrating their impact on reach, adoption, and fidelity.

Highlights.

  • Effectiveness-implementation hybrid designs evaluate both effectiveness and implementation outcomes within a single study.

  • Use of these designs helps move interventions towards implementation.

  • Type 1 hybrid designs may be ideal for clinical trial researchers to consider.

Acknowledgements:

The ideas presented here are those of the authors and do not represent the views of the Department of Veterans Affairs (VA), Veterans Health Administration (VHA), or the United States Government.

Funding:

Writing of this manuscript was supported by the Department of Veterans Affairs Office of Academic Affiliations Advanced Fellowship Program in Mental Illness Research and Treatment; the Medical Research Service of the Central Arkansas Veterans Healthcare System; the Department of Veterans Affairs South Central Mental Illness Research, Education, and Clinical Center (MIRECC); and the University of Arkansas for Medical Sciences Translational Research Institute. Drs. Curran and Landes are supported by a Clinical and Translational Science Award (CTSA) program from the NIH National Center for Advancing Translational Sciences (NCATS) awarded to the University of Arkansas for Medical Sciences, grant UL1TR003107.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Competing Interest Statement

The authors have no competing interests to declare.

References

  1. Aarons GA, Sklar M, Mustanski B, Benbow N, Brown CH, 2017. “Scaling-out” evidence-based interventions to new populations or new health care delivery systems. Implementation Science 12, 111.
  2. Baloh J, Curran GM, Timko C, Grant KM, Cucciare MA, 2019. Al-Anon Intensive Referral (AIR): A qualitative formative evaluation for implementation. Presented at the Academy Health Annual Research Meeting, Washington, D.C.
  3. Beidas RS, Paciotti B, Barg F, Branas AR, Brown JC, Glanz K, DeMichele A, DiGiovanni L, Salvatore D, Schmitz KH, 2014. A hybrid effectiveness-implementation trial of an evidence-based exercise intervention for breast cancer survivors. JNCI Monographs 2014, 338–345.
  4. Brownson RC, Jacobs JA, Tabak RG, Hoehner CM, Stamatakis KA, 2013. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health 103, 1693–1699. 10.2105/AJPH.2012.301165
  5. Chambers D, Glasgow R, Stange K, 2013. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci 8, 117.
  6. Chambers DA, Norton WE, 2016. The adaptome: Advancing the science of intervention adaptation. American Journal of Preventive Medicine 51, S124–S131. 10.1016/j.amepre.2016.05.011
  7. Cully JA, Armento ME, Mott J, Nadorff MR, Naik AD, Stanley MA, Sorocco KH, Kunik ME, Petersen NJ, Kauth MR, 2012. Brief cognitive behavioral therapy in primary care: a hybrid type 2 patient-randomized effectiveness implementation design. Implement Sci 7, 64.
  8. Cully JA, Breland JY, Robertson S, Utech AE, Hundt N, Kunik ME, Petersen NJ, Masozera N, Rao R, Naik AD, 2014. Behavioral health coaching for rural veterans with diabetes and depression: a patient randomized effectiveness implementation trial. BMC Health Services Research 14, 191.
  9. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C, 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 50, 217–226. 10.1097/MLR.0b013e3182408812
  10. Curran GM, Sullivan G, Mendel P, Craske MG, Sherbourne CD, Stein MB, McDaniel A, Roy-Byrne P, 2012. Implementation of the CALM intervention for anxiety disorders: a qualitative study. Implementation Science 7, 14.
  11. Frank L, Basch E, Selby JV, for the Patient-Centered Outcomes Research Institute, 2014. The PCORI Perspective on Patient-Centered Outcomes Research. JAMA 312, 1513–1514. 10.1001/jama.2014.11100
  12. Garner BR, Gotham HJ, Tueller SJ, Ball EL, Kaiser D, Stilen P, Speck K, Vandersloot D, Rieckmann TR, Chaple M, Martin EG, Martino S, 2017. Testing the effectiveness of a motivational interviewing-based brief intervention for substance use as an adjunct to usual care in community-based AIDS service organizations: study protocol for a multisite randomized controlled trial. Addiction Science & Clinical Practice 12, 31. 10.1186/s13722-017-0095-8
  13. Glasgow RE, Lichtenstein E, Marcus AC, 2003. Why Don’t We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition. Am J Public Health 93, 1261–1267. 10.2105/AJPH.93.8.1261
  14. Green LW, 2006. Public health asks of systems science: To advance our evidence-based practice, can you help us get more practice-based evidence? American Journal of Public Health 96, 406–409. 10.2105/AJPH.2005.066035
  15. Johnson AL, Ecker AH, Fletcher TL, Hundt N, Kauth MR, Martin LA, Curran GM, Cully JA, 2018. Increasing the impact of randomized controlled trials: An example of a hybrid effectiveness–implementation design in psychotherapy research. Translational Behavioral Medicine, iby116. 10.1093/tbm/iby116
  16. Kitson A, Harvey G, McCormack B, 1998. Enabling the implementation of evidence based practice: a conceptual framework. Quality in Health Care 7, 149–158.
  17. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B, 2018. From classification to causality: Advancing understanding of mechanisms of change in implementation science. Frontiers in Public Health 6, 136. 10.3389/fpubh.2018.00136
  18. Nilsen P, 2015. Making sense of implementation theories, models and frameworks. Implementation Science 10. 10.1186/s13012-015-0242-0
  19. Owen N, Goode A, Fjeldsoe B, Sugiyama T, Eakin E, 2012. Designing for the Dissemination of Environmental and Policy Initiatives and Programs for High-Risk Groups, in: Dissemination and Implementation Research in Health: Translating Science to Practice. Oxford University Press, Oxford, England.
  20. Patterson TL, Semple SJ, Chavarin CV, Mendoza DV, Santos LE, Chaffin M, Palinkas LA, Strathdee SA, Aarons GA, 2012. Implementation of an efficacious intervention for high risk women in Mexico: protocol for a multi-site randomized trial with a parallel study of organizational factors. Implement Sci 7, 105.
  21. Pintello D, 2019. Commentary: A pathway forward for implementation science in the search to accelerate the delivery of effective mental health treatment and services for youth: Reflections on Williams and Beidas (2019). The Journal of Child Psychology and Psychiatry 60, 451–454. 10.1111/jcpp.13037
  22. Pitpitan EV, Semple SJ, Aarons GA, Palinkas LA, Chavarin CV, Mendoza DV, Magis-Rodriguez C, Staines H, Patterson TL, 2018. Factors associated with program effectiveness in the implementation of a sexual risk reduction intervention for female sex workers across Mexico: Results from a randomized trial. PLOS ONE 13, e0201954. 10.1371/journal.pone.0201954
  23. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M, 2011. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 38, 65–76. 10.1007/s10488-010-0319-7
  24. Roy-Byrne PP, Craske MG, Sullivan G, Rose RD, Edlund MJ, Lang AI, Bystritsky A, Welch SS, Chavira DA, Golinelli D, Campbell-Sills L, Sherbourne CD, Stein MB, 2010. Delivery of evidence-based treatment for multiple anxiety disorders in primary care: a randomized controlled trial. JAMA 303, 1921–1928. 10.1001/jama.2010.608
  25. Spoelstra SL, Schueller M, Sikorskii A, 2019. Testing an implementation strategy bundle on adoption and sustainability of evidence to optimize physical function in community-dwelling disabled and older adults in a Medicaid waiver: a multi-site pragmatic hybrid type III protocol. Implementation Science 14, 60.
  26. Tabak RG, Khoong BS, Chambers DA, Brownson RC, 2012. Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine 43, 337–350.
  27. Wells KB, 1999. Treatment research at the crossroads: The scientific interface of clinical trials and effectiveness research. American Journal of Psychiatry 156, 5–10. 10.1176/ajp.156.1.5
  28. Williams NJ, Beidas RS, 2019. Annual Research Review: The state of implementation science in child psychology and psychiatry: A review and suggestions to advance the field. Journal of Child Psychology and Psychiatry 60, 430–450. 10.1111/jcpp.12960
  29. Wiltsey Stirman S, Miller CJ, Toder K, Calloway A, 2013. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science 8, 65. 10.1186/1748-5908-8-65
