Journal of General Internal Medicine. 2024 Apr 16;39(9):1735–1743. doi: 10.1007/s11606-024-08747-1

Similarities and Differences Between Pragmatic Trials and Hybrid Effectiveness-Implementation Trials

John C Fortney 1,2, Geoffrey M Curran 3,4, Aaron R Lyon 1, Devon K Check 5, David R Flum 6
PMCID: PMC11254859  PMID: 38627320

Abstract

Pragmatism in clinical trials is focused on increasing the generalizability of research findings for routine clinical care settings. Hybridism in clinical trials (i.e., assessing both clinical effectiveness and implementation success) is focused on speeding up the process by which evidence-based practices are developed and adopted into routine clinical care. Even though pragmatic trial methodologies and implementation science evolved from very different disciplines, Pragmatic Trials and Hybrid Effectiveness-Implementation Trials share many similar design features. In fact, these types of trials can easily be conflated, creating the potential for investigators to mislabel their trial type or mistakenly use the wrong trial type to answer their research question. Blurred boundaries between trial types can hamper the evaluation of grant applications, the scientific interpretation of findings, and policy-making. Although most trials are neither pure Pragmatic Trials nor pure Hybrid Effectiveness-Implementation Trials, there are key differences between these trial types, and they answer very different research questions. The purpose of this paper is to clarify the similarities and differences between these trial types for funders, researchers, and policy-makers. In addition, recommendations are offered to help investigators choose, label, and operationalize the most appropriate trial type to answer their research question. These recommendations complement existing reporting guidelines for clinical effectiveness trials (TIDieR) and implementation trials (StaRI).

KEY WORDS: Pragmatic Trials, Implementation Trials, Hybrid Effectiveness-Implementation Trials, Trial Design, Fidelity

INTRODUCTION

In the context of the translational research continuum (see Fig. 1), the difference between Explanatory Trials (also known as Efficacy Trials) and Pragmatic Trials (also known as Effectiveness Trials) is well understood.1 Explanatory Trials are designed to answer the question “Can this intervention work under ideal conditions?”2 In contrast, Pragmatic Trials are designed to answer the question “Does this intervention work under routine care conditions?”2,3 Acknowledging that most trials are neither pure Explanatory Trials nor pure Pragmatic Trials, the Pragmatic Explanatory Continuum Indicator Summary (PRECIS) tool provides guidelines for describing where on the explanatory-pragmatic continuum a trial falls.2,3 These guidelines are useful for scientific funding agencies to evaluate whether proposed trials address their research priorities, and for researchers and policy-makers to interpret published findings.

Figure 1. Translational research pipeline.

On the other end of the translational research continuum (see Fig. 1), the differences between Pragmatic Trials and Implementation Trials are also well understood. Implementation Trials test the success of implementation strategies4 designed to promote the use of evidence-based practices previously demonstrated to be effective in routine care. Implementation Trials5 are designed to answer the question “Does this implementation strategy successfully promote the use of this evidence-based practice in routine care?” The primary outcomes of Implementation Trials typically include the (1) proportion of providers who adopted the evidence-based practice, (2) degree to which providers delivered the evidence-based practice with high fidelity (i.e., as intended), and (3) proportion of eligible patients reached by the evidence-based practice.6 Thus, while Pragmatic Trials test the effectiveness of clinical interventions delivered in routine care, Implementation Trials test the success of implementation strategies designed to promote the use of evidence-based practices in routine care. Implementation Trials may be characterized along an explanatory-pragmatic continuum using the PRECIS-Provider Strategies tool.7,8

To speed up the process by which evidence-based practices are developed and adopted, Curran et al. encouraged researchers to consider using Hybrid Effectiveness-Implementation Trials, defined as a trial “that takes a dual focus a priori in assessing clinical effectiveness and implementation.”9 By conducting one Hybrid Trial instead of sequential Pragmatic and Implementation Trials, the research timeline can ideally be shortened (see Fig. 1). There are three basic types of Hybrid Trials (see Table 1). While Pragmatic Trials and Hybrid Type 1 Trials test the effectiveness of clinical interventions in routine care, Hybrid Type 3 Trials test the success of implementation strategies to promote evidence-based practice use in routine care, and Hybrid Type 2 Trials test both. Due to the close proximity of Pragmatic Trials and Hybrid Trials on the translational research continuum (see Fig. 1), there is less clarity about the similarities and differences between these trial types.

Table 1. Definitions of Trial Types

Pragmatic Trial: Assesses the effectiveness of a clinical intervention(s)

Hybrid Type 1 Trial: Primarily assesses the effectiveness of a clinical intervention(s) while exploring barriers to implementation and potential strategies for overcoming those barriers

Hybrid Type 2 Trial: Places roughly equal importance on comparing clinical interventions and implementation strategies
• Subtype a (pilot implementation): Two or more clinical interventions are compared while one implementation strategy is evaluated
• Subtype b (dual randomization): Two or more clinical interventions and two or more implementation strategies are compared simultaneously

Hybrid Type 3 Trial: Primarily assesses the success of implementation strategies while secondarily examining the effectiveness of the clinical intervention(s) being implemented

The purpose of this paper is to clarify the similarities and differences between Pragmatic Trials and Hybrid Trials for funders, researchers, and policy-makers. Most trials are neither pure Pragmatic Trials nor pure Hybrid Trials, and these blurred boundaries between trial types can hamper the evaluation of grant applications and the scientific interpretation of findings. To illustrate this, we highlight a recently published study self-labeled as a “pragmatic cluster randomized control trial” of integrating behavioral health into primary care.10,11 The comparators in this example trial are described as co-location of mental health specialists in primary care versus co-location of mental health specialists in primary care plus an online educational curriculum for providers, an implementation workbook, remote quality improvement coaching services for internal facilitators, and an online learning community.10 The clinical intervention in both arms is the same (i.e., co-location), and the comparators are clearly implementation strategies (e.g., none vs. multifaceted support).4 However, the primary outcome is a measure of clinical effectiveness (change in patients’ health status) and the secondary outcome is a measure of implementation success (fidelity to the integrated care model).12 This pragmatically labeled trial, which compares two implementation strategies by examining patients’ health status, is an illustrative example of how easily the similarities between Pragmatic and Hybrid Trials can cause confusion, resulting in an incongruence between scientific aims and the chosen trial type.

SIMILARITIES BETWEEN PRAGMATIC AND HYBRID TRIALS

Differences and similarities between Pragmatic and Hybrid Trial types are presented in Table 2. All trial types tend to use methods considered to be pragmatic according to PRECIS, such as specifying broad inclusion criteria and minimal exclusion criteria, being conducted in routine care settings where treatment is delivered by routine care providers, and using intention-to-treat analyses to examine outcomes.2,3 The greatest similarities are usually between Pragmatic and Hybrid Type 2 Trials because they are both designed to measure the effectiveness of clinical interventions which previous research has shown to be effective, at least in some populations, settings, or delivery modalities. In contrast, Hybrid Type 1 Trials are designed primarily to establish the effectiveness of a clinical intervention in routine care, and are usually less pragmatic than Pragmatic Trials (see Fig. 1). Hybrid Type 3 Trials are primarily designed to compare implementation strategies rather than clinical interventions.

Table 2. Similarities and Differences Between Pragmatic Trials and Hybrid Effectiveness-Implementation Trials

| | Pragmatic Trial | Hybrid Type 1 Trial | Hybrid Type 2a* Trial | Hybrid Type 2b* Trial | Hybrid Type 3 Trial |
|---|---|---|---|---|---|
| Research question(s) | Is the treatment effective in routine care? | Can the treatment be effective in routine care? What are the implementation implications? | Is the treatment effective in routine care? Is the implementation strategy successful? | Is the treatment effective in routine care? Which implementation strategy is more successful? | Which implementation strategy is more successful? |
| Clinical intervention(s) are delivered in a routine care setting | Yes | Yes | Yes | Yes | Yes |
| Broad inclusion criteria and minimal exclusion criteria | Yes | Yes | Yes | Yes | Yes |
| Clinical intervention(s) are delivered by routine care providers | Yes | Yes or no | Yes | Yes | Yes |
| Compares two or more clinical interventions | Yes | Yes | Yes | Yes | No |
| Clinical intervention(s) are known to be effective† | Yes | No | Yes | Yes | Yes |
| Compares two or more implementation strategies | No | No | No | Yes | Yes |
| Primary/co-primary outcome is patients’ health status | Yes | Yes | Yes | Yes | No |
| Primary/co-primary outcomes are adoption, reach, and/or fidelity | No | No | No | Yes | Yes |
| Intention-to-treat statistical analysis | Yes | Yes | Yes | Yes | Yes |
| Moderation analyses examine treatment heterogeneity‡ | Yes | No | Yes or no | Yes or no | Yes or no |
| Mediation analyses examine mechanisms of action | No | Yes | Yes or no§ | Yes or no§ | Yes or no§ |
| Uses artificial implementation strategies | No | Yes | No | No | No |
| Uses evidence-based implementation strategies | Yes | Yes | Yes and no‖ | Yes and no‖ | Yes and no‖ |
| Implementation strategies are pre-specified | Yes or no | Yes or no | Yes | Yes | Yes |
| Fidelity is measured | Yes | Yes | Yes | Yes | Yes |
| Fidelity is an outcome variable | No | No | No | Yes or no | Yes or no |
| Adoption and/or reach is an outcome variable | No | No | Yes or no | Yes or no | Yes or no |

*Hybrid Type 2b Trials compare two clinical interventions and two implementation strategies while Hybrid Type 2a Trials compare two clinical interventions with only one implementation strategy.14

†Though not necessarily known to be effective in the setting where the clinical intervention is being delivered or for the population targeted

‡For Pragmatic and Hybrid Type 2 Trials, moderation analyses examine treatment heterogeneity between sub-populations of patients. For Hybrid Type 2b and 3 Trials, moderation analyses examine variation in implementation outcomes across sites with different inner and/or outer contexts.20

§For Hybrid Type 1 and 2a Trials, the mechanism(s) of action would be clinical; for Hybrid Type 2b Trials, the mechanism(s) of action would be clinical and/or implementation; and for Hybrid Type 3 Trials, the mechanism(s) of action would be implementation

‖If a bundle of implementation strategies is being used, some might be evidence-based, and some might be novel or non-evidence-based for the implementation context

DIFFERENCES BETWEEN PRAGMATIC AND HYBRID TRIALS

The differences between Pragmatic and Hybrid Trials have to do with the (1) primary outcomes, (2) specification of implementation strategies, (3) secondary aims, (4) attention to fidelity, (5) use of “artificial” versus “practical” implementation strategies, and (6) use of “evidence-based” versus “novel” implementation strategies.

Primary Outcomes

Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials compare the effectiveness of two or more clinical interventions, with one often being usual care. Clinical interventions are treatments (e.g., psychotherapy), treatment modalities (e.g., mHealth), or service models (e.g., patient-centered medical homes) designed to directly impact patient outcomes. Thus, for Pragmatic, Hybrid Type 1, and Hybrid Type 2 Trials, the primary/co-primary outcomes are usually specified as patient-level outcomes, such as treatment compliance, procedural complications, side effects, lab results, symptoms, functioning, or hospital readmission. Hybrid Type 2b and Hybrid Type 3 Trials compare two or more implementation strategies, with one often being usual implementation. Powell et al. provide a comprehensive list of implementation strategies,4 some of which are often bundled together. Implementation strategies promote the use of evidence-based practices4 and indirectly impact patient outcomes. We acknowledge that the distinction between clinical interventions and implementation strategies is sometimes blurred, especially with regard to interventions designed to promote patient engagement in treatment (e.g., telehealth trials). Because Hybrid Type 2b and Type 3 Trials are testing implementation strategies that indirectly impact patient outcomes, the primary/co-primary outcome is implementation success reflected by such measures as provider adoption, provider fidelity, and patient reach.6 Most Hybrid Type 3 Trials specify patient-level outcomes as a secondary outcome.
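To make the distinction between clinical and implementation outcomes concrete, the minimal sketch below (Python with pandas; all field names and values are hypothetical) shows one way the RE-AIM-style implementation outcomes named above (adoption, fidelity, and reach) might be computed from routine trial records. It is illustrative only, not a prescribed analysis.

```python
import pandas as pd

# Hypothetical provider- and patient-level records from a hybrid trial.
providers = pd.DataFrame({
    "provider_id": [1, 2, 3, 4],
    "delivered_ebp": [True, True, False, True],  # delivered the practice at all
    "fidelity_score": [0.92, 0.78, None, 0.85],  # share of core components delivered
})
patients = pd.DataFrame({
    "patient_id": range(1, 11),
    "eligible": [True] * 10,
    "received_ebp": [True, True, False, True, False, True, True, False, True, True],
})

# Adoption: proportion of providers who delivered the evidence-based practice.
adoption = providers["delivered_ebp"].mean()

# Fidelity: mean fidelity among adopting providers (delivery as intended).
fidelity = providers.loc[providers["delivered_ebp"], "fidelity_score"].mean()

# Reach: proportion of eligible patients who received the evidence-based practice.
reach = patients.loc[patients["eligible"], "received_ebp"].mean()

print(f"Adoption: {adoption:.0%}, Fidelity: {fidelity:.2f}, Reach: {reach:.0%}")
```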

Specification of Implementation Strategies

Historically, the implementation strategies of most Pragmatic and Hybrid Type 1 Trials are neither pre-specified (reported before publishing the results) nor post-specified (reported with the published results),13 and if they are, they are not usually called implementation strategies.14 Because Hybrid Type 2 and Hybrid Type 3 Trials are evaluating the success of implementation strategies, often hypothesizing that one is superior to another, the implementation strategies are always pre-specified in grant applications, trial registries, and protocol papers. An important nuance is that many implementation strategies are tailored to specific sites based on a local needs assessment, and thus, strategies can vary across sites during the same trial.15 Similarly, adaptive implementation strategies can be used, in which more, or more intensive, implementation strategies are deployed when adoption, fidelity, and/or reach are poor at under-performing sites.16–18 Nevertheless, Hybrid Type 2 and Hybrid Type 3 Trials pre-specify the tailoring or adaptive nature of the implementation strategies.

Secondary Aims

Another difference between trial types is whether moderation or mediation analyses are specified as secondary aims. Moderation analyses test interaction effects such as whether the impact of the clinical intervention depends on the characteristics of the patients or whether the impact of the implementation strategy depends on the characteristics of providers or clinics. Pragmatic Trials typically conduct moderation analyses to examine treatment heterogeneity among patients.1 Moderation analyses of implementation outcomes are much less common in Hybrid Trials because of the challenge of achieving adequate statistical power. Very large Hybrid Trials conducted in multiple healthcare systems/clinics have the potential to examine whether contextual factors are effect modifiers for the implementation strategy.19 The Consolidated Framework for Implementation Research (CFIR) describes provider-level, organization-level, and environmental-level modifiers that may make an implementation strategy more or less successful.20 Mediation analyses determine how a clinical intervention is improving patient outcomes or how an implementation strategy is promoting the use of an evidence-based practice. Mediation analyses are not typically conducted in Pragmatic Trials, because the mechanisms of action for the clinical intervention have usually already been identified in explanatory clinical trials. Hybrid Type 1 Trials and sometimes Hybrid Type 2 Trials examine whether the mechanisms of action for the clinical intervention identified in explanatory trials are still being targeted effectively when delivered in routine care. Implementation researchers should also conduct mediation analyses to determine whether implementation strategies are successfully targeting the hypothesized mechanism(s) of action.19 An exemplar in this regard is the implementation trial conducted by Williams et al. that randomized 475 mental health clinicians in 14 children’s mental health agencies to usual implementation or to a novel implementation strategy to improve organizational culture.21 Results demonstrated that the implementation strategy significantly and substantially increased the use of evidence-based practices, and that, as hypothesized, improved organizational culture partially mediated the effect.
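As a concrete illustration of a moderation analysis, the sketch below (Python with statsmodels; simulated data and hypothetical variable names) tests treatment heterogeneity by including a treatment-by-baseline-severity interaction term; a significant interaction coefficient would indicate that the intervention’s effect depends on patient severity. This is a generic sketch, not the analysis used in any trial cited here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level trial data (hypothetical; for illustration only).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),  # 1 = intervention, 0 = usual care
    "severity": rng.normal(0, 1, n),     # baseline severity (standardized)
})
# Outcome improves with treatment, and more so for severe patients
# (moderation is deliberately built into the simulation).
df["outcome"] = (0.4 * df["treatment"]
                 + 0.3 * df["treatment"] * df["severity"]
                 - 0.2 * df["severity"]
                 + rng.normal(0, 1, n))

# Moderation analysis: the treatment:severity interaction tests heterogeneity.
model = smf.ols("outcome ~ treatment * severity", data=df).fit()
print(model.summary().tables[1])  # inspect the treatment:severity coefficient
```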

Attention to Fidelity

Adoption and reach are frequently specified as implementation outcomes in Hybrid Type 2 and Hybrid Type 3 Trials, but are rarely measured or reported in Pragmatic Trials. Therefore, the specification of adoption and reach outcomes is a good indicator that the trial is a Hybrid Type 2 or 3 Trial and not a Pragmatic Trial. In contrast, fidelity is often measured and reported in both Pragmatic and Hybrid Trials.22 Fidelity represents the degree to which the clinical intervention is delivered as intended.23 While adaptation (intentional fidelity-consistent changes to the adaptable periphery of the clinical intervention to improve fit, engagement, and effectiveness)24,25 is encouraged, fidelity-inconsistent deviations from the core intervention components are not.25 A fundamental difference between Pragmatic and Hybrid Trial types concerns the role of fidelity: (1) whether, how, and how much fidelity is intervened upon, (2) whether that is pre-specified in grant applications, trial registries, and protocol papers, and (3) whether fidelity is analyzed as an outcome. Because the purpose of Pragmatic Trials is to estimate the effectiveness of clinical interventions in routine care, fidelity is reported descriptively. Process evaluations are currently recommended for Pragmatic Trials evaluating complex interventions,26–30 to document how well the intervention was implemented in order to interpret the observed effectiveness of the clinical intervention.28,29 However, fidelity in Pragmatic Trials should not be intervened upon more than a healthcare system’s normal quality improvement activities.31 In fact, the PRECIS tool rates how pragmatic a trial is based on how much fidelity is controlled.2 In contrast, Hybrid Type 2 and Hybrid Type 3 Trials test the effectiveness of implementation strategies designed to maximize fidelity. Consequently, Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as the primary outcome.6,23

Artificial Versus Practical Implementation Strategies

In Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials, research teams typically rely on implementation strategies that are, or are expected to be, practical to use outside the context of research. In contrast, many implementation strategies are not feasibly replicated in routine care settings. Such implementation strategies would be characterized as “explanatory” by Domain #4 of the PRECIS-2-Provider Strategies,7 but will be referred to here as “artificial.” Examples of artificial implementation strategies include (1) increasing adoption by using research funds to pay for intervention delivery, (2) increasing reach by advertising in the community for trial participants, and (3) increasing fidelity by monitoring fidelity frequently and re-training and/or removing clinicians with poor fidelity. Hybrid Type 1 Trials are more likely to use artificial implementation strategies while exploring promising practical implementation strategies. For example, in a Hybrid Type 1 Trial of tobacco treatment in oncology centers, Goshe et al. report requiring a week-long training for study counselors, followed by investigator reviews of counseling session recordings, and weekly supervision meetings to review all active cases to optimize fidelity. After the trial is over, focus groups will assess more practical implementation strategies. Importantly, the degree of artificiality may vary depending on the resources available to a particular healthcare system, such that an implementation strategy may be feasible in a healthcare system with a good quality-improvement infrastructure, but not in one with inadequate infrastructure. Pragmatic Trials differ from Hybrid Type 1 Trials mostly in that they rely on practical implementation strategies rather than artificial ones. This is why Hybrid Type 1 Trials, but not Pragmatic Trials, need to conduct exploratory research to identify practical implementation strategies. When the implementation strategies used in Pragmatic and Hybrid Type 1 Trials are not described, these two trial types can be indistinguishable.

Evidence-Based Versus Novel Implementation Strategies

Just like clinical interventions, an implementation strategy can fall along a continuum from evidence-based,32 to evidence-based in some contexts, to novel. Pragmatic Trials should use evidence-based implementation strategies or compare clinical interventions that do not face meaningful implementation barriers. For example, Flum et al. compared the clinical effectiveness of antibiotics to surgery for appendicitis, both of which had already been adopted into routine care.33 Many Hybrid Type 2b and Hybrid Type 3 Trials compare a novel implementation strategy to a commonly used implementation strategy known to be marginally successful (e.g., train and hope34). For example, in a Hybrid Type 3 Trial, Cucciare et al. randomized rural psychotherapists to standard training or standard training plus computer support (a novel implementation strategy).35 Therapists randomized to training plus computer support were substantially more likely to follow the therapy protocol with fidelity (primary outcome) and their patients experienced statistically greater improvements in symptoms (secondary outcome). These results suggest that the common practice of training psychotherapists in an evidence-based practice and hoping they deliver it per protocol should be replaced with one that provides ongoing fidelity support. Hybrid Type 2b and Hybrid Type 3 Trials can also compare a novel low-intensity implementation strategy to a more resource-intensive implementation strategy that is known to be effective. For example, Kolko et al. describe a Hybrid Type 3 Trial comparing three variants of practice facilitation, an implementation strategy shown to be successful at promoting the uptake of complex clinical interventions.36,37 The three practice facilitation variants are (1) targeting both front-line providers and leadership (evidence-based), (2) targeting front-line providers only (novel), and (3) targeting leadership only (novel). Results will determine whether the less evidence-based and less resource-intensive practice facilitation strategies targeting just front-line providers or just leadership are as successful as the more evidence-based and more resource-intensive strategy.

IDENTIFYING TRIAL TYPES

Acknowledging that trial types fall along a continuum, the differences between trial types can be demarcated according to the following dimensions: (1) primary outcome, (2) attention to fidelity, (3) artificial versus practical implementation strategies, and (4) evidence-based versus novel implementation strategies. Pragmatic and Hybrid Type 1 Trials specify measures of clinical effectiveness as the primary outcome while Hybrid Type 2 and Hybrid Type 3 Trials specify measures of implementation success as primary or co-primary outcomes. Pragmatic Trials often report fidelity descriptively whereas Hybrid Type 2 and Hybrid Type 3 Trials often specify fidelity as a primary or co-primary outcome. Hybrid Type 1 Trials tend to use artificial implementation strategies, whereas Pragmatic, Hybrid Type 2, and Hybrid Type 3 Trials use practical implementation strategies. Pragmatic Trials differ from Hybrid Type 2 and Hybrid Type 3 Trials in that the implementation strategies should be evidence-based rather than novel. Figure 2 provides a simple decision tree to help identify these various trial types, including suboptimal trial types such as the one described in the introduction that specified a measure of clinical effectiveness as the primary outcome, but only examined one clinical intervention.

Figure 2. Trial type decision tree. (1) An artificial implementation strategy is one that is not feasibly replicated in routine care settings. (2) An evidence-based implementation strategy is one that has been proven successful in the same or similar context (e.g., clinical intervention, target population, healthcare setting). (3) Suboptimal indicates an incongruence between scientific aims and trial characteristics.
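The classification dimensions described above lend themselves to a rough encoding in code. The Python sketch below is our hedged reading of those dimensions (primary outcome, number of comparators, artificial versus practical strategies, evidence-based versus novel strategies), not a verbatim transcription of the published Figure 2.

```python
def classify_trial(primary_outcome_clinical: bool,
                   compares_clinical_interventions: bool,
                   compares_implementation_strategies: bool,
                   strategies_artificial: bool,
                   strategies_evidence_based: bool) -> str:
    """Approximate the trial-type dimensions discussed in the text.

    An interpretive sketch of the decision tree, not the published figure.
    """
    if primary_outcome_clinical and not compares_clinical_interventions:
        # e.g., the mislabeled trial in the introduction: a clinical primary
        # outcome but only one clinical intervention under study.
        return "Suboptimal: aims incongruent with trial characteristics"
    if primary_outcome_clinical and not compares_implementation_strategies:
        if strategies_artificial:
            return "Hybrid Type 1 Trial"
        if strategies_evidence_based:
            return "Pragmatic Trial"
        return "Hybrid Type 2a Trial"  # practical but not-yet-evidence-based
    if compares_implementation_strategies:
        if compares_clinical_interventions:
            return "Hybrid Type 2b Trial"
        return "Hybrid Type 3 Trial"
    return "Unclassified: re-examine aims and outcomes"

# Example: clinical co-primary outcome, two clinical interventions,
# one novel practical implementation strategy under evaluation.
print(classify_trial(True, True, False, False, False))  # -> Hybrid Type 2a Trial
```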

FIDELITY DITCHES AND GUARDRAILS IN PRAGMATIC TRIALS AND HYBRID TYPE 2 TRIALS

Because patient health status is specified as the primary/co-primary outcome in both Pragmatic and Hybrid Type 2 Trials, it is critical that fidelity to the evidence-based clinical intervention(s) be sufficiently high to produce pre-post clinical improvement among patients on average.14 Therefore, within the context of a process evaluation, fidelity to the core functions28 of the clinical intervention should be monitored during both of these trial types, using practical methods38 (i.e., replicable outside the context of research) if possible. A conceptual challenge to such process evaluations is whether the evaluators should take a passive role (i.e., a summative evaluation in which results are reported at the end of the trial) or an active role (i.e., a formative evaluation in which ongoing feedback is provided during the trial to facilitate the identification and correction of implementation problems).27 It is generally recommended that pragmatic trialists not conduct course corrections to improve fidelity because doing so compromises external validity.27 However, while fidelity should not be controlled artificially in Pragmatic or Hybrid Type 2 Trials, it is uninformative and unethical to compare evidence-based clinical interventions that are delivered with such low fidelity that patients are not experiencing within-group pre-post clinical improvement.31 Therefore, whenever fidelity is so low that patients are not benefiting clinically (i.e., the ditch), we recommend that the trial be “rescued,” if possible, by increasing the intensity of pre-specified practical evidence-based implementation strategies and/or adding post hoc implementation strategies (i.e., the guardrails). For example, in a Hybrid Type 2 Trial, Hartzler et al. pre-specified a “fidelity drift alarm” that triggered an additional pre-specified practical implementation strategy (technical assistance to the therapist) to support the rollout of an evidence-based psychotherapy.39 For Hybrid Type 2 Trials, investigators must weigh the disadvantages of making post hoc modifications to the implementation strategies (or of using adaptive implementation strategies), which may sacrifice the co-primary aim of comparing pre-specified implementation strategies in order to rescue the co-primary aim of comparing clinical interventions. Note that if artificial implementation strategies must be used to maintain adequate fidelity, the Pragmatic Trial or Hybrid Type 2 Trial becomes a Hybrid Type 1 Trial by default.

Importantly, it may not always be obvious when fidelity is below the threshold needed to produce clinical improvement. Ideally, data from explanatory trials or Hybrid Type 1 Trials could be used to examine the correlation between fidelity and clinical outcomes to determine the thresholds. In the absence of such data, Data Safety Monitoring Boards should monitor clinical outcomes (masked for Pragmatic and Hybrid Type 2 Trials) and alert investigators when patients are not improving. Likewise, for Hybrid Type 2 Trials that specify fidelity as a co-primary outcome (thus requiring masking), fidelity may need to be monitored by a Data Safety Monitoring Board.
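A fidelity drift alarm of the kind Hartzler et al. pre-specified could be operationalized in many ways. The sketch below (Python with pandas; the threshold, window, and data are assumptions for illustration) shows one simple version: track a rolling mean of session fidelity scores per site and flag when it falls below a pre-specified minimum (the ditch), signaling that pre-specified guardrail strategies should be intensified.

```python
import pandas as pd

# Hypothetical session-level fidelity ratings (0-1) collected during the trial.
sessions = pd.DataFrame({
    "site": ["A"] * 6 + ["B"] * 6,
    "fidelity": [0.90, 0.85, 0.88, 0.90, 0.86, 0.87,   # site A: stable
                 0.80, 0.75, 0.70, 0.62, 0.58, 0.55],  # site B: drifting down
})

FIDELITY_DITCH = 0.70  # pre-specified minimum threshold (assumed value)
WINDOW = 3             # number of recent sessions in the rolling mean

for site, grp in sessions.groupby("site"):
    rolling_mean = grp["fidelity"].rolling(WINDOW).mean().iloc[-1]
    if rolling_mean < FIDELITY_DITCH:
        # Guardrail: intensify pre-specified practical implementation
        # strategies (e.g., technical assistance), not artificial controls.
        print(f"Site {site}: fidelity drift alarm (rolling mean {rolling_mean:.2f})")
    else:
        print(f"Site {site}: fidelity adequate (rolling mean {rolling_mean:.2f})")
```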

RECOMMENDATIONS

Given the subtle, but important, similarities and differences between Pragmatic and Hybrid Trials, there is the potential for investigators to mislabel their trial type or mistakenly use the wrong trial type to answer their research question. The recommendations depicted in Table 3 should help investigators choose, label, and operationalize the most appropriate trial type to answer their research question. These recommendations complement the reporting guidelines for clinical effectiveness trials (TIDieR) and implementation trials (StaRI).40,41

Table 3.

Design Recommendations for Pragmatic and Hybrid Effectiveness-Implementation Trials

Recommendation 1: Hybrid Type 1 Trials should be used to determine whether a clinical intervention can be effective when delivered in routine care. Pragmatic and Hybrid Type 2 Trials should be used to determine whether an evidence-based clinical intervention(s) is effective when delivered with practical implementation strategies in routine care. Hybrid Type 2 and Hybrid Type 3 Trials should be used to determine whether practical and novel implementation strategies successfully promote the uptake of evidence-based clinical interventions

Recommendation 2: In all Pragmatic Trials and Hybrid Trials, the implementation strategies used to intervene on fidelity (and adoption and reach) should be pre-specified, and classified as artificial/practical and novel/evidence-based in the target healthcare system. If not clear cut, the degree of artificiality and the quality of the evidence should be described/discussed. The implementation strategies used should be reported in research proposals, study protocols, and publications

Recommendation 3: Hybrid Type 1 Trials may (and usually do) use artificial fidelity monitoring methodologies and artificial implementation strategies to ensure high fidelity, but should conduct summative process evaluations to explore the potential for using more practical strategies

Recommendation 4: Hybrid Type 2 and Type 3 Trials are expected to test practical novel implementation strategies or practical implementation strategies without an evidence base in the targeted setting. In contrast, Pragmatic Trials should use practical implementation strategies that have an evidence base in the target setting; otherwise, investigators run the risk of having to apply artificial implementation strategies post hoc to maintain adequate fidelity

Recommendation 5: In the spirit of hybridism, pragmatic trialists might consider conducting process evaluations to facilitate the large-scale rollout of clinical interventions proven to be effective in routine care. Such process evaluations could be used to (1) optimize the “implementability” of the clinical intervention if found to be overly complex, (2) improve the “practicality” of implementation strategies if any are determined to be overly artificial for the setting, and (3) identify settings that are conducive to future implementation based on observed patient-level, provider-level, organization-level, and/or environmental-level barriers

Recommendation 6: During Pragmatic Trials and Hybrid Type 2 Trials, fidelity should be monitored using practical methodologies and if it drops below a minimum threshold (ditch) that is expected to result in a lack of pre-post clinical improvement among patients, the research team may want to consider increasing the number/intensity of practical implementation strategies to promote fidelity (guardrails). Such guardrails are not necessary in Hybrid Type 3 Trials because clinical effectiveness is not a primary outcome, although similar adaptive implementation strategies may be used

Recommendation 7: In Pragmatic Trials and Hybrid Type 2 Trials, investigators may want to pre-specify minimum fidelity thresholds (ditch) in grant applications, trial registries, and protocol papers if possible. Ideally, implementation strategies used to intervene on fidelity (guardrails) should also be pre-specified, although post hoc additions to the implementation strategies may be needed if there are unforeseen barriers to fidelity

CONCLUSION

Pragmatic trial methodologies and implementation science evolved from different disciplines.42 Pragmatism is focused on increasing the external validity of research findings. Hybridism is focused on speeding up the research process by making trials less sequential in nature. Yet, Pragmatic and Hybrid Trials share many similar design features, so much so that they are easily conflated. However, there are key differences between the trial types, and they answer very different research questions. Because Hybrid Type 1 Trials use artificial implementation strategies, which compromise external validity, they determine whether a clinical intervention can be effective in routine care. Because Pragmatic and Hybrid Type 2 Trials use practical implementation strategies, which optimize external validity, they determine whether a clinical intervention is effective when delivered in routine care. However, Pragmatic Trials differ from Hybrid Type 2 and Type 3 Trials because the implementation strategies should be evidence-based for the clinical intervention, targeted patient population, and setting. In contrast, because Hybrid Type 2 and Type 3 Trials are designed to determine whether an implementation strategy is successful, the implementation strategies themselves typically do not have an evidence base associated with their use for the clinical intervention, target population, and/or setting, or are completely novel. While fully acknowledging that most trials will be neither pure Pragmatic Trials nor pure Hybrid Trials, we suggest clearly describing (1) whether the primary outcomes are clinical effectiveness and/or implementation success, (2) the degree to which fidelity (and other implementation outcomes) will be controlled and how, and (3) the degree to which the implementation strategies are artificial/practical and evidence-based/non-evidence-based. To ensure a trial is informative and ethical, we also suggest considering pre-specifying fidelity thresholds, when feasible, in Pragmatic and Hybrid Type 2 Trials that trigger the intensification or addition of implementation strategies to ensure patients are, on average, experiencing pre-post clinical improvement. While the terminology and examples used here are focused on the implementation of clinical interventions, many of the concepts and recommendations may apply to the implementation of other evidence-based practices such as educational innovations.

Funding

This work was supported by grants from the Patient-Centered Outcomes Research Institute (PTSD-2019C1-15636), National Institute of Mental Health (UF1 MH121942), and the Department of Veterans Affairs (QUE 20–007, RCS 17–153) to Dr. Fortney. Drs. Fortney and Lyon are supported by the National Institute of Mental Health (P50MH115837). Dr. Curran is supported by the Translational Research Institute (UL1 TR003107), through the National Center for Advancing Translational Sciences of the National Institutes of Health. Dr. Check is supported by the National Institutes of Health (NIH) Pragmatic Trials Collaboratory funded by the NIH Common Fund through cooperative agreement (U24AT009676) from the Office of Strategic Coordination within the Office of the NIH Director, and by the NIH HEAL Initiative (U24AT010961).

Data Availability:

There are no data associated with this manuscript.

Declarations:

Contributors:

None.

Conflict of Interest:

The authors declare that they do not have a conflict of interest.

Footnotes

Prior presentations:

None.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. March J, Kraemer HC, Trivedi M, et al. What have we learned about trial design from NIMH-funded pragmatic trials? Neuropsychopharmacology. 2010;35(13):2491-2501. doi:10.1038/npp.2010.115
2. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464-475. doi:10.1016/j.jclinepi.2008.12.011
3. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.
4. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21. doi:10.1186/s13012-015-0209-1
5. Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32. doi:10.1186/s40359-015-0089-9
6. Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns. 2001;44(2):119-127. doi:10.1016/S0738-3991(00)00186-5
7. Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci. 2021;16(1):7. doi:10.1186/s13012-020-01075-y
8. Zatzick D, Palinkas L, Chambers DA, et al. Integrating pragmatic and implementation science randomized clinical trial approaches: a PRagmatic Explanatory Continuum Indicator Summary-2 (PRECIS-2) analysis. Trials. 2023;24(1):288. doi:10.1186/s13063-023-07313-0
9. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217-226. doi:10.1097/MLR.0b013e3182408812
10. Crocker AM, Kessler R, van Eeghen C, et al. Integrating Behavioral Health and Primary Care (IBH-PC) to improve patient-centered outcomes in adults with multiple chronic medical and behavioral health conditions: study protocol for a pragmatic cluster-randomized control trial. Trials. 2021;22(1):200. doi:10.1186/s13063-021-05133-8
11. Littenberg B, Clifton J, Crocker AM, et al. A cluster randomized trial of primary care practice redesign to integrate behavioral health for those who need it most: patients with multiple chronic conditions. Ann Fam Med. 2023;21(6):483-495.
12. Kessler RS, Auxier A, Hitt JR, et al. Development and validation of a measure of primary care behavioral health integration. Fam Syst Health. 2016;34(4):342-356. doi:10.1037/fsh0000227
13. Dal-Ré R, Janiaud P, Ioannidis JPA. Real-world evidence: how pragmatic are randomized controlled trials labeled as pragmatic? BMC Med. 2018;16(1):49. doi:10.1186/s12916-018-1038-2
14. Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280:112513. doi:10.1016/j.psychres.2019.112513
15. Powell BJ, Beidas RS, Lewis CC, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177-194. doi:10.1007/s11414-015-9475-6
16. Swindle T, Rutledge JM, Selig JP, et al. Obesity prevention practices in early care and education settings: an adaptive implementation trial. Implement Sci. 2022;17(1):25. doi:10.1186/s13012-021-01185-1
17. Fortney JC, Rajan S, Reisinger HS, et al. Deploying a telemedicine collaborative care intervention for posttraumatic stress disorder in the U.S. Department of Veterans Affairs: a stepped wedge evaluation of an adaptive implementation strategy. Gen Hosp Psychiatry. 2022;77:109-117. doi:10.1016/j.genhosppsych.2022.03.009
18. Kilbourne AM, Almirall D, Goodrich DE, et al. Enhancing outreach for persons with serious mental illness: 12-month results from a cluster randomized trial of an adaptive implementation strategy. Implement Sci. 2014;9:163. doi:10.1186/s13012-014-0163-3
19. Lewis CC, Klasnja P, Powell BJ, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136. doi:10.3389/fpubh.2018.00136
20. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi:10.1186/1748-5908-4-50
21. Williams NJ, Glisson C, Hemmelgarn A, Green P. Mechanisms of change in the ARC organizational strategy: increasing mental health clinicians’ EBP adoption through improved organizational culture and capacity. Adm Policy Ment Health. 2017;44(2):269-283. doi:10.1007/s10488-016-0742-5
22. French C, Pinnock H, Forbes G, Skene I, Taylor SJC. Process evaluation within pragmatic randomised controlled trials: what is it, why is it done, and can we find it? A systematic review. Trials. 2020;21(1):916. doi:10.1186/s13063-020-04762-9
23. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76. doi:10.1007/s10488-010-0319-7
24. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117. doi:10.1186/1748-5908-8-117
25. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58. doi:10.1186/s13012-019-0898-y
26. Audrey S, Holliday J, Parry-Langdon N, Campbell R. Meeting the challenges of implementing process evaluation within randomized controlled trials: the example of ASSIST (A Stop Smoking in Schools Trial). Health Educ Res. 2006;21(3):366-377. doi:10.1093/her/cyl029
27. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.
28. Esmail LC, Barasky R, Mittman BS, Hickam DH. Improving comparative effectiveness research of complex health interventions: standards from the Patient-Centered Outcomes Research Institute (PCORI). J Gen Intern Med. 2020;35(Suppl 2):875-881. doi:10.1007/s11606-020-06093-6
29. Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413-416. doi:10.1136/bmj.332.7538.413
30. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ. 2004;328(7455):1561-1563. doi:10.1136/bmj.328.7455.1561
31. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375(5):454-463. doi:10.1056/NEJMra1510059
32. Grol R, Grimshaw J. Evidence-based implementation of evidence-based medicine. Jt Comm J Qual Improv. 1999;25(10):503-513.
33. Flum DR, Davidson GH, Monsell SE, et al. A randomized trial comparing antibiotics with appendectomy for appendicitis. N Engl J Med. 2020;383(20):1907-1919. doi:10.1056/NEJMoa2014320
34. Adrian M, Lyon AR, Nicodimos S, Pullmann MD, McCauley E. Enhanced “train and hope” for scalable, cost-effective professional development in youth suicide prevention. Crisis. 2018;39(4):235-246. doi:10.1027/0227-5910/a000489
35. Cucciare MA, Marchant K, Abraham T, et al. A randomized controlled trial comparing a manual and computer version of CALM in VA community-based outpatient clinics. J Affect Disord Rep. 2021;6:100202. doi:10.1016/j.jadr.2021.100202
36. Kolko DJ, McGuier EA, Turchi R, et al. Care team and practice-level implementation strategies to optimize pediatric collaborative care: study protocol for a cluster-randomized hybrid type III trial. Implement Sci. 2022;17(1):20. doi:10.1186/s13012-022-01195-7
37. Kolko DJ, Campo J, Kilbourne AM, Hart J, Sakolsky D, Wisniewski S. Collaborative care outcomes for pediatric behavioral health problems: a cluster randomized trial. Pediatrics. 2014;133(4):e981-992. doi:10.1542/peds.2013-2516
38. Hogue A, Ozechowski TJ, Robbins MS, Waldron HB. Making fidelity an intramural game: localizing quality assurance procedures to promote sustainability of evidence-based practices in usual care. Clin Psychol Sci Pract. 2013;20(1):60. doi:10.1111/cpsp.12023
39. Hartzler B, Lyon AR, Walker DD, Matthews L, King KM, McCollister KE. Implementing the teen marijuana check-up in schools: a study protocol. Implement Sci. 2017;12(1):103. doi:10.1186/s13012-017-0633-5
40. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
41. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) statement. BMJ. 2017;356:i6795. doi:10.1136/bmj.i6795
42. Pawson R. Pragmatic trials and implementation science: grounds for divorce? BMC Med Res Methodol. 2019;19(1):176. doi:10.1186/s12874-019-0814-9
