1. Introduction
Inherent in any clinical trial is the fundamental challenge of balancing internal validity (i.e., minimizing bias, which can be facilitated by evaluating manualized/highly standardized treatments delivered by highly proficient expert clinicians and by ensuring that those collecting outcome data are blinded) with external validity (i.e., generalizability, which is usually facilitated by evaluating treatments in real-world clinical settings).[4] Explanatory clinical trials favor internal validity, while pragmatic clinical trials are designed to favor external validity.[4; 25] This balancing challenge is particularly evident when designing pragmatic clinical trials[6] to influence clinical practice and inform future policy.[25] This paper focuses on behavioral pain interventions, which can be defined as interventions designed to change participants’ behavior, cognition, and affect in order to more effectively manage pain (see Tables 1 and 2 for examples of behavioral interventions and clinical trial terminology, respectively). This paper provides a perspective on pragmatic trials of such interventions that: (1) highlights strengths and limitations, (2) identifies key issues for trial planning and conduct, and (3) makes recommendations for publishing related manuscripts. We focus on trials conducted in a given health system (e.g., Kaiser Permanente, Veterans Health Administration, National Health Service) – “embedded” pragmatic clinical trials[13; 14; 22; 27] – whose distinguishing feature is that the behavioral intervention being tested is delivered within routine clinical care.
Table 1. Examples of behavioral pain interventions.
Intervention | Goal | Core Techniques |
---|---|---|
Behavior therapy | Increase the frequency of engaging in adaptive daily activities (“well behaviors”) and decrease the frequency of maladaptive daily activities (“pain behaviors”). | 1. Teaching individuals how to use reinforcement principles to achieve their goals. 2. Behavioral activation via pleasant activity scheduling and/or exercise. 3. Using the social environment to support improvements (e.g., via training partners/caregivers or changing work environment). |
Cognitive-behavioral therapy | Teach individuals: (1) to understand how thoughts, feelings, and behaviors affect their adjustment to pain and (2) to develop skills for better managing pain-related thoughts, feelings, and behaviors. | 1. Self-monitoring of thoughts, feelings, and behaviors. 2. Behavioral activation. 3. Learning how to challenge and restructure overly negative/maladaptive pain-related thoughts. 4. Training in problem solving. |
Mindfulness-based stress reduction | Enhance awareness of the present moment so as to foster understanding of, and reduce reactivity to, pain and pain-related stressors. | 1. Sitting meditation. 2. Body scan meditation. 3. Walking meditation. 4. Mindfulness during daily activities. |
Acceptance and commitment therapy | To allow oneself: (1) to experience pain and unpleasant feelings and lessen reactivity to those feelings and (2) to commit to engaging in valued activities, despite experiencing pain and unpleasant feelings. | 1. Practicing acceptance of unwanted experiences. 2. Cognitive defusion—learning to separate oneself from one’s thoughts and emotions. 3. Mindfulness of the present moment. 4. Self-observation of thoughts and related emotions and behaviors. 5. Identifying one’s values. 6. Values-based goal setting. |
Hypnotic cognitive therapy | Learn self-hypnosis for nurturing and enhancing adaptive thoughts. | 1. Self-monitoring of thoughts. 2. Learning to evaluate thoughts as adaptive or maladaptive. 3. Learning self-hypnosis skills to facilitate the incorporation of adaptive thoughts as “automatic” thoughts. |
Table 2. Clinical trial terminology.
Term | Definition |
---|---|
Explanatory Trial | Primary aim of determining how an intervention performs under ideal conditions (i.e., efficacy). |
Pragmatic Trial | Primary aim of determining how an intervention performs in real-world conditions (i.e., effectiveness). |
Embedded Trial | Performed in a clinical setting, with interventions delivered by health care providers as part of routine care. |
Cluster Randomized Trial | Unit of randomization is at a level other than individual (e.g., clinic or hospital). |
Type 1 Hybrid Effectiveness-Implementation Trial | Primary aim is to determine intervention effectiveness and secondary aim is to gain better understanding of implementation of the intervention. Example Primary Aim: Determine if mindfulness-based stress reduction improves physical function for those with chronic low back pain, when compared to usual medical care. Example Secondary Aim: Determine what percentage of individuals with chronic low back pain completed the entire mindfulness-based stress reduction program. |
Type 2 Hybrid Effectiveness-Implementation Trial | Co-primary aims to simultaneously determine intervention effectiveness and impact of implementation strategies. Example Primary Aim: Determine if mindfulness-based stress reduction improves physical function for those with chronic low back pain, when compared to usual medical care. Example Primary Aim: Determine best method of mindfulness-based stress reduction program training (in-person vs. online vs. hybrid) for improving provider competence. |
Type 3 Hybrid Effectiveness-Implementation Trial | Primary aim is to test implementation strategy and secondary aim is to describe associated clinical outcomes. Example Primary Aim: Determine best method of mindfulness-based stress reduction program training (in-person vs. online vs. hybrid) for improving provider competence. Example Secondary Aim: Compare clinical outcomes for patients seeking care from providers who completed in-person training vs. online training. |
2. Strengths and Limitations of Pragmatic Trials of Behavioral Interventions
Clinical trials exist on a pragmatic to explanatory continuum.[5; 19] Explanatory trials (also known as efficacy studies) are highly valued for their focus on internal validity.[4] Although strong conclusions regarding efficacy can be drawn from explanatory trials, methodological shortcomings can limit generalization to the delivery of these interventions in actual health system settings.[25; 26] Systematic reviews can address this problem only to some extent: for example, reviews focused on a specific intervention (e.g., a meditation-based protocol) can do so when they examine the results of trials of that intervention conducted by different therapeutic teams across different settings.
Pragmatic trials may be particularly important for addressing knowledge gaps left by explanatory trials regarding applicability to important clinical populations excluded from those trials on grounds such as age, clinical features of the pain condition, multimorbidity, and psychiatric conditions.[4] Embedded pragmatic trials can provide a means to address important elements of external validity.[22; 25] For most behavioral interventions (which represented over a third of pragmatic trials in a recent systematic review [9]), these trials center on whether the interventions remain effective when delivered to diverse clinical populations. In the following section, we consider both challenges and opportunities that can arise in the design and conduct of such trials.
3. Key issues to consider for behavioral pragmatic trial design and conduct
3.1. The need to justify a pragmatic design
Whether a pragmatic trial design offers the best way to answer a given research question is a fundamental issue to resolve during planning. There are no definitive criteria, but there are certain questions researchers should consider when making this decision. First, has the intervention being considered already demonstrated sufficient efficacy to warrant testing effectiveness (if not, there may be little to gain from a pragmatic trial)? Second, how complex and feasible is the behavioral intervention, and what resources are required to implement it in a given health system?[16] Complexity can arise from multiple sources that act independently or that interact in producing the outcomes of interest, and careful design work often maps presumed mechanisms and pathways before embarking on the trial.[2] Feasibility is related to how realistic it would be to incorporate the behavioral intervention of interest into clinical settings after the trial. Issues related to intervention complexity and feasibility are not specific to pragmatic trials. However, they may deserve extra consideration in the case of embedded trial designs that not only test intervention effectiveness but also aim to sustain delivery of the intervention after trial completion. Sustaining intervention delivery is not typically an expectation when conducting an efficacy trial.
Answering these questions will guide researchers to design a trial that primarily tests behavioral intervention effectiveness, primarily tests the ability to implement the behavioral intervention, or tests both intervention effectiveness and implementation.[3] A framework proposed by Curran and colleagues [3] describes three types of effectiveness-implementation hybrid study designs. In a Type 1 study, the primary aim is to test intervention effectiveness, while a secondary aim is to collect information on how the intervention was implemented and/or contextual information about intervention delivery. In a Type 2 study, the effectiveness and implementation research questions are considered co-primary aims; thus, there is equal interest in determining intervention effectiveness and the ability to deliver the treatment. Finally, a Type 3 study has a primary aim to test different implementation strategies to determine which are best, with a secondary aim of determining treatment outcome. An expanded framework is available,[12] but for many research questions the original Type 1–3 framework may be sufficient to guide decision making for key design elements.
Third, what is the most appropriate design to address the effectiveness of the intervention in real-world settings? If a randomized controlled trial is deemed the best approach, investigators still need to determine whether randomization should occur at the individual level or at the site level (i.e., cluster randomization). A trial primarily focused on testing effectiveness might benefit from individual randomization, whereas a trial testing implementation outcomes might use cluster randomization to permit efficient training of providers and to limit possible treatment contamination. Other designs should also be considered, however. These include, for example, a stepped wedge design, in which participants at different study sites are crossed over from standard care to the intervention being examined at different times during the trial.[10]
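To make the stepped wedge concept concrete, below is a minimal sketch that generates a rollout schedule in which groups of sites cross over from standard care (0) to the intervention (1) at successive periods. The site counts, period counts, and function name are illustrative assumptions, not details from any cited trial.

```python
import pandas as pd

def stepped_wedge_schedule(n_sites: int, n_periods: int) -> pd.DataFrame:
    """Build a stepped wedge rollout: every site starts in standard care (0),
    and groups of sites cross over to the intervention (1) one period at a time."""
    n_steps = n_periods - 1                   # period 0 is an all-control baseline
    sites_per_step = -(-n_sites // n_steps)   # ceiling division
    rows = []
    for site in range(n_sites):
        crossover = 1 + site // sites_per_step  # first period this site delivers the intervention
        rows.append([1 if period >= crossover else 0 for period in range(n_periods)])
    return pd.DataFrame(
        rows,
        index=[f"site_{s}" for s in range(n_sites)],
        columns=[f"period_{t}" for t in range(n_periods)],
    )

# Six sites over four periods: by the final period, all sites deliver the intervention.
print(stepped_wedge_schedule(n_sites=6, n_periods=4))
```

The staggered schedule is what lets each site serve as its own control while still guaranteeing that every site eventually receives the intervention.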
Fourth, the choice of comparator in a pragmatic trial of a behavioral intervention is related to two key factors. First, the patient perspective is of central importance,[9] because patients are the ones seeking help for their pain, they want to know that a new behavioral treatment is better than a comparator (e.g., current care), and they or their funders are paying for treatment. The PRECIS-2 tool [18; 20] highlights the importance of engaging stakeholders (e.g., patients in the setting(s) in which the trial findings might apply) in the design and planning of a pragmatic trial. Second, the choice of a comparator is related to clinical standards and feasibility of delivery, because like the treatment(s) being tested, it is also delivered by clinical staff. A common design choice is to test whether the addition of a behavioral intervention is more effective than current practice (i.e., an A vs. A-plus trial).[24] Another, less common, design is to test two alternative treatments (e.g., comparing a behavioral intervention to a non-behavioral approach; an A vs. B trial).[24] In A vs. B designs, the order of treatment can be important (e.g., do effects vary if treatment A is delivered before treatment B, or vice versa?). If there is prior evidence that order of treatment is important, then testing the order of treatments will need to be part of the research question. Investigators should be strategic about this decision because, while an A vs. A-plus trial may be appealing from an implementation perspective, the lack of A vs. B trials has been noted as a rate-limiting factor in using clinical data to support medical decisions.[24]
A final issue to consider is whether the outcomes of interest can be easily collected during routine clinical encounters. Outcomes could potentially be obtained from a clinical record, which for many health systems is an electronic health record. If outcomes cannot be obtained in this manner, then researchers could consider other “direct to patient” methods that are consistent with a pragmatic approach. For example, wearables, smartphone applications, and secure email surveys all provide opportunities to collect outcome data outside of the electronic health record. If the research question of interest involves more dedicated measurement approaches (e.g., multiple specimen collections), then a pragmatic trial might not be the best option.
3.2. The need to increase generalizability while controlling bias
Behavioral pragmatic trials are less able than efficacy trials to control for possible sources of methodological bias. Less restrictive inclusion and exclusion criteria create greater participant heterogeneity and potentially greater variability in treatment outcome. Relying on clinicians already working in the health system limits the amount of training and competence checking that is feasible; as a result, how the behavioral intervention is provided can vary widely. These sources of heterogeneity increase with the number of sites included. This heterogeneity can also be viewed as a strength of embedded pragmatic trials, because it increases study generalizability: a behavioral treatment found to be effective under these conditions is a promising candidate for adoption in everyday practice.
At the same time, one possible negative effect of higher variability across multiple factors is that effect sizes may be lower than those observed in the foundational explanatory trials. The primary challenge here is the risk of an underpowered study. Thus, an increase in sample size – at least relative to the sample sizes used in prior efficacy trials – may be needed to observe clinically meaningful/relevant effects. Relatedly, even when investigators understand that there will be greater variability in the study participants, clinicians, and sites in pragmatic trials, they are unlikely to know a priori which participant, clinician, and site related variables are most closely associated with outcomes. Furthermore, a design aspect that enhances one part of the pragmatic trial may require a change in randomization approach to achieve balance across different clinical sites. For example, in cluster randomized trials there is known heterogeneity in pain populations seeking care, and accounting for that variability using novel randomization approaches (e.g., covariate-constrained randomization) may be necessary to increase the chances of balance between comparison arms.[17]
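As a concrete illustration of covariate-constrained randomization for a cluster randomized trial, the sketch below enumerates all equal-split allocations of eight clinics, keeps the best-balanced 10% with respect to two site-level covariates, and then randomly selects one allocation from that constrained set. The clinic data, covariate names, and 10% cutoff are invented for illustration, not taken from the cited evaluation.[17]

```python
import itertools
import random

import numpy as np
import pandas as pd

# Hypothetical site-level covariates for 8 clinics (values invented for illustration).
clinics = pd.DataFrame({
    "clinic": [f"clinic_{i}" for i in range(8)],
    "mean_baseline_pain": [5.1, 6.3, 4.8, 5.9, 6.1, 5.4, 4.9, 6.0],
    "prop_on_opioids": [0.32, 0.45, 0.28, 0.51, 0.39, 0.35, 0.30, 0.48],
})

def imbalance(assignment: np.ndarray) -> float:
    """Sum of squared standardized differences in covariate means between arms."""
    score = 0.0
    for col in ["mean_baseline_pain", "prop_on_opioids"]:
        x = clinics[col].to_numpy()
        diff = x[assignment == 1].mean() - x[assignment == 0].mean()
        score += (diff / x.std()) ** 2
    return score

# Enumerate every 4-vs-4 split, rank by balance, and keep the best 10%.
candidates = [
    np.array([1 if i in arm1 else 0 for i in range(8)])
    for arm1 in itertools.combinations(range(8), 4)
]
candidates.sort(key=imbalance)
constrained_set = candidates[: max(1, len(candidates) // 10)]

# The actual randomization: draw one allocation from the constrained set.
rng = random.Random(2024)  # fixed seed so the example is reproducible
chosen = rng.choice(constrained_set)
clinics["arm"] = np.where(chosen == 1, "intervention", "usual_care")
print(clinics)
```

Restricting the draw to well-balanced allocations preserves randomization while preventing a chance assignment in which, say, all high-opioid-use clinics land in one arm.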
As noted previously, one strength of large samples that are more representative of broad patient populations is that findings are more likely to generalize. Large sample sizes may also permit secondary analyses of the data to identify factors associated with better or worse treatment outcomes. A priori hypotheses based on established theories/conceptual models or on discussions with stakeholders (see next section) should guide these analyses.[2] There is growing recognition of the need to test theory-based hypotheses of how behavioral treatments work (usually addressed using mediation analyses) and for whom these treatments work (usually addressed using moderation analyses).[8] With larger sample sizes, both mediation and moderation analyses can be conducted with greater power.
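To illustrate what a moderation analysis looks like in practice, the following sketch simulates trial data in which a baseline characteristic modifies the treatment effect, then tests the treatment-by-moderator interaction in an ordinary least squares model. The variable names (e.g., catastrophizing) and effect sizes are assumptions chosen for the example, not findings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 800  # larger pragmatic-trial samples give moderation tests more power

# Simulated data: the treatment benefit shrinks as baseline catastrophizing rises.
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 0 = usual care, 1 = behavioral intervention
    "catastrophizing": rng.normal(0, 1, n),    # standardized baseline moderator
})
df["pain_change"] = (
    -0.5 * df["treatment"]                             # average treatment benefit
    + 0.3 * df["treatment"] * df["catastrophizing"]    # moderation effect
    + rng.normal(0, 1, n)                              # residual noise
)

# Moderation corresponds to a significant treatment x catastrophizing interaction term.
model = smf.ols("pain_change ~ treatment * catastrophizing", data=df).fit()
print(model.summary().tables[1])
```

A significant interaction coefficient is the statistical signature of "for whom the treatment works"; prespecifying the moderator, as the text recommends, protects the analysis from being dismissed as a fishing expedition.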
3.3. The need for stakeholder involvement
Individuals who have a significant stake in the conduct and outcome of behavioral pain pragmatic trials include those who are receiving or might ultimately receive the treatment, family members of those receiving treatment, front-line clinicians providing the treatment, health care system administrators, and policy makers.[1] Stakeholder involvement is important for, but not unique to, behavioral pragmatic trials, by virtue of their closeness to everyday clinical practice. Trial sponsors increasingly require stakeholder involvement from the start,[23; 25] including input on study hypotheses, design, feasibility issues related to the study population and sites, and input into the intervention protocol. Stakeholders with limited knowledge of the rationale and treatment techniques included in the behavioral intervention under study may suggest changes in content that are inconsistent with the underlying theory or conceptual model, that are not acceptable to study participants, or that are not feasible given trial resources. On the other hand, given stakeholders’ depth of knowledge of the clinical population and treatment setting(s), they can contribute to the development of hypotheses regarding the patient and treatment setting characteristics that could potentially modify treatment effects. These characteristics, identified via stakeholder engagement, could then be hypothesized a priori as treatment outcome moderators, strengthening the scientific rigor of the study design. In any case, with increased stakeholder input, investigators must be prepared to develop decision-making processes for how to best integrate this input; these and other complexities are explored in a growing literature on stakeholder involvement.[15; 21]
An example of the benefits of greater stakeholder involvement is a study of behavioral pain treatments for veterans. In the process of designing and conducting an embedded clinical trial in a Veterans Affairs (VA) system in which three active group treatments were offered to veterans,[27] the extent of unmet need became clear. Many referrals were made to providers participating in the trial, and ultimately over 500 veterans were treated, some of whom elected to receive a behavioral treatment without trial participation. Moreover, the clinicians who were trained and supervised for over a year in the treatments were able to continue providing the treatments to veterans after the trial ended. In addition to this “infrastructure” benefit, valuable knowledge about the treatments was gained from stakeholder evaluations.[20]
3.4. The need to ensure treatment fidelity
A major challenge in pragmatic trials of behavioral interventions is ensuring the fidelity and integrity of the treatment being delivered. This is a simpler issue for pharmacotherapy trials than for complex interventions.[2; 7] Treatment fidelity depends on clinicians delivering the treatment as intended, yet in pragmatic trials the clinicians may lack prior expertise in the intervention. Reporting on provider training and experience makes it possible to evaluate treatment fidelity for the study.[9] This is essential for understanding the validity, and ultimately the impact, of a pragmatic trial; identifying the function, rather than the simple content, of interventions also provides a metric for standardization across sites.[7]
Clinicians who provide the behavioral intervention(s) will almost always require at least some training and supervision to ensure a minimal level of competency; in some cases, they may require extensive training. Investigators will usually need to develop treatment manuals, training procedures, and plans for monitoring treatment delivery to identify and remediate performance that falls below standards. These procedures can be viewed as “pragmatic” in the sense that, in clinical practice, licensed health care providers need to be competent in the treatments they provide. Furthermore, when new behavioral interventions are introduced into a health care system, training a provider to competence to provide the treatment without supervision is considered essential.[11] Training materials and supervision procedures developed for a pragmatic trial can serve as resources for enhancing the uptake of interventions, ultimately making them more accessible to the wider population of patients who could benefit.
One example of how treatment training and fidelity issues can be addressed is the study with veterans cited in the previous section.[27] In that study, all study clinicians were required to participate in a 2-day workshop to learn how to facilitate the treatment groups, and treatment fidelity was monitored throughout the trial via review and coding of audio recordings of a random selection of 25% of the treatment sessions. The fidelity coding included the identification of treatment components viewed as important but shared across all treatments (e.g., a summary of group rules), components viewed as essential and unique to each treatment (e.g., training of mindfulness skills in the mindfulness group), and whether a session contained components that were inappropriate for that group (e.g., training of mindfulness skills in the education group). In addition, the training materials developed for this protocol have subsequently been used to train clinicians in other VA treatment centers.
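A minimal sketch of the session-sampling step described above (drawing a random 25% of each clinician's recorded sessions for fidelity coding) follows; the session log, clinician labels, and per-clinician stratification are illustrative assumptions rather than the cited study's actual procedure.

```python
import random
from collections import defaultdict

# Hypothetical log of recorded sessions as (clinician, session_id) pairs.
sessions = ([("clinician_a", f"session_{i}") for i in range(20)]
            + [("clinician_b", f"session_{i}") for i in range(12)])

def sample_for_fidelity(session_log, rate=0.25, seed=42):
    """Randomly select `rate` of each clinician's sessions for fidelity coding."""
    by_clinician = defaultdict(list)
    for clinician, session_id in session_log:
        by_clinician[clinician].append(session_id)
    rng = random.Random(seed)  # fixed seed makes the audit sample reproducible
    return {
        clinician: sorted(rng.sample(ids, max(1, round(rate * len(ids)))))
        for clinician, ids in by_clinician.items()
    }

print(sample_for_fidelity(sessions))
```

Sampling within clinician, rather than across the whole pool, ensures that no provider's sessions escape review simply because they treated fewer patients.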
4. Publishing pragmatic trials of behavioral interventions
Papers based on pragmatic trials of behavioral interventions are of interest to scientists, providers, patients, policy makers, and the public.[4; 22; 24; 25] Authors wishing to publish peer-reviewed papers based on behavioral pragmatic trials will face challenges. In most pragmatic trials, the investigators’ lengthy and detailed deliberations about each decision in the design and conduct of the trial are important to document, but can be a challenge to report within the word limits imposed by journals.
Using the PRECIS-2 tool when reporting the design and protocol will facilitate understanding of which trial domains were more pragmatic and which were more explanatory.[18] The PRECIS-2 domains are eligibility criteria, recruitment, setting, organization, flexibility (delivery and adherence), follow-up, primary outcome, and primary analysis. Given that no trial is entirely pragmatic, these domains are expected to vary along the pragmatic-to-explanatory spectrum. Standardized reporting of trial results is facilitated by the CONSORT extension for pragmatic trials,[28] and by CONSORT extensions for describing elements of the intervention (TIDieR) and design (Reporting of Stepped Wedge Cluster Randomized Trials).
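As an illustration of how PRECIS-2 ratings might be tabulated when reporting a protocol, the sketch below scores each domain on the tool's 5-point scale (1 = very explanatory, 5 = very pragmatic) and prints a simple profile; the scores and rationales are invented for a hypothetical trial.

```python
# Hypothetical PRECIS-2 ratings (1 = very explanatory, 5 = very pragmatic);
# the scores and rationales below are invented for illustration.
precis2_scores = {
    "eligibility criteria":    (5, "broad inclusion, minimal exclusions"),
    "recruitment":             (4, "identified during routine clinic visits"),
    "setting":                 (5, "delivered in usual primary care clinics"),
    "organization":            (3, "extra training required for clinicians"),
    "flexibility (delivery)":  (3, "manualized sessions, limited tailoring"),
    "flexibility (adherence)": (4, "no special measures to enforce attendance"),
    "follow up":               (4, "outcomes drawn from the electronic record"),
    "primary outcome":         (5, "patient-relevant physical function measure"),
    "primary analysis":        (5, "intention-to-treat on all available data"),
}

for domain, (score, rationale) in precis2_scores.items():
    print(f"{domain:25s} {score}  {'#' * score:6s} {rationale}")
```

A profile like this makes it easy for readers to see at a glance where a given trial sits on the pragmatic-to-explanatory spectrum, domain by domain.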
Early in the course of a pragmatic trial of a behavioral intervention, authors should develop a comprehensive publication plan. It could include papers describing: (1) the significance of the problem and the importance of testing intervention effectiveness, (2) the development and implementation of the intervention, focusing on the unique aspects of the given trial (e.g., stakeholder engagement processes that refined intervention delivery), (3) the trial protocol, (4) the rationale and analytical plans for key secondary/exploratory aims, (5) the primary results, (6) secondary and/or exploratory aims, and (7) lessons learned from the trial.
5. Conclusion
In this paper we have provided guidance for individuals interested in designing and conducting pragmatic trials of behavioral interventions, by addressing the need to balance external validity and internal validity. Embracing the challenges inherent in designing pragmatic trials is important for advancing behavioral pain science in a manner that yields improvements in patient care that can be effectively evaluated and delivered in everyday settings.
Acknowledgements
Dr. Keefe’s contribution was supported in part by the following NIH grants: from the National Institute on Aging (5P30-AG064201-03577980), from the National Institute of Diabetes and Digestive and Kidney Diseases (U01-DK123813-03), from the National Center for Complementary and Integrative Health (UG3/UH3 AT009790), and from the National Institute of Neurological Disorders and Stroke (1U24-NS114416-01).
Dr. Jensen’s contribution was supported in part with a grant from the National Center for Complementary and Integrative Health at the NIH, award number AT008559.
Dr. George’s contribution to this work was made possible by Grant Number U24 AT009769 (Coordinating Center) and UG3/UH3 Demonstration Project AT009790 from the National Center for Complementary and Integrative Health (NCCIH) and the Office of Behavioral and Social Sciences Research (OBSSR).
Footnotes
This manuscript is a product of the NIH-DOD-VA Pain Management Collaboratory. For more information about the Collaboratory, visit https://painmanagementcollaboratory.org/.
The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the NCCIH, OBSSR, or the National Institutes of Health.
References
- [1].Bastian LA, Cohen SP, Katsovich L, Becker WC, Brummett BR, Burgess DJ, Crunkhorn AE, Denneson LM, Frank JW, Goertz C, Ilfeld B, Kanzler KE, Krishnaswamy A, LaChappelle K, Martino S, Mattocks K, McGeary CA, Reznik TE, Rhon DI, Salsbury SA, Seal KH, Semiatin AM, Shin MH, Simon CB, Teyhen DS, Zamora K, Kerns RD, NIH-DOD-VA Pain Management Collaboratory. Stakeholder Engagement in Pragmatic Clinical Trials: Emphasizing Relationships to Improve Pain Management Delivery and Outcomes. Pain Med 2020;21(Suppl 2):S13–S20.
- [2].Campbell NC, Murray E, Darbyshire J, Emery J, Farmer A, Griffiths F, Guthrie B, Lester H, Wilson P, Kinmonth AL. Designing and evaluating complex interventions to improve health care. BMJ 2007;334(7591):455–459.
- [3].Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012;50(3):217–226.
- [4].Ford I, Norrie J. Pragmatic Trials. N Engl J Med 2016;375(5):454–463.
- [5].Gaglio B, Phillips SM, Heurtin-Roberts S, Sanchez MA, Glasgow RE. How pragmatic is it? Lessons learned using PRECIS and RE-AIM for determining pragmatic characteristics of research. Implement Sci 2014;9:96.
- [6].Gordon KS, Peduzzi P, Kerns RD. Designing Trials with Purpose: Pragmatic Clinical Trials of Nonpharmacological Approaches for Pain Management. Pain Med 2020;21(Suppl 2):S7–S12.
- [7].Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ 2004;328(7455):1561–1563.
- [8].Hill JC, Fritz JM. Psychosocial influences on low back pain, disability, and response to treatment. Phys Ther 2011;91(5):712–721.
- [9].Hohenschurz-Schmidt D, Kleykamp BA, Draper-Rodi J, Vollert J, Chan J, Ferguson M, McNicol E, Phalip J, Evans SR, Turk DC, Dworkin RH, Rice ASC. Pragmatic trials of pain therapies: a systematic review of methods. Pain 2021.
- [10].Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials 2007;28(2):182–191.
- [11].Keefe FJ, Main CJ, George SZ. Advancing Psychologically Informed Practice for Patients With Persistent Musculoskeletal Pain: Promise, Pitfalls, and Solutions. Phys Ther 2018;98(5):398–407.
- [12].Kemp CG, Wagenaar BH, Haroz EE. Expanding Hybrid Studies for Implementation Research: Intervention, Implementation Strategy, and Context. Front Public Health 2019;7:325.
- [13].Kerns RD, Brandt CA. NIH-DOD-VA Pain Management Collaboratory: Pragmatic Clinical Trials of Nonpharmacological Approaches for Management of Pain and Co-occurring Conditions in Veteran and Military Health Systems: Introduction. Pain Med 2020;21(Suppl 2):S1–S4.
- [14].Kerns RD, Brandt CA, Peduzzi P. NIH-DoD-VA Pain Management Collaboratory. Pain Med 2019;20(12):2336–2345.
- [15].Langley J, Wolstenholme D, Cooke J. ‘Collective making’ as knowledge mobilisation: the contribution of participatory design in the co-creation of knowledge in healthcare. BMC Health Serv Res 2018;18(1):585.
- [16].Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Med Res Methodol 2017;17(1):76.
- [17].Li F, Turner EL, Heagerty PJ, Murray DM, Vollmer WM, DeLong ER. An evaluation of constrained randomization for the design and analysis of group-randomized trials with binary outcomes. Stat Med 2017;36(24):3791–3806.
- [18].Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ 2015;350:h2147.
- [19].Loudon K, Zwarenstein M, Sullivan F, Donnan P, Treweek S. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose. Trials 2013;14:115.
- [20].Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci 2021;16(1):7.
- [21].Oliver K, Kothari A, Mays N. The dark side of coproduction: do the costs outweigh the benefits for health research? Health Res Policy Syst 2019;17(1):33.
- [22].Ramsberg J, Platt R. Opportunities and barriers for pragmatic embedded trials: Triumphs and tribulations. Learn Health Syst 2018;2(1):e10044.
- [23].Sheridan S, Schrandt S, Forsythe L, Hilliard TS, Paez KA, Advisory Panel on Patient Engagement. The PCORI Engagement Rubric: Promising Practices for Partnering in Research. Ann Fam Med 2017;15(2):165–170.
- [24].Simon GE, Platt R, Hernandez AF. Evidence from Pragmatic Trials during Routine Care - Slouching toward a Learning Health System. N Engl J Med 2020;382(16):1488–1491.
- [25].Weinfurt KP, Hernandez AF, Coronado GD, DeBar LL, Dember LM, Green BB, Heagerty PJ, Huang SS, James KT, Jarvik JG, Larson EB, Mor V, Platt R, Rosenthal GE, Septimus EJ, Simon GE, Staman KL, Sugarman J, Vazquez M, Zatzick D, Curtis LH. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory. BMC Med Res Methodol 2017;17(1):144.
- [26].Williams ACC, Fisher E, Hearn L, Eccleston C. Evidence-based psychological interventions for adults with chronic pain: precision, control, quality, and equipoise. Pain 2021;162(8):2149–2153.
- [27].Williams RM, Ehde DM, Day M, Turner AP, Hakimian S, Gertz K, Ciol M, McCall A, Kincaid C, Pettet MW, Patterson D, Suri P, Jensen MP. The chronic pain skills study: Protocol for a randomized controlled trial comparing hypnosis, mindfulness meditation and pain education in Veterans. Contemp Clin Trials 2020;90:105935.
- [28].Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, Oxman AD, Moher D, CONSORT group, Pragmatic Trials in Healthcare group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.