Abstract
Many questions regarding the clinical management of people experiencing pain, and related health policy decisions, may best be answered by pragmatic controlled trials. To generate clinically relevant and widely applicable findings, such trials aim to reproduce elements of routine clinical care or are embedded within clinical workflows. In contrast with traditional efficacy trials, pragmatic trials are intended to address a broader set of external validity questions critical for stakeholders (clinicians, healthcare leaders, policymakers, insurers, and patients) in considering the adoption and use of evidence-based treatments in daily clinical care. This article summarizes methodological considerations for pragmatic trials, mainly concerning methods of fundamental importance to the internal validity of trials. The relationship between these methods and common pragmatic trial methods and goals is considered, recognizing that the resulting trial designs are highly dependent on the specific research question under investigation. The basis of this statement was a systematic review of methods and a consensus meeting convened by the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) under the auspices of the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public–private partnership. The consensus process was informed by the preparatory systematic review, expert presentations, and panel and consensus discussions. In the context of pragmatic trials of pain treatments, we present fundamental considerations for the planning phase of such trials, including the specification of trial objectives, the selection of adequate designs, and methods to enhance internal validity while maintaining the ability to answer pragmatic research questions.
Keywords: Clinical trial, Clinical research methods, Pragmatic trials, Comparative effectiveness research, Pain, Analgesia
1. Introduction
Pragmatic clinical trials are designed to answer research questions directly relevant to clinical or health policy decision-making.46,117 Examples include comparing the relative effectiveness of established treatment options under everyday clinical circumstances or answering research questions related to clinical processes, such as strategies of treatment delivery, dosing, interactions between interventions, or stepped-care approaches.
Pragmatic trials have become increasingly common in the field of pain research and other areas53,62,81,118 because the narrow remit of traditional placebo-controlled trials cannot answer the full range of clinical questions. For example, a pragmatic trial is valuable to assess whether a given therapy works as well as, or better than, established care when studied in a broad population and in nonacademic settings,22 regardless of the underlying mechanisms of benefit. Such research questions are most pertinent for therapies with an established efficacy and safety profile. They are also commonly formulated for therapies that have limited evidence of efficacy but are already widely used in clinical practice, and where the potential for harm is judged to be low, such as many complementary and integrative therapies.53,62 Particularly in chronic pain-related research, pragmatic trials may overcome limitations of trials with stringent eligibility criteria by better reflecting the realities of clinical practice, which often include patients with multiple comorbidities, high levels of disability,3,35,76 or socioeconomic barriers to treatment participation.67,69,84 Finally, pragmatic trials may provide more realistic effect size estimates and enhance translation of research findings into clinical practice.92 Key terms relevant for this article are defined in Box 1.
Box 1. Glossary of relevant terms.
Term | Definition |
---|---|
Effectiveness | Effectiveness assesses whether an intervention is beneficial when provided under usual circumstances of healthcare practice (“Does it work in practice?”)55 |
Pragmatic RCT (used interchangeably with “effectiveness RCT”) | An RCT intended to directly inform clinical or health policy decision-making, usually investigating therapeutic effectiveness or comparative effectiveness under conditions similar to clinical practice46,82,117 |
Real-world evidence | “Information on health care that is derived from multiple sources outside typical clinical research settings, including electronic health records […], […] billing data, […] registries, and […] through personal devices and health applications.” (p. 2293)119 Real-world evidence is distinct from pragmatic trial data in that the latter are collected within a specifically designed research paradigm, whereas real-world evidence is derived from unmodified clinical practice |
Generalizability (used interchangeably with “external validity” and “applicability”)109,110 | The degree to which trial results may be considered valid in and applicable to participants, practitioners, interventions, outcome measures, and settings outside the respective trial30,109,110 |
Internal validity | “Internal validity describes the […] accuracy of the study results by minimizing error. Thus, internal validity is the degree to which changes in the dependent variable can be attributed to the intervention and is maximized by decreasing bias using design features such as random assignment, allocation concealment, and blinding” (p. 164)140 |
Efficacy | Efficacy is the extent to which an intervention provides benefit under ideal circumstances (“Can it work?”)55 |
Explanatory RCT (used interchangeably with “efficacy RCT”) | An RCT that tests the benefits and/or harms of a treatment under relatively ideal conditions, aimed primarily at investigating a scientific or biological problem46,82,117 |
Mechanistic RCT | An RCT that investigates treatment mechanisms under relatively ideal conditions (alongside benefits and harms or exclusively) |
RCT, randomized controlled trial.
While pragmatic trials are frequently portrayed as methodologically distinct from traditional explanatory randomized controlled trials (RCTs), a more suitable conceptualization is to view the role of RCTs on an explanatory-pragmatic spectrum.82,98,127 One end of the spectrum represents highly explanatory RCTs, which focus on answering mechanistic research questions and on evaluating efficacy and safety, often comparing treatments with placebo controls in a relatively homogeneous population. The other end of the spectrum represents RCTs with pragmatic aims.82 It is more helpful to examine the research question rather than individual trial methods to determine the pragmatism of trials because it is the research question which informs the choice of trial design and methods.92 In this sense, the distinction between pragmatic and explanatory depends on a trial's ability to answer a particular type of research question. Explanatory trials commonly ask efficacy questions (Box 1). This requires the trial design to control for effects of variables other than the studied intervention components, eg, by using placebo controls and narrow eligibility criteria.61 Pragmatic trials, because of their emphasis on enhancing the generalizability of findings, are frequently designed to reproduce routine clinical care112 or are embedded within it.73
In pragmatic trials, researchers have to achieve a balance between methods known to enhance internal validity and methods that align the trial with normal clinical practice. The appropriateness of “real-world” methods, such as flexible treatment delivery, depends on the question being asked and the intervention being tested. However, even if the research question is one of effectiveness, methods from normal clinical practice may unnecessarily compromise researchers' ability to interpret the findings, and mitigating steps are often possible that do not interfere with the trial's ability to answer a pragmatic research question. For example, although reflective of normal clinical practice, relatively flexible approaches to treatment delivery may make it unclear whether, and to what extent, participants actually received the allocated interventions. In this case, one may still conclude that the treatment is or is not effective, but only monitoring of protocol adherence, participant drop-out, or use of concomitant treatments would help determine whether the observed effects were due to the treatment or to other, confounding factors. This information is not only relevant for interpreting findings but is also “pragmatic” in that it can inform implementation and intervention development.
As Ford and Norrie noted in an influential 2016 article,46 “Pragmatism should not be synonymous with a laissez-faire approach to trial conduct. The aim is to inform clinical practice, and that can be achieved only with high-quality trials” (p. 462). Instead of dichotomizing into explanatory and pragmatic trials, these authors call for trials that adequately state and address their main objectives, including informing clinical practice. Therefore, each design choice requires consideration of at least 2 factors: its relation to the research question and its effects on trial quality.
This article presents considerations to help clinical pain researchers to optimize the balance between internal and external validity when they develop their trial design and methods (Box 2). Drawing on examples from pain research wherever possible, the article discusses fundamental considerations for the planning phase of pragmatic trials. These considerations include the clarification of trial objectives to facilitate the appropriate choice of design features, a summary of available trial designs, and several items relevant to increase a trial's internal validity, including available blinding and randomization methods. A second paper will discuss more specific research methods for conducting pragmatic trials of pain treatments. For example, this follow-up paper will include discussions of treatment delivery, comparator and control conditions, patient populations and study sites, outcome measures, study monitoring, and approaches to data analysis. Together, these publications will present best-practice research methods, proposing considerations for specific challenges and introducing methods to enhance the quality and value of pragmatic clinical trials.
Box 2. List of core considerations for pragmatic trials in pain interventions. Individual points are elaborated on in the article text.
- Clarify the objectives of the trial, including the appropriateness of a generally pragmatic vs a more explanatory approach
- Ensure that design choices allow trial objectives to be met. This includes:
  - considering whether adaptive trial designs and other less commonly used designs may better answer the research question than traditional parallel-group designs,
  - using the PRECIS-2 tool82 and additional considerations presented here to evaluate design choices on the explanatory–pragmatic continuum during the planning phase of a trial, and
  - balancing pragmatic design choices against more controlled approaches in the context of the research question
- Report trial conduct and findings using the CONSORT extension for pragmatic trials148 and all other relevant extensions
- Consider publishing the completed PRECIS-2 table101 along with justifications for design decisions in the context of the specific trial92
CONSORT, Consolidated Standards of Reporting Trials; PRECIS-2, Pragmatic–Explanatory Continuum Indicator Summary-2.
2. Methods of manuscript development
On October 22 and 23, 2020, a videoconference consensus meeting was held by the Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT), under the auspices of the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public–private partnership with the U.S. Food and Drug Administration. Meeting participants were invited by the IMMPACT steering committee based on their expertise or experience involving pragmatic trials and to represent stakeholders from patient organizations, public institutions (such as the FDA and the National Institutes of Health), and industry. In addition, all members of the ACTTION management, steering, executive, and oversight committees were invited. The meeting's objectives were to discuss important considerations and provide best-practice suggestions regarding the design, implementation, interpretation, and evaluation of pragmatic clinical trials of pain treatments to inform the planning, conduct, and reporting of such studies. Three consensus discussions were informed by nine 25-minute presentations by content experts and co-authors of this article. Presentations included the following topics: definitions and general considerations (L.D.), statistical approaches (S.E.), lessons learnt from pragmatic trials in various settings (A.W. and R.K.), study population definition and patient recruitment (J.M. and M.B.), study sites (J.F.), concomitant and rescue treatments (M.R.), and outcome domains and measures (M.B.). Furthermore, the results of a systematic review were presented (D.H.-S.).62 All participant details, lecture slides, and meeting transcripts are available on the IMMPACT web site, http://www.immpact.org/meetings/Immpact24/participants24.html. After the meeting, the first author drafted a consensus manuscript that was then reviewed by the co-authors. The reviewed materials and meeting discussions were then categorized into general and specific considerations with extensive internal manuscript reviews. The recommendations in this article are the product of vigorous discussions at the consensus meeting and continued iterative revisions of multiple draft manuscripts that were circulated among all the authors. The issues that required the most attention addressed the distinctions between pragmatic trials that are designed to meaningfully inform clinical practice and trials that prioritize the evaluation of treatment efficacy. The major concerns included the extent to which pragmatic trials can and should focus on bias control, including the measurement of expectations, as well as the relevance of clinical trial designs other than parallel-group RCTs and their congruency with pragmatic objectives.
3. Methods in current pragmatic trials of pain treatments
The background for these best-practice considerations is provided by a systematic review of methods of 57 self-labelled “pragmatic” or “comparative effectiveness trials” of pain treatments.62 Typically, such trials were multisite comparisons of 2 or more treatments, conducted across a broad spectrum of settings, recruiting several hundred participants living with chronic, mainly musculoskeletal pain and involving follow-up periods of 1 year on average. In the reviewed trials, complex nonpharmacological interventions were often studied, such as manual and physical therapies or acupuncture (28%) and cognitive-behavioral or other psychological interventions (16%). Twenty-one percent of trials investigated pharmacological treatments, 12% surgery, and a small percentage evaluated miscellaneous approaches such as multidisciplinary care, mind–body therapies, education, or alterations in general practice procedures. The most common comparators were another active intervention or “treatment as usual.” Participants were usually individually randomized, but 10% of trials used cluster randomization. Most trials were designed as superiority trials, aiming to detect a significant difference in outcomes between groups. Less than 10% were noninferiority or equivalence trials. Blinding of participants to group allocation was reported in a quarter of the trials (n = 13), with 3 studies “blinding” participants by randomizing trial practices without requiring participant consent,9,21,31 others comparing 2 treatments that were indistinguishable to patients,28,43 or using a “cohort multiple” design104 in which patients were unaware of alternate study conditions.139 Seven of the reviewed trials reported single-blinding or double-blinding by means of placebo or attention control groups.1,5,7,48,93,133,144 Outcome assessments were almost always blinded.
To assess design features of the reviewed trials across the pragmatic–explanatory spectrum, the Pragmatic–Explanatory Continuum Indicator Summary (PRECIS)-2 tool was used. This tool considers 9 domains of trial design on a spectrum from very explanatory (scored as “1”) to very pragmatic (or similar to usual practice in the field; scored as “5”). The methodological domains assessed by PRECIS-2 include eligibility criteria, recruitment methods, trial settings, expertise and resources used to deliver interventions, flexibility of delivery and adherence, follow-up methods, primary outcome choice, and the method of primary analysis.82 Across the sample of 57 recently published trials of pain treatments, the average PRECIS-2 ratings per domain ranged from 3.0 (SD 1.6) for recruitment, indicating considerable effort to recruit participants, to 4.5 (SD 1.0) for outcomes, indicating that primary outcome measures were typically clinically relevant.62
Beyond characterizing recently published pragmatic trials of pain treatments, the review highlighted several areas for improvement in methodology and reporting, such as providing clear rationales about the choice of trial methods. As a major methodological challenge, trial feasibility and validity had to be balanced with attempts to interfere minimally with routine care. Researchers responded to this challenge in often creative ways or by sacrificing one aspect for the other, for example, using more elaborate recruitment methods at the expense of “pragmatism,” as defined by PRECIS-2, but ensuring successful recruitment or recruitment targeted to their research question. Relatedly, pragmatic design choices were prioritized differently or were harder to achieve in some PRECIS-2 domains than in others. Trial sites generally were judged to be better organized and equipped than what would be expected in usual practice and follow-up intensity often exceeded normal practice (also see 53). Challenges to trial pragmatism partly depended on the trial's specific circumstances, for example, with trials of drug therapies using more treatment standardization or chronic pain studies investing more efforts into patient recruitment. This systematic evaluation of current methods illustrates the balancing act faced by trial designers: to answer pragmatic research questions while exerting a sufficient level of control for successful trial completion and research validity.
4. Consensus statement of best-practice considerations
4.1. Clarifying trial objectives
When considering a pragmatic attitude to trial design, researchers ought to clarify the appropriateness of and motivation for a pragmatic trial, including an appraisal of available efficacy and mechanistic literature. With a clearly defined study intention, the most appropriate design choices can be made.92
Nonblinded comparative effectiveness trials provide different kinds of information than placebo-controlled efficacy RCTs. Because both kinds of information are important, before testing the effectiveness of new treatments in routine practice, existing efficacy and safety evidence for a treatment or for core components of a multimodal treatment ought to be considered to determine whether more efficacy research is needed. Although sufficient efficacy and safety data are required for new drug approval,45 trials comparing the effectiveness of existing nonpharmacological therapies are regularly conducted in the absence of high-quality efficacy research.138 Whether this is appropriate depends on the research question and trial context. For example, devising credible control groups that distinguish treatment-specific effects of interest from other effects is a major challenge for trials of nonpharmacological therapies.91,99 Indeed, blinding difficulties have been used to justify unblinded comparative effectiveness designs.15,138 To overcome this challenge, specific guidance is becoming available for nonpharmacological trials,6,16 most recently a comprehensive guideline by Hohenschurz-Schmidt et al.63 In addition, when low-risk treatments are already widely used, it is sometimes difficult to justify the need to evaluate against placebo. In these cases, comparing effectiveness with another commonly used modality can be considered. In other circumstances, a pragmatic (ie, relevant to clinical or policy decision-making) research question may require a sham-controlled trial (see 7; also discussed below).
The choices of appropriate study design and methods depend on the pragmatic research question and the corresponding testable hypothesis.47 There are several categories of comparative trials (see 80 for a definition of terms):
(1) Superiority of treatment A vs control group (eg, usual care or a specifically designed control condition).
(2) Superiority of treatment A vs treatment B.
(3) Noninferiority of treatment A vs treatment B.
(4) Equivalence of treatment A vs treatment B.
Additional specific objectives that could be considered pragmatic and assessed include
(1) assessing different treatment-delivery strategies (eg, stepped, stratified, or matched care)50,73;
(2) testing effectiveness in different care settings and populations;
(3) evaluating differential effects for patient subgroups, phenotypes, and other questions aimed at personalized care; and
(4) questions of risk-benefit, cost-effectiveness, and other clinically relevant composite outcomes.
Finally, pragmatic research goals are often informed by researcher engagement with key stakeholders (clinicians, healthcare leaders, and patients).
In summary, trial designers should clarify their research question considering existing evidence and current practice and assess and justify whether a pragmatic attitude to trial design is warranted.
4.2. Meeting trial objectives with high-quality designs
The main goal of a pragmatic approach to trial design is to answer a pragmatic research question147 in a scientifically robust manner, producing clinically impactful evidence. The overall trial design is thus guided by how trial results will be used. Study objectives should be achievable using the proposed trial methods, whether that means that the trial is closely aligned to typical clinical practice or not.
The precision of treatment effect estimates decreases with increasing trial heterogeneity, which is introduced, for example, by broad patient eligibility criteria, involvement of multiple trial centers, unregulated concomitant treatments, and flexible treatment application.88,121 Variability that reflects clinical practice is desirable in pragmatic trials but may pose a challenge to the interpretation of trial results, for example, understanding why an intervention was found to be (in)effective. Notably, such a finding may be explained by how much different patient subgroups contribute to the results or by other information relevant for clinical practice, such as low adherence to treatment protocols. There are several ways that such challenges of pragmatic trials may be turned into an advantage. For example, differential effects in specific subgroups (eg, age, sex, comorbidities, and concomitant treatments) can be determined by designing the trial to include sample sizes large enough to permit adequately powered subgroup analyses.98 Heterogeneity that is not required to answer the research question may have to be reduced, controlled, or measured to help interpret outcomes. For example, a question may ask about effectiveness in a real-world population with realistic intervention prescription scenarios (ie, with no or minimal adherence requirements). In this instance, it may be desirable to assess adherence, collect information about concomitant treatments, or measure changes in other behaviors to better understand trial outcomes.
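To illustrate the relationship between heterogeneity and precision, the sketch below applies the standard normal-approximation sample size formula for comparing 2 group means; the pain-scale difference, significance level, power, and SD values are assumed for illustration only and are not drawn from any specific trial.

```python
# Illustrative sketch (assumed values): larger outcome SDs, as expected with broad
# eligibility criteria and flexible treatment delivery, inflate the sample size
# needed to detect the same between-group difference with the same power.
from statistics import NormalDist

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    """Approximate participants per arm for a two-sided test comparing 2 means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

# Detecting a 1-point mean difference on a 0-10 pain intensity scale:
for sd in (1.5, 2.0, 2.5):
    print(f"SD = {sd}: ~{n_per_arm(sd, delta=1.0):.0f} participants per arm")
```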
In summary, researchers designing pragmatic trials need to ensure the reliability of results. This is important to distinguish effective from ineffective treatments, or treatments with differing levels of effectiveness, and must be balanced with the aim of producing research that is clinically meaningful, relevant, and applicable. In general, possible sources of heterogeneity in pragmatic trials should be explored and pertinent data measured and included in analyses. At a minimum, the most relevant clinical confounders (eg, comorbidities and concomitant treatments) should be considered.
4.3. Balancing pragmatic and explanatory qualities
For each pragmatic trial, there is an “optimal balance point between the poles of pragmatic and explanatory qualities.”73,135 Each design decision should be carefully evaluated on this spectrum, resulting in a robust framework to answer pragmatic research questions. At the consensus meeting, there was agreement that PRECIS-2 is a useful tool to inform the process of designing individual aspects of a pragmatic trial. Trials can benefit from considering both internal and external validity (or generalizability) for each PRECIS-2 domain. Researchers may emphasize generalizability when required by the research question but should preserve internal validity as much as possible, drawing on other available tools to evaluate internal validity (such as the Cochrane risk of bias tool).82,123 Allowing and measuring heterogeneity where necessary but reducing it where possible is important. Trials that aim to align themselves as much as possible with clinical practice typically show the following design features across PRECIS-2 domains:
(1) eligibility criteria aimed at including a broad and representative patient population, eg, not excluding participants with common comorbidities;
(2) participant recruitment that uses common means to engage with patients (eg, referrals or patient-driven contact seeking);
(3) settings and organizations that provide routine care;
(4) flexibility in treatment delivery and relatively low requirements for adherence;
(5) allowing most concomitant medications (and other cointerventions);
(6) choosing outcomes that are relevant to patients; and
(7) analyzing all participants as randomized (intention-to-treat [ITT]).
Other considerations in designing pragmatic trials include:
(1) use of real-world data (RWD) for eligibility criteria definition and recruitment;
(2) considering combined (eg, risk benefit) as well as responder and other subgroup analyses in addition to primary analyses;
(3) simplifying outcome choice, such as using measures with few scales as opposed to multiple-question disability questionnaires; and
(4) using real-world data collection tools, including consideration of wearables and mobile data sampling.
Apart from trial methods usually aimed at enhancing generalizability or answering pragmatic research questions, it is worth discussing how 2 common design features that enhance internal validity may apply to conducting high-quality pragmatic trials: randomization and blinding.
4.4. Applying methods for internal validity to pragmatic trials
4.4.1. Randomization
Randomization is an essential design feature to enhance the probability that study groups are balanced in known and unknown factors that could affect treatment response. Related to randomization, allocation concealment may reduce bias,25 whereas stratification and blocking can increase precision, if applicable. Various randomization methods exist and may be considered to answer pragmatic questions:
(1) Cluster randomization involves randomizing entities or “clusters” other than individual patients—frequently trial centers, clinics, therapy providers, or geographic areas—and has been used in pragmatic pain trials.62 The choice of cluster depends on the level of intervention implementation, which may be easiest to perform and control at the clinic level. However, cluster randomization may be inappropriate when there is considerable variability in clinic size and characteristics. Another threat to validity is when the unit of allocation (cluster level) is different from the unit of outcome assessment (patients). When opting for cluster randomization, trialists need to be aware of possible selection bias arising when the assigned intervention is known during patient recruitment. To mitigate selection bias, baseline differences for potentially important predictors of treatment response should be assessed.13 Where possible, cluster randomized trials should recruit participants before site randomization to avoid selection bias. Irrespective of selection bias, trialists can recruit more clusters with fewer patients per cluster to enhance power.112 (A design-effect sketch illustrating this point follows Fig. 1.)
(2) Pragmatic research questions may invite researchers to consider other options to simple randomization. For example, patient preferences can be important predictors of treatment response and adherence, thus shaping clinical decision-making. Including patient preferences during randomization can be implemented in various ways but requires sophisticated controlling mechanisms and analyses.17,78
(3) More complex randomization processes, stepped-wedge designs, and enrichment methods are discussed below (Fig. 1).
Figure 1.
Schematic illustration of various group designs adaptable to pragmatic and comparative effectiveness trials. “Treatments” can also be other comparators or usual care, ® signifies randomization, and ®A an adjusted randomization ratio. In panel (E), different box widths illustrate different group sizes after the randomization ratio has been adjusted in response to interim analysis.
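As a rough illustration of the power consideration noted under cluster randomization above, the sketch below computes the standard design effect, 1 + (m − 1) × ICC; the total sample size and intracluster correlation (ICC) are assumed values chosen only to show why many small clusters retain more effective information than a few large ones.

```python
# Illustrative sketch (assumed total sample and ICC, not from a specific trial):
# the design effect quantifies the variance inflation caused by randomizing
# clusters (eg, clinics or providers) instead of individual patients.
def design_effect(cluster_size, icc):
    """Standard variance inflation factor for cluster randomization."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n, cluster_size, icc):
    """Approximate individually randomized equivalent of a cluster-randomized sample."""
    return total_n / design_effect(cluster_size, icc)

total_n, icc = 1000, 0.05  # assumed values for illustration
for m in (10, 50, 100):    # average number of patients per cluster
    print(f"{total_n // m} clusters of {m} patients: effective n ~ {effective_n(total_n, m, icc):.0f}")
```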
4.4.2. Blinding and accounting for participant expectancies
For many pragmatic research questions, it is accepted that the nonspecific (eg, contextual or placebo) effects form part of treatments' real-world effectiveness. From this perspective, blinding may not be appropriate. Furthermore, a clinical decision may be between multiple interventions with doubtful evidence of efficacy but with different risks for harm or healthcare cost. These interventions are often well-established and commonly used in clinical practice. In this situation, nonblinded comparative effectiveness trials can answer important questions while not negating the usefulness of improved efficacy research. Overall, we recommend blinding participants to group allocation where efficacy data are inconsistent and where compatible with trial objectives.
When participants cannot be blinded to group allocation, blinding to study hypotheses is often possible, for example, by not disclosing study objectives.40,62 To preempt the possibility of “resentful demoralization” or “compensatory rivalry” in the unblinded allocation to a trial condition perceived as less desirable,4,29 participants can be given limited information about the trial design—within ethical standards. Alternatively, patient preferences for all trial arms may be evaluated in a preparatory phase. Zelen or encouragement designs address this problem.64,120 Relatedly, participants' expectations of treatment benefit may be considered.71 In some scenarios, participant blinding may not be relevant, eg, when comparing patient outcomes in clinics randomized to a potentially improved form of care with clinics continuing to provide usual care. Christian et al.24 proposed a useful framework for making blinding-related decisions in pragmatic trials.
Where outcomes are collected by study staff, the blinding of outcome assessors to treatment groups is considered essential; this is also the case when patient-reported outcomes are used to reduce the risk of assessor bias. For the same reason, it is desirable that patients can enter patient-reported outcome measures directly into data capture systems, for example, electronically, reducing the potential for bias from study staff. However, benefits of electronic data capture may have to be weighed against its challenges, such as potentially lower response rates, data incompleteness, variable participant literacy or numeracy, technology access, and data privacy.83,131,144
When blinding is not possible, techniques for minimizing potential bias are available. These will be discussed below.
4.4.3. Other bias minimization methods relevant for pragmatic trials
As in other clinical trials, pragmatic trialists need to consider possible sources of bias. Although randomization is commonly used to reduce bias in pragmatic trials, other bias-reduction methods such as treatment standardization and blinding study participants to treatment may conflict with pragmatic trial objectives. Examples of relevant threats to internal validity are listed in Table 1 together with recommendations on how to address these in pragmatic trials. As discussed earlier, the possible solutions for bias control in Table 1 need to be examined for potential conflicts with a trial's pragmatic objectives. In this case, they may not be suitable or their implications for the generalizability of findings should be declared. Additional considerations can be found in Katz et al.72
Table 1.
Possible bias in clinical research and proposed considerations for methods to minimize bias in pragmatic trials of pain treatments.
Bias | Possible solutions (and explanation) |
---|---|
Recruitment bias: recruiting predominantly, or failing to recruit, certain subgroups of eligible participants | • Enhanced recruitment or targeted recruitment strategies. In pragmatic trials, recruitment bias is a problem when the trial fails to represent the clinical target population. If the research question requires a relatively representative sample, more effort may be required to recruit diverse participants,2 even if recruitment methods no longer reflect standard clinical practice. For example, a trial may not be conducted in the eventual target setting and thus not have access to “normal” recruitment pathways. Results may still generalize to the populations typically seen in such settings if representative participants are deliberately targeted for recruitment |
Selection bias: selection of study participants skewed by factors such as participant characteristics (similar to recruitment bias but mainly driven by study staff) | • Cluster randomized trials: site randomization after participant recruitment • (Partial) blinding of recruiting study staff24 • Monitor baseline differences and/or control for important covariates in the analyses |
Allocation bias: biased allocation of participants to study arms59 | • Effective allocation concealment123 |
Assessor bias: knowledge of treatment allocation that influences outcome measurements | • Blinded outcome assessment24 • Use of objective outcome measures,24 eg, actigraphy142 • Use of disability and quality-of-life outcomes • Use of multiple follow-up assessments. Although outcomes in clinical practice are typically evaluated by providers, this is rarely necessary for pragmatic research questions; therefore, bias control should be considered (see text) |
Attrition bias: asymmetrical participant loss between study arms for nonrandom reasons94 | • Assess risk during a pilot phase • Monitor reasons for attrition • Include patient preference. Although low adherence to treatments is common in clinical practice, it may undermine the interpretability of pragmatic trial results; this risk needs to be weighed against the relevance of using low-touch (“pragmatic”) strategies to increase adherence. Methods to evaluate reasons for participant loss do not interfere with pragmatic research questions and should thus be implemented, especially if adherence is not promoted |
Biased interpretation and reporting of results: reporting bias typically refers to the selective reporting of positive results; apart from withholding negative results, alternative analyses can be performed,52 and results can be misinterpreted or misrepresented14,37 | • Evaluate overall internal validity • Preregister the trial and follow the protocol • Accurately report nonsignificant results in superiority trials (not claiming comparable effectiveness) • Discuss limits of generalizability and avoid overgeneralization of findings • Adhere to reporting guidelines.130 Because of potentially greater heterogeneity, pragmatic trials may require more extensive reporting and more nuanced discussion than explanatory RCTs, including the provision of relevant contextual information. The generalizability of trial results is usually an educated judgment,149 requiring knowledge of influential population characteristics and eligibility criteria.32 When discussing generalizability, trial authors should report relevant information to permit assessment of external validity,51,66,148 ensure that claims are supported by data, and discuss study limitations |
Please note that the potential biases listed in the table also apply to more explanatory trials, but they may pose particular challenges in trials designed to inform clinical or policy decision-making. Potential biases are listed in the left-hand column and potential approaches to minimize each bias on the right. The proposed solutions need to be examined for potential conflicts with a trial's pragmatic objectives, in which case they may not be suitable or their implications for generalizability of findings must be clearly reported by the study authors.
RCT, randomized controlled trial.
For some pragmatic research objectives and end points, measurement precision and related aspects of internal validity are less of a concern, either because adequate measurement precision is self-evident or because it is not needed to support the study purpose. Examples include comparing 2 or more treatment approaches for costs of care, duration of adherence, or time to an objective medical event or change in treatment.
4.5. Considering alternative study designs for pragmatic research questions
So far, this article has discussed the importance of clarifying the appropriateness and intention of a pragmatic trial design and of carefully weighing methods that replicate normal clinical practice against internal validity. The following section proposes alternative study designs to the parallel-group RCT that are adaptable to the purposes of pragmatic trials.
Pragmatic trials of pain treatments are almost exclusively parallel-group designs.62 However, some pragmatic research questions may be usefully addressed with variations of parallel designs, such as enrichment or adaptive designs, or cross-over designs (Fig. 1). To date, these designs are not common practice in pragmatic trials and may on occasion conflict with routine clinical practice. Although these designs are not always suitable and have limitations, their potential for effectively answering research questions relevant to clinical decision-making is underestimated. When planning a pragmatic trial, we suggest considering such alternative options for their potential to answer specific research questions and particularly whether they may increase efficiency and trial feasibility. Below, potential opportunities and limitations of alternative designs are presented. Further generic and pain-specific methodological guidance is available.38,47
4.5.1. Large, simple trials
Large, simple trials (LSTs) are defined by large sample sizes, broad eligibility criteria, minimal data collection requirements, and use of objective, often routinely collected outcome measures. Large, simple trials are parallel-group trials believed most suitable for pharmacological postapproval effectiveness and safety research. Large, simple trials typically interfere little with clinical practice and maintain scientific rigor by randomizing participants to treatments with clinical equipoise.105,107 Their large size of often several thousand participants can provide high-quality information even for rare outcomes. Their minimal interference with routine care makes them broadly aligned with a “pragmatic” attitude to trial design. There are organizational challenges and large costs associated with establishing research networks large enough to support LSTs,41,105,111 although, once established, trials become much more cost efficient. Their reliance on objectively measurable end points, such as death or hospitalization, has likely hindered their implementation in pain research. Exceptions exist, which also illustrate the usefulness of integrated healthcare systems and electronic health records to facilitate clinical research.21 Especially regarding analgesic safety studies, the lack of LSTs is a missed opportunity. Given the prevalence of comorbidities and polypharmacy in people with persistent pain, LSTs might also provide valuable insights into drug interactions while avoiding some biases of observational studies. Potentially, improvements in electronic health records and simple mobile data collection methods will facilitate the broader adoption of LSTs in pain research.11,111
4.5.2. Cross-over design
Provided certain assumptions regarding study treatments and medical conditions are met,38 cross-over designs may be useful for addressing pragmatic research questions. If components or application sequences within a complex treatment are to be assessed (eg, symptom-guided vs generalized manual therapy42), cross-over designs seem possible and may remove between-patient variance, reducing sample size requirements.137 Cross-over designs are rare in pragmatic pain trials62 and investigators typically choose parallel-group designs for pragmatic questions.42,54,86,133 For short-acting, non–disease-modifying drugs, cross-over designs may be used to answer pragmatic questions, respecting the usual methodological standards.137 Finally, switching between treatments after a certain time or on treatment failure is a common scenario in clinical practice and its effects can be assessed in pragmatic trials.106,108 Either the treatment sequence is randomized as in cross-over designs or a postrandomization treatment switch is triggered by clinical factors (not considered a traditional cross-over trial). The cross-over may act as an incentive during recruitment, mitigate patient disappointment in unblinded trials, or, under some circumstances, enable subgroup analyses. Individual (n-of-1) or multiperiod cross-over designs are adaptations geared towards clinical decision-making.85,146
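As a rough sketch of the sample size advantage noted above, and assuming no period effects or carry-over, the total sample required for an AB/BA cross-over can be approximated from a parallel-group calculation and the within-patient correlation between treatment periods; the numbers below are illustrative assumptions rather than estimates from any cited trial.

```python
# Illustrative sketch (assumed inputs; ignores period effects and carry-over):
# because each participant acts as their own control, an AB/BA cross-over needs
# roughly (1 - rho) / 2 of the total parallel-group sample, where rho is the
# within-patient correlation between treatment periods.
def crossover_total_n(parallel_total_n, rho):
    return parallel_total_n * (1 - rho) / 2

parallel_total = 126  # eg, ~63 participants per arm from a parallel-group calculation
for rho in (0.3, 0.5, 0.7):
    print(f"rho = {rho}: cross-over total ~ {crossover_total_n(parallel_total, rho):.0f} participants")
```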
4.5.3. Stepped-wedge cluster randomized design
A variant of a cluster randomized trial is a stepped-wedge cluster randomized trial. Stepped-wedge cluster randomized trials are a pragmatic attempt to reconcile various stakeholders' needs and the practical constraints of large-scale intervention or policy implementation. Starting with a nonexposure period for all study clusters, this design involves “random and sequential crossover of clusters from control to intervention until all clusters are exposed.”57 Stepped-wedge designs have been successfully applied to study healthcare interventions during routine implementation of new approaches over time,87 including multimodal workplace interventions for low back pain103 and digital health psychological interventions for children and adolescents with chronic pain.96 Recent trials also studied effects of modifications in diagnostic procedures on healthcare utilization.68 With every cluster eventually exposed, the phased baseline period acts as the internal control condition and all clusters contribute to both study conditions—a notable advantage over traditional cluster randomization.
Stepped-wedge cluster randomized trials face challenges to reconcile practical constraints (eg, the speed or extent with which an intervention can be implemented and the transition periods that may arise) and methodological requirements (eg, sample size calculation, recruitment, concealment, potential dependence within clusters, calendar time effects, and repeated measures). Social and healthcare trends outside the trial may also affect interpretability as individual clusters are affected differently. Stepped-wedge trials provide more statistical power than parallel cluster designs when clusters are heterogeneous and/or large. Additional methodological guidance is available.57
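The sketch below generates a generic stepped-wedge exposure schedule of the kind described above, with clusters crossing from control to intervention in random order, one per period; the number of clusters and the single-step rollout are assumptions chosen for illustration rather than features of any cited trial.

```python
# Illustrative sketch (generic, assumed rollout of one cluster per period):
# every cluster starts in the control condition (0) and, after its randomly
# assigned step, remains exposed to the intervention (1), so all clusters
# contribute observations to both conditions.
import random

def stepped_wedge_schedule(n_clusters, seed=1):
    n_periods = n_clusters + 1  # baseline period plus one crossover step per cluster
    crossover_order = random.Random(seed).sample(range(n_clusters), n_clusters)
    schedule = {}
    for step, cluster in enumerate(crossover_order, start=1):
        schedule[cluster] = [int(period >= step) for period in range(n_periods)]
    return schedule

for cluster, exposure in sorted(stepped_wedge_schedule(4).items()):
    print(f"cluster {cluster}: {exposure}")
```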
4.5.4. Enrichment designs
“Enrichment” refers to randomizing only patients with an increased likelihood of treatment success (practical, prognostic, or predictive enrichment125) or other specific characteristics. Targeted preidentification during eligibility screening, including the use of biomarkers, clinical diagnostics, and enrichment phases, can be pragmatic if these methods address a clinically relevant question and are feasible in routine clinical practice. Examples of prognostic enrichment are trials of patients at high risk of developing chronic low back pain, as identified by a routinely available risk-stratification strategy such as the STarT-Back screening tool.33,60 However, although this process allows for pragmatic research questions, it reduces generalizability by excluding a proportion of potential treatment recipients.82 Readers are referred to existing guidance for detailed discussions of enrichment strategies.90,130
4.5.5. Adaptive and other designs responsive to accumulating trial information
Characterized by “using results accumulating in the trial to modify the trial's course in accordance with prespecified rules,”97 adaptive designs enable researchers to respond to interim safety and efficacy data. For example, if problems are encountered early in the trial, treatment intensity (eg, dose or number of treatment sessions) can be altered, randomization ratios changed, or treatment arms added or dropped, arguably saving research resources.38 To be feasible, effectiveness and safety outcomes must be expected to occur relatively early in the treatment course. Adaptive trials are challenging to conduct, both logistically and methodologically, and require expert biostatistician support. Regarding trial designs involving outcome (or response)-adaptive randomization, there are important limitations including bias and loss of efficiency of treatment effect estimators, bias caused by temporal trends in participant characteristics, volatility in sample size distributions with more participants assigned to the inferior treatment, potentially large imbalances in participant characteristics, greater potential for unblinding, and ethical concerns.8,19,20,34,58,77,102,116,126 Further theoretical and practical considerations of adaptive designs are available.26,27 Use of such designs must be accompanied by careful consideration of these limitations.
More relevant for pragmatic trials are designs that evaluate personalized or stepped-care approaches, or that add or drop study arms without losing the integrity of randomization.115,136 For example, the STAR*D (Sequenced Treatment Alternatives to Relieve Depression) trial65,113,114 tested subsequent treatments for nonresponders. The design was a precursor to Sequential Multiple Assignment Randomized Trial (SMART) designs,79 which mitigate some of the above concerns regarding adaptive randomization. Sequential Multiple Assignment Randomized Trial designs can be conceptualized as sequences of empirical trials of different interventions, often mimicking and thus informing clinical practice. STAR*D was conducted in 41 outpatient settings and enrolled over 4000 participants. The trial included 4 levels of randomization for patients who did not remit with a first course of citalopram for major depressive disorder, resulting in up to 4 treatment levels of various medications, switch and augmentation options, and cognitive therapy. Although STAR*D was a trial of depression, this design may be applied to stepped care and treatment alternatives for pain, such as those outlined in guidelines for painful conditions but rarely tested against one another (eg, commencing treatment with education, reassurance, and over-the-counter analgesics before considering physiotherapy, manual therapy, multimodal rehabilitation, etc10). In STAR*D, the numerous treatment options at level 2 and the inclusion of treatment preferences (patients could choose a range of potentially assigned treatments) resulted in small group sizes, making comparisons difficult; the absence of no-treatment controls should also be noted. Conversely, including patient—and possibly provider—choice of treatment12 arguably reflects routine practice,128 as in a pragmatic trial of multiple or multimodal pain therapies.39,124,129 Integrating treatment choice into randomization algorithms has also been proposed as so-called equipoise-stratified randomization.78
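To make the SMART logic concrete, the sketch below simulates the basic allocation flow of re-randomizing only nonresponders at a prespecified decision point; the treatment options, response probability, and decision rule are hypothetical placeholders and deliberately simplified relative to STAR*D or any of the pain trials cited here.

```python
# Illustrative sketch (hypothetical treatments and response rule, not the STAR*D
# protocol): responders continue their initial treatment, whereas nonresponders
# are re-randomized between switching and augmenting, yielding embedded
# adaptive treatment strategies that can be compared in the analysis.
import random

rng = random.Random(7)

STAGE1_OPTIONS = ["education + OTC analgesics", "physiotherapy"]  # assumed options
STAGE2_OPTIONS = ["switch treatment", "augment with CBT"]         # assumed options

def smart_allocation():
    stage1 = rng.choice(STAGE1_OPTIONS)
    responded = rng.random() < 0.4  # placeholder for a clinical response criterion
    if responded:
        return (stage1, "continue initial treatment")
    return (stage1, rng.choice(STAGE2_OPTIONS))

for participant in range(5):
    print(participant, smart_allocation())
```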
Although not as elaborate as STAR*D, several ongoing pain trials use SMART designs to study clinically highly relevant issues, mainly related to tailoring of nonpharmacological pain management.36,44,49,74,122 For example, a trial in patients with breast cancer compares different doses of a pain-coping skills program and dose adaptations depending on an initial (non-)response.74 However, this trial is designed to have adequate power only for the first treatment period and related analyses (ie, before rerandomization), underlining the logistical challenges of such designs. Studies that have adequate power for subsequent analyses after switching treatments are the ongoing OPTIMIZE trial, comparing physical therapy and cognitive behavioral therapy and recruiting 945 participants,122 and the SMART LBP study aiming for 1200 participants.49 Participant loss and nonadherence are major threats to all types of trials. In SMART designs, this problem may be heightened because of repeated randomization steps and consequently smaller groups. Published protocols of SMART designs thus document increased efforts and low PRECIS-2 ratings in the “adherence” domain (ie, treatment adherence is encouraged beyond normal clinical practice). Similarly, participants are followed up closely and recruitment is more elaborate.49,122 Finally, the “Determinants of the Optimal Dose and Sequence of Functional Restoration and Integrative Therapies study” investigates standard rehabilitation and complementary therapy approaches in a military setting.44 Most of these studies also aim to identify predictors of initial treatment responses.
4.6. Exemplary trials balancing internal and external validity
We have suggested strategies that do not always reflect current practice in the field.62 Primarily, we are calling for more attention to the balance between real-world applicability and internal trial validity. To illustrate how this can be performed effectively, we discuss 2 well-designed studies:
Beard et al.7 studied arthroscopic subacromial decompression for subacromial shoulder pain, using 30 sites and 38 operating surgeons of the UK National Health Service. When the trial was planned, there was insufficient evidence from efficacy trials. Nonetheless, shoulder arthroscopies were routinely performed in clinical practice, making real-world effectiveness a pertinent research question. To answer this question while safeguarding against expectancy-related effects, the trial included both a sham control group and a no treatment group. The control intervention enabled the distinction between placebo effects and normal disease course. The trial showed no difference between arthroscopy and sham but a clear benefit of both over no treatment. Further illustrating challenges of pragmatic trials and potential solutions, Beard et al.7 reported that they struggled with participants not receiving their allocated intervention. The clinical context and possibly patient preferences may explain these problems. For example, shoulder surgery patients may change their mind or surgery slots may not become available during the trial period. Had the researchers not included preplanned sensitivity analyses to assess the effects of intervention adherence, interpretation of findings would have become nearly impossible. Recently, Kerns et al.75 have advocated for investigators to find the right “balance” between the flexibility in treatment delivery and adherence monitoring that is consistent with clinical practice and the importance of building confidence in the fidelity of the independent variable, namely, the interventions being studied. The arthroscopy trial by Beard et al.7 addressed clinically relevant questions in a typical clinical environment, while using multiple features that are commonly considered priorities in explanatory trials (eg, blinding and per-protocol analyses24,82), which added valuable information. Balancing explanatory and pragmatic methods, the study reliably informs clinicians and policy decision-makers about the utility of a pain treatment in a realistic context. Albeit a single trial, this well-conducted study resulted in the change of clinical recommendations.132
Comparisons between 2 active treatments rather than with sham comparators are more typical for pragmatic pain trials.62 For example, Cherkin et al.23 compared mindfulness-based stress reduction (MBSR) to cognitive-behavioral therapy (CBT) and to usual care for patients with chronic low back pain. This trial balanced internal and external validity considerations well. With an overall PRECIS-2 rating of 3.3 (placing the overall design centrally between explanatory and pragmatic poles), the researchers prioritized clinically relevant outcome measures, pragmatic data analysis (intention-to-treat), and low study questionnaire burden. The trial used more explanatory methods in the domain “flexibility of intervention delivery,” ensuring, through pretrial training of providers and continuous monitoring, that the interventions were delivered according to the protocol (also see 75). This design feature was mainly driven by funding requirements. Furthermore, the trial used targeted participant recruitment and dedicated trial centers. In an otherwise “pragmatic” trial, this illustrates reasons for design decisions that deviate from usual care: the reduction of bias and practical constraints. In addition, with more control over intervention content, Cherkin et al. were able to draw more definite conclusions than would have been possible had MBSR practitioners each followed their own treatment preferences. Conversely, the trial's treatment protocol was later used for a university training program in MBSR, providing a good example of research and clinical practice informing one another. Because the researchers aimed for a large study sample and experienced recruitment and adherence difficulties, the practical requirement to complete the trial meant that recruitment methods typical of clinical practice needed to be bolstered. With typical, moderate attendance at MBSR and CBT sessions, the trial showed a benefit of these interventions over usual care. As the authors acknowledge, however, the absence of a sham or attention control group prevented the assessment of effect mediators. For example, such a control intervention could have elucidated the effects of specific intervention features or of the additional attention received from healthcare providers in the treatment group.
In summary, these studies illustrate how trials can be designed to answer clinically relevant questions in a rigorous manner. In addition, they illustrate the practical challenges and research constraints that can lead to methodological compromise. To reduce research waste through small, flawed, and thus uninformative trials, funding bodies should facilitate best-practice solutions.18,41,141,143
5. Discussion
Pragmatic trials of pain treatments are conducted to inform clinical decision-making and health policy for people living with pain. They address important clinical or policy questions about both pharmacological and nonpharmacological therapies. Because of large funding initiatives in the United States,56,95,100 pragmatic trials are likely to continue to gain in importance in the future. It remains a priority to find safer, more effective, and practical approaches to pain management and to advance personalized medicine. This article has outlined the consensus of a group of participants with expertise in the design, conduct, analysis, and/or interpretation of clinical trials. The fundamental design and methodological considerations for pragmatic trials emphasize the importance of balancing relevance for clinical practice (external validity) with ensuring scientific integrity (internal validity) of the trial results. Based on a systematic review of current research practice and in-depth discussions, we identify opportunities for improving the conduct of pragmatic trials, provide guidance on their design, and present considerations for future trials. The basic notion is that measurable variables that account for heterogeneity should be identified and controlled or included in statistical modeling where the research question permits it; but heterogeneity should be accepted and incorporated into the trial design where required by the objectives of a trial. Study designs such as sequential multiple assignment or even cross-over designs are essentially absent from current pragmatic trials of pain therapies,62 despite their potential to inform clinical and policy decision-making.
This article is limited in that it presents only general considerations and guidance. Trial researchers will have to consider each aspect of research design and methods individually and in the context of their specific pragmatic research question and potential study setting. We have highlighted methods for minimizing bias in pragmatic trials while recognizing that the choice of methods needs to account for its impact on the generalizability of findings. More rigor in this regard will increase the value of pragmatic clinical trials in shaping clinical decision-making and health policy. Furthermore, the present considerations were not developed by formal consensus methodology,89 although they were informed by a systematic review of current practice in pragmatic trials. In addition, not all individuals involved had expertise in pragmatic trials, but all represented stakeholder groups, such as academia, industry, regulators, and patient initiatives, with a substantial investment in evaluating the effects and safety of pain treatments.
To date, the main guidance documents for pragmatic trials are the PRECIS-2 tool for their design82 and the CONSORT extension for the reporting of pragmatic trials.148 Another useful resource is the NIH Collaboratory's “Living Textbook” (https://rethinkingclinicaltrials.org/). For reporting, we suggest following the CONSORT guidance (and all other relevant CONSORT extensions) and believe that better adherence will increase the usefulness of pragmatic trials. For design considerations, however, the PRECIS-2 tool requires a more nuanced discussion. The tool is certainly useful in helping to guide the design of individual trials,70 and we recommend its use, but researchers need to be aware that a high rating may not always be desirable for every domain. High ratings are given when a trial feature is comparable to routine clinical practice, and lower ratings represent departures from normal clinical procedures or scenarios. As our considerations emphasize, pragmatic trials attempt to answer pragmatic research questions by testing hypotheses about treatment effectiveness and do not necessarily closely reproduce clinical practice. For example, pain trialists may opt for real-world resemblance more in some domains than in others, often choosing more intensive recruitment methods to obtain a patient sample representative of the population of interest or performing more in-depth outcome assessments.62 Importantly, enhanced recruitment efforts may also be required to obtain more representative or diverse samples. Finally, we strongly recommend that authors report their reasons for all such choices. Publishing the PRECIS-2 table (rather than the more commonly reported wheel diagram) provides a good basis for such reporting,82,101 and this information will be of value to readers and future trial designers.62
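To illustrate the kind of transparent PRECIS-2 reporting recommended above, the sketch below tabulates ratings for the 9 PRECIS-2 domains (1 = very explanatory to 5 = very pragmatic) together with the reason for each design decision and the resulting overall mean score. The ratings and reasons shown are invented for illustration only and do not describe any published trial.

```python
# A minimal sketch of a PRECIS-2 summary table (ratings and reasons are
# invented for illustration and do not describe any published trial).
# Each domain is scored from 1 (very explanatory) to 5 (very pragmatic);
# the overall score is the mean of the 9 domain ratings.
from statistics import mean

precis2 = {
    # domain: (rating, reason for the design decision)
    "Eligibility":             (4, "broad criteria reflecting routine referrals"),
    "Recruitment":             (2, "intensive trial-specific recruitment to reach the target sample"),
    "Setting":                 (3, "mix of research and community clinics"),
    "Organisation":            (3, "some resources beyond usual care"),
    "Flexibility (delivery)":  (2, "protocolized delivery with provider training and monitoring"),
    "Flexibility (adherence)": (4, "no special measures to enforce adherence"),
    "Follow-up":               (3, "slightly more intensive than routine follow-up"),
    "Primary outcome":         (5, "patient-relevant outcome used in practice"),
    "Primary analysis":        (5, "intention-to-treat on all available data"),
}

for domain, (rating, reason) in precis2.items():
    print(f"{domain:<25} {rating}  {reason}")
print(f"Overall (mean of domain ratings): {mean(r for r, _ in precis2.values()):.1f}")
```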
Conflict of interest statement
The first author was remunerated by IMMPACT for their work at the consensus meeting and in drafting the manuscript. The project was supported by ACTTION, a public–private partnership. The views expressed in this article are those of the authors and no official endorsement by the Food and Drug Administration (FDA) or the pharmaceutical and device companies that provided unrestricted grants to support the activities of the ACTTION public–private partnership should be inferred. Individual authors' declarations of potential conflicts of interest are as follows: M. J. Bair reports grants or contracts from VA Health Services Research and Development, VA Cooperative Studies Program, and National Endowment for the Arts and participation on a Data Safety Monitoring Board or Advisory Board on an NIH project conducted at the University of Utah; this project is a pragmatic trial of a physical therapy intervention. Prof Cherkin reports being paid an honorarium for mentoring the first author with manuscript writing. L. L. DeBar reports support for the present manuscript from Kaiser Permanente Washington Health Research Institute (KPWHRI); grants and contracts from National Institutes of Health (NIH) and Patient-Centered Outcomes Research Institute (PCORI); an honorarium for a lecture at the 2020 IMMPACT Consensus Meeting; support for attending meetings from KPWHRI and NIH; and participation on a Data Safety Monitoring Board or Advisory Board for NCCIH, BACPAC DSMB. P. Cowan is the co-founder and secretary of the World Patients Alliance and Board Member Emeritus as well as Founder of the American Chronic Pain Association. P. Desjardins reports grants or contracts from the NIH/NIDCR Opioid Analgesic Reduction Study as co-investigator; consulting fees from Acadia Pharmaceuticals, GlaxoSmithKline Consumer Healthcare, Neurana Pharmaceuticals, Bayer Consumer Healthcare, Senju USA, Antibe Therapeutics, and CenExel; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from Bayer Consumer Healthcare; participation on the GSK Consumer Advisory Board; and roles on the Board of Directors of ProSelect/Coverys Medical Liability Insurance and the Board of Governors of South Orange Performing Arts Center. R. H. Dworkin has received in the past 5 years research grants and contracts from the U.S. Food and Drug Administration and the U.S. National Institutes of Health, and compensation for serving on advisory boards or consulting on clinical trial methods from Abide, Acadia, Adynxx, Analgesic Solutions, Aptinyx, Aquinox, Asahi Kasei, Astellas, Beckley, Biogen, Biohaven, Biosplice, Boston Scientific, Braeburn, Cardialen, Celgene, Centrexion, Chiesi, Chromocell, Clexio, Collegium, CombiGene, Concert, Confo, Decibel, Editas, Eli Lilly, Endo, Ethismos (equity), Eupraxia, Exicure, GlaxoSmithKline, Glenmark, Gloriana, Grace, Hope, Hospital for Special Surgery, Lotus, Mainstay, Merck, Mind Medicine (also equity), Neumentum, Neurana, NeuroBo, Novaremed, Novartis, OCT, Orion, OliPass, Pfizer, Q-State, Reckitt Benckiser, Regenacy (also equity), Sangamo, Sanifit, Scilex, Semnur, SIMR Biotech, Sinfonia, SK Biopharmaceuticals, Sollis, SPRIM, Teva, Theranexus, Toray, Vertex, Vizuri, and WCG. R. R. Edwards reports no conflicts of interest.
J. T. Farrar reports grants or contracts from the NIH-NCATS-UL1 Grant (Co-I), FDA-BAA Contract, NIH-NIDDK-U01 Grant (Co-I), and NIH-NINDS-U24 Grant (PI); consulting fees from Lilly and Vertex Pharma; participation on a Data Safety Monitoring Board or Advisory Board for NIH-NIA (DSMB); and a role as President-Elect US-ASP. M. Ferguson reports grants or contracts from ACTTION paid to her institution for work on systematic reviews and payment or honoraria for lectures at IMMPACT meetings from ACTTION. R. Freeman reports consulting fees from AlgoRx, Allergan, Applied Therapeutics, Clexio, Collegium, Cutaneous NeuroDiagnostics, Glenmark, GW Pharma, GlaxoSmithKline, Eli Lilly, Lundbeck, Maxona, Novartis, NeuroBo, Regenacy, Vertex, and Worwag and stock options in Cutaneous Neurodiagnostic Life Sciences, NeuroBo, Maxona, and Regenacy. J. S. Gewandter reports grants or contracts from the NIH; consulting fees from AlgoTX, GW Pharma, Magnolia Neurosciences, Orthogonal, Science Branding Consulting, AKP Pharma, and Eikonizo and support for attending meetings or travel from SOPATE and INS. I. Gilron declares a travel stipend to attend the ACTTION meeting 2019 and reports consulting fees from CombiGene, GW Research, Lilly, and Novaremed. H. Grol-Prokopczyk reports grants or contracts from the National Institute on Aging of the National Institutes of Health, Award #R01AG065351; honoraria for an invited lecture for the Multidisciplinary Research in Gerontology Colloquium Series, University of Southern California, and for an invited lecture at the Napa Pain Conference; and travel reimbursement for travel to the Napa Pain Conference. S. H. Hertz declares consulting fees from Adial Pharmaceuticals, AIXThera Operations Limited, Asahi Kasei Pharma America Corporation, Allay Pharmaceuticals, Amygdala Neurosciences, Araim Pharmaceuticals, Artugen Therapeutics, Avenue Therapeutics, BioQ Pharma, Cali Pharmaceuticals, Celero Pharma, Centrexion Therapeutics, Collegium Pharmaceutical, Concentric Analgesics, Currax Pharmaceuticals, Delpor, Domain Therapeutics, Enalare Therapeutics, Go Medical Industries, Heron Pharmaceuticals, Innocoll Biotherapeutics, Intra-Cellular Therapies, Kaleo, Lyndra Therapeutics, Maxona Pharmaceuticals, MDI Pharma, Nema Research, Neuroderm, Neumentum, Novilla Pharmaceuticals, OncoZenge, Pfizer, PleoPharma, SafeRx Pharmaceuticals, The Scripps Research Institute, Sollis Therapeutics, Sparian Biosciences, Teikoku Pharma USA, Taiwan Liposome Company, Tremeau Pharmaceuticals, Vallon Pharmaceuticals, Vizuri Health Sciences, and WinSanTor. D. G. Hohenschurz-Schmidt reports support for the present manuscript from a PhD studentship by the Alan and Sheila Diamond Charitable Trust and an honorarium from IMMPACT; a research grant from The Osteopathic Foundation (paid to institution); consulting fees from Altern Health Ltd; and the role of executive committee member of the Society for Back Pain Research. S. Iyengar reports employment and travel support from the NINDS/NIH; stock options in Retiree and Eli Lilly and Company; and other financial or nonfinancial interests through employment at NINDS/NIH and as adjunct senior research professor, Indiana University School of Medicine, Departments of Anesthesia and Clinical Pharmacology.
C. Kamp reports support for the present manuscript from ACTTION, FDA contract #HHSF223201000078C, with payments made directly to her institution, University of Rochester Medical Center, providing 5% salary support and consulting fees from Clintrex Research Corporation (payments made to her consulting company CLKamp Consulting LLC with none of the consulting in relation to the indication of pain). B. I. Karp declares no conflict of interest. R. D. Kerns reports honoraria for presentation at the IMMPACT consensus meeting that informed this manuscript; research grants from NIH, PCORI, and VA paid to his institution; a consulting fee for an NIH-sponsored research grant; an honorarium for planning and participation in an IMMPACT consensus conference on patient engagement in clinical pain research; honoraria for participation in NIH and PCORI DSMBs and an honorarium for participation as a member of the Scientific Advisory Board, Chronic Pain Centre of Excellence for Canadian Veterans; an unpaid role on the Board of Directors, A Place to Nourish your Health; and an honorarium for his role as the Executive Editor, Pain Medicine. B. A. Kleykamp reports income from ACTTION as a full-time employee from October 2018 to August 2021; being the owner and principal of BAK and Associates, LLC, a research and science writing consulting firm; contracts to BAK and Associates, LLC over the past 36 months include STATinMED, American Society of Addiction Medicine, ECRI, Hayes/Symplr, Pinney Associates, and Palladian Associates; payments and honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from University of Kentucky, STATinMED, Filter magazine, and Virginia Commonwealth University; support for attending meetings or travel from University of Kentucky and Virginia Commonwealth University; and a role as Communications Chair for the College on Problems of Drug Dependence. J. D. Loeser reports no conflict of interest. S. Mackey reports support for the present manuscript from the National Institutes of Health, U.S. Food and Drug Administration, Patient-Centered Outcomes Research Institute, Chris Redlich Professorship in Pain Research, and Dodie and John Rosekrans Pain Research Endowment Fund (all through Stanford University); consulting fees from Oklahoma University-Smith NIH Grant; payments or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from Memorial Sloan Kettering Cancer Center, National Institutes of Health, Washington University, Oakstone Publishing, Comprehensive Review of Pain Medicine CME Lecture Series, Walter Reed AFB, Web-Based Lecture, Bull Publishing, George Washington University, University of Washington, Veterans Affairs, HSRD Naloxone Distribution IIR Advisory Board, Canadian Pain Society, National Institutes of Health, and New York University; support for attending meetings or travel from American Academy of Pain Medicine, American Society of Regional Anesthesia and Pain Medicine, Washington University, George Washington University, National Institutes of Health, University of Washington,
U.S. Food and Drug Administration, New York University, Weill Cornell Medical College, and the International Neuromodulation Society (INS); roles on the Drug Safety and Risk Management Advisory Committee, Anesthetic and Analgesic Drug Products Advisory Committee (DSaRM/AADPAC)/(FDA) (unpaid role as Advisory Committee Member) and for HSRD Naloxone Distribution IIR Veterans Affairs (VA) (honorarium paid to himself for a role as Advisory Board Member); an unpaid role as Vice-Chair of the Committee on Temporomandibular Disorders for the National Academies of Sciences, National Institutes of Health, (NAS)/(NIH); other financial interests through the National Institutes of Health for T32 Postdoctoral Fellows who conduct research in laboratory; and salary supported by NIH and administered through Stanford University. R. Malamut reports no conflicts of interest. J. D. Markman declares consulting fees from Lateral Pharma, Editas, Clexio Pharma, Nektar, Pfizer, Eliem, Biogen, and Lilly; participation on Data Safety Monitoring or Advisory Boards for Regenacy Pharmaceuticals, Tonix Pharmaceuticals, and Novartis Pharmaceuticals; roles as an ex officio board member of the North American Neuromodulation Society and Treasurer of the Neuromodulation SIG of IASP; and stock options in Yellowblack Corp and Flowonix Corp. M. P. McDermott declares grants or contracts from the NIH, U.S. Food and Drug Administration, Cure SMA, and PTC Therapeutics; consulting fees from Fulcrum Therapeutics, Inc, and Neuroderm, Ltd; and participation on a Data and Safety Monitoring Board or Advisory Board for the NIH, Eli Lilly and Company, Catabasis Pharmaceuticals, Inc, Vaccinex, Inc, Neurocrine Biosciences, Inc, Voyager Therapeutics, Prilenia Therapeutics Development, Ltd, ReveraGen BioPharma, Inc, and NS Pharma, Inc. E. McNicol reports grants or contracts from ACTTION paid to his institution for work on systematic reviews and payment or honoraria for lectures and attending IMMPACT meetings from ACTTION. K. V. Patel reports grants and contracts from the U.S. Centers for Disease Control and Prevention and the National Institutes of Health and consultancy work for GlaxoSmithKline LLC. Prof Rice reports support for the present manuscript from IMMPACT; grants and studentships from UKRI (Medical Research Council and BBSRC), Versus Arthritis, Royal British Legion, European Commission, UK Ministry of Defence, Dr Jennie Gwynn Bequests, Alan and Sheila Diamond Trust, the British Pain Society, and the Royal Society of Medicine; consultancy and advisory board work for Imperial College Consultants, which, in the past 36 months, has included remunerated work for Confo, Vertex, PharmaNova, Lateral, Novartis, Mundipharma, Orion, Shanghai SIMR Biotech, Asahi Kasei, and Toray; and lecture honoraria from MD Anderson Cancer Center, Royal Marsden Hospital, and UCSF. A. S. C. Rice is named as an inventor on patents: A. S. C. Rice, S. Vandevoorde, and D. M. Lambert Methods using N-(2-propenyl)hexadecanamide and related amides to relieve pain. WO 2005/079771 and
K. Okuse et al. Methods of treating pain by inhibition of VGF activity EP13702262.0/WO2013 110945; a role as Chair of the Trial Steering Committee (TSC) for the OPTION-DM trial, National Institute for Health Research (NIHR); a role as councilor for IASP; he was also the owner of share options in Spinifex Pharmaceuticals from which personal benefit accrued on the acquisition of Spinifex by Novartis in July 2015 (the final payment was made in 2019); and other interests are in the British National Formulary, Joint Committee on Vaccination and Immunisation-varicella subcommittee, Medicines and Healthcare products Regulatory Agency (MHRA), Commission on Human Medicines - Neurology, Pain, & Psychiatry Expert Advisory Group, Nonfreezing Cold Injury Independent Senior Advisory Committee (NISAC), and Royal College of Anaesthetists—Heritage and Archives Committee. M. C. Rowbotham reports consulting fees from SiteOne Therapeutics, GenEdit, and Sustained Therapeutics; payments for expert testimony from Haapala, Thompson & Abern (law firm; payment to himself as medical-legal expert witness), payments from Helixmith Co, Ltd for work on a data monitoring or advisory board, and unpaid work as Treasurer of the International Association for the Study of Pain from 2020 to 2024; and holds stock options from SiteOne and CODA Biotherapeutics. F. Sandbrink reports a role as the National Program Executive Director for Pain Management, Opioid Safety, and Prescription Drug Monitoring Program, Veterans Health Administration. K. Schmader reports a grant from GSK for vaccine research, paid to his institution. D. J. Steiner reports being a full-time employee of Eli Lilly and Company (Pain & Neurodegeneration). L. Simon reports consulting fees from AstraZeneca, Pfizer, Rigel, Eupraxia, Biosplice, EMDSerono, Horizon, Direct, Lilly, Kiniska, Protalix, Chemomab, TLC, SpineThera, Kyoto, PPD, Galvani, Urica, Transcode, Boehringer Ingelheim, Bristol Myers Squibb, Priovant, Roivant, Ampio, Aura, Aurinia, GSK, Xalud, Neumentum, Neema, Amzell, Applied Bio, Aptinyx, Bexson, Bone Med, Bone Therapeutics, Cancer Prevention, Cerebral Therapeutics, ChemoCentryx, Diffusion Bio, Elorac, Enalare, Foundry Therapeutics, Galapagos, Histogen, Gilead, Idera, Intravital, InGel, Kiel Labs, Mesoblast, Mpathix, Minerva, Regenosine, Samus, Sana, StageBio, Theraly, Unity, and Viridian. D. C. Turk reports royalties and licenses from Wolters Kluwer (Editor-in-Chief, Clinical Journal of Pain) and the American Psychological Association (Book Author); consulting fees from GSK/Novartis; and a role as Associate Director of Analgesic, Anesthetic, and Addiction Clinical Trials, Innovations, Opportunities, and Networks (ACTTION). C. Veasley reports no conflicts of interest. J. Vollert reports consulting fees from Vertex Pharmaceuticals, Embody Orthopaedic, and Casquar. Finally, A. D. Wasan reports no conflicts of interest.
Acknowledgements
The authors thank Ms. Valorie Thompson for the organization of the IMMPACT meeting and the technical support team for the successful conduct of the meeting on the day. The authors also acknowledge Dr. Inna Belfer's contributions to the meeting.
Previous publications: This manuscript or its contents have not been presented, published, or considered for publication elsewhere.
Declaration of authorship: All authors have contributed substantially to the development, drafting, and/or review of the presented material.
Footnotes
Sponsorships or competing interests that may be relevant to content are disclosed at the end of this article.
Contributor Information
Dan Cherkin, Email: dancherkin@gmail.com.
Andrew S.C. Rice, Email: a.rice@imperial.ac.uk.
Robert H. Dworkin, Email: robert_dworkin@urmc.rochester.edu.
Dennis C. Turk, Email: turkdc@uw.edu.
Michael P. McDermott, Email: Michael_McDermott@URMC.Rochester.edu.
Matthew J. Bair, Email: mbair@iupui.edu.
Lynn L. DeBar, Email: Lynn.Debar@kp.org.
Robert R. Edwards, Email: rredwards@partners.org.
John T. Farrar, Email: jfarrar@pennmedicine.upenn.edu.
Robert D. Kerns, Email: Robert.kerns@yale.edu.
John D. Markman, Email: john_markman@urmc.rochester.edu.
Michael C. Rowbotham, Email: mcrowbotham@gmail.com.
Karen J. Sherman, Email: Karen.J.Sherman@kp.org.
Ajay D. Wasan, Email: wasanad@upmc.edu.
Penney Cowan, Email: pcowan@theacpa.org.
Paul Desjardins, Email: paul.j.desjardins@gmail.com.
McKenzie Ferguson, Email: mcfergu@siue.edu.
Roy Freeman, Email: rfreeman@bidmc.harvard.edu.
Jennifer S. Gewandter, Email: Jennifer_Gewandter@URMC.Rochester.edu.
Ian Gilron, Email: gilroni@queensu.ca.
Hanna Grol-Prokopczyk, Email: hgrol@buffalo.edu.
Sharon H. Hertz, Email: hertz@hertzandfields.com.
Smriti Iyengar, Email: smriti.iyengar@nih.gov.
Cornelia Kamp, Email: Cornelia.Kamp@cmsu.rochester.edu.
Barbara I. Karp, Email: karpb@ninds.nih.gov.
Bethea A. Kleykamp, Email: akleykamp@gmail.com.
John D. Loeser, Email: jdloeser@u.washington.edu.
Sean Mackey, Email: smackey@stanford.edu.
Richard Malamut, Email: rmalamut@collegiumpharma.com.
Ewan McNicol, Email: ewanmcnicol@comcast.net.
Kushang V. Patel, Email: kvpatel@uw.edu.
Friedhelm Sandbrink, Email: friedhelm.sandbrink@va.gov.
Kenneth Schmader, Email: kenneth.schmader@duke.edu.
Lee Simon, Email: lssconsult@aol.com.
Deborah J. Steiner, Email: steiner_deborah_j@lilly.com.
Christin Veasley, Email: cveasley@cpralliance.org.
Jan Vollert, Email: j.vollert@imperial.ac.uk.
References
- [1].Adams AS, Schmittdiel JA, Altschuler A, Bayliss EA, Neugebauer R, Ma L, Dyer W, Clark J, Cook B, Willyoung D, Jaffe M, Young JD, Kim E, Boggs JM, Prosser LA, Wittenberg E, Callaghan B, Shainline M, Hippler RM, Grant RW. Automated symptom and treatment side effect monitoring for improved quality of life among adults with diabetic peripheral neuropathy in primary care: a pragmatic, cluster, randomized, controlled trial. Diabetic Med 2019;36:52–61. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [2].Ali J, Davis AF, Burgess DJ, Rhon DI, Vining R, Young-McCaughan S, Green S, Kerns RD. Justice and equity in pragmatic clinical trials: considerations for pain research within integrated health systems. Learn Health Syst 2022;6:e10291. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [3].Barjandi G, Kosek E, Hedenberg-Magnusson B, Velly AM, Ernberg M. Comorbid conditions in temporomandibular disorders myalgia and myofascial pain compared to fibromyalgia. J Clin Med 2021;10:3138. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [4].Bärnighausen T, Tugwell P, Røttingen J-A, Shemilt I, Rockers P, Geldsetzer P, Lavis J, Grimshaw J, Daniels K, Brown A, Bor J, Tanner J, Rashidian A, Barreto M, Vollmer S, Atun R. Quasi-experimental study designs series—paper 4: uses and value. J Clin Epidemiol 2017;89:21–9. [DOI] [PubMed] [Google Scholar]
- [5].Bayer O, Adrion C, Al Tawil A, Mansmann U, Strupp M; PROVEMIG investigators. Results and lessons learnt from a randomized controlled trial: prophylactic treatment of vestibular migraine with metoprolol (PROVEMIG). Trials 2019;20:813. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [6].Beard DJ, Campbell MK, Blazeby JM, Carr AJ, Weijer C, Cuthbertson BH, Buchbinder R, Pinkney T, Bishop FL, Pugh J, Cousins S, Harris IA, Lohmander LS, Blencowe N, Gillies K, Probst P, Brennan C, Cook A, Farrar-Hockley D, Savulescu J, Huxtable R, Rangan A, Tracey I, Brocklehurst P, Ferreira ML, Nicholl J, Reeves BC, Hamdy F, Rowley SC, Cook JA. Considerations and methods for placebo controls in surgical trials (ASPIRE guidelines). Lancet 2020;395:828–38. [DOI] [PubMed] [Google Scholar]
- [7].Beard DJ, Rees JL, Cook JA, Rombach I, Cooper C, Merritt N, Shirkey BA, Donovan JL, Gwilym S, Savulescu J, Moser J, Gray A, Jepson M, Tracey I, Judge A, Wartolowska K, Carr AJ; CSAW Study Group. Arthroscopic subacromial decompression for subacromial shoulder pain (CSAW): a multicentre, pragmatic, parallel group, placebo-controlled, three-group, randomised surgical trial. Lancet 2018;391:329–38. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [8].Begg CB. Ethical concerns about adaptive randomization. Clin Trials 2015;12:101. [DOI] [PubMed] [Google Scholar]
- [9].Berdal G, Bø I, Dager TN, Dingsør A, Eppeland SG, Hagfors J, Hamnes B, Mowinckel P, Nielsen M, Sand-Svartrud A-L, Slungaard B, Wigers SH, Hagen KB, Dagfinrud HS, Kjeken I. Structured goal planning and supportive telephone follow-up in rheumatology care: results from a pragmatic, stepped-wedge, cluster-randomized trial. Arthritis Care Res (Hoboken) 2018;70:1576–86. [DOI] [PubMed] [Google Scholar]
- [10].Bernstein IA, Malik Q, Carville S, Ward S. Low back pain and sciatica: summary of NICE guidance. BMJ 2017;356:i6748. [DOI] [PubMed] [Google Scholar]
- [11].Berwanger O. Azithromycin, RECOVERY, and the power of large, simple trials. Lancet 2021;397:559–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [12].Bishop MD, Bialosky JE, Penza CW, Beneciuk JM, Alappattu MJ. The influence of clinical equipoise and patient preferences on outcomes of conservative manual interventions for spinal pain: an experimental study. J Pain Res 2017;10:965–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [13].Bolzern JE, Mitchell A, Torgerson DJ. Baseline testing in cluster randomised controlled trials: should this be done? BMC Med Res Methodol 2019;19:106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [14].Boutron I, Haneef R, Yavchitz A, Baron G, Novack J, Oransky I, Schwitzer G, Ravaud P. Three randomized controlled trials evaluating the impact of “spin” in health news stories reporting studies of pharmacologic treatments on patients'/caregivers' interpretation of treatment benefit. BMC Med 2019;17:105. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [15].Boutron I, Tubach F, Giraudeau B, Ravaud P. Blinding was judged more difficult to achieve and maintain in nonpharmacologic than pharmacologic trials. J Clin Epidemiol 2004;57:543–50. [DOI] [PubMed] [Google Scholar]
- [16].Braithwaite FA, Walters JL, Moseley GL, Williams MT, McEvoy MP. Towards more homogenous and rigorous methods in sham-controlled dry needling trials: two Delphi surveys. Physiotherapy 2020;106:12–23. [DOI] [PubMed] [Google Scholar]
- [17].Brewin CR, Bradley C. Patient preferences and randomised clinical trials. BMJ 1989;299:313–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [18].Buchbinder R, Underwood M, Hartvigsen J, Maher CG. The Lancet Series call to action to reduce low value care for low back pain: an update. PAIN 2020;161(suppl 1):S57–S64. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [19].Buyse M, Saad ED, Burzykowski T. Adaptive randomization of neratinib in early breast cancer. N Engl J Med 2016;375:1591–2. [DOI] [PubMed] [Google Scholar]
- [20].Chappell R, Karrison T. Continuous Bayesian adaptive randomization based on event times with covariates by Cheung et al., statistics in medicine, 2006; 25:55–70. Stat Med 2007;26:3050–2. [DOI] [PubMed] [Google Scholar]
- [21].Cherkin D, Balderson B, Wellman R, Hsu C, Sherman KJ, Evers SC, Hawkes R, Cook A, Levine MD, Piekara D, Rock P, Estlin KT, Brewer G, Jensen M, LaPorte A-M, Yeoman J, Sowden G, Hill JC, Foster NE. Effect of low back pain risk-stratification strategy on patient outcomes and care processes: the MATCH randomized trial in primary care. J Gen Intern Med 2018;33:1324–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [22].Cherkin DC. Are methods for evaluating medications appropriate for evaluating nonpharmacological treatments for pain?—challenges for an emerging field of research. JAMA Intern Med 2021;181:328–9. [DOI] [PubMed] [Google Scholar]
- [23].Cherkin DC, Sherman KJ, Balderson BH, Cook AJ, Anderson ML, Hawkes RJ, Hansen KE, Turner JA. Effect of mindfulness-based stress reduction vs cognitive behavioral therapy or usual care on back pain and functional limitations in adults with chronic low back pain: a randomized clinical trial. JAMA 2016;315:1240–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [24].Christian JB, Brouwer ES, Girman CJ, Bennett D, Davis KJ, Dreyer NA. Masking in pragmatic trials: who, what, and when to blind. Ther Innov Regul Sci 2020;54:431–6. [DOI] [PubMed] [Google Scholar]
- [25].Clark L, Dean A, Mitchell A, Torgerson DJ. Envelope use and reporting in randomised controlled trials: a guide for researchers. Res Methods Med Health Sci 2021;2:2–11. [Google Scholar]
- [26].Coffey CS. Adaptive Design Across Stages of Therapeutic Development. In: Ravina B, Cummings J, McDermott M, Poole RM, editors. Clinical Trials in Neurology: Design, Conduct, Analysis. Cambridge: Cambridge University Press, 2012. pp. 91–100. [Google Scholar]
- [27].Coffey CS, Levin B, Clark C, Timmerman C, Wittes J, Gilbert P, Harris S. Overview, hurdles, and future work in adaptive designs: perspectives from a National Institutes of Health-funded workshop. Clin Trials 2012;9:671–80. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [28].Cohen SP, Bicket MC, Kurihara C, Griffith SR, Fowler IM, Jacobs MB, Liu R, Anderson White M, Verdun AJ, Hari SB, Fisher RL, Pasquina PF, Vorobeychik Y. Fluoroscopically guided vs landmark-guided sacroiliac joint injections: a randomized controlled study. Mayo Clinic Proc 2019;94:628–42. [DOI] [PubMed] [Google Scholar]
- [29].Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings, 1979. Available at: https://www.scholars.northwestern.edu/en/publications/quasi-experimentation-design-and-analysis-issues-for-field-settin. Accessed December 11, 2020. [Google Scholar]
- [30].Cronbach LJ, Shapiro K. Designing Evaluations of Educational and Social Programs. San Francisco, CA: Jossey-Bass, 1982. [Google Scholar]
- [31].Darlow B, Stanley J, Dean S, Abbott JH, Garrett S, Wilson R, Mathieson F, Dowell A. The Fear Reduction Exercised Early (FREE) approach to management of low back pain in general practice: a pragmatic cluster-randomised controlled trial. PLoS Med 2019;16:e1002897. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [32].Dekkers OM, von Elm E, Algra A, Romijn JA, Vandenbroucke JP. How to assess the external validity of therapeutic trials: a conceptual approach. Int J Epidemiol 2010;39:89–94. [DOI] [PubMed] [Google Scholar]
- [33].Delitto A, Patterson CG, Stevans JM, Brennan GP, Wegener ST, Morrisette DC, Beneciuk JM, Freel JA, Minick KI, Hunter SJ, Ephraim PL, Friedman M, Simpson KN, George SZ, Daley KN, Albert MC, Tamasy M, Cash J, Lake DS, Freburger JK, Greco CM, Hough LJ, Jeong J-H, Khoja SS, Schneider MJ, Sowa GA, Spigle WA, Wasan AD, Adams WG, Lemaster CM, Mishuris RG, Plumb DL, Williams CT, Saper RB. Study protocol for targeted interventions to prevent chronic low back pain in high-risk patients: a multi-site pragmatic cluster randomized controlled trial (TARGET Trial). Contemp Clin Trials 2019;82:66–76. [DOI] [PubMed] [Google Scholar]
- [34].Dodd LE, Freidlin B, Korn EL. Platform trials—beware the noncomparable control group. N Engl J Med 2021;384:1572–3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [35].Dominick CH, Blyth FM, Nicholas MK. Unpacking the burden: understanding the relationships between chronic pain and comorbidity in the general population. PAIN 2012;153:293–304. [DOI] [PubMed] [Google Scholar]
- [36].Doorenbos A. Hybrid effectiveness-implementation trial of guided relaxation and acupuncture for chronic sickle cell disease pain. clinicaltrials.gov, 2022. Available at: https://clinicaltrials.gov/ct2/show/record/NCT04906447. Accessed July 7, 2022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [37].Draper-Rodi J, Vaucher P, Hohenschurz-Schmidt D, Morin C, Thomson OP. 4 M's to make sense of evidence—avoiding the propagation of mistakes, misinterpretation, misrepresentation and misinformation. Int J Osteopathic Med 2022;44:29–35. [Google Scholar]
- [38].Dworkin RH, Evans SR, Mbowe O, McDermott MP. Essential statistical principles of clinical trials of pain treatments. PAIN Rep 2021;6:e863. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [39].Dworkin RH, Turk DC, Farrar JT, Haythornthwaite JA, Jensen MP, Katz NP, Kerns RD, Stucki G, Allen RR, Bellamy N, Carr DB, Chandler J, Cowan P, Dionne R, Galer BS, Hertz S, Jadad AR, Kramer LD, Manning DC, Martin S, McCormick CG, McDermott MP, McGrath P, Quessy S, Rappaport BA, Robbins W, Robinson JP, Rothman M, Royal MA, Simon L, Stauffer JW, Stein W, Tollett J, Wernicke J, Witter J. Core outcome measures for chronic pain clinical trials: IMMPACT recommendations. PAIN 2005;113:9–19. [DOI] [PubMed] [Google Scholar]
- [40].Dworkin RH, Turk DC, Peirce-Sandner S, Burke LB, Farrar JT, Gilron I, Jensen MP, Katz NP, Raja SN, Rappaport BA, Rowbotham MC, Backonja M-M, Baron R, Bellamy N, Bhagwagar Z, Costello A, Cowan P, Fang WC, Hertz S, Jay GW, Junor R, Kerns RD, Kerwin R, Kopecky EA, Lissin D, Malamut R, Markman JD, McDermott MP, Munera C, Porter L, Rauschkolb C, Rice ASC, Sampaio C, Skljarevski V, Sommerville K, Stacey BR, Steigerwald I, Tobias J, Trentacosti AM, Wasan AD, Wells GA, Williams J, Witter J, Ziegler D. Considerations for improving assay sensitivity in chronic pain clinical trials: IMMPACT recommendations. PAIN 2012;153:1148–58. [DOI] [PubMed] [Google Scholar]
- [41].Eapen ZJ, Lauer MS, Temple RJ. The imperative of overcoming barriers to the conduct of large, simple trials. JAMA 2014;311:1397–8. [DOI] [PubMed] [Google Scholar]
- [42].Eklund A, Jensen I, Lohela-Karlsson M, Hagberg J, Leboeuf-Yde C, Kongsted A, Bodin L, Axén I. The Nordic Maintenance Care program: effectiveness of chiropractic maintenance care versus symptom-guided treatment for recurrent and persistent low back pain—a pragmatic randomized controlled trial. PLoS One 2018;13:e0203029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [43].de-Figueiredo FED, Lima LF, Lima GS, Oliveira LS, Ribeiro MA, Brito-Junior M, Correa MB, Sousa-Neto M, Faria e Silva AL. Apical periodontitis healing and postoperative pain following endodontic treatment with a reciprocating single-file, single-cone approach: a randomized controlled pragmatic clinical trial. PLoS One 2020;15:e0227347. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [44].Flynn D, Eaton LH, Langford DJ, Ieronimakis N, McQuinn H, Burney RO, Holmes SL, Doorenbos AZ. A SMART design to determine the optimal treatment of chronic pain among military personnel. Contemp Clin Trials 2018;73:68–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [45].Food and Drug Administration. The drug development process. 2020. Available at: https://www.fda.gov/patients/learn-about-drug-and-device-approvals/drug-development-process. Accessed January 22, 2021. [Google Scholar]
- [46].Ford I, Norrie J. Pragmatic trials. N Engl J Med 2016;375:454–63. [DOI] [PubMed] [Google Scholar]
- [47].Freedland KE, King AC, Ambrosius WT, Mayo-Wilson E, Mohr DC, Czajkowski SM, Thabane L, Collins LM, Rebok GW, Treweek SP, Cook TD, Edinger JD, Stoney CM, Campo RA, Young-Hyman D, Riley WT. The selection of comparators for randomized controlled trials of health-related behavioral interventions: recommendations of an NIH expert panel. J Clin Epidemiol 2019;110:74–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [48].Friedman BW, Cisewski D, Irizarry E, Davitt M, Solorzano C, Nassery A, Pearlman S, White D, Gallagher EJ. A randomized, double-blind, placebo-controlled trial of naproxen with or without orphenadrine or methocarbamol for acute low back pain. Ann Emerg Med 2018;71:348–56.e5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [49].Fritz JM, Rhon DI, Teyhen DS, Kean J, Vanneman ME, Garland EL, Lee IE, Thorp RE, Greene TH. A sequential multiple-assignment randomized trial (SMART) for stepped care management of low back pain in the military health system: a trial protocol. Pain Med 2020;21(suppl 2):S73–S82. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [50].Gilron I, Blyth F, Smith BH. Translating clinical trials into improved real-world management of pain: convergence of translational, population-based, and primary care research. PAIN 2020;161:36–42. [DOI] [PubMed] [Google Scholar]
- [51].Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: what can it tell us about approaches to chronic illness management? Patient Educ Couns 2001;44:119–27. [DOI] [PubMed] [Google Scholar]
- [52].Gluud LL. Bias in clinical intervention research. Am J Epidemiol 2006;163:493–501. [DOI] [PubMed] [Google Scholar]
- [53].Gordon KS, Peduzzi P, Kerns RD. Designing trials with purpose: pragmatic clinical trials of nonpharmacological approaches for pain management. Pain Med 2020;21(suppl 2):S7–S12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [54].Griswold D, Learman K, Kolber MJ, O'Halloran B, Cleland JA. Pragmatically applied cervical and thoracic nonthrust manipulation versus thrust manipulation for patients with mechanical neck pain: a multicenter randomized clinical trial. J Orthop Sports Phys Ther 2018;48:137–45. [DOI] [PubMed] [Google Scholar]
- [55].Haynes B. Can it work? Does it work? Is it worth it?: the testing of healthcare interventions is evolving. BMJ 1999;319:652–3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [56].HEAL Initiative Research Plan. Research plan|NIH HEAL initiative, 2019. Available at: https://heal.nih.gov/about/research-plan. Accessed October 23, 2019. [Google Scholar]
- [57].Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ 2015;350:h391. [DOI] [PubMed] [Google Scholar]
- [58].Hey SP, Kimmelman J. Are outcome-adaptive allocation trials ethical? Clin Trials 2015;12:102–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [59].Higgins JPT, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JAC; Cochrane Bias Methods Group, Cochrane Statistical Methods Group. The Cochrane collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [60].Hill JC, Dunn KM, Lewis M, Mullis R, Main CJ, Foster NE, Hay EM. A primary care back pain screening tool: identifying patient subgroups for initial treatment. Arthritis Rheum 2008;59:632–41. [DOI] [PubMed] [Google Scholar]
- [61].Hohenschurz-Schmidt D, Draper-Rodi J, Vase L, Scott W, McGregor A, Soliman N, MacMillan A, Olivier A, Cherian CA, Corcoran D, Abbey H, Freigang S, Chan J, Phalip J, Nørgaard Sørensen L, Delafin M, Baptista M, Medforth NR, Ruffini N, Skøtt Andresen S, Ytier S, Ali D, Hobday H, Santosa AANAA, Vollert J, Rice ASC. Blinding and sham control methods in trials of physical, psychological, and self-management interventions for pain (article I): a systematic review and description of methods. PAIN 2023;164:469–84. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [62].Hohenschurz-Schmidt D, Kleykamp BA, Draper-Rodi J, Vollert J, Chan J, Ferguson M, McNicol E, Phalip J, Evans SR, Turk DC, Dworkin RH, Rice ASC. Pragmatic trials of pain therapies: a systematic review of methods. PAIN 2022;163:21–46. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [63].Hohenschurz-Schmidt D, Vase L, Scott W, Annoni M, Ajayi OK, Barth J, Bennell K, Berna C, Bialosky J, Braithwaite F, Finnerup NB, Williams AC de C, Carlino E, Cerritelli F, Chaibi A, Cherkin D, Colloca L, Côtè P, Darnall BD, Evans R, Fabre L, Faria V, French S, Gerger H, Häuser W, Hinman RS, HO D, Janssens T, Jensen K, Lunde SJ, Keefe F, Kerns RD, Koechlin H, Kongsted A, Michener LA, Moerman DE, Musial F, Newell D, Nicholas M, Palermo PM, Palermo S, Peerdeman KJ, Pogatzki-Zahn EM, Puhl AA, Roberts L, Rossettini G, Johnston C, Matthiesen ST, Underwood M, Vaucher P, Vollert J, Wartolowska K, Weimer K, Werner CP, Rice ASC, Draper-Rodi J. Recommendations for the Development, Implementation, and Reporting of Control Interventions in Efficacy and Mechanistic Trials of Physical, Psychological, and Self-Management Therapies - The CoPPS Statement. BMJ 2023. (accepted for publication). [DOI] [PubMed] [Google Scholar]
- [64].Homer CSE. Using the Zelen design in randomized controlled trials: debates and controversies. J Adv Nurs 2002;38:200–7. [DOI] [PubMed] [Google Scholar]
- [65].Howland RH. Sequenced treatment alternatives to relieve depression (STAR*D)—part 2: study outcomes. J Psychosoc Nurs Ment Health Serv 2008;46:21–4. [DOI] [PubMed] [Google Scholar]
- [66].Huebschmann AG, Leavitt IM, Glasgow RE. Making health research matter: a call to increase attention to external validity. Annu Rev Public Health 2019;40:45–63. [DOI] [PubMed] [Google Scholar]
- [67].Janevic MR, McLaughlin SJ, Heapy AA, Thacker C, Piette JD. Racial and socioeconomic disparities in disabling chronic pain: findings from the health and retirement study. J Pain 2017;18:1459–67. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [68].Jarvik JG, Meier EN, James KT, Gold LS, Tan KW, Kessler LG, Suri P, Kallmes DF, Cherkin DC, Deyo RA, Sherman KJ, Halabi SS, Comstock BA, Luetmer PH, Avins AL, Rundell SD, Griffith B, Friedly JL, Lavallee DC, Stephens KA, Turner JA, Bresnahan BW, Heagerty PJ. The effect of including benchmark prevalence data of common imaging findings in spine image reports on health care utilization among adults undergoing spine imaging: a stepped-wedge randomized clinical trial. JAMA Netw Open 2020;3:e2015713. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [69].Jay MA, Bendayan R, Cooper R, Muthuri SG. Lifetime socioeconomic circumstances and chronic pain in later adulthood: findings from a British birth cohort study. BMJ Open 2019;9:e024250. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [70].Johnson KE, Neta G, Dember LM, Coronado GD, Suls J, Chambers DA, Rundell S, Smith DH, Liu B, Taplin S, Stoney CM, Farrell MM, Glasgow RE. Use of PRECIS ratings in the National Institutes of Health (NIH) health care systems research collaboratory. Trials 2016;17:32. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [71].Kaptchuk TJ, Hemond CC, Miller FG. Placebos in chronic pain: evidence, theory, ethics, and use in clinical practice. BMJ 2020;370:m1668. [DOI] [PubMed] [Google Scholar]
- [72].Katz N, Dworkin RH, North R, Thomson S, Eldabe S, Hayek SM, Kopell BH, Markman J, Rezai A, Taylor RS, Turk DC, Buchser E, Fields H, Fiore G, Ferguson M, Gewandter J, Hilker C, Jain R, Leitner A, Loeser J, McNicol E, Nurmikko T, Shipley J, Singh R, Trescot A, van Dongen R, Venkatesan L. Research design considerations for randomized controlled trials of spinal cord stimulation for pain: IMMPACT/ION/INS recommendations. PAIN 2021;162:1935–56. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [73].Keefe FJ, Jensen MP, de C Williams AC, George SZ. The Yin and Yang of pragmatic clinical trials of behavioral interventions for chronic pain: balancing design features to maximize impact. PAIN 2022;163:1215–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [74].Kelleher SA, Dorfman CS, Plumb Vilardaga JC, Majestic C, Winger J, Gandhi V, Nunez C, Van Denburg A, Shelby RA, Reed SD, Murphy S, Davidian M, Laber EB, Kimmick GG, Westbrook KW, Abernethy AP, Somers TJ. Optimizing delivery of a behavioral pain intervention in cancer patients using a sequential multiple assignment randomized trial SMART. Contemp Clin Trials 2017;57:51–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [75].Kerns RD, Davis AF, Fritz JM, Keefe FJ, Peduzzi P, Rhon DI, Taylor SL, Vining R, Yu Q, Zeliadt SB, George SZ. Intervention fidelity in pain pragmatic trials for nonpharmacologic pain management: nuanced considerations for determining PRECIS-2 flexibility in delivery and adherence. J Pain 2022. doi: 10.1016/j.jpain.2022.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [76].Kleykamp BA, Ferguson MC, McNicol E, Bixho I, Arnold LM, Edwards RR, Fillingim R, Grol-Prokopczyk H, Turk DC, Dworkin RH. The prevalence of psychiatric and chronic pain comorbidities in fibromyalgia: an ACTTION systematic review. Semin Arthritis Rheum 2021;51:166–74. [DOI] [PubMed] [Google Scholar]
- [77].Korn EL, Freidlin B. Outcome-adaptive randomization: is it useful? J Clin Oncol 2011;29:771–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [78].Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, Nierenberg A, Quitkin FM, Sackeim HA, Thase ME, Trivedi M. Strengthening clinical effectiveness trials: equipoise-stratified randomization. Biol Psychiatry 2001;50:792–801. [DOI] [PubMed] [Google Scholar]
- [79].Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A “SMART” design for building individualized treatment sequences. Annu Rev Clin Psychol 2012;8:21–48. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [80].Lesaffre E. Superiority, equivalence, and non-inferiority trials. Bull NYU Hosp Jt Dis 2008;66:150–4. [PubMed] [Google Scholar]
- [81].Lewis RJ. The pragmatic clinical trial in a learning health care system. Clin Trials 2016;13:484–92. [DOI] [PubMed] [Google Scholar]
- [82].Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ 2015;350:h2147. [DOI] [PubMed] [Google Scholar]
- [83].Malik I, Burnett S, Webster-Smith M, Morden J, Ereira S, Gillman A, Lewis R, Hall E, Bliss J, Snowdon C. Benefits and challenges of electronic data capture (EDC) systems versus paper case report forms. Trials 2015;16:P37. [Google Scholar]
- [84].Maly A, Vallerand AH. Neighborhood, socioeconomic, and racial influence on chronic pain. Pain Manage Nurs 2018;19:14–22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [85].Matthews J. Multi-period crossover trials. Stat Methods Med Res 1994;3:383–405. [DOI] [PubMed] [Google Scholar]
- [86].McKee MD, Nielsen A, Anderson B, Chuang E, Connolly M, Gao Q, Gil EN, Lechuga C, Kim M, Naqvi H, Kligler B. Individual vs. group delivery of acupuncture therapy for chronic musculoskeletal pain in urban primary care-a randomized trial. J Gen Intern Med 2020;35:1227–37. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [87].Mdege ND, Man M-S, Taylor nee Brown CA, Torgerson DJ. Systematic review of stepped wedge cluster randomized trials shows that design is particularly used to evaluate interventions during routine implementation. J Clin Epidemiol 2011;64:936–48. [DOI] [PubMed] [Google Scholar]
- [88].Meske DS, Vaughn BJ, Kopecky EA, Katz N. Number of clinical trial study sites impacts observed treatment effect size: an analysis of randomized controlled trials of opioids for chronic pain. J Pain Res 2019;12:3161–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [89].Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [90].Moore RA, Wiffen PJ, Eccleston C, Derry S, Baron R, Bell RF, Furlan AD, Gilron I, Haroutounian S, Katz NP, Lipman AG, Morley S, Peloso PM, Quessy SN, Seers K, Strassels SA, Straube S. Systematic review of enriched enrolment, randomised withdrawal trial designs in chronic pain: a new framework for design and reporting. PAIN 2015;156:1382–95. [DOI] [PubMed] [Google Scholar]
- [91].Musial F. Acupuncture for the treatment of pain—a mega-placebo? Front Neurosci 2019;13:1110. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [92].Nicholls SG, Zwarenstein M, Hey SP, Giraudeau B, Campbell MK, Taljaard M. The importance of decision intent within descriptions of pragmatic trials. J Clin Epidemiol 2020;125:30–7. [DOI] [PubMed] [Google Scholar]
- [93].Noll E, Shodhan S, Romeiser JL, Madariaga MC, Page C, Santangelo D, Guo X, Pryor AD, Gan TJ, Bennett-Guerrero E. Efficacy of acupressure on quality of recovery after surgery: randomised controlled trial. Eur J Anaesthesiol 2019;36:557–65. [DOI] [PubMed] [Google Scholar]
- [94].Page SJ, Persch AC. Recruitment, retention, and blinding in clinical trials. Am J Occup Ther 2013;67:154–61. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [95].Pain management collaboratory—supporting research in pain management for veterans and military service members. Available at: https://painmanagementcollaboratory.org/. Accessed July 10, 2022. [Google Scholar]
- [96].Palermo TM, de la Vega R, Murray C, Law E, Zhou C. A digital health psychological intervention (WebMAP Mobile) for children and adolescents with chronic pain: results of a hybrid effectiveness-implementation stepped-wedge cluster randomized trial. PAIN 2020;161:2763–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [97].Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, Holmes J, Mander AP, Odondi L, Sydes MR, Villar SS, Wason JMS, Weir CJ, Wheeler GM, Yap C, Jaki T. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med 2018;16:29. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [98].Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci 2011;13:217–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [99].Power M, Hopayian K. Exposing the evidence gap for complementary and alternative medicine to be integrated into science-based medicine. J R Soc Med 2011;104:155–61. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [100].Pragmatic Clinical Studies. Pragmatic clinical studies: PCORI, 2016. Available at: https://www.pcori.org/research/about-our-research/pragmatic-clinical-studies. Accessed July 10, 2022. [Google Scholar]
- [101].PRECIS-2 toolkit. Available at: https://precis-2.org/Help/Documentation/Toolkit. Accessed July 11, 2022. [Google Scholar]
- [102].Proschan M, Evans S. Resist the temptation of response-adaptive randomization. Clin Infect Dis 2020;71:3002–4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [103].Rasmussen CDN, Holtermann A, Bay H, Søgaard K, Birk Jørgensen M. A multifaceted workplace intervention for low back pain in nurses' aides: a pragmatic stepped wedge cluster randomised controlled trial. PAIN 2015;156:1786–94. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [104].Relton C, Torgerson D, O'Cathain A, Nicholl J. Rethinking pragmatic randomised controlled trials: introducing the “cohort multiple randomised controlled trial” design. BMJ 2010;340:c1066. [DOI] [PubMed] [Google Scholar]
- [105].Reynolds RF, Lem JA, Gatto NM, Eng SM. Is the large simple trial design used for comparative, post-approval safety research? Drug Saf 2011;34:799–820. [DOI] [PubMed] [Google Scholar]
- [106].Rigoard P, Basu S, Desai M, Taylor R, Annemans L, Tan Y, Johnson MJ, Van den Abeele C, North R. Multicolumn spinal cord stimulation for predominant back pain in failed back surgery syndrome patients: a multicenter randomized controlled trial. PAIN 2019;160:1410–20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [107].Roehr B. The appeal of large simple trials. BMJ 2013;346:f1317. [DOI] [PubMed] [Google Scholar]
- [108].Rothrock JF, Adams AM, Lipton RB, Silberstein SD, Jo E, Zhao X, Blumenfeld AM. FORWARD study: evaluating the comparative effectiveness of OnabotulinumtoxinA and Topiramate for headache prevention in adults with chronic migraine. Headache 2019;59:1700–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [109].Rothwell PM. Commentary: external validity of results of randomized trials: disentangling a complex concept. Int J Epidemiol 2010;39:94–6. [DOI] [PubMed] [Google Scholar]
- [110].Rothwell PM. External validity of randomised controlled trials: “To whom do the results of this trial apply?”. Lancet 2005;365:82–93. [DOI] [PubMed] [Google Scholar]
- [111].Roundtable on Value and Science-Driven Health Care, Forum on Drug Discovery, Development, and Translation, Board on Health Sciences Policy, Institute of Medicine. Large simple trials and knowledge generation in a learning health system: workshop summary. Washington, DC: National Academies Press (US), 2013. Available at: http://www.ncbi.nlm.nih.gov/books/NBK201274/. Accessed February 21, 2021. [PubMed] [Google Scholar]
- [112].Rowbotham MC, Gilron I, Glazer C, Rice ASC, Smith BH, Stewart WF, Wasan AD. Can pragmatic trials help us better understand chronic pain and improve treatment? PAIN 2013;154:643–6. [DOI] [PubMed] [Google Scholar]
- [113].Rush AJ, Fava M, Wisniewski SR, Lavori PW, Trivedi MH, Sackeim HA, Thase ME, Nierenberg AA, Quitkin FM, Kashner TM, Kupfer DJ, Rosenbaum JF, Alpert J, Stewart JW, McGrath PJ, Biggs MM, Shores-Wilson K, Lebowitz BD, Ritz L, Niederehe G; STAR*D Investigators Group. Sequenced treatment alternatives to relieve depression (STAR*D): rationale and design. Controlled Clin Trials 2004;25:119–42. [DOI] [PubMed] [Google Scholar]
- [114].Rush AJ, Warden D, Wisniewski SR, Fava M, Trivedi MH, Gaynes BN, Nierenberg AA. STAR*D: revising conventional wisdom. CNS Drugs 2009;23:627–47. [DOI] [PubMed] [Google Scholar]
- [115].Saville BR, Berry SM. Efficiencies of platform clinical trials: a vision of the future. Clin Trials 2016;13:358–66. [DOI] [PubMed] [Google Scholar]
- [116].Saxman SB. Ethical considerations for outcome-adaptive trial designs: a clinical researcher's perspective. Bioethics 2015;29:59–65. [DOI] [PubMed] [Google Scholar]
- [117].Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis 1967;20:637–48. [DOI] [PubMed] [Google Scholar]
- [118].Sepehrvand N, Alemayehu W, Das D, Gupta AK, Gouda P, Ghimire A, Du AX, Hatami S, Babadagli HE, Verma S, Kashour Z, Ezekowitz JA. Trends in the explanatory or pragmatic nature of cardiovascular clinical trials over 2 decades. JAMA Cardiol 2019;4:1122–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [119]. Sherman RE, Anderson SA, Dal Pan GJ, Gray GW, Gross T, Hunter NL, LaVange L, Marinac-Dabic D, Marks PW, Robb MA, Shuren J, Temple R, Woodcock J, Yue LQ, Califf RM. Real-world evidence—what is it and what can it tell us? N Engl J Med 2016;375:2293–7.
- [120]. Simon GE, Shortreed SM, DeBar LL. Zelen design clinical trials: why, when, and how. Trials 2021;22:541.
- [121]. Singla NK, Desjardins PJ, Chang PD. A comparison of the clinical and experimental characteristics of four acute surgical pain models: dental extraction, bunionectomy, joint replacement, and soft tissue surgery. PAIN 2014;155:441–56.
- [122]. Skolasky RL, Wegener ST, Aaron RV, Ephraim P, Brennan G, Greene T, Lane E, Minick K, Hanley AW, Garland EL, Fritz JM. The OPTIMIZE study: protocol of a pragmatic sequential multiple assessment randomized trial of nonpharmacologic treatment for chronic, nonspecific low back pain. BMC Musculoskelet Disord 2020;21:293.
- [123]. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng H-Y, Corbett MS, Eldridge SM, Emberson JR, Hernán MA, Hopewell S, Hróbjartsson A, Junqueira DR, Jüni P, Kirkham JJ, Lasserson T, Li T, McAleenan A, Reeves BC, Shepperd S, Shrier I, Stewart LA, Tilling K, White IR, Whiting PF, Higgins JPT. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898.
- [124]. Sullivan MD, Ballantyne JC. Must we reduce pain intensity to treat chronic pain? PAIN 2016;157:65–9.
- [125]. Temple R. Enrichment of clinical study populations. Clin Pharmacol Ther 2010;88:774–8.
- [126]. Thall P, Fox P, Wathen J. Statistical controversies in clinical research: scientific and ethical problems with adaptive randomization in comparative clinical trials. Ann Oncol 2015;26:1621–8.
- [127]. Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, Tunis S, Bergel E, Harvey I, Magid DJ, Chalkidou K. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009;62:464–75.
- [128]. Torgerson DJ, Sibbald B. Understanding controlled trials. What is a patient preference trial? BMJ 1998;316:360.
- [129]. Turk DC, Dworkin RH. What should be the core outcomes in chronic pain clinical trials? Arthritis Res Ther 2004;6:151–4.
- [130]. U.S. Food and Drug Administration. Enrichment strategies for clinical trials to support approval of human drugs and biological products. US Food and Drug Administration, Center for Drug Evaluation and Research, 2019. Available at: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enrichment-strategies-clinical-trials-support-approval-human-drugs-and-biological-products. Accessed March 23, 2021.
- [131]. Van Bulck L, Wampers M, Moons P. Research Electronic Data Capture (REDCap): tackling data collection, management, storage, and privacy challenges. Eur J Cardiovasc Nurs 2022;21:85–91.
- [132]. Vandvik PO, Lähdeoja T, Ardern C, Buchbinder R, Moro J, Brox JI, Burgers J, Hao Q, Karjalainen T, van den Bekerom M, Noorduyn J, Lytvyn L, Siemieniuk RAC, Albin A, Shunjie SC, Fisch F, Proulx L, Guyatt G, Agoritsas T, Poolman RW. Subacromial decompression surgery for adults with shoulder pain: a clinical practice guideline. BMJ 2019;364:l294.
- [133]. Verra ML, Angst F, Brioschi R, Lehmann S, Benz T, Aeschlimann A, De Bie RA, Staal JB. Effectiveness of subgroup-specific pain rehabilitation: a randomized controlled trial in patients with chronic back pain. Eur J Phys Rehabil Med 2018;54:358–70.
- [134]. Wang Y, Lombard C, Hussain SM, Harrison C, Kozica S, Brady SRE, Teede H, Cicuttini FM. Effect of a low-intensity, self-management lifestyle intervention on knee pain in community-based young to middle-aged rural women: a cluster randomised controlled trial. Arthritis Res Ther 2018;20:74.
- [135]. Wasan AD. Efficacy vs effectiveness and explanatory vs pragmatic: where is the balance point in pain medicine research? Pain Med 2014;15:539–40.
- [136]. Wason J, Stallard N, Bowden J, Jennison C. A multi-stage drop-the-losers design for multi-arm clinical trials. Stat Methods Med Res 2017;26:508–24.
- [137]. Wellek S, Blettner M. On the proper use of the crossover design in clinical trials: part 18 of a series on evaluation of scientific publications. Dtsch Arztebl Int 2012;109:276–81.
- [138]. Williams ACdC, Fisher E, Hearn L, Eccleston C. Evidence-based psychological interventions for adults with chronic pain: precision, control, quality, and equipoise. PAIN 2021;162:2149–53.
- [139]. Williams A, Wiggers J, O'Brien KM, Wolfenden L, Yoong SL, Hodder RK, Lee H, Robson EK, McAuley JH, Haskins R, Kamper SJ, Rissel C, Williams CM. Effectiveness of a healthy lifestyle intervention for chronic low back pain: a randomised controlled trial. PAIN 2018;159:1137–46.
- [140]. Wright PJ, Pinto BM, Corbett CF. Balancing internal and external validity using PRECIS-2 and RE-AIM: case exemplars. West J Nurs Res 2021;43:163–71.
- [141]. Yordanov Y, Dechartres A, Porcher R, Boutron I, Altman DG, Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ 2015;350:h809.
- [142]. Yu SP, Ferreira ML, Duong V, Caroupapoullé J, Arden NK, Bennell KL, Hunter DJ. Responsiveness of an activity tracker as a measurement tool in a knee osteoarthritis clinical trial (ACTIVe-OA study). Ann Phys Rehabil Med 2022;65:101619.
- [143]. Zarin DA, Goodman SN, Kimmelman J. Harms from uninformative clinical trials. JAMA 2019;322:813–4.
- [144]. Zhang J, Sun L, Liu Y, Wang H, Sun N, Zhang P. Mobile device–based electronic data capture system used in a clinical randomized controlled trial: advantages and challenges. J Med Internet Res 2017;19:e66.
- [145]. Zhuang Q, Tao L, Lin J, Jin J, Qian W, Bian Y, Li Y, Dong Y, Peng H, Li Y, Fan Y, Wang W, Feng B, Gao N, Sun T, Lin J, Zhang M, Yan S, Shen B, Pei F, Weng X. Postoperative intravenous parecoxib sodium followed by oral celecoxib post total knee arthroplasty in osteoarthritis patients (PIPFORCE): a multicentre, double-blind, randomised, placebo-controlled trial. BMJ Open 2020;10:e030501.
- [146]. Zucker DR, Ruthazer R, Schmid CH. Individual (N-of-1) trials can be combined to give population comparative treatment effect estimates: methodologic considerations. J Clin Epidemiol 2010;63:1312–23.
- [147]. Zwarenstein M, Thorpe K, Treweek S, Loudon K. PRECIS-2 for retrospective assessment of RCTs in systematic reviews. J Clin Epidemiol 2020;126:202–6.
- [148]. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, Oxman AD, Moher D; CONSORT Group; Pragmatic Trials in Healthcare (Practihc) Group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
- [149]. Zwarenstein M, Treweek S, Loudon K. PRECIS-2 helps researchers design more applicable RCTs while CONSORT extension for pragmatic trials helps knowledge users decide whether to apply them. J Clin Epidemiol 2017;84:27–9.