Table 1. Comparison of pragmatic studies and traditional clinical efficacy studies.
| | Pragmatic study | Traditional clinical efficacy study |
|---|---|---|
Stakeholder involvement | Engaged in all study phases, including study design, conducting the study, collecting data, interpreting results, and disseminating findings | Limited engagement, often only in response to investigator-initiated ideas or as study subjects
Research design | Addresses internal and external validity, design fidelity and local adaptation, real-life settings and populations, and contextual assessments | Focus on limiting threats to internal validity, typically uses randomized controlled trial, participants and settings typically homogeneous
Outcomes | Reach, effectiveness, adoption, implementation, comparative effectiveness, sustainability | Efficacy, mechanism identification, component analysis |
Measures | Brief, valid, and actionable with rapid clinical utility; feasible in real-world and low-resource settings | Validated measures that minimize bias; focus on internal consistency and theory rather than clinical relevance
Costs | Assessments include intervention costs and replication costs in relation to outcomes | Often not collected or reported |
Data source | May include existing data (electronic health records, administrative data) and brief patient reports | Data generation and collection conducted as part of the clinical trial
Analyses | Process and outcome analyses relevant to stakeholders and from different perspectives | Specified a priori and typically restricted to investigator hypotheses |
Availability of findings | Rapid learning and implementation | Delay between trial completion and availability of findings
Source: Krist et al. Designing a valid randomized pragmatic primary care implementation trial: the my own health report (MOHR) project. Implement Sci 2013; 8: 73.