Front Health Serv. 2023 Jul 7;3:1162762. doi: 10.3389/frhs.2023.1162762

Table 4.

Summary of selected study designs with potential to respond to context during the research phases of protocol development, study execution, and analysis of findings.

Each design is summarized under four headings: Description; Responsiveness of the study design to context, rated for each of three research phases (protocol development, study execution, and analysis) as high (H), medium (M), or low (L); Considerations; and Examples from literature.
Participatory research
Description: Defined by various terms, including participatory action research, community-based participatory research, engaged scholarship, and integrated knowledge translation. It involves an approach that "partners the researcher and participants in a collaborative effort to address issues in specific systems" [(15) p.2] and aims "to foster democratic processes in the co-creation of knowledge" [(59) p.7].
Responsiveness to context (protocol development, study execution, analysis): H, H, H
Considerations: Engagement with intended end-users is a pre-requisite. The time and resources required to build authentic and trusting relationships between research team members need to be considered (61).
Examples from literature: (60, 62)

Realist evaluation
Description: A theory-driven approach guided by the question: what works, how, for whom, in what circumstances, and to what extent? It involves developing and testing explanatory theory based on context-mechanism-outcome configurations (CMOcs). These represent hypotheses about how a program works: outcomes (O) arise through the action of one or more underlying mechanisms (M) that only function in particular contexts (C) (63, 64). Typically undertaken iteratively to test and refine theoretical propositions over time.
Responsiveness to context (protocol development, study execution, analysis): H, H, H
Considerations: The theory-based approach of realist evaluation aligns with theory-informed and theory-informative implementation research and explicitly explores contextual influences on intervention outcomes (65). Development of CMOcs can be challenging (66) (see the sketch below). There are published examples of applying realist evaluation in implementation research, particularly to conduct process evaluations embedded within randomized controlled trials (37); however, process evaluation needs to be conducted prospectively to enable optimal responsiveness to context, and engagement with intended users of the research is important to articulate and refine program theory/ies. Whether a realist approach can be incorporated within randomized controlled trials is an area of debate (67, 68).
Examples from literature: (37, 69)

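To make the CMOc structure concrete, the following minimal sketch represents one configuration as a small data type. The class name, field names, and example values are illustrative assumptions, not a standard schema from the realist evaluation literature.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CMOConfiguration:
    """One hypothesized context-mechanism-outcome configuration (CMOc).

    Illustrative only: the class and field names are assumptions,
    not a published realist-evaluation schema.
    """
    context: str            # C: conditions under which the mechanism fires
    mechanism: str          # M: underlying response the program triggers
    outcome: str            # O: result attributed to the mechanism
    supported: Optional[bool] = None  # updated as evidence accumulates

# A program theory can be held as a set of such configurations and
# iteratively refined as each one is tested against the data.
program_theory = [
    CMOConfiguration(
        context="wards with strong nursing leadership",
        mechanism="staff feel ownership of audit-and-feedback data",
        outcome="sustained uptake of the care bundle",
    ),
]
print(program_theory[0])
```
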
Developmental evaluation
Description: Described as an extension of utilization-focused evaluation (70) that is informed by complexity science and systems thinking. The focus is on users and the real use of evaluation findings. This involves studying programs in context and understanding program activities as they operate in dynamic environments with complex interactions (71, 72).
Responsiveness to context (protocol development, study execution, analysis): H, H, H
Considerations: Well suited to early stages of implementation and where a need for implementation strategy adaptation is anticipated. Does not apply a conventional logic model, but applies systems thinking to map relationships, inter-connections, and assumptions about how change is expected to occur. Researchers need to be comfortable with uncertainty and be willing to change or abandon an intervention and/or implementation strategy mid-course if the data suggest another approach might be better. Detailed documentation throughout the study is important to capture decision points and feedback in a timely manner.
Examples from literature: (71, 73)

Ethnography
Description: With roots in anthropology, ethnography involves engagement with a small number of study settings to build relationships and undertake in-depth study. Data collection is typically iterative and involves qualitative methods such as observation, field notes, and interviews. As such, if conducted in a participatory way, it is potentially well suited to incorporating end-user perspectives and examining complex implementation processes and contextual influences on implementation (74).
Responsiveness to context (protocol development, study execution, analysis): H, H, H
Considerations: Evidence of increasing use in implementation research, although meanings of ethnography are contested, which can make it difficult to evaluate the rigour of the research (74). As with other participatory approaches, reflexivity is an important skill and practice for researchers undertaking ethnographic study, as is awareness of positionality (75).
Examples from literature: (76, 77)

Quality/rapid cycle improvement (single site or multi-site collaborative)
Description: Quality improvement (QI) involves a systematic and coordinated approach to solving a problem using specific methods and tools with the aim of bringing about a measurable improvement [(78) p.3]. QI collaboratives involve groups of professionals coming together in real time, either from within an organisation or across multiple organisations, to learn from and motivate each other to improve the quality of health services. Collaboratives often use a structured approach, such as setting targets and undertaking rapid cycles of change (79).
Responsiveness to context (protocol development, study execution, analysis): H, H, H
Considerations: Healthcare staff are likely to have existing knowledge and experience of quality improvement. There are recognized similarities between QI and implementation research and calls to align them more closely (80, 81). However, QI may lack a strong theory and evidence component compared with implementation science. Evidence on the impact of QI collaboratives is mixed, suggesting they "achieve positive – although limited and variable – improvements in processes of care and clinical outcomes" [(82) p.2]. There is evidence to suggest that participation in QI collaborative activities may improve problem-solving skills, teamwork, and shared leadership (83).
Examples from literature: (82, 84)

Case study (single site or multiple sites)
Description: Defined as "an empirical inquiry that investigates a contemporary phenomenon (the 'case') in depth and within its real-world context" [(85) p.18]. Typically, case studies are observational, aiming to understand phenomena and their causal mechanisms, including context. However, case study methods can vary from a more positivist to a more constructionist focus, which could influence the extent to which they can respond to context (86).
Responsiveness to context (protocol development, study execution, analysis): H, M, M
Considerations: When case study research is conducted prospectively, it is possible to identify and respond to contextual barriers and enablers during the study. Multi-site and longitudinal case studies (including studies of failure) are useful for capturing the dynamics of implementation and building theory (87). However, in the field of implementation science to date, case studies have been described "as a form of post hoc process evaluation, to disseminate how the delivery of an intervention is achieved, the mechanisms by which implementation strategies produce change, or how context impacts implementation and related outcomes" [(88) p.2].
Examples from literature: (87, 89)

Adaptive randomized controlled trial
Description: Also described as sequential trial designs, adaptive designs allow for staged modifications to key components of the implementation intervention according to pre-specified decision rules. Unlike conventional experimental designs, where learning typically occurs after the trial is completed, adaptive designs intend for continual learning as the data accumulate, hence the potential to respond to context (90). Examples include the Sequential Multiple Assignment Randomized Trial (SMART) design (54) and the Multiphase Optimization Strategy (MOST) (91).
Responsiveness to context (protocol development, study execution, analysis): M, M, M
Considerations: Adaptive designs have mostly been conducted in trials of clinical interventions, and there are relatively few published examples of adaptive implementation trials. Because interim data analyses inform decisions about modification, rapidly available and measurable outcome data are needed. Temporal trends are also important to consider and can add to the complexity of data analysis (92). An illustration of a pre-specified decision rule is given in the sketch below.
Examples from literature: (93, 94)

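As a concrete illustration of a pre-specified decision rule, the sketch below re-randomizes non-responding sites to an augmented strategy at an interim point, in the spirit of a two-stage SMART design. The threshold, strategy names, and the adherence measure are hypothetical assumptions, not taken from any published trial.

```python
import random

# Pre-specified decision rule for a two-stage SMART-style adaptive trial.
# The threshold and strategy names are illustrative assumptions.
RESPONSE_THRESHOLD = 0.7  # e.g., proportion of clinicians adhering at interim

def stage_two_assignment(site_id: str, interim_adherence: float) -> str:
    """Apply the pre-specified rule at the interim analysis.

    Responding sites continue the initial (low-intensity) strategy;
    non-responders are re-randomized between two augmentation options.
    """
    if interim_adherence >= RESPONSE_THRESHOLD:
        return "continue: audit-and-feedback"
    # Re-randomization must follow the rule fixed in the protocol,
    # not ad hoc judgement made after seeing the data.
    return random.choice(["add: practice facilitation",
                          "add: external coaching"])

print(stage_two_assignment("site-07", interim_adherence=0.55))
```
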
Stepped wedge randomized controlled trial
Description: Following a baseline period, the implementation intervention is sequentially rolled out to participants. The order of the roll-out sequence is randomized, and by the end of the study all participants have received the intervention. "The design is particularly relevant where it is predicted that the intervention will do more good than harm … and/or where, for logistical, practical or financial reasons, it is impossible to deliver the intervention simultaneously to all participants" [(95) p.1].
Responsiveness to context (protocol development, study execution, analysis): M, L/M, M
Considerations: The sequential nature of the roll-out means that participants experience intervention periods of different lengths, which can be problematic as those who come in later have a shorter time to implement (see the sketch below). Temporal trends can influence the study results and make data analysis more complex (97). If a prospective process evaluation is embedded within the trial, then there could be potential to respond to identified contextual factors during the conduct of the study.
Examples from literature: (96, 98)

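To illustrate the roll-out structure, the sketch below generates a randomized stepped wedge schedule in which each cluster crosses from control (0) to intervention (1) at its assigned step and remains exposed thereafter. The cluster labels and numbers of steps are arbitrary assumptions; real designs also need to consider cluster size, balance, and power.

```python
import random

def stepped_wedge_schedule(clusters: list[str], n_steps: int, seed: int = 42):
    """Randomize the order in which clusters cross over to the intervention.

    Returns {cluster: [0/1 per period]}: period 0 is the shared baseline,
    and once a cluster switches to the intervention (1) it stays switched.
    Illustrative sketch only.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)  # randomized roll-out sequence
    schedule = {}
    for i, cluster in enumerate(order):
        # Assign clusters as evenly as possible across the steps;
        # the first period is the baseline for everyone.
        step = 1 + i * n_steps // len(order)
        schedule[cluster] = [1 if period >= step else 0
                             for period in range(n_steps + 1)]
    return schedule

for cluster, exposure in stepped_wedge_schedule(
        ["A", "B", "C", "D", "E", "F"], n_steps=3).items():
    print(cluster, exposure)  # later clusters have shorter exposure
```
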
Hybrid effectiveness-implementation trial
Description: Originally proposed in 2012 as a type of experimental trial design that could combine questions about the effectiveness of an intervention with questions about how best to implement it (25). Three types of hybrid design were proposed, ranging from a primary focus on testing intervention effectiveness whilst gathering some data about implementation (Type 1), to placing equal weight on testing both the intervention and implementation strategies (Type 2), to primarily testing an implementation strategy and implementation outcomes whilst collecting some information about the intervention (Type 3).
Responsiveness to context (protocol development, study execution, analysis): L/M, L, L/M
Considerations: The hybrid design approach has been widely adopted in the field of implementation science, and suggestions have been put forward for further development or expansion to address context (99). Initially the focus was on testing clinical interventions alongside implementation, although there are many examples of using the approach to evaluate implementation interventions. Ratings are likely to differ from Type 1 to Type 3: the greater the focus on implementation (Type 3), the greater the potential to respond to context if there is an embedded, prospective process evaluation. A recent reflection paper from the original developers of the hybrid design (100) suggests replacing the term 'design' with 'study' to acknowledge that the hybrid approach can be applied more broadly to non-trial research designs. This has the potential to change the level of responsiveness and adaptation to context.
Examples from literature: (101, 102)

Pragmatic randomized controlled trial
Description: In contrast to explanatory trials, which test the effectiveness of an intervention under optimal conditions, pragmatic trials are designed to evaluate effectiveness under real-world conditions such as the clinical practice setting (103). The PRECIS (pragmatic explanatory continuum indicator summary) tool and the updated PRECIS-2 were developed to help researchers design trials along the explanatory-to-pragmatic continuum, taking account of factors such as eligibility criteria, recruitment, setting, flexibility of delivery, and adherence (104) (see the sketch below).
Responsiveness to context (protocol development, study execution, analysis): L/M, L, L
Considerations: Frequently employed in implementation studies as they place an emphasis on external validity, asking not whether an implementation intervention can work but whether it works in routine clinical or health policy contexts (26). This can involve assessment of contextual factors at the study design stage to inform the implementation strategy, although there would not be an active response to contextual factors that emerge during the study. The pragmatic nature of the research is expected to make findings more generalizable; however, what works in one context rarely works exactly the same way in another, raising questions about the degree of generalizability (103).
Examples from literature: (105, 106)

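As a rough illustration of how PRECIS-2 is used at the design stage, the sketch below scores each design domain from 1 (very explanatory) to 5 (very pragmatic). The domain names follow the published tool, but the scores are made-up values for a hypothetical trial, and the summary average is our own simplification: PRECIS-2 itself presents domain scores on a wheel diagram rather than as a single number.

```python
# PRECIS-2 domains, each scored 1 (very explanatory) to 5 (very pragmatic).
# The scores below are made-up values for a hypothetical trial.
precis2_scores = {
    "eligibility": 5,             # who is selected to participate
    "recruitment": 4,             # how participants are recruited
    "setting": 5,                 # where the trial is done
    "organisation": 4,            # expertise/resources needed to deliver
    "flexibility: delivery": 4,   # how the intervention should be delivered
    "flexibility: adherence": 5,  # measures to enforce adherence
    "follow-up": 3,               # intensity of participant follow-up
    "primary outcome": 4,         # relevance to participants
    "primary analysis": 5,        # extent to which all data are included
}

mean_score = sum(precis2_scores.values()) / len(precis2_scores)
print(f"Mean pragmatism score: {mean_score:.1f} / 5")  # closer to 5 = pragmatic
```
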
Uncontrolled before and after study (pre-post design)
Description: Involves the measurement of specified outcomes before and after the delivery of the implementation intervention in the same study site or sites.
Responsiveness to context (protocol development, study execution, analysis): L/M, L, L
Considerations: Relatively simple to conduct, but observed changes cannot necessarily be attributed to the intervention, as other factors, including secular trends and unplanned changes, could be at play. Results therefore have to be interpreted with caution; there may be a tendency to over-estimate the effect size of the implementation intervention (107).
Examples from literature: (108, 109)

Controlled before and after study
Description: Similar to the pre-post design described above, but a control population as similar as possible to the intervention site is identified, and data are collected in both groups before and after implementation.
Responsiveness to context (protocol development, study execution, analysis): L/M, L, L
Considerations: It can be difficult to identify a comparable control group, and baseline starting points of the intervention and control groups may differ, meaning that some caution is required when interpreting results (see the sketch below).
Examples from literature: (110, 111)

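One common way to analyse a controlled before and after study is a difference-in-differences contrast, which nets out secular trends shared by the two groups. The sketch below shows the arithmetic on made-up adherence rates; all numbers are assumptions for illustration.

```python
# Difference-in-differences for a controlled before-after study.
# All numbers are made up for illustration.
intervention_pre, intervention_post = 0.42, 0.61  # e.g., guideline adherence
control_pre, control_post = 0.40, 0.47

# Change in each group over the same calendar period.
change_intervention = intervention_post - intervention_pre  # 0.19
change_control = control_post - control_pre                 # 0.07

# The control group's change estimates the secular trend; subtracting it
# isolates the effect attributable to the implementation intervention,
# assuming both groups would otherwise have trended in parallel.
did_estimate = change_intervention - change_control
print(f"Difference-in-differences estimate: {did_estimate:+.2f}")  # +0.12
```
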
Interrupted time series
Description: Attempts to detect whether an intervention has an effect that is significantly greater than the underlying secular trend. This involves collecting data on implementation outcomes at multiple time-points both pre- and post-intervention.
Responsiveness to context (protocol development, study execution, analysis): L, L, L
Considerations: Sufficient data points, including pre-intervention ones, need to be collected to undertake the analysis (see the sketch below). This can have implications for the timescale of data collection, and is easier where routine data are available for analysis.
Examples from literature: (112, 113)

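A standard analysis for an interrupted time series is segmented regression, which estimates a change in level and a change in slope at the intervention point. The sketch below fits such a model by ordinary least squares on simulated monthly data, so the sample sizes, effect sizes, and noise model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly outcome: 24 pre- and 24 post-intervention time points.
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post)                 # time since study start
post = (t >= n_pre).astype(float)             # 1 after the intervention
t_since = np.where(post == 1, t - n_pre, 0)   # time since intervention
y = 50 + 0.1 * t + 5.0 * post + 0.4 * t_since + rng.normal(0, 1, t.size)

# Segmented regression: y = b0 + b1*t + b2*post + b3*t_since + error,
# where b2 is the level change and b3 the slope change at the interruption.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_since])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"baseline level={b[0]:.1f}, secular slope={b[1]:.2f}")
print(f"level change={b[2]:.2f}, slope change={b[3]:.2f}")
```
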
Natural experiment
Description: The research team do not plan or direct the implementation intervention but rather observe outcomes of interest and their antecedents in the natural context (114).
Responsiveness to context (protocol development, study execution, analysis): L, L, L
Considerations: Useful for studying implementation occurring in a real-world context, but with limited potential to respond to contextual factors during the research.
Examples from literature: (115, 116)