Abstract
Background
Theories, models, and frameworks (TMFs) are central to the development and evaluation of implementation strategies supporting evidence-based practice (EBP). However, evidence on how and to what extent TMFs are used in implementation trials remains limited.
Purpose
This study aimed to examine the nature and extent of TMF use in implementation trials, identify which TMFs are most frequently employed, and explore temporal trends in their use.
Methods
A secondary analysis was conducted on 151 randomized trials of implementation strategies targeting EBP in nursing. Trials and their protocols were coded in NVivo 14 using a framework adapted from Painter’s continuum of theory use (2005) and Michie and Prestwich’s theory coding scheme (2010). The framework categorized theory use as “informed by,” “applied,” “tested,” or “built” theory. Descriptive statistics were calculated in R, and temporal trends in TMF use across categories were analyzed.
Results
Among the 151 trials, 54 (36%) reported using a TMF. TMFs were most often applied to guide implementation strategy design (28% of trials), followed by use to justify the study’s purpose, aims, or objectives (15%). Testing theory was infrequent (9%), and no trials reported refining or building theory. Classic theories, such as the theory of planned behavior and social cognitive theory, were the most frequently cited. No clear temporal trend was found in TMF use across the categories.
Conclusions
TMFs remain underutilized in implementation trials, with their application primarily limited to justifying study rationale or informing implementation strategy development. Greater emphasis on the testing and refinement of TMFs is recommended to advance implementation science.
Registration information
Review registration: PROSPERO CRD42019130446.
Keywords: implementation practice, implementation research, healthcare professionals, knowledge translation, theories
Implications
Practice: The limited application of theories, models, and frameworks (TMFs) in implementation trials highlights the need to better integrate TMFs across all phases of the implementation process to design and evaluate context-specific implementation strategies.
Policy: Policymakers should fund capacity-building initiatives and enforce guidelines for transparent reporting to strengthen TMF use in implementation practice and research.
Research: Future studies should focus on testing and refining theories using methodologies like realist evaluation and tools such as logic models to elucidate mechanisms of action and pathways to outcomes.
Introduction
Over the past decade, there has been growing recognition in implementation science of the importance of establishing a solid theoretical foundation for implementation of evidence-based practice (EBP) and the strategies that support it [1]. The use of theories, models, and frameworks (TMFs) has expanded, serving multiple purposes, including identifying contextual barriers and facilitators, informing the development and selection of implementation strategies (to address barriers and facilitators), guiding measurement and evaluation, understanding mechanisms of change, and enhancing the generalizability and transferability of findings [2–5]. Implementation studies now draw on TMFs from disciplines such as anthropology, organizational science, psychology, and sociology, in addition to those developed within the field of implementation science itself [1, 2]. Reviews have highlighted the proliferation of TMFs, with Strifler et al. [3] identifying 159 TMFs (updated to 182 following a 2020 review [6]) and Wang et al. [2] identifying 143 TMFs. To help clarify TMF distinctions, Nilsen [7] proposed a taxonomy identifying five types of TMFs: process models, determinant frameworks, classic theories, implementation theories, and evaluation frameworks (Table 1). An additional type of TMF, intervention development models, was introduced in this study to provide a more comprehensive list of TMFs in implementation science.
Table 1.
Six types of implementation science TMFs, adapted from Nilsen [7]
| Theory, model, or framework type | Description | Use | Examples |
|---|---|---|---|
| Process models | Models that outline a series of steps, stages, or phases in the process of translating research into practice, such as planning, executing, and sustaining interventions. | Provide a roadmap for guiding implementation efforts, offering practical guidance on how to implement interventions across diverse settings. | Knowledge-to-Action Model [8]; Model by French et al. [9]; Equity-based Framework for Implementation Research [10] |
| Determinant frameworks | Frameworks that consolidate constructs from multiple theories, aiming to understand and explain factors influencing implementation outcomes. Categorize determinants into domains without specifying mechanisms for behavior change. | Useful for assessing barriers and facilitators to implementation and guiding the development of context-specific strategies. | Consolidated Framework for Implementation Research [11, 12]; Health Equity Implementation Framework [13]; Theoretical Domains Framework [14] |
| Classic theories | Theories originating from disciplines such as psychology, sociology, or organizational behavior that explain individual, group, or organizational behavior. Often describe mechanisms of behavior change. | Useful to understand and explain how individuals or groups respond to interventions and inform intervention design. | Social Cognitive Theory [15]; Situated Change Theory [16]; Institutional Theory [17] |
| Implementation theories | Theories specifically developed or adapted from classic theories to understand, explain, and inform the implementation process. Describe precise mechanisms of change for one or more aspects of implementation. | Useful to understand and explain specific aspects of implementation, such as adoption, fidelity, or sustainability, to improve implementation efforts. | Absorptive Capacity [18]; Organizational Readiness [19]; Normalization Process Theory [20] |
| Intervention development models | Models specifically developed to inform the design, selection, and tailoring of intervention components, often drawing on behavioral science principles and focusing on behavior change techniques. | Support the design, development, and refinement of intervention components to address behavior change mechanisms and align intervention activities with theoretical constructs. | Behaviour Change Wheel [21]; Implementation Mapping [22] |
| Evaluation frameworks | Frameworks that specify key aspects of the implementation process that should be evaluated to determine the success of implementation efforts. Focus on outcomes related to the implementation process and its impact. | Guide the assessment of implementation outcomes, such as reach and sustainability, to measure the success of implementation strategies. | Reach, Effectiveness, Adoption, Implementation and Maintenance Framework [23]; Taxonomy of Implementation Outcomes [24] |
The use of TMFs in the development and evaluation of implementation strategies is particularly important as it provides a structured, evidence-informed approach to addressing the complexities of healthcare professional behavior change and health system transformation [21, 25]. Implementation strategies—defined as methods or techniques used to enhance the implementation and sustainability of change, such as programs or practices [26]—are essential for ensuring that EBPs are effectively integrated into healthcare settings, ultimately improving patient outcomes, professional and team practices, and system efficiency [26–31]. The use of TMFs is hypothesized to facilitate the identification of key factors influencing the adoption and sustainability of EBPs, and the selection, design, refinement, and adaptation of implementation strategies to address context-specific barriers and facilitators [21, 25]. Theory-based implementation strategies clarify why and how certain strategies are expected to work by making explicit their action mechanisms, enabling the empirical testing of their theoretical propositions, thus strengthening both the evidence base for implementation strategies and the theoretical models that underpin them [32]. This dynamic relationship illustrates a virtuous cycle between theory and strategy development, whereby theory informs the design and evaluation of implementation strategies, and empirical knowledge from studies of strategy implementation in real-world settings contributes to refining and advancing their theoretical foundations [33–35].
Despite these potential benefits, there is limited empirical evidence on the continuum of TMF use in implementation trials; that is, studies designed to evaluate the effectiveness of strategies aimed at promoting the adoption, integration, and sustainability of EBPs within real-world settings [36]. Unlike clinical trials, which test the efficacy of interventions under controlled conditions, implementation trials assess how well implementation strategies (e.g. training, audit and feedback, facilitation) support the uptake of established EBPs (i.e. typically tested in clinical trials and synthesized in a systematic review) in routine practice [36, 37]. Studies frequently cite TMFs without fully operationalizing their constructs or embedding them in the design, measurement, and analysis of implementation strategies [38]. The extent to which implementation trials targeting nursing practice have moved beyond simply citing or referencing a TMF to actively applying, testing, refining, or building theory remains underexplored [2]. This represents a missed opportunity to identify mechanisms of change, evaluate the effectiveness and adaptation of strategies, replicate findings, and develop generalizable and transferable knowledge to inform implementation and sustainability practice. Mechanisms of change refer to the specific processes or pathways through which implementation strategies influence clinical practice and achieve desired outcomes [2]. They explain how and why a strategy works by identifying the underlying factors, interactions, or mediators driving change, such as increasing knowledge, enhancing motivation, building self-efficacy, or strengthening social norms [2]. There is a clear need for a systematic investigation of how and which TMFs are used in implementation research.
Deeper insight into the nature of theoretical engagement in implementation trials can promote more transparent, theory-driven, and replicable approaches across the broader field of implementation science. Moreover, identifying trends in TMF use over time can reveal areas of progress, highlight opportunities for capacity building, and indicate whether such an investigation adds value to the current literature.
Study objective and research questions
The objective of this study was to systematically examine the use of TMFs in implementation trials, assess the extent and nature of TMF use, and determine how it has evolved over time. To achieve this, we conducted a secondary analysis of 151 randomized implementation trials drawn from a larger systematic review and meta-analysis of 204 randomized and non-randomized studies published between 1987 and 2023, primarily targeting nurses (approximately 10% involved other healthcare professionals), that investigated the effects of implementation strategies on clinical practice and patient outcomes [29]. The full systematic review methods and results are described elsewhere [29, 39]. For this secondary analysis, we focused exclusively on the randomized implementation trials (n = 151) to minimize heterogeneity attributable to study designs in the dataset. These trials aimed to improve provider compliance with a variety of EBPs (e.g. enhancing infection prevention through handwashing, reducing the use of physical restraints, and increasing clinical documentation) and were conducted across diverse healthcare settings, including hospitals, community health centers, and nursing homes.
This approach allowed us to address the following research questions (RQs):
What proportion of randomized implementation trials targeting nursing practice report using a TMF?
To what extent and in what ways are TMFs used in these trials?
Which TMFs are used most frequently?
How has the extent of TMF use changed over time?
Methods
Data extraction
Coding framework
The coding framework (see Table 2) used in this study was adapted from the approach developed by McIntyre et al. [40] and builds on Painter et al.’s continuum of theory use [41] and Michie and Prestwich’s theory coding scheme [32]. The framework consists of 15 predetermined items organized into four main categories of theory use: (A) Informed by theory: a TMF is identified but applied only minimally to study components; (B) Applied theory: theoretical constructs are incorporated into the study design, measurement, or analysis; (C) Tested theory: most or all constructs from a TMF are measured and tested, or multiple TMFs are compared within the study; and (D) Built theory: a new or hybrid TMF is developed by integrating and testing constructs from existing frameworks, often resulting in an expanded or revised TMF. Each item represents a specific dimension of theory use, ranging from the justification and post hoc use of theory to the development of new or expanded TMFs. This structured coding system enabled the identification, categorization, and quantification of how TMFs are applied in implementation trials.
Table 2.
Coding framework to assess the extent of the use of implementation science theories, models and frameworks (TMFs), guided by Painter et al.’s continuum of theory use [41] and Michie and Prestwich’s theory coding scheme [32]
| # | Theory coding scheme item | Item description |
|---|---|---|
| A. | Informed by theory | |
| A1 | Justification | Is a theory, model or framework (TMF) discussed to support or justify the purpose, aims, or objectives? |
| A2 | Post hoc | Is a TMF used post hoc to describe or explain the results or stimulate further discussion? |
| B. | Applied theory | |
| B1 | Integration with other theories | Are multiple TMFs integrated to inform the intervention or study? |
| B2 | Synergy of theories | Is there a discussion on how the integrated TMFs complement or enhance each other? |
| B3 | Assessment of barriers and facilitators | Was a TMF used to assess implementation barriers and facilitators? |
| B4 | Selection, development and tailoring of intervention components | Was a TMF used to select, develop and/or tailor intervention components? |
| B5 | Linkage of intervention components to constructs | Are intervention components directly linked to theoretical constructs? |
| B6 | Selection of study materials | Was a TMF applied in the development or selection of study materials (e.g. questionnaires)? |
| B7 | Evaluation | Was a TMF used to guide outcomes measurement or develop the evaluation strategy? |
| B8 | Other uses | Is a TMF used in any other way? |
| C. | Tested theory | |
| C1 | Predictions | Are theory-informed mechanisms of impact tested (e.g. mediation analyses, comparison of intervention/control groups)? |
| C2 | Theory support or refutation | Is support/refutation of the theory based on appropriate analyses? |
| D. | Built theory | |
| D1 | Theory development | Is a new TMF or revised/expanded TMF developed? |
| D2 | Additional constructs | Do the authors attempt to refine the TMF by adding/removing constructs to/from the TMF? |
| D3 | Interrelationships | Do the authors attempt to refine the TMF by specifying that the interrelationships between the constructs should be changed? |
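The 15-item framework in Table 2 lends itself to a simple binary representation (each item marked present or absent per study). The sketch below is purely illustrative: the item IDs follow Table 2, but the data layout, function name, and the example study record are our own assumptions, not the authors' software.

```python
# Illustrative sketch only: item IDs follow Table 2; the dictionary layout,
# helper function, and example record are hypothetical.
CODEBOOK = {
    "A": ["A1", "A2"],                                      # Informed by theory
    "B": ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8"],  # Applied theory
    "C": ["C1", "C2"],                                      # Tested theory
    "D": ["D1", "D2", "D3"],                                # Built theory
}
ALL_ITEMS = [item for items in CODEBOOK.values() for item in items]

def code_study(present: set[str]) -> dict[str, int]:
    """Binary-code one study: 1 if an item was explicitly reported, else 0."""
    unknown = present - set(ALL_ITEMS)
    if unknown:
        raise ValueError(f"Unknown item IDs: {sorted(unknown)}")
    return {item: int(item in present) for item in ALL_ITEMS}

# Hypothetical study that justified its aims with a TMF (A1) and used a TMF
# to develop intervention components (B4):
record = code_study({"A1", "B4"})
```

Recording every study as a fixed-length vector of 0/1 codes makes the later category scoring and frequency counts a matter of simple sums.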
Coding procedures
A total of 151 randomized controlled and cluster-randomized controlled trials were coded. To ensure the most comprehensive identification of TMF use, corresponding trial protocols were included where available to capture instances of TMF use not explicitly stated in the main trial reports. Studies were coded in NVivo 14 (released in 2023) qualitative data analysis software. A total of 10 team members (C.W., R.L., S.E.C., M.M., B.V., S.A.C., N.S., G.C., S.L., and G.F.) were trained to use the coding framework. The training process included a detailed orientation on each of the 15 items, practice coding exercises, and group calibration sessions to ensure a shared understanding of the coding criteria. Coders assessed whether each of the 15 items within the theory coding scheme was present (1) or absent (0) for each study. To maintain rigor, only explicit uses of TMFs were coded: TMF engagement had to be clearly described in the study or protocol, with no assumptions made about implied TMF use. We also coded instances where a set of guiding principles was employed instead of a formal TMF, as explicitly specified by the authors (e.g. an intervention guided by behavioral psychology principles). To ensure inter-rater reliability, 80% of the studies were double-coded independently by teams of two trained members using the 15-item codebook, while the remaining studies were single-coded by one of three team members (C.W., S.E.C., and S.L.). Discrepancies in coding were resolved through discussion and consensus to ensure accuracy and consistency.
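With binary item codes in hand, agreement between two independent coders on the double-coded studies can be quantified. The paper reports resolving discrepancies by consensus rather than a specific reliability statistic, so the percent agreement and Cohen's kappa below are a standard illustration under that assumption; the two rating vectors are hypothetical.

```python
# Sketch of quantifying agreement between two coders' binary ratings.
# Not the authors' reported procedure; a standard illustration only.
def percent_agreement(coder1: list[int], coder2: list[int]) -> float:
    """Share of items on which the two coders gave the same 0/1 rating."""
    matches = sum(a == b for a, b in zip(coder1, coder2))
    return matches / len(coder1)

def cohens_kappa(coder1: list[int], coder2: list[int]) -> float:
    """Chance-corrected agreement for two raters with binary ratings."""
    n = len(coder1)
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected chance agreement from each coder's marginal rate of 1s
    p1 = sum(coder1) / n
    p2 = sum(coder2) / n
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings by two coders on the 15 items of one study:
c1 = [1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
c2 = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```

Kappa discounts the agreement expected by chance, which matters here because most items are absent (0) in most studies, inflating raw percent agreement.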
Data synthesis and analysis
All descriptive analyses were conducted using R (version 4.4.1). Each study that explicitly reported TMF use was assigned a TMF use score by category: (A) informed by theory, (B) applied theory, (C) tested theory, and (D) built theory. These scores were calculated as the sum of all theory-use items in each category present within the study. The scoring ranges were as follows: 0–2 for (A) informed by theory; 0–6 for (B) applied theory; 0–2 for (C) tested theory; and 0–3 for (D) built theory, with higher scores indicating a greater degree of use in the respective category. Summary tables were generated to display the presence of theory-use items and the specific TMFs used in each study. The degree of theory use was described using descriptive statistics, with continuous variables (e.g. theory-use score) presented as median (M) and interquartile range (IQR), and categorical variables (e.g. presence of specific items) shown as frequencies and percentages [n (%)]. A line graph was used to depict temporal trends in theory use by category over time. ChatGPT-4o (OpenAI, 2025) was used to enhance the coherence and readability of some sections of the manuscript, as well as for proofreading purposes.
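The category scoring and summary statistics described above can be sketched as follows. The authors conducted their analyses in R; this Python sketch is an illustrative equivalent that simply sums coded items by their category prefix and summarizes scores as median and IQR (the function names and example scores are hypothetical, not the authors' code).

```python
import statistics

# Illustrative equivalent of the scoring described in the text; the item
# grouping follows the A-D category prefixes of the coding framework.
CATEGORIES = ("A", "B", "C", "D")

def category_scores(record: dict[str, int]) -> dict[str, int]:
    """Sum the present (1) items within each theory-use category for one study."""
    return {c: sum(v for k, v in record.items() if k.startswith(c))
            for c in CATEGORIES}

def median_iqr(scores: list[float]) -> tuple[float, float]:
    """Median and interquartile range (Q3 - Q1) of a list of scores."""
    q1, q2, q3 = statistics.quantiles(scores, n=4)
    return q2, q3 - q1

# Hypothetical applied-theory (B) scores for three studies:
b_scores = [1, 2, 4]
```

For example, a study coded with items A1, B4, and B7 present would receive category scores A = 1, B = 2, C = 0, D = 0, and `median_iqr` would then summarize those scores across the sample.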
Results
Characteristics of trials
Table 3 provides an overview of the characteristics of the 151 implementation trials included in the secondary analysis. Of these, 54 trials (36%) reported using theory according to one or more of the theory coding criteria (i.e. theory-reporting trials).
Table 3.
Characteristics of implementation trials
| Characteristics | Trials included in secondary analysis (N = 151), n (%) | Subset of trials reporting using theory (N = 54), n (%) |
|---|---|---|
| Year of publication | ||
| ≤2003 | 21 (14) | 6 (11) |
| 2004–13 | 48 (32) | 19 (35) |
| 2014–23 | 82 (54) | 29 (54) |
| Country income statusa | ||
| High income | 119 (80) | 45 (83) |
| Upper-middle income | 29 (19) | 8 (15) |
| Low-middle income | 2 (1) | 0 (0) |
| Low income | 1 (1) | 1 (2) |
| Study design | ||
| Cluster randomized trial | 95 (63) | 37 (69) |
| Individually randomized controlled trial | 51 (34) | 15 (28) |
| Stepped-wedge cluster-randomized trial | 5 (3) | 2 (4) |
| Study setting | ||
| Hospital (inpatient, outpatient or emergency department) | 99 (65) | 34 (63) |
| Nursing home | 21 (14) | 7 (14) |
| Primary care or general practice | 21 (14) | 6 (12) |
| Community-based healthcare | 11 (7) | 7 (12) |
| Implementation strategyb | ||
| Educational meetings | 119 | 37 |
| Educational materials | 111 | 40 |
| Clinical practice guidelines | 50 | 18 |
| Reminders | 34 | 19 |
| Audit and feedback | 29 | 19 |
| Educational outreach | 25 | 11 |
| Tailored interventions | 21 | 13 |
| Local opinion leaders | 19 | 8 |
| Patient-mediated interventions | 8 | 5 |
| Monitoring the performance of delivery of healthcare | 3 | 2 |
| Local consensus processes | 5 | 3 |
| Clinical incident reporting | 2 | 2 |
| Interprofessional education | 3 | 2 |
| Communities of practice | 2 | 1 |
| Managerial supervision | 2 | 0 |
| Routine patient-reported outcome measures | 2 | 2 |
| Continuous quality improvement | 1 | 1 |
| Educational games | 1 | 0 |
| Implementation outcome | ||
| Improved compliance to multiple evidence-based practices | 41 (27) | 15 (28) |
| Improved counseling/advice | 22 (15) | 4 (8) |
| Improved infection prevention and control practices | 17 (11) | 7 (14) |
| Improved physical assessment/evaluation | 16 (11) | 2 (4) |
| Improved medication administration | 9 (6) | 4 (7) |
| Improved documentation of care | 8 (5) | 6 (11) |
| Improved use of physical restraints | 8 (5) | 2 (4) |
| Improved coordination of care | 6 (4) | 1 (1) |
| Improved testing and screening | 6 (4) | 1 (1) |
| Improved symptom management | 5 (4) | 2 (4) |
| Improved use of care equipment | 5 (4) | 2 (4) |
| Improved prescription of medication | 3 (2) | 1 (1) |
| Other | 2 (1) | 1 (1) |
| Improved immunization | 2 (1) | 0 (0) |
| Improved incident reporting | 1 (1) | 0 (0) |
aCountry income status was determined by the World Bank country classification.
bPercentages were not calculated, as these interventions used multiple implementation strategies and X of the publications had more than one intervention group.
RQ1: What proportion of trials cited a TMF?
Among the 54 theory-reporting trials, 52 cited a specific TMF or set of theoretical principles, while two mentioned using theory but did not specify which TMF was used. Most trials (54%) were published between 2014 and 2023, with a similar distribution (54%) in the subset of theory-reporting trials. Most trials (80%) were conducted in high-income countries, with a comparable proportion (83%) of theory-reporting trials. Cluster randomized designs were used most frequently (63%), with theory-reporting trials showing a higher proportion (69%) using this design. The most common study setting across the full sample was hospitals (65%), followed by nursing homes (14%), primary care/general practice (14%), and community-based healthcare settings (7%), with a similar pattern observed in the theory-reporting subset. The most frequent implementation outcome was improved compliance with multiple EBPs (27%), followed by improved counseling/advice (15%) and improved infection prevention and control practices (11%), with theory-reporting trials following a similar trend.
RQ2: To what extent and in what ways are TMFs used in these trials?
Table 4 summarizes the frequency and nature of theory use across the 54 implementation trials that explicitly reported using a TMF. Overall, 15% of trials used a TMF to justify the study’s purpose, aims, or objectives (A1. Justification), while 3% used theory post hoc to explain results or stimulate discussion (A2. Post hoc). The application of theory was reported in 81% (n = 42) of theory-reporting trials. Notably, 64% of trials reported using a TMF to select, develop, or tailor intervention components (B4. Selection, development, and tailoring of intervention components). Other common uses included informing evaluation strategies (15%), guiding the development of study materials (14%), and linking intervention components to theoretical constructs (14%), while fewer studies used theory to assess barriers and facilitators (B3. Assessment of barriers and facilitators, 10%) or applied process models to guide the implementation process (8%). Some trials integrated multiple theories (B1. Integration with other theories, 15%) or discussed the synergy between them (B2. Synergy of theories, 14%). Testing theory was less common, reported in 9% (n = 13) of trials, with 3% testing theory-informed mechanisms of impact (e.g. mediation analyses) and 7% supporting or refuting theoretical assumptions through analysis. No trials reported building theory, including the development of new or hybrid theories, adding or removing constructs, or modifying interrelationships between theoretical constructs.
Table 4.
Frequency and nature of theory use in implementation trials
| Theory code | Definition | n studies (%) | Sample quote for how the authors used theory in relation to each code |
|---|---|---|---|
| A. Informed by theory, 25/151 (17%) | |||
| A1. Justification | Is theory discussed to support or justify the purpose, aims, or objectives? | 23 (15) | “Self-Efficacy Theory has been used to guide interventions for changing a wide range of health and clinician behaviors. It posits that the impetus for change resides in the individual’s efficacy expectations, that is, one’s ‘confidence in one’s ability to take action and persist in action’” [42]. |
| A2. Post hoc | Is theory used post hoc to describe or explain the results or stimulate further discussion? | 4 (3) | “The lack of observability may have affected the implementation phase in which the nurse actually used the intervention in practice but did not document the intervention because of no perceived benefit (Rogers, 2004). This coincides with Lewin’s theory that the drivers for change must outweigh the drivers resisting change (Doolin et al., 2011)” [43]. |
| B. Applied theory, 43/151 (28%) | | | |
| B1. Integration with other theories | Are multiple theories integrated to inform the intervention or study? | 8 (5) | “Based on the results of the pilot research, and using theory-based approaches to planned organizational change and learner-centered teaching, we revised our intervention” [44]. |
| B2. Synergy of theories | Is there a discussion on how the integrated theories complement or enhance each other? | 7 (5) | “The team and leaders-directed strategy was also aimed at addressing barriers at team level by focusing on social influence in groups and strengthening leadership. The unique contribution of this strategy was built upon the social learning theory (Bandura, 1986), social influence theory (Mittman et al., 1992), theory on team effectiveness (Shortell et al., 2004; West, 1990) and leadership theory (Øvretveit, 2004)” [45]. |
| B3. Assessment of barriers and facilitators | Was theory used to assess implementation barriers and facilitators? | 6 (4) | “Consistent with the theoretical framing of the study, our proposition was that a multi-level facilitation model would increase adherence to Clinical Practice Guideline (CPG) recommendations through assessment and response to barriers that related to views about the innovation/evidence (the CPG), the target group for implementation (the nursing staff on intervention wards) and contextual factors at the ward and hospital level.” (Bucknall, 2022) |
| B4. Selection, development and tailoring of intervention components | Is theory used to select, develop and/or tailor intervention components? | 34 (22) | “Intervention components were developed in accordance with the Behavior Change Wheel and strategies with a preferred emphasis on the following components of Behavior Change Wheel: education, persuasion, training, modeling, and enablement” [46]. |
| B5. Linkage of intervention components to constructs | Are intervention components directly linked to theoretical constructs? | 7 (5) | “The learning modules were based on the social cognitive theory (SCT) constructs of self-efficacy (confidence in providing physical activity counseling), behavioral capacity (knowledge and skills related to physical activity counseling), outcome expectations (anticipated outcomes of physical activity counseling), and situation and environment (perceived barriers to physical activity counseling). SCT is a versatile model of human behavior that highlights the capacity for self-regulation (Bandura, 1986). It has been used as the framework for a wide variety of interventions that require changing behavior, including training individuals to perform counseling behaviors (Larson, 1998). Each of the learning modules was developed to target one or more of the SCT constructs in an effort to improve physical activity counseling” [47]. |
| B6. Selection of study materials | Is theory applied in the development or selection of study materials (e.g. questionnaires)? | 7 (5) | “The motivation construct was measured based on attitudes toward developmental positioning (DP). Attitudes toward DP were measured using the instrument developed by Van der Pal et al, and revised and supplemented into the Korean version of the Agreement with Theory of Planned Behavior Statement instrument by Lee” [48]. |
| B7. Evaluation | Is theory used to guide outcomes measurement or develop the evaluation strategy? | 9 (6) | “Additionally, based on the Theory of Planned Behaviour, a series of questions assess the individual’s intent to change behaviour in response to the feedback, and if so, how” [49]. |
| B8. Other uses | Is theory used in any other way? | 4 (3) | “A four-step theoretical domains framework was used to develop the intervention (French et al., 2012). Step 1 identified target behaviors and capabilities related to nasogastric tubes (NGT) placement verification. Step 2 chose the theoretical framework most likely to elicit the process of learning effects. Step 3 designed the contents of the NGT practice program” [50]. |
| C. Tested theory, 13/151 (9%) | |||
| C1. Predictions | Are theory-informed mechanisms of impact tested (e.g. mediation analyses, comparison of intervention/control groups)? | 4 (3) | “Several socio-cognitive factors were assessed as potential predictors of guideline adherence, informed by the I-Change Model” [51]. |
| C2. Theory support or refutation | Is support/refutation of the theory based on appropriate analyses? | 10 (7) | “Our results are in line with theories from the behavioral sciences where social influence (Mittman et al., 1992), team effectiveness (Shortell et al., 2004; West, 1990), role modeling (Bandura, 1986) and leadership (Øvretveit, 2004) are considered relevant to successfully changing behaviour” [45]. |
| D. Built theory, n = 0/151 (0%) | |||
| D1. Theory development | Is a new theory or revised/expanded theory developed? | 0 (0) | NA |
| D2. Additional constructs | Do the authors attempt to refine the theory by adding/removing constructs to/from the theory? | 0 (0) | NA |
| D3. Interrelationships | Do the authors attempt to refine the theory by specifying that the interrelationships between the constructs should be changed? | 0 (0) | NA |
RQ3: Which TMFs are used most frequently?
Table 5 presents the frequency of use of specific TMFs across the 54 trials that explicitly reported using theory. Of the six types of TMFs noted in Table 1, classic theories were the most used, with 18 studies employing TMFs within this category. Prominent examples include the theory of planned behavior [52], Knowles’ adult learning theory [53], and social cognitive theory [54], highlighting trialists’ reliance on well-established theoretical foundations to understand and influence behavior change and learning processes in implementation interventions. Evaluation frameworks were the second most frequently used type, with nine studies referencing TMFs in this category, such as Kirkpatrick’s four levels of evaluation [55] and the Donabedian structure-process-outcome model [56], which inform the assessment of intervention outcomes. A comparable number of studies (n = 8) utilized determinant frameworks, with notable examples including Promoting Action on Research Implementation in Health Services [57] and the consolidated framework for implementation research [58]. Process models (n = 8) and intervention development models (n = 5) were less frequently employed, with examples such as Grol’s implementation model [11] and the behavior change wheel [59], respectively. Three of the 54 trials were guided by broad principles rather than a specific TMF (e.g. behavioral psychology principles). Interestingly, no trials utilized TMFs classified as implementation theories (e.g. absorptive capacity [18]; organizational readiness [19]; normalization process theory [20]).
Table 5.
Frequency of theories, models and frameworks used across the 54 theory-reporting implementation trials targeting nursing practice
| Theory, model or framework used | N of studies |
|---|---|
| Promoting action on research implementation in health services (PARiHS) | n = 6 [30–65] |
| Theory of planned behavior | n = 4 [48, 60, 61, 68] |
| Knowles’ adult learning theory | n = 4 [67–70] |
| Social cognitive theory | n = 3 [42, 45, 47] |
| Implementation model of Grol | n = 3 [58, 71, 72] |
| Dual task theory of human performance | n = 2 [73, 74] |
| Kirkpatrick’s four levels of evaluation | n = 2 [75, 76] |
| Problem-based learning approach | n = 2 [77, 78] |
| PRECEDE/PROCEED model | n = 2 [73, 79] |
| Learner-centered teaching | n = 2 [44, 80] |
| Behavior change wheel | n = 1 [46] |
| Behavioral psychology principles | n = 1 [81] |
| Blended learning model | n = 1 [82] |
| Cascade model | n = 1 [83] |
| CATCH model | n = 1 [68] |
| Coiera’s communication information continuum | n = 1 [84] |
| Consolidated framework for implementation research | n = 1 [85] |
| Constructive learning theory | n = 1 [78] |
| Donabedian’s structure-process-outcome model | n = 1 [76] |
| Glasziou and Haynes’ pathway | n = 1 [86] |
| Health belief model | n = 1 [87] |
| I-Change model | n = 1 [51] |
| Information-motivation-behavioral skills model | n = 1 [48] |
| Theoretical model on effective knowledge mobilization | n = 1 [60] |
| Leadership theory | n = 1 [45] |
| Lewin’s theory | n = 1 [43] |
| Mayer’s 10 principles of multimedia design | n = 1 [88] |
| Mezirow’s transformative learning theory | n = 1 [76] |
| Operational model for evidence-based practices | n = 1 [89] |
| Planned organization change | n = 1 [44] |
| Principles from educational outreach | n = 1 [83] |
| Pronovost’s 4E | n = 1 [90, 91] |
| Quality improvement theory | n = 1 [92] |
| Self-directed learning theories | n = 1 [78] |
| Social psychology principles | n = 1 [93] |
| Social influence theory | n = 1 [45] |
| Stages of change | n = 1 [80] |
| Theory on team effectiveness | n = 1 [45] |
| Translation research model | n = 1 [94] |
| UK Medical Research Council’s framework | n = 1 [95] |
| Web-based learning design model | n = 1 [96] |

Totals by theoretical approach: process model, 9; determinant framework, 8; classic theory, 18; implementation theory, 0; intervention model, 5; evaluation framework, 9.
RQ4: How has the extent of TMF use changed over time?
Figure 1 illustrates the median theory use score by category over time. The median score was highest for “B. Applied theory” (median = 1.0, IQR: 1.0–2.0; possible range: 0–6), followed by “A. Informed by theory” (median = 0.5, IQR: 0–1.0; possible range: 0–2), and then “C. Tested theory” (median = 0, IQR: 0–0.75; possible range: 0–2). Among the 54 trials that employed a theory, no clear temporal trend was apparent across theory use categories.
Figure 1.
Median theory use score by category over time.
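The category summaries above (medians with interquartile ranges) can be sketched as follows. This is a minimal illustration only: the study's analysis was conducted in R, Python is used here as a stand-in, and the score vectors below are hypothetical examples constructed for demonstration, not the actual per-trial data.

```python
import statistics

# Hypothetical theory use score vectors, one score per trial.
# These values are illustrative only -- they are NOT the study's data.
scores = {
    "A. Informed by theory": [0, 0, 1, 1, 2, 0, 1, 0],  # possible range: 0-2
    "B. Applied theory":     [1, 2, 1, 0, 3, 1, 2, 1],  # possible range: 0-6
    "C. Tested theory":      [0, 0, 0, 1, 0, 0, 2, 0],  # possible range: 0-2
}

def summarize(values):
    """Return the median and interquartile range (Q1, Q3) of a score vector."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    return {"median": q2, "iqr": (q1, q3)}

for category, values in scores.items():
    s = summarize(values)
    print(f"{category}: median = {s['median']}, IQR: {s['iqr'][0]}-{s['iqr'][1]}")
```

Note that `statistics.quantiles` uses the “exclusive” quartile method by default, so quartile values for small samples can differ slightly from those produced by other software (e.g. R’s default quantile type).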
Discussion
This study offers a comprehensive analysis of the use of implementation science TMFs across 151 randomized implementation trials aimed at supporting the uptake and promoting the use of EBPs among nurses. The analysis was guided by Painter et al.’s continuum of theory use [41], Michie and Prestwich’s theory coding scheme [32], and additional guidelines from McIntyre et al. [40]. This integrated approach provided a robust and systematic framework to assess the extent and nature of TMF use, allowing for a nuanced examination of how TMFs are applied, tested, and built within the field of implementation science. The findings reveal that only 54 out of 151 trials (36%) reported using at least one TMF, with 52 (34%) citing a specific TMF or set of theoretical principles.
The most frequently cited TMFs were classic theories, particularly those originating from the fields of education and learning, behavior and behavior change, and organizational leadership. In contrast, more contemporary approaches, such as determinant frameworks, implementation theories, and evaluation frameworks, were either rarely cited or not used at all. The observed reliance on classic, primarily individual-level behavior theories (e.g. theory of planned behavior and social cognitive theory) has important implications. While these theories offer well-validated constructs, they focus mainly on cognitive determinants of individual action and may insufficiently capture organizational, system, and equity-related influences that shape implementation success [2, 3, 6, 7]. Continued dependence on these TMFs risks oversimplifying multilevel implementation challenges and may impede theoretical innovation in the field. Broader uptake of determinant frameworks, implementation theories, and integrative or hybrid models could enable more comprehensive, context-sensitive strategies and accelerate theoretical advancement [97].
In cases where TMFs were used, their application was often partial, with most studies employing them to inform the development of implementation strategies or to justify the study’s purpose, rather than to test or build theory. This is not surprising because the primary focus of implementation trials has traditionally been on addressing practical challenges in applying evidence-based interventions, prioritizing problem-solving over advancing theoretical understanding through the testing or development of theory. Several structural factors help explain why TMFs are frequently cited yet seldom operationalized or tested. Journal word limits and reporting conventions often encourage concise theoretical justifications without the space-intensive detail needed to map constructs onto strategy components or analytic models [98]. Incentive structures likewise reward pragmatic, easily measurable outcomes—such as changes in adherence or audit metrics—more than mediation analyses that probe mechanisms [38, 99]. Researchers also face practical barriers: validated instruments for many TMF constructs are scarce, and collecting multilevel data (individual, team, and organizational) can be resource-intensive in applied settings [100, 101]. Together, these constraints make comprehensive theory testing less feasible, even when investigators recognize its scientific value.
The underuse of TMFs represents a missed opportunity to assess facilitators and barriers to change, make explicit and deepen understanding of mechanisms of change, refine the design of theory-based and context-specific implementation strategies, enable replication, and improve the generalizability and transferability of findings. These results are consistent with prior research in implementation science published before 2017 [2, 38, 40, 102], which observed similar degrees of theory use (23%–47%). That research also highlights that TMFs are often cited, but not fully embedded in the design, analysis, or interpretation of implementation studies. To move the implementation science field forward, there is a clear need for capacity-building initiatives to strengthen researchers’ and practitioners’ abilities to apply, test, and build theory in implementation science and practice. Furthermore, there is a need for complete and transparent reporting of TMF use, supported by reporting guidelines such as Proctor et al.’s recommendations for specifying and reporting implementation strategies [26] and the Standards for Reporting Implementation Studies (StaRI) guidelines [103].
When theory was used, it was most frequently applied to inform the development of implementation strategies (B criteria, Table 4), followed by its use to justify the purpose of the study (A criteria). However, the more complex uses of theory—that is, those related to testing theory (C criteria) and building new theory (D criteria)—were rarely observed. This aligns with findings from McIntyre et al. [40], who noted that process evaluation studies are more likely to emphasize the application of theory rather than its testing or development. McIntyre et al. further argued that the domain of pure science prioritizes theory testing, while the field of implementation science focuses on the application of EBPs, which naturally shifts the emphasis toward applying TMFs rather than building or testing them [40]. Another contributing factor is that many researchers may not be familiar with explicitly identifying or analyzing the mechanisms of action underlying the interventions they test. This points to a pressing need for enhanced education and training in methodologies that support the rigorous use of theory, such as realist evaluation and program evaluation. Tools like logic models, program impact theories, and process theories can help researchers articulate and assess the pathways through which interventions achieve their outcomes, ultimately advancing both the theoretical and practical dimensions of implementation science.
McIntyre et al.’s review on the use of TMFs in process evaluations found different patterns [40], with greater use of implementation theories and evaluation frameworks. This distinction is logical, as process evaluations inherently focus on understanding the mechanisms, context, and implementation processes that influence the success or failure of an intervention. Implementation theories provide specific guidance on how change occurs during implementation, while evaluation frameworks offer structured approaches to assess process-related outcomes such as fidelity, reach, and sustainability. This difference highlights how the purpose of the study (process evaluation vs. implementation trial) shapes the selection and application of TMFs.
The findings suggest that the reported use of theory fluctuated across the four theory domains. Only two studies achieved the maximum theory use score, each in one of two domains: “A. Informed by theory” [74] and “C. Testing theory” [48], though these domains contained only two items each. The median scores for all four domains were 1 or lower, indicating that theory was used sparingly across the continuum. These results suggest that TMFs were often applied to a limited segment of the study, rather than being fully embedded across multiple study phases, such as the development of implementation strategies, evaluation of outcomes, and interpretation of findings. The results also revealed that most studies relied on a single TMF to inform their research; because the applicability of a TMF is often limited to specific phases of the implementation process, this narrow scope of application may help explain the observed patterns in TMF use.
A key strength of this study was the inclusion of study protocols, which allowed for the capture of theory use not explicitly reported in study findings. This study has limitations that may affect the comprehensiveness and interpretation of its findings. Implicit theory use was not considered, meaning some studies may have used theory without explicitly reporting it, potentially underestimating theory use. Reporting constraints, such as word limits, may have also contributed to this underreporting, highlighting the need for a reporting guideline or checklist. Theory-informed studies preceding the trial (e.g. studies of barriers and facilitators to implementation) might have been published prior to the trial paper but were not captured in this study. Coder expertise varied, with some coders having extensive experience in implementation science while others had less familiarity, potentially affecting coding consistency. To mitigate this, consensus discussions, training, and review of coding files were conducted to ensure reliability. Finally, the quality of theory use was not assessed. Although we quantified whether a TMF was employed, we did not appraise how well each study operationalized its chosen TMF. Future investigations should pair extent-of-use metrics with a structured quality appraisal to capture fidelity and coherence of theory application.
Conclusion
This study provides a critical analysis of how TMFs were used in 151 randomized implementation trials targeting nurses, across a range of clinical settings and contexts. Despite the central role of TMFs in guiding implementation research, only 36% of trials reported their use. The most frequently cited TMFs were PARiHS, the theory of planned behavior, and adult learning theory, reflecting a reliance on classic and well-established theories over more contemporary implementation theories or evaluation frameworks emerging from the field of implementation science. TMFs were primarily used to justify study rationale or to inform the development of implementation strategies, but far less frequently to guide evaluation, test theoretical assumptions, or build new theory. Notably, the median scores for theory use across all four theory use domains were low, suggesting that most studies engaged with TMFs in a partial, fragmented fashion. This underuse represents a missed opportunity to leverage TMFs to strengthen the design, evaluation, and generalizability of implementation strategies. Future research should focus on testing, refining, and building implementation theories and provide further guidance on how to integrate theory for this purpose.
Acknowledgements
We wish to acknowledge Nikolas Argiropoulos for the technical support provided to C.W. during data analysis.
Contributor Information
Charlene Weight, Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montréal, Canada; Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada.
Rachael Laritz, Centre for Nursing Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada.
Simonne E Collins, IWK Health, Halifax, NS, Canada; Faculty of Health, School of Nursing, Dalhousie University, Halifax, NS, Canada; School of Psychological Sciences, Monash University, Clayton, VIC, Australia; Murdoch Children’s Research Institute, Parkville, VIC, Australia.
Meagan Mooney, Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montréal, Canada; Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada.
Billy Vinette, School of Physical and Occupational Therapy, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC, Canada.
Sonia A Castiglione, Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montréal, Canada.
Nicola Straiton, Nursing Research Institute, St Vincent’s Health Network Sydney, St Vincent’s Hospital Melbourne and the Australian Catholic University, Darlinghurst, NSW, Australia.
Gabrielle Chicoine, Research Centre of the Centre Hospitalier de L’Université de Montréal, Montreal, QC, Canada; Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, Toronto, ON, Canada.
Shuang Liang, School of Population Health, UNSW Sydney, Kensington, NSW, Australia.
Justin Presseau, Centre for Implementation Research, Methodological and Implementation Research Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada; School of Epidemiology and Public Health and Department of Psychology, University of Ottawa, Ottawa, ON, Canada.
Kristin Konnyu, Aberdeen Centre for Evaluation, School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, United Kingdom.
Marie-Pierre Gagnon, CHU de Québec-Université Laval Research Centre, Québec City, QC, Canada; Faculty of Nursing, Université Laval, Pavillon Ferdinand-Vandry, Québec City, QC, Canada.
Sonia Semenic, Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montréal, Canada.
Sandy Middleton, Nursing Research Institute, St Vincent’s Health Network Sydney, St Vincent’s Hospital Melbourne and the Australian Catholic University, Darlinghurst, NSW, Australia; School of Nursing, Midwifery and Paramedicine, Australian Catholic University, North Sydney, NSW, Australia.
Natalie Taylor, School of Population Health, UNSW Sydney, Kensington, NSW, Australia.
Vasiliki Bessy Bitzas, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada.
Catherine Hupé, Département des Sciences Infirmières, Université du Québec à Rimouski, Rimouski, QC, Canada.
Nathalie Folch, Centre Hospitalier de l’Université de Montréal, Montréal, QC, Canada.
Brigitte Vachon, École de Réadaptation, Université de Montréal, Montréal, QC, Canada; Centre de Recherche du CIUSSS de l’Est de Montréal et de l’Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada.
Geneviève Rouleau, Département des sciences de la santé, Université du Québec en Outaouais, Montréal, QC, Canada.
Andrea Patey, IWK Health, Halifax, NS, Canada.
Nicola McCleary, Child Health Evaluative Sciences Program, SickKids Research Institute, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada.
Joshua Porat-Dahlerbruch, School of Nursing, University of Pittsburgh, Pittsburgh, PA, United States.
Guillaume Fontaine, Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, Montréal, Canada; Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada; Centre for Nursing Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, Montréal, QC, Canada; Centre for Implementation Research, Methodological and Implementation Research Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada; Kirby Institute, UNSW Sydney, Kensington, NSW, Australia.
Funding Source
This project is supported by a project grant from the Quebec Health Research Fund (FRQ-S) (grant number: 347409) and the Réseau de recherche en interventions en sciences infirmières du Québec/Quebec Network on Nursing Intervention Research (RRISIQ) (grant number: NA). GF is supported by a Junior 1 Research Scholar Award from the Fonds de recherche du Québec—Santé (FRQS). The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Conflicts of Interest
The authors declare that they have no competing interests.
Human Rights
This article does not contain any studies with human participants performed by any of the authors.
Informed Consent
This study does not involve human participants and informed consent was therefore not required.
Welfare of Animals
This article does not contain any studies with animals performed by any of the authors.
Transparency Statements
Study Registration: The study was preregistered at PROSPERO (CRD42019130446). Analytic Plan Pre-Registration: The analysis plan was registered prior to beginning data collection at PROSPERO (CRD42019130446). Analytic Code Availability: The analytic code used to conduct the analyses presented in this study is not available in a public archive; it may be made available by emailing the corresponding author. Materials Availability: All materials used to conduct the study are available by emailing the corresponding author.
Data Availability
De-identified data from this study are not available in a public archive. De-identified data from this study will be made available by emailing the corresponding author.
References
- 1. Nilsen P. Making sense of implementation theories, models, and frameworks. In: Albers B, Shlonsky A, Mildon R (eds.), Implementation Science 3.0. Cham, Switzerland: Springer, 2020, 53–80. [Google Scholar]
- 2. Wang Y, Wong EL-Y, Nilsen P et al. A scoping review of implementation science theories, models, and frameworks—an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci 2023;18:43. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Strifler L, Cardoso R, McGowan J et al. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. J Clin Epidemiol 2018;100:92–102. [DOI] [PubMed] [Google Scholar]
- 4. Tabak RG, Khoong EC, Chambers DA et al. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Walsh-Bailey C, Tsai E, Tabak RG et al. A scoping review of de-implementation frameworks and models. Implement Sci 2021;16:100–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Esmail R, Hanson HM, Holroyd-Leduc J et al. A scoping review of full-spectrum knowledge translation theories, models, and frameworks. Implement Sci 2020;15:1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci 2015;10:53. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Graham ID, Logan J, Harrison MB et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26:13–24. 10.1002/chp.47 [DOI] [PubMed] [Google Scholar]
- 9. French SD, Green SE, O’Connor DA et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the theoretical domains framework. Implement Sci 2012;7:38. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Eslava-Schmalbach J, Garzon-Orjuela N, Elias V et al. Conceptual framework of equity-focused implementation research for health programs (EquIR). Int J Equity Health 2019;18:80. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Damschroder LJ, Aron DC, Keith RE et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Damschroder LJ, Reardon CM, Widerquist MAO et al. The updated consolidated framework for implementation research based on user feedback. Implement Sci 2022;17:75. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Woodward EN, Matthieu MM, Uchendu US et al. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implement Sci 2019;14:26. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Michie S, Johnston M, Abraham C et al. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care 2005;14:26–33. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 1977;84:191–215. [DOI] [PubMed] [Google Scholar]
- 16. Orlikowski W. Improvising organizational transformation over time: a situated change perspective. Inform Syst Res 1996;7:63–92. [Google Scholar]
- 17. Powell WW, DiMaggio PJ (eds.). The New Institutionalism and Organizational Analysis. Chicago, IL, USA: University of Chicago Press, 1991. [Google Scholar]
- 18. Zahra SA, George G. Absorptive capacity: a review, reconceptualization and extension. Acad Manage Rev 2002;27:185–203. [Google Scholar]
- 19. Weiner BJ. A theory of organizational readiness for change. Implement Sci 2009;4:67. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. May C, Finch T. Implementing, embedding and integrating practices: an outline of normalization process theory. Sociology 2009;43:535–54. [Google Scholar]
- 21. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 2011;6:42. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22. Fernandez ME, Ten Hoor GA, van Lieshout S et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health 2019;7:158. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 1999;89:1322–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Proctor E, Silmere H, Raghavan R et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011;38:65–76. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Michie S, Johnston M, Francis J et al. From theory to intervention: mapping theoretically derived behavioural determinants to behaviour change techniques. Appl Psychol 2008;57:660–80. [Google Scholar]
- 26. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci 2013;8:1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Effective Practice and Organisation of Care (EPOC). EPOC Taxonomy. https://epoc.cochrane.org/epoc-taxonomy (4 November 2024, date last accessed).
- 28. Powell BJ, Waltz TJ, Chinman MJ et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci 2015;10:1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Fontaine G, Vinette B, Weight C et al. Effects of implementation strategies on nursing practice and patient outcomes: a comprehensive systematic review and meta-analysis. Implement Sci 2024;19:68. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Patey AM, Fontaine G, Francis JJ et al. Healthcare professional behaviour: health impact, prevalence of evidence-based behaviours, correlates and interventions. Psychol Health 2023;38:766–94. [DOI] [PubMed] [Google Scholar]
- 31. Siegel JT, Navarro MA, Tan CN et al. Attitude–behavior consistency, the principle of compatibility, and organ donation: a classic innovation. Health Psychol 2014;33:1084–91. [DOI] [PubMed] [Google Scholar]
- 32. Michie S, Prestwich A. Are interventions theory-based? Development of a theory coding scheme. Health Psychol 2010;29:1–8. [DOI] [PubMed] [Google Scholar]
- 33. Michie S, Carey RN, Johnston M et al. From theory-inspired to theory-based interventions: a protocol for developing and testing a methodology for linking behaviour change techniques to theoretical mechanisms of action. Ann Behav Med 2018;52:501–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Davidoff F, Dixon-Woods M, Leviton L et al. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228–38. [Google Scholar]
- 35. Garnett C, Crane D, Brown J et al. Reported theory use by digital interventions for hazardous and harmful alcohol consumption, and association with effectiveness: meta-Regression. J Med Internet Res 2018;20:e69. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Wolfenden L, Foy R, Presseau J et al. Designing and undertaking randomised implementation trials: guide for researchers. BMJ 2021;372:m3721. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Ivers N, Yogasingam S, Lacroix M et al. Audit and feedback: effects on professional practice. Cochrane Database Syst Rev 2025;3:CD000259. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci 2010;5:14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39. Fontaine G, Cossette S, Maheu-Cadotte M-A et al. Effect of implementation interventions on nurses’ behaviour in clinical practice: a systematic review, meta-analysis and meta-regression protocol. Syst Rev 2019;8:305–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40. McIntyre SA, Francis JJ, Gould NJ et al. The use of theory in process evaluations conducted alongside randomized trials of implementation interventions: a systematic review. Transl Behav Med 2020;10:168–78. [DOI] [PubMed] [Google Scholar]
- 41. Painter JE, Borba CP, Hynes M et al. The use of theory in health behavior research from 2000 to 2005: a systematic review. Ann Behav Med 2008;35:358–62. [DOI] [PubMed] [Google Scholar]
- 42. Curtis JR, Nielsen EL, Treece PD et al. Effect of a quality-improvement intervention on end-of-life care in the intensive care unit: a randomized trial. Am J Respir Crit Care Med 2011;183:348–55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Cortez S, Dietrich MS, Wells N. Measuring clinical decision support influence on evidence-based nursing practice. Oncol Nurs Forum 2016;43:E170–7. [DOI] [PubMed] [Google Scholar]
- 44. Evans D, Mellins R, Lobach K et al. Improving care for minority children with asthma: professional education in public health clinics. Pediatrics 1997;99:157–64. [DOI] [PubMed] [Google Scholar]
- 45. Huis A, Schoonhoven L, Grol R et al. Impact of a team and leaders-directed strategy to improve nurses’ adherence to hand hygiene guidelines: a cluster randomised trial. Int J Nurs Stud 2013;50:464–74. [DOI] [PubMed] [Google Scholar]
- 46. Hasnain MG, Levi CR, Ryan A et al. Can a multicomponent multidisciplinary implementation package change physicians’ and nurses’ perceptions and practices regarding thrombolysis for acute ischemic stroke? An exploratory analysis of a cluster-randomized trial. Implement Sci 2019;14:98. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47. Karvinen KH, Balneaves L, Courneya KS et al. Evaluation of online learning modules for improving physical activity counseling skills, practices, and knowledge of oncology nurses. Oncol Nurs Forum 2017;44:729–38. [DOI] [PubMed] [Google Scholar]
- 48. Yun EJ, Kim TI. Development and effectiveness of an educational program on developmental positioning for neonatal intensive care unit nurses in South Korea: a quasi-experimental study. Child Health Nurs Res 2022;28:70–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49. Hutchinson AM, Sales AE, Brotto V et al. Implementation of an audit with feedback knowledge translation intervention to promote medication error reporting in health care: a protocol. Implement Sci 2015;10:70. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. Yang FH, Lin FY, Hwu YJ. The feasibility study of a revised standard care procedure on the capacity of nasogastric tube placement verification among critical care nurses. J Nurs Res 2019;27:e31. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51. de Ruijter D, Smit ES, de Vries H et al. Web-based computer-tailoring for practice nurses aimed to improve smoking cessation guideline adherence: a study protocol for a randomized controlled effectiveness trial. Contemp Clin Trials 2016;48:125–32. [DOI] [PubMed] [Google Scholar]
- 52. Fishbein M, Ajzen I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley, 1977.
- 53. Knowles M. The Adult Learner: A Neglected Species. Houston, TX: Gulf Publishing Company, 1988. [Google Scholar]
- 54. Bandura A. Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall, 1986, p. 2. [Google Scholar]
- 55. Kirkpatrick DL, Kirkpatrick JD. Implementing the Four Levels: A Practical Guide for Effective Evaluation of Training Programs. San Francisco, CA: Berrett-Koehler Publishers, 2007. [Google Scholar]
- 56. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q 1966;44:166–206.
- 57. Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual 2004;19:297–304.
- 58. Van Gaal BG, Schoonhoven L, Mintjes JA et al. The SAFE or SORRY? Programme. Part II: effect on preventive care. Int J Nurs Stud 2011;48:1049–57.
- 59. Michie S, Van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 2011;6:1–12.
- 60. Blanco-Mavillard I, Bennasar-Veny M, De Pedro-Gomez JE et al. Implementation of a knowledge mobilization model to prevent peripheral venous catheter-related adverse events: PREBACP study-a multicenter cluster-randomized trial protocol. Implement Sci 2018;13:100.
- 61. Hutchinson AM, Brotto V, Chapman A et al. Use of an audit with feedback implementation strategy to promote medication error reporting by nurses. J Clin Nurs 2020;29:4180–93.
- 62. Bucknall TK, Considine J, Harvey G et al. Prioritising responses of nurses to deteriorating patient observations (PRONTO): a pragmatic cluster randomized controlled trial evaluating the effectiveness of a facilitation intervention on recognition and response to clinical deterioration. BMJ Qual Saf 2022;31:818–30.
- 63. Forberg U, Unbeck M, Wallin L et al. Effects of computer reminders on complications of peripheral venous catheters and nurses’ adherence to a guideline in paediatric care—a cluster randomised study. Implement Sci 2016;11:10.
- 64. Johnston CC, Gagnon A, Rennick J et al. One-on-one coaching to improve pain assessment and management practices of pediatric nurses. J Pediatr Nurs 2007;22:467–78.
- 65. Snelgrove-Clarke E, Davies B, Flowerdew G et al. Implementing a fetal health surveillance guideline in clinical practice: a pragmatic randomized controlled trial of action learning. Worldviews Evid Based Nurs 2015;12:281–8.
- 66. Haut A, Kopke S, Gerlach A et al. Evaluation of an evidence-based guidance on the reduction of physical restraints in nursing homes: a cluster-randomised controlled trial [ISRCTN34974819]. BMC Geriatr 2009;9:42.
- 67. Collins D, Inglin L, Laatikainen T et al. Evaluation and pilot implementation of essential interventions for the management of hypertension and prevention of cardiovascular diseases in primary health care in the Republic of Tajikistan. BMC Health Serv Res 2021;21:472.
- 68. Maruyama N, Kataoka Y, Horiuchi S. Effects of e-learning on the support of midwives and nurses to perinatal women suffering from intimate partner violence: a randomized controlled trial. Jpn J Nurs Sci 2022;19:e12464.
- 69. Ghazali SA, Abdullah KL, Moy FM et al. The impact of adult trauma triage training on decision-making skills and accuracy of triage decision at emergency departments in Malaysia: a randomized control trial. Int Emerg Nurs 2020;51:100889.
- 70. Passos I, Padoveze MC, Zem-Mascarenhas SH et al. An innovative strategy for nursing training on standard and transmission-based precautions in primary health care: a randomized controlled trial. Am J Infect Control 2022;50:657–62.
- 71. Noome M, Dijkstra BM, van Leeuwen E et al. Effectiveness of supporting intensive care units on implementing the guideline ‘end-of-life care in the intensive care unit, nursing care’: a cluster randomized controlled trial. J Adv Nurs 2017;73:1339–54.
- 72. Reynolds SS, Woltz P, Keating E et al. Results of the CHlorhexidine gluconate bathing implementation intervention to improve evidence-based nursing practices for prevention of central line associated bloodstream infections study (CHanGing BathS): a stepped wedge cluster randomized trial. Implement Sci 2021;16:45.
- 73. McDonald MV, Pezzin LE, Feldman PH et al. Can just-in-time, evidence-based “reminders” improve pain management among home health care nurses and their patients? J Pain Symptom Manage 2005;29:474–88.
- 74. Murtaugh CM, Pezzin LE, McDonald MV et al. Just-in-time evidence-based e-mail “reminders” in home health care: impact on nurse practices. Health Serv Res 2005;40:849–64.
- 75. Esche CA, Warren JI, Woods AB et al. Traditional classroom education versus computer-based learning: how nurses learn about pressure ulcers. J Nurses Prof Dev 2015;31:21–7.
- 76. Liu WI, Edwards H, Courtney M. Case management educational intervention with public health nurses: cluster randomized controlled trial. J Adv Nurs 2010;66:2234–44.
- 77. Meyer G, Warnke A, Bender R et al. Effect on hip fractures of increased use of hip protectors in nursing homes: cluster randomised controlled trial. BMJ 2003;326:76.
- 78. Pitkala KH, Juola AL, Kautiainen H et al. Education to reduce potentially harmful medication use among residents of assisted living facilities: a randomized controlled trial. J Am Med Dir Assoc 2014;15:892–8.
- 79. Pagaiya N, Garner P. Primary care nurses using guidelines in Thailand: a randomized controlled trial. Trop Med Int Health 2005;10:471–7.
- 80. Ogden J, Hoppe R. The relative effectiveness of two styles of educational package to change practice nurses’ management of obesity. Int J Obes Relat Metab Disord 1997;21:963–71.
- 81. von Lengerke T, Lutze B, Krauth C et al. Promoting hand hygiene compliance: PSYGIENE-a Cluster-Randomized controlled trial of tailored interventions. Dtsch Arztebl Int 2017;114:29–36.
- 82. Teesing GR, Erasmus V, Nieboer D et al. Increased hand hygiene compliance in nursing homes after a multimodal intervention: a cluster randomized controlled trial (HANDSOME). Infect Control Hosp Epidemiol 2020;41:1169–77.
- 83. Cheater FM, Baker R, Reddish S et al. Cluster randomized controlled trial of the effectiveness of audit and feedback and educational outreach on improving nursing practice and patient outcomes. Med Care 2006;44:542–51.
- 84. Carroll DL, Dykes PC, Hurley AC. An electronic fall prevention toolkit: effect on documentation quality. Nurs Res 2012;61:309–13.
- 85. Smeland AH, Twycross A, Lundeberg S et al. Educational intervention to strengthen pediatric postoperative pain management: a cluster randomized trial. Pain Manag Nurs 2022;23:430–42.
- 86. Valimaki M, Lantta T, Anttila M et al. An evidence-based educational intervention for reducing coercive measures in psychiatric hospitals: a randomized clinical trial. JAMA Netw Open 2022;5:e2229076.
- 87. Jeihooni AK, Kashfi SH, Bahmandost M et al. Promoting preventive behaviors of nosocomial infections in nurses: the effect of an educational program based on health belief model. Invest Educ Enferm 2018;36:e09.
- 88. Day T, Wainwright SP, Wilson-Barnett J. An evaluation of a teaching intervention to improve the practice of endotracheal suctioning in intensive care units. J Clin Nurs 2001;10:682–96.
- 89. Maki-Turja-Rostedt S, Leino-Kilpi H, Korhonen T et al. Consistent practice for pressure ulcer prevention in long-term older people care: a quasi-experimental intervention study. Scand J Caring Sci 2021;35:962–78.
- 90. Brennan K, Sanchez D, Hedges S et al. A nurse-led intervention to reduce the incidence and duration of delirium among adults admitted to intensive care: a stepped-wedge cluster randomised trial. Aust Crit Care 2023;36:441–8.
- 91. Lynch J, Rolls K, Hou YC et al. Delirium in intensive care: a stepped-wedge cluster randomised controlled trial for a nurse-led intervention to reduce the incidence and duration of delirium among adults admitted to the intensive care unit (protocol). Aust Crit Care 2020;33:475–9.
- 92. Walsh TS, Kydonaki K, Antonelli J et al. Staff education, regular sedation and analgesia quality feedback, and a sedation monitoring technology for improving sedation and analgesia quality for critically ill, mechanically ventilated patients: a cluster randomised trial. Lancet Respir Med 2016;4:807–17.
- 93. Seto WH, Ching TY, Yuen KY et al. The enhancement of infection control in-service education by ward opinion leaders. Am J Infect Control 1991;19:86–91.
- 94. Titler MG, Herr K, Brooks JM et al. Translating research into practice intervention improves management of acute pain in older hip fracture patients. Health Serv Res 2009;44:264–87.
- 95. Gengiah S, Barker PM, Yende-Zuma N et al. A cluster-randomized controlled trial to improve the quality of integrated HIV-tuberculosis services in primary healthcare clinics in South Africa. J Int AIDS Soc 2021;24:e25803.
- 96. Kim HJ, Hwang SY. Effect of website-based learning on improved monitoring of adverse drug reactions by clinical nurses. Asian Nurs Res (Korean Soc Nurs Sci) 2022;16:45–51.
- 97. Birken SA, Powell BJ, Presseau J et al. Combined use of the consolidated framework for implementation research (CFIR) and the theoretical domains framework (TDF): a systematic review. Implement Sci 2017;12:2.
- 98. Hansford HJ, Cashin AG, Doyle J et al. Barriers and enablers to using intervention reporting guidelines in sports and exercise medicine trials: a mixed-methods study. J Orthop Sports Phys Ther 2024;54:142–52.
- 99. Beidas RS, Dorsey S, Lewis CC et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci 2022;17:55.
- 100. Strifler L, Barnsley JM, Hillmer M et al. Identifying and selecting implementation theories, models and frameworks: a qualitative study to inform the development of a decision support tool. BMC Med Inform Decis Mak 2020;20:91.
- 101. Wensing M. Reflections on the measurement of implementation constructs. Implement Res Pract 2021;2:26334895211020125.
- 102. Liang L, Bernhardsson S, Vernooij RW et al. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implement Sci 2017;12:1–12.
- 103. Pinnock H, Barwick M, Carpenter CR et al. Standards for reporting implementation studies (StaRI) statement. BMJ 2017;356:i6795.
Data Availability Statement
De-identified data from this study are not available in a public archive but will be made available on request by emailing the corresponding author.

