Abstract
Background and Objectives
Significant quality problems exist in long-term care (LTC). Interventions to improve care are complex and often have limited success. Implementation remains a black box. We developed a program theory explaining how implementation of a complex intervention occurs in LTC settings—examining mechanisms of impact, effects of context on implementation, and implementation outcomes such as fidelity.
Research Design and Methods
Concurrent process evaluation of Safer Care for Older Persons in residential Environments (SCOPE)—a frontline worker (care aide) led improvement trial in 31 Canadian LTC homes. Using a mixed-methods exploratory sequential design, qualitative data were analyzed using grounded theory to develop a conceptual model illustrating how teams implemented the intervention and how it produced change. Quantitative analyses (mixed-effects regression) tested aspects of the program theory.
Results
Implementation fidelity was moderate. Implementation is facilitated by (a) care aide engagement with core intervention components; (b) supportive leadership (internal facilitation) to create positive team dynamics and help negotiate competing workplace priorities; (c) shifts in care aide role perceptions and power differentials. Mixed-effects model results suggest intervention acceptability, perceived intervention benefits, and leadership support predict implementation fidelity. When leadership support is high, fidelity is high regardless of intervention acceptability or perceived benefits.
Discussion and Implications
Our program theory addresses important knowledge gaps regarding implementation of complex interventions in nursing homes. Results can guide scaling of complex interventions and future research.
Keywords: Evaluation, Intervention/trial methods, Long-term care
Background and Objectives
Care quality problems in long-term care (LTC) are endemic, transcend national boundaries (Konetzka, 2020), and are a pressing research priority (Morley et al., 2014). The pandemic revealed new depths of these problems in LTC, with homes becoming “hubs in their communities for the worst clinical manifestations of COVID-19” (Barnett & Grabowski, 2020). Trends in population aging predict a continually increasing need for LTC (Alzheimer Society of Canada, 2010), yet staff levels and skills are not changing significantly. Care aides (nursing assistants) provide roughly 90% of direct care (Hewko et al., 2015); they report increased workloads and decreased quality of work life (Reinhardt et al., 2022), and their work is increasingly devalued (Scales, 2021). Indeed, a recent special issue in this journal suggested care aide workforce issues are the most significant challenge facing the LTC sector (Meeks & Degenholtz, 2021).
Research communities invest heavily in interventions to improve the quality of care and work life; however, interventions are not always successful and often have declining success when replicated (Schooler, 2014). There is increasing recognition that implementing interventions is “messy” (Ogrinc & Shojania, 2014) and requires models and approaches which recognize that most quality interventions are complex and are implemented in equally complex, dynamic health systems (Greenhalgh & Papoutsi, 2018). Although several frameworks identify factors that enable or impede intervention implementation (Damschroder et al., 2009; Kitson et al., 1998; Ovretveit et al., 2011), less knowledge exists regarding the implementation process.
To move complex interventions from research to practice, the UK Medical Research Council (MRC; Moore et al., 2015) and others (e.g., Dixon-Woods et al., 2011) argue it is vital to evaluate them using concurrent process evaluations, which run in parallel to the intervention and examine how the program is implemented. Process evaluations can be used to develop program theories that specify an intervention’s components and provide “a narrative about the structures, behaviours, processes and contextual features that will be needed to achieve the aims and actions of the intervention” (Davidoff et al., 2015, p. 230). Greater efforts are needed to use methodologically pluralistic approaches such as process evaluations and to generate program theories that provide rich depictions of complex phenomena (Greenhalgh & Papoutsi, 2018).
Study Objectives
Using a mixed-methods, concurrent process evaluation, we aimed to:
Develop a program theory explaining how implementation of a complex intervention (SCOPE) occurs—examining impact mechanisms, effects of context on implementation, and implementation outcomes.
Test our program theory by quantitatively examining factors that influence fidelity with which teams implemented SCOPE.
The SCOPE Intervention
Safer Care for Older Persons (in residential) Environments (SCOPE; ClinicalTrials.gov ID: NCT03426072) is a pragmatic controlled trial of a multicomponent improvement intervention for nursing homes. It is designed to engage unit teams, led by care aides, to improve best practice use, staff work-life outcomes, and resident outcomes such as pain, mobility, and responsive behaviors. Modeled on the Institute for Healthcare Improvement’s Breakthrough Series Collaborative Model (Kilo, 1998) and using the PARiHS framework (Kitson et al., 1998), SCOPE was designed to be implementable—addressing the technical aspects of change and using facilitation to support teams as they address sociocultural and contextual factors that affect implementation.
SCOPE teams were unit based and were led by, and composed mostly of, care aides. Teams received support internally from a team sponsor (unit manager) and a senior sponsor (LTC home administrator), and externally from a Quality Advisor. The core components of the intervention include:
(1) care-aide-led teams working on a focused clinical area;
(2) in-person quarterly Learning Congresses (LCs) with all teams;
(3) use of quality improvement methods by teams (Plan-Do-Study-Act cycles);
(4) quality improvement coaching from a Quality Advisor during action periods between LCs;
(5) supporting leaders (sponsors) to internally facilitate change by the care-aide-led teams.
Additional SCOPE details and a schematic are provided in Supplementary Item 1. The SCOPE main trial (Wagg et al., in press) is described elsewhere.
Research Design and Methods
Conceptual Framework and Design
This mixed-methods study used the UK MRC process evaluation framework for evaluating complex interventions (Moore et al., 2015) to understand how SCOPE implementation occurred. The framework suggests examining three dynamically related areas: (1) implementation—what is implemented and how; (2) mechanisms of impact—how the delivered intervention produces change; and (3) how context affects implementation and outcomes. We explored implementation outcomes from the taxonomy proposed by Proctor et al. (2011), which outlines eight indicators of implementation success such as acceptability of the intervention to stakeholders, agreement to adopt, fidelity of implementation, penetration within the target setting, and sustainability (routinization of intervention practices).
We used a mixed-methods exploratory sequential design (Creswell et al., 2003). Qualitative data from multiple stakeholders were collected and analyzed in a grounded theory approach to create a program theory outlining how teams implemented SCOPE (Objective 1). Aspects of the program theory were tested with quantitative data (Objective 2). Our methodological approach is consistent with current guidance on evaluating health services interventions in complex systems (Long et al., 2018) which encourages multiple methods, values all stakeholders’ input, and seeks to achieve a broad understanding of whole, complex systems in context (Greenhalgh & Papoutsi, 2018).
Sample and Unit of Analysis
We evaluated 31 unit-based teams in four regions in Western Canada that participated in the SCOPE intervention arm: 8 from Edmonton Zone (ABN), 6 from the Calgary Zone (ABS), 6 from Interior Health, and 11 from Fraser Health. Most process evaluation data (described later) were gathered at the team level. Individual-level survey data from care aides were aggregated to the team level. SCOPE was delivered within each region (e.g., regions had their own Quality Advisor but LCs were delivered by all three Quality Advisors together). Regional variations in implementation were minor and space constraints prohibit their discussion.
Data Collection
Qualitative and quantitative process evaluation data were collected from all SCOPE stakeholders (care aides, Sponsors, Quality Advisors, and researchers) during the 1-year trial (2018–2019). Table 1 provides details about each data collection approach. Qualitative data included Quality Advisor diary entries made following every interaction with a team, observations of LC activities, and focus group data collected at the final LC celebration.
Table 1.
SCOPE Process Evaluation Data Collection Approaches
| Approach | Description | Purpose | Data quality approach |
|---|---|---|---|
| Quality Advisor diaries | QAs completed a diary entry after each interaction with a team (2–3 diary entries per team between LCs, plus 1 immediately after each LC). Diary entries included observations (e.g., “Team tested X change”) and impressions (e.g., “physio seems to be driving changes”) | To capture QAs’ perspectives regarding team-level engagement, progress, challenges, enactment of SCOPE’s core components, deviations from intended practice (adaptations), and the role of context in implementation | QAs were trained using examples for diary completion. Quality and completeness of the first several diary entries were assessed by the researchers to ensure relevant information was captured in sufficient detail. Early review led to a revised template separating facts and impressions |
| Participant observations | Researchers used a semi-structured template to observe 2–4 team activities at each LC. Activities captured pertinent processes (leadership approaches, team dynamics) and fidelity receipt/enactment of QI methods central to SCOPE | To capture qualitative data, primarily about the processes through which teams engage and interact with the intervention | Observers underwent training and a calibration exercise before the first learning congress with 1 hr of retraining prior to the other learning congresses |
| Focus groups | Conducted at the final celebratory LC with care aides from each team to obtain team-specific data. Separate focus groups were conducted across teams with professional staff and Sponsors. | To obtain data directly from SCOPE participants on their implementation experience, including factors that influence implementation, fidelity receipt, and enactment | Adhered to the principle of homogeneous strangers to eliminate power differentials within a focus group (Morgan, 1997). Focus groups were facilitated by research team members familiar with qualitative methods, recorded, and transcribed |
| Learning Congress checklist | Simple checklist completed following each learning congress to ensure all agenda items were delivered | To monitor thoroughness and consistency of delivery for LC material and activities | Verified completeness following each learning congress |
| Questionnaires | Completed at each learning congress by care aides, sponsors and QAs. | To collect team-level data from multiple stakeholders evaluating core intervention components as well as implementation and intervention outcomes | Collected at learning congresses to reduce data collection burden and ensure high response rates; care aides given private space to complete questionnaires away from more senior people in their organization; all survey questions were pilot tested or used previously |
| Observer fidelity enactment ratings | Ratings provided by a SCOPE researcher at LCs 3 and 4 based on progress presentation and Q&A | To obtain expert ratings of the extent to which SCOPE teams have successfully enacted the core components of the intervention | Researchers were provided with “guidelines for rating,” including things to “look for” during progress presentations; prior to each LC, researchers were trained on use of the rating scale and a calibration exercise was completed. At LC2, IRR was examined by having two experts rate each team, using a one-way random-effects model. Across all 30 cases, IRR for fidelity enactment was good (IRR = 0.73) |
Notes: IRR = interrater reliability; LC = Learning Congress; QA = Quality Advisor; QI = quality improvement; SCOPE = Safer Care for Older Persons in residential Environments intervention.
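The one-way random-effects model used for the interrater reliability check corresponds to the ICC(1) formulation. A minimal sketch of that computation is below; the data are simulated and the helper function is illustrative, not the authors’ actual analysis code.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1) for an (n_subjects x k_raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares from one-way ANOVA
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: 2 experts rating 30 teams on the 0-4 enactment scale
rng = np.random.default_rng(1)
true_score = rng.uniform(0, 4, 30)
ratings = np.clip(true_score[:, None] + rng.normal(0, 0.5, (30, 2)), 0, 4)
print(round(icc_oneway(ratings), 2))
```

With perfectly agreeing raters the function returns 1.0; values near the paper’s 0.73 indicate good agreement by conventional benchmarks.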
Quantitative data on eight variables came from questionnaires/rating scales completed at LC 3 and LC 4 by SCOPE team members, sponsors, quality advisors, and research observers (see Table 2). We focus on quantitative data from LC 3 and LC 4 because most implementation activities occurred leading up to these two LCs and we tested some measures during LC 1 and LC 2.
Table 2.
SCOPE Process Evaluation Quantitative Measures
| Variable | Description | Response scale |
|---|---|---|
| SCOPE mechanisms of impact | ||
| V2 | Quality Advisor helpfulness—measured with a single item from the care aide perspective aggregated to the team level: “Over the last 3 months, how often has your SCOPE Quality Advisor helped your SCOPE team when you had difficulty moving forward with SCOPE?” | 1 = never/almost never; 5 = always/almost always |
| V3 | Team Sponsor support—measured with a single item from the care aide perspective aggregated to the team level: “Over the last 3 months, how helpful was support from the team sponsor for achieving your team’s goals?” | 1 = not at all helpful; 5 = extremely helpful |
| Context | ||
| V4 | Priority of SCOPE for the nursing home facility—measured with a single item from the Quality Advisor perspective: “Facilities face many competing priorities. To what extent was SCOPE a priority for the facility in the last 3 months?” | 1 = very low priority; 5 = very high priority |
| Implementation and intervention outcomes | ||
| V5 | Intervention acceptability (“belief that SCOPE has been an excellent learning experience”)—measured with a single item from the care aide perspective aggregated to the team level: “SCOPE has been an excellent learning experience” | 1 = strongly disagree; 5 = strongly agree |
| V6 | Perceived effects of SCOPE on residents (“belief that SCOPE participation is helping to improve resident care”)—measured with a single item from the care aide perspective aggregated to the team level: “Participating in SCOPE is helping our care unit to improve care for our residents” | 1 = strongly disagree; 5 = strongly agree |
| V7 | Extent SCOPE team is care aide-led—measured from the care aide perspective with an adapted four-item shared team leadership scale (Hiller et al., 2006; alpha = 0.891), aggregated to the team level (measured only at LC4): “Thinking about all SCOPE activities and decisions over the last 3 months, how often did care aides on the SCOPE team play a lead role in: (a) setting your SCOPE team’s goals, (b) planning how the work gets done, (c) organizing tasks so that work flowed smoothly, (d) developing solutions when problems arise.” | 1 = never; 5 = always |
| V8 | Overall fidelity enactment—single-item rating provided during LC3 and LC4 by a SCOPE researcher after they observed a team’s progress presentation and asked them questions. “Enactment refers to a team’s actual implementation of SCOPE activities as intended by the intervention. Using the ‘[core component list of things to] Look fors’ provided, rate the team’s level of enactment.”a | 0 = no/very low enactment; 4 = very high enactment |
Notes: LC = Learning Congress; PDSA = Plan-Do-Study-Act; SCOPE = Safer Care for Older Persons in residential Environments intervention.
aThis kind of global rating faithfully reflects competency when completed by subject-matter experts in time-limited interactions during Objective Structured Clinical Exams (Regehr et al., 1998; van der Vleuten et al., 2005) and has been used by members of this team to evaluate fidelity enactment of the INFORM intervention (Ginsburg et al., 2020).
Supplementary Item 2 details qualitative and quantitative process evaluation data collected by MRC framework category (Moore et al., 2015). All data collection tools are included in Supplementary Item 3. This study was approved by the Research Ethics Boards of the University of Alberta (Pro00000012517) and University of British Columbia (H14-03286), with operational approval obtained from all included facilities as required. SCOPE sponsors and team members were asked for oral informed consent before participating in any primary data collection (evaluation surveys, focus groups, and interviews).
Data Analysis
To develop a program theory of how implementation of complex interventions occurs (Objective 1), we explored the processes through which intervention participants in different roles engaged with the intervention. Following principles of grounded theory (Glaser & Strauss, 1967), we iteratively collected and examined data on participants’ experiences over the implementation interval, broadening and deepening areas for data collection as data collection proceeded. Ultimately, we used the constant comparative approach to compare and contrast categories, themes, and relationships to build our process model (Glaser, 1965). Data were analyzed in NVivo by A. Massie, A. Easterbrook, and L. R. Ginsburg. Experiences of teams, regions, and time periods were purposely compared and contrasted to illuminate implementation processes and examine longitudinal trends. Initially, two researchers (A. Easterbrook and A. Massie) independently open-coded transcripts, remaining as close to the participants’ language as possible. Examples of initial codes included “you guys have given us [care aides] an opportunity to speak out and change.” Initial codes were then grouped into categories and subcategories based on a code book iteratively developed as data were analyzed. In biweekly meetings, A. Massie, A. Easterbrook, and L. R. Ginsburg discussed and revised codes and categories. Once saturation was reached with category development, we began axial coding, or finding the relationships between, and within, categories and subcategories. Triangulation of data sources and methods, peer debriefing, and an audit trail were used to ensure rigor in data analyses.
In our exploratory sequential design, we tested aspects of the process model (derived using the approach above) with quantitative analyses (Objective 2). To examine factors influencing fidelity with which teams enact SCOPE, we created a long-form data set and specified repeated-measures mixed-effects regression models (MIXED REML estimation, SPSS) to account for dependencies of a team’s responses at LC 3 and LC 4. The model examined variance in Overall fidelity enactment (dependent variable [V8]) that is explained by variables (V1–V6) above. Given the small sample size (i.e., to ensure sufficient power and model identification), Model 1 examined variance explained by two pairs of variables suggested to be most critical for implementation by the qualitative modeling and Spearman correlations: SCOPE’s priority within the nursing home (V4) and perceived effects of SCOPE on residents (V6); and team sponsor support (V3) and intervention acceptability (V5). Variance explained by PDSA learning (V1) and Quality Advisor helpfulness (V2) was also examined to test the statistical significance (p < .05) of these fixed parameters. Variance explained by extent SCOPE team is care aide led (V7) could not be examined as data were only collected at the fourth LC. We examined interaction effects between each variable and team sponsor support (model 2) given its importance in the qualitative analysis. The team sponsor support variable (V3) was negatively skewed so was dichotomized to compare variance in fidelity enactment explained by each fixed effect when team sponsor support was rated 5-“extremely helpful” versus when it was rated < 5 (“not at all helpful” to “very helpful”).
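The repeated-measures model described above can be sketched as follows. This is a minimal illustration using simulated data and Python’s statsmodels rather than the SPSS MIXED procedure the authors used; all variable names and simulated values are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated long-form data: one row per team per Learning Congress (LC3, LC4)
n_teams = 31
df = pd.DataFrame({
    "team": np.repeat(np.arange(n_teams), 2),
    "lc": np.tile([3, 4], n_teams),
    "acceptability": rng.normal(4.5, 0.4, n_teams * 2),      # illustrative V5
    "perceived_benefit": rng.normal(4.5, 0.5, n_teams * 2),  # illustrative V6
    "sponsor_support": rng.normal(4.4, 0.9, n_teams * 2),    # illustrative V3
})
# Outcome loosely related to the predictors plus team-level noise (simulated)
team_effect = rng.normal(0, 0.5, n_teams)
df["fidelity"] = (0.5 * df["acceptability"]
                  + 0.3 * df["perceived_benefit"]
                  + team_effect[df["team"]]
                  + rng.normal(0, 0.8, n_teams * 2))

# Dichotomize the skewed sponsor-support variable (5 vs. < 5), as in the paper
df["support_high"] = (df["sponsor_support"] >= 5).astype(int)

# A team-level random intercept accounts for the dependency of a team's LC3
# and LC4 responses; REML estimation parallels the SPSS MIXED specification
model = smf.mixedlm("fidelity ~ acceptability * support_high", df,
                    groups=df["team"]).fit(reml=True)
print(model.summary())
```

The interaction term mirrors Model 2’s test of whether each predictor’s effect on fidelity enactment differs when team sponsor support is high versus lower.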
Results
An average of 2.9 care aides per team attended each LC; the team members were mostly female (93.5%). Eighty-seven percent of SCOPE teams (27/31) participated in all four LCs and 74.2% (23/31) sent a Team or Senior Sponsor. Three teams (9.7%) became disengaged after LC 2 and did not attend LC 3/4. See Table 3.
Table 3.
Key Variable Descriptives
| Team descriptives | Value | Quantitative variable scoresb,c | Mean (SD) |
|---|---|---|---|
| SCOPE team membersa—% female | 93.5 (72/77) | V1. PDSA learning LC3 | 4.23 (0.69) |
| Mean # of care aides per team at each LC (range) | 2.9 (1-5) | V1. PDSA learning LC4 | 4.01 (0.65) |
| # of teams participating in LCs 1 and 2 only (%) | 3 (9.7%) | V2. Quality Advisor helpfulness LC3 | 4.53 (0.65) |
| # of teams participating in LCs 1–3 only (%) | 1 (3.2%) | V2. Quality Advisor helpfulness LC4 | 4.58 (0.73) |
| # of teams participating in all four LCs (%) | 27 (87.1%) | V3. Team Sponsor support LC3 | 4.34 (0.89) |
| # of teams that sent a team or senior sponsor to LC 1 or LCs1 and 2 only (%) | 3 (9.7%) | V3. Team Sponsor support LC4 | 4.48 (0.92) |
| # of teams that sent a team or senior sponsor to LCs 1–3 only (%) | 5 (16.1%) | V4. Priority of SCOPE for the nursing home facility LC3 | 2.50 (0.97) |
| # of teams that sent a team or senior sponsor to all four LCs (%) | 23 (74.2%) | V4. Priority of SCOPE for the nursing home facility LC4 | 2.62 (1.15) |
| | | V5. Intervention acceptability LC3 | 4.51 (0.41) |
| | | V5. Intervention acceptability LC4 | 4.57 (0.46) |
| | | V6. Perceived effects of SCOPE on residents LC3 | 4.42 (0.55) |
| | | V6. Perceived effects of SCOPE on residents LC4 | 4.60 (0.46) |
| | | V7. Extent SCOPE team is care aide-led LC4 | 4.55 (0.70) |
| | | V8. Overall fidelity enactment LC3 | 2.25 (1.08) |
| | | V8. Overall fidelity enactment LC4 | 2.52 (1.37) |
Notes: LC = Learning Congress; PDSA = Plan-Do-Study-Act; SCOPE = Safer Care for Older Persons in residential Environments intervention; SD = standard deviation.
aSCOPE team membership sometimes fluctuated; LC3 numbers are reported here.
bV1–V8 are defined in Table 2. Because SCOPE is a team-level intervention, care aide data (V1–V3, V5 and V6) were aggregated to the SCOPE team level. Before aggregation, we used mixed-effects regression models with a nursing home-level random intercept to estimate intracluster correlation (ICC) of each variable. ICC reflects proportion of total variance explained by care aides’ nursing home membership. ICC values >0.05 are sufficient for aggregation (Bliese, 2000). With one exception, ICC exceeded 0.05 for all care aide variables (range = 0.05–0.75).
cDifferences between LC3 and LC4 scores for each variable are not statistically significant using independent samples t tests.
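The aggregation check described in note b (a nursing home-level random-intercept model whose ICC must exceed 0.05) can be sketched as below. The data are simulated and the column names are illustrative assumptions, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated care-aide responses nested within nursing homes
n_homes, aides_per_home = 31, 4
home = np.repeat(np.arange(n_homes), aides_per_home)
home_effect = rng.normal(0, 0.4, n_homes)      # between-home variation
score = 4.5 + home_effect[home] + rng.normal(0, 0.5, home.size)
df = pd.DataFrame({"home": home, "score": score})

# Null (intercept-only) model with a nursing-home random intercept
m = smf.mixedlm("score ~ 1", df, groups=df["home"]).fit(reml=True)

# ICC = between-home variance / (between-home + residual variance);
# values > 0.05 were treated as sufficient to justify aggregation
between_var = float(m.cov_re.iloc[0, 0])
icc = between_var / (between_var + m.scale)
print(f"ICC = {icc:.2f}")
```

An ICC above the 0.05 threshold indicates enough between-home agreement to warrant aggregating individual care-aide responses to the team level.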
Qualitative Results
Overall, for the implementation outcomes we examined (Proctor et al., 2011), qualitative analysis suggests many teams enacted the intervention with reasonable fidelity within their unit and a few enacted beyond their unit (penetration). By engaging with and applying SCOPE’s core elements, most teams recognized the value of QI approaches and the potential for care aides to lead change initiatives (implementation acceptability and feasibility). Most teams demonstrated commitment to continuing QI approaches (sustaining the intervention). The best example of sustainment/routinization was the team that created a new verb: If a resident demonstrated responsive behaviors, staff said they would “simply SCOPE them” (ABS004, Quality Advisor diary, post-LC 4)—meaning they would study the resident’s care processes to learn what prompted the behaviors and try small changes to reduce them.
Our analysis suggests four interrelated processes that describe how teams implemented SCOPE (Figure 1). These processes are described here, along with illustrative quotations, and deepen understanding of the three dynamically related areas in the MRC framework (what was implemented, how it produced the outcomes noted above, and how context affects implementation and outcomes).
Figure 1.
Schematic of how teams implement the SCOPE intervention. CA = care aide; QA = quality advisor; QI = quality improvement; SCOPE = Safer Care for Older Persons in residential Environments intervention.
Process 1. Learning and Applying QI Knowledge and Skills
Teams learned and applied QI knowledge and skills by (a) engaging meaningfully with core SCOPE elements and (b) creating supportive/learning environments.
Engaging meaningfully with core SCOPE elements
Teams with previous QI exposure knew the potential benefits of QI and applied SCOPE with greater ease. Teams with little or no QI experience questioned their capacity to implement SCOPE and required additional support to understand and apply core concepts. The Quality Advisor was crucial in keeping these teams on track. As teams gained QI experience with SCOPE, many grew more comfortable with the approach. However, not all teams truly understood and adopted QI methods.
For many teams, the most difficult aspect of SCOPE was effectively measuring the impact of their changes (the “study” part of PDSA) using run charts, safety crosses, and similar tools, which resulted in poor enactment of measurement. Our data also suggest care aides understand and demonstrate evidence of impact using different paradigms than researchers do. Many teams did not demonstrate the impact of changes using the measurement approaches intended by SCOPE, though the change was palpable: “This team is doing good work … and thoughtfully approaching complex resident/staff problems – and have almost no documentation to show it ... they are struggling to understand the tools we provided” (FH042, Quality Advisor impressions, pre-LC 3). Instead, care aides relied more heavily on anecdotal “stories” that indicate successful change:
… the story they just shared about modifications they made for the resident to eat upstairs is an example of assessment and then intervention that resulted in positive change… They agreed the story telling approach might work better than the run chart templates. (FH012, Quality Advisor diary)
Creating supportive/learning environments
Teams’ learning and application of QI knowledge and skills was noticeably influenced by how and how much Sponsors created supportive environments (internal facilitation). Some Sponsors encouraged care aides to speak during activities and contribute to project ownership: “This is your guys’s [project], and if you need it [support], let me know” (ABS002, Sponsor, focus group); others created a supportive learning environment more subtly: “Senior Sponsor effectively coached them … she wasn’t giving them answers but asked questions to get their thoughts, then elaborated on answers when necessary” (ABN007, Quality Advisor impressions).
Most sponsors adapted their leadership style to meet evolving care aide needs. Sponsors typically became less directive and more supportive as care aides developed self-efficacy and enacted project elements more autonomously. One Senior Sponsor started with a very directive leadership style as the care aides were initially “shy and maybe uncomfortable being thrust forward as ‘leaders’ … [but the Sponsor was] actively working to back off” (IH027, Quality Advisor impressions, LC 2). Another Sponsor noted “I may have been too involved and could help empower the group more” (FH020, Team Sponsor, LC 3 surveys). A few Sponsors remained directive, either because care aides who speak English as an additional language struggled to engage with project materials or because the Sponsor lacked confidence in care aides’ ability to lead QI work. One Senior Sponsor was “surprised when the Quality Advisor stated that on other teams, care aides had written the care plans, and done education of their peers” (ABN007, Quality Advisor impression, pre-LC 3).
Teams who learned QI knowledge and skills early in SCOPE had advantages. They built on this through a scaffolding process, learning how to implement PDSA cycles, adapting them following measurement, and sharing interventions across their unit or their nursing home. When teams did not gain QI skills, they could not effectively transfer knowledge to others, limiting intervention spread. When teams engaged meaningfully with technical QI elements, and when supportive/learning environments were created, teams could better negotiate relationships, navigate workplace challenges, and assume new roles and responsibilities—the other three overarching processes.
Process 2. Negotiating (Developing/Fostering) Relationships
Teams negotiated relationships by (a) developing/fostering positive team dynamics and cohesion and (b) maintaining a team and attaining legitimacy in the unit.
Developing/fostering positive team dynamics and cohesion
Initial team perceptions and openness to SCOPE greatly affected team cohesion and dynamics. Care aide teams who volunteered for SCOPE typically had stronger initial cohesion and came from flatter organizational hierarchies which facilitated engagement and role change. Care aides selected/“voluntold” by Sponsors to participate were concerned about workload and more reluctant to engage, though apprehensions diminished over time: “We got all this top-down s**t dumped on us, but now we have more freedom and it is functional” (IH001, LC 2 observations).
Some teams developed cohesion when pride in the SCOPE work emerged: “One care aide for whom English is a second language, described what the SCOPE work meant to her and how proud she is with what she has accomplished, and of being part of a team” (ABN025, Quality Advisor diary, LC 4). Pride emerged and was evident among many care aides who recognized their ability to enact change and had this affirmed by their peers (e.g., “Other Care aides have commented on how they enjoy seeing the change that the SCOPE team is undertaking” ABS004, Quality Advisor diary) and by management: “[the Care aides] were invited to talk about the project on the national conference call…everyone but me [Sponsor]!” (ABS015, focus group). Pride sometimes aided team cohesion in teams with poor dynamics. One team did not initially get along yet “the whole team seems proud to be affiliated with the project” and engagement changed “negative to good team dynamics” by LC 3 (IH001, Quality Advisor diary, LC 2, LC 3).
Team dynamics during LCs reflected how teams worked together within their nursing homes. Teams that communicated openly and demonstrated positive interpersonal relationships (e.g., laughter, nodding, encouragement) displayed these same dynamics at meetings with the Quality Advisor between LCs. Teams with unfavorable dynamics demonstrated tensions and struggles over leadership, reducing care aides’ empowerment to contribute to discussions.
Positive team dynamics and cohesion increased in most teams as implementation led to observable benefits for residents, increased self-efficacy among team members, and as noted above, recognition from peers: “The sponsor reported noticing other staff are recognising the SCOPE team as ‘experts’ in the nursing home and according them more respect and asking for advice” (FH042, Quality Advisor diary, pre-LC 2). Recognition created shared pride and motivated continued engagement. Failure to see improvements in residents bred discouragement and often slowed momentum or caused disengagement.
Maintaining a team and attaining legitimacy on the unit
During the project, most teams lost and gained team members (moving units, changing shifts/rotations, etc.). Recruiting new team members was disruptive and slowed progress, although losing a team member who caused conflict or a leader who provided inadequate support was beneficial. One team was “derailed by transitions in leadership” but “the new team sponsor [was] more on the ground (on unit) with the team [which was] helpful for moving the project along” (ABS004, Quality Advisor impressions, pre-LC 4).
Teams also had to negotiate relationships and engage other unit staff in implementation. Often, unit staff “didn’t know what we are doing … so, they say, ‘I’m not getting anything. Why are you wasting your time and our time?’ … to change that attitude and convince those people, it took a long way” (ABN004, care aide, focus group). To manage resistance, some teams made SCOPE more visible on the unit (posters in high-traffic areas, communication binders, presentations) to achieve a shared purpose among staff. Other teams used informal communication between coworkers, targeting specific individuals who were more open to change.
Some Sponsors leveraged their position to ease resistance. Sponsors in one nursing home “put out the first introduction to SCOPE,” which the care aides believe gave them the “leeway to be able to do what we wanted to do … as the message got through better than if it had come from any one of us” (FH012, care aide, focus group). For other teams, resistance eased only when staff noticed SCOPE benefits for residents or for staff quality of work life.
Overall, this process created collaborative environments where many staff, not only SCOPE team members, engaged in intervention processes and experienced role change.
Process 3. Navigating Workplaces
Teams navigated workplaces by (a) finding capacity for SCOPE and (b) building team resilience against challenges. Despite chronic understaffing in nursing homes, teams could create, adopt, and maintain (routinize) new and consistent care approaches while changing care aide roles and responsibilities.
Finding capacity for SCOPE
Most teams lacked time/staffing/resources to complete both daily care tasks and SCOPE tasks. Lack of protected time for SCOPE activities was identified in LC exit surveys as the biggest challenge for SCOPE engagement and progress. A few teams resorted to working during off hours or using communication binders or group texts to maintain interaction. Most teams met informally (e.g., team huddles, discussions in passing) or during “any time we were able to get together … even if it was just five minutes” (ABN004, care aide, focus group).
Teams who struggled most with communication had team members working different shifts who “did not have the chance to work together and connect” (IH007, care aide, focus group). Reliance on casual staff (contract staff who work infrequently to fill scheduling gaps) is common in LTC homes and exacerbated communication and engagement problems for most teams. Teams reported spending their time educating casual staff rather than trialing new change ideas. Consistent measurement in PDSA cycles was disrupted for one team who noted casual staff “were not able to use the tracking tool we developed” (FH014, Team Sponsor, LC 2 survey).
Most teams contended with other organizational pressures, like audits, restructuring, and outbreaks, which slowed or paused implementation and reduced engagement with other SCOPE elements such as meetings with the Quality Advisor or attending an LC. External pressures were more damaging when they arose early in the intervention; teams that faced pressures later, once new care approaches were more routinized, saw progress slow rather than stall.
Building team resilience against challenges
Some teams were more resilient and able to re-engage with SCOPE following disruptions. More resilient teams believed in the benefits of SCOPE, were more engaged, and/or had Sponsors who supported care aides even when they failed. Work environments that accepted or welcomed failure allowed care aides to build capacity to identify creative solutions to challenges. One team noted they had done nothing on SCOPE between LC 3 and LC 4 due to unexpected challenges, but “they will continue to provide the non-pharmacological interventions on the unit and to teach other units to try them.” The Quality Advisor impression (FH023, LC 4) was that this team’s resilience resulted from “overall context in the home that supported this team to have the courage to try and even fail.”
LCs promoted resilience by prompting teams to re-engage with a shared goal, get advice from clinical experts, and connect with other teams. LC deadlines promoted accountability: “We kind of want to be on this right away, as opposed to the week before [the next congress]…And it sort of teaches you better time management” (ABS003, focus group).
Early in SCOPE, LC activities surfaced “noticeable insecurity regarding the task and how to proceed” (ABN009, LC 1 observations). During an LC 1 activity, one team “started out quite negative, lots of sarcasm and disagreement” but “with prompting and encouragement it became obvious this was due to unease—group was not sure what to do” (IH001, LC 1 observations). With additional facilitation, teams often grasped concepts. Rarely, interactions with researchers at an LC were unhelpful: “advice … by a researcher at Learning Congress 2 was very complicated and they felt too overwhelmed to utilise the measurement plan created” (FH012, Quality Advisor diary, post-LC 2). This illustrates both the measurement challenges noted earlier and the importance of the Quality Advisor facilitation role in assisting teams.
Perseverance was evident as teams continued to engage with SCOPE, even when faced with challenges, eventually grasping concepts and seeing benefits. At LC 2, one Quality Advisor reported: “I do not get the sense that they really comprehend the PDSA model or what we are trying to accomplish with SCOPE” (ABS004, Quality Advisor impressions, LC 2). By LC 4 she noted: “This team has…really come to understand the IHI model, change, and improvement” (ABS004, Quality Advisor impressions, pre-LC 4).
Process 4. Shifting Culture Around Roles/Hierarchy
Reduced power differences created opportunities for care aides to take on new roles and responsibilities and to shift away from task-focused care. This process occurred within a positive feedback loop of (a) enhancing empowerment and building trust and (b) reducing tension and conflict.
Enhancing empowerment and building trust
For most LTC homes, SCOPE was the first purposeful attempt to empower care aides in leadership roles—a process that occurred slowly during the project. Sponsors typically struggled to release control, so Quality Advisors routinely emphasized the importance of care aide leadership in SCOPE and worked to enable it: “I suspect the team sponsor is running the project…[I] Will work to…find areas where the care aides can begin to take charge” (FH020, Quality Advisor impressions, pre-LC 3).
The extent and speed of care aide empowerment differed across LTC homes and depended in part on how hierarchical the culture was. For care aides in one home, simply being included in SCOPE (as opposed to leading) was meaningful. This home achieved “slow and small progress,” though it was “meaningful for them [care aides] and representative of an overall culture shift where they are being actively included in the resolution of issues they are faced with every day” (FH032, Quality Advisor impressions, post-LC 3). In other homes, care aides stepped into more substantial leadership roles: “[care aide] presented it [SCOPE] at every single mandatory education we had for all of our staff … it’s the first time a care aide has ever presented during mandatory education” (ABS002, focus group).
As empowerment and team cohesion developed (Process 3) and care aides took more responsibility, “a new paradigm of trust” (FH017, Quality Advisor impressions, post-LC 4) emerged and communication across hierarchies changed. Many care aides reported feeling more respected and comfortable speaking to Sponsors: “I wasn’t able to talk to our managers before. Just ‘Hello’ ... Now it’s like we have the relationship already...we know her better, and she knows us better” (ABN010, focus group). Participation in LCs also broke down hierarchies and facilitated communication: “Sitting together at these Learning Congresses, it felt to me like we were more on an even level as opposed to, ‘oh you’re my boss’” (ABS002, focus group).
SCOPE helped to build trust by shifting perceptions of care aides’ abilities. Although they provide most resident care, care aides are typically considered low down in the organizational hierarchy: “We’re the bottom feeders” (IH019, care aide, focus group). Placing care aides in leadership roles in SCOPE seemed to change some nurses’ and managers’ perceptions of care aides’ roles and ability. Many accounts from family members, staff, management, board members, and Sponsors stated how impressed they were by care aides: “I’ve seen huge capacity for change” (ABS002, Senior Sponsor, focus group).
Reduced tension and conflict
For most teams, jealousy initially caused tension between SCOPE care aides and other staff who felt SCOPE team members received preferential treatment. Jealousy was more prominent in nursing homes where care aides were “voluntold” for SCOPE, with perceptions that Sponsors “picked favourites to be part of the SCOPE team” (ABS004, Quality Advisor diary, post-LC 3). Jealousy often bred resistance to care aides in leadership roles, which hindered the uptake of new care approaches across the unit. Resistance was sometimes overt—one Team Sponsor noted that “during presentations by SCOPE team members, some staff were negative, disrespectful to presenters” (ABN016, Team Sponsor, LC 2 survey). Other times resistance was more subtle with staff refusing to adopt new care approaches.
During the “wise crowd” activity at LC 2, which focused on strategies to improve relationships and obtain buy-in, many teams spoke of tensions while implementing their first PDSA cycle. Consistent with Process 2, jealousy and resistance eased as care aides demonstrated success: “Our co-workers need proof” (FH040, care aide, LC 2 observations). New care approaches became routinized: “As more and more staff got on board, [the resistors] stuck out like a sore thumb” (FH012, care aide, focus group). Nevertheless, only a few teams gained complete buy-in from other staff. Most teams experienced some resistance throughout SCOPE. Our data suggest SCOPE may have, initially and inadvertently, created unease and jealousy among non-SCOPE staff and created isolation and emotional labor among SCOPE care aides:
I think at first it was kind of like, ‘We’re not a part of the SCOPE team—You guys are.’ … But then, people … got curious, started asking questions about it. And then, once they got to know about it—that isolation was gone—you didn’t necessarily have the buy in from them or the understanding from them. But at least they weren’t isolating you. (ABS002, care aide, focus group)
Power struggles also occurred between SCOPE care aides and nursing staff. A care aide noted, “Some nurses won’t listen to us [care aides] at all,” to which the sponsor replied, “I don’t think some of them [nurses] want to see the care aides leading because they’re supposed to be the team leaders, right?” (IH007, care aide, focus group). As many nurses came to appreciate the abilities and unique strengths of care aides (e.g., intimate resident knowledge), communication and collaboration improved: “last week an RN put a bunch of old care plans on the nursing station and said: ‘I want your help updating these care plans, because you guys [care aides] know these people best.’ … I’ve never seen that before” (IH025, Sponsor, focus group). Improved communication and collaboration between care aides and allied health professionals were also reported.
Finally, as part of the hierarchy culture shift, SCOPE seemed to spark a movement away from task-focused care, prompting staff to reflect on care practices and to trial creative solutions to challenges. Examples are FH017 using a “map of Canada to record resident progress toward a mobility goal” and IH025 buying “baby spoons that change colour if the food is too hot.” Adopting change ideas yielded positive outcomes for residents (e.g., reduced pain, falls, and responsive behaviors) and staff (e.g., improved quality of work life, reduced workload). Importantly, participants indicated that adopting a work approach favoring reflection and autonomy was SCOPE’s biggest benefit.
Quantitative Results
Core intervention components PDSA learning (V1), Quality Advisor helpfulness (V2), and Team sponsor support (V3) were evaluated positively (ratings over 4 on a 1–5 scale), as were intervention acceptability (V5) and effects of SCOPE on residents (V6). SCOPE fidelity enactment (V8) was moderate (2.25 on a 0–4 scale) at LC 3, rising to 2.52 at LC 4 (a nonsignificant change; t(53) = −0.81, p = .23). The extent to which SCOPE is a nursing home priority (V4) was rated relatively low (2.6 on a 1–5 scale). See Table 3. Correlations among key study variables are provided in Supplementary Item 4.
Results of our linear mixed models suggest four key factors that influence the fidelity with which teams enact SCOPE. Higher care aide ratings of effects of SCOPE on residents (V6; F = 9.4, p = .004) and higher Quality Advisor ratings of extent to which SCOPE is a nursing home priority (V4; F = 8.2, p = .006) are both associated with higher researcher ratings of fidelity enactment (Table 4, Model 1a). Similarly, higher care aide ratings of team sponsor support (V3; F = 6.4, p = .014) and higher care aide ratings of intervention acceptability (V5; F = 5.1, p = .029) are also associated with higher researcher ratings of fidelity enactment (Table 4, Model 1b). Other mechanisms of impact (PDSA learning [V1], Quality Advisor Helpfulness [V2]) were not statistically significant. Substantial amounts of variance (29%) in fidelity enactment were explained by within-team similarities over time (repeated-measures intracluster correlation [ICC] = 0.22, 95% confidence interval [CI]: 0.08; 0.49).
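The random-intercept structure described above can be sketched in code. This is a minimal illustration under stated assumptions, not the study's analytic code: the variable names, simulated data, and effect sizes are hypothetical, and the model simply mirrors the form of Model 1a (fidelity enactment regressed on V6 and V4, with a random intercept per team to capture within-team similarity over the two measurement occasions).

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_teams, n_obs = 27, 2                       # 27 teams, measured at LC 3 and LC 4
teams = np.repeat(np.arange(n_teams), n_obs)
team_effect = rng.normal(0, 0.5, n_teams)[teams]  # shared team-level deviation

df = pd.DataFrame({
    "team": teams,
    "v6_resident_effects": rng.uniform(3, 5, n_teams * n_obs),  # V6 analogue
    "v4_priority": rng.uniform(1, 5, n_teams * n_obs),          # V4 analogue
})
# Outcome built to loosely echo the Model 1a coefficients, clipped to 0-4
df["fidelity"] = (-2.7 + 0.88 * df["v6_resident_effects"]
                  + 0.43 * df["v4_priority"] + team_effect
                  + rng.normal(0, 0.8, len(df))).clip(0, 4)

# Linear mixed model with a random intercept for each team
m = smf.mixedlm("fidelity ~ v6_resident_effects + v4_priority",
                df, groups=df["team"]).fit()
print(m.summary())

# Repeated-measures ICC: between-team variance / total variance
icc = m.cov_re.iloc[0, 0] / (m.cov_re.iloc[0, 0] + m.scale)
print(f"ICC = {icc:.2f}")
```

The ICC computed from the variance components here corresponds to the repeated-measures intracluster correlation reported above as capturing within-team similarity over time.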
Table 4.
Estimates of Fixed Effects for the Outcome of Fidelity Enactment (n = 27)
| Modelᵃ | Estimate | SE | LCI | UCI | p |
|---|---|---|---|---|---|
| Model 1a | |||||
| Intercept | −2.710 | 1.175 | −5.084 | −0.335 | .026 |
| Effects of SCOPE on residents (V6) | 0.880 | 0.288 | 0.299 | 1.461 | .004 |
| SCOPE as Facility Priority (V4) | 0.429 | 0.150 | 0.128 | 0.731 | .006 |
| Model 1b | |||||
| Intercept | −3.241 | 1.775 | −6.812 | 0.329 | .074 |
| Team sponsor support (V3) | 0.439 | 0.170 | 0.095 | 0.783 | .014 |
| Intervention acceptability (V5) | 0.813 | 0.361 | 0.087 | 1.540 | .029 |
| Model 2a | |||||
| Intercept | −8.208 | 2.273 | −12.833 | −3.582 | .001 |
| Intervention acceptability (V5) | 2.247 | 0.500 | 1.230 | 3.263 | <.001 |
| Team sponsor support (V3—dichotomized) | 10.710 | 2.930 | 4.774 | 16.646 | <.001 |
| Intervention acceptability (V5) × Team sponsor support (V3—dichot) | −2.186 | 0.644 | −3.491 | −0.881 | .002 |
| Model 2b | |||||
| Intercept | −4.744 | 1.597 | −7.974 | −1.513 | .005 |
| Effects of SCOPE on residents (V6) | 1.551 | 0.365 | 0.813 | 2.289 | <.001 |
| Team sponsor support (V3—dichot) | 4.964 | 2.415 | 0.087 | 9.841 | .046 |
| Effects of SCOPE on residents (V6) × Team sponsor support (V3—dichot) | −0.998 | 0.533 | −2.074 | 0.078 | .068 |
Notes: LCI = lower confidence interval; p = p value; SCOPE = Safer Care for Older Persons in residential Environments intervention; SE = standard error; UCI = upper confidence interval.
ᵃV1–V8 are defined in Table 2.
Results of mixed models with interaction terms (Table 4, Models 2a and 2b) show a significant interaction between intervention acceptability (V5) and team sponsor support (V3; F = 11.5, p = .002) and, using a more generous threshold for significance, between effects of SCOPE on residents and team sponsor support (F = 3.5, p = .068). Specifically, when team sponsor support was rated at the highest level (“extremely helpful”), fidelity enactment was high regardless of the level of intervention acceptability (Figure 2A) or perceived effects of SCOPE on residents (Figure 2B). Conversely, when team sponsor support was rated as anything less than “extremely helpful,” intervention acceptability and effects of SCOPE on residents become important predictors of fidelity enactment (Figure 2).
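The moderation pattern described here can be sketched as follows. Again this is an illustrative sketch on simulated data, not the study analysis: variable names are hypothetical, sponsor support is dichotomized (1 = rated "extremely helpful"), and the data are generated to reproduce the qualitative pattern in Figure 2A (fidelity tracks acceptability only when support is not at the highest level), which should surface as a negative interaction term as in Model 2a.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 54                                   # 27 teams x 2 occasions
accept = rng.uniform(2, 5, n)            # intervention acceptability (V5 analogue)
support_high = rng.integers(0, 2, n)     # 1 = sponsor rated "extremely helpful"
team = np.repeat(np.arange(27), 2)

# When support is high, fidelity is uniformly high; otherwise it
# tracks acceptability -- the crossing pattern shown in Figure 2A.
fidelity = np.where(support_high == 1,
                    3.2 + rng.normal(0, 0.3, n),
                    0.5 + 0.6 * accept + rng.normal(0, 0.3, n)).clip(0, 4)

df = pd.DataFrame({"fidelity": fidelity, "accept": accept,
                   "support_high": support_high, "team": team})

# Mixed model with the acceptability x support interaction
m = smf.mixedlm("fidelity ~ accept * support_high", df,
                groups=df["team"]).fit()
print(m.fe_params)
```

Under this data-generating pattern the `accept:support_high` coefficient is negative: high sponsor support flattens the acceptability slope, matching the interpretation given above.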
Figure 2.
Relationship between intervention characteristics and fidelity at different levels of sponsor support. SCOPE = Safer Care for Older Persons in residential Environments intervention.
Discussion and Implications
This study sought to add to the literature on how complex interventions are implemented in LTC by outlining a program theory that demonstrates the complexity of, and the dynamic relationships among, the elements needed for quality improvement implementation in these under-resourced settings. The proposed program theory brings together the results of our qualitative and quantitative analyses and outlines how the delivered intervention produces change using concepts from both the MRC framework and Proctor’s taxonomy of implementation outcomes.
Program Theory
Our program theory suggests frontline-led teams can implement complex interventions. Implementation is facilitated by engagement with core technical components of the intervention and by supportive leadership (strong internal facilitation) that creates an environment for resilient teams with positive dynamics and low levels of conflict. Implementation is further facilitated by shifts in role perceptions and power differentials. Genuine engagement and empowerment of care aides boost their self-efficacy, enhance trust among them and with their professional and managerial colleagues, and help them shift their work from task orientation to critical reflection. Intervention acceptability and perceived outcomes (care aide perceptions of benefits to resident care) are both important for facilitating intervention implementation, for achieving buy-in from non-SCOPE team members, and for spreading the intervention. Sustainability of SCOPE (which was seen following a recent pilot study; Song et al., 2022) is expected given its emphasis on internal facilitators and potential for the intervention to shift culture in a way that promotes routinization of SCOPE improvement activities.
Four processes emerged from our qualitative data: building QI skills, negotiating relationships, navigating workplaces, and shifting culture around care aide roles. These processes are interrelated (Figure 1, fidelity enactment area). Over time, they led to team and unit perceptions that SCOPE benefitted resident care and staff work life, increased empowerment and self-efficacy among team members, and brought recognition from peers and improved relationships (other efficacy). Together, these processes and outcomes form reinforcing feedback loops that shifted culture and promoted and sustained teams’ implementation efforts (Figure 1). Sponsors navigated pressures, created learning environments, and facilitated care aide role change (Figure 1, bottom). More broadly, we saw increased capacity for change alongside shifts in perspective and, ultimately, in culture.
Quantitative data support relationships suggested by the qualitative model. Mixed-effects model results suggest that important predictors of fidelity enactment (implementation) are sponsor support at the unit and organization levels and perceived benefits of SCOPE for residents. Significant interactions further suggest conditions under which certain variables facilitate intervention enactment (Figure 2).
The proposed program theory of how teams implement SCOPE is consistent with our previous retrospective study examining implementation experiences of six teams in the SCOPE pilot study (Ginsburg et al., 2018). That study also suggested that facilitation elements of the QI program structure, developing individual QI skills, and observable program impact on residents, care aides, and leaders operated as part of a reinforcing feedback loop that boosted team members’ ability to navigate workplace and relationship challenges and implement SCOPE.
Our findings are also consistent with recent work in the VA on implementation of a frontline staff-led quality improvement platform to improve resident–staff interactions (Hartmann et al., 2018; Mills et al., 2019). Like the VA intervention, SCOPE used a blended facilitation model (Pimentel et al., 2019) that (a) takes a team-based approach to implementation; (b) fosters and makes use of local leadership familiar with the organizational context, structures, and culture (internal facilitation); and (c) uses some external facilitation to support the more technical aspects of the intervention. The kind of relationship building (which recognizes the unique expertise held by each stakeholder) that we found was crucial for success in SCOPE is also central to the blended facilitation approach (Pimentel et al., 2019) and was important for the success of the VA QI intervention.
As we consider how teams implement interventions in complex settings and how to facilitate implementation two further points are noteworthy. First, the most impactful facilitation in SCOPE was provided internally by local leaders (sponsors) followed by the external Quality Advisors. For sustainability reasons, SCOPE was designed so that researchers would play a very small facilitation role. Although Quality Advisors were external facilitators in SCOPE, this kind of technical support role is sustainable as it is often played by a corporate quality program (Pimentel et al., 2019). Second, although SCOPE teams struggled with the technical measurement aspects of the PDSA approach—a problem noted in studies of PDSA fidelity in the United Kingdom (McNicholas et al., 2019; Taylor et al., 2014)—the VA intervention was successful in part because it involved a less technical, easy to conduct, bundle of practices to improve quality (Mills et al., 2018). Technically straightforward interventions are, therefore, important, although they cannot be successfully implemented without simultaneous attention to the complex organizational/sociocultural contexts within which they take place (Bosk et al., 2009; van de Ven, 1986).
Although implementation determinants are increasingly well studied, knowledge of the process of implementation is limited (Nilsen, 2015). There is a very small body of implementation science research in LTC, and meaningful use of implementation theories, models, and frameworks in this sector is limited (Sullivan et al., 2022); our program theory offers setting-specific knowledge that can guide LTC homes as they implement and scale up improvement interventions. Future research could examine how this program theory, coupled with a more thorough exploration of higher-level theories at the micro (e.g., Social Learning Theory; Bandura, 1977) and macro levels (e.g., complexity theory [van de Ven, 1986], role theory [Hindin, 2007], theories of organizational culture [Schein, 1990]), could help develop and refine much-needed middle-range implementation process theories (Wensing & Grol, 2019).
Finally, by modeling relationships among implementation outcomes longitudinally, this study responds to calls for greater consideration of outcomes in implementation research and raises questions about what constitutes success in complex trials such as SCOPE. SCOPE’s primary outcome was improved conceptual research use by care aides, and it was, technically, a negative trial (Wagg et al., in press). However, data from this concurrent process evaluation suggest SCOPE had a notable impact on other important variables, including care aide empowerment and care aides’ ability to implement process change—two variables that are related to resident outcomes (Barry et al., 2005) and that, if enhanced, have the potential to improve longstanding workforce challenges (Scales, 2021). We therefore reinforce the idea that implementation outcomes (Proctor et al., 2011) and implementation processes, both of which we focus on in the current paper, are extremely important to explore in trials of complex interventions. This is particularly important in LTC, where commonly used resident outcomes are subject to considerable time lag and are therefore “suboptimal” (Wensing & Grol, 2019). Additional empirical work on the optimal fidelity–adaptation equilibrium continues to be necessary (Ginsburg et al., 2021), particularly as we attend to the need for sustainable and feasible interventions (Hartmann et al., 2018). Interested readers are encouraged to see recently published papers in this journal on tailoring implementation support to promote scalability (Hughes et al., 2022) and the intervention adaptation process in LTC (Madrigal et al., 2022).
Strengths and Limitations
Strengths
This study is one of the few to examine implementation concurrently rather than retrospectively, yielding longitudinal data that underscore different implementation trajectories. The mixed-methods design enabled us to understand both processes and outcomes of implementation in complex systems (Greenhalgh & Papoutsi, 2018). The study also responds to calls for knowledge on instrumentation approaches in implementation research that are both practical and robust (Martinez et al., 2014). Results are based on the perspectives of multiple SCOPE stakeholders (care aides, Sponsors, Quality Advisors, and SCOPE researchers), reducing the risk of common method bias.
Limitations
The sample was drawn from homes participating in a larger Translating Research in Elder Care (TREC) program (Estabrooks et al., 2009). These homes may be more quality focused so results may not be generalizable to all LTC homes. With a sample size of 31 LTC homes we could not, in our mixed models, adjust for additional team or unit characteristics (e.g., unit-level staffing) that may explain variance in fidelity enactment. Finally, the proximity of SCOPE trial and process evaluation researchers is a potential bias, though there are also benefits to close integration between teams evaluating intervention processes and outcomes (Moore et al., 2015).
Conclusion
SCOPE was an ideal testbed for advancing knowledge of QI, intervention evaluation, and implementation science in LTC settings. The proposed program theory provides insight into how a complex intervention like SCOPE produces change (mechanisms of impact) and can practically inform policy for health system administrators and clinicians regarding how to successfully implement complex evidence-based interventions in LTC where quality concerns are widespread.
Supplementary Material
Acknowledgments
We would like to thank the facilities, administrators, and their care teams who participated in this study. We would also like to thank Don McLeod for facilitating the Learning Congresses and contributing to the development of the SCOPE materials for participants; the Quality Advisors (Carolyn Brandly, Fiona MacKenzie, and Barb Stolee) for supporting the SCOPE teams and keeping them engaged; Judith Palfreyman for administrative support; and the TREC data unit manager, Joseph Akinlawon.
Contributor Information
Liane R Ginsburg, School of Health Policy & Management, York University, Toronto, Ontario, Canada.
Adam Easterbrook, Centre for Health Evaluation and Outcome Sciences, University of British Columbia, Vancouver, British Columbia, Canada.
Ariane Massie, School of Kinesiology & Health Science, York University, Toronto, Ontario, Canada.
Whitney Berta, Institute of Health Policy, Management & Evaluation, University of Toronto, Toronto, Ontario, Canada.
Malcolm Doupe, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada.
Matthias Hoben, School of Health Policy & Management, York University, Toronto, Ontario, Canada.
Peter Norton, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada.
Colin Reid, School of Health and Exercise Science, University of British Columbia Okanagan, Kelowna, British Columbia, Canada.
Yuting Song, Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada.
Adrian Wagg, Division of Geriatric Medicine, University of Alberta, Edmonton, Alberta, Canada.
Carole Estabrooks, Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada.
Funding
This study was funded by a Canadian Institutes of Health Research Transitional Operating Grant CIHR PS 148582 Wagg.
Conflict of Interest
None.
Data Availability
Statistical and anonymous aggregate data, the full data set creation plan, and underlying analytic code associated with this paper are available from the authors upon request, with the understanding that the programs may rely on coding templates or macros that are unique to TREC. All measurement instruments are provided as Supplementary Item 3. The data used for this article are housed in a secure and confidential Health Research Data Repository (HRDR) in the Faculty of Nursing at the University of Alberta, in accordance with the health privacy legislation of participating TREC jurisdictions. This legislation and the ethics approvals covering TREC data do not allow public sharing or removal of disaggregated data from the HRDR, even if deidentified. The data were provided under specific data sharing agreements only for approved use by TREC within the HRDR. Where necessary, access to the HRDR to review the original source data may be granted to those who meet prespecified criteria for confidential access, available on request from the TREC data unit manager (https://trecresearch.ca/about/people), with the consent of the original data providers and the required privacy and ethical review bodies. The SCOPE trial was preregistered at ClinicalTrials.gov (ID: NCT03426072).
References
- Alzheimer Society of Canada. (2010). Rising tide: The impact of dementia on Canadian society. https://alzheimer.ca/sites/default/files/documents/Rising-tide_Alzheimer-Society.pdf (accessed April 13, 2022).
- Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. doi: 10.1037//0033-295x.84.2.191
- Barnett, M. L., & Grabowski, D. C. (2020). Nursing homes are ground zero for COVID-19 pandemic. JAMA Health Forum, 1(3), e200369. doi: 10.1001/jamahealthforum.2020.0369
- Barry, T., Brannon, D., & Mor, V. (2005). Nurse aide empowerment strategies and staff stability: Effects on nursing home resident outcomes. Gerontologist, 45(3), 309–317. doi: 10.1093/geront/45.3.309
- Bliese, P. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. In Klein, K. J., & Kozlowski, S. (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 349–381). Jossey-Bass.
- Bosk, C. L., Dixon-Woods, M., Goeschel, C. A., & Pronovost, P. J. (2009). Reality check for checklists. Lancet, 374(9688), 444–445. doi: 10.1016/s0140-6736(09)61440-9
- Creswell, J. W., Plano-Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. In Tashakkori, A., & Teddlie, C. (Eds.), Handbook of mixed methods in social and behavioral research (pp. 209–240).
- Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50. doi: 10.1186/1748-5908-4-50
- Davidoff, F., Dixon-Woods, M., Leviton, L., & Michie, S. (2015). Demystifying theory and its use in improvement. BMJ Quality and Safety, 24(3), 228–238. doi: 10.1136/bmjqs-2014-003627
- Dixon-Woods, M., Bosk, C. L., Aveling, E. L., Goeschel, C. A., & Pronovost, P. J. (2011). Explaining Michigan: Developing an ex post theory of a quality improvement program. Milbank Quarterly, 89(2), 167–205. doi: 10.1111/j.1468-0009.2011.00625.x
- Estabrooks, C. A., Hutchinson, A. M., Squires, J. E., Birdsell, J., Cummings, G. G., Degner, L., Morgan, D., & Norton, P. G. (2009). Translating research in elder care: An introduction to a study protocol series. Implementation Science, 4(1), 51. doi: 10.1186/1748-5908-4-51
- Ginsburg, L., Easterbrook, A., Berta, W., Norton, P., Doupe, M., Knopp-Sihota, J., Anderson, R. A., & Wagg, A. (2018). Implementing frontline-worker–led quality improvement in nursing homes: Getting to “how”. Joint Commission Journal on Quality and Patient Safety, 44(9), 526–535. doi: 10.1016/j.jcjq.2018.04.009
- Ginsburg, L. R., Hoben, M., Easterbrook, A., Andersen, E., Anderson, R. A., Cranley, L., Lanham, H. J., Norton, P. G., Weeks, L. E., & Estabrooks, C. A. (2020). Examining fidelity in the INFORM trial: A complex team-based behavioral intervention. Implementation Science, 15(1), 1–11. doi: 10.1186/s13012-020-01039-2
- Ginsburg, L., Hoben, M., Easterbrook, A., Anderson, R. A., Estabrooks, C. A., & Norton, P. G. (2021). Fidelity is not easy! Challenges and guidelines for assessing fidelity in complex interventions. Trials, 22(372). doi: 10.1186/s13063-021-05322-5
- Glaser, B. G. (1965). The constant comparative method of qualitative analysis. Social Problems, 12(4), 436–445. doi: 10.2307/798843
- Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.
- Greenhalgh, T., & Papoutsi, C. (2018). Studying complexity in health services research: Desperately seeking an overdue paradigm shift. BMC Medicine, 16(1), 95. doi: 10.1186/s12916-018-1089-4
- Hartmann, C. W., Mills, W. L., Pimentel, C. B., Palmer, J. A., Allen, R. S., Zhao, S., Wewiorski, N. J., Sullivan, J. L., Dillon, K., Clark, V., Berlowitz, D. R., & Snow, A. L. (2018). Impact of intervention to improve nursing home resident-staff interactions and engagement. Gerontologist, 58(4), e291–e301. doi: 10.1093/geront/gny039
- Hewko, S. J., Cooper, S. L., Huynh, H., Spiwek, T. L., Carleton, H. L., Reid, S., & Cummings, G. G. (2015). Invisible no more: A scoping review of the health care aide workforce literature. BMC Nursing, 14(38). doi: 10.1186/s12912-015-0090-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hiller, N. J., Day, D. V., & Vance, R. J. (2006). Collective enactment of leadership roles and team effectiveness: A field study. The Leadership Quarterly, 17, 387–397. doi: 10.1016/j.leaqua.2006.04.004 [DOI] [Google Scholar]
- Hindin, M. (2007). Role theory. In Ritzer G. (Ed.), The Blackwell encyclopedia of sociology (pp. 3959–3962). Blackwell Publishing. doi: 10.1002/9781405165518.wbeosr078 [DOI] [Google Scholar]
- Hughes, J. M., Zullig, L. L., Choate, A. L., Decosimo, K. P., Wang, V., van Houtven, C. H., Allen, K. D., & Nicole Hastings, S. (2022). Intensification of implementation strategies: developing a model of foundational and enhanced implementation approaches to support national adoption and scale-up. Gerontologist. 63(3), 604–613. doi: 10.1093/geront/gnac130 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kilo, C. M. (1998). A framework for collaborative improvement: Lessons from the Institute for Healthcare Improvement’s Breakthrough Series. Quality management in health care 6(4), 1–13. doi: 10.1097/00019514-199806040-00001 [DOI] [PubMed] [Google Scholar]
- Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the implementation of evidence based practice: A conceptual framework. Quality and Safety in Health Care, 7(3), 149–158. doi: 10.1136/qshc.7.3.149 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Konetzka, R. T. (2020). The challenges of improving nursing home quality. JAMA Network Open, 3(1), e1920231. doi: 10.1001/jamanetworkopen.2019.20231 [DOI] [PubMed] [Google Scholar]
- Long, K. M., McDermott, F., & Meadows, G. N. (2018). Being pragmatic about healthcare complexity: Our experiences applying complexity theory and pragmatism to health services research. BMC Medicine, 16(1), 94. doi: 10.1186/s12916-018-1087-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Madrigal, C., Mills, W. L., Keleher, V. C., Pimentel, C. B., Hartmann, C. W., Snow, A. L., Camp, C., & Hilgeman, M. M. (2022). A spotlight on adaptation: Preimplementation of montessori-based activity programming in long-term care using the Framework for Reporting Adaptations and Modifications-Enhanced (FRAME). Gerontologist. Advance online publication. 63(3), 589–603. doi: 10.1093/geront/gnac133 [DOI] [PubMed] [Google Scholar]
- Martinez, R. G., Lewis, C. C., & Weiner, B. J. (2014). Instrumentation issues in implementation science. Implementation Science, 9(118). doi: 10.1186/s13012-014-0118-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McNicholas, C., Lennox, L., Woodcock, T., Bell, D., & Reed, J. E. (2019). Evolving quality improvement support strategies to improve Plan-Do-Study-Act cycle fidelity: A retrospective mixed-methods study. BMJ Quality and Safety, 28(5), 356–365. doi: 10.1136/bmjqs-2017-007605 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Meeks, S., & Degenholtz, H. B. (2021). Workforce issues in long-term care: Is there hope for a better way forward? Gerontologist, 61(4), 483–486. doi: 10.1093/geront/gnab040 [DOI] [PubMed] [Google Scholar]
- Mills, W. L., Pimentel, C. B., Palmer, J. A., Snow, A. L., Wewiorski, N. J., Allen, R. S., & Hartmann, C. W. (2018). Applying a theory-driven framework to guide quality improvement efforts in nursing homes: The LOCK model. Gerontologist, 58(3), 598–605. doi: 10.1093/geront/gnx023 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mills, W. L., Pimentel, C. B., Snow, A. L., Allen, R. S., Wewiorski, N. J., Palmer, J. A., Clark, V., Roland, T. M., McDannold, S. E., & Hartmann, C. W. (2019). Nursing home staff perceptions of barriers and facilitators to implementing a quality improvement intervention. Journal of the American Medical Directors Association, 20(7), 810–815. doi: 10.1016/j.jamda.2019.01.139 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore, G. F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., Moore, L., O’Cathain, A., Tinati, T., Wight, D., Baird, J., O’Cathain, A., Tinati, T., Wight, D., & Baird, J. (2015). Process evaluation of complex interventions: Medical Research Council guidance. British Medical Journal, 350, h1258. doi: 10.1136/bmj.h1258 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Morgan, D. L. (1997). Focus groups as qualitative research. Qualitative research methods series (Vol. 16). Sage Publications. doi: 10.4135/9781412984287 [DOI] [Google Scholar]
- Morley, J. E., Caplan, G., Cesari, M., Dong, B., Flaherty, J. H., Grossberg, G. T., Holmerova, I., Katz, P. R., Koopmans, R., Little, M. O., Martin, F., Orrell, M., Ouslander, J., Rantz, M., Resnick, B., Rolland, Y., Tolson, D., Woo, J., & Vellas, B. (2014). International survey of nursing home research priorities. Journal of the American Medical Directors Association, 15(5), 309–312. doi: 10.1016/j.jamda.2014.03.003 [DOI] [PubMed] [Google Scholar]
- Nilsen, P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10(1), 1–13. doi: 10.1186/s13012-015-0242-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ogrinc, G., & Shojania, K. G. (2014). Building knowledge, asking questions. BMJ Quality & Safety, 23(4), 265–267. doi: 10.1136/bmjqs-2013-002703 [DOI] [PubMed] [Google Scholar]
- Ovretveit, J. C., Shekelle, P. G., Dy, S. M., McDonald, K. M., Hempel, S., Pronovost, P., Rubenstein, L., Taylor, S. L., Foy, R., & Wachter, R. M. (2011). How does context affect interventions to improve patient safety? An assessment of evidence from studies of five patient safety practices and proposals for research. BMJ Quality & Safety, 20, 604–610. doi: 10.1136/bmjqs.2010.047035 [DOI] [PubMed] [Google Scholar]
- Pimentel, C. B., Mills, W. L., Palmer, J. A., Dillon, K., Sullivan, J. L., Wewiorski, N. J., Snow, A. L., Allen, R. S., Hopkins, S. D., & Hartmann, C. W. (2019). Blended facilitation as an effective implementation strategy for quality improvement and research in nursing homes. Journal of Nursing Care Quality, 34(3), 210–216. doi: 10.1097/ncq.0000000000000376 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., & Hensley, M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. doi: 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Regehr, G., MacRae, H., Reznick, R. K., & Szalay, D. (1998). Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Academic Medicine, 73(9), 993–997. doi: 10.1097/00001888-199809000-00020 [DOI] [PubMed] [Google Scholar]
- Reinhardt, J. P., Franzosa, E., Mak, W., & Burack, O. (2022). In their own words: The challenges experienced by certified nursing assistants and administrators during the COVID-19 pandemic. Journal of Applied Gerontology, 41(6), 1539–1546. doi: 10.1177/07334648221081124 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scales, K. (2021). It is time to resolve the direct care workforce crisis in long-term care. Gerontologist, 61(4), 497–504. doi: 10.1093/geront/gnaa116 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schein, E. H. (1990). Organizational culture. American Psychologist, 45(2), 109–119. doi: 10.1037/0003-066x.45.2.109 [DOI] [Google Scholar]
- Schooler, J. W. (2014). Metascience could rescue the “replication crisis”. Nature, 515(9), 9. doi: 10.1038/515009a [DOI] [PubMed] [Google Scholar]
- Song, Y., MacEachern, L., Doupe, M. B., Ginsburg, L., Chamberlain, S. A., Cranley, L., Easterbrook, A., Hoben, M., Knopp-Sihota, J., Reid, R. C., Wagg, A., Estabrooks, C. A., Keefe, J. M., Rappon, T., & Berta, W. B. (2022). Influences of post-implementation factors on the sustainability, sustainment, and intra-organizational spread of complex interventions. BMC Health Services Research, 22(666). doi: 10.1186/s12913-022-08026-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sullivan, J. L., Montano, A. R. L., Hughes, J. M., Davila, H. W., O’Malley, K. A., Engle, R. L., Hawley, C. E., Shin, M. H., Smith, J. G., & Pimentel, C. B. (2022). A citation review of 83 dissemination and implementation theories, models, or frameworks utilized in U.S.-based aging research. Gerontologist. 63(3), 405–415. doi: 10.1093/geront/gnac096 [DOI] [PubMed] [Google Scholar]
- Taylor, M. J., McNicholas, C., Nicolay, C., Darzi, A., Bell, D., & Reed, J. E. (2014). Systematic review of the application of the Plan-Do-Study-Act method to improve quality in healthcare. BMJ Quality and Safety, 23(4), 290–298. doi: 10.1136/bmjqs-2013-001862 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Van De Ven, A. H. (1986). Central problems in the management of innovation. Management Science, 32(5), 590–607. doi: 10.1287/mnsc.32.5.590 [DOI] [Google Scholar]
- van der Vleuten, C. P., & Schuwirth, L. W. (2005). Assessing professional competence: from methods to programmes. Medical Education, 39(3), 309–317. doi: 10.1111/j.1365-2929.2005.02094.x [DOI] [PubMed] [Google Scholar]
- Wagg, A., Hoben, M., Ginsburg, L., Doupe, M., Berta, W., Song, Y., Norton, P., Knopp-Sihota, J., & Estabrooks, C. (In press). Safer Care for Older Persons in (residential) Environments (SCOPE): A pragmatic controlled trial of a care aide-led quality improvement intervention. Implementation Science, 18(9). doi: 10.1186/s13012-022-01259-8
- Wensing, M., & Grol, R. (2019). Knowledge translation in health: How implementation science could contribute more. BMC Medicine, 17(88). doi: 10.1186/s12916-019-1322-9
Data Availability Statement
Statistical and anonymous aggregate data, the full data set creation plan, and underlying analytic code associated with this paper are available from the authors upon request, with the understanding that the programs may rely on coding templates or macros unique to TREC. All measurement instruments are provided in Supplementary Item 3. The data used for this article are housed in a secure and confidential Health Research Data Repository (HRDR) in the Faculty of Nursing at the University of Alberta, in accordance with the health privacy legislation of participating TREC jurisdictions. This legislation and the ethics approvals covering TREC data do not allow public sharing or removal of disaggregated data from the HRDR, even if deidentified. The data were provided under specific data-sharing agreements only for approved use by TREC within the HRDR. Where necessary, access to the HRDR to review the original source data may be granted to those who meet prespecified criteria for confidential access, available on request from the TREC data unit manager (https://trecresearch.ca/about/people), with the consent of the original data providers and the required privacy and ethical review bodies. The SCOPE trial was preregistered at ClinicalTrials.gov (ID: NCT03426072).


