Health Services Research. 2024 Jul 25;59(Suppl 2):e14344. doi: 10.1111/1475-6773.14344

Effectiveness of a virtual quality improvement training program to improve reach of weight management programs within a large health system

Laura J Damschroder 1, Richard Evans 1, H Myra Kim 1,2, Jeremy Sussman 1,3,4, Michelle B Freitag 1, Claire H Robinson 1, Jennifer A Burns 1, Nicholas R Yankey 1, Julie C Lowery 1

Abstract

Objective

To test effectiveness of the LEAP (Learn Engage Act Process) Program on engaging frontline Veteran Health Administration (VHA) medical center teams in continuous quality improvement (QI), a core capability for learning health systems.

Data Sources and Study Setting

Data sources included VHA electronic health record (EHR) data, surveys, and LEAP coaching field notes.

Study Design

A staggered difference‐in‐differences study was conducted. Fifty‐five facilities participated in LEAP across eight randomly assigned clusters of 6–8 facilities per cluster over 2 years. Non‐participating facilities served as controls. In each LEAP facility, a MOVE! weight management program team completed a Plan‐Do‐Study‐Act cycle of change supported by a learning curriculum, coaching, and virtual collaboratives. The primary outcome was program reach to Veterans. A mixed‐effects model compared pre‐ versus post‐LEAP periods for LEAP versus control facilities. LEAP adherence, satisfaction, and the cost to deliver LEAP were evaluated.

Data Collection/Extraction Methods

Thirty months of facility‐level EHR MOVE! enrollment data were included in analyses. Satisfaction with LEAP and QI skills were assessed via surveys at baseline and 6 months post‐LEAP.

Principal Findings

Fifty‐five facilities were randomly assigned to eight time‐period‐based clusters to receive LEAP (71% completed LEAP) and 82 non‐participating facilities were randomly assigned as controls. Reach in LEAP and control facilities was comparable in the 12‐month pre‐LEAP period (p = 0.07). Though LEAP facilities experienced slower decline in reach in the 12‐month post‐LEAP period compared with controls (p < 0.001), this is likely due to unexplained fluctuations in controls. For LEAP facilities, satisfaction was high (all mean ratings >4 on a 5‐point scale), self‐reported use of QI methods increased significantly (p‐values <0.05) 6 months post‐LEAP, and delivery cost was $4024 per facility‐based team.

Conclusion

Control facilities experienced declining reach in the 12‐month post‐LEAP period, but LEAP facilities did not; LEAP facilities also reported greater engagement in QI, an essential capability for learning health systems.

Keywords: clinical trial design and implementation, health care organizations and systems


What is known on this topic

  • Team‐based engagement in learning, such as conducting Plan‐Do‐Study‐Act (PDSA) cycles of change as part of continuous quality improvement (QI), is a core competency for mature learning systems.

  • Many frontline workers lack capability and experience in doing PDSAs and QI.

  • Easy‐to‐use, hands‐on training is needed to consistently engage frontline teams in QI.

What this study adds

  • LEAP is a 6‐month QI learning program with: coaching for frontline teams who learn‐as‐they‐do, paced curriculum that avoids technical jargon, and assignments aimed at completing at least one PDSA cycle in 6 months.

  • Fifty‐five facilities randomly assigned to LEAP experienced less fluctuation and less decline in a key program metric compared to 82 control facilities during the 12‐month post‐LEAP period in an intention‐to‐treat analysis.

  • LEAP resulted in significant increases in use of QI methods and teams reported intentions to continue QI together but also reported time constraints.

1. INTRODUCTION

Strong learning health systems are determined by the degree to which “clinical informatics, incentives, and culture are aligned to promote continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by‐product of the delivery experience [italics added].” 1 (p136) Quality improvement (QI) is also a core implementation approach to achieve maturity as a High Reliability Organization (HRO) that aims for zero patient harm. 2 The Veterans Health Administration (VHA), one of the largest integrated systems in the world, serving about 6 million enrolled Veterans, 3 has targeted both HRO and learning system maturity as high‐priority initiatives. 2, 4, 5, 6 The centrality of QI capability within organizations is also highlighted in implementation science as a strategy to achieve fully optimized and sustained implementation of innovations. Implementation science can help guide development of pathways forward and overcome challenges in developing learning health systems. 7 The Dynamic Sustainability Framework 8 is one implementation science framework that posits that a dynamic, incremental process of optimization through rapid cycles of learning (specifically, Plan‐Do‐Study‐Act [PDSA] cycles of change) is necessary to move toward sustained, optimized fit of programs. The framework places PDSA learning cycles at its heart; learning systems, likewise, place learning loops at the center of learning culture. 9 PDSAs comprise four iterative, dynamic steps: Plan a small change, Do the test, Study effects, and Act on newly generated knowledge. 10

In 2006, the VHA National Center for Health Promotion and Disease Prevention (NCP) established the MOVE! Weight Management Program (MOVE!), an evidence‐based comprehensive lifestyle intervention designed to address obesity, 11 which significantly impacts life expectancy. 12, 13 Prevalence of obesity among US Veterans (41%) is higher than in the general US adult population (38%), 14, 15 and higher now than in the previous decade. Most VHA facilities offer group‐based MOVE!, but there is high variability in implementation of MOVE! across the system. 16, 17, 18 One study found that MOVE! programs with teams engaged in QI were associated with better program outcomes. 19

Four evaluations of multiple program implementations involving 21 VHA facilities revealed a recurring pattern of barriers to implementation in VHA facilities. 11, 20, 21, 22, 23 These barriers include lack of (1) planning, (2) engagement of key individuals, and (3) reflection on and evaluation of implementation progress and the effects of changes. Building QI skills among frontline workers can help overcome these recurring barriers. This, along with VHA's priorities in maturing as a learning system and HRO, led to development of the “Learn. Engage. Act. Process.” (LEAP) program. 24, 25 LEAP packages basic QI methods into a virtually delivered, hands‐on learning program for frontline clinical teams.

The goal of this study was to test LEAP's effect on reach of the MOVE! program to Veterans who would benefit from weight management programming. We conducted a randomized controlled trial and process evaluation to explore team experiences and continued use of QI methods after participating in LEAP.

2. METHODS

This was a staggered cluster randomized trial with a parallel control group, designed to assess the effectiveness of the LEAP program on reach of group‐based MOVE! programs. Teams participated in LEAP from October 2016 to January 2019. Our reporting follows the Standards for Reporting Implementation Studies (Appendix B). 26

2.1. Trial outcomes

Our primary outcome was monthly reach, defined as the number of patients who completed their first MOVE! visit per 1000 Veterans who were candidates for MOVE! within each facility. Administrative data were used to compute reach starting 12 months prior to the start of LEAP for each facility (pre‐LEAP period) through the 12‐month period following the end of LEAP for each facility (post‐LEAP period). To calculate reach, the numerator comprised the monthly number of MOVE! patients (defined as a Veteran who completed a MOVE! visit for the first time or after a gap of at least 6 months, following the definition used by NCP). The denominator was updated quarterly and defined as the number of facility‐enrolled Veterans who were candidates for MOVE! within the respective fiscal quarter and who lived within 40 miles of the facility. The 40‐mile criterion was applied because this trial focused on in‐person group MOVE! programs; our NCP operational partner determined this to be a reasonable maximum travel distance for weekly sessions. VHA criteria for MOVE! program candidacy were applied: body mass index (BMI) of 30 or higher, or BMI of 25–29 with a co‐occurring obesity‐related condition (e.g., diabetes); Veterans were excluded from reach calculations if they were in hospice care, had a life expectancy of less than 6 months, or were diagnosed with end‐stage cancer (determined by electronic health record [EHR] administrative data).
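The reach computation can be expressed compactly. The sketch below, in R (the language used for the trial's primary analyses), assumes hypothetical data frames and column names; the actual EHR extraction logic (first‐visit identification, quarterly denominator updates, 40‐mile filtering, and exclusions) is more involved.

```r
# Minimal sketch of monthly reach, assuming:
#   first_visits: one row per qualifying "new" MOVE! patient-visit
#     (first ever, or first after a gap of >= 6 months), with columns
#     `facility` and `month`
#   candidates: facility-month counts (`n_candidates`) of MOVE!-eligible
#     Veterans living within 40 miles, carried forward from quarterly updates
library(dplyr)

compute_reach <- function(first_visits, candidates) {
  first_visits %>%
    count(facility, month, name = "new_patients") %>%
    right_join(candidates, by = c("facility", "month")) %>%
    mutate(
      new_patients = coalesce(new_patients, 0L),  # months with no new visits
      reach = 1000 * new_patients / n_candidates  # per 1000 candidates
    )
}
```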

2.2. Facility eligibility criteria

Medical centers with a MOVE! program and a coordinator with a valid email address that had not participated in a prior pilot of LEAP were eligible to participate; Table 1 shows the flow of facilities from identification through participation.

TABLE 1.

Flow of facilities from identification to inclusion in intention‐to‐treat analyses and LEAP a completion.

Year 1 (Clusters 1–4):

Potentially eligible facilities b: N = 167
Excluded: no MOVE! coordinator (n = 25); participated in LEAP pilot (n = 3); duplicate program (n = 3); invalid email address (n = 4)
Randomized allocation to year c: N = 132 (n = 65 randomized to receive an invitation in Year 2 and excluded from Year 1 invitations)
Invited (August 2016): N = 67
Excluded: declined (n = 13); no response (n = 25)
Randomized allocation: n = 29 (n = 5 not assigned to a cluster)
Randomized to Clusters 1–4: n = 6 per cluster

Year 2 (Clusters 5–8), by invitation round:

Invitation sent                             | Jul 2017  | Oct 2017  | Jan 2018  | Apr 2018
Excluded: started LEAP in a prior cluster d | 22        | 6         | 6         | 6
Excluded: lost MOVE! coordinator            | —         | —         | 1         | —
Added: new coordinator email                | 4         | 1         | —         | —
Invited                                     | 114       | 109       | 102       | 96
Excluded: declined                          | 22        | 22        | 28        | 33
Excluded: no response                       | 56        | 57        | 50        | 46
Randomized allocation                       | 36        | 30        | 24        | 18
Randomized to cluster (n = 6 each)          | Cluster 5 | Cluster 6 | Cluster 7 | Cluster 8
Randomized to waitlist e                    | 30        | 24        | 18        | 12

Progress by cluster:

Cluster | Starting month | Added from waitlist f | Included in ITT g | Started LEAP, n (% of ITT) | Dropped out before week 7, n (% of started) | Completed LEAP, n (% of started)
1 | Oct 2016 | — | 6 | 5 (83.3)  | 1 (20.0) | 4 (80.0)
2 | Jan 2017 | — | 6 | 6 (100.0) | 2 (40.0) | 3 (50.0)
3 | Apr 2017 | — | 6 | 5 (83.3)  | 1 (20.0) | 4 (80.0)
4 | Jul 2017 | — | 6 | 6 (100.0) | 0 (0.0)  | 6 (100.0)
5 | Oct 2017 | 2 | 8 | 6 (75.0)  | 0 (0.0)  | 5 (83.3)
6 | Jan 2018 | 1 | 7 | 6 (85.7)  | 0 (0.0)  | 6 (100.0)
7 | Apr 2018 | 2 | 8 | 6 (75.0)  | 0 (0.0)  | 6 (100.0)
8 | Jul 2018 | 2 | 8 | 6 (75.0)  | 0 (0.0)  | 5 (83.3)

a LEAP: “Learn. Engage. Act. Process.” quality improvement program.

b Included all facilities identified as a Medical Center within the Veterans Health Administration's corporate data warehouse.

c After exclusions, facilities were randomized to receive an invitation in Year 1; the remainder (n = 65) were excluded from the first set of invitations. Starting in Year 2, the recruiting process changed so that all eligible facilities were invited 3 months prior to each starting date for Clusters 5–8.

d Facilities that started LEAP as a member of a previous cluster were excluded from future invitations.

e In Year 2, we added a waiting list. Facilities that expressed willingness (“yes”) to participate were randomly allocated to the respective cluster or to a waiting list.

f Facilities assigned to the waiting list were randomly ordered; if a facility dropped out before starting LEAP, the facility at the top of the waiting list queue was invited to replace it.

g Intention‐to‐treat; includes all facilities assigned to a cluster, i.e., intervention facilities.

2.3. Recruitment

Recruitment and randomization processes differed between Year 1 and Year 2 of the trial. In the first year, 132 facilities were eligible to participate (see Table 1). Reach was computed for each facility‐month. A randomized allocation, stratified by high/low (above/below median) reach, was applied to select n = 67 facilities to receive emailed invitations to participate in Year 1. We planned to invite the remaining 65 facilities in Year 2. Our LEAP coaches had the capacity to coach up to six teams at a time (thus, the trial assigned six teams per cluster). Twenty‐nine of the 67 invited facilities responded “yes.” Of these, 24 (6 per cluster) were randomly allocated to Clusters 1–4, which started in October 2016, January 2017, April 2017, or July 2017, respectively (see Table 1 and Table A1). Each cluster comprised three facilities with low‐reach and three facilities with high‐reach MOVE! programs. All facilities were notified of their assigned start date on September 1, 2016. Thus, coordinators had approximately one‐and‐a‐half to 8 months' lead time, depending on their assigned cluster.

In Year 1, more facilities expressed interest in participating than we were able to accommodate. However, two assigned facilities were unable to start LEAP at their assigned time and an additional two facilities dropped out in Week 1, leaving open “slots” that could have been filled by other willing facilities. Thus, in Year 2, we adopted a more flexible randomization process. Specifically, invitations were sent 3 months prior to each of four planned starting times to facilities that had not already participated in LEAP. Each quarter, six facilities (three high‐ and three low‐reach facilities) were randomly selected from positive responses to our invitation, as sketched below. With each round of invitations, we had more willingness to participate than we could accommodate (see Table 1); thus, facilities not randomized to a cluster were placed on a waiting list in random order. If a facility dropped out of its assigned cluster before starting LEAP, another facility was invited from the top of the waiting list. Invitation lists varied by cluster based on additions or losses of program coordinators; facilities that started LEAP in previous clusters were excluded from subsequent invitations (Table 1). Facilities in Clusters 5–8 started in October 2017, January 2018, April 2018, or July 2018, respectively.
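For concreteness, a minimal sketch of one Year 2 quarterly draw follows the design described above: three high‐reach and three low‐reach facilities are drawn into the cluster, and the remainder are randomly ordered onto a waiting list. The data frame and its columns are hypothetical, and the study's actual allocation procedure may have differed in detail.

```r
# One quarterly draw: `responders` holds facilities that answered "yes",
# with hypothetical columns `facility` (ID) and `reach` (pre-LEAP reach).
draw_cluster <- function(responders) {
  responders$stratum <- ifelse(responders$reach > median(responders$reach),
                               "high", "low")
  # Sample three facilities per reach stratum into the cluster
  picked <- do.call(rbind, lapply(split(responders, responders$stratum),
                                  function(s) s[sample(nrow(s), 3), ]))
  # Remaining responders form a randomly ordered waiting list
  remaining <- responders[!(responders$facility %in% picked$facility), ]
  list(cluster = picked, waitlist = remaining[sample(nrow(remaining)), ])
}
```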

Upon completion of the LEAP intervention, facilities that had not been randomized to LEAP (n = 82) were randomly allocated to one of the eight clusters as controls. Figure A1 shows counts of LEAP intervention and assigned control facilities for each cluster across the two‐year trial.

2.4. IRB approval and ethical considerations

This study was deemed a non‐research operations activity with the aim of developing and evaluating LEAP's impact on frontline teams' capability to engage in QI to improve treatment and, thus, IRB review was not required (Trial Registration: clinicaltrials.gov identifier: NCT02825680.).

2.5. Trial interventions

The trial was conducted within the context of all VHA facilities with MOVE! programs already receiving centralized support from NCP leaders, who hosted monthly conference calls and provided quarterly education sessions, online resources (see: www.move.va.gov), and ad‐hoc technical assistance as part of normal operations. All MOVE! coordinators also had access to administrative program data.

2.6. The LEAP program

The LEAP program was designed to activate and empower frontline workers to engage in QI. LEAP is a virtual, structured program developed to (1) use hands‐on participative teaching to build skills in doing PDSA cycles of change, (2) support teams with coaching as they choose an aim and implement a change, and (3) bring peers together from different facilities to establish a learning community. Program components are described elsewhere. 24 , 25 After completing LEAP, MOVE! program coordinators were sent a copy of The Improvement Guide book, 27 all team members received a completion certificate, and teams had the option of participating in monthly virtual collaborative (VC) sessions.

2.6.1. Teaching

LEAP materials and curriculum were adapted from a Massive Open Online Course developed by HarvardX in collaboration with the Institute for Healthcare Improvement (IHI). 28 IHI's approach minimizes use of technical jargon. Appendix C lists a calendar of curriculum topics. Content was designed to be easy to understand, with hands‐on learning‐while‐doing opportunities within the constraints of daily clinical work. An online platform included short videos, brief written descriptions of methods, and tool templates. In Year 1, program duration was 21 weeks. However, teams struggled with time constraints, so we extended program duration to 26 weeks by adding extra “working weeks” to allow teams time to work on their PDSA cycle of change. Additionally, we further streamlined the curriculum; for example, instead of teaching multiple methods for conducting root cause analysis, we focused on a single method. Teams did not have the time or cognitive energy to learn multiple methods and puzzle over which one to use; additional methods were made available online for those who were interested. Curriculum, program reports, and other resources were available for the full length of the trial.

2.6.2. Coaching

A LEAP coach was assigned to each team. LEAP coaches had at least a master's‐level degree in public health, social work, or health administration. All coaches completed the IHI Improvement Coach Professional Development Program or equivalent. At the beginning of each cluster's start date, coaches encouraged each MOVE! coordinator to form a local improvement team to participate in LEAP. LEAP team leaders and members were most often dietitians, nurses, pharmacists, psychologists, and physicians; they had varying degrees of QI experience prior to LEAP. Team leaders committed to participating in weekly one‐hour coaching or group VC sessions, convening their local improvement team in meetings to develop a Project Charter (Appendix C), and completing one PDSA cycle of change. The hours that team members and leaders spent in team meetings and coaching sessions, and on project work, varied based on the complexity of their chosen aim and phase of work (up to 6 h per week). At least half of this time was dedicated to working on their PDSA cycle of change. Coaches guided teams through each step of their PDSA project.

Newly developed reports based on administrative program data were provided to each LEAP team. One coaching session was dedicated to orienting teams to their suite of data reports and having an open dialogue to reflect on the data and brainstorm possibilities for their PDSA project. These reports followed a user‐centered design approach 29 based on semi‐structured interviews with MOVE! coordinators prior to the start of the trial. Usefulness of the reports was evaluated during a pilot of LEAP. Coaches encouraged teams to choose improvement aims that aligned with the data reports: recruiting more patients into MOVE! (reach), getting enrolled patients to stay in MOVE! longer (retention), or increasing weight loss for participants. For 6 months post‐LEAP, coaches provided monthly support sessions for each cluster of teams, with topics based on requests or at the discretion of the coaches.

2.6.3. Virtual collaboratives

Nine coaching sessions (plus two optional sessions) were with a LEAP coach and individual teams. Nine of the LEAP sessions were group VCs where all teams within a cluster participated together. This helped to promote cross‐team peer support and accountability. Teams presented their learnings, accomplishments, and plans for future PDSAs in the final two VCs of the program.

2.7. Statistical analyses

To estimate the difference in reach associated with the LEAP intervention, we modeled facility‐month reach using a linear mixed model. For each facility, time was divided into three periods: 12 months before LEAP (pre‐LEAP), during LEAP, and 12 months following LEAP completion (post‐LEAP). We excluded data for the “during LEAP” period for all facilities. In the model, we included an indicator for the post‐LEAP period, an indicator for LEAP facilities, time in months, and the corresponding two‐ and three‐way interactions to model regression discontinuity. Interactions modeled discontinuities between the pre‐ and post‐LEAP periods, with the three‐way interaction term estimating the differential rate of monthly change (differential time slopes) in reach between the two periods in LEAP compared with control facilities. Facilities were included as random intercepts. Time (months) was centered at the beginning of the post‐LEAP period for both intervention and control facilities and thus ranged from −12 to −1 for the pre‐LEAP period and from 0 to 11 for the post‐LEAP period. Because each facility had differing numbers of MOVE! candidates, we weighted the model by the total facility MOVE! candidates at baseline. All analyses were intention‐to‐treat, i.e., facilities randomized to LEAP were included regardless of LEAP completion status. Models were built using the glmmTMB package (v1.1.7) of the R statistical programming language (v4.3.1).
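A model specification consistent with this description is sketched below; the data frame and variable names (analysis_df, reach, time, post, leap, facility, candidates_baseline) are illustrative, not the study's actual code.

```r
library(glmmTMB)

# reach: facility-month reach; time: months centered at the start of the
# post-LEAP period; post: 0/1 post-LEAP indicator; leap: 0/1 LEAP-facility
# indicator; (1 | facility): facility random intercepts. The * operator
# expands to all main effects plus the two- and three-way interactions
# (beta_0 through beta_7 in Table 3).
fit <- glmmTMB(
  reach ~ time * post * leap + (1 | facility),
  weights = candidates_baseline,   # baseline MOVE! candidate counts
  data    = analysis_df            # "during LEAP" months already excluded
)
summary(fit)
```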

2.8. Process evaluation

We conducted process evaluations to explore the LEAP teams' chosen improvement topics, LEAP program adherence, LEAP program satisfaction, and use of QI methods.

2.8.1. Improvement topics

The LEAP program culminated with each team presenting the results of their improvement project. Teams chose their own topic within three general areas (reach, retention, or weight loss); thus, the goal for each improvement project varied across teams. The focus of the PDSA improvement projects was thematically analyzed based on the content of each team's final presentation.

2.8.2. Assignment completion

Assignments were designed to build on each other, providing milestones as teams developed their Project Charter with support from their coach. Table A2 lists 10 assignments that were uploaded to the online platform when completed. Completion is reported as an indicator of the degree to which teams achieved key milestones for their PDSA project.

2.8.3. LEAP program satisfaction

Satisfaction with LEAP and intentions to continue working as a team were assessed by online survey (see Appendix D) of all participants at the end of LEAP using Qualtrics survey software (Qualtrics, LLC, Provo, Utah). Descriptive statistics were generated for all measures. All analyses were conducted using SAS 9.4 (SAS Institute, Inc, Cary, NC).

2.8.4. Use of QI methods

We assessed use of QI methods in both years of the trial but used different approaches each year. Results from Year 1 have already been published and revealed significant improvement in self‐rated QI expertise in five of six categories. 24 In Year 2, we elicited self‐reported use (16 items using a 4‐point scale from 1 = Never to 4 = Frequently) across five categories of QI methods (see Appendix D). Assessments were administered at baseline and 6 months post‐LEAP using Qualtrics survey software. Descriptive statistics and paired t‐tests were used to compare pre‐ and post‐LEAP ratings.
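The paired comparison is straightforward; a minimal sketch under assumed data frames and column names (respondent_id, score) follows, pairing each respondent's baseline and 6‐month category scores.

```r
# baseline and followup: one row per respondent with a category mean `score`.
# Only respondents present in both surveys contribute to the paired test.
paired <- merge(baseline, followup, by = "respondent_id",
                suffixes = c("_pre", "_post"))
t.test(paired$score_post, paired$score_pre, paired = TRUE)
```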

2.9. Cost to deliver LEAP

We used a micro‐costing method 30 to determine the costs to deliver LEAP. The LEAP coaches and project manager developed a list of the coaching and administrative activities they performed, built a tracking database, and logged the time they spent on each of those activities over several clusters of teams. Using these data, we developed a model of the average time for each activity and applied it to the number of times each activity occurs over the course of LEAP for each cluster of teams. Alongside this model, we tracked the fixed non‐personnel costs for products necessary for delivering LEAP. Using our staffing model and fixed costs, we computed the costs to deliver LEAP to varying sizes of clusters of teams and the average cost for a single team, and we developed a generalized staffing algorithm.
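In outline, the cost model reduces to hours times loaded hourly rates plus fixed costs, divided across teams. The sketch below uses the per‐cluster hours reported in Section 3.6 (about 149 coach hours and 116 project manager hours); the rate and fixed‐cost arguments are placeholders, since actual salary and product costs are not given here.

```r
# Per-cluster and per-team delivery cost under the micro-costing model.
# coach_rate / pm_rate: hourly rate plus fringe benefits (placeholders);
# fixed_costs: non-personnel costs per cluster (placeholder).
cost_of_leap <- function(coach_hours = 149, pm_hours = 116,
                         coach_rate, pm_rate, fixed_costs,
                         teams_per_cluster = 6) {
  per_cluster <- coach_hours * coach_rate + pm_hours * pm_rate + fixed_costs
  c(per_cluster = per_cluster, per_team = per_cluster / teams_per_cluster)
}
# The study reports $24,146 per cluster, i.e., an average of $4024 per team.
```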

3. RESULTS

Of the 137 facilities eligible to participate over the two‐year trial, 55 were randomly assigned to eight LEAP clusters (Tables 1 and A1). Thirty‐nine (70.9%) of the 55 facilities in our analyses completed the LEAP program; 14 (25.5%) facilities did not start LEAP or dropped out within the first 7 weeks of LEAP. Table 2 shows baseline reach and facility characteristics for LEAP compared with control facilities.

TABLE 2.

Characteristics for LEAP a intervention facilities and facilities assigned as controls.

Characteristic | Control (N = 82) | Intervention (N = 55) | Overall (N = 137) | Standardized mean difference | p‐value
Reach, mean (SD) b | 4.61 (3.19) | 5.01 (3.32) | 4.77 (3.23) | 0.122 | 0.486 c
Rurality, count (%) | | | | 0.107 | 0.741 d
  Rural | 9 (11.1%) | 8 (14.5%) | 17 (12.5%)
  Urban | 73 (88.9%) | 47 (85.5%) | 119 (87.5%)
US region, count (%) | | | | 0.236 | 0.626 d
  Continental | 12 (14.8%) | 13 (23.6%) | 25 (18.4%)
  Northeast | 33 (40.7%) | 21 (38.2%) | 54 (39.7%)
  Pacific | 14 (17.3%) | 8 (14.5%) | 22 (16.2%)
  Southeast | 22 (27.2%) | 13 (23.6%) | 35 (25.7%)
Complexity, count (%) e | | | | 0.044 | 0.957 d
  High | 54 (66.7%) | 38 (69.1%) | 92 (67.6%)
  Medium | 11 (13.6%) | 7 (12.7%) | 18 (13.2%)
  Low | 16 (19.8%) | 10 (18.2%) | 26 (19.1%)
a LEAP: “Learn. Engage. Act. Process.” quality improvement program.

b Means of monthly reach ratios across the 12‐month pre‐LEAP period.

c p‐value is based on a between‐groups t‐test. Differences between control and LEAP facilities were also tested by adjusting for cluster using a linear model predicting average reach in the pre‐LEAP period; this test yielded a p‐value of 0.479 for the difference between control and LEAP facilities.

d p‐values are based on chi‐square tests.

e Complexity categories are algorithmically assigned to all VHA facilities based on facility‐level patient characteristics, clinical services offered, educational and research missions, and administrative complexity. 41 The five complexity categories are 1A, 1B, and 1C (High Complexity), 2 (Medium Complexity), and 3 (Low Complexity).

3.1. Change in reach

During the 12‐month pre‐LEAP period, unadjusted reach averaged 4.6 (3.9) patients per 1000 facility‐enrolled MOVE!‐eligible Veterans for control facilities and 5.0 (3.3) patients per 1000 for LEAP facilities (p = 0.49 for difference; Table 2). Post‐LEAP, unadjusted reach averaged 4.9 (6.4) per 1000 facility‐enrolled Veterans for control facilities and 5.2 (5.1) for LEAP facilities. There were no significant differences in mean unadjusted reach from the pre‐ to post‐LEAP periods for control (p = 0.52) or LEAP facilities (p = 0.38).

Mixed‐effects modeling of reach (see Table 3 and Figure 1) indicated that control facilities experienced a significant increase in reach of 0.58 per 1000 Veterans (β2, p < 0.001) at the start of the post‐LEAP period, followed by significantly decreasing reach of 0.13 per 1000 Veterans per month (computed as β1 + β4 = 0.02 − 0.15 = −0.13; p < 0.001). In LEAP facilities, reach was relatively stable during the post‐LEAP period, with a change of −0.02 per 1000 Veterans per month (computed as β1 + β4 + β5 + β7 = 0.02 − 0.15 + 0.01 + 0.10 = −0.02; p = 0.55), significantly less decline compared with controls (p < 0.001). The three‐way interaction coefficient (β7) of 0.10 (p < 0.001) indicated a significant difference between LEAP and control facilities in the differential monthly reach slope between periods; that is, there was little change in between‐period slopes in LEAP facilities versus a large drop in control facilities (Figure 1); the large drop in control facilities is likely due to the large increase in their reach at the start of the post‐LEAP period. We ran additional models that yielded the same results, including (1) adjusting for the characteristics listed in Table 2 and (2) removing an extreme outlier. We tested for the potential influence of enthusiasm by adding a binary variable indicating whether the coordinator at an assigned control facility said yes to any of five invitations (n = 24 said yes at least once; 29.3%) and found no effect. We also assessed reach differences between periods for each cluster, in LEAP versus control sites, compared across the eight clusters, and did not find any trends over time.
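The post‐LEAP slopes quoted above are linear combinations of the Table 3 coefficients; written out:

$$\text{controls: } \beta_1 + \beta_4 = 0.02 - 0.15 = -0.13$$

$$\text{LEAP: } \beta_1 + \beta_4 + \beta_5 + \beta_7 = 0.02 - 0.15 + 0.01 + 0.10 = -0.02$$

so LEAP facilities' reach was essentially flat in the post‐LEAP period, while controls declined by 0.13 per 1000 Veterans per month.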

TABLE 3.

Mixed effects model results of monthly reach in the 12‐month pre‐LEAP a and 12‐month post‐LEAP periods.

Outcome: Reach

Predictor b | Estimate | 95% CI | p‐value
(β0) Intercept | 4.83 | 3.99 to 5.67 | <0.001
(β1) Time c (months) | 0.02 | 0.01 to 0.02 | <0.001
(β2) Post‐LEAP period (vs. pre‐LEAP) | 0.58 | 0.58 to 0.59 | <0.001
(β3) Treatment (LEAP vs. control) | 0.40 | −0.93 to 1.72 | 0.555
(β4) Time × post‐LEAP period | −0.15 | −0.15 to −0.15 | <0.001
(β5) Time × treatment | 0.01 | 0.01 to 0.01 | <0.001
(β6) Post‐LEAP period × treatment | −0.65 | −0.65 to −0.64 | <0.001
(β7) Time × post‐LEAP period × treatment | 0.10 | 0.10 to 0.11 | <0.001
ICC | 0.73
N (facilities) | 137
N (clusters) | 8
a LEAP: “Learn. Engage. Act. Process.” quality improvement program.

b Model based on 3152 observations from 137 facilities.

c Time in months, centered at the beginning of the post‐LEAP period.

FIGURE 1. Mixed effects model results of monthly reach in the 12‐month pre‐LEAP and 12‐month post‐LEAP periods. (A) LEAP: “Learn. Engage. Act. Process.” quality improvement program. (B) Reach is the number of new patients in MOVE! weight management per 1000 patients enrolled at the facility. (C) Time ranges are as follows: (1) the pre‐LEAP period comprises months −12 through −1; (2) the post‐LEAP period comprises months 0–12. There is a gap in time during which intervention facilities were actively participating in the LEAP intervention.

3.2. Improvement topics

Aim statements developed by the LEAP teams reflect a range of improvement topics. Based on thematic analysis, the most frequently occurring topic for improvement was to increase reach (n = 25; 64% of LEAP teams' aims). The second most common topic was improving retention in MOVE! (n = 8; 21% of teams) and the rest addressed other topics. Subgroup analyses revealed that reach trajectories were similar regardless of improvement topic.

3.3. Assignment completion

Most teams completed all 10 assignments, as indicated by uploads to the online platform (see Table A2). All teams completed a Project Charter, an essential step in implementing their planned improvement; 67% of those Charters were peer‐reviewed by another team with feedback. Nearly all teams (92%) completed their final presentation.

3.4. LEAP satisfaction

Two hundred thirteen LEAP participants completed the satisfaction survey. Most respondents were satisfied or very satisfied with all LEAP components: coaching support (88%), written materials (86%), technology requirements (82%), videos (77%), and number of assignments (75%).

Nearly all respondents agreed that LEAP was relevant to the needs of their MOVE! program (95%) and felt comfortable using the LEAP materials to guide improvements (86%). Only 53% agreed they had the time to do the work required. Nevertheless, most respondents indicated their team would continue working together (80%) and planned to continue monitoring data reports (73%). Fifty‐five percent planned to attend the optional post‐LEAP monthly VCs, and 82.1% of facilities (n = 32/39) had at least one team member participate in at least one of these sessions.

3.5. Use of QI methods

For LEAP facilities that participated in Clusters 5–8 in Year 2, self‐reported use of QI methods increased significantly (p‐values <0.01) for four of five categories at six‐months post‐LEAP compared to baseline (see Table 4).

TABLE 4.

Change in self‐reported frequency a of using quality improvement methods b.

QI method category (example item) | Baseline | 6 months post‐LEAP | p‐value for difference
Develop a change (3 items) (e.g., use priority matrices, fishbone diagrams, or other QI tools to document the process/system to be changed in an improvement project) | 1.84 | 2.36 | <0.001
Support change with data (3 items) (e.g., build clear and unambiguous operational definitions for measures) | 2.19 | 2.58 | 0.0013
Test a change (3 items) (e.g., design, set up, and run Plan‐Do‐Study‐Act cycles) | 1.85 | 2.40 | <0.001
Spread a change (3 items) (e.g., develop new structures and procedures to sustain the implemented change [e.g., training, new policies and procedures, new equipment]) | 2.41 | 2.83 | 0.0045
Human side of change (4 items) (e.g., plan and conduct effective team meetings [e.g., set agendas, assign roles such as recorder and timekeeper, establish ground rules for behavior]) | 3.24 | 3.33 | 0.28
a Findings pertain only to LEAP teams that participated in Clusters 5–8 in Year 2.

b Response scale: 1 = Never to 4 = Frequently. See Appendix D for survey items.

3.6. Cost to deliver LEAP

We identified discrete activities performed by the LEAP coaches and project manager plus fixed non‐personnel costs (Table A3). Altogether, coaches spent approximately 149 h and the project manager spent approximately 116 h per cluster of teams. Multiplying each person's time by their hourly rate plus fringe benefits plus non‐personnel costs yielded a per‐cluster cost of $24,146, or an average of $4024 per team.

4. DISCUSSION

Our theory of change was informed by the Dynamic Sustainability Framework, 8 which asserts that program optimization can be achieved by teams engaging in PDSA cycles of change. LEAP was designed to engage frontline teams in a PDSA project. Among the 71% of teams that completed LEAP, all completed a Project Charter, 92% presented their final project, and 80% planned to continue working together. Satisfaction scores were high. Participants in Year 2 reported more use of QI methods 6 months post‐LEAP compared with baseline for four of five categories (see Table 4); this result mirrors increases in self‐rated expertise among Year 1 participants. 24 Gaining a better understanding of which categories of QI methods lead to the most impactful improvements is an important area for future research.

Our primary outcome for the trial was MOVE! program reach. Though there were no pre‐/post‐LEAP differences in reach for control or LEAP facilities, our mixed‐effects model revealed that LEAP facilities experienced less decline in reach compared with controls in the 12‐month post‐LEAP period. However, this latter result may be influenced by an unexplained spike in reach among controls at the beginning of the post‐LEAP period, while LEAP facilities had relatively stable levels of reach. Reach may have been affected by secular changes within the VHA system, including the introduction of “direct scheduling” for patients. This meant that patients could directly enroll in the MOVE! program without a formal referral from a primary care provider. This change was intended to provide patients with more flexible access to MOVE! weight management services, but this also meant that established referral pathways were disrupted.

McDonald and colleagues reviewed 89 articles spanning 13 years of research to identify competencies necessary for learning health systems, many of which aligned with establishing continuous QI in organizations 32 and with the design principles 33 used in LEAP. For example, one core competency is using and making sense of data. Coaches helped teams understand their data and use it to drive change by articulating what they understood about their current state based on their data and identifying improvement goals. Self‐reported use of “supporting change with data” rose 6 months after LEAP compared with baseline (Table 4).

McDonald and colleagues also identified skills in collaboration, teamwork, and self‐reflection as competencies necessary for learning health systems; these are all activities that coaches encouraged LEAP teams to do. Six months after LEAP, most survey respondents (80%) reported their team intended to continue working together. This is notable in light of challenges with engaging teams in continuous QI. 31 As teams coalesce, they can become increasingly creative and agile, pivoting toward increasingly streamlined cycles of rapid learning. 34

Though LEAP increased capability for doing QI, system leaders must match this by investing sufficiently to bolster frontline teams' capacity to engage in QI within a supportive learning culture. Many organizations do not sufficiently invest in developing the culture needed to support team‐based QI. 35, 36, 37, 38 Lack of time is the single most common barrier to teams forming and engaging in QI; having sufficient time was the lowest‐rated area in our satisfaction survey. Future research is needed to develop effective tactics to address time constraints, which will remain an ongoing challenge for people wanting to engage in QI. One team that participated in piloting LEAP described their 4‐year journey as they continued working as a team in a supportive learning environment, achieving significant and meaningful improvements in patient outcomes. 25 Their story offers a positive vision that can help system leaders see that investing in QI is “an enlightened strategy, not an expense.” 36 (p51)

Findings must be interpreted within the context of several limitations. First, this was a pragmatic trial. Our partnership with the national office overseeing the MOVE! program (NCP) gave us the opportunity to market the trial via national calls with local MOVE! coordinators before emailing invitations. We were attentive to the need to maximize use of limited resources; our LEAP program had capacity for six facilities within each cluster. The randomization process changed from Year 1 to Year 2 of the trial for two reasons: (1) we had more demand than we could accommodate (n = 29 were willing, but we had only 24 slots available in Year 1); and (2) we wanted to minimize empty coaching slots. Our randomization approach in Year 1 was inflexible: everyone who expressed willingness to participate was assigned to one of four starting times, and everyone learned of their assignment in September 2016, which meant that some facilities had 1 month's advance notice (Cluster 1) and others had up to 8 months' notice (Cluster 4). Our revised approach in Year 2 gave facilities the flexibility to choose their timing of participation by responding only to invitations whose timing suited them.

Additionally, as a pragmatic trial, we chose an outcome measure that relied on MOVE! program visit and enrollment data recorded via EHR systems linked to a regularly (daily) updated nationwide corporate data warehouse. Though national protocols guide how these visits are coded within the EHR, local facilities have latitude in documentation. Furthermore, facilities had the option of working on improvements other than reach; 64% of teams chose to focus on reach, while the remaining teams worked on other topics.

The causal pathway from LEAP to reach outcomes was subject to multiple factors outside the control of the trial, including policy changes in referral processes. Additionally, assessing changes in use of QI methods relied on LEAP team members participating in both baseline and follow‐up surveys; LEAP team rosters did not reflect the degree to which individual team members actually engaged in LEAP, and many did not participate in the surveys. Some teams fell behind and were not able to collect enough data points to affirm accomplishing their aim, though many showed promising trends with intentions to continue monitoring. Many LEAP teams struggled to identify a metric that was logically linked to their planned change; these challenges have also been highlighted in other studies. 39 Additionally, it is difficult to know the extent to which teams actually continued QI work; the indicators are mixed. On one hand, the relatively high rate of post‐LEAP participation in optional VCs suggests continued QI. On the other hand, post‐LEAP interviews reported elsewhere 40 suggest that teams were unlikely to be continuing QI. Future efforts should focus on expanding capacity for QI by resolving time constraints and providing accountability to help keep teams on track.

Notwithstanding limitations, LEAP is a virtually delivered program that successfully engaged frontline teams in hands‐on learning through coaching, structured curriculum, and VCs, all approaches that were identified in a systematic review as core competencies for continuous QI. 32 Teams were formed, they learned together, and started a PDSA change cycle in 6 months even though very few had dedicated time for QI. Despite this, all teams completed their Project Charter and most had intentions to continue team‐based QI, which bolsters efforts within VHA to mature as a learning system.

Supporting information

Data S1. Appendix.

HESR-59-0-s001.docx (337.3KB, docx)

ACKNOWLEDGMENTS

This study was supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, and the Quality Enhancement Research Initiative (QUERI). Study design, conduct, and all statistical analyses were completed by the study team, independent of the funder. We are grateful to all participating MOVE! teams for their unwavering dedication to serving Veterans, and to the NCP leaders who supported this work.

Damschroder LJ, Evans R, Kim HM, et al. Effectiveness of a virtual quality improvement training program to improve reach of weight management programs within a large health system. Health Serv Res. 2024;59(Suppl. 2):e14344. doi: 10.1111/1475-6773.14344

REFERENCES

  • 1. Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. National Academies Press; 2013. doi:10.17226/13444
  • 2. Veazie S, Peterson K, Bourne D, Anderson J, Damschroder L, Gunnar W. Implementing high‐reliability organization principles into practice: a rapid evidence review. J Patient Saf. 2022;18(1):e320‐e328. doi:10.1097/PTS.0000000000000768
  • 3. United States Department of Veterans Affairs. VA Utilization Profile FY 2016. 2017.
  • 4. Kilbourne AM, Elwy AR, Sales AE, Atkins D. Accelerating research impact in a learning health care system: VA's quality enhancement research initiative in the Choice Act era. Med Care. 2017;55(7 Suppl 1):S4‐S12. doi:10.1097/MLR.0000000000000683
  • 5. Atkins D, Kilbourne AM, Shulkin D. Moving from discovery to system‐wide change: the role of research in a learning health care system: experience from three decades of health systems research in the Veterans Health Administration. Annu Rev Public Health. 2017;38:467‐487. doi:10.1146/annurev-publhealth-031816-044255
  • 6. Veterans Health Administration (VHA). VHA's HRO journey officially begins. Accessed May 24, 2021. https://www.patientsafety.va.gov/features/VHA_s_HRO_journey_officially_begins.asp
  • 7. Ellis LA, Sarkies M, Churruca K, et al. The science of learning health systems: scoping review of empirical research. JMIR Med Inform. 2022;10(2):e34907. doi:10.2196/34907
  • 8. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.
  • 9. Lapré MA, Nembhard IM. Inside the organizational learning curve: understanding the organizational learning process. Found Trends Technol Inf Oper Manag. 2010;4(1):1‐103. doi:10.1561/0200000023
  • 10. Perla RJ, Provost LP, Parry GJ. Seven propositions of the science of improvement: exploring foundations. Qual Manag Healthc. 2013;22(3):170‐186.
  • 11. Damschroder LJ, Lowery JC. Evaluation of a large‐scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8(1):51.
  • 12. Aune D, Sen A, Prasad M, et al. BMI and all cause mortality: systematic review and non‐linear dose‐response meta‐analysis of 230 cohort studies with 3.74 million deaths among 30.3 million participants. BMJ. 2016;353:i2156. doi:10.1136/bmj.i2156
  • 13. Flegal KM, Kit BK, Orpana H, Graubard BI. Association of all‐cause mortality with overweight and obesity using standard body mass index categories: a systematic review and meta‐analysis. JAMA. 2013;309(1):71‐82. doi:10.1001/jama.2012.113905
  • 14. Breland JY, Phibbs CS, Hoggatt KJ, et al. The obesity epidemic in the Veterans Health Administration: prevalence among key populations of women and men veterans. J Gen Intern Med. 2017;32(S1):11‐17. doi:10.1007/s11606-016-3962-1
  • 15. Flegal KM, Kruszon‐Moran D, Carroll MD, Fryar CD, Ogden CL. Trends in obesity among adults in the United States, 2005 to 2014. JAMA. 2016;315(21):2284. doi:10.1001/jama.2016.6458
  • 16. Maciejewski ML, Arterburn DE, Berkowitz TS, et al. Geographic variation in obesity, behavioral treatment, and bariatric surgery for veterans. Obesity. 2019;27(1):161‐165.
  • 17. Miech EJ, Freitag MB, Evans RR, et al. Facility‐level conditions leading to higher reach: a configurational analysis of national VA weight management programming. BMC Health Serv Res. 2021;21(1):797. doi:10.1186/s12913-021-06774-w
  • 18. Damschroder LJ, Miech EJ, Freitag MB, et al. Facility‐level program components leading to population impact: a coincidence analysis of obesity treatment options within the Veterans Health Administration. Transl Behav Med. 2022;12(11):1029‐1037. doi:10.1093/tbm/ibac051
  • 19. Kahwati LC, Lewis MA, Kane H, et al. Best practices in the Veterans Health Administration's MOVE! weight management program. Am J Prev Med. 2011;41(5):457‐464. doi:10.1016/j.amepre.2011.06.047
  • 20. Goodrich DE, Lowery JC, Burns JA, Richardson CR. The phased implementation of a national telehealth weight management program for veterans: mixed‐methods program evaluation. JMIR Diabetes. 2018;3(4):e14.
  • 21. Haverhals LM, Sayre G, Helfrich CD, et al. E‐consult implementation: lessons learned using consolidated framework for implementation research. Am J Manag Care. 2015;21(12):e640‐e647.
  • 22. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2017;7(2):233‐241. doi:10.1007/s13142-016-0424-6
  • 23. Damschroder LJ, Reardon CM, AuYoung M, et al. Implementation findings from a hybrid III implementation‐effectiveness trial of the Diabetes Prevention Program (DPP) in the Veterans Health Administration (VHA). Implement Sci. 2017;12(1):94.
  • 24. Damschroder LJ, Yankey NR, Robinson CH, et al. The LEAP program: quality improvement training to address team readiness gaps identified by implementation science findings. J Gen Intern Med. 2020;36:1‐8.
  • 25. Robinson CH, Thompto AJ, Lima EN, Damschroder LJ. Continuous quality improvement at the frontline: one interdisciplinary clinical team's four‐year journey after completing a virtual learning program. Learn Health Syst. 2022;6(4):e10345. doi:10.1002/lrh2.10345
  • 26. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) statement. BMJ. 2017;356:i6795. doi:10.1136/bmj.i6795
  • 27. Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. John Wiley & Sons; 2009.
  • 28. Practical Improvement Science in Health Care: A Roadmap for Getting Results. https://www.edx.org/course/ph556x-practical-improvement-science-in-health-care-a-roadmap-for-getting-results
  • 29. Parisi KE, Dopp AR, Munson SA, Lyon AR. A glossary of user‐centered design strategies for implementation experts. Transl Behav Med. 2018;9:1057‐1064. doi:10.1093/tbm/iby119
  • 30. Cidav Z, Mandell D, Pyne J, Beidas R, Curran G, Marcus S. A pragmatic method for costing implementation strategies using time‐driven activity‐based costing. Implement Sci. 2020;15(1):28. doi:10.1186/s13012-020-00993-1
  • 31. McNicholas C, Lennox L, Woodcock T, Bell D, Reed JE. Evolving quality improvement support strategies to improve Plan‐Do‐Study‐Act cycle fidelity: a retrospective mixed‐methods study. BMJ Qual Saf. 2019;28(5):356‐365. doi:10.1136/bmjqs-2017-007605
  • 32. Loper AC, Jensen TM, Farley AB, Morgan JD, Metz AJ. A systematic review of approaches for continuous quality improvement capacity‐building. J Public Health Manag Pract. 2022;28(2):E354‐E361. doi:10.1097/PHH.0000000000001412
  • 33. McDonald PL, Phillips J, Harwood K, Maring J, Van Der Wees PJ. Identifying requisite learning health system competencies: a scoping review. BMJ Open. 2022;12(8):e061124. doi:10.1136/bmjopen-2022-061124
  • 34. Foster AA, Stack AM. Quality improvement in a pandemic. Pediatr Qual Saf. 2020;5(4):e321. doi:10.1097/pq9.0000000000000321
  • 35. Vaughn VM, Saint S, Krein SL, et al. Characteristics of healthcare organisations struggling to improve quality: results from a systematic review of qualitative studies. BMJ Qual Saf. 2019;28(1):74‐84. doi:10.1136/bmjqs-2017-007573
  • 36. Swensen SJ, Dilling JA, Mc Carty PM, Bolton JW, Harper CM Jr. The business case for health‐care quality improvement. J Patient Saf. 2013;9(1):44‐52.
  • 37. Reed JE, Card AJ. The problem with plan‐do‐study‐act cycles. BMJ Qual Saf. 2016;25(3):147‐152.
  • 38. Godfrey MM, Oliver BJ. Accelerating the rate of improvement in cystic fibrosis care: contributions and insights of the learning and leadership collaborative. BMJ Qual Saf. 2014;23(Suppl 1):i23‐i32.
  • 39. Woodcock T, Liberati EG, Dixon‐Woods M. A mixed‐methods study of challenges experienced by clinical teams in measuring improvement. BMJ Qual Saf. 2021;30(2):106‐115. doi:10.1136/bmjqs-2018-009048
  • 40. Robinson C, Damschroder LJ. A pragmatic context assessment tool (pCAT): using a think aloud method to develop a practical assessment of contextual barriers to change. Preprint. 2022. doi:10.21203/rs.3.rs-1696597/v1
  • 41. National Academies of Sciences, Engineering, and Medicine. Facilities Staffing Requirements for the Veterans Health Administration‐Resourcing, Workforce Modeling, and Staffing: Proceedings of a Workshop. Debad SJ, ed. National Academies Press; 2019. Accessed January 17, 2024. http://www.ncbi.nlm.nih.gov/books/NBK545587/
