PLOS One. 2022 Mar 17;17(3):e0265396. doi: 10.1371/journal.pone.0265396

Testing implementation facilitation for uptake of an evidence-based psychosocial intervention in VA homeless programs: A hybrid type III trial

David A Smelson 1,2,*, Vera Yakovchenko 1, Thomas Byrne 1,3, Megan B McCullough 1,4, Jeffrey L Smith 5, Kathryn E Bruzios 1,2, Sonya Gabrielian 6,7
Editor: Annika C Sweetland
PMCID: PMC8929696  PMID: 35298514

Abstract

Background

Healthcare systems face difficulty implementing evidence-based practices, particularly multicomponent interventions. Additional challenges arise in settings serving vulnerable populations such as homeless Veterans, given the population’s acuity, multiple service needs, and organizational barriers. Implementation Facilitation (IF) is a strategy to support the uptake of evidence-based practices. This study aimed to simultaneously examine the effect of IF on the uptake of Maintaining Independence and Sobriety Through Systems Integration, Outreach and Networking-Veterans Edition (MISSION-Vet), an evidence-based multicomponent treatment engagement intervention for homeless Veterans with co-occurring mental health and substance use disorders, and clinical outcomes among Veterans receiving MISSION-Vet.

Methods

This multi-site hybrid III modified stepped-wedge trial involved seven programs at two Veterans Affairs Medical Centers comparing Implementation as Usual (IU; training and educational materials) to IF (IU + internal and external facilitation).

Results

A total of 110 facilitation events averaging 27 minutes were conducted, of which 85% were virtual. Staff (case managers and peer specialists; n = 108) were trained in MISSION-Vet and completed organizational readiness assessments (n = 77). Although both sites reported a willingness to innovate and a desire to improve outcomes, implementation climate differed significantly between them. Following IU, no staff at either site conducted MISSION-Vet. Following IF, there was a significant difference in MISSION-Vet implementation between sites (53% vs. 14%, p = .002). The 93 Veterans who received any MISSION-Vet services attended an average of six sessions. Significant positive associations were found between the number of MISSION-Vet sessions and outpatient treatment engagement, measured by the number of outpatient visits attended.

Conclusions

While staff were interested in improving patient outcomes, MISSION-Vet was not implemented with IU. IF supported MISSION-Vet uptake and increased outpatient service utilization, but MISSION-Vet still proved difficult to implement, particularly in the larger healthcare system. Future studies might tailor implementation strategies to organizational readiness.

Trial registration

ClinicalTrials.gov, NCT02942979.

Introduction

Large healthcare systems, including the Department of Veterans Affairs (VA), are working towards eliminating care inefficiencies by integrating administrative, operational, and novel clinical interventions [1]. This is relevant to the VA’s strategic commitment to end Veteran homelessness, which has resulted in a dramatic expansion of its homeless programs [2]. This expansion included implementing evidence-based practices VA-wide to improve outcomes, which can often prove difficult given the need to broadly address individual factors (e.g., training, involvement in decision making) and organizational factors (organization size, climate, and support for the practices among staff and administrators) [3–5]. Evidence-based practices are often challenging to adopt uniformly across a system, even when they are known to improve outcomes, due to a variety of barriers including time-consuming training requirements and changing provider routines [3]. These barriers can be particularly challenging for programs serving vulnerable populations, such as Veterans experiencing homelessness, given that these programs often address multiple behavioral health, substance abuse, medical care, and social needs simultaneously, while also being mindful of issues such as care fragmentation and treatment engagement [6].

Maintaining Independence and Sobriety through Systems Integration, Outreach and Networking-Veterans Edition (MISSION-Vet) is an evidence-based multicomponent wraparound treatment engagement approach for homeless Veterans with co-occurring mental health and substance use disorders [7]. While a detailed description of MISSION-Vet is included in the methods section below, in short, MISSION-Vet is a team-based hybrid psychosocial and linkage intervention with the primary objective of engaging homeless Veterans with co-occurring disorders in outpatient care. The intervention addresses the challenges posed by low rates of treatment engagement among homeless Veterans with co-occurring disorders, as treatment is critical for housing sustainability and recovery [8, 9]. MISSION-Vet has improved treatment attendance and engagement, mental health and substance abuse outcomes, and reduced days homeless [5, 10–14]. MISSION-Vet was also certified by the Substance Abuse and Mental Health Services Administration’s National Registry of Evidence-Based Practices [5, 10–12]. While MISSION-Vet implementation within VA homeless programs fills a current care gap, as a team-based multicomponent intervention it also presents implementation challenges [15, 16]. These challenges include MISSION-Vet requiring: 1) case managers and peers to work together as a team despite somewhat different training and philosophies; 2) staff to use a hybrid model that includes both running psychoeducational groups and doing assertive community outreach; and 3) staff to use a stepdown model of decreasing intensity so that clients ultimately engage in community supports [17]. The model’s intensity and complexity thus make it more complicated to implement than single-discipline, single-modality interventions.

A previous study of Getting to Outcomes (GTO), an implementation model focused on capacity building, for uptake of MISSION-Vet in VA homeless programs nationally found that while MISSION-Vet was implemented at all sites, the intervention’s intensity and complexity, organizational demands, and the time required for GTO resulted in lower MISSION-Vet uptake [18]. Other implementation approaches exist to help organizations adopt and sustain complex evidence-based approaches like MISSION-Vet [19]. Implementation facilitation (IF) is an evidence-based strategy conceptualized within the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework, in which facilitators external and/or internal to a healthcare organization support successful implementation of clinical innovations by assisting stakeholders with planning, execution, and refinement addressing factors related to: (a) characteristics of the innovation itself; (b) the outer and inner context of the healthcare setting; and (c) characteristics of the recipients of the innovation [20]. IF is multifaceted, involving interactive problem-solving and support in a context of a recognized need for improvement and supportive interpersonal relationships [19]. Given previous work showing IF to be an effective strategy for implementing complex clinical innovations, we posited that it may be effective for implementing a multicomponent intervention like MISSION-Vet, in a complex healthcare system like the VA, and in programs serving a high-acuity population of homeless Veterans with co-occurring disorders [21]. Therefore, consistent with recommendations for type III hybrid implementation-effectiveness designs [22], the primary aim was to study the impact of the IF strategy on MISSION-Vet implementation and fidelity. The secondary implementation aim was to examine organizational readiness differences between sites. The effectiveness aim was to examine the association between receipt of MISSION-Vet and treatment engagement in clinical services, as measured by VA inpatient and outpatient service utilization.

Method

Study design

This was a multi-site randomized hybrid type III implementation-effectiveness modified stepped-wedge trial in seven homeless programs at two VA Medical Centers (VAMCs) serving homeless Veterans with co-occurring mental health and substance use problems [22]. Hybrid type III trials are intended for interventions, like MISSION-Vet, with robust effectiveness data; though effectiveness data may be further gathered, the core focus of these trials is implementation of the client-level intervention (here, MISSION-Vet). We selected a modified stepped-wedge design to compare MISSION-Vet uptake under two staff-level intervention conditions: Implementation as Usual (IU) versus Implementation Facilitation (IF) [22]. In a traditional stepped-wedge design, all sites start in the control condition, programs switch to the intervention condition at fixed intervals or steps, and each then remains in the intervention condition for the duration of the study. In our study, it was not feasible to implement IF at all programs simultaneously, so the design was modified: each program received six months of IU and then crossed over to IF for another six months, with data collection continuing for an additional six months after the end of IF. In terms of calendar time, the rollout of the control and intervention conditions occurred sequentially across programs, with each program in the intervention and control conditions for a fixed period of time and not contributing to the analysis in periods when it was in neither condition (Fig 1). 
This type of design is akin to what the stepped-wedge design literature refers to as an incomplete stepped-wedge design with limited measurements before and after crossover [23, 24]; it is appropriate when it is not feasible to implement an intervention or collect data in all clusters simultaneously, whether due to resource constraints or other reasons. In this case, we did not have the resources to conduct IF in every location at the same time. In addition, to extend the existing literature supporting MISSION-Vet outcomes, this trial enabled us to extract data from the VA Electronic Medical Record to examine treatment engagement among those receiving MISSION-Vet.
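The crossover logic described above (six months of IU, then six months of IF, rolled out sequentially rather than simultaneously) can be sketched in code. This is a minimal illustration of the scheduling idea only; the program labels, three-month step spacing, and month indexing are assumptions for demonstration, not the study's actual calendar.

```python
# Sketch of an incomplete (modified) stepped-wedge schedule: each program
# contributes 6 months of the control condition (IU), then crosses over to
# 6 months of the intervention condition (IF), with staggered start times.
# Months outside a program's IU/IF window contribute no data to the analysis.

def build_schedule(n_programs, iu_months=6, if_months=6, step_gap=3):
    """Return {program_name: {month_index: condition}} for the design."""
    schedule = {}
    for i in range(n_programs):
        start = i * step_gap  # programs enter sequentially, not all at once
        months = {}
        for m in range(start, start + iu_months):
            months[m] = "IU"
        for m in range(start + iu_months, start + iu_months + if_months):
            months[m] = "IF"
        schedule[f"program_{i + 1}"] = months
    return schedule

schedule = build_schedule(n_programs=7)
first = schedule["program_1"]
print(first[0], first[6])  # program_1 is in IU at month 0 and IF at month 6
```

Each program spends exactly one fixed window in each condition, which is why (as the analysis section later notes) there is no period in which all programs are measured under both conditions at once.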

Fig 1. Modified stepped-wedge trial design.

Fig 1

The project was deemed quality improvement and received exempt status from the Institutional Review Board at the Bedford, Massachusetts VAMC according to VA Program Guide 1200.21 [25], thus waiving the need for written or verbal informed consent. The staff being trained were the study participants, and they were informed of the project’s designation as quality improvement. Data from the medical record were extracted for the clients served by the staff members offering MISSION-Vet. Staff were informed of this data extraction, and records were not anonymized, as it was necessary to identify the clients served by the staff offering MISSION-Vet.

MISSION-Vet intervention

A detailed description of the MISSION-Vet client-level intervention is reported in a previous protocol paper [26]. In brief, MISSION-Vet is delivered by a master’s-level social work case manager and a peer specialist, the latter being someone with prior lived experience of homelessness, substance abuse, and mental health issues. The MISSION-Vet team delivers five treatment components: critical time intervention, dual recovery therapy (DRT), peer support, vocational and educational support, and trauma-informed care, all guided by Housing First and harm reduction philosophies that emphasize low-barrier services for clients [27–32]. Both the case manager and peer specialist offer psychoeducational sessions either individually or in groups (13 DRT co-occurring disorders groups and 11 peer support recovery groups, both designed to empower Veterans to engage in treatment), along with unstructured community outreach sessions to engage clients in care and link Veterans to other needed community supports. MISSION-Vet was offered for approximately 2 hours a week for 6 months, and service delivery was guided by a Treatment Manual [7]. Veterans could also receive a MISSION-Vet Workbook that includes assignments reinforcing recovery [33].

Participants and recruitment

The project was conducted at two VAMCs (hereafter Sites A and B) and offered to staff at seven homeless programs (four locations at Site A and three at Site B), with the unit of measurement being the two VAMCs. The two VAMCs were selected because of the size and scope of the healthcare systems, geographic dispersion, and the rate of homelessness in the regions [34]. Site A was a large urban VA serving approximately 83,000 unique Veterans annually, with the highest complexity level, two on-site residential buildings, and two off-site buildings. Site B was a smaller, suburban, medium-complexity VA serving approximately 18,000 unique Veterans annually, with one on-site and one off-site residential building. Both Sites A and B had community-based non-residential treatment that included housing placement, case management, and linkages to mental health and substance use programming, but not MISSION-Vet.

There were two groups of participants in the study: staff and clients. The first group comprised the case managers and peer specialists delivering MISSION-Vet; staff participation was voluntary, with no incentives provided. The second group comprised the Veterans (clients) served by these staff in their respective VA homeless programs. Staff were encouraged to follow the recommended MISSION-Vet inclusion and exclusion criteria: (1) enrolled in a VA homeless program at one of the implementation sites; and (2) met Diagnostic and Statistical Manual of Mental Disorders, 5th Edition [35] or International Classification of Diseases, 10th Revision [36] diagnostic criteria for a current substance use disorder (e.g., alcohol, marijuana, cocaine) and a co-occurring mental illness, including anxiety, mood, or psychotic spectrum disorders.

Implementation strategy

Implementation as usual (IU)

At the outset, leadership from both VAMCs were introduced to MISSION-Vet and invited to participate. IU comprised a 1.5-hour webinar training offered at least twice to case manager and peer specialist staff within the two VAMCs and seven programs to accommodate scheduling. Training provided an overview of the MISSION-Vet approach, the implementation materials (MISSION-Vet Treatment Manual, MISSION-Vet Consumer Workbook, MISSION-Vet Fidelity Measure), and staff roles [7, 33]. It also presented how to use the MISSION-Vet service delivery fidelity measure embedded within the VA medical record. We used this fidelity measure to capture the total number and type of MISSION-Vet sessions delivered, which served as our measure of MISSION-Vet uptake. This passive implementation strategy has been used in previous studies [37].

Implementation facilitation (IF)

Following initial training, there was a 6-month waiting period before the 6 months of IF were offered to each of the seven programs at the two VAMCs. As noted in Fig 1, IF was turned on in a stepwise fashion at each of the seven sites over a 21-month period, with sites randomly assigned to a particular step. External facilitation, the form of IF used in this study, was delivered by outside IF experts with specialized knowledge of implementation and quality improvement approaches [38–40]. In MISSION-Vet, IF experts partner with facility staff to implement MISSION-Vet through implementation planning, goal-setting, and problem-solving [41–43]. Another IF goal was to work with VAMCs to tailor the evidence-based practice, where appropriate, to meet local contextual demands. The external facilitators (JLS, VY) held bi-weekly meetings with local program staff executing MISSION-Vet to address implementation barriers, troubleshoot, and provide implementation fidelity reports, which included feedback on the number and type of MISSION-Vet services delivered. External facilitators also provided regular feedback on staff’s use of the fidelity measure within the VA medical record, since this fidelity measure was used to construct our measure of implementation uptake.

Measures

Project measures captured information about organizational readiness, implementation outcomes (both IF and the implementation of MISSION-Vet), and VA health services utilization. Depending on the outcome, data were measured at the site, staff, and/or Veteran level. With regard to organizational readiness, we used an abbreviated version of the Organizational Readiness to Change Assessment (ORCA) context subscale and Jacobs’ Implementation Climate survey, resulting in a 21-item, 5-point Likert scale assessing site- and staff-level readiness [44, 45]. Higher scores indicate greater organizational readiness and implementation climate. Following MISSION-Vet training, staff were asked to complete the organizational readiness survey and a demographic survey covering their age, sex, role/position, and tenure in the VA.

Consistent with recommendations for type III hybrid effectiveness-implementation designs, our primary outcome was MISSION-Vet uptake (measured by the number of MISSION-Vet sessions delivered) during the IU versus IF time periods [22], and the secondary aim was to assess clinical outcomes (health service utilization). For the comparison of IU versus IF timeframes, we used a standardized facilitation tracking sheet completed by the external facilitators at the site level that included the date, length of time, parties involved, and activity type [46]. In addition, MISSION-Vet implementation was captured with a fidelity measure embedded in Veterans’ electronic medical records using a specially created service tracking note template quantifying the type and amount of MISSION-Vet delivered [18]. Information captured in this note template included: which DRT sessions, peer support sessions, and Consumer Workbook exercises were completed; whether the MISSION-Vet Consumer Workbook was provided; whether community activities were done with a Veteran (e.g., taken to appointments, NA/AA meetings, meetings with landlords); and referrals made to other services. Finally, treatment engagement as an outcome was captured with Veterans’ medical records obtained from the VA Corporate Data Warehouse, which included the number of MISSION-Vet contacts and other outpatient visits (mental health, substance use, medicine, primary care, emergency department, other, total). Each service utilization outcome was aggregated over the 1-year period following the date of a Veteran’s initial MISSION-Vet session.

Data analysis

Our analytic strategy involved four components that align with the four study aims. Specifically, we examined: 1) pre-implementation organizational readiness; 2) the IF process, including IF events; 3) MISSION-Vet implementation in the IU and IF time periods; and 4) the association between MISSION-Vet and VA health services. Because the number of providers trained to deliver MISSION-Vet and the number of Veterans who received it at the seven homeless programs were too small for meaningful program comparison, our analysis focuses on a comparison between the two VAMCs (Sites A and B) rather than the seven individual programs for all measures of interest. In other words, practical considerations necessitated that we modify our intended analysis plan to make the site, rather than the program, the cluster unit of interest.

First, we examined organizational readiness using descriptive statistics and compared organizational readiness between Sites A and B and by staff type (case manager vs. peer specialist), staff age, staff sex, and duration of VA employment using non-parametric Wilcoxon and Kruskal-Wallis tests. Second, we used descriptive statistics to examine IF events, including the number, duration, and type of IF activities. Third, and similarly, we used descriptive statistics to examine implementation of MISSION-Vet. We summarize information about the number and type of MISSION-Vet sessions provided overall, at the Veteran level, and by VAMC. We also examined provision of MISSION-Vet separately by staff type (i.e., case manager or peer specialist). Additionally, to assess the potential impact of IF on MISSION-Vet, we examined how the overall provision of MISSION-Vet changed over time before and after the start of IF, using descriptive measures of the number of MISSION-Vet sessions provided at each site by month. Our intent was to estimate the intervention effect using a statistical model in line with established practices for stepped-wedge designs. However, this analysis plan was not feasible for two reasons. First, as noted above, the number of providers trained to deliver MISSION-Vet and the number of Veterans receiving MISSION-Vet at each of the seven original program sites was small, which caused us to shift from the program to the site as the cluster of interest in our analysis. Second, because neither of the two sites provided any MISSION-Vet services in the IU period, there was no variation in the outcome of interest during this time, rendering it impractical to estimate such a model. We therefore rely solely on descriptive statistics to examine the impact of IF on the provision of MISSION-Vet services.
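The Wilcoxon rank-sum comparison used for the between-site readiness contrasts is built on a simple pairwise counting idea: the equivalent Mann-Whitney U statistic counts, over all cross-group pairs, how often one group's value exceeds the other's (ties count half). The sketch below illustrates only that counting step, not the full test with p-values; the Likert scores are invented for demonstration and are not study data.

```python
def mann_whitney_u(a, b):
    """U statistic for group `a` versus group `b`: the number of pairs
    (x in a, y in b) with x > y, counting ties as 0.5. This is the
    quantity underlying the two-sample Wilcoxon rank-sum test."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical 5-point Likert readiness scores for two sites (illustrative only)
site_a = [3, 3, 4, 2, 5, 4]
site_b = [4, 5, 5, 4]

print(mann_whitney_u(site_b, site_a))  # out of 6 * 4 = 24 possible pairs
```

A U near the maximum (here, 24) or near zero indicates one group's scores systematically dominate the other's; in practice one would use a statistical library to convert U into a p-value rather than this hand-rolled count.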

Fourth, as an exploratory analysis, we examined the relationship between receipt of MISSION-Vet and Veteran-level measures of engagement in clinical services (i.e., VA inpatient and outpatient services). To do so, we estimated a series of bivariate linear regression models in which our service utilization measures (i.e., the number of outpatient and inpatient visits, by type, in the year after a Veteran’s initial MISSION-Vet session) served as the outcomes of interest, and the number of MISSION-Vet sessions in that same year served as the predictor in all models. As these models were purely exploratory, they did not adjust for any additional covariates.
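A bivariate linear regression with a single predictor and no covariates, as described above, reduces to ordinary least squares with one slope: slope = cov(x, y) / var(x). The sketch below uses numpy and made-up numbers purely to illustrate the model form; the data and function name are assumptions for demonstration, not the study's data or analysis software.

```python
import numpy as np

def bivariate_ols(x, y):
    """Intercept and slope for y = a + b*x by ordinary least squares."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # slope = cov(x,y)/var(x)
    a = y.mean() - b * x.mean()                    # intercept
    return a, b

# Hypothetical example: MISSION-Vet sessions vs. outpatient visits the next year
sessions = [1, 2, 3, 4, 5, 6, 7, 8]
visits = [3, 5, 7, 9, 11, 13, 15, 17]  # exactly linear: visits = 1 + 2*sessions

a, b = bivariate_ols(sessions, visits)
print(round(a, 6), round(b, 6))  # 1.0 2.0
```

The slope b is read the same way as the B coefficients reported in the Results: the average change in the outcome associated with one additional MISSION-Vet session.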

Results

Staff characteristics

This study commenced in February 2016, and recruitment stopped in July 2019. A total of 108 staff were trained in MISSION-Vet as part of Implementation as Usual across the two VAMCs (93 at Site A across 11 trainings, 15 at Site B across four trainings). Following training, 77 staff (69% at Site A, 87% at Site B) completed an organizational readiness and demographic survey. Most respondents were case managers (77%) or peer specialists (16%); role was unknown for the remainder (8%). There was an even mix of males (47%) and females (45%), with 8% unknown; the average age was 49 ± 12 years (range 26–72), and length of time in the VA was 5 ± 5 years (range <1–25 years). Notably, while MISSION-Vet was intended to be implemented by a case manager-peer specialist dyad, peer specialists were less available at Site A, which is also why fewer peer specialists were trained throughout the project.

Organizational readiness

As described in Table 1, staff reported moderate to high organizational culture and climate at both sites. Site B had higher scores than Site A on nearly all individual items, although only several differences were statistically significant, perhaps due to the small sample size. The most consistent differences between sites were on the ORCA context staff culture subscale, with 100% of Site B staff agreeing that staff cooperate to maintain and improve patient care effectiveness and are willing to innovate to improve clinical procedures, compared to 85% (p = .055) and 77% (p = .03) at Site A, respectively. The sites also differed on implementation climate: scores regarding support to identify potentially eligible Veterans for MISSION-Vet were higher at Site B than at Site A (92% vs 76%, p = .005), despite both sites reporting low recognition and appreciation for using MISSION-Vet. There were no statistically significant differences in organizational measures by role, sex, age, or site. There was, however, a significant positive relationship (r = .32, p = .01) between duration of VA employment and the ORCA measurement scale regarding goals, guidelines, feedback, and accountability.

Table 1. Organizational readiness scores by site, N = 77.

Site A (N = 64) Site B (N = 13) p
Implementation Climate Score 0.181
 I am expected to use MISSION-Vet with a certain number of Veterans. 65% 75% 0.313
 I am expected to help my organization meet its goals for implementing MISSION-Vet. 79% 77% 0.984
 I will get the support I need to identify potentially eligible Veterans for MISSION-Vet. 76% 92% 0.005
 I will get the support I need to use MISSION-Vet with Veterans. 77% 85% 0.212
 I will receive recognition when I use MISSION-Vet with Veterans. 52% 58% 0.845
 I will receive appreciation when I use MISSION-Vet with Veterans. 49% 58% 0.638
ORCA/Context, Staff Culture, Staff 0.081
 Have a sense of personal responsibility for improving patient care and outcomes. 82% 100% 0.099
 Cooperate to maintain and improve effectiveness of patient care. 85% 100% 0.055
 Are willing to innovate and/or experiment to improve clinical procedures. 77% 100% 0.028
 Are receptive to change in clinical processes. 78% 85% 0.464
ORCA/Context, Leadership Culture 0.507
 Reward clinical innovation and creativity to improve patient care. 70% 91% 0.075
 Solicit opinions of clinical staff regarding decisions about patient care. 75% 75% 0.715
 Seek ways to improve patient education and increase patient participation in treatment. 79% 83% 0.437
ORCA/Context, Leadership Behavior 0.418
 Provide effective management for continuous improvement of patient care. 69% 85% 0.486
 Clearly define areas of responsibility and authority for clinical managers and staff. 69% 85% 0.262
 Promote team building to solve clinical care problems. 71% 77% 0.506
 Promote communication among clinical services and units. 72% 85% 0.822
ORCA/Context, Measurement 0.304
 Provide staff with information on VA performance measures and guidelines. 70% 100% 0.058
 Establish clear goals for patient care processes and outcomes. 77% 92% 0.448
 Provide staff members with feedback/data on effects of clinical decisions. 67% 75% 0.378
 Hold staff members accountable for achieving results. 69% 77% 0.958

Implementation facilitation

Following Implementation as Usual, one external facilitator per site supported MISSION-Vet implementation. While on the surface this might suggest an imbalance in workload, given that Site A was much larger than Site B, the facilitator at Site A had more time available to devote to this project. Facilitation activities included stakeholder engagement, site assessment, preparation/planning, ongoing process monitoring, education, program adaptation, marketing, and problem identification and solving. A total of 110 facilitation events averaging 27 minutes were conducted. At Site A, there were 70 virtual (phone or Skype) external facilitation events across the four participating programs, averaging 24 minutes, over a 17-month period. At Site B, there were 40 facilitation events across the three participating programs, averaging 34 minutes, over an 11-month period; 60% were virtual and 40% in-person. Site B had a higher per-person facilitation dose (i.e., the number of facilitation events provided relative to the number of staff participating in MISSION-Vet) than Site A.

Implementation as usual versus implementation facilitation

To explore the primary study objective, which is the potential impact of IF on provision of MISSION-Vet services, we examined how the overall provision of MISSION-Vet across both sites changed over time following the start date of IF. As noted above, although the calendar date on which IF began at each site varied, we standardized our analysis of MISSION-Vet over time to examine the provision of MISSION-Vet services over a standard 21-month period indexed to the start date of IF by site.

Fig 2 shows the results of this analysis, aggregating the total number of MISSION-Vet sessions provided across both sites in the 6 months prior to and 12 months following the start of IF. No MISSION-Vet services were provided at either site prior to the start of IF. However, MISSION-Vet services commenced immediately after the start of IF, suggesting that IF was effective in increasing implementation of MISSION-Vet.

Fig 2. Total number of MISSION-Vet sessions delivered across 2 VAMCs by month relative to the start of facilitation.

Fig 2

Table 2 summarizes the results of our analysis of the provision of MISSION-Vet, including between-site comparisons of the provision of MISSION-Vet during the IF period. As the table shows, of the 108 staff trained, 53% at Site B tried MISSION-Vet during the IF period, as evidenced by entering at least one MISSION-Vet note, compared to 14% of staff at Site A. This difference was statistically significant (p = .002). A total of 574 MISSION-Vet notes were entered during the IF period, including 424 DRT notes (273 at Site A vs 151 at Site B) and 89 peer notes (32 at Site A vs 57 at Site B), entered by 14 case managers (9 at Site A vs 5 at Site B) and seven peer specialists (four at Site A vs three at Site B) for 93 Veterans (70 at Site A vs 23 at Site B). Relatively fewer peer support sessions were conducted at Site A (12% of all sessions) than at Site B (38% of all sessions), as evidenced by the notes. Nearly all the community events were conducted at Site B (39 vs 2). No service referrals were made at Site A, compared to 114 at Site B.

Table 2. MISSION-Vet implementation outcomes.

Total Site A Site B p
n % n % n %
Staff Trained 108 100% 93 100% 15 100% --
 Staff Implemented MISSION 21 19% 13 14% 8 53% 0.002
  Case Managers 14 13% 9 10% 5 33% --
  Peers 7 6% 4 4% 3 20% --
MISSION-Vet Implemented 734 100% 348 100% 386 100% < .001
 DRT Sessions 424 58% 273 78% 151 39% --
 Peer Sessions 89 12% 32 9% 57 15% --
 Self-guided Exercises 66 9% 41 12% 25 6% --
 Community Events 41 6% 2 1% 39 10% --
 Service Referrals 114 16% 0 0% 114 30% --
Veterans served 93 100% 70 100% 23 100% --
 DRT & Peer Session (Fidelity) 17 18% 3 4% 14 61% < .001
 DRT Session Only 66 71% 57 81% 9 39% --
 Peer Session Only 2 2% 2 3% 0 0% --
 Self-guided Exercises Only 8 9% 8 11% 0 0% --
Total Sessions per person (mean) 6.3 -- 4.9 -- 10.9 -- 0.005
 DRT Sessions/person (mean) 5.4 -- 4.5 -- 7.9 -- 0.017
 Peer Sessions/person (mean) 4.7 -- 6.4 -- 4.1 -- 0.426

With regard to the Veterans served, 93 received MISSION-Vet services from trained staff during the IF period. Again, Site B provided more DRT sessions per person, on average, than Site A (7.9 vs 4.5 sessions, p = .017), and a similar number of peer sessions per person (4.1 vs 6.4, p = .426). No Veterans received the complete DRT dose or peer dose: 62 (67%) received only DRT sessions, two (2%) received only peer sessions, 17 (18%) received both DRT and peer sessions, and 12 (13%) received unstructured MISSION-Vet. MISSION-Vet was implemented with greater overall fidelity with regard to both DRT and peer support at Site B than at Site A (61% vs 4%, p < .001). Furthermore, across Sites A and B, Veterans tended to remain engaged in MISSION-Vet, averaging 4.5 months of the anticipated 6 months of services.

Health service utilization

The regression models estimated to assess the relationship between the number of MISSION-Vet sessions and our health services utilization measures identified a significant positive association between the total number of MISSION-Vet sessions and the total number of outpatient medical-specialty visits, with each additional MISSION-Vet session associated with, on average, an additional 0.28 visits (B = .28, t(91) = 2.63, p = .001) in the one-year period after the start of MISSION-Vet services. Likewise, each additional MISSION-Vet session was associated with an increase of roughly one outpatient substance abuse treatment visit (B = .92, t(91) = 2.61, p = .01) and roughly two total outpatient visits (B = 2.18, t(91) = 2.25, p = .03) in the one-year period following a Veteran’s initial MISSION-Vet session. The number of MISSION-Vet sessions was not significantly associated with any of our inpatient service utilization measures or the total number of inpatient hospitalizations.

Discussion

In this hybrid III trial, we used a modified stepped-wedge design to compare IU versus IF with respect to the provision of MISSION-Vet. Our key findings were that neither site implemented MISSION-Vet under usual implementation and that IF significantly increased uptake of MISSION-Vet. Further, after IF, significantly more staff at Site B implemented MISSION-Vet compared to Site A. We also examined associations between pre-implementation organizational readiness and implementation. While we knew that Site B was a smaller and less complex site than Site A, we did not know the extent to which complexity and organizational readiness would drive outcomes. We found that the smaller site (Site B) had more flexibility to adopt MISSION-Vet than Site A. This suggests that not only the size of a site but also its organizational characteristics affected uptake. Higher MISSION-Vet uptake was associated with higher doses of IF overall, and this may have been influenced by whether facilitation was delivered in person or virtually. In exploratory analysis, this study also identified a positive relationship between the number of MISSION-Vet sessions and outpatient service utilization. These findings may suggest that MISSION-Vet assisted with outpatient treatment engagement, although caution is warranted in drawing firm conclusions, as this analysis did not adjust for potential individual-level or site-level factors that might confound the relationship between the volume of MISSION-Vet received and service utilization.

This project was conducted during a period of heightened attention to Veteran homelessness, following a commitment from the President of the United States and the VA Secretary to end Veteran homelessness [47]. It is not surprising that, regarding organizational readiness, staff felt a personal responsibility for enhancing patient care and were interested in innovative practices to improve outcomes. However, these attributes alone were insufficient to stimulate uptake of MISSION-Vet. As mentioned above, several organizational characteristics were related to MISSION-Vet outcomes. We found that although Sites A and B had similar perceptions of leadership culture and staff culture, Site B had a more supportive implementation climate and its staff were more willing to improve current practices. Powell et al. had similar findings: strategic organizational factors (implementation climate) had more influence than general ones (organizational culture and climate, transformational leadership) on knowledge of and attitudes toward current mental health evidence-based practices [48]. Closer examination of how practitioners' perspectives of context and climate shift between pre-implementation (i.e., knowledge and attitudes) and implementation may offer important clues for tailoring facilitation and selecting other implementation strategies [40].

Consistent with other recent work on implementing behavioral health care interventions, we found that IF can be an appropriate strategy to support staff behavior change and address sites' baseline needs; in particular, interpersonal support tailored to local needs, the mechanism underlying IF, may be what confers benefit [38, 39, 49]. As noted above, sites implemented MISSION-Vet only after IF was initiated, as training alone did not stimulate MISSION-Vet use. On-site facilitation (Site B) appeared more effective in generating use than virtual support alone (Site A). Although Site B had fewer facilitation events, its facilitation was delivered both in person and virtually, and the dose and personal attention were higher and more focused. These results also align with our prior MISSION-Vet study using GTO as the implementation strategy, which likewise found that implementation support was needed to initiate MISSION-Vet use [18]. However, we do not attribute the differences to virtual IF alone; in reality, a constellation of contextual factors, including medical center size, program scope, and organizational readiness, likely contributed to the greater uptake, and these bear further study.

It is also interesting to note that no Veterans received a full dose of MISSION-Vet (24 sessions along with outreach and linkages as needed). Other studies of multicomponent behavioral health interventions have found implementation fidelity more difficult to achieve than for single-component behavioral interventions delivered by one provider [11, 12, 37, 50, 51]. Recent literature indicates that facilitation has a mixed legacy in implementation work; in some studies facilitation is not as effective as the literature suggests with respect to format (active versus passive implementation strategies) and dose, yet for MISSION-Vet facilitation was a relatively productive approach [52, 53]. The contrast between Sites A and B illustrates another aspect of this issue: larger medical centers may face additional barriers in implementing multimodal, team-based behavioral interventions. Nonetheless, among those Veterans offered MISSION-Vet, more MISSION-Vet services were associated with increased VA outpatient treatment engagement. Our prior studies have also found a relationship between MISSION-Vet, increased use of outpatient services, and a reduction in homelessness [54]. Engagement in both MISSION-Vet and other outpatient treatment is critical for homeless Veterans with a co-occurring disorder, as these Veterans often cycle in and out of healthcare services and are unstably engaged, which can result in housing instability and loss as well as exacerbation of medical problems [54].

Despite the promising findings, this study has several limitations. First, it was conducted in two VAMCs of incomparable size; thus the data are not generalizable to the entire VA system. Importantly, practical considerations regarding the volume of MISSION-Vet provided across our original seven programs required that we adjust our clustering unit to the VAMC site. Second, given that no Veterans received a full dose of MISSION-Vet, it was not possible to assess differences across the intended seven individual programs within the VAMCs, and we therefore had to report comparisons between the two VAMCs (Sites A and B). Third, this study used an IU comparison group, and a slightly more active, low-intensity implementation intervention might have shown some effect. Nonetheless, this study taught us that training alone (IU) was not effective in these VA settings and that more proactive strategies (such as IF) may be needed to guide future implementation. Fourth, IF was offered in person and virtually at Site B but virtually alone at Site A, and this could have been responsible for the site differences above and beyond the differences in site size and organizational readiness, which we were unable to tease apart. Fifth, despite access to the VA medical record, no data were available on clinical outcomes other than engagement, as the VA data tracking client progress are not collected for every client at a standard time, making other clinical comparisons impossible. Sixth, it was not possible to control for secular changes over time between sites. It is therefore plausible that the increased national attention to the problem of homelessness during the implementation period, or some other extraneous factor, could account for the higher readiness and cooperation at Site B, although this is unlikely because no site delivered MISSION-Vet during IU. A qualitative component would have added additional nuance to the analyses.

Conclusion

Despite these limitations, this study offers the field lessons on how to incorporate a team-based, multimodal intervention like MISSION-Vet in both smaller and larger VAMCs and in programs serving homeless Veterans with a co-occurring disorder. This study suggests that standard MISSION-Vet web training is insufficient and that more active, ongoing implementation support is needed. Some tailoring of MISSION-Vet within certain settings might also be warranted in future implementation studies, given the time and intensity needed for service delivery. Moreover, MISSION-Vet was specifically developed to offer a comprehensive service delivery experience rather than having Veterans receive these services from separate programs and providers, which could fragment care. In the future, however, it might be feasible to cross-train staff in programs already delivering MISSION-Vet-type services, such as assertive community outreach, to deliver the psychoeducational components of MISSION-Vet, or to train staff offering co-occurring disorders psychoeducational groups to deliver community outreach and linkage support. Another important future direction might be to enhance the MISSION-Vet training, select a more intensive implementation strategy, and perhaps adopt a more robust external-plus-internal facilitation approach, the latter of which was previously found to be successful in implementing other complex clinical innovations in diverse healthcare settings [55]. Lastly, future studies might examine virtual versus in-person facilitation in more rigorous ways.

Acknowledgments

The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government and shall not be used for advertising or product endorsement purposes. We wish to thank the VA Central Office, Homeless Program Office for their support and guidance on this project, and the seven homeless programs within the two VAMCs for their collaboration.

Data Availability

These analyses were performed using VHA data. Deidentified data can be provided upon request pending ethical approval and in accordance with VHA guidelines and permissions. Please contact Dr. John Wells at john.wells5@va.gov or (781) 687-2924.

Funding Statement

All the authors are funded by a grant from the Health Services Research and Development Quality Enhancement Research Initiative, “Bridging the Care Continuum” (QUE 15-284).

References

1. Bentley TGK, Effros RM, Palar K, Keeler EB. Waste in the U.S. health care system: a conceptual framework. Milbank Q. 2008;86(4):629–59. doi: 10.1111/j.1468-0009.2008.00537.x
2. U.S. Department of Veterans Affairs Homeless Veterans Program. Employment Toolkit. July 2017.
3. Mechanic D, Tanner J. Vulnerable people, groups, and populations: societal view. Health Aff. 2007;26(5):1220–30.
4. Goetz MB, Hoang T, Knapp H, Burgess J, Fletcher MD, Gifford AL, et al.; QUERI-HIV/Hepatitis Program. Central implementation strategies outperform local ones in improving HIV testing in Veterans Healthcare Administration facilities. J Gen Intern Med. 2013;28(10):1311–7. doi: 10.1007/s11606-013-2420-6
5. Smelson D, Losonczy M, Ziedonis D, Castles-Fonseca K, Kaune M. Six month outcomes from a booster case management program for individuals with a co-occurring substance abuse and a persistent psychiatric disorder. Eur J Psychiatry Clin Neurosci. 2007;21:143–52.
6. Szymkowiak D, Montgomery AE, Johnson EE, Manning T, O'Toole TP. Persistent super-utilization of acute care services among subgroups of veterans experiencing homelessness. Med Care. 2017;55(10):893–900. doi: 10.1097/MLR.0000000000000796
7. Smelson DA, Sawh L, Kane V, Kuhn J, Ziedonis DM. MISSION-VET Treatment Manual. Worcester: University of Massachusetts Medical School; 2011.
8. Durbin A, Kapustianyk G, Nisenbaum R, Wang R, Aratangy T, Khan B, et al. Recovery education for people experiencing housing instability: an evaluation protocol. Int J Soc Psychiatry. 2019;65(6):468–78. doi: 10.1177/0020764019858650
9. Ellison ML, Schutt RK, Yuan LH, Mitchell-Miland C, Glickman ME, McCarthy S, et al. Impact of peer specialist services on residential stability and behavioral health status among formerly homeless veterans with co-occurring mental health and substance use conditions. Med Care. 2020;58(4):307–13. doi: 10.1097/MLR.0000000000001284
10. Smelson D, Losonczy M, Castles-Fonseca K, Sussner B, Rodrigues S, Kaune M, et al. Preliminary outcomes from a community linkage intervention for individuals with co-occurring substance abuse and serious mental illness. J Dual Diagn. 2005;1(3):47–59.
11. Smelson D, Kalman D, Losonczy M, Kline A, St. Hill L, Castles-Fonseca K, et al. A brief treatment engagement intervention for individuals with co-occurring mental illness and substance use disorders: results of a randomized clinical trial. Community Ment Health J. 2012;48(2):127–32. doi: 10.1007/s10597-010-9346-9
12. Smelson D, Kline A, Kuhn J, Rodrigues S, O'Connor K, Fisher W, et al. A wraparound treatment engagement intervention for homeless veterans with co-occurring disorders. Psychol Serv. 2013;10(2):161–7. doi: 10.1037/a0030948
13. Smelson DA, Pinals DA, Sawh L, Fulwiler C, Singer S, Guevremont N, et al. An alternative to incarceration: co-occurring disorders treatment intervention for justice-involved veterans. World Med Health Policy. 2015;7(4):329–48.
14. Smelson D, Farquhar I, Fisher W, Pressman K, Pinals DA, Samek B, et al. Integrating a co-occurring disorders intervention in drug courts: an open pilot trial. Community Ment Health J. 2018;55(2).
15. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350. doi: 10.1136/bmj.h1258
16. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci. 2010;5(1):1–9. doi: 10.1186/1748-5908-5-67
17. Horton TJ, Illingworth JH, Warburton WH. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7. doi: 10.1377/hlthaff.2017.1161
18. Chinman M, McCarthy S, Hannah G, Byrne TH, Smelson DA. Using Getting To Outcomes to facilitate the use of an evidence-based practice in VA homeless programs: a cluster-randomized trial of an implementation support strategy. Implement Sci. 2017;12(1):34. doi: 10.1186/s13012-017-0565-0
19. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21. doi: 10.1186/s13012-015-0209-1
20. Harvey G, Kitson A. Implementing evidence-based practice in healthcare. Taylor & Francis; 2015.
21. Kirchner JE, Kearney LK, Ritchie MJ, Dollar KM, Swensen AB, Schohn M. Research & services partnerships: lessons learned through a national partnership between clinical leaders and researchers. Psychiatr Serv. 2014;65(5):577–9. doi: 10.1176/appi.ps.201400054
22. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217. doi: 10.1097/MLR.0b013e3182408812
23. Unni RR, Lee SF, Thabane L, Connolly S, Van Spall HG. Variations in stepped-wedge cluster randomized trial design: insights from the Patient-Centered Care Transitions in Heart Failure trial. Am Heart J. 2020;220:116–26. doi: 10.1016/j.ahj.2019.08.017
24. Hemming K, Lilford R, Girling AJ. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs. Stat Med. 2015;34(2):181–96. doi: 10.1002/sim.6325
25. Department of Veterans Affairs. Office of Research and Development Program Guide: 1200.21. VHA Operations Activities That May Constitute Research. Washington, DC: Department of Veterans Affairs; 2019.
26. Simmons MM, Gabrielian S, Byrne T, McCullough MB, Smith JL, Taylor TJ, et al. A hybrid III stepped wedge cluster randomized trial testing an implementation strategy to facilitate the use of an evidence-based practice in VA Homeless Primary Care Treatment Programs. Implement Sci. 2017;12(1):46. doi: 10.1186/s13012-017-0563-2
27. Susser E, Valencia E, Conover S, Felix A, Tsai WY, Wyatt RJ. Preventing recurrent homelessness among mentally ill men: a "critical time" intervention after discharge from a shelter. Am J Public Health. 1997;87(2):256–62. doi: 10.2105/ajph.87.2.256
28. Ziedonis DM, Stern R. Dual recovery therapy for schizophrenia and substance abuse. Psychiatr Ann. 2001;31(4):255.
29. Chinman M, Shoai R, Cohen A. Using organizational change strategies to guide peer support technician implementation in the Veterans Administration. Psychiatr Rehabil J. 2010;33(4):269. doi: 10.2975/33.4.2010.269.277
30. Bond GR, McHugo GJ, Becker DR, Rapp CA, Whitley R. Fidelity of supported employment: lessons learned from the National Evidence-Based Practice Project. Psychiatr Rehabil J. 2008;31(4):300. doi: 10.2975/31.4.2008.300.305
31. Najavits LM. Expanding the boundaries of PTSD treatment. JAMA. 2012;308(7):714–6. doi: 10.1001/2012.jama.10368
32. Tsemberis S, Gulcur L, Nakae M. Housing first, consumer choice, and harm reduction for homeless individuals with a dual diagnosis. Am J Public Health. 2004;94(4):651–6. doi: 10.2105/ajph.94.4.651
33. Smelson DA, Sawh L, Rodrigues S, Munoz EC, Marzilli A, Tripp J, et al. The MISSION-VET Consumer Workbook. Worcester: University of Massachusetts Medical School; 2011.
34. Department of Veterans Affairs. VA 25 Cities Initiative. March 2014. Available from: https://www.va.gov/HOMELESS/25cities.asp
35. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5). Arlington, VA: American Psychiatric Publishing; 2013.
36. World Health Organization. International statistical classification of diseases and related health problems (ICD-10). World Health Organization; 2004.
37. Smelson DA, Zaykowski H, Guevermont N, Siegfriedt J, Sawh L, Modzelewski D, et al. Integrating permanent supportive housing and co-occurring disorders treatment for individuals who are homeless. J Dual Diagn. 2016;12(2):193–201. doi: 10.1080/15504263.2016.1174010
38. Chinman M, Hunter SB, Ebener P, Paddock SM, Stillman L, Imm P, et al. The Getting To Outcomes demonstration and evaluation: an illustration of the prevention support system. Am J Community Psychol. 2008;41(3–4):206–24. doi: 10.1007/s10464-008-9163-2
39. Fritz J, Wallin L, Söderlund A, Almqvist L, Sandborgh M. Implementation of a behavioral medicine approach in physiotherapy: a process evaluation of facilitation methods. Implement Sci. 2019;14(1):94. doi: 10.1186/s13012-019-0942-y
40. Parchman M, Hsu C, Fagnan L, van Borkulo N, Tuzzio L. Building a learning health care organization: external facilitation tailors support to the learning capacity of primary care settings. J Patient Cent Res Rev. 2017;4(3):187.
41. Stetler CB, McQueen L, Demakis J, Mittman BS. An organizational framework and strategic implementation for system-level change to enhance research-based practice: QUERI Series. Implement Sci. 2008;3:30. doi: 10.1186/1748-5908-3-30
42. Ajzen I, Fishbein M. Attitude-behavior relations: a theoretical analysis and review of empirical research. Psychol Bull. 1977;84(5):31.
43. Rosenheck RA. Organizational process: a missing link between research and practice. Psychiatr Serv. 2001;52(12):6. doi: 10.1176/appi.ps.52.12.1607
44. Helfrich CD, Li Y-F, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4(1):38. doi: 10.1186/1748-5908-4-38
45. Jacobs SR, Weiner BJ, Bunger AC. Context matters: measuring implementation climate among individuals and groups. Implement Sci. 2014;9(1):46. doi: 10.1186/1748-5908-9-46
46. Ritchie MJ, Kirchner JE, Townsend JC, Pitcock JA, Dollar KM, Liu CF. Time and organizational cost for facilitating implementation of primary care mental health integration. J Gen Intern Med. 2019;35(4):1001–10. doi: 10.1007/s11606-019-05537-y
47. The White House. Ending Veteran Homelessness. Available from: https://obamawhitehouse.archives.gov/issues/Veterans/ending-homelessness
48. Powell BJ, Mandell DS, Hadley TR, Rubin RM, Evans AC, Hurford MO, et al. Are general and strategic measures of organizational context and leadership associated with knowledge and attitudes toward evidence-based practices in public behavioral health settings? A cross-sectional observational study. Implement Sci. 2017;12(1):64. doi: 10.1186/s13012-017-0593-9
49. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2015;11(1):33.
50. Brunette MF, Asher D, Whitley R, Lutz WJ, Wieder BL, Jones AM, et al. Implementation of integrated dual disorders treatment: a qualitative analysis of facilitators and barriers. Psychiatr Serv. 2008;59(9):989–95. doi: 10.1176/ps.2008.59.9.989
51. Drake RE, Bond GR. Implementing integrated mental health and substance abuse services. J Dual Diagn. 2010;6(3–4):251–62.
52. Kauth MR, Sullivan G, Blevins D, Cully JA, Landes RD, Said Q, et al. Employing external facilitation to implement cognitive behavioral therapy in VA clinics: a pilot study. Implement Sci. 2010;5:75. doi: 10.1186/1748-5908-5-75
53. Garner B, Gotham H, Chaple M, Martino S, Ford J, Roosa M, et al. The implementation & sustainment facilitation strategy improved implementation effectiveness and intervention effectiveness: results from a cluster-randomized type 2 hybrid trial. 2020.
54. Smelson DA, Perez CK, Farquhar I, Byrne T, Colegrove A. Permanent supportive housing and specialized co-occurring disorders wraparound services for homeless individuals. J Dual Diagn. 2019:1–10.
55. Kirchner JE, Ritchie MJ, Pitcock JA, Parker LE, Curran GM, Fortney JC. Outcomes of a partnered facilitation strategy to implement primary care-mental health. J Gen Intern Med. 2014;29 Suppl 4:904–12. doi: 10.1007/s11606-014-3027-2

Decision Letter 0

Annika C Sweetland

21 Oct 2021

PONE-D-21-13578: Testing Implementation Facilitation for Uptake of an Evidence-Based Psychosocial Intervention in VA Homeless Programs: A Hybrid Type III Trial. PLOS ONE.

Dear Dr. Bruzios,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please see detailed notes below based on 2 peer reviews.

Please submit your revised manuscript by Dec 05 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Annika C. Sweetland, DrPH, MSW

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for including your ethics statement: "The project was deemed quality improvement and received an exempt status by the Institutional Review Board at the Bedford, Massachusetts VAMC according to the VA Program Guide 1200.21 "

a) Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

3. You indicated that ethical approval was not necessary for your study. We understand that the framework for ethical oversight requirements for studies of this type may differ depending on the setting and we would appreciate some further clarification regarding your research. Could you please provide further details on why your study is exempt from the need for approval and confirmation from your institutional review board or research ethics committee (e.g., in the form of a letter or email correspondence) that ethics review was not necessary for this study? Please include a copy of the correspondence as an "Other" file.

4. We note that you have included the phrase "data not shown" in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or in an acceptable, public repository. Please add a citation to support this phrase or upload the data that corresponds with these findings to a stable repository (such as Figshare or Dryad) and provide any URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data.

Additional Editor Comments (if provided):

Our sincere apologies about the delayed decision on this manuscript. Due to difficulty finding an additional external reviewer, the Academic Editor performed the secondary review.

Overall, I agree with Reviewer 1 that the article is interesting, well written, of value to the field and appropriate for publication in PLOS-One. However, some methodological issues raise questions about the interpretation of findings.

In addition to the concerns raised by Reviewer 1, I would add that a major concern is that the authors describe the study as having used a stepped-wedge design, but the analysis of findings does not match this. The authors describe pragmatic sequential implementation roll out occurring in 7 waves across two sites (site A followed by site B), wherein the IU in all sites (“control condition”) produced no change in any of the sites/waves. Since it was not possible to compare each wave to itself pre-implementation (intervention IF vs. control IU) as intended (suggested by the stepped-wedge design), instead the analysis shifts to a comparison of implementation outcomes between sites A and B, that have some significant qualitative differences (e.g. size, location, readiness) as well as implementation differences (e.g. hybrid in-person and virtual vs. virtual only). The findings are still valuable and interesting, but the analysis and conclusions need to match the methodological reality.

An additional limitation is that since the authors did not randomize the sequence of implementation across sites (all waves in site A followed by all waves in site B), it is not possible to control for secular changes over time between sites. It seems plausible that increased national attention to the problem of homelessness during the implementation period (lines 390-392) could be a factor that accounted for the higher readiness and cooperation in Site B.

Finally (minor), it may be worth highlighting that the current "training as usual" (IU) strategy at the VA of watching a self-instructional video was totally ineffectual and should not be continued. A more proactive strategy (such as IF) could help guide more effective future implementation within the VA.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an interesting paper that examines the implementation and uptake of the MISSION-Vet program for homeless veterans with co-occurring mental health and substance use disorders. The paper addresses knowledge gaps surrounding the optimal approaches to implementing multicomponent interventions like MISSION-Vet and the role of implementation facilitation in this process. The paper is well-written but I do have some concerns about some of the conclusions drawn by the authors based on their approach and findings.

Main comments:

- The authors have conducted a stepped wedge trial unlike other stepped wedge trials that I have seen. Normally with this design, all study sites are exposed to the control condition during the first time period and then with each passing time period a new site receives the intervention. Sites are randomized such that they will receive both control and intervention conditions but in different sequences. This is not what these authors have done, as each program received the same six months of implementation as usual followed by implementation facilitation for a six-month period. The intervention was rolled out in a step-wise fashion but the time in which sites were exposed to the control condition (“IU”) did not differ across programs or sites. This reality raises a number of questions that have implications for the conclusions that can be drawn about the influence of facilitation on implementation and other outcomes. My first question is what justified this atypical approach to data collection if the goal was to conduct a stepped wedge trial that captures secular trends in implementation?

- Given the atypical study design, it is hard to know whether the changes in number of MISSION-Vet services is truly due to the arrival of implementation facilitation, or could it be that a certain amount of time is necessary for sites to prepare for uptake of the intervention and that services would have been started to be delivered after 6 months regardless of the presence of facilitation. How do we know that the observed results are actually due to facilitation and not due to the normal time it takes to internally get organized and be ready to deliver new services? The inability to capture temporal trends in the study seems like a major limitation.

- In a similar vein, the authors argue that Site B may have certain characteristics (e.g. support for identifying veterans, willingness to innovate) that may have made it more receptive and ready for the MISSION-Vet program than in Site A. However, as per Figure 1, Site B also received its facilitation after all the programs in Site A had been exposed to facilitation. How do we know that the facilitation at Site B wasn’t significantly better given all the lessons learned through interactions with programs in Site A? Given also that the facilitation at site B included an in-person component, can we really draw firm conclusions about the role of program/site characteristics when there may have been more important differences at the level of the facilitation programs/sites received?

- One thing that was less clear to me was the number of facilitators involved in the project. The authors state on page 16 “one external facilitator per site supported MISSION-Vet implementation”. Does this mean that there were two facilitators in total, one for Site A and one for Site B? Or were there facilitators at each program site? If there were only two facilitators overall, this raises questions about how dedicated they were to each program. In Site A, the overlapping exposure to facilitation means that the facilitator would have had to provide supports to multiple programs at the same time. This does not appear to be the case for Site B, where the IF periods don’t overlap. Is it possible that the quality of facilitation differed because there were more competing demands on the facilitator providing supports in Site A?

- With respect to the linear regression analyses used in the study, the data are clearly in a hierarchical structure but I saw no attempts to determine whether multi-level analyses were feasible/appropriate or not. Also, it was not clear to me whether the regression analyses included any confounding variables, this should be made explicit. I would urge the authors to be cautious in their interpretation of results if no confounding variables were included in their models.

- As a reader, it remains unclear what explains the differences in intervention uptake across the different programs and sites. A qualitative component to this study would have been highly valuable but was not performed. This should probably be mentioned as a limitation because it is hard to draw conclusions based on the limited organizational readiness data, especially with the limited sample size at Site B.

Minor comments:

- On page 12, line 266, the authors state that neither of the two sites provided MISSION-Vet services during the “IF period” but it was during the “IU period” that no services were provided.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Matthew Menear

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Mar 17;17(3):e0265396. doi: 10.1371/journal.pone.0265396.r002

Author response to Decision Letter 0


29 Dec 2021

November 15, 2021

To Whom It May Concern,

We wish to thank the reviewers for their thoughtful feedback on our manuscript entitled “Testing implementation facilitation for uptake of an evidence-based psychosocial intervention in VA homeless programs: A hybrid type III trial” (Manuscript # PONE-D-21-13578). Below, each reviewer comment is followed by our response in bold, including the page numbers where the corresponding changes can be found in the manuscript.

Response to Reviewers

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

This has been reviewed and addressed.

2. Thank you for including your ethics statement: "The project was deemed quality improvement and received an exempt status by the Institutional Review Board at the Bedford, Massachusetts VAMC according to the VA Program Guide 1200.21"

a. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

We have now provided additional details regarding participant consent in the Methods section. On page 7, lines 155-160, we explain “…thus waiving the need for written or verbal informed consent. The staff being trained were the study participants and they were informed about the project’s designation as Quality Improvement. Data from the medical record were extracted for the clients being served by the staff members offering MISSION-Vet. Staff were informed of this data extraction and records were not anonymized as it was necessary to identify the clients being served by the staff offering MISSION-Vet.”

b. If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

As noted above, the IRB deemed this Quality Improvement and thus waived the requirement for informed consent. It is also noteworthy that we did not include anonymized records as we needed to identify the clients being served by the staff offering MISSION-Vet. See page 7, lines 155-160.

c. Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

As noted above, this statement was added to the Methods section on page 7 lines 155-160. It reads “…thus waiving the need for written or verbal informed consent. The staff being trained were the study participants and they were informed about the project’s designation as Quality Improvement. Data from the medical record were extracted for the clients being served by the staff members offering MISSION-Vet. Staff were informed of this data extraction and records were not anonymized as it was necessary to identify the clients being served by the staff offering MISSION-Vet.” This has also been added to the ethics statement field in the submission form.

3. You indicated that ethical approval was not necessary for your study. We understand that the framework for ethical oversight requirements for studies of this type may differ depending on the setting and we would appreciate some further clarification regarding your research. Could you please provide further details on why your study is exempt from the need for approval and confirmation from your institutional review board or research ethics committee (e.g., in the form of a letter or email correspondence) that ethics review was not necessary for this study? Please include a copy of the correspondence as an ""Other"" file.

This has been addressed in comments 2a and 2b and within the manuscript (see page 7 lines 155-160). A copy of the memo determining that the IRB called this non-research has been uploaded as an “Other” file.

4. We note that you have included the phrase “data not shown” in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or in an acceptable, public repository. Please add a citation to support this phrase or upload the data that corresponds with these findings to a stable repository (such as Figshare or Dryad) and provide any URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data.

Thank you for bringing this to our attention. The phrase has been removed from the manuscript.

Additional Editor Comments:

5. Overall, I agree with Reviewer 1 that the article is interesting, well written, of value to the field and appropriate for publication in PLOS-One. However, some methodological issues raise questions about the interpretation of findings.

In addition to the concerns raised by Reviewer 1, I would add that a major concern is that the authors describe the study as having used a stepped-wedge design, but the analysis of findings does not match this. The authors describe pragmatic sequential implementation roll out occurring in 7 waves across two sites (site A followed by site B), wherein the IU in all sites (“control condition”) produced no change in any of the sites/waves. Since it was not possible to compare each wave to itself pre-implementation (intervention IF vs. control IU) as intended (suggested by the stepped-wedge design), instead the analysis shifts to a comparison of implementation outcomes between sites A and B, that have some significant qualitative differences (e.g. size, location, readiness) as well as implementation differences (e.g. hybrid in-person and virtual vs. virtual only). The findings are still valuable and interesting, but the analysis and conclusions need to match the methodological reality.

Thank you for this feedback. Because stepped-wedge design (comment #8), intervention groups (comment #13), and analysis (comment #12) come up separately in reviewer 1’s feedback, we address these below for ease of review and to reduce duplication. In addition, we address contextual and facilitation differences below (comment #9, 10, 11).

6. An additional limitation is that since the authors did not randomize the sequence of implementation across sites (all waves in site A followed by all waves in site B), it is not possible to control for secular changes over time between sites. It seems plausible that increased national attention to the problem of homelessness during the implementation period (lines 390-392) could be a factor that accounted for the higher readiness and cooperation in Site B.

We appreciate this feedback and have added the following statement in the limitations described in the discussion on page 25, lines 495-500 about secular trends. We note “Sixth, it was not possible to control for secular changes over time between sites. Therefore, it is plausible that the increased national attention to the problem of homelessness during the implementation period or some other extraneous factor could account for the higher readiness and cooperation in Site B. However, while this is unlikely as no site did MISSION during IU, a qualitative component would have added additional nuance to the analyses.”

7. Finally (minor) it may be worth highlighting that the current "training as usual" (IU) strategy at the VA of watching a self-instructional video was totally ineffectual and should not be continued. A more proactive strategy (such as IF) could help guide more effective future implementation within the VA.

We appreciate this feedback and have now stated this more boldly in the discussion on page 23, line 449. We note “As noted above, sites implemented MISSION-Vet only after IF was initiated as training alone did not stimulate MISSION-Vet use.” We also added this as a limitation on page 25, lines 487-489, “Nonetheless, this study did teach us that training alone (IU) was not effective in these VA settings and more proactive strategies (such as IF) may be needed to help guide future implementation.”

Reviewers' comments:

Reviewer #1:

Main comments:

8. The authors have conducted a stepped wedge trial unlike other stepped wedge trials that I have seen. Normally with this design, all study sites are exposed to the control condition during the first time period and then with each passing time period a new site receives the intervention. Sites are randomized such that they will receive both control and intervention conditions but in different sequences. This is not what these authors have done, as each program received the same six months of implementation as usual followed by implementation facilitation for a six-month period. The intervention was rolled out in a step-wise fashion but the time in which sites were exposed to the control condition (“IU”) did not differ across programs or sites. This reality raises a number of questions that have implications for the conclusions that can be drawn about the influence of facilitation on implementation and other outcomes. My first question is what justified this atypical approach to data collection if the goal was to conduct a stepped wedge trial that captures secular trends in implementation?

Thank you for the thoughtful feedback on the design. We agree that the description of our stepped-wedge trial lacks clarity. We have changed the manuscript in a number of places to more clearly describe the design of our study. First, we now call it a “modified stepped-wedge” design in the Abstract (line 35), pages 6-7, and 21. We have also included citations for this modified stepped-wedge design (reference #s 23, 24). Second, beyond characterizing it as a modified stepped-wedge design, we further explain it on pages 6-7, lines 132-148. We write “This is a modified stepped-wedge design in that, in contrast to a traditional stepped-wedge wherein all sites initially start in the control condition, with programs switching over to the intervention condition at fixed intervals or steps and then remaining in the intervention condition for the duration of the study, in our study, it was not feasible to implement IF at all programs simultaneously. As a result, in the present study, each program received six months of IU and crossed over to IF for another six months, with data collection continuing for an additional six months after the end of IF. Moreover, in terms of calendar time, the rollout of the control and intervention conditions in our study occurred in a sequential manner across programs, with each program being in the intervention and control conditions for a fixed period of time, and not contributing to the analysis in periods in which they are neither in the intervention or control condition (Fig 1). This type of design is akin to what the stepped-wedge design literature refers to as an incomplete stepped-wedge design with limited measurements prior to and after crossover [23, 24] and is appropriate to use when it is not feasible to implement an intervention or collect data in all clusters simultaneously, either due to resource constraints or for other reasons. In this case, we did not have the resources to do IF in every location at the same time.” Third, we have now depicted this more clearly in Figure 1 on page 7, line 52.

9. Given the atypical study design, it is hard to know whether the changes in number of MISSION-Vet services is truly due to the arrival of implementation facilitation, or could it be that a certain amount of time is necessary for sites to prepare for uptake of the intervention and that services would have been started to be delivered after 6 months regardless of the presence of facilitation. How do we know that the observed results are actually due to facilitation and not due to the normal time it takes to internally get organized and be ready to deliver new services? The inability to capture temporal trends in the study seems like a major limitation.

Thank you for bringing up this important point about differences in facilitation and potential organizational differences related to MISSION-Vet uptake. We have added two statements to address this in the discussion section. First, on page 22, line 418-420, “Higher MISSION-Vet uptake was associated with higher doses of IF overall, and this may be impacted by the in-person or virtual delivery.” Then we added the following limitation on page 25, lines 495-500, “Sixth, it was not possible to control for secular changes over time between sites. Therefore, it is plausible that the increased national attention to the problem of homelessness during the implementation period or some other extraneous factor could account for the higher readiness and cooperation in Site B. However, while this is unlikely as no site did MISSION during IU, a qualitative component would have added additional nuance to the analyses.”

10. In a similar vein, the authors argue that Site B may have certain characteristics (e.g. support for identifying veterans, willingness to innovate) that may have made it more receptive and ready for the MISSION-Vet program than in Site A. However, as per Figure 1, Site B also received its facilitation after all the programs in Site A had been exposed to facilitation. How do we know that the facilitation at Site B wasn’t significantly better given all the lessons learned through interactions with programs in Site A? Given also that the facilitation at site B included an in-person component, can we really draw firm conclusions about the role of program/site characteristics when there may have been more important differences at the level of the facilitation programs/sites received?

We appreciate this point about differences in facilitation delivery. We now mention the in-person vs. virtual facilitation in the discussion section on page 22, lines 418-420, “Higher MISSION-Vet uptake was associated with higher doses of IF overall, and this may be impacted by the in-person or virtual delivery.”

11. One thing that was less clear to me was the number of facilitators involved in the project. The authors state on page 16 “one external facilitator per site supported MISSION-Vet implementation”. Does this mean that there were two facilitators in total, one for Site A and one for Site B? Or were there facilitators at each program site? If there were only two facilitators overall, this raises questions about how dedicated they were to each program. In Site A, the overlapping exposure to facilitation means that the facilitator would have had to provide supports to multiple programs at the same time. This does not appear to be the case for Site B, where the IF periods don’t overlap. Is it possible that the quality of facilitation differed because there were more competing demands on the facilitator providing supports in Site A?

Yes, this study included one facilitator per site. Thank you for also noting that there may have been differences in facilitator quality between the two sites and their respective facilitators. We have clarified this on page 17, lines 337-339, “While we recognize that on the surface, it might suggest an imbalance in workload given that Site A was much larger than Site B, the facilitator in Site A had more available time to devote to this project.”

12. With respect to the linear regression analyses used in the study, the data are clearly in a hierarchical structure but I saw no attempts to determine whether multi-level analyses were feasible/appropriate or not. Also, it was not clear to me whether the regression analyses included any confounding variables, this should be made explicit. I would urge the authors to be cautious in their interpretation of results if no confounding variables were included in their models.

We thank the reviewer for this comment about the analysis. We consider these analyses to be exploratory in nature and have clarified this in a number of places in the manuscript. In the data analysis plan, we added on page 12, lines 271-273, “In other words, practical considerations necessitated that we modify our intended analysis plan to make the site, rather than the program, the cluster unit of interest.” We also added on page 13, lines 288-292, “However, this analysis plan was not feasible for two reasons. First, as noted above, the number of trained providers delivering MISSION-Vet and the number of Veterans receiving MISSION-Vet at each of the seven original program sites was small, which caused us to shift from using the program to the site as our cluster of interest in our analysis.” We also added clarification to this statement on page 13, lines 294-296, “We therefore rely solely on descriptive statistics to examine the impact of IF on the provision of MISSION-Vet services.” On page 13, line 297, we describe this as “an exploratory analysis,” and thus we did not attempt to account for clustering by site and did not include confounders, which we also added on page 14, lines 304-305: “As these models were purely exploratory, they did not adjust for any additional covariates.” This is also clarified on page 22, lines 420-421, “In exploratory analysis, this study also identified a positive relationship between the number of MISSION-Vet sessions and outpatient service utilization.” We also agree that, given the limitations of this analysis, it is important to be cautious in interpreting the results of these models.
Our interpretations are now more clearly articulated on page 22, lines 421-426, “These findings may suggest that MISSION-Vet assisted with outpatient treatment engagement, although some caution is warranted in making any firm conclusions about the extent to which this was the case, as this analysis did not adjust for any potential individual level or site-level factors that might confound the relationship between volume of receipt of MISSION-Vet and service utilization.” In the discussion section we now more clearly articulate the limitations of this analysis where we added the following statement on page 25, lines 480-482, “Importantly, practical considerations regarding the volume of MISSION-Vet provided across our original seven programs required that we adjust our clustering unit to the VAMC site.”

We also note “exploratory analysis” on page 22.

13. As a reader, it remains unclear what explains the differences in intervention uptake across the different programs and sites. A qualitative component to this study would have been highly valuable but was not performed. This should probably be mentioned as a limitation because it is hard to draw conclusions based on the limited organizational readiness data, especially with the limited sample size at Site B.

Thank you, we agree that a qualitative component would add nuance to the analyses. We address this limitation on page 25, lines 495-500, “Sixth, it was not possible to control for secular changes over time between sites. Therefore, it is plausible that the increased national attention to the problem of homelessness during the implementation period or some other extraneous factor could account for the higher readiness and cooperation in Site B. However, while this is unlikely as no site did MISSION during IU, a qualitative component would have added additional nuance to the analyses.”

Minor comments:

14. On page 12, line 266, the authors state that neither of the two sites provided MISSION-Vet services during the “IF period” but it was during the “IU period” that no services were provided.

Sorry for this confusion. This is now changed to “IU” on page 13, line 293.

Thank you again for your review and for your consideration of this manuscript. Please address all correspondence concerning this manuscript to me at David.Smelson@va.gov.

Dr. David A. Smelson, PsyD

Professor

Director of the University of Massachusetts Center of Excellence in Addictions

Department of Psychiatry, University of Massachusetts Medical School

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Annika C Sweetland

2 Mar 2022

Testing Implementation Facilitation for Uptake of an Evidence-Based Psychosocial Intervention in VA Homeless Programs: A Hybrid Type III Trial

PONE-D-21-13578R1

Dear Dr. Bruzios,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Annika C. Sweetland, DrPH, MSW

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed all of my comments and I am satisfied with their responses, I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Matthew Menear

Acceptance letter

Annika C Sweetland

7 Mar 2022

PONE-D-21-13578R1

Testing Implementation Facilitation for Uptake of an Evidence-Based Psychosocial Intervention in VA Homeless Programs: A Hybrid Type III Trial

Dear Dr. Bruzios:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Annika C. Sweetland

Academic Editor

PLOS ONE

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Attachment

Submitted filename: Response to Reviewers.docx

Data Availability Statement

These analyses were performed using VHA data. Deidentified data can be provided upon request pending ethical approval and in accordance with VHA guidelines and permissions. Please contact Dr. John Wells at john.wells5@va.gov or (781) 687-2924.

