Author manuscript; available in PMC: 2023 Jul 1.
Published in final edited form as: J Acquir Immune Defic Syndr. 2022 Jul 1;90(Suppl 1):S74–S83. doi: 10.1097/QAI.0000000000002967

Tailored Motivational Interviewing in Adolescent HIV Clinics: Primary Outcome Analysis of a Stepped-Wedge Implementation Trial

Sylvie Naar 1, Karen MacDonell 2, Jason Chapman 3, Lisa Todd 4, Yuxia Wang 5, Julia Sheffler 6, M Isabel Fernandez 7
PMCID: PMC10153471  NIHMSID: NIHMS1791118  PMID: 35703758

Abstract

Background:

Youth continue to have the poorest outcomes along the HIV prevention and care continua. Motivational Interviewing (MI) may promote behavior change and reduce perceived stigma, but providers often demonstrate inadequate MI competence. This study tested Tailored Motivational Interviewing (TMI), a set of implementation strategies designed to improve MI competence in youth HIV providers.

Setting:

Ten HIV clinics in the Adolescent Trials Network for HIV/AIDS Interventions (ATN).

Methods:

In a stepped wedge design, 10 clinics (N=151 providers) were randomized in 5 clusters every three months to receive TMI for a 12-month implementation period. Sites were re-randomized within each cluster to receive communities of practice guidance with or without internal facilitator support in the sustainment period. Standard patient assessments were coded every three months for 36 months.

Results:

Nesting was addressed using mixed-effects regression models, with random effects for providers and sites. TMI resulted in significantly improved MI competence over baseline. Despite small reductions in competence in the sustainment window, competence was still significantly improved over baseline, with no difference between the two sustainment conditions.

Conclusions:

TMI may be an important tool for building the capacity of the HIV health workforce to end the HIV epidemic in young people.

Keywords: Adolescent, Emerging Adults, HIV, Implementation, Motivational Interviewing


In 2015, the White House formally recognized young people (ages 13–24) as a key HIV population in the United States1 and the Centers for Disease Control and Prevention issued a call to action.2 Youth continue to have the poorest outcomes along the prevention and care continua,3 and youth of color are disproportionately affected.4 A quarter century of behavioral intervention research has focused on behavior change for primary and secondary HIV prevention, but the full benefits that should be possible for youth have yet to be realized, in large part because efficacious interventions have not been successfully implemented in real-world settings.5,6

Motivational Interviewing (MI) is a collaborative, goal-oriented style of communication with specific strategies to elicit intrinsic motivation and reinforce the language of change.7 It is the only behavior change intervention to demonstrate success across the youth HIV prevention and care cascades.8–11,12 In fact, one meta-analysis found that MI was the only effective intervention for behavior change in youth living with HIV.13 An MI-based intervention is the only known intervention to reduce perceived stigma in youth living with HIV in the U.S.14 MI is already embedded in the clinical guidelines for HIV care15–18 and HIV risk reduction.19 MI has been adapted by the first author to address behavior change in adolescents and young adults.20 Thus, MI may be the ideal evidence-based practice to implement for improving the youth HIV prevention and treatment continua.

However, several studies suggest that achieving MI competence is difficult for many providers,21 and that a lecture or workshop alone is insufficient for providers to deliver MI with fidelity.22–25 In fact, in a study of adolescent HIV care providers from different disciplines across 10 clinics in the United States, only 7 percent scored in the intermediate or advanced MI competence range on a standardized assessment of simulated patient interactions,26 although providers reported receiving some prior MI training.

Thus, we carefully translated basic behavioral and social science of skills acquisition and communication sequencing into a provider intervention to improve MI competence and address HIV-related target behaviors and stigma in adolescents and emerging adults. The resulting intervention, Tailored Motivational Interviewing (TMI), demonstrated proof of concept in Adolescent Trials Network (ATN) Protocol 128.27 After a pilot randomized trial,28 TMI was tested in the current study, a Hybrid Type 3 implementation-effectiveness trial (ATN 146) using a stepped-wedge design in 10 adolescent HIV clinics in the United States.29 Consistent with a Type 3 hybrid,30 the primary implementation hypothesis was that MI competence ratings would be higher among providers during the TMI phase than during the treatment-as-usual phase. As an exploratory hypothesis, we tested whether training an internal facilitator to provide ongoing coaching increased provider competence in the sustainment period. The current paper describes the implementation process using the Exploration, Preparation, Implementation, Sustainment (EPIS) framework31 and presents the primary analysis of the effect of TMI implementation strategies on MI competence to address HIV-related behaviors in youth providers.

METHODS

Participants and Procedures

The protocol methods have been previously published.29 The primary implementation outcome was provider competence; analyses of secondary effectiveness outcomes from electronic health records (i.e., viral suppression, retention in care, STIs) are underway in ATN 154.32 Eligible participants included all HIV providers and staff (prevention and treatment) at the target ATN clinics with at least four hours of youth (ages 13–24) clinical contact per week. TMI study staff received providers’ contact information from clinics and contacted potential participants via email or phone to provide the TMI information sheet and schedule the first assessments. A participant was considered enrolled once they reviewed the information sheet, provided verbal consent, and completed the first assessment. The site principal investigators were not involved in consenting procedures to avoid coercion. Participants (N=151) were classified by job description into three groups: medical providers (e.g., medical doctor, nurse; 38.7%, n = 58), psychologists or social workers (17.3%, n = 26), and other (e.g., health educator, peer counselor; 44%, n = 67). All procedures were approved by the first author’s university institutional review board, and reliance agreements were obtained with all study clinics. Each clinic received a $3000 incentive to be used at its discretion, and providers received a $10 gift card for completing roleplays to assess MI competence. See Figure 1 for the CONSORT diagram.

Figure 1.

TMI CONSORT Diagram

In this stepped-wedge design (see Figure 2), 10 ATN clinics were randomly assigned to receive TMI in clusters. Two clinics were randomized every three months until the fifth cluster was randomized to TMI. All participants completed assessments every three months for 36 months, for a total of 13 assessments per participant. The timing of assessments and the duration of the implementation intervention period (i.e., 12 months) were the same for all clinics. The follow-up windows (sustainment) varied based on randomization. The first two clinics had a baseline window of three months (two assessments), an implementation window of 12 months (four assessments), and a follow-up window of 21 months (seven assessments). The last two clinics had a baseline window of 15 months (six assessments), an implementation window of 12 months (four assessments), and a follow-up window of 9 months (three assessments). After 12 months of implementation, before beginning the sustainment period, clinics were re-randomized within each cluster to receive internal facilitation (IF; funding for four hours per week for an MI coach local to the organization) plus a communities of practice (CoP) manual to self-sustain MI practice, versus the CoP manual alone.
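The phase windows above follow directly from the design: each successive cluster starts TMI three months later, the implementation window is fixed at 12 months, and the follow-up window fills the remainder of the 36-month study. A minimal sketch of this arithmetic (the function name and cluster numbering are illustrative, not from the protocol):

```python
def cluster_windows(cluster, step_months=3, impl_months=12, total_months=36):
    """Phase windows (in months) for clusters 1-5 of the stepped-wedge design.

    Cluster 1 starts TMI after a 3-month baseline; each later cluster
    starts 3 months after the previous one.
    """
    baseline = step_months * cluster                 # 3, 6, 9, 12, 15 months
    sustain = total_months - baseline - impl_months  # remainder of the study
    # assessments occur every 3 months, including the month-0 assessment
    n_assessments = total_months // step_months + 1
    return {"baseline": baseline, "implementation": impl_months,
            "sustainment": sustain, "assessments": n_assessments}
```

For the first cluster this yields a 3-month baseline and 21-month sustainment window; for the fifth, a 15-month baseline and 9-month sustainment window, matching the text.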

Figure 2.

TMI Stepped Wedge Design

TMI Implementation Intervention

Guided by the EPIS framework, the exploration phase involved a qualitative multilevel assessment of potential barriers and facilitators of MI implementation within inner and outer context factors specified by the EPIS model (see ATN Protocol 15333). In the preparation phase, information gathered in EPIS interviews was reviewed by each clinic using local implementation teams (iTeams) made up of organization leaders and representative staff. The iTeams were guided by a facilitator who was not a TMI trainer or researcher. They were encouraged to adapt strategies that were identified as flexible (e.g., scheduling of in-person workshop components; substitution of virtual components, utilization of the $3000 incentive) and to overcome barriers to strategies that were fixed (e.g., attendance).

The 12-month implementation phase began with a 10-hour group workshop delivered by a member of the Motivational Interviewing Network of Trainers. Based on research in cooperative learning environments,34,35 the 10-hour workshop is structured with cooperative learning activities, video examples, and behavioral skills acquisition steps (modeling, verbal and behavioral rehearsal, feedback). Although clinics could opt for partial remote training (maximum 5 hours), all clinics elected to complete the full workshop in person. Coaching followed a standardized process: 1) elicit motivation around learning MI; 2) complete a standard patient interaction if not completed previously; 3) provide feedback on the two highest and two lowest ratings; 4) conduct standardized experiential activities targeting the lowest ratings. Providers completed two mandatory 1-hour individual coaching sessions immediately post-workshop. Then, providers completed four quarterly competence assessments (15 minutes each) and received a tailored feedback report. After each assessment, participants who scored in the “Beginner” or “Novice” range (see Measures) completed a mandatory coaching session, which was optional for those scoring in the “Intermediate” or “Advanced” range. Each iTeam was encouraged to meet monthly throughout the implementation period to review aggregated adherence and competence data, thereby keeping individual participant data confidential. iTeams were also encouraged to plan for sustainment and funding needs.

After 12 months of implementation, clinics entered the sustainment phase and no longer received assessment feedback, coaching, or iTeam facilitation. All clinics received a “communities of practice” (CoP) manual to self-sustain MI competence with group peer coaching activities.36 The manual consisted of a set of experiential activities for each item on the MI-CRS (described below). The local iTeam determined implementation plans for the CoP. Half of the clinics (one per cluster in the stepped-wedge design) were re-randomized to internal facilitation (IF) in addition to the CoP manual, consisting of funding for a coach internal to the organization to continue MI support for four hours per week. The coach was required to achieve advanced MI competence and to complete a five-session standardized coaching program with the TMI trainer.

Primary Outcome Measure

The MI Coach Rating Scale (MI-CRS)37 was developed with item response theory methods. The MI-CRS is a 12-item measure, rated on a 4-point scale (Beginner, Novice, Intermediate, Advanced), that can be used by MI coaches as well as researchers and is designed to be rated in one pass of real or simulated encounters. The items represent essential MI components such as a collaborative stance, autonomy support, open questions to elicit motivational language (i.e., change talk), reflections of change talk, affirmations, and summaries. The measure has demonstrated reliability and validity on several indicators using Rasch modeling in diverse settings and samples.37 First, dimensionality results indicated that the MI-CRS measures a single underlying construct of MI competence, in contrast to other conceptions of MI skill as having at least two dimensions. Second, item-session maps were indicative of a well-performing instrument. Third, the MI-CRS measures aspects of competence at the level of counselors and the level of clients/sessions (i.e., the item variance due to these levels was roughly equivalent), which is beneficial when rating competence as a provider-level implementation outcome or for ongoing quality assurance and feedback loops. The four-point rating scale showed excellent functionality and good item fit. Thresholds based on the mean of the 12 items were defined using a Rasch-based objective standard setting procedure. The final competence categories and associated threshold scores were: Beginner (<2.0), Novice (≥2.0 and <2.6), Intermediate (≥2.6 and <3.3), and Advanced (≥3.3).
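Scoring thus reduces to averaging the 12 item ratings and applying the cut points above. A minimal sketch (the function name is illustrative, not from the MI-CRS manual):

```python
def mi_crs_category(item_ratings):
    """Map the 12 MI-CRS item ratings (each rated 1-4) to a competence category."""
    assert len(item_ratings) == 12, "MI-CRS has 12 items"
    mean = sum(item_ratings) / len(item_ratings)
    if mean < 2.0:
        return "Beginner"
    elif mean < 2.6:
        return "Novice"
    elif mean < 3.3:
        return "Intermediate"
    return "Advanced"
```

For example, a provider rated 2 on every item falls exactly at the Novice threshold, while all 3s yields Intermediate.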

In the current study, all participants completed the same 15–20 minute standard patient interaction by phone, with different standard patient actors at each timepoint. Most of the standard patients had been used in preliminary studies, and all standard patients were approved by the study’s youth advisory board. Coders participated in a monthly coding lab where they discussed discrepancies in co-coded interactions. Ten percent of interactions were coded by two coders to compute interrater reliability, using a single-measurement, absolute-agreement, two-way mixed-effects model to calculate the intraclass correlation coefficient (ICC). Based on traditional guidelines,38 the ICC of .73 was at the high end of the “good” range.
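For readers unfamiliar with this reliability index, the single-measurement, absolute-agreement, two-way ICC can be computed from the two-way ANOVA mean squares (McGraw and Wong's ICC(A,1)). The formula is standard, but the function below is an illustrative sketch, not the study's analysis code:

```python
def icc_a1(ratings):
    """ICC(A,1): single-measurement, absolute-agreement, two-way model.

    ratings: one row per co-coded interaction, one column per coder (here k = 2).
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # two-way ANOVA sums of squares
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-interaction
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-coder
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    # absolute agreement penalizes systematic coder differences (the msc term)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Note that, unlike a consistency ICC, a constant offset between coders lowers ICC(A,1), which is why absolute agreement is the stricter criterion for coder calibration.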

Data Analysis Strategy

The analyses were designed to test for change in MI competence across Preparation, Implementation, and Sustainment phases. With each phase having longitudinal data, the model was formulated to evaluate multiple questions simultaneously: (1) Did MI competence change significantly during Preparation? (2) Did MI training lead to an initial, overall increase in competence? (3) Did competence change significantly during Implementation, and did the rate of change differ from the prior rate during Preparation? (4) Did the end of Implementation lead to an initial, overall increase/decrease in competence for Sustainment? (5) Did competence change significantly during Sustainment, and did the rate of change differ from the prior rate during Implementation?

To formulate the model, several features of the data were important to consider. The research design led to a nested data structure, with repeated measurements (level-1) nested within providers (level-2), who were nested within clinics (level-3), which were nested within clusters (level-4). Nesting was addressed using mixed-effects regression models, with random effects for providers and clinics. However, there were too few clusters (N = 5) to support accurate estimation of a random effect, and as such, potential differences between clusters were controlled using a series of dummy-coded indicators.39 To aid interpretation, these indicators were grand-mean centered, with the resulting intercept reflecting an average across clinics, weighted for the proportion of providers in each cluster. The number of clinics was modest (N = 10) for estimating a random clinic effect; however, the primary parameters of interest were expected to be estimated accurately.40,41 Additionally, as noted above, the MI-CRS provided two scores: an overall average score, which was continuous and analyzed according to a Gaussian distribution, and an ordered categorical criterion score, which was analyzed according to an ordinal distribution (logit link). The ordinal outcome was reverse-coded to aid interpretation, with the resulting estimates for a particular category reflecting the log-odds of being above the lower categories. The analyses used an intention-to-treat approach, with all available data analyzed in the respective study phase. The models were estimated using HLM software.42 Statistical significance tests were based on the Wald test, with asymptotic SEs for the test statistic, and p < .05.

Two other features had specific implications for modeling change over time. First, there were repeated measurements within each of three phases (i.e., Preparation, Implementation, Sustainment). Second, intervention “condition” was not a single status for each provider; rather, it changed over time, with each provider participating in each phase. These features were addressed using a piecewise, or discontinuous change, modeling strategy in a single model for each outcome.43,44 The models included several terms that, when combined, provided targeted tests of the questions above: (1) a linear term across all measurement occasions (scaled in months) to test for change during Preparation; (2) an indicator for the start of Implementation (0 = Preparation, 1 = Implementation or Sustainment) to test for an initial increase following training; (3) a linear term from the start of Implementation (scaled in months, values of 0 prior to Implementation) to test for change during Implementation (and compared to Preparation); (4) an indicator for the start of Sustainment (0 = Preparation or Implementation, 1 = Sustainment) to test for an initial increase/decrease following Implementation; and (5) a linear term from the start of Sustainment (scaled in months, values of 0 prior to Sustainment) to test for change during Sustainment (and compared to Implementation).
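The five terms above can be constructed for each assessment as follows; this is a sketch under stated assumptions (variable and function names are illustrative), not the HLM setup itself:

```python
def piecewise_terms(month, impl_start, sus_start):
    """Design-matrix terms for the piecewise (discontinuous) change model.

    month: months since the clinic's first assessment.
    impl_start / sus_start: months at which Implementation and Sustainment begin
    for that clinic (these vary by cluster in the stepped-wedge design).
    """
    return {
        "prep_linear": month,                       # (1) change during Preparation
        "impl_phase": int(month >= impl_start),     # (2) jump at start of Implementation
        "impl_linear": max(0, month - impl_start),  # (3) change during Implementation
        "sus_phase": int(month >= sus_start),       # (4) jump at start of Sustainment
        "sus_linear": max(0, month - sus_start),    # (5) change during Sustainment
    }
```

Because the later linear terms are zero before their phase begins, each slope coefficient estimates the *difference* in rate of change from the preceding phase, which is why the planned contrasts were needed to test whether each phase's own slope differed from zero.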

Linear trends were based on providers’ actual assessment dates, computed as the number of months between the clinic’s start date (in each phase) and providers’ assessment dates. For each of these terms, random provider effects were specified based on the Wald test (random clinic effects were not considered due to the modest number of clinics). Planned contrasts were specified to obtain statistical significance tests that were not directly provided by the model formulation, specifically, tests for the significance of linear slopes during the Implementation phase (Was there significant change during Implementation?) and the Sustainment phase (Was there significant change during Sustainment?). Next, a follow-up model tested for differences between CoPs and CoPs+IF during the Sustainment phase. This was accomplished by adding an interaction between the randomized condition (0 = CoPs, 1 = CoPs+IF) and the two sustainment terms (i.e., the phase indicator and the linear term). Finally, a simplified, exploratory model (described in Results) tested for differences in MI competence criterion levels across three types of providers: medical, psychologist/social-worker, and paraprofessional.

RESULTS

MI Competence

Results are reported in Table 1 and illustrated in Figure 3. At the beginning of the Preparation phase, the average level of MI competence was 1.930 (Beginner level). During Preparation, this changed significantly, with competence increasing by 0.008 points per month. With this phase lasting 3–15 months (depending on the cluster), this meant that by the end of Preparation, competence was 1.95–2.05 (Beginner to Novice level). At the time of the first assessment in Implementation (i.e., the start of MI training), the overall level of competence increased significantly, by an average of 0.345 points, to 2.30–2.40 (Novice level). During the 12 months of Implementation, competence increased significantly, by 0.024 points per month. This was significantly faster, by 0.016 points per month, than the rate during Preparation. Thus, by the end of Implementation, MI competence was 2.59–2.69 (Intermediate level). At the time of the first assessment in the Sustainment phase (i.e., the end of Implementation), the overall level of MI competence decreased significantly, by 0.105 points, to 2.49–2.59 (Novice to Intermediate level). During Sustainment, which lasted 9–21 months, competence did not change significantly, with a near-zero rate of −0.002 points per month. However, compared to the rate during the Implementation phase, this was a significant decrease of 0.026 points per month.
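The score ranges quoted above follow from simple arithmetic on the fixed effects in Table 1 (differences from the reported 2.69 reflect rounding of intermediate values). A sketch, with constant names chosen here for illustration:

```python
# Fixed effects from Table 1 (MI-CRS average score)
INTERCEPT  = 1.930   # competence at the start of Preparation
PREP_SLOPE = 0.008   # points per month during Preparation
IMPL_JUMP  = 0.345   # initial increase at the start of Implementation
IMPL_SLOPE = 0.024   # points per month during Implementation

def predicted_score(prep_months, impl_months):
    """Predicted MI-CRS average score after the given phase durations."""
    return (INTERCEPT + PREP_SLOPE * prep_months
            + IMPL_JUMP + IMPL_SLOPE * impl_months)
```

For the first cluster (3-month baseline), `predicted_score(3, 12)` gives 2.587; for the last cluster (15-month baseline), `predicted_score(15, 12)` gives 2.683, matching the reported 2.59–2.69 end-of-Implementation range.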

Table 1.

Results of mixed-effects regression models testing for change in MI competence

MI-CRS Average Score a
Est. SE p 95% CI
Fixed Effects
 Preparation
  Intercept 1.930 0.042 <.001 [1.848, 2.013]
  Linear 0.008 0.004 .026 [0.001, 0.016]
 Implementation
  Phase 0.345 0.048 <.001 [0.250, 0.440]
  Linear 0.016 0.006 .011 [0.004, 0.028]
 Sustainment
  Phase −0.105 0.047 .025 [−0.197, −0.013]
  Linear −0.026 0.006 <.001 [−0.038, −0.014]
 Planned Contrasts b
  Imp. (Linear = 0) 0.024 0.005 <.001 [0.014, 0.034]
  Sus. (Linear = 0) −0.002 0.004 >.500 [−0.009, 0.006]
  Sus. v. Prep. 0.240 0.075 .002 [−0.124, 0.172]

Est. SD p

Variance Components
  Error 0.118 0.344
  Linear <0.001 0.007 .023
  Imp. Phase 0.058 0.240 .002
  Provider 0.095 0.308 <.001
  Clinic 0.005 0.071 .003

MI-CRS Criterion Score c
Est. SE p 95% CI (Est.) OR 95% CI (OR)

Fixed Effects
 Preparation
  Intercept −6.083 0.288 <.001 [−6.647 −5.519] 0.002 [0.001, 0.005]
  Linear 0.039 0.022 .073 [−0.004 0.082] 1.040 [0.996, 1.086]
 Implementation
  Phase 1.523 0.273 <.001 [0.988 2.058] 4.587 [2.677, 7.858]
  Linear 0.084 0.036 .019 [0.013 0.155] 1.087 [1.014, 1.166]
 Sustainment
  Phase −0.516 0.266 .053 [−1.037 0.005] 0.597 [0.354, 1.006]
  Linear −0.132 0.035 <.001 [−0.201 −0.063] 0.876 [0.818, 0.938]
 Thresholds
  Intermediate 2.988 0.150 <.001 [2.694 3.282] 19.843 [14.773, 26.653]
  Beginner 5.606 0.183 <.001 [5.247 5.965] 272.043 [189.799, 389.926]
 Planned Contrasts b
  Imp. (Linear = 0) 0.123 0.028 <.001 [0.068 0.178] 1.131 [1.070, 1.195]
  Sus. (Linear = 0) −0.009 0.020 >.500 [−0.048 0.030] 0.991 [0.952, 1.031]
  Sus. v. Prep. 1.001 0.426 .017 [0.171, 1.843] 2.738 [1.187, 6.315]

Est. SD p

Variance Components
  Imp. Phase 0.876 0.936 .010
  Provider 1.909 1.382 <.001
  Clinic 0.130 0.361 .004

Note. The models included a series of indicators (grand-mean centered) to control for differences across the clusters. The model intercepts reflect the average predicted score at the time of the first assessment in the baseline phase, weighted for the proportion of providers in each cluster.

a

MI-CRS raw average score.

b

Tests whether the rate of change in each phase was significantly different than zero because the model formulation tested for a difference in the rate of change between consecutive phases.

c

MI-CRS criterion score with four ordered categories (1 = Beginner, 2 = Novice, 3 = Intermediate, 4 = Advanced), analyzed according to an ordinal outcome distribution. The results indicate the log-odds of MI competence being in higher categories relative to lower categories. The intercept reflects the log-odds of a score in the Advanced category relative to the other categories. The thresholds reflect the increase in log-odds for a score in the Intermediate category and the Beginner category.

Figure 3.

Predicted MI-CRS Average Scores across Longitudinal Baseline, Implementation, and Sustainment Phases

Note. MI-CRS = MI Coach Rating Scale. MI-CRS average scores range from 1–4. The duration of the Baseline and Sustainment phases varied by cluster, with the Figure illustrating 12-month Baseline, Implementation, and Sustainment phases.

MI Competence Criterion Level

Results are reported in Table 1 and illustrated in Figure 4. The ordinal MI competence outcome (Beginner, Novice, Intermediate, Advanced) reflects the log-odds of being in higher categories relative to lower categories, and to aid interpretation, the values were converted to predicted probabilities. To describe the results, it was necessary to select a reference threshold. In the present case, this was the Intermediate category, and as such, the reported values reflect the probability of MI competence scores above the level of Beginner and Novice—that is, scores at the level of Intermediate or Advanced. At the beginning of the Preparation phase, the average probability of the MI competence score being Intermediate or Advanced was 4%, and during Preparation, this did not change significantly. At the time of the first assessment in the Implementation phase, the log-odds of Intermediate or Advanced MI competence increased significantly, by an average of 1.523 logits, to a probability of approximately 24%. Over the whole Implementation phase, the log-odds of Intermediate or Advanced MI competence increased significantly, at a rate of 0.123 logits per month, increasing the probability to approximately 55%. When comparing the rate of change during Implementation to the rate during Preparation, the increase of 0.084 logits was statistically significant. At the time of the first assessment in the Sustainment phase, the overall log-odds of Intermediate or Advanced MI competence decreased by 0.516 logits, but this was not statistically significant. Over the Sustainment phase, the log-odds of Intermediate or Advanced MI competence did not change significantly, with a near-zero rate of −0.009 logits per month. However, compared to the rate of change during the Implementation phase, this was a significant decrease of 0.132 logits.
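The quoted probabilities come from the inverse-logit transform of the Table 1 estimates. For example, at baseline the log-odds of an Intermediate-or-Advanced score are the Advanced intercept plus the Intermediate threshold; a sketch of this one conversion (variable names are illustrative):

```python
import math

def inv_logit(x):
    """Convert log-odds (logits) to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# From Table 1 (criterion score): intercept for Advanced, threshold for Intermediate
ADV_INTERCEPT = -6.083
INT_THRESHOLD = 2.988

# Baseline probability of Intermediate-or-Advanced competence: about 4%
p_baseline = inv_logit(ADV_INTERCEPT + INT_THRESHOLD)
```

Here `inv_logit(-3.095)` is roughly 0.043, reproducing the reported 4%; the later probabilities additionally fold in the phase and linear terms accumulated up to that timepoint.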

Figure 4.

Predicted MI-CRS Criterion Scores across Longitudinal Baseline, Implementation, and Sustainment Phases

Note. MI-CRS = MI Coach Rating Scale. MI-CRS criterion scores had four categories (1 = Beginner, 2 = Novice, 3 = Intermediate, 4 = Advanced) and were analyzed according to an ordinal outcome distribution. The choice of threshold determines the placement of the line. The line illustrated in the figure reflects the predicted probability of an MI-CRS criterion score in the Intermediate or Advanced category relative to the Novice or Beginner categories. The duration of the Baseline and Sustainment phases varied by cluster, with the Figure illustrating 12-month Baseline, Implementation, and Sustainment phases.

Communities of Practice versus Communities of Practice with Internal Facilitation

The model for each outcome was extended to test for a differential effect of CoPs and CoPs+IF on MI competence during the Sustainment phase. There were two comparisons: the overall level of MI competence at the start of Sustainment (i.e., the end of the Implementation phase) and the rate of change during the Sustainment phase. For both outcomes, there were no statistically significant differences between CoPs and CoPs+IF. For the MI competence score, the between-group difference at the start of Sustainment was 0.001 points (Est. = 0.001, SE = 0.079, p = .987, 95% CI [−0.154, 0.157]), and the difference in the rate of change was less than 0.001 points per month (Est. = −0.0004, SE = 0.008, p = .965, 95% CI [−0.016, 0.015]). For the log-odds of Intermediate or Advanced MI competence, the between-group difference at the start of Sustainment was 0.092 logits (Est. = 0.092, SE = 0.441, p = .836, OR = 1.096, 95% CI [−0.461, 2.607]), and the difference in the rate of change was −0.016 logits per month (Est. = −0.016, SE = 0.042, p = .711, OR = 0.984, 95% CI [0.906, 1.070]).

Differences in MI Competence by Provider Type

To compare MI competence criterion levels across provider types, the model was reduced to two dummy-coded indicators differentiating the Implementation and Sustainment phases from Preparation (retaining controls for clusters). Planned contrasts were specified for comparisons not directly provided by the model formulation. The results are in Table 2, and reported below are predicted probabilities for Intermediate- or Advanced-level competence (versus Beginner or Novice). During Preparation, Psychologists/Social-Workers had a significantly higher probability of Intermediate or Advanced competence compared to Medical providers (19% vs. 4%) and Paraprofessionals (19% vs. 4%). During Implementation and Sustainment, Psychologists/Social-Workers continued to have a higher probability compared to Medical providers (Implementation: 75% vs. 37%, Sustainment: 76% vs. 33%) and Paraprofessionals (Implementation: 75% vs. 32%, Sustainment: 76% vs. 38%). From Preparation to Implementation, all three groups had significant increases in the probability of Intermediate or Advanced competence (Medical: 4% to 37%, Psychologist/Social-Worker: 19% to 75%, Paraprofessional: 4% to 32%), but the amount of change did not differ by provider type. From Implementation to Sustainment, none of the groups changed significantly or differed on the amount of change.

Table 2.

Results of mixed-effects regression models testing for differences in MI competence criterion scores by provider type and phase

MI-CRS Criterion Score a
Est. SE p 95% CI (Est.) OR 95% CI (OR)
Fixed Effects
 Preparation
  Intercept −6.076 0.302 <.001 [−6.668, −0.157] 0.002 [0.001, 0.005]
  Psych./Social Worker 1.764 0.423 <.001 [0.935, 10.055] 5.836 [2.534, 13.442]
  Paraprofessional −0.028 0.333 .934 [−0.681, 6.499] 0.973 [0.504, 18.77]
 Implementation
  Phase 2.706 0.228 <.001 [2.259, 7.175] 14.962 [9.558, 23.423]
  Psych./Social Worker −0.125 0.383 .744 [−0.876, 7.382] 0.882 [0.417, 1.869]
  Paraprofessional −0.212 0.312 .496 [−0.824, 5.903] 0.809 [0.439, 1.490]
 Sustainment
  Phase 2.525 0.247 <.001 [2.041, 7.366] 12.496 [7.701, 20.767]
  Psych./Social Worker 0.089 0.400 .824 [−0.695, 7.929] 1.093 [0.498, 2.397]
  Paraprofessional 0.236 0.345 .494 [−0.440, 6.998] 1.266 [0.643, 2.492]
 Thresholds
  Intermediate 2.838 0.141 <.001 [2.562, 5.602] 17.082 [12.955, 22.524]
  Beginner 5.383 0.174 <.001 [5.042, 8.793] 217.708 [154.745, 306.290]
Planned Contrasts b
 Preparation
  Psych. vs. Para. 1.792 0.430 <.001 [0.949, 2.635] 6.001 [2.584, 13.941]
 Implementation
  Psych. vs. Med. 1.639 0.428 <.001 [0.800, 2.478] 5.150 [2.226, 11.916]
  Para. vs. Med. −0.240 0.334 >.500 [−0.895, 0.415] 0.787 [0.409, 1.514]
  Psych vs. Para. 1.879 0.432 <.001 [1.032, 2.726] 6.547 [2.807, 15.267]
 Sustainment
  Psych. vs. Med. 1.853 0.447 <.001 [0.977, 2.729] 6.379 [2.656, 15.319]
  Para. vs. Med. 0.208 0.365 >.500 [−0.507, 0.923] 1.231 [0.602, 2.518]
  Psych. vs. Para. 1.645 0.455 <.001 [0.753, 2.537] 5.181 [2.124, 12.639]
 Prep. to Imp. (Change)
  Psych. 2.580 0.320 <.001 [1.953, 3.207] 13.197 [7.048, 24.71]
  Para. 2.493 0.230 <.001 [2.042, 2.944] 12.098 [7.708, 18.988]
  Psych. vs. Para. 0.087 0.385 >.500 [−0.668, 0.842] 1.091 [0.513, 2.320]
 Imp. to Sus. (Change)
  Medical 0.180 0.222 >.500 [−0.255, 0.615] 1.197 [0.775, 1.850]
  Psych./Social Worker −0.034 0.305 >.500 [−0.632, 0.564] 0.967 [0.532, 1.757]
  Paraprofessional −0.268 0.225 .231 [−0.709, 0.173] 0.765 [0.492, 1.189]
  Psych. vs. Med. −0.214 0.263 >.500 [−0.729, 0.301] 0.807 [0.482, 1.352]
  Para. vs. Med. −0.448 0.222 .354 [−0.883, −0.013] 0.639 [0.413, 0.987]
  Psych. vs. Para. 0.234 0.265 >.500 [−0.285, 0.753] 1.264 [0.752, 2.124]

Est. SD p

Variance Components
  Provider 2.111 1.453 <.001
  Clinic 0.051 0.226 .018

Note. The models included a series of indicators (grand-mean centered) to control for differences across the clusters. The model intercepts reflect the average predicted criterion score for the baseline phase, weighted for the proportion of providers in each cluster.

a

MI-CRS criterion score with four ordered categories (1 = Beginner, 2 = Novice, 3 = Intermediate, 4 = Advanced), analyzed according to an ordinal outcome distribution. The results indicate the log-odds of MI competence being in higher categories relative to lower categories. The intercept reflects the log-odds of a score in the Advanced category relative to the other categories. The thresholds reflect the increase in log-odds for a score in the Intermediate category and the Beginner category.

b

Planned contrasts for significance tests not provided by the model formulation.

DISCUSSION

The TMI implementation package, grounded in the EPIS framework, was rigorously tested and resulted in significantly improved MI competence to promote autonomy-supportive and collaborative communication that reinforces motivation to change HIV-related behaviors in young people. These improvements were sustained after centralized trainer support ended, and total provider training time to achieve this outcome was only 12–16 hours over 12 months. There was a large initial improvement in competence following the initial training (workshop plus two coaching sessions), and continued steady improvement over the subsequent implementation period, suggesting that the full 12-month intervention was efficient and effective. However, cost-effectiveness analysis is necessary to confirm this conclusion.

The majority of psychosocial staff achieved at least intermediate competency, the level of fidelity typically required in randomized clinical trials,45 suggesting that psychologists and social workers may be the most successful internal facilitators. However, one-third of physicians and paraprofessional staff achieved this competency, which may have positive rippling effects across the organization. Further, analysis of ATN 154,33 a large mixed-methods study of EPIS constructs over the course of TMI and other implementation studies in the Scale It Up ATN U19, is in progress. These data will further explicate the variability in adherence to TMI strategies and associated competence at both the individual-provider and clinic levels.

There was no difference in competence between Communities of Practice with versus without a trained internal facilitator, suggesting that training internal coaches may not be worth the resources involved. The feasibility of internal facilitator training is also questionable: of the five clinics randomized to this condition, one declined funding for the position because of staffing limitations, and another could not identify a facilitator until the end of the study. Mixed-methods analyses of EPIS factors associated with variability in sustainment are also underway. As increased attention is paid to implementation of evidence-based behavioral interventions to improve the HIV prevention and treatment cascades, it is critical to understand which clinics and providers are capable of delivery with training and support, and when centralized delivery by experienced providers or program developers is warranted.

Although a Type 3 hybrid trial focuses primarily on implementation outcomes, and MI has already been shown to improve patient outcomes in young people, secondary outcome analyses utilizing electronic health records to assess viral suppression, retention in care, and STIs are underway. Limitations of the current study included the use of standard patient interactions rather than observation of real patients, phone-based rather than in-person interactions, not addressing staff turnover in training and outcome assessment, lack of data on clinic activities during the sustainment period, and lack of data on patient perceptions. Furthermore, the study focused on ATN clinics that were primarily academic medical centers; a pilot of TMI with community-based clinics is underway. Given that MI-based interventions have been shown to reduce perceived stigma,14 future studies can address whether TMI's improvements in provider communication are recognized by patients.

This is the first trial to test implementation strategies to improve youth HIV providers' communication and skill in delivering evidence-based behavioral interventions. The Department of Health and Human Services specifies the HIV health force as a primary pillar of ending the HIV epidemic: "a boots-on-the-ground workforce of culturally competent and committed public health professionals."17 TMI may be an important tool to address this mandate to end the epidemic in young people.

Acknowledgments

The authors would like to thank Carolyn Blue, Maurice Bulls, Demetria Cain, Sara Green, Scott Jones, Leah King, Sonia Lee, Sarah Martinez, clinic PIs, study coordinators, and the passionate adolescent HIV providers.

This work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN 146), as part of the Scale It Up Program (U19HD089875; PI: Naar).

Footnotes

Compliance with ethical standards: All authors declare that they have no conflict of interest.

Contributor Information

Sylvie Naar, Florida State University, Center for Translational Behavioral Science.

Karen MacDonell, Wayne State University, Department of Family Medicine and Public Health Sciences.

Jason Chapman, Oregon Social Learning Center.

Lisa Todd, Wayne State University, Department of Family Medicine and Public Health Sciences.

Yuxia Wang, Florida State University, Department of Behavioral Sciences and Social Medicine.

Julia Sheffler, Florida State University, Center for Translational Behavioral Science.

M. Isabel Fernandez, Nova Southeastern University, College of Osteopathic Medicine.

References

  • 1. White House Office of National AIDS Policy. National HIV/AIDS Strategy for the United States: updated to 2020. Accessed April 2020. https://www.hiv.gov/federal-response/national-hiv-aids-strategy/nhas-update
  • 2. Koenig LJ, Hoyer D, Purcell DW, Zaza S, Mermin J. Young People and HIV: A Call to Action. Am J Public Health. 2016;106(3):402–405. doi:10.2105/AJPH.2015.302979
  • 3. Lee S, Kapogiannis BG, Allison S. Improving the Youth HIV Prevention and Care Continuums: The Adolescent Medicine Trials Network for HIV/AIDS Interventions. JMIR Res Protoc. 2019;8(3):e12050. doi:10.2196/12050
  • 4. Centers for Disease Control and Prevention. HIV and Youth. Accessed April 2020. https://www.cdc.gov/hiv/group/age/youth/index.html
  • 5. MacPherson P, Munthali C, Ferguson J, et al. Service delivery interventions to improve adolescents' linkage, retention and adherence to antiretroviral therapy and HIV care. Tropical Medicine & International Health. 2015;20(8):1015–1032. doi:10.1111/tmi.12517
  • 6. Eisinger RW, Dieffenbach C, Fauci A. Role of Implementation Science. Journal of Acquired Immune Deficiency Syndromes. 2019;82:S171–S172.
  • 7. Miller WR, Rollnick S. Motivational Interviewing: Helping People Change. Guilford Press; 2012.
  • 8. Chen X, Murphy DA, Naar-King S, Parsons JT. A Clinic-based Motivational Intervention Improves Condom Use Among Subgroups of Youth Living With HIV. Journal of Adolescent Health. 2011;49(2):193–198. doi:10.1016/j.jadohealth.2010.11.252
  • 9. Naar-King S, Outlaw A, Green-Jones M, Wright K, Parsons JT. Motivational interviewing by peer outreach workers: a pilot randomized clinical trial to retain adolescents and young adults in HIV care. AIDS Care. 2009;21(7):868–873. doi:10.1080/09540120802612824
  • 10. Naar-King S, Parsons JT, Murphy DA, Chen X, Harris DR, Belzer ME. Improving Health Outcomes for Youth Living With the Human Immunodeficiency Virus: A Multisite Randomized Trial of a Motivational Intervention Targeting Multiple Risk Behaviors. Archives of Pediatrics & Adolescent Medicine. 2009;163(12):1092–1098. doi:10.1001/archpediatrics.2009.212
  • 11. Outlaw AY, Naar-King S, Parsons JT, Green-Jones M, Janisse H, Secord E. Using Motivational Interviewing in HIV Field Outreach With Young African American Men Who Have Sex With Men: A Randomized Clinical Trial. American Journal of Public Health. 2010;100(Suppl 1):S146–S151. doi:10.2105/AJPH.2009.166991
  • 12. Naar S, Robles G, Kolmodin MacDonell K, et al. Comparative Effectiveness of Community vs Clinic Healthy Choices Motivational Intervention to Improve Health Behaviors Among Youth Living with HIV: A Randomized Trial. Journal of the American Medical Association. 2020.
  • 13. Mbuagbaw L, Ye C, Thabane L. Motivational interviewing for improving outcomes in youth living with HIV. Cochrane Database of Systematic Reviews. 2012;(9). doi:10.1002/14651858.CD009748.pub2
  • 14. Budhwani H, Robles G, Starks TJ, MacDonell KK, Dinaj V, Naar S. Healthy Choices Intervention is Associated with Reductions in Stigma Among Youth Living with HIV in the United States (ATN 129). AIDS and Behavior. 2020:1–9.
  • 15. A Guide to Primary Care of People with HIV/AIDS. U.S. Department of Health and Human Services; 2004.
  • 16. Kahn JA, Goodman E, Huang B, Slap GB, Emans SJ. Predictors of Papanicolaou smear return in a hospital-based adolescent and young adult clinic. Obstetrics & Gynecology. 2003;101(3):490–499. doi:10.1016/S0029-7844(02)02592-9
  • 17. Magnus M, Jones K, Phillips G, et al. Characteristics Associated With Retention Among African American and Latino Adolescent HIV-Positive Men: Results From the Outreach, Care, and Prevention to Engage HIV-Seropositive Young MSM of Color Special Project of National Significance Initiative. JAIDS Journal of Acquired Immune Deficiency Syndromes. 2010;53(4):529–536. doi:10.1097/QAI.0b013e3181b56404
  • 18. Substance Use and Dependence Among HIV-Infected Adolescents and Young Adults. New York State Department of Health; 2009.
  • 19. Centers for Disease Control and Prevention, HIV/AIDS Prevention Research Synthesis Project. Compendium of HIV Prevention Interventions with Evidence of Effectiveness. Division of HIV/AIDS Prevention, Centers for Disease Control and Prevention; 1999:1–64. https://www.cdc.gov/hiv/pdf/research/interventionresearch/rep/prevention_research_compendium.pdf
  • 20. Naar S, Suarez M. Motivational Interviewing with Adolescents and Young Adults. 2nd ed. Guilford Press; in press.
  • 21. Hallgren KA, Dembe A, Pace BT, Imel ZE, Lee CM, Atkins DC. Variability in motivational interviewing adherence across sessions, providers, sites, and research contexts. Journal of Substance Abuse Treatment. 2018;84:30–41.
  • 22. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72:1050–1062.
  • 23. Mitcheson L, Bhavsar K, McCambridge J. Randomized trial of training and supervision in motivational interviewing with adolescent drug treatment practitioners. Journal of Substance Abuse Treatment. 2009;37(1):73–78.
  • 24. Moyers TB, Manuel JK, Wilson PG, Hendrickson SM, Talcott W, Durand P. A randomized trial investigating training in motivational interviewing for behavioral health providers. Behavioural and Cognitive Psychotherapy. 2008;36(2):149.
  • 25. Moyers TB, Martin T, Houck JM, Christopher PJ, Tonigan JS. From in-session behaviors to drinking outcomes: A causal chain for motivational interviewing. Journal of Consulting and Clinical Psychology. 2009;77(6):1113.
  • 26. MacDonell KK, Pennar AL, King L, Todd L, Martinez S, Naar S. Adolescent HIV Healthcare Providers' Competencies in Motivational Interviewing Using a Standard Patient Model of Fidelity Monitoring. AIDS and Behavior. 2019:1–3.
  • 27. Naar S, Pennar A, Wang B, Brogan Hartlieb K, Fortenberry D. Tailored Motivational Interviewing (TMI): Translating Basic Science in Skills Acquisition in a Behavioral Intervention to Improve Community Health Worker Motivational Interviewing Competence for Youth Living with HIV. Health Psychology. Under review.
  • 28. Todd L, MacDonell K, Naar S, Carcone AI, Secord E. Tailored Motivational Interviewing (TMI): A Pilot Implementation-Effectiveness Trial to Promote MI Competence in Adolescent HIV Clinics. Manuscript submitted for publication. 2020.
  • 29. Naar S, MacDonell K. Tailored Motivational Interviewing: Implementation and Sustainment of Evidence-Based Patient-Provider Communication in Adolescent HIV Clinics. Paper presented at: AIDSImpact 2019; 2019; London, UK.
  • 30. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care. 2012;50(3):217–226.
  • 31. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23.
  • 32. Pennar AL, Dark T, Simpson KN, et al. Cascade Monitoring in Multidisciplinary Adolescent HIV Care Settings: Protocol for Utilizing Electronic Health Records. JMIR Res Protoc. 2019;8(5):e11185. doi:10.2196/11185
  • 33. Idalski Carcone A, Coyle K, Gurung S, et al. Implementation Science Research Examining the Integration of Evidence-Based Practices Into HIV Prevention and Clinical Care: Protocol for a Mixed-Methods Study Using the Exploration, Preparation, Implementation, and Sustainment (EPIS) Model. JMIR Res Protoc. 2019;8(5):e11202.
  • 34. Millis BJ, Cottell PG Jr. Cooperative Learning for Higher Education Faculty. Series on Higher Education. ERIC; 1997.
  • 35. Kocak R. The effects of cooperative learning on psychological and social traits among undergraduate students. Social Behavior and Personality: An International Journal. 2008;36(6):771–782.
  • 36. Barwick MA, Peters J, Boydell K. Getting to Uptake: Do Communities of Practice Support the Implementation of Evidence-Based Practice? Journal of the Canadian Academy of Child and Adolescent Psychiatry. 2009;18(1):16–29.
  • 37. Naar S, Chapman JE, Cunningham PB, Ellis DA, Todd L, MacDonell K. Development of the Motivational Interviewing Coach Rating Scale (MI-CRS) for Health Equity Implementation Contexts. Health Psychology. In press.
  • 38. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment. 1994;6(4):284.
  • 39. McNeish D, Stapleton LM. Modeling clustered data with very few clusters. Multivariate Behavioral Research. 2016;51(4):495–518.
  • 40. Maas CJM, Hox JJ. Sufficient Sample Sizes for Multilevel Modeling. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences. 2005;1:85–91.
  • 41. Snijders TA, Bosker RJ. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. Sage; 2011.
  • 42. HLM 8: Hierarchical Linear and Nonlinear Modeling. Version 8. Scientific Software International; 2019.
  • 43. Raudenbush SW, Bryk AS. Hierarchical Linear Models. 2nd ed. Sage; 2002.
  • 44. Singer JD, Willett JB. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press; 2003.
  • 45. Naar S, Robles G, MacDonell K, et al. Comparative Effectiveness of Community vs Clinic Healthy Choices Motivational Intervention to Improve Health Behaviors Among Youth Living with HIV: A Randomized Trial. JAMA Network Open. 2020;3(8):e2014659.
