Translational Behavioral Medicine. 2021 Oct 26;12(1):ibab137. doi: 10.1093/tbm/ibab137

Using factorial mediation analysis to better understand the effects of interventions

Jillian C Strayhorn, Linda M Collins, Timothy R Brick, Sara H Marchese, Angela Fidler Pfammatter, Christine Pellegrini, Bonnie Spring
PMCID: PMC8764990  PMID: 34698351

Abstract

To improve understanding of how interventions work or why they do not work, there is need for methods of testing hypotheses about the causal mechanisms underlying the individual and combined effects of the components that make up interventions. Factorial mediation analysis, i.e., mediation analysis applied to data from a factorial optimization trial, enables testing such hypotheses. In this commentary, we demonstrate how factorial mediation analysis can contribute detailed information about an intervention’s causal mechanisms. We briefly review the multiphase optimization strategy (MOST) and the factorial experiment. We use an empirical example from a 2⁵ factorial optimization trial to demonstrate how factorial mediation analysis opens possibilities for better understanding the individual and combined effects of intervention components. Factorial mediation analysis has important potential to advance theory about interventions and to inform intervention improvements.

Keywords: Mediation analysis, Factorial experiment, Optimization trial, Multiphase optimization strategy


Mediation analysis based on factorial optimization trial results has important potential to help intervention scientists understand why multicomponent interventions work or do not work.


Implications.

Practice: The insights factorial mediation analysis yields about intervention components’ individual and combined effects can help practitioners prepare for successful implementation of the intervention in real-world contexts.

Policy: The insights factorial mediation analysis yields about intervention components’ individual and combined effects can help policymakers make informed policy decisions about the selection of interventions.

Research: The insights factorial mediation analysis yields about intervention components’ individual and combined effects can help intervention scientists refine their conceptual models, make decisions about the composition of optimized interventions, and/or identify next steps for continual optimization.

This commentary complements recent articles by Lo et al. (2020) and by Rosas et al. (2020) on the use of mediation analysis to better understand why interventions do or do not work. Each of these articles serves as an excellent model of how mediation analysis of data from a two-arm randomized controlled trial (RCT) enables the testing of hypotheses about mediation of an overall treatment effect: for Lo et al. [1], the overall effect of the Strong Hearts, Healthy Communities (SHHC) intervention, and for Rosas et al. [2], the overall effect of an integrated collaborative care intervention. In each case, the authors demonstrate how mediation analysis can yield information that, among other things, may help to inform future intervention improvements.

We wish to respond in particular to a suggestion from Lo et al. [1] that “additional studies… that are specifically designed and powered to examine the pathways through which multilevel health behavior interventions work, and to what degree, are needed.” One limitation of the way mediation analysis is typically carried out, i.e. based on data from a two-arm RCT, is that it does not extend to testing hypotheses about mediation of the effects of the individual components that make up an intervention. Yet many effective interventions, including those Lo et al. [1] and Rosas et al. [2] describe, include multiple components that may plausibly contribute to change in the outcome of interest via different mechanisms. In fact, in intervention science, a set of intervention components is frequently assembled with the idea that each component will affect different specific key behavioral antecedents of the outcome of interest (e.g., [3–5]). If the full potential for mediation analysis to contribute useful, actionable insights about the mechanisms underlying complex interventions is to be met, there is need for (a) the use of experimental designs that enable intervention scientists to obtain information about the individual and joint performance of components within an intervention and (b) strategies for mediation analysis that facilitate the testing of hypotheses about the individual and combined effects of intervention components.

The multiphase optimization strategy (MOST; e.g., [6]) is an emerging alternative framework for intervention science that explicitly recognizes and capitalizes on the hypothesized contributions of individual intervention components. In MOST, the individual and combined effects of intervention components are estimated in a rigorous optimization trial, usually using an experimental design from the factorial family. The information about components’ individual and combined effects is used to optimize the intervention by weeding out underperforming components and selecting components so as to achieve intervention EASE: i.e., to balance effectiveness against affordability, scalability, and efficiency. MOST borrows concepts from engineering, among them continual optimization, the idea that one purpose of an optimization trial is to lay the groundwork for subsequent research aimed at programmatic improvement of the intervention. Mediation analysis of data from a factorial optimization trial conducted within the MOST framework opens intriguing possibilities for understanding, improving upon, and continually optimizing behavioral and biobehavioral interventions [7].

The purpose of this commentary is to convey a sense of the possibilities offered by application of mediation analysis to data from the special case of the factorial experiment in which each factor has two levels, known as the 2ᵏ factorial (e.g., [3–6, 8–10]). We provide an empirical example, a 2⁵ (i.e., 2 × 2 × 2 × 2 × 2) factorial optimization trial used in Opt-IN, a behavioral weight loss intervention for adults with overweight and obesity [8]. We show how, unlike the RCT, the factorial optimization trial enables testing specific a priori hypotheses and/or conducting exploratory analyses concerning how individual intervention components work or do not work, as opposed to how the complete intervention works as a package. We suggest that mediation analysis of data from a factorial optimization trial can provide a detailed and nuanced look at the mechanisms underlying interventions and intervention components, including interactions among components.

BRIEF INTRODUCTION TO THE 2ᵏ FACTORIAL OPTIMIZATION TRIAL

In a 2ᵏ factorial optimization trial, each of k intervention components is operationalized as a two-level factor, most commonly with the levels “Off/On” or “Low/High.” In a full 2ᵏ factorial, participants are randomized into 2ᵏ experimental conditions, one for each combination of factor levels. In a balanced factorial experiment, exactly half of the sample of participants is assigned to each of the two levels of each of the k factors.
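As a concrete sketch (an illustration with five generically named factors, not part of the Opt-IN analysis), the condition matrix of a full 2⁵ factorial can be enumerated directly, and the balance property checked:

```python
# Illustration: the 32 conditions of a full 2^5 factorial with
# effect-coded levels (-1 = "Off"/"Low", +1 = "On"/"High").
from itertools import product

factors = ["A", "B", "C", "D", "E"]  # generic names; here k = 5
conditions = [dict(zip(factors, levels))
              for levels in product((-1, +1), repeat=len(factors))]

print(len(conditions))  # 2**5 = 32 experimental conditions

# Balance: each factor is at +1 in exactly half of the conditions,
# so every effect-coded column sums to zero.
for name in factors:
    assert sum(row[name] for row in conditions) == 0
```

With a balanced design, randomizing equal numbers of participants to each of the 32 rows yields the property described above: half of the full sample experiences each level of each factor.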

We assume the use of effect coding to represent factor levels in statistical analysis, as is traditional in factorial experiments [11]. We remind readers that effect coding (−1, +1) and dummy coding (0, 1) estimate effects that must be interpreted differently [12]. When effect coding is used, regression coefficients (b-weights) correspond to the classical definition of main and interaction effects (when the coefficients are multiplied by a scaling constant, which does not affect hypothesis testing). This means that a significant main effect for any given factor indicates that the factor demonstrates an effect on average, across all combinations of levels of the remaining k − 1 factors. A two-way interaction effect indicates that the effect of one factor differs depending on the level to which a second factor is set. A three-way interaction effect indicates that a two-way interaction between two factors differs depending on the level to which a third factor is set, etc. Interaction effects may indicate synergistic or antagonistic effects among components: that, when combined, intervention components have effects that are larger or smaller, respectively, than the sum of their individual effects. In a full 2k factorial experiment, up to a k-way interaction effect may be estimated.
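A small simulation (illustrative numbers only) can make the effect-coding convention concrete: with levels coded −1 and +1, the fitted regression coefficient for a factor is half the difference between outcome means at its two levels, i.e., the classical main effect divided by the scaling constant of 2.

```python
# Illustration (simulated data): under effect coding (-1, +1), the
# regression coefficient for a balanced factor equals half the mean
# difference between its two levels.
import numpy as np

rng = np.random.default_rng(0)
x = np.repeat([-1.0, 1.0], 200)             # one balanced, effect-coded factor
y = 3.0 + 1.5 * x + rng.normal(0, 1, 400)   # true coefficient: 1.5

# Least-squares fit of y = b0 + b1 * x.
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# Classical main effect = mean difference between levels = 2 * b1.
mean_diff = y[x == 1].mean() - y[x == -1].mean()
assert abs(b1 - mean_diff / 2) < 1e-8
print(round(b1, 2), round(mean_diff, 2))
```

Under dummy coding (0, 1) the coefficient would instead equal the full mean difference between levels, which is one reason effect-coded and dummy-coded coefficients must be interpreted differently [12].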

EMPIRICAL EXAMPLE: THE Opt-IN TRIAL

The 2⁵ factorial optimization trial Opt-IN tested five experimental intervention components that were hypothesized to contribute to successful weight loss [8, 13, 14]. The experimental intervention components that were tested in the Opt-IN trial are described in Table 1; more information about these components, as well as the constant component that all participants received, is provided in Spring et al. [8]. Demographic information about the 562 participants in the Opt-IN trial is also provided in detail in Spring et al. [8]. The full 2⁵ factorial design used in Opt-IN can be found in Supplemental Table A.

Table 1.

Opt-IN experimental intervention components [8]

Experimental component | Description | Corresponding factor | Factor levels
Coaching calls | Telephone-based sessions with a weight loss coach | COACH | “12 calls” versus “24 calls”
Text messages | Automated messages providing support and accountability toward goal attainment or facilitation toward those goals | TEXT | “Off” versus “On”
Buddy training | The provision of phone and webinar-based formal support training for participants’ self-selected weight loss buddies | BUDDY | “Off” versus “On”
Meal replacement recommendations | Recommendations to use meal replacement products regularly | MEAL | “Off” versus “On”
Contact with a primary care physician | Sending a mailed progress report about participants’ weight loss goals and progress to their primary care physicians | PCP | “Off” versus “On”

Mediation in Opt-IN

Opt-IN’s five experimental intervention components (Table 1) were expected to influence weight loss through nine potential mediators: Facilitation, participants’ self-reported beliefs that the study tools they received helped them to achieve their goals; two aspects of self-efficacy, Self-efficacy for Dieting and Self-efficacy for Exercise; four aspects of self-regulation, Restraint, Disinhibition, and Autonomous and Controlled Motivation; and two aspects of supportive accountability, Therapeutic Alliance and Perceived Autonomy Support. Put another way, it was expected that Opt-IN’s experimental components would produce change in these important behavioral antecedents of weight loss, and that change in these antecedents would produce weight loss.

General analytic strategy

Operationalization of the outcome. Measurement in Opt-IN took place at three time points: baseline, three months, and six months. For ease of interpretation, we defined the outcome variable as Weight Loss (in lbs.) over six months: weight at baseline minus weight at six months. As a result, a more positive value on the outcome variable Weight Loss implies more successful (larger) weight loss. (A different analytic strategy was used in the primary analysis of the results of the Opt-IN trial, described in Spring et al. [8].)

Operationalization of the mediators. We defined potential mediators in terms of three-month measures in the interest of temporal precedence. We decided that use of this strategy would give us the best chance of evaluating whether early change in a putative mediator was associated with change in the outcome across the full period of interest, from baseline to six months. We operationalized mediators within the self-efficacy and self-regulation categories as change scores (we used three-month scores minus baseline scores as mediators), and we operationalized Facilitation and mediators within the supportive accountability category using the three-month scores themselves. Information about the measurement of the nine total potential mediators (including scales, internal consistency, and sample items) is provided in Supplemental Table B. Descriptive statistics for Weight Loss and the putative mediators are provided in Supplemental Table C.

Analytic strategy. Mediation analysis can be applied to a factorial experiment in various ways. To evaluate mediated effects we used the joint significance test [15], a causal steps approach that tests two null hypotheses: first, that the effect of mediator M on outcome Y (the b path) is equal to zero, and second, that the effect of the independent variable X on mediator M (the a path) is equal to zero. Finding both the a and b paths associated with a potential mediator to be significantly different from zero is considered evidence in favor of mediation. Unlike the causal steps approaches proposed by Judd and Kenny [16] and Baron and Kenny [17], the joint significance test does not require that a significant total effect be established [15]. We chose the joint significance test for its alignment with a fundamental purpose of factorial mediation analysis of data from an optimization trial: to test the hypotheses that informed the inclusion of specific intervention components, e.g., “component X is expected to effect change in mediator M (the hypothesized a path), which is an important behavioral antecedent of change in outcome Y (the hypothesized b path).” In this case, the so-called “direct effect,” i.e., the effect of component X on outcome Y via some unspecified mediator(s) other than mediator M, is less relevant; the question is whether the intervention component appears to effect change via its hypothesized mediator, not whether the component effects change via that hypothesized mediator exclusively.
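A minimal sketch of the joint significance test on simulated data (hypothetical effect sizes, not Opt-IN estimates) might look like the following; the a path is tested in a regression of M on X, and the b path in a regression of Y on M adjusting for X:

```python
# Sketch of the joint significance test on simulated data. Evidence of
# mediation requires BOTH the a path (X -> M) and the b path (M -> Y,
# adjusting for X) to differ significantly from zero.
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """OLS with an intercept prepended; two-sided t-test p-value per coefficient."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    df = len(y) - X.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), df)

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=500)          # effect-coded component factor
m = 0.4 * x + rng.normal(0, 1, 500)            # true a path = 0.4
y = 0.5 * m + 0.1 * x + rng.normal(0, 1, 500)  # true b path = 0.5

p_a = ols_pvalues(x[:, None], m)[1]                # slope of M on X
p_b = ols_pvalues(np.column_stack([m, x]), y)[1]   # slope of Y on M, given X

# Joint significance: both paths must be significant.
mediated = (p_a < 0.05) and (p_b < 0.05)
print(mediated)  # True for these simulated effect sizes
```

In practice the models were fit as structural equation models in Mplus, as described below; this sketch only conveys the logic of testing the two paths jointly.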

We included both main and interaction effects, 31 in total, in single mediator models; an example is provided in Fig. 1. In a balanced factorial experiment, these main and interaction effects are completely uncorrelated. In Opt-IN, which, as noted below, was characterized by a level of attrition typical of longitudinal intervention research [18], the main and interaction effects were nearly uncorrelated (thus all 31 could be included in the model with little loss of the ability to estimate mediated effects). We included all 31 effects so that we could examine mediation of both main and interaction effects in each single mediator model. To avoid additional model complexity, we relied primarily on single mediator models. We fit these models as structural equation models using Mplus Version 8.0 [19].
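The 31 effects referred to here correspond to one effect-coded column per non-empty subset of the five factors. A brief sketch (illustrative, using the Opt-IN factor count) shows how these columns are constructed and why, in a balanced full factorial, they are mutually orthogonal:

```python
# Illustration: the 31 effect-coded main and interaction columns of a
# full 2^5 factorial (5 main effects, 10 two-way, 10 three-way, 5
# four-way, and 1 five-way interaction), with a check that in a balanced
# design they are mutually orthogonal, i.e., uncorrelated.
from itertools import combinations, product
import numpy as np

design = np.array(list(product([-1, 1], repeat=5)))  # 32 conditions x 5 factors

effects = {}  # one column per non-empty subset of the factors
for r in range(1, 6):
    for subset in combinations(range(5), r):
        effects[subset] = design[:, list(subset)].prod(axis=1)

print(len(effects))  # 31 effect columns

cols = np.column_stack(list(effects.values()))
# Distinct effect columns have dot product zero; each has squared norm 32.
assert np.array_equal(cols.T @ cols, 32 * np.eye(31))
```

This orthogonality is what allows all 31 effects to be estimated in one model without multicollinearity; with attrition the columns become only approximately orthogonal, as noted above.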

Fig 1. Example single mediator model with all main and interaction effects, Opt-IN. Note. Some two-way, three-way, and four-way interaction effects are omitted from this figure for simplicity. Shading emphasizes the mediation of the BUDDY × PCP × TEXT interaction effect.

Missing data. Participant retention in Opt-IN at the three-month and six-month timepoints was 90.6% and 84.3%, respectively. Missing data in the present analyses were assumed to be missing at random (MAR) and handled using Full Information Maximum Likelihood [20]. In each single mediator model, the other potential mediators (not currently being modeled) were included as auxiliary variables; thus each potential mediator contributed to all missing data models.

USING FACTORIAL MEDIATION ANALYSIS TO UNDERSTAND INDIVIDUAL COMPONENT EFFECTS

Significant main effect

When a factor corresponding to an intervention component has a significant main effect on the outcome, mediation analysis can shed light on the mechanism(s) underlying that effect. In Opt-IN, BUDDY, the factor associated with Buddy Training, had a significant main effect in the desired direction on weight loss [8]. It had been anticipated that Buddy Training would target Facilitation, as well as mediators in the self-efficacy and self-regulation categories. Mediation analysis addresses the following question: can the significant main effect of BUDDY be understood by testing hypotheses concerning the underlying functioning of Buddy Training?

Our exploratory mediation analysis with the main effect of BUDDY does suggest that Buddy Training is functioning in ways that are consistent with expectations: Buddy Training targets important social cognitive mediators, Self-Efficacy for Dieting and Facilitation in particular. Self-Efficacy for Dieting was associated with Weight Loss (the b path; b = 0.233 (SE = 0.045), p < 0.001), and BUDDY was associated with a significant increase in Self-Efficacy for Dieting (the a path; a = 0.961 (SE = 0.356), p = 0.007). Similarly, Facilitation was associated with Weight Loss (the b path; b = 4.188 (SE = 0.562); p < 0.001), and BUDDY was associated with greater Facilitation (the a path; a = 0.090 (SE=0.039); p = 0.020).

There was less evidence of mediation with the putative mediators in the self-regulation category. With Autonomous Motivation, for example, there is no evidence of mediation if a traditional cutoff of α = 0.05 is set a priori. Autonomous Motivation was associated with Weight Loss (the b path; b = 1.425 (SE = 0.425); p = 0.001), but BUDDY was not associated with growth in Autonomous Motivation (the a path; a = 0.091 (SE = 0.049); p = 0.064). However, some may choose to interpret the association between BUDDY and Autonomous Motivation as a marginal effect. We interpret this result to mean that Autonomous Motivation is important in terms of its relationship with Weight Loss, but that BUDDY did not have a sufficient effect on Autonomous Motivation, given traditional thresholds.

Results like these may contribute to decision-making about the composition of the optimized intervention. For example, the finding that Buddy Training appeared to be functioning as anticipated via social cognitive mechanisms may reinforce the decision to include the Buddy Training component in the optimized intervention. This finding may also contribute to theory about the role Buddy Training plays in weight loss, and it may have implications for the training of interventionists. For example, it may be useful for interventionists to understand that key purposes of buddies include helping participants to build their sense of self-efficacy and their sense that they have someone in their own environment supporting their continued effort.

Non-significant main effect

When a factor corresponding to a component of an intervention does not have a significant main effect on the outcome, exploratory analyses can shed light on why. In Opt-IN, COACH, the factor associated with Coaching Calls, did not have a significant main effect on weight loss [8]. Contrary to expectations, there was no benefit to weight loss detected with a higher (24 calls) versus lower (12 calls) dose of coaching calls. There is prior empirical evidence for the usefulness of 12 coaching calls in weight loss (e.g., [21]); the question motivating the selection of factor levels for COACH in Opt-IN was whether 24 coaching calls would perform better than 12, given additional evidence supporting higher doses in weight loss interventions (e.g., [22]). Because the higher versus lower dose of Coaching Calls was expected to target putative mediators within the supportive accountability category, Therapeutic Alliance and Perceived Autonomy Support, the finding that COACH did not have the anticipated effect on Weight Loss could potentially be understood in terms of a failure to support either the hypotheses linking the higher versus lower dose of Coaching Calls to the supportive accountability mediators or the hypotheses linking the supportive accountability mediators to Weight Loss. In fact, the supportive accountability variables that COACH was expected to target were associated with Weight Loss, suggesting that the conceptual theory linking the hypothesized mediators to weight loss was sound. However, the effects of COACH on the supportive accountability variables were not significant, suggesting that the higher versus lower dose of Coaching Calls did not have the anticipated effects on Therapeutic Alliance and Perceived Autonomy Support. 
If, by contrast, exploratory analyses had suggested that COACH had effects on the supportive accountability variables, but that these variables were not associated with Weight Loss, then the message would have been different; this would have suggested that the larger versus smaller dose of Coaching Calls was doing something, just not something that was associated with Weight Loss.

Factorial mediation results may inform the refinement and/or development of theories about the functioning of intervention components, e.g. about dose in weight loss coaching. Results like these may also reinforce decision-making (e.g., in Opt-IN, the decision to include the Coaching Calls component at its lower dose [8]) or point the way forward for continual improvement of an intervention. For example, in a subsequent optimization trial investigators could examine the difference between 12 coaching calls and a dose between 12 and 24 calls (say, 18 coaching calls), with the idea that 24 was perhaps simply too large a dose. Alternatively, investigators could stay with 12 calls and examine the effect of an additional intervention component specifically designed as a booster for the effect of the Coaching Calls component on supportive accountability variables.

Using factorial mediation analysis to understand the combined effects of components

Although there is growing empirical evidence that interactions between intervention components occur [3], behavioral theory is largely silent on what variables might mediate interaction effects. Factorial optimization trials provide an opportunity to test for the presence of interactions and, importantly, to explore what variables might mediate interaction effects, possibly informing the development of theory as to why components work together synergistically or antagonistically.

In Opt-IN, interpretation of the significant three-way interaction effect BUDDY × PCP × TEXT on Weight Loss (Supplemental Figure A) contributed to the selection of the optimized intervention [8]. In this interaction, the pattern of marginal means was different across the two levels of BUDDY. The main effect for BUDDY had already suggested a desirable effect on Weight Loss, so the focus was on the marginal means for the levels of PCP and TEXT when BUDDY was set to the “ON” level. These means demonstrated that the most favorable outcome on Weight Loss occurred when TEXT was “OFF” and PCP was “ON,” suggesting that the Contact with a Primary Care Physician component should be included in the intervention and reinforcing the decision not to include Text Messages.
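A hypothetical sketch of this style of interpretation (simulated data with invented effect sizes, not the Opt-IN results) computes the marginal means over the PCP × TEXT cells with BUDDY held at its “On” level:

```python
# Hypothetical illustration (simulated data; invented effect sizes, not
# the Opt-IN estimates): examining marginal means behind a
# BUDDY x PCP x TEXT interaction, holding BUDDY at its "On" (+1) level.
import numpy as np

rng = np.random.default_rng(2)
n = 1600
buddy, pcp, text = (rng.choice([-1, 1], n) for _ in range(3))

# Simulated Weight Loss: main effects of BUDDY and PCP plus a three-way
# interaction whose sign favors TEXT "Off" when BUDDY and PCP are "On."
wl = (10 + 1.0 * buddy + 0.5 * pcp
      - 0.8 * buddy * pcp * text + rng.normal(0, 2, n))

on = buddy == 1
for p in (-1, 1):
    for t in (-1, 1):
        cell = on & (pcp == p) & (text == t)
        print(f"PCP {'On' if p == 1 else 'Off'}, "
              f"TEXT {'On' if t == 1 else 'Off'}: {wl[cell].mean():.2f}")
# In this simulation the most favorable cell is PCP "On", TEXT "Off."
```

Examining the cell means in this way, rather than the interaction coefficient alone, is what makes the direction of the synergy or antagonism interpretable for decision-making.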

We included the BUDDY × PCP × TEXT interaction effect, along with the remaining 30 main and interaction effects, in single mediator models with each of the potential mediators. This exploratory mediation analysis suggested that the three-way BUDDY × PCP × TEXT interaction effect was mediated by Restraint, one of the constructs within the self-regulation category. There was evidence of a BUDDY × PCP × TEXT effect on Restraint (Supplemental Figure B): the a path, a = 0.207 (SE = 0.106), p = 0.052. There was also evidence of an association between change in Restraint and Weight Loss: the b path, b = −0.814 (SE = 0.171), p < 0.001. Just like the overall BUDDY × PCP × TEXT interaction effect on Weight Loss, the BUDDY × PCP × TEXT interaction effect on Restraint also suggested that the best outcome occurred when BUDDY and PCP were set to “ON” and TEXT was set to “OFF.” Therefore, it seems plausible that the three-way BUDDY × PCP × TEXT interaction effect on Weight Loss can be explained in part by (a) the three-way interaction on Restraint and (b) the association between Restraint and Weight Loss.

A note on challenges in factorial mediation analysis

Successfully designing an experiment around a given theoretical model of change represents an important challenge in all longitudinal research [23]. However, in a factorial optimization trial the presence of multiple factors representing intervention components—and therefore possibly multiple theoretical models of change—calls for particular care in data analysis. If two intervention components operate on different timescales but their effects on their respective hypothesized mediators are measured with similar timing, the results may be misleading; for example, change may be identified in one mediator but missed in the other.

One implication is the necessity for caution in the interpretation of results of mediation analyses like those in this commentary. An unconfirmed hypothesis about the link between a given component and a putative mediator, for example, could signal that an intervention component was ineffective at targeting a certain mediating variable—or that an inappropriate strategy was used to quantify change in that mediating variable, e.g. such that the mediator was measured at the wrong time. Eventually, perhaps theoretical models of an intervention will specify the timescales at which intervention components operate; in our view, this information is as vital as the direction and size of anticipated effects.

As usual, an unconfirmed hypothesis could also reflect insufficient power to detect a given effect. Notably, the 2ᵏ factorial experiment is a highly efficient design [6]. As mentioned above, in a balanced (effect-coded) factorial experiment the main and interaction effects are uncorrelated, and the presence of interaction effects does not reduce power for the estimation of main effects. Because each of the mediation paths we consider represents a distinct hypothesis, we do not correct for multiple testing, although doing so might be sensible in other cases. We refer interested readers to Collins [6] for further discussion of statistical power.

CONCLUSION

We celebrate the emphasis recent articles by Lo et al. [1] and Rosas et al. [2] place on the role mediation analysis can play in informing the improvement of interventions and echo the call for methods and designs that can examine in more detail how multicomponent behavioral interventions work or do not work. In this commentary we highlight the potential in applying mediation analysis to data from a 2ᵏ factorial optimization trial. Whereas mediation analysis based on a standard two-arm RCT tests hypotheses about the functioning of a complete intervention package, mediation analysis based on data from a factorial optimization trial tests hypotheses about the individual and combined functioning of intervention components. Therefore, factorial mediation analysis is particularly well-equipped to inform nuanced intervention improvements—and to shed light more generally on whether individual components within the intervention are having their intended effects.

Supplementary Material

ibab137_suppl_Supplementary_Figure_A
ibab137_suppl_Supplementary_Figure_B
ibab137_suppl_Supplementary_Tables

Contributor Information

Jillian C Strayhorn, Department of Human Development and Family Studies, The Pennsylvania State University, State College, PA, USA.

Linda M Collins, School of Global Public Health, New York University, New York, NY, USA.

Timothy R Brick, Department of Human Development and Family Studies, The Pennsylvania State University, State College, PA, USA.

Sara H Marchese, Department of Behavioral Medicine, Northwestern University, Chicago, IL, USA.

Angela Fidler Pfammatter, Department of Behavioral Medicine, Northwestern University, Chicago, IL, USA.

Christine Pellegrini, Department of Exercise Science, Arnold School of Public Health, University of South Carolina, Columbia, SC, USA.

Bonnie Spring, Department of Behavioral Medicine, Northwestern University, Chicago, IL, USA.

Funding

Support for Drs Collins, Pfammatter, Pellegrini, Spring and Marchese was provided in part by R01DK097364 (MPIs: Spring/Collins). Ms Strayhorn acknowledges support from F31DA052140; Dr Marchese acknowledges support from award number F31DK120151; Dr Collins acknowledges support from P50DA039838, P01CA180945, and P30DA011041; Dr Brick acknowledges support from the Penn State SSRI and from award number 1U24AA027684-01; Dr Spring acknowledges support from P30CA060553 and UL1TR001422. The content is solely the responsibility of the authors and does not necessarily represent the views of the National Institutes of Health.

Compliance with Ethical Standards

Conflict of Interest: None declared.

References

1. Lo BK, Graham ML, Folta SC, Strogatz D, Parry SA, Seguin-Fowler RA. Physical activity and healthy eating behavior changes among rural women: an exploratory mediation analysis of a randomized multilevel intervention trial. Transl Behav Med. 2021;11(10):1839–1848. doi: 10.1093/tbm/ibaa138
2. Rosas LG, Xiao L, Lv N, et al. Understanding mechanisms of integrated behavioral therapy for co-occurring obesity and depression in primary care: a mediation analysis in the RAINBOW trial. Transl Behav Med. 2021;11(2):382–392.
3. Piper ME, Fiore MC, Smith SS, et al. Identifying effective intervention components for smoking cessation: a factorial screening experiment. Addiction. 2016;111(1):129–141.
4. Wyrick DL, Tanner AE, Milroy J, et al. itMatters: optimization of an online intervention to prevent sexually transmitted infections in college students. J Am Coll Health. 2020:1–11.
5. Gwadz MV, Collins LM, Cleland CM, et al. Using the multiphase optimization strategy (MOST) to optimize an HIV care continuum intervention for vulnerable populations: a study protocol. BMC Public Health. 2017;17(1):383.
6. Collins LM. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). New York, NY: Springer; 2018.
7. Smith RA, Coffman DL, Zhu X. Investigating an intervention’s causal story: mediation analysis using a factorial experiment and multiple mediators. In: Collins LM, Kugler KC, eds. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York, NY: Springer; 2018:269–294.
8. Spring B, Pfammatter AF, Marchese SH, et al. A factorial experiment to optimize remotely delivered behavioral treatment for obesity: results of the Opt-IN study. Obesity (Silver Spring). 2020;28(9):1652–1662.
9. Bernstein SL, Dziura J, Weiss J, et al. Tobacco dependence treatment in the emergency department: a randomized trial using the Multiphase Optimization Strategy. Contemp Clin Trials. 2018;66:1–8.
10. Celano CM, Albanese A, Millstein RA, et al. Optimizing a positive psychology intervention to promote health behaviors following an acute coronary syndrome: the Positive Emotions after Acute Coronary Events-III (PEACE-III) randomized factorial trial. Psychosom Med. 2018;80:526–534.
11. Wu CFJ, Hamada MS. Experiments: Planning, Analysis, and Optimization. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2009.
12. Kugler KC, Dziak JJ, Trail J. Coding and interpretation of effects in analysis of data from a factorial experiment. In: Collins LM, Kugler KC, eds. Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York, NY: Springer; 2018.
13. Pellegrini CA, Hoffman SA, Collins LM, Spring B. Optimization of remotely delivered intensive lifestyle treatment for obesity using the Multiphase Optimization Strategy: Opt-IN study protocol. Contemp Clin Trials. 2014;38(2):251–259.
14. Pellegrini CA, Hoffman SA, Collins LM, Spring B. Corrigendum to “Optimization of remotely delivered intensive lifestyle treatment for obesity using the Multiphase Optimization Strategy: Opt-IN study protocol” [Contemp. Clin. Trials 38 (2014) 251–259]. Contemp Clin Trials. 2015;45(Pt B):468–469.
15. MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychol Methods. 2002;7(1):83–104.
16. Judd CM, Kenny DA. Process analysis: estimating mediation in treatment evaluations. Eval Rev. 1981;5:602–619.
17. Baron RM, Kenny DA. The moderator–mediator variable distinction in social psychological research: conceptual, strategic and statistical considerations. J Pers Soc Psychol. 1986;51:1173–1182.
18. Hansen WB, Tobler NS, Graham JW. Attrition in substance abuse prevention research: a meta-analysis of 85 longitudinally followed cohorts. Eval Rev. 1990;14(6):677–685.
19. Muthén LK, Muthén BO. Mplus User’s Guide. 8th ed. Los Angeles, CA: Muthén & Muthén; 1998–2017.
20. Graham JW. Adding missing-data-relevant variables to FIML-based structural equation models. Struct Equ Model. 2003;10(1):80–100.
21. Hartman SJ, Nelson SH, Cadmus-Bertram LA, Patterson RE, Parker BA, Pierce JP. Technology- and phone-based weight loss intervention: pilot RCT in women at elevated breast cancer risk. Am J Prev Med. 2016;51(5):714–721.
22. Wilfley DE, Saelens BE, Stein RI, et al. Dose, content, and mediators of family-based treatment for childhood obesity: a multisite randomized clinical trial. JAMA Pediatr. 2017;171(12):1151–1159.
23. Collins LM. Analysis of longitudinal data: the integration of theoretical model, temporal design, and statistical model. Annu Rev Psychol. 2006;57:505–528.
24. Watkins E, Newbold A, Tester-Jones M, et al. Implementing multifactorial psychotherapy research in online virtual environments (IMPROVE-2): study protocol for a phase III trial of the MOST randomized component selection methods for internet cognitive-behavioural therapy for depression. BMC Psychiatry. 2016;16:345.


Articles from Translational Behavioral Medicine are provided here courtesy of Oxford University Press
