Journal of Speech, Language, and Hearing Research (JSLHR)
. 2022 Oct 28;65(11):4159–4171. doi: 10.1044/2022_JSLHR-21-00551

Mediators, Moderators, and Covariates: Matching Analysis Approach for Improved Precision in Cognitive-Communication Rehabilitation Research

Emily L. Morrow,a,b,c Melissa C. Duff,a Lindsay S. Mayberryb,c
PMCID: PMC9940892  PMID: 36306506

Abstract

Purpose:

The dual goals of this tutorial are (a) to increase awareness and use of mediation and moderation models in cognitive-communication rehabilitation research by describing options, benefits, and attainable analytic approaches for researchers with limited resources and sample sizes and (b) to describe how these findings may be interpreted for clinicians consuming research to inform clinical care.

Method:

We highlight key insights from the social sciences literature pointing to the risks of common approaches to linear modeling, which may slow progress in clinical–translational research and reduce the clinical utility of our work. We discuss the potential of mediation and moderation analyses to reduce the research-to-practice gap and describe how researchers may begin to implement these models, even in smaller sample sizes. We discuss how these preliminary analyses can help focus resources for larger trials to fully encapsulate the heterogeneity of individuals with cognitive-communication disorders.

Results:

In rehabilitation research, we study groups, but we use the findings from those studies to treat individuals. The most functional clinical research does more than establish whether a given effect exists for an “average person” in the group of interest. It is critical to understand the active ingredients and mechanisms of action by which a given treatment works (mediation) and to know which circumstances, contexts, or individual characteristics might make that treatment most beneficial (moderation).

Conclusions:

Increased adoption of mediation and moderation approaches, executed in appropriate steps, could accelerate progress in cognitive-communication rehabilitation research and lead to the development of targeted treatments that work for more clients. In a field that has made limited progress in developing successful interventions for the last several decades, it is critical that we harness new approaches to advance clinical–translational research results for complex, heterogeneous groups with cognitive-communication disorders.


Researchers in the field of communication sciences and disorders routinely ask questions about causality to understand relationships driving clinical outcomes and determine which treatments work (Duffy et al., 1981; Hayes & Rockwood, 2017). For example, a researcher specializing in cognitive-communication rehabilitation might assess whether a given treatment is effective in improving memory performance after traumatic brain injury. To identify the treatment effect, researchers typically assess how a given independent variable (e.g., the memory treatment being tested) is related to a given outcome or dependent variable (e.g., performance on a standardized or real-world memory task). To focus on the effect of interest (the effect of the independent variable on the dependent variable), researchers might treat other individual characteristics (e.g., demographic characteristics such as sex or age, or behavioral characteristics such as exercise or sleep) as noise by removing their associated variability from the statistical model (Schisterman et al., 2009). This gives researchers an idea of whether a treatment works at the group level, removing the effects of other characteristics that might affect treatment outcome. In other words, this approach provides the treatment effect for the average person in the study sample, with the idea that controlling for other factors increases the precision of that group estimate.

However, this common analytic approach might be contributing to the current reckoning in the field of cognitive-communication rehabilitation as to whether, and if so for whom, our existing treatments work (Lu et al., 2012; Spell et al., 2020). In rehabilitation research, we study groups, but we use the findings from those studies to treat individuals. Therefore, the most useful clinical research should tell us more than whether a given effect exists for an “average person” in the group of interest. When we rely solely on the group estimate of an effect and adjust for individual characteristics in our models, we may be removing key clues that could help us develop targeted treatment plans for our individual clients (Hayes & Rockwood, 2017). For example, we may conduct a clinical trial with a null result at the group level when in fact the treatment worked well for some participants in the study sample (e.g., those in a certain demographic category). By contrast, a significant treatment effect in a clinical trial may reflect treatment gains in only a subset of the study sample, meaning that the treatment would only be appropriate for clients whose characteristics match that subgroup and should not be implemented in all clients with the overarching diagnosis.

Moreover, when studies of treatments show no effect on targeted outcomes, we do not maximize our learning from those studies if we move on to examine other treatments without considering the active ingredients and mechanisms of action producing our results (Hayes & Rockwood, 2017). If we adjust for the factors driving relationships between our independent and dependent variables, rather than including them as causal variables in our statistical models, we might not find a treatment effect when in fact it exists. Alternatively, if a treatment indeed was not effective, considering the mechanisms of action that determine how our treatment does or does not affect outcomes can inform next steps for intervention development. By testing treatments at the whole-group level without considering how individual differences and mechanisms affect the target outcome, we risk setting aside treatments that might work or overprescribing treatments to a whole group when they will only benefit some members of that group (Covington & Duff, 2020; Hayes & Rockwood, 2017; Kraemer et al., 2001).

Fortunately, there is a potential to increase the clinical applicability of cognitive-communication research by implementing methods from the social sciences that provide information about factors affecting treatment outcome and context for treatment success (Covington & Duff, 2020; Maas et al., 2012). For example, mediation analyses help us understand how active ingredients and mechanisms of action combine to produce change in a treatment target, and moderation analyses help us determine which circumstances, contexts, or individual characteristics might make that treatment most beneficial (see the Appendix for the glossary of terms). Understanding an effect's active ingredients, mechanisms of action, and boundary conditions is at the heart of the push for precision medicine, in which treatments are matched not only to a patient's diagnosis but also to individual characteristics such as genes, lifestyle, and environment (Denny & Collins, 2021). Given limited progress in developing successful treatments using a group-based approach in the last several decades (Lu et al., 2012), there is impetus to extend this analytic approach to improve the precision of research and treatment in the field of cognitive-communication disorders.

Moving beyond group-level analysis is especially important for researchers interested in complex disorders like traumatic brain injury, wherein population heterogeneity belies the efficacy of a one-size-fits-all approach (Covington & Duff, 2020; Maas et al., 2012). For decades, researchers have cited population heterogeneity and complex symptom interactions as barriers when treatments fail to stand up to assessment via clinical trial (Covington & Duff, 2020; Dahdah et al., 2016; Hart et al., 2014; Lu et al., 2012). However, a targeted approach using moderation analyses allows us to turn this thinking on its head by capitalizing on, rather than ignoring or adjusting for, this population heterogeneity (Hayes & Rockwood, 2017; Kraemer et al., 2001; Maas et al., 2012; Preacher & Hayes, 2008b). For example, using moderation analyses to identify meaningful clinical subgroups might allow us to mitigate health disparities (e.g., based on health literacy, socioeconomic status, or education) by ensuring that treatments work for patients across demographic categories (Kraemer et al., 2001; Mayberry et al., 2014, 2018). In addition, using mediation to investigate the active ingredients and mechanisms of action by which treatments succeed or fail can help us learn from null findings and understand which ingredients are more or less important to ensure maximum effects in designing the next treatments and trials (Hayes & Rockwood, 2017; Kraemer et al., 2001).

A challenge in our field is the use of small sample sizes for populations that are rare or difficult to recruit. When we interpret findings from these small sample sizes at the group level, we risk generalizing results from an idiosyncratic subgroup to an entire population or missing factors that might lead to meaningful subgrouping for better treatment targeting (Covington & Duff, 2020). Increased adoption of mediation and moderation approaches has the potential to be a part of the solution to accelerate progress in rehabilitation research at this critical time when our field must develop new, individualized treatment approaches that work. Fortunately, mediation and moderation analyses can be completed in steps, wherein researchers complete studies with smaller samples to support generating hypotheses to guide tests in larger samples so that resources for larger studies are optimally targeted. With this promise in mind, the dual goals of this tutorial are (a) to increase awareness and use of mediation and moderation models in cognitive-communication rehabilitation research by describing options, benefits, and attainable statistical approaches for researchers with limited resources and sample sizes and (b) to describe how these findings may be interpreted for clinicians consuming research to inform clinical care.

Although we focus on the field of cognitive-communication rehabilitation here as our area of expertise and a section of the field that urgently needs to implement new approaches for increased precision and progress, these principles apply to science and treatment across many communication disorders and have in fact gained traction in other heterogeneous clinical populations such as autism (Contaldo et al., 2020; Davis et al., 2011; Lombardo et al., 2019; Sievers et al., 2018). We hope that increased implementation of these analytic approaches, combined with team science to increase sample size and other advanced statistical and experimental approaches, will lead to faster advances in treatment design and improved outcomes for patients with cognitive-communication disorders.

Understanding a Variable's Role for Improved Analytic Models: Adjusting Is Not Enough

As statistical tests are agnostic to causation, conclusions based on statistics rely on careful constraint of those tests by experimental design. When researchers are focused on identifying the presence or absence of a given effect (e.g., whether a treatment works to affect a given target), data analysis often involves a linear model assessing the relationship between the independent variable (active ingredient or treatment) and the dependent variable (target or outcome; Preacher & Hayes, 2008b). For example, researchers might use an analysis of variance or regression model to assess the relationship between a manipulated variable like treatment condition, or a measured variable like time since injury onset, and performance on a neuropsychological assessment or real-world task. In this standard linear model, researchers often adjust, or control, for individual differences, including demographic or behavioral characteristics that could affect treatment outcomes, by treating them as covariates or confounders. Although this is a common statistical approach, many individual differences have the potential to affect a treatment's efficacy or determine which individuals will benefit most from a treatment (Covington & Duff, 2020). Therefore, it is important to be selective in adjusting for variables that may explain part of treatment outcomes, as described below.

Including Covariates and Confounders to Assess the Relationship of Interest

A covariate affects the outcome variable but is not related to the independent variable (see Figure 1; Salkind, 2010). Covariates are included in models to enhance precision because they remove the variability in the outcome associated with the covariate, making it easier to identify the target effect (Bloom et al., 2007; Field-Fote, 2019; Fisher, 1949). For example, say that we are interested in how stroke and traumatic brain injury differentially affect performance on a given memory outcome. We recruit two groups of people, half with a history of traumatic brain injury and half with a history of stroke, to complete the assessment. In this case, our independent variable (X) would be whether a person has a history of traumatic brain injury or stroke, and our dependent variable (Y) would be performance on the memory outcome. In this model, researchers may treat time since onset as a covariate: Time since onset may affect a person's performance on a memory outcome (Y), but it is not causally associated with the presence or absence of a traumatic brain injury or stroke (X). It may be that a researcher would adjust for time since onset as a covariate to assess whether traumatic brain injury affects performance on the memory outcome, regardless of time since onset.
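As an illustration of how a covariate sharpens the estimate, the example above can be sketched with simulated data. All variable names and effect sizes here are hypothetical, chosen only to mimic the scenario described:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: group is 0 (stroke) or 1 (traumatic brain injury);
# time since onset affects memory but is unrelated to group (a covariate).
group = rng.integers(0, 2, n)
time_since_onset = rng.uniform(1, 10, n)
memory = 50 - 3 * group + 1.5 * time_since_onset + rng.normal(0, 2, n)

def fit(preds, y):
    """Least-squares fit with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

X1, b1 = fit([group], memory)                    # memory ~ group
X2, b2 = fit([group, time_since_onset], memory)  # memory ~ group + covariate

resid_var_unadj = np.var(memory - X1 @ b1)
resid_var_adj = np.var(memory - X2 @ b2)

# Both models estimate a similar group effect (near the true -3), but the
# adjusted model leaves far less residual variance, so its estimate of the
# group effect is more precise.
print(b1[1], b2[1], resid_var_unadj, resid_var_adj)
```

Removing the covariate's variability does not change what the group effect is; it changes how precisely we can estimate it.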

Figure 1.


Relationships of variables to the independent variable (X) and dependent variable (Y). The causal pathway is in black. Mediators (M) lie on the causal pathway, such that X affects M, which then affects Y, and are mechanisms driving the relationship between X and Y (e.g., why a treatment works). Moderators affect the size or direction of the relationship between X and Y and determine the contexts in which X affects Y (e.g., under what circumstances or for what types of people). Covariates explain some of the variability in Y but are not related to X or on the causal pathway. Confounders drive variability in both X and Y but do not drive the relationship between them. Adapted from Field-Fote (2019) with permission of Wolters Kluwer Health, Inc.

In contrast, a confounder is causally associated with both the independent and dependent variables but does not drive the association between them (see Figure 1; Ananth & Schisterman, 2017; Field-Fote, 2019; Schisterman et al., 2009). Consider how age might play a role in the model from the aforementioned example. Increasing age is associated with both an increased risk of traumatic brain injury and stroke (Kissela et al., 2012; Thompson et al., 2006; X) and poorer performance on many memory outcomes (Grady & Craik, 2000; Y). However, traumatic brain injury or stroke does not affect a person's age. Age might be driving some of the variation in both X (traumatic brain injury or stroke diagnosis) and Y (memory performance), but it could not be causing the association between them. In this case, a researcher might treat age as a confounder and adjust for it in a linear model to remove age-related variability when estimating the association between traumatic brain injury or stroke diagnosis and memory performance. This approach works best if the researcher does not believe that age is of interest for affecting the relationship between the independent and dependent variables; we need a different approach if it is (described below; Ananth & Schisterman, 2017; Field-Fote, 2019; Schisterman et al., 2009).

Choosing Covariates Wisely for a Targeted Analytical Approach

We make some key assumptions when we adjust for a variable as a covariate or confounder: (a) We assume that the covariate/confounder is not causing the association between our independent and dependent variables (i.e., is not a mechanism of action linking the treatment to the target), and (b) we assume that the covariate/confounder is not of clinical interest for affecting the relationship between our independent and dependent variables. We may be ignoring key information when we take this approach in violation of those assumptions, which slows our progress on patient outcomes.

Consider another potential variable in our example: The amount of exercise a person gets in a week might affect memory performance (Erickson et al., 2011). In a standard linear model, a researcher might adjust for exercise as a covariate to assess whether traumatic brain injury affects the target (memory performance) without the effects of exercise (see Figure 2A). However, what if exercise is, in fact, a key part of the picture? Maybe traumatic brain injury causes people to exercise less, and this reduction in exercise, in turn, affects memory performance. In this case, exercise is on the causal pathway: Our independent variable (X, traumatic brain injury) affects exercise (M), which, in turn, affects our dependent variable (Y, memory performance; see Figure 2B; Preacher & Hayes, 2008b; Schisterman et al., 2009). If exercise is on the causal pathway, we should treat it as a mediator (as described below) instead of a covariate to account for its role in explaining the relationship between our independent and dependent variables.

Figure 2.


(A) A standard linear model, in which the independent variable (X) affects the dependent variable (Y), and researchers adjust for a covariate (M). (B) If M explains part of the relationship between X and Y, it is on the causal pathway and should be treated as a mediator, not adjusted for as a covariate.

Matching Analytic Approach: Treating Variables of Interest as Mediators and Moderators

Mediators Lie on the Causal Pathway

A key piece of determining a variable's appropriate role in a statistical model is understanding its place on the causal pathway. In the aforementioned example, exercise is an intermediate variable on the causal pathway because traumatic brain injury (X) may affect how much a person exercises (M), which, in turn, affects memory performance (Y). Mediation, by definition, requires the causal assertion: the assertion that the independent variable (X) is a theoretical cause of the mediator (M), which, in turn, affects the dependent variable (Y; Hayes & Rockwood, 2017).

Risks of Overadjusting for Variables on the Causal Pathway

Treating variables on the causal pathway (mediators) as covariates is counterproductive to our goals. If we control for exercise when it is on the causal pathway, we remove key explanatory power from our model and bias our results toward the null (Schisterman et al., 2009). For example, if traumatic brain injury by itself explains 25% of the variability in memory performance, and exercise explains another 50%, a model that includes exercise on the causal pathway will explain 75% of the variability in memory scores. If we instead treat exercise as a covariate and adjust for it, we remove the variability it explains from the model, leaving the model to explain only 25 of the remaining 75 percentage points of variability, or 33%, of the variability in our outcome. This form of overadjustment, or treating a variable as a covariate when it is a mechanism of action on the causal pathway, might be part of why many clinical trials in cognitive-communication research do not have significant results, as we may be removing variables that explain part of the target effects (Schisterman et al., 2009).
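The cost of overadjustment can be seen in a small simulation (again with hypothetical effect sizes): when the "covariate" is actually a mediator, adjusting for it removes the indirect portion of the effect and pulls the estimate toward the null.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical causal chain: injury reduces exercise, and exercise supports
# memory, so much of injury's total effect on memory flows through exercise.
injury = rng.integers(0, 2, n)                                 # X
exercise = 5 - 2 * injury + rng.normal(0, 1, n)                # M, on the pathway
memory = 40 + 3 * exercise - 1 * injury + rng.normal(0, 2, n)  # Y

def coef(preds, y):
    X = np.column_stack([np.ones(len(y)), *preds])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = coef([injury], memory)[1]             # c path: memory ~ injury
direct = coef([injury, exercise], memory)[1]  # c' path: memory ~ injury + exercise

# The true total effect is -1 + (-2 * 3) = -7; "controlling for" exercise
# strips out the mediated -6, leaving an estimate near the direct -1.
print(total, direct)
```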

Using Mediation Models to Maximize Learning in Clinical–Translational Research

Instead of overadjusting, mediation analysis helps researchers understand why treatments work and how to improve them (Hayes & Rockwood, 2017; Kraemer et al., 2001). Continuing the aforementioned example, if exercise is on the causal pathway and mediates the relationship between traumatic brain injury and memory, then researchers might design an intervention to target exercise as a path to improving the target (memory). The researchers could then use a mediation framework to assess (a) whether the intervention effectively increased exercise and (b) whether the intervention's effect on memory worked through the increase in exercise and to what extent. The mediation model provides key context to accelerate and maximize what we learn from a study. If there were no mediation model and the intervention did not improve memory, it would be unclear if the intervention did not increase exercise or if it increased exercise, but doing so did not result in memory gains. Findings from the mediation model would help researchers design their next study iteration if the initial design failed to improve memory, as they would know whether the new version of their treatment needed to be better at improving exercise or if they needed to add other active ingredients in addition to or instead of the exercise intervention to improve the memory target.

Importantly, mediation modeling is not limited to intervention research. A researcher can still make and test the theoretical causal assertion about a variable that is measured and not manipulated (e.g., keeping track of the amount of exercise without manipulating it, then assessing it as a mediator of memory performance after traumatic brain injury; Hayes & Rockwood, 2017; Preacher & Hayes, 2008b). This observational research often guides design for future treatment studies.

Moderators Are Key to Understanding Context

In some cases, a variable that is not on the causal pathway might prove crucial to creating a model that explains outcome variability. A moderator is not on the causal pathway but interacts with the independent variable in a way that influences the outcome. A moderator determines the context (under what circumstances, or for what types of people) in which an effect exists or does not, and in what magnitude (see Figure 3; Field-Fote, 2019; Hayes & Rockwood, 2017). A moderation model differs from controlling or adjusting for a covariate in that it asks what effect a treatment has on the outcome at different levels of the moderator, whereas adjusting for a covariate asks what effect the treatment would have on the outcome if the value of the covariate were held constant. Thus, a moderation analysis allows us to identify meaningful conditions under which an effect is strongest, in keeping with the principles of precision medicine (Denny & Collins, 2021; Kraemer et al., 2001).

Figure 3.


The moderation model. The moderator, W, determines the direction or magnitude of the effect of independent variable X on dependent variable Y.
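In a regression framework, the moderation model in Figure 3 is typically fit by adding a product (interaction) term between the independent variable and the moderator. A minimal sketch with simulated, hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Hypothetical moderation: the treatment (X) works well only when the
# moderator W equals 1, so the X*W interaction carries most of the effect.
X = rng.integers(0, 2, n)  # treatment vs. control
W = rng.integers(0, 2, n)  # moderator (e.g., a binary patient characteristic)
Y = 10 + 0.5 * X + 4 * X * W + rng.normal(0, 2, n)

design = np.column_stack([np.ones(n), X, W, X * W])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)

# beta[1] estimates the treatment effect when W = 0 (near 0.5), and beta[3]
# estimates how much that effect changes when W = 1 (near 4).
print(beta[1], beta[3])
```

A significant interaction coefficient is the statistical signature of moderation: the treatment effect depends on the level of W.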

Risks of Removing Key Variables of Clinical Interest From the Model

We may have inaccurate results for trials in heterogeneous populations when we adjust for variables that could be of key interest to the clinical picture without considering them as moderators. Say, for example, that we have designed a memory intervention for people with traumatic brain injury and will test it in a group of 100 people with traumatic brain injury, 35 of whom also have a diagnosis of depression (which can also affect memory; Burt et al., 1995). The treatment is not very effective for people with traumatic brain injury who also have a diagnosis of depression, with an effect size of .10, but it is more effective for those who do not, with an effect size of .45. If we conduct a study on this entire group and adjust for depression diagnosis instead of assessing it as a meaningful contributor to the treatment's effectiveness, we will identify a relatively modest effect size of .33 for our treatment (.10 × 35% + .45 × 65%). From this, we might conclude that the treatment was moderately effective and base our determination about further study or incorporation into clinical care on a moderate effect for any given patient with traumatic brain injury. However, if we recognize depression as a moderator, we will learn that our treatment was more effective in people in this sample who do not have depression and not very effective in people who do. From this knowledge, we would proceed differently: We may move forward refining the treatment for a larger trial for adults who do not have depression and designing a different intervention for those who do. There is incentive, then, to use moderation analyses to identify meaningful subgroups to inform targeting of intervention development and clinical trials.
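The weighted average in this example can be checked directly:

```python
# Subgroup effect sizes from the worked example: 35% of the sample (with a
# depression diagnosis) shows an effect of .10; 65% (without) shows .45.
p_dep, d_dep = 0.35, 0.10
p_no, d_no = 0.65, 0.45

overall = p_dep * d_dep + p_no * d_no  # weighted average across subgroups
print(overall)  # ≈ .33, a "moderate" effect that masks the subgroup split
```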

Using Moderation Analyses to Increase the Precision of Clinical–Translational Results

Many variables that are treated as covariates or confounders in common linear models should be analyzed as moderators as well (Ananth & Schisterman, 2017; Schisterman et al., 2009). Although there may be theoretical grounds to control for some variables in assessing treatment outcome, other variables that are commonly treated as “noise” in statistical models may be useful in targeting treatments to individuals. When a variable affects our treatment outcome but is not our independent variable, we should ask if that variable could be key to the clinical picture before adjusting for it. For example, both age and time since onset (from the covariate and confounder examples above) could affect whether a treatment works for certain subgroups of a sample. The same is true for study conditions; rather than adjusting for when a treatment is administered (e.g., time of day or location of administration), it might be important to assess whether participants scored better in a certain context (e.g., in the morning or evening). This has particular significance for clinical–translational research, as many treatments are focused on translating gains made in controlled clinical environments to “real-world” circumstances. Treating context as a moderator in analyzing outcomes can help us determine if our treatments move beyond the controlled treatment setting in producing functional results. Moderation analyses can also be used to address health care disparities (e.g., by helping ensure that a treatment works for people of all ethnicities or levels of education; Kraemer et al., 2001; Mayberry et al., 2018). If, for example, a treatment works only for people with higher levels of educational attainment, we may need to assess the accessibility of our treatment materials.

There might be a theoretical reason to control for a potential moderator (e.g., patient motivation) to assess the relationship of interest more precisely. In some cases, researchers might adjust for a variable to determine the group average treatment effect and then conduct a moderation analysis to identify meaningful subgroups that benefit more or less from the treatment. For example, researchers might adjust for education to estimate the overall treatment effect at the group's average level of education and then separately estimate the treatment effect for subgroups with higher and lower levels of education. This dual-analysis approach allows researchers to both assess group average treatment effects and conduct analyses based on meaningful subgroups that will result in tailored treatments that work better for individuals with complex, heterogeneous cognitive-communication disorders.

Designing a Theory-Based Model

Once a researcher determines a variable's theoretical role on the causal pathway, the next step is to match the analytic approach. It is important to note that conclusions about causation rely on the constrained use of statistical tests, which do not by themselves demonstrate causation, together with a theory-based experimental design. Below, we provide some preliminary guidance for conducting mediation and moderation analyses in real-world settings.

Mediation: Modeling the Causal Pathway and Conducting Exploratory Path Analysis in Smaller Sample Sizes

To statistically model mediation, the gold standard model involves a path analytic framework using a resampling method to estimate the mediated effect (also called the indirect effect, or effect of X on Y that operates via mediator M; Preacher & Hayes, 2004, 2008a; Zhao et al., 2010). Several “paths,” or effect estimates, are critical for understanding and interpreting a mediation model. First, the total effect (see c path in Figure 4) is simply the effect of X on Y (or the effect of the active ingredient on the target; Preacher & Hayes, 2004, 2008a). This total effect can be decomposed into a direct effect, which is the effect of X on Y when adjusted for M (see c' path in Figure 4), and the indirect effect, which is the effect of X on Y via M. The indirect effect is a product of the a path and the b path (see Figure 4). The a path is the effect of X on M (Preacher & Hayes, 2004, 2008a). In the intervention context, this path tells you if your intervention had the intended effect on the hypothesized mediator (e.g., Did your treatment increase exercise?). The b path is the effect of M on Y, adjusted for X (Preacher & Hayes, 2004, 2008a). In the intervention context, this path tells you if changes in M (the mediator) were associated with changes in the target (e.g., Was increased exercise associated with improved memory?).

Figure 4.


Analytic pathways in the mediation model.
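With simulated data mirroring the exercise example (hypothetical effect sizes), the four paths in Figure 4 can each be estimated with ordinary least squares. A useful check is that, in linear models, the total effect decomposes exactly as c = c' + a*b:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Simulated mediation data: X -> M -> Y, plus a small direct effect of X.
X = rng.integers(0, 2, n)                      # e.g., injury group
M = 5 - 2 * X + rng.normal(0, 1, n)            # e.g., weekly exercise
Y = 40 + 3 * M - 1 * X + rng.normal(0, 2, n)   # e.g., memory score

def coefs(preds, y):
    D = np.column_stack([np.ones(len(y)), *preds])
    return np.linalg.lstsq(D, y, rcond=None)[0]

c = coefs([X], Y)[1]           # total effect: Y ~ X
a = coefs([X], M)[1]           # a path: M ~ X
b = coefs([X, M], Y)[2]        # b path: coefficient on M in Y ~ X + M
c_prime = coefs([X, M], Y)[1]  # direct effect: coefficient on X in Y ~ X + M

# Total effect equals direct effect plus indirect effect (a*b), exactly.
print(c, c_prime + a * b)
```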

Statistical significance for the c, c', a, and b path estimates can be determined with either a p value or a confidence interval; p values rely on the estimates being normally distributed. Although the a and b path estimates are each approximately normally distributed, their product, the indirect effect, generally is not. Therefore, resampling methods such as bootstrapping are used to generate a confidence interval for the indirect effect instead of a p value. An indirect effect with a 95% confidence interval excluding zero provides evidence of mediation (Preacher & Hayes, 2004, 2008a). Bias-corrected bootstrapping appears to be the best resampling method in terms of power to detect an indirect effect when one is present (Hayes & Scharkow, 2013; Koopman et al., 2015; MacKinnon et al., 2004; Tofighi & MacKinnon, 2016). The indirect effect can be bootstrapped in any structural equation modeling program or, most commonly, with the PROCESS macro in SPSS or SAS. Simulation studies (Hayes & Scharkow, 2013; Koopman et al., 2015; MacKinnon et al., 2004; Tofighi & MacKinnon, 2016) indicate that bootstrapping indirect effects is usually not advisable with samples of less than 100, whereas for samples of 200 or more, the approach performs reasonably in most simulations. Because power is a function of several parameters and assumptions, samples of 100–200 might or might not be sufficient to test for indirect effects with bootstrapping. Schoemann et al. (2017) developed a user-friendly R package that evaluates power for models testing indirect effects using Monte Carlo simulations. Interested readers can learn more about mediation analysis using bootstrapping in Preacher and Hayes (2008a).
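The logic of the bootstrap can be sketched in a few lines. Note that this is a simple percentile bootstrap with simulated, hypothetical data; the simulation literature cited above favors bias-corrected variants, which established tools such as the PROCESS macro implement:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 250

# Simulated mediation data (hypothetical effect sizes; true indirect
# effect = a*b = -2 * 3 = -6).
X = rng.integers(0, 2, n)
M = 5 - 2 * X + rng.normal(0, 1, n)
Y = 40 + 3 * M - 1 * X + rng.normal(0, 2, n)

def indirect(x, m, y):
    """a*b estimate from two least-squares fits."""
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement and re-estimate a*b.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(X[idx], M[idx], Y[idx])

ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])

# A 95% interval excluding zero is taken as evidence of mediation.
print(indirect(X, M, Y), ci_lo, ci_hi)
```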
Another option for estimating the indirect effect in a mediation model is Bayesian estimation, which can provide estimates of the indirect effect without the assumption that it is normally distributed. In smaller samples, where there is substantial guidance from theory and prior research, Bayesian methods with an informative prior can be used to test for mediation (Koopman et al., 2015). Interested readers can learn about mediation analysis using Bayesian estimation in more detail in Miočević et al. (2018) and Yuan and MacKinnon (2009).

Although formal mediation modeling of the indirect effect often requires a larger sample size than most studies in the field, exploratory analyses to identify mediators can be illuminating even in much smaller samples and can inform larger trials aimed at testing formal mediation. The underlying paths can be estimated with a series of four regression models for the c, a, b, and c' paths (see Figure 4; Hayes & Rockwood, 2017). If the a and b paths are significant in the expected direction, the study has yielded preliminary evidence for mediation in the designated framework. Common rules of thumb for regression models suggest predictor-to-sample-size ratios of 1:15–1:20 for studies in which the effect size is expected to be at least moderate. Therefore, the regression models underlying formal tests of mediation can be tested with samples as small as 30–40 (the b and c' paths require two predictors in the model, as they include both the independent variable and the mediator; Austin & Steyerberg, 2015). Of course, if additional covariates are needed in the models, or if the effect size is expected to be smaller, these factors should be accommodated in the sample size calculation as well.

This exploratory analytic approach may allow researchers interested in rare disorders or smaller study populations, including many populations with cognitive-communication disorders, to determine where evidence for mediation exists and to decide where to put their resources when developing treatments or pursuing larger studies aimed at more fully testing mediation using the bootstrapping method or modeling the interaction of multiple patient characteristics in producing treatment outcomes (e.g., see Kent et al., 2010, for a summary of how exploratory subgroup analysis must inform larger trials to capture the full scope of meaningful patient heterogeneity). Furthermore, in pilot intervention work, testing the a and b paths with regression can illuminate whether the intervention effectively improved the mechanism of action (a path: Did we increase exercise?) and whether the mechanism of action improved the target (b path: Was increased exercise associated with improved memory?) before running a large trial with potentially null results.

Moderation: Interaction Effect and Subgroups Analysis in Smaller Sample Sizes

Moderation is tested with an interaction effect between the predictor and the moderator (X interacts with W; see Figure 3). Researchers often avoid examining potential moderators because they fear being “underpowered to test an interaction effect.” That concern is often warranted, but it does not preclude examining subgroup differences, which can answer meaningful and important questions.

For example, if we were testing the efficacy of a memory treatment in traumatic brain injury, it might be that participants who do not have a co-occurring diagnosis of depression benefit more from the intervention than those who also have depression. If our treatment has an effect size of .10 for people who also have depression and .45 for people who do not, we would need sufficient power to detect an effect size of .35 (the difference between the two) to be “powered” to detect an interaction. If the study was designed to detect an overall treatment effect that was small to moderate (~.35), then it may not be powered to detect an interaction term of that same magnitude. However, in this scenario, the magnitude of the difference between effects in people with and without depression is not the most important result. What matters more is whether the treatment works both for people who do not have depression and those who do (whether there is a strong effect size in both groups), which can be determined without a study that was designed to test for an interaction effect.

In particular, when the moderator of interest is categorical and roughly evenly distributed across the sample, the sample can be split for an exploratory subgroup analysis. In our example, the subgroup analysis would reveal that our treatment is not very effective for people who have a co-occurring diagnosis of depression, with an effect size of .10, but is effective for those who do not have depression, with an effect size of .45. If researchers notice a large difference between the effect sizes they identify in each subgroup, they might move forward to directly test moderation with an interaction effect in a larger independent sample, but they could also work on developing the treatment and testing it with a larger sample for the groups that exhibited a larger effect size (those without depression) while identifying alternative treatments for the groups that did not experience a benefit (those who also have depression). Thus, conducting an exploratory moderation analysis using subgroup analyses might be more attainable for researchers facing the practical constraints of conducting research in clinical–translational settings, with the caveat that these preliminary analyses are meant to provide guiding information about where to direct resources for larger clinical trials, which are required to assess interacting factors and make strong claims about treatment targeting (Kent et al., 2010; Kraemer et al., 2001). Additionally, because this approach includes no formal test of an interaction term, these analyses should not be used as evidence of a difference between groups. Isolated effect sizes tell us only how well the treatment worked within each group. If the effect size in one subgroup is smaller than in the other, this does not mean there is a meaningful difference between the groups without a formal test of the interaction term.
However, these preliminary subgroup analyses from the smaller study might provide evidence supporting the design of a subsequent, adequately powered study to test for the interaction.
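The subgroup comparison in this example can be sketched as follows, computing Cohen's d separately within each subgroup on simulated data. The memory scores, group sizes, and mean shifts are hypothetical, chosen to mirror the effect sizes of .45 and .10 used in the running example.

```python
import random
import statistics

random.seed(3)

def cohens_d(treated, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    s1, s2 = statistics.stdev(treated), statistics.stdev(control)
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled

# Simulated standardized memory scores: the treatment shifts the mean
# by 0.45 SD for participants without depression but only 0.10 SD for
# those with depression (hypothetical values).
def scores(mean_shift, n=60):
    return [random.gauss(mean_shift, 1) for _ in range(n)]

d_no_depression = cohens_d(scores(0.45), scores(0.0))
d_depression = cohens_d(scores(0.10), scores(0.0))
print(f"d (no depression) = {d_no_depression:.2f}")
print(f"d (depression)    = {d_depression:.2f}")
```

Consistent with the caveat above, informally comparing the two d values does not test the interaction; it only describes how well the treatment worked within each subgroup.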

In the case of continuous potential moderators, researchers should be cautious about the aforementioned approach. It is critical to consider whether there are meaningful, well-defined subgroups within the continuous variable. For example, dichotomizing some continuous variables (e.g., income, such that one group has income below the poverty line and the other above) might produce meaningful subgroups for assessing treatment effects in some contexts. However, there is danger in dichotomizing a continuous variable without meaningful cut points (Altman & Royston, 2006; Royston et al., 2006). For example, if we were hoping to determine whether a treatment worked for both younger and older adults and split our groups at age 60 years (i.e., people 60 years and younger in the younger group, those older than 60 years in the older group), the oldest members of the younger group and the youngest members of the older group would be more likely to experience treatment effects similar to each other's than to those of the average-age members of their assigned groups. Thus, dichotomizing age would be inappropriate, and an alternative analysis strategy is required.

For continuous potential moderators, a more appropriate approach might be to estimate a single model including all participants and the interaction effect and then to use the model output, rather than statistical significance, to generate predictions of the treatment effect on the outcome at specified values of the moderator (e.g., for older and younger participants). To support meaningful estimates, the continuous moderator can be centered at different values to obtain treatment effects at those values. Resampling methods or Bayesian approaches can be used to develop confidence intervals supporting estimation of the treatment effects. Interested readers can learn more in Stephan et al. (2009) and Wasserman (2000).

Is It a Mediator or a Moderator?

How do you know if your variable should be examined as a mediator or a moderator? The causal assertion differentiates the two (Field-Fote, 2019; Hayes & Rockwood, 2017). A mediator lies on the causal pathway between X (the active ingredient or treatment) and Y (the target or outcome). A moderator, by contrast, affects the relationship between X and Y (e.g., changes the magnitude or direction of the effect) but does not form part of the causal chain linking them. Importantly, a mediator must vary with the independent variable or active ingredient (Hayes & Rockwood, 2017). Therefore, stable characteristics such as sex or ethnicity are never mediators, as they cannot be caused by or vary with the independent variable, but they might be important moderators. However, constructs such as behaviors, symptoms, and functional status often are potential mediators, but they could be moderators as well. In our example, traumatic brain injury may affect the amount that a person exercises, meaning that it could lie on the causal pathway as a mediator. If findings indicate that traumatic brain injury does not affect exercise (i.e., we do not find support for the causal assertion), we may then shift our research question to consider whether exercise might be a moderator. For instance, we may ask if exercise protects against the detrimental effects of brain injury on memory (i.e., if people with brain injuries who exercise more do better on memory tests than people with brain injuries who do not). In this case, exercise is not a mediator but rather a moderator providing context to the relationship between brain injury and memory.

Whether a specific behavior, symptom, or functional state is a potential mediator should be determined by the existing literature, prior research, and, most importantly, theory. It is certainly possible for mediators and moderators to coexist in the same model or for there to be multiple mediators of the relationship between two variables (e.g., for both diet and exercise to mediate the relationship between traumatic brain injury and memory; Kraemer et al., 2001; Preacher & Hayes, 2008b). Larger studies are required to test these interactions, although preliminary analyses as described here may support the design of larger trials assessing interacting mediators and moderators (Kent et al., 2010). Discussing analytic approaches for these multicomponent models is beyond the scope of this tutorial, but researchers may consider building multicomponent models in some cases when addressing the complexities present in many cognitive-communication disorders.

A Note on Statistical Power, Interpretation of Findings, and Team Science

A central goal of this tutorial was to describe practical first steps for conducting preliminary explorations of mediation and moderation in smaller samples that may be underpowered to detect statistically significant effects. We have described these preliminary approaches within the conventional frequentist framework of assessing the statistical significance of treatment effects, although we have noted how Bayesian modeling may contribute to exploratory mediation and moderation analyses. There are broader debates about interpretation within frequentist statistics (e.g., treating p = .053 as meaningfully different from p = .048 is just as problematic as forming arbitrary subgroups from a continuous moderator; Wasserstein et al., 2019). Overinterpreting findings as significant or not significant based on p values in the context of small and heterogeneous samples is problematic, and Bayesian approaches may be better suited to address these issues. However, until those methods become more commonly understood, we hope this tutorial can guide exploratory examinations within the frequentist framework.

Furthermore, it is essential that researchers be cautious and intentional about selecting a potential mediator or moderator for exploration. A study with a small sample is not designed to test multiple mediators or multiple moderators. Mediator selection should be guided by theory and the treatment design; our goal was not to imply that one should “look for” potential mediators but rather to support examination of whether the hypothesized mediator was affected by the treatment and affects the outcome of interest. Similarly, when parsing heterogeneity, it may be tempting to examine multiple subgroups or moderators separately and even to test three-way (i.e., “higher order”) interaction terms. Following these avenues can lead to errant conclusions and uninterpretable results, which runs counter to the goal of generating hypotheses worthy of testing in larger studies to accelerate meaningful findings and improve treatment.

Given the potential to inadvertently sample an idiosyncratic subgroup of a population with a smaller sample size (Covington & Duff, 2020), these preliminary approaches are meant to inform the development and refinement of interventions for testing in larger study groups. Taking a stepwise approach in which a treatment is first refined using preliminary analyses may allow researchers who work with smaller sample sizes to parse heterogeneity and direct resources for larger scale studies toward well-developed interventions. However, the first steps of the approach are not substitutes for the last (i.e., formal tests of mediation and moderation), and conducting preliminary analyses without careful follow-up could further contribute to replication challenges by sampling subsets of heterogeneous populations (Covington & Duff, 2020). Consequently, it may be prudent to combine preliminary mediation and moderation analyses with a team science approach in some cases. For example, laboratories conducting preliminary analyses with smaller sample sizes could pool information and data with larger initiatives aimed at recruiting a more representative sample of a given population (e.g., large and well-designed registries for patients with traumatic brain injury; Duff et al., 2022). In addition to sharing data, it would also be important to ensure that analyses from different laboratories are reproducible, to better compare and resolve discrepancies between laboratories. By combining well-matched and practical analysis approaches with a collaborative model, we stand to make significant gains in improving treatment precision for the management of a variety of cognitive-communication disorders.

For Clinicians: Interpreting the Results of Mediation and Moderation Analyses for Clinical Practice

Mediation analyses tell clinicians where to focus to improve the results of their interventions (Hayes & Rockwood, 2017; Kraemer et al., 2001). For example, if a manipulable factor such as exercise proved to be a mediator, or mechanism of action, on the causal pathway between traumatic brain injury and memory, clinicians might focus on increasing exercise to improve memory performance rather than attempting restorative memory treatments with limited proven efficacy (Lu et al., 2012). Understanding the mechanisms of action that determine how treatments work therefore gives clinicians the tools to adjust the focus of those treatments in real-world clinical settings.

A moderator, by contrast, tells clinicians who should get a treatment and under what circumstances (Hayes & Rockwood, 2017; Kraemer et al., 2001). This supports application of the principles of precision medicine to improve the efficacy of cognitive-communication treatment plans. In our example, our hypothetical findings would provide current “best evidence” to support the conclusion that clinicians should use our hypothetical memory treatment in people who do not have a co-occurring diagnosis of depression but consider a different treatment for people who have depression and do not receive the same benefit. Furthermore, we can use moderation analyses to ensure that our treatments work across groups. For example, if we determine that a treatment works for people with more educational attainment but not less, we might consider altering our materials so that they are digestible for a broader audience. This approach can increase our focus on equity by ensuring that our treatments do not widen and perpetuate existing disparities by benefiting least the persons already at risk for worse outcomes.

When a study's analytic plan does not consider individual differences, it is unclear whether subgroups differed in their response to a given treatment. In studies with high variability in treatment response within the sample, moderation analyses can help determine whether individual differences or meaningful subgroups drive part of that variability (Hayes & Rockwood, 2017). Designing an analytic plan that takes individual differences into account represents a key area for collaboration between researchers and clinicians, as clinicians may provide valuable insights into which individual differences are most meaningful and should be analyzed as potential drivers of treatment outcomes. Collaborating on these new analytic approaches could guide and advance thinking by reducing the research-to-practice gap in cognitive-communication disorders (Douglas & Burshnic, 2019; Olswang & Prelock, 2015).

Mediation and Moderation Models as Part of a New Approach to Heterogeneity in Cognitive-Communication Rehabilitation Research

In a field that has made limited progress in developing successful interventions in the last several decades, it is critical that we take a new approach to make sense of clinical–translational research results and to improve the precision of cognitive-communication rehabilitation research. Preliminary mediation and moderation approaches using path analysis and subgroup analysis may be more attainable for researchers who work with smaller sample sizes and can underlie both improved focusing of resources for larger studies and collaboration across groups to better parse heterogeneity in larger study samples. Taking a new statistical approach using mediation and moderation could form part of a focused analytic framework that leverages heterogeneity and may lead to much-needed breakthroughs that result in treatments that work in diverse clinical populations with cognitive-communication disorders.

Acknowledgments

This work was made possible by National Institutes of Health (NIH) Grants F31 DC019555 awarded to Emily L. Morrow and R01 NS110661 awarded to Melissa C. Duff. This work was also supported in part by the Vanderbilt Clinical and Translational Science Awards Grant UL1TR002243 from the National Center for Advancing Translational Sciences/NIH. Emily L. Morrow's time was supported in part by grant number T32 HS026122 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.

Appendix

Glossary

Causal assertion: The assertion that the independent variable (or active ingredient) is a theoretical cause of the mediator (or mechanism of action), which, in turn, is causally associated with the dependent variable (or target). The causal assertion is necessary to consider a variable a mediator.

Covariate: A variable that affects the target/dependent variable but is not related to the independent variable. Covariates are included in models to enhance precision because they remove the variability in the outcome associated with the covariate, making it easier to identify the target effect.

Confounder: A variable that is causally associated with both the independent and dependent variables but does not drive the association between them. Researchers often adjust for confounders in linear models to remove their associated variability from the outcome.

Mediator: A mechanistic variable driving the association between the independent and dependent variables (or between the active ingredient and the target). By definition, a mediator must lie on the causal pathway, such that the independent variable is causally associated with the mediator, which, in turn, is causally associated with the dependent variable. Mediation analyses help us identify the mechanisms that determine which treatments do or do not work.

Moderator: A variable that determines the context (under what circumstances, or for what types of people) in which an effect exists or does not and in what magnitude. A moderator does not cause the association between the independent and dependent variables (i.e., does not lie on the causal pathway between the treatment and the target), but it interacts with the independent variable to determine the nature of their association. Moderation analyses help us understand the contexts or subgroups in which treatments work best.

Overadjusting: Treating a variable as a covariate (i.e., adjusting for it to remove its variability from the model) when it is on the causal pathway or is of interest in affecting the relationship between independent and dependent variables. Overadjusting risks slowing progress in clinical–translational research when we may instead gain valuable information by analyzing variables of interest as mediators or moderators.

Footnote

1. Where possible, we have adopted terminology from the Rehabilitation Treatment Specification System (van Stan et al., 2021) in the rest of this tutorial. This system is tripartite: The patient function that is to be changed by an intervention is the target, the things that the clinician does to change the target are active ingredients, and mechanisms of action are the causal links between the clinician's behavior and the target (i.e., how the intervention is hypothesized to work).

References

  1. Altman, D. G. , & Royston, P. (2006). The cost of dichotomising continuous variables. British Medical Journal, 332(7549), 1080. https://doi.org/10.1136/bmj.332.7549.1080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Ananth, C. V. , & Schisterman, E. F. (2017). Confounding, causality, and confusion: The role of intermediate variables in interpreting observational studies in obstetrics. American Journal of Obstetrics & Gynecology, 217(2), 167–175. https://doi.org/10.1016/j.ajog.2017.04.016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Austin, P. C. , & Steyerberg, E. W. (2015). The number of subjects per variable required in linear regression analyses. Journal of Clinical Epidemiology, 68(6), 627–636. https://doi.org/10.1016/j.jclinepi.2014.12.014 [DOI] [PubMed] [Google Scholar]
  4. Bloom, H. S. , Richburg-Hayes, L. , & Black, A. R. (2007). Using covariates to improve precision for studies that randomize schools to evaluate educational interventions. Educational Evaluation and Policy Analysis, 29(1), 30–59. https://doi.org/10.3102/0162373707299550 [Google Scholar]
  5. Burt, D. B. , Zembar, M. J. , & Niederehe, G. (1995). Depression and memory impairment: A meta-analysis of the association, its pattern, and specificity. Psychological Bulletin, 117(2), 285–305. https://doi.org/10.1037/0033-2909.117.2.285 [DOI] [PubMed] [Google Scholar]
  6. Contaldo, A. , Colombi, C. , Pierotti, C. , Masoni, P. , & Muratori, F. (2020). Outcomes and moderators of early start Denver model intervention in young children with autism spectrum disorder delivered in a mixed individual and group setting. Autism, 24(3), 718–729. https://doi.org/10.1177/1362361319888344 [DOI] [PubMed] [Google Scholar]
  7. Covington, N. V. , & Duff, M. C. (2020). Heterogeneity is a hallmark of traumatic brain injury, not a limitation: A new perspective on study design in rehabilitation research. American Journal of Speech-Language Pathology, 30(2S), 974–985. https://doi.org/10.1044/2020_AJSLP-20-00081 [DOI] [PubMed] [Google Scholar]
  8. Dahdah, M. N. , Barnes, S. , Buros, A. , Dubiel, R. , Dunklin, C. , Callender, L. , Harper, C. , Wilson, A. , Diaz-Arrastia, R. , Bergquist, T. , Sherer, M. , Whiteneck, G. , Pretz, C. , Vanderploeg, R. D. , & Shafi, S. (2016). Variations in inpatient rehabilitation functional outcomes across centers in the traumatic brain injury model systems study and the influence of demographics and injury severity on patient outcomes. Archives of Physical Medicine and Rehabilitation, 97(11), 1821–1831. https://doi.org/10.1016/j.apmr.2016.05.005 [DOI] [PubMed] [Google Scholar]
  9. Davis, T. E. , Moree, B. N. , Dempsey, T. , Reuther, E. T. , Fodstad, J. C. , Hess, J. A. , Jenkins, W. S. , & Matson, J. L. (2011). The relationship between autism spectrum disorders and anxiety: The moderating effect of communication. Research in Autism Spectrum Disorders, 5(1), 324–329. https://doi.org/10.1016/j.rasd.2010.04.015 [Google Scholar]
  10. Denny, J. C. , & Collins, F. S. (2021). Precision medicine in 2030—Seven ways to transform healthcare. Cell, 184(6), 1415–1419. https://doi.org/10.1016/j.cell.2021.01.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Douglas, N. F. , & Burshnic, V. L. (2019). Implementation science: Tackling the research to practice gap in communication sciences and disorders. Perspectives of the ASHA Special Interest Groups, 4(1), 3–7. https://doi.org/10.1044/2018_pers-st-2018-0000 [Google Scholar]
  12. Duff, M. C. , Morrow, E. L. , Edwards, M. , McCurdy, R. , Clough, S. , Patel, N. N. , Walsh, K. , & Covington, N. V. (2022). The value of patient registries to advance basic and translational research in the area of traumatic brain injury. Frontiers in Behavioral Neuroscience, 16. https://doi.org/10.3389/fnbeh.2022.846919 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Duffy, J. R. , Watt, J. , & Duffy, R. J. (1981). Path analysis: A strategy for investigating multivariate causal relationships in communication disorders. Journal of Speech and Hearing Research, 24(4), 474–490. https://doi.org/10.1044/jshr.2404.474 [PubMed] [Google Scholar]
  14. Erickson, K. I. , Voss, M. W. , Prakash, R. S. , Basak, C. , Szabo, A. , Chaddock, L. , Kim, J. S. , Heo, S. , Alves, H. , White, S. M. , Wojcicki, T. R. , Mailey, E. , Vieira, V. J. , Martin, S. A. , Pence, B. D. , Woods, J. A. , McAuley, E. , & Kramer, A. F. (2011). Exercise training increases size of hippocampus and improves memory. Proceedings of the National Academy of Sciences of the United States of America, 108(7), 3017–3022. https://doi.org/10.1073/pnas.1015950108 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Field-Fote, E. E. (2019). Mediators and moderators, confounders and covariates: Exploring the variables that illuminate or obscure the “active ingredients” in neurorehabilitation. Journal of Neurologic Physical Therapy, 43(2), 83–84. https://doi.org/10.1097/NPT.0000000000000275 [DOI] [PubMed] [Google Scholar]
  16. Fisher, R. A. (1949). The design of experiments (5th ed.). Oliver & Boyd. [Google Scholar]
  17. Grady, C. L. , & Craik, F. I. (2000). Changes in memory processing with age. Current Opinion in Neurobiology, 10(2), 224–231. https://doi.org/10.1016/S0959-4388(00)00073-8 [DOI] [PubMed] [Google Scholar]
  18. Hart, T. , Benn, E. K. T. , Bagiella, E. , Arenth, P. , Dikmen, S. , Hesdorffer, D. C. , Novack, T. A. , Ricker, J. H. , & Zafonte, R. (2014). Early trajectory of psychiatric symptoms after traumatic brain injury: Relationship to patient and injury characteristics. Journal of Neurotrauma, 31(7), 610–617. https://doi.org/10.1089/neu.2013.3041 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Hayes, A. F. , & Rockwood, N. J. (2017). Regression-based statistical mediation and moderation analysis in clinical research: Observations, recommendations, and implementation. Behaviour Research and Therapy, 98, 39–57. https://doi.org/10.1016/j.brat.2016.11.001 [DOI] [PubMed] [Google Scholar]
  20. Hayes, A. F. , & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24(10), 1918–1927. https://doi.org/10.1177/0956797613480187 [DOI] [PubMed] [Google Scholar]
  21. Kent, D. M. , Rothwell, P. M. , Ioannidis, J. P. A. , Altman, D. G. , & Hayward, R. A. (2010). Assessing and reporting heterogeneity in treatment effects in clinical trials: A proposal. Trials, 11, Article 85. https://doi.org/10.1186/1745-6215-11-85 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Kissela, B. M. , Khoury, J. C. , Alwell, K. , Moomaw, C. J. , Woo, D. , Adeoye, O. , Flaherty, M. L. , Khatri, P. , De Los Rios La Rosa, F. , Broderick, J. P. , & Kleindorfer, D. O. (2012). Age at stroke: Temporal trends in stroke incidence in a large, biracial population. Neurology, 79(17), 1781–1787. https://doi.org/10.1212/WNL.0b013e318270401d [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Koopman, J. , Howe, M. , Hollenbeck, J. R. , & Sin, H. P. (2015). Small sample mediation testing: Misplaced confidence in bootstrapped confidence intervals. Journal of Applied Psychology, 100(1), 194–202. https://doi.org/10.1037/a0036635.supp [DOI] [PubMed] [Google Scholar]
  24. Kraemer, H. C. , Stice, E. , Kazdin, A. , Offord, D. , & Kupfer, D. (2001). How do risk factors work together? Mediators, moderators, and independent, overlapping, and proxy risk factors. American Journal of Psychiatry, 158(6), 848–856. https://doi.org/10.1176/appi.ajp.158.6.848 [DOI] [PubMed] [Google Scholar]
  25. Lombardo, M. V. , Lai, M. C. , & Baron-Cohen, S. (2019). Big data approaches to decomposing heterogeneity across the autism spectrum. Molecular Psychiatry, 24(10), 1435–1450. https://doi.org/10.1038/s41380-018-0321-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Lu, J., Gary, K. W., Niemeier, J. P., Ward, J., & Lapane, K. L. (2012). Randomized controlled trials in adult traumatic brain injury. Brain Injury, 26(13–14), 1523–1548. https://doi.org/10.3109/02699052.2012.722257
  27. Maas, A. I. R., Menon, D. K., Lingsma, H. F., Pineda, J. A., Sandel, M. E., & Manley, G. T. (2012). Re-orientation of clinical research in traumatic brain injury: Report of an international workshop on comparative effectiveness research. Journal of Neurotrauma, 29(1), 32–46. https://doi.org/10.1089/neu.2010.1599
  28. MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39(1), 99–128. https://doi.org/10.1207/s15327906mbr3901_4
  29. Mayberry, L. S., Rothman, R. L., & Osborn, C. Y. (2014). Family members' obstructive behaviors appear to be more harmful among adults with type 2 diabetes and limited health literacy. Journal of Health Communication, 19(Suppl. 2), 132–143. https://doi.org/10.1080/10810730.2014.938840
  30. Mayberry, L. S., Schildcrout, J. S., Wallston, K. A., Goggins, K., Mixon, A. S., Rothman, R. L., Kripalani, S., Bachmann, J., Bell, S. P., Donato, K. M., Harrell, F. E., Schnelle, J. F., Vasilevskis, E. E., Cawthon, C., & Nwosu, S. K. (2018). Health literacy and 1-year mortality: Mechanisms of association in adults hospitalized for cardiovascular disease. Mayo Clinic Proceedings, 93(12), 1728–1738. https://doi.org/10.1016/j.mayocp.2018.07.024
  31. Miočević, M., Gonzalez, O., Valente, M. J., & MacKinnon, D. P. (2018). A tutorial in Bayesian potential outcomes mediation analysis. Structural Equation Modeling, 25(1), 121–136. https://doi.org/10.1080/10705511.2017.1342541
  32. Olswang, L. B., & Prelock, P. A. (2015). Bridging the gap between research and practice: Implementation science. Journal of Speech, Language, and Hearing Research, 58(6), S1818–S1826. https://doi.org/10.1044/2015_JSLHR-L-14-0305
  33. Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. https://doi.org/10.3758/BF03206553
  34. Preacher, K. J., & Hayes, A. F. (2008a). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40(3), 879–891. https://doi.org/10.3758/BRM.40.3.879
  35. Preacher, K. J., & Hayes, A. F. (2008b). Contemporary approaches to assessing mediation in communication research. In Hayes A. F., Slater M. D., & Snyder L. B. (Eds.), The SAGE sourcebook of advanced data analysis methods for communication research (pp. 13–54). SAGE Publications, Inc. https://doi.org/10.4135/9781452272054.n2
  36. Royston, P., Altman, D. G., & Sauerbrei, W. (2006). Dichotomizing continuous predictors in multiple regression: A bad idea. Statistics in Medicine, 25(1), 127–141. https://doi.org/10.1002/sim.2331
  37. Salkind, N. J. (2010). Covariate. In Encyclopedia of research design. SAGE Publications, Inc. https://doi.org/10.4135/9781412961288
  38. Schisterman, E. F., Cole, S. R., & Platt, R. W. (2009). Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology, 20(4), 488–495. https://doi.org/10.1097/EDE.0b013e3181a819a1
  39. Schoemann, A. M., Boulton, A. J., & Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8(4), 379–386. https://doi.org/10.1177/1948550617715068
  40. Sievers, S. B., Trembath, D., & Westerveld, M. (2018). A systematic review of predictors, moderators, and mediators of augmentative and alternative communication (AAC) outcomes for children with autism spectrum disorder. Augmentative and Alternative Communication, 34(3), 219–229. https://doi.org/10.1080/07434618.2018.1462849
  41. Spell, L. A., Richardson, J. D., Basilakos, A., Stark, B. C., Teklehaimanot, A., Hillis, A. E., & Fridriksson, J. (2020). Developing, implementing, and improving assessment and treatment fidelity in clinical aphasia research. American Journal of Speech-Language Pathology, 29(1), 286–298. https://doi.org/10.1044/2019_AJSLP-19-00126
  42. Stephan, K. E., Penny, W. D., Daunizeau, J., Moran, R. J., & Friston, K. J. (2009). Bayesian model selection for group studies. NeuroImage, 46(4), 1004–1017. https://doi.org/10.1016/j.neuroimage.2009.03.025
  43. Thompson, H. J., McCormick, W. C., & Kagan, S. H. (2006). Traumatic brain injury in older adults: Epidemiology, outcomes, and future implications. Journal of the American Geriatrics Society, 54(10), 1590–1595. https://doi.org/10.1111/j.1532-5415.2006.00894.x
  44. Tofighi, D., & MacKinnon, D. P. (2016). Monte Carlo confidence intervals for complex functions of indirect effects. Structural Equation Modeling, 23(2), 194–205. https://doi.org/10.1080/10705511.2015.1057284
  45. van Stan, J. H., Whyte, J., Duffy, J. R., Barkmeier-Kraemer, J. M., Doyle, P. B., Gherson, S., Kelchner, L., Muise, J., Petty, B., Roy, N., Stemple, J., Thibeault, S., & Tolejano, C. J. (2021). Rehabilitation treatment specification system: Methodology to identify and describe unique targets and ingredients. Archives of Physical Medicine and Rehabilitation, 102(3), 521–531. https://doi.org/10.1016/j.apmr.2020.09.383
  46. Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44(1), 92–107. https://doi.org/10.1006/jmps.1999.1278
  47. Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a world beyond "p < 0.05". The American Statistician, 73(Suppl. 1), 1–19. https://doi.org/10.1080/00031305.2019.1583913
  48. Yuan, Y., & MacKinnon, D. P. (2009). Bayesian mediation analysis. Psychological Methods, 14(4), 301–322. https://doi.org/10.1037/a0016972
  49. Zhao, X., Lynch, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197–206. https://doi.org/10.1086/651257

Articles from Journal of Speech, Language, and Hearing Research : JSLHR are provided here courtesy of American Speech-Language-Hearing Association