Abstract
Diverse fields rely on the development of effective interventions to change human behaviors, such as following prescribed medical regimens, engaging in recommended levels of physical activity, obtaining vaccinations that promote individual and public health, and getting a healthy amount of sleep. Despite recent advances in behavioral intervention development and behavior-change science, progress is stalled by the lack of a systematic approach to identifying and targeting the mechanisms of action that underlie successful behavior change. Further progress in behavioral intervention science requires that mechanisms be universally pre-specified, measurable, and malleable. We developed the CheckList for Investigating Mechanisms in Behavior-change Research (CLIMBR) to guide basic and applied researchers in planning and reporting manipulations and interventions aimed at understanding the underlying active ingredients that do—or do not—drive successful change in behavioral outcomes. We report the rationale for creating CLIMBR and detail the processes of its development and refinement based on feedback from behavior-change experts and NIH officials. The final version of CLIMBR is included in full.
Keywords: Reporting guidelines, Behavior change, Mechanisms of action
Introduction
Over the past two decades, research on human behavior change has advanced through the systematic development and coding of behavior change techniques (BCTs; Michie et al., 2013), the continued refinement of new, objective measures of health behaviors (e.g., sedentary behavior, dose-missing medication nonadherence, ingestive behavior; Diaz et al., 2017; Kronish et al., 2021; Rollo et al., 2016), and the rapid expansion of promising internet-based interventions to change human health behaviors (Webb et al., 2010). Despite these notable developments, however, research that aims to understand the mechanisms underlying successful behavior change is still not universally conducted or reported in ways that allow systematic, cumulative advancement of scientific knowledge (Sumner et al., 2018). Indeed, much of behavior change research has been plagued by a failure to reveal exactly which mechanisms make successful interventions effective (Rothman & Sheeran, 2021).
During the last decade, many health behavior researchers have reasoned that investigating mechanisms—whether neural, cognitive, interpersonal, environmental, systemic, or otherwise—is necessary for developing reliably effective interventions (e.g., group-based health behavior change interventions; Borek et al., 2019; action planning and implementation intention interventions; Hagger & Luszczynska, 2014; social-cognitive mechanisms underlying health behavior change interventions; Schwarzer et al., 2011). First, a mechanistic understanding is crucial for determining whether the theories that motivate the selection of specific interventions or intervention components are empirically supported. Relatedly, a mechanistic understanding helps to delineate similarities and differences between theories that support the same intervention for potentially different reasons. Even interventions that are implemented with perfect fidelity will not be effective if the assumed theory-driven mechanisms are ultimately irrelevant to changing the outcome of interest (Astbury & Leeuw, 2010). A second reason for focusing on mechanisms throughout intervention development and testing is the high degree of heterogeneity in the observed efficacy of behavioral interventions, very little of which is explained by measuring moderators alone (i.e., between-subjects factors responsible for effect modification; see Rothman & Sheeran, 2021, for a review of 46 meta-analyses). Because the field of behavioral medicine research has not yet adopted the proposed mechanistic focus, we rarely know why health behavior interventions fail—or even why they work, when they do, and for whom they will work in future trials. Mechanism-focused behavioral science allows us to ask questions such as: Are the observed differences in intervention efficacy due to a failure to engage the mechanisms that underlie successful behavior change? Or, conversely, do some mechanisms precipitate behavior change in some samples but not others?
The literature on medication adherence illustrates this deficit and the need for scientists to adopt new practices. Proper medication adherence is reliably associated with the prevention of adverse medical outcomes (e.g., major adverse cardiovascular events, mortality) and with substantially reduced health care costs (Bitton et al., 2013; Chowdhury et al., 2013). Thus, improving adherence to prescribed medications is a ripe target for improving health outcomes through behavioral intervention. Indeed, by 2018, the National Institutes of Health (NIH) had funded 62 randomized controlled trials (RCTs) for which medication adherence was a primary (n=18 studies) or secondary (n=44 studies) outcome. However, only 2 of those trials (~3%) measured a hypothesized mechanism of action (Edmondson et al., 2018).
The NIH Science of Behavior Change (SOBC) program promotes widespread adoption of the experimental medicine approach: identifying mechanisms of behavior change, optimizing interventions based on their ability to influence the identified mechanisms, testing hypothesized mechanistic pathways in subsequent RCTs, and leveraging mechanistic knowledge to efficiently disseminate interventions that are shown to improve human health behaviors through known mechanisms (Nielsen et al., 2018; Riddle & Science of Behavior Change Working Group, 2015). Similar principles apply when designing interventions to improve economic behaviors, social behaviors, and more. Only through broad adoption of the experimental medicine approach, consistent measurement of mechanisms using valid instruments, and consistent reporting of positive and null findings can the field expect to reliably deliver interventions that work and can be deployed at scale.
Each discovery of an underlying mechanism that, when successfully intervened upon, yields observable behavior change must be reported consistently so that other scientists can build upon it. In a field that prioritizes mechanisms, each study contributes to a cumulative science that accelerates effective intervention development, either by better exploiting established mechanistic pathways or by expanding an intervention’s influence to activate multiple mechanistic pathways concurrently. The present manuscript outlines the development of a resource designed to address this need: the CheckList for Investigating Mechanisms in Behavior-change Research (CLIMBR).
CLIMBR was created to explicate and formalize the necessary steps for rigorously applying the experimental medicine approach to behavior change research, according to the NIH SOBC program’s guiding principles. The overarching goal of this effort was to provide a helpful resource to guide the conduct and reporting of mechanism-focused research and to maximize precision, consistency, and transparency in results reporting. The development of the checklist was informed by six core SOBC principles: (1) putative mechanisms of behavior change should be identified (i.e., hypothesized a priori) explicitly and logically, (2) mechanisms should be measured (not simply invoked as likely or assumed drivers of change), (3) assays (i.e., measures) of mechanisms should be psychometrically sound, (4) all empirical results related to mechanisms (including null findings) should be shared with the scientific community, (5) a putative mechanism should be prioritized for experimental manipulation to engage it once (a) a valid measure of the mechanism that could reasonably be changed by manipulation/intervention is identified and (b) observed change in the valid measure of that mechanism has been reliably associated with change in a valid measure(s) of behavior in observational studies, and (6) a putative mechanism of behavior change will be considered a demonstrated driver of behavior change if (a) a valid measure of that mechanism can be reliably engaged (i.e., changed in the predicted direction) by an intervention component and (b) change in the measured mechanism subsequently and reliably leads to observable change in the predicted direction on a valid measure of behavior, in at least one population.
The CLIMBR checklist is designed to improve the quality of scientific research and its dissemination, similar to other checklists. For example, the Consolidated Standards of Reporting Trials (CONSORT; Altman et al., 2001; Schulz et al., 2010), the Strengthening the Reporting of Observational Studies in Epidemiology checklist (STROBE; Von Elm et al., 2007), and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA; Page et al., 2021) are widely used checklists that address, respectively, the reporting of randomized trials (CONSORT), observational studies (STROBE), and systematic reviews and meta-analyses (PRISMA). These existing checklists have improved research by significantly increasing the completeness of reported information (Liu et al., 2019; Turner et al., 2012), as well as standardizing study designs and formalizing best practices. It is our hope that CLIMBR will similarly increase the rigor and reporting of mechanism-focused behavior change research.
Checklist Development
The checklist was developed in three stages. First, a CLIMBR executive committee was assembled with five members drawn from the NIH SOBC Resource and Coordinating Center and its multiple working groups. The CLIMBR executive committee (the authors of the present manuscript) drafted the initial version of the checklist according to the SOBC principles outlined above. It was determined at the outset of development that the checklist should be compatible with the NIH Stage Model for intervention development (Onken, 2019), given the stage model’s core emphasis on incorporating basic science questions about mechanisms into every stage of clinical science research. However, it was also decided that the checklist items should not dictate exactly how the stage model must be mapped onto mechanistic research, both because not all mechanistic behavioral research is explicitly intervention-focused at inception and to prioritize ease of use for all researchers.
Second, the CLIMBR committee revised and expanded the checklist based on feedback from discussions with SOBC’s NIH program officer and other NIH officials associated with the initiative. The checklist was expanded to encompass not only RCTs, but also experimental study designs relevant to modulating potential mechanisms of behavior change, as well as observational or correlational study designs. During this phase, CLIMBR acquired a multiple-column structure to accommodate basic and applied researchers from a wide variety of disciplines. The first column corresponds to “studies that investigate the effect(s) of an intervention or manipulation (X) on a putative mechanism of behavior change (M), without measuring a behavior change outcome (Y).” This category could pertain to psychological scientists who conduct manipulations intended to influence psychological outcomes that may be considered potential mechanisms of behavior change (e.g., mindfulness training intended to increase self-compassion). The second column corresponds to “studies that investigate the association between a putative mechanism of behavior change (M) and a behavior change outcome (Y), without including an intervention or manipulation (X)” (e.g., examining the relationship between stress reactivity and nicotine use). The third and final column corresponds to “studies that investigate the effect(s) of an intervention or manipulation (X) on a behavior-change outcome (Y) and test whether a putative mechanism of behavior change (M) can explain these changes in behavior” (e.g., a randomized controlled trial of the effects of an episodic future thinking intervention on seatbelt use as mediated by future time perspective).
The third stage of development began with an open-comment period. The initial open-comment phase lasted from June 10, 2022 to July 8, 2022. A link to the checklist and an open-comment portal (hosted at Boston University via REDCap) was distributed directly via email to 18 experts in behavior change research to invite their written feedback. These invited experts included editors at six relevant journals (in alphabetical order: Annals of Behavioral Medicine, Behavior Therapy, General Hospital Psychiatry, Health Psychology, Social Science & Medicine, and Translational Behavioral Medicine) and members of the Behavioral Medicine Research Council. (Note that only a subset of the invited experts at the organizations listed above chose to provide feedback.) An invitation to a second open-comment period, from July 8, 2022 to July 29, 2022, was extended to researchers via Twitter (the SOBC account, shared by SOBC affiliates) and via the SOBC website to capture a broader range of stakeholder feedback. All commenters who offered feedback and suggestions via the open-comment portal were given the option to respond anonymously or to provide their names. The executive committee discussed the feedback point by point and revised the checklist in line with the submitted suggestions. The content of the feedback and its incorporation into the finalized checklist are described below.
Expert Feedback
The open-comment period yielded a total of 26 comments: 18 from invited experts and 8 from NIH program officials. In addition to positive comments supporting the mission of CLIMBR and selected details of the approach, commenters highlighted the following concerns and suggestions for revision, which can be grouped into three broad categories. First, suggestions were made about how to use the checklist; these suggestions affected the content of the preamble to the checklist or the column headings. Examples under this category include adding the labels “X” (intervention or manipulation), “M” (mechanism), and “Y” (behavioral outcome) to the checklist columns, clarifying whether a mechanism can itself be a behavior, clarifying the wide scope of potential behavioral outcomes, defining the meaning of “interventions” and “manipulations” in the context of the checklist, and encouraging researchers to submit their completed CLIMBR and CONSORT checklists together during the review process of a manuscript, if appropriate. A second group of comments referred to elements that should ideally be included in either the analytic design or the reported findings. These comments affected the items in the checklist’s results section, for example, the recommended use of random-effects models, reporting not only between-group effects but also within-subject changes, reporting the success rate difference when appropriate, and reporting not only mediation effects but also treatment-by-mediator interaction effects. A third category of comments included miscellaneous suggestions that affected checklist items outside the results section. Examples included specifying a potential behavioral outcome of interest in a manuscript’s introduction even if the reported study did not measure any behaviors, explicitly considering (e.g., in a manuscript’s discussion section) whether one or more unmeasured constructs may have been partially responsible for any observed effects, and recommending that manuscript titles be informative about results.
Finalization of the Checklist
The members of the CLIMBR executive committee internally drafted and circulated suggestions for changes based on the assembled comments, along with a written rationale for the wording of each suggested addition or edit. The committee reached consensus by sharing revised versions of the document and through electronic discussions of the suggested edits. Twenty-five of the 26 comments (96%) resulted in edits to the checklist. The only comment that did not result in an edit was a concern from one reviewer that CLIMBR may cause manuscript or grant reviewers, or scientists themselves, to view other (i.e., non-CLIMBR) approaches to behavior-change research negatively. Although we appreciate this concern, we believe that the checklist does not merit a special notice of caution in this regard. If research pertains to proposed mechanisms that are not modifiable, then the experimental medicine approach—and thus the checklist—would not apply. However, it is our hope that any research on putatively changeable mechanisms would benefit from the application of this framework, because the goal of CLIMBR is to improve behavior change research on potentially modifiable mechanisms. The full version of the final checklist is provided in Appendix A. The checklist is also available on the SOBC website (https://scienceofbehaviorchange.org/resources/).
Future directions
An important focus for the future will be to ensure that journal editors, grant reviewers, and behavior-change researchers working in diverse disciplines are aware of CLIMBR, its rationale, and its use cases. To achieve this goal, partnerships between SOBC and relevant research organizations (e.g., behavioral medicine research groups, journals) may be established to encourage the use of the checklist by scientists writing grant applications, designing studies, and disseminating findings relevant to understanding the drivers of behavior change. We plan to share an instructional video demonstrating how basic scientists and applied researchers can apply CLIMBR to their own research domains using a variety of study designs (e.g., lab-based experiments intended to modulate psychological constructs, randomized controlled trials targeting mechanisms and the real-world health behaviors they drive). CLIMBR will also be disseminated via conference presentations and workshops to reach broad audiences.
Conclusions
CLIMBR is an easy-to-use, comprehensive checklist that details the components researchers should include to conduct mechanism-focused behavioral intervention science that advances the field. It specifies the items to be reported in manuscripts to demonstrate that behavior-change research has been conducted rigorously and transparently, in line with a focus on understanding the modifiable mechanisms that underlie behavior change. Applying CLIMBR is also useful for enhancing the rigor and competitiveness of grant applications that address behavior-change research questions, as it can help ensure that grant writers describe all necessary features of impactful mechanistic research designs. If broadly employed, CLIMBR will advance the fundamental goal of the NIH-funded Science of Behavior Change program (SOBC; https://scienceofbehaviorchange.org; https://www.nia.nih.gov/research/dbsr/science-behavior-change-sobc): a generational boost to the rigor, reproducibility, and cumulative impact of behavioral science, achieved through the field’s adoption of research practices that reveal the underlying mechanisms of behavior change in ways that allow for steady accumulation of knowledge, easy harmonization of data, and growing efficiency, efficacy, and scalability of interventions.
Acknowledgements
This research was funded by the National Institute on Aging (U24AG052175). The authors thank the members of the Strategy and Innovation Team for the Science of Behavior Change (SOBC) who provided valuable feedback that led to the development of this work (in alphabetical order): Warren Bickel, Patrick Bissett, Karina Davidson, Amy Gorin, Chanita Hughes-Halbert, Blair Johnson, Jonathan King, Erica Rose Scioli, Luke Stoeckel, and Tracey Wilson. We also thank (in alphabetical order) Will Aklin, Elaine Collier, Christine Hunter, Chandra Keller, Rosalind King, Lis Nielsen, Lisa Onken, Melissa Riddle, Janine Simmons, and Allie Walker at the National Institutes of Health. We are grateful to Marie Parsons for technical assistance with gathering the open-comment feedback and to Lilly Derby, Deanna Lewis, and Issa Kahn for administrative assistance in support of this project. We thank Luis Blanco for creative assistance in developing the checklist’s name, acronym, and logo.
Appendix
Purpose of this checklist:
The goal of developing and optimizing interventions intended to change human behavior may be more effectively realized when researchers study and report on potential mechanisms of behavior change in a standardized way. The NIH Science of Behavior Change (SOBC) is a trans-NIH initiative focused on the mechanisms of behavior change. SOBC embraces five core principles. First, identified mechanisms should be grounded in theory and/or prior empirical work. Second, mechanisms cannot be tested unless they are measured. Third, measures of mechanisms should be valid and reliable ways of measuring the construct of interest (i.e., good psychometric properties are needed). Fourth, transparent sharing of scientific findings—both positive and negative—promotes progress in mechanism-focused behavior-change research. Finally, a putative mechanism shows evidence of explaining behavior if all of the following are true: (A) an intervention can affect a measure of the mechanism, (B) the measured mechanism is associated with a behavior-change outcome, and (C) the intervention-related changes in the measured mechanism are associated with changes in the behavior. The CheckList for Investigating Mechanisms in Behavior-change Research (CLIMBR) was created as part of the SOBC initiative to serve as a resource for applied and basic behavioral scientists who study mechanisms of behavior change. CLIMBR is an easy-to-use checklist of guidelines for reporting the findings of behavioral intervention development studies to advance mechanism-focused science. Each item (row) in the checklist reflects one or more of the five SOBC principles noted above, and each of the three columns is applicable to a different behavior-change research design. For ease of use, the sections of CLIMBR reflect the standard organization of a scientific manuscript. For the purposes of this checklist, behavioral outcomes include typical health behaviors (e.g., physical activity, medication adherence, and sleep), but this checklist may also be applied to research on other outcomes of interest to behavioral health researchers (e.g., moods, emotions, cognitions, physical states). Mechanisms include any potentially modifiable and measurable constructs that are hypothesized to drive behavior change. Manipulations and interventions include any procedures designed to change a potential mechanism and/or a behavioral outcome.
To use CLIMBR, identify which column corresponds to the type of study you will report, and follow the instructions for that column’s items only. Some items span all three columns.
Column A (X → M) should be used to report the results of studies that investigate the effect(s) of an intervention or manipulation (X) on a putative mechanism of behavior change (M), without measuring a behavior change outcome (Y).
Column B (M → Y) should be used to report the results of studies that investigate the association between a putative mechanism of behavior change (M) and a behavior change outcome (Y), without including an intervention or manipulation (X).
Column C (X → M → Y) should be used to report the results of studies that investigate the effect(s) of an intervention or manipulation (X) on a behavior change outcome (Y) and test whether a putative mechanism of behavior change (M) can explain these changes in behavior via a test of mediation.
To facilitate the manuscript review process, the NIH SOBC program recommends that authors include the completed checklist together with their submitted manuscripts (in addition to the CONSORT diagram, if appropriate). If particular items cannot be satisfied, the 'Reported on page #' field should be marked “N/A,” and the authors should briefly explain the reasons for not adhering to the guidelines (e.g., space limitations in the abstract).
| Section/topic | # | A: For studies that investigate the effect(s) of an intervention or manipulation (X) on a putative mechanism of behavior change (M), without measuring a behavior change outcome (Y). X → M. Example: a study of the effects of a mindfulness intervention on self-compassion | B: For studies that investigate the association between a putative mechanism of behavior change (M) and a behavior change outcome (Y), without including an intervention or manipulation (X). M → Y. Example: a study of the relationship between stress reactivity and nicotine use | C: For studies that investigate the effect(s) of an intervention or manipulation (X) on a behavior-change outcome (Y) and test whether a putative mechanism of behavior change (M) can explain these changes in behavior. X → M → Y. Example: a randomized controlled trial of the effects of an episodic future thinking intervention on seatbelt use as mediated by future time perspective | Reported on page # |
| TITLE | |||||
| Title | 1 | If space allows, the title should refer to one or more mechanisms of behavior change as well as the intervention or manipulation. If the journal guidelines allow it, then titles that are informative rather than neutral about the study findings should be considered. | If space allows, the title should refer to one or more mechanisms of behavior change. If the journal guidelines allow it, then titles that are informative rather than neutral about the study findings should be considered. | If space allows, the title should refer to one or more mechanisms of behavior change as well as the intervention or manipulation. If the journal guidelines allow it, then titles that are informative rather than neutral about the study findings should be considered. | |
| ABSTRACT | |||||
| Identify mechanism(s) and behavior(s) | 2 | Specify at least one hypothesized mechanism of behavior change, and specify at least one behavior. | |||
| Reporting of intervention-mechanism association | 3 | Report the degree to which the intervention engaged the mechanism. That is, report the effect size that represents the difference between the intervention and control groups in (1) a post-intervention measure of the mechanism and/or (2) a pre-to-post change in the measure of the mechanism. | <Not applicable for this study design> | Report the degree to which the intervention engaged the mechanism. That is, report the effect size that represents the difference between the intervention and control groups in (1) a post-intervention measure of the mechanism and/or (2) a pre-to-post change in the measure of the mechanism. | |
| Reporting of mechanism-behavior change association | 4 | <Not applicable for this study design> | Report the degree to which a measure of an identified mechanism was associated with a behavioral outcome. | For a randomized controlled trial, report the degree to which the intervention-vs-control difference in an identified mechanism was associated with a behavioral outcome. Furthermore, if a mediation test was conducted to assess a potential mechanism’s role in an intervention-behavior association, report the indirect effect (path a*b) from the mediation analysis. | |
| INTRODUCTION | |||||
| Identify mechanism(s) | 5 | Specify a priori at least one hypothesized mechanism of behavior change. Describe the causal model implied by the selected mechanism, as well as the level at which the mechanism is thought to operate in this study (e.g., neural, cognitive, behavioral, interpersonal, policy). If relevant, state whether the present mechanism is thought to work in conjunction with the other mechanisms. | |||
| Refer to a relevant behavioral outcome | 6 | Specify a priori at least one behavioral outcome that is relevant to the hypothesized mechanism(s) of behavior change, even though the present study does not measure a change in behavior. | Specify a priori at least one behavioral outcome that is relevant to the hypothesized mechanism(s) of behavior change. | Specify a priori at least one behavioral outcome that is relevant to the hypothesized mechanism(s) of behavior change. | |
| Provide rationale for mechanism(s) | 7 | Provide clear and appropriate documentation of theory and/or prior evidence that suggests that the mechanism could be engaged by an intervention/manipulation. Mechanism engagement is defined as change in a mechanism that may be attributed to the effects of an intervention/manipulation. If such support is insufficient or if relevant research is currently lacking, then explain the rationale for the selected mechanism(s). | Provide clear and appropriate documentation of theory and/or prior evidence that suggests that the mechanism is associated with a behavioral outcome investigated in the study. If such support is insufficient or if relevant research is currently lacking, then explain the rationale for the selected mechanism(s). | Provide clear and appropriate documentation of theory and/or prior evidence that suggests that (1) the mechanism could be engaged by an intervention and (2) the mechanism is associated with a behavioral outcome investigated in the study. Mechanism engagement is defined as change in a mechanism that may be attributed to the effects of an intervention. If such support is insufficient or if relevant research is currently lacking, then explain the rationale for the selected mechanism(s). | |
| METHOD | |||||
| Construct validity of each mechanism’s measure(s) | 8 | Cite prior research for each of the included measures of the hypothesized mechanism(s) that provides evidence of adequate construct validity. Provide evidence of convergent and divergent validity as available. If evidence of validity is poor or absent for a given measure, then provide a rationale for the inclusion of the particular measure(s) in spite of that limitation. | |||
| Reliability of each mechanism’s measure(s) | 9 | Cite prior research for each of the included measures of the hypothesized mechanism(s) that provides evidence of adequate reliability (e.g., good internal consistency). | |||
| Expected intervention/ manipulation effects on measured mechanism(s) | 10 | Describe the intervention or manipulation to be tested, including active components. Specify how the intervention or manipulation was believed to engage the mechanism. Specify why the control condition was believed not to engage the mechanism. In the case of multiple studied mechanisms or multiple studied interventions/manipulations, describe which mechanism(s) was/were expected to be engaged by which intervention(s)/manipulation(s). | <Not applicable for this study design> | Describe the intervention or manipulation to be tested, including active components. Specify how the intervention or manipulation was believed to engage the mechanism. Specify why the control condition was believed not to engage the mechanism. In the case of multiple studied mechanisms or multiple studied interventions/manipulations, describe which mechanism(s) was/were expected to be engaged by which intervention(s)/manipulation(s). | |
| Behavioral outcome measure | 11 | <Not applicable for this study design> | Describe any behavioral outcome measures included and the measurement properties of each. | Describe any behavioral outcome measures included and the measurement properties of each. | |
| RESULTS | |||||
| Sample size justification | 12 | Report the results of an a priori power analysis to determine the sample size needed to have sufficient statistical power to detect an intervention effect on the measure of each hypothesized mechanism. Provide an effect size justification for each effect used in the power analysis. | Report the results of an a priori power analysis to determine the sample size needed to have sufficient statistical power to detect (1) a meaningful association between an identified mechanism and behavioral outcome and (2) a meaningful association between the degree of change in an identified mechanism and a change in a clinical outcome. Provide an effect size justification for each effect used in the power analysis. | Report the results of an a priori power analysis to determine the sample size needed to have sufficient statistical power to detect: (1) a meaningful association between an identified mechanism and behavioral outcome, (2) an intervention effect on the measure of each hypothesized mechanism, and (3) a meaningful association between the degree of change in an identified mechanism and a change in a clinical outcome. Provide an effect size justification for each effect used in the power analysis. | |
| Measured reliability | 13 | Report the internal consistency reliability using the present study’s data for each measure of each mechanism. | |||
| Measured construct validity | 14 | If relevant data were gathered, report findings related to convergent and divergent validity in the present study for each measure of each mechanism. | |||
| Observed effect size of intervention or manipulation on measured mechanism(s) * | 15 | Report the effect(s) of the intervention on the measure(s) of each of the hypothesized mechanisms. In the case of a randomized controlled trial, report the standardized effect size (e.g., Cohen’s d, Hedges’ g) and its confidence interval comparing the experimental group to the comparison group. If applicable, consider reporting the success rate difference, its confidence interval, and the number-needed-to-treat. If available, also report the within-subjects change in the measured mechanism for each group. | <Not applicable for this study design> | Report the effect(s) of the intervention on the measure(s) of each of the hypothesized mechanisms. In the case of a randomized controlled trial, report the standardized effect size (e.g., Cohen’s d, Hedges’ g) and its confidence interval comparing the experimental group to the comparison group. If applicable, consider reporting the success rate difference, its confidence interval, and the number-needed-to-treat. If available, also report the within-subjects change in the measured mechanism for each group. | |
| Observed effect size of measured mechanism(s) on target behavior * | 16 | <Not applicable for this study design> | Report the association(s) between the measure(s) of each of the hypothesized mechanisms and the target behavior. Report the results of the association, including the effect size (e.g., standardized coefficient) and its 95% confidence interval. | Report the association(s) between the measure(s) of each of the hypothesized mechanisms and the target behavior. Report the results of the association, including the effect size (e.g., standardized coefficient) and its 95% confidence interval. | |
| Observed extent of behavior change associated with mechanism change * | 17 | <Not applicable for this study design> | Report the association between changes in measure(s) of the identified mechanism(s) and changes in at least one behavioral outcome. This test can take multiple forms (e.g., a simple zero-order correlation, an association in a regression model). It should include assessments of change over time (i.e., not measures of the two constructs at a single time point). If appropriate, consider the use of random-effects models to tease apart within-person changes from between-person differences. | Report the association between changes in measure(s) of the identified mechanism(s) and changes in at least one behavioral outcome. This test can take multiple forms (e.g., a simple zero-order correlation, an association in a regression model). It should include assessments of change over time (i.e., not measures of the two constructs at a single time point). If appropriate, consider the use of random-effects models to tease apart within-person changes from between-person differences. | |
| Evidence for mediation by the measured mechanism(s) | 18 | <Not applicable for this study design> | <Not applicable for this study design> | Conduct and report the results of a mediation test to assess the standardized effect size of the indirect effect of each measured mechanism. That is, report the extent to which the intervention’s effect on a target behavior was mediated by the measured mechanism of action. Ideally, the mediation test should model the proposed mediator as change in the measured mechanism and model the outcome as change in the behavior. If a mediation test of change is not possible due to the study design, then a cross-sectional mediation analysis should be reported instead. Care should be taken to conduct and interpret the mediation analysis properly, including accounting for potential confounders using covariates, as appropriate, and assessing possible treatment-by-mediator interactions. | |
| DISCUSSION | |||||
| Consider the intervention’s effect on the mechanism(s) | 19 | Provide an interpretation of the findings that addresses the extent to which the intervention/manipulation in question may have shifted one or more hypothesized mechanisms of interest that are relevant to a target behavioral outcome. Consider the intervention’s characteristics (e.g., dose, frequency, duration). Consider also the time elapsed between the conclusion of the intervention/manipulation and the subsequent assessment time of the measured mechanism (i.e., short- vs. long-term change). Consider and discuss the possibility that one or more unmeasured constructs that may have been correlated with the measured mechanism may have been partially responsible for any observed effects. | <Not applicable for this study design> | Provide an interpretation of the findings that addresses the extent to which the intervention/manipulation in question may have shifted one or more hypothesized mechanisms of action that are relevant to a target behavior. Consider the intervention’s characteristics (e.g., dose, frequency, duration). Consider also the time elapsed between the conclusion of the intervention/manipulation and the subsequent assessment time of the measured mechanism (i.e., short- vs. long-term change). Consider and discuss the possibility that one or more unmeasured constructs that may have been correlated with the measured mechanism may have been partially responsible for any observed effects. | |
| Consider the association(s) of the mechanism(s) with behavior change | 20 | <Not applicable for this study design> | Provide an interpretation of the findings that addresses the extent to which one or more hypothesized mechanisms of interest were associated with change in a target behavior. Consider whether the measured mechanism and the behavior were each assessed via self-report or via differing methodologies. | Provide an interpretation of the findings that addresses the extent to which one or more hypothesized mechanisms of interest were associated with change in a target behavior. Consider whether the measured mechanism and the behavior were each assessed via self-report or via differing methodologies. | |
| Consider the evidence for mediation | 21 | <Not applicable for this study design> | <Not applicable for this study design> | Discuss the strength of evidence (or lack thereof) that each of the measured mechanisms may underlie changes in behavior resulting from effects of the intervention. | |
| OTHER INFORMATION | |||||
| Study protocol | 22 | If a protocol for the study exists (e.g., clinicaltrials.gov, Open Science Framework), then provide the relevant information in the manuscript. Similarly, if a protocol paper has been published, that should also be cited. | |||
Note. The bolded items that are accompanied by the * symbol are the essential components that should be included when reporting findings that investigate potential mechanisms in behavioral research. Illustrative analysis sketches corresponding to items 12, 17, and 18 are provided below; they are examples only and are not part of the checklist.
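As a minimal, hypothetical sketch of the a priori power analysis described in item 12, the following Python code solves for the per-group sample size of a two-arm study designed to detect an assumed intervention effect on a mechanism measure. The planning values (Cohen's d = 0.30, alpha = .05, power = .80) are illustrative assumptions, not recommendations from CLIMBR, and the manuscript would still need to justify the assumed effect size as item 12 requires.

```python
# Hypothetical planning values; substitute estimates justified by prior evidence.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.30,         # assumed standardized effect of X on the mechanism measure (Cohen's d)
    alpha=0.05,               # two-sided type I error rate
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~175 per arm under these assumptions
```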
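Item 17 recommends random-effects models to tease apart within-person change from between-person differences. The sketch below, using simulated data and hypothetical variable names (id, mech, behavior), shows one common way to do this: person-mean centering the mechanism and fitting a random-intercept model so that the within- and between-person associations are estimated separately. It is an illustration under stated assumptions, not a prescribed analysis.

```python
# Simulated longitudinal data with hypothetical variable names; not real study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

n_people, n_obs = 100, 5
person = np.repeat(np.arange(n_people), n_obs)           # person identifier for each row
trait = rng.normal(size=n_people)[person]                # stable between-person differences
mech = trait + rng.normal(scale=0.8, size=person.size)   # occasion-level mechanism score
behavior = 0.5 * (mech - trait) + 0.2 * trait + rng.normal(size=person.size)

df = pd.DataFrame({"id": person, "mech": mech, "behavior": behavior})

# Person-mean centering: the between-person component is each person's mean mechanism
# score; the within-person component is the occasion-level deviation from that mean.
df["mech_between"] = df.groupby("id")["mech"].transform("mean")
df["mech_within"] = df["mech"] - df["mech_between"]

# Random-intercept model estimating within- and between-person effects separately.
model = smf.mixedlm("behavior ~ mech_within + mech_between", df, groups=df["id"]).fit()
print(model.summary())  # report both coefficients with their confidence intervals
```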
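For item 18, the following sketch estimates the indirect effect (path a*b) of a randomized intervention X on a behavior Y through a measured mechanism M, with a percentile bootstrap confidence interval, and shows where a treatment-by-mediator interaction term could be checked. The data are simulated and the variable names are hypothetical; in practice, dedicated mediation software, covariate adjustment for confounders, and models of change (rather than single time points) remain preferable.

```python
# Simulated data with hypothetical variable names (x = condition, m = mechanism, y = behavior).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """Estimate a*b from two OLS models: path a (X -> M) and path b (M -> Y, adjusting for X)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the indirect effect."""
    n = len(y)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample cases with replacement
        draws[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

n = 200
x = rng.integers(0, 2, n).astype(float)                  # 0 = control, 1 = intervention
m = 0.5 * x + rng.normal(size=n)                         # mechanism engaged by the intervention (path a)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)               # behavior driven partly by the mechanism (path b)

ab = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
print(f"indirect effect a*b = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Treatment-by-mediator interaction check (item 18): a non-zero X*M term suggests that the
# mechanism-behavior association differs between the intervention and control arms.
xm = sm.add_constant(np.column_stack([x, m, x * m]))
print(sm.OLS(y, xm).fit().params[3])                     # coefficient on the X*M product term
```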
Highlights.
CLIMBR is a checklist promoting mechanism-focused behavioral intervention science.
It focuses on modifiable mechanisms that underlie successful behavior change.
It ensures that research is conducted rigorously and reported transparently.
The present manuscript describes CLIMBR’s rationale and development.
Footnotes
Conflicts of interest/competing interests: The authors would like to acknowledge the following relationships: Dr. Otto receives compensation as a consultant for Big Health. No other authors have relevant financial or non-financial interests to report.
References
- Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T, & the CONSORT Group (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663–694.
- Astbury B, & Leeuw FL (2010). Unpacking black boxes: Mechanisms and theory building in evaluation. American Journal of Evaluation, 31(3), 363–381. 10.1177/1098214010371972
- Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, Ogedegbe G, Orwig D, Ernst D, & Czajkowski S (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23(5), 443–451. 10.1037/0278-6133.23.5.443
- Bitton A, Choudhry NK, Matlin OS, Swanton K, & Shrank WH (2013). The impact of medication adherence on coronary artery disease costs and outcomes: A systematic review. The American Journal of Medicine, 126(4), 357.
- Borek AJ, Abraham C, Greaves CJ, Gillison F, Tarrant M, Morgan-Trimmer S, McCabe R, & Smith JR (2019). Identifying change processes in group-based health behaviour-change interventions: Development of the mechanisms of action in group-based interventions (MAGI) framework. Health Psychology Review, 13(3), 227–247. 10.1080/17437199.2019.1625282
- Chowdhury R, Khan H, Heydon E, Shroufi A, Fahimi S, Moore C, Stricker B, Mendis S, Hofman A, & Mant J (2013). Adherence to cardiovascular therapy: A meta-analysis of prevalence and clinical consequences. European Heart Journal, 34(38), 2940–2948. 10.1093/eurheartj/eht295
- Diaz KM, Howard VJ, Hutto B, Colabianchi N, Vena JE, Safford MM, Blair SN, & Hooker SP (2017). Patterns of sedentary behavior and mortality in US middle-aged and older adults: A national cohort study. Annals of Internal Medicine, 167(1), 465–475. 10.7326/M17-0212
- Edmondson D, Falzon L, Sundquist KJ, Julian J, Meli L, Sumner JA, & Kronish IM (2018). A systematic review of the inclusion of mechanisms of action in NIH-funded intervention trials to improve medication adherence. Behaviour Research and Therapy, 101, 12–19. 10.1016/j.brat.2017.10.001
- Hagger MS, & Luszczynska A (2014). Implementation intention and action planning interventions in health contexts: State of the research and proposals for the way forward. Applied Psychology: Health and Well-Being, 6(1), 1–47. 10.1111/aphw.12017
- Kronish IM, Thorpe CT, & Voils CI (2021). Measuring the multiple domains of medication nonadherence: Findings from a Delphi survey of adherence experts. Translational Behavioral Medicine, 11(1), 104–113. 10.1093/tbm/ibz133
- Liu H, Zhou X, Yu G, & Sun X (2019). The effects of the PRISMA statement to improve the conduct and reporting of systematic reviews and meta-analyses of nursing interventions for patients with heart failure. International Journal of Nursing Practice, 25(3), e12729. 10.1111/ijn.12729
- Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, Eccles MP, Cane J, & Wood CE (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46(1), 81–95. 10.1007/s12160-013-9486-6
- Nielsen L, Riddle M, King JW, Aklin WM, Chen W, Clark D, Collier E, Czajkowski S, Esposito L, Ferrer R, Green P, Hunter C, Kehl K, King R, Onken L, Simmons JM, Stoeckel L, Stoney C, Tully L, & Weber W (2018). The NIH Science of Behavior Change Program: Transforming the science through a focus on mechanisms of change. Behaviour Research and Therapy, 101, 3–11. 10.1016/j.brat.2017.07.002
- Onken LS (2019). History and evolution of the NIH stage model. In Dimidjian S (Ed.), Evidence-based practice in action: Bridging clinical science and intervention (pp. 28–42). The Guilford Press.
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, & Brennan SE (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10(1), 1–11. 10.1371/journal.pmed.1003583
- Riddle M, & Science of Behavior Change Working Group (2015). News from the NIH: Using an experimental medicine approach to facilitate translational research. Translational Behavioral Medicine, 5(4), 486–488. 10.1007/s13142-015-0333-0
- Rollo ME, Williams RL, Burrows T, Kirkpatrick SI, Bucher T, & Collins CE (2016). What are they really eating? A review on new approaches to dietary intake assessment and validation. Current Nutrition Reports, 5(4), 307–314. 10.1007/s13668-016-0182-6
- Rothman AJ, & Sheeran P (2021). The operating conditions framework: Integrating mechanisms and moderators in health behavior interventions. Health Psychology, 40(12), 845–857. 10.1037/hea0001026
- Schulz KF, Altman DG, & Moher D (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMC Medicine, 8(1), 18. 10.1186/1741-7015-8-18
- Schwarzer R, Lippke S, & Luszczynska A (2011). Mechanisms of health behavior change in persons with chronic illness or disability: The Health Action Process Approach (HAPA). Rehabilitation Psychology, 56(3), 161–170. 10.1037/a0024509
- Sumner JA, Beauchaine T, & Nielsen L (2018). A mechanism-focused approach to the science of behavior change: An introduction to the special issue. Behaviour Research and Therapy, 101, 1–2. 10.1016/j.brat.2017.12.005
- Turner L, Shamseer L, Altman DG, Schulz KF, & Moher D (2012). Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Systematic Reviews, 1(1), 60. 10.1002/14651858.MR000030.pub2
- Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, & the STROBE Initiative (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Annals of Internal Medicine, 147(8), 573–577.
- Webb T, Joseph J, Yardley L, & Michie S (2010). Using the internet to promote health behavior change: A systematic review and meta-analysis of the impact of theoretical basis, use of behavior change techniques, and mode of delivery on efficacy. Journal of Medical Internet Research, 12(1), e1376. 10.2196/jmir.1376
