Author manuscript; available in PMC 2023 May 12. Published in final edited form as: J Clin Child Adolesc Psychol. 2022 May 12;51(3):360–373. doi: 10.1080/15374416.2022.2062762

The Development of Psychosocial Therapeutic and Preventive Interventions for Mental Disorders (R61/R33): A User’s Guide

Judy Garber 1
PMCID: PMC9177818  NIHMSID: NIHMS1800729  PMID: 35549571

Abstract

One of the four major goals outlined in the National Institute of Mental Health (NIMH) strategic plan (2021) is to develop and test new treatments and prevention strategies. The aim of the Funding Opportunity Announcement (FOA) for the R61/R33 grant mechanism has been to support the efficient pilot testing of exploratory clinical trials of novel interventions for mental disorders in adults and children through an experimental therapeutics approach. The present commentary (a) describes the R61/R33 grant mechanism, defines terms, and summarizes information about current grants in the system, (b) outlines the review criteria, and (c) highlights several common critiques. Frequent concerns expressed by applicants as well as reviewers include defining and measuring the target/mechanism, establishing dose, selecting an appropriate control group, measuring fidelity, and determining power. Finally, alternative pathways for conducting randomized clinical trials for intervention development are discussed in contrast to or in addition to the experimental therapeutics approach for discovering novel interventions aimed at reducing and preventing mental illness across the lifespan.


The current mission of the National Institute of Mental Health (NIMH) is “to transform the understanding and treatment of mental illnesses through basic and clinical research, paving the way for prevention, recovery, and cure. NIMH fulfills its mission by supporting and conducting research on mental illnesses and the underlying basic science of the brain and behavior” (https://www.nimh.nih.gov/health/publications/about-the-national-institute-of-mental-health-nimh). One of the four major goals outlined in the NIMH strategic plan (2021) is to develop and test new treatments and prevention strategies (https://www.nimh.nih.gov/strategicplan).

Over a decade ago, NIMH modified their approach to treatment development. In 2010, the National Advisory Mental Health Council’s Workgroup wrote a report aimed at “accelerating the development of new and personalized interventions for mental illness” (p. 1). They argued that: “Recent breakthroughs in basic science and in the understanding of complex illnesses offer promising new opportunities for researchers to pursue and offer new hope for those living with mental illnesses” (p. 1). The Workgroup made several recommendations for how to develop interventions across a range of modalities, although the members clearly recognized that their suggestions were most congruent with pharmaceutical rather than psychosocial approaches. Nevertheless, they did endorse the development of non-pharmacological treatments such as behavioral approaches, devices, and technology. Ultimately, the goal was to guide the rapid development of new interventions that stop the debilitating progression of mental illnesses and that lead to substantial improvements in the lives of individuals with mental disorders. They suggested shifting away from treating symptoms and moving toward prevention and personalization, which would benefit both those with current mental illnesses and those at risk for future psychopathology through early identification and preemptive interventions. The workgroup proposed that advances in basic and clinical science regarding the discovery of disease mechanisms and pathophysiological pathways would lead to the construction of innovative trial designs to find cures for mental disorders.

Interestingly, however, the workgroup claimed that “Despite the tremendous advances in basic neuroscience and behavioral science that drive our understanding of the mechanisms underlying mental disorders, there is a dearth of new therapeutics in the discovery pipeline” (p. 1). Although this has been a primary rationale for the creation of the R61/R33 grant mechanism, the extant evidence showing that some interventions effectively prevent and treat certain mental illnesses contradicts this assertion (see Hollon et al., 2021 for a review of some of the treatment literature). Indeed, there likely are multiple pathways for treatment discoveries.

The present commentary discusses the experimental therapeutics approach to treatment innovation and the associated R61/R33 grant procedures. It does not, however, comment on the excellent papers in this special section, all of which are outstanding examples of how to implement studies using this grant mechanism. Rather, this paper has the following aims: (1) to describe the R61/R33 grant mechanism, define terms, and summarize information about current grants in the system; (2) to outline the review criteria; and (3) to highlight several common critiques and suggest alternative pathways for treatment development and discovery.

What is the Funding Opportunity Announcement (FOA): R61/R33?

The purpose of this Funding Opportunity Announcement (FOA) has been to support the efficient pilot testing of exploratory clinical trials of novel interventions for mental disorders in adults and children through an experimental therapeutics approach. “Under this FOA, trials must be designed so that results, whether positive or negative, will provide information of high scientific utility and will support ‘go/no-go’ decisions about further development or testing of the intervention. Studies of novel interventions include, but are not limited to, behavioral, pharmacological, biologics-based, cognitive, device-based, interpersonal, physiological, or combined approaches” (https://grants.nih.gov/grants/guide/pa-files/PAR-21-135.html).

“Support is provided for up to two years (R61 phase) for preliminary milestone-driven testing of the intervention’s engagement of the therapeutic target, possibly followed by up to 3 years of support (R33 phase) for studies to replicate target engagement and relate change in the intervention target to functional or clinical effects. Ultimately, this R61/R33 FOA is intended to speed the translation of emerging basic science findings of mechanisms and processes underlying mental disorders into novel interventions that can be efficiently tested for their promise in restoring function and reducing symptoms for those living with mental disorders” (https://grants.nih.gov/grants/guide/pa-files/PAR-21-135.html).

The R61 phase tests whether an objective measure can be used to assess intervention effects at the molecular, circuit, or system level (i.e., target engagement). It includes testing hypotheses about the mechanism of action or a preliminary evaluation of the clinical effect of manipulating the target. The specific activities and milestones for the R61 phase depend on the type of intervention being studied and its stage of development regarding efficacy.

To be successful, investigators must show adequate functional engagement of the target, which is the key criterion for the “go/no-go” decision to move from the R61 to the R33 phase. The proposal not only should describe the R61 phase milestones that, if met, will justify moving to the R33 study, but it also should specify the conditions under which the investigators would not proceed to the R33 phase. Success criteria are defined in terms of outcomes achieved (e.g., specific measures of target engagement) rather than as tasks completed (e.g., recruitment goals).

The milestones must be objective, quantifiable, and scientifically justified. Investigators must specify the measures to evaluate whether the intervention engages and alters the hypothesized target and should indicate the quantitative threshold values for go/no-go decision making. The application should describe how specific, measurable, and achievable progress will be determined; that is, what is the indicator of success? Will the investigators and the NIMH Program Officials be able to determine if the project succeeded in (a) demonstrating that the intervention alters the target (thus providing an initial proof of principle), and (b) providing preliminary evidence of feasibility – that the intervention can be applied in a clinical population with adequate safety, acceptability, and tolerability to patients?

To advance to the R33 phase, investigators must successfully meet the milestones of the R61 phase. The R33 phase should demonstrate that the intervention substantially improves clinical outcomes and should indicate at least a signal of efficacy – that there is an association between the target engagement and change in clinical status and functioning. Finally, the study should inform the design of a subsequent confirmatory efficacy trial, if indicated.

Neither the R61 nor the R33 is expected to be powered to conduct a strong test of clinical efficacy. Rather, this would be an aim of a larger efficacy trial. Secondary aims of the R33 phase include intervention refinement (e.g., further manual or protocol development; revising fidelity scales), further tests of feasibility, safety, and acceptability, and improving recruitment methods, randomization, retention, and assessments.

History of the R61/R33 Funding Mechanism

In 2010, the National Institute of Mental Health commissioned a workgroup to guide its advisory council in how best to address the urgent need for new treatments of mental illness. In the resulting National Advisory Mental Health Council’s report, “From Discovery to Cure” (http://www.nimh.nih.gov/about/advisory-boards-and-groups/namhc/reports/fromdiscoverytocure.pdf), the workgroup recommended several changes to the NIMH clinical trials portfolio. Insel and Gogtay (2014) proclaimed that we need better treatments for mental disorders given that neuropsychiatric disorders are a leading source of medical disability in the United States. They further stated that such disorders have been increasing since 1990 despite a concomitant increase in pharmacologic treatments that have been commercially successful. Few medications, however, have shown new mechanisms of action or enhanced efficacy, and we also lack effective medications for the more severe psychiatric disorders. Given the relatively high failure rates in clinical trials testing new medications and the absence of a clear understanding of the mechanisms underlying these disorders, NIMH decided it needed a new approach to treatment development and efficacy testing. Of note is that most of the treatments to which they referred were pharmacological, and the NIMH guidelines for conducting clinical trials that followed were consistent with this medical orientation.

The workgroup called for a more rapid treatment development process, moving quickly into humans for proof-of-concept studies. They also stated that the trials should identify and validate new targets and mediators. As a consequence of these recommendations, the NIMH moved to a focus on experimental therapeutics, in which interventions are used as probes of disease mechanisms, as well as tests of efficacy (Insel & Gogtay, 2014). The new funding announcements called for changes in “what” and “how” trials were conducted. Going forward, the aim of clinical trials was to learn more about the processes underlying the psychopathology and the mechanisms of the interventions. Clinical trials now are required to demonstrate “target engagement,” in addition to assessing changes in symptoms. Moreover, clinical trials are supposed to “identify a critical dose and duration of intervention that would engage or modulate the target in addition to assessing symptom change, with a goal of informing further research or treatment strategies” (Insel & Gogtay, 2014, p. 745). Finally, all clinical trials need to test a mediator or the mechanism of action, inform the dose and duration required, and design the trial so that negative results are informative.

Experimental Therapeutics

The experimental therapeutics approach translates the identification of factors that cause and sustain mental illnesses into new or improved approaches to prevention and treatment. Presumably, discoveries in basic neuroscience and behavioral science will suggest malleable targets (and potential mediators) for novel intervention strategies. The experimental therapeutics approach aims to show that changes in the target(s) or mediators are associated with changes in symptoms. This approach supposedly enables us to refine therapies to increase potency and efficiency and to personalize interventions to be optimally matched to individuals’ needs.

Some researchers have suggested that the experimental therapeutics approach is an impediment to psychosocial intervention research. Gordon (2017), however, disagreed and argued that the experimental therapeutics approach is consistent with the longstanding commitment of the field of clinical science to advance understanding of therapeutic change mechanisms. This empirically grounded, mechanism-based approach to the development of interventions identifies potentially mutable factors discovered from basic psychopathology research on the etiology, maintenance, severity, or course of disorders.

Thus, the focus of the experimental therapeutics approach is not only on testing whether interventions work, but also on understanding whether interventions work through the presumed mechanisms. Gordon (2017) was optimistic that with increasing knowledge of genetics and neuroscience there likely will be transformative pharmacological treatments in the future, while also recognizing that in the near term, “we need to look elsewhere for more modest but crucial gains to help the patients of today” (https://www.nimh.nih.gov/about/director/messages/2017/an-experimental-therapeutic-approach-to-psychosocial-interventions).

The funding announcements for the development of psychosocial therapeutic and preventive interventions for mental disorders support the development and testing of a wide range of prevention and treatment modalities, such as cognitive, behavioral, interpersonal, and other psychosocial approaches, including technology-assisted methods that facilitate the uptake and delivery of behavioral and psychosocial interventions. The FOA funds projects that optimize the potency and efficiency of preventive and therapeutic interventions. Presumably, this approach will enhance our understanding of the mechanisms underlying psychiatric illness and recovery.

Of most interest are empirically supported psychosocial interventions that have the potential for near-term impact. In addition, because some individuals do not respond to even the best interventions, there is a need to both develop and test new psychosocial approaches and to refine and optimally deploy existing strategies. Finally, the field needs to identify individual (e.g., biological, cognitive, affective) and contextual (e.g., cultural, societal) characteristics to optimize and personalize interventions.

Clinical Trials

In 2014, NIH revised its definition of a clinical trial (see NOT-OD-15-015) and in 2015, launched a multi-faceted effort to expand its oversight of clinical trials while elevating biomedical research to a new level of transparency and accountability. Since then, the NIH has defined a Clinical Trial as a research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include a placebo or other type of control condition) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes (NOT-OD-15-015). Prospectively assigned refers to a process (e.g., randomization) specified in an approved protocol that stipulates the assignment of research participants to one or more arms (e.g., intervention, placebo, or other control condition) of a clinical trial. An intervention is defined as a manipulation of the participant or their environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints (https://grants.nih.gov/policy/clinical-trials/why-changes.htm). Since NIH made these changes, they have required that all applications involving one or more independent clinical trials be submitted through a Funding Opportunity Announcement (FOA) specifically designated as ‘Clinical Trial Required’, ‘Clinical Trial Optional’, or ‘Basic Experimental Studies with Humans’ (NIMHClinicalTrials@mail.nih.gov).

Current Status of Funding of R61/R33 Grants

According to the Research Portfolio Online Reporting Tool (RePORTER; https://report.nih.gov/), 461 grants have been funded under this FOA since 2018; these have included several award types, including R61, R33, R34, R21, and K awards. The total amount of funding for these grants as of March 2022 has been $278,716,521. Figure 1 shows the number of these grants funded in each year from 2018 through 2022, indicating a steady increase in funding of these grants.

Figure 1. Number of grants funded annually from 2018 through February 2022 under this Funding Opportunity Announcement (FOA).

Figure 2 shows the number of these grants funded across NIH agencies, with the most awarded to NIMH (https://reporter.nih.gov/search/FV4YCQ83V0eEmXlhVJUCkg/projects). Many investigators from scientifically diverse backgrounds have received grants under this FOA. One investigator has had three R33 grants, 32 have had two R61 or R33 grants, and the remaining investigators have had one. Fortunately, NIH continues to support these funding opportunity announcements, so this funding mechanism remains available.

Figure 2. Number of R61/R33 grants awarded by NIH agency as of February 2022.

Application Review Information

The R61/R33 phased innovation grant supports investigations of novel scientific ideas or new interventions, model systems, tools, or technologies that have the potential for significant impact on biomedical or behavioral and social sciences research. For this grant mechanism, the extent to which preliminary data are needed is not clear. The FOA explicitly states that an R61/R33 grant application need not have preliminary data and information or extensive background material; however, they may be included if available. One challenge regarding preliminary data is that, on the one hand, it is important to demonstrate that an infrastructure exists that will allow the investigator to conduct the proposed study. On the other hand, the investigator should not have completed so much of the trial that it no longer is novel.

Appropriate justification for the proposed work can be provided through literature citations, data from other sources, or, when available, from investigator-generated data. Accordingly, reviewers will focus their evaluation on the conceptual framework, the level of innovation, and the potential to significantly advance our knowledge or understanding. Nevertheless, having good pilot data can be useful to demonstrate feasibility of recruitment and acceptability of the implementation approach.

Reviewers assign a single impact score for the entire application, which includes both the R61 and R33 phases. Reviewers address the strengths and weaknesses of each phase of the award in their review. Of note, both phases may not receive the same degree of reviewer enthusiasm. Thus, it is possible that the two phases are judged differently, although the overall scores reflect reviewers’ ratings of the entire proposal.

Overall Impact

Reviewers provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria (as applicable for the project proposed).

Scored Review Criteria

Reviewers consider each review criterion to determine the overall scientific merit of the proposal and give separate scores for each criterion (see Table S1 for a description of the review criteria). Reviewers’ overall rating is not a simple average of the five criterion scores. Indeed, it is not uncommon for an application to receive excellent scores for the Investigators and the Environment, but poor scores for the Approach. An application does not need to be strong in all categories to receive a good overall scientific impact rating. For example, a project may not be innovative but still might be essential to advancing the field. Often, the rating of the Approach carries the most weight in the overall impact score.

Significance

According to the FOA, the Significance of a proposal is judged regarding whether the project addresses an important problem, unmet mental health need, or a critical barrier to progress in the field. The results of the proposed project should increase our scientific knowledge, technical capability, or clinical practice. The proposed intervention should have a strong, well-supported theoretical rationale, be ready for early-phase testing, and be expected to improve on clinical approaches beyond what is currently available.

There should be clear and refutable hypotheses, and an articulation of what will be learned from conducting the study even if the hypotheses are not supported. Will conducting the clinical trial further our understanding of mechanisms underlying the psychopathology or improve clinical care? Will successful completion of the aims change the concepts, methods, technologies, interventions, or services for reducing the burden of mental disorders?

Earlier in the FOA, it explicitly states: “An R61/R33 grant application need not have preliminary data and information or extensive background material; however, they may be included if available.” The Significance section asks: “Are the scientific rationale and need for a clinical trial to test the proposed hypothesis or intervention well supported by preliminary data, clinical and/or preclinical studies, or information in the literature or knowledge of biological mechanisms?” Thus, although pilot data are not required, reviewers will attend to them if available, and some reviewers might give a less positive rating without them.

The FOA further states: “For trials focusing on clinical or public health endpoints, is this clinical trial necessary for testing the safety, efficacy, or effectiveness of an intervention that could lead to a change in clinical practice, community behaviors or health care policy? For trials focusing on mechanistic, behavioral, physiological, biochemical, or other biomedical endpoints, is this trial needed to advance scientific understanding?” Thus, is the proposed clinical trial necessary and will it change clinical practice or advance scientific knowledge?

Investigator(s)

Investigators are judged regarding their level of experience and expertise and their track record of prior research accomplishments relevant to the proposed project. Early-stage investigators not only should have sufficient training and proficiency to be able to conduct the project, but they also should collaborate with others who can complement their strengths. Established Investigators should have an ongoing record of accomplishments that have advanced their field through publications and other scientific products. They also should have a history of conducting and successfully completing other clinical trials or related research.

Multi-PD/PI projects should have a record of collaboration and coordination. An organizational plan should be outlined with appropriate leadership roles based on areas of specialization. If multiple sites are involved, is there a designated center for protocol oversight and a center for data management and analysis? Are the skills among the collaborators complementary or redundant? Does the team have the necessary capabilities to conduct all aspects of the proposed project? Is there a designated biostatistician with clear quantitative skills who can oversee the data center? Does the team also include someone with managerial competence to ensure that required milestones and timelines are met? Overall, can the proposed study be conducted satisfactorily given the identified personnel?

Innovation

To be innovative, the application should challenge and shift current research or clinical practice paradigms. A proposal might be novel regarding theoretical concepts, approaches or methodologies, instrumentation, or interventions. Will the results be a refinement, improvement, or new application of concepts, methods, or intervention procedures? Not every part of the proposal (i.e., design, methods, intervention) needs to be innovative, as long as the overall study addresses important and novel scientific or clinical questions.

Proposals can be innovative by identifying a novel target or mechanism or by introducing a new approach to engaging an existing target. Linking the proposed intervention to new findings from basic neuroscientific or behavioral research or translating an established finding in a novel way (e.g., new methods or translation to a developmental framework) also can be quite innovative. If the proposed project involves an adaptation or extension of an established intervention with known efficacy, then the investigator needs to provide a conceptual or empirical rationale for how the intervention efficacy will be substantially improved in a novel way. For example, a proposed enhancement of an intervention can be demonstrated by applying it to a different subpopulation, showing maintenance of effects, engaging a novel target, or affecting a new outcome. Overall, the design or research plan should include some innovative elements that expand knowledge about the intervention’s sensitivity, generalizability, sustainability, or underlying mechanism(s).

Approach

The Approach is the most extensive and probably the most important section of the grant proposal. It includes a description of preliminary data, if available, and the conceptual model and theoretical rationale for the study and design. Is the scientific rationale/premise of the proposed study based on previously well-designed preclinical and/or clinical research? The Approach section should include details about the participant sample, recruitment procedures, design, measures, intervention(s), and data analysis plan.

In anticipation of reviewers’ critiques, it is useful when PIs include a section explaining the reasons for the design choices made. For example, investigators can explain why one type of comparison group was selected as opposed to another. Are the overall strategy, methodology, and analyses well-reasoned and appropriate for accomplishing the specific aims of the project? Have the investigators addressed weaknesses in the rigor of prior research that serve as the key support for the proposed clinical trial? Are potential problems, alternative strategies, and benchmarks for success presented? Are the inclusion or exclusion of individuals based on sex/gender, age, race, and ethnicity justified given the scientific goals and research strategy proposed?

Milestones (Go/No-Go Criteria):

The key criterion of a “go/no-go” decision to move from the R61 to R33 phase is the adequate functional engagement of the target. This section must provide a clear description of the R61 phase milestones that, if met, will justify taking the proposed intervention into the R33 phase. The milestones must be objective, quantifiable, and scientifically justified so that success of the R61 can be clearly determined.

The application must specify the measures and criteria for evaluating whether the intervention engages and alters the hypothesized target/mechanism. If more than one target/mechanism is proposed, the milestones must be specific about whether target engagement is required of each to justify moving to the R33 pilot study. Applicants must justify whether to make their go/no-go decision contingent on all the proposed targets or some combination of target measures. Milestones do not merely reflect progress in accomplishing tasks or in following the study timeline (e.g., meeting target enrollment). Rather, they must include a description of specific, quantitative threshold values for any measures proposed for go/no-go decision-making. This often is described in terms of an expected effect size, significance level, or standard deviation of difference.
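To make the threshold idea concrete, the following is a minimal sketch in Python; the d ≥ 0.40 milestone and the variable names are hypothetical illustrations, not NIMH standards. It shows how a prespecified effect-size criterion on change in the target measure could be written down as an explicit go/no-go rule.

```python
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# Hypothetical prespecified milestone: at least a small-to-medium effect
# (d >= 0.40) on pre-to-post change in the target measure.
GO_THRESHOLD_D = 0.40

def go_no_go(change_intervention: np.ndarray, change_control: np.ndarray) -> str:
    """Apply the prespecified effect-size threshold to observed change scores."""
    d = cohens_d(change_intervention, change_control)
    return "go" if d >= GO_THRESHOLD_D else "no-go"
```

Writing the criterion this way forces the applicant to commit, in advance, to the measure, the statistic, and the numeric threshold that together define "success" for the R61.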

Study Design.

Is the study design justified, clear, informative, and relevant to the hypotheses being tested? Does it appropriately address the primary and secondary outcome variable(s)/endpoints? Given the methods used to assign participants and deliver interventions, is the study design adequately powered to answer the research question(s), test the proposed hypotheses, and provide interpretable results? Are the study populations (gender, age, race, and ethnicity), proposed intervention arms, dose, and duration of the trial well justified?

Are potential ethical issues adequately addressed? Is the process for obtaining informed consent or assent appropriate? Is the eligible population available? Are the plans for recruitment outreach, enrollment, retention, handling dropouts, missed visits, and losses to follow-up appropriate to ensure robust data collection? Are the planned recruitment timelines feasible and is the plan to monitor accrual adequate? Has the need for randomization (or not), masking (if appropriate), controls, and inclusion/exclusion criteria been addressed? Are there expected differences in the intervention effect as a function of sex/gender, age, race/ethnicity, or other potentially important moderators? Are there adequate plans to standardize, assure quality of, and monitor adherence to the trial protocol and data collection guidelines?

Data Management and Statistical Analysis.

Is there a clearly articulated data management and data analytic plan? Are the planned analyses appropriate for the proposed design? Does the application describe the methods used to assign participants and deliver the interventions, and are they appropriate? What are the procedures for data management and quality control? For multi-site studies, how will data be handled across locations? Is there a plan for completing the data analyses within the proposed period of the award?

Environment

What is the overall research environment like? Are institutional supports, equipment, and other physical resources available and adequate? To what extent is the environment in which the study will be done likely to enhance or diminish the successful implementation and completion of the study? Have the PD(s)/PI(s) successfully carried out studies of similar structure and complexity as in the current application in the specified setting? Does the environment support timely participant recruitment and completion of each of the two phases (R61 and R33)? Are the administrative, data-coordinating, enrollment, and laboratory facilities appropriate for the proposed trial? Is there adequate space to conduct the study at the proposed site(s)? If there are multiple sites/centers, is there evidence that each individual site or center can: (1) enroll the proposed numbers; (2) adhere to the protocol; (3) collect and transmit data in an accurate and timely fashion; and (4) operate within the proposed organizational structure?

Additional Review Criteria

Reviewers evaluate several additional criteria when determining the scientific and technical merit, and when deriving an overall impact score, but these additional items are not given separate scores. A careful description of the following items is required (see Table S1 for more detail): milestones, timeline, protection of human subjects, and a data safety monitoring plan. If the application is a resubmission, then reviewers judge how responsive investigators were to the prior reviews in the revision.

A study timeline should provide details about the plans for start-up activities such as hiring staff, training, and connecting with recruitment sources. If needed, is the timeline for establishing necessary agreements with research partners (e.g., single IRB) realistic? The application should report the anticipated rate of enrollment by month or quarter. The timeline is evaluated regarding feasibility, justification, and likelihood of completion within the timeframe of the grant. Finally, anticipating future challenges to completing the proposed timeline and providing possible solutions demonstrates forethought and planning.

Reviewers also consider several other items, the evaluations of which are not included as part of the overall impact score and are not given scores of their own. Additional criteria are evaluated if the application is from a foreign organization. Reviewers also judge if the resource sharing plan is adequate, and if the budget and justification are reasonable.

Review and Selection Process

Applications are assigned to review committees based on established PHS referral guidelines to the appropriate NIH Institute or Center. A scientific review officer (SRO) works to ensure that the scientific review group (i.e., study section) identifies the most meritorious science for funding. The SRO assigns applications to reviewers by matching the science in the proposal to the reviewer’s knowledge about and interest in the goals of the project, expertise in the techniques proposed, reviewer workload, and real or perceived conflicts of interest. SROs encourage reviewers to inform them of any concerns they have about their assignments, such as conflicts of interest or lack of knowledge in an area covered in the application. SROs configure the review committee of experts who evaluate the applications. Most applications are reviewed by three to four members of either a standing committee comprised of a regular group of reviewers who serve on the committee for three to five years, and some additional ad hoc reviewers, or a special emphasis panel configured for one review cycle. In general, reviewers are assigned about three to six applications, although they can review as few as one or as many as ten or even more.

The rating scale used for each of the five scored criteria and for the overall impact score ranges from 1 to 9, with lower scores being better. Reviewers are strongly encouraged by both the SRO and those in Program to use the full range of the scale. Typically, however, reviewers tend to give scores from 2 through 5, although this has been changing in recent years.

Overall Impact or Criterion Strength   Score   Descriptor
High                                   1       Exceptional
High                                   2       Outstanding
High                                   3       Excellent
Medium                                 4       Very Good
Medium                                 5       Good
Medium                                 6       Satisfactory
Low                                    7       Fair
Low                                    8       Marginal
Low                                    9       Poor

Reviewers submit their scores and written critiques several days to a week before the meeting, so they have time to review the other critiques of the applications they reviewed and to look over the applications and critiques to which they were not assigned. Reviewers do not see the other reviewers’ scores until they have completed and submitted their own reviews, so the initial ratings are independent. Nevertheless, after the committee meeting, reviewers may alter their ratings and critiques, presumably based on further review and on the discussion among the members of the review committee. Therefore, the final critiques of discussed applications provided in the summary sheets might not be the original reviews, but rather might include changes made after the meeting.

Applications are evaluated for scientific and technical merit by the Scientific Review Group convened in accordance with NIH peer review policy and procedures, using the standard review criteria. All applications receive a written critique that includes comments by independent reviewers with expertise in the area. The review committee meets, either in person or virtually, to discuss the strengths and limitations of the applications, with a focus on the five main areas – significance, innovation, investigators, approach, and environment (see Table S1). Generally, the applications with scores in the top half of those being reviewed that round are discussed; the bottom half are designated as not discussed. The summary sheets for the not-discussed applications contain the original critiques by the reviewers, whereas the summary sheets of the discussed applications include information about the content of the discussion and the final critiques by the original reviewers, which might have changed after the discussion.

An interesting observation that I and other reviewers have noted, which should be explored empirically, is that some of the applications that are discussed end up with worse scores than when they started, and some applications that were not discussed end up with higher overall impact ratings than some of those that were discussed. This may be a function of how the review process is conducted. Because there is so little time to discuss each application (typically 20 to 30 minutes per application), the main focus tends to be on the criticisms. The person assigned to be the first reviewer presents information about the application, highlighting both its strengths and limitations. Next, the second and third reviewers are instructed to comment only on things that the first reviewer did not report. Unfortunately, this typically ends up being more criticisms. Thus, by the time the committee is ready to make the final ratings, the bulk of the discussion has been about the application’s weaknesses. Indeed, I have heard reviewers say that the critiques presented led them to give the application a worse score than it had at the beginning of the review. This anecdotal observation could be validated using information about how scores change after the discussion in review committee meetings.

Funding Decisions.

Applications compete for available funds with all other applications. Following initial peer review, recommended applications receive a second level of review by the appropriate National Advisory Council or Board. Several factors are considered in the funding decisions, including the scientific and technical merit of the proposed project as determined by scientific peer review, relevance of the proposed project to program priorities, and the availability of funds.

Common Critiques and Concerns

In my experience as both a recipient and a reviewer of R61/R33s, several questions, critiques, and concerns repeatedly arise. Although some of these issues do not have clear resolutions, they are worth keeping in mind when both submitting and critiquing these types of grants. Comments that often appear in summary sheets are: “The proposal is overly ambitious,” “The scientific premise for the specific approach is underdeveloped,” and “So, what is new?” Common concerns occur especially in reference to the selection and measurement of the target, dose, control group(s), fidelity, power, testing mediation, length of a follow-up, and incremental science.

The Target/Mechanism

By far, the most important criterion used to evaluate an R61 application is the selection, description, justification, measurement, and validity of the target/mechanism. PIs need to clearly define the target/mechanism and the rationale for intervening on that specific mechanism. The application should describe the evidence showing that the target/mechanism is implicated in conferring risk for, causing, or maintaining the symptom(s) of interest, and that variability in the expression of the target is associated with variation in symptom severity.

One critical question often raised is whether the target must reflect the mechanism underlying the symptoms/disorder or whether the target in an R61 can be the hypothesized process through which the intervention is believed to affect the symptoms. An example of the former would be if we hypothesized that the absence of positive reinforcement causes individuals to become depressed (Lewinsohn & Gotlib, 1995), in which case increasing engagement in and enjoyment of pleasant activities might be the mechanism targeted by the intervention, which then presumably would reduce depressive symptoms. Alternatively, an intervention might target coping strategies when faced with stress. In this case, the cause of the symptoms might not be a coping skills deficit, although building one’s coping repertoire might decrease symptoms when they occur.

To what extent does changing the hypothesized mechanism (i.e., target) inform us about the processes underlying the onset and maintenance of the symptoms or disorder? This dilemma is reminiscent of the old argument that although an aspirin might reduce a headache, an aspirin deficit did not cause the headache in the first place. Thus, when selecting the target/mechanism for an R61 clinical trial, are both types of targets acceptable? My subjective impression is that there is a preference for targeting the underlying mechanism driving the symptoms and disorder, but this is not stated explicitly in the FOA, and some applications that target the mechanism of the intervention are indeed funded.

Another question about the target is what is the difference between “hitting the target” (i.e., actually altering an underlying process) and a simple manipulation check? For example, if an intervention explicitly teaches a skill (e.g., coping; sensitivity to facial affect; perspective taking) and those receiving the intervention show a clear improvement in the use of the targeted skill, is that sufficient for demonstrating target engagement? A critique often lodged against such proposals is that the intervention is simply “teaching to the test.” On the other hand, if acquisition of the targeted skill(s) serves to reduce the symptoms of interest, then is this a sufficient demonstration of target engagement?

A third crucial question about the target/mechanism is whether to select more than one. Given that it is unlikely that any disorder is the result of a single target/mechanism or that an intervention “works” through just one process, how many mechanisms should be targeted? From the perspective of the grant applicant, it likely is best to select a simple, easily measurable target that can be demonstrably “hit,” because investigators cannot move to the R33 unless the target is engaged in the R61. On the other hand, the relation between underlying causes and symptoms is complex, multifaceted, and probabilistic (Gordon, 2017), and therefore a single mechanism is not likely to account for the symptoms.

If more than one target is proposed, then several important questions follow. What happens if one target is engaged, but not the other? Is that sufficient for proceeding to the R33? Should applicants identify a primary and secondary target, but only be required to show an effect on the primary target?

Second, what is the relation among the proposed targets? Are they independent, additive, or interactive? Is there evidence of mediation, sequential mediation, or moderated mediation when more than one target is involved? For example, suppose that a PI proposes a diathesis-stress model of depression, and the intervention aims to both reduce stress and decrease the cognitive vulnerability (i.e., diathesis) to interpret events negatively. If the intervention does indeed reduce negative cognitions, but not the level of stress, then would this be considered a failure to engage the targeted mechanism?
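When multiple targets are proposed, the go/no-go logic itself should be prespecified. The sketch below (Python; the rule names and the two-target structure are hypothetical illustrations, not FOA requirements) shows how primary-only, conjunctive, and disjunctive decision rules lead to different conclusions from the same results.

```python
# Hypothetical decision rules for a milestone involving two targets. The
# chosen rule must be prespecified in the application, not selected after
# seeing which target happened to be engaged.
def decision(primary_engaged: bool, secondary_engaged: bool, rule: str) -> str:
    if rule == "primary_only":         # secondary target is exploratory
        go = primary_engaged
    elif rule == "both_required":      # conjunctive: every target must be hit
        go = primary_engaged and secondary_engaged
    elif rule == "either_sufficient":  # disjunctive: any engaged target suffices
        go = primary_engaged or secondary_engaged
    else:
        raise ValueError(f"unknown rule: {rule}")
    return "go" if go else "no-go"

# In the diathesis-stress example: negative cognitions reduced (True) but
# stress not reduced (False) yields "go" only under the first or third rule.
print(decision(True, False, "both_required"))  # -> "no-go"
```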

Regarding the measurement of the proposed mechanism, how often should the target be measured? What is the ideal interval between measurements? Should multiple measures of the target construct be included? As with the question of multiple targets, what if there is evidence of target engagement for one measure but not another? Is it necessary or desirable to include a measure of a biological process, even if that is not the focus of either the intervention or the hypothesized mechanism? Are reviewers more favorable to proposals that include a measure of biological processes?

Finally, given that the R61 is not likely to have sufficient power to detect a significant effect, what statistic is used to determine that the target has been engaged? Often, investigators propose a minimal effect size that must be met, although the method for determining the size of the effect to propose is not clear. Some investigators propose a certain standard deviation differential or a difference in the specific amount of change from pre- to post-intervention. More clarity and guidelines for defining and quantifying target engagement would be useful to both applicants and reviewers of R61/R33 grants.
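As one illustration of the pre-post variant, here is a minimal sketch (Python; the scores and the 0.5-SD criterion are hypothetical) that computes within-group change in standard-deviation units, a common alternative to a between-group effect size when the R61 sample is small.

```python
import numpy as np

def standardized_prepost_change(pre: np.ndarray, post: np.ndarray) -> float:
    """Mean pre-to-post change on the target, in SDs of the change scores
    (sometimes called the standardized response mean)."""
    change = post - pre
    return change.mean() / change.std(ddof=1)

# Hypothetical criterion: mean improvement of at least 0.5 SD on the target,
# where lower scores indicate a better outcome.
pre = np.array([22.0, 19.0, 25.0, 31.0, 27.0, 24.0])
post = np.array([17.0, 18.0, 20.0, 24.0, 23.0, 21.0])
print(standardized_prepost_change(pre, post) <= -0.5)  # True -> criterion met
```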

Dose

One requirement for conducting a clinical trial using the R61 mechanism is that there should be an adequate plan to evaluate dose or protocol optimization. Does the proposal describe the protocol parameters for configuring the intervention such as intensity, duration, and session frequency? How is the dose of the intervention determined and specified and how will the parameters of the protocol be optimized to test the intervention’s capacity to engage the specified target?

The FOA provides very little information about how to determine the optimal dose for engaging the target. Is there likely to be an ideal dose, and to what extent will this vary by other possible moderating factors (e.g., age, gender, cognitive abilities)? One way to address the question of dose would be to conduct a study in which participants are randomly assigned to receive different doses. For example, if the literature indicates that significant effects of an intervention can occur after only six sessions, whereas other studies found the strongest effects after 12 sessions, then participants could be randomized to a 6- versus a 12-session protocol. If, however, this design would require a greater sample size than is feasible, given the other requirements of the R61, then an alternative approach should be considered.

Another strategy for testing dose is to measure the target frequently throughout the trial. This allows investigators to examine patterns of change in the target for the whole sample and also to test for possible moderators of this change. Although discovering the optimal dose of a psychosocial intervention would be useful, it probably is less well defined than the dose of a psychopharmacological agent. Tests of dose in medication trials likely served as the model upon which this requirement was derived, and its relevance for psychosocial interventions should be re-examined.
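For the repeated-measurement strategy, a minimal sketch (Python with statsmodels; the long-format column names "participant", "session", and "target" are hypothetical) fits a simple growth model with a quadratic term to session-by-session target scores; where the fitted trajectory levels off can suggest the point of diminishing returns for additional sessions.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_target_trajectory(df: pd.DataFrame):
    """Random-intercept growth model for long-format data with one row per
    participant per session: columns 'participant', 'session' (0, 1, 2, ...),
    and 'target' (the repeatedly measured target score)."""
    model = smf.mixedlm("target ~ session + I(session**2)",
                        data=df, groups=df["participant"])
    result = model.fit()
    # The 'session' coefficient estimates average early change; a quadratic
    # coefficient of the opposite sign indicates decelerating change, i.e.,
    # diminishing returns from additional sessions.
    return result
```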

Control Groups

An extremely important issue for all randomized controlled trials (RCTs) is the selection of the appropriate comparison group(s). The choice of control groups should be based upon the specific research questions and hypotheses being tested and the current state of the field. The typical hierarchy of control conditions is (1) a no-intervention/assessment-only control, (2) an active comparison group that has all of the same inactive features of the targeted intervention except the presumed “active” ingredient(s) that are expected to hit the target, (3) another active intervention with known efficacy, but with a different target mechanism, and (4) an active comparison condition that includes all the characteristics of the original intervention except the feature(s) that are the focus of the novel enhanced approach.

Design Options with Different Control/Comparison Conditions
  1. Intervention A vs. No Intervention

  2. Intervention A vs. Active Control* vs. No Intervention

  3. Intervention A vs. Intervention B vs. No Intervention

  4. Intervention C1 vs. Intervention C vs. No Intervention

A = The new intervention that is the focus of the proposal.

* Active Control = a placebo condition that controls for nonspecific factors or other possibly active factors that do not include the target ingredient of Intervention A.

Intervention B = Another intervention with known efficacy that presumably operates through a different mechanism than Intervention A.

C1 = Enhanced Intervention C, which is an established intervention with known efficacy.

“Most investigators would likely agree that nonspecific control conditions are not necessary or appropriate for phase I or II trials, as this would constitute a nearly insurmountable hurdle in moving novel treatments from an initial development into more rigorous testing” (Mohr et al., 2009, p. 282). Later phase trials might include nonspecific controls, although this depends on the specific hypotheses and the presumed role of these nonspecific factors in the target intervention. The important thing for both applicants and reviewers to keep in mind is that in early phase RCTs, such as an R61, it is not always necessary to include a “placebo” control. If such a control condition is used, then a third, no-intervention/assessment-only condition also should be included. The reason is that if the target intervention A and the nonspecific control condition do not differ significantly, we cannot conclude either that both were efficacious or that both did not work, leaving the null results uninterpretable. Having a no-intervention control against which to compare these two “active” interventions allows researchers to determine if either intervention is better than nothing, which is the first question to answer before being concerned with the role of nonspecific factors.

One problem frequently encountered in designing an R61 is that the time frame is short, and the sample size tends to be small, which prohibits investigators from using this preferred, three-condition design. Therefore, it probably makes sense to include a no-intervention control in the R61 and then add a nonspecific/placebo control group in the R33. In an ideal world, early in the intervention development phase, most RCTs would include a no-intervention control condition. Once an intervention is found to be effective, however, withholding it can create an ethical concern about not providing effective help to those in need. In such cases, RCTs often will use a “treatment as usual” (TAU) comparison group so that the participants receive what they normally would get if they were not in the trial. Characterizing what is included in the TAU condition can be difficult, however.

A variant on this design arises when the investigator is proposing an enhancement of an already effective intervention. Investigators need to justify how modifying an existing and already efficacious intervention (e.g., CBT) will substantially enhance the outcomes over and above the original intervention. In this case, it is necessary to demonstrate that the enhanced version is significantly better than the existing intervention. Including a no-intervention condition in this design would help determine if the interventions were implemented well enough to show a significant effect relative to no intervention. It still is necessary, however, to show a difference between the existing and enhanced interventions. If they both differ from the no-intervention control, but not from each other, then the enhanced intervention probably is not necessary, unless it can be shown to have a better effect regarding a specific target mechanism or parameters of the intervention (e.g., duration, intensity).
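The interpretive logic of this three-arm enhancement design can be summarized in a small sketch (Python; the boolean inputs stand in for prespecified significance or effect-size comparisons, and the wording of the conclusions is illustrative rather than prescriptive).

```python
def interpret_three_arm(c1_beats_none: bool, c_beats_none: bool,
                        c1_beats_c: bool) -> str:
    """Map pairwise results for Enhanced C (C1), established Intervention C,
    and No Intervention onto a provisional conclusion."""
    if not (c1_beats_none or c_beats_none):
        return ("Neither arm separated from no intervention; question whether "
                "the interventions were implemented well enough.")
    if c1_beats_c:
        return "The enhancement adds value beyond the established intervention."
    return ("Both arms outperform no intervention but do not differ from each "
            "other; the enhancement probably is not necessary unless it wins "
            "on target engagement or parameters such as duration or intensity.")
```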

Thus, the selection of the control conditions is complex and should consider the stage of knowledge of the specific field. In general, it makes sense to first show that the proposed intervention can affect the target at all as compared to no intervention. Then in a subsequent RCT with a larger sample and more resources, additional comparison groups can be included, as long as it is clear what can and cannot be concluded from such expanded designs.

Fidelity

An important and often forgotten part of the proposal is a description of the initial manual or protocol development along with the construction of fidelity scales that are linked to the manual. The R33 should describe plans for further intervention refinement and standardization along with improvement of the fidelity scales. The overall project timeline should include information about when during the trial the intervention manual will be completed and when the fidelity assessments will be conducted.

Investigators should explain how delivery of the intervention will be operationalized, monitored, and quantified. If manuals for delivering the intervention and procedures for assessing fidelity (i.e., adherence to the manual and competent delivery) do not exist yet, PIs need to provide detailed plans for developing the manual and the corresponding fidelity measures. Although it is not a requirement of the R61 that a manual already exists, having a manual that can be included with the application (e.g., in an appendix) can be positive if it is well-developed. If a manual is not yet finished, then plans for its further development during the early phases of the R61 should be described.

Relatedly, although pilot data are not required for the R61, including a description of several cases in which the proposed intervention was implemented can be helpful in demonstrating to reviewers that the intervention exists in reality, and not just in the mind of the investigator, and that it can be delivered feasibly to the population of interest. Investigators need to find the “right” balance between having some evidence that they can conduct the proposed RCT, but not so much that the trial is already completed and no longer novel.

The purpose of checking fidelity is to assure adherence to the intervention manual and study protocol and to determine that the intervention delivery was done competently. In addition, not only should the target intervention be checked for manual adherence, but also the comparison conditions should be evaluated to ensure that those delivering the alternative interventions are not doing the target intervention. Thus, fidelity scales should be constructed for each intervention condition.

Fidelity measures can include a simple adherence checklist, but they also can include ratings of the quality of the delivery. That is, fidelity forms can assess not only whether the interventionists did what is in the manual, but how well they did it, rated along a continuum (poor, fair, satisfactory, good, outstanding). Thus, the person delivering the intervention can adhere to the manual, but not do it well. For example, a rating of poor would be given if they just read from the manual without attending to the participant’s reactions. They also might present the material satisfactorily but miss opportunities to teach a skill in the moment during the session.

Another issue is who should do the fidelity ratings. Whereas fidelity checks for adherence can be rated by individuals who have less intervention experience, quality ratings typically require a person with more expertise in the intervention, often a supervisor. In addition, there should be a plan for checking inter-rater reliability of the fidelity ratings. How many sessions will be evaluated? What percent of the entire sample of sessions will be rated, and how many of these will be rated by more than one person? How are the sessions selected for rating – for example, are they chosen randomly or every third or fourth session?
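A minimal sketch of such a plan follows (Python; the 20% sampling fraction and the binary adherent/non-adherent codes are hypothetical choices, and scikit-learn's cohen_kappa_score is used for the agreement statistic).

```python
import random
from sklearn.metrics import cohen_kappa_score

def sample_sessions(session_ids: list[str], frac: float = 0.20,
                    seed: int = 1) -> list[str]:
    """Draw a reproducible random sample of recorded sessions for fidelity
    rating (here, a hypothetical 20% of all sessions)."""
    rng = random.Random(seed)
    k = max(1, round(frac * len(session_ids)))
    return rng.sample(session_ids, k)

def interrater_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa for the double-rated subset, using categorical fidelity
    codes (e.g., 1 = adherent, 0 = non-adherent for each session)."""
    return cohen_kappa_score(rater_a, rater_b)
```

Documenting the sampling rule and the reliability statistic in advance makes the fidelity plan auditable rather than ad hoc.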

Finally, fidelity scales can be useful for training and supervision. Supervisors can use the fidelity scales to provide feedback to therapists in training. In addition, having the persons implementing the intervention watch their own sessions and rate them for fidelity can be a useful training tool.

Power

A common critique of R61s and R33s is that they appear to be underpowered to detect a significant effect. Unfortunately, the guidelines in the FOA are contradictory. On the one hand, the FOA asks in the study design section: “Given the methods used to assign participants and deliver interventions, is the study design adequately powered to answer the research question(s), test the proposed hypothesis/hypotheses, and provide interpretable results?” Later in the same FOA, however, it explicitly states that “Pilot studies supported by the R33 should not be powered as strong tests of clinical efficacy, but rather should test the link between the degree of the intervention’s target engagement and functional outcomes in a clinical or at-risk population.” This likely is even more true regarding the R61. Thus, according to this statement, one could conclude that neither the R61 nor the R33 is supposed to be powered to detect a significant difference between the target intervention and the comparison condition on either the target or the clinical outcome. Then how does the investigator determine the sample size, and how is target engagement quantified?
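The tension is easy to see with a standard power calculation. A minimal sketch (Python with statsmodels; the d = 0.5, 80% power, and alpha = .05 inputs are hypothetical planning values) shows the per-arm sample size a conventional two-group comparison would require, which typically exceeds an R61/R33 budget.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-arm n needed to detect a hypothetical between-group
# effect of d = 0.5 with 80% power at alpha = .05 (two-sided, independent t).
n_per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                        power=0.80, alternative="two-sided")
print(round(n_per_arm))  # ~64 per arm, i.e., ~128 participants in total
```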

According to the FOA, the R61 should provide an operational definition and specification of objective measures of the target/mechanism and establish that the proposed measures are sensitive to intervention-induced change. Second, the intervention protocol should demonstrate that it can alter the target. There also should be target engagement at the proposed treatment parameters, such as intensity, duration, and frequency (i.e., “dose”). The FOA, however, lacks clear guidelines for how to operationalize target engagement. As noted earlier in the section discussing the target, the quantitative measure of target engagement can be a specified effect size, a certain standard deviation difference, a level of significance, or absolute cut points. The type of quantitative index proposed will depend on the nature of the measures of the target, so the FOA cannot provide specific parameters that will apply to every proposal. Unfortunately, this leaves both the grant applicant and the reviewers with little guidance on how to construct and evaluate, respectively, target engagement. Given that over 400 proposals have been funded, some applicants clearly have figured out how to quantify the target; the papers in this special issue provide good examples of how this can be done.
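As one illustration of turning an engagement criterion into a sample size, suppose the R61 milestone is a between-group difference on the target measure of at least d = 0.5. A standard two-sample power calculation then yields the n per arm needed to detect that engagement signal, rather than clinical efficacy. The sketch below uses statsmodels; the effect size, alpha, and power values are assumptions chosen for illustration, not FOA-specified parameters.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical R61 engagement criterion: intervention vs. control difference
# on the target measure of at least d = 0.5 (a medium standardized effect).
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative="two-sided")
print(f"~{n_per_arm:.0f} participants per arm")  # ~64 per arm

# The same machinery answers the reverse question: with a feasible pilot
# of n = 30 per arm, what power is there to detect d = 0.5 on the target?
power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
print(f"power with n = 30 per arm: {power:.2f}")  # ~0.47
```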

Transition to the R33

The transition to the R33 is contingent on successfully meeting the milestones of the R61 phase – that is, on the R61 having demonstrated appropriate target engagement. This is determined by the program officer after reviewing a progress report that includes detailed results. In general, it is not necessary for the R61 phase to have shown an effect on the clinical outcome as well as the target, although such an effect can be further evidence that moving to the R33 makes sense. The primary aim of the R33 is to demonstrate that the intervention affects the target, which in turn affects the clinical outcomes (e.g., symptoms, diagnoses, functioning). Although this sounds like a test of mediation, the R33 often is not sufficiently powered to conduct such a test. The FOA states that “the R33 should not be powered as strong tests of clinical efficacy, but rather should test the link between the degree of the intervention’s target engagement and functional outcomes in a clinical or at-risk population.” Nevertheless, reviewers tend to look for a power analysis in the application.
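For readers who want to see what this (often underpowered) mediation logic looks like analytically, a minimal product-of-coefficients sketch follows, using statsmodels OLS on simulated data. The variable names, effect sizes, and data-generating model are assumptions of this example, not anything specified in the FOA.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # a pilot-sized sample; typically too small for a well-powered mediation test
tx = rng.integers(0, 2, n)                    # randomized: 1 = intervention, 0 = control
target = 0.5 * tx + rng.normal(size=n)        # path a: intervention engages the target
outcome = 0.4 * target + rng.normal(size=n)   # path b: target change drives the outcome

# Path a: does the intervention move the target?
a_model = sm.OLS(target, sm.add_constant(tx)).fit()
# Path b (plus the direct effect): does the target predict outcome, controlling for treatment?
b_model = sm.OLS(outcome, sm.add_constant(np.column_stack([tx, target]))).fit()

a = a_model.params[1]  # effect of treatment on target
b = b_model.params[2]  # effect of target on outcome, holding treatment constant
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
# In practice one would bootstrap a*b for a confidence interval; the point
# estimate alone says little in a sample this small.
```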

In addition, other aims of the R33 include: (a) refining the intervention, revising the manual, and modifying the fidelity forms as needed; (b) collecting further data about feasibility, safety, and acceptability of the intervention; (c) improving methods of recruitment, randomization, and retention; and (d) evaluating the adequacy of the assessment tools for measuring the target and the clinical outcomes, and modifying the measures as required. The results of the R33 phase should inform the design of a subsequent, fully powered, confirmatory efficacy trial, if indicated.

Several questions about the R33 phase arise for applicants and reviewers. First, should the R33 have the same design as the R61 so that the findings can be replicated? Should additional comparison groups be included (see the earlier discussion of control groups)? Second, is it sufficient for the R33 to be a modest incremental step beyond the R61, or does it have to be more innovative? That is, is incremental science acceptable for the R33?

Third, is it necessary to include a follow-up period to test the sustainability of the effects? If so, how long should the follow-up be? Substantively, the answer should be linked to how long the effects of the intervention are expected to last. The constraint that the R33 be completed within three years, however, will also shape decisions about how long to follow the sample after the intervention ends.

Finally, what do the findings from the R61 tell us about the dose (e.g., number, duration, and frequency of sessions) to be used in the R33? Are there hints from the R61 regarding possible moderators of the intervention’s effects that could be tested in the R33, such as participants’ age, gender, or race/ethnicity, characteristics of the interventionists (e.g., years of training, match to the participant in gender or race/ethnicity), or mode of delivery (e.g., in person vs. remote)? And what are the implications of the findings for clinical practice?
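Analytically, a moderator question of this kind reduces to a treatment-by-covariate interaction. A minimal sketch follows, assuming simulated data and the statsmodels formula interface; the moderator (age) and effect sizes are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "tx": rng.integers(0, 2, n),    # randomized: 1 = intervention, 0 = control
    "age": rng.uniform(12, 18, n),  # hypothetical moderator
})
# Simulate a larger treatment effect for older participants (illustrative only)
df["outcome"] = 0.2 * df.tx + 0.1 * df.tx * (df.age - 15) + rng.normal(size=n)

# The tx:age coefficient tests whether the treatment effect varies with age
model = smf.ols("outcome ~ tx * age", data=df).fit()
print(model.params["tx:age"], model.pvalues["tx:age"])
```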

Clinical Trials Research

The R61/R33 funding mechanism has many strengths. It has forced intervention researchers to think about mechanisms – not just whether an intervention “works” but how it works. Although I fully agree with this aim, I question the way NIMH has gone about it. First, I question the basic assumption upon which this funding mechanism was based: that there has been little progress in demonstrating efficacious psychosocial interventions for the treatment and prevention of psychiatric disorders. This simply is not the case. For example, over the last three decades, there has been good progress in the treatment and prevention of depression and anxiety (e.g., Garber et al., 2016; Hollon et al., 2021). Much more work is needed, however, particularly regarding personalization; that is, for whom do which interventions work best (e.g., DeRubeis et al., 2014; Young et al., 2021).

Second, Insel and Gogtay (2014) were correct in noting that we still know very little about the mechanisms underlying the disorders, as well as the mechanisms that account for improvement as a function of interventions. But is the R61/R33 the best or only way to learn about mechanisms in clinical trials? This approach clearly can yield, and has yielded, some interesting and worthwhile findings. Another approach, however, is first to demonstrate that an intervention produces clinically meaningful change and then pursue the mechanism(s) that explain it. Why not allow both approaches to conducting clinical trials of psychosocial interventions?

Third, will the R61/R33 approach really get us answers any more rapidly than other ways of conducting clinical trials? It takes two years to do the R61 and three more years to conduct the R33. Only after that can investigators conduct a large, sufficiently powered R01 clinical trial, which could take another five years. Although the experimental therapeutics approach eventually may yield important breakthroughs, how do we justify and balance this with the need to reduce the suffering of patients today? Conducting RCTs that provide data about what works may be more efficient than waiting to determine how it works first.

A fundamental problem with the entire enterprise is that NIMH has been moving to a top-down approach in which the scientists and administrators in Program decide how research should be conducted. The alternative is a bottom-up approach in which the scientists conducting the clinical trials propose the research designs and other scientists pass judgment on these proposals in scientific review committees. As it stands now, clinical trials researchers are permitted to use only the R61/R33 FOAs, and scientific review committees are instructed to score applications with this funding mechanism in mind. Investigators who want to submit an R01 to conduct a clinical trial must first have demonstrated that they have “hit a target,” even if they did not go through the preliminary R61/R33.

One possible solution is to allow both types of approaches to conducting clinical trials. That is, allow some investigators to conduct RCTs that demonstrate an effect on the outcome(s) of interest and that also include tests of mediators (target mechanisms) and moderators. At the same time, other investigators can use the R61/R33 approach, mainly targeting the hypothesized mechanism(s) in the R61 and subsequently showing an effect on symptoms in the R33. It would be interesting to compare these different scientific methods and their progress after a decade or two of pursuing each. Why not use multiple approaches rather than mandating a specific experimental design? Of course, NIMH has finite resources, so it makes sense to use designs that are most likely to yield important results. Which research designs accomplish this goal, however, remains an open question.

Finally, given that this approach to treatment discovery grew out of attempts to create new medications, we should revisit whether a parallel approach makes sense for psychosocial intervention development. Does a focus on experimental therapeutics make sense when conducting clinical trials of nonpharmacological interventions? Similarly, is the requirement to determine the optimal dose less relevant to psychosocial interventions? In short, is it time to question the basic assumption that this is the best path to the discovery of new interventions?

In summary, the R61/R33 funding mechanism has yielded some important and interesting findings about the mechanisms underlying psychopathology and about change as a result of interventions. Various concerns about this approach still need to be addressed, however. This commentary has outlined several of these issues and some possible solutions.


Acknowledgments

This work was supported in part by grants from the National Institute of Mental Health (R61MH115125, R33MH115125, R61MH119270).

References

  1. DeRubeis RJ, Cohen ZD, Forand NR, Fournier JC, Gelfand LA, & Lorenzo-Luaces L (2014). The Personalized Advantage Index: Translating research on prediction into individualized treatment recommendations. A demonstration. PLoS ONE, 9(1), e83875. doi: 10.1371/journal.pone.0083875
  2. Garber J, Brunwasser S, Zerr AA, Schwartz KTG, Sova K, & Weersing VR (2016). Treatment and prevention of depression and anxiety in youth: Test of cross-over effects. Depression and Anxiety, 33, 939–959. doi: 10.1002/da.22519
  3. Gordon J (2017, March 20). NIMH Director’s Messages: An experimental therapeutic approach to psychosocial interventions. National Institute of Mental Health. https://www.nimh.nih.gov/about/director/messages/2017/an-experimental-therapeutic-approach-to-psychosocial-interventions
  4. Hollon SD, Andrews PW, Keller MC, Singla DR, Maslej MM, & Mulsant B (2021). Combining psychotherapy and medications: It’s all about the squids and sea bass (at least for nonpsychotic patients). In Barkham M, Lutz W, & Castonguay LG (Eds.), Handbook of psychotherapy and behavior change (7th ed., pp. 705–738). New York: Wiley.
  5. Insel TR, & Gogtay N (2014). National Institute of Mental Health clinical trials: New opportunities, new expectations. JAMA Psychiatry, 71(7), 745–746. doi: 10.1001/jamapsychiatry.2014.426
  6. Lewinsohn PM, & Gotlib IH (1995). Behavioral theory and treatment of depression. In Beckham EE & Leber WR (Eds.), Handbook of depression (pp. 352–375). Guilford Press.
  7. Mohr DC, Spring B, Freedland KE, Beckner V, Arean P, Hollon SD, Ockene J, & Kaplan R (2009). The selection and design of control conditions for randomized controlled trials of psychological interventions. Psychotherapy and Psychosomatics, 78, 275–284. doi: 10.1159/000228248
  8. National Institute of Mental Health (2021). Strategic plan for research (NIH Publication No. 20-MH-8096). https://www.nimh.nih.gov/strategicplan
  9. Young JF, Jones JD, Gallop R, Benas JS, Schueler CM, Garber J, & Hankin BL (2021). Personalized depression prevention: A randomized controlled trial to optimize effects through risk-informed personalization. Journal of the American Academy of Child and Adolescent Psychiatry, 60(9), 1116–1126. doi: 10.1016/j.jaac.2020.11.004
