International Journal of Methods in Psychiatric Research. 2024 Oct 1;33(4):e70003. doi: 10.1002/mpr.70003

Design of a multicenter randomized controlled trial of a post‐discharge suicide prevention intervention for high‐risk psychiatric inpatients: The Veterans Coordinated Community Care Study

Lauren M Weinstock 1, Todd M Bishop 2,3, Mark S Bauer 4, Jeffrey Benware 5, Robert M Bossarte 2,6, John Bradley 4,7, Steven K Dobscha 8, Jessica Gibbs 9, Sarah M Gildea 10, Hannah Graves 1, Gretchen Haas 11,12, Samuel House 13,14, Chris J Kennedy 4,15, Sara J Landes 16,17, Howard Liu 2,10, Alex Luedtke 18,19, Brian P Marx 20,21, Aletha Miller 22, Matthew K Nock 23, Richard R Owen 14,24, Wilfred R Pigeon 2,3, Nancy A Sampson 10, Alejandro Santiago‐Colon 22, Geetha Shivakumar 22,25, Snezana Urosevic 26,27, Ronald C Kessler 10,
PMCID: PMC11443605  PMID: 39352173

Abstract

Background

The period after psychiatric hospital discharge is one of elevated risk for suicide‐related behaviors (SRBs). Post‐discharge clinical outreach, although potentially effective in preventing SRBs, would be more cost‐effective if targeted at high‐risk patients. To this end, a machine learning model was developed to predict post‐discharge suicides among Veterans Health Administration (VHA) psychiatric inpatients and target a high‐risk preventive intervention.

Methods

The Veterans Coordinated Community Care (3C) Study is a multicenter randomized controlled trial using this model to identify high‐risk VHA psychiatric inpatients (n = 850) randomized with equal allocation to either the Coping Long Term with Active Suicide Program (CLASP) post‐discharge clinical outreach intervention or treatment‐as‐usual (TAU). The primary outcome is SRBs over a 6‐month follow‐up. We will estimate average treatment effects adjusted for loss to follow‐up and investigate the possibility of heterogeneity of treatment effects.

Results

Recruitment is underway and will end in September 2024. Six‐month follow‐up will end and analysis will begin in summer 2025.

Conclusion

Results will provide information about the effectiveness of CLASP versus TAU in reducing post‐discharge SRBs and provide guidance to VHA clinicians and policymakers about the implications of targeted use of CLASP among high‐risk psychiatric inpatients in the months after hospital discharge.

Clinical trials registration

ClinicalTrials.Gov identifier: NCT05272176 (https://www.clinicaltrials.gov/ct2/show/NCT05272176).

Keywords: behavior, clinical trials, suicide

1. INTRODUCTION

Suicide is the 12th leading cause of death in the US (CDC's WISQARS™, 2020). The great majority of suicides occur among people with psychiatric disorders (Arsenault‐Lapierre et al., 2004; Favril et al., 2022). Among patients in treatment for mental disorders, those recently discharged from psychiatric hospitalization have the highest suicide risk (Ahmedani et al., 2019; Chung et al., 2017; Krause et al., 2020), with the roughly 1% of US adults discharged annually from psychiatric hospitalizations accounting for approximately 14% of all suicides (Chung et al., 2017). Interventions involving proactive, structured clinical outreach can reduce these post‐discharge suicides (Matarazzo et al., 2017; Miller et al., 2016, 2017; Stanley et al., 2018). However, such interventions can be labor‐intensive and would be more cost‐effective if delivered exclusively to high‐risk patients. In light of this, machine learning (ML) models using electronic health records have been developed to predict suicides occurring after psychiatric hospital discharge (Kessler, Bauer, et al., 2020; Kessler et al., 2015; Nielsen et al., 2023). Based on the strength of such a model developed in the Veterans Health Administration (VHA) system (Kessler et al., 2023), a randomized controlled trial was launched to evaluate the effects of a proactive, structured clinical outreach intervention in preventing suicide‐related behaviors (SRBs) after discharge among VHA inpatients defined as high risk by the ML model (U.S. National Library of Medicine, 2022). SRBs are defined as suicide deaths, deaths that are potentially suicides (e.g., opioid overdoses and other substance‐related deaths), or nonfatal suicide attempts recorded in VHA administrative records or self‐reported in follow‐up surveys of the sample. The current report describes the methods of this Veterans Coordinated Community Care (3C) Study.

2. MATERIALS AND METHODS

2.1. Purpose

The purpose of 3C is to determine whether adding the Coping Long Term with Active Suicide Program (CLASP; Miller et al., 2016, 2017, 2022) to treatment‐as‐usual (TAU) reduces SRBs relative to TAU alone in the 6 months after hospital discharge among veterans at high risk for suicide following discharge from inpatient psychiatric care. CLASP is a remote, moderate‐intensity (15–45 min clinical telephone or telehealth outreach contacts with decreasing frequency over 6 months), adjunctive intervention (i.e., one that can be delivered even when patients are receiving psychotherapy and medication management from other mental health professionals) designed to be delivered by trained and supervised master's‐level mental health clinicians and clinical social workers. CLASP, which is described in greater detail in a recently published treatment manual (Miller et al., 2022), was developed through an iterative approach consistent with the stage model of treatment development (Rounsaville et al., 2001). Early clinical trials (Miller et al., 2016) and subsequent evaluation in the large, multi‐site Emergency Department Safety Assessment and Follow‐up Evaluation (ED‐SAFE) study (Miller et al., 2017) support its effectiveness in reducing suicidal behaviors among civilian samples in the 6–12 months after hospital discharge. As noted above, we define SRBs as suicide deaths, deaths that are potentially suicides (e.g., opioid overdoses and other substance‐related deaths), or nonfatal suicide attempts recorded in VHA administrative records or self‐reported in follow‐up surveys of the sample. Secondary outcomes include patient self‐reports of suicide ideation, plans, and intent in 6‐month follow‐up surveys.

2.2. Primary and secondary aims

The primary aim of 3C is to estimate the aggregate effect of CLASP. However, the threshold for determining which patients should receive CLASP would ideally be based on information about the strength of the association between CLASP and SRBs across patient subgroups rather than on the concentration of risk alone. Hence, it would be useful to investigate whether the effects of CLASP vary depending on patient characteristics. Based on this reasoning, the secondary aim of 3C is to carry out an exploratory analysis of heterogeneous treatment effects (HTEs). If the effects of CLASP are found to vary significantly depending on administrative and self‐report information available for patients prior to randomization, a graphic representation will be developed to help guide clinical decision‐makers in determining an optimal intervention threshold.

2.3. Design

A previously developed VHA post‐discharge ML risk model is used to target 3C randomization (Kessler et al., 2023). This model uses electronic health record (EHR), administrative, and residential geocode data as input. We created a protocol to make these data available within 24 h (for admissions Monday to Thursday) or 72 h (for admissions Friday to Sunday) of each new VHA admission to any of the psychiatric inpatient recruitment sites. A flag for patients predicted by the ML model to be at high risk of post‐discharge suicide is then sent by secure file transfer to the project research assistant (RA) at the participating unit. After obtaining permission from the treatment team to approach eligible patients, the RA explains the study to the patient and then seeks informed consent. The RA then administers a computerized baseline self‐report questionnaire to consenting patients. If time constraints prevent approaching the patient on the unit prior to discharge, the RA attempts to contact the patient remotely within 10 business days of discharge.

Randomization is then carried out automatically by computer with equal allocation between arms and systematic sampling within centers, blocking on level of predicted risk. The patient's assignment to the intervention or control group is conveyed electronically to the RA, who notifies participants of their assignment. Patients assigned to the intervention are connected to a CLASP Advisor, a master's‐level mental health clinician or clinical social worker who delivers an adjunctive, telephone‐based outreach intervention designed to reduce SRBs among individuals at high risk during acute care transitions (Miller et al., 2022). CLASP clinicians are referred to as Advisors rather than therapists to distinguish them from the psychotherapists whom many patients see independent of the CLASP intervention. In the current study, patients are also given the flexibility to participate in sessions by a Veterans Affairs (VA) Central IRB (CIRB)‐approved video telehealth platform. Intervention content is discussed in more detail in Section 3. The initial two to three remote CLASP sessions occur while patients are on the inpatient unit, scheduled with the assistance of the site RA, but may instead occur immediately after hospital discharge if more feasible, similar to CLASP implementation in ED‐SAFE (Miller et al., 2017).

Another unique element of CLASP is the opportunity for patients to identify a supportive significant other (SO) who can optionally participate at specific time points to support the patient in working toward treatment goals. If an SO is identified, the CLASP Advisor communicates not only with the patient but also with the SO, either in joint sessions or in separate sessions, always keeping the patient informed when separate sessions with the SO occur. CLASP delivery for patients (and SOs, if participating) continues through 6 months after hospital discharge; at the end of this period, patients in both arms are administered an online or telephone questionnaire (depending on patient preference) that assesses self‐reported outcomes. VHA administrative data are combined with self‐report outcome data to create consolidated outcomes for analysis. The Department of Veterans Affairs (VA) Central IRB (CIRB) maintains regulatory oversight of the 3C study protocol, which is further monitored by an independent Data Safety and Monitoring Board comprising experts in suicide prevention, veterans' mental health, clinical trials methodology, and biostatistics.

2.4. Eligibility

As noted above, eligibility is determined by scores on an ML risk model. The threshold for this model was initially set to include only the 15% of patients with highest predicted risk, which captured close to 50% of all the suicide deaths in the development test sample. The threshold was then lowered temporarily in response to changes in hospital admission rates over the recruitment period to maintain a steady volume of work for the CLASP Advisors. As such changes can affect intervention effectiveness, information about threshold changes is being included in the predictor set for the analysis of heterogeneity of treatment effects. This analysis phase is described in Section 6.2.2.

2.5. Inclusion and exclusion criteria

Inclusion criteria are being a veteran age 18+ in inpatient psychiatric VHA treatment, meeting the high‐risk threshold of the ML model, and having access to a telephone after discharge. Exclusion criteria are impaired decision‐making capacity to provide informed consent, limited English language proficiency, and terminal illness.

2.6. Statistical power

The target sample is n = 850 patients completing the baseline assessment and enrolling in the intervention (n = 425 per arm). The sample size was determined by a power analysis that used the following inputs: (i) an estimated 6‐month SRB prevalence of 22% in the total inpatient population, based on observational studies in high‐income countries that followed suicidal psychiatric inpatients for 6 months after hospital discharge (Azcárate‐Jiménez et al., 2019; Teti et al., 2014); (ii) an estimate that this prevalence will increase to 37.2% in our sample, taking into consideration the strength of our prediction model in previous observational studies (Kessler, Bauer, et al., 2020; Kessler et al., 2023) and our study's focus on high‐risk patients; (iii) estimates from previous research suggesting that CLASP is associated with a 30%–50% reduction in SRBs (Miller et al., 2016, 2017); and (iv) the assumption of 25% loss to follow‐up. It is noteworthy that we did not use VHA data on the 6‐month prevalence of administratively recorded SRBs for the first of these four estimates, because we know from previous research that administrative records substantially under‐report SRBs (Nock et al., 2008). This is also the reason we are carrying out follow‐up surveys with patients to learn of self‐reported (and in some cases informant‐reported, when the patient is deceased) SRBs rather than relying exclusively on VHA administrative records.

Taken together, these inputs suggest outcome rates of 18.6%–26% for patients randomized to CLASP (i.e., 37.2% × 0.50–0.70) compared to 37.2% for controls. We seek 0.8 power to detect the lower end of this range of effects using a 0.05‐level two‐sided test. A power analysis using the standard calculations for a two‐sample z test of proportional differences (Chow et al., 2017) showed that we will need n = 318 patients per arm completing follow‐up to achieve 0.8 power. With adjustment for loss to follow‐up (i.e., 318/0.75), we will need n = 425 patients per arm enrolled in the study. This was the basis for setting the target sample size at 850 patients enrolled in the trial with equal allocation between the intervention and control arms.
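The arithmetic behind these inputs can be checked directly. Below is a minimal base R sketch that reproduces the expected CLASP‐arm outcome range and the loss‐to‐follow‐up inflation; it illustrates the numbers quoted above and is not the study's power‐analysis code, which followed Chow et al. (2017).

```r
# Reproduce the sample-size arithmetic quoted in the text (illustration only).
p_control <- 0.372            # assumed 6-month SRB prevalence under TAU in this high-risk sample
reduction <- c(0.30, 0.50)    # assumed relative reduction in SRBs attributable to CLASP
p_clasp   <- p_control * (1 - reduction)
p_clasp                       # 0.2604 0.1860, i.e., the 18.6%-26% range quoted above

n_complete       <- 318       # per-arm n from the two-sample test of proportions (Chow et al., 2017)
loss_to_followup <- 0.25
n_complete / (1 - loss_to_followup)   # 424; the study sets the per-arm enrollment target at 425 (850 total)
```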

The trial was not powered to detect HTE, as this was a secondary aim. It is noteworthy, though, that power analyses for HTE are complex because they require simulation. Luedtke et al. (2019) carried out a simulation of this type to evaluate HTE for differential effects of treatments for major depressive disorder across patients, assuming population‐level average treatment effects similar in magnitude to those assumed here. Results showed that 0.7–0.8 power to detect HTE in a plausible range for such effects would require a sample of n = 300–500 patients per arm. We consider these results relevant to our trial given the comparable expected effect size and range of predictors. Our sample size is somewhat above the midpoint of the sample size range suggested by Luedtke and associates.

2.7. Randomization

The computerized randomization scheme implements equal allocation between arms within medical centers, blocked on level of predicted risk. Blocking was implemented by defining a series of stratification cells for level of predicted risk, randomizing the first patient in each cell within each hospital to either the intervention or control arm using a Hadamard matrix that forced balance across the combination of cells and hospitals, and then using sequential allocation within hospital‐by‐cell strata for subsequent assignments (a simplified illustration appears after Table 1). Seven VHA medical centers are participating in 3C: Boston, Central Arkansas, Minneapolis, Nashville, North Texas, Pittsburgh, and St. Louis (Table 1). Study enrollment started in these centers between January 11, 2022 (St. Louis) and August 14, 2023 (Nashville).

TABLE 1.

Veterans Health Administration centers participating in Veterans Coordinated Community Care Study with start dates of recruitment.

Center Start date
Boston 09/27/22
Central Arkansas 04/25/22
Minneapolis 02/14/22
Nashville 08/14/23
North Texas 02/07/22
Pittsburgh 06/13/22
St. Louis 01/11/22
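The sketch below illustrates the general idea of equal‐allocation randomization blocked on hospital and predicted‐risk stratum using a simple permuted‐block scheme in base R. It is an illustration under assumptions, not the study's algorithm: the study seeds each hospital‐by‐risk cell with a Hadamard matrix and then uses sequential allocation, and the hospital and stratum values here are hypothetical.

```r
# Simplified illustration of equal-allocation randomization blocked on hospital and
# predicted-risk stratum (NOT the study's Hadamard-matrix / sequential-allocation scheme).
set.seed(20220111)

assign_within_cell <- function(n, block_size = 4) {
  # Permuted blocks: each block of 4 contains 2 CLASP and 2 TAU assignments in random order.
  blocks <- replicate(ceiling(n / block_size),
                      sample(rep(c("CLASP", "TAU"), block_size / 2)),
                      simplify = FALSE)
  unlist(blocks)[seq_len(n)]
}

# Hypothetical enrollment stream: hospital and risk stratum for each consented patient.
patients <- data.frame(
  hospital     = sample(c("Boston", "Minneapolis", "St. Louis"), 30, replace = TRUE),
  risk_stratum = sample(c("high", "very_high"), 30, replace = TRUE)
)

patients$arm <- NA_character_
for (cell in split(seq_len(nrow(patients)),
                   list(patients$hospital, patients$risk_stratum), drop = TRUE)) {
  patients$arm[cell] <- assign_within_cell(length(cell))
}
table(patients$hospital, patients$arm)   # near-equal allocation within each hospital
```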

2.8. Consent

As noted above, informed consent is obtained by the recruitment site RA after the treatment team provides permission to approach the patient. The RA further evaluates patient decision‐making capacity to provide informed consent before explaining study aims and procedures. The RA emphasizes to patients that participation is voluntary, that patients are free to withdraw from the study at any time, and that decisions about participation and withdrawal will have no effect on current or future patient treatment or eligibility for other VA services. As required by the VA CIRB as part of the informed consent process, patients sign a separate Health Insurance Portability and Accountability Act Authorization Form allowing the study staff to access their Protected Health Information (PHI) for research purposes. Patients who choose to provide informed consent for participation provide a digital signature through DocuSign, after which a printed copy of the signed informed consent document is provided. For those who request additional time to consider their participation, written materials providing this same information are left with patients to review. As SOs are identified through a collaborative discussion between the patient and CLASP Advisor in the early CLASP sessions, the informed consent process with SOs is conducted verbally by CLASP Advisors over the telephone, facilitated by an IRB‐approved waiver of written informed consent.

3. INTERVENTION DESCRIPTION

As described in greater detail in prior publications (Miller et al., 2016, 2017) and the published treatment manual (Miller et al., 2022), CLASP is an adjunctive, telephone‐based outreach intervention designed to reduce SRBs among individuals at high risk during acute care transitions. In 3C, patients have the flexibility to participate in sessions by telephone or a VA CIRB‐approved video telehealth platform. To enhance potential scalability for future implementation, CLASP was designed to be delivered by master's‐level mental health clinicians and clinical social workers. Because CLASP is delivered as an adjunct to unrestricted routine care, the clinicians who deliver it are referred to as CLASP Advisors to minimize confusion between the CLASP providers and the patient's routine care providers.

In 3C, CLASP Advisors are recruited from VA and civilian mental health settings. Although employed per diem to facilitate the research aims, the CLASP Advisors are nevertheless selected on the basis of credentials (i.e., a minimum of a master's degree in a mental health or related field and experience working with those at risk for suicide) similar to those of clinicians employed in VHA settings, which is where we would anticipate CLASP being implemented should it be found effective. The 3C CLASP Advisors receive training and certification prior to intervention delivery, with ongoing supervision and fidelity monitoring throughout the study period. To facilitate the latter, sessions are recorded with a secure audio recorder and stored behind the VA firewall. A random sample of 10% of the session recordings is independently reviewed and rated on the CLASP fidelity rating scale available in the published treatment manual (Miller et al., 2022).

CLASP centers on a series of sessions with patients and, when identified, with supportive SOs. As described in greater detail in the published treatment manual (Miller et al., 2022), the initial sessions focus on orientation to CLASP, the role of the SO in care, risk and needs assessment, safety planning, and identification of post‐discharge goals grounded in an exploration of personal values. These goals are documented in the patient's Life Plan, a written plan in which the patient's values are mapped onto short‐term goals, with articulated action steps to take toward those goals. The third session includes orientation of the SO to CLASP and review of the Life Plan to enlist the SO's support toward goals after discharge. If the patient is unable or unwilling to identify an SO for participation, CLASP proceeds according to the standard schedule but without the SO sessions. In ED‐SAFE, for example, only 20% of the participants in the CLASP arm had a participating SO, yet there was nevertheless a significant reduction in suicidal behaviors over the 52‐week follow‐up (Miller et al., 2017). Following discharge, patients (and SOs, if included) are contacted for up to 11 brief (10–15 min) contacts over 6 months: weekly for the first month, biweekly for the next 2 months, and monthly for the final 3 months. These contacts focus on ongoing risk assessment (and triage if needed), review of progress toward goals, and encouragement and problem‐solving around engagement with routine care. One direct benefit of delivering this program within the VHA's closed system of care is that CLASP Advisors can document contacts within the EHR to further enhance coordination of care among routine VHA providers.

Previous research has shown significant reductions in SRBs associated with CLASP not only in the original hybrid in‐person/telephone‐based version of the intervention (Miller et al., 2016) but also in a fully remote telephone‐based version similar to that used in 3C (Miller et al., 2017). It is noteworthy that CLASP was previously piloted with mixed results in one VHA setting (Primack et al., 2022). Given the pilot trial design and the robustness of VHA TAU (see below), the mixed results found in that pilot may have been a function of the small sample. Further, inclusion in that pilot study was not based on risk determined by an ML algorithm but rather on patient responses to a research questionnaire.

In 3C, we will compare outcomes of those assigned to CLASP versus those who receive TAU alone. Given that SO participation will vary across patients and may affect outcomes, the presence or absence of an SO in the CLASP intervention will be included in the HTE analyses. When using TAU as a comparator, it is important to delineate what TAU entails in order to enhance the ability to extrapolate results to other settings. Post‐discharge TAU in VHA is generally well‐defined, has clear practice guidelines, and often includes some combination of medication management, psychotherapy, chart flags, and monitoring (Nurjannah et al., 2014). However, this protocol can vary across treatment centers and across patients within centers based on administrative decisions or staffing constraints. We will carefully document TAU in our analysis to help provide context for interpreting the effects of CLASP.

4. MEASUREMENTS

4.1. Baseline administrative and geospatial data

The ML model used to predict suicide risk in the population is based on VA EHR and other administrative data as well as geospatial data about patient residential neighborhoods. The four broad classes of variables in those models will be linked to the 3C database to serve both as control variables and as predictors in the HTE analysis: (i) psychopathological risk factors (diagnoses, treatments, suicidality), including information about these variables during the hospitalization; (ii) physical disorders and treatments, including counts of prescribed medications classified by the Food and Drug Administration as increasing suicide risk; (iii) facility‐level quality indicators (e.g., inpatient staff turnover rate) over the 12 months prior to the date of the hospitalization; and (iv) indicators of social determinants of health at both the patient level (from ICD‐9/10‐CM codes and socio‐demographic information) and the geospatial level as of the month before the hospitalization. A detailed description of these variables is available elsewhere (Kessler, Bauer, et al., 2020; Kessler et al., 2023).

4.2. Baseline patient self‐report assessment

The baseline self‐report assessment was designed to obtain patient reports about a wide range of variables found in previous research to predict SRBs beyond the information included in EHR, administrative and geospatial data (Franklin et al., 2017; Holliday et al., 2020; Klonsky et al., 2016; Nock et al., 2013). The eight domains of these self‐report predictors include demographics, self‐injurious thoughts and behaviors, psychopathological risk factors, personality/temperament, physical health, exposure to recent and lifetime stressors, social networks and supports, and other covariates and social determinants of health (military service, homelessness, firearm ownership). The assessment also obtains baseline scores on a study‐specific aggregate outcome measure that we refer to as the 3C Suicidal Ideation and Functioning Outcome Measure. This measure consists of selected items from the Columbia‐Suicide Severity Rating Scale (C‐SSRS; Posner et al., 2011), Self‐Injurious Thoughts and Behaviors Interview (SITBI; Fox et al., 2020), Suicidal Behaviors Questionnaire (SBQ; Osman et al., 2001), P4 Screener (Dube et al., 2010), Suicide Attempt Beliefs Scale (SABS; Siddaway et al., 2019), Brief Reasons for Living Inventory 10‐item version (BRLI‐10; Cwik et al., 2017), Suicide‐Related Coping Scale (SRCS; Stanley et al., 2017), and Acquired Capacity for Suicide Scale (ACSS; Van Orden et al., 2008). The baseline assessment (Supporting Information S1: Appendix A) takes approximately 45 min to complete. Participants are compensated with a $40 gift card for completing this assessment.

4.3. Outcome assessment

As noted above, the primary outcome is SRBs in the 6 months after discharge. The great majority of these will be nonfatal suicide attempts, which will be assessed both in VHA administrative records (ICD‐10‐CM codes T14.91 and X60–X84) and in patient self‐reports obtained in a 6‐month follow‐up survey. The definition of attempts includes aborted and interrupted attempts. We will also learn of some deaths from informant reports during phone calls in which we attempt to carry out follow‐up interviews. In addition, VA maintains a repository of information about known deaths that we will be able to consult for patients with whom we are unable to make contact for follow‐up interviews. Although this death repository does not record cause of death, it might be possible to make this determination for participating patients recorded as deceased by screening newspapers of record in the communities where they lived or by other means. We will explore such options before carrying out the final data analyses. The secondary outcome will be self‐reports on the 3C Suicidal Ideation and Functioning Outcome Measure, administered again in the 6‐month outcome assessment. The outcome assessment (Supporting Information S1: Appendix B) takes approximately 30 min to complete. Participants are compensated with a $60 gift card for completing this assessment. We hope subsequently to follow patients longer than 6 months through VHA administrative records with VA CIRB approval, but our current funding is limited to a 6‐month follow‐up period.
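For the administrative‐records component, identifying SRBs amounts to screening diagnosis and cause‐of‐death codes against the ranges just listed. The sketch below shows one way this could be done in R; it is illustrative only, and the record layout and field names are hypothetical.

```r
# Flag suicide-related behaviors in administrative records by ICD-10-CM code
# (T14.91 plus the X60-X84 intentional self-harm range). Field names are hypothetical.
is_srb_code <- function(code) {
  code <- toupper(gsub("\\s", "", code))
  grepl("^T14\\.?91", code) | grepl("^X(6[0-9]|7[0-9]|8[0-4])", code)
}

# Example usage on a toy record set.
records <- data.frame(patient_id = c(1, 1, 2, 3),
                      dx_code    = c("F33.2", "T14.91", "X78.9", "J45.909"))
records$srb_flag <- is_srb_code(records$dx_code)
aggregate(srb_flag ~ patient_id, data = records, FUN = any)  # any SRB code per patient
```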

5. SAFETY MONITORING AND RELATED PROTOCOLS

There are numerous procedures in place for careful monitoring of safety and suicide risk in the 3C study, especially at the baseline and 6‐month follow‐up time points. The baseline assessment, which occurs on the psychiatric inpatient unit, does not include assessment of current (i.e., on the unit) suicidal thoughts and behaviors (only those prior to hospital admission). However, procedures are in place to respond to a spontaneous disclosure of current (i.e., past 48 h) suicidal thoughts or behaviors, such that the study RA will notify the participant's inpatient treatment team of the disclosure.

During the 6‐month follow‐up assessment, criteria are in place to determine whether someone requires additional clinical attention at a “lower risk” or a “higher risk” threshold. Importantly, these post‐randomization risk thresholds, assigned for the purposes of ongoing safety monitoring, are distinct from the pre‐randomization predicted risk threshold used to determine eligibility for the intervention. The lower post‐randomization risk threshold is defined by endorsement during the follow‐up assessment of any of the following within the past week: wish to be dead, non‐specific active suicidal thoughts, and/or active suicidal ideation without a plan or intent to act. Further, if the participant endorses suicidal behaviors (i.e., actual, interrupted, or aborted attempts, and/or preparatory acts) since the last assessment, but not within the past week, the lower risk safety protocol is triggered. If a participant meets the lower risk threshold, they are provided with a list of clinical supports and resources (including the Veterans Crisis Line; VCL). This resource list is provided either through an automated pop‐up screen, if the participant completed the assessment online, or verbally by the assessment RA if the participant completed the assessment by telephone. Qualtrics, which is used for survey data collection, is programmed to send an email notification to the study investigators and staff responsible for safety risk monitoring when a participant's assessment responses meet the lower risk threshold. No further follow‐up is required for those who meet this lower risk threshold on the safety monitoring protocol.

The higher post‐randomization risk threshold is defined by endorsement during the follow‐up assessment of any of the following within the past week: active suicidal ideation with some intent to act (but with no plan), active suicidal ideation with a plan, and/or active suicidal ideation with a plan and intent to act. Further, if the participant endorses suicidal behaviors (i.e., actual, interrupted, or aborted attempts, and/or preparatory acts) in the past week, the higher risk safety protocol is triggered. If a participant meeting this higher risk threshold has completed the assessment online, they are presented with an automated pop‐up message that offers resources (including the VCL) and a statement that a study team member or VCL clinician will contact them to follow up within 24 business hours. The study investigators and staff responsible for safety risk monitoring are immediately notified through the automated Qualtrics email alert and will initiate telephone outreach with risk assessment, referral, and emergency triage as needed. If a participant who meets the higher risk threshold completed the assessment by telephone, the RA will offer resources and initiate a warm transfer to the VCL for immediate assessment and triage. If the call is disconnected or the participant hangs up at this point, the RA will call the VCL and request that they reach out to the participant directly and complete the follow‐up.

Finally, participants randomized to CLASP are in regular contact with CLASP Advisors, who conduct routine risk assessment as part of their intervention contacts. CLASP Advisors are trained to manage clinical risk and do so using their clinical judgment. However, should they determine that further assessment and emergency triage are required, they are trained to facilitate a warm or other transfer to the VCL for further assistance. One benefit of coordinating with the VCL to conduct safety follow‐up and additional clinical management as needed is that the VCL clinician documents the encounter and disposition in the patient's medical record, so that study staff may learn of the outcome of the call. All adverse events (AEs) and serious adverse events (SAEs) during study participation, including mental health or substance‐related inpatient hospitalization, suicide attempt, or death, are systematically documented in REDCap and reported to the IRB and the Data and Safety Monitoring Board (DSMB), with timelines determined by their expectedness and relatedness to the study. For example, events determined to be unrelated and expected (e.g., a suicide attempt in a sample selected to be at risk for suicide) are reported to and reviewed by the IRB annually and by the DSMB biannually. Events determined to be related and/or unexpected are reported within five business days. If the event is a death determined to be related and/or unexpected, the IRB and DSMB are notified immediately.

6. STATISTICAL ANALYSIS

6.1. Aim 1

The primary endpoint for Aim 1 will be any SRB over the 6 months after hospital discharge. The causal parameter used to evaluate this effect will be the adjusted risk difference (ARD) in outcome prevalence between the intervention group and the TAU control group, where the adjustment will be for randomization imbalance and systematic loss to follow‐up. Given the expected outcome rates noted above (18.6%–26% under CLASP vs. 37.2% under TAU), we expect the ARD to be in the range of approximately 11.2%–18.6%. The secondary endpoint will be the occurrence of broader suicidality (i.e., ideation, plan, and intent) over the same follow‐up period. Analysis will be from an intention‐to‐treat perspective. Per National Research Council recommendations and subsequent commentaries (Groenwold et al., 2014), we will use a doubly robust estimation method to adjust for residual confounding and for informative loss to follow‐up under a missing at random (MAR) assumption. We will then conduct sensitivity analyses that relax the MAR assumption using two approaches: selection modeling (Little & Rubin, 2019) and pattern mixture modeling with a mean offset (Little, 1994). The doubly robust method will be targeted minimum loss‐based estimation (TMLE; Gruber & Laan, 2012) implemented in the tmle3 R package (Coyle, 2021). This is a semi‐parametric method that places minimal assumptions on the data distribution by pooling estimates across a set of flexible ML algorithms. In addition, the secondary targeting step in TMLE optimizes the bias‐variance trade‐off for the causal parameter.
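A minimal sketch of how such a TMLE estimate of the ARD might be set up with the tmle3 and sl3 packages is shown below. This is an illustration under assumptions, not the study's analysis code: the variable names (trial_data, clasp, srb_6mo, and the covariates) are hypothetical, the censoring/missingness node needed to adjust for loss to follow‐up is omitted for brevity, and the exact tmle3/sl3 interface may differ across package versions.

```r
library(tmle3)
library(sl3)

# Hypothetical analysis dataset: W = baseline covariates, A = arm (1 = CLASP, 0 = TAU),
# Y = any SRB over the 6-month follow-up.
node_list <- list(
  W = c("age", "prior_attempts", "ml_risk_score"),
  A = "clasp",
  Y = "srb_6mo"
)

# Pool flexible learners via super learner for both the outcome and treatment mechanisms.
stack <- make_learner(Stack, Lrnr_glm$new(), Lrnr_ranger$new())
sl <- make_learner(Lrnr_sl, learners = stack)
learner_list <- list(A = sl, Y = sl)

# Target parameter: average treatment effect on the risk scale (the ARD).
ate_spec <- tmle_ATE(treatment_level = 1, control_level = 0)
tmle_fit <- tmle3(ate_spec, data = trial_data, node_list = node_list,
                  learner_list = learner_list)
tmle_fit$summary  # point estimate, SE, and CI for the ARD
```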

6.2. Aim 2

6.2.1. Risk models

Two approaches exist for studying HTE: risk modeling and effect modeling. In risk modeling, a model predicting the clinical outcome ("base risk") is developed either in an observational sample prior to implementing a trial or in an entire trial sample ignoring treatment assignment. This is the approach we used in developing the ML model that determines which inpatients are at high risk of post‐discharge suicide. The base risk estimate from this model is then assigned to each patient in the trial regardless of treatment assignment and used to define subgroups for investigating whether aggregate absolute intervention effects vary with base risk (Kent et al., 2016). The assumption underlying this approach is that some patients have low risk of SRBs even in the absence of treatment, in which case the intervention is unlikely to have a large absolute effect (unless the intervention is harmful). The strength of risk modeling is that it provides a stable estimate of variation in ARD that will often be a strong, although not optimal, predictor of HTE. Consistent with this thinking, a recent secondary analysis of 32 large clinical trials (primarily in cardiology) found that most trials with a significant ARD also had significant HTE, with the highest ARD usually occurring among patients with the highest base risk and the lowest ARD among patients with the lowest base risk (Kent et al., 2016). Some of these associations were striking. For example, in the RITA‐3 trial of early intervention versus usual care for unstable angina, more than half of the significant aggregate treatment effect was due to an extremely strong effect among the one‐eighth of patients with the highest base risk, and there was no meaningful intervention effect among the 50% of patients with the lowest base risk (Fox et al., 2005).

We will estimate two types of risk models in 3C. The first will examine variation in ARD as a function of predicted probabilities of post‐discharge suicide from the ML model used to select patients for the trial. The second will examine variation in ARD as a function of predicted probabilities of our broader measure of SRBs based on a new ML model that we will generate in our sample to predict this outcome. Importantly, information about intervention assignment will not be included in the predictor set for this model. The super learner stacked generalization ML method (van der Laan et al., 2007) will be used to estimate this prediction model. Ten‐fold cross‐validation will be used to avoid over‐fitting. The same doubly robust approach as in Aim 1 will be used to adjust for loss to follow‐up in estimating model coefficients.
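A minimal sketch of the second type of risk model is shown below: fit a cross‐validated super learner to predict 6‐month SRBs from pre‐randomization information only (treatment assignment excluded), then examine how the between‐arm difference in SRB prevalence varies across strata of predicted base risk. This is an illustration under assumptions: the variable names are hypothetical, it uses the SuperLearner package rather than the study's exact implementation, and strictly out‐of‐sample risk scores would require cross‐fitting (e.g., CV.SuperLearner).

```r
library(SuperLearner)

# Predict 6-month SRBs from pre-randomization predictors only (arm excluded),
# pooling flexible learners with 10-fold cross-validation of the ensemble weights.
# Variable and dataset names are hypothetical.
X <- trial_data[, c("age", "prior_attempts", "ml_risk_score")]
sl_fit <- SuperLearner(Y = trial_data$srb_6mo, X = X, family = binomial(),
                       SL.library = c("SL.glm", "SL.ranger"),
                       cvControl = list(V = 10))
base_risk <- as.numeric(predict(sl_fit, newdata = X)$pred)

# Crude look at how the arm difference in SRB prevalence varies by base-risk quartile.
risk_q <- cut(base_risk, quantile(base_risk, probs = seq(0, 1, 0.25)),
              include.lowest = TRUE, labels = c("Q1", "Q2", "Q3", "Q4"))
with(trial_data, tapply(srb_6mo, list(risk_q, clasp), mean))  # rows: risk quartile; cols: TAU vs. CLASP
```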

6.2.2. Effect models

Whereas risk models estimate a single interaction between the composite predicted risk score and treatment assignment to evaluate HTE, effect models estimate a separate interaction for each of a set of hypothesized prescriptive predictors in a multivariable model (where the risk model score may or may not be one of the predictors) and then combine the significant interactions to create an individualized treatment rule (ITR). The ITR is typically created by using counter‐factual logic to compute two predicted probabilities of the outcome for each patient from a multivariable model that includes numerous interactions. The first predicted outcome probability assumes the patient was assigned to CLASP + TAU; the second assumes the patient was assigned to TAU alone. These two within‐person predicted probabilities are then subtracted to create an individual‐level estimate of the ARD referred to as the conditional average treatment effect (CATE; Kovalchik et al., 2013). In more sophisticated versions of effect modeling, this set of within‐person predicted difference scores is then treated as an outcome in a second model that uses doubly robust methods to optimize the bias‐variance trade‐off in estimating CATE. The individual‐level predicted scores from this second model are the final estimates of CATE (VanderWeele et al., 2019).
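The counter‐factual construction described in this paragraph can be illustrated with a deliberately simple parametric example: fit an outcome model with treatment‐by‐covariate interactions, predict each patient's outcome probability under assignment to CLASP and under assignment to TAU alone, and difference the two predictions. This is an illustration only (the study's effect model uses causal forests, described below), and the variable names are hypothetical.

```r
# Simple illustration of the counter-factual construction of CATE (variable names hypothetical).
fit <- glm(srb_6mo ~ clasp * (age + prior_attempts + ml_risk_score),
           family = binomial(), data = trial_data)

# Predict each patient's outcome probability under TAU alone (clasp = 0) and under CLASP (clasp = 1).
p_tau   <- predict(fit, newdata = transform(trial_data, clasp = 0), type = "response")
p_clasp <- predict(fit, newdata = transform(trial_data, clasp = 1), type = "response")

cate_hat <- p_tau - p_clasp   # individual-level risk difference; positive = predicted benefit of CLASP
summary(cate_hat)
```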

The strength of effect modeling is that it has the potential to create an optimal characterization of HTE. However, it also carries a greater likelihood of instability than risk modeling because many interactions are estimated (van Klaveren et al., 2019). This problem is exacerbated when data‐driven methods are used to search for interactions, as the great majority of detected interactions will be false positives unless methods are used to reduce over‐fitting. The need to protect against over‐fitting in building data‐driven models makes it important to apply the evaluation methods developed for ML (e.g., sample splitting) even when simple linear interaction models fit the data as well as more complex specifications (VanderWeele et al., 2019).

Based on these considerations, we will use a relatively simple ML modeling approach, causal forests (Athey et al., 2019), to estimate CATEs in 3C. The causal forest is a doubly robust extension of random forests (Breiman, 2001) designed specifically to evaluate the existence of treatment heterogeneity. This is done by estimating propensity scores and expected marginal outcomes given baseline covariates and then estimating CATEs via a localized partial linear modeling estimator that orthogonalizes out the propensity score and marginal outcome estimates to generate separate counter‐factual predicted outcome probabilities for each patient based on the ensemble of trees in the forest (Athey & Wager, 2019; Nie & Wager, 2021). Implementation of causal forests will be carried out with the grf R package (Tibshirani et al., 2022). CATEs will be estimated with 10‐fold cross‐validation and their performance evaluated using the same logic as in the risk modeling analysis: by estimating a regression equation that predicts the trial outcome from three terms consisting of a dummy variable for treatment assignment, the estimated CATE, and the interaction between treatment assignment and estimated CATE. A significant interaction in this model will be taken as evidence of significant HTE in the effects of CLASP within the high‐risk sample in which the trial is being implemented.
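A minimal sketch of the causal‐forest step with the grf package, followed by an interaction regression of the kind just described, is shown below. It is illustrative only: variable and dataset names are hypothetical, and tuning choices would differ in the actual analysis.

```r
library(grf)

# Covariate matrix, binary outcome, and treatment indicator (names hypothetical).
X <- as.matrix(trial_data[, c("age", "prior_attempts", "ml_risk_score")])
Y <- trial_data$srb_6mo
W <- trial_data$clasp

# Causal forest: internally orthogonalizes against estimated propensity scores and
# marginal outcome predictions, then estimates CATEs from out-of-bag ("honest") trees.
cf <- causal_forest(X, Y, W, num.trees = 2000, seed = 3)
cate_hat <- predict(cf)$predictions          # out-of-bag CATE estimates
average_treatment_effect(cf)                 # doubly robust aggregate effect (AIPW)

# Interaction regression analogous to the test described in the text:
# does the estimated CATE modify the observed treatment effect?
summary(glm(Y ~ W * cate_hat, family = binomial()))
# grf also offers a built-in omnibus calibration test:
test_calibration(cf)
```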

One special consideration in estimating CATE is that only a subset of patients nominates an SO to participate in CLASP. It is of interest to know whether the CLASP ARD is influenced by the presence or absence of an SO. As noted in Section 3, the SO is identified only after randomization and only in the intervention group, making it impossible to use conventional experimental methods (which could be used if the opportunity to select an SO was made available only to a probability subsample of patients in the CLASP arm) or conventional subgroup analysis methods (which could be used if the opportunity to select an SO was randomized in both the intervention and TAU control groups) to evaluate the effect of having an SO. We will instead use a method that assigns a predicted probability of selecting an SO to each participant in both the CLASP arm and the TAU arm and uses that predicted probability as a specifier to examine the effects of having an SO. This method requires developing a prediction model in the intervention arm that uses information obtained from all respondents prior to randomization to predict whether patients in the intervention arm selected an SO, and then assigning the predicted probability from this model, within a cross‐validation framework, to each patient in both arms (Dunn & Bentall, 2007). Cross‐validation is necessary when assigning predicted probabilities in the intervention group to ensure that the predicted probability does not over‐fit the observed SO selection in the intervention arm relative to the control arm. Any type of model can be used for this purpose, but we will use a random forest model (Breiman, 2001), given that we are using an extension of random forests to estimate CATE. Once the predicted probability of selecting an SO is available, it can be used as a specifier in a conventional interaction model that evaluates variation in the CLASP ARD as a function of the predicted probability of having an SO, or it can be included as a predictor in the causal forests analysis.
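A minimal sketch of this SO‐propensity specifier is shown below: a random forest is fit to pre‐randomization predictors within the CLASP arm with cross‐validated predictions, and the same model is then used to score TAU patients, who never had the opportunity to nominate an SO. This is an illustration under assumptions; trial_data, has_so (coded 0/1), and the predictor names are hypothetical.

```r
library(randomForest)

# Baseline predictors measured before randomization (names hypothetical).
base_vars <- c("age", "prior_attempts", "ml_risk_score")
clasp_arm <- subset(trial_data, clasp == 1)

# 10-fold cross-validated predicted probability of selecting an SO within the CLASP arm,
# so the specifier is not over-fit to the arm in which SO selection is observed.
folds <- sample(rep(1:10, length.out = nrow(clasp_arm)))
clasp_arm$p_so <- NA_real_
for (k in 1:10) {
  rf_k <- randomForest(x = clasp_arm[folds != k, base_vars],
                       y = factor(clasp_arm$has_so[folds != k]))
  clasp_arm$p_so[folds == k] <-
    predict(rf_k, clasp_arm[folds == k, base_vars], type = "prob")[, "1"]
}

# Fit once on the full CLASP arm to score TAU patients (who never had the SO option).
rf_all <- randomForest(x = clasp_arm[, base_vars], y = factor(clasp_arm$has_so))
tau_arm <- subset(trial_data, clasp == 0)
tau_arm$p_so <- predict(rf_all, tau_arm[, base_vars], type = "prob")[, "1"]

# p_so can now enter the interaction or causal-forest models as a specifier.
```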

6.2.3. Graphic representation of HTE results

We noted above that risk model results are often presented by estimating the ARD within subsamples defined by ranges of the aggregate risk score. CATE can be presented in the same fashion. However, as CATE is defined at the level of the individual patient, it can also be graphed as a continuous score (e.g., Kessler, Chalker, et al., 2020). An especially useful way to do this is by generating a decision curve (Fitzgerald et al., 2015) to determine an optimal intervention threshold, taking into consideration the minimal CATE needed to justify implementing the intervention given the strength of the intervention effect, intervention costs, and the value placed on preventing an SRB. We will use this approach to graph CATE if our analysis finds significant HTE.

7. DATA MANAGEMENT

Patients are designated as high risk based on the predicted probability of suicide from the previously developed ML model, using analyses that run in R on the VA Informatics and Computing Infrastructure (VINCI). Communications between VINCI and the RA at each local inpatient unit are carried out via secure file transfer using MS Teams, the VA's preferred software for sharing PHI. The baseline self‐report assessment is administered on a computer tablet using Qualtrics software. Qualtrics is also used to administer the 6‐month outcome assessment. CLASP Advisors document patient outreach and clinical contacts using the VA instance of REDCap. AEs and SAEs are documented and tracked within REDCap. No PHI is collected or stored in Qualtrics or REDCap. Consolidated data management is carried out using SAS software. Data analysis will use SAS and R software and will be carried out at Harvard Medical School using a de‐identified, password‐protected dataset. Categorical variables will be dummy coded. Quantitative variables will be standardized to a mean of 0 and variance of 1. Item‐level missing data will be imputed using multiple imputation by super learning (Carpenito & Manjourides, 2022).

8. LIMITATIONS AND CHALLENGES

We initially recruited a somewhat different set of inpatient units than those involved in the final study. Some sites were replaced during start‐up because of difficulties in protocol implementation, such as recruiting and retaining RAs, coordinating with other ongoing trials, and, in one case, the unit referring patients elsewhere because of a hospital construction project. We addressed this challenge by adjusting the threshold for study entry in the remaining sites until new sites were identified and onboarded.

One additional challenge in protocol implementation relates to inpatient recruitment, given that some patients cannot be approached until their clinical status has stabilized and the treatment team determines that the RA may approach them for informed consent. The VA CIRB prohibits approaching involuntarily admitted patients for research consent, which limits the number of otherwise eligible patients. The pace and structure of care delivery on inpatient units further make it difficult to find time and space to speak privately with patients. These challenges are exacerbated by the fact that up to half of all patients are admitted over the weekend, requiring RAs to wait until Monday for treatment team clearance to approach them. The volume of most inpatient units is too small to justify a full‐time on‐site RA, so RAs are scheduled at times when access to newly admitted patients approved for recruitment is typically greatest (i.e., Monday to Tuesday). Yet the actual distribution of admissions of high‐risk patients varies widely, resulting in some patients being discharged before our RAs can approach them for informed consent. Given these challenges and the fact that some patients decline participation, we have so far been able to enroll and randomize only about 25% of the high‐risk patients identified by the ML prediction model. However, the study enrollment timeline was calibrated to account for these recruitment challenges, informed by our experience in prior CLASP trials.

One final challenge in 3C relates to staffing of CLASP Advisors. When recruitment has slowed, whether because of delays in onboarding sites or because the census of VHA psychiatric inpatient units dropped in 2023 for reasons that remain unexplained, it can be difficult to keep CLASP Advisor caseloads full. Conversely, during periods of higher recruitment, the challenge has been to distribute case assignments without overburdening clinician capacity. Reliance on per diem clinicians with flexible hours has minimized the impact of recruitment flow on CLASP Advisor workload. This staffing arrangement has the added benefit that we do not pay for clinician time when caseloads decrease and clinicians are not engaged in CLASP outreach or intervention delivery. As described above, we have also addressed this challenge by adjusting the enrollment threshold to increase patient flow into CLASP Advisor caseloads when recruitment has slowed.

9. DISCUSSION

3C was funded in July 2020. The COVID‐19 pandemic complicated study initiation and recruitment because the pandemic's demands on hospital beds led to a decrease in inpatient psychiatric admissions. As a result, recruitment did not begin until January 2022. We have currently enrolled roughly two‐thirds of the target sample. At the time of this report, the response rate for the 6‐month outcome assessment is excellent (82%). Based on current experience, we anticipate that enrollment will be completed by September 2024. Final data processing of the baseline data will begin at that time. Data processing of the outcomes data will begin once the 6‐month outcome assessments are completed. However, a lag of several months sometimes occurs in EHR records, so we will continue to check EHR outcomes data through 9 months after baseline. There is an even greater lag in accessing VHA death records. As a result, we will carry out initial analyses focusing only on the EHR and patient self‐report outcomes, with the death record data added once they become available.

The standard of care for discharge from psychiatric inpatient units is to develop an outpatient treatment plan for the patient while they are still in the hospital and then to reach out after discharge to confirm that the patient is following through in making an initial contact under the outpatient treatment plan (Nurjannah et al., 2014). The 3C trial is designed to provide guidance to VHA about the value of going beyond this basic plan to implement more proactive clinical outreach and follow‐up in the service of preventing SRBs in the segment of the patient population with the highest SRB risk. VA already has a high‐risk suicide prevention program, the Recovery Engagement and Coordination for Health – Veterans Enhanced Treatment (REACH VET) program (U.S. Department of Veterans Affairs, 2018), that each month uses a different ML model to target, for outreach and case management, the 0.1% of VHA users with the highest predicted imminent (30‐day) suicide risk. The ML model on which REACH VET is based found that approximately 2% of all VHA suicides occurred among this 0.1% of VHA users (Kessler et al., 2017). Given that about 1% of VHA users are hospitalized for a psychiatric disorder per year, the fraction of VHA users in REACH VET is equivalent to 10% of VHA psychiatric inpatients. However, approximately 6% of all VHA suicides occur among the 10% of VHA psychiatric inpatients at highest predicted risk in this model, approximately three times the concentration of risk captured by the REACH VET model. The REACH VET evaluation found that the program does not lead to a significant reduction in suicides, although it does lead to a modest reduction in nonfatal suicide attempts (McCarthy et al., 2021). The 3C evaluation will allow VHA to determine whether investment in high‐risk prevention might benefit from targeting recently discharged psychiatric inpatients, one segment of the patient population (although possibly not the only one) with a higher concentration of risk than the group identified by the population‐wide REACH VET model.

AUTHOR CONTRIBUTIONS

Lauren M. Weinstock: Conceptualization; funding acquisition; investigation; methodology; project administration; resources; software; supervision; validation; writing—original draft; writing—review and editing. Todd M. Bishop: Data curation; investigation; project administration; supervision; validation; writing—review and editing. Mark S. Bauer: Methodology; writing—review and editing. Jeffrey Benware: Project administration; resources; supervision; writing—review and editing. Robert M. Bossarte: Conceptualization; funding acquisition; investigation; methodology; writing—review and editing. John Bradley: Project administration; resources; supervision; writing—review and editing. Steven K. Dobscha: Conceptualization; investigation; methodology; writing—review and editing. Jessica Gibbs: Project administration; resources; visualization; writing—review and editing. Sarah M. Gildea: Data curation; software; writing—review and editing. Hannah Graves: Data curation; software; visualization; validation; writing—review and editing. Gretchen Haas: Writing—review and editing; project administration; resources; supervision. Samuel House: Writing—review and editing; project administration; resources; supervision. Chris J. Kennedy: Formal analysis; methodology; software; writing—review and editing. Sara J. Landes: Resources; supervision; writing—review and editing. Howard Liu: Data curation; formal analysis; visualization; writing—review and editing. Alex Luedtke: Formal analysis; methodology; writing—review and editing; supervision; software. Brian P. Marx: Conceptualization; investigation; funding acquisition; writing—review and editing. Aletha Miller: Project administration; resources; supervision; writing—review and editing. Matthew K. Nock: Writing—review and editing; conceptualization; funding acquisition; investigation. Richard R. Owen: Project administration; writing—review and editing; supervision; resources. Wilfred R. Pigeon: Writing—review and editing; funding acquisition; investigation; conceptualization. Nancy A. Sampson: Writing—review and editing; formal analysis; project administration; supervision. Alejandro Santiago‐Colon: Writing—review and editing; project administration; resources; supervision. Geetha Shivakumar: Writing—review and editing; resources; project administration; supervision. Snezana Urosevic: Writing—review and editing; project administration; resources; supervision. Ronald C. Kessler: Writing—review and editing; writing—original draft; supervision; project administration; conceptualization; funding acquisition; investigation; methodology.

CONFLICT OF INTEREST STATEMENT

In the past 3 years, Dr. Lauren M. Weinstock has received publication royalties from the Oxford University Press. Dr. Ronald C. Kessler has been a consultant over this time period for Cambridge Health Alliance, Canandaigua VA Medical Center, Child Mind Institute, Holmusk, Massachusetts General Hospital, Partners Healthcare, Inc., RallyPoint Networks, Inc., Sage Therapeutics and University of North Carolina. He has stock options in Cerebral Inc., Mirah, PYM (Prepare Your Mind), Roga Sciences, and Verisense Health. Dr. Matthew K. Nock receives publication royalties from Macmillan, Pearson, and UpToDate. He has been a paid consultant in the past 3 years for Apple, Microsoft, COMPASS Pathways, and Cambridge Health Alliance, and for legal cases regarding a death by suicide. He has stock options in Cerebral Inc. He is an unpaid scientific advisor for Empatica, Koko, and TalkLife. The remaining authors do not have any conflicts to declare.

ETHICS STATEMENT

The study protocol was approved by the Department of Veterans Affairs (VA) Central IRB (CIRB), which is further monitored by an independent Data Safety and Monitoring Board. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.

Supporting information

Supporting Information S1

MPR-33-e70003-s001.docx (521.3KB, docx)

ACKNOWLEDGMENTS

This work was supported by a grant from the Warren Alpert Foundation and a contract (36C24122P0883) from the Department of Veterans Affairs National Center for Posttraumatic Stress Disorder/VA Boston Healthcare System. Lauren M. Weinstock and Ronald C. Kessler are joint PIs. Sara J. Landes was supported in part by UL1 TR003107 awarded by the National Center for Advancing Translational Sciences of the National Institutes of Health and by QUE 20‐026, PII 18‐195, and EBP 22‐104 awarded by the Department of Veterans Affairs Quality Enhancement Research Initiative (QUERI). The funder had no role in the design or conduct of the study, collection, management, analysis, or interpretation of the data, preparation, review, or approval of the manuscript, and decision to submit the manuscript for publication.

Weinstock, L. M. , Bishop, T. M. , Bauer, M. S. , Benware, J. , Bossarte, R. M. , Bradley, J. , Dobscha, S. K. , Gibbs, J. , Gildea, S. M. , Graves, H. , Haas, G. , House, S. , Kennedy, C. J. , Landes, S. J. , Liu, H. , Luedtke, A. , Marx, B. P. , Miller, A. , Nock, M. K. , … Kessler, R. C. (2024). Design of a multicenter randomized controlled trial of a post‐discharge suicide prevention intervention for high‐risk psychiatric inpatients: The Veterans Coordinated Community Care Study. International Journal of Methods in Psychiatric Research, e70003. 10.1002/mpr.70003

DATA AVAILABILITY STATEMENT

De‐identified study data will be made available through the Inter‐university Consortium for Political and Social Research (ICPSR) after the study has ended.

REFERENCES

  1. Ahmedani, B. K. , Westphal, J. , Autio, K. , Elsiss, F. , Peterson, E. L. , Beck, A. , Waitzfelder, B. E. , Rossom, R. C. , Owen‐Smith, A. A. , Lynch, F. , Lu, C. Y. , Frank, C. , Prabhakar, D. , Braciszewski, J. M. , Miller‐Matero, L. R. , Yeh, H. H. , Hu, Y. , Doshi, R. , Waring, S. C. , & Simon, G. E. (2019). Variation in patterns of health care before suicide: A population case‐control study. Preventive Medicine, 127, 105796. 10.1016/j.ypmed.2019.105796 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Arsenault‐Lapierre, G. , Kim, C. , & Turecki, G. (2004). Psychiatric diagnoses in 3275 suicides: A meta‐analysis. BMC Psychiatry, 4(1), 37. 10.1186/1471-244x-4-37 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Athey, S. , Tibshirani, R. , & Wager, S. (2019). Generalized random forests. Annals of Statistics, 47(2), 1179–1203. 10.1214/18-AOS1709 [DOI] [Google Scholar]
  4. Athey, S. , & Wager, S. (2019). Estimating treatment effects with causal forests: An application. Observational Studies, 5(2), 37–51. 10.1353/obs.2019.0001 [DOI] [Google Scholar]
  5. Azcárate‐Jiménez, L. , López‐Goñi, J. J. , Goñi‐Sarriés, A. , Montes‐Reula, L. , Portilla‐Fernández, A. , & Elorza‐Pardo, R. (2019). Repeated suicide attempts: A follow‐up study. Actas Españolas de Psiquiatría, 47(4), 127–136. [PubMed] [Google Scholar]
  6. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. 10.1023/A:1010933404324 [DOI] [Google Scholar]
  7. Carpenito, T. , & Manjourides, J. (2022). MISL: Multiple imputation by super learning. Statistical Methods in Medical Research, 31(10), 1904–1915. 10.1177/09622802221104238 [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. CDC’s WISQARS™. (2020). 10 leading causes of death, United States. Retrieved from https://wisqars.cdc.gov/data/lcd/home
  9. Chow, S.‐C. , Shao, J. , Wang, H. , & Lokhnygina, Y. (2017). Sample size calculations in clinical research (3rd ed.). Chapman and Hall/CRC. [Google Scholar]
  10. Chung, D. T. , Ryan, C. J. , Hadzi‐Pavlovic, D. , Singh, S. P. , Stanton, C. , & Large, M. M. (2017). Suicide rates after discharge from psychiatric facilities: A systematic review and meta‐analysis. JAMA Psychiatry, 74(7), 694–702. 10.1001/jamapsychiatry.2017.1044 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Coyle, J. R. (2021). Tmle3: The extensible TMLE framework. Retrieved from https://tlverse.org/tmle3/
  12. Cwik, J. C. , Siegmann, P. , Willutzki, U. , Nyhuis, P. , Wolter, M. , Forkmann, T. , Glaesmer, H. , & Teismann, T. (2017). Brief reasons for living inventory: A psychometric investigation. BMC Psychiatry, 17(1), 358. 10.1186/s12888-017-1521-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Dube, P. , Kurt, K. , Bair, M. J. , Theobald, D. , & Williams, L. S. (2010). The P4 screener: Evaluation of a brief measure for assessing potential suicide risk in 2 randomized effectiveness trials of primary care and oncology patients. Primary Care Companion to the Journal of Clinical Psychiatry, 12(6), 27151. 10.4088/PCC.10m00978blu [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Dunn, G. , & Bentall, R. (2007). Modelling treatment‐effect heterogeneity in randomized controlled trials of complex interventions (psychological treatments). Statistics in Medicine, 26(26), 4719–4745. 10.1002/sim.2891 [DOI] [PubMed] [Google Scholar]
  15. Favril, L. , Yu, R. , Uyar, A. , Sharpe, M. , & Fazel, S. (2022). Risk factors for suicide in adults: Systematic review and meta‐analysis of psychological autopsy studies. Evidence‐Based Mental Health, 25(4), 148–155. 10.1136/ebmental-2022-300549 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Fitzgerald, M. , Saville, B. R. , & Lewis, R. J. (2015). Decision curve analysis. JAMA, 313(4), 409–410. 10.1001/jama.2015.37 [DOI] [PubMed] [Google Scholar]
  17. Fox, K. A. , Poole‐Wilson, P. , Clayton, T. C. , Henderson, R. A. , Shaw, T. R. , Wheatley, D. J. , Knight, R. , & Pocock, S. (2005). 5‐year outcome of an interventional strategy in non‐ST‐elevation acute coronary syndrome: The British Heart Foundation RITA 3 randomised trial. Lancet, 366(9489), 914–920. 10.1016/s0140-6736(05)67222-4 [DOI] [PubMed] [Google Scholar]
  18. Fox, K. R. , Harris, J. A. , Wang, S. B. , Millner, A. J. , Deming, C. A. , & Nock, M. K. (2020). Self‐injurious thoughts and behaviors interview‐revised: Development, reliability, and validity. Psychological Assessment, 32(7), 677–689. 10.1037/pas0000819 [DOI] [PubMed] [Google Scholar]
  19. Franklin, J. C. , Ribeiro, J. D. , Fox, K. R. , Bentley, K. H. , Kleiman, E. M. , Huang, X. , Musacchio, K. M. , Jaroszewski, A. C. , Chang, B. P. , & Nock, M. K. (2017). Risk factors for suicidal thoughts and behaviors: A meta‐analysis of 50 years of research. Psychological Bulletin, 143(2), 187–232. 10.1037/bul0000084 [DOI] [PubMed] [Google Scholar]
  20. Groenwold, R. H. , Moons, K. G. , & Vandenbroucke, J. P. (2014). Randomized trials with missing outcome data: How to analyze and what to report. Canadian Medical Association Journal, 186(15), 1153–1157. 10.1503/cmaj.131353 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Gruber, S. , & van der Laan, M. J. (2012). tmle: An R package for targeted maximum likelihood estimation. Journal of Statistical Software, 51(13), 1–35. 10.18637/jss.v051.i13 [DOI] [Google Scholar]
  22. Holliday, R. , Borges, L. M. , Stearns‐Yoder, K. A. , Hoffberg, A. S. , Brenner, L. A. , & Monteith, L. L. (2020). Posttraumatic stress disorder, suicidal ideation, and suicidal self‐directed violence among U.S. military personnel and veterans: A systematic review of the literature from 2010 to 2018. Frontiers in Psychology, 11, 1998. 10.3389/fpsyg.2020.01998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Kent, D. M. , Nelson, J. , Dahabreh, I. J. , Rothwell, P. M. , Altman, D. G. , & Hayward, R. A. (2016). Risk and treatment effect heterogeneity: Re‐analysis of individual participant data from 32 large clinical trials. International Journal of Epidemiology, 45(6), 2075–2088. 10.1093/ije/dyw118 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Kessler, R. C. , Bauer, M. S. , Bishop, T. M. , Bossarte, R. M. , Castro, V. M. , Demler, O. V. , Gildea, S. M. , Goulet, J. L. , King, A. J. , Kennedy, C. J. , Landes, S. J. , Liu, H. , Luedtke, A. , Mair, P. , Marx, B. P. , Nock, M. K. , Petukhova, M. V. , Pigeon, W. R. , Sampson, N. A. , … Weinstock, L. M. (2023). Evaluation of a model to target high‐risk psychiatric inpatients for an intensive postdischarge suicide prevention intervention. JAMA Psychiatry, 80(3), 230–240. 10.1001/jamapsychiatry.2022.4634 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Kessler, R. C. , Bauer, M. S. , Bishop, T. M. , Demler, O. V. , Dobscha, S. K. , Gildea, S. M. , Goulet, J. L. , Karras, E. , Kreyenbuhl, J. , Landes, S. J. , Liu, H. , Luedtke, A. R. , Mair, P. , McAuliffe, W. H. B. , Nock, M. , Petukhova, M. , Pigeon, W. R. , Sampson, N. A. , Smoller, J. W. , … Bossarte, R. M. (2020). Using administrative data to predict suicide after psychiatric hospitalization in the Veterans Health Administration System. Frontiers in Psychiatry, 11, 390. 10.3389/fpsyt.2020.00390 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Kessler, R. C. , Chalker, S. A. , Luedtke, A. R. , Sadikova, E. , & Jobes, D. A. (2020). A preliminary precision treatment rule for remission of suicide ideation. Suicide and Life‐Threatening Behavior, 50(2), 558–572. 10.1111/sltb.12609 [DOI] [PubMed] [Google Scholar]
  27. Kessler, R. C. , Hwang, I. , Hoffmire, C. A. , McCarthy, J. F. , Petukhova, M. V. , Rosellini, A. J. , Sampson, N. A. , Schneider, A. L. , Bradley, P. A. , Katz, I. R. , Thompson, C. , & Bossarte, R. M. (2017). Developing a practical suicide risk prediction model for targeting high‐risk patients in the Veterans Health Administration. International Journal of Methods in Psychiatric Research, 26(3), e1575. 10.1002/mpr.1575 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Kessler, R. C. , Warner, C. H. , Ivany, C. , Petukhova, M. V. , Rose, S. , Bromet, E. J. , Brown, M., III , Cai, T. , Colpe, L. J. , Cox, K. L. , Fullerton, C. S. , Gilman, S. E. , Gruber, M. J. , Heeringa, S. G. , Lewandowski‐Romps, L. , Li, J. , Millikan‐Bell, A. M. , Naifeh, J. A. , Nock, M. K. , … Ursano, R. J. (2015). Predicting suicides after psychiatric hospitalization in US Army soldiers: The Army Study To Assess Risk and Resilience in Servicemembers (Army STARRS). JAMA Psychiatry, 72(1), 49–57. 10.1001/jamapsychiatry.2014.1754 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Klonsky, E. D. , May, A. M. , & Saffer, B. Y. (2016). Suicide, suicide attempts, and suicidal ideation. Annual Review of Clinical Psychology, 12(1), 307–330. 10.1146/annurev-clinpsy-021815-093204 [DOI] [PubMed] [Google Scholar]
  30. Kovalchik, S. A. , Varadhan, R. , & Weiss, C. O. (2013). Assessing heterogeneity of treatment effect in a clinical trial with the proportional interactions model. Statistics in Medicine, 32(28), 4906–4923. 10.1002/sim.5881 [DOI] [PubMed] [Google Scholar]
  31. Krause, T. J. , Lederer, A. , Sauer, M. , Schneider, J. , Sauer, C. , Jabs, B. , Etzersdorfer, E. , Genz, A. , Bauer, M. , Richter, S. , Rujescu, D. , & Lewitzka, U. (2020). Suicide risk after psychiatric discharge: Study protocol of a naturalistic, long‐term, prospective observational study. Pilot and Feasibility Studies, 6(1), 145. 10.1186/s40814-020-00685-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Little, R. J. , & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). John Wiley & Sons. 10.1002/9781119482260 [DOI] [Google Scholar]
  33. Little, R. J. A. (1994). A class of pattern‐mixture models for normal incomplete data. Biometrika, 81(3), 471–483. 10.1093/biomet/81.3.471 [DOI] [Google Scholar]
  34. Luedtke, A. , Sadikova, E. , & Kessler, R. C. (2019). Sample size requirements for multivariate models to predict between‐patient differences in best treatments of major depressive disorder. Clinical Psychological Science, 7(3), 445–461. 10.1177/21677026188154 [DOI] [Google Scholar]
  35. Matarazzo, B. B. , Farro, S. A. , Billera, M. , Forster, J. E. , Kemp, J. E. , & Brenner, L. A. (2017). Connecting veterans at risk for suicide to care through the HOME program. Suicide and Life‐Threatening Behavior, 47(6), 709–717. 10.1111/sltb.12334 [DOI] [PubMed] [Google Scholar]
  36. McCarthy, J. F. , Cooper, S. A. , Dent, K. R. , Eagan, A. E. , Matarazzo, B. B. , Hannemann, C. M. , Reger, M. A. , Landes, S. J. , Trafton, J. A. , Schoenbaum, M. , & Katz, I. R. (2021). Evaluation of the recovery engagement and coordination for health‐veterans enhanced treatment suicide risk modeling clinical program in the Veterans Health Administration. JAMA Network Open, 4(10), e2129900. 10.1001/jamanetworkopen.2021.29900 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Miller, I. , Gaudiano, B. , & Weinstock, L. (2022). The Coping Long term with Active Suicide Program (CLASP): A multi‐modal intervention for suicide prevention. Oxford University Press. 10.1093/med-psych/9780190095260.001.0001 [DOI] [Google Scholar]
  38. Miller, I. W. , Camargo, C. A., Jr. , Arias, S. A. , Sullivan, A. F. , Allen, M. H. , Goldstein, A. B. , Manton, A. P. , Espinola, J. A. , Jones, R. , Hasegawa, K. , & Boudreaux, E. D. (2017). Suicide prevention in an emergency department population: The ED‐SAFE study. JAMA Psychiatry, 74(6), 563–570. 10.1001/jamapsychiatry.2017.0678 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Miller, I. W. , Gaudiano, B. A. , & Weinstock, L. M. (2016). The coping long term with active suicide program: Description and pilot data. Suicide and Life‐Threatening Behavior, 46(6), 752–761. 10.1111/sltb.12247 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Nie, X. , & Wager, S. (2021). Quasi‐oracle estimation of heterogeneous treatment effects. Biometrika, 108(2), 299–319. 10.1093/biomet/asaa076 [DOI] [Google Scholar]
  41. Nielsen, S. D. , Christensen, R. H. B. , Madsen, T. , Karstoft, K. I. , Clemmensen, L. , & Benros, M. E. (2023). Prediction models of suicide and non‐fatal suicide attempt after discharge from a psychiatric inpatient stay: A machine learning approach on nationwide Danish registers. Acta Psychiatrica Scandinavica, 148(6), 525–537. 10.1111/acps.13629 [DOI] [PubMed] [Google Scholar]
  42. Nock, M. K. , Borges, G. , Bromet, E. J. , Cha, C. B. , Kessler, R. C. , & Lee, S. (2008). Suicide and suicidal behavior. Epidemiologic Reviews, 30(1), 133–154. 10.1093/epirev/mxn002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Nock, M. K. , Deming, C. A. , Fullerton, C. S. , Gilman, S. E. , Goldenberg, M. , Kessler, R. C. , McCarroll, J. E. , McLaughlin, K. A. , Peterson, C. , Schoenbaum, M. , Stanley, B. , & Ursano, R. J. (2013). Suicide among soldiers: A review of psychosocial risk and protective factors. Psychiatry, 76(2), 97–125. 10.1521/psyc.2013.76.2.97 [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Nurjannah, I. , Mills, J. , Usher, K. , & Park, T. (2014). Discharge planning in mental health care: An integrative review of the literature. Journal of Clinical Nursing, 23(9–10), 1175–1185. 10.1111/jocn.12297 [DOI] [PubMed] [Google Scholar]
  45. Osman, A. , Bagge, C. L. , Gutierrez, P. M. , Konick, L. C. , Kopper, B. A. , & Barrios, F. X. (2001). The Suicidal Behaviors Questionnaire‐Revised (SBQ‐R): Validation with clinical and nonclinical samples. Assessment, 8(4), 443–454. 10.1177/107319110100800409 [DOI] [PubMed] [Google Scholar]
  46. Posner, K. , Brown, G. K. , Stanley, B. , Brent, D. A. , Yershova, K. V. , Oquendo, M. A. , Currier, G. W. , Melvin, G. A. , Greenhill, L. , Shen, S. , & Mann, J. J. (2011). The Columbia‐Suicide Severity Rating Scale: Initial validity and internal consistency findings from three multisite studies with adolescents and adults. American Journal of Psychiatry, 168(12), 1266–1277. 10.1176/appi.ajp.2011.10111704 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Primack, J. M. , Bozzay, M. L. , Gaudiano, B. A. , Weinstock, L. M. , Armey, M. F. , Brick, L. A. , Holman, C. S. , & Miller, I. W. (2022). A randomized controlled trial of the veterans Coping Long Term With Active Suicide (CLASP) Program. Psychiatric Annals, 52(5), 199–207. 10.3928/00485713-20220512-01 [DOI] [Google Scholar]
  48. Rounsaville, B. J. , Carroll, K. M. , & Onken, L. S. (2001). A stage model of behavioral therapies research: Getting started and moving on from stage I. Clinical Psychology: Science and Practice, 8(2), 133–142. 10.1093/clipsy.8.2.133 [DOI] [Google Scholar]
  49. Siddaway, A. P. , Wood, A. M. , O'Carroll, R. E. , & O'Connor, R. C. (2019). Characterizing self‐injurious cognitions: Development and validation of the Suicide Attempt Beliefs Scale (SABS) and the Nonsuicidal Self‐Injury Beliefs Scale (NSIBS). Psychological Assessment, 31(5), 592–608. 10.1037/pas0000684 [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Stanley, B. , Brown, G. K. , Brenner, L. A. , Galfalvy, H. C. , Currier, G. W. , Knox, K. L. , Chaudhury, S. R. , Bush, A. L. , & Green, K. L. (2018). Comparison of the safety planning intervention with follow‐up vs usual care of suicidal patients treated in the emergency department. JAMA Psychiatry, 75(9), 894–900. 10.1001/jamapsychiatry.2018.1776 [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Stanley, B. , Green, K. L. , Ghahramanlou‐Holloway, M. , Brenner, L. A. , & Brown, G. K. (2017). The construct and measurement of suicide‐related coping. Psychiatry Research, 258, 189–193. 10.1016/j.psychres.2017.08.008 [DOI] [PubMed] [Google Scholar]
  52. Teti, G. L. , Rebok, F. , Grendas, L. N. , Rodante, D. , Fógola, A. , & Daray, F. M. (2014). Patients hospitalized for suicidal ideation and suicide attempt in a Mental Health Hospital: Clinicodemographical features and 6‐month follow‐up. Vertex, 25(115), 203–212. [PubMed] [Google Scholar]
  53. Tibshirani, J. , Athey, S. , Friedberg, R. , Hadad, V. , Hirshberg, D. , Miner, L. , Sverdrup, E. , Wager, S. , & Wright, M. (2022). Package ‘grf’: Generalized random forests. Retrieved from https://cran.r-project.org/web/packages/grf/grf.pdf
  54. U.S. Department of Veterans Affairs. (2018). REACH VET: Recovery engagement and coordination for health – veterans enhanced treatment. Retrieved from https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/3527-notes.pdf
  55. U.S. National Library of Medicine. (2022). Veterans Coordinated Community Care (3C) Study (3C). Retrieved from https://www.clinicaltrials.gov/ct2/show/NCT05272176?term=3C&draw=2&rank=2
  56. van der Laan, M. J. , Polley, E. C. , & Hubbard, A. E. (2007). Super learner. Statistical Applications in Genetics and Molecular Biology, 6(1). 10.2202/1544-6115.1309 [DOI] [PubMed] [Google Scholar]
  57. VanderWeele, T. J. , Luedtke, A. R. , van der Laan, M. J. , & Kessler, R. C. (2019). Selecting optimal subgroups for treatment using many covariates. Epidemiology, 30(3), 334–341. 10.1097/ede.0000000000000991 [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. van Klaveren, D. , Balan, T. A. , Steyerberg, E. W. , & Kent, D. M. (2019). Models with interactions overestimated heterogeneity of treatment effects and were prone to treatment mistargeting. Journal of Clinical Epidemiology, 114, 72–83. 10.1016/j.jclinepi.2019.05.029 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Van Orden, K. A. , Witte, T. K. , Gordon, K. H. , Bender, T. W. , & Joiner, T. E., Jr. (2008). Suicidal desire and the capability for suicide: Tests of the interpersonal‐psychological theory of suicidal behavior among adults. Journal of Consulting and Clinical Psychology, 76(1), 72–83. 10.1037/0022-006x.76.1.72 [DOI] [PubMed] [Google Scholar]
