Author manuscript; available in PMC: 2025 May 1.
Published in final edited form as: Contemp Clin Trials. 2024 Mar 12;140:107492. doi: 10.1016/j.cct.2024.107492

Safety Planning Intervention and Follow-up: A Telehealth Service Model for Suicidal Individuals in Emergency Department Settings: Study Design and Protocol

Gregory K Brown a, Courtney Benjamin Wolk a, Kelly L Green a, Freya Nezir a, Danielle L Mowery b, Robert Gallop b,c, Megan E Reilly a, Barbara Stanley d, David S Mandell a, Maria A Oquendo a, Shari Jager-Hyman a
PMCID: PMC11071175  NIHMSID: NIHMS1987488  PMID: 38484793

Abstract

Background:

The Safety Planning Intervention with follow-up services (SPI+) is a promising suicide prevention intervention, yet many Emergency Departments (EDs) lack the resources for adequate implementation. Comprehensive strategies addressing structural and organizational barriers are needed to optimize SPI+ implementation and scale-up. This protocol describes a test of one strategy in which ED staff connect at-risk patients to expert clinicians from a Suicide Prevention Consultation Center (SPCC) via telehealth.

Method:

This stepped wedge, cluster-randomized trial compares the effectiveness, implementation, cost, and cost offsets of SPI+ delivered by SPCC clinicians versus ED-based clinicians (enhanced usual care; EUC). Eight EDs will start with EUC and cross over to the SPCC phase. Blocks of two EDs will be randomly assigned to start dates 3 months apart. Approximately 13,320 adults discharged following a suicide-related ED visit will be included; EUC and SPCC samples will comprise patients from before and after SPCC crossover, respectively. Effectiveness data sources are electronic health records, administrative claims, and the National Death Index. Primary effectiveness outcomes are presence of suicidal behavior and number/type of mental healthcare visits; secondary outcomes include number/type of suicide-related acute services 6 months post-discharge. We will use the same data sources to assess cost offsets to gauge SPCC scalability and sustainability. We will examine preliminary implementation outcomes (reach, adoption, fidelity, acceptability, and feasibility) through patient, clinician, and health-system leader interviews and surveys.

Conclusion:

If the SPCC demonstrates clinical effectiveness and health system cost reduction, it may be a scalable model for evidence-based suicide prevention in the ED.

1. Introduction

The US suicide rate has increased by ~35% over the past two decades, with 47,646 suicide deaths in 2021 [1]. Emergency departments (EDs) are often the primary, and sometimes only, point of healthcare contact for individuals at risk for suicide [2-4], especially for patients from marginalized or minoritized populations [4,5]. The risk of suicide is exceptionally high after ED discharge [6-13], with the greatest risk among those who do not receive follow-up care within a year [14].

Brief, evidence-based interventions during or following ED visits can reduce suicide risk, decrease hospitalizations, and increase outpatient engagement [15]. The Safety Planning Intervention (SPI) involves developing a written, step-by-step plan to mitigate future risk [16]. Studies conducted in the Department of Veterans Affairs' (VA) EDs found that SPI plus follow-up calls (SPI+) resulted in lower rates of suicidal behavior and greater outpatient mental health treatment attendance over 6 months relative to the control condition [17]. SPI+ also increased clinician and staff satisfaction, comfort discharging Veterans with suicidal behaviors, and patient monitoring post-discharge [18].

Research evaluating SPI+’s effectiveness in non-VA EDs is limited [19]. Organization- and clinician-level factors, such as lack of health system resources, limited precedent for post-discharge contact, and insufficient time for SPI+ training and implementation [18,20], hamper implementing SPI+ in community EDs. Telehealth may be useful in addressing these barriers by task shifting SPI+ to experts external to the ED. ED-based telepsychiatry remotely connects ED patients with mental health clinicians (e.g., social workers or professional counselors) for assessment and intervention. For example, patients enrolled in a South Carolina state-wide ED telepsychiatry program were >5 times more likely to attend outpatient follow-up and had half as many inpatient admissions [21].

We developed a telehealth model for ED staff to connect patients at risk for suicide to clinicians in an external Suicide Prevention Consultation Center (SPCC). The SPCC builds upon learnings from the University of Pennsylvania (Penn) Integrated Care Resource Center, which has increased access to mental health care for primary care patients [22]. We describe our pragmatic trial comparing SPI+ delivered by SPCC clinicians to ED clinicians offered SPI+ training (enhanced usual care; EUC).

1.1. Study Aims

This project has three aims: (1) compare the effectiveness of SPCC and EUC in reducing suicide attempts and deaths, increasing engagement in mental health care, and reducing return suicide-related ED visits and hospitalizations over 6 months; (2) examine preliminary implementation outcomes, including the number of patients receiving safety plans and follow-up calls (reach/penetration), fidelity to SPI+, and whether SPCC is acceptable to and feasible for ED clinicians, health system leaders, and patients; and (3) conduct a cost and cost-offset analysis of SPCC and EUC.

We also explore: (1) whether greater engagement in outpatient mental health services mediates SPI+'s effectiveness; (2) whether stronger clinician intentions, norms, and self-efficacy mediate the effects of SPI+ implementation on implementation outcomes; (3) differences in inpatient admission rates across EUC and SPCC; and (4) SPCC sustainment.

2. Method

2.1. Ethics

The Penn Institutional Review Board (IRB) approved all procedures, with reliance agreements in place for EDs with separate IRBs. The study’s data safety and monitoring board includes implementation science, suicide prevention, and data science experts. A waiver of informed consent is in place for secondary data collection.

2.2. Intervention

SPI+ is a brief intervention to decrease short-term suicide risk. It includes a written safety plan to prevent or de-escalate suicidal crises [16] consisting of six steps: (1) warning signs of suicidal crises; (2) internal coping/distracting strategies; (3) people and social settings that provide distraction; (4) people who can lend support; (5) professionals and agencies; and (6) methods to reduce access to lethal means. SPI+ also includes structured follow-up calls starting within 72 hours of discharge comprising: suicide risk assessment, reviewing and revising the safety plan, and facilitating treatment engagement. Weekly outreach continues for up to one month post-discharge, the period of highest risk. Contacts will be discontinued earlier if the patient engages in mental health care or declines to be contacted, per prior SPI+ studies [17].

2.3. Conceptual Frameworks

Two frameworks guide study activities: the Exploration, Preparation, Implementation, and Sustainment (EPIS) model [23] and INSPIRE Center’s Conceptual model [24]. EPIS is a multilevel ecological framework that describes factors affecting implementation during four implementation stages: Exploration, Adoption/Preparation, Implementation, and Sustainment. During Exploration, organizations, researchers, or other partners identify the best evidence-based practice (EBP); in this case, we identified SPI+. Preparation objectives include building on our previous work [18,25,26] identifying and addressing implementation barriers and facilitators in collaboration with our ED partners. During Implementation, EDs can refer patients to the SPCC. During Sustainment, SPCC structure, process, and supports will continue. Although SPCC clinicians will be supported initially by grant funding during the study, we aim to develop a sustainable model by billing for services that will continue post-study. Initially, we will staff the SPCC service with four clinicians to cover daytime and evening hours and we plan to hire additional clinicians to provide 24-hour coverage.

Our implementation constructs are guided by INSPIRE’s Conceptual model [24]. This model integrates contextual determinants of implementation (organizational characteristics, implementation climate and leadership) and behavior change theory, including clinician intentions, the most proximal predictor of behavior change. Determinants of intentions include attitudes (perceived advantages and disadvantages of behavior), injunctive norms (perception that important others, like supervisors, expect the behavior), descriptive norms (perception that relevant others engage in the behavior), and self-efficacy (perception that one has the necessary skills) [27-29]. Organizational characteristics, like staffing and workflows, and implementation climate and leadership (extent to which employees think that leadership expects, supports, and rewards implementation), in turn, may affect the beliefs that underlie the determinants of intentions [30].

2.4. Study Design

This pragmatic trial uses a stepped wedge, cluster randomized design [31] to compare the effectiveness of EUC vs. SPCC on clinical, implementation, and cost-related outcomes in eight Penn EDs. This design is commonly used when randomization by patient or parallel cluster is not feasible [32]. Of the eight EDs, six are general medical EDs with a psychiatric consultation liaison service and two are psychiatric EDs. Five EDs are in an urban setting in Philadelphia, Pennsylvania, and three EDs are in suburban/rural settings. The proportion of the ED population who identify as a racial minority ranges from 26% to 86% across the eight EDs. All participating EDs use the Epic electronic health record system (EHR), which facilitates data collection for our effectiveness and cost outcomes. All EDs will begin with EUC and transition to SPCC during the trial with randomly assigned staggered start dates. Prior to the first cohort’s SPCC transition, all SPCC clinicians will participate in comprehensive training (see Appendix).

2.4.1. Study Phases

Each site will progress through three phases: Baseline, Preparation for implementation, and Implementation. During the Baseline phase (months 0-12), ED clinicians are asked to implement SPI+ as usual care, consistent with The Joint Commission’s recommendations for suicide-risk assessment and intervention in acute-care settings [33]. We will provide publicly available educational materials, including the Columbia-Suicide Severity Rating Scale (C-SSRS, screening version) [34], used at each site, and SPI+. In EUC, we will provide EHR template instructions for SPI+ delivery developed in partnership with the EDs, and an EHR template and instructions for providing and documenting follow-up services. However, ED providers will not receive expert consultation or ongoing supervision.

The three months prior to the Implementation phase will comprise the Preparation phase. During Preparation, we optimize clinical and SPCC referral workflows with each site. Possible workflow components include EHR-generated referrals, texts or phone calls from ED staff to the SPCC, warm handoff to the SPCC via a tablet, or a combination of these methods. To refine procedures on a short timeline for each site, we will use rapid-cycle prototyping [35] to tailor operational pathways to increase the likelihood of successful, sustainable implementation. SPCC clinicians will document clinical encounters in the EHR. ED clinicians will remain responsible for treatment planning and disposition.

During Implementation, ED clinicians can refer patients to the SPCC using established workflows, which include confirming patients’ interest in SPCC services. Sites will stay in the active Implementation phase until the end of the trial. The final 6 months will include sustainability preparation, including working with sites to develop financial models for continuing SPCC services. SPCC clinicians will participate in weekly consultation with an SPI+ expert (SJH) from the onset of the first cohort’s Implementation phase through study end (see Appendix).

2.4.2. Randomization

Pairs of ED sites will be randomly assigned to one of four Implementation start dates three months apart. Sites will be stratified by high and low patient volume using a median split of the total number of discharged suicidal patients seen in the year prior to study start. Each high patient-volume site will be randomly paired with a low patient-volume site. The first cohort’s crossover to the Implementation phase will occur approximately 12 months after the Baseline start.
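For illustration, the stratified pairing and staggered start-date assignment could be sketched as follows (site labels and volumes are hypothetical; the study statistician will perform the actual randomization):

```python
import random

def assign_start_dates(site_volumes, seed=2024):
    """Stratified stepped-wedge assignment: median-split sites into high/low
    volume, pair one high- with one low-volume site, then randomize the
    pairs to staggered crossover waves 3 months apart."""
    rng = random.Random(seed)
    # Median split on prior-year volume of discharged suicidal patients
    ranked = sorted(site_volumes, key=site_volumes.get)
    low, high = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]
    rng.shuffle(low)
    rng.shuffle(high)
    pairs = list(zip(high, low))   # each pair: one high-, one low-volume site
    rng.shuffle(pairs)             # random pair order determines wave assignment
    # Wave k crosses over at month 12 + 3k (first crossover ~12 months in)
    return {site: 12 + 3 * k for k, pair in enumerate(pairs) for site in pair}

# Hypothetical prior-year volumes for eight EDs
volumes = {"ED_A": 310, "ED_B": 95, "ED_C": 480, "ED_D": 120,
           "ED_E": 260, "ED_F": 75, "ED_G": 150, "ED_H": 400}
schedule = assign_start_dates(volumes)
```

Each crossover month then hosts exactly one high- and one low-volume site, matching the stratification described above.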

2.5. Participants

The study sample will comprise patients with a suicide-related ED visit, emergency medicine and mental health clinicians (“ED clinicians”), and health-system leaders.

2.5.1. Patients

Patients who meet the following criteria ascertained via the EHR will be eligible for Aims 1 and 3 of the study: (1) ED visit with a suicide-related event (i.e., suicidal thoughts or behaviors); (2) 18 years of age or older; and (3) not transferred or admitted to an inpatient hospital. We expect to include about 13,320 patients over approximately 4 years for Aim 1. We will interview 12 patients who complete SPI+ with an SPCC clinician for Aim 2.

2.5.2. ED Clinicians and Leaders

To evaluate implementation outcomes (Aim 2), we will recruit 50 ED clinicians for quantitative data collection and conduct brief interviews with 24 clinicians and 8 leaders recruited evenly across sites.

2.6. Study Outcomes and Data Sources

Our outcome evaluation is guided by the RE-AIM framework [36], which comprises five components considered essential for translating research to practice: (1) Reach to target population (proportion of eligible ED patients who receive a safety plan and follow-up calls when clinically indicated, assessed post-implementation), (2) intervention Effectiveness (reduction in suicidal behaviors, hospitalizations, and ED visits; engagement in outpatient care over 6 months post-ED visit), (3) intervention Adoption by organization and its agents (number and proportion of clinicians with eligible patients who refer to SPCC, assessed post-implementation), (4) intervention Implementation (fidelity to SPI+ assessed at post-implementation; surveys and interviews with key stakeholders during and after implementation; implementation cost data pre- to post-implementation), and (5) Maintenance or extent to which an intervention is sustained over time (surveys and administrative data 12 months post-SPCC implementation initiation). Primary outcomes are reductions in suicidal behavior, engagement in outpatient care, reach/penetration, fidelity, and cost. Data will be collected from five sources: (1) EHR, (2) Medicaid claims, (3) National Death Index (NDI), (4) ED clinician surveys, and (5) patient, leader, and clinician interviews. Data sources and outcomes are summarized in Table 1.

Table 1.

Study outcomes and data sources across aims.

Aim Name Type Time Frame Source Brief Description
1 Suicide behavior composite Primary 6-month follow up Medical record, claims data, and NDI Presence (yes/no) of suicide attempts or deaths using standardized definitions of self-directed violence and ICD codes
1 Outpatient treatment engagement Primary 6-month follow up Medical record and claims data Number and type of mental healthcare visits post discharge from the index ED visit
1 Suicide-related ED visits and psychiatric hospitalizations Secondary 6-month follow up Medical record and claims data Number and type of ED visits and/or inpatient psychiatric admissions for suicidal ideation/behavior
2 Fidelity Primary Post implementation Medical record For written safety plans, fidelity will be measured by the SPISA; for follow-up calls, fidelity will be determined by the number and timing of calls to patients after receiving a safety plan and the presence/absence of each follow-up component per the SPISA-F.
2 Reach/penetration Primary Post implementation Medical record Number of patients with a completed safety plan documented in the medical record / number of patients identified as at risk for suicide by the ED clinician; number of patients who receive 2 or more telephone follow up attempts / number of patients who received a safety plan
2 Adoption Secondary Post implementation Medical record Number and proportion of clinicians with eligible patients who refer the patient to the SPCC for SPI+
2 Utilization of screening of suicide risk among ED patients Secondary Post implementation Medical record C-SSRS or equivalent evidence-based measure completed during ED visit
2 Index ED visit inpatient admission disposition Secondary Post implementation Medical record Number of patients admitted for inpatient hospitalization / number of patients identified as at risk for suicide
2 Feasibility Secondary Mid to post implementation Clinician interviews and survey; patient and health system leader interviews Individual semi-structured qualitative interviews will be conducted with a randomly selected sample of clinicians, leaders, and patients who engage in SPI+ and with the SPCC to elicit views and perspectives about the feasibility in this context. We also will use the FIM, a reliable and valid 4-item tool to assess clinician and leader perceptions of feasibility.
2 Acceptability Secondary Mid to post implementation Clinician interviews and survey; patient and health system leader interviews Individual semi-structured qualitative interviews will be conducted with a randomly selected sample of clinicians, leaders, and patients who engage in SPI+ and with the SPCC to elicit views and perspectives about the acceptability in this context. We also will use the AIM, a reliable and valid 4-item tool to assess clinician and leader perceptions of acceptability.
3 Cost Primary Pre to post implementation Medical record and claims data Time-directed activities-based costing, which assigns resource use and costs to specific EUC and SPCC activities to generate costs at the patient-month level.

Abbreviations: Acceptability of Intervention Measure (AIM); Columbia Suicide Severity Rating Scale (C-SSRS); Emergency Department (ED); Enhanced Usual Care (EUC); Feasibility of Intervention Measure (FIM); International Classification of Diseases (ICD); National Death Index (NDI); Suicide Prevention Consultation Center (SPCC); Safety Planning Intervention plus follow up services (SPI+); Safety Planning Intervention Scoring Algorithm (SPISA); SPISA Follow Up (SPISA-F).

2.6.1. Effectiveness

For Aim 1, we will gather patient outcomes at 6-month follow-up through administrative EHR review, NDI, and claims. We will construct a suicide composite variable, defined as suicide attempts or suicides documented in the EHR or claims data with ICD-10 codes or suicides documented in NDI [37-41]. We will use claims and EHR data to determine the number and types of mental healthcare visits 6-months post-discharge and to measure baseline demographics, diagnoses, and suicidal behavior.

A Penn data analyst will abstract and structure electronic data. Pseudo IDs will be autogenerated for patient, encounter, and note identifiers. Record dates for each patient will be shifted randomly by ±7-30 days from ED admission time. We will then redact protected health information using EHR de-identification software called PHIlter [42]. We also will apply PHIlter to SPI templates. Once de-identified, data will be rendered through a SimpleEMR viewer [43] for user-friendly and effective chart review, validation, and abstraction into a REDCap database.
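A minimal sketch of the pseudo-ID and date-shifting steps described above (the hashing scheme and field layout are illustrative assumptions; free-text redaction itself is handled by PHIlter):

```python
import hashlib
import random
from datetime import datetime, timedelta

def pseudo_id(real_id, salt="study-secret-salt"):
    """Deterministic pseudo-identifier (hypothetical scheme): the same input
    always maps to the same opaque ID, so a patient's records stay linkable."""
    return hashlib.sha256(f"{salt}:{real_id}".encode()).hexdigest()[:12]

def shift_dates(dates, rng):
    """Shift all of one patient's dates by a single random offset of
    +/- 7-30 days, preserving intervals between that patient's events."""
    days = rng.choice([-1, 1]) * rng.randint(7, 30)
    return [d + timedelta(days=days) for d in dates]

rng = random.Random(7)
visits = [datetime(2024, 3, 1), datetime(2024, 3, 4)]
shifted = shift_dates(visits, rng)
```

Because one offset is drawn per patient, the 3-day gap between the two hypothetical visits survives de-identification while the absolute dates do not.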

2.6.2. Implementation

For Aim 2, we will compare EUC and SPCC on reach/penetration: the proportion of eligible patients with a completed safety plan, and the proportion of patients who completed SPI+ who receive two or more follow-up call outreach attempts post-implementation. To assess fidelity during the EUC and SPCC phases, we will rate the completion and quality of all safety plans and follow-ups documented in the EHR using the Safety Planning Intervention Scoring Algorithm (SPISA) [44,45] and SPISA Follow-up (SPISA-F), respectively. The SPISA assesses the presence and quality of each written response and generates two overall scores: Total Completeness and Total Quality. The 4-item SPISA-F assesses the presence/absence of: (1) 2 or more calls (the first within 72 hours of discharge and at least one more in the month post-discharge), and (2) each key follow-up component (see Appendix). We will double code 10% of fidelity ratings for interrater reliability, address differences in rater scores, and refine the coding manuals to improve reliability. SPCC (but not ED) clinicians will receive fidelity feedback during ongoing consultation (see Appendix). We will also explore the impact of the SPCC on disposition decisions via qualitative interviews.
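The reach/penetration measures defined above and in Table 1 reduce to simple ratios; for example (counts are hypothetical):

```python
def reach_metrics(n_at_risk, n_safety_plan, n_two_plus_calls):
    """Reach/penetration as defined in Table 1: safety-plan completion among
    patients flagged as at risk, and two-or-more follow-up call attempts
    among patients who received a safety plan."""
    safety_plan_reach = n_safety_plan / n_at_risk
    follow_up_reach = n_two_plus_calls / n_safety_plan
    return safety_plan_reach, follow_up_reach

# Hypothetical counts abstracted from the EHR for one site
plan_reach, call_reach = reach_metrics(n_at_risk=200,
                                       n_safety_plan=150,
                                       n_two_plus_calls=90)
```

Note the two denominators differ: the first ratio is conditioned on identified risk, the second on safety-plan receipt.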

We will measure acceptability and feasibility of SPI+ delivered via SPCC and EUC (separately) via the Acceptability of Intervention Measure (AIM) and Feasibility of Intervention Measure (FIM), respectively; each is a reliable, valid 4-item tool [46]. We will use the Clinical Sustainability Assessment Tool [47] to assess perceived sustainability. We will also elicit perspectives on acceptability and feasibility during qualitative interviews with patients, clinicians, and leaders. We plan to conduct 44 interviews but will continue until reaching thematic saturation [48].

We will measure implementation climate using the Implementation Climate Scale [49], an 18-item, validated measure of a strategic climate for EBP implementation. We will measure implementation leadership with the Implementation Leadership Scale [50,51], a 12-item, psychometrically validated measure that assesses the degree to which a leader is supportive, knowledgeable, and prepared to implement EBPs. We also will collect data regarding structural characteristics of EDs including size, average patient volume, and availability of mental health clinicians. We will assess clinician-level determinants, including norms, self-efficacy, attitudes, and intentions, using eight standardized item stems based on the Theory of Planned Behavior [29]. For each statement, respondents rate the extent to which they agree on a 7-point scale.

2.6.3. Cost

For Aim 3, we will assess (1) costs to ED practices of the SPCC and EUC and (2) reduction in healthcare costs associated with the SPCC and EUC (cost offsets to the healthcare system). All cost-related data will be from secondary sources.

We will use a process mapping-based costing approach, time-driven activity-based costing (TDABC), to outline SPCC and EUC processes and assign resource use and costs [52-54]. First, we will create a step-by-step process map that specifies SPCC and EUC activities using established recommendations [55]. We will specify names of specific strategies (e.g., conduct ongoing training), actions (specific procedures that need to be performed), and actors (who performs the procedures) used in the SPCC and EUC. Second, we will determine frequency and average duration of each action. We will collect these data by capturing frequency and duration of core actions, the action participants (actors) and the date the actions occur. SPCC and EUC core activities are protocolized and mostly standard (e.g., duration of one training), predictable, and regular (e.g., weekly in-person team meetings) or will be documented as part of the implementation evaluation (e.g., documentation of individual sessions for fidelity assessment). The EHR will provide data on personnel time spent in direct patient contact and other care coordination activities.

2.7. Planned Analyses

2.7.1. Effectiveness

Mixed effects models will be our primary analytic model [56]. We will model the multiple levels of clustering present in the data (i.e., repeated assessments within patients nested within clinicians within sites). These models allow inclusion of covariates such as demographics or baseline clinical measures and are flexible with respect to missing data, the number of assessments for each outcome, and the nature of change over time (e.g., linear, piecewise linear, log-linear, quadratic). Additionally, these models can be modified to address the nature of the outcome data (i.e., continuous, binary, ordinal, count, or zero-inflated outcomes) through a generalized linear mixed model (GLMM) framework [57]. In contrasting time-to-event measures (i.e., suicide attempts and deaths, engagement, hospital admission), we will use restricted mean survival time and recent advances in that framework to accommodate clustering [58,59]. This strategy accommodates non-proportional hazards and, in this sample, possibly two extremes: low rates or high rates [60].
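As an illustration of the restricted mean survival time (RMST) idea in a simplified, unclustered form (the cited methods handle clustering and covariate adjustment), RMST up to a horizon τ is the area under the Kaplan-Meier survival curve:

```python
def kaplan_meier_rmst(times, events, tau):
    """Kaplan-Meier estimate with RMST(tau) = area under the survival step
    curve from 0 to tau. `events` is 1 for an observed event (e.g., suicide
    attempt), 0 for censoring; censored subjects simply leave the risk set."""
    data = sorted(zip(times, events))
    at_risk, surv = len(data), 1.0
    curve = [(0.0, 1.0)]                       # step function (time, S(t))
    for t, event in data:
        if event:
            surv *= 1 - 1 / at_risk            # KM multiplicative update
            curve.append((t, surv))
        at_risk -= 1
    # Integrate the step function from 0 up to tau
    rmst, (t_prev, s_prev) = 0.0, curve[0]
    for t, s in curve[1:]:
        if t >= tau:
            break
        rmst += (t - t_prev) * s_prev
        t_prev, s_prev = t, s
    return rmst + (tau - t_prev) * s_prev
```

For example, three subjects with events at months 2, 4, and 6 yield an RMST of 4.0 months up to τ = 6, i.e., an average of 4 event-free months out of 6.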

To test our mediational effectiveness hypotheses, we will use VanderWeele’s method [61], which addresses the issue of sequential ignorability and unmeasured confounding using a marginal structural model. VanderWeele’s team extended this model to accommodate clustered continuous, binary, and count data [62], yielding unbiased estimates of the natural direct effect and the natural indirect effect, with the total effect being the sum of the two. We will implement sensitivity analyses to determine the impact of confounding [60,63].

To estimate effect size, we reviewed research on SPI+ and other brief suicide interventions, like caring contacts [64-67] and lethal means counseling [68,69]. We estimated 25% to 45% reductions in the relative risk of suicidal behavior. Assuming 10% attrition, 1,360 participants in each phase would provide 80% power to detect a difference of 5.29% versus 3.03% in suicidal behaviors between EUC and SPCC, respectively. These rates replicate our prior findings [17].
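The stated power figure can be checked with a standard two-proportion calculation, a simplified approximation that ignores the stepped-wedge clustering adjustments (10% attrition leaves roughly 1,224 analyzable patients per phase):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_proportion_power(p1, p2, n_per_group, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample z-test for proportions
    (upper-tail approximation; z_alpha is the two-sided 5% critical value)."""
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)
    return norm_cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# 5.29% (EUC) vs. 3.03% (SPCC); 1,360 enrolled per phase minus 10% attrition
power = two_proportion_power(0.0529, 0.0303, n_per_group=int(1360 * 0.9))
```

With these inputs the function returns approximately 0.80, consistent with the power statement above.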

2.7.2. Implementation.

Quantitative analyses will identify the relative contribution of attitudes, self-efficacy, and normative pressure about each SPI+ component to explain variation in intention to use it. Structural equation modeling will examine the effects of theoretical predictors on intention [70,71] and determine if homogenous or heterogeneous factors influence intentions to use each intervention component. For example, attitudes may primarily predict intentions to use follow-up calls, but self-efficacy may be the best predictor for safety plans. This kind of information suggests causal pathways that can be leveraged to design implementation strategies for components of the SPI+ model [72]. Summaries of model fit will include overall goodness-of-fit (e.g., model chi-squared) and focused fit indices pointing to specific points of sub-optimality (e.g., Root Mean Square Errors, Comparative Fit Index). Using this structural equation model, we will add organizational variables and estimate pathways between implementation climate and leadership and attitudes, norms, and self-efficacy to test organizational factors associated with determinants of intention.

An integrated approach [73] combining identification of a priori attributes of interest (i.e., constructs from the EPIS framework and key SPI+ components) with emergent codes and themes will guide qualitative analyses. This approach uses an inductive process of iterative coding to identify recurrent themes, categories, and relationships. We will enter interview transcripts into NVivo. After initial data exploration, we will develop a comprehensive coding scheme and apply it to all data to produce a fine-grained descriptive analysis. We will double-code a sample of transcripts to assess reliability of the coding scheme and resolve coding disagreements through team discussion. Coders will be expected to reach and maintain reliability of κ ≥ .85.
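The κ ≥ .85 threshold above refers to chance-corrected agreement between two coders; Cohen's kappa for a double-coded transcript sample can be sketched as:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders applying the same codes to the same
    segments: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement from each coder's marginal code frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For instance, two coders agreeing on 3 of 4 binary code applications (with the marginals below) yield κ = 0.5, well under the study's .85 bar, which would trigger discussion and manual refinement.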

We also plan mixed-methods analyses [74]. We will integrate quantitative measures of acceptability and feasibility with interviews using the following taxonomy: the structure is Quan → QUAL, the design is convergent (we will use quantitative and qualitative data to explore similar questions to see if they generate the same conclusions), and the process is connecting (to elaborate upon the quantitative findings to understand the process of implementation of SPI+ as experienced by stakeholders) [75]. We will enter quantitative findings into NVivo as attributes of each interview participant. We will examine acceptability and feasibility distributions and, if they permit, determine cut-points to classify clinicians as high, medium, and low in perceptions of acceptability and feasibility. This will allow triangulation to determine if quantitative and qualitative methods yield similar information. Similarly, we will use participants’ fidelity scores to categorize and compare important themes among those with high and low fidelity.

2.7.3. Cost and Cost Offsets

To calculate time spent on each SPI+ action, we will multiply the number of times an actor engages in each action by the average duration and multiply this by their hourly rate. The total action cost is the sum of costs incurred by each actor; we will determine cost of each action in both arms. Adding the cost of actions for each arm will yield the total personnel costs. We will itemize consumable equipment and supplies (training materials, assessment materials, etc.) and determine cost per the project budget. Personnel and non-personnel sums will determine overall SPCC and EUC costs. The difference in these costs will be the incremental cost of SPCC relative to EUC. From the process map illustrating resource use and costs associated with each process, we will identify fixed and variable costs. Fixed costs do not vary with the number of patients receiving care, such as staff time attending trainings and office changes to support the SPCC team’s workflow. Variable costs vary with the number of patients receiving SPCC or EUC, such as cost of ED clinician time and SPCC clinician time spent on SPI+ delivery. We will calculate costs at the patient-month level. Fixed and variable costs will be allocated to all patient-months receiving SPCC and EUC during the study. Analyses will compare costs between SPCC and EUC and identify factors (at the patient-ED practice levels) explaining variation in costs.
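The personnel-cost arithmetic described above reduces to frequency × duration × hourly rate per actor, summed over actions; a sketch with hypothetical placeholder numbers (not study estimates):

```python
def action_cost(frequency, duration_hours, hourly_rates):
    """TDABC personnel cost of one action: each listed actor performs the
    action `frequency` times for `duration_hours`, at that actor's rate."""
    return sum(frequency * duration_hours * rate for rate in hourly_rates)

# Hypothetical SPCC actions: (times performed, hours each, actor hourly rates)
spcc_actions = [
    (1, 8.0, [60.0, 60.0, 60.0, 60.0]),   # one 8-hour training, four clinicians
    (120, 0.75, [60.0]),                   # 120 SPI+ deliveries by one clinician
    (52, 1.0, [60.0, 60.0, 60.0, 60.0]),   # weekly team meeting, all clinicians
]
personnel_cost = sum(action_cost(*a) for a in spcc_actions)
total_cost = personnel_cost + 2500.0       # plus itemized supplies/materials
```

In the fixed/variable split described above, the training and weekly meetings would be fixed costs, while the per-delivery clinician time would be variable and allocated at the patient-month level.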

To examine cost offsets, we will compare ED/hospital use and costs as well as total healthcare costs between arms. The following GLMM equation will estimate incremental cost offsets of the SPCC model versus EUC:

$F(Y_{is}) = \gamma_1 \mathrm{SPCC}_s + \gamma_2 \mathrm{EUC}_s + X_{is}\beta + Z_s\delta + T_t + \delta_s + \varepsilon_{is}$

$Y_{is}$ is the outcome of interest for patient $i$ at Site $s$, namely, the count of ED visits/hospitalizations or total healthcare costs over a defined period post-SPCC participation (e.g., 12 months). $\mathrm{SPCC}_s$ and $\mathrm{EUC}_s$ are dummy indicators of whether Site $s$ is assigned to the SPCC or EUC arm. $X_{is}$ are individual patient characteristics at baseline that may contribute to variation in the outcome and need to be controlled for in the model. $Z_s$ are the practice baseline characteristics that may contribute to outcomes. We will include a set of time (e.g., calendar quarter) indicators, $T_t$, to control for secular changes over time. $\delta_s$ captures the random effects at the site level; $\varepsilon_{is}$ is a random error for patient $i$ at Site $s$. We will estimate the GLMM with a log link function ($F$ in the equation) and a distribution function characterizing the mean-variance structure of the error term (typically gamma). For ED visits and hospitalizations, we will also estimate a count data model (Poisson or negative binomial) as robustness checks. We will estimate incremental care use and costs by deriving predictions (conditional on SPCC condition) from the estimated model.
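Under the log link, predicted incremental cost offsets follow from exponentiating the linear predictor under each condition; a sketch with purely hypothetical coefficients (illustration of the prediction step, not fitted values):

```python
from math import exp

def predicted_cost(gamma_arm, x_beta, z_delta, t_effect, site_effect=0.0):
    """Mean outcome under the log-link GLMM: exp of the linear predictor
    (arm effect + patient covariates + site covariates + time + random
    site effect)."""
    return exp(gamma_arm + x_beta + z_delta + t_effect + site_effect)

# Hypothetical coefficients on the log-cost scale (illustration only)
gamma_spcc, gamma_euc = 0.10, 0.35     # arm effects
x_beta, z_delta, t_t = 7.2, 0.3, 0.05  # patient, site, and quarter terms

# Incremental offset: predicted EUC cost minus predicted SPCC cost
incremental_offset = (predicted_cost(gamma_euc, x_beta, z_delta, t_t)
                      - predicted_cost(gamma_spcc, x_beta, z_delta, t_t))
```

Because the link is multiplicative, the arm contrast on the original cost scale depends on the covariate values at which predictions are made, which is why the text above derives predictions conditional on SPCC condition rather than reading the offset directly from a coefficient.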

3. Discussion

The results of this study could have significant public health impact. If successful, SPCC could help scale up SPI+ and create local, regional, or national suicide prevention consultation hubs. Wide-scale SPCC implementation can prevent suicide by addressing three critical needs. First, almost half of EDs lack on-site mental health clinicians. SPCCs would increase access for ED patients at risk for suicide. Second, SPCCs delivering SPI+ would increase the use of evidence-based suicide care practices in EDs. Finally, widespread reach of SPI+ may result in increased engagement in follow-up care during the period of greatest risk following ED discharge.

The current project also has limitations. Most notably, participating EDs are from a single health system. However, the health system’s diversity reduces threats to external validity. Participating EDs span a large geographic area serving rural, suburban, and urban communities with great ethnic, racial, and socioeconomic diversity.

To realize the promise of the SPCC model, we must identify and address potential challenges to implementation, such as credentialing and licensing, in collaboration with community partners. In addition, clinician-level factors, such as forgetting to refer to the SPCC or reluctance to refer to external clinicians, may also present challenges. SPCCs will be of limited value if long-term funding is not available to sustain them. Our multifaceted aims and mixed methods design will enable us to identify such challenges (e.g., low provider referral rates per the EHR) and understand the complex web of contributory factors (e.g., through qualitative interviews). The current project will build the foundation for future research to address these critical barriers and maximize the impact of the SPCC model.

Supplementary Material

Appendix

Acknowledgements

Special thanks to Emily R. Schriver, Sy Hwang, Esha Patel and Olga Barg for technical assistance. We thank the Penn Departments of Emergency Medicine’s and Psychiatry’s leadership, administrators, and clinicians. We also thank Penn INSPIRE’s scientific advisory board, community partners, and data and safety monitoring board.

Study Funding

This study was supported by the National Institute of Mental Health/National Institutes of Health (P50 MH127511 to Gregory K. Brown and Maria A. Oquendo).

Footnotes

CRediT Authorship Contribution Statement

Gregory K. Brown: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing, Supervision, Project Administration, Funding Acquisition; Courtney Benjamin Wolk: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing, Supervision, Project Administration; Kelly L. Green: Conceptualization, Methodology, Software, Writing – Original Draft, Writing – Review & Editing, Supervision, Project Administration; Freya Nezir: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing; Danielle L. Mowery: Conceptualization, Methodology, Software, Writing – Original Draft, Writing – Review & Editing, Supervision, Project Administration; Robert Gallop: Methodology, Software, Writing – Original Draft, Writing – Review & Editing; Megan E. Reilly: Writing – Review & Editing, Supervision, Project Administration; Barbara Stanley: Conceptualization; David S. Mandell: Conceptualization, Methodology, Writing – Review & Editing, Project Administration; Maria A. Oquendo: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing, Supervision, Project Administration, Funding Acquisition; and Shari Jager-Hyman: Conceptualization, Methodology, Writing – Original Draft, Writing – Review & Editing, Supervision, and Project Administration.

