AMIA Annu Symp Proc. 2021 Jan 25;2020:1050–1058.

User-Centered Design of a Machine Learning Intervention for Suicide Risk Prediction in a Military Setting

Carrie Reale 1, Laurie L Novak 1, Katelyn Robinson 1, Christopher L Simpson 1, Jessica D Ribeiro 2, Joseph C Franklin 2, Michael Ripperger 1, Colin G Walsh 1
PMCID: PMC8075431  PMID: 33936481

Abstract

Primary care represents a major opportunity for suicide prevention in the military. Significant advances have been made in using electronic health record data to predict suicide attempts in patient populations. With a user-centered design approach, we are developing an intervention that uses predictive analytics to inform care teams about their patients’ risk of suicide attempt. We present our experience working with clinicians and staff in a military primary care setting to create preliminary designs and a context-specific usability testing plan for the deployment of the suicide risk indicator.

Introduction

Primary care represents a major opportunity for suicide prevention in the military. Significant advances have been made in using electronic health record (EHR) data to predict suicide attempts in patient populations.1 Our team is taking a user-centered design (UCD) approach2,3 to develop an intervention that uses predictive analytics to inform care teams about their patients’ risk of suicide attempt. Despite substantial literature on the development of algorithms that predict clinical risk in a wide variety of clinical domains, few reports of real-world implementation exist, and there are even fewer detailed descriptions of the process of design and deployment of tools to be used by clinicians.4,5 In this manuscript, we present our experience working with clinicians and staff in a military primary care setting to create preliminary designs and a usability testing plan for the deployment of the suicide risk indicator.

Background

Prevention of suicide in primary care

In the Army, close to half of active duty suicide decedents have contact with a primary care provider in the month before death. This figure increases to over 95% in the year prior.6 Research among civilians and veterans suggests similar patterns.7,8 Given that healthcare in the military is readily available with no financial barriers, primary care may serve as a critical point for suicide risk detection and management. Although a substantial proportion of suicide decedents have contact with primary care shortly before death, suicide risk is rarely detected. Among civilian samples, only 3-31% of suicide decedents who accessed care in the year preceding death communicated suicidal intent at their final consultation, and only 26% of primary care providers reported concerns related to suicide during their final consultation with patients who died by suicide soon after that visit.9,10 Among active duty servicemembers, risk detection is even rarer, with suicide risk being documented in fewer than 14% of cases who died by suicide within the month of their final healthcare visit.6 These findings indicate grave limitations inherent to existing risk detection and management approaches in primary care.

Prediction of suicide risk

Suicide prevention begins with identification of those at risk with sufficient time to intervene. Face-to-face screening instruments have been the mainstay of risk prediction for decades but have been shown in meta-analysis to be near-chance in their accuracy to predict suicidal behaviors.11 More recently, predictive algorithms leveraging statistics and machine learning have improved our ability to identify risk of suicidal behaviors using EHR data.1,12–15 Primary care is a setting particularly relevant for scalable, accurate computational screening. Primary care workflows are complex, and mental health issues – in particular, suicidal thoughts and behaviors – are both under-reported and under-coded in this setting.16 Even when documented in primary care, suicidal thoughts are coded rarely, as low as 3% of the time.16 Large-scale efforts to implement predictive models of suicidality into clinical practice are already underway at sites like Veterans Affairs17 and Vanderbilt University Medical Center.
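As a toy illustration of the kind of EHR-based predictive modeling cited above (not the authors' actual model), the sketch below fits a minimal logistic regression by stochastic gradient descent over a few hypothetical chart-derived features; the feature set, training data, and hyperparameters are all invented for demonstration.

```python
# Illustrative sketch only: a minimal logistic-regression risk model over
# hypothetical EHR-derived features. The features and data are invented;
# this is not the model described in the paper.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=500):
    """Fit weights by stochastic gradient descent on the log-loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return a probability-like risk score in [0, 1]."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical features: [prior behavioral health diagnosis (0/1),
#                         normalized PHQ-9 score, recent ED visit (0/1)]
X = [[1, 0.8, 1], [0, 0.1, 0], [1, 0.6, 0], [0, 0.0, 0]]
y = [1, 0, 1, 0]
w, b = train(X, y)
high = predict_risk(w, b, [1, 0.7, 1])
low = predict_risk(w, b, [0, 0.1, 0])
```

In practice such models are trained on large retrospective cohorts with careful validation and calibration; the point here is only the shape of the computation that turns structured EHR data into a ranked risk score.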

User-centered design

UCD is a framework of practices that attends to the usability of tools, the social and physical environment of use, and the goals of users throughout the design process.3 The UCD framework is the accepted standard for safe and effective design.18,19 UCD involves exploring the context of use, defining user needs, two phases of usability testing (formative and summative evaluation), and post-implementation review, as shown in Figure 1. Formative evaluation takes place after an initial engagement to document and understand the user environment, define user needs, and create initial design concepts. The evaluation is iterative, involving the (sometimes collaborative) development of mock-ups or prototypes that are used in scenarios enacted by user participants. Experts use standardized or customized usability assessment tools to identify elements that need improvement. When a design is fully elaborated, a summative evaluation examines the usability and future evolution of the tool, ideally in the real-world setting. In this paper we summarize our experience conducting the first two phases and our preparation for the third phase of UCD outlined in Figure 1.

Figure 1. The user-centered design framework © 2017 Center for Research and Innovation in Systems Safety. Reprinted with permission.


Methods

All interventions in healthcare practice, particularly those involving the presentation of data in the workflow of clinicians, benefit from UCD. The price of not doing UCD correctly is administrative burden on clinicians contributing to burnout20 and the potential for patient harm.21,22 A wealth of resources exists to help design teams pursue UCD, particularly in support of usability testing (e.g., the website usability.gov, which provides validated instruments and other guidance). However, a key challenge in UCD is developing user-centered methods for evaluation and iterative design improvement that are specific to the context of use. This is particularly important for clinical decision support initiatives, where the results of analytics may be deployed in diverse settings such as hospital, outpatient clinic, and emergency room. In our case, the environment was further complicated by the project taking place in a military setting: both the environment and the roles of the users were layered with clinical and military norms.

This project was approved by the IRB of the Department of Defense, Naval Medical Center Portsmouth. We conducted two site visits to develop this analysis. In the first visit, we explored two sites, both primary care clinics on a military base, for deployment of the technology, meeting with staff to describe and discuss the technology and the implications of implementation at each site. We met with clinic leadership, behavioral health specialists, physicians, and corpsmen (enlisted medical specialists in the U.S. Navy). We also met with staff to explore options for data acquisition and de-identification. After the first site visit, we selected Clinic A for deployment of the technology. In the second site visit, we met again with population health staff to refine data needs and planning. At Clinic A, we conducted observations of routine care, focus groups with three clinical groups, and a discussion with clinic leadership. Experienced field researchers conducted observations of routine care by shadowing clinical team members (corpsmen, nurses, and primary care providers) in the clinic during patient appointments (n=5). We observed and documented in fieldnotes their workflow before, during, and after patient interactions. We conducted three focus groups comprising nurses (n=7), corpsmen and medical assistants (n=8), and primary care providers (n=5). Questions centered on current workflows connected to the assessment of suicide risk in patients, individual strategies for detecting suicide risk, and challenges faced.

Data collected were in the forms of fieldnotes from observations and interviews and transcripts from focus groups. We used a software tool, Dedoose (www.dedoose.com), to code the data. Coding involved reading the text, creating excerpts, and assigning one or more codes, or labels, to each excerpt. Three research team members coded the data using an open coding approach (i.e., coding all themes that emerge). Preliminary codes were analyzed for repeated ideas and elements, grouped into concept categories, and consolidated into the four major themes described below. We reviewed our codes and definitions in meetings to ensure consistent application of the codes. We coded the data a second time using the Consolidated Framework for Implementation Research (CFIR).23 In this case, the classification system of a formalized, previously developed framework was the source of codes to be applied to the data. We attempted to use CFIR to conduct a diagnostic assessment of implementation context by selecting the constructs most relevant to our study setting.23 A significant challenge in applying the framework may be specific to the military, where members are culturally (and sometimes geographically) distinct from the rest of society. This factor is conceptually akin to Goffman’s notion of a total institution, where a population is formally administered according to a bureaucracy and regulations that differ from those of the wider community.23,24 The CFIR framework includes a delineation of internal and external environments. In this case, there were many elements of the internal environment (i.e., the clinic setting) that reflected the structure and practices of military culture (i.e., the external setting), such that they could not be distinguished.
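The consolidation step described above (codes grouped into concept categories, then into themes) can be illustrated mechanically: excerpts carry one or more codes, codes map to themes, and theme counts summarize their prevalence. The code names and theme labels below are hypothetical, not the study's actual codebook.

```python
# Hypothetical sketch of tallying open codes into consolidated themes.
# Code names and the theme mapping are invented for illustration.
from collections import Counter

# Each excerpt was assigned one or more codes during open coding.
excerpt_codes = [
    ["stigma", "career_impact"],
    ["stigma", "deployment_limits"],
    ["screening_gap", "gut_feeling"],
    ["career_impact"],
]

# Concept categories consolidated into major themes (illustrative only).
theme_map = {
    "stigma": "suicide as a military problem",
    "career_impact": "suicide as a military problem",
    "deployment_limits": "suicide as a military problem",
    "screening_gap": "screening and monitoring processes",
    "gut_feeling": "screening and monitoring processes",
}

# Count how often each theme is represented across all coded excerpts.
theme_counts = Counter(
    theme_map[c] for codes in excerpt_codes for c in codes
)
```

In a real analysis the mapping itself is the product of iterative team discussion, as described above; the tally merely makes the resulting structure inspectable.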

We defined our preliminary usability testing objectives based on the outcomes of the first site visit. These objectives elucidate the research questions to be pursued through usability testing, such as the most effective way to display the risk prediction information (e.g., wording, colors, graphics), and how users interpret and conceptualize the risk indicators (i.e., What does this mean? What am I going to do with this information?). We synthesized our qualitative analysis to identify any additional testing objectives and extract design concepts to guide prototype development.

Findings

We identified several distinct user groups to include in our usability testing plan in addition to primary care providers managing patient care. Our observations during the second site visit confirmed that other clinical team members play key roles in the suicide screening process, such as corpsmen and civilian medical assistants administering the existing standardized screening instruments (i.e., PHQ-2 and PHQ-9), and therefore would be likely to interact with the new intervention. Table 1 lists the expected primary and secondary user groups we identified for usability testing.

Table 1. Primary and secondary user groups of the risk prediction intervention.

Primary user groups:
  • Physicians
  • Physician Assistants
  • Nurse Practitioners
  • Psychologists in the primary care clinic
  • Behavioral health nurses in the primary care clinic

Secondary user groups:
  • Corpsmen and medical assistants completing intake
  • Nurses providing physician clinic support
  • Nurses or other staff who generate daily patient reports

Our qualitative analysis also identified several themes that help illuminate the way the clinical team thinks about and makes decisions about suicidality. Four primary themes emerged that fall broadly within CFIR’s working definition of context, or the set of interacting circumstances or unique factors that surround a particular implementation effort.23

Suicide as a military problem

Members across all roles of the clinic team repeatedly voiced concerns about the perceived stigma associated with mental health issues in the military. While a certain degree of stigmatization of mental health diagnoses and treatments exists in many subcultures within our society, the burden of these perceptions may be felt more acutely in a military setting than in a civilian one.25 Providers indicated that the perception of this stigma is pervasive and routinely impacts their practice and day-to-day patient care, such as dealing with patients’ desire to avoid taking medications associated with mental illness or having a depression diagnosis in their medical record. For active duty personnel, this mindset may create substantial barriers to seeking treatment when mental health issues arise.

“The thing is, as soon as they hear speak, talk, write, look – they’re out…many of them come in and they’re like, “No, I don’t [want] this in my record.” But I can’t do anything if it’s not in the record.”

- Primary care provider

“That’s what I tell my patients when I talk to them, “You have diabetes…you’ve got to take it because that’s an acceptable diagnosis. Mental health is not an acceptable diagnosis.” And by us in this room, we know that it’s life, it’s just as important as menopause, and having a virus and having a flu. But in the community, other people – non-medical people – to put that stigma of a mental health label attached to them, and their job may depend on it, the stigmatism from command, their friends, their peers.”

- Primary care provider

The clinic team expressed concern that a history of mental health issues may negatively impact active duty personnel’s career status. When addressing suicidality, certain factors come into play that might limit an individual’s deployment status. Questions about how this may affect things like military career prospects, current position, finances, ability to carry a weapon, and security clearance are often present. Team members indicated that some deployments and job responsibilities might be restricted based on mental health history, such as assignments to a small command, flying planes, or serving on submarines. Furthermore, active duty personnel taking certain mental health medications must be on that medication for a set amount of time; otherwise, additional clearance may be required before deployment. This clock resets when the patient changes medications, a common occurrence when treating mental health issues. There appeared to be uncertainty around the exact implications of different factors, if any, further complicating active duty personnel’s willingness to seek treatment.

“But they have to have treatment facilities where they’re going, and there’s certain places that they go that they can’t accommodate them, so they can’t go.”

- Nurse

“You never know what you can be disqualified [for] and what you can’t be disqualified [for].”

- Corpsman/Medical assistant

On top of the fear this perceived stigma creates, several clinic team members pointed out that the reality of a modern active duty military setting is in itself a contributor to higher stress and anxiety levels that may stretch an individual’s coping skills. Added pressures from family responsibilities back home when deployed or the lack of familiar social support systems may further contribute.

“When my husband was deployed – when we were first married – we had no contact, we’d get a letter once every couple three months. Now, these guys are on email, you can contact them easily. So, now, you’ve got the wife at home with the baby, and the baby is sick, and the 19-year-old is on ship in the middle of the [ocean], and they’re sending him email messages and stuff. They can’t get home, they can’t help their wife... so, it’s this whole big cascade.”

- Primary care provider

“Especially if it’s a spouse who has no support base here because they recently moved here. They have no friends, no family.”

- Nurse

Suicide as a clinical problem

The primary care team shared their perceptions about suicide as a clinical problem, one that must be addressed like the other disease processes the primary care team encounters. They discussed the strategies they employ to identify, evaluate, and manage the care of these patients. Some of the issues raised are likely similar to those faced in a civilian primary care environment, but others appear unique to, or heightened in, the context of active duty service. One significant problem raised by the team was patient turnover, which makes continuity of care an ongoing challenge for clinicians caring for patients who relocate or deploy frequently. Patients may not express the same mental health concerns to another provider, and building a trusted relationship in which a patient feels comfortable raising sensitive issues may be more difficult for primary care providers in an active duty setting.

Participants also discussed the realities of dealing with the complex issue of suicidality in the time allotted for a typical primary care appointment. A common occurrence described by the team is for a patient to come in for another issue and, over the course of the visit, it turns into a discussion about suicidality. Despite consistent screening with validated instruments (i.e., PHQ and GAD) during intake, the patient may not mention the underlying issue of suicidality until well into the allotted visit time with the primary care provider, which creates enormous time pressures to adequately address this issue while juggling a fully booked clinic schedule.

Clinicians expressed different comfort levels with the skills and knowledge required to effectively complete a suicide assessment and manage related medications. Many of the challenges of managing complex medications for mental health issues that these clinicians shared are not unique to an active duty military setting, such as patient compliance and undesirable side effects. Clinicians indicated the threshold for when to refer a patient to mental health services was an individual determination.

“My rule of thumb is two strikes and I’m out. I’m out of the picture. If I can’t get you on the right med with the second med, then I refer. Because, again, that’s just a little bit outside my comfort zone; I don’t want to keep playing with somebody’s mind. It’s one thing to play with their blood sugar, but it’s still something different to play with their minds.”

- Primary care provider

“I think that might be provider preference. I’m comfortable with adjusting meds because there are thousands of medications to try and I’m comfortable with trying a few of them. But as long as I have patient buy-in, I think if I know that they’re claiming to be compliant, we have good, regular follow-up, that communication piece is there, I’m comfortable with continuing. But I think if they’re already on two, maybe three medications already, like all of them concurrently, then I’m definitely referring.”

- Primary care provider

Screening and monitoring processes

Team members indicated that their existing suicidality screening tools (i.e., PHQ-2 for all patients, PHQ-9 as indicated) provide an important starting point for identifying patients at risk, but in many cases these tools alone are insufficient.

“I’ve had lots of patients who were negative on the PHQ-2 score, but were still positive when they get…the PHQ-9. And so, that’s always an interesting thought to keep in the back of your mind – and that I tell my corpsmen – just because they are negative doesn’t necessarily mean that they’re negative. So, it’s just about listening to those cues and some of the things that they’re saying.”

- Primary care provider

Multiple team members described the need to go beyond the structured questionnaire and explore the issue further, outlining different personal strategies used to supplement the formal screening instruments. These strategies offer insight into the critical thinking processes staff use to form an overall impression, or gestalt view, of the patient’s mental health status and how they determine the appropriate next steps in each patient’s case. One participant described their approach of rewording the screening questions to be less direct, which patients may be more likely to answer in the negative, in order to get at some behaviors they consider to be warning signs (e.g., selling or giving away possessions). The team discussed a nuanced process of piecing together clues about the patient and balancing this information against the answers to the screening questions. In addition to past medical history, important indicators include aspects of the patient’s demeanor and appearance, such as attitude, tone of voice, and speech patterns, as well as information about the patient’s situation, such as what job someone is doing, whether they carry firearms, and their local support network (i.e., are they deployed alone, a “geo-bachelor” who is away from home and loved ones for the first time). Having the opportunity to get to know the patient’s baseline behavior is invaluable but can be difficult with the frequent relocations inherent to an active duty environment. Often the healthcare team relies heavily on their “gut feeling” about a patient formed through years of clinical experience.

“And every once in a while, I go by feelings; something’s not feeling great.”

- Primary care provider

“…when they first come in, their eyes, their face, their actions...”

- Primary care provider

“I think it’s the interaction with the patient. That’s the only thing that’s going trigger anything is the way you interact.”

- Nurse

Primary care providers also described the difficulty they face in effectively managing high patient volumes and following up with their large patient panels. The follow-up process for patients starting a new antidepressant medication is well established – a 30-day follow-up appointment is routinely scheduled before the patient leaves the office. However, there is no standardized process to remind a provider to follow up with individuals they may be concerned about but who fall outside of these established parameters.

“How much of our time can we really go back and really remember the one I had yesterday or the one for last month?”

- Primary care provider

“Rarely do you have time to search through their history and rarely do they share with you what’s going on.”

- Nurse

Conceptualizing the predictive model

Team members expressed a range of perspectives when speculating how they might envision and react to a suicide risk prediction tool. Many imagined it would involve some sort of flag in the patient’s chart, perhaps similar to an allergy indicator, vaccine reminder, or the prompt for tobacco cessation counseling. One team member used an analogy of having lab values vs. “looking anemic,” speculating it could be an objective piece of information to help them identify this critical issue. The potential value of a new tool to help them improve risk stratification and supplement their own assessment resonated with some team members.

“I do think it could be good. If it takes the compounded results and have a trend of the [screening tool scores]…if I could just see how they always scored like zero or if it’s been creeping up. Have they always been like a two? Because then it’s okay.”

- Corpsman/Medical assistant

For some, the idea of a new data element for suicide risk was difficult to conceptualize. Team members expressed doubts about how a computer could make this type of determination and noted that the system may not contain all of a patient’s information.

“So I’m thinking…what are you going to use to track the system you’re talking about? What are you going to use to like gage how they are, you know? It’s more of like a subjective thing than objective.”

- Corpsman/Medical assistant

Others raised fears about unintended consequences of how a suicide indicator might be used in the real world, given the perceptions of stigma and potential career impact surrounding mental health issues. Team members indicated there is a trust factor between the patient and the provider, and worried that a referral to mental health that appears to come out of nowhere could permanently damage that patient-provider relationship. Additional concerns about a theoretical suicide risk indicator included the potential to use it as a basis for policy making or for determining job duties. One team member felt a predictive indicator for stigmatized behaviors might put them in a situation where they felt professionally or ethically bound to act if such an indicator or “flag” appeared on the screen. Combined with the unique military culture that affords limited privacy for individuals in high risk roles (i.e., a commanding officer may be able to view personnel’s medical information), this could create a conflict for primary care providers, especially those who are both physicians and military officers.

“Because if you have a system that even though it’s not like disqualifying them, but still going to flag them. If I show up to my appointment and I said two years ago I was feeling kind of down and I show up to my appointment two years from now. And it’s like flag, this guy, suicide risk.”

- Corpsman/Medical assistant

Discussion

We extracted a number of critical sociotechnical factors from our qualitative analysis and implementation assessment that will guide our prototype development and usability testing strategies. Many of the themes identified (e.g., differing comfort levels with suicide risk assessment and management among primary care providers, visit time constraints, challenges with turnover and follow-up in large patient panels, and limitations of existing screening tools) underscore the need for additional tools like our proposed machine learning intervention to augment existing processes. Considering the identified themes about the potential for misinterpretation, perceived subjective nature of suicide risk assessment, and complexity of effectively addressing this issue in the context of a military primary care visit, a key takeaway from the qualitative analysis was the need for our prototypes to include straightforward guidance on appropriate actions to take for different levels of indicated risk. This guidance must be distilled down to a manageable set of actions that clinic staff can efficiently incorporate into existing workflows while reinforcing the essential screening work already taking place.

Prototype development

The potential for multiple distinct user groups to view the risk prediction tool while working as a collaborative healthcare team suggests the need for a consistent, easy-to-interpret prototype design that safely and effectively supports both primary and secondary user groups’ contributions to the suicide screening process. Because of concerns around interpretability, stigma (psychiatric “labeling”), and the transience of periods of risk, the prototype under development will include transparent education around what a risk prediction might and might not indicate.26 It will also be informed by the need for any “flag” (a word with particular potency in the military, where an administrative “flag” acts as a permanent marker on a military record) to be removed, or lessened in displayed import, over time.
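One way to realize the requirement that a displayed flag lessen in import over time is a simple decay on display emphasis. This is a hypothetical sketch of that design idea; the half-life and thresholds are illustrative assumptions, not the study's actual specification.

```python
# Hypothetical sketch: decaying the displayed emphasis of a risk flag so it
# does not act as a permanent marker. Half-life and thresholds are invented.
def displayed_weight(days_since_prediction, half_life_days=90):
    """Exponential decay of display emphasis from 1.0 toward 0."""
    return 0.5 ** (days_since_prediction / half_life_days)

def display_level(days_since_prediction):
    """Map decayed weight onto coarse UI prominence tiers."""
    w = displayed_weight(days_since_prediction)
    if w > 0.75:
        return "prominent"
    if w > 0.25:
        return "subdued"
    return "hidden"
```

A fresh prediction would render prominently, fade to a subdued indicator over roughly a season, and eventually disappear unless refreshed by a new prediction; the actual schedule would need to be set with clinical and usability input.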

Times of transition – pre-deployment, post-deployment, at entrance to or departure from active duty – have been identified repeatedly as times of increased risk. Prototype design will include clear, unambiguous iconography capturing predicted risk level, change in predicted risk level, and, if data are available, an icon indicating a recent or upcoming transition. Similarly, providers are mindful of their roles in either facilitating or hindering transition to deployment based on the results of a medical evaluation. Usability testing informed by those providers’ experiences will help minimize unintended consequences of misclassification on, for example, command decisions about whether to deploy.

Providers described multiple resources used in their decision-making. For example, “Go-Bys” are laminated cards with highly pertinent, succinct actions for a clinical problem (e.g., pneumonia evaluation and diagnostic coding requirements). A supporting Go-By, along with the prototyped EHR risk prediction visualization, will be context- and workflow-sensitive. This prototype Go-By will include pertinent, brief education on risk indicator meanings and suggested actions, in line with directive clinical decision support.

Usability testing objectives

In addition to the preliminary usability testing objectives described above, we identified several new priorities to evaluate during testing related to the context of use for the risk prediction tool and prototype design guidance. First, we identified the need for supporting resources (e.g., the Go-By and scripting) to augment the risk indicator visual display; thus our testing objectives were expanded to incorporate these additional components of the user interface. Furthermore, the Go-By will provide suggested action pathways for providers to follow. The appropriateness of these recommendations and their fit with existing workflows will also be incorporated into the testing plan. Lastly, our findings related to the complexity of suicide as a clinical problem within a primary care military setting suggested it was important to assess each participant’s comfort level with suicide risk assessment, attitudes toward risk prediction information in general, and the perceived utility of the proposed intervention.

Scenario development

Our focus group findings provided rich contextual details for our team to incorporate into usability testing scenarios. We were able to extract a range of relevant factors to potentially include when creating realistic patient situations that represent the social complexity within this patient population (e.g., deployment information, career information, relocation information) to facilitate usability testing. Table 2 lists some of the patient and clinic visit characteristics that team members perceived as potentially relevant to their suicidality risk decision-making.

Table 2. Patient and context characteristics to guide testing scenario development.

Patient characteristics:
  • Active duty job role
  • Age
  • History of multiple behavioral health medications (2+) vs. none
  • History of behavioral health diagnosis vs. none
  • Marital status
  • Recent discharge or emergency department visit
  • Family responsibilities on base vs. geo-bachelor
  • Multiple risk factors present (e.g., young diabetes patient, family death, not following DM treatment plan)
  • Individual coping skills level
  • Recent call to the nurse advice line
  • Imminent deployment
  • Patient concerned about stigma if seeking help

Visit characteristics:
  • Patient comes in for a separation physical
  • Patient walks in to the front desk, sent by command to be checked for insomnia
  • Patient here for a routine checkup or unrelated problem that turns out more complex than expected
  • Non-face-to-face encounter (follow-up phone call or virtual visit)
  • Patient presented to the Mental Health clinic as a walk-in, sent over to primary care
  • Patient comes in for a 30-day follow-up appointment after starting a new behavioral health medication

Risk prediction characteristics:
  • PHQ-2 screening is negative but the risk indicator is high
  • Supporting signs and symptoms of suicide risk present vs. none present
  • Corpsman reports patient denies suicidal ideation on full screening forms but the risk indicator is high
  • Risk prediction indicator changes over time (was X at last visit, now Y)

Conclusion

Suicide prevention in a military primary care setting is a complex and challenging effort. To successfully design and implement a novel risk prediction intervention in this domain, informaticians need a robust understanding of the specific context of use, including the clinical, social, economic, and organizational factors that may impact deployment and adoption. The UCD framework provides an effective way to extract these critical contextual factors early in the development process and systematically evaluate design concepts through well-informed usability testing plans.

Acknowledgement

This work was supported in part by funding from the Military Suicide Research Consortium (MSRC), an effort supported by the Office of the Assistant Secretary of Defense for Health Affairs (Award No. W81XWH-10-2-0181). Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the MSRC, the Department of Veterans Affairs, or the Department of Defense.

References

  • 1. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clin Psychol Sci. 2017;5(3):457–69.
  • 2. Holtzblatt K. Contextual design. In: Human-Computer Interaction. CRC Press; 2009. pp. 71–86.
  • 3. Norman DA. The Design of Everyday Things. New York: Basic Books; 2002.
  • 4. Shortliffe EH, Sepulveda MJ. Clinical Decision Support in the Era of Artificial Intelligence. JAMA. 2018 Dec 4;320(21):2199–200. doi: 10.1001/jama.2018.17163.
  • 5. Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, D.C.: National Academy of Medicine; 2019 Dec.
  • 6. Ribeiro JD, Gutierrez PM, Joiner TE, Kessler RC, Petukhova MV, Sampson NA, et al. Health care contact and suicide risk documentation prior to suicide death: results from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). J Consult Clin Psychol. 2017;85(4):403. doi: 10.1037/ccp0000178.
  • 7. Ahmedani BK, Simon GE, Stewart C, Beck A, Waitzfelder BE, Rossom R, et al. Health care contacts in the year before suicide death. J Gen Intern Med. 2014;29(6):870–7. doi: 10.1007/s11606-014-2767-3.
  • 8. Denneson LM, Basham C, Dickinson KC, Crutchfield MC, Millet L, Shen X, et al. Suicide risk assessment and content of VA health care contacts before suicide completion by veterans in Oregon. Psychiatr Serv. 2010;61(12):1192–7. doi: 10.1176/ps.2010.61.12.1192.
  • 9. Isometsä ET, Heikkinen ME, Marttunen MJ, Henriksson MM, Aro HM, Lönnqvist JK. The last appointment before suicide: is suicide intent communicated? Am J Psychiatry. 1995;152(6):919–22. doi: 10.1176/ajp.152.6.919.
  • 10. Pearson A, Saini P, Da Cruz D, Miles C, While D, Swinson N, et al. Primary care contact prior to suicide in individuals with mental illness. Br J Gen Pract. 2009;59(568):825–32. doi: 10.3399/bjgp09X472881.
  • 11. Franklin JC, Ribeiro JD, Fox KR, Bentley KH, Kleiman EM, Huang X, et al. Risk factors for suicidal thoughts and behaviors: a meta-analysis of 50 years of research. Psychol Bull. 2017;143(2):187. doi: 10.1037/bul0000084.
  • 12. Walsh CG, Ribeiro JD, Franklin JC. Predicting suicide attempts in adolescents with longitudinal clinical data and machine learning. J Child Psychol Psychiatry. 2018 Dec 1;59(12):1261–70. doi: 10.1111/jcpp.12916.
  • 13. Belsher BE, Smolenski DJ, Pruitt LD, Bush NE, Beech EH, Workman DE, et al. Prediction Models for Suicide Attempts and Deaths: A Systematic Review and Simulation. JAMA Psychiatry. 2019 Jun 1;76(6):642. doi: 10.1001/jamapsychiatry.2019.0174.
  • 14. Schoenbaum M, Kessler RC, Gilman SE, Colpe LJ, Heeringa SG, Stein MB, et al. Predictors of Suicide and Accident Death in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS): Results From the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). JAMA Psychiatry. 2014 May 1;71(5):493. doi: 10.1001/jamapsychiatry.2013.4417.
  • 15. Simon GE, Johnson E, Lawrence JM, Rossom RC, Ahmedani B, Lynch FL, et al. Predicting Suicide Attempts and Suicide Deaths Following Outpatient Visits Using Electronic Health Records. Am J Psychiatry. 2018 Oct;175(10):951–60. doi: 10.1176/appi.ajp.2018.17101167.
  • 16. Anderson HD, Pace WD, Brandt E, Nielsen RD, Allen RR, Libby AM, et al. Monitoring Suicidal Patients in Primary Care Using Electronic Health Records. J Am Board Fam Med. 2015 Jan 1;28(1):65–71. doi: 10.3122/jabfm.2015.01.140181.
  • 17. Kessler RC, Hwang I, Hoffmire CA, McCarthy JF, Petukhova MV, Rosellini AJ, et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res. 2017 Sep;26(3). doi: 10.1002/mpr.1575. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/mpr.1575
  • 18. Office of the National Coordinator for Health Information Technology. Strategy on Reducing Regulatory and Administrative Burden Relating to the Use of Health IT and EHRs. 2020. Available from: https://www.healthit.gov/topic/usability-and-provider-burden/strategy-reducing-burden-relating-use-health-it-and-ehrs
  • 19. Wiklund ME, Kendler J, Hochberg L, Weinger MB. Technical basis for user interface design of health IT. 2015.
  • 20. National Academies of Sciences, Engineering, and Medicine. Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being. Washington, DC: The National Academies Press; 2019. Available from: https://www.nap.edu/catalog/25521/taking-action-against-clinician-burnout-a-systems-approach-to-professional
  • 21. Castro GM, Buczkowski L, Hafner JM. The contribution of sociotechnical factors to health information technology-related sentinel events. Jt Comm J Qual Patient Saf. 2016;42(2):70–AP3. doi: 10.1016/s1553-7250(16)42008-8.
  • 22. Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic health record usability issues and potential contribution to patient harm. JAMA. 2018;319(12):1276–8. doi: 10.1001/jama.2018.1171.
  • 23. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50. doi: 10.1186/1748-5908-4-50.
  • 24. Goffman E. Asylums: Essays on the social situation of mental patients and other inmates. AldineTransaction; 1968.
  • 25. Hom MA, Stanley IH, Schneider ME, Joiner TE Jr. A systematic review of help-seeking and mental health service utilization among military service members. Clin Psychol Rev. 2017;53:59–78. doi: 10.1016/j.cpr.2017.01.008.
  • 26. Walsh CG, Chaudhry B, Dua P, Goodman KW, Kaplan B, Kavuluru R, et al. Stigma, biomarkers, and algorithmic bias: recommendations for precision behavioral health with artificial intelligence. JAMIA Open. 2020 Jan 22;ooz054. doi: 10.1093/jamiaopen/ooz054.
