Front Psychiatry. 2018 Dec 3;9:650. doi: 10.3389/fpsyt.2018.00650

Table 1. Recommendations for risk mitigation when applying AI for suicide prevention in healthcare settings.

Domain: Consent
Recommendations for implementation:
- Develop informed consent for patients to sign, detailing the actions and limitations of the AI
- Develop a similar consent for providers
- Provide patients with the option to opt out of AI monitoring
- Set time limits or an expiration date on consent
- Re-consent each year as the technology evolves
- Have consent documents approved by experts and a medical review board
Recommendations for research:
- Develop consent forms for all literacy levels and test them for understanding
- Develop patient education materials that detail the purpose of the AI and evaluate them for understanding

Domain: Controls
Recommendations for implementation:
- Adopt standards for suicide monitoring with AI, such as determining what percentage of at-risk individuals will be monitored
- Form an AI oversight panel with multidisciplinary membership
- Request provider feedback routinely and update systems accordingly
- Create a system for providers to defer or activate risk monitoring, with an explanation
- Log model successes and failures and re-train models accordingly (a logging and re-training sketch follows this table)
Recommendations for research:
- Compare a provider-informed model with an AI-only model to assess whether provider feedback increases accuracy (a comparison sketch follows this table)

Domain: Communication
Recommendations for implementation:
- Conduct focus groups with stakeholders to assess the appropriateness and utility of integrating AI into healthcare
- Provide communication materials that providers can use to discuss the AI and the monitoring process
Recommendations for research:
- Develop provider materials and elicit feedback on their appropriateness
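The "log model successes and failures, re-train models" control in the table implies an outcome log that records each prediction alongside the later-adjudicated outcome and feeds periodic re-training. The table does not specify an implementation; the following is a minimal sketch under assumed conventions, where the record structure, the `MIN_NEW_OUTCOMES` threshold, and the use of a logistic-regression classifier are all hypothetical choices, not the article's method.

```python
# Minimal sketch of an outcome log with periodic re-training.
# RiskRecord, OutcomeLog, retrain_if_due, and MIN_NEW_OUTCOMES are hypothetical names.
from dataclasses import dataclass, field
from typing import List, Tuple
from sklearn.linear_model import LogisticRegression


@dataclass
class RiskRecord:
    features: List[float]   # de-identified features available at prediction time
    predicted_risk: int     # model output: 1 = flagged at-risk, 0 = not flagged
    observed_outcome: int   # later-adjudicated outcome, used as the training label


@dataclass
class OutcomeLog:
    records: List[RiskRecord] = field(default_factory=list)

    def add(self, record: RiskRecord) -> None:
        self.records.append(record)

    def successes_and_failures(self) -> Tuple[int, int]:
        # "Success" here means the prediction matched the adjudicated outcome.
        hits = sum(r.predicted_risk == r.observed_outcome for r in self.records)
        return hits, len(self.records) - hits


MIN_NEW_OUTCOMES = 200  # hypothetical threshold of adjudicated outcomes before re-training


def retrain_if_due(log: OutcomeLog, model: LogisticRegression) -> LogisticRegression:
    """Re-fit the model on the accumulated log once enough outcomes exist."""
    if len(log.records) < MIN_NEW_OUTCOMES:
        return model
    X = [r.features for r in log.records]
    y = [r.observed_outcome for r in log.records]
    model.fit(X, y)
    return model
```

In practice the logged successes and failures would also be reported to the oversight panel recommended in the Controls domain, so that re-training decisions are reviewed rather than fully automatic.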
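The research recommendation to compare a provider-informed model with an AI-only model can be illustrated with a small evaluation sketch. Everything below is hypothetical: the synthetic data, the `provider_concern` feature standing in for clinician feedback, and the choice of cross-validated AUC as the comparison metric are assumptions, not procedures described in the article.

```python
# Minimal sketch: AI-only vs. provider-informed model comparison on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X_ehr = rng.normal(size=(n, 5))            # stand-in for EHR-derived features
provider_concern = rng.integers(0, 2, n)   # stand-in for clinician-flagged concern
y = rng.integers(0, 2, n)                  # stand-in for adjudicated outcomes

X_ai_only = X_ehr
X_provider_informed = np.column_stack([X_ehr, provider_concern])

auc_ai_only = cross_val_score(
    LogisticRegression(max_iter=1000), X_ai_only, y, cv=5, scoring="roc_auc"
).mean()
auc_informed = cross_val_score(
    LogisticRegression(max_iter=1000), X_provider_informed, y, cv=5, scoring="roc_auc"
).mean()
print(f"AI-only AUC: {auc_ai_only:.3f}  provider-informed AUC: {auc_informed:.3f}")
```

With real data, a gain in the provider-informed column would support the table's premise that routine provider feedback improves accuracy; with the random data above, the two scores should be roughly equal.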