CMAJ: Canadian Medical Association Journal. 2018 Jan 29;190(4):E119. doi: 10.1503/cmaj.109-5549

AI opens new frontier for suicide prevention

Lauren Vogel
PMCID: PMC5790564  PMID: 29378875

In the early hours of the morning, a distraught teen posts on social media about wanting to hurt herself. Her friends and family are sleeping, but an algorithm answers, providing links to 24/7 help.

Artificial intelligence already sorts what you see on social media, but increasingly, it’s being harnessed to monitor and respond to mental health crises.

Canada is at the cutting edge of these developments. The federal government recently tapped Advanced Symbolics, an Ottawa-based AI company, to screen social media posts for warning signs of suicide. According to the contract, the company will work with the government to define “suicide-related behavior” — from thoughts to threats to attempts — and conduct market research to identify related patterns of online behavior. For example, do people who self-harm tweet about it?

Based on the findings, the company will conduct a three-month pilot monitoring online discussions about suicide, after which the Public Health Agency of Canada “will determine if future work would be useful for ongoing suicide surveillance.”

The project will use public data and won’t identify individuals. According to Advanced Symbolics chief scientist Kenton White, the goal is to identify “hot spots” of suicide risk so the government can direct resources to communities before tragedy strikes. The company previously used the same technology to forecast election outcomes in Canada, the United States and Europe, predicting the popular-vote breakdown in the 2016 US presidential election to within 0.7 percentage points.

In November, Facebook rolled out an AI program that scans posts and live videos for threats of suicide and self-harm, and alerts a team of human reviewers, who can contact emergency responders if needed. “In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times,” Facebook founder Mark Zuckerberg announced on the site. The company also reported that its AI has been able to identify and remove 99% of posts related to the terrorist groups ISIS and Al Qaeda before users flagged the content, “and in some cases, before it goes live on the site.”

[Figure: Canada is joining a vanguard of countries and companies using artificial intelligence and social media to assess mental health risk. Image courtesy of peepo/iStock]

Other companies are using social media monitoring to explore the link between online conversations and mental health. A recent Brandwatch study of 12.9 million posts from July 2014 to June 2017 found that people in the United Kingdom who posted about symptoms of mental disorders and about being bullied were more than six times as likely to mention self-harm. The study also found that only 23% of people in the UK who repeatedly posted about “mental health risk symptoms,” such as sleep disruption, anxiety and appetite changes, later mentioned seeking help, compared with 33% in the United States. Among those who didn’t describe receiving treatment, posts about their symptoms escalated in both frequency and negativity.

Footnotes

Posted on cmajnews.com on Jan. 8, 2018.

