Facebook to expand AI capabilities to prevent potential suicides


Facebook plans to extend its artificial intelligence capabilities related to suicide prevention to additional countries, according to a Nov. 27 Facebook blog post by Guy Rosen, the company's vice president of product management.

The social-media giant already encourages users to report posts that "[make] you concerned about [the poster's] well-being," according to Mr. Rosen. A group of specialists trained in detecting signs of suicide and self-harm reviews these posts to provide those in need with support options.

The company has leveraged AI-based pattern recognition to proactively identify and prioritize posts — including Facebook Live videos — that express suicidal thoughts. The program, which the company has tested in the U.S., is slated to roll out worldwide, apart from the European Union.

"When someone is expressing thoughts of suicide, it's important to get them help as quickly as possible," Mr. Rosen wrote. After flagging a post for suicidal language, the company might prompt the user to contact a help line or reach out to a friend, or might engage first responders.

"Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them," he continued. "It's part of our ongoing effort to help build a safe community on and off Facebook."


Copyright © 2021 Becker's Healthcare. All Rights Reserved.

