
Facebook has built software for suicide prevention. Its all-new “proactive detection” AI technology examines all posts for patterns of self-destructive thoughts and, when the risk appears imminent, can send mental health support to the at-risk user, alert their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can reduce how long it takes to send help.
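Facebook hasn’t published its model or code, but the flow it describes — score a post for risk, then route anything above a threshold to a human review queue — can be sketched roughly. The snippet below is a minimal illustration only: the risk_score() classifier, the phrase list, and the 0.8 threshold are all hypothetical stand-ins, not Facebook’s actual system.

```python
# Illustrative sketch only -- not Facebook's actual system.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed cutoff for escalating to human moderators

@dataclass
class Post:
    post_id: str
    text: str

def risk_score(post: Post) -> float:
    """Hypothetical classifier: probability that a post signals self-harm risk."""
    # A real system would use a trained text model; this stub just keyword-matches.
    worrying_phrases = ("can't go on", "goodbye everyone", "no reason to live")
    return 1.0 if any(p in post.text.lower() for p in worrying_phrases) else 0.0

def triage(posts):
    """Flag posts for human review instead of waiting for user reports."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    flagged = triage([Post("1", "Had a great day at the beach"),
                      Post("2", "Goodbye everyone, I can't go on")])
    for post in flagged:
        print(f"Escalating post {post.post_id} to human review")
```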

Testing AI Tool For Suicide Prevention

Facebook earlier experimented with using AI to recognize troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scan all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy rules on profiling users based on sensitive information impede the use of this tech.

Facebook will also employ AI to prioritize particularly risky or urgent user reports so they are addressed more quickly by moderators, along with tools that instantly surface local-language resources and first-responder contact info. It is also dedicating more moderators to suicide prevention, training them to handle cases 24/7, and now has 80 local partners, such as Save.org, the National Suicide Prevention Lifeline, and Forefront, from which to provide resources to at-risk users and their networks.
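Facebook hasn’t said how that prioritization is implemented; one plausible shape is a simple urgency-scored priority queue, sketched below. The urgency scores, class names, and queue design here are assumptions for illustration, not Facebook’s published design.

```python
# Illustrative sketch only -- the scoring and queueing details are assumed.
import heapq
from typing import Optional

class ReportQueue:
    """Serve the most urgent user reports to moderators first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equally urgent reports keep arrival order

    def add(self, report_id: str, urgency: float) -> None:
        # heapq is a min-heap, so negate urgency to pop the highest score first.
        heapq.heappush(self._heap, (-urgency, self._counter, report_id))
        self._counter += 1

    def next_for_moderator(self) -> Optional[str]:
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = ReportQueue()
queue.add("report-low", urgency=0.2)
queue.add("report-urgent", urgency=0.95)
print(queue.next_for_moderator())  # -> "report-urgent"
```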

What the Company Says

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks,” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian concerns about how else the technology could be applied. Facebook had no clear answers about how it would avoid scanning for things like political dissent or petty crime, with Rosen saying only, “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects of the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”
