Facebook is using A.I. to help predict when users may be suicidal

Joseph Gerace, the sheriff of New York’s Chautauqua County, has seen a lot of suicides. As a child, well before his years in public service, his best friend’s father took his own life.

So when Gerace heard about a call last July from a Facebook security team member in Ireland to a dispatcher at the 911 center in his county, it struck a familiar chord.

The Facebook representative was calling to alert local officials about a resident who needed urgent assistance.

“This is helping us in public safety,” Gerace, who’s been in law enforcement for 39 years, told reporters. “We’re not intruding on people’s personal lives. We’re trying to intervene when there’s a crisis.”

The Chautauqua County case, first reported in August by the local Post-Journal, was pursued by Facebook because the company had been informed that a woman “had posted threats of harming herself on her Facebook page,” the newspaper said.

For years, the company has allowed users to report suicidal content to in-house reviewers, who evaluate it and decide whether a person should be offered support from a suicide prevention hotline or, in extreme cases, have Facebook’s law enforcement response team intervene.
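The workflow described there is essentially a triage pipeline: a user report goes to a human reviewer, whose judgment determines the escalation level. A minimal sketch of that routing logic, with severity labels and function names that are hypothetical rather than Facebook's actual system, might look like this:

```python
from enum import Enum, auto

class Severity(Enum):
    NONE = auto()        # reviewer finds no credible risk
    CONCERNING = auto()  # offer support resources / hotline information
    IMMINENT = auto()    # escalate to the law enforcement response team

def route_reported_post(reviewer_severity: Severity) -> str:
    """Hypothetical triage: map a human reviewer's judgment to an action."""
    if reviewer_severity is Severity.IMMINENT:
        return "notify_law_enforcement_response_team"
    if reviewer_severity is Severity.CONCERNING:
        return "send_suicide_prevention_resources"
    return "no_action"

# Example: a reviewer judges a reported post as concerning but not imminent.
print(route_reported_post(Severity.CONCERNING))
```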

But this is Facebook, the land of algorithms and artificial intelligence, so there must be an engineering solution.

About a year ago, Facebook added technology that automatically flags posts with expressions of suicidal thoughts for the company's human reviewers to analyze. And in November, Facebook offered evidence that the new system was making an impact.
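Facebook has not published the model behind this proactive detection. The basic idea, though, is to score post text and queue high-scoring posts for human review, which can be illustrated with a toy sketch; the phrase list, threshold, and function names below are assumptions for illustration, and a production system would rely on a learned classifier rather than hand-written rules.

```python
# Hypothetical phrase list; a real system would use a trained text classifier.
RISK_PHRASES = ["end it all", "can't go on", "want to disappear"]

def score_post(text: str) -> float:
    """Return a crude 0..1 risk score based on phrase matches."""
    text = text.lower()
    hits = sum(phrase in text for phrase in RISK_PHRASES)
    return min(1.0, hits / len(RISK_PHRASES))

REVIEW_THRESHOLD = 0.3  # hypothetical cutoff for queueing a post

def maybe_queue_for_review(post_text: str, review_queue: list) -> None:
    """Add a post to the human-review queue if its score crosses the threshold."""
    if score_post(post_text) >= REVIEW_THRESHOLD:
        review_queue.append(post_text)

queue: list = []
maybe_queue_for_review("honestly I just want to disappear", queue)
maybe_queue_for_review("beach day with the kids", queue)
print(len(queue))  # -> 1: only the first post is flagged for a human reviewer
```

The key design point is that the automated step never takes action on its own; it only prioritizes posts for the human reviewers described above.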

“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts,” the company said in a blog post at the time.

Facebook now says the enhanced program is flagging 20 times more cases of suicidal thoughts for content reviewers, and twice as many people are receiving Facebook’s suicide prevention support materials.

The company has been deploying the updated system in more languages and improving suicide prevention on Instagram, though the tools there are at an earlier stage of development.

On Wednesday, Facebook provided more details on the underlying technology.

“We feel like it’s very important to get people help as quickly as we possibly can and to get as many people help as we can,” said Dan Muriello, a software engineer on Facebook’s compassion team, which was formed in 2015 and deals with topics like breakups and deaths.

While posts expressing suicidal thoughts are very rare (perhaps one in a million posts), suicide is a pervasive threat. It is among the top 10 causes of death in the U.S. and the second-leading cause among people ages 15 to 34, behind only unintentional injury, according to the Centers for Disease Control and Prevention.
