On Wednesday, Facebook announced it will integrate real-time suicide prevention tools into Facebook Live. It also said it will offer live-chat support from crisis support organizations such as the National Suicide Prevention Lifeline and the Crisis Text Line through Facebook Messenger, and make it easier to report suicide or self-injury. The most novel of the new tools: Facebook is testing artificial intelligence to identify warning signs of self-harm and suicide in Facebook posts and comments.
The goal, Facebook says, is to connect people in distress with people who can help.
In January, a 14-year-old girl hanged herself in her Florida foster home and a 33-year-old aspiring actor shot himself in a car on a Los Angeles street, both on Facebook Live. A young Turkish man who had broken up with his girlfriend told viewers before committing suicide on Facebook Live in October: “No one believed when I said I will kill myself. So watch this.”
Public suicide is not new, but technologies such as live streaming have helped these haunting public acts reach far more people.
Facebook, which opened up its Live feature to the public last year, has been pushing its more than 1.8 billion users to try it out, rolling out an advertising campaign and featuring live streams in users’ news feeds.
According to Dan Reidenberg, the executive director of Save.org, who advises Facebook, there have been seven known cases since live streaming became available, not all of them on Facebook. Facebook spokesman William Nevius declined to say how many people have broadcast on Facebook Live as they have taken their own lives.
“Unfortunately we have now seen a growing series of young people and adults committing suicide and showing this on Facebook Live,” says Nadine Kaslow, a former president of the American Psychological Association.
“There always has been this concern: Will something like this cause an epidemic or rash?” she said. “The answer is: We don’t know yet.”