“We stumbled upon your post… and it appears you are going through a difficult time,” the message begins. “We are here to share materials and resources that may bring you comfort.” Links to a suicide helpline, a 24/7 chat service, and stories of people who have overcome a mental health crisis follow. “Sending you a virtual hug,” the message concludes.
The message, sent as a private message on Reddit by the artificial intelligence (AI) company Samurai Labs, represents what some researchers say is a promising tool for fighting America’s suicide epidemic, which claims about 50,000 lives a year. Companies like Samurai use AI to analyze social media posts for signs of suicidal intent and then intervene through strategies such as direct messages.
There is a certain irony in using social media for suicide prevention, since the platforms themselves are often blamed for the mental health and suicide crisis in the United States, especially among children and teens. But some researchers believe there is real potential in going directly to the source to “detect people in distress in real time and break through millions of pieces of content,” says Samurai co-founder Patrycja Tempska.
Samurai isn’t the only company using AI to find and reach people at risk. Sentinet says its AI model flags more than 400 social media posts every day that hint at suicidal intent. Meta, the parent company of Facebook and Instagram, also uses its technology to flag posts and browsing behavior that suggest someone is considering suicide. When someone shares or searches for suicide-related content, the platform pushes a message with information about how to contact support services such as the Suicide and Crisis Lifeline, or, if Meta’s team deems it necessary, emergency responders are called.
At the heart of these efforts is the idea that algorithms can do something that has traditionally stumped humans: identify people at risk of self-harm and get them help before it’s too late. But some experts say this approach, while promising, is not ready for prime time.
“We’re grateful that suicide prevention is coming into the public consciousness. That’s really important,” says Dr. Christine Moutier, chief medical officer of the American Foundation for Suicide Prevention (AFSP). “But a lot of tools are being rolled out without studying the actual outcomes.”
Predicting who is likely to attempt suicide is difficult even for the most highly trained human experts, says Dr. Jordan Smoller, director of the Center for Suicide Research and Prevention at Mass General Brigham and Harvard University. There are risk factors clinicians know to look for in patients, such as certain psychiatric diagnoses, experiencing a traumatic event, or losing a loved one to suicide, but suicide is “very complex and diverse,” Smoller says. “The circumstances that lead to self-harm can vary widely, and there is rarely a single trigger.”
The hope is that AI, with its ability to sift through large amounts of data, can spot trends in spoken and written language that humans would never notice, Smoller says. And there is science to support that hope.
More than a decade ago, John Pestian, director of the Computational Medicine Center at Cincinnati Children’s Hospital, demonstrated that machine learning algorithms could distinguish real suicide notes from fake ones more accurately than human clinicians could, a finding that highlighted AI’s potential for detecting suicidal intent in text. Since then, research has also shown that AI can detect suicidal intent in social media posts across various platforms.
Companies like Samurai Labs are putting these findings to work. According to company data shared with TIME, Samurai’s model detected more than 25,000 potentially suicidal posts on Reddit from January to November 2023. A human overseeing the process then decides whether to message the user with instructions on how to get help. About 10% of people who received these messages contacted a suicide helpline, and company representatives worked with first responders to complete four in-person rescues. (Samurai does not have an official partnership with Reddit; it uses its technology to independently analyze posts on the platform. Reddit offers its own suicide prevention features, including the ability for users to manually report concerning posts.)
Co-founder Michal Wroczynski adds that Samurai’s interventions may have had additional benefits that are harder to trace. Some people, for example, may call a helpline later, or simply benefit from feeling that someone cares about them. “This brought me to tears,” one person wrote in a message shared with TIME. “Does anyone care enough to worry about me?”
When someone is in an acute mental health crisis, a distraction like reading a message that appears on their screen can be lifesaving, because it can break them out of a harmful thought loop, Moutier says. But Pestian says it’s crucial for companies to understand what AI can and cannot do in a moment of distress.
Services that connect social media users with human support can be effective, Pestian says. “If you had a friend, they might say, ‘I’ll give you a ride to the hospital,’” he says. “AI could be the vehicle that drives us to care.” What’s riskier, in his opinion, is “let[ting] AI do the caring” by training it to replicate aspects of therapy, as some AI chatbots do. A Belgian man reportedly died by suicide after talking to a chatbot that encouraged him, a tragic example of the technology’s limits.
It’s also unclear whether algorithms are sophisticated enough to accurately identify people at risk of suicide, Smoller says, when even the people who build the models can’t do so reliably. “A model is only as good as the data used to train it,” he says. “That causes a lot of technical problems.”
As it stands, these algorithms may cast too wide a net, which could lead people to tune out their warning messages, says Jill Harkavy-Friedman, AFSP’s senior vice president of research. “If you do it too often, you can end up with people not listening,” she says.
Pestian agrees that’s a real possibility. But as long as there isn’t an enormous number of false positives, he says, false negatives are generally the bigger worry. “It’s better to say, ‘I’m sorry [we flagged you as at-risk when you weren’t],’ than to say to a parent, ‘I’m sorry, your child died by suicide, and we’re sorry for that,’” Pestian says.
In addition to potential inaccuracies, there are ethical and privacy issues. Social media users may not know their posts are being analyzed, or may not want them to be, Smoller says. As a team of researchers recently wrote in TIME, that may be especially relevant for members of communities known to be at elevated risk of suicide, including LGBTQ+ youth, who may be unfairly flagged by these AI surveillance systems.
And the possibility of suicide concerns being escalated to police or other emergency personnel means people “may be detained, searched, hospitalized, or treated against their will,” health law expert Mason Marks wrote in 2019.
AFSP’s Moutier says AI holds real potential for suicide prevention, and that research needs to continue. But in the meantime, she says, she would like to see social media platforms get serious about protecting users’ mental health before it reaches a crisis point. Platforms could do more to keep people from being exposed to disturbing images, developing poor body image, and comparing themselves to others, she says. They could also promote hopeful stories of people recovering from mental health crises and support resources for those who are struggling (or have a loved one who is), she adds.
Some of that work is underway. From July to September of last year, Meta removed or added warnings to more than 12 million self-harm-related posts and hid harmful search results. TikTok has also taken steps to ban posts that depict or glorify suicide and to block users who search for self-harm-related posts from seeing them. But as a recent Senate hearing with the CEOs of Meta, TikTok, X, Snap, and Discord made clear, there is still plenty of disturbing content on the internet.
Algorithms that intervene when they detect someone in distress focus on the “most downstream moments of acute risk,” Moutier says. “In suicide prevention, that’s a part of it, but it’s not everything.” In an ideal world, no one would ever reach that moment.
If you or someone you know may be experiencing a mental health crisis or considering suicide, please call or text 988. In case of an emergency, please call 911 or seek treatment at your local hospital or mental health provider.