
Parents Concerned As Disturbing ‘Momo Challenge’ Videos Encourage Child Suicide

SAN BRUNO (KPIX 5) — A new viral internet concern targeting children and encouraging them to commit suicide is striking fear into the hearts of parents and law enforcement. The “Momo Challenge” is raising concern because, though it might be a hoax, it has the potential for real consequences.

The challenge is based on an unrelated piece of Japanese art that is spliced into the middle of child-friendly videos such as Peppa Pig episodes and Fortnite clips. The embedded video issues a series of challenges for children to complete, ending with the goal of children harming themselves or, ultimately, taking their own lives. This particular phenomenon popped up this past summer as well, when warnings were issued by schools in the United Kingdom and police in Ireland.

“These things do pop up, which is what worries people. With this one in particular, there’s nothing that actually proves that it’s real, but it’s caused law enforcement to be worried, which is why everyone is talking about it right now,” says Ian Scherr, Editor-at-Large for CNET.

There is also suspicion that the Momo Challenge is a way to get children to chat with strangers on WhatsApp or to extract personal information from them. Scherr says this is a teachable moment. “This is a great example of a reminder of whether this is real or not – it’s important to be aware of what people are doing on the internet, especially teenagers and children,” Scherr said. “You need to be aware of what they are looking at, what they’re talking to, who they’re talking to and all of these types of things – because even if this isn’t real, it’s a reminder that they can get into pretty tough stuff if they’re not paying attention.”

YouTube issued a statement saying it has not seen any recent activity involving the “Momo Challenge,” but that such content is against its policies and parents should flag the videos if they do see them.

Police Issue Warning To Parents After ‘Momo Challenge’ Resurfaces

(CBS News) — Police and schools are issuing warnings to parents on social media after a popular WhatsApp challenge resurfaced in the United Kingdom.

The “Momo game” or “Momo challenge” gained international attention last summer and was initially considered a hoax, quickly becoming a widespread meme. In August 2018, law enforcement investigated Momo’s possible influence in the death of a 12-year-old in Argentina, alerting parents around the world to the potentially real dangers of the challenge.

Children who participate in the challenge contact a stranger posing as “Momo,” represented by a creepy image, and communicate primarily through the Facebook-owned messaging app WhatsApp. Momo encourages participants to complete various tasks if they want to avoid being “cursed.” Some of the tasks involve self-harm, and Momo asks participants to provide photographic evidence in order to continue the game. Ultimately, the game ends with Momo telling the participant to take their own life and record it for social media.

Read more on CBS News.

Mom Discovers Suicide Instructions In Video On YouTube Kids

SACRAMENTO (CBS) – A video offering self-harm tips — spliced between clips of a popular video game — has surfaced at least twice on YouTube and YouTube Kids since July, according to a pediatrician and mother who discovered it.

The suicide instructions are sandwiched between clips from the popular Nintendo game Splatoon and delivered by a man speaking in front of what appears to be a green screen — an apparent effort to make him blend in with the rest of the animated video. “Remember kids, sideways for attention, longways for results,” the man says, miming cutting motions on his forearm. “End it.”

The man featured is YouTuber Filthy Frank, who has more than 6.2 million subscribers and calls himself “the embodiment of everything a person should not be,” although there is no evidence that Frank, whose real name is George Miller, was involved in creating the doctored video. He did not immediately respond to CBS News’ request for comment.

Read the rest of the story on CBSNews.com.

Police Find Suicidal Man Using Nothing But Dark, Blurry Photo

LAKEWOOD, Colo. (CBS4) — A man was threatening to kill himself and police were desperate to find him before he hurt himself. But the only clue they had to go on was a dark, grainy photo of some lights in the distance.


Agent Kaylee Monn with the Lakewood Police Department was on the call late last year. The man’s family said he had called to say goodbye — and sent one last photo of the parking lot he was in.

“In cases involving suicidal people or welfare checks, time is really of the essence,” Monn said in a video posted by the police department. “We want to try to get to them as quickly as we can to help them, prevent any potential injury they may cause to themselves or other people.”


Monn and her colleagues studied the picture and recognized some landmarks.


“I asked a couple of my coworkers who were there on the call with me and, looking through the picture, we all kind of [worked] together and realized it was Forsberg Park,” Monn said.


Monn, who is trained in Crisis Intervention Techniques, went to the park to search for the man.

“I went over there and I found this gentleman’s car in the parking lot. Thankfully, he hadn’t hurt himself yet,” Monn said. “I was able to talk to him, and he was pretty clearly upset; he was crying.”

Ultimately, Monn was able to talk him into going to the hospital to get help.

“If it wasn’t for the picture we may have never found him,” Monn said.

“It was a win for us as a department and definitely for his family because we didn’t have to make any sad notifications that he had hurt himself or that we couldn’t find him,” she said. “Thankfully we found him just in time and saved a life.”

Facebook Screens Posts For Suicide Risk, And Health Experts Have Concerns

MENLO PARK (CBS SF) — A pair of public health experts has called for Menlo Park-based Facebook to be more transparent in the way it screens posts for suicide risk and to follow certain ethical guidelines, including obtaining informed consent from users. The social media giant details its suicide prevention efforts online and says it has helped first responders conduct thousands of wellness checks globally based on reports received through those efforts. The authors said Facebook’s trial to reduce deaths by suicide is “innovative” and that the company deserves “commendation for its ambitious goal of using data science to advance public health.” But the question remains: Should Facebook change the way it monitors users for suicide risk?

‘People need to be aware that … they may be experimented on’

Since 2006, Facebook has worked with experts in suicide prevention and safety, according to the company. In 2011, Facebook partnered with the National Suicide Prevention Lifeline to launch suicide prevention efforts, including enabling users to report suicidal content they may see posted by a friend on Facebook. The person who posted the content would receive an email from Facebook encouraging them to call the National Suicide Prevention Lifeline or chat with a crisis worker.

In 2017, Facebook expanded those suicide prevention efforts to include artificial intelligence that can identify posts, videos and Facebook Live streams containing suicidal thoughts or content. That year, the National Suicide Prevention Lifeline said it was proud to partner with Facebook and that the social media company’s innovations allow people to reach out for and access support more easily.

“It’s important that community members, whether they’re online or offline, don’t feel that they are helpless bystanders when dangerous behavior is occurring,” John Draper, director of the National Suicide Prevention Lifeline, said in a press release in 2017. “Facebook’s approach is unique. Their tools enable their community members to actively care, provide support, and report concerns when necessary.”

When AI tools flag potential self-harm, those posts go through the same human analysis as posts reported directly by Facebook users. The move to use AI was part of an effort to further support at-risk users; the company had faced criticism for its Facebook Live feature, which some users have used to live-stream graphic events, including suicide.

In a blog post, Facebook detailed how the AI looks for patterns in posts or comments that may contain references to suicide or self-harm. According to Facebook, comments like “Are you OK?” and “Can I help?” can be an indicator of suicidal thoughts. If the AI or another Facebook user flags a post, the company reviews it. If the post is determined to need immediate intervention, Facebook may work with first responders, such as police departments, to send help.

Yet an opinion paper published Monday in the journal Annals of Internal Medicine claims that Facebook lacks transparency and ethics in its efforts to screen users’ posts, identify those who appear at risk for suicide and alert emergency services to that risk. The paper argues that Facebook’s suicide prevention efforts should be held to the same standards and ethics as clinical research, such as requiring review by outside experts and informed consent from people included in the collected data.

Dr. John Torous, director of the digital psychiatry division in Beth Israel Deaconess Medical Center’s Department of Psychiatry in Boston, and Ian Barnett, assistant professor of biostatistics at the University of Pennsylvania’s Perelman School of Medicine, co-authored the new paper.

“There’s a need for discussion and transparency about innovation in the mental health space in general. I think that there’s a lot of potential for technology to improve suicide prevention, to help with mental health overall, but people need to be aware that these things are happening and, in some ways, they may be experimented on,” Torous said.

“We all agree that we want innovation in suicide prevention. We want new ways to reach people and help people, but we want it done in a way that’s ethical, that’s transparent, that’s collaborative,” he said.
“I would argue the average Facebook user may not even realize this is happening. So they’re not even informed about it.”

In 2014, Facebook researchers conducted a study on whether negative or positive content shown to users resulted in those users producing negative or positive posts. That study sparked outrage, as users claimed they were unaware it was even being conducted. The Facebook researcher who designed the experiment, Adam D.I. Kramer, said in a post that the research was part of an effort to improve the service — not to upset users.

Since then, Facebook has made other efforts to improve its service. Last week, the company announced that it has been partnering with experts to help protect users from self-harm and suicide. The announcement came after news of the death by suicide of a girl in the United Kingdom whose Instagram account reportedly contained distressing content about suicide. Facebook owns Instagram.

“Suicide prevention experts say that one of the best ways to prevent suicide is for people in distress to hear from friends and family who care about them. Facebook is in a unique position to help because of the friendships people have on our platform — we can connect those in distress with friends and organizations who can offer support,” Antigone Davis, Facebook’s global head of safety, wrote in an email Monday in response to questions about the new opinion paper.

“Experts also agree that getting people help as fast as possible is crucial — that is why we are using technology to proactively detect content where someone might be expressing thoughts of suicide. We are committed to being more transparent about our suicide prevention efforts,” she said.

Facebook has also noted that using technology to proactively detect content in which someone might be expressing thoughts of suicide does not amount to collecting health data. The technology does not measure overall suicide risk for an individual or anything about a person’s mental health, the company says.

What health experts want from tech companies

Arthur Caplan, a professor and founding head of the division of bioethics at NYU Langone Health in New York, applauded Facebook for wanting to help in suicide prevention but said the new opinion paper is correct that Facebook needs to take additional steps for better privacy and ethics.

“It’s another area where private commercial companies are launching programs intended to do good, but we’re not sure how trustworthy they are or how private they can keep, or are willing to keep, the information that they collect, whether it’s Facebook or somebody else,” said Caplan, who was not involved in the paper. “This leads us to the general question: Are we keeping enough of a regulatory eye on big social media? Even when they’re trying to do something good, it doesn’t mean that they get it right,” he said.

Several technology companies — including Amazon and Google — probably have access to large amounts of health data, or most likely will in the future, said David Magnus, a professor of medicine and biomedical ethics at Stanford University who was not involved in the new opinion paper.

“All these private entities that are primarily not thought of as health care entities or institutions are in a position to potentially have a lot of health care information, especially using machine learning techniques,” he said. “At the same time, they’re almost completely outside of the regulatory system that we currently have that exists for addressing those kinds of institutions.”

For instance, Magnus noted that most tech companies fall outside the scope of the “Common Rule,” or the Federal Policy for the Protection of Human Subjects, which governs research on humans.

“This information that they’re gathering — and especially once they’re able to use machine learning to make health care predictions and have health care insight into these people — those are all protected in the clinical realm by things like HIPAA for anybody who’s getting their health care through what’s called a covered entity,” Magnus said. “But Facebook is not a covered entity, and Amazon is not a covered entity. Google is not a covered entity,” he said. “Hence, they do not have to meet the confidentiality requirements that are in place for the way we address health care information.”

HIPAA, or the Health Insurance Portability and Accountability Act, requires the safe and confidential handling of a person’s protected health information and addresses the disclosure of that information if or when needed.

Often, the only privacy protections social media users have are whatever agreements are outlined in the policies they sign or “click to agree” with when setting up their accounts, Magnus said.

“There’s something really weird about implementing, essentially, a public health screening program through these companies that are both outside of these regulatory structures that we talked about and, because they’re outside of that, their research and the algorithms themselves are completely opaque,” he said.

‘The problem is that all of this is so secretive’

It remains a concern that Facebook’s suicide prevention efforts are not being held to the same ethical standards as medical research, said Dr. Steven Schlozman, co-director of The Clay Center for Young Healthy Minds at Massachusetts General Hospital, who was not involved in the new opinion paper.

“In theory, I would love if we can take advantage of the kind of data that all of these systems are collecting and use it to better care for our patients. That would be awesome. I don’t want that to be a closed book process, though. I want that to be open with outside regulators. … I’d love for there to be some form of informed consent,” Schlozman said.

“The problem is that all of this is so secretive on Facebook’s side, and Facebook is a multimillion-dollar for-profit company. So the possibility of this data being collected and being used for things other than the apparent beneficence that it appears to be for — it’s just hard to ignore that,” he said. “It really feels like they’re kind of transgressing a lot of pre-established ethical boundaries.”

How to get help: In the US, call the National Suicide Prevention Lifeline at 1-800-273-8255. The International Association for Suicide Prevention and Befrienders Worldwide can also provide contact information for crisis centers around the world.

CNN contributed to this report.

Lawsuit Blames Sorority In Northwestern Athlete’s Death

(AP) — The mother of a player on the Northwestern University women’s basketball team who died in 2017 has sued a sorority, claiming hazing by its members led to her daughter’s suicide.

Felicia Hankins says the hazing of Jordan Hankins by members of the Alpha Kappa Alpha sorority caused severe anxiety and depression and led to her death in January 2017.

The lawsuit, filed in U.S. District Court in Chicago, also names the Gamma Chi undergraduate chapter of the sorority at Northwestern, the Delta Chi Omega graduate chapter of the sorority and sorority executives. It contends Jordan Hankins was “subjected to physical abuse including paddling, verbal abuse, mental abuse, financial exploitation, sleep deprivation, items being thrown and dumped on her, and other forms of hazing intended to humiliate and demean her.”

Officials with Chicago-based Alpha Kappa Alpha couldn’t be reached for comment. Hankins was recruited out of Lawrence North High School in Indianapolis.