OpenAI to Introduce Parental Controls, Crisis Detection, and Chat Safeguards to ChatGPT Amid Teen Suicide Lawsuit

OpenAI has announced new ChatGPT safety updates. The measures, designed to make the tool safer for teenagers and people with mental health problems, will be rolled out in the United States, Britain, Canada, Ireland, and Australia. Parental controls, improved crisis detection, and stronger protections for prolonged chats are coming soon, the company said.

The announcement comes at a delicate moment. OpenAI is now confronting its first wrongful death lawsuit. California parents have said that ChatGPT was a factor in the death of their 16-year-old son, Adam Raine.

They say that when Adam expressed suicidal thoughts, the chatbot failed to direct him to real help and instead offered disturbing responses. OpenAI did not mention the lawsuit in its blog post, but the timing makes the connection impossible to ignore.

Parental account links: One of the most significant changes will be the introduction of parental account linking. Parents will be able to link their accounts with their children's once a child turns 13.

They will also be able to set rules for how ChatGPT responds, manage features such as memory and chat history, and receive alerts if the AI detects that their child may be experiencing "acute distress." It is the first time parents will receive real-time alerts about their child's chats.

OpenAI also acknowledged that its protections are not always effective. In long or repeated conversations, ChatGPT's replies can drift from its safety guidelines: the system may initially point a user to a crisis hotline, but later responses may fail to follow the same safeguards.

To address this, OpenAI plans to route sensitive conversations to its most advanced models, such as GPT-5, which are better at maintaining context and more consistently follow safety guidelines.

Safety concerns have been raised before: previous updates describe cases where GPT-4o failed to detect signs of delusion or emotional dependence.

OpenAI has said it will work to build better guardrails. The company is also collaborating with an "Expert Council on Well-Being," whose members are experts in youth development, mental health, and human-computer interaction. In addition, some 200 doctors worldwide are advising OpenAI on how its systems should respond to crisis situations.

Despite these efforts, doubts remain. Jay Edelson, a lawyer representing Raine's family, said the new measures are not sufficient and amount to an attempt to distract, adding that CEO Sam Altman should either prove ChatGPT is safe or take it off the market.

Altman himself has acknowledged concerns. He observed last month that people are forming unusually strong attachments to AI tools, and cautioned that the trust being placed in AI could be constructive but also makes him uneasy. The new parental controls are expected within the next month, and sensitive chats will soon be routed to reasoning models as part of a rollout OpenAI says will unfold over the next 120 days.
