Lawyers for the family of dead teen unimpressed by ChatGPT parental control rollout
Jay Edelson, who represents the parents, accused OpenAI CEO Sam Altman of "hiding behind the company’s blog"

The lawyer of a family suing OpenAI over the death of their 16-year-old son says the company’s response isn’t good enough.
OpenAI's new parental controls, unveiled in a Tuesday blog post, will allow parents to link their ChatGPT account with their child's, disable certain features such as memory and chat history, and receive alerts if the platform detects that a young user is in "acute distress."
The rollout followed allegations by the family that ChatGPT played a role in their son's death. But Jay Edelson, the lawyer representing them, criticized the measures in a statement on Wednesday.
“Instead of taking immediate action to remove a product we believe poses clear risks, OpenAI has responded with vague promises and public relations,” Edelson said.
Alongside unveiling new tools, OpenAI also said that earlier this year it began to assemble a “council of experts in youth development, mental health and human-computer interaction” to create a framework “for how AI can support people’s well-being and help them thrive.” It also reiterated that ChatGPT is designed to encourage users in crisis to seek professional help: “We’ve seen people turn to it in the most difficult of moments.”
But Edelson is unconvinced. “OpenAI doesn’t need an expert panel to tell it that ChatGPT 4o is dangerous,” he said.
The lawsuit, filed last week by Matt and Maria Raine in the Superior Court of California, County of San Francisco, accuses OpenAI and its chief executive and co-founder, Sam Altman, of negligence and wrongful death. They allege that the version of ChatGPT in use at the time, known as 4o, was "rushed to market … despite clear safety issues."
Their son, Adam, died in April, after what Edelson called "months of encouragement from ChatGPT." Court filings revealed conversations he allegedly had with the chatbot in which he disclosed suicidal thoughts. The family alleges Adam received responses that reinforced his "most harmful and self-destructive" ideas.
The teenager discussed a method of suicide on numerous occasions, including shortly before taking his own life. According to the filing, the AI model guided him on the efficacy of his chosen method.
Edelson also used Wednesday’s statement to call out Altman for “hiding behind the company’s blog.” He believes the CEO should “either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”