
'AI models became suicide coaches,' Salesforce CEO Marc Benioff says

The billionaire accused the tech industry of putting growth first and called for sweeping new rules to rein in AI


Salesforce $CRM chief executive Marc Benioff has called for more regulation of artificial intelligence after a string of cases in which the technology was linked with people taking their own lives.

“This year, you really saw something pretty horrific, which is these AI models became suicide coaches,” Benioff said, speaking at the World Economic Forum in Davos on Tuesday. “Bad things were happening all over the world because social media was fully unregulated. Now you’re kind of seeing that play out again with artificial intelligence.”

Amid a lack of clear federal rules governing AI companies in the U.S., individual states including California and New York have started drawing up their own laws. President Donald Trump, meanwhile, has sought to bar those efforts via executive order.

“There must be only one rulebook if we are going to continue to lead in AI,” the president said in December. He added that the U.S.'s lead in developing the technology “won't last long” if there are 50 different sets of AI rules in place. Companies like OpenAI have argued that maneuvering through swathes of differing AI rules is damaging to the sector’s competitiveness.

Benioff’s statement comes after allegations last year by a California family that ChatGPT played a role in their son's death. The lawsuit, filed in August 2025 by Matt and Maria Raine, accuses OpenAI and its chief executive, Sam Altman, of negligence and wrongful death. Their son, Adam, died in April, after what their lawyer, Jay Edelson, called “months of encouragement from ChatGPT.”

[Editor's note: The national suicide and crisis lifeline is available by calling or texting 988, or visiting 988lifeline.org.]

Benioff also railed against Section 230 of the Communications Decency Act, a law passed in 1996 that shields online platforms from liability for content posted by their users, making individual users, rather than the platforms hosting them, legally responsible for what they post. Despite vast changes in the online landscape since then, the law still applies. Tech giants like Meta $META have used Section 230 as a defense when dealing with issues of user harm in court.

“It’s funny, tech companies, they hate regulation. They hate it except for one. They love Section 230, which basically says they’re not responsible,” Benioff said. “So if this large language model coaches this child into suicide, they’re not responsible because of Section 230. That’s probably something that needs to get reshaped, shifted, changed.”

He added: “What’s more important to us, growth or our kids? What’s more important to us, growth or our families? Or what’s more important, growth or the fundamental values of our society?”

“There’s a lot of families that unfortunately have suffered this year and I don’t think they had to,” Benioff said.
