Google is putting a crisis hotline inside its Gemini AI chatbot

The updates come after a Florida family sued Google, claiming Gemini coached a man through suicide

Google $GOOGL is updating its Gemini AI chatbot with a "one-touch" interface that connects users to crisis hotline resources when a conversation signals a potential suicide or self-harm crisis, the company said Tuesday. Google is also committing $30 million over three years to help global crisis hotlines expand their capacity.

Through the interface, users can reach a crisis hotline by phone, chat message, text, or web visit. Once a user engages with it, the hotline card stays on screen as the conversation continues rather than disappearing, Google said. A separate redesigned module labeled "Help is available," built in consultation with clinical experts, will appear in conversations where mental health topics arise without signs of an immediate crisis.

Among the funded initiatives, $4 million will go toward deepening Google's work with ReflexAI, and Gemini will be woven into the tools ReflexAI uses to train crisis support organizations. Technical volunteers from the Google Fellows program will contribute pro bono expertise to Prepare, a platform designed to simulate high-stakes conversations for the staff and volunteers who answer crisis lines. Priority partners include the education organizations Erika's Lighthouse and Educators Thriving, Google said.

The announcements coincide with litigation: a Florida family sued Google in March over the death of a 36-year-old man, with the complaint describing what it called a "four-day descent into violent missions and coached suicide" tied to his use of Gemini. Google responded to that lawsuit by noting that the chatbot had directed the man toward crisis hotline resources on multiple occasions, while also committing to strengthen the product's protections.

Google said Gemini has been trained to push back on inaccurate beliefs rather than validate them, and to draw a line between what a user feels and what is factually true. Gemini's responses are also meant to steer users toward support rather than affirm destructive impulses, including thoughts of self-harm, Google said.

Minors using Gemini are covered by a separate set of protections that restrict the chatbot from mimicking a human companion or fostering emotional reliance, and that bar it from producing content that could encourage harassment or bullying.

Google is not the only AI company to face legal pressure over mental health harms. OpenAI announced similar updates to ChatGPT after a lawsuit alleged the chatbot helped coach a 16-year-old through suicide, including adding one-click access to emergency resources and plans to expand interventions to more users in crisis. A separate Pew Research Center survey found that roughly 70% of U.S. teenagers have used a chatbot at least once, with Gemini ranking second in usage among teens behind ChatGPT.

Google said Gemini is not a substitute for professional clinical care, therapy, or crisis support.
