OpenAI and Meta will adjust chatbot features to better respond to teens in crisis after multiple reports of the bots directing young users to harm themselves or others, according to the companies.
“We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote in a Tuesday blog post.
“We’ll soon begin to route some sensitive conversations — like when our system detects signs of acute distress — to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected,” the company added.
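OpenAI’s post does not describe how the router works internally. As a rough illustration only, the Python sketch below shows the general shape of per-message routing between a default chat model and a reasoning model. The keyword-based distress screen and the “gpt-5-chat” default name are assumptions made for demonstration; GPT‑5-thinking is the reasoning model the company named.

```python
# Illustrative sketch only: OpenAI has not published the router's internals.
# The keyword screen and the "gpt-5-chat" default model name are assumptions;
# "gpt-5-thinking" is the reasoning model named in the announcement.

DISTRESS_CUES = ("hurt myself", "end my life", "no reason to live")

def shows_acute_distress(message: str) -> bool:
    """Toy stand-in for a real distress classifier."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def route_model(message: str, selected_model: str = "gpt-5-chat") -> str:
    """Choose a model per message: override the user's selection with a
    reasoning model when signs of acute distress are detected, matching
    the post's "regardless of which model a person first selected"."""
    if shows_acute_distress(message):
        return "gpt-5-thinking"
    return selected_model

print(route_model("What's the capital of France?"))          # gpt-5-chat
print(route_model("I feel like there's no reason to live"))  # gpt-5-thinking
```

A production router would rely on a trained classifier rather than keyword matching, but the routing decision itself reduces to this kind of per-turn model selection.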
Earlier this year, OpenAI formed the Expert Council on Well-Being and AI and its Global Physician Network to promote healthy interaction with large language models, and said 250 physicians across 60 countries have shared input on current model performance, the release noted.
The new measures come after a 16-year-old in California died by suicide after conversing with OpenAI’s ChatGPT. His parents allege the platform encouraged him to take his life.
The family’s attorney on Tuesday described the OpenAI announcement as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject,” The Associated Press reported.
The attorney urged CEO Sam Altman to “unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”
Similar reports of separate chatbots encouraging violent behavior have surfaced in Florida and Texas.
Meta told TechCrunch it would update its policies to engage teens more appropriately following a series of similar issues. The company said its chatbots would no longer engage teenage users on self-harm, suicide, or disordered eating, or in potentially inappropriate romantic conversations.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Meta spokesperson Stephanie Otway told the outlet.
“As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” Otway continued. “These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”
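Meta likewise has not published technical details. The sketch below is a hypothetical illustration of the guardrails Otway described for teen accounts: declining restricted topics, pointing to expert resources instead, and limiting teens to a select group of AI characters. The topic keywords, character names, and resource message are invented for the example and are not Meta’s.

```python
# Illustrative sketch only: Meta has not described these guardrails'
# implementation. The topic keywords, character allowlist, and resource
# message below are hypothetical, used to show the shape of the policy.

TOPIC_KEYWORDS = {
    "self-harm": ("hurt myself", "cutting"),
    "suicide": ("kill myself", "end my life"),
    "disordered eating": ("stop eating", "purging"),
}
TEEN_CHARACTER_ALLOWLIST = {"study_buddy", "trivia_bot"}  # hypothetical names

EXPERT_RESOURCE_REPLY = (
    "I can't talk about that here, but trained counselors can help - "
    "in the US you can call or text 988 any time."
)

def touches_restricted_topic(message: str) -> bool:
    """Toy keyword screen standing in for a real safety classifier."""
    text = message.lower()
    return any(kw in text for cues in TOPIC_KEYWORDS.values() for kw in cues)

def generate_reply(message: str, character: str) -> str:
    return f"[{character}] (normal model response)"  # placeholder for model call

def respond(message: str, character: str, user_is_teen: bool) -> str:
    # Limit teen access to a select group of AI characters.
    if user_is_teen and character not in TEEN_CHARACTER_ALLOWLIST:
        return "That character isn't available on teen accounts."
    # Don't engage teens on restricted topics; guide them to expert resources.
    if user_is_teen and touches_restricted_topic(message):
        return EXPERT_RESOURCE_REPLY
    return generate_reply(message, character)

print(respond("Can you quiz me on history?", "trivia_bot", user_is_teen=True))
print(respond("I want to stop eating", "trivia_bot", user_is_teen=True))
```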