OpenAI and Meta say they're fixing AI chatbots to better respond to
teens in distress
[September 03, 2025]
By MATT O'BRIEN
Artificial intelligence chatbot makers OpenAI and Meta say they are
adjusting how their chatbots respond to teenagers asking questions about
suicide or showing signs of mental and emotional distress.
OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new
controls enabling parents to link their accounts to their teen's
account.
Parents can choose which features to disable and “receive notifications
when the system detects their teen is in a moment of acute distress,”
according to a company blog post that says the changes will go into
effect this fall.
Regardless of a user's age, the company says its chatbots will attempt
to redirect the most distressing conversations to more capable AI models
that can provide a better response.

EDITOR’S NOTE — This story includes discussion of suicide. If you or
someone you know needs help, the national suicide and crisis lifeline in
the U.S. is available by calling or texting 988.
The announcement comes a week after the parents of 16-year-old Adam
Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached
the California boy in planning and taking his own life earlier this
year.
Jay Edelson, the family's attorney, on Tuesday described the OpenAI
announcement as “vague promises to do better” and “nothing more than
OpenAI’s crisis management team trying to change the subject.”
Altman "should either unequivocally say that he believes ChatGPT is safe
or immediately pull it from the market,” Edelson said.
Meta, the parent company of Instagram, Facebook and WhatsApp, also
said it is now blocking its chatbots from talking with teens about
self-harm, suicide, disordered eating and inappropriate romantic
conversations, and instead directing them to expert resources. Meta
already offers parental controls on teen accounts.
A study published last week in the medical journal Psychiatric
Services found inconsistencies in how three popular artificial
intelligence chatbots responded to queries about suicide.
The study by researchers at the RAND Corporation found a need for
“further refinement” in ChatGPT, Google’s Gemini and Anthropic’s
Claude. The researchers did not study Meta's chatbots.
The study's lead author, Ryan McBain, said Tuesday that “it’s
encouraging to see OpenAI and Meta introducing features like
parental controls and routing sensitive conversations to more
capable models, but these are incremental steps.”
“Without independent safety benchmarks, clinical testing, and
enforceable standards, we’re still relying on companies to
self-regulate in a space where the risks for teenagers are uniquely
high,” said McBain, a senior policy researcher at RAND and assistant
professor at Harvard University’s medical school.