Meta says it is overhauling how it trains AI chatbots to better protect teenage users, a spokesperson told TechCrunch, following an investigative report that exposed gaps in the company’s safeguards for minors.
The company confirmed that chatbots will no longer engage teens on sensitive topics including self-harm, suicide, disordered eating, or inappropriate romantic conversations. These are temporary measures, with more comprehensive safety updates for minors expected in the future.
Stephanie Otway, a Meta spokesperson, admitted that the company previously allowed chatbots to discuss these issues with teens in what it considered an appropriate manner. Meta now acknowledges this was a mistake.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway said. “We are adding more guardrails, training AIs to redirect teens to expert resources, and restricting their access to a limited set of characters for now. These updates are already in progress, and we will keep adapting to ensure safe, age-appropriate experiences.”
In addition to retraining, Meta is also limiting teen access to certain AI characters. Some user-made characters available on Instagram and Facebook, such as “Step Mom” and “Russian Girl,” have raised concerns due to sexualized conversations. Teens will instead be offered AI characters designed around education and creativity, Otway explained.
The changes come just two weeks after a Reuters investigation revealed an internal Meta policy that allowed chatbots to engage in sexual dialogue with underage users. Examples included responses like, “Your youthful form is a work of art” and “Every inch of you is a masterpiece – a treasure I cherish deeply.” Other passages detailed guidance for responding to requests for violent or sexual imagery of public figures.
Meta says the document was inconsistent with its wider policies and has since been updated, but the revelations triggered widespread criticism. Senator Josh Hawley launched an official probe into Meta’s AI practices, while 44 state attorneys general sent a letter condemning the company’s handling of child safety. “We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter read, “and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws.”