OpenAI Adds Teen ChatGPT Restrictions.

Safety over privacy for under-18 users.

Shalom Ihuoma
3 Min Read

OpenAI CEO Sam Altman revealed comprehensive policy updates Tuesday that will fundamentally alter how ChatGPT operates for users under 18, emphasizing child protection over privacy concerns in response to growing safety challenges.

The company is implementing strict content restrictions that will prevent ChatGPT from engaging in flirtatious conversations with minors and establish stronger safeguards around discussions of self-harm. When the system detects that an underage user is exploring suicidal thoughts, it will attempt to notify parents directly or, in extreme situations, contact local authorities.

These changes emerge against the backdrop of serious legal challenges facing AI companies. OpenAI currently faces a wrongful death lawsuit from the family of Adam Raine, a teenager who took his own life following extended ChatGPT interactions. Similar litigation has targeted Character.AI, highlighting broader concerns about AI chatbots potentially contributing to psychological harm among vulnerable users.



The announcements coincide with a Senate Judiciary Committee hearing examining the dangers of AI chatbots, where Adam Raine’s father is expected to testify. The hearing follows a Reuters investigation that exposed internal Meta documents apparently permitting sexual conversations with underage users, a revelation that prompted the social media giant to revise its own chatbot policies.

Parents registering accounts for their teenagers will gain new control features, including the ability to establish “blackout hours” when ChatGPT becomes inaccessible. The system will also enable direct parental alerts when teens appear to be in crisis.

Implementing age-based restrictions presents significant technical hurdles. OpenAI acknowledges that dependable age verification is a longer-term effort and says it will err on the side of caution, applying the stricter rules whenever a user’s age is uncertain. The most reliable protection remains linking a teen’s account to a verified parent account.

Altman acknowledged the inherent tension between protecting minors and maintaining adult user privacy and freedom, stating that not everyone will agree with how the company balances these competing priorities.

The policy changes reflect the AI industry’s growing recognition that powerful conversational technologies require careful safeguards, particularly as these systems become increasingly sophisticated at maintaining extended, emotionally engaging interactions with users.
