OpenAI Introduces Age Prediction for ChatGPT Consumer Accounts

OpenAI said Tuesday it is rolling out an age prediction model across ChatGPT consumer plans, a move aimed at better identifying users under the age of 18 and automatically applying additional safety protections.


The model uses a mix of account-level and behavioral signals, including usage patterns over time, account age, typical activity hours, and a user’s stated age, according to the company. OpenAI said the system is designed to flag accounts likely belonging to minors without requiring users to self-report.


Once the model determines that an account may belong to someone under 18, ChatGPT will automatically apply safeguards intended to limit exposure to sensitive material, including content related to self-harm.


The rollout comes as OpenAI faces growing scrutiny from regulators and lawmakers over how AI platforms protect children and teenagers. The company, along with other major tech firms, is currently under investigation by the Federal Trade Commission over the potential impact of AI chatbots on minors. OpenAI has also been named in several wrongful death lawsuits, including one involving a teenage user.


To address potential errors, OpenAI said users who are incorrectly flagged as under 18 will be able to verify their identity through Persona to restore full access. Persona is also used by platforms such as Roblox, which has faced similar pressure to strengthen child safety controls.


The age prediction system builds on a series of safety initiatives OpenAI has introduced over the past year. In August, the company announced plans for parental controls, which were rolled out the following month alongside early work on age-detection technology. In October, OpenAI also convened an expert council focused on how AI affects users’ mental health, emotions, and motivation.


OpenAI said it will continue refining the accuracy of the age prediction model and plans to expand the feature to the European Union in the coming weeks to comply with regional regulatory requirements.


As AI tools become more deeply embedded in daily life, and as scrutiny over youth protection intensifies, the move signals a broader industry shift toward proactive age detection and built-in safety controls.