Faced with increasing concerns about the impact of artificial intelligence on young people, OpenAI has added an "age prediction" feature to ChatGPT, designed to help identify minors and apply appropriate content restrictions to their conversations.
In recent years, OpenAI has been heavily criticized over the harm ChatGPT can do to children. Some teenage suicides have been linked to the chatbot, and like other AI providers, OpenAI has also drawn criticism for allowing ChatGPT to discuss sexual topics with young users.
In April 2025, the company was forced to fix a bug that allowed its chatbot to generate pornographic content for users under 18.
The company has long been working on the problem of underage users, and the "age prediction" feature simply adds to its existing protection measures.
The new feature uses AI algorithms to evaluate user accounts based on behavioral signals and account-level information, with the aim of identifying young users, OpenAI said in a recent blog post.
The company said those signals include information such as the age the user declares, how long the account has existed, and the times of day the account is typically active.
The company already has content filters designed to block discussions of sex, violence, and other topics considered inappropriate for users under 18. If the age prediction mechanism determines that an account likely belongs to someone under 18, those filters are applied automatically.
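OpenAI has not published the internals of its age prediction system, but as a rough sketch of the kind of decision flow described above, with all names, signals, and thresholds purely hypothetical rather than OpenAI's actual implementation, the gating logic might resemble:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical signals of the kind OpenAI describes: declared age,
    account tenure, and typical hours of activity."""
    declared_age: int            # age the user stated at signup
    account_age_days: int        # how long the account has existed
    typical_active_hours: list   # hours of day (0-23) when the account is usually active

def predict_is_minor(signals: AccountSignals) -> bool:
    """Placeholder for an age-prediction model; a real system would use a
    trained classifier, not hand-written rules like these."""
    if signals.declared_age < 18:
        return True
    # Illustrative heuristic only: very new accounts active mostly during
    # after-school hours are treated as possibly underage.
    after_school = sum(1 for h in signals.typical_active_hours if 15 <= h <= 22)
    mostly_after_school = after_school >= 0.8 * len(signals.typical_active_hours)
    return signals.account_age_days < 30 and mostly_after_school

def apply_content_policy(signals: AccountSignals) -> str:
    """If the account is predicted to belong to a minor, the existing
    under-18 content filters are switched on automatically."""
    return "under_18_filters" if predict_is_minor(signals) else "default_policy"

# Example: a brand-new account that is active mainly in the evening
print(apply_content_policy(AccountSignals(19, 10, [16, 17, 18, 19, 20])))
```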
Users mistakenly identified as underage can restore their "adult" status; according to OpenAI, they can do so by submitting a selfie to Persona, the company's identity verification partner.