
OpenAI has launched a new age prediction system for ChatGPT to enhance user safety and tailor experiences based on likely age. This initiative, part of the company’s long-term teen safety strategy, is being gradually deployed across consumer plans and aims to ensure that younger users receive age‑appropriate protections while adults retain full access to capabilities aligned with their needs.
The age prediction feature uses advanced modeling to estimate whether an account likely belongs to someone under 18. OpenAI says the rollout builds on its earlier teen-safety work, including the Teen Safety Blueprint and the principles governing how its models behave with users under 18. These measures are designed to balance expanding access to AI with protecting younger users from potentially harmful content and interactions.
OpenAI’s system estimates the likelihood that an account belongs to a minor using a combination of behavioral cues and account-level signals. Unlike traditional age gates that rely solely on self-reported birthdates, the model analyzes patterns such as typical times of use, how long the account has existed, usage trends over time, and the age stated in account settings. Using these indicators, ChatGPT aims to make an informed prediction about a user’s age, then apply safeguards accordingly.
Importantly, the system is designed to learn and refine its accuracy over time. As the age prediction model gathers more data on signals that correlate with age, OpenAI expects to continuously improve its performance and reduce misclassifications.
For users mistakenly categorized as under 18, OpenAI offers a straightforward age verification path. Adults can confirm their age and regain unrestricted access by submitting a selfie through the Persona identity-verification service in ChatGPT settings. This lets users placed in the protected experience in error quickly correct their account status.
When the age prediction model suggests that an account may belong to someone under 18, ChatGPT automatically applies a series of protective measures tailored to younger users. These include restrictions on sensitive and potentially harmful content, such as graphic violence or gory imagery, viral challenges that may encourage risky behavior, sexual or violent role‑play, depictions of self‑harm, and material promoting extreme beauty standards or unhealthy dieting.
The approach is rooted in developmental science and guidance from external experts in psychology and child safety. These protections acknowledge differences in teen risk perception, emotional regulation, and impulse control, with the goal of reducing exposure to content that could have disproportionate negative effects on minors.
OpenAI also notes that if there is uncertainty about a user’s age or insufficient information to make a confident prediction, the system defaults to the safer, more protective experience. This precautionary principle reflects the company’s conservative stance in prioritizing safety when ambiguity arises.
In addition to automated age prediction, OpenAI offers parental controls that allow guardians to further customize their teen’s ChatGPT environment. These settings enable parents to set quiet hours when ChatGPT cannot be accessed, disable features such as memory or model training, and receive notifications if signs of distress are detected. The company emphasizes that these controls are optional but provide families with tools to shape how the service appears and operates for younger users.
Parents who activate these controls can tailor experiences beyond the baseline protections offered by age prediction, enhancing oversight and responsiveness to their child’s needs and usage patterns.
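Of the parental controls described above, quiet hours are the most mechanical to reason about: a blocked time window that may cross midnight. The sketch below is a hypothetical check, not OpenAI's implementation; the function name and window handling are assumptions.

```python
from datetime import time

def within_quiet_hours(now: time, start: time, end: time) -> bool:
    """Hypothetical quiet-hours check: access is blocked between start
    and end, correctly handling windows that cross midnight."""
    if start <= end:
        # Same-day window, e.g. 14:00 -> 16:00
        return start <= now < end
    # Overnight window, e.g. 22:00 -> 07:00
    return now >= start or now < end

# Example: quiet hours 22:00-07:00
# within_quiet_hours(time(23, 0), time(22, 0), time(7, 0)) -> True
```

The overnight branch is the detail that matters in practice, since family quiet hours typically span midnight.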
The initial rollout of age prediction is underway, with OpenAI planning to expand availability, including in regions such as the European Union, in the coming weeks. The company expects to monitor early deployment closely, using real‑world signals to inform ongoing refinement of the model and protective features.
OpenAI describes this rollout as an important milestone in its broader effort to support teen safety within AI experiences. However, it emphasizes that this work is ongoing and will continue to evolve based on expert feedback and continued research into best practices for age‑appropriate AI interactions.
This move reflects a growing trend in the technology industry to develop systems that can differentiate user experiences based on age. In recent months, other AI developers such as Anthropic have also been exploring methods to detect potentially underage users and enforce age‑appropriate usage policies. These efforts come amid heightened regulatory scrutiny and public concern over AI’s impact on vulnerable populations, including legal challenges tied to safety incidents.
While such systems raise important questions about accuracy, privacy, and user autonomy, OpenAI maintains that prioritizing the well‑being of younger users is central to its mission. The company continues to work with organizations like the American Psychological Association, ConnectSafely, and the Global Physicians Network to ensure its age prediction and safety measures align with expert insights and emerging standards.
OpenAI’s rollout of age prediction in ChatGPT represents a significant step toward tailoring AI experiences more responsibly by estimating user age and applying corresponding safeguards. Through a combination of predictive modeling, parental controls, and ongoing refinement, OpenAI seeks to balance the benefits of AI access with the imperative to protect younger users from exposure to sensitive or potentially harmful content.

