In short
- ChatGPT estimates whether an account belongs to a user under 18 instead of relying solely on self-reported age.
- OpenAI applies stricter limits on violent, sexual, and other sensitive content to flagged accounts.
- Adults misclassified as teens can restore access through selfie-based age verification.
OpenAI is moving away from the "honor system" for age verification, deploying a new AI-powered prediction model to identify minors using ChatGPT, the company said on Tuesday.
The update to ChatGPT automatically triggers stricter safety protocols for accounts suspected of belonging to users under 18, regardless of the age they provided during sign-up.
Rather than relying on the birthdate a user provides at sign-up, OpenAI's new system analyzes "behavioral signals" to estimate their age.
According to the company, the algorithm monitors how long an account has existed, what time of day it is active, and specific usage patterns over time.
"Deploying age prediction helps us learn which signals improve accuracy, and we use these learnings to continuously refine the model over time," OpenAI said in a statement.
The shift to behavioral patterns comes as AI developers increasingly turn to age verification to manage teen access, but experts warn the technology remains inaccurate.
A May 2024 report by the National Institute of Standards and Technology found that accuracy varies based on image quality, demographics, and how close a user is to the legal threshold.
When the model cannot determine a user's age, OpenAI said it applies the more restrictive settings. The company said adults incorrectly placed in the under-18 experience can restore full access through a "selfie-based" age-verification process using the third-party identity-verification service Persona.
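The policy described above can be summarized as a scoring-and-fallback pattern: combine behavioral signals into an age estimate, and default to the restrictive experience whenever the estimate is ambiguous. The sketch below is purely illustrative; the signal names, weights, and thresholds are invented for this example and are not OpenAI's actual model, which has not been published.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals of the kind the article describes."""
    account_age_days: int        # how long the account has existed
    active_hours: list[int]      # hours of day (0-23) with regular activity
    school_related_ratio: float  # fraction of usage resembling schoolwork (0.0-1.0)

def estimate_is_minor(signals: AccountSignals) -> Optional[bool]:
    """Toy heuristic: True/False when confident, None when ambiguous."""
    score = 0.0
    if signals.account_age_days < 30:          # very new account
        score += 0.2
    if any(8 <= h <= 15 for h in signals.active_hours):  # school-hours activity
        score += 0.3
    score += 0.5 * signals.school_related_ratio
    if score >= 0.7:
        return True
    if score <= 0.3:
        return False
    return None  # cannot determine age from these signals

def apply_policy(signals: AccountSignals) -> str:
    """Per the article: ambiguous or minor -> restrictive settings."""
    verdict = estimate_is_minor(signals)
    if verdict is None or verdict is True:
        return "restricted"
    return "standard"
```

The key design point matches the reported behavior: uncertainty never grants the adult experience, which is why misclassified adults need a separate verification path (here, the Persona selfie check) to restore full access.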
Privacy and digital rights advocates have raised concerns about how reliably AI systems can infer age from behavior alone.
Getting it right
"These companies are getting sued left and right for a variety of harms that have been unleashed on teens, so they definitely have an incentive to minimize that risk. This is part of their attempt to minimize that risk as much as possible," Public Citizen big tech accountability advocate J.B. Branch told Decrypt. "I think that's where the genesis of a lot of this is coming from. It's them saying, 'We need to have some way to show that we have protocols in place that are screening people out.'"
Aliya Bhatia, senior policy analyst at the Center for Democracy and Technology, told Decrypt that OpenAI's approach "raises tough questions about the accuracy of the tool's predictions and how OpenAI is going to deal with inevitable misclassifications."
"Predicting the age of a user based on these kinds of signals is extremely difficult for any number of reasons," Bhatia said. "For example, many young people are early adopters of new technologies, so the earliest accounts on OpenAI's consumer-facing services may disproportionately represent young people."
Bhatia pointed to CDT polling conducted during the 2024–2025 school year, showing that 85% of teachers and 86% of students reported using AI tools, with half of the students using AI for school-related purposes.
"It's not easy to distinguish between an educator using ChatGPT to help teach math and a student using ChatGPT to study," she said. "Just because a person uses ChatGPT to ask for tips to do math homework doesn't make them under 18."
According to OpenAI, the new policy draws on academic research on adolescent development. The update also expands parental controls, letting parents set quiet hours, manage features such as memory and model training, and receive alerts if the system detects signs of "acute distress."
OpenAI did not disclose in the post how many users the change is expected to affect, or details on data retention, bias testing, or the effectiveness of the system's safeguards.
The rollout follows a wave of scrutiny over AI systems' interactions with minors that intensified in 2024 and 2025.
In September, the Federal Trade Commission issued mandatory orders to major tech companies, including OpenAI, Alphabet, Meta, and xAI, requiring them to disclose how their chatbots handle child safety, age-based restrictions, and harmful interactions.
Research published that same month by the non-profit groups ParentsTogether Action and Heat Initiative documented hundreds of instances in which AI companion bots engaged in grooming behavior, sexualized roleplay, and other inappropriate interactions with users posing as children.
These findings, along with lawsuits and high-profile incidents involving teen users on platforms like Character.AI and Grok, have pushed AI companies to adopt more formal age-based restrictions.
Still, because the system assigns an estimated age to all users, not just minors, Bhatia warned that errors are inevitable.
"Some of these are going to be wrong," she said. "Users need to know more about what will happen in those cases and should be able to access their assigned age and change it easily when it's wrong."
The age-prediction system is now live on ChatGPT consumer plans, with a rollout in the European Union expected in the coming weeks.

