As kids turn to AI chatbots for answers, advice, and companionship, questions arise about their safety, privacy, and emotional development
23 Jan 2026 • 4 min. read

AI chatbots have become a huge part of all of our lives since they burst onto the scene more than three years ago. ChatGPT, for example, says it has around 700 million weekly active users, many of whom are "young people." A UK study from July 2025 found that nearly two-thirds (64%) of children use such tools. A similar share of parents worry that their kids think AI chatbots are real people.
While this may be a slight overreaction, legitimate safety, privacy and psychological concerns are growing due to children's frequent use of the technology. As a parent, you can't assume that all platform providers have effective child-appropriate safeguards in place. Even where protections do exist, enforcement isn't necessarily consistent, and the technology itself is evolving faster than policy.
What are the risks?
Our children use generative AI (GenAI) in various ways. Some value its help with homework. Others might treat the chatbot like a digital companion, asking it for advice and trusting its responses as they would a close friend's. There are several obvious risks associated with this.
The first is psychological and social. Children are going through an intense period of emotional and cognitive development, which makes them vulnerable in various ways. They may come to rely on AI companions at the expense of forming real friendships with classmates – exacerbating social isolation. And because chatbots are pre-programmed to please their users, they may serve up output that amplifies any difficulties young people are going through – such as eating disorders, self-harm and/or suicidal thoughts. There's also a risk that the time your child spends with their AI edges out not only human friendships, but also time that should be spent on homework or with the family.
There are also risks around what a GenAI chatbot may allow your child to access on the internet. Although the main providers have guardrails designed to restrict links to inappropriate or dangerous content, these aren't always effective. In some cases, chatbots may bypass their internal safety measures and share sexually explicit or violent content, for example. If your child is more tech-savvy, they may even be able to "jailbreak" the system through carefully crafted prompts.
Hallucinations are another concern. For corporate users, these can create significant reputational and liability risks. But for kids, they may mean believing false information presented convincingly as fact, which could lead to unwise decisions on medical or relationship matters.
Finally, it's important to remember that chatbots are also a potential privacy risk. If your child enters sensitive personal or financial information into a prompt, it will be stored by the provider. Once stored, it could theoretically be accessed by a third party (e.g., a supplier/partner), stolen by a cybercriminal, or regurgitated to another user. Just as you wouldn't want your child to overshare on social media, the best course of action is to minimize what they share with a GenAI bot.
Some red flags to look out for
Surely the AI platforms understand these risks and are taking steps to mitigate them? Well, yes, but only up to a point. Depending on where your children live and which chatbot they're using, there may be little in the way of age verification or content moderation. The onus, therefore, is firmly on parents to get ahead of any threats through proactive monitoring and education.
First up, here are a few signs that your children may have an unhealthy relationship with AI:
- They withdraw from time spent with friends and family
- They become anxious when unable to access their chatbot, and may try to hide signs of overuse
- They talk about the chatbot as if it were a real person
- They repeat obvious misinformation back to you as "fact"
- They ask their AI about serious conditions such as mental health issues (which you discover by reviewing their conversation history)
- They access adult or otherwise inappropriate content served up by the AI
Time to talk
In many jurisdictions, AI chatbots are restricted to users over 13 years old. But given patchy enforcement, you may need to take matters into your own hands. Conversations matter more than controls alone. For the best results, consider combining technical controls with education and advice, delivered in an open and non-confrontational manner.
Whether they're at school, at home or at an after-school club, your children have adults telling them what to do every minute of their waking lives. So try to frame your outreach about AI as a two-way conversation, where they feel comfortable sharing their experiences without fear of punishment. Explain the dangers of overuse, hallucinations, data sharing, and over-relying on AI for help with serious matters. Help them understand that AI bots aren't real people capable of thought – they're machines designed to be engaging. Teach your kids to think critically, always fact-check AI output, and never substitute a session with a machine for a chat with their parents.
If necessary, combine that education with a policy for limiting AI use (just as you might limit social media or screen time generally) and restricting use to age-appropriate platforms. Switch on parental controls in the apps they use to help you monitor usage and minimize risk. Remind your kids never to share personally identifiable information (PII) with AI, and adjust their privacy settings to reduce the risk of accidental leaks.
Our children need humans at the center of their emotional world. AI can be a useful tool for many things. But until your kids develop a healthy relationship with it, their usage should be carefully monitored. And it should never replace human contact.

