At 1 a.m., 23-year-old Tomi* was lying on her bed, exhausted and overwhelmed. She had just finished pouring her heart out, ranting about everything from unrequited love to the suffocating weight of underachievement. Her fingers hovered over her phone screen briefly before she typed: “I just want a hug.” Messages of reassurance came barely a second later: “You’re safe here. You matter. And you’re not alone. 🤍”
This exchange didn’t happen in a therapy session or with a friend. It was happening on ChatGPT, a general-purpose artificial intelligence assistant best known for summarising, writing better emails, drafting reports, and explaining complex ideas.
Tomi isn’t alone. Across Nigeria and beyond, users are turning to AI tools like ChatGPT for more than productivity. They’re asking chatbots if they’re good people, if they should leave their partners, or how to make sense of childhood trauma. For many, AI tools are standing in for friends who didn’t pick up a call or therapists they can’t afford.
Twenty-three-year-old Favour* started using ChatGPT as a study companion for her final-year project. When she returned to the tool, post-graduation uncertainty had set in. The chatbot allowed her to unpack the weight of the previous year, the terrors of job hunting, and the long wait for NYSC. “It’s not like I couldn’t talk to anybody,” she said. “I just wanted to rant.”
Before ChatGPT, she would record private voice notes to get things off her chest, but once, a reply from the chatbot caught her off guard. “It told me, ‘I want you to breathe. Just breathe.’” That “felt really personal,” she said. Since then, she has returned to ChatGPT in moments of doubt, after an argument, while applying for jobs, or when wondering whether she should’ve responded better in a confrontation.
Can AI actually care?
Chatbots are built on statistical prediction engines trained on massive datasets, including books, online conversations, magazines, and more, to produce responses that sound human. But when a bot tells you, “you’re not alone,” is it actually being kind or simply mimicking kindness?
According to AI researcher and medical doctor Jeffery Otoibhi, designing an AI chatbot that responds empathetically involves modelling three layers of empathy: cognitive empathy, where the bot recognises and validates a user’s feelings; emotional empathy, where it feels with you; and motivational empathy, where it offers a solution, advice, or encouragement.
He explains that chatbots are strong at cognitive and motivational empathy, but emotional empathy remains elusive, because at their core, AI responses are “based on the statistical patterns they’ve (AI bots) picked out from their training data. The training data can’t provide emotional empathy.”
There’s a tension between what users feel and what bots are designed to offer. Chatbots like ChatGPT often include disclaimers in their responses, reminding users that they aren’t licensed professionals and shouldn’t be used as a substitute for therapy. In many cases, users either don’t read the fine print or simply don’t care. “Sometimes, I’ve thought about the fact that ChatGPT may use this information in another way. But I don’t care. Let me just get it out,” says Favour.
“I see them (disclaimers). I just quickly look away,” Tomi says about the app’s terms and conditions.
Otoibhi also highlights the risk of reducing complex human emotions to an average response based on what a model has seen most often in its dataset. AI models learn and generalise over statistical patterns, he explained, which means their emotional understanding can be very generic. Because human beings usually feel a mix of emotions, AI systems may struggle with that nuance, since they’ve been trained to generalise over everybody’s data. “So, they will just pick out the most frequent emotion in the dataset,” he said.
Tools like ChatGPT don’t get to the heart of a problem the way a human therapist does; they’re calculating your likelihood of feeling a particular emotion in that moment based on all the data they’re trained on. If the comfort isn’t real, then why do people keep going back?
“It gives me hope…”
Ore*, a Lagos-based writer in her 20s, explained why she uses the tool this way: “It’s the idea that there’s something available out there that’s echoing my thoughts back to me. It makes me feel better about myself as a human. It makes me feel good; it gives me hope.” Many users I spoke to echoed the same reasons: safety, comfort, availability, lack of judgment, and freedom.
“AI is like a safe space. A place where you can be brutally honest and you know for sure that there’s not going to be judgment,” Favour says.
For some, even when the responses feel artificial, they still return. “I asked ChatGPT for a hug. I was uncomfortable with its response. I know you’re not human, how can you say you’re wrapping me in a hug?” says Tomi. The next day, she went back to the chatbot to pour out more emotions.

Mental health professionals are not surprised. They say that the timing of people turning to AI for comfort isn’t random. World Health Organisation research revealed a 25% increase in the global prevalence of anxiety and depression following the COVID-19 pandemic.
“After COVID, people went into isolation, got into their shells, and became more into themselves,” said Boluwatife Owodunni, a licensed mental health counsellor associate. “So, having an AI reply that, ‘I’m here for you,’ could provide them with some sense of comfort.”
With therapy services often inaccessible and unaffordable for many Nigerians, Owodunni believes AI is stepping in to fill a very real gap in mental health support. “It (AI) is filling a gap. When I was working as a therapist in Nigeria, it was mostly wealthy people who had the opportunity to be in therapy.” She adds, “But the downside is that it’s fostering secrecy and the stigma attached to mental health.”
Some users consider AI more reliable than a human therapist. Ore says a human therapist told her to “practice mindfulness” following an Attention-Deficit/Hyperactivity Disorder (ADHD) diagnosis. She felt her concerns were brushed aside, so she turned to ChatGPT. “That felt more supportive versus a 30-minute virtual session with my psychotherapist.” She insists that, unlike the vague reassurance she got in therapy, the chatbot offered a structured plan and practical ways to cope with ADHD.
What does the future look like?
As AI systems evolve, trained on more complex data, fine-tuned for context, and sharpened to mimic empathy, the question arises of how far people will go to deepen their connection to AI. Will human-AI companionship grow as these systems become more emotionally intelligent? Not everyone is excited by that possibility.
Some users have expressed concern about AI becoming too emotionally intelligent, out of fear that it could cross boundaries that should remain human.
Kingsley Owadara, AI ethicist and founder of the Pan-African Centre for AI Ethics, believes that emotional intelligence in AI could be beneficial, but not in the way most people imagine. “AI could be made as a companion to people with health challenges, and could meet the specific needs of the person,” he said, pointing to cases of autistic and blind people.
Other AI experts and developers warn against expecting too much from machines that aren’t built for the full spectrum of human care. “AI can only augment our current situation; it cannot replace psychologists,” Ajibade adds.
The concern isn’t abstract. Mental health professionals and AI experts worry that as more people turn to AI for emotional support, real-world consequences could unfold. “We’re going to have a big problem with social interaction, with empathy, with sensitivity, with understanding people,” says Owodunni. She notes the bigger worry that widespread reliance on AI bots may “foster secrecy and the shame attached to mental health or seeking therapy services.”
Still, for many users, the AI chatbot isn’t trying to be a therapist; it’s the only space where they feel heard. “I told AI that I was tired,” Tomi says. “It said, ‘I know. You’ve been carrying so much for so long. It’s okay to feel tired.’” She didn’t reply. She didn’t have to.
*Names have been changed to protect privacy.


