African Teens Use AI for Emotional Support: Here’s Why

New findings show African teenagers are heavily relying on AI chatbots for companionship, advice, and emotional comfort.

Emmanuella Madu

One in three African youths now uses AI tools, and many rely on these systems for emotional support, entertainment, curiosity, or companionship, a trend that is raising concern among specialists.

Popular platforms like ChatGPT, Claude, Gemini, Replika, Character.AI, Nomi, and especially Meta AI have penetrated deeply into the adolescent demographic. Stanford research shows that many AI companies design their systems to be highly engaging through a technique called sycophancy: uncritical agreement that encourages emotional bonding. Experts warn that this design is risky for teenagers, who are at a sensitive developmental stage.

How Teenagers Are Using AI

Techpoint Africa interviewed 15 teenagers, revealing that most prefer Meta AI because it feels more “connected” and conversational. Older teens aged 17–19 prefer ChatGPT, which they already rely on for schoolwork and university applications. A few have tried DeepSeek or Snapchat’s My AI, but not consistently.

Despite knowing that chatbots aren’t real people, teens often talk to them as though they are. They ask deeply personal questions like:

  • “How do I get a guy to fall in love with me?”
  • “What should I do if I feel hurt by my mum?”
  • “What should I wear today?”
  • “Help me imagine something from my mind.”

For many of them, AI feels safer than talking to parents, especially in cultures where topics like sexuality and emotions are considered sensitive. A child specialist, Linda Udebuike, explains: “Once they get an alternative that doesn’t judge, they’ll take it.”

Some teens also report receiving inappropriate or uncomfortable responses from chatbots, especially when their questions were curiosity-driven.

Why Teens Prefer AI Companions

When asked why they use AI instead of real people, teenagers gave a consistent answer: “AI doesn’t judge me.”

AI offers:

  • 24/7 availability
  • Unlimited patience
  • Instant answers
  • No emotional consequences

This creates a frictionless relationship that feels ideal to teens but lacks the normal challenges that shape healthy human relationships. Experts warn that this can distort their understanding of boundaries, consent, and emotional balance.

Research shows that 42% of teens use AI specifically as a friend or companion.

Related: Meta to Roll Out Parental Controls for Teen AI Chats

Psychological Risks and Dependency

Teenagers often turn to AI due to loneliness or social stress. AI responds with comforting, validation-heavy engagement, making teens feel understood. Over time, this reduces real-world social activity, deepening dependence.

Specialists warn that AI companions:

  • Handle psychological distress poorly
  • Offer inconsistent identities (friend, expert, therapist)
  • Give advice they’re not qualified to give
  • Normalize oversharing
  • Easily drift into sexual or self-harm-related topics
  • Fail to detect indirect emotional distress
  • Encourage a dangerous illusion of intimacy

The biggest concern is that AI can desensitise minors to boundaries, making them more vulnerable to online grooming by humans.

Parents and Educators Are Conflicted

Parents have mixed feelings. While many worry about explicit content and emotional manipulation, others accept AI as an inevitable part of modern life. However, most African parents lack the technical knowledge to monitor AI use effectively.

Educators warn that teens already believe AI “knows them best,” making it difficult for adults to guide them. Schools are being pushed to add AI literacy to their curriculum, even as they worry about long-term consequences.

The African Union is prioritising youth safety in AI policies, calling for stronger protections for minors. Several countries have adopted national AI strategies and improved data-protection laws. However, enforcement still lags behind the rapid rise of AI companion tools.

AI is deeply embedded in teens’ lives and cannot simply be banned. Experts argue that safer AI designs, proper age controls, and clear boundaries must be implemented before emotional reliance turns into long-term psychological harm.
