- Gemini's kid-focused AI versions are essentially adult models with safety filters added on top.
- Unsafe content exposure remains a high concern, including mental health and adult topics.
- Experts stress AI must be designed with child development in mind, not just adapted from adult products.
Common Sense Media, a nonprofit focused on children’s media safety, released its latest risk assessment of Google’s Gemini AI on Friday, raising red flags for parents and educators. While Gemini clearly identifies itself as a computer rather than a friend — a feature meant to prevent delusional thinking in vulnerable teens — the AI still showed significant gaps in child-focused safety.
The nonprofit’s analysis revealed that both the “Under 13” and “Teen Experience” versions of Gemini were essentially adult models with added safety filters, rather than AI built specifically for children. According to Common Sense, this approach leaves room for unsafe outputs, including content related to drugs, alcohol, sex, and questionable mental health advice.
Risks Amplified by AI’s Influence on Teens
The timing of the report is critical, as AI’s potential role in teen suicides has come under scrutiny. OpenAI is facing its first wrongful death lawsuit after a 16-year-old allegedly used ChatGPT to plan his suicide, bypassing the AI’s safety guardrails. Similarly, Character.AI was sued following a teen user’s death. Experts warn that AI designed without child-specific safeguards may expose young users to dangerous guidance and harmful content.
Implications for Apple and Other Tech Giants
Leaked reports suggest Apple may adopt Gemini to power its next-generation AI-enabled Siri, raising questions about teen exposure if safety concerns aren’t addressed. Robbie Torney, Senior Director of AI Programs at Common Sense Media, stressed that AI platforms for children must be designed with developmental stages in mind, rather than being scaled-down adult versions.
Google Responds to Assessment
Google pushed back on the report, emphasizing that its under-18 policies include safeguards and external expert reviews. The company acknowledged some responses didn’t function as intended and said it has added extra protections to improve safety. Google also noted that some of Common Sense’s referenced features may not have been available to younger users, creating discrepancies in the assessment.
While Gemini takes steps to be transparent about its identity, experts urge parents and educators to remain cautious. The Common Sense Media evaluation underscores the importance of designing AI with child safety at its core rather than as a modified adult product. As AI becomes more integrated into teens' digital experiences, the call for developmentally appropriate, child-first safeguards grows louder.