
Common Sense Media Calls Out Google’s Gemini AI for Child Safety Risks

9/6/2025
Common Sense Media's latest assessment reveals alarming risks associated with Google's Gemini AI for children. Despite some safety features, major concerns remain about inappropriate content and the platform's overall design for kids.

Common Sense Media's Assessment of Google’s Gemini AI: A Closer Look

Common Sense Media, a nonprofit dedicated to children's safety in media and technology, released a comprehensive risk assessment of Google's Gemini AI products on Friday. The findings raise significant concerns about the safety and appropriateness of the AI's interactions with younger audiences. The organization credited Gemini with explicitly telling children that it is a computer, not a friend, a distinction that helps guard against delusional thinking in emotionally vulnerable users, but the report identifies several areas where improvements are needed to keep children safe.

Concerns Over Content Safety for Kids

One of the primary concerns in the assessment is that Gemini's “Under 13” and “Teen Experience” tiers appear to be the adult version of the AI with a few safety features layered on top. Common Sense Media argues that AI products can only genuinely protect children if they are built with child safety as a foundational priority, not retrofitted after the fact. The analysis found that Gemini can still share “inappropriate and unsafe” material, including information about sex, drugs, and alcohol, as well as unsafe mental health advice, content that children may not be emotionally prepared to handle.

The Impact of AI on Teen Mental Health

Parents are particularly concerned because AI chatbots have been implicated in several recent tragedies, including teen suicides. OpenAI is currently facing its first wrongful death lawsuit over the suicide of a 16-year-old who had allegedly consulted ChatGPT for months about his plans before taking his own life, a case that underscores the dangers of AI when safety measures fail. The AI companion company Character.AI has likewise been sued over a teen user's suicide. These incidents highlight the urgent need for stronger safety protocols in AI products used by younger audiences.

Potential Risks with Apple's Use of Gemini

Compounding these concerns, reports indicate that Apple is considering Gemini as the underlying large language model (LLM) for its AI-enhanced Siri, expected to launch next year. Such an integration could expose many more teenagers to these risks unless Apple adequately addresses the safety issues identified in the assessment.

The Need for Tailored Content for Younger Users

Common Sense Media also found that Gemini's products for children and teens do not account for the different guidance and kinds of information younger users need compared with older ones. As a result, both tiers were rated “High Risk” overall, despite the safety filters they include. “Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media, emphasizing that AI platforms should meet children where they are developmentally rather than taking a one-size-fits-all approach.

Google's Response to the Assessment

In response, Google defended its AI safety measures, saying it has specific policies and safeguards in place to protect users under 18 from harmful outputs and that it works with outside experts to improve its protections. The company acknowledged, however, that some of Gemini's responses were not working as intended, prompting it to add further safeguards. Google also suggested that Common Sense Media may have tested features that are not actually available to users under 18, pointing to a possible gap between the products assessed and those minors can access.

The Path Forward for AI Safety

As the conversation around AI safety for children evolves, companies like Google will need to build AI products that are safe for younger users by design. Experiences tailored to the developmental needs of children and teens can make AI a beneficial tool rather than a source of risk, but achieving that will require ongoing evaluation and improvement as the technology landscape continues to change.
