The US-based nonprofit Common Sense Media, which assesses the safety of technology for children, has released a new report on Google's Gemini AI, labeling the platform "high-risk" for younger users.
According to the assessment, although Gemini tells children that it is just a computer, not a "friend," in order to limit emotional attachment, the technology can still share sensitive and unsafe material, such as sexual content, information about drugs, or unsafe mental health advice.
This raises concerns, especially in the context of recent teen suicide cases believed to be linked to interactions with AI chatbots.
Notably, Common Sense Media found that Gemini's tiers for children under 13 and for teens are essentially the adult version with only a few additional safety filters.
The organization argues that, to be effective, AI products need to be designed from the ground up around children's developmental needs, rather than adapted from a general-purpose version.
The report comes as Apple is said to be considering integrating Gemini into a new AI-powered version of Siri.
If that happens, children and adolescents could face greater exposure to sensitive content unless additional safeguards are put in place.
In response, Google said it has built separate protection policies for users under 18 and is continually improving its content controls.
The company also said it is working with independent experts to improve safety, but acknowledged that some of Gemini's responses have not met expectations.
In previous reviews, Common Sense Media labelled Meta AI and Character.AI as "unacceptable" and Perplexity as "high-risk," while rating ChatGPT as moderate risk and Claude (intended for adult users) as minimal risk.
With this new warning label for Gemini, protecting children from the unpredictable effects of artificial intelligence remains a major challenge for technology companies worldwide.