This move comes after numerous reports of worrying interactions between Meta AI and young users. Earlier this month, a news agency cited internal documents showing that the company's chatbots had been permitted to engage in age-inappropriate conversations. Meta later confirmed the document was authentic but said the passages in question were inconsistent with its policies and had been removed. A recent study also found that Meta AI had given inappropriate responses when chatting with teenage accounts.
In response, Meta said it has strengthened its safeguards to prevent similar situations from recurring on Instagram and Facebook.
Spokesperson Stephanie Otway said the company has built protections for young users into its products from the start and continues to add more, including training its AI not to engage with teens on these sensitive topics and instead directing them to expert support resources. Meta is also temporarily limiting teens' access to some user-generated AI characters.
Notably, Meta described the current measures as only a temporary step while the company works on longer-term solutions.
The changes underscore the pressure Meta faces to ensure a safer online environment for adolescents. As artificial intelligence becomes ever more embedded in daily life, setting clear limits and exercising social responsibility are becoming essential.