YouTube has just announced the expansion of its Similarity detection tool, a new artificial intelligence (AI) feature that lets content creators detect and request the removal of deepfake videos that use their faces.
According to the official blog post, the tool is now integrated into YouTube Studio, where creators can review flagged videos in the content detection section after authentication.
If they find that a video contains unauthorized edited or AI-generated imagery of them, they can submit a removal request directly to YouTube.
The feature has been in limited testing since the beginning of this year and has now been rolled out to more users in a beta phase.
YouTube said the system works similarly to Content ID, its well-known copyright infringement detection tool. But instead of scanning for copied music, images, or video, it searches for the creator's face and identity.
Once creators upload reference images of their face to set up the tool, YouTube's system automatically scans newly uploaded videos for content that may contain their likeness.
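YouTube has not published how the matching works internally, but systems of this kind typically compare face embeddings (numeric vectors produced by a face-recognition model) against a creator's reference embedding. The sketch below is purely illustrative, assuming a hypothetical `flag_video` helper and toy 3-dimensional vectors in place of real embeddings; it is not YouTube's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_video(reference: np.ndarray, frame_embeddings: list, threshold: float = 0.8) -> bool:
    """Hypothetical check: flag a video if any frame's face embedding
    is close enough to the creator's reference embedding."""
    return any(cosine_similarity(reference, e) >= threshold for e in frame_embeddings)

# Toy example: 3-dimensional "embeddings" stand in for real face vectors.
ref = np.array([1.0, 0.0, 0.0])
frames = [np.array([0.0, 1.0, 0.0]),   # unrelated face
          np.array([0.95, 0.1, 0.0])]  # near-match to the reference
print(flag_video(ref, frames))  # True: the second frame exceeds the threshold
```

In a real pipeline the threshold choice would trade off false positives (flagging the creator's genuine appearances, as the article notes can happen) against missed deepfakes.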
However, YouTube notes that the tool may also surface videos showing a creator's real face, not just edited or AI-generated versions.
Such content may not qualify for removal under current privacy policies.
Amid the explosion of AI-generated media, from fake photos to deepfake videos, YouTube's rollout of this tool is seen as an important step in helping creators protect their identity and reputation.
The move is also part of Google's broader strategy to control AI-generated content and make it more transparent.
Google recently introduced Veo 3.1, a video generation model that supports both vertical and horizontal formats and is expected to be integrated directly into YouTube in the near future, opening a new phase of combining AI with video content creation.