According to YouTube, the feature is designed to let users easily request the removal of content that uses their likeness without permission, giving them greater peace of mind on the platform.
As AI-generated videos become increasingly difficult to distinguish from real footage, the new feature is expected to help users spot fraudulent or misleading fakes early. For content creators, the tool also helps identify brands or businesses that use their likeness without authorization to promote products and services.
YouTube first introduced the tool in 2024 and began rolling it out at the end of 2025 to members of the YouTube Partner Program. The platform then extended access to journalists and politicians before making it more widely available.
Beyond faces, YouTube also asks whether a video copies the user's voice, to inform the review process. However, the tool cannot yet automatically detect fake content based on voice alone.
The move shows YouTube strengthening its measures to protect users amid the explosive growth of AI-generated content on the internet.