Google will flag such content in the photo information window ("About this image") in Search, Google Lens, and the Circle to Search feature on Android.
Google is also applying warning labels to its advertising services and is considering extending them to YouTube videos; the company says a final decision will be announced later this year.
Google said it uses C2PA metadata to identify AI-generated images. C2PA is a common metadata standard established by a group of industry companies earlier this year. It is used to track the origin of an image, recording when and where the image was created, as well as the equipment and software used to create it.
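As a rough illustration of the kind of provenance information described above, the sketch below models a simplified, made-up manifest and extracts the origin details C2PA-style metadata conveys: whether the image is AI-generated, when it was created, and what software produced it. The field names and the tool name `ExampleImageGen` are illustrative assumptions, not the actual C2PA schema.

```python
# Illustrative sketch only: a simplified, hypothetical manifest structure
# inspired by what C2PA provenance records convey (creation time,
# generating software, edit actions). The real C2PA specification
# defines a richer, cryptographically signed manifest format.

def summarize_provenance(manifest: dict) -> str:
    """Return a one-line summary of an image's claimed origin."""
    claim = manifest.get("claim", {})
    generator = claim.get("generator", "unknown tool")
    created = claim.get("created", "unknown time")
    actions = claim.get("actions", [])
    # Treat the image as AI-generated if any recorded action says so.
    ai_generated = any(a.get("action") == "created_by_ai" for a in actions)
    label = "AI-generated" if ai_generated else "camera/edited"
    return f"{label}; produced by {generator} at {created}"

# Hypothetical manifest for an AI-generated image.
example = {
    "claim": {
        "generator": "ExampleImageGen 1.0",  # assumed tool name
        "created": "2024-02-01T12:00:00Z",
        "actions": [{"action": "created_by_ai"}],
    }
}

print(summarize_provenance(example))
```

A viewer like "About this image" would surface a summary of this kind alongside the photo, based on the signed metadata embedded in the file.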
Members of the C2PA alliance include Amazon, Microsoft, OpenAI, and Adobe. However, the standard has not received much attention from hardware manufacturers: currently, only Sony and Leica have adopted C2PA. Some prominent AI tool developers, such as Black Forest Labs, have declined to adopt the standard.
Meanwhile, the number of online scams using AI-generated deepfakes has skyrocketed over the past two years. In February, a finance worker in Hong Kong was tricked into transferring $25 million to scammers who posed as his company's chief financial officer during a video conference call.
Identity verification provider Sumsub published a report showing that the number of deepfake scams increased by 245% globally between 2023 and 2024, with a 303% increase in the US alone.
“The fact that these services are publicly available has made it easier for cybercriminals. They no longer need to have a specific set of technology skills,” David Fairman, chief information officer and chief security officer for APAC at Netskope, told CNBC in May.