Google will flag this content in the "About this image" information panel in Search, Google Lens, and the Circle to Search feature on Android.
Google also plans to bring the warning labels to its advertising services and is considering applying them to YouTube videos; the company's final decision will be announced later this year.
Google said it uses C2PA metadata to identify AI-generated images. C2PA is a common industry standard established earlier this year by a group of companies. The technology is used to track an image's provenance: when and where the image was created, as well as the equipment and software used to create it.
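The provenance record described above can be pictured as a manifest attached to the image file, which a consumer such as a search engine can inspect for an AI-generation marker. The sketch below is a simplified, hypothetical illustration of that idea: the labels loosely follow C2PA conventions (`c2pa.actions`, `digitalSourceType`), but the dictionary structure and the `is_ai_generated` helper are assumptions for illustration, not the real C2PA SDK.

```python
# Schematic sketch of checking a C2PA-style provenance manifest for an
# AI-generation marker. Field names are simplified for illustration and
# do not exactly follow the C2PA specification; real tooling should use
# an actual C2PA SDK.

# Hypothetical manifest attached to an image produced by an AI tool.
manifest = {
    "claim_generator": "ExampleAIImageTool/1.0",  # software that made the image
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # Marker indicating the image came from a generative model
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        },
        {
            "label": "stds.exif",
            "data": {
                "exif:DateTimeOriginal": "2024-09-17T10:03:00Z",
                "exif:Make": "ExampleCam",
            },
        },
    ],
}


def is_ai_generated(manifest: dict) -> bool:
    """Return True if any 'created' action declares an algorithmic source."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion["data"].get("actions", []):
            if (
                action.get("action") == "c2pa.created"
                and "trainedAlgorithmicMedia" in action.get("digitalSourceType", "")
            ):
                return True
    return False


print(is_ai_generated(manifest))  # True for this sample manifest
```

A search engine performing a check like this would also verify the manifest's cryptographic signature before trusting its contents; that step is omitted here for brevity.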
Members of the C2PA alliance include Amazon, Microsoft, OpenAI, and Adobe. However, the standard has gained little traction among hardware manufacturers; currently, only Sony and Leica apply it. Some prominent AI tool developers, such as Black Forest Labs, have declined to adopt the standard.
Meanwhile, the number of online scams using AI-generated deepfakes has skyrocketed over the past two years. In February, a worker in Hong Kong was tricked into transferring $25 million to scammers posing as his company's CFO on a video conference call.
Verification provider Sumsub published a report showing that deepfake scams increased by 245% globally between 2023 and 2024, with a 303% increase in the US alone.
David Fairman, chief information officer and chief security officer for APAC at Netskope, told CNBC in May: "The public availability of these services has made cybercrime easier. They no longer need to have a special set of technology skills."