Creating fake videos and images for blackmail
Mr. Le Quy (name changed), a resident of Ha Dong district, Hanoi, said that in early August 2024 he received a call from a stranger claiming to be a private detective. The caller said he held many pictures of Mr. Quy having illicit relationships with other women, and sent Mr. Quy sensitive images that had been doctored with Mr. Quy's face spliced in.
“They cut my face from a photo I had posted on social media. In the original photo I was standing at a tourist site, but they placed it into a sensitive context,” Mr. Quy said.
The subject then demanded that Mr. Quy transfer cryptocurrency worth billions of dong to an e-wallet to keep the images off social networks. When Mr. Quy refused, they immediately spread the sensitive images online.
The Vietnam Center for Handling Fake News and Harmful Information (VAFC) has also recently recorded user reports of subjects using deepfake technology to cut, splice, and distribute sensitive photos on social networks in order to defame victims' honor, dignity, and reputation.
VAFC confirmed that this is a form of fraud exploiting deepfake technology - artificial intelligence (AI) capable of creating extremely realistic fake videos and images - to commit crimes.
The fraud groups' trick is to harvest images and personal information that victims share online (mainly men with social status), splice them into sensitive videos and clips, then send threatening messages and calls to extort money. Once the victim is "hooked", the subjects instruct them to buy cryptocurrency and transfer it to designated e-wallet accounts for appropriation.
Risks from publicly posted images
Talking to Lao Dong, Mr. Ta Cong Son - AI engineer and founder of OverBloom AI - said that AI-generated nude photos start from an original photo containing the victim's face. The subjects segment the face from the photo, then use AI techniques such as GANs or Stable Diffusion to composite it onto another image.
“Those who post portrait photos and images containing personal information (ID cards, licenses, passports...) on social networks are at high risk of being blackmailed in this way,” Mr. Son commented.
According to Mr. Son, it is not difficult to identify an AI-generated photo. Faces in these photos often have misaligned eyes, mismatched proportions, or asymmetrical features. Hairlines, especially near the ears or on the forehead, may look unnatural, blurred, or uneven.
The skin in such photos is often excessively smooth, lacking wrinkles and pores, or shows patches of unnatural blending. Objects in the photo may be distorted, out of place in the foreground, or inconsistent with the people in the image.
To prevent images from being misused, Mr. Son advises people to review and adjust privacy settings on social networking platforms to limit who can view and access their personal photos; to avoid sharing photos publicly with strangers; and to refrain from posting photos that could be abused, such as sensitive, revealing, or private images.
Do not overshare personal information online, including photos, addresses, phone numbers, or other sensitive details, and be cautious when befriending and sharing information with strangers.
According to VAFC, anyone blackmailed with AI-generated sensitive photos should retain all related evidence (messages, emails, images, videos...), refuse under any circumstances to transfer money to the blackmailer, and immediately contact the nearest police agency for assistance.