The line between real and fake blurs as users become targets of deepfakes
According to a 2025 cybersecurity survey of individual users conducted by the National Cyber Security Association, about 1 in 555 people is a victim of fraud.
Vu Duy Hung, an expert at Hung AI Creative, said that deepfakes are becoming one of the most serious challenges of the artificial intelligence era, as the boundary between real and fake grows increasingly blurred. Current AI tools can generate fake images, voices, and videos with high realism; they are easy to access, and their output is very difficult to distinguish with the naked eye.
According to the National Cyber Security Association, in the first quarter of 2025 alone, total global losses from deepfake fraud reached 200 million USD, with attacks occurring on average every five minutes.
In practice, deepfake scams are no longer just the subject of warnings; they now play out in very specific scenarios designed to exploit victims' psychology.
Ms. Nguyen Thu Lam (32), an accountant at a company in Hanoi, recounted that at the end of November 2025 she received a video call via Telegram from an account whose profile picture and name exactly matched those of a company leader. The caller's face and voice closely resembled her boss's, and he asked her to urgently transfer 370 million VND to settle a contract with a foreign partner.
“He spoke quickly, looked very impatient, and asked me not to involve other departments. The image and voice were so similar that I didn't suspect anything,” Ms. Lam recalled. Only after she had transferred the money and called the director's personal phone number did she realize, in shock, that she had been scammed.
Another case occurred in Bac Ninh. Mr. Ta Quang Hanh (45), a shop owner, said he received a call from an unknown number with a voice exactly like that of his son, who is studying in China. The caller said he had been in an accident and urgently needed 150 million VND transferred to cover hospital fees.
“His voice was trembling, and he used the correct family forms of address, so I didn't suspect anything. Only after transferring the money did I video-call my son and learn that nothing had happened,” Mr. Hanh recounted.
According to AI expert Vu Thanh Thang, Chairman of AIZ, fraud cases involving fake images, voices, and videos may be manifestations of deepfakes. The technology recreates people with such realism that victims struggle to detect the fake. Combined with time-pressure scripts and the impersonation of acquaintances or authorities, deepfakes become a particularly dangerous fraud tool, one that can deceive even people who understand the technology.
These cases share a common pattern: fraudsters collect image and voice data from social networks, then use AI tools to build highly personalized fraud scenarios.
Deepfakes are also exploited to create videos and images that attract views, sell goods, impersonate celebrities, or splice statements out of context for profit. In practice, deepfakes have become a real danger, creeping into daily life with increasing sophistication.
Tightening control of the technology platforms behind deepfake products
Mr. Tran Van Son, Deputy Director of the National Institute of Digital Technology and Digital Transformation (Ministry of Science and Technology), said that most current generative AI systems are classified as medium-risk because of their ability to produce misleading content or negatively affect users. These same systems form the technology platform behind deepfake products, enabling the creation of highly realistic fake images, audio, and video and increasing the risk of exploitation for fraud and other violations of the law.
The AI Law (the Law on Artificial Intelligence, effective from March 1, 2026) clearly stipulates the responsibilities of relevant entities in detecting incidents, applying technical measures to fix them, and, when necessary, temporarily suspending or recalling a system. Providers and deployers must also explain a system's purpose, operating principles, and risk management measures to the competent authorities.
Notably, the law strictly prohibits the use of deepfake technology for fraud or other violations of the law. Generative AI systems are required to label content created or edited by AI and to apply identification solutions that support management and traceability. A system that fails to meet risk-control requirements may be reclassified as high-risk and subjected to closer monitoring.
Mr. Tran Van Son emphasized that for serious violations, such as using deepfakes to abuse children or cause social disorder, the individuals and organizations involved face not only administrative sanctions but also possible criminal prosecution and liability for damages under the law.
From a legal perspective, Lawyer Tran The Anh, Deputy Director of XTVN Law Company Limited, said that although there is no specific offense covering deepfakes, acts that exploit the technology can still be handled under current regulations.
Using a deepfake to appropriate property may be prosecuted as fraudulent appropriation of property under Article 174 of the 2015 Penal Code (amended in 2017), which carries a maximum penalty of life imprisonment. Disseminating fake content that harms organizations or individuals may be prosecuted under Article 288 or Article 155 of the Penal Code.
Where the act is not serious enough for criminal prosecution, the violator may be administratively sanctioned under Decree 15/2020/ND-CP (amended and supplemented in 2022), with fines of 10 to 20 million VND for providing or sharing false information online.
Lawyers recommend that people remain vigilant toward unusual content in cyberspace, especially calls or videos requesting urgent money transfers. Verifying information through multiple independent channels is essential to avoid falling into fraud traps.
In addition, users should not casually share personal photos and videos on social networks, as these can become input data for bad actors creating deepfake products for fraud. When abnormal signs are detected, users should promptly report them to the relevant authorities for support.