According to the Identity Fraud 2025 report, a deepfake attack occurs on average once every five minutes. A deepfake (a name combining "deep learning" and "fake") is synthetic media that simulates a person's face or voice.
The World Economic Forum predicts that by 2026, up to 90% of online content could be generated using AI (artificial intelligence). At first glance, many people might assume that the main targets of deepfake and AI phishing attacks are celebrities or high-profile figures.
In reality, the targets and motives mirror those of earlier scams: ordinary users, whose personal information, banking, and payment details hold value, and businesses, where valuable data and assets are stored.
Ways AI is being used to steal your data
Kaspersky cybersecurity experts point out that phishing is becoming increasingly sophisticated. Large language models (LLMs) allow attackers to create personalized messages and websites that are highly convincing, with correct grammar, logical structure, and smooth paragraphs.
As phishing becomes a global threat, attackers can also target individuals whose languages they do not speak fluently, thanks to AI's content generation capabilities. They can even replicate the writing style of specific individuals, such as business partners or colleagues, by analyzing their social media posts or other content associated with them.
In addition, audio and video deepfakes are becoming harder to detect as AI develops. Attackers can fake your voice and likeness to request urgent money transfers or sensitive information, exploiting interpersonal trust to commit fraud at both the personal and business levels.
Many new victims
Recently, a victim fell into such a trap after receiving a notification that they had been selected by Elon Musk to invest in a new project and were invited to an online meeting. At the appointed time, a deepfake of Elon Musk presented details of the project to a group of attendees and then asked for financial contributions, causing the victim significant losses.
Deepfakes aren’t limited to financial investment scams. Another example is AI romantic scams, where deepfakes are used to create fictional characters that interact with victims over video calls.
After gaining the victim's trust, the scammer asks for money to cover emergencies, travel expenses, or loans. Recently, more than two dozen people involved in such scams were arrested after stealing $46 million from victims in Taiwan, Singapore, and India.