According to Kaspersky security experts, cybercriminals exploit AI in three main forms of attack:
- ChatGPT can be used to develop malware and automatically deploy attacks against multiple victims.
- AI programs infiltrate users' data on smartphones. By analyzing sensitive data, attackers can steal the victim's messages, passwords and bank codes.
- Swarm intelligence algorithms help botnets restore malicious networks that security solutions have taken down.
Kaspersky's comprehensive research on using AI to crack passwords shows that most passwords are stored as hashes computed with algorithms such as MD5 and SHA.
Converting a password into a hash is a single, simple operation, but reversing that process is a major challenge. Password databases, however, leak regularly, affecting young startups and leading technology companies alike.
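A minimal Python sketch of that one-way property (the password and algorithm choices here are illustrative, not drawn from Kaspersky's research): computing the hash is one cheap call, and it is exactly this speed that makes large-scale guessing attacks feasible.

```python
import hashlib

password = "S3cret!2024"  # example password, not from any real leak

# Hashing is one fast, deterministic operation...
md5_digest = hashlib.md5(password.encode()).hexdigest()
sha256_digest = hashlib.sha256(password.encode()).hexdigest()

print("MD5:    ", md5_digest)
print("SHA-256:", sha256_digest)

# ...but there is no inverse function: recovering the password from a
# digest means guessing candidates and hashing each one, which is why
# fast, unsalted hashes like MD5 are considered unsafe for passwords.
```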
In July 2024, the largest password compilation in history was leaked online, comprising some 10 billion lines with 8.2 billion unique plaintext passwords.
Mr. Alexey Antonov, Head of Data Science at Kaspersky, said that 78% of user passwords can be cracked in a variety of ways, while only 7% are strong enough to withstand a long-term attack.
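To illustrate what "cracked in a variety of ways" can mean in practice, here is a hedged sketch of the simplest approach, a dictionary attack against an unsalted MD5 hash; the wordlist and target password are invented for the example:

```python
import hashlib

# Hypothetical leaked hash; real attacks test billions of guesses per second.
target_hash = hashlib.md5(b"dragon123").hexdigest()

# Tiny stand-in for a real wordlist (millions of entries from past leaks).
wordlist = ["password", "123456", "qwerty", "dragon123", "letmein"]

for guess in wordlist:
    if hashlib.md5(guess.encode()).hexdigest() == target_hash:
        print(f"Cracked: {guess}")
        break
else:
    print("Not in wordlist")
```

AI-assisted cracking extends this idea: instead of iterating a static wordlist, a model trained on leaked passwords proposes the guesses humans are most likely to have chosen.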
Using AI for non-technical attacks (Social Engineering)
Through AI, bad actors can generate fraudulent content, including text, images, audio and video, to mount social engineering attacks. Large language models such as ChatGPT-4o are used to produce highly sophisticated phishing scripts and messages.
Because AI overcomes language barriers, it can write a convincing email based on nothing more than information gathered from social networks, and can even imitate the victim's writing style. This makes such scams far harder to detect.
At the same time, deepfakes, once regarded as a scientific research product, have become a genuine cybersecurity problem. Impersonating celebrities for financial gain is the most common scheme, but fraudsters also use deepfakes to hijack accounts and place impersonated calls to victims' friends and relatives in order to extract money.
In February 2024, a fraudulent video call took place in Hong Kong (China). Staging an online meeting, the fraudsters used deepfake technology to impersonate the company's chief financial officer, convincing a finance employee to transfer 25 million USD.
AI security vulnerabilities
Besides exploiting AI technology for illegal activities, attackers can also target AI algorithms themselves, in two ways (see the sketch after this list):
- Prompt injection attacks: feeding malicious instructions to large language models, even instructions that override the model's built-in restrictions.
- Adversarial attacks: embedding hidden perturbations in images or audio to corrupt a machine learning system's ability to classify them.
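A hedged sketch of the second technique, using the classic fast gradient sign method (FGSM) on a toy NumPy logistic regression classifier; all data and parameters here are synthetic assumptions, not from Kaspersky's research. The point is that a perturbation too small to notice can flip the model's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression on a
# 64-dimensional input (think of it as an 8x8 grayscale "image").
w = rng.normal(size=64)
b = 0.0

def predict_proba(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input that the model confidently classifies as class 1.
x_clean = 0.05 * w / np.abs(w).mean()
print(f"clean input -> P(class 1) = {predict_proba(x_clean):.3f}")

# FGSM: for true label y = 1, the gradient of the logistic loss with
# respect to the input is (p - y) * w. Stepping a small epsilon in the
# sign of that gradient maximally increases the loss per unit of change.
y = 1.0
grad = (predict_proba(x_clean) - y) * w
epsilon = 0.15  # small enough to be imperceptible in a real image
x_adv = x_clean + epsilon * np.sign(grad)

print(f"adversarial -> P(class 1) = {predict_proba(x_adv):.3f}")
print(f"max per-pixel change: {np.max(np.abs(x_adv - x_clean)):.3f}")
```

Each coordinate of the input moves by at most epsilon, yet the accumulated effect across all 64 dimensions is enough to push the classifier to the opposite label.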
AI is gradually being integrated into every aspect of human life, from Apple Intelligence and Google Gemini to Microsoft Copilot. Addressing AI vulnerabilities should therefore be a top priority.