This is one of the key findings recently published by global cybersecurity company Kaspersky.
Sergey Lozhkin, Head of Kaspersky's Global Research and Analysis Team (GReAT) for the Middle East, Turkey, Africa and Asia-Pacific, emphasized that since ChatGPT rose to worldwide popularity in 2023, the company's cybersecurity experts have recorded a rapid increase in the use of AI (artificial intelligence) for many purposes, from everyday tasks such as video creation to technical ones such as detecting and analyzing threats.
At the same time, however, malicious actors are also leveraging AI to enhance their cyber attack capabilities. "We are entering a new era of cybersecurity and society, in which AI serves as a shield for defense while Dark AI is being turned into a dangerous weapon for cyber attacks," said Lozhkin.
The term Dark AI refers to the local or remote deployment of unrestricted large language models (LLMs), within a complete framework or chatbot system, for malicious, unethical or illegal purposes.
These systems operate outside safety, compliance and governance standards, often enabling fraud, manipulation, cyber attacks or data harvesting without meaningful oversight.
According to Lozhkin, the most common form of AI abuse today is the rise of Black Hat GPT models, which first emerged in mid-2023.
These are AI models developed or specifically modified for unethical and illegal purposes, such as creating malware, drafting highly convincing fraudulent emails for both large-scale campaigns and attacks on specific targets, and generating fake voices and deepfake videos.
Black Hat GPTs can exist as fully private or semi-private AI models and are designed to serve cybercrime, fraud and malicious automation.
To help organizations strengthen their defense against Dark AI threats, cybersecurity experts recommend:
- Use next-generation security solutions to detect AI-generated malware and manage supply chain risks.
- Apply real-time threat intelligence tools to monitor AI-powered exploitation of vulnerabilities.
- Strengthen access controls and staff training to limit Shadow AI (the unsanctioned use of AI tools within an organization) and the risk of data leakage.
- Establish a Security Operations Center (SOC) to track threats and respond quickly to incidents.