A recent study by cybersecurity company Kaspersky shows that the number of cyber attacks faced by organizations has risen by nearly half (49%) over the past 12 months.
The most common threat comes from phishing scams. Half of the survey respondents expect the number of phishing attacks to increase significantly as cybercriminals increasingly exploit AI.
Cybersecurity experts also point to four ways AI has made phishing scams more dangerous and unpredictable.
AI Personalizes Phishing Attacks
Traditionally, phishing attacks relied on blasting out the same generic messages to thousands of people, luring a few into a trap. AI has changed that by creating sophisticated, personalized phishing emails at scale.
AI-integrated tools can harvest publicly available personal information from social networks, recruitment sites, or company websites to craft tailored emails that match each individual's role, interests, and communication style.
For example, a CFO might receive a phishing email that copies the tone and style of a message from the CEO, even referencing recent company events. This level of customization makes it difficult for employees to distinguish a genuine message from a phishing attempt.
AI Makes Deepfakes More Dangerous
AI-powered deepfake technology has also become a potent weapon in cybercriminals' scams.
Attackers exploit this technology to create fake audio and video clips, simulating the voices and appearances of leaders and managers with an astonishing level of accuracy.
For example, in one documented case, an attacker used deepfakes to impersonate multiple employees in an online meeting, convincing one employee to transfer approximately $25.6 million.
As deepfake technology continues to evolve, attacks of this type are expected to become more widespread and sophisticated.
AI Helps Attackers Bypass Traditional Security Methods
Cybercriminals can use AI to fool traditional email filtering systems. By analyzing and mimicking legitimate email patterns, AI-generated phishing emails can bypass security software checks.
In addition, machine learning algorithms can test and refine scams in real time, increasing the success rate and making scams increasingly sophisticated.
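To see why rigid, signature-style filters are easy to evade, consider this minimal sketch. The filter logic, phrase list, and sample messages below are all hypothetical illustrations, not a real security product; production filters are far more sophisticated, but the same principle applies: a message reworded by AI can avoid matching any known pattern while carrying the same malicious request.

```python
# Illustrative sketch: a naive keyword-based phishing filter, and how a
# reworded (e.g., AI-paraphrased) message slips past it. The phrase list
# and messages are hypothetical examples for demonstration only.

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent wire transfer",
    "click here immediately",
]

def naive_filter(message: str) -> bool:
    """Return True if the message contains a known suspicious phrase."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A crude template email trips the keyword list...
template_scam = "URGENT wire transfer needed - click here immediately!"
print(naive_filter(template_scam))   # True: flagged

# ...but a paraphrased version of the same request sails through,
# because none of the fixed phrases appear verbatim.
rewritten_scam = ("Hi Dana, per our call this morning, could you process "
                  "the vendor payment today? The CFO signed off already.")
print(naive_filter(rewritten_scam))  # False: not flagged
```

The second message makes the same fraudulent payment request, yet matches nothing in the filter, which is precisely the gap AI-generated phishing exploits at scale.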
Even Experts Can Fall Into AI Traps
Even experienced cybersecurity professionals fall victim to sophisticated phishing attacks. The authenticity and personalization of AI-generated content can overcome the habitual skepticism that normally keeps these professionals on guard.
Furthermore, these attacks often exploit psychological levers such as urgency, fear, or authority, pressuring employees to act before thoroughly verifying the authenticity of the request.