OpenAI has just issued a notable warning about a "missing" security weakness in AI-powered web browsers, suggesting that this vulnerability may never be completely fixed.
According to developer ChatGPT, malware injection attacks, also known as prompt injection, are becoming a persistent threat to modern AI systems.
AI browsers such as ChatGPT Atlas or Perplexity Comet are gradually changing the way users search and interact with information on the internet.
Instead of just displaying a list of links, these tools can read, synthesize, and perform user-replace tasks.
However, it is the ability to understand the context that makes them an attractive target for cybercriminals.
malware attacks are a form of attack in which bad guys disguise malicious instructions into seemingly valid content, in order to deceive large language models such as GPT, Gemini or Llama.
When caught in traps, AI can accidentally leak sensitive data, ignore user requests or spread false information.
For example, a sophisticatedly designed email can cause AI agents to forward tax documents or internal information to attackers.
OpenAI admits that despite constant patching and improving its defense system, prompt injection is a type of attack that is very difficult to completely eliminate.
In a blog post, the company compared this form of attack to traditional web scams and techniques, which are threats that have existed for decades but have never been completely erased.
According to OpenAI, the most realistic approach is to constantly upgrade the protective layers, instead of expecting a "final solution".
A report from browser Brave also pointed out that the root cause of the problem lies in the nature of factor-based AI browsers.
These models have difficulty distinguishing between what content needs to be extracted to respond to users and what instructions they must follow. This gray area creates conditions for harmful indicators to penetrate and take control of AI behavior.
To cope, OpenAI said it has built an automatic attack tool based on the large language model itself, to proactively detect dangerous prompt injection scenarios.
The tool is trained to act as an attacker, thereby helping engineers identify and patch weaknesses before being exploited in practice.
Not only OpenAI, the UK National Cyber Security Service (NCSC) has also made similar comments, saying that malware attacks targeting new-generation AI applications may never be completely minimized.
This poses a big challenge in data protection, especially when AI is increasingly integrated into online services and business systems.
OpenAI has not yet released details on whether the new automatic attack tool is capable of countering rapid source attacks.
However, the company said it is working with many third-party partners to enhance security for ChatGPT Atlas, even though the browser has not yet been officially launched.
OpenAI's warning is seen as a reminder that the AI browser era, despite its great potential, comes with cybersecurity risks that cannot be underestimated.