New generation web browsers integrated with artificial intelligence such as OpenAI's ChatGPT Atlas and Perplexity's Comet are being promoted as the smart door to the Internet.
These web- browsing AI agents can automatically click, read pages, fill in forms and complete multiple user-replace tasks.
However, behind that convenience are serious security risks that experts are constantly warning of.
Cybersecurity researchers say that AI browsers may pose a greater risk than traditional browsers, as they require deep access to email, calendar, and user history.
In the tests, ChatGPT Atlas and Comet worked well with simple tasks, but were often slow and error-prone when handling complex situations.
Rapid malware entry vulnerability is the biggest threat
The biggest risk lies in rapid malware entry attacks, where bad guys hide malicious commands in website content. When an AI user scans the page, it can accidentally execute a blogger's command, leading to leakage of email, password, or performing unwanted actions, such as posting or online transactions.
Experts call this a " systematic problem" of the entire AI processing industry. According to Brave, a security browser developer, attacks of this type are becoming more sophisticated and difficult to detect.
Shivan Sahib, senior engineer at Brave warned: When a browser does everything for you, the risk also increases accordingly. It is a new step forward but full of danger.
AI companies admit vulnerabilities
OpenAI's Director of Information Security, Dane Stuckey, admitted that malware vulnerability is still an unresolved problem. Similarly, Perplexity's security team also affirmed that this is an issue that requires a review of the entire security approach.
To minimize risks, OpenAI introduces a posting mode, in which the actor does not log into the user account while browsing the web, helping to limit data access.
Perplexity also added a real-time attack detection system, but experts say these measures are only temporary.
Warning for users
Steve Grobman, Chief Technology Officer of McAfee, said the problem lies in the large language models themselves: AI doesnt really understand where orders come from, and thats why these attacks are difficult to intercept.
Meanwhile, Rachel Tobac, CEO of Social Proof Security, recommends that users should use a single password, activate multi-factor authentication, and limit AI browsers' access to sensitive data such as banking or health records.
Technology will gradually become safer, but for now, it is best not to give AI too much control, Ms. Rachel Tobac emphasized.
 
  
  
  
  
  
  
  
  
  
  
  
 