OpenAI has just issued a warning that its new generation of AI models could pose a serious cybersecurity risk if misused.
According to the company, these models could be used to develop zero-day exploits or to penetrate complex enterprise networks, with serious real-world consequences.
In a blog post on December 10 (local time), OpenAI said it is investing heavily in training AI to perform defensive cybersecurity tasks, while developing tools that help security teams quickly find and patch vulnerabilities.
OpenAI is not alone: other major technology companies are also strengthening the defenses of their AI systems.
Google recently announced improvements to Chrome's security architecture to counter sophisticated attacks aimed at seizing control of AI agents, in preparation for the widespread deployment of Gemini.
In November 2025, Anthropic revealed that a cyber attack group had attempted to manipulate Claude Code, but the campaign was thwarted.
AI's capabilities in cybersecurity are advancing rapidly. OpenAI said GPT-5.1-Codex-Max solved 76% of capture-the-flag (CTF) challenges, up sharply from GPT-5's 27% in August, evidence of how quickly AI's cyber defense and attack capabilities are developing.
To minimize risks, OpenAI applies a multi-layered security system spanning access control, infrastructure security, egress controls, and system monitoring. Specific measures include:
- Training models to refuse or safely respond to harmful requests while remaining useful for education and defense.
- Monitoring systems end to end to detect suspicious network activity.
- Working with red-teaming experts to evaluate and improve risk-mitigation measures.
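Two of the layers above, egress control and network monitoring, can be illustrated with a minimal sketch. The host names, event format, and threshold below are illustrative assumptions, not details of OpenAI's actual system.

```python
# Hypothetical sketch of an egress allow-list (exit control) and simple
# traffic monitoring; all names and thresholds here are illustrative.

ALLOWED_EGRESS_HOSTS = {"api.example.com", "updates.example.com"}  # assumed allow-list

def egress_allowed(host: str) -> bool:
    """Exit control: permit outbound connections only to vetted hosts."""
    return host in ALLOWED_EGRESS_HOSTS

def flag_suspicious(events: list[dict], max_requests_per_host: int = 100) -> list[str]:
    """Monitoring: flag hosts that receive an unusually high number of requests."""
    counts: dict[str, int] = {}
    for event in events:
        counts[event["host"]] = counts.get(event["host"], 0) + 1
    return [host for host, n in counts.items() if n > max_requests_per_host]

# Usage: one host far exceeds the request threshold and is flagged.
events = [{"host": "api.example.com"}] * 150 + [{"host": "updates.example.com"}] * 3
print(egress_allowed("api.example.com"))   # True
print(egress_allowed("evil.example.net"))  # False
print(flag_suspicious(events))             # ['api.example.com']
```

A production system would layer these checks with authentication, rate limiting, and anomaly detection rather than a fixed request count.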
OpenAI is also testing Aardvark, an AI assistant that scans source code for vulnerabilities and suggests quick patches; it is expected to be offered free of charge to some non-commercial open-source repositories.
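At its simplest, source-code vulnerability scanning can be sketched as pattern matching over lines of code. The sketch below is a naive, rule-based illustration of the idea only; Aardvark itself is AI-driven and far more capable, and the patterns here are illustrative assumptions.

```python
import re

# Illustrative risky patterns; real scanners use much richer analysis.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bpickle\.loads\(": "deserializing untrusted data with pickle",
    r"password\s*=\s*[\"']": "hard-coded credential",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

# Usage: both lines of this snippet trigger a finding.
sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan_source(sample))
# [(1, 'hard-coded credential'), (2, 'use of eval() on dynamic input')]
```

An AI-based tool improves on this by understanding code semantics and data flow, which lets it catch vulnerabilities no fixed pattern list can express.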
The company has also established a Frontier Risk Council of outside cybersecurity experts and launched a trusted-access program for users and developers.
These efforts underline OpenAI's preparations for a future of increasingly sophisticated AI-enabled threats and its aim of keeping the global technology community safe.