Artificial intelligence company Anthropic has just introduced a new feature called Claude Code Security, which allows AI programming assistants to scan source code and propose patches to fix security vulnerabilities.
The tool is integrated directly into the web version of Claude Code and is intended to help development teams detect risks that traditional methods may overlook.
According to Anthropic's announcement, Claude Code Security does not just match known error patterns the way many current static analysis tools do; it also reads and reasons about code structure much as a human security expert would.
The system traces data flows through the application and analyzes how components interact with each other, allowing it to detect complex vulnerabilities, including weaknesses that are hard to recognize with predefined rules.
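To illustrate the kind of issue that data-flow analysis can surface (this sketch is ours, not Anthropic's, and the function names are hypothetical), consider a SQL injection that only becomes visible when untrusted input is traced across several functions; a rule that inspects each function in isolation would likely miss it.

```python
import sqlite3

# Illustrative only: a vulnerability that spans several functions.
# No single function looks obviously wrong; tracing the data flow from
# the request to the database sink reveals the SQL injection.

def read_username(request: dict) -> str:
    # Step 1: untrusted input enters the application.
    return request.get("username", "")

def build_query(username: str) -> str:
    # Step 2: the tainted value is concatenated into SQL far from its source.
    return f"SELECT * FROM users WHERE name = '{username}'"

def fetch_user(conn: sqlite3.Connection, request: dict):
    # Step 3: the tainted query reaches the database sink.
    query = build_query(read_username(request))
    return conn.execute(query).fetchall()  # vulnerable: input is never sanitized

def fetch_user_safe(conn: sqlite3.Connection, request: dict):
    # The kind of patch a reviewer (or an AI assistant) would propose:
    # a parameterized query instead of string concatenation.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?",
        (read_username(request),),
    ).fetchall()
```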
The new feature is currently limited to select paid Claude Enterprise and Team customers, while maintainers of open-source repositories may be given priority for early access.
The launch comes as more and more non-professionals use AI tools to build websites and applications while lacking the security knowledge to review the code the AI generates.
A recent report by Tenzai (a technology research and development unit) shows that websites built with tools from OpenAI, Anthropic, Cursor, Replit or Devin can be exploited to leak sensitive data or inadvertently transfer money to attackers if the code is not carefully reviewed.
Anthropic emphasizes that although the AI can propose patches, the final decision remains with humans. Claude Code Security follows a multi-stage verification process, filtering out false positives and re-evaluating findings before displaying them on a unified dashboard.
Vulnerabilities are ranked both by severity and by the system's confidence in each finding.
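Anthropic has not published the format of these findings; as a rough sketch of what ranking by severity and confidence could look like in practice (the field names and values below are hypothetical), a dashboard might order results like this:

```python
from dataclasses import dataclass

# Hypothetical finding record; Anthropic has not published the dashboard's actual schema.
@dataclass
class Finding:
    title: str
    severity: int      # e.g. 1 (low) .. 5 (critical)
    confidence: float  # system's confidence that the finding is a true positive, 0..1

findings = [
    Finding("SQL injection in user lookup", severity=5, confidence=0.92),
    Finding("Hard-coded API key in config", severity=4, confidence=0.99),
    Finding("Possible path traversal in file upload", severity=4, confidence=0.55),
]

# Surface the most severe, most credible findings first.
for f in sorted(findings, key=lambda f: (f.severity, f.confidence), reverse=True):
    print(f"[sev {f.severity} / conf {f.confidence:.2f}] {f.title}")
```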
Notably, earlier this month Chief Product Officer Mike Krieger, who oversees product engineering, product management and design at Anthropic, revealed that the company's AI programming tools are being used internally to produce almost all of its product source code.
"Claude was written by Claude itself," Krieger said, emphasizing the high degree of automation in the development process.
Regarding testing, Anthropic said Claude Code Security has been evaluated in competitive Capture-the-Flag events and in a collaboration with the Pacific Northwest National Laboratory (US Department of Energy) to assess how AI can help protect critical infrastructure.
According to the company, its research team has discovered more than 500 previously undisclosed vulnerabilities in open-source projects using the Claude Opus 4.6 model.
Anthropic is now coordinating with the community on responsible disclosure and continuing to expand its security efforts across the open-source software ecosystem.