Code Review, a new tool integrated directly into the Claude Code platform, aims to catch errors early and improve software quality.
In software development, peer code review plays an important role in catching errors, ensuring consistency, and maintaining system quality.
However, the emergence of AI-based programming tools has significantly changed this process. Many developers now use AI to generate code from natural-language instructions, a trend sometimes called "vibe coding".
While this approach accelerates development, it also raises the risk of logic errors, security vulnerabilities, and hard-to-understand code.
When AI generates a large amount of code in a short time, the number of code update requests (pull requests) also rises sharply, putting heavy pressure on review teams.
According to Cat Wu, head of product at Anthropic, many business leaders have asked how to ensure that AI-generated pull requests are reviewed effectively.
"We found that Claude Code creates a lot of pull requests, and that creates a bottleneck in the software release process. Code Review was built to solve this problem," Wu said.
The new tool is designed to automatically analyze pull requests and leave comments directly on the source code.
Once activated, the system integrates with GitHub, allowing the AI to evaluate changes before they are merged into the main codebase.
Rather than focusing on formatting or style violations, Code Review prioritizes detecting logic errors, the kind of issue that can cause serious failures in software. For each finding, the AI explains the problem in detail, why it poses a risk, and how to fix it.
Findings are also color-coded by severity: red for the most serious bugs, yellow for issues worth considering, and purple for problems in pre-existing code or bugs that have been reported before.
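That color scheme can be modeled as a simple severity ordering. The sketch below is illustrative only: the article names the colors, but the labels and sort order here are assumptions, not Anthropic's actual data model.

```python
from enum import Enum

# Hypothetical model of Code Review's color-coded severity levels.
# The article only describes the colors; these labels are illustrative.
class Severity(Enum):
    RED = "most serious bug"
    YELLOW = "issue to consider"
    PURPLE = "pre-existing / legacy issue"

# Lower number = higher priority in the review report.
ORDER = {Severity.RED: 0, Severity.YELLOW: 1, Severity.PURPLE: 2}

def triage(findings):
    """Sort findings so red (most serious) issues surface first."""
    return sorted(findings, key=lambda f: ORDER[f[0]])

findings = [(Severity.PURPLE, "old null check"),
            (Severity.RED, "off-by-one in loop"),
            (Severity.YELLOW, "unclear variable name")]
print([msg for _, msg in triage(findings)])
# → ['off-by-one in loop', 'unclear variable name', 'old null check']
```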
To do this, Anthropic uses a "multi-agent" architecture: multiple AI agents check the code in parallel from different angles, then a synthesis agent analyzes the results, removes duplicates, and prioritizes the most important issues.
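The general fan-out/synthesize pattern the article describes can be sketched as follows. The agent functions below are stand-in placeholders, not Anthropic's implementation; only the overall shape (parallel reviewers, then deduplication and ranking) follows the description.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in reviewer agents, each inspecting the same diff from one angle.
def logic_agent(diff):
    return [("high", "possible off-by-one in range bound")]

def security_agent(diff):
    return [("high", "possible off-by-one in range bound"),
            ("medium", "user input not sanitized")]

def style_agent(diff):
    return [("low", "inconsistent naming")]

RANK = {"high": 0, "medium": 1, "low": 2}

def synthesize(agent_results):
    """Deduplicate findings, keep the most severe label, rank the rest."""
    seen = {}
    for severity, message in (f for result in agent_results for f in result):
        if message not in seen or RANK[severity] < RANK[seen[message]]:
            seen[message] = severity
    return sorted(seen.items(), key=lambda item: RANK[item[1]])

diff = "example diff text"
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(agent, diff)
               for agent in (logic_agent, security_agent, style_agent)]
    results = [f.result() for f in futures]

report = synthesize(results)
print(report[0])  # highest-priority finding comes first
```

Running the reviewers concurrently mirrors the "work in parallel" claim; the synthesis step is where duplicate findings from overlapping agents are collapsed.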
In addition to detecting logic errors, Code Review also provides basic security analysis, and engineering leads can add custom review rules based on internal company standards.
For deeper security needs, Anthropic said that businesses can use a separate product called Claude Code Security.
Code Review is currently available in preview for customers on the Claude for Teams and Claude for Enterprise plans, with a particular focus on large enterprises such as Uber, Salesforce and Accenture.
According to Anthropic, the service will be billed by token usage, like its other AI services. Each code review is expected to cost roughly 15-25 USD, depending on the complexity of the code.
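As a rough illustration of token-metered billing, the sketch below shows how a per-review cost could land in the 15-25 USD range the article cites. The rate and token counts are made-up assumptions, not Anthropic's actual prices.

```python
# Hypothetical blended rate in USD per million tokens (an assumption,
# chosen only so example reviews fall in the article's 15-25 USD range).
RATE_PER_MTOK = 20.0

def review_cost(tokens_used):
    """Cost of one review under simple per-token billing."""
    return tokens_used / 1_000_000 * RATE_PER_MTOK

print(review_cost(750_000))    # smaller diff  → 15.0
print(review_cost(1_000_000))  # larger diff   → 20.0
```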
Anthropic believes that as AI generates an ever-larger share of code, demand for automated review tools will rise sharply. The company expects Code Review to help businesses ship software faster while significantly reducing the number of bugs that reach release.