Anthropic has introduced a new Code Review feature for Claude Code, adding an agent-based pull request review system that analyzes code changes using multiple AI reviewers. The feature is available in research preview for Team and Enterprise users.
The system automatically runs when a pull request is opened and dispatches several agents to inspect the changes in parallel. According to Anthropic, the agents search for potential bugs, verify findings to reduce false positives, and rank issues by severity before posting a summary review and inline comments on the pull request.
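The described pipeline — parallel reviewer agents, a verification pass to cut false positives, and severity-ranked output — can be sketched as follows. This is a minimal illustration only: the agent roles, severity scale, and `verify` step are assumptions for the example, not Anthropic's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    message: str
    severity: int  # 1 (low) .. 3 (high) — assumed scale
    verified: bool = False

def bug_hunter(diff: str) -> list[Finding]:
    # Placeholder reviewer: flags bare "except:" clauses as a demo check.
    return [Finding("bug_hunter", f"bare except on line {i + 1}", 2)
            for i, line in enumerate(diff.splitlines())
            if line.strip() == "except:"]

def security_reviewer(diff: str) -> list[Finding]:
    # Placeholder reviewer: flags eval() calls as a demo check.
    return [Finding("security", f"eval() call on line {i + 1}", 3)
            for i, line in enumerate(diff.splitlines())
            if "eval(" in line]

def verify(finding: Finding) -> Finding:
    # Stand-in for the second-pass verification that reduces false positives.
    finding.verified = True
    return finding

def review(diff: str) -> list[Finding]:
    agents = [bug_hunter, security_reviewer]
    # Dispatch all reviewer agents over the same diff in parallel.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    findings = [verify(f) for batch in results for f in batch]
    # Keep verified findings, ranked by severity (highest first).
    return sorted((f for f in findings if f.verified),
                  key=lambda f: f.severity, reverse=True)

diff = "try:\n    x = eval(user_input)\nexcept:\n    pass"
for f in review(diff):
    print(f"[{f.severity}] {f.agent}: {f.message}")
```

In a real system each "agent" would be a model call with its own prompt and focus area rather than a string check; the fan-out/verify/rank shape is the point of the sketch.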
Anthropic said the number of agents assigned scales with pull request size and complexity. Larger or more complex changes receive deeper analysis, while smaller changes receive lighter review passes. The company reports that average review time is around 20 minutes.
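A scaling rule of this kind might look like the heuristic below. The thresholds and agent counts are illustrative assumptions; Anthropic has not published the formula it uses.

```python
def agents_for_pr(lines_changed: int, files_touched: int) -> int:
    """Hypothetical heuristic: scale reviewer-agent count with PR size
    and complexity. All thresholds here are invented for illustration."""
    if lines_changed < 50:
        return 1  # light pass for small changes
    if lines_changed < 1000:
        # Medium PRs: add agents as more files are touched, capped at 4.
        return 2 + min(files_touched // 10, 2)
    return 5      # deepest analysis for large PRs
```

The key design property is monotonicity: a larger or more complex change never gets fewer reviewers than a smaller one.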
Internally, Anthropic says it has used the system on most of its own pull requests for several months. According to the company, substantive review comments increased from 16% of pull requests to 54% after adoption. On pull requests with more than 1,000 lines changed, Anthropic reports that 84% generated findings, with an average of 7.5 issues identified. For pull requests under 50 lines, 31% generated findings, averaging 0.5 issues.
Anthropic stated that fewer than 1% of findings were marked incorrect by engineers during internal use. The company said the tool is designed to support, rather than replace, human reviewers and does not approve pull requests automatically.
Community reactions to Anthropic’s Code Review announcement were generally positive, with developers highlighting the reported depth of analysis and multi-agent approach as notable differentiators from lighter AI review tools. Some commenters said the pricing may limit adoption for smaller teams, while others questioned whether the reported 20-minute review time and $15–25 cost per pull request would be practical for high-volume engineering workflows.
AI researcher Nir Zabari commented:
Sounds good on the surface, but it doesn't share any technical details (like what each parallel agent focuses on) or explain why it's better than other tools, besides saying that it costs $15–25 (based on current Opus pricing, let's say a range of ~3M tokens). In other words, worth going open source on such features...
Meanwhile, user @rohini posted:
Claude is writing the code and Claude is reviewing it ? This does not even meet minimum safety standard.
The release places Anthropic more directly into the AI code review market, where tools such as GitHub Copilot's code review features and CodeRabbit already offer automated pull request analysis. Anthropic's differentiation is its multi-agent review architecture and its emphasis on deeper, slower analysis rather than lightweight review passes.