March 9, 2026
Anthropic launches code review tool to check flood of AI-generated code
Anthropic launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of code produced with AI.

TL;DR
- Anthropic's new product, Code Review, is an AI reviewer for code generated by AI tools.
- It aims to catch bugs, security risks, and poorly understood code before they enter a codebase.
- Code Review automatically analyzes pull requests and provides feedback directly on the code, focusing on logic errors.
- The tool uses multiple agents working in parallel to identify and prioritize issues, categorizing them by severity (red, yellow, purple).
- It integrates with GitHub and is available to Claude for Teams and Claude for Enterprise customers.
- The product is targeted at large enterprise users and aims to help manage the increased volume of AI-generated code.
- Pricing is token-based, with reviews estimated to cost $15 to $25 on average.
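
Anthropic has not published Code Review's internals, but the pattern the article describes, multiple agents reviewing a change in parallel and ranking findings by severity, can be sketched in a few lines. Everything below (the agent names, the checks, the `Finding` type) is hypothetical illustration, not Anthropic's implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Severity labels mirror the article's red/yellow/purple categories,
# ordered most severe first.
SEVERITY_ORDER = {"red": 0, "yellow": 1, "purple": 2}

@dataclass
class Finding:
    agent: str
    severity: str
    message: str

# Hypothetical specialist agents: each scans the diff text for one
# class of issue and returns zero or more findings.
def logic_agent(diff: str) -> list[Finding]:
    if "== None" in diff:
        return [Finding("logic", "red", "use 'is None', not '== None'")]
    return []

def security_agent(diff: str) -> list[Finding]:
    if "eval(" in diff:
        return [Finding("security", "red", "eval() on untrusted input")]
    return []

def style_agent(diff: str) -> list[Finding]:
    if "\t" in diff:
        return [Finding("style", "purple", "tab indentation in diff")]
    return []

def review(diff: str) -> list[Finding]:
    """Run all agents in parallel, merge their findings, most severe first."""
    agents = [logic_agent, security_agent, style_agent]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    merged = [finding for batch in results for finding in batch]
    return sorted(merged, key=lambda f: SEVERITY_ORDER[f.severity])
```

In a real system each agent would be a model call rather than a string check, but the shape is the same: fan the diff out to independent reviewers, then merge and prioritize what comes back.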