March 24, 2026
Anthropic hands Claude Code more control, but keeps it on a leash
Anthropic’s new auto mode for Claude Code lets the AI execute tasks with fewer approval prompts, reflecting a broader shift toward more autonomous tools that balance speed with safety through built-in safeguards.

TL;DR
- Anthropic's new 'auto mode' for Claude Code allows the AI to decide which actions are safe to take autonomously.
- The feature uses AI safeguards to review actions, blocking risky behavior and prompt injection.
- Auto mode aims to balance the speed of AI execution with user control and safety.
- It builds on existing autonomous coding tools but shifts the permission-seeking decision to the AI.
- The feature is currently in research preview and is rolling out to Enterprise and API users on specific Claude versions; Anthropic recommends running it in isolated environments.