March 1, 2026
The trap Anthropic built for itself
Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Now, in the absence of rules, there's not a lot to protect them.

TL;DR
- The Trump administration blacklisted Anthropic, preventing it from doing business with the Pentagon.
- Anthropic refused to allow its AI to be used for mass surveillance or autonomous armed drones.
- Max Tegmark argues that AI companies' resistance to binding regulation is what left them exposed to their current troubles.
- Companies like Anthropic, OpenAI, and Google DeepMind have broken safety pledges and resisted external rules.
- The absence of AI regulation means these companies face less oversight than a sandwich shop.
- The argument that regulation would cede AI dominance to China is challenged, as China itself is banning certain AI applications.
- Superintelligence is framed as a national security threat rather than an asset.
- AI progress has been rapid, with human-level language and knowledge mastery achieved faster than predicted.
- Tegmark believes treating AI companies like any other business, with mandatory safety trials, could lead to a beneficial AI future.
- Sam Altman of OpenAI stated he shares Anthropic's red lines regarding AI development.