February 24, 2026

Anthropic Doesn't Want Its AI Killing People. The Pentagon Isn't Happy.

TL;DR

  • Anthropic is in a public dispute with the Pentagon regarding its AI's acceptable use policy.
  • The Pentagon is pushing for "any lawful use" of AI services, which Anthropic believes could lead to mass surveillance and lethal autonomous weapons.
  • The Pentagon has threatened to designate Anthropic as a "supply chain risk," potentially jeopardizing its $200 million contract and relationships with other defense contractors.
  • Anthropic's CEO, Dario Amodei, is set to meet with Pentagon CTO Emil Michael to negotiate the terms.
  • Anthropic's refusal stems from its policy prohibiting autonomous kinetic operations and mass domestic surveillance, citing concerns over civil liberties and the immaturity of the current technology.
  • Existing government directives support Anthropic's position on human judgment over the use of force and restrictions on collecting information on U.S. persons.
  • The Pentagon, under Secretary Pete Hegseth, is prioritizing speed and an "AI-first" warfighting force, an approach that risks overlooking safety and alignment concerns.
  • Other AI companies like OpenAI and xAI have reportedly agreed to the "any lawful use" terms, but their models may not meet the necessary security classifications to replace Anthropic's Claude.
  • Anthropic's Claude is currently the only frontier AI model operating on fully classified Pentagon networks.
