Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia
The Pentagon just put generative AI on the classified menu — but only for companies willing to bend to its rules. One major lab, Anthropic, chose not to, and now finds itself locked out while its rivals cash in.
On May 1, U.S. defense officials rolled out a new wave of agreements that quietly drag the latest commercial AI systems into the heart of the national security state.
First came word that the Pentagon had struck deals with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk’s xAI and the startup Reflection to let the Department of Defense run their models on classified systems. The aim, officials said, is nothing less than remaking the armed forces as an “AI‑first fighting force,” with these tools available for “lawful operational use” on secret networks.
Follow‑up reporting detailed the rollout: after earlier arrangements with Google, SpaceX and OpenAI, the Pentagon signed fresh contracts with Nvidia, Microsoft, Amazon Web Services and Reflection AI to deploy their AI models and hardware on its highest‑security networks. The companies’ technology will live inside Impact Level 6 (IL6) and Impact Level 7 (IL7) environments — the cloud security tiers reserved for classified and top‑secret national security data — to “streamline data synthesis, elevate situational understanding, and augment warfighter decision‑making.”
Within days, an industry‑facing account of the same push cast the initiative as a wholesale infrastructure shift: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, Amazon Web Services and Oracle are all being plugged into secret and top‑secret network environments as part of a new multi‑vendor ecosystem. The Pentagon’s own statement was blunt about the strategic ambition: these agreements “accelerate the transformation toward establishing the United States military as an AI‑first fighting force and will strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare.”
GenAI.mil, the department’s internal AI platform, gives a taste of how fast this is moving. More than 1.3 million personnel have already used the system to tap large language models and other AI tools within government‑approved clouds, according to defense officials. Now, that same logic is being extended into the classified world.
The loudest part of this story is the dog that isn’t barking: Anthropic.
Until recently, Anthropic was a core government partner, with a $200 million deal to handle classified Pentagon materials using its Claude models. That relationship exploded when the Department pushed for “unrestricted use” — including domestic mass surveillance and fully autonomous weapons — and Anthropic refused to relax its internal red lines.
The Pentagon responded with the bureaucratic nuclear option: it declared Anthropic a “supply‑chain risk” and banned its products from federal use. Anthropic sued, and in March won a temporary injunction blocking the government from enforcing that designation.
Publicly, defense officials now talk about the company in two registers at once. On one hand, they’re still calling Anthropic a supply‑chain concern. On the other, they’re openly impressed by its technology: the Pentagon’s chief technologist, Emil Michael, has described the arrival of Anthropic’s powerful cybersecurity model Mythos as a “separate national security moment,” warning that the model’s ability to find and patch cyber vulnerabilities means “we have to make sure that our networks are hardened up.”
In other words: they see the upside, but they’re not budging.
If anyone thought the new wave of contracts might open the door to a quiet truce, Emil Michael slammed it shut on May 7.
“There’s no resolution between Anthropic and the Pentagon coming any time soon,” he said at a Washington conference when asked about the standoff, even as he ticked through the list of fresh agreements with Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX announced the week before.
Michael cast the consortium of AI giants as effectively choosing a side. These agreements, he argued, were “a statement by the biggest tech companies in the world involved in the AI space ... saying we support the Department of War, we support the U.S. government, and we support them using our services for all lawful use cases.” That, he said, was “a counter statement to what we heard before” — a not‑so‑subtle jab at Anthropic’s refusal to sign on to the Pentagon’s desired usage terms.
Pressed directly on whether he saw any path to resolving Anthropic’s issues with the government, Michael was blunt: “Not at the Department of War, no.”
His comments landed just as the White House was reportedly weighing executive action around AI testing and safety that could, in theory, reopen doors for firms like Anthropic across the federal government. But inside the Pentagon itself, the message is clear: if you want classified contracts, you accept the Pentagon’s definition of “lawful operational use.”
Across its various public statements, the Department is selling this as a story of agility and technological edge.
Officials say deploying “secure frontier AI” into IL6 and IL7 environments will radically compress the time it takes to fuse intelligence streams, build situational awareness, and put options in front of commanders in real time. The initiative is explicitly tied to the Pentagon’s AI Acceleration Strategy, which calls for AI tools across three domains: warfighting, intelligence, and enterprise operations.
At the same time, the Department is eager to show it has learned a lesson from earlier, more monopolistic contracts. It repeatedly emphasizes that it will “continue to build an architecture that prevents AI vendor lock‑in and ensures long‑term flexibility for the Joint Force,” leaning on “a diverse suite of AI capabilities from across the resilient American technology stack.”
From the Pentagon’s perspective, the Anthropic fiasco is exactly why diversification matters. If one major lab balks at certain missions, there should be a half‑dozen others ready to slot in.
For the companies signing on, the incentives are obvious: enormous classified cloud and AI spending, deep integration with the national security apparatus, and a head start in shaping how military AI is actually used.
Google, Microsoft and Amazon already have “deep relationships” with the Pentagon through their cloud contracts; Nvidia and Reflection AI are relative newcomers on this scale. Reflection’s presence is particularly eye‑catching, given it’s backed by 1789 Capital, a firm in which Donald Trump Jr. is a partner and investor.
The price of entry is accepting the Pentagon’s broad framing of what counts as legitimate. When Michael praises the coalition for supporting the Pentagon “using our services for all lawful use cases,” the subtext is that the line will be drawn by classified lawyers and war planners, not by the labs’ own ethics guidelines.
Anthropic chose to let a $200 million classified contract walk rather than abandon its bans on mass domestic surveillance and fully autonomous weapons. Its rivals, at least for now, appear more comfortable letting the government define the edge cases behind closed doors.
Zoomed out, the timeline tells a simple story with messy implications: in a matter of months, the Pentagon went from losing its most safety‑focused AI partner to signing nearly every other major lab onto its own terms.
From here, the key question isn’t whether the U.S. military will be an “AI‑first fighting force” — that decision has effectively been made. The real contest is over who gets to define the moral perimeter of that force: elected officials and military lawyers operating inside classified processes, or private labs that are increasingly confident, and sometimes stubborn, about drawing their own red lines.
Right now, the Pentagon has the upper hand — and a roster of tech giants willing to prove it.