Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia
The Pentagon has quietly flipped the switch on a new era of warfare: America’s biggest AI labs are moving onto the most secret military networks, while one of the industry’s most prominent holdouts, Anthropic, is being kicked to the curb.
By early 2026, the U.S. Defense Department had already begun courting commercial AI, landing agreements with Google, SpaceX, and OpenAI to bring cutting‑edge models into government systems for what it calls “lawful operational use.” Those early moves foreshadowed a larger pivot: the Pentagon wanted AI not just in back‑office tools, but wired into the core of war planning and operations.
On May 1, 2026, that pivot became explicit. The Pentagon announced new deals with Nvidia, Microsoft, Amazon Web Services, and Reflection AI that let their AI tech and models run directly on highly classified DoD networks. The goal, in the department’s own words, is nothing less than to “establish the United States military as an AI‑first fighting force” and give troops “decision superiority across all domains of warfare.”
Almost simultaneously, reporting revealed a broader constellation of partners. The Pentagon has now struck agreements with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk’s xAI, and the startup Reflection to use their AI tools in classified settings — a formalization of what had been a patchwork of one‑off relationships. Those arrangements build on earlier “lawful” use deals with OpenAI and xAI, and parallel an apparently similar Google agreement detailed in industry reporting.
By early May, AI trade press tallied an even wider roster: SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, Amazon Web Services, and Oracle — eight heavyweight vendors whose technology is being integrated into secret and top‑secret network environments for “lawful operational use.”
These agreements aren’t about pilots in a lab; they’re about embedding AI into the highest rungs of the U.S. classification system.
The Pentagon says the companies’ AI hardware and models will be deployed on Impact Level 6 (IL6) and Impact Level 7 (IL7) environments to “streamline data synthesis, elevate situational understanding, and augment warfighter decision-making.” IL6 and IL7 are cloud security levels reserved for information “deemed critical to national security,” with stringent physical protections, access controls, and auditing.
AI Magazine describes the same push as integrating “secure frontier AI into Impact Level 6 (IL6) and Impact Level 7 (IL7) network environments to streamline how the military synthesises data.” In practice, this means models from multiple vendors will sit next to satellites, signals intelligence, and operational plans on servers that never touch the public internet.
The deals plug directly into the Pentagon’s AI Acceleration Strategy, which prioritizes three areas: warfighting, intelligence, and enterprise operations. The flagship example so far is GenAI.mil, a secure enterprise platform for generative AI that provides access to large language models and other AI tools inside government‑approved cloud environments. According to AI Magazine, more than 1.3 million DoD personnel have already used the platform, a scale that shows generative AI is already bleeding into day‑to‑day military work, from drafting reports to analyzing data.
In public statements, the Defense Department has cast the AI deals as both a technological leap and a procurement reform.
“These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare,” the department said in a statement announcing the latest contracts. The same language appears in its broader explanation of the strategy: the deployments are meant to elevate situational awareness and augment decision-making in complex operations.
Underneath the rhetoric is a clear structural play: diversification. After a rocky experience relying on a single AI provider, the Pentagon is now openly trying to avoid dependence on any one company. “The Department will continue to build an architecture that prevents AI vendor lock-in and ensures long-term flexibility for the Joint Force,” its statement reads. Another DoD description frames this as a “departure from the Department’s reliance on singular providers and a strategic shift toward a multi-vendor ecosystem.”
The logic is straightforward: with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, Reflection, SpaceX and Oracle all in the mix, the Pentagon gets access to a “diverse suite of AI capabilities from across the resilient American technology stack,” which it argues will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”
Hovering over this expansion is the conspicuous absence of Anthropic, maker of the Claude models — and a former key Pentagon partner for classified work.
Anthropic had previously held a $200 million deal to handle classified materials for the Pentagon. But that relationship blew up when the company refused to loosen what it called its “red lines” around mass domestic surveillance and fully autonomous weapons. The Pentagon, by contrast, wanted “unrestricted use” of Anthropic’s tools, including applications the company feared would violate its safety policies.
The standoff escalated. The Defense Department labeled Anthropic a “supply chain risk” and moved to ban its products from the federal government. Anthropic sued, and in March it won a temporary injunction against the Pentagon’s effort to blacklist the company.
From the government side, officials are sticking to the supply‑chain framing. Emil Michael, the Defense Department’s chief technology officer, told CNBC that Anthropic remains a supply chain risk — even while he praised the company’s powerful security model, Mythos, as a “separate national security moment.” Mythos, he said, has “capabilities that are particular to finding cyber vulnerabilities and patching them,” which in his view demands that DoD networks be hardened before the model can be safely integrated.
From Anthropic’s side, the message is effectively that some uses of AI — especially enabling mass domestic surveillance and fully autonomous weapons — should remain off the table, even at the cost of losing a massive government contract. The Pentagon’s decision to brand that stance as a security risk has turned a policy disagreement into a legal and political fight.
The power vacuum created by Anthropic’s ouster has been quickly filled by others. Nvidia’s and Reflection AI’s contracts are new, breaking into a space where Microsoft and Amazon already have “deep relationships with the Pentagon.” For Nvidia, the deal fuses its dominance in AI hardware with the sensitive world of classified military workloads. For Reflection AI, an obscure startup, the move is potentially transformative.
There’s also a political twist. Reflection AI is backed by 1789 Capital, a venture firm in which Donald Trump Jr. is a partner and investor, AI Magazine notes. That connection all but guarantees additional scrutiny of how the startup was selected and what, exactly, its systems will do on top‑secret networks.
Meanwhile, established tech giants get even deeper inside the Pentagon. The Verge points out that Microsoft and Amazon already had extensive DoD cloud and infrastructure contracts; now their AI systems join their servers behind the fence. SpaceX, via its satellite networks and AI capabilities, and Oracle, via its enterprise and database footprint, are similarly being woven into the classified AI stack described by AI Magazine.
Viewed chronologically, the story is clear: first came experimental access to commercial AI on government platforms; then bespoke “lawful use” agreements with a few leading labs; now a multi‑vendor AI architecture embedded directly in the most sensitive corners of U.S. military infrastructure.
On the Pentagon’s own terms, this is a race for speed and dominance. The promise is real‑time synthesis of signals across “all domains of warfare,” from cyber to space, with AI tools helping commanders understand complex environments faster and act “with confidence” against “any threat.”
But the Anthropic dispute exposes the fault line: whose values govern those systems once they’re wired into classified kill chains and surveillance architectures? For now, the answer is shifting away from labs that insist on strong usage guardrails and toward a diversified corps of vendors willing to let the U.S. military decide what “lawful operational use” really means.
The Pentagon insists that diversification will keep it nimble and secure. Its critics will see something else: a government assembling an unprecedented AI arsenal behind closed doors, at precisely the moment one of the industry’s most safety‑obsessed players is being shown the door.