OpenAI Signs Classified-Network Deal With the Pentagon as Trump Moves to Cut Off Anthropic
Summary
OpenAI says it reached an agreement to deploy its models on the Pentagon’s classified cloud networks.
The deal lands hours after President Donald Trump ordered federal agencies to stop using Anthropic technology, with a six-month phase-out for agencies already relying on it.
Anthropic says it will challenge any “supply-chain risk” designation in court and calls the move legally unsound.
The standoff centers on two “red lines” Anthropic says it won’t cross: mass domestic surveillance and fully autonomous weapons.
The underlying legal fight may hinge on what a supply-chain-risk action can cover under 10 U.S.C. § 3252, and how far the Defense Department can extend restrictions beyond its own contracts.
OpenAI says it has reached an agreement to deploy its AI models inside the Pentagon’s classified cloud environment, a move that could reshape the U.S. government’s fast-growing reliance on commercial “frontier” AI systems.
The announcement arrived amid an unusually public clash between the Trump administration and OpenAI rival Anthropic. On Friday, President Donald Trump ordered federal agencies to stop using Anthropic technology and authorized a transition period for offices that currently depend on it, including defense agencies.
Anthropic says it will fight back in court, calling the government’s “supply-chain risk” label unprecedented for a U.S. company and “legally unsound.”
What happened (quick facts)
OpenAI deal: CEO Sam Altman said OpenAI reached an agreement to deploy its models on the Pentagon’s classified networks, and described the talks as emphasizing safety.
Administration action: Trump ordered agencies to cease work with Anthropic, with a six-month phase-out for defense and other agencies that already use the tools.
Pentagon escalation: Defense Secretary Pete Hegseth said the Pentagon would designate Anthropic a “supply-chain risk,” a procurement label typically used to protect defense systems from security vulnerabilities.
Core dispute: Anthropic says it supports lawful national-security uses but refuses to remove two safeguards: no mass domestic surveillance and no fully autonomous weapons (humans must remain in the loop).
OpenAI’s “red lines” in the Pentagon deal
Altman said OpenAI insisted on two safety principles in its agreement: a prohibition on domestic mass surveillance, and human responsibility for any use-of-force decision, including those involving autonomous weapons systems.
OpenAI also said it plans to build technical safeguards so its models operate within agreed boundaries.
The Pentagon did not immediately respond to requests for comment.
Why Anthropic became the flashpoint
Anthropic argues that certain military applications remain unsafe or incompatible with democratic values. In a public statement ahead of the deadline, Anthropic’s CEO said the company would not knowingly support two use cases: mass domestic surveillance and fully autonomous weapons, citing both civil-liberties concerns and reliability limits of today’s frontier systems.
Defense officials countered that U.S. law — not private vendor policies — should determine lawful military use. In public remarks, Pentagon officials described some of the surveillance and “killer robots” framing as misleading, and said they have no interest in illegal domestic surveillance or removing humans from lethal decision-making.
“Department of War”: what’s actually being referenced
Multiple accounts note that the Trump administration has described the Pentagon as the “Department of War,” and Altman used that label when discussing the agreement. The institution referenced is the U.S. Department of Defense and its Pentagon leadership.
The legal fight: what does “supply-chain risk” allow?
Anthropic’s position is that the Pentagon cannot use a supply-chain-risk designation to effectively freeze the company out of the broader U.S. economy. In reporting on the dispute, Anthropic points to 10 U.S.C. § 3252, which governs requirements and authorities around supply-chain risk in defense acquisition.
Outside observers and coverage have raised questions about how far the restriction can legally extend — especially if the Pentagon tries to bar contractors from any commercial activity with Anthropic, not just Anthropic’s use on defense contracts.
In parallel, the Pentagon has also floated the possibility of invoking the Defense Production Act as leverage in negotiations, a move some experts have questioned.
Why this matters globally
This episode goes beyond a single contract.
It tests whether frontier AI labs can enforce meaningful safety terms once models move into classified environments — and whether governments will accept vendor “red lines” on surveillance and autonomous weapons.
It also sets a precedent for federal procurement. If a U.S. AI company can be labeled a supply-chain risk in a dispute over usage safeguards, other vendors may reassess how they structure defense deals — or whether they enter them at all.
