OpenAI is now the official AI provider for the Pentagon

At the end of February 2026, OpenAI announced an agreement with the US Department of Defense (the Pentagon) to deploy advanced AI systems inside classified environments, a kind of deal previously locked between Anthropic and the US government. OpenAI is trying to turn a set of safety limits into contract language and system design. At the same time, Anthropic is publicly explaining why it would not sign the version of these terms it says it was shown. Two frontier labs are now arguing in public about who is the more responsible military supplier.

That alone tells you where we are. Not to mention what happened in Iran not long after.

What OpenAI says it agreed to

OpenAI’s announcement highlights three structural choices that matter more than branding.

1) Cloud-only deployment

OpenAI is not handing over models to be run on military edge devices, at least as described in the announcement. Edge deployment is where autonomy becomes operationally easier, and where oversight becomes harder.

Cloud-only is a technical constraint. It does not solve everything, but it changes what is practically possible.

2) OpenAI keeps its safety stack in the loop

OpenAI keeps control of the safety and alignment layer, and the plan includes cleared OpenAI engineers operating inside classified environments.

3) Three explicit red lines

OpenAI says the contract includes three hard limits:

  • No mass domestic surveillance of US persons
  • No use of AI to independently direct autonomous weapons
  • No automated high-stakes decisions that bypass a human

The announcement also claims those standards are locked to today’s framework, meaning the agreement remains bound to the current red-line structure even if future policy changes.

From the outside, we cannot fully verify how these clauses are enforced in practice.

What Anthropic says it refused, and why

Anthropic’s public position is not “no defense work.” Claude is already used across defense and intelligence contexts. Anthropic points to earlier classified-network deployments and national lab usage as proof it is already in the ecosystem. It says it will support foreign intelligence missions, but it does not want its models used as the engine for large-scale surveillance of US citizens.

Anthropic also differentiates partial autonomy from full autonomy. The line it objects to is fully autonomous targeting, where AI selects and engages targets with no human in the loop. It argues that today’s systems are not reliable enough for that, and it offered to work with the Department of Defense on R&D toward safer approaches.

Anthropic says that offer was rejected.
