
OpenAI CEO Sam Altman announced late Friday that the company has signed a deal with the Pentagon to deploy its AI tools in the military’s classified systems, notably with guardrails that appear strikingly similar to those Anthropic had demanded before being shut out entirely.
The announcement came on the same day President Trump ordered all federal agencies to stop using Anthropic’s AI tools, and Defense Secretary Pete Hegseth declared Anthropic a supply chain risk over its refusal to allow its Claude AI to be used for autonomous weapons systems and mass surveillance of US citizens. Anthropic has said it plans to legally challenge that designation, which is typically reserved for companies with ties to foreign adversaries.
Altman’s statement makes the situation more complicated. He confirmed that OpenAI’s deal with the Pentagon includes prohibitions on domestic mass surveillance and requires human responsibility over the use of force, including for autonomous weapons. Those are essentially the same principles Anthropic had been insisting on. Altman said the Department of War agreed with those principles and that they are reflected in law, policy and the terms of the agreement. He also called on the Pentagon to offer the same terms to all AI companies and expressed a desire to see tensions de-escalate away from legal and governmental actions toward reasonable agreements.
Defense Secretary Hegseth re-posted Altman’s announcement, and the Pentagon’s Under Secretary for Technology Emil Michael praised OpenAI as a reliable and good-faith partner. It remains unclear what specifically differentiated OpenAI’s approach from Anthropic’s in the Pentagon’s eyes; CNN and other outlets have asked the Pentagon and OpenAI for clarification.
**Why This Matters to You:**
This story goes well beyond a corporate dispute. At its core, it raises a fundamental question about who decides how AI is used in life-and-death military situations. The fact that OpenAI secured a deal with safety guardrails similar to what Anthropic was asking for, yet Anthropic was still declared a supply chain risk, suggests the dispute may have been less about the restrictions themselves and more about control, compliance and leverage. For everyday people, the use of AI in autonomous weapons and military surveillance has real-world consequences that could shape the future of warfare and civil liberties. It is worth asking:

- Should private AI companies have the power to set limits on how governments use their technology?
- What oversight exists to ensure AI used in classified military systems is safe and accountable?
- And with multiple AI companies now embedded in the US military, how much will the public ever be allowed to know about how these systems are actually used?
