Anthropic Files Federal Lawsuit After Trump Administration Blacklists Company as Supply Chain Risk

Anthropic, the San Francisco-based artificial intelligence company behind the Claude AI model, filed a federal lawsuit on Monday against the Trump administration after being designated a supply chain risk to national security. The filing argues the designation is unconstitutional retaliation for the company’s refusal to lift safety guardrails on its technology. It is the first time in US history that an American company has been publicly given a designation typically reserved for foreign adversaries such as Chinese tech giant Huawei.

How It Got Here

Anthropic signed a $200 million contract with the Pentagon in July 2025, becoming the first major AI company to deploy its models on classified government networks. The relationship broke down after months of negotiations over how the military could use Claude. Anthropic sought assurances that its technology would not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon refused, demanding access to Claude for all lawful purposes without restriction. (PBS NewsHour, CNBC)

After Anthropic CEO Dario Amodei rejected what the Pentagon called its best and final offer on February 26, saying the company “cannot in good conscience accede to their request,” Defense Secretary Pete Hegseth announced the supply chain risk designation via social media. President Trump simultaneously ordered all federal agencies to immediately stop using Anthropic’s technology. Trump wrote on Truth Social that Anthropic made a “DISASTROUS MISTAKE” by trying to impose its terms of service on the military. He also threatened the company with “major civil and criminal consequences.” (PBS NewsHour, Axios)

Documents later revealed that even as the public standoff was unfolding, a senior Pentagon official was privately offering Anthropic a deal that would have required the company to permit collection and analysis of data on Americans, including geolocation, web browsing history, and personal financial information purchased from data brokers. Anthropic declined that offer as well. (Axios)

The Lawsuit

Monday’s filing argues the government acted without legal authority and in violation of the First Amendment. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the complaint states. “No federal statute authorizes the actions taken here.” Anthropic also cited a federal statute it says prevents Hegseth from unilaterally restricting contractors from doing business with the company. The suit names the relevant federal agencies and their leaders as defendants. (The Register, CNBC)

The Industry Fallout

The designation has forced defense technology companies to drop Claude from their workflows. Ten portfolio companies at venture firm J2 Ventures, all of which hold Defense Department contracts, have stopped using Claude for defense applications. Palantir, which introduced Claude into classified government networks and relies on the government for roughly 60% of its US revenue, has not commented publicly on its plans. Analysts at Piper Sandler noted that Anthropic was a trailblazer in deploying AI in data-sensitive environments and warned that moving off the technology could cause short-term disruptions. (CNBC)

Despite the blacklist, Anthropic’s Claude models were still being used to support US military operations in Iran as of last week, according to CNBC reporting. The company said its commercial customers remain unaffected by the designation.

OpenAI and the Broader Picture

OpenAI announced a deal with the Pentagon hours after Anthropic was blacklisted. OpenAI CEO Sam Altman stated in a memo that his company also has red lines against autonomous weapons and mass surveillance, calling on the Pentagon to offer the same terms to all AI companies. OpenAI said its agreement contains “more guardrails than any previous agreement for AI deployments, including Anthropic’s.” Meanwhile, Elon Musk’s xAI has signed a deal under the Pentagon’s all-lawful-use standard. The Pentagon plans to give Grok, Musk’s AI, access to classified military networks. (PBS NewsHour, The Register)

Hundreds of employees from Google and OpenAI signed a petition supporting Anthropic’s position in the 24 hours following the blacklist announcement. Anthropic’s Claude chatbot reached the top spot on Apple’s US App Store rankings the day after the dispute went public, surpassing ChatGPT. (CNBC)

Why This Matters to You

This case raises a question that goes far beyond one company or one AI model. Should a private technology company have the right to set limits on how its product is used by the government, particularly when those limits relate to autonomous weapons and surveillance of American citizens? Or does the military have the authority to use any tool it purchases without restriction?

The answer has implications for every American. If the government can designate a US company a national security threat for refusing to remove safeguards on AI-powered surveillance, that precedent reaches well beyond the defense industry. It raises further questions: Does any company have an obligation to refuse government contracts that would use its technology in ways that could harm civilians or erode civil liberties? With Claude still reportedly supporting operations in Iran despite the blacklist, what does that say about how the designation is actually being enforced? And with the lawsuit now in federal court, could a judge's ruling set lasting boundaries on what the government can demand from private technology companies?
