Anthropic Seeks Emergency Court Intervention Over Pentagon Blacklisting
- Sara Montes de Oca
- 2 hours ago
- 2 min read
Artificial intelligence company Anthropic is asking a federal appeals court to block the Pentagon’s designation of its technology as a national security supply chain risk, arguing that the decision could cause severe financial damage and was issued without proper legal authority.
In a filing submitted to the U.S. Court of Appeals for the D.C. Circuit, Anthropic requested an emergency stay that would temporarily halt the designation issued by Pete Hegseth, the U.S. Secretary of Defense.
The company’s lawyers claim the order—first announced in a social media post—was not a formal agency determination and lacked the statutory justification typically required for such actions.
The dispute stems from a breakdown in negotiations between Anthropic and the U.S. Department of Defense over how the government could use the company’s AI systems. Anthropic has maintained strict internal policies that prohibit its models from being used for mass surveillance or autonomous weapons without human oversight.
Those restrictions ultimately led to tensions with Pentagon officials, culminating in the supply chain risk designation and a directive from President Donald Trump ordering federal agencies to halt their use of Anthropic technology.
Anthropic’s legal team argues that the government’s move violates due process and represents retaliation against the company for advocating responsible AI safeguards. The designation is typically reserved for foreign adversaries, the company noted, and has rarely been applied to domestic firms.
The potential financial impact could be significant. Company attorneys said the dispute has already triggered concern among enterprise customers, with more than 100 organizations reportedly reaching out to Anthropic in recent days to assess the risks of continuing to use its technology.
According to the company’s internal estimates, the fallout could threaten hundreds of millions to billions of dollars in projected revenue for 2026.
Anthropic has also filed a separate lawsuit in federal court in California as part of a broader legal strategy to challenge the government’s actions.
The case has drawn support from across the technology sector, including backing from Microsoft, employees from OpenAI and Google, and policy organizations such as the Foundation for American Innovation.
The legal battle highlights the increasingly complex relationship between artificial intelligence companies and national security agencies.
As AI systems become more central to military and intelligence operations, disagreements over ethical safeguards, governance, and government authority are beginning to play out not only in policy debates but also in the courts.