Pentagon Pressures Anthropic to Loosen AI Safeguards Amid Contract Dispute

The U.S. Department of Defense is threatening to cancel a major contract with Anthropic if the company does not agree to revised terms governing how its AI models can be used in military operations.

According to officials, the Pentagon has given Anthropic until Friday to comply. At the center of the dispute are the company’s usage restrictions on its Claude model, which prohibit applications involving mass surveillance or autonomous weapons without human oversight.

Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday in an effort to resolve the standoff. While both sides described the meeting as constructive, the gap between government requirements and company policy remains unresolved.

If an agreement is not reached, the Pentagon has signaled it could escalate significantly — including invoking the Defense Production Act to compel compliance or designating Anthropic as a supply chain risk, which could effectively shut it out of future federal work.

The dispute puts at risk a $200 million contract signed last year and highlights growing tensions between AI developers and government agencies over how advanced models should be deployed in national security contexts.

Anthropic has maintained that its safeguards are designed to ensure responsible use of AI, emphasizing that its policies have not interfered with legitimate military operations.

Pentagon officials, however, argue that operational decisions, including determinations of legality, rest with the government as the end user.

The standoff also comes as competition intensifies among AI providers working with the Department of Defense. Companies including OpenAI, Google, and xAI have secured similar contracts and are being integrated into the Pentagon’s AI ecosystem.

Notably, Anthropic’s Claude has been the primary model deployed on classified systems so far, though alternatives — including xAI’s Grok — are beginning to enter that environment.

At its core, the conflict reflects a broader fault line in the AI era: whether model developers retain control over how their systems are used, or whether that authority ultimately shifts to governments — particularly in high-stakes domains like defense.

With the deadline approaching, the outcome could set a precedent not just for Anthropic, but for how AI companies engage with national security agencies going forward.