Anthropic's Claude AI Faces Pentagon Ultimatum Over Usage Safeguards
Anthropic, the company behind the Claude large language model, is engaged in a tense standoff with the U.S. Department of Defense (DoD), also referred to in some recent communications as the Department of War. The dispute revolves around the Pentagon's push for unrestricted access to Claude under an "any lawful use" policy, while Anthropic insists on preserving critical safeguards in its contracts and usage terms.
In July 2025, Anthropic secured a prototype agreement with the DoD worth up to $200 million to supply Claude for defense-related tasks. This positioned Anthropic as the first frontier AI provider to deploy its models on classified U.S. government networks and at national laboratories, and to support tailored national security applications. Claude has supported functions such as intelligence analysis, modeling and simulation, operational planning, and cyber operations.
The company has aligned with national security priorities in other ways, including forgoing substantial revenue by restricting access for entities tied to the Chinese Communist Party and helping counter related cyberattacks.
The core issue stems from the Pentagon's demand to eliminate safeguards that prohibit two categories of use:
- Mass domestic surveillance of U.S. citizens.
- Fully autonomous weapons systems capable of selecting and engaging targets without human oversight.
Anthropic has maintained these restrictions consistently. CEO Dario Amodei has explained that mass domestic surveillance could enable the compilation of detailed personal profiles from disparate data sources, often lacking sufficient oversight, thereby threatening democratic freedoms. On autonomous weapons, he has highlighted that current frontier AI models do not possess the necessary reliability or ethical judgment for decisions involving human life, risking harm to both military personnel and civilians. Anthropic has proposed joint research and development for more secure alternatives, though these offers have not been accepted.
Negotiations escalated when Defense Secretary Pete Hegseth met with Amodei on February 24, 2026. The Pentagon issued a compliance deadline of 5:01 p.m. ET on February 27, 2026. Failure to meet the terms could trigger:
- Termination of the $200 million contract.
- Designation of Anthropic as a "supply chain risk," a classification typically applied to adversarial entities.
- Invocation of the Defense Production Act to mandate changes.
Anthropic rejected the Pentagon's "final offer," delivered overnight on February 26–27, describing it as showing "virtually no progress" on the company's key concerns. In a public statement on February 26, Amodei declared that the company "cannot in good conscience accede to their request." He reiterated a willingness to support the military with safeguards in place and offered assistance for an orderly transition if the partnership ends.
Pentagon representatives have maintained that there is no plan to pursue mass domestic surveillance—considered illegal—or fully autonomous lethal systems without human involvement. They emphasize the need for flexibility across "all lawful purposes."
As of February 27, 2026, the deadline has passed with no confirmed resolution, though some reports indicate the DoD expressed interest in continuing talks despite the rejection. Anthropic's position illustrates the friction between private-sector ethical commitments and government requirements for operational latitude in AI deployment.
Transitioning away from Claude on classified networks could prove challenging and time-intensive; defense-related sources suggest the process could span months, given Claude's established integration and frontier-level capabilities.
This episode may shape future dynamics for AI companies navigating innovation, safety protocols, and collaborations with national security entities, particularly under geopolitical strains. Anthropic's decision reflects a prioritization of measures to avert AI misuse that could compromise democratic norms, even when such a stand carries financial and partnership risks.