Anthropic vs. the Pentagon — When an AI Company's Ethical Judgment Collides with National Security
Anthropic refused the Pentagon's demand for unrestricted access and was banned from all federal contracts. We examine the structural questions raised by the collision between corporate ethical autonomy and national security.
What Happened
In late February 2026, a confrontation between AI company Anthropic and the U.S. Department of Defense (DoD) erupted into public view. The origins trace back to November 2024, when Anthropic entered into a tripartite partnership with Palantir and AWS to operate its AI model Claude on the U.S. government's classified networks. It was the first instance of a commercial large language model (LLM) running in a U.S. government classified environment. By July 2025, a formal two-year prototype contract worth up to $200 million had been signed with the DoD.
The turning point came on February 13, 2026, when reports revealed that Claude had been used in an operation to detain Venezuelan President Maduro. From the outset, Anthropic had established two red lines for its AI: a prohibition on use in fully autonomous weapons, and a prohibition on large-scale domestic surveillance of U.S. citizens. The involvement in the Maduro operation called the enforceability of these boundaries into question.
On February 24, Secretary of Defense Hegseth sent a letter to CEO Amodei demanding "unrestricted use across all applications," with a deadline of February 27 — an unusually short three-day window. On February 26, Amodei issued a statement refusing the demand, declaring that he could not, in good conscience, accede to the Pentagon's request.
Retaliation was immediate. Between February 27 and 28, President Trump ordered a blanket ban on all federal contracts with Anthropic. The justification invoked was a "supply chain risk" designation — a framework traditionally applied to foreign companies such as Huawei, never before to a U.S. firm. On the same day, OpenAI was reported to have secured a defense contract worth up to $200 million. The exclusion of Anthropic and the acquisition of a replacement contract by a competitor occurred virtually simultaneously.
Background and Context
The relationship between AI companies and the military is not without precedent. The template was set by the 2018 Google Project Maven incident. When it emerged that Google's AI technology was being used for DoD drone footage analysis, 3,100 employees signed a letter of protest. Google declined to renew the contract and subsequently published AI ethics principles, pledging non-participation in "weapons or other technologies whose principal purpose is to cause harm."
However, eight years on from 2018, the industry landscape has shifted dramatically. Palantir secured a $10 billion Army contract and a $1.3 billion Project Maven contract, placing AI-military integration at the core of its business. OpenAI quietly removed its prohibition on military use from its terms of service in January 2024, and in February 2026 obtained a defense contract immediately after Anthropic's exclusion — a move that CEO Altman himself reportedly acknowledged "looked opportunistic." The DoD's AI budget reached $1.8 billion in FY2025, a 40% year-over-year increase. Military AI is becoming a vast market, and the collision between ethical judgment and market opportunity has grown unmistakable.
Workers have also mobilized. In the wake of Anthropic's exclusion, a cross-company petition called "We Will Not Be Divided" was launched, gathering signatures from approximately 800 Google employees and 100 OpenAI employees. More than 200 Google employees submitted a letter urging management to avoid military entanglements. Whereas the 2018 Project Maven protest was confined to a single company, this time solidarity has extended across corporate boundaries.
International frameworks are also in motion. In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons systems (LAWS) by an overwhelming vote of 166 to 3. The UN Secretary-General and the International Committee of the Red Cross (ICRC) have called for a binding treaty by 2026. The resolution, however, carries no legal force, and treaty negotiations have stalled. States have little incentive to relinquish the military advantage of autonomous weapons, and the pace of technological advancement is outstripping the formation of international norms.
Reading the Structure / Seeds of Social Vision
This episode exposes structural problems on three distinct levels, transcending any single company's decisions.
- Anthropic: set red lines (e.g., a ban on autonomous weapons) → refused the DoD's demand for unrestricted use → banned from all federal contracts
- OpenAI: removed the military use prohibition from its terms of service → immediately captured the market void left by the excluded company → up to $200M in defense contracts
First, it has made visible the cost of "ethical autonomy." Anthropic upheld its red lines and, in return, was banned from all federal contracts. OpenAI moved immediately into the vacuum and captured market share. Under the current market structure, in other words, a perverse selection dynamic operates in which the more ethically rigorous a company is, the greater its competitive disadvantage. If this dynamic is left unchecked, the logical endpoint is a regime in which the least ethically constrained AI company monopolizes the nation's technological infrastructure. The risk that market competition degenerates into a race to the bottom on ethics mirrors a pattern repeatedly observed in environmental regulation and labor standards.
Second, the repurposing of the "supply chain risk" designation as a punitive instrument deserves scrutiny. This framework was designed to address national security risks posed by foreign firms such as Huawei. Its application to a U.S. company — and on grounds not of technical risk but of "policy noncompliance" — carries significant implications. It demonstrates that the logic of national security can function as a mechanism for constraining a company's ethical judgment. This may produce a chilling effect not only on AI firms but on every technology company that holds government contracts.
Third, there is a governance vacuum. The UN's LAWS resolution passed 166 to 3, but it is non-binding. The United States has no comprehensive legal framework regulating the use of AI weapons. The DoD's AI ethics principles remain voluntary guidelines, and effective congressional oversight is absent. As a result, decision-making on the military use of AI has been reduced to bilateral negotiations between the executive branch and individual companies. Anthropic's red lines are a form of corporate self-regulation that carries no legal protection. This episode has delivered an unequivocal answer to the question of whether self-regulation can withstand pressure from state power: it cannot.
The cross-company solidarity of tech workers, as seen in the "We Will Not Be Divided" petition, merits attention as an alternative channel for filling this structural void. In 2018, protest within a single company succeeded in blocking the renewal of the Project Maven contract. Since then, however, Google has expanded its involvement in the military domain, demonstrating that intra-company protest alone is unlikely to serve as a durable check. Whether cross-company solidarity can develop into a norm-shaping force that transcends individual corporate decision-making remains an open question.
Remaining Questions
Does the current structure — in which ethical judgment is entrusted to AI companies — actually function when a single company's refusal is instantly replaced by a willing competitor? When a state can nullify a company's autonomous ethical judgment in the name of "national security," who bears responsibility for the ethical governance of technology? In a world where a UN resolution supported by 166 nations carries no binding force, where can an effective check on the development and deployment of autonomous weapons be found?
No one possesses complete answers to these questions. Yet the absence of answers is not the same as license to abandon the inquiry. The price Anthropic paid, and the contract OpenAI obtained — the distance between these two facts is the precise measure of how our present society handles the military use of AI technology.
Related Guides
- Introduction to Systems Thinking — Understanding Complex Social Issues Through Structure
- Collective Impact — Designing Cross-Sector Collaboration
- What Is EBPM? — Foundations of Evidence-Based Policymaking
Related Columns
- AI Regulation: The Federal vs. State Battle in the U.S.
- Cognitive Debt — When We Outsource Thinking to AI
References
- Anthropic's Statement on Department of Defense Contract. Dario Amodei, Anthropic.
- Resolution on Autonomous Weapons Systems (A/RES/79/62). United Nations General Assembly.
- DoD Responsible AI Strategy and Implementation Pathway. U.S. Department of Defense.