Anthropic vs. The Pentagon — When AI Companies' Ethical Judgments Collide with National Security
Anthropic rejected the Pentagon's demand for unrestricted use and was banned from federal contracts. Examining tensions between AI ethics and national security.
TL;DR
- Anthropic refused the Pentagon's demand for unrestricted military use and, in retaliation, faced a complete ban on federal contracts
- OpenAI immediately filled the vacuum, revealing an adverse selection structure in which ethically stricter companies face a competitive disadvantage
- No comprehensive legal framework exists to regulate military AI use, exposing a three-layer governance vacuum
What's Happening
Anthropic's partnership with the Pentagon ended after ethical concerns over military AI use
At the end of February 2026, the confrontation between AI company Anthropic and the U.S. Department of Defense burst into public view. Its origins trace back to November 2024, when Anthropic entered a three-way partnership with Palantir and AWS to run its AI model Claude on the U.S. government's classified networks, the first time a civilian LLM (large language model) had operated in a U.S. government classified environment. In July 2025, a formal two-year prototype contract with the Pentagon followed, capped at $200 million.
The turning point came on February 13, 2026, with reporting that Claude had been used in an operation to apprehend Venezuelan President Maduro. Anthropic had drawn two red lines for AI use from the beginning: no use in fully autonomous weapons, and no use for large-scale domestic surveillance of U.S. citizens. The involvement in the Maduro operation raised questions about whether those boundaries held in practice.
On February 24, Defense Secretary Hegseth sent a letter to CEO Amodei demanding "unrestricted use for all purposes," with a deadline of February 27. The three-day window was exceptionally short. Amodei issued a statement on February 26 rejecting the demand, stating explicitly that he could "not in good conscience accede to the Pentagon's request."
Retaliation came immediately. Between February 27 and 28, President Trump ordered a complete ban on federal contracts with Anthropic. The stated justification was a "supply chain risk" designation, a framework previously applied to foreign companies such as Huawei and applied here to a U.S. company for the first time. On the same day, OpenAI was reported to have secured a defense contract worth up to $200 million. Anthropic's exclusion and a competitor's capture of the vacated contracts occurred virtually simultaneously.
Background and Context
Historical context of AI companies entering defense contracts and ethical boundaries
This is not the first conflict between AI companies and the military. The template was Google's Project Maven episode in 2018. When it emerged that Google's AI technology was being used to analyze Pentagon drone footage, controversy erupted inside the company, and 3,100 employees signed a protest petition. Google declined to renew the contract and subsequently published AI ethics principles, pledging not to pursue "weapons or other technologies whose primary purpose is to cause or directly facilitate injury to people."
The tide in the tech industry, however, has shifted significantly in the eight years since 2018. Palantir has secured a $10 billion contract with the Army and a $1.3 billion Project Maven contract, making military AI integration the core of its business. OpenAI quietly removed the military-use prohibition clauses from its terms of use in January 2024, and in February 2026 secured a defense contract immediately after Anthropic's exclusion; CEO Altman himself reportedly acknowledged that it "looked opportunistic." The U.S. Department of Defense's AI budget reached $1.8 billion in FY2025, a sharp 40% increase over the previous year. Military AI is becoming a massive market, and ethical judgment is colliding head-on with market opportunity.
There is also movement on the workers' side. Following Anthropic's exclusion, a cross-company petition titled "We Will Not Be Divided" was launched across the tech industry, signed by approximately 800 Google employees and about 100 OpenAI employees. Over 200 Google employees also submitted a letter to management urging them to avoid relationships with the military. Unlike the 2018 Project Maven protest, which remained inside one company, this one spread into solidarity across corporate boundaries, a new development.
International frameworks are also in motion. In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons systems (LAWS) by an overwhelming majority of 166 to 3 (Belarus, North Korea, and Russia voted against; the United States did not participate in the vote). The UN Secretary-General and the International Committee of the Red Cross (ICRC) have called for a binding treaty by 2026. The resolution, however, is not legally binding, and treaty negotiations have stalled. Nations have little incentive to give up the military advantages of autonomous weapons, and technological progress is outpacing the formation of international norms.
This situation exposes structural problems across three layers that go beyond individual corporate decisions. The two companies' diverging paths frame the first:

Anthropic
- Set red lines (e.g., a ban on autonomous weapons)
- Refused the DoD's demand for unrestricted use
- → Banned from all federal contracts

OpenAI
- Removed the military-use prohibition from its terms of service
- Immediately captured the market vacated by the excluded company
- → Up to $200M in defense contracts
First, the cost of "ethical autonomy" has been made visible. Anthropic paid for maintaining its red lines with a complete ban on federal contracts, while OpenAI moved straight into the vacuum and gained market share. Under the current market structure, in other words, an adverse selection dynamic operates in which the ethically stricter company ends up at a competitive disadvantage. Left unchecked, this dynamic would let the most ethically lax AI company monopolize the nation's technological foundation, as the toy simulation below illustrates. The risk that market competition becomes a race to the bottom in ethics is structurally identical to patterns observed repeatedly in environmental regulation and labor standards.
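To make the feedback loop concrete, here is a minimal sketch, not drawn from the reporting: the 30% refusal rate and the rule that new contracts flow in proportion to current market share are illustrative assumptions, not measured quantities.

```python
# Toy model of the adverse-selection dynamic: a firm that enforces
# red lines declines a fraction of the contracts it is offered, a
# laxer rival absorbs the refusals, and future contract flow tracks
# current market share. All parameters are illustrative assumptions.

def simulate(rounds: int = 10, refusal_rate: float = 0.3) -> dict:
    share = {"strict": 0.5, "lax": 0.5}  # start from an even split
    for _ in range(rounds):
        offered = dict(share)  # contracts offered in proportion to share
        declined = offered["strict"] * refusal_rate  # red lines bite here
        won = {
            "strict": offered["strict"] - declined,
            "lax": offered["lax"] + declined,  # the rival absorbs refusals
        }
        total = sum(won.values())  # stays 1.0: refusals shift, not vanish
        share = {firm: value / total for firm, value in won.items()}
    return share

print(simulate())  # {'strict': ~0.014, 'lax': ~0.986}: near-monopoly in 10 rounds
```

The numbers are arbitrary; the point is the structure. As long as declining work costs market share and share determines future contract flow, strictness compounds into marginalization.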
Second, there is the repurposing of the "supply chain risk" designation as a sanctions tool. The framework was originally designed to address national security risks posed by foreign companies such as Huawei. Applying it to a U.S. company, not for technical risk but for "policy disobedience," has significant implications: it demonstrates that national security logic can serve as a lever to constrain corporate ethical judgment, with a potential chilling effect not only on AI companies but on every technology company that contracts with the government.
Third, there is a governance vacuum. The UN LAWS resolution passed 166 to 3 but has no binding force; the U.S. has no comprehensive legal framework regulating AI weapons; the Pentagon's AI ethics principles remain voluntary guidelines; and effective congressional oversight is absent. As a result, decisions about the military use of AI have been reduced to bilateral negotiations between the executive branch and individual companies. Anthropic's red lines are corporate self-regulation without legal protection, and the present episode delivered a clear negative answer to the question of whether self-regulation can withstand pressure from state power.
The cross-company solidarity of tech workers seen in the "We Will Not Be Divided" petition is noteworthy as another channel for filling this structural vacuum. In the 2018 Project Maven case, protest within a single company led to the contract not being renewed, yet Google has since expanded its involvement in the military domain, making it clear that protests confined to individual companies are unlikely to be a sustainable deterrent. Whether solidarity across companies can generate norms that outlast any single firm's decisions remains to be seen.
Remaining Questions
Unresolved issues about future AI governance and military-civilian tech partnerships
Given that when one company refuses, another immediately fills the gap, is the current structure of entrusting ethical judgment to AI companies actually functioning? When the state can override a company's autonomous judgment in the name of "national security," who bears responsibility for the ethical governance of technology? And in a world where a UN resolution supported by 166 countries has no binding force, where can effective checks on the development and deployment of autonomous weapons reside?
No one has complete answers to these questions. But having no answers is different from abandoning the questions. The price paid by Anthropic and the contract gained by OpenAI: the distance between those two facts is an accurate measure of how society currently handles the military use of AI.
Related Guides
- Introduction to Systems Thinking—Understanding Complex Social Challenges Through Structure
- Collective Impact—Cross-Sector Collaboration Design
- What is EBPM—Fundamentals of Evidence-Based Policy Making
References
Anthropic's Statement on Department of Defense Contract — Dario Amodei, Anthropic
Resolution on Autonomous Weapons Systems (A/RES/79/62) — United Nations General Assembly
DoD Responsible AI Strategy and Implementation Pathway — U.S. Department of Defense
