
Who Draws AI's 'Red Lines'? — Anthropic vs. Pentagon Lawsuit Questions Governance Vacuum

Anthropic sued the U.S. Department of Defense over unlimited military AI access demands. An unprecedented confrontation over ethical red lines in AI governance.

What's Happening

On March 9, 2026, AI development company Anthropic sued the U.S. Department of Defense in federal court. The immediate trigger was the Pentagon's designation of the company as a "supply chain risk." But the lawsuit raises more than one company's legal rights: it poses the fundamental governance question of who decides the scope of AI use.

Here's what happened: Anthropic signed a $200 million contract with the Pentagon in July 2025 but set two "red lines": no technology transfer to fully autonomous weapons systems, and no mass surveillance of U.S. citizens. In January 2026, Defense Secretary Hegseth issued a memorandum requiring AI contracts to allow use for "any lawful purpose." Direct negotiations on February 24 failed to reach agreement, and Hegseth delivered an ultimatum with a February 27 deadline.

Anthropic refused, stating: "We cannot in good conscience accede to their request." The next day, President Trump ordered federal agencies to stop using Anthropic products, and the Pentagon activated the supply chain risk designation.

Background and Context

AI Companies and Military Use — Eight Years of Wavering

To understand this confrontation, we need to examine how the relationship between the AI industry and military use has evolved over the past eight years.

AI Companies and Military Use — Key Turning Points

2018: Google exits Project Maven (3,100 employee signatures; about 12 resignations)
2022: JWCC contract awarded (Amazon, Google, Microsoft, Oracle; up to $9 billion)
January 2024: OpenAI removes its military ban (quietly deleting "military and warfare" from its Usage Policy)
July 2025: Anthropic signs a DoD contract ($200 million, with "red lines" on autonomous weapons and mass surveillance)
February 2026: Ultimatum and refusal (Hegseth demands "any lawful use"; Anthropic refuses)
March 2026: Lawsuit filed (supply chain risk designation; Anthropic sues the federal government)

Google's Project Maven in 2018 was a turning point. After 3,100 employees signed a protest petition against the use of AI in drone video analysis and about 12 resigned, Google declined to renew the contract and established its "AI Principles." Yet the same company went on to join the Joint Warfighting Cloud Capability (JWCC) contract, worth up to $9 billion, in 2022. It proclaimed principles but did not walk away from the business.

OpenAI's policy shift was even more pronounced. In January 2024, it quietly removed the explicit prohibition on "military and warfare" from its Usage Policy. According to The Intercept's reporting, the Pentagon had been testing OpenAI models through Microsoft Azure even while the prohibition was still in place.

Against this backdrop, Anthropic's "refusal" stands out as exceptional.

Anthropic's complaint spans five counts, centering on violations of the Administrative Procedure Act (APA), infringement of the First Amendment (freedom of expression), and violation of due process.

Just Security's analysis raises serious questions about the Pentagon's legal basis. The statute underpinning supply chain risk designations, 10 U.S.C. § 3252, is a procurement authority, not a sanctions mechanism. The "risk" the law envisions is a technical threat, such as an adversary's ability to subvert systems, not a breakdown in contract negotiations.

There are further contradictions. At the time of the designation, Claude was "widely deployed" across the military and intelligence agencies, and continued use was permitted for up to six months afterward. That grace period sits uneasily with the premise of an "imminent national security threat." And according to CNBC reporting (March 5, 2026), Claude had already been used in military operations against Iran, which raises more fundamental questions about the scope and effectiveness of the red lines on autonomous weapons and mass surveillance.

Lawfare offered a more pointed assessment: "This designation will not survive initial legal review." More fundamentally, it argued that "Congress—not the Pentagon or Anthropic—should set military AI rules."

When Employees Speak Up

The response from competing companies is notable. More than 30 researchers from OpenAI and Google DeepMind signed an amicus brief in their personal capacities. The signatories, who include Google DeepMind Chief Scientist Jeff Dean, represent the largest show of employee activism since Project Maven in 2018.

The brief's core argument states: "In the absence of a legal framework for mitigating the deployment risks of frontier AI systems, AI developers' ethical commitments and their willingness to publicly defend them are not obstacles to good governance."

As of March 2026, hundreds of employees across the industry, at companies including Salesforce, Databricks, IBM, and Cursor, have signed open letters in support.

Reading the Structure

A Governance Vacuum in a Three-Way Structure

This lawsuit has highlighted a governance vacuum surrounding military AI use.

The Three Poles of AI Governance — Who Makes the Rules?

🏢 Corporate self-regulation: Anthropic's RSP, Google's "AI Principles." Flexible but non-binding, with a risk of retreat under pressure (RSP v3.0).
🏛️ Executive authority: the DoD's supply chain risk designation. Immediate, but prone to overreach and procedural flaws.
⚖️ Legislative frameworks: Lawfare's "Congress should set the rules." The highest democratic legitimacy, but struggles to keep pace with technology.

Role of civil society: oversight, advocacy, and participation as a third pole.

Corporate self-regulation is flexible but lacks binding force. Anthropic itself released version 3.0 of its Responsible Scaling Policy (RSP) the day after the ultimatum, removing the core commitment to "halt training of new models unless specific safety guidelines are guaranteed in advance." TIME reported this as the withdrawal of a "flagship safety pledge." Anthropic, for its part, explained that the revision was decided independently of its negotiations with the Pentagon. The timing has drawn widespread skepticism, but we reserve judgment here. Ethical positions and business decisions are always in tension.

Executive authority has immediacy but carries the risk of legal overreach. Whether this designation is legally justified will be contested in court; but even if it is upheld, the precedent of sanctioning a company for its ethical policies could chill policy statements across the tech industry.

Legislative frameworks carry the highest democratic legitimacy but cannot keep pace with technological change. The Biden administration's EO 14110 (the executive order on safe, secure, and trustworthy AI) was revoked by the Trump administration in January 2025. Internationally, the UN General Assembly adopted a resolution on regulating autonomous weapons in November 2025 with 156 countries in favor, but the U.S. and Russia voted against it. The EU AI Act exempts military uses, leaving the regulatory gap unfilled.

None of these three actors alone possesses both the legitimacy and effectiveness to draw boundaries for military AI use.

Questions for NPOs and Civil Society

This lawsuit extends beyond the AI industry's internal affairs.

The direct legal effect of the supply chain risk designation is limited to Pentagon procurement, and no primary sources have yet reported direct impacts on the NPO sector. Still, several spillover pathways are conceivable. When NPOs receiving federal grants serve as Pentagon subcontractors (in veteran support or security research, for example), compliance requirements could cascade down to them. And even absent any legal obligation, organizations with government relationships might avoid Claude out of risk aversion, a classic "chilling effect."

A more fundamental concern is the risk that choosing a particular AI tool could be read as a political signal. NPOs' freedom to select the tools best suited to their missions might then be constrained by political context.

Lawfare's argument that "Congress should set the rules" suggests that NPOs and civil society should recognize the legislature's importance as a venue for lobbying and policy advocacy. Civil society's participation as a third pole, representing the public interest in the binary confrontation between corporations and government, falls squarely within the scope of the "Social Vision Design" that ISVD advocates.

Where the Questions Lie

Whether Anthropic wins or loses in court is not the essence of this lawsuit. What matters is the structural questions this confrontation has exposed.

AI is a "dual-use" technology: the same model can serve disaster response or military reconnaissance, and the technology itself draws no line between military and civilian applications. This is precisely why the question of who holds the authority to determine its scope of use is now being asked.

The limits of companies drawing lines by "conscience," the overreach risks of governments drawing lines by administrative authority, and the delays of legislatures drawing lines democratically: with all three limitations visible at once, the remaining option is to strengthen civil society's role in complementing and monitoring them.




ISVD Editorial Team


Addressing social challenges and creating solutions through the power of design. ISVD works to visualize social issues and design solutions, sharing insights through research, practical guides, and analysis.
