Institute for Social Vision Design
Simulation

Should Military Use of AI Be Permitted? — At the Intersection of National Security and Technological Ethics

The logic of national security and the imperatives of technological ethics collide head-on over the military application of AI. Through a simulated debate among four fictional panelists, this article illuminates the structural fault lines of the controversy.

Panelists

Seiichi Kono

Former Technical Research Officer, Joint Staff Office, Ministry of Defense

Conditional proponent — Essential for maintaining allied technological superiority

Dr. Sarah Mitchell

AI Ethics Researcher, MIT Media Lab

Opponent — Autonomous weapons pose a structural threat to human dignity

Zhang Weiming

International Security Analyst (Singapore-based)

Realist — Establishing an international governance framework must come first

Aya Tanaka

Tech Company Engineer, signatory of 'We Will Not Be Divided'

Engineer-ethicist — Developers must not abdicate ethical responsibility

This article is a simulated debate featuring fictional panelists. It does not represent the views of any specific individual or organization. Arguments from divergent positions have been reconstructed for the purpose of structural understanding.

Framing the Issue

In February 2026, Anthropic refused the Department of Defense's demand for unrestricted military use of its AI models and was met with a blanket ban on federal contracts. Simultaneously, OpenAI secured a defense contract worth up to $200 million. The DoD's AI-related budget for FY2025 stands at $25.2 billion. Project Maven's contract ceiling with Palantir was raised to $1.3 billion. Meanwhile, the United Nations resolution on lethal autonomous weapons systems (LAWS) remains non-binding.

Should military use of AI be permitted? Four panelists confront the question head-on, issue by issue.

Axes: Ethics & Human Rights vs. National Security; International / Structural vs. Individual / Corporate

Kono — Conditional
Dr. Mitchell — Opposed
Zhang — Realist
Tanaka — Engineer-ethicist

Four-speaker position map — Distribution of stances on the security–ethics axes

Issue 1: Can AI "Precision Strikes" Protect Civilians?

Seiichi Kono (Conditional proponent)

AI is not merely a means of attack. Accelerating intelligence analysis, automating cyber defense, optimizing logistics — these are defensive applications. There is even a possibility that AI, free from fatigue and fear, could uphold the principles of distinction and proportionality required by international humanitarian law (IHL) more accurately than humans. Compared to the era of indiscriminate bombing, precision-guided AI has the structural potential to reduce civilian casualties.

Dr. Sarah Mitchell (Opponent)

That claim is contradicted by the data. In the February 2026 U.S.–Israeli strikes on Iran, approximately 2,000 "precision strikes" were carried out over 48 hours, killing 787 people — of whom 555 were civilians. Roughly 70 percent. A primary school in Minar was hit. This is the reality of what Mr. Kono calls defense technology that "serves to protect civilians."

Seiichi Kono (Conditional proponent)

The figures from the Iran strikes are tragic, but the logic is inverted. The problem lies not in the existence of AI technology but in its operational use. Without AI, civilian casualties could have been even higher. Rather than eliminating the technology, we should tighten operational norms.

Aya Tanaka (Engineer-ethicist)

I have firsthand knowledge of theaters where "tightening operational norms" has failed. In the Gaza conflict, the system known as "Lavender" flagged approximately 37,000 Palestinians as potential targets. Officers approved each entry in just 20 seconds. The pre-configured acceptable threshold was "up to 20 civilians per target." Furthermore, a system called "Where's Daddy?" was designed to time strikes for when targets were at home with their families at night. The killing of entire families was engineered at the design level.

This is the reality of what Mr. Kono calls "operational norms."

1. Data collection (surveillance): automated collection of communications, location data, and social media. Who defined the collection targets?

2. AI targeting (Lavender algorithm): 37,000 people listed; at 90% accuracy, up to 3,700 potential errors. Who designed the algorithm? Who provided the training data?

3. Tracking ("Where's Daddy?"): identifies the moment a target is at home at night with family present. Who designed the attack conditions?

4. Human approval (officer, 20 seconds): pre-set threshold of up to 20 civilians per target deemed acceptable. Is rubber-stamping a "decision"?

5. Strike (precision munitions): house destroyed; family members killed. Who bears final responsibility?

Accountability gap: responsibility is dispersed across every stage, yet no one bears it in full.
Lavender system kill chain — Ambiguity of accountability at each stage
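The arithmetic behind the figures cited in this debate is simple enough to verify. A minimal sketch, using only the numbers already quoted in the article (37,000 flagged targets at a reported 90 percent accuracy; 555 civilian deaths out of 787):

```python
# Illustrative arithmetic only; the input figures are those cited
# in the article's sources, not independent data.

flagged_targets = 37_000      # people listed by the Lavender system
reported_accuracy = 0.90      # accuracy figure cited in reporting

# At 90% accuracy, up to 10% of flagged entries may be misidentifications.
potential_errors = round(flagged_targets * (1 - reported_accuracy))
print(potential_errors)       # 3700

# Civilian share of deaths in the strikes discussed in Issue 1.
killed, civilians = 787, 555
print(f"{civilians / killed:.1%}")  # 70.5%
```

Nothing here is analysis; it simply shows that "3,700 potential errors" and "roughly 70 percent" follow directly from the cited figures.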

Issue 2: Can the Technological Competition with China and Russia Be Abandoned?

Seiichi Kono (Conditional proponent)

In 2017, China elevated military–civil fusion (junmin ronghe) to national strategy through its "New Generation Artificial Intelligence Development Plan" and is developing autonomous drone swarms for urban warfare. Russia has rapidly expanded AI-equipped drone operations in Ukraine. Even if the United States exercises restraint in military AI, strategic competitors will not. The result is a technological deficit for democratic nations — directly destabilizing the international order.

Dr. Sarah Mitchell (Opponent)

"China is developing it, so we must too" — this logic has exactly the same structure as the nuclear arms race. The logic of deterrence justifies unlimited escalation. Moreover, unlike nuclear weapons, AI has a lower threshold for use. The illusion of "war without sacrifice" creates incentives for preemptive strikes.

Seiichi Kono (Conditional proponent)

The analogy to nuclear weapons is inappropriate. Nuclear weapons are "unusable weapons," whereas AI can be applied to intelligence analysis and defense as well. Only nations with technological superiority can demand bias audits of China's military AI systems. Withdrawal is not a solution to the problem; it is merely the delegation of the problem to others.

Zhang Weiming (Realist)

The disagreement between Mr. Kono and Dr. Mitchell is one in which both are correct and both are incomplete. The technological competition is real, but unchecked competition is also catastrophic. With nuclear weapons, the NPT constrained the number of nuclear-armed states from a projected 20–30 to nine. It was imperfect, but "far better than nothing." A comparable framework is needed for AI.


Issue 3: Can Corporate Ethical Self-Restraint Function?

Aya Tanaka (Engineer-ethicist)

Anthropic held the line. Two constraints — a prohibition on use in fully autonomous weapons and a prohibition on use in mass domestic surveillance. But the price was a blanket ban on federal contracts. And the very next day, OpenAI secured a $200 million contract.

Dr. Sarah Mitchell (Opponent)

This is the structure of adverse selection. The more ethical a company, the greater its competitive disadvantage, and the company with the loosest standards captures the market. A race to the bottom — the same structure observed repeatedly in environmental regulation and labor standards. The Anthropic affair conclusively demonstrates that voluntary self-regulation is unsustainable in the face of state power and market competition.

Seiichi Kono (Conditional proponent)

Dr. Mitchell is choosing the wrong comparison. The issue is not "ethical competition among AI companies" but "technological competition between democratic and authoritarian states." The U.S. Department of Defense has AI ethics principles. They may be insufficient, but the Chinese People's Liberation Army has nothing comparable. The only viable strategy for preventing an ethical race to the bottom is a two-front approach: democratic states maintaining technological superiority while simultaneously constraining its use.

Aya Tanaka (Engineer-ethicist)

That "two-front approach" is logically contradictory. It was precisely that same U.S. Department of Defense that demanded "unrestricted use" from Anthropic. Secretary of Defense Hegseth gave Anthropic a three-day deadline to open all use cases and, upon refusal, retaliated by designating the company a "supply chain risk" — repurposing a framework originally designed for Huawei against a U.S. company.

When the entity imposing constraints and the entity destroying constraints are one and the same, "changing it from the inside" does not work.

1. International treaty — dysfunctional: the LAWS resolution passed 156–5, but the CCW consensus rule lets the US, Russia, and Israel block negotiations. Non-binding; a single nation can stall progress.

2. National regulation — self-contradictory: the DoD issues AI ethics principles while demanding "unrestricted use" from Anthropic. The constrainer and the constraint-breaker are the same entity.

3. Corporate self-regulation — structurally limited: Anthropic maintains its red lines and is banned from contracts; OpenAI fills the gap. Ethical firms are penalized (adverse selection).

What is needed: all three layers overlapping, plus an independent inspection mechanism.
Three-layer governance structure for military AI — None functions alone

Issue 4: Should the Decision to Kill Be Delegated to Machines?

Dr. Sarah Mitchell (Opponent)

The fundamental question underlying autonomous weapons is simple: "Who bears responsibility for the decision to kill?" When AI selects targets and executes strikes, does responsibility lie with the algorithm designer, the commanding officer, or the corporation? The Geneva Conventions presuppose human legal accountability for every attack. A weapon that undermines this presupposition constitutes a challenge to the international legal order itself.

An Ipsos survey across 28 countries found that an average of 61 percent oppose the use of lethal autonomous weapons systems. Two-thirds of respondents cited the reason that "allowing machines to kill people crosses a moral line." Notably, opposition among U.S. military personnel was higher than among the general public.

Seiichi Kono (Conditional proponent)

The attribution of responsibility can be addressed through legal reform. By extending the doctrine of command responsibility and positioning AI within existing frameworks as a "weapons system," the issue becomes technically resolvable. A technically solvable problem should not be dismissed as an ethical impossibility.

Aya Tanaka (Engineer-ethicist)

"It can be addressed through legal reform" — the operation of Lavender thoroughly refutes that optimism. When an officer approves an AI output in 20 seconds, can we honestly say "a human made the decision"? Google declared "non-participation in weapons" in its 2018 AI ethics principles, then signed a $1.2 billion Project Nimbus contract with the Israeli government in 2024. "Engage and reform" invariably mutates into "engage and fall silent."

When the code we wrote misclassifies a primary school as a "military facility," the responsibility is partly ours. "The higher-ups decided" is not an absolution.

Zhang Weiming (Realist)

Let me synthesize the discussion to this point. The four panelists have exposed three structural fault lines.

First, the deterrence paradox. Inaction cedes ground to authoritarian states; action invites an arms-race escalation.

Second, the governance vacuum. Voluntary self-regulation is unsustainable (Anthropic). State regulation is self-contradictory (the Hegseth letter). International treaties lack binding force (the CCW consensus mechanism). A framework layering all three is necessary, yet its construction proceeds at a glacial pace.

Third, the diffusion of responsibility. Algorithm designers, commanding officers, corporate executives, political leaders — all bear a share of the responsibility, yet none accept full accountability. This "diffusion of responsibility" is the single most dangerous characteristic of autonomous weapons.

The UN LAWS resolution was adopted 166 to 3 in 2024 and 156 to 5 in 2025. However, the CCW's consensus mechanism remains a barrier, with the United States, Russia, and Israel continuing to block treaty negotiations. There is no perfect solution. But a framework that is "far better than nothing" must begin to be constructed now.


Questions That Remain

What this debate has demonstrated is that the controversy over military use of AI cannot be captured within a binary of "for or against." National security, technological ethics, international law, corporate responsibility, the conscience of engineers, algorithmic bias, the attribution of accountability — all are intertwined, and no single answer absorbs them all.

Only one thing can be said with certainty: the time available for answering this question is rapidly shrinking as technology advances. As AI capabilities improve by the day, the speed at which human society designs rules is failing to keep pace. Technology does not wait. Yet the cost of entrusting human judgment to technology that will not wait continues to accumulate at this very moment.



References

Dario Amodei, "Anthropic's Statement on Department of Defense Contract," Anthropic.

United Nations General Assembly, "Resolution on Autonomous Weapons Systems" (A/RES/79/62).

U.S. Department of Defense, "DoD Responsible AI Strategy and Implementation Pathway."

Yuval Abraham, "'Lavender': The AI machine directing Israel's bombing spree in Gaza," +972 Magazine.

Maggie Gray, "Follow the Money: What the Pentagon's Budget Data Tells Us About AI and Autonomy Adoption," maggiegray.us.

DefenseScoop, "Growing demand sparks DOD to raise Palantir's Maven contract to more than $1B."

Human Rights Watch, "Questions and Answers: Israeli Military's Use of Digital Tools in Gaza."

Ipsos / Human Rights Watch, "Global survey highlights continued opposition to fully autonomous weapons."

Stop Killer Robots, "156 states support UNGA resolution on autonomous weapons systems."

ISVD Editorial Team
