Institute for Social Vision Design

Should Military AI Be Permitted? — The Intersection of National Security and Technology Ethics

ISVD Editorial Team

The military use of AI technology has created a head-on collision between security logic and technology ethics. Through a simulated debate among four fictional debaters, we illuminate the structural points of contention in this issue.

Panelists

Seiichi Kono (Former Defense Ministry Joint Staff Office Technology Research Officer)
Conditional Advocate — Essential for maintaining the technological superiority of allied nations

Dr. Sarah Mitchell (MIT Media Lab AI Ethics Researcher)
Opponent — Autonomous weapons are a structural threat to human dignity

Zhang Weiming (International Security Analyst, Singapore-based)
Realist — Building international governance frameworks is the priority

Aya Tanaka (Tech Company Engineer, 'We Will Not Be Divided' signatory)
Technology Ethics Advocate — Developers must not abandon ethical responsibility

This article presents a simulated debate among fictional debaters. It does not represent the views of any specific individuals or organizations. Arguments from different positions have been reconstructed to aid structural understanding of the issue.

Setting the Agenda

In February 2026, Anthropic refused the Department of Defense's demand for unrestricted military use of its AI models and received a complete federal contract ban. At the same time, OpenAI secured defense contracts worth up to $200 million. The U.S. Department of Defense's AI-related budget for FY2025 was $25.2 billion, and Project Maven's contract ceiling with Palantir was raised to $1.3 billion. Meanwhile, the UN resolution on Lethal Autonomous Weapons Systems (LAWS) remains non-binding.

Should military use of AI be permitted? Four debaters go head to head on each point of contention.

[Figure: Four-speaker position map — distribution of stances on the security–ethics axes (National Security vs. Ethics & Human Rights; International/Structural vs. Individual/Corporate). Kono: Conditional; Dr. Mitchell: Opposed; Zhang: Realist; Tanaka: Engineer Ethics.]

Point of Contention 1: Can AI Protect Civilians Through "Precision Attacks"?

Seiichi Kono (Conditional Advocate)

AI is not merely an attack tool. High-speed information analysis, automated cyber defense, logistics optimization—these are defensive applications. AI, which experiences neither fatigue nor fear, may even be able to comply with the "principle of distinction" and "principle of proportionality" required by international humanitarian law more accurately than humans. Precision-guided AI could structurally reduce civilian casualties compared to the era of indiscriminate bombing.

Dr. Sarah Mitchell (Opponent)

That argument is refuted by the data. In the February 2026 U.S.-Israeli attack on Iran, approximately 2,000 "precision attacks" were conducted in 48 hours, resulting in 787 deaths, of which 555, about 70%, were civilians. Minarb Elementary School was attacked. This is the reality of the defense technology that Mr. Kono claims "contributes to civilian protection."

Seiichi Kono (Conditional Advocate)

The Iran attack figures are tragic, but the logic is backwards. The problem lies not in the "existence" of AI technology but in its "operation." Without AI, civilian casualties might have been even greater. Rather than eliminating technology, we should strengthen operational norms.

Aya Tanaka (Technology Ethics Advocate)

I know firsthand where "strengthening operational norms" breaks down. "Lavender," used in the Gaza conflict, listed approximately 37,000 Palestinians as potential targets. Officer approval time was just 20 seconds, and the preset tolerance was "up to 20 civilians per target." Furthermore, the "Where's Daddy?" system was designed to attack at night, when targets are at home with their families. Killing entire families is built in at the design level.

This is the reality of Mr. Kono's "operational norms."

Lavender system kill chain — ambiguity of accountability at each stage:

1. Data collection (surveillance) — automated collection of communications, location data, and social media. Who defined the collection targets?

2. AI targeting (the Lavender algorithm) — approximately 37,000 people listed; at 90% accuracy, that means up to 3,700 potential errors. Who designed the algorithm? Who supplied the training data?

3. Tracking ("Where's Daddy?") — identifies the moment when targets are at home, at night, with family present. Who designed the attack conditions?

4. Human approval (an officer, 20 seconds) — pre-set tolerance: up to 20 civilians per target. Is rubber-stamping a "decision"?

5. Strike (precision munitions) — house destroyed; family members killed. Who bears final responsibility?

Accountability gap: responsibility is dispersed across all stages, and no one bears full responsibility.
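To make the arithmetic in stages 2 and 4 concrete, here is a minimal back-of-envelope sketch. Only the three figures (37,000 listed targets, the reported 90% accuracy, the 20-civilian tolerance) come from the reporting cited above; the code itself, including the `approve_strike` helper, is a hypothetical simplification for illustration, not a description of any real system.

```python
# Back-of-envelope sketch of the kill-chain figures quoted above.
# The three constants come from the article; everything else is a
# hypothetical simplification, not a model of any real system.

LISTED_TARGETS = 37_000
CLAIMED_ACCURACY = 0.90      # reported figure; the real error structure is unknown
CIVILIAN_TOLERANCE = 20      # pre-set "acceptable" civilian casualties per target

# If 90% of listings are correct, the remaining 10% are potential errors.
potential_errors = LISTED_TARGETS * (1 - CLAIMED_ACCURACY)
print(f"Potential misidentifications: {potential_errors:,.0f}")  # -> 3,700

def approve_strike(expected_civilian_casualties: int) -> bool:
    """With a pre-set threshold, 'approval' collapses into a comparison:
    the moral judgment was made once, at design time, not per strike."""
    return expected_civilian_casualties <= CIVILIAN_TOLERANCE
```

The point of stage 4 is visible in `approve_strike`: once the tolerance is encoded as a constant, a 20-second human sign-off adds no new judgment to what the comparison already decided.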

Point of Contention 2: Can We Abandon the Technology Race Against China and Russia?

Seiichi Kono (Conditional Advocate)

China has institutionalized its military-civilian fusion strategy through the 2017 "New Generation AI Development Plan" and is developing autonomous drone swarms for urban warfare. Russia has rapidly expanded the deployment of AI-equipped drones in Ukraine. Even if the U.S. refrains from military AI applications, strategic competitors won't stop. The result would be technological inferiority of democratic nations—directly leading to destabilization of the international order.

Dr. Sarah Mitchell (Opponent)

"Because China is developing it, we must too"—this logic has exactly the same structure as the nuclear arms race. The logic of deterrence justifies unlimited escalation. Moreover, AI has a lower "threshold for use" than nuclear weapons. The illusion of "war without sacrifice" creates incentives for preemptive strikes.

Seiichi Kono (Conditional Advocate)

The analogy to nuclear weapons is inappropriate. Nuclear weapons are "unusable weapons," whereas AI can also be used for information analysis and defense. Only nations with technological superiority can demand bias audits of China's military AI systems. Withdrawal does not solve the problem; it delegates the problem to others.

Zhang Weiming (Realist)

Mr. Kono and Dr. Mitchell are each correct, and each incomplete. Technology competition is real, but unchecked competition is also catastrophic. In the nuclear case, the NPT reduced the number of possessor states from a predicted 20 to 30 down to nine. Imperfect, but "much better than nothing." AI needs a similar framework.


Point of Contention 3: Does Corporate Ethical Self-Restraint Function?

Aya Tanaka (Technology Ethics Advocate)

Anthropic held its red lines. Two constraints—prohibition of use in fully autonomous weapons and prohibition of use in large-scale domestic surveillance. But the cost was a complete federal contract ban. And OpenAI secured a $200 million contract the next day.

Dr. Sarah Mitchell (Opponent)

This is the structure of "adverse selection": the more ethical a company is, the greater its competitive disadvantage, while the companies with the loosest standards capture the market. A race to the bottom—the same structure observed repeatedly in environmental regulation and labor standards. The Anthropic incident proved definitively that self-regulation is unsustainable under pressure from state power and market competition.
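Dr. Mitchell's "adverse selection" has the shape of a one-shot prisoner's dilemma. The following toy payoff matrix uses invented numbers purely for illustration; only the ordering matters (contract revenue outweighs the shared reputational benefit of holding the line):

```python
# Toy model of the "race to the bottom": two firms each choose whether to
# hold or drop their ethical red lines. Payoff numbers are invented for
# illustration; only their ordering matters.

payoff = {
    ("hold", "hold"): 3,   # both hold: shared legitimacy, no contract windfall
    ("hold", "drop"): 0,   # I hold, rival takes the contracts (Anthropic's position)
    ("drop", "hold"): 5,   # I take the contracts alone (OpenAI's position)
    ("drop", "drop"): 2,   # both chase contracts, reputation eroded
}

for rival in ("hold", "drop"):
    best = max(("hold", "drop"), key=lambda me: payoff[(me, rival)])
    print(f"If the rival plays {rival!r}, the best response is {best!r}")

# Both lines print 'drop': dropping is a dominant strategy, so (drop, drop)
# is the equilibrium even though (hold, hold) pays both firms more (3 > 2).
```

Under these assumed payoffs, no appeal to conscience changes the equilibrium; only an external rule that alters the payoffs (regulation, procurement conditions, treaty obligations) can.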

Seiichi Kono (Conditional Advocate)

Dr. Mitchell is drawing the wrong comparison. The issue is not "ethical competition among AI companies" but "technological competition between democratic and authoritarian states." The U.S. Department of Defense has AI ethics principles. They may be insufficient, but the Chinese People's Liberation Army has no equivalent. Preventing a race to the bottom requires a two-front operation: democratic states maintaining technological superiority while constraining how it is used.

Aya Tanaka (Technology Ethics Advocate)

That "two-front operation" is logically contradictory. It was precisely that U.S. Department of Defense that demanded "unlimited use" from Anthropic. Defense Secretary Hegseth gave Anthropic a three-day deadline to open all applications, and when refused, retaliated by designating them a "supply chain risk." They applied the Huawei framework to a U.S. company.

When the entity imposing constraints and the entity destroying constraints are identical, "changing from within" doesn't function.

Three-layer governance structure for military AI — none functions alone:

International treaty (dysfunctional) — the LAWS resolution passed 156-5, but the CCW consensus rule lets the U.S., Russia, and Israel block. Non-binding; a single nation can obstruct.

National regulation (self-contradictory) — the DoD issues AI ethics principles while demanding "unrestricted use" from Anthropic. The constrainer is the constraint-breaker.

Corporate self-regulation (structurally limited) — Anthropic maintains its red lines and is banned from contracts; OpenAI fills the gap. Ethical firms are penalized (adverse selection).

What is needed: all three layers overlapping, plus an independent inspection mechanism.

Point of Contention 4: Should We Allow Machines to Make Lethal Decisions?

Dr. Sarah Mitchell (Opponent)

The fundamental question underlying autonomous weapons is simple. "Who bears responsibility for the decision to kill?"—When AI selects targets and executes attacks, does responsibility lie with the algorithm designers, commanders, companies, or politicians? The Geneva Conventions presuppose human legal responsibility for all attacks. Weapons that cannot meet this presupposition are a challenge to the international legal order itself.

An Ipsos survey across 28 countries found an average of 61% opposed to the use of lethal autonomous weapons. Two-thirds of opponents gave as their reason that "allowing machines to kill people crosses a moral line." Notably, the opposition rate among U.S. military personnel is higher than among civilians.

Seiichi Kono (Conditional Advocate)

Responsibility attribution can be legally established. Extend the principle of command responsibility and position AI as a "weapon system" within existing frameworks. Problems that can be solved technically should not be buried as ethical impossibilities.

Aya Tanaka (Technology Ethics Advocate)

"Just establish it legally"—Lavender's operation completely refutes that optimism. When officers approve AI output in 20 seconds, can we say "humans made the judgment"? Google declared "non-participation in weapons" in its AI ethics principles in 2018, then signed a $1.2 billion Project Nimbus contract with the Israeli government in 2024. "Engage and correct" transforms into "engage and remain silent."

When code we write misclassifies some elementary school as a "military facility," we bear responsibility too. "Orders from above" is not absolution.

Zhang Weiming (Realist)

Let me organize the discussion so far. The four participants have surfaced three structural points of contention.

First, the paradox of deterrence. If we don't advance, we fall behind authoritarian states; if we advance, we invite arms race escalation.

Second, the governance vacuum. Self-regulation is unsustainable (Anthropic), state regulation is self-contradictory (the Hegseth letter), and international treaties lack binding force (the CCW consensus rule). A framework overlaying all three is needed, but its construction is proceeding slowly.

Third, diffusion of responsibility. Algorithm designers, commanders, executives, political leaders—everyone bears part of the responsibility, no one takes full responsibility. This "diffusion of responsibility" is the most dangerous characteristic of autonomous weapons.

UN LAWS resolutions were adopted 166 to 3 in 2024 and 156 to 5 in 2025. However, the CCW consensus rule remains a barrier, with the U.S., Russia, and Israel continuing to block treaty negotiations. There is no perfect solution, but we need to start building a framework that is "much better than nothing" right now.


Remaining Questions

What this debate revealed is that discussions about military use of AI cannot be captured by the binary opposition of "for or against." National security, technology ethics, international law, corporate responsibility, engineers' conscience, algorithmic bias, responsibility attribution—all are intertwined and do not converge on a single answer.

Only one thing can be said with certainty: the window for answering this question is closing rapidly as the technology advances. AI capabilities improve daily, while the speed at which human society designs rules fails to keep up. Technology won't wait. And the cost of entrusting human judgment to a technology that won't wait is accumulating at this very moment.



References

Anthropic's Statement on Department of Defense Contract. Dario Amodei, Anthropic.

Resolution on Autonomous Weapons Systems (A/RES/79/62). United Nations General Assembly.

DoD Responsible AI Strategy and Implementation Pathway. U.S. Department of Defense.

'Lavender': The AI machine directing Israel's bombing spree in Gaza. Yuval Abraham, +972 Magazine.

Follow the Money: What the Pentagon's Budget Data Tells Us About AI and Autonomy Adoption. Maggie Gray, maggiegray.us.

Growing demand sparks DOD to raise Palantir's Maven contract to more than $1B. DefenseScoop.

Questions and Answers: Israeli Military's Use of Digital Tools in Gaza. Human Rights Watch.

Global survey highlights continued opposition to fully autonomous weapons. Ipsos / Human Rights Watch.

156 states support UNGA resolution on autonomous weapons systems. Stop Killer Robots.
