AI Governance and Cognitive Risks
Cognitive and institutional challenges of the generative AI era, including authority bias, cognitive debt, regulatory gaps, and the governance of military AI use.
11 items
Literature Map: From Agnotology to 'Structural Invisibility'
Tracing the intellectual lineage from Robert Proctor's production of ignorance, through Miranda Fricker's epistemic injustice and Linsey McGoey's strategic ignorance, to ISVD's 'Reading the Structure' methodology.
The Pitfalls of 'I Asked AI' — Authority Bias and the Hollowing Out of Knowledge
Authority bias in accepting AI output uncritically and knowledge hollowing from skill delegation. From calculators to GPS to LLMs—a recurring pattern.
Motivated Ignorance — The Cognitive Structure of 'Not Wanting to Know'
Analyzing, from a cognitive science perspective, the mechanism by which individuals voluntarily choose to 'remain ignorant' rather than having ignorance imposed on them from outside. Examining how the 'illusion of knowledge' demonstrated in Sloman & Fernbach's 'The Knowledge Illusion' and motivated reasoning together form the individual-level foundation of structural ignorance.
The Production Mechanism of Taboos — Who Decides What 'Must Not Be Said'
A structural analysis of the questions raised by Akira Tachibana's 'Things That Must Not Be Said'—examining the mechanisms by which discussing genetics, intelligence, and appearance becomes 'forbidden to speak of' through the lens of agnotology. Taboos do not emerge naturally but are produced and maintained under specific social conditions.
'Kūki' and Sontaku — The Japanese Form of Pluralistic Ignorance
This case study places the 'rule by atmosphere' (kūki) analyzed by Yamamoto Shichihei in A Study of 'Atmosphere' and the concept of 'sontaku' (anticipatory compliance), which gained political attention from 2017, within the theoretical framework of pluralistic ignorance. It illuminates how sharply raising the cost of dissent sustains the state of 'knowing but not speaking'.
Epistemic Injustice and Information Access Gaps in NPOs — Visualizing Structures Where Voices Go Unheard
Applying Miranda Fricker's epistemic injustice theory to the NPO context, this analysis examines how testimonial injustice and hermeneutical injustice create structural information access gaps in policymaking. Through connections with the 'complaint gap' concept from the Quiet City Project, we envision counter-design approaches grounded in agnotology.
Who Draws AI's 'Red Lines'? — Anthropic vs. Pentagon Lawsuit Questions Governance Vacuum
Anthropic sued the U.S. Department of Defense over its demands for unlimited military access to AI. An unprecedented confrontation over ethical red lines in AI governance.
Should Military AI Be Permitted? — The Intersection of National Security and Technology Ethics
The military use of AI technology has created a head-on collision between security logic and technology ethics. Through a simulated debate among four fictional debaters, we illuminate the structural points of contention in this issue.
Cognitive Debt — What Happens to the Brain and Society When We Delegate Thinking to AI
Brain connectivity among ChatGPT users dropped 55%, with 83% unable to cite their own writing. MIT Media Lab research reveals the structure of cognitive debt.
AI Regulation in the United States: Federal vs. State — Can a Unified Framework Be Achieved?
Federal preemption and state regulations collide in U.S. AI policy. Comparing California, Colorado, and Texas legal frameworks and governance challenges.
Anthropic vs. The Pentagon — When AI Companies' Ethical Judgments Collide with National Security
Anthropic rejected the Pentagon's demand for unrestricted use and was banned from federal contracts. Examining tensions between AI ethics and national security.