What Is Happening
In 2013, Italian programmer Alberto Brandolini proposed a law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it." This asymmetry succinctly captured the information environment of the social media era.
But the proliferation of large language models (LLMs) has fundamentally altered the premises of this law. The cost of lying has been driven to near zero.
Previously, the production of misinformation required at least the human labor of "writing a plausible text." Today, a few seconds of prompting can generate long-form text mimicking the style of an expert. The fabrication of scientific-paper formatting, the invention of statistical data, the creation of fictitious citations — all have become technically trivial.
On the fact-checking side, however, nothing has changed. Investigating primary sources, cross-referencing multiple sources, consulting experts, composing clear rebuttals — these steps still require human judgment and time.
Background and Context
The Historical Position of Brandolini's Law
Brandolini's Law is a contemporary reformulation of a long-standing insight concerning information asymmetry. In propaganda research, the observation that "a lie repeated often enough becomes truth" has been known since the Nazi era. What is new is the structural amplification of this asymmetry by the digital environment.
Bergstrom & West (2020) framed this problem in Calling Bullshit as "a market failure in information quality." The mechanism by which low-quality information drives out high-quality information can be regarded as the information equivalent of Gresham's Law.
The Firehose of Falsehood Model
Paul & Matthews (2016) of RAND Corporation analyzed Russian propaganda strategy and presented the "Firehose of Falsehood" model. Its four characteristics are:
- High volume, multi-channel: Simultaneous dissemination of large quantities of information across multiple platforms.
- Rapid and continuous: Prioritizing being first with information, emitting it at a speed that outpaces correction.
- Indifferent to consistency: Contradictory claims are disseminated simultaneously without concern.
- No requirement for truth: Plausibility alone suffices.
What this model demonstrates is a structural problem: given the finite nature of the recipient's cognitive processing capacity, high-quality information is buried by a flood of low-quality information.
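The structural point can be sketched as a toy calculation. The function below is an illustrative assumption for this article, not part of the RAND model: it treats the recipient as a capacity-limited reader who samples uniformly from a feed containing a fixed supply of true items and a growing supply of false ones.

```python
def expected_true_share(n_true: int, n_false: int) -> float:
    """Expected fraction of true items encountered by a reader who
    samples uniformly from a feed of n_true true and n_false false
    items (toy model of a capacity-limited recipient)."""
    return n_true / (n_true + n_false)

# Hold the supply of true items fixed while false items multiply:
for n_false in (10, 100, 1000, 10000):
    share = expected_true_share(10, n_false)
    print(f"{n_false:6d} false items -> expected true share {share:.4f}")
```

The point of the sketch is that the absolute amount of true information never changes; only its share of a finite attention budget collapses as the flood grows.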
The Qualitative Change in the AI Era
Before LLMs, operating a Firehose of Falsehood required organized human resources. Nation-states and click farms were the typical operators.
After the advent of AI, individuals can generate equivalent volumes of information. This might be called the "democratization" of asymmetry, but in reality it accelerates the degradation of the information environment.
Reading the Structure
Changes in Cost Structure
The cost structure of misinformation production has changed as follows:
| Element | Pre-LLM | Post-LLM |
|---|---|---|
| Text generation | Human authorship (time) | Generated in seconds (≈ 0) |
| Plausibility | Skill required | Automatically conferred by the model |
| Multilingual deployment | Translators required | Instantaneous translation |
| Mass production | Organization required | Possible for individuals |
| Fact-checking | Experts + time | No change |
Critically, the cost on the correction side has scarcely changed. AI assistance has made parts of the fact-checking pipeline more efficient, but the final step of human judgment cannot be eliminated.
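The table above can be condensed into a single back-of-envelope ratio. A minimal sketch, with effort figures invented purely to illustrate scale:

```python
def refutation_ratio(produce_hours: float, refute_hours: float) -> float:
    """Brandolini ratio: effort needed to refute a claim divided by
    the effort needed to produce it."""
    return refute_hours / produce_hours

# Hypothetical effort figures (hours), chosen only for illustration:
# refutation cost is held constant while production cost collapses.
pre_llm = refutation_ratio(produce_hours=2.0, refute_hours=20.0)
post_llm = refutation_ratio(produce_hours=0.01, refute_hours=20.0)
print(f"pre-LLM asymmetry:  {pre_llm:.0f}x")
print(f"post-LLM asymmetry: {post_llm:.0f}x")
```

Whatever the true figures, the structure is the same: when the denominator approaches zero and the numerator stays fixed, the asymmetry grows without bound.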
Analysis from the Perspective of Agnotology
What Proctor's (2008) agnotology (the study of ignorance) shows is that ignorance comes in three types:
- Native ignorance: what is simply not yet known
- Lost knowledge: what was once known but has been forgotten
- Strategically produced ignorance: ignorance that is deliberately manufactured (Proctor's "strategic ploy")
The AI amplification of the Brandolini asymmetry has dramatically reduced the production cost of the third type — strategically produced ignorance. Ignorance is now industrially manufacturable, and its "production cost" is at its lowest level in history.
The Consequence: "Cognitive Flooding"
The terminus of this structure is the normalization of the "post-truth" condition analyzed by McIntyre (2018) in Post-Truth. The problem is not that "it is unclear what the truth is," but that "people cease to care whether something is true."
Recipients subjected to an information flood eventually abandon judgment. This is the very "ultimate objective of the Firehose" identified by RAND (2016) — not to persuade the opponent, but to exhaust the opponent's capacity for judgment.
Questions for This Lab
The questions raised by this hypothesis are not technical but concern social design:
- Can the expansion of the Brandolini asymmetry be measured quantitatively?
- What mechanisms can structurally reduce the cost of correction?
- Beyond improving individual literacy, what kinds of institutional design are effective?
- How does this asymmetry manifest in the Japanese information environment?
These questions connect directly to the theme of community-based resilience.
References
Proctor, R. N., & Schiebinger, L. (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford University Press.
Paul, C., & Matthews, M. (2016). The Russian 'Firehose of Falsehood' Propaganda Model. RAND Corporation, Perspectives PE-198.
Bergstrom, C. T., & West, J. D. (2020). Calling Bullshit: The Art of Skepticism in a Data-Driven World. Random House.
McIntyre, L. (2018). Post-Truth. MIT Press.