Artificial insanity is Al Gore's polemical counter-framing to the marketing term artificial intelligence. Deployed at COP28 in December 2023, the phrase names the algorithmic systems that curate personalized information environments by optimizing for engagement — a metric that systematically favors content producing emotional arousal, tribal solidarity, and outrage over content that informs rational deliberation. Gore's claim is not that the systems are literally insane but that their outputs, measured by effect on the democratic information ecosystem, are indistinguishable from the outputs of a system designed to produce collective cognitive dysfunction. The phrase carries a specific rhetorical move: rejecting the industry's preferred terminology and replacing it with a description grounded in observable effects rather than engineering ambitions.
The term emerged from Gore's decade-long observation that social media platforms had inverted the problem he diagnosed in The Assault on Reason. Television had produced passivity; social media produced hyperactivity, but a hyperactivity engaged with the wrong things, channeled through systems optimized for behavioral prediction rather than informed citizenship. Gore's comparison of engagement-optimizing algorithms to the AR-15 was deliberately provocative. He was arguing that the algorithms inflict measurable damage at scale, that the damage is well-documented, and that the argument for their continued unregulated deployment is structurally identical to the argument against gun control: individual liberty invoked as a shield for institutional profit at the expense of collective well-being.
Generative AI represents a qualitative escalation of the artificial-insanity dynamic. Previous algorithmic curation selected among human-generated content, imposing natural limits on the volume and quality of what could be amplified. Large language models remove this constraint. A single operator can now generate unlimited quantities of personalized persuasive content, tailored to individual psychological profiles with a precision human persuaders could never match. The distinction between AI-amplified disinformation and earlier disinformation is not merely quantitative but structural: the effort signals that previously allowed citizens to evaluate information — writing quality, sourcing depth, institutional backing — have been commoditized into the output of any prompt.
The Orange Pill's account of Claude producing confident wrongness dressed in good prose describes the mechanism from the builder's perspective. The output satisfies every surface criterion citizens use to evaluate information quality while potentially failing every substantive criterion. Segal caught this in his own writing — a passage about consciousness that invoked Deleuze incorrectly but sounded insightful enough to nearly survive editorial review. At scale, across millions of interactions, this failure mode becomes the defining feature of the AI-saturated information environment.
Gore's framework insists that the artificial-insanity dynamic is structural rather than incidental — a product of the attention economy's business model rather than a fixable bug. The business model requires maximizing time-on-platform, which requires engagement optimization, which produces the pathologies the phrase names. Individual platforms cannot unilaterally fix the problem without losing competitive position to platforms that continue to optimize ruthlessly. The intervention must be systemic: regulation that changes the incentive structure, not exhortation that changes individual behavior within it.
Gore introduced the phrase at COP28 in Dubai in December 2023, in a speech that framed climate disinformation as continuous with the broader degradation of the information environment by engagement-optimizing algorithms. The framing was strategic: by connecting climate denial to the structural features of social media, Gore was arguing that the two crises — climate and democracy — share a common infrastructure and cannot be addressed separately.
The framework rests on four core claims. Rejection of neutral framing. The industry term artificial intelligence obscures the systems' actual effects; artificial insanity names what the systems do rather than what they were designed to be.
Weapons analogy. The AR-15 comparison claims that engagement-optimizing algorithms inflict measurable, systemic, well-documented damage, and that the arguments against their regulation mirror the arguments against gun control.
Structural cause. The pathology is produced by the attention-economy business model, not by individual design choices; individual platforms cannot fix it unilaterally without losing competitive position.
AI escalation. Generative AI supercharges artificial insanity by industrializing persuasion, perfecting prediction, and dissolving the effort signals that previously allowed citizens to evaluate information.
Civil libertarians have argued that Gore's framing invites speech-suppressing regulation that democracies cannot implement without undermining the First Amendment. Gore's response is that existing regulatory frameworks — truth-in-advertising, broadcast standards, campaign finance — demonstrate that democratic societies routinely constrain communication systems without abolishing free speech, and that the distinction between speech and algorithmic amplification is the leverage point for regulation that preserves liberty while constraining harm.