WIKI COMPANION

Charles Perrow — On AI

A reading-companion catalog of the 45 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that Charles Perrow — On AI uses as stepping stones for thinking through the AI revolution.

Each entry is a deeper dive into a person, concept, work, event, or technology. Click any card to open the entry; in each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.

Concept (34)
Aesthetics of Smoothness

Groys's diagnosis of the dominant cultural aesthetic of the AI age — a logic that eliminates friction, conceals construction, and trains viewers to mistake the polished surface for the thing itself.

Aesthetics of the Smooth

Byung-Chul Han's diagnosis, engaged in both The Orange Pill and this book, of the cultural trajectory toward frictionlessness that conceals the labor, struggle, and developmental process that gave work its depth.

AI Safety

The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that sustains them.

Ascending Friction

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Common-Mode Failure

The engineering term for the condition in which a single cause defeats multiple redundant systems simultaneously — and the precise structural description of what happens when one mind operates across domains previously covered by independent specialists.

Democratization of Capability (Senian Reading)

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Depth Atrophy

The progressive decay of the capacity for sustained, unaided concentration that occurs when practitioners rely continuously on AI assistance — incremental, imperceptible, and grounded in the neuroscience of synaptic pruning.

Flow State

Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.

Fluent Fabrication

The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass it.

High Reliability Organizations

The class of organizations — nuclear submarines, aircraft carriers, air traffic control — that operate safely in Perrow's dangerous quadrant through specific organizational disciplines: preoccupation with failure, reluctance to simplify, sensitivity to operations, commitment to resilience, and deference to expertise.

Human-AI Collaboration

The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."

Imagination-to-Artifact Ratio

Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exploration side of intelligence unmeasured.

Interactive Complexity

Perrow's term for systems whose components interact through pathways designers did not anticipate — nonlinear, networked, often invisible — producing failure combinations that exceed any safety analysis that could have been conducted.

Latent Failures

Errors embedded in a system but not yet manifested as visible problems — dormant until the specific conditions that trigger them arise, and accumulating invisibly in frictionless AI-augmented workflows.

Living with Normal Accidents

Perrow's governing injunction: the response to systems whose architecture guarantees catastrophic failure is not prevention (impossible) but survival — designing for graceful failure, not flawless operation, and accepting that the measure of success is how the system fails, not whether it fails.

Modularity Principle

The architectural prescription — drawn from Perrow's later work and extended by AI safety researchers — that systems designed as loosely coupled modules with limited interaction pathways absorb failures that tightly integrated systems transmit.

Normal Accidents

Perrow's foundational thesis that certain systems, by virtue of their architecture, produce catastrophic failures that cannot be prevented by better operators or better design — failures that are features of the system, not deviations from it.

Normalization of Deviance

Diane Vaughan's concept for the gradual organizational acceptance of departures from design parameters as the new normal — the slow drift that produces the appearance of safety right up until the accident.

Productive Addiction

The Orange Pill's term for compulsive engagement with generative tools — re-specified by the Skinner volume not as metaphor but as the precise behavioral signature of a continuous reinforcement schedule without an extinction point.

Question Engineering

The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.

Redundancy Principle

The safety-engineering doctrine — central to Perrow's framework — that systems operating in the dangerous quadrant require diverse independent backups, and that the defense against common-mode failure is independence, not duplication.

Resilience Strategy

The alternative to anticipation — deploy, observe, adapt, correct — that Wildavsky defended as the only governance strategy that historically produces safety rather than merely claiming to.

Task Seepage

The mechanism — documented in the Berkeley study of AI workplace adoption — by which AI-accelerated work colonizes previously protected temporal spaces, converting every pause into an opportunity for productive engagement.

Teaming

Edmondson's term for the dynamic activity of teamwork across boundaries — collaboration as verb rather than noun, and the organizational capability the AI transition most demands.

The Beaver's Dam

The Orange Pill's metaphor for the institutional work of redirecting the river of AI capability — not to stop the current but to shape what grows around it.

The Burnout Society

Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.

The Complexity-Coupling Matrix

Perrow's two-by-two diagnostic instrument classifying systems by interactive complexity and coupling tightness — and identifying the upper-right quadrant, where both conditions hold, as the zone where normal accidents become statistically inevitable.

The Distribution Problem

The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.

The Luddite Response

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Operator's Dilemma

The structural contradiction identified in aviation safety research: the automation that improves normal performance degrades emergency performance, because the skills required for intervention are maintained only through the performance automation eliminates.

The Paradox of Safety

The structural observation that efforts to enhance system robustness can render systems more brittle — because safety mechanisms are themselves components whose interactive complexity adds to the system they are designed to protect.

The Specialist's Prison

The institutional and cognitive confinement produced by disciplinary specialization — the fishbowl that specialists breathe without seeing, and the structure AI both cracks and reinforces.

The Twenty-Fold Failure Multiplier

The structural inversion of the twenty-fold productivity gain: if a single AI-augmented worker can produce the output of twenty specialists, she can also produce the failures of twenty, concentrated in a single cognitive bottleneck.

Tight Coupling

Perrow's term for systems in which processes are time-dependent, invariant in sequence, and admit no slack — so that when disruption occurs, it propagates at the speed of the process itself, outrunning the cognition required to intervene.

Technology (1)
Large Language Models

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any individual human's.

Work (2)
The Berkeley Study

Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.

The Next Catastrophe

Perrow's 2007 extension of normal accident theory to critical infrastructure and organizational concentration — arguing that structural approaches (modularity, decentralization) outperform procedural fixes in systems where catastrophe is a standing possibility.

Person (4)
Byung-Chul Han

Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers a corrective.

Charles Perrow

American sociologist (1925–2019) at Yale whose four decades of research on how complex organizations fail produced Normal Accident Theory — the single most influential framework for understanding catastrophic failure in high-risk systems.

Edo Segal

Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.

Gilles Deleuze

French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.

Event (4)
Chernobyl

The 1986 Soviet nuclear catastrophe — caused not by the reactor but by the safety test intended to verify the reactor's protective mechanisms — and the paradigmatic example of the safety system itself as the risk.

The Deleuze Failure

The moment in The Orange Pill's drafting when Claude produced a fluent philosophical connection between Csikszentmihalyi's flow state and Deleuze's concept of smooth space — eloquent, structurally elegant, and wrong — caught the next morning.

The Trivandrum Training

The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange Pill.

Three Mile Island

The March 1979 partial meltdown of Unit 2 at the Pennsylvania nuclear plant — the founding case study of Normal Accident Theory and the event that transformed Charles Perrow from organizational sociologist into risk theorist.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.