This page lists every Orange Pill Wiki entry hyperlinked from Langdon Winner — On AI. 37 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the mark…
Anthropic's alignment approach that trains models to evaluate their own outputs against a set of written principles — replacing the implicit, averaged preferences of human evaluators with explicit, legible values embedded in the training p…
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The rhetorical operation that converts political choices into natural facts or technical necessities — the signature mechanism Winner spent his career exposing.
The Winner volume's central question — transposing his 1980 interrogation of artifacts onto the dominant metaphor of the AI age: the amplifier that supposedly doesn't care what signal you feed it.
The specific form of exit without alternative exercised by senior technology practitioners in 2025–2026 — departing not to a competing system but to the margins, taking with them standards the remaining system cannot replace.
Winner's term for the cluster of assumptions surrounding computerization that render political critique of technology socially unacceptable — the equation of information access with power distribution.
The ideological operation — diagnosed by Mannheim's framework — by which contingent social choices present themselves as natural processes, thereby removing them from the domain of political deliberation.
The institutional mechanism — healthcare, retirement, disability insurance not tied to specific employers — that Kindleberger's framework identifies as required for displaced workers to transition between roles without losing basic protect…
Anthropic's framework of capability thresholds — AI Safety Levels analogous to biosafety levels — specifying safety measures required before deployment at each level, designed to build the governance framework before the harm rather than a…
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The underappreciated Winnerian concept — recovered by Eric Deibel — that a society's technological infrastructure distributes authority and establishes rules the way a political constitution does.
Winner's diagnosis of societies that adopt transformative technologies without deliberation — sleepwalking through consequential change as though it were weather rather than political choice.
Amodei's extension of Segal's amplifier framework — the amplifier is not neutral, the design choices embedded in an AI system are moral choices, and the designer shares responsibility with the user for what gets amplified.
Segal's epilogue to the Galbraith volume — the quarterly conversation where the question "if five people can do what a hundred did, why are we paying a hundred?" gets asked — and the mechanism through which private capture of AI productivity gains becomes structura…
The twelve-year-old's "Mom, what am I for?" — read by the Winner volume not as existential inquiry but as a legitimacy demand made by a citizen of a political order whose justification has become unclear.
The February 2026 repricing of software companies reframed from economic correction to redistribution of economic power at market speed without democratic deliberation.
The composite figure from The Orange Pill whose access to AI tools is celebrated as democratization — and whose absence from governance decisions the Winner volume makes visible.
The Winner volume's recovery of the Luddite as a political actor making legitimate democratic demands — rather than a psychological casualty of progress.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The 1930s American response to the Great Depression — the deployment-phase institutional reckoning of the oil and mass-production age that produced the architecture of the post-war golden age.
The Winner volume's reframing of Han's aesthetics of smoothness — not a cultural taste but a built environment that distributes power through invisibility.
Winner's structural critique of governance by those who understand a technical system — the recognition that expertise is not democratic mandate, however deep or sincere.
The Orange Pill's name for the hope that builders with deep technical understanding will govern AI responsibly through an ethic of stewardship — and Lindblom's diagnosis of why this hope, however sincere, is structurally naive.
Arendt's space of appearance — the common world where actors encounter one another through deed and word, and where action becomes possible because others are present to witness, judge, and respond.
Edo Segal's foreword framing — the recognition that AI's deployment, which restructured cognition, labor, and meaning, proceeded without any democratic decision anyone can point to.
The Winner volume's constructive program — the specific institutional architecture required to govern the AI amplifier democratically rather than technocratically.
The nineteenth-century British laws limiting working hours, prohibiting child labor, and establishing safety standards — the archetypal deployment-phase institutional innovation that redistributed the industrial revolution's gains.
Winner's 1977 first book — subtitled Technics-out-of-Control as a Theme in Political Thought — examining how modern technological systems develop trajectories that exceed the capacity of individuals or institutions to govern.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
American philosopher (b. 1947) who developed the central capabilities approach in dialogue with Sen — providing the specific enumeration of capabilities that Sen himself deliberately left open.
The skilled textile workers whose 1811–1816 destruction of wide stocking frames became the founding Luddite event — and whose ontological error, Ellul's framework suggests, was believing they faced a technology when they faced a logic.