This page lists every Orange Pill Wiki entry hyperlinked from James Scott — On AI. 26 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
Hirschman's 1970 triad — the three possible responses to institutional deterioration. Exit punishes, voice informs, loyalty delays. The framework that explains why the AI discourse is failing.
The performance of adoption without its substance — Scott's term for the practice of satisfying the institutional demand while preserving the underlying work unchanged.
The strategic performance of incomprehension — the weapon that exploits the dominant group's assumptions about the subordinate group's capacity, turning condescension into a shield.
The deliberate slowing of adoption through means that cannot be distinguished from legitimate difficulty — the most universal and most deniable of the weapons of the weak.
"When a measure becomes a target, it ceases to be a good measure." Charles Goodhart's 1975 observation from monetary policy, now the operative principle of every specification failure in AI.
Scott's name for what subordinate groups say and do among themselves, in spaces where the powerful cannot hear — distinguished from the public transcript by the audience rather than by the sincerity of the speaker.
Scott's term for the ideology of centralized, rationalist planning that assumed complex human societies could be reduced to administrative categories, managed through expert knowledge, and improved through top-down intervention.
The systematic flattening of complex, local, context-dependent reality into categories that can be measured, compared, and administered from a distance — Scott's diagnostic instrument for centralizing power.
Scott's 1976 framework for the normative expectations of reciprocity and fair dealing that subordinate communities develop to govern their relationships with the powerful — a standard by which exploitation is measured and contested.
Scott's term — drawn from Greek — for the practical, experiential, context-dependent knowledge that accumulates through sustained engagement with a specific environment and resists reduction to formal rules.
The triple structural imbalance — of narrative, institutional incentive, and consequence — that makes everyday resistance rational and simultaneously insufficient.
What subordinates say and do in the presence of power — a performance calibrated to institutional audiences under specific conditions of risk, whose truth value is less important than its social function.
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Harry Collins into a taxonomy of three species of tacit knowledge that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
The prediction that widespread AI adoption will produce shallow practitioners, degraded skills, and a generation of professionals who cannot do the work their credentials claim — the most empirically grounded and therefore most formidable …
The institutional-design failure at the center of the AI transition — the absence of structural mechanisms through which the hidden transcript of the displaced expert can enter the decision-making process.
The credentialed professional — developer, lawyer, teacher, designer — whose decades of accumulated expertise are being repriced by AI tools she did not design and cannot control, and who occupies the structural position Scott's peasants occupied.
The position that using AI is a form of cheating, plagiarism, or professional fraud — the second weapon in the Luddite's arsenal, operating through the vocabulary of authenticity, attribution, and intellectual honesty.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The claim that AI-generated work is fundamentally inferior to human-produced work — the first weapon in the contemporary Luddite's arsenal, operating as technical observation and moral contestation simultaneously.
The moment when everyday resistance shifts from strategic investment to strategic liability — when the institutional landscape has reorganized enough that continued refusal produces marginalization rather than preservation.
Scott's 1985 framework for the ordinary, undramatic, deniable acts through which subordinate groups contest domination without risking open confrontation — foot-dragging, false compliance, feigned ignorance, gossip, pilfering, sabotage dis…
Scott's 1998 masterwork arguing that centralized planning fails when it imposes legibility on complex systems while destroying the local, practical knowledge — mētis — embedded in the communities it reorganizes.
Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.
German-born American economist and political theorist (1915–2012) whose work on exit, voice, and loyalty, the hiding hand, and the rhetoric of reaction crossed disciplinary boundaries for half a century.
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.