WIKI COMPANION

James Scott — On AI

A reading-companion catalog of the 27 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that James Scott — On AI uses as stepping stones for thinking through the AI revolution.

Each card opens a deeper-dive entry. Within an entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.

Concept (22)
Ascending Friction

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor: the engineer who no longer struggles with syntax struggles instead with architecture.

Exit, Voice, and Loyalty

Hirschman's 1970 triad of possible responses to institutional deterioration. Exit punishes, voice informs, loyalty delays. The framework that explains why the AI discourse is failing.

False Compliance

The performance of adoption without its substance — Scott's term for the practice of satisfying the institutional demand while preserving the underlying work unchanged.

Feigned Ignorance

The strategic performance of incomprehension: a weapon that exploits the dominant group's assumptions about the subordinate group's capacity, turning condescension into a shield.

Foot-Dragging

The deliberate slowing of adoption by means that cannot be distinguished from legitimate difficulty: the most universal and most deniable of the weapons of the weak.

Goodhart's Law

"When a measure becomes a target, it ceases to be a good measure." Charles Goodhart's 1975 observation from monetary policy, now the operative principle of every specification failure in AI.

Hidden Transcript

Scott's name for what subordinate groups say and do among themselves, in spaces where the powerful cannot hear. It is distinguished from the public transcript by its audience, not by the sincerity of the speaker.

High Modernism

Scott's term for the ideology of centralized, rationalist planning that assumed complex human societies could be reduced to administrative categories, managed through expert knowledge, and improved through top-down intervention.

Legibility

The systematic flattening of complex, local, context-dependent reality into categories that can be measured, compared, and administered from a distance: Scott's diagnostic instrument for detecting centralizing power.

Moral Economy

Scott's 1976 framework for the normative expectations of reciprocity and fair dealing that subordinate communities develop to govern their relationships with the powerful: a standard by which exploitation is measured and contested.

Mētis

Scott's term, drawn from Greek, for the practical, experiential, context-dependent knowledge that accumulates through sustained engagement with a specific environment and resists reduction to formal rules.

Power Asymmetry

The triple structural imbalance of narrative, institutional incentive, and consequence that makes everyday resistance rational and, at the same time, insufficient.

Public Transcript

What subordinates say and do in the presence of power: a performance calibrated to institutional audiences under specific conditions of risk, whose truth value matters less than its social function.

Tacit Knowledge (Polanyi-Collins Reading)

Michael Polanyi's 1966 insight that we know more than we can tell, refined by Collins into a taxonomy of three species. It has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.

The Atrophy Argument

The prediction that widespread AI adoption will produce shallow practitioners, degraded skills, and a generation of professionals who cannot do the work their credentials claim: the most empirically grounded, and therefore most formidable, of the Luddite's arguments.

The Channel Problem

The institutional-design failure at the center of the AI transition: the absence of structural mechanisms through which the hidden transcript of the displaced expert can enter the decision-making process.

The Displaced Expert

The credentialed professional, whether developer, lawyer, teacher, or designer, whose decades of accumulated expertise are being repriced by AI tools she did not design and cannot control, and who occupies the structural position Scott's peasants occupied.

The Ethics Argument

The position that using AI is a form of cheating, plagiarism, or professional fraud: the second weapon in the Luddite's arsenal, operating through the vocabulary of authenticity, attribution, and intellectual honesty.

The Luddite Response

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces: historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Quality Argument

The claim that AI-generated work is fundamentally inferior to human-produced work: the first weapon in the contemporary Luddite's arsenal, operating simultaneously as technical observation and moral contestation.

The Tipping Point of Resistance

The moment when everyday resistance shifts from strategic investment to strategic liability: the point at which the institutional landscape has reorganized enough that continued refusal produces marginalization rather than preservation.

Weapons of the Weak

Scott's 1985 framework for the ordinary, undramatic, deniable acts through which subordinate groups contest domination without risking open confrontation: foot-dragging, false compliance, feigned ignorance, gossip, pilfering, and sabotage.

Work (2)
Seeing Like a State

Scott's 1998 masterwork arguing that centralized planning fails when it imposes legibility on complex systems while destroying the local, practical knowledge (mētis) embedded in the communities it reorganizes.

The Orange Pill (book)

Edo Segal's 2026 book on the Claude Code moment: the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.

Person (2)
Albert Hirschman

German-born American economist and political theorist (1915–2012) whose work on exit, voice, and loyalty, on the hiding hand, and on the rhetoric of reform crossed disciplinary boundaries for half a century.

Edo Segal

Builder, entrepreneur, and author of The Orange Pill, whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.

Event (1)
The Green Revolution

The 1960s–1970s global agricultural transformation: the closest historical parallel to the AI transition, demonstrating that transferable technology without institutional development concentrates benefits among the already advantaged.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.