This page lists every Orange Pill Wiki entry hyperlinked from Lucy Suchman — On AI. 18 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; in each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
Suchman's sharpest diagnostic proposition: AI generates plans addressed to described situations, not actions tested against encountered ones — and the most dangerous institutional error is to confuse the two.
AGI: a hypothetical system with human-level cognitive ability across essentially every domain. The transition point around which AI-safety thinking orients, even though no one agrees on what it is.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The design philosophy Suchman's framework implies — AI systems should be designed not as oracles that deliver outputs but as structures that keep users in the developmental friction through which situated knowledge accumulates.
The diagnostic distinction between outputs produced through the practitioner's developmental engagement and outputs delivered without that engagement — identical on the page, decisive in what the person who holds them knows.
The structural one-sidedness of human-machine interaction: the human brings rich social intelligence to the encounter while the machine responds procedurally — an asymmetry that deepens rather than closes as AI becomes more sophisticated.
Suchman's distinction between the bounded, representable domains in which AI systems succeed and the unbounded, emergent reality in which human practitioners must act — the structural boundary where situated intelligence begins.
Suchman's foundational thesis that competent action arises from improvised, moment-by-moment responsiveness to specific circumstances — not from executing pre-formed plans.
The embodied, context-bound, developmentally accumulated understanding that practitioners build through sustained engagement with specific domains — constitutively resistant to extraction, transfer, or replacement by generated outputs.
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
The phenomenological experience, described in The Orange Pill, of having one's half-formed thought held and returned clarified by an AI system — a human interpretive achievement that Suchman's framework insists on describing accurately rather than attributing to the machine.
The specific form of cognitive resistance — distinct from mere mechanical tedium — through which practitioners develop the situated knowledge that allows them to evaluate AI outputs against the reality those outputs claim to address.
The structural distance between any representation of what should happen and what actually happens when agents encounter specific circumstances — the permanent space where situated intelligence operates.
Nakamura's empirical finding that the transmission of standards — not knowledge, not technique — is the single most important function the mentor provides, and the function AI most thoroughly fails to replicate.
The class of AI-enabled military and intelligence systems that generate target recommendations from pattern-matching over surveillance data — Suchman's sharpest case study of what happens when plans are treated as actions at machine speed.
The early-1980s expert system embedded in Xerox photocopiers that Suchman's PARC research used to demolish the planning paradigm in AI — the canonical case of a machine that assumed users had plans it could support.
Orr's ethnographic research on Xerox field technicians — documenting how effective repair depends on improvisational expertise and war-story knowledge that manuals cannot capture — a parallel and confirming body of work to Suchman's PARC studies.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.