This page lists every Orange Pill Wiki entry hyperlinked from Edwin Hutchins — On AI. 24 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The trained cognitive capacity to envision how systems fail — the QA specialist's orientation toward the pathological that complemented the builder's orientation toward the functional, and that AI tools systematically suppress.
The thousand candles of human intelligence — visual, verbal, kinesthetic, musical, mathematical, spatial, contemplative — whose complementary perception of reality is threatened by the linguistic-logical habitat preference of large language models.
Hutchins's framework for the total web of mutual dependencies among cognitive elements — the insistence that cognition cannot be understood by examining agents in isolation from the environments that constitute their thinking.
Clark's term for the collective extended cognitive systems that emerge when multiple humans and shared AI resources couple into distributed thinking networks whose properties exceed any individual component.
Hutchins's signature methodology — the detailed, situated observation of cognitive work in its natural operational setting, and the only research method adequate to the design of AI-augmented cognitive systems.
The dominant framework in cognitive science since the 1950s: the mind is a computer, thinking is computation, and consciousness is the execution of the right program — the position Noë argues is profoundly wrong in its foundations.
Ericsson's empirically established mechanism for building expertise — effortful, targeted engagement at the boundary of capability, guided by specific feedback and sustained over thousands of hours.
Hutchins's foundational thesis that cognitive processes are not confined to individual brains but are distributed across people, tools, and environments — and that the proper unit of analysis is the functional system, not the mind.
The productive cognitive resistance that arises when agents with different training and different frameworks must negotiate a shared understanding — irritating, slow, socially costly, and the primary mechanism through which distributed sy…
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
The central analytical operation of distributed cognition — the tracing of information as it moves across a cognitive system's components, whose speed and fidelity determine the system's computational performance.
The structural principle that reliable cognitive systems employ multiple representational formats that make different properties of information salient and provide cross-checking opportunities for error detection.
The operation at the heart of distributed cognition — information moves through a system by being translated between media, each translation serving as both cognitive work and potential checkpoint.
The 2026 cognitive environment of the AI-augmented solo creator — a workspace improvised in months, concentrating at a single station what previous architectures distributed across teams and centuries of refinement.
The invisible tax every distributed cognitive system pays — the quadratic overhead of aligning representations across multiple human agents — that AI has nearly abolished, at costs only beginning to become visible.
The structural disconnection between the skills AI-augmented systems require of their human components and the learning opportunities those systems provide for developing them.
The structural vulnerability produced when AI concentrates evaluative labor in a single human agent — shifting judgment from distributed team specialists to one person responsible for catching errors across all cognitive domains.
The canonical example of a carefully engineered cognitive environment — a physical space whose every element, evolved over centuries of maritime practice, participates in the computational work of fixing a ship's position.
Hutchins's 1995 ethnographic masterwork — the book that established distributed cognition as a research program by documenting how a Navy navigation team computes a ship's position through the coordinated work of people, instruments, and r…
The AI-powered conversational concierge kiosk that Edo Segal's team at Napster built in thirty days for CES 2026 — the Orange Pill's central case of AI-accelerated specific-purpose design, read through Rams's framework as a case of useful to wh…
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.