Douglas Engelbart — On AI — Wiki Companion

Douglas Engelbart — On AI

A reading-companion catalog of the 39 Orange Pill Wiki entries linked from this book — the people, ideas, works, events, technologies, and organizations that Douglas Engelbart — On AI uses as stepping stones for thinking through the AI revolution.

Click any card to open an entry. Within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words marked with the Wikipedia icon link to Wikipedia.

Concept (29)
Aesthetics of the Smooth

Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.

AI Alignment

The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.

Ars Critica

The Renaissance art of critical reading — the active evaluation of texts against evidence, logic, and source reliability — and the direct ancestor of the evaluative discipline AI-generated content requires.

Artificial General Intelligence

AGI: a hypothetical system with human-level cognitive ability across essentially every domain. The transition point that AI-safety thinking orients around, even when no one agrees on what it is.

Attentional Ecology

The study of how AI-saturated environments shape the minds that live inside them — the framework for asking what becomes of judgment, curiosity, and the capacity for sustained attention when answers become abundant and friction is engineered away.

Augmentation vs. Automation

Engelbart's foundational distinction: automation removes the human from the loop, augmentation redesigns the loop so the human's participation becomes more powerful. The most consequential design decision of the AI decade.

Augmentation's Uncomfortable Demands

The five burdens augmentation imposes on the human it amplifies: exposed judgment, intellectual honesty, emotional resilience, self-directed development, and sustained attention. Augmentation does not make work easier; it makes work different.

BRAVING Trust

Brown's seven-component operationalization of trust — Boundaries, Reliability, Accountability, Vault, Integrity, Non-judgment, Generosity — that converts an abstraction into observable practice.

Capability Expansion vs. Headcount Reduction

The two archetypal organizational responses to AI-driven productivity gains — reducing staff to maintain output or maintaining staff to expand output — each producing fundamentally different professional outcomes.

Co-Evolution of Human and Tool

Engelbart's assumption that the human and the tool would evolve together at approximately balanced rates — and the structural diagnosis of what happens when the tool accelerates beyond the human's capacity to adapt.

Collective Intelligence Augmentation

Engelbart's neglected insight: augmentation's highest-value application is not the amplification of individuals but the enhancement of teams — and the current AI deployment is reproducing the industry's historical failure to invest in the collective.

Deliberate Practice

Ericsson's empirically established mechanism for building expertise — effortful, targeted engagement at the boundary of capability, guided by specific feedback and sustained over thousands of hours.

Future Shock

Toffler's 1970 diagnosis of the psychophysiological stress produced when human beings encounter more change than they can process — not the content of any particular change, but the pace itself.

H-LAM/T Framework

Engelbart's formalization of the augmented system: Humans using Language, Artifacts, Methodology, and Training. Every component shapes every other, and improving one in isolation is as likely to degrade the system as to enhance it.

Imagination-to-Artifact Ratio

Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric.

Natural Language Interface

The interface paradigm — inaugurated at scale by large language models in 2022–2025 — in which the user addresses the machine in unmodified human language and the machine responds in kind; a paradigm the book reads through Gibson's framework of affordances.

Question Engineering

The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.

River of Intelligence

Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.

Tacit Knowledge

Michael Polanyi's term for the knowledge that lives in the hands and nervous system rather than in explicit propositions — acquired through practice, failure, and embodied pattern recognition, and dissolved by AI workflows that produce output without it.

Task Seepage

The mechanism — documented in the Berkeley study of AI workplace adoption — by which AI-accelerated work colonizes previously protected temporal spaces, converting every pause into an opportunity for productive engagement.

The Beaver's Dam

The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.

The Bootstrapping Principle

Engelbart's organizing strategy: use the tools you are building to improve the process of building them. Each cycle makes the next faster, producing a compounding spiral of capability rather than linear improvement.

The Education Paradigm Shift

The transition from training students in specific cognitive tasks (which AI commoditizes) to developing judgment, questioning, and integrative thinking — the educational restructuring the AI deployment phase demands.

The Governance Gap

The widening structural distance between the speed of technological capability and the speed of institutional response — the defining failure mode of democratic governance in an exponential era.

The Judgment Economy

The economic regime that emerges when the cost of execution approaches zero and the premium on deciding what to execute rises correspondingly — the Smithian reading of the Orange Pill moment.

The Orange Pill Moment

The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.

The Purpose Question

The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.

The Unfinished Framework

What Engelbart established and what he left incomplete: the foundation of augmentation theory is sound, but the culture, pedagogy, governance, measurement systems, and philosophy required to make augmentation operational at civilizational scale remain unbuilt.

Why the Industry Chose Automation

The structural diagnosis of why the computing industry has consistently preferred automation over augmentation: six reinforcing forces — measurement, sales, implementation, organizational compatibility, designer comfort, and psychological e…

Technology (2)
Large Language Models

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any individual human's.

NLS (oN-Line System)

Engelbart's working implementation of the augmentation framework — the system the SRI team used to build itself. The first platform for genuine collective cognition, and the most sophisticated demonstration of bootstrapping ever attempted.

Work (2)
Napster Station

The AI-powered conversational concierge kiosk that Edo Segal's team at Napster built in thirty days for CES 2026 — the Orange Pill's central case of AI-accelerated specific-purpose design, read through Rams's framework as a case of asking "useful to whom?"

The Berkeley Study

Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.

Person (2)
Edo Segal

Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.

Howard Rheingold

American writer and educator (b. 1947) who taught Engelbart's 1962 paper for years at Stanford and became the principal interpreter of augmentation for a generation of readers who would not otherwise have encountered it.

Event (3)
The Deleuze Failure

The canonical moment in Segal's work with Claude when the model produced a passage of philosophical elegance that was rhetorically compelling and substantively wrong — the paradigm case of fluent fabrication, and the founding episode of the book's evaluative discipline.

The Mother of All Demos

Engelbart's ninety-minute 1968 demonstration that showed collaborative editing, hypertext, video conferencing, and the mouse as integrated components of an augmentation system. The audience took home the peripherals and left behind the vision.

The Trivandrum Training

The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill.

Organization (1)
Stanford Research Institute (SRI)

The research institute where Engelbart built the NLS system and the Augmentation Research Center — and where, in 1975, the funding dried up and the augmentation vision lost its institutional platform.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.