This page lists every Orange Pill Wiki entry hyperlinked from George Miller — On AI. 19 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Miller's term for the cognitive operation that transcends the seven-item limit without exceeding it — the packaging of multiple items into a single retrievable unit, transforming nine unfamiliar letters into three familiar acronyms and, at …
Ericsson's empirically established mechanism for building expertise — effortful, targeted engagement at the boundary of capability, guided by specific feedback and sustained over thousands of hours.
The central distinction Miller's framework draws between chunks built through effortful recoding and chunks received as pre-packaged solutions — between cognitive capital a practitioner owns and cognitive capital she merely rents.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The structural vulnerability of practitioners who possess borrowed chunks rather than earned ones — highly capable within the operational parameters of their tools, profoundly exposed when conditions depart from routine.
Miller's model of human cognition as a nested structure of chunks running from raw operations at the bottom to abstract plans at the top — the architecture within which selective decompression and recompression constitute the cognitive esse…
The dimension of cognitive variability that Miller's framework identifies as decisive in the AI age: not how many items the mind can hold, but how good each item is — the richness, depth, and relevance of what fills each of the seven slots.
The deliberate, error-driven transformation of unfamiliar information into familiar chunks — the process Miller identified as the engine of expertise, and the process that may or may not survive the AI compression of implementation work.
The cognitive politics of working memory in the AI age — the question of who decides what fills the seven slots freed by compression, and whether those decisions are made by the human, the tool's interaction patterns, or the organization's …
The narrow channel of working memory through which every human thought, every cathedral, every legal code, and every software system has been forced to pass — the fixed cognitive constraint that shaped the architecture of every institution …
The population mourning what the AI transition eliminates — senior practitioners whose recognition demand is systematically truncated: their diagnosis acknowledged, their claim to institutional response denied.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
Miller's 1956 discovery that human working memory holds approximately seven items, plus or minus two — among the most cited findings in psychological science and the fixed bottleneck through which every human thought must pass.
The specific behavioral configuration — compulsive AI-augmented engagement experienced as exhilaration from within and pathology from without — produced by a reinforcing loop without a balancing counterpart.
The empirical regularity — established across chess, music, medicine, and software — that deep expertise requires approximately ten years of deliberate practice to build the fifty thousand chunks that constitute genuine mastery in a domain.
Miller's central question after the AI compression — not how many slots the mind can hold, but what can be packed into each of them, how densely, and whether the packer understands what has been packed.
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The 15th-century invention — Gutenberg's movable type — that Gopnik, Farrell, Shalizi, and Evans identify as the single most illuminating historical analog for understanding what large language models actually are.