This page lists every Orange Pill Wiki entry hyperlinked from K. Anders Ericsson — On AI. 16 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The quiet risk of comprehensive automation: not that machines dominate us, but that we lose the capabilities they replace. Asimov's Solarians are the founding fiction; contemporary work on cognitive offloading is the empirical counterpart.
Ericsson's empirically grounded mechanism for expertise — effortful, boundary-targeting, feedback-rich, iteratively refined engagement that builds the mental representations no shortcut can replicate.
The rich, flexible, deeply structured internal models of a domain that enable expert perception, judgment, and adaptive response — built only through the specific friction of deliberate practice.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The layered, embodied form of knowledge that accumulates in a practitioner through years of focal engagement with her material — too slow to notice day-to-day, too deep to transmit by documentation, and invisible to every metric the device …
The Renaissance humanists' term for the cultivated capacity for judgment that no rule can capture — the highest intellectual virtue, and the capacity the AI age makes most valuable.
Ericsson's three-mode taxonomy of practice — the default mode of AI-assisted work most closely resembles the least developmental form, regardless of how sophisticated the output appears.
The structural illusion by which AI systems appear to possess expertise they have extracted from human experts — the representations manifest in training data without any of the developmental process that built them.
The unprecedented separation — produced by AI — between what a practitioner can produce and what she has become in the process of producing it; between performance and learning, output and understanding.
The cognitive phenomenon — threatened by the speed of AI feedback — in which unconscious processing of a problem over hours or days produces insights that immediate solution eliminates.
Nakamura's empirical finding that the transmission of standards — not knowledge, not technique — is the single most important function the mentor provides, and the function AI most thoroughly fails to replicate.
Ericsson's three-mode classification — naive, purposeful, deliberate — distinguishing practice types by their developmental outcomes rather than the practitioner's intention, and locating most AI-assisted work at the naive end.
The decoupling — by AI — between the minimum floor of professional output and the developmental process that historically produced it, creating a world where expert-level production is achievable without expert-level understanding.
The intellectual genealogy by which Herbert Simon's expertise research produced both the science of human development Ericsson pursued and the artificial intelligence program that now challenges it — two halves of a single mechanism that h…