This page lists every Orange Pill Wiki entry hyperlinked from Norbert Wiener — On AI. 37 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry. Within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The principle — defended by Wiener at considerable personal cost — that the creators of powerful systems bear moral responsibility for what those systems do after deployment, and that the claim of value-neutral research is a fiction that tr…
The mid-twentieth-century interdisciplinary science of steering — communication and control in animals, machines, and organizations — founded by Wiener in 1948 and systematically excluded from the AI field at its Dartmouth founding.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were trained for.
The second law of thermodynamics' universal tendency toward disorder — Wiener's fundamental antagonist, the force against which every act of intelligence is a local and temporary resistance.
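For reference, the card's antagonist has a one-line formal statement. The textbook form of the second law (standard notation, not quoted from the entry) is:

$$ \Delta S \ge 0 \quad \text{for any isolated system} $$

In this framing, every local act of ordering is purchased with a larger entropy increase somewhere else; intelligence resists the tendency only locally and temporarily.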
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The maintenance of an organism's internal conditions within a viable range — Walter Cannon's 1932 term for the biological phenomenon that Wiener generalized into the universal principle of negative feedback.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
The research program of reverse-engineering what is actually happening inside a neural network — the AI equivalent of the Rama explorers' attempt to understand an alien ship not by what it does but by taking it apart and naming its parts.
The regulatory mechanism in which a system detects deviation from a target state and activates a correcting response — the engineering principle behind homeostasis, governance, and every system that sustains itself against entropy.
The runaway dynamic in which a system's output feeds back as input and amplifies — the screech of the microphone, the cascade of hemorrhage, the grinding compulsion of the AI-augmented builder who cannot stop.
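The contrast drawn in the two cards above is easy to see numerically. Below is a minimal sketch (illustrative, not from the book; the gain and target values are arbitrary) in which the same loop damps deviation under negative feedback and amplifies it under positive feedback:

```python
# Contrast negative feedback (deviation corrected) with positive
# feedback (deviation amplified). Gains and targets are illustrative.

def simulate(gain: float, target: float = 20.0, start: float = 25.0, steps: int = 10):
    """Each step, the system responds to its current deviation from the target.
    gain < 0: negative feedback, the response opposes the deviation.
    gain > 0: positive feedback, the response reinforces the deviation."""
    state = start
    history = [round(state, 2)]
    for _ in range(steps):
        deviation = state - target
        state += gain * deviation  # feedback acts on the measured error
        history.append(round(state, 2))
    return history

print("negative feedback:", simulate(gain=-0.5))  # settles toward 20.0
print("positive feedback:", simulate(gain=+0.5))  # runs away from 20.0
```

The sign of a single coefficient separates the thermostat from the screeching microphone, which is the whole point of the two entries.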
Wiener, Rosenblueth, and Bigelow's 1943 redefinition of teleology as an observable feedback pattern — and the distinction between mechanical purpose (pursuing a goal) and human purpose (evaluating whether the goal is worth pursuing).
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The family of techniques — reinforcement learning from human feedback (RLHF), direct preference optimization (DPO), Constitutional AI, and related methods — that shape a pretrained language model into a usable assistant. The stage where the model becomes the product.
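Of the methods this card names, direct preference optimization is the simplest to state concretely. A minimal sketch of its per-pair loss follows (the function, the beta value, and the example log-probabilities are illustrative, not drawn from the entry):

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: inputs are summed token
    log-probabilities of the chosen and rejected responses under the
    model being tuned and under a frozen reference (pretrained) model."""
    # How much more the tuned model favors each response than the reference does
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    logits = beta * (chosen_margin - rejected_margin)
    # Negative log-sigmoid: shrinks as the model ranks chosen above rejected
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # prefers chosen: lower loss
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))  # prefers rejected: higher loss
```

Training on many such pairs is what nudges a raw text-predictor toward the behavior of an assistant; RLHF reaches a similar destination through an explicit reward model and a reinforcement-learning loop.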
Claude Shannon's 1948 distinction between the message you intend to transmit and everything that interferes with its transmission — the spine of information theory and the diagnostic framework for what an amplifier carries.
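Shannon made the distinction quantitative. His noisy-channel capacity formula (the standard Shannon–Hartley statement, given here for reference rather than quoted from the entry) bounds how much signal can survive a given level of noise:

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

where C is the channel capacity in bits per second, B the bandwidth in hertz, and S/N the ratio of signal power to noise power. An ideal amplifier raises S and N together and leaves the ratio unchanged, which is the card's point in mathematical dress.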
The research paradigm — dominant from the 1956 Dartmouth Workshop through the 1980s — that attempted to build intelligence by manipulating symbolic representations according to formal rules, and whose failures vindicated Dreyfus's critique.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The World War II engineering problem — how to aim a gun at an adaptive human pilot — that forced Wiener and Julian Bigelow to develop the mathematics of feedback loops that became the foundation of cybernetics and, eventually, of modern AI.
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.
Andy Clark and David Chalmers's 1998 thesis that cognition routinely extends beyond the skull into tools, notebooks, devices, and other people — the philosophical foundation for thinking about AI as a cognitive partner rather than a separat…
James Watt's 1788 centrifugal device — and Wiener's paradigmatic metaphor — for the regulatory mechanism that converts raw power into sustainable capability. The small, almost laughably simple structure without which the engine destroys itself.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
From the Greek kybernetes: the figure whose hand stays on the tiller, reading the water, making continuous small corrections. Wiener's chosen image for the human role in any purposive system containing both humans and machines.
Alan Turing's 1950 proposal to replace the unanswerable question "can machines think?" with a testable question about conversational indistinguishability — the most-cited thought experiment in the philosophy of AI.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The class of machine-learning architectures loosely modeled on biological neurons — the substrate of the current AI revolution and the opposite of Asimov's designed-then-programmed positronic brain.
Norbert Wiener's final book, published in 1964 and a posthumous winner of the National Book Award — the founding meditation on learning machines, self-reproducing systems, and the moral responsibilities of their creators.
Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in human-machine loops.
Wiener's 1950 popular treatise extending the mathematics of cybernetics into a social and ethical framework — and delivering, seventy-five years early, the clearest warning yet written about the human cost of deploying powerful automated systems.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
American mathematician and engineer (1916–2001) whose 1948 A Mathematical Theory of Communication founded information theory and supplied the mathematical framework within which every transmission of meaning — including human-AI collaboration — can be analyzed.
French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.
American computer scientist (1927–2011), coiner of the term 'artificial intelligence,' organizer of the Dartmouth Workshop of 1956, and one of the principal figures Dreyfus's critique targeted across four decades.
American engineer (1913–2003) who partnered with Wiener on the wartime anti-aircraft fire control problem, co-authored the 1943 paper that founded cybernetics, and later served as chief engineer for John von Neumann's IAS computer at Princeton.
Hungarian-American psychologist (1934–2021), father of flow theory, Nakamura's mentor and collaborator across four decades, whose foundational mapping of the peak experience provided the framework Nakamura extended into vital engagement.