This page lists every Orange Pill Wiki entry hyperlinked from John Henry Newman — On AI. 54 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, orange-colored words link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The third and culminating process in Tarde's triad — the synthesis that resolves the duel logique (the logical duel) into a form that transcends its contending patterns rather than compromising between them.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the market-state dichotomy.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The study of how AI-saturated environments shape the minds that live inside them — the framework for asking what becomes of judgment, curiosity, and the capacity for sustained attention when answers become abundant and friction is engineered away.
The reconception of authorship for the AI age: the author is not the maker but the guarantor — the person who takes responsibility for the work, stands behind its claims, and holds the submedial space of depth the machine cannot provide.
The principle — defended by Wiener at considerable personal cost — that the creators of powerful systems bear moral responsibility for what those systems do after deployment, and that the claim of value-neutral research is a fiction that tr…
Brooks's term for the quality that arises when a system reflects a single, coherent design vision — the most important quality a software system can possess, and the one that only a single coordinating mind can maintain.
Newman's claim — developed in the Letter to the Duke of Norfolk (1875) — that conscience is the original, pre-institutional authority by which human beings encounter moral obligation, preceding every external authority including the Pope.
Newman's account of how a concrete mind reaches certitude in matters that resist formal demonstration — through the accumulated weight of independent probabilities, weighed by the illative sense, crossing a threshold into conviction.
Newman's cardinal's motto — heart speaks to heart — expressing his conviction that the most consequential communication between persons occurs through personal witness rather than abstract argument.
Ericsson's empirically established mechanism for building expertise — effortful, targeted engagement at the boundary of capability, guided by specific feedback and sustained over thousands of hours.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The cognitive discipline of treating fluent presentation as orthogonal to substantive quality — the evaluative capacity that AI-era reading demands and that centuries of correlation between eloquence and expertise make difficult to acquire.
The curricular transformation the AI era demands — from teaching answers to teaching questions, from developing execution skills to cultivating judgment — and the institutional adaptation challenge it poses to every university.
The research tradition — converging from neuroscience, philosophy, and robotics — that mind is not separable from body, and whose empirical maturity over four decades has made the computational theory of mind increasingly hard to defend.
The cultivated capacity — developed through years of practice, refined by the study of failures, calibrated by direct encounter with materials and forces — to sense, before calculation confirms it, that a design or situation is wrong. The s…
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass it.
The argument that the broad intellectual formation the multiversity defunded for fifty years — general education, distribution requirements, the liberal arts — becomes the economically necessary core offering in the AI era.
The layered, embodied form of knowledge that accumulates in a practitioner through years of focal engagement with her material — too slow to notice day-to-day, too deep to transmit by documentation, and invisible to every metric the device …
The widening gap between the speed at which an institution can adapt and the speed at which its environment is changing — the mechanism through which individual future shock compounds into systemic disorientation.
Shannon Vallor's concept for the atrophy of moral capacities through technological mediation — what happens when the conditions for cultivating specific virtues are eroded by tools that produce the practice's outputs without requiring the virtues themselves.
The interface paradigm — inaugurated at scale by large language models in 2022–2025 — in which the user addresses the machine in unmodified human language and the machine responds in kind; the paradigm that, read through Gibson's framework,…
Newman's term for the mind's engagement with propositions in their abstract, general form — understood, affirmed, even defended, but inert in the economy of the soul.
The AI-era practice of reading generated output against the grain — treating it as a hypothesis requiring verification rather than a finished product requiring consumption.
Aristotle's name for the intellectual virtue that governs action in particular circumstances — the form of knowledge that cannot be computed, because it requires experience, character, and having stakes in the world.
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.
Newman's name for the engagement of the whole person — intellect, imagination, memory, affection — with a truth grasped in its concrete particularity, acquiring what he called 'force and keenness.'
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
Ward Cunningham's 1992 metaphor for the cost of expedient decisions in software — now reshaped by AI into a new variant: the debt of implicit decisions that were never evaluated against a consistent design.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The twelve-year-old's 'Mom, what am I for?' read not as a request for information but as an opening of the intermediate area — a question that asks to be held, not answered, because holding is what develops the capacity to inhabit unresolved questions.
Basalla's load-bearing claim — every artifact descends from a prior artifact through an unbroken chain of modification. No immaculate conceptions anywhere in the record of technology, and no exceptions in the lineage of the large language model.
The cultural habit of treating fluent AI output as competent AI output — an extension of the equation between eloquence and expertise that centuries of human interaction built.
Newman's account of the structural principles governing the legitimate passage from probability to certitude in concrete matters — a grammar operating by the convergence of independent probabilities weighed by the illative sense.
The emerging discipline of framing questions to AI systems in ways that produce useful outputs — structurally different from Newman's grammar of assent, and dangerously easy to mistake for it.
Chalmers's 1995 distinction between the easy problems of cognitive function and the hard problem of why there is subjective experience at all — the conceptual instrument that makes the AI consciousness debate tractable.
Newman's name for the trained faculty of informal reasoning by which a concrete mind reaches certitude in matters that resist formal demonstration — operating below the level of articulable rules, grounded in personal formation.
The economic regime that emerges when the cost of execution approaches zero and the premium on deciding what to execute rises correspondingly — the Smithian reading of the Orange Pill moment.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
Newman's term for the distinctive cognitive formation produced by liberal education — the capacity to perceive relations between domains, grasp principles, and exercise judgment across disciplines.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
Newman's seven diagnostic criteria — preservation of type, continuity of principles, power of assimilation, logical sequence, anticipation of the future, conservative action upon the past, and chronic vigour — for distinguishing genuine development from corruption.
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
Edo Segal's term for the small judgment-focused groups emerging in AI-augmented organizations — decomposing work not by functional domain but by cognitive operation, restoring near-decomposability at the level of direction rather than implementation.
The Orange Pill's central question — 'Are you worth amplifying?' — read through Newman's framework as a question addressed to conscience about the quality of real assent the builder brings to the collaboration.
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any individual human's.
Newman's 1845 essay establishing seven criteria — his 'notes' — for distinguishing genuine intellectual development from corruption in a living body of thought.
Newman's 1852 Dublin discourses — widely regarded as the most serious defense of liberal education in the English language — which argue that a university exists to form the philosophical habit of mind rather than transmit marketable skills.
Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this volume builds its philosophical reading.
The moment in The Orange Pill's composition when Claude produced a fluent philosophical connection that turned out, on examination, to be wrong — the paradigm case of AI's characteristic failure mode.
The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill.