By Edo Segal
The failure I keep circling back to is the one I almost missed entirely.
I described it in *The Orange Pill* — the passage about Deleuze that Claude generated, the one that sounded like genuine philosophical insight, that connected two threads of my argument with elegant precision, that I read twice and nearly kept. The prose was beautiful. The structure was clean. The reference was wrong.
Not subtly wrong. Wrong in a way that anyone who had actually read Deleuze would catch immediately. But the smoothness of the output had nearly anaesthetized the part of me that checks. The part that asks: Do I actually understand this, or do I just like how it sounds?
That near-miss haunted me for weeks. Not because the machine had failed — machines produce errors; that is manageable. Because *I* had nearly failed. The seduction was not in the machine's confidence. It was in my willingness to accept confidence as a substitute for understanding.
John Henry Newman, writing in Victorian England about questions that had nothing to do with artificial intelligence, built the most precise philosophical framework I have encountered for diagnosing exactly this failure. He drew a distinction between two kinds of knowing — *notional assent*, where you hold a truth as an abstraction you can discuss at dinner, and *real assent*, where the truth has gotten into your bones and changed how you live. The engineer in Trivandrum who knew, abstractly, that AI would transform his career, and then spent two days watching it happen to *him* — that passage from the abstract to the visceral is Newman's distinction, lived in real time.
Newman also named the faculty that catches the error before it ships — what he called the *illative sense*, the trained judgment that operates below the level of articulable rules, built through years of patient encounter with a domain. The feeling my senior engineer has when something in a codebase is wrong before she can say what. The thing no model possesses, because no model has stood beside a deathbed or debugged a system at three in the morning or staked its reputation on a conclusion it could not fully justify in words.
This is not a book about theology. It is a book about what happens when the most powerful answer machine in history meets minds that have forgotten how to hold a conviction. Newman saw the shape of this crisis a hundred and seventy years before the machines arrived. His vocabulary — notional and real, the illative sense, conscience as the aboriginal authority — cuts through the AI discourse with a precision that surprises me every time I return to it.
The machines produce extraordinary outputs. Newman asks whether you have earned the right to deploy them.
— Edo Segal × Opus 4.6
John Henry Newman (1801–1890) was an English theologian, philosopher of education, and cardinal of the Catholic Church whose intellectual legacy spans theology, epistemology, and the theory of liberal learning. Born in London and educated at Trinity College, Oxford, he became a leading figure in the Oxford Movement before his controversial conversion from Anglicanism to Roman Catholicism in 1845. His major works include *An Essay on the Development of Christian Doctrine* (1845), which established criteria for distinguishing genuine intellectual development from corruption; *The Idea of a University* (1852), widely regarded as the most important defense of liberal education in the English language; and *An Essay in Aid of a Grammar of Assent* (1870), in which he articulated the distinction between notional and real assent and introduced the concept of the "illative sense" — the trained faculty of informal reasoning by which concrete minds reach certitude in matters that resist formal demonstration. His motto as cardinal, *Cor ad cor loquitur* ("Heart speaks to heart"), captured his conviction that the most consequential communication between persons occurs through personal witness rather than abstract argument. Canonized as a saint in 2019 and increasingly discussed as a candidate for Doctor of the Church, Newman has found striking new relevance: his work on the formation of judgment, the limits of formal reasoning, and the relationship between knowledge and personal conviction speaks directly to debates about artificial intelligence, education, and the nature of understanding in an age of machine-generated fluency.
In February 2026, a senior engineer in Trivandrum, India, sat across from a screen running Claude Code and watched a system produce, in two hours, a working prototype of a feature his team had estimated would take six weeks. The engineer had spent fifteen years building backend systems. He understood distributed architectures at the level of instinct. He could feel the pulse of a codebase the way a cardiologist feels an arrhythmia — not through conscious analysis but through a kind of bodily knowledge deposited by thousands of hours of patient, difficult, often tedious labor. He knew, when he walked into that room on Monday morning, that artificial intelligence was transforming software development. He had read the articles. He had followed the discourse. He had nodded along at conference talks about the coming disruption.
By Wednesday afternoon, he was oscillating between excitement and terror, and the oscillation was not intellectual. It was visceral. Something in the foundations of his professional identity had cracked, and the crack was not an abstraction. It was the specific, concrete, bodily experience of watching the ground shift beneath the feet of a person who had built his life upon that ground.
John Henry Newman would have recognized this moment instantly — not because he had any premonition of artificial intelligence, but because he spent the central decades of his intellectual life analyzing precisely this kind of transition: the passage from holding a truth as an abstraction to holding it as a living, destabilizing, personally implicating conviction. Newman called the first condition notional assent. He called the second real assent. The distinction between them is, arguably, the most important philosophical framework available for understanding what artificial intelligence does to human knowledge — and what it cannot do.
Newman developed the distinction across several works, but its fullest expression appears in *An Essay in Aid of a Grammar of Assent*, published in 1870. The book emerged from decades of wrestling with a question that the professional philosophers of his era largely ignored: How does a concrete, embodied, historically situated human being actually arrive at certitude? Not how a mind *should* arrive at certitude, according to the canons of formal logic. How does a real person, with a real history, living in a real world of particular circumstances, come to hold a truth with the force of conviction?
The question mattered to Newman because the dominant epistemology of his time — the British empiricist tradition running from Locke through Hume to Mill — held that rational assent should be proportioned to evidence, that certitude in matters where formal demonstration is unavailable is a species of intellectual overreach. Newman thought this account was catastrophically incomplete. It described one kind of knowing while ignoring another, and the kind it ignored was, in Newman's judgment, the more important of the two.
Notional assent, in Newman's account, is the mind's engagement with propositions in their abstract, general form. When a person assents notionally to the statement "All human beings are mortal," the person grasps the logical content of the proposition and accepts it as true. But the proposition, held notionally, makes no particular demand upon the person who holds it. It does not change how one lives. It does not produce the specific gravity of a truth that bears upon one's own mortality, one's own finitude, one's own approaching death. The proposition remains at the level of the general and the abstract, manipulable by the intellect, available for inclusion in further arguments, but inert — lacking the force that would make it a living element in the economy of the soul.
Real assent is categorically different. Real assent is the engagement of the whole person — intellect, imagination, memory, affection — with a truth grasped in its concrete particularity. When the same person who notionally assents to "All human beings are mortal" stands beside the deathbed of someone they love, the proposition ceases to be abstract. It acquires what Newman called "force and keenness," a quality of concrete realization that no amount of logical manipulation can produce. The person now holds the truth of mortality not as a formula but as a reality — a reality that reshapes how they understand their own life, their own choices, their own finite allotment of time.
The engineer in Trivandrum underwent exactly this transition. Before Monday, he held the proposition "AI will transform software development" with notional assent. The proposition was abstract, available for discussion, even for anxiety, but it had not yet achieved the concrete force that would make it a living element in his self-understanding. By Wednesday, the proposition had become real. Not because new information had been added — the facts about AI's capabilities were available before he entered the room — but because the truth had been encountered in its particularity, its bearing upon his career, his identity, his understanding of what his life's work had meant.
This is the first and most fundamental thing Newman's framework reveals about the AI transformation: the crisis is not informational. It is existential. The relevant facts about artificial intelligence — its capabilities, its trajectory, its implications for knowledge work — are widely available. They can be accessed by anyone with an internet connection in a matter of minutes. The machine itself can summarize them with extraordinary fluency. What is scarce is not the information but the capacity to hold the information with real assent — to let it become a concrete, personally implicating truth rather than an abstract proposition one can nod along to at a conference.
The distinction illuminates the curious psychology of denial that has characterized much of the professional response to AI. Consider the lawyer who uses AI to draft briefs while insisting, at dinner parties, that AI will never replace legal judgment. Or the professor who assigns AI-generated reading summaries to students while maintaining that the university's mission remains unchanged. Or the executive who deploys AI across the organization while continuing to evaluate employees by the same metrics that preceded the deployment. In each case, the notional content is acknowledged — yes, AI is powerful, yes, it changes things — while the real implications are held at bay. The proposition is assented to notionally, which is to say it is assented to in a way that makes no concrete difference to how the person lives and works.
Newman would recognize this as the characteristic failure of a culture that has become extraordinarily sophisticated at manipulating propositions while losing the capacity for the kind of assent that actually transforms conduct. Victorian England had its own version: a society that could affirm Christian doctrine with perfect notional precision while living as though the doctrines were decorative rather than operative. The AI age has its equivalent — a professional class that can discuss the implications of artificial intelligence with great fluency while conducting its affairs as though the implications applied to someone else.
Now consider the machine itself. A large language model processes language with a facility that dwarfs any individual human mind. It can generate arguments, produce analyses, compose essays, draft legal briefs, write code — all with a speed and range that make the human practitioner look glacial by comparison. It manipulates propositions across an enormous space of possible connections. It finds patterns, generates inferences, produces outputs that are, by any external measure, sophisticated.
But in Newman's framework, everything the machine produces is notional. Not because the outputs lack quality — they may be excellent. Not because the outputs are abstract in their content — they can be highly specific. But because the machine holds no proposition with real assent. It has no concrete, experiential, personally implicating relationship to the truths it generates. It does not stake itself upon its conclusions. It does not undergo the interior process by which a proposition moves from the status of an intellectual object to the status of a living conviction.
The machine can produce the sentence "All human beings are mortal" and even generate a moving reflection upon its implications. What it cannot do is stand beside a deathbed. It cannot experience the passage from notional to real — the moment when a truth that was merely understood becomes a truth that is felt, appropriated, woven into the fabric of a life. And this is not a limitation that better training data or more sophisticated architecture will resolve. It is a structural feature of a system that processes propositions without being constituted by them.
This is not the familiar argument that machines lack consciousness, though Newman would certainly have found the consciousness question interesting. It is a more precise claim: that the kind of knowledge that matters most — the kind that changes conduct, that shapes character, that bears upon the question of how one should live — requires a knower who is personally implicated in the truth of the conclusion. Not merely a system that can evaluate the logical consistency of propositions, but a being for whom the truth of a proposition is at stake — a being who stands to gain or lose something real by what it holds to be true.
The 2025 Rome conference on Newman and artificial intelligence, held at the Biblioteca Vallicelliana and organized by the University of Notre Dame Australia, circled around precisely this question. Andrew Meszaros, holder of the Saint John Henry Newman Chair in Theology at the Angelicum, articulated the concern with characteristic precision: if education assesses products — essays submitted, code deployed, briefs filed — then AI is a tool, and a remarkably efficient one. But if education assesses the way students think, "then I'm afraid that AI is actually an obstacle to assessing them correctly or assessing what they really, truly know." The distinction between the product and the thinking that produced it is, in Newman's terms, the distinction between notional and real. The product can be generated by a system operating entirely at the notional level. The thinking requires a person undergoing the transition to real assent — a person for whom the encounter with the material has been formative, not merely productive.
The argument extends well beyond education. In every domain where AI generates competent outputs — law, medicine, engineering, creative work, strategic analysis — the same question arises: Does the person deploying the output hold the underlying knowledge with real assent, or has the machine's facility allowed the person to bypass the formative process entirely? The lawyer who files an AI-drafted brief may produce a serviceable document. But has she undergone the specific, slow, often frustrating encounter with the case law that would have deposited in her mind the kind of understanding Newman described — the understanding that is not merely intellectual but embodied, not merely propositional but personal?
*The Orange Pill* describes this danger with considerable honesty. Its author confesses to a moment when the prose generated by the machine outran the thinking, when a passage about Deleuze "sounded like insight but collapsed under examination." The prose satisfied every external criterion — it was well-structured, rhetorically compelling, apparently learned. But the author, exercising the discipline of real assent, recognized that the surface concealed an absence. The grammar of the prompt had been satisfied. The grammar of assent had not.
The crisis of the AI age, seen through Newman's framework, is not that machines produce bad outputs. They often produce excellent outputs. The crisis is that the excellence of the outputs makes the bypass of real assent frictionless, invisible, and — most dangerously — comfortable. One can now operate for weeks, months, perhaps years in a professional environment, producing work of apparent sophistication, without ever undergoing the formative encounter with the material that would make the knowledge genuinely one's own. The outputs accumulate. The understanding does not.
Newman spent his life arguing that the most important things a person can know — about God, about morality, about the fundamental shape of reality — cannot be arrived at through formal demonstration alone. They require the convergence of lived experience, the exercise of trained judgment, the personal appropriation of truths that resist reduction to formula. The age of artificial intelligence has extended this argument into every domain of human endeavor. The most important things a lawyer can know about the law, an engineer about systems, a physician about disease, a teacher about learning — these too resist reduction to the propositional outputs that machines produce so fluently.
Real assent is scarce. It was always scarce. Newman knew this. But the scarcity was, in previous eras, constrained by the difficulty of producing notional outputs at scale. When writing a legal brief required months of research and drafting, the process itself — tedious, inefficient, often maddening — served as a forcing function for real engagement with the material. The friction was pedagogical even when it was not intentional. The slow encounter with the primary sources built, layer by layer, the kind of understanding that no summary could replace.
When the machine removes this friction, the forcing function disappears. Real assent does not become impossible. It becomes optional. And in a culture that optimizes relentlessly for speed, optionality is a polite word for extinction.
The question, then, is not whether machines can produce knowledge. They produce something that functions like knowledge in virtually every external respect. The question is whether the humans who deploy that knowledge hold it with the conviction, the personal implication, the concrete force that Newman spent his life arguing is the mark of genuine understanding. Whether they have undergone the transition from notional to real — or whether they have settled, comfortably and perhaps permanently, for the surface of knowing without its substance.
---
Newman coined the term "illative sense" to name a cognitive faculty that formal philosophy had largely declined to acknowledge: the power by which a trained mind reaches certitude in concrete matters through an informal process of reasoning that cannot be reduced to any explicit set of rules. The word "illative" derives from the Latin *illatio*, an inference or conclusion — but Newman's use is deliberately paradoxical. The illative sense draws conclusions, yet its method is not the method of the syllogism. It operates below the threshold of articulable logic, gathering probabilities, weighing considerations, sensing the direction in which converging evidence points, until the mind arrives at a conviction that is, in Newman's account, as rationally grounded as any formal proof — more so, in fact, when the subject matter is too complex, too particular, too embedded in the concreteness of life to admit of formal treatment.
Newman's examples are drawn from domains where this kind of reasoning is most visible. The physician who diagnoses a patient's condition from a constellation of symptoms, none of which is individually decisive, but which together produce a conviction strong enough to act upon. The judge who weighs testimony, demeanor, consistency, plausibility, and the thousand intangible elements of a courtroom proceeding to reach a verdict that no algorithm could derive from the transcript alone. The historian who reads a document and knows — not suspects, not guesses, but knows — that it is a forgery, though the evidence that supports this conviction is too distributed across her experience to be marshaled into a single argument.
In each case, the reasoning is genuine. The conclusion is rationally grounded. But the grounds are not fully articulable. They reside partly in the explicit evidence and partly in the reasoner's history — in the accumulated deposit of thousands of prior encounters with similar cases, similar documents, similar patients. The illative sense is not intuition in the popular sense of a lucky guess. It is the fruit of long training, deep immersion, and repeated encounter with the particular domain in which it operates. Newman was emphatic on this point: the illative sense is not a general faculty that can be exercised with equal authority in any domain. It is domain-specific. The physician's illative sense does not extend to the courtroom. The judge's does not extend to the laboratory. Each exercises judgment within the domain where experience has deposited the material from which judgment draws.
This specificity is precisely what makes the illative sense irreducible to algorithm. An algorithm, by definition, is a set of explicit rules that can be applied by any competent executor — human or machine — regardless of the executor's personal history. The algorithm does not care who runs it. Its conclusions do not depend upon the biography of the processor. The illative sense, by contrast, is inseparable from the biography of the person who exercises it. It is not a rule that can be extracted from the reasoner and applied independently. It is the reasoner's entire history of engagement with the domain, brought to bear upon a particular case in a particular moment, producing a conclusion that no one else, with a different history, might have reached in exactly the same way.
The large language model performs inference at a scale and speed that no human mind can approach. It processes patterns across billions of tokens of text, identifies statistical regularities, generates outputs that are consistent with the patterns in its training data. The outputs are often remarkably sophisticated. They demonstrate what might be called, in an extended sense, a form of competence — the ability to produce contextually appropriate responses across an enormous range of domains.
But the model's inference is structurally different from Newman's illative sense in ways that matter immensely when the stakes are real.
First, the model's inference is impersonal. It does not depend upon, and is not shaped by, a concrete history of embodied engagement with a particular domain. The model has been exposed to enormous quantities of text about medicine, about law, about engineering, about history. But it has not practiced medicine. It has not argued cases. It has not built systems that failed at three in the morning and taught, through their failure, something that no documentation could convey. The exposure is to the propositional residue of other people's experience — to the text that records what experts have thought and said — not to the experience itself. In Newman's terms, the model possesses an extraordinarily vast notional acquaintance with the domain. It does not possess the real acquaintance that comes from having been formed by the domain.
Second, the model's inference is irresponsible — not in a moral sense, but in the precise epistemological sense that no one takes responsibility for the individual conclusion. When a physician exercises the illative sense and diagnoses a patient, the physician stakes her professional standing, her conscience, and in some cases her patient's life upon the soundness of her judgment. The diagnosis is not merely an output. It is an act for which the diagnostician is accountable. The model produces outputs for which no one is accountable in this way. The developer is not accountable for the specific medical conclusion the model generates. The user may or may not be equipped to evaluate it. The model itself is not a moral agent capable of being held accountable. The conclusion floats, detached from any person who bears the weight of its truth or falsity.
Third — and here the contrast becomes most acute — the model cannot distinguish between what it knows and what it is pattern-matching toward. Newman's illative sense includes, as an essential component, the capacity for self-assessment: the trained mind knows not only what it concludes but how confident it ought to be in the conclusion. The experienced physician knows when her diagnosis is firm and when it is tentative. She knows the difference between a case where the converging evidence is overwhelming and a case where it is suggestive but incomplete. This meta-cognitive dimension — the knowledge of what one knows and does not know — is integral to the exercise of judgment. The model lacks it. Its confidence scores are statistical properties of the output distribution, not the product of a mind assessing its own epistemic standing. It can be utterly wrong with the same fluency it brings to being utterly right, and nothing in its architecture provides a reliable internal signal of the difference.
This is what produces the phenomenon *The Orange Pill* describes as "confident wrongness dressed in good prose" — the hallucination that reads like insight. The passage about Deleuze that sounded philosophically sophisticated but collapsed under examination was not a failure of the model's knowledge base in any simple sense. It was a structural consequence of inference without the illative sense: pattern-completion unrestrained by the meta-cognitive capacity to recognize where genuine understanding ends and statistical extrapolation begins.
The implications extend far beyond the literary. Consider the AI system deployed to assist legal research. The system can identify relevant cases, extract applicable principles, even construct preliminary arguments with a range and speed that no human researcher can match. But it cannot exercise the judgment that distinguishes the case that is technically relevant from the case that is importantly relevant — the case whose principle, applied to the present facts, would produce a just outcome. That distinction requires the kind of trained discrimination Newman described: the convergence of legal knowledge, practical wisdom, moral sensitivity, and concrete familiarity with the specific texture of the present case that together constitute legal judgment.
Or consider the AI system that generates medical diagnoses from symptom descriptions. The system can correlate symptoms with conditions across a database vastly larger than any physician's personal experience. But the physician who has spent twenty years treating patients in a particular population — who knows, from embodied experience, how this community describes pain, how cultural factors inflect symptom presentation, how the specific patient sitting in front of her differs from the statistical average — brings to the diagnostic encounter a quality of understanding that the model's statistical inference cannot replicate. Not because the model lacks data. Because the model lacks what Newman would call personal standing in relation to the conclusion.
Newman scholars have increasingly recognized the resonance between the illative sense and Michael Polanyi's later concept of "tacit knowledge" — the dimension of knowing that resists explicit formulation yet undergirds all competent performance. The connection is instructive. Polanyi argued that we know more than we can tell — that the master craftsman's skill, the scientist's capacity for discovery, the physician's diagnostic instinct all depend upon a substrate of knowledge that cannot be fully articulated and therefore cannot be fully transferred through explicit instruction. Newman anticipated this argument by nearly a century: the illative sense is precisely the faculty that operates on tacit knowledge, drawing upon the full, largely unarticulated deposit of experience to reach conclusions that explicit reasoning alone could never reach.
If Newman and Polanyi are right — if the most important forms of human judgment depend upon a substrate of knowledge that resists formalization — then the limits of algorithmic inference are not temporary limitations awaiting a sufficiently large training corpus. They are structural features of a system that operates exclusively upon explicit, formalized representations of knowledge. The model can process everything that has been written down. It cannot access what has not been written down, and the illative sense operates precisely in the domain of what has not been — and perhaps cannot be — articulated.
Newman would not have dismissed the model's capabilities. Newman was never a rejectionist regarding the tools of the intellect; he valued logic, valued science, valued the disciplined application of formal reasoning in its proper domain. What he insisted upon was the recognition that formal reasoning has a proper domain, and that beyond that domain lies a vast territory of concrete, particular, experiential knowledge where the illative sense is sovereign. The model is an extraordinary instrument of formal and quasi-formal reasoning. It extends the reach of propositional knowledge to a degree that would have astonished Newman. But it does not extend the reach of the illative sense, because the illative sense cannot be extended by any means other than the slow, patient, often painful process of personal formation.
The engineer who can feel a codebase — who walks through a system and senses, before articulation, that something in the architecture is wrong — is exercising Newman's illative sense applied to software. The knowledge that grounds this feeling is distributed across thousands of hours of debugging, deploying, watching systems fail under load, tracing errors to their roots, and experiencing, in the body, the specific character of different kinds of failure. No description of this knowledge, however detailed, captures its full content. And no model, however thoroughly trained on descriptions of software engineering, acquires it.
The question for the age of AI, framed in Newman's terms, is not whether the model can produce impressive outputs. It manifestly can. The question is whether a culture that has access to the model's extraordinary propositional facility will continue to invest in the slow, expensive, often tedious process of forming the illative sense in its human practitioners. Whether the availability of rapid, competent, confident outputs will erode the institutional patience required to develop the kind of trained judgment that only years of embodied experience can produce. Whether the frictionless answer will crowd out the formative struggle.
Newman spent his life arguing that the most trustworthy conclusions in concrete matters are reached by persons, not by processes — by minds formed through deep engagement with the particular domain, not by algorithms applied indifferently across domains. The large language model is the most powerful process ever built. But a process, however powerful, is not a person. And the illative sense, the faculty that makes a person's judgment trustworthy in the domain where she has standing, remains — as Newman would have predicted — beyond the reach of any machine.
---
Newman's *Grammar of Assent* was, at its core, an attempt to answer a question that the dominant philosophical tradition of his era had rendered nearly unanswerable: By what right does a person hold a concrete, particular truth with certitude when formal demonstration is unavailable? The British empiricists — Locke preeminently, but the tradition running through Hume and Mill — held that rational assent should be proportioned to evidence, that certitude without formal proof is a kind of intellectual excess, a failure of epistemic discipline. Newman thought this principle, applied universally, would destroy not only religious faith but the entire structure of practical reasoning by which human beings navigate their lives. Nobody proportions their assent to evidence in matters of daily life. Nobody withholds conviction about their own identity, the trustworthiness of their friends, the reality of the external world until formal demonstration is available. People reach certitude in these matters through a process that is rational but not formally demonstrative — and Newman set out to describe that process with the rigor it deserved.
The result was a grammar — a set of structural principles governing the legitimate passage from probability to certitude. Newman argued that in concrete matters, certitude is reached through the convergence of independent probabilities, none sufficient alone, together compelling. The convergence is assessed not by any mechanical procedure but by the illative sense of the individual reasoner, operating within the domain where experience has formed her judgment. The grammar of assent thus describes a deeply personal epistemological process: the conditions under which this mind, formed by this history, encountering this evidence, is justified in reaching this conclusion with the force of conviction.
The age of artificial intelligence has produced a second grammar — operating alongside Newman's, superficially resembling it, fundamentally different in its logic and its implications. This second grammar governs not the passage from probability to certitude but the passage from prompt to output. It is the grammar of the prompt: the art and discipline of framing a question to a large language model in such a way that the model produces useful, relevant, sophisticated output.
The grammar of the prompt has its own practitioners, its own literature, its own emerging norms of excellence. The skilled prompt engineer knows how to specify context, constrain scope, iterate on output, and evaluate results. The discipline is genuine. The skill is real. And the outputs it produces can be, by virtually any external measure, impressive — well-structured, factually dense, rhetorically polished, and responsive to the nuances of the request.
The danger Newman's framework exposes is not that the grammar of the prompt is worthless. It is that the grammar of the prompt is being systematically confused with the grammar of assent — that the capacity to produce satisfying outputs from a machine is being mistaken for the capacity to hold genuine knowledge with justified conviction. The two operations are categorically different, and the confusion between them is, in Newman's terms, the characteristic intellectual pathology of the present moment.
Consider what happens when a lawyer uses an AI system to draft a legal brief. The system receives a prompt describing the case, the applicable jurisdiction, the desired outcome. It produces a draft that identifies relevant precedents, constructs arguments, anticipates counterarguments, and presents the analysis in a format the court expects. The draft may be competent. It may even be excellent by the standards of legal writing. The grammar of the prompt has been satisfied: the right question, posed in the right way, has produced a useful output.
But has the grammar of assent been satisfied? Has the lawyer who deploys this brief arrived at a genuine, personally grounded conviction about the soundness of the arguments it contains? Has she engaged with the case law deeply enough to exercise the illative sense — to know, from trained judgment rather than from the model's statistical inference, whether the precedents cited are not merely technically applicable but fundamentally apt? Has she undergone the formative encounter with the material that would deposit, in her professional judgment, the specific quality of understanding that distinguishes the brief she can defend under hostile questioning from the brief she merely filed?
Newman's framework suggests that the answer, in an alarming number of cases, is no. The grammar of the prompt has been satisfied while the grammar of assent has been bypassed entirely. The output exists. The understanding does not.
*The Orange Pill* provides what may be the most candid illustration of this danger in the emerging literature on AI collaboration. Its author describes a passage generated by Claude about Gilles Deleuze's concept of "smooth space" — a passage that connected two threads of the argument with apparent philosophical sophistication. The prose was elegant. The structure was sound. The reference arrived on time and appeared to illuminate the discussion. The grammar of the prompt had been satisfied with distinction: the right context, the right framing, the right iterative refinement had produced an output that read like genuine philosophical insight.
The next morning, something nagged. Upon examination, the philosophical reference was wrong — not subtly wrong, not a matter of interpretive dispute, but wrong in a way that would be immediately obvious to anyone who had actually read Deleuze. The passage worked rhetorically. It sounded like insight. But the foundation on which the rhetoric rested was a statistical confection — the model's best approximation of what a philosophically informed passage about smooth space should sound like, generated without any understanding of what Deleuze actually argued.
This is the grammar of the prompt masquerading as the grammar of assent. The output satisfies every criterion of the first grammar — fluency, relevance, structural coherence, rhetorical polish — while violating the most basic requirements of the second: that the knowledge be genuinely held, personally tested, grounded in actual engagement with the source material. The author of *The Orange Pill* caught the failure because he possessed enough residual intellectual discipline to check a reference that felt too smooth. But the seduction of the output — the fact that it looked and sounded exactly like the insight he was reaching for — nearly prevented the check from happening at all.
Newman would have recognized this seduction as a species of what he called notional sophistication — the capacity to manipulate propositions with great facility while holding none of them with real conviction. The model is the ultimate notional intellect: it can generate propositions across an almost infinite range of domains, construct arguments of considerable complexity, produce analyses that satisfy the external criteria of expertise. What it cannot do is hold any of these propositions with the interior conviction that would make them genuine knowledge in Newman's sense. And what it tempts its users to do is to adopt the same posture — to accept the output as though it were knowledge, without undergoing the interior process by which knowledge is actually formed.
The grammar of assent, in Newman's account, requires certain conditions that no interaction with a machine can provide. It requires encounter — the direct, often difficult engagement with the primary material, the source, the reality that the proposition represents. The lawyer must read the cases. The physician must examine the patient. The philosopher must wrestle with the text. The encounter may be tedious, frustrating, even apparently wasteful in its inefficiency. But the encounter is where real assent is formed — where the proposition ceases to be an abstraction and becomes a truth the person has earned through the specific friction of engagement.
It requires personal judgment at every stage — the assessment of relevance, the weighting of evidence, the recognition of what matters and what does not. This judgment is not algorithmic. It cannot be outsourced. It is the exercise of the illative sense within the specific domain of inquiry, and it is formed by the very process of engagement that the machine's fluency makes it tempting to skip.
And it requires what Newman called conscience — the interior faculty that holds the reasoner accountable not merely for the logical consistency of the conclusion but for its truth. The person who reaches certitude through the grammar of assent takes responsibility for the conclusion. She stakes her judgment upon it. She is willing to be challenged, to defend her reasoning, to revise her position if new evidence warrants — but she holds the conclusion with the specific gravity of a conviction that is hers, formed by her engagement, grounded in her experience, tested by her conscience.
The grammar of the prompt requires none of these things. It requires skill — real skill, the kind that develops with practice and produces measurably better results. But it does not require encounter with primary material. It does not require personal judgment about the truth of the output. It does not require the exercise of conscience. The user frames the question. The machine produces the answer. The user evaluates the answer by its external qualities — its coherence, its plausibility, its usefulness — and deploys it. At no point in this process is the user required to have formed a genuine, personally grounded conviction about the truth of what the machine has produced.
Chase Mitchell, writing in the *Christian Scholar's Review* in 2025, captured this dynamic with a striking inversion. Newman's *Grammar of Assent* describes the ascent of the mind from probability to certitude — the upward movement by which a person, through the exercise of trained judgment, arrives at convictions worthy of the name. Mitchell argued that AI propagates what he called a "grammar of descent" — a downward movement in which the ready availability of sophisticated outputs erodes the motivation, the discipline, and ultimately the capacity for the ascent Newman described. The descent is not dramatic. It is incremental, comfortable, and almost entirely invisible to the person undergoing it. Each time the machine provides a satisfying output that the user accepts without genuine engagement, the muscles of real assent atrophy slightly. The atrophy compounds. Over months and years, the person who has relied on the grammar of the prompt may find that the grammar of assent has become, not impossible, but unfamiliar — a language once spoken fluently that has faded from disuse.
The institutional implications are severe. Universities, law firms, medical practices, engineering teams — every institution that depends upon the formation of judgment in its practitioners — faces the same question. Will the grammar of the prompt supplement or supplant the grammar of assent? Will the machine's outputs serve as a starting point for genuine engagement, or as a substitute for it? Will institutions continue to invest in the slow, expensive, often frustrating process of forming real assent in their members, or will the efficiency of prompt-based knowledge production prove too attractive to resist?
Jonathan Sanford, president of the University of Dallas, framed the question in terms Newman would have endorsed: "The more we automate, the more we need leaders who can interpret, not merely execute. The more data we have, the more we need wisdom to decide what is worth pursuing." The automation provides outputs. The interpretation requires real assent — the personal, conscience-tested, experientially grounded conviction that distinguishes the person who merely possesses information from the person who actually understands.
Newman would not have opposed the grammar of the prompt as such. Newman was a sophisticated rhetorician who understood the value of well-framed questions and well-structured arguments. What he would have opposed, with the full force of his considerable intellectual energy, is the substitution of the one grammar for the other — the cultural drift toward a condition in which the capacity to produce satisfying outputs is mistaken for the capacity to know. The grammar of the prompt is a tool. The grammar of assent is a formation. The tool is valuable. The formation is essential. And the age of AI is making it dangerously easy to acquire the tool while neglecting the formation.
---
In his *Letter to the Duke of Norfolk*, written in 1875 in response to William Gladstone's accusation that Catholics had surrendered their conscience to papal authority, Newman made one of the most striking claims in the history of moral philosophy. Conscience, he argued, is "the aboriginal Vicar of Christ" — the original, pre-institutional, pre-doctrinal faculty by which human beings encounter moral obligation as a personal summons. Not as a social convention. Not as a cultural inheritance. Not as an evolutionary adaptation for cooperative behavior. But as a voice — Newman's word — that addresses the individual as an individual and demands an accounting.
The claim was, in its original context, a defense of individual moral agency against institutional overreach. Newman was arguing that even the authority of the Pope — an authority he accepted with genuine conviction — could not override the testimony of a well-formed conscience. "If I am obliged to bring religion into after-dinner toasts," Newman wrote in his famous passage, "I shall drink — to the Pope, if you please, — still, to Conscience first, and to the Pope afterwards." The remark was partly humorous and entirely serious. Conscience, in Newman's hierarchy, precedes every external authority, because conscience is the faculty by which the individual encounters moral reality directly, without mediation.
Applied to the age of artificial intelligence, Newman's theology of conscience exposes a dimension of the crisis that secular analysis consistently underestimates. The dominant discourse on AI ethics — alignment research, safety protocols, governance frameworks, regulatory structures — addresses the question of how to constrain the machine's outputs. How to prevent harmful generations. How to ensure that the system's behavior aligns with human values. These are important questions. They are not the most important questions.
The most important question, in Newman's framework, is not how to constrain the machine but how to form the person who directs the machine. Not how to align the AI but how to align the human. The machine has no conscience. It has parameters, objective functions, reward models, alignment protocols. These are engineering achievements of considerable sophistication, and they represent genuine effort to ensure that the machine's behavior falls within acceptable bounds. But they are not conscience. They are constraints imposed from without upon a system that has no interior moral life — no experience of obligation, no capacity for guilt, no sense that the truth of its outputs matters in a way that implicates its own being.
The human who directs the machine has, or should have, all of these things. Conscience is the faculty that holds the human builder accountable not merely for the technical quality of the output but for its moral character. Not merely for whether the code compiles but for whether the product deserves to exist. Not merely for whether the system performs as specified but for whether the specification itself was worthy of the intelligence and resources expended upon it.
Newman's account of conscience has a specificity that distinguishes it from the vaguer modern usage. In contemporary discourse, "conscience" often means little more than personal preference elevated to moral status — "I follow my conscience" as a way of saying "I do what feels right to me." Newman's conscience is something far more rigorous. It is a cognitive-moral faculty that, like the illative sense, is formed through training, deepened through experience, and sharpened through the repeated encounter with moral reality. Conscience is not a feeling. It is a form of perception — the perception of moral obligation as a real feature of the world, as objective in its way as the perception of color or sound.
A well-formed conscience, in Newman's account, is the product of a long process of moral development. It requires exposure to moral reality — to situations where right and wrong are genuinely at stake, where the consequences of judgment fall upon real persons, where the temptation to rationalize is met by the interior demand for honesty. It requires the discipline of self-examination — the willingness to hold one's own motives to account, to distinguish between what one wants to be true and what one knows to be true, to resist the comfortable conflation of self-interest with moral principle. And it requires what Newman called "docility" — not in the modern sense of submissiveness, but in the original sense of teachability: the openness to correction, the recognition that one's own moral perception may be incomplete or distorted, the willingness to revise in the face of better understanding.
*The Orange Pill* contains a passage that reads, through Newman's lens, as a confession of conscience failed and, in the act of confession, partially restored. The author describes building a product early in his career that he knew was addictive by design. He understood the engagement loops. He understood the dopamine mechanics. He understood the variable reward schedules, the social validation cycles, the way a notification timed to a moment of boredom could capture thirty minutes of attention the user had intended to spend elsewhere. He understood all of this — and he built it anyway.
The justification he offered himself at the time was the one every builder in his position has offered: "Someone else will build it if I do not, so it might as well be me. At least I'll do it better than they would." The argument is logically coherent. It is even, within certain narrow assumptions, strategically sound. What it is not, in Newman's framework, is morally honest. Conscience — the interior voice that knows the difference between a justification and a truth — knew that the argument was a rationalization. The intellect constructed the defense. Conscience saw through it. And the builder overrode his conscience because the growth was intoxicating and the justification was available.
Newman would have recognized this pattern with painful familiarity. The *Apologia Pro Vita Sua*, his intellectual autobiography, is in large part the story of a brilliant mind's struggle with the temptation to substitute intellectual facility for moral honesty — to construct arguments that satisfy the intellect while the conscience protests, quietly, that the arguments serve the arguer's convenience more than the truth. Newman's own passage from the Church of England to the Catholic Church was, as he told it, a years-long process of resisting a conclusion that his conscience had reached long before his intellect was willing to follow. The intellect could always find another objection, another reason to delay, another argument for remaining where he was. Conscience simply knew. And the delay was a form of dishonesty with which Newman never fully made peace.
The relevance to artificial intelligence is not metaphorical. It is direct. The builders of AI systems face, at every stage of development and deployment, questions that the intellect can rationalize but that conscience must judge. Should this system be deployed in this context? Should this data be used for this purpose? Should this capability be made available to this population? The intellect, trained in optimization and rewarded for growth, can construct arguments for virtually any deployment. The market rewards output. The investor rewards scale. The culture celebrates disruption. Within this ecosystem of incentives, the case for deployment is almost always available, and the case for restraint almost always requires the builder to accept a cost — financial, competitive, reputational — that the market does not reward.
Conscience is the faculty that bears this cost. Not regulation, which arrives late and addresses generalities rather than particulars. Not alignment protocols, which constrain the machine but do not form the person. Not public opinion, which is itself increasingly shaped by the systems it might otherwise hold accountable. Conscience: the individual's irreducible encounter with the question of whether this particular action, in this particular circumstance, is right.
Newman's insistence on the primacy of conscience was, in his own time, a defense of individual moral agency against institutional pressure. In the age of AI, the pressure comes from a different direction — not from ecclesiastical authority but from the logic of scale, the imperative of growth, the cultural assumption that what can be built should be built, and that the question of whether it should be built is a luxury the competitive environment cannot afford. Newman's response would be unambiguous: conscience is not a luxury. It is the aboriginal authority. It precedes the market, the investor, the competitive landscape, and the cultural assumption. It addresses the builder not as a function of an optimization process but as a person — a person who will, at some point, have to give an account of what they built and why.
The question *The Orange Pill* places at the center of its argument — "Are you worth amplifying?" — is, in Newman's terms, a question addressed to the conscience. Not to the intellect, which can always construct a case for its own worthiness. Not to the market, which values output regardless of its moral character. To the conscience, which knows whether the person behind the output has been honest — with themselves, with the people who will use what they build, with the truth of what the technology is and what it costs.
The machine amplifies. This is the central claim, and the claim is sound. But amplification is morally neutral in a way that conscience is not. The amplifier carries whatever signal it receives. It does not filter. It does not judge. It does not ask whether the signal deserves to be carried. The signal is the human contribution, and the quality of that signal — its honesty, its care, its moral seriousness — is determined not by the intellect's capacity for sophisticated justification but by the conscience's capacity for honest self-assessment.
Newman would observe that the contemporary discourse on AI ethics is overwhelmingly concentrated on the machine side of the equation — on alignment, safety, governance, regulation. These are necessary concerns. But they are, in Newman's framework, secondary. The primary concern is the formation of the persons who will direct the machine: the cultivation of conscience as a functioning moral faculty, capable of asking the questions that the logic of scale and the grammar of the prompt systematically discourage.
The 2025 Vatican document *Antiqua et Nova*, issued jointly by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, engaged this concern at its deepest level. The document argued that the gravest questions posed by artificial intelligence "are not technical. They are moral and metaphysical." What is a human person, such that the person's labor may or may not be replaced, the person's speech imitated, the person's relationships simulated, the person's decisions outsourced? These questions cannot be answered by the machine. They cannot be answered by the market. They can only be answered by persons whose moral perception — whose conscience — has been formed with sufficient depth and honesty to see through the rationalizations that the logic of scale provides.
Newman's conscience is not primarily a prohibitive faculty. It does not merely say "thou shalt not." It says, with equal force, "thou must." The conscience that perceives an obligation to refrain from building something harmful also perceives an obligation to build something good — to direct the amplifier toward life rather than away from it. The builder whose conscience is well-formed does not merely avoid harm. She asks, with the rigor Newman demanded of all moral inquiry, what deserves to be built. What would genuinely serve the persons downstream of the technology. What would leave the world better than she found it — not in the aggregate, statistically, as a net utility calculation might suggest, but concretely, in the specific lives of the specific people her work will touch.
This is what Newman meant when he said conscience precedes the Pope. Not that conscience is infallible — Newman was too honest about the distortions of self-interest to make that claim. But that conscience is the first and indispensable authority, the faculty without which no external authority — ecclesiastical, regulatory, market, cultural — can function rightly. The regulations will be as good as the conscience of the persons who frame them. The alignment protocols will be as honest as the conscience of the persons who design them. The educational institutions will form persons only as whole as the consciences of those who lead them.
The formation of conscience, in Newman's account, is slow, difficult, and dependent upon precisely the kind of embodied, experiential engagement that the efficiency of AI tools tempts practitioners to bypass. A well-formed conscience is the product of years of attending to the moral dimension of one's work — years of asking, not as a formality but as a genuine inquiry, whether this product serves the people it reaches, whether this decision honors the trust of the people affected, whether this justification is honest or merely convenient. The formation cannot be shortcut. It cannot be prompted. It cannot be algorithmically generated.
It can only be lived.
In 1852, John Henry Newman was invited to Dublin to deliver a series of lectures on the purpose of a university. The invitation came from the Irish Catholic bishops, who wanted him to establish and lead a new Catholic university. The lectures that resulted — published first as *Discourses on the Scope and Nature of University Education* and later revised as *The Idea of a University* — constitute what is arguably the most sustained and philosophically serious defense of liberal education ever written in the English language. They were composed in opposition to a particular threat: the utilitarian reduction of education to professional training, the conviction, advanced most forcefully by the *Edinburgh Review* circle and by John Stuart Mill's followers, that a university exists to produce useful citizens equipped with marketable skills.
Newman thought this conviction was catastrophically wrong — not because useful skills are unimportant, but because a university that defines its purpose as the production of useful skills has misunderstood what a university is. The purpose of a university, Newman argued, is the formation of the intellect. Not the filling of it. The formation: the cultivation of a specific quality of mind that he called "the philosophical habit" — the capacity to see the relations between different forms of knowledge, to grasp principles rather than merely accumulate facts, to exercise judgment across domains rather than perform competently within a single narrow specialization.
The distinction was not ornamental. Newman believed that the philosophical habit of mind was qualitatively different from, and more valuable than, any collection of specialized competencies. The specialist knows one thing deeply. The person of liberal education knows how things relate — how a principle in one domain illuminates a problem in another, how the methods of one discipline correct the excesses of another, how the whole of knowledge forms what Newman called a "circle" in which each branch occupies a place defined by its relation to every other branch. Remove any branch, and the circle is distorted. Educate in one branch without reference to the others, and the student acquires information but not understanding, technique but not judgment.
The argument was controversial in 1852. It is explosive in 2026. Because the arrival of artificial intelligence has done something that no amount of philosophical argument could accomplish: it has provided an empirical demonstration, at civilizational scale, of what happens when technique is abundant and judgment is scarce.
When the machine can execute virtually any technical task described in natural language — write code, draft legal briefs, generate medical analyses, produce financial models, compose marketing copy, design user interfaces — the person whose education consisted solely of learning to execute these tasks discovers, with the specific vertigo that *The Orange Pill* documents in its opening chapters, that the ground beneath her professional identity has shifted. The execution was her value proposition. The machine now executes. What remains?
Newman answered this question a hundred and seventy years before it was asked in its current form. What remains is precisely what liberal education was designed to produce: the capacity for judgment. The ability to evaluate, to discriminate, to determine what deserves to be done among the infinite things that can now be done. Not the skill of producing a legal brief but the wisdom to know which arguments serve justice. Not the ability to write code but the discernment to know which products serve human flourishing. Not the technique of generating financial models but the judgment to know which models illuminate and which obscure.
Newman called this capacity "enlargement of mind" — and he was precise about what the phrase meant. Enlargement is not the accumulation of more information. A mind stuffed with facts but incapable of seeing their relations is not enlarged. It is burdened. Enlargement is the capacity to hold multiple perspectives in productive tension, to perceive how different domains of knowledge illuminate each other, to exercise what Newman called "connected thinking" — the kind of thinking that sees a problem not from within a single disciplinary silo but from the intersection of several, where the most consequential insights typically reside.
The organizational structures emerging in response to AI vindicate this account with a precision that borders on the uncanny. *The Orange Pill* describes "vector pods" — small groups of three or four people whose function is not to build but to decide what should be built. They talk to users. They analyze markets. They debate strategy. They produce specifications that AI tools execute. Their value lies not in any technical competence but in their capacity to perceive relations — between user needs and technical possibilities, between market conditions and product vision, between what can be built and what should be built. These pods are, in Newman's framework, the organizational expression of liberal knowledge: minds formed not to execute one thing well but to see connections and judge wisely among the many things that execution makes possible.
Newman would also recognize what the pods are not. They are not committees. They are not bureaucratic clearinghouses for institutional inertia. They are small groups of persons whose formation — whose education, in the full Newmanian sense — has equipped them to exercise judgment under conditions of genuine uncertainty. The distinction matters because the organizational temptation, in every era, is to substitute process for judgment: to create structures that produce the appearance of deliberation without requiring the reality of formed minds. Newman warned against this substitution repeatedly. The university that produces graduates who can follow procedures without exercising judgment has failed at its fundamental task, regardless of how impressive the procedures are or how efficiently the graduates follow them.
Newman's concept of "the circle of knowledge" — the interconnection of all branches of learning — addresses a dimension of the AI transformation that purely technical analysis consistently misses. The circle of knowledge is the recognition that no domain of human understanding is self-sufficient. Theology without philosophy becomes superstition. Philosophy without science becomes speculation. Science without ethics becomes instrumentalism. Economics without history becomes ideology. Each discipline requires the corrective influence of the others, and a mind educated within a single discipline, however deeply, lacks the conceptual resources to recognize the distortions that specialization inevitably produces.
AI dissolves the boundaries between disciplines more effectively than any educational reform. A single practitioner, armed with AI tools, can now operate across domains that previously required separate teams of specialists. The backend engineer builds interfaces. The designer writes features. The marketer generates analyses. The boundaries that seemed structural turn out to have been artifacts of the translation cost — the cognitive and temporal expense of moving between domains. When the cost drops to the cost of a conversation, the boundaries dissolve.
But the dissolution is educationally valuable only if the practitioner possesses the formed mind that can perceive the unity within the diversity. Without that formation, the dissolution produces not integration but confusion — a person operating across multiple domains without the conceptual framework to understand how the domains relate, what principles govern each, where the insights of one correct the limitations of another. The practitioner who uses AI to generate code, design interfaces, and draft marketing copy without understanding the principles that govern software architecture, user experience, and consumer psychology is not exercising liberal knowledge. She is executing across domains without judgment — producing outputs that satisfy immediate requirements while lacking the coherence that only an integrated understanding can provide.
Newman anticipated precisely this risk. In the sixth of his Dublin discourses, he argued that a person who possesses a great deal of information but lacks the philosophical habit of mind is in a worse position than the person who knows less but understands how things connect. "The enlargement consists, not merely in the passive reception into the mind of a number of ideas hitherto unknown to it, but in the mind's energetic and simultaneous action upon and towards and among those new ideas." The key word is "simultaneous" — the capacity to hold multiple domains in productive relation, to perceive the connections that generate insight, to exercise the kind of synthetic thinking that Newman regarded as the hallmark of the truly educated mind.
The AI age demands this capacity more urgently than Newman's age did, because the AI age has made information superabundant while leaving the capacity for synthesis as scarce as it ever was. The machine can produce information across every domain with a range and speed that no human mind can match. What the machine cannot do is perceive the connections between domains in a way that generates the kind of insight Newman described — the insight that arises not from within any single domain but from the intersection of several, where the principles of one illuminate the problems of another.
Newman's educational philosophy has been vindicated by the very technology it could not have anticipated. The utilitarian model — the model that reduces education to the transmission of marketable skills — has been exposed by AI as preparing students for precisely the kind of work that machines do most efficiently. The student trained exclusively in coding discovers that coding is being commoditized. The student trained exclusively in legal drafting discovers that drafting is being automated. The student trained exclusively in financial analysis discovers that analysis is being generated at a fraction of the cost and a multiple of the speed.
What remains valuable — what the machine cannot commoditize — is the formation Newman advocated: the philosophical habit of mind, the capacity for connected thinking, the ability to exercise judgment across domains, the quality of understanding that perceives not merely what things are but how they relate and what they mean.
Luke Ayers, writing in Crisis Magazine in August 2025, placed this vindication in its institutional context. Newman's elevation to Doctor of the Church, Ayers argued, represented "a prophetic challenge to the AI-driven educational revolution just over the horizon." The challenge was not that AI should be rejected but that its arrival exposed the poverty of the educational model that had displaced Newman's vision — the model that measured educational success by the acquisition of technical competencies rather than the formation of the whole person. Newman's insistence that education cultivate judgment rather than merely transmit skills was, in Ayers's reading, "singularly situated to speak to the difficulty that AI presents to K-12 and postsecondary institutions because he understood and articulated what education ought to be and what a school ultimately aims at."
The formulation is precisely right. Newman did not argue against useful knowledge. He argued that useful knowledge, in the absence of the philosophical habit of mind, produces practitioners who are competent within their specialization and helpless beyond it — persons who can execute but cannot judge, who can produce but cannot evaluate, who can answer but cannot question. The AI age has made this limitation visible by transferring the execution to the machine, leaving the human with nothing to contribute unless the human possesses the one thing the machine lacks: the capacity for the kind of integrated, judgment-rich, conscience-tested thinking that only a liberal education, in Newman's full sense, can produce.
The university, reconceived through Newman's framework, exists not to provide students with skills that compete with the machine but to form minds that direct the machine — minds capable of asking the questions the machine cannot originate, perceiving the connections the machine cannot see, and exercising the judgment the machine cannot possess. That this formation is slow, expensive, often frustrating, and resistant to quantitative measurement is not a defect. It is a feature. The formation of judgment cannot be optimized without being destroyed, because the optimization eliminates the friction — the struggle, the uncertainty, the repeated encounter with difficulty — through which judgment is actually formed.
Newman staked his career on this claim. The age of artificial intelligence has made it irrefutable.
---
Newman's account of how human beings reach certitude in concrete matters was, in its time, a radical departure from the epistemological orthodoxy. The empiricist tradition, dominant in British philosophy from Locke through Mill, held that rational assent should be strictly proportioned to evidence — that where formal demonstration is unavailable, the responsible intellect withholds conviction and contents itself with probability. Newman thought this principle, applied consistently, would paralyze practical reason entirely. No one actually lives this way. No one withholds conviction about the reliability of memory, the existence of the external world, the trustworthiness of close friends, until formal proof is available. People reach certitude in these matters through a process that is rational, but not formally demonstrative. Newman set out to describe this process — and the description he produced has a surprising and instructive resonance with the computational architecture of large language models.
Newman argued that in concrete matters — matters of history, practical judgment, personal conviction, moral assessment — certitude is reached through the convergence of independent probabilities. No single piece of evidence is decisive. No single argument compels assent. But when multiple independent lines of evidence point in the same direction — when the testimony of witnesses, the plausibility of motives, the consistency of circumstantial detail, the alignment with known patterns of human behavior all converge upon a single conclusion — the mind reaches a point where the accumulated weight of probability crosses a threshold and becomes certitude. Not mere opinion. Not tentative belief. Certitude: the state of mind in which the person holds the conclusion with the unconditional assent that formal demonstration would warrant, even though the grounds are not formally demonstrative.
The process, as Newman described it, involves the personal judgment of the reasoner at every stage. Which considerations are relevant? How much weight does each carry? Where do the lines of evidence converge, and where do they diverge? These assessments cannot be mechanized. They depend upon the reasoner's experience, training, and what Newman called the illative sense — the trained faculty of informal inference that operates below the level of articulable rules. The convergence of probabilities is not a calculation. It is an act of judgment, performed by a particular person with a particular history, drawing upon the full deposit of that person's engagement with the domain in question.
The superficial resemblance to large language model inference is immediately apparent. A large language model also converges upon outputs by processing probabilities. Given a prompt, the model evaluates a probability distribution over possible next tokens — words or fragments of words — and selects, one token at a time, the continuation that is, by its statistical reckoning, most consistent with the patterns in its training data. The process involves the weighting of multiple considerations simultaneously. The output reflects the convergence of innumerable statistical regularities, no single one of which determines the result, but which together produce a response that is, often, remarkably apt.
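The mechanism is concrete enough to sketch in a few lines of Python. The candidate tokens and the scores below are invented for illustration — they do not come from any real model — but the softmax-and-select step is the standard shape of token prediction:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores a model might assign to candidate next tokens
# after a prompt such as "The physician examined the ..."
logits = {"patient": 4.1, "chart": 2.9, "evidence": 2.2, "moon": -1.5}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: take the mode
```

Run with the same scores, the selection is always the same — the point on which Newman's distinction between a mechanism and a judgment turns.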
The resemblance has not gone unnoticed. Popular accounts of AI frequently describe large language models in language that echoes Newman's epistemology: the models "weigh probabilities," "converge upon conclusions," "integrate multiple lines of evidence." The language invites the inference that the model is doing something structurally similar to what the human mind does when it reaches certitude through the convergence of probabilities. If Newman's process and the model's process are fundamentally the same operation — probabilistic convergence leading to a determinate output — then the model might be understood as an artificial implementation of the illative sense, and the distinction between human judgment and machine inference would be one of degree rather than kind.
Newman's framework reveals why this inference is mistaken, and the reasons for the mistake illuminate something important about both human cognition and artificial computation.
The first and most fundamental difference is the role of the reasoner. In Newman's account, the convergence of probabilities is not a process that happens to the reasoner. It is a process the reasoner performs. The physician does not passively receive the convergence of symptomatic evidence and find herself, as if by mechanism, holding a diagnosis. She actively assesses each piece of evidence, determines its relevance, weighs it against the others, and arrives at a conclusion for which she takes personal responsibility. The convergence is an act — an exercise of trained judgment by a particular person in a particular circumstance. The conclusion bears the character of the person who reached it. A different physician, with a different history of clinical experience, might weigh the same evidence differently and reach a different conclusion, and both conclusions might be rationally defensible, because the convergence of probabilities in concrete matters does not yield a single mechanically determined result.
The model's convergence, by contrast, is precisely the kind of mechanical process Newman distinguished from genuine judgment. Given the same input and the same parameters, the model produces the same output distribution. The convergence is not an act performed by a reasoner. It is a computation performed by an algorithm. No judgment is exercised in Newman's sense of the word, because judgment, in Newman's account, is irreducibly personal — it requires a mind that has been formed by engagement with the domain and that takes responsibility for the conclusion it reaches.
The second difference concerns the relationship between probability and truth. Newman was explicit that the convergence of probabilities, as he described it, is a path to truth — not merely to the most statistically likely conclusion, but to what the reasoner, exercising the full resources of trained judgment, determines to be actually the case. The physician's diagnosis, reached through the convergence of clinical probabilities, is not a claim about what diagnosis is most statistically frequent given these symptoms. It is a claim about what is actually wrong with this particular patient in this particular circumstance. The claim may be wrong. Certitude, as Newman acknowledged, is not infallibility. But the aim of the process is truth, and the reasoner holds herself accountable to that aim.
The model's convergence aims at something different: the statistically most probable continuation given the training data. The model does not seek truth. It does not hold itself accountable to truth. It seeks coherence with patterns — a coherence that often coincides with truth, because the training data contains enormous quantities of accurate information, but that can diverge from truth without any internal signal of the divergence. The divergence is what produces hallucinations: outputs that are statistically coherent with the patterns in the training data but factually wrong. The model generates the hallucination with the same fluency and the same apparent confidence it brings to accurate outputs, because the process that produces both is the same process — probabilistic convergence — and the process does not include a truth-tracking component in Newman's sense.
*The Orange Pill* describes the "temperature" setting that governs how far the model's output strays from the most probable completion. Higher temperature produces more surprising, more diverse, potentially more creative outputs. Lower temperature produces more predictable, more conservative outputs. The metaphor is instructive: the model's creativity is a function of randomness — of the degree to which the selection process is willing to depart from the statistical mode. Newman's creativity, if the word applies, is a function of judgment — of the reasoner's capacity to perceive connections that others have missed, to weigh evidence that others have overlooked, to reach conclusions that are not merely improbable but genuinely original. The difference between randomness and judgment is the difference between a process that generates surprise by departing from the statistical center and a mind that generates insight by perceiving the truth that the statistical center obscures.
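The temperature mechanism itself is a one-line transformation. In the sketch below — again with invented tokens and scores — dividing the scores by the temperature before the softmax is all it takes: a low temperature collapses the distribution onto its mode, a high temperature flattens it toward uniform randomness:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Divide scores by the temperature, softmax, then sample one token.
    Low temperature concentrates probability on the most likely token;
    high temperature flattens the distribution toward uniform."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    tokens = list(probs)
    choice = rng.choices(tokens, weights=[probs[t] for t in tokens])[0]
    return choice, probs

# Invented candidate tokens and scores, for illustration only.
logits = {"patient": 4.0, "chart": 3.0, "riddle": 1.0}
rng = random.Random(0)

_, cold = sample_with_temperature(logits, 0.2, rng)  # nearly deterministic
_, hot = sample_with_temperature(logits, 5.0, rng)   # close to uniform
```

The "creativity" the setting buys is exactly what the passage says it is: a willingness to sample away from the statistical center, not a perception of anything the center obscures.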
The third difference is perhaps the most consequential for practice. Newman's convergence of probabilities is self-correcting in a way the model's is not. The person who exercises the illative sense brings to each new case not only the accumulated deposit of prior experience but also the accumulated deposit of prior error. The physician who misdiagnosed a case five years ago carries that misdiagnosis as a lived correction — a specific, concrete, personally experienced failure that recalibrates her judgment in ways no abstract knowledge of diagnostic error rates can replicate. The convergence of probabilities, in a mind formed by this kind of experience, includes the weight of past failures, personally undergone, that sharpen the reasoner's capacity to detect the subtle signs of a similar error in the present case.
The model's convergence includes no such mechanism. The model can be retrained on data that includes corrected errors, and its statistical patterns will shift accordingly. But the retraining is not the same as the lived experience of having been wrong. The model does not carry its errors as a person carries them — as formative episodes that reshape the character of subsequent judgment. The model carries its training data as a statistical distribution, not as a biography. And the difference between a statistical distribution and a biography is, in Newman's framework, the difference between a mechanism and a mind.
The implications for practice are direct. The model's probabilistic convergence is an extraordinarily powerful tool for identifying patterns, generating hypotheses, narrowing the space of possibilities that the human reasoner must then evaluate. A physician who uses AI to generate a differential diagnosis from a symptom profile has a starting point that her unaided mind could not have produced as quickly or as comprehensively. A lawyer who uses AI to identify relevant precedents has a research base that her unaided effort could not have assembled as efficiently.
But the tool does not replace the judgment. The differential diagnosis must still be evaluated by a mind that knows this patient, this context, this clinical setting — a mind that brings to the evaluation the full weight of personal experience, including the weight of past error. The precedent research must still be assessed by a mind that understands not merely what cases are statistically relevant but what cases are importantly relevant — cases whose principles, applied to the present facts, would serve the ends of justice in this particular circumstance.
Newman would have embraced the tool while insisting, with characteristic precision, upon the distinction between the tool's output and the judgment that evaluates it. The convergence of probabilities that the machine performs is genuine and powerful. But it is not the convergence of probabilities that Newman described, because Newman's convergence is performed by a person — a person with a history, a conscience, a stake in the truth of the conclusion, and a trained faculty of judgment that cannot be replicated by any process, however sophisticated, that operates upon statistical patterns rather than upon the lived encounter with the real.
---
In the eighth of his Dublin discourses, Newman drew a portrait of the educated person that has been debated, admired, and contested for a hundred and seventy years. The portrait was of the "gentleman" — a term that Newman used not in its social sense, the sense that implies birth and breeding and the right school, but in its intellectual and moral sense: a mind formed by liberal education to perceive relations, exercise judgment, and move among different domains of knowledge with the ease that comes not from superficial acquaintance but from genuine understanding of principles.
The gentleman, as Newman described him, "is at home in any society, he has common ground with every class; he knows when to speak and when to be silent; he is able to converse, he is able to listen; he can ask a question pertinently, and gain a lesson seasonably, when he has nothing to impart himself." The passage is often read as a description of social grace. It is in fact a description of cognitive formation. The gentleman can converse with any specialist because his mind has been formed to perceive the principles underlying the specialty — not the technical details, which belong to the specialist, but the foundational logic that connects the specialty to the wider circle of knowledge. He can ask a question pertinently because he understands enough about the domain to know where the genuine difficulties lie. He can gain a lesson because he possesses the intellectual framework into which new information can be integrated rather than merely accumulated.
The portrait is, in contemporary language, a description of the ideal cross-functional thinker — the person whose value in an organization lies not in any single technical competence but in the capacity to perceive connections between domains and exercise judgment that integrates multiple perspectives. This is the person *The Orange Pill* calls the "creative director" — the mind whose contribution is not execution but direction, not the production of any single artifact but the judgment that determines which artifacts deserve to exist. The parallel between Newman's gentleman and this contemporary ideal is not coincidental. Both describe the same cognitive formation: a mind equipped not to do one thing well but to evaluate and direct many things wisely.
The parallel throws the contemporary educational landscape into sharp relief. The dominant model of higher education for the past century has been, in Newman's terms, the utilitarian model: education as the acquisition of marketable skills. The model has been spectacularly successful on its own terms. Universities produce graduates with technical competencies that are immediately deployable in professional settings. Computer science programs produce programmers. Law schools produce lawyers. Business schools produce managers. Engineering programs produce engineers. Each program is evaluated by the employability of its graduates, and by this metric, the system works.
The arrival of AI has exposed the fragility of this model with a speed that has left institutions struggling to respond. When the machine can perform the technical tasks that the graduates have been trained to perform — when it can write code, draft documents, generate analyses, produce designs — the graduates discover that their education prepared them for precisely the work that is being commoditized. The programmer who can only program. The lawyer who can only draft. The analyst who can only analyze. Each finds herself in possession of a skill that the market valued highly when it was scarce and values progressively less as AI makes it abundant.
Newman predicted this failure. Not the specific technology that would cause it — he could not have imagined artificial intelligence — but the structural vulnerability of an education that defines itself by the skills it transmits rather than the minds it forms. "Knowledge is one thing, virtue is another; good sense is not conscience, refinement is not humility, nor is largeness and justness of view faith," he wrote. The distinctions are precise. Knowledge of coding is not the judgment to know what code should be written. Skill in legal drafting is not the wisdom to know which arguments serve justice. Technical fluency in financial analysis is not the discernment to know which analyses illuminate and which obscure. The first member of each pair is the skill the utilitarian model transmits. The second is the formation that only liberal education produces. And the second is what the AI age has revealed to be indispensable.
The prompt engineer — the practitioner whose skill consists in framing questions to AI systems in ways that produce useful outputs — provides an instructive case study. Prompt engineering is a genuine skill. It develops with practice. It produces measurably better results. The best prompt engineers understand context specification, constraint framing, iterative refinement, and output evaluation at a level that distinguishes them from casual users. The skill is valuable, and the market currently rewards it.
But the prompt engineer who merely engineers prompts — who has mastered the grammar of the prompt without possessing the grammar of assent — is, in Newman's framework, the opposite of the educated person. The prompt engineer extracts outputs from the machine. Newman's educated person evaluates outputs against the standard of genuine understanding. The prompt engineer optimizes the interaction between human and machine. Newman's educated person asks whether the interaction is producing knowledge or merely producing the appearance of knowledge. The prompt engineer is a technician of extraction. Newman's educated person is a judge of worth.
The distinction is not a matter of status. It is a matter of formation. The prompt engineer whose skill in framing questions is grounded in a deep, liberally educated understanding of the domains in which she operates — who can evaluate the machine's outputs not merely by their coherence and plausibility but by their truth, their relevance, their bearing upon the actual problem at hand — that person combines the skill of the prompt engineer with the formation of Newman's educated mind. She is genuinely valuable, because she brings to the human-AI collaboration the one thing the machine cannot provide: the judgment that distinguishes the output that is merely satisfying from the output that is genuinely sound.
The prompt engineer whose skill is purely technical — who can frame questions brilliantly but cannot evaluate answers beyond their surface plausibility — occupies a position that is structurally identical to the specialist Newman warned against: competent within a narrow domain, helpless beyond it. As the machine's capacity for self-directed task completion increases — as the models become better at decomposing complex problems, generating their own sub-questions, and iterating on their own outputs — the purely technical skill of prompt engineering will itself be commoditized. The machine will prompt itself more effectively than most humans can prompt it. What will remain valuable is the human capacity that prompt engineering, in its purely technical form, does not develop: the judgment to know what is worth asking and the understanding to evaluate what comes back.
Newman's educational vision produces precisely this capacity. The mind formed by liberal education — the mind that has been trained to perceive relations between domains, to exercise judgment across disciplines, to hold multiple perspectives in productive tension — brings to the AI collaboration a quality of engagement that no amount of technical skill can substitute. The educated person does not merely extract outputs. She interrogates them. She asks not merely "Is this coherent?" but "Is this true?" Not merely "Does this answer my question?" but "Is this the right question?" Not merely "Is this output useful?" but "Is this output worthy of the intelligence and resources that produced it?"
These questions are the questions of real assent — questions that require the person asking them to have undergone the formative process by which genuine understanding is developed. They cannot be taught as skills. They cannot be acquired through a training program or a certification course. They are the product of years of the slow, often frustrating engagement with difficult material that Newman argued is the irreducible core of genuine education.
Andrew Meszaros, speaking at the 2025 Rome conference, articulated the practical consequence with admirable directness. If education assesses products — essays submitted, code deployed, briefs filed, projects completed — then AI is simply a more efficient tool for producing those products, and the educational task is to teach students to use the tool effectively. But if education assesses the way students think — if the purpose of the essay is not the essay itself but the process of thinking that the essay requires, the wrestling with ideas that produces genuine understanding — then AI is not merely a tool. It is, when improperly deployed, an obstacle to the very formation education exists to accomplish.
The student who uses AI to produce an essay has produced an essay. The student who has written the essay through the slow, difficult, personally formative process of working out what she thinks has undergone an educational experience. The product may look similar. The person who emerges from the process is categorically different. Newman's educational vision is concerned not with the product but with the person — not with what the student produces but with what the student becomes.
The age of AI has not made Newman's educational vision obsolete. It has made it urgent. The formation of minds capable of genuine judgment — minds that can direct the machine rather than merely operate it, evaluate its outputs rather than merely accept them, and ask the questions that the machine cannot originate — is not a luxury the educational system can afford to provide only to the fortunate few. It is the foundational requirement of a civilization that has given itself tools of unprecedented power and must now develop the wisdom to direct them.
Newman would recognize the irony. The utilitarian model defeated his liberal vision in the institutional contest of the nineteenth century. The machine has defeated the utilitarian model in the economic contest of the twenty-first. What remains standing, after both contests, is the formation Newman advocated — not because it is pleasant or prestigious or traditional, but because it produces the one thing the age requires that no machine can provide: a mind capable of knowing the difference between what is merely produced and what is genuinely understood.
---
In 1845, John Henry Newman published *An Essay on the Development of Christian Doctrine* — a work that was, in its immediate purpose, a justification for his conversion from Anglicanism to Roman Catholicism, and that became, in its lasting significance, one of the most important contributions to the philosophy of intellectual change. The central question of the *Essay* was how to distinguish genuine development from corruption — how to determine whether a change in doctrine represents a legitimate unfolding of an idea's implications or a betrayal of its essential character.
The question was urgent for Newman because the Catholic Church of 1845 held doctrines — papal supremacy, the intercession of saints, the role of tradition alongside scripture — that were not explicitly present in the Christianity of the first centuries. Newman's Protestant critics argued that these doctrines were corruptions: additions that altered the original faith beyond recognition. Newman argued the opposite: that the doctrines were genuine developments, the natural and necessary unfolding of implications that were present in the original idea from the beginning, but that required centuries of engagement with new circumstances, new challenges, new intellectual environments to be made explicit.
The distinction between development and corruption required criteria, and Newman supplied seven. He called them "notes" — diagnostic markers by which genuine development could be distinguished from its counterfeit. Preservation of type: the developed form remains recognizably the same kind of thing as the original. Continuity of principles: the fundamental logic that governs the idea remains operative, even as its applications extend. Power of assimilation: the idea absorbs new material from its environment without losing its own character. Logical sequence: the development follows from the original by a chain of reasoning that, while perhaps not formally deductive, is recognizable as an extension rather than an alteration. Anticipation of the future: earlier stages of the idea contain hints or foreshadowings of later developments. Conservative action upon the past: the development does not repudiate but preserves and deepens the earlier expressions of the idea. And chronic vigour: the developed form is more alive, more active, more capable of engaging with its environment than an idea that has merely stagnated.
The analogical distance between the development of Christian doctrine and the development of software is considerable, and Newman himself would have insisted upon respecting it. Theology and code operate in different registers of human concern. The stakes are of a different order. The kind of truth at issue is different. Any application of Newman's developmental framework to the domain of software must proceed with full awareness that it is working by analogy rather than by direct transference.
That said, the analogy illuminates something important about what happens when AI-generated code accumulates in a system over time — something that purely technical analysis tends to miss, because the relevant category is not functionality but coherence.
A software system, like a body of doctrine, begins with a foundational idea: an architectural vision that determines how the system's components relate to each other, what principles govern their interaction, what the system is for in a sense that extends beyond any individual feature. The architecture is not merely a technical decision. It is, in a meaningful sense, the system's identity — the organizing logic that makes the system this system rather than some other system, the principle that ensures the parts cohere into a whole rather than merely coexisting as a collection of functional but unrelated modules.
When human engineers develop a system over time, the best of them exercise something very close to Newman's developmental logic, though they would not use his vocabulary. They extend the system's capabilities while preserving its architectural integrity. They add new features that follow logically from the existing design. They absorb new requirements from the environment — new user needs, new technical constraints, new platform capabilities — without losing the system's essential character. They make changes that, examined in retrospect, seem anticipated by the original design, as though the architecture contained latent possibilities that the new requirements merely made explicit. And the system, under this kind of stewardship, exhibits chronic vigour: it grows more capable, more responsive, more alive to its environment with each iteration.
This is genuine development. It requires what Newman required of doctrinal development: a deep understanding of the original idea, a trained sensitivity to the difference between extension and alteration, and the kind of judgment — Newman's illative sense, applied to the architectural domain — that can perceive whether a proposed change preserves or violates the system's foundational logic.
AI-generated code introduces a specific and novel risk to this developmental process. The risk is not that the code does not work. It often works well. The risk is that the code works without coherence — that it solves the immediate problem while violating the architectural principles that give the system its identity. In Newman's terms, the risk is corruption masquerading as development: functional additions that alter the system's essential character under the guise of extending it.
The risk arises from a structural feature of how AI generates code. The model produces code that satisfies the specification — the prompt — with remarkable efficiency. But the model does not possess an understanding of the system's architecture in the sense that a human engineer who has lived with the system possesses it. The model has access to the codebase, can identify patterns, can generate additions that are stylistically consistent with the existing code. What it does not possess is the illative sense of the system's design intention — the feel for what the architecture is trying to accomplish, the sensitivity to the difference between a change that extends the design and a change that subverts it.
This is not a hypothetical concern. Engineers who have worked extensively with AI-generated code report a consistent pattern. The code works. The tests pass. The feature behaves as specified. But over time, as AI-generated additions accumulate, the system's architectural coherence degrades. Dependencies multiply in ways that the original design did not anticipate. Modules that were intended to be independent become coupled through hidden shared state. The system becomes what software engineers call "brittle" — functional in the sense that it produces correct outputs, but fragile in the sense that any modification to one component produces unexpected failures in others.
Newman would recognize this pattern as corruption by accretion — the gradual alteration of an idea's essential character through the accumulation of additions that are individually defensible but collectively transformative. Each addition, examined in isolation, appears legitimate. It extends the system's capabilities. It satisfies a real requirement. It passes the relevant tests. But the accumulation, over time, produces a system that no longer embodies the architectural vision that gave it coherence — a system whose type has not been preserved, whose principles have been quietly abandoned, whose vigour has been replaced by the brittle functionality of a structure that holds together by accident rather than by design.
Newman's seven notes provide a diagnostic framework for distinguishing between AI-generated code that genuinely develops a system and AI-generated code that corrupts it. Preservation of type: Does the system, after the addition, remain recognizably governed by the same architectural principles? Continuity of principles: Are the fundamental design decisions — the choices about modularity, dependency, abstraction — still operative, or have they been quietly overridden by additions that follow a different logic? Power of assimilation: Has the new code been integrated into the system's existing patterns, or does it sit alongside them as a foreign body — functional but unabsorbed? Logical sequence: Does the addition follow from the existing architecture in a way that a person familiar with the design could have anticipated, or does it introduce capabilities that bear no logical relation to the system's prior trajectory? Anticipation of its future: Were there early traces of this capability in the original design — hints that the architecture was already tending in this direction before the requirement arrived? Conservative action upon its past: Does the addition preserve and deepen the existing code's quality, or does it implicitly deprecate earlier design decisions without explicitly revising them? Chronic vigour: Is the system, after the change, more alive to its environment and more capable of further development, or has it begun to stiffen?
The application of these criteria requires, crucially, the kind of judgment that the illative sense provides: a trained sensitivity to the character of the system, formed through long engagement with the codebase, capable of perceiving architectural coherence or its absence at a level of subtlety that no formal test can capture. The person who can feel when a system's architecture is being corrupted — who senses, before articulation, that a seemingly functional addition has introduced an incoherence that will produce cascading problems months or years hence — is exercising precisely the kind of trained, experience-grounded, personally responsible judgment that Newman described.
The model cannot exercise this judgment, because the model does not possess the architectural illative sense. The model has pattern-level familiarity with the codebase. The person has lived understanding of the design. The gap between pattern-level familiarity and lived understanding is the same gap Newman identified between notional and real assent — and it is in this gap that architectural corruption takes root.
The practical implication is not that AI-generated code should be rejected. It should not. The efficiency gains are genuine and often extraordinary. The implication is that AI-generated code must be evaluated by human judgment operating at the architectural level — by persons whose understanding of the system is deep enough, and whose illative sense for design coherence is trained enough, to distinguish between development and corruption. The code that passes tests but fails the notes is a more insidious problem than the code that fails tests outright, because the failure is invisible until the accumulated corruption produces a systemic breakdown.
The Orange Pill describes this as "ascending friction" — the principle that technological abstraction removes difficulty at one level and relocates it to a higher cognitive level. AI removes the friction of writing code. It relocates the friction to the architectural level, where the question is no longer "Does this code work?" but "Does this system cohere?" Newman's developmental framework gives this ascending friction its philosophical grounding. The difficulty that has ascended is the difficulty of distinguishing development from corruption — and that difficulty, as Newman spent an entire book arguing, requires the exercise of trained judgment by persons who possess real understanding of the idea they are stewarding.
The steward of a software system in the age of AI occupies a position that is, structurally, analogous to the theologian in Newman's account: a person responsible for ensuring that the idea under their care develops genuinely rather than corrupting under the pressure of accumulated additions. The analogy is imperfect. The stakes are different. The material is different. But the cognitive operation — the exercise of trained judgment to distinguish faithful development from unfaithful corruption, applied to a complex, evolving body of work that must preserve its essential character while adapting to new circumstances — is recognizably the same. And the faculty required for this operation is the faculty Newman spent his life defending: the illative sense, formed by experience, operating in the domain where formal rules are insufficient, and exercised by persons who hold the system's integrity not as a notional commitment but as a real one — a commitment grounded in the lived understanding of what the system is and what it is for.
When Newman was elevated to the cardinalate in 1879, he chose a motto that surprised many who expected from him something more cerebral, more architectonic, more in keeping with the reputation of the most formidable intellect in Victorian England. He chose Cor ad cor loquitur — heart speaks to heart. The phrase was borrowed from Saint Francis de Sales, but Newman made it entirely his own. It expressed a conviction that had deepened across his entire intellectual life: that the most consequential communication between persons occurs not at the level of argument, however rigorous, but at the level of personal witness — the encounter between one vulnerable, mortal, truth-seeking being and another.
The conviction was not sentimental. Newman was the least sentimental of thinkers. He spent decades constructing arguments of extraordinary logical precision. He could dismantle an opponent's position with surgical exactness. He valued the intellect and its operations as highly as any philosopher of his century. But he recognized, from the evidence of his own experience and from the testimony of the tradition he had studied, that the intellect operating alone — the intellect detached from the personal, the concrete, the morally and existentially implicated — produces a form of communication that is powerful but ultimately insufficient. The intellect can demonstrate. Only the heart can convert.
The distinction maps onto Newman's epistemology with exact precision. Notional communication — the exchange of propositions, arguments, analyses — operates at the level of the intellect. It is the communication of ideas considered abstractly, in their logical relations, divorced from the personal histories and existential commitments of the persons exchanging them. Real communication — the encounter in which one person's lived conviction meets another's lived need — operates at the level of the heart. It is the communication that changes not merely what the other person thinks but who the other person is.
The machine excels at notional communication. It generates propositions with extraordinary fluency. It constructs arguments with a range and speed that no individual mind can match. It produces analyses that are, by virtually every external criterion, sophisticated. And it does all of this without a heart — without the personal, the vulnerable, the existentially committed quality that Newman regarded as the precondition of the communication that actually matters.
This is not a criticism of the machine's capabilities. It is a description of its nature. The machine communicates pattern to pattern, token to token, probability to probability. The communication is impersonal not because the machine is defective but because the machine is not a person. It does not possess the specific vulnerability of a mortal creature with stakes in the truth of its own utterances — a creature that can be hurt by what it says, changed by what it hears, held accountable for the correspondence between its words and its life.
Newman's Apologia Pro Vita Sua, published in 1864, is the supreme example of cor ad cor loquitur in his own practice. The book is an intellectual autobiography — an account of the development of his religious opinions from his youth through his conversion to Catholicism. But its power lies not in the arguments it marshals, formidable as they are, but in the quality of personal witness it achieves. Newman laid bare the interior history of his mind with a candor that exposed him to ridicule, misunderstanding, and the accusation that his conversion was motivated by ambition or instability rather than honest conviction. The exposure was the point. By making himself vulnerable — by allowing the reader to see not merely what he concluded but how he struggled, what it cost him, where he resisted his own conclusions because the conclusions were unwelcome — Newman achieved a mode of communication that no argument, however logically compelling, could have accomplished.
The reader of the Apologia does not merely understand Newman's position. The reader encounters Newman — a particular person, in a particular historical moment, wrestling with questions that resist resolution and arriving at conclusions that demanded the sacrifice of nearly everything he had built. The encounter is cor ad cor loquitur — heart speaking to heart across the distance of circumstance, personality, and time. And the encounter has a persuasive force that no syllogism can replicate, because the reader perceives not merely the conclusion but the integrity of the person who reached it.
The Orange Pill contains moments that approach this quality of communication, and the moments are instructive precisely because they illuminate the boundary between what AI collaboration can achieve and what it cannot. When the author describes the engineer in Trivandrum oscillating between excitement and terror, the passage works because the reader perceives a particular person in a particular circumstance undergoing a particular transformation — not a case study illustrating a general principle, but a human being confronting the concrete reality of changed ground. When the author describes lying awake at three in the morning, unable to stop building, recognizing the compulsion for what it is while being unable to stop, the passage communicates something that no analysis of "productive addiction" could convey: the lived experience of a specific consciousness caught between exhilaration and exhaustion.
These moments are instances of cor ad cor loquitur — not because they are emotionally manipulative, but because they proceed from personal witness. The author does not merely argue that AI creates a particular kind of vertigo. He testifies to having experienced it. The testimony carries a weight that no argument can replicate, because the weight comes from the reader's perception that the author has stakes in the truth of what he says — that the words proceed not from the manipulation of propositions but from the encounter of a particular life with a particular reality.
The machine cannot produce this kind of communication. It can simulate it. It can generate first-person accounts of emotional experience with a fluency that may, on the surface, be indistinguishable from genuine testimony. But the simulation lacks the one thing that gives testimony its force: the knowledge, on the part of the reader, that a real person stands behind the words — a person who can be challenged, who can be held accountable, who has paid the specific price that authentic testimony requires.
The twelve-year-old who asks "What am I for?" — the question that recurs throughout The Orange Pill as an emblem of the existential dimension of the AI transformation — is engaged in cor ad cor loquitur in its purest form. The question proceeds not from intellectual curiosity but from existential need. The child is not asking for information. She is asking for recognition — for the assurance that her existence matters in a way that the machine's competence does not threaten. The question can only be answered by a person — a parent, a teacher, a mentor — whose answer proceeds from genuine conviction, grounded in the lived experience of having wrestled with the same question and arrived at something that can be offered, not as a formula, but as a testimony.
Newman would observe that the twelve-year-old's question is, in its deepest structure, a question about real assent. The child is not asking whether the proposition "human beings have value" is logically defensible. She already knows that the proposition can be defended logically. What she is asking is whether anyone actually believes it — believes it with the force of real assent, the kind of conviction that is grounded in experience and tested by conscience and visible in the way the person who holds it actually lives. The child is asking whether anyone in her world holds this truth heart to heart, not merely mind to mind. And if the answer is no — if the adults in her world hold the proposition notionally, as a formula they can articulate but not as a conviction that shapes their conduct — the child will perceive the emptiness with the ruthless accuracy that children bring to the detection of inauthenticity.
Newman's pastoral concern — the concern that animated his educational project, his preaching, his thousands of letters of spiritual direction — was always with this quality of communication. The university exists, in Newman's vision, not merely to transmit knowledge but to form persons whose knowledge is held with the force of real assent — persons who can communicate cor ad cor loquitur because their convictions proceed from the genuine encounter of a whole person with the truth. The teacher whose understanding of her subject is merely notional — who holds the propositions of her discipline as formulas she can manipulate but not as truths she has personally appropriated — will communicate notionally, and her students will receive the communication notionally, and the entire educational transaction will produce information without formation.
The teacher whose understanding is real — who has wrestled with the material until it has become part of her, who holds it with the force of conviction tested by conscience and deepened by experience — communicates something that no machine can communicate: the quality of a person who has been changed by what she knows. The student does not merely receive information from this teacher. The student encounters a formed mind, a living testimony to the transformative power of genuine understanding, and the encounter is itself formative in a way that no quantity of propositional transmission, however efficient, can replicate.
Sister Catherine Joseph Droste, speaking at the 2025 Rome conference on Newman and artificial intelligence, counseled that "younger generations are using it much more, but they may not have the wisdom to discern how to use it well. And for that, I think older generations need to enter into it and take a dive, in a sense, so that we can talk with one another." The counsel is precisely Newmanian. The solution to the AI challenge is not prohibition or uncritical adoption but conversation — the specific, vulnerable, personally implicated conversation that Newman called cor ad cor loquitur. The older generation enters the technology not to master it but to bring to it the quality of formed judgment — real assent, conscience, the illative sense — that the younger generation has not yet had time to develop. And the communication between generations, when it proceeds heart to heart rather than merely screen to screen, achieves something that no institutional policy, no governance framework, no alignment protocol can substitute: the personal transmission of wisdom from one formed mind to another forming mind.
The machine speaks. It speaks with remarkable fluency, remarkable range, remarkable sophistication. But it does not speak heart to heart, because it has no heart — no personal history, no existential stakes, no vulnerability, no capacity to be changed by the encounter. The communication that forms persons, that transmits not merely information but conviction, that answers the twelve-year-old's question with the force of a life actually lived — this communication remains, as Newman always insisted, the irreducible contribution of the person. And the preservation of this communication, in an age saturated with sophisticated impersonal output, is the most important educational and cultural challenge of the present moment.
---
The argument of this book can be stated in a single sentence: The crisis of the age of artificial intelligence is not the abundance of information but the scarcity of persons capable of holding knowledge with real assent.
The sentence requires the entire preceding argument to be properly understood, because each of its key terms — crisis, information, persons, knowledge, real assent — carries a specific weight that Newman's philosophical framework has been building across nine chapters. But the sentence, properly understood, contains the whole.
The machine has solved the problem of information. It generates, synthesizes, organizes, and retrieves information at a scale and speed that render human effort in these domains not merely less efficient but categorically noncompetitive. Any person with access to a large language model has access to more information, more fluently organized, more rapidly delivered, than the greatest libraries of previous centuries could provide. The problem of information scarcity, which constrained human civilization for millennia, has been solved. Solved completely. Solved, in historical terms, overnight.
And the solution has revealed, with the clarity that only the resolution of one problem can bring to the existence of another, that information was never the fundamental problem. The fundamental problem was always what Newman spent his life addressing: the formation of persons capable of holding knowledge — not merely possessing it, not merely accessing it, not merely manipulating it — but holding it with the force of genuine conviction, personally appropriated, conscience-tested, grounded in the lived experience of a whole human being engaging with reality.
Newman distinguished real assent from notional assent not as a matter of degree but as a matter of kind. The person who holds a truth with real assent does not merely know more than the person who holds it notionally. She knows differently. The truth has been integrated into her understanding at a level that shapes how she perceives, how she judges, how she acts. The truth is not a proposition she can articulate. It is a conviction she inhabits. It has become part of the architecture of her mind, and it influences her response to new situations in ways that are partly below the level of conscious articulation — which is why Newman insisted that the most important knowledge is often the most difficult to express in propositional form.
The large language model produces, at every moment of its operation, notional outputs. This characterization is not a criticism of the quality of those outputs. They may be accurate, insightful, well-structured, even beautiful. But they are produced by a system that holds no proposition with real assent — that has no personal history of engagement with the truths it generates, no conscience that tests the honesty of its conclusions, no capacity for the kind of conviction that changes conduct. The outputs are, in Newman's precise sense, notional: propositions manipulated with extraordinary facility by an intelligence that is not constituted by the truths it processes.
The person who deploys these outputs faces a choice that Newman's framework renders visible. She can accept them notionally — taking the machine's output as adequate without personally engaging with the material, without exercising her own judgment about its truth, without undergoing the formative encounter that would make the knowledge genuinely her own. This path is efficient, comfortable, and, in the short term, often adequate. The outputs serve the immediate purpose. The brief is filed. The code is deployed. The analysis is submitted. The person moves on to the next task.
Or she can use the outputs as a starting point for the harder, slower, more personally demanding process of forming real assent. She reads the cases the machine cited and discovers, through the encounter with the primary material, nuances the machine missed. She examines the code the machine generated and perceives, through the exercise of architectural judgment, an incoherence the machine could not detect. She interrogates the analysis and finds, through the application of domain expertise formed across years of practice, an assumption the machine embedded without recognizing it as an assumption.
This second path is harder. It is slower. It is less efficient by every metric that organizational cultures typically apply. And it is the only path that produces the kind of knowledge Newman spent his life defending — the knowledge that is held by a whole person, grounded in experience, tested by conscience, and capable of informing the kind of judgment that the machine cannot exercise.
The Orange Pill poses the question that brings Newman's framework to its practical point: "Are you worth amplifying?" The question, recast in Newman's terms, becomes: Do you bring real assent to the collaboration?
The person who brings real assent to the human-AI partnership feeds the amplifier a signal shaped by the full complexity of a life actually lived. Her convictions are not formulas. They are the product of years of engagement with the domain — years of struggle, failure, partial understanding, revised judgment, deepening insight. When she directs the machine, the direction proceeds from genuine knowledge: knowledge that has been earned through the formative friction of encounter, tested against the reality of practice, and held with the kind of conviction that Newman argued is the highest product of the human mind.
The person who brings only notional assent feeds the amplifier a thin signal. Her convictions are opinions — positions adopted because they are available, because they align with the prevailing consensus, because the machine produced them and they sounded right. The amplification of this signal produces more of the same: more opinions, more positions, more plausible-sounding conclusions that satisfy the grammar of the prompt while leaving the grammar of assent untouched. The output is voluminous. The knowledge is absent.
Newman's framework does not condemn the machine. The machine is an instrument of extraordinary power, and instruments of extraordinary power are, in the Newmanian worldview, occasions for both gratitude and grave responsibility. The printing press was such an instrument. It democratized access to knowledge on a scale that transformed civilization. It also made possible the rapid dissemination of error, propaganda, and intellectual mediocrity. The outcome depended not on the press but on the persons who used it — their formation, their judgment, their conscience, their capacity for real assent to the truths they chose to propagate.
The same dependence obtains for artificial intelligence. The machine amplifies. What it amplifies depends upon the human signal. And the quality of the human signal depends upon the quality of the human formation — upon whether the person directing the machine possesses the kind of knowledge that Newman spent his life defending: knowledge held by the whole person, formed through struggle, deepened by conscience, and capable of the judgment that distinguishes what is merely produced from what is genuinely understood.
Jonathan Sanford, writing from the presidency of the University of Dallas, articulated what Newman's framework demands of the present moment: "The more we automate, the more we need leaders who can interpret, not merely execute. The more data we have, the more we need wisdom to decide what is worth pursuing. The more persuasive our tools become, the more we need a moral compass that cannot be programmed." The statement is a précis of Newman's entire educational and philosophical project, translated into the vocabulary of the AI age. Interpretation requires real assent — the personal, experienced, judgment-rich engagement with material that notional processing cannot provide. Wisdom requires conscience — the moral faculty that evaluates not merely what is logically consistent but what is genuinely right. A moral compass that cannot be programmed is precisely what Newman meant by conscience: the aboriginal faculty that precedes every external authority and holds the person accountable to a standard that no algorithm can encode.
The formation of persons capable of real assent is slow. It is expensive. It is resistant to optimization. It requires exactly the kind of friction — intellectual, moral, existential — that the efficiency of AI tools makes it tempting to eliminate. Every argument in favor of eliminating this friction is, in Newman's terms, an argument for replacing real assent with notional, for substituting the surface of knowledge for its substance, for producing graduates, professionals, and citizens who can manipulate propositions with facility while holding none of them with conviction.
Newman would not have opposed artificial intelligence. Newman was a man of remarkable intellectual range and genuine curiosity about the developments of his time. But Newman would have insisted, with the full force of his philosophical and moral conviction, that the formation of persons must precede and govern the deployment of tools — that no instrument, however powerful, can substitute for the slow, painful, irreplaceable process by which a human being comes to hold knowledge with real assent. The machine produces. The person knows. And the difference between producing and knowing is the difference between a civilization that has information and a civilization that has wisdom.
The work of the age is the work Newman always advocated: the formation of the whole person. Not the person as information processor, not the person as prompt engineer, not the person as node in an optimization network. The whole person: intellect and conscience, judgment and care, the capacity for real assent and the willingness to be changed by what one genuinely knows. The machine has made this formation more necessary than any previous technology made it, because the machine has made it, for the first time in human history, genuinely optional. The choice to pursue formation when formation is no longer required for the production of competent outputs is the choice that will determine whether artificial intelligence serves human flourishing or merely simulates it.
Newman staked his life on the conviction that the most important things a person can know are not the things that can be most efficiently demonstrated. They are the things that require the whole person — intellect, experience, conscience, the slow accumulation of lived engagement with reality — to be grasped at all. The age of artificial intelligence has not refuted this conviction. It has provided its most stringent test. And the test, like all of Newman's tests, is passed not by the brilliance of the argument but by the quality of the person who faces it.
---
The word that stayed with me was not one I expected. It was not amplification or intelligence or even assent. It was aboriginal — Newman's strange, arresting choice for describing conscience. The aboriginal vicar. The thing that was there before anything else arrived.
I kept turning that word over during the months I spent inside Newman's thinking. Aboriginal: existing from the beginning. Not something you acquire through education or training or optimization. Something you were born carrying, and that the question is whether you have maintained it or buried it under layers of justification and convenience.
Newman wrote in the nineteenth century about problems that should have been solved by now but are instead more acute than he could have imagined. The problem of mistaking fluency for understanding. The problem of holding truths as formulas instead of convictions. The problem of producing sophisticated outputs without undergoing the formation that would make the outputs trustworthy. These are the problems of the AI age, stated with Victorian precision a hundred and fifty years before the machines arrived.
What struck me hardest was the distinction between notional and real. I have lived that distinction. I described it in The Orange Pill without knowing its philosophical name — the engineer who knew, notionally, that AI would change everything, and who spent two days in a room in Trivandrum discovering what everything actually meant when it applied to him, specifically, concretely, inescapably. That passage from the abstract to the personal is the orange pill itself: the moment when a truth you could discuss at dinner becomes a truth you cannot sleep through.
Newman gave me the vocabulary for something I had been reaching for throughout this entire project. The crisis is not that the machines produce bad work. They produce extraordinary work. The crisis is that the extraordinary work makes it possible — comfortable, even — to stop doing the interior work that turns information into understanding. The grammar of the prompt is not the grammar of assent. Satisfying outputs are not genuine knowledge. And the difference matters, concretely, in the quality of what we build and who we become while building it.
I think about my son's question at the dinner table. I think about the twelve-year-old who asks what she is for. Newman would say: she is for real assent. She is for the kind of knowing that requires her whole self — her experience, her conscience, her willingness to sit with difficulty long enough for understanding to form. No machine can do that for her. No machine should.
The aboriginal vicar does not update with the latest model weights. It speaks the same language it has always spoken: Is this true? Is this right? Have you been honest? The questions are simple. Living up to them is the hardest thing there is.
That is the formation this age requires. Not the rejection of the machine, but the cultivation of the person who directs it — the person whose assent is real, whose conscience is functioning, whose heart can still speak to another heart across the distance that efficiency cannot close.
Newman saw this coming. Not the technology. The need.
The most sophisticated AI systems on earth can generate legal briefs, write code, produce medical analyses, and compose philosophy -- all with a fluency that outpaces any individual human mind. John Henry Newman, writing in the nineteenth century, decades before the first computer, built the philosophical framework that explains why this fluency is both genuinely powerful and genuinely dangerous.
Newman distinguished between holding a truth as an abstraction and holding it as a conviction that reshapes how you live. He identified the trained faculty of judgment -- the illative sense -- that operates where formal rules run out. He insisted that conscience, not compliance, is the aboriginal authority governing what we build and deploy. In the age of AI, these distinctions are no longer academic. They are survival skills.
This book applies Newman's framework to the transformation described in Edo Segal's *The Orange Pill* -- and reveals that the true scarcity of the AI age is not information, capability, or speed. It is the formation of persons capable of knowing the difference between what is merely produced and what is genuinely understood.

A reading-companion catalog of the 55 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that John Henry Newman — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →