The seven notes, articulated in An Essay on the Development of Christian Doctrine (1845), are diagnostic markers by which a living body of thought can be assessed for integrity across time. Newman designed them for theological application but insisted on their broader relevance: the question they address — how to distinguish growth from decay in an idea that must adapt to new circumstances — arises for any continuing tradition, discipline, institution, or system. In the AI age, the notes provide a surprisingly precise framework for evaluating whether the accumulation of machine-generated contributions to a software architecture, a legal corpus, a scientific literature, or a curriculum constitutes legitimate development or a quiet corruption by accretion.
There is a parallel reading that begins not with the question of development versus corruption, but with the question of who benefits from the distinction. Newman's notes presuppose an evaluative standpoint outside the process of change itself—a formed steward, an institutional position, a tradition with the capacity to refuse accretion. In the AI age, this standpoint is precisely what is being dissolved. The seven notes function ideologically: they provide a vocabulary for legitimating some changes as 'genuine development' while delegitimating others as 'corruption,' but the classification itself becomes a site of power. Who decides which AI-generated additions preserve type? Who determines whether principles remain operative when the system's economic logic has fundamentally shifted? The notes naturalize a conservative posture—they assume that the original form embodies something worth preserving—while obscuring the possibility that what AI enables is not corruption of a prior integrity but exposure of a prior incoherence.
More fundamentally, the framework assumes that systems have essences that can be preserved or violated. But software architectures, legal corpora, and scientific literatures are not organisms with telos; they are assemblages maintained by ongoing labor under specific material conditions. When AI alters the composition of that labor, it does not corrupt an essence—it reveals that 'conceptual integrity' was always a fragile achievement of coordinated human attention, now no longer economically sustainable. The seven notes mistake an achievement for a property, a practice for a structure. They offer diagnostic precision for a phantom.
The first note, preservation of type, asks whether the developed form remains recognizably the same kind of thing as the original. A software system whose architecture has been steadily altered by AI-generated additions may still execute its functions, but if the pattern of dependencies, the structure of abstractions, and the conceptual backbone have been quietly replaced, the type has not been preserved.
The second note, continuity of principles, asks whether the fundamental logic that governs the idea remains operative. Each AI addition may respect local conventions while collectively undermining the architectural principles that gave the system coherence. The principles are still nominally endorsed; they are no longer operative.
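The distinction between a nominally endorsed principle and an operative one can be made concrete. Below is a minimal, hypothetical sketch in Python (module names and the specific principle are invented for illustration): the governing principle is "no dependency cycles," a property that no single added edge can violate on its own. A per-change check of each new edge in isolation passes every time, while the cumulative graph breaks the principle.

```python
# Hypothetical sketch: the architectural principle is "no dependency cycles".
# No single edge is itself a cycle, so a local check of each change passes;
# only the cumulative graph violates the principle.
from graphlib import TopologicalSorter, CycleError

def acyclic(edges) -> bool:
    """Return True if the dependency edges contain no cycle."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    try:
        tuple(TopologicalSorter(graph).static_order())
        return True
    except CycleError:
        return False

edges = set()
additions = [("a", "b"), ("b", "c"), ("c", "a")]  # each locally innocuous

for edge in additions:
    assert acyclic({edge})  # the local check: this edge alone is fine
    edges.add(edge)

print("principle still operative:", acyclic(edges))  # False: a cycle formed
```

The point of the sketch is that acyclicity, like most architectural principles, is a global property: it can only be evaluated over the whole, which is exactly where change-by-change review never looks.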
The third note, power of assimilation, asks whether the idea can absorb new material from its environment without losing its character — the mark of a living tradition as opposed to either a dead one or a dissolving one. The fourth, logical sequence, asks whether the development follows from the original by a recognizable chain of reasoning, even if the chain is not formally deductive. The fifth, anticipation of the future, asks whether earlier stages contain hints or foreshadowings of later developments — a test that is retrospective but not therefore arbitrary.
The sixth note, conservative action upon the past, asks whether development preserves and deepens earlier expressions of the idea rather than repudiating them. The seventh, chronic vigour, asks whether the developed form is more alive, more active, more capable of engaging with its environment than a merely stagnant version would be.
Application of the notes requires judgment, not calculation. They are not a checklist. They are the kinds of questions that a formed steward of a tradition asks — and that a tradition without formed stewards cannot answer for itself. In the AI age, the formation of such stewards becomes the decisive institutional question.
The notes were developed across the last chapters of the Essay on the Development of Christian Doctrine. Newman's motivation was polemical as well as philosophical: he needed to show, to himself and his readers, that specific Catholic doctrines not explicit in the first centuries could nonetheless be legitimate developments of the original faith.
The notes have been adopted, adapted, and critiqued across disciplines. Theologians have refined them; historians of ideas have applied them to non-religious traditions; in 2024–2025, a small but growing literature in software architecture has taken them up as a framework for evaluating the integrity of AI-assisted codebases.
The notes are seven, not arbitrarily many. Each names a different dimension along which a living tradition can remain whole or come apart.
They operate together, not individually. A development that satisfies some notes and fails others may be partly legitimate and partly corrupting; the judgment of the whole is the aim.
They require trained stewards. Their application is not algorithmic; it depends on the illative sense of someone formed in the tradition.
They are portable across domains. Theology, biology, software, and curriculum have all proven responsive to the framework, with appropriate adjustments.
They expose corruption that surface metrics miss. Individual additions can pass every local test while the cumulative result violates multiple notes — the characteristic AI-era failure mode.
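This failure mode can be illustrated with a small, hypothetical sketch in Python (the module names, the review gate, and the similarity threshold are all invented for illustration, not a real tool): each change passes a local review gate that caps how many dependency edges it may touch, yet the sequence of changes carries the system arbitrarily far from its original shape, as measured by a simple Jaccard similarity between the original and current edge sets.

```python
# Hypothetical sketch: every change passes a local size gate, yet the
# cumulative dependency graph drifts entirely away from the original "type".
# Names, gates, and thresholds are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Similarity of two dependency-edge sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Original layered architecture: ui -> service -> storage
original = {("ui", "service"), ("service", "storage")}
current = set(original)

LOCAL_GATE = 2      # max edges an individual change may touch
GLOBAL_GATE = 0.5   # minimum similarity for "preservation of type"

# A stream of machine-generated changes, each individually small.
changes = [
    {"add": {("ui", "storage")}, "drop": set()},
    {"add": {("storage", "service")}, "drop": set()},
    {"add": {("service", "ui")}, "drop": {("ui", "service")}},
    {"add": set(), "drop": {("service", "storage")}},
]

for ch in changes:
    assert len(ch["add"]) + len(ch["drop"]) <= LOCAL_GATE  # local gate: pass
    current = (current - ch["drop"]) | ch["add"]

similarity = jaccard(original, current)
print(f"similarity to original type: {similarity:.2f}")  # 0.00
print("preservation of type:", "ok" if similarity >= GLOBAL_GATE else "violated")
```

No single change trips the local gate, but the final graph shares no edges with the original: a quantitative caricature of corruption by accretion, and of why the first note must be applied to the whole rather than to the diff.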
Whether the notes are exhaustive, or whether additional criteria are needed for domains Newman did not anticipate, is a live question. Scholars working on AI applications have proposed additional notes — for example, transparency of provenance — that speak to concerns specific to machine-generated contributions.
The right framing recognizes integrity as something neither essential nor illusory, but enacted. Newman's notes are correct in identifying the dimensions along which coherence can be assessed: preservation of type, continuity of principles, power of assimilation. These are not arbitrary categories; they name real structural relationships that systems either exhibit or don't. The contrarian view is equally correct in observing that these relationships are maintained by ongoing institutional work under specific material conditions. The notes describe what must be evaluated; they do not describe who can evaluate it or under what economic arrangements such evaluation remains possible.
The question of whether AI-generated contributions constitute development or corruption is therefore not answered by applying the notes alone; it depends on the prior question of whether formed stewards with the capacity to apply them still exist and are organizationally empowered. Where such stewards remain operative, the notes provide actionable guidance: they can identify architectural drift that local metrics miss, flag principle violations that emerge only at scale, and distinguish vigorous evolution from brittle accretion. Where such stewardship has dissolved, the contrarian reading carries more weight: the notes become nostalgic, describing a form of institutional health that is no longer economically viable.
The synthetic insight is that the notes are self-referential: chronic vigour, the seventh note, applies to the evaluative tradition itself. A living tradition can use Newman's framework to assess its own AI-mediated development. A tradition that has lost the capacity to form and empower stewards cannot answer the questions the notes pose—not because the questions are wrong, but because answering them requires the very institutional health they diagnose.