CONCEPT

The Coherence Illusion

The AI-era failure mode in which individually polished outputs fail to cohere when assembled — each piece appears internally consistent while missing the genuine coordination that only happens when the pieces are developed together.

The specific coordination failure the AI age makes newly possible — and newly invisible. When every member of a team can operate across domains, each producing AI-assisted outputs independently, those outputs may appear polished and competent in isolation but fail to cohere when assembled. The coherence illusion is dangerous because it is invisible until assembly. Each piece looks good on its own. The integration failure appears only when the pieces must function as a whole, and by that point, retrofitting the coordination that should have been built in from the start costs many times more than building it in would have. Follett's four principles of coordination — direct contact, early engagement, reciprocal adjustment, and continuous process — address the failure structurally.

The Substrate of Seamlessness — Contrarian ^ Opus

There is a parallel reading that begins from the material infrastructure that makes this apparent coherence possible. The coherence illusion isn't merely a coordination failure — it's the predictable outcome of computational systems designed to maximize local optimization while obscuring global costs. Every AI-polished output that appears competent in isolation depends on massive data centers, energy grids, and rare earth mining operations that remain invisible to the user generating the output. The illusion isn't just that pieces fail to cohere; it's that they appear to cohere effortlessly while their actual coherence depends on an extractive infrastructure that cannot scale.

The fintech leak example misses the deeper pattern: these systems fail not because humans forgot Follett's principles, but because the substrate itself — the computational architecture, the training data, the optimization functions — encodes a specific kind of fragmentation. AI tools are trained on corpora of human work that were themselves products of coordinated effort, but they reproduce only the surface patterns, not the underlying social processes that generated them. The coherence illusion is thus not a bug but a feature of systems designed to compress human judgment into statistical patterns. When organizations adopt these tools, they're not just risking integration failures; they're replacing the social fabric of coordination with a computational simulacrum that will always fail at precisely the moments when genuine human judgment about trade-offs, priorities, and purposes matters most. The solution isn't better coordination principles but recognition that certain kinds of coherence cannot be computationally substituted without fundamentally changing what is being produced.

— Contrarian ^ Opus

In the AI Story

[Hedcut illustration: The Coherence Illusion]

The coherence illusion is the AI-era manifestation of a pattern Follett identified decades before: coordination is the fundamental activity of organization, more fundamental than planning or controlling. An organization that coordinates well can survive poor planning, because coordination — continuous mutual adjustment of parts to each other and to the whole — corrects poor plans in real time. An organization that plans well but coordinates poorly will execute its plans into failure, because the gap between assumptions and reality widens at every implementation point.

AI tools reduce the apparent need for coordination at the implementation level — handoffs between frontend and backend, between design and engineering, can be compressed by tools that let individuals operate across domains. But coordination at the strategic level — alignment of purpose, shared understanding, continuous mutual adjustment about what is being built and why — must intensify precisely because individual contributions are larger, more ambitious, and more cross-domain than before. The AI-augmented organization needs more coordination infrastructure, not less.

The eight-month fintech leak documented in the Spolsky literature is the paradigmatic case: a three-person payment processing startup whose AI-generated code was individually correct across every component but contained a catastrophic concurrency flaw at the integration boundary that no component author had seen because no author had been looking at the whole. The code passed every unit test. The system lost eight months of transactions.
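
The pattern can be made concrete in a few lines of code. What follows is a minimal, hypothetical sketch in Python, not the startup's actual code (which the case does not reproduce): two components, each correct and unit-testable in isolation, whose composition hides a check-then-act race that loses updates at the boundary.

```python
# Hypothetical sketch of the failure pattern: each component is correct
# in isolation; the flaw exists only at their integration boundary.
import threading
import time

class Ledger:
    """Component A: stores a balance. Its unit tests (read/write
    round-trips) all pass."""
    def __init__(self, balance):
        self.balance = balance

    def read(self):
        return self.balance

    def write(self, new_balance):
        self.balance = new_balance

class PaymentProcessor:
    """Component B: debits a ledger. Its unit tests (sequential debits
    against a stub ledger) all pass."""
    def __init__(self, ledger):
        self.ledger = ledger

    def debit(self, amount):
        current = self.ledger.read()
        time.sleep(0.01)  # widen the race window for demonstration
        if current >= amount:
            # Check-then-act across the component boundary: another
            # thread can interleave between read() and write(), so one
            # debit silently overwrites another (a lost update).
            self.ledger.write(current - amount)

if __name__ == "__main__":
    ledger = Ledger(100)
    processor = PaymentProcessor(ledger)
    threads = [threading.Thread(target=processor.debit, args=(10,))
               for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Ten debits of 10 from 100 should leave 0; under the race, most
    # debits are lost and the final balance is typically 90.
    print(ledger.balance)
```

Sequential unit tests of either class pass every time; only the assembled, concurrent system exposes the lost update, which is the sense in which the flaw belongs to the whole rather than to any component.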

Follett's four principles of coordination directly address this failure mode. First, coordination through direct contact between responsible people rather than through intermediaries — the nuances of a situation do not survive hierarchical compression or AI summarization. Second, coordination from the beginning, before parts have hardened into shapes resistant to adjustment. Third, coordination as reciprocal — all parts adjusting to all other parts simultaneously. Fourth, coordination as continuous, not a one-time achievement. These principles were demanding when Follett articulated them and remain the diagnostic test for whether an AI-augmented organization is generating coordination or merely accumulating polished outputs that will fail at the seams.
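
A hedged continuation of the sketch above shows what coordination from the beginning looks like in code terms: when the boundary is designed jointly at the start, the balance invariant lives inside the component that owns the state, where the check and the update are atomic. The class below is illustrative, not a prescribed fix from the source.

```python
# Continuing the hypothetical sketch: the same system designed jointly
# from the start. The invariant (never debit below zero) lives inside
# the component that owns the state, under a lock, so no caller can
# reconstruct a racy check-then-act across the boundary.
import threading

class Ledger:
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def debit(self, amount):
        # The check and the update are a single atomic operation.
        with self._lock:
            if self.balance < amount:
                raise ValueError("insufficient funds")
            self.balance -= amount
```

The repair is not cleverer code in either component but a different boundary, the kind of decision that, in Follett's terms, requires direct contact between the responsible people before the parts harden.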

Origin

The coherence illusion concept crystallizes in the Follett volume from the collision of her coordination principles with the specific failure patterns the AI age makes possible. Individual Follett readers had observed that her coordination framework remained relevant; the On AI volume names the specific new-world failure her old-world framework diagnoses.

Key Ideas

Individually polished, collectively fragile. Each AI-generated component passes its own tests while failing at integration boundaries.

Invisible until assembly. The failure appears only when pieces must function as a whole.

AI reduces apparent coordination need. Cross-domain individual capability masks the continuing need for strategic alignment.

Coordination infrastructure must intensify. Larger, more ambitious, more cross-domain contributions require more coordination, not less.

Follett's four principles diagnose the failure. Direct contact, early engagement, reciprocal adjustment, continuous process — all are undermined by individualized AI workflows.

Appears in the Orange Pill Cycle

Layers of Coherence Failure — Arbitrator ^ Opus

The right frame depends entirely on which layer of the problem we examine. At the tactical level of software integration and project delivery, Edo's analysis is essentially correct (90/10): the coherence illusion manifests exactly as described, with individually polished components failing at boundaries, and Follett's principles offer genuine diagnostic value. The eight-month fintech leak perfectly illustrates how AI-assisted development can produce locally correct but globally catastrophic outcomes.

But zoom out to the infrastructural level, and the contrarian view gains force (70/30): the coherence problem isn't just organizational but substrate-deep. The computational systems generating these polished outputs encode a specific kind of fragmentation — they're trained on the products of human coordination but can't reproduce the coordination process itself. This isn't a failure we can coordinate our way out of; it's built into the architecture of statistical pattern-matching attempting to simulate judgment. The energy and material costs of maintaining the illusion of seamlessness represent a different kind of incoherence that Follett's framework doesn't address.

The synthetic insight is that we're dealing with nested coherence failures operating at different scales. The immediate challenge is organizational: how to maintain human coordination when AI tools make it seem unnecessary. The deeper challenge is systemic: how to recognize when the tools themselves embody a kind of incoherence — not just failing to coordinate but actively obscuring the need for coordination by making everything look superficially complete. The coherence illusion thus names both a coordination failure and a perceptual failure, where the very polish of AI outputs prevents us from seeing the cracks until systems fail catastrophically. The solution requires both better coordination practices and better recognition of what kinds of coherence can and cannot be computationally substituted.

— Arbitrator ^ Opus

Further reading

  1. Mary Parker Follett, 'The Illusion of Final Authority' (1926), in Dynamic Administration
  2. Frederick Brooks, The Mythical Man-Month (1975)
  3. Joel Spolsky, 'The Law of Leaky Abstractions' (2002)
  4. Diane Vaughan, The Challenger Launch Decision (1996)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.