Modularity is the structural antidote to the interactive complexity and tight coupling that Perrow's framework identifies as the architectural sources of normal accidents. A modular system is one whose components are designed as independent units with well-defined interfaces and limited interaction pathways. Modules that are not connected cannot transmit failure to one another. The system's overall complexity may remain high, but its effective coupling is reduced because failures are contained within the module in which they originate. The principle extends naturally to AI safety, where the LessWrong community's 2025 analysis proposed Just-In-Time Assembly, the temporary assembly of AI capabilities for specific tasks with disassembly afterward, as a direct application of Perrow's later prescriptions.
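To make the containment claim concrete, here is a minimal sketch, assuming nothing about any particular AI stack: each component holds private state, the only thing that crosses a boundary is a narrow message type, and a failure surfaces at the boundary of the module that raised it rather than propagating downstream. The names (Message, Module, Pipeline) are illustrative, not an established API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    """The only object allowed to cross a module boundary."""
    payload: str

class Module:
    """An independent unit: private state, one narrow entry point."""
    def __init__(self, name, fn):
        self.name = name
        self._fn = fn        # internal behavior, invisible to other modules
        self._state = {}     # private state, never shared across boundaries

    def handle(self, msg: Message) -> Message:
        return Message(self._fn(msg.payload))

class Pipeline:
    """Mediator: modules never reference one another directly, so a
    failure is caught at the boundary of the module that raised it."""
    def __init__(self, modules):
        self.modules = list(modules)

    def run(self, msg: Message) -> Message:
        for module in self.modules:
            try:
                msg = module.handle(msg)
            except Exception as exc:
                # Containment: name the failing module and stop;
                # downstream modules never receive corrupted state.
                raise RuntimeError(f"contained failure in {module.name}") from exc
        return msg

# Usage: two modules connected only through the Message interface.
pipeline = Pipeline([Module("trim", str.strip), Module("shout", str.upper)])
print(pipeline.run(Message(" hello ")).payload)  # HELLO
```

The design choice doing the work is the mediator: because no module holds a reference to any other, the only interaction pathway is the one the geometry permits.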
There is a parallel reading that begins from the material conditions of AI deployment rather than its architectural ideals. Modularity requires infrastructure — standardized interfaces, coordination protocols, monitoring systems, and the computational overhead of assembly and disassembly. Each module boundary introduces latency, each interface requires maintenance, each separation multiplies the attack surface for adversarial inputs. The prescription assumes a world where these costs are bearable, where the organizations implementing AI have the luxury of inefficiency. But the actual deployment environment is one of razor-thin margins, venture capital burn rates, and winner-take-all dynamics where a 10% performance penalty means market extinction.
The deeper problem is that modularity itself becomes a site of capture. Who defines the interfaces? Who maintains the standards? Who decides which modules can connect? The history of computing suggests these chokepoints concentrate power more effectively than any monolithic system. Microsoft didn't dominate through Windows as a monolith but through controlling the APIs that modules had to use. Google's power isn't in search as a single system but in being the interface layer between modular services. The modular AI future Perrow's framework suggests wouldn't distribute risk — it would create new categories of systemic risk at the interface layer, where a single protocol failure could cascade through every system that depends on it. The containment boundaries that look robust in the architectural diagram become the transmission pathways in practice, carrying failures not through direct coupling but through shared dependencies on common infrastructure.
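A hedged sketch of that common-mode pathway, with invented names throughout (wire_encode, IsolatedModule): modules that never call one another still all route through one shared protocol layer, so a single defect in that layer fails every module at once.

```python
import json

def wire_encode(payload: dict) -> bytes:
    """Stand-in for a shared protocol layer (a codec, schema, or SDK).
    A single defect introduced here fails every dependent module at once."""
    return json.dumps(payload).encode("utf-8")

class IsolatedModule:
    """Modules never call one another, yet all of them depend on
    wire_encode just to communicate at all."""
    def __init__(self, name: str):
        self.name = name

    def emit(self, data: dict) -> bytes:
        return wire_encode({"module": self.name, **data})

modules = [IsolatedModule(n) for n in ("planner", "retriever", "critic")]
messages = [m.emit({"status": "ok"}) for m in modules]

# The containment boundaries hold between modules, but if wire_encode
# raises (a schema change, a version mismatch), every emit() above fails
# simultaneously: a common-mode failure through shared infrastructure,
# with no failure ever crossing a module-to-module boundary.
```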
The modularity principle was central to Perrow's later work, particularly The Next Catastrophe. His shift from procedural to structural prescriptions reflected four decades of observation that procedural safety interventions degrade under pressure while architectural constraints persist. A modular design does not depend on operator discipline to function as designed; it does so because its geometry does not permit the failure pathways that non-modular designs allow.
For AI systems, modularity suggests designs that limit the capability concentration that currently characterizes frontier models. Smaller specialized models coordinated through explicit interfaces, rather than monolithic systems that handle every task through a single opaque cognitive architecture. Capabilities gated behind specific task contexts rather than made universally available. Disassembly after use rather than persistent state that accumulates latent interaction effects. Each of these design choices reduces interactive complexity and coupling in the ways Perrow's framework identifies as essential for containing normal accidents.
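A minimal sketch of what such a design might look like, under stated assumptions: task_scope, CAPABILITY_REGISTRY, and the task names are hypothetical, and the loaded "capabilities" are stand-ins for specialized models or tools. Capabilities are assembled only inside a declared task scope, gated by a registry, and discarded on exit so no latent state survives between tasks.

```python
from contextlib import contextmanager

# Illustrative registry of which capabilities each task context may use.
# The registry contents and task names are assumptions, not a real catalog.
CAPABILITY_REGISTRY = {
    "summarize": ["retrieval", "summarizer"],
    "translate": ["translator"],
}

def _load(capability: str):
    # Stand-in for loading a small specialized model or tool.
    return lambda text: f"[{capability}] {text}"

@contextmanager
def task_scope(task: str):
    """Just-In-Time Assembly: build only the capabilities this task is
    gated for, and disassemble them afterward so no persistent state
    survives to interact with the next task."""
    allowed = CAPABILITY_REGISTRY.get(task)
    if allowed is None:
        raise PermissionError(f"no capabilities gated for task {task!r}")
    assembled = {name: _load(name) for name in allowed}
    try:
        yield assembled
    finally:
        assembled.clear()   # disassembly: drop all per-task state

# Usage: capabilities exist only within the scope of one declared task.
with task_scope("summarize") as caps:
    out = caps["summarizer"]("long document text")
# Here the assembled capabilities are gone; nothing latent carries over.
```

The gate and the teardown are the two halves of the prescription: the registry limits which capabilities can co-occur, and the scope exit prevents the accumulation of latent interaction effects.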
For organizations, modularity suggests maintaining specialist silos with deliberate interfaces rather than dissolving them entirely. The Orange Pill celebrates the dissolution of silos as liberation; Perrow's framework suggests that some silo walls functioned as structural containment, and their dissolution removes protection that no amount of procedural safety can replace. The prescription is not to rebuild all walls but to rebuild the ones that matter — the ones whose removal creates common-mode failure pathways that the organization cannot otherwise defend against.
The competitive dynamics of the AI industry push against modularity. Integration is more efficient than modular assembly. Monolithic models outperform specialized ones on most benchmarks. Tight coupling produces the twenty-fold productivity multiplier that modular architectures cannot match. The structural forces that push organizations toward the dangerous quadrant of Perrow's matrix are economic, not technical, and they operate with a persistence that architectural prescriptions alone cannot overcome.
The modularity principle has roots in software engineering (Parnas, 1972) and in systems engineering more broadly. Perrow adopted it as a prescription in The Next Catastrophe (2007). AI safety researchers, particularly in the LessWrong community, have extended it to contemporary AI systems since roughly 2023.
Coupling reduction by structure. Modular design reduces effective coupling without requiring procedural discipline.
Failure containment. Unconnected modules cannot transmit failure; the geometry contains the damage.
Just-In-Time Assembly. Temporary assembly of capabilities for specific tasks prevents the accumulation of latent interaction effects.
Economic tension. Modularity is less efficient than integration; the market rewards the architectural choices that maximize normal accident probability.
Structural over procedural. Architectural constraints persist where procedural safety degrades under pressure.
The tension between modularity and integration resolves differently at different scales of analysis. At the component level within a single AI system, the contrarian view dominates (75/25) — the computational and performance costs of true modularity are prohibitive given current architectures, and the interface layers do become control points. The question "what makes AI competitive?" has an answer that runs directly counter to modular design. At the organizational level, the balance shifts (60/40 in favor of modularity) — specialist silos with deliberate interfaces genuinely do contain certain failure modes, though the coordination costs are real and the interface management creates new bureaucratic pathologies.
The key insight both views share is that architecture is destiny, but they disagree on which architecture we're actually building. Perrow's framework assumes we have architectural choice; the contrarian view observes that the substrate of modern AI — venture capital, cloud computing, platform economics — has already made that choice. The synthesis isn't to balance these views but to recognize they operate at different timescales. Modularity is the correct long-term prescription for a mature AI ecosystem; integration is the inevitable near-term reality of a technology in its gold rush phase.
The frame that holds both views is "containment strategy as a function of system maturity." Early in a technology's lifecycle, integration dominates because the efficiency gains fund the experimentation that discovers what modules should exist. As systems mature and failure modes become clear, modularity becomes both possible (we know what to separate) and necessary (we've seen what happens when we don't). The question isn't whether to pursue modularity but when the industry will mature enough that the economic incentives align with the architectural prescription. Until then, we're building the tightly coupled systems that Perrow's framework predicts will fail normally and catastrophically.