The modularity principle was central to Perrow's later work, particularly The Next Catastrophe. His shift from procedural to structural prescriptions reflected four decades of observation that procedural safety interventions degrade under pressure while architectural constraints persist. A modular design does not rely on operator discipline; it behaves as designed because its geometry does not permit the failure pathways that non-modular designs allow.
For AI systems, modularity suggests designs that limit the capability concentration that currently characterizes frontier models: smaller specialized models coordinated through explicit interfaces rather than monolithic systems that handle every task through a single opaque cognitive architecture; capability gated behind specific task contexts rather than universally available; disassembly after use rather than persistent state that accumulates latent interaction effects. Each of these design choices reduces interactive complexity and coupling in the ways Perrow's framework identifies as essential for containing normal accidents.
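The design pattern described above can be sketched in code. This is a minimal illustration, not any real system's architecture: the module classes, the `Coordinator`, and the capability names are all hypothetical, and the point is only the geometry — each module exposes one narrow capability through an explicit interface, and a task can reach a capability only if the coordinator has granted it for that context.

```python
class ArithmeticModule:
    """Specialist module: evaluates simple additions and nothing else."""
    capability = "arithmetic"

    def run(self, task: str) -> str:
        a, b = (int(x) for x in task.split("+"))
        return str(a + b)


class EchoModule:
    """Specialist module: uppercases its input and nothing else."""
    capability = "echo"

    def run(self, task: str) -> str:
        return task.upper()


class Coordinator:
    """Routes each task to exactly one module via an explicit registry.

    Modules share no state; a capability absent from the registry is
    structurally unreachable, not merely forbidden by policy.
    """

    def __init__(self, modules):
        self.registry = {m.capability: m for m in modules}

    def dispatch(self, capability: str, task: str) -> str:
        module = self.registry.get(capability)
        if module is None:
            # Fail closed: the capability was never assembled into this context.
            raise PermissionError(f"capability {capability!r} not available")
        return module.run(task)


coordinator = Coordinator([ArithmeticModule(), EchoModule()])
print(coordinator.dispatch("arithmetic", "2+3"))  # prints 5
```

The containment here is geometric rather than procedural: no amount of misbehavior inside `EchoModule` can invoke `ArithmeticModule`, because no pathway between them exists.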
For organizations, modularity suggests maintaining specialist silos with deliberate interfaces rather than dissolving them entirely. The Orange Pill celebrates the dissolution of silos as liberation; Perrow's framework suggests that some silo walls functioned as structural containment, and their dissolution removes protection that no amount of procedural safety can replace. The prescription is not to rebuild all walls but to rebuild the ones that matter — the ones whose removal creates common-mode failure pathways that the organization cannot otherwise defend against.
The competitive dynamics of the AI industry push against modularity. Integration is more efficient than modular assembly. Monolithic models outperform specialized ones on most benchmarks. Tight coupling produces the twenty-fold productivity multiplier that modular architectures cannot match. The structural forces that push organizations toward the dangerous quadrant of Perrow's matrix are economic, not technical, and they operate with a persistence that architectural prescriptions alone cannot overcome.
The modularity principle has roots in software engineering (Parnas, 1972) and systems engineering generally. Perrow adopted it as a prescription in The Next Catastrophe (2007). AI safety researchers, particularly in the LessWrong community, have extended it to contemporary AI systems since roughly 2023.
Coupling reduction by structure. Modular design reduces effective coupling without requiring procedural discipline.
Failure containment. Unconnected modules cannot transmit failure; the geometry contains the damage.
Just-in-time assembly. Temporary assembly of capabilities for specific tasks prevents the accumulation of latent interaction effects.
Economic tension. Modularity is less efficient than integration; the market rewards the architectural choices that maximize normal accident probability.
Structural over procedural. Architectural constraints persist where procedural safety degrades under pressure.
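The just-in-time assembly principle above can be sketched with a context manager. This is a hypothetical illustration under stated assumptions — the `Assembly` class and its capability functions are invented for the example — but it shows the structural point: capabilities are wired together only for the duration of one task, and teardown runs unconditionally, so no latent state survives to interact with the next task.

```python
from contextlib import contextmanager


class Assembly:
    """A temporary bundle of capabilities for a single task."""

    def __init__(self):
        self._tools = {}

    def grant(self, name, fn):
        self._tools[name] = fn

    def use(self, name, *args):
        return self._tools[name](*args)

    def teardown(self):
        # Structural guarantee: nothing persists after the task.
        self._tools.clear()


@contextmanager
def assemble(*grants):
    """Assemble capabilities for one task, disassemble afterwards."""
    asm = Assembly()
    for name, fn in grants:
        asm.grant(name, fn)
    try:
        yield asm
    finally:
        asm.teardown()  # runs even if the task raises


with assemble(("double", lambda x: 2 * x)) as asm:
    result = asm.use("double", 21)
# After the block, the assembly holds no capabilities at all.
```

Disassembly in a `finally` clause is the point of the design: the cleanup is not a procedure the operator must remember, but a path the control flow cannot avoid.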