Interactive complexity is the first of Perrow's two structural conditions for normal accidents. In a linear system, component A feeds B feeds C, and failure propagates along a predictable chain. In an interactively complex system, components interact radially through pathways that cross physical barriers, software abstractions, and organizational walls. The interactions are not sequential but networked; failures do not march but radiate, producing combinations no one tested because no one imagined them. Interactive complexity is not a failure of imagination but a mathematical ceiling: the space of possible interactions grows combinatorially while the space of testable interactions grows linearly, guaranteeing that unexamined pathways exceed examined ones by orders of magnitude.
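That arithmetic is easy to make concrete. The sketch below is a minimal illustration with invented numbers; the component counts, the per-component test budget, and the cap at three-way interactions are assumptions chosen for illustration, not figures from Perrow. It counts the interaction pathways among n components and compares them with a test budget that grows only linearly in n:

```python
from math import comb

def interaction_pathways(n_components: int, max_order: int = 3) -> int:
    """Count the ways 2..max_order components can interact at once."""
    return sum(comb(n_components, k) for k in range(2, max_order + 1))

TESTS_PER_COMPONENT = 10  # illustrative linear test budget (assumption)

for n in (10, 50, 200):
    pathways = interaction_pathways(n)  # grows combinatorially in n
    budget = TESTS_PER_COMPONENT * n    # grows linearly in n
    print(f"{n:>4} components: {pathways:>9,} pathways vs "
          f"{budget:>5,} tests ({pathways // budget:,}x shortfall)")
```

Even capped at three-way interactions, the shortfall widens from roughly even at ten components to several hundredfold at two hundred; lift the cap and the gap grows exponentially.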
The concept distinguishes Perrow's framework from earlier risk analyses that treated accidents as failures of specific components. In linear systems, you can trace backward from the failure to the component that caused it and redesign that component. In interactively complex systems, the 'cause' is not a component but an interaction, and that interaction is one of millions of possible combinations, most of which go untested until one produces a failure severe enough to force examination.
The Orange Pill's architecture — dissolved specialist silos, generalists operating across multiple domains, AI collaboration that crosses every boundary — systematically increases interactive complexity. When specialists work in separate domains, interactions between domains are mediated by handoffs and reviews; the silo wall functions as a containment structure. When the wall comes down, domain interactions multiply, and a misunderstanding in one domain no longer produces a localized failure but a correlated failure propagating simultaneously across every domain the generalist touches.
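That last claim about correlation is, at bottom, a probability statement, and a toy calculation shows its size. In this minimal sketch the per-judgment error rate and the number of domains are invented for illustration, not drawn from The Orange Pill:

```python
P_ERROR = 0.05  # assumed chance one judgment rests on a flawed premise
DOMAINS = 4     # assumed number of domains covered

# Independent specialists: failing all four domains requires four
# separate, uncorrelated errors.
p_independent = P_ERROR ** DOMAINS

# One generalist: a single flawed premise propagates into every domain
# she touches, so one error fails all four at once.
p_correlated = P_ERROR

print(f"all domains fail, independent specialists: {p_independent:.6%}")
print(f"all domains fail, one generalist:          {p_correlated:.6%}")
```

Under these assumed numbers the correlated case is 8,000 times likelier (5% versus 0.000625%): the same error rate, concentrated rather than distributed.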
Human-AI collaboration is a particularly sharp case of interactive complexity because both parties contribute cognitive processes neither can fully inspect: the human is biographically opaque to herself, the AI architecturally opaque to everyone. The Deleuze failure described in The Orange Pill is a textbook example. Claude's pattern-matching met Segal's confirmation bias through an interaction pathway that neither party, operating alone, would have produced or detected.
The practical consequence is that organizations cannot safety-engineer their way out of interactive complexity through better testing. The testing itself is finite; the interaction space is effectively infinite. What organizations can do is reduce the complexity through modularity, maintain redundancy that absorbs unanticipated failures, and invest in the HRO capabilities that detect failures early enough to contain them.
Perrow developed the concept while analyzing Three Mile Island, where the specific failure cascade involved a stuck-open pressure relief valve, draining coolant, a misread instrument, and an operator intervention; each would have been survivable alone, but in combination they produced core damage. The combination was not in any fault tree. Perrow generalized the pattern across nuclear plants, petrochemical facilities, aviation, and eventually financial systems.
Radial, not sequential. Interactively complex systems fail through networks, not chains; intervention at any single link cannot interrupt a radiating failure.
Combinatorial explosion. The space of possible interactions grows as a combinatorial function of component count, exceeding any feasible safety analysis.
Invisible pathways. The interactions that produce catastrophe typically cross boundaries — physical, software, organizational — that hide them from inspection.
Correlated failure. When a single mind operates across multiple domains, failures become correlated rather than independent, concentrating rather than distributing risk.
Detection versus prevention. Interactive complexity cannot be prevented in complex systems; it can only be detected early and contained through organizational redundancy.