Traditional organizations with specialist silos accidentally created diversified portfolios of cognitive risk. Each specialist's errors were her own, uncorrelated with others, containable within her domain. The probability that multiple specialists produced errors simultaneously, on the same feature, affecting the same users, was the product of their individual error rates — a very small number. This statistical independence was a byproduct of organizational structure rather than an engineered safety feature, but it functioned as one, and its loss has consequences.
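The diversification arithmetic is worth making concrete. A minimal sketch, with assumed numbers (the 5% error rate and the five-specialist feature are illustrative, not figures from the text):

```python
# Illustrative arithmetic only; the rates below are assumptions.
error_rate = 0.05      # assumed chance a given specialist errs on a feature
specialists = 5        # specialists independently touching the same feature

# Independent errors: a compound failure requires all of them to err
# at once, so the joint probability is the product of the rates.
p_joint_independent = error_rate ** specialists      # roughly 3e-07

# One mind doing all five roles: a single cognitive failure IS the
# compound failure, so the joint probability collapses to the rate itself.
p_joint_correlated = error_rate

# How much exposure the loss of independence buys.
amplification = p_joint_correlated / p_joint_independent
```

With these assumed numbers, correlation inflates the compound-failure probability by a factor of about 160,000 — the diversification that silos provided for free.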
When one person operates across twenty domains through AI mediation, the independence disappears. A cognitive bias distorting her reasoning in one domain distorts it in all twenty. Fatigue that degrades her judgment at three in the afternoon degrades every decision across every domain that afternoon. A fundamental misunderstanding of requirements propagates through every feature she builds, because every feature passes through the same cognitive bottleneck. This is common-mode failure applied to cognition.
The speed dimension compounds the exposure. At twenty-fold velocity, the same feature is built in days rather than months. Errors accumulate at twenty times the rate while the windows for discovering them compress. The cumulative undetected-error load grows faster than any review process can clear it, leaving a swelling inventory of latent failures — errors embedded in the system, dormant until the conditions that activate them arrive.
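The inventory dynamic can be sketched as a toy inflow/outflow model. The rates below are hypothetical, chosen only to show the shape of the problem — error inflow that scales with velocity against review capacity that does not:

```python
# Toy model of latent-error inventory; all rates are hypothetical.
def latent_backlog(weeks: int, velocity: float,
                   errors_per_unit_work: float = 2.0,
                   review_capacity: float = 10.0) -> float:
    """Undetected errors remaining after `weeks` of work."""
    backlog = 0.0
    for _ in range(weeks):
        backlog += velocity * errors_per_unit_work   # inflow scales with speed
        backlog -= min(backlog, review_capacity)     # review clears a fixed amount
    return backlog

baseline = latent_backlog(weeks=12, velocity=1.0)    # review keeps pace
amplified = latent_backlog(weeks=12, velocity=20.0)  # inflow 40/week vs 10 cleared
```

At baseline velocity the review process clears everything and the backlog stays at zero; at twenty-fold velocity the same process leaves a net thirty errors per week, a backlog that grows without bound for as long as the pace holds.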
The organizational concentration of risk follows directly. When twenty specialists do the work, the incapacitation of any one produces localized disruption; the other nineteen continue. When one person does the work of twenty, every function depends on a single point of failure. Segal's decision to maintain engineering team size despite the multiplier, which he frames as a human-values commitment, is — whether he recognizes it or not — a redundancy preservation decision. Perrow's framework reveals why: redundancy is the primary defense against common-mode failure in systems where interactive complexity makes specific failure prediction impossible.
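The redundancy arithmetic behind that claim can be shown with one assumed number (the 2% daily unavailability figure is hypothetical, and specialists' absences are treated as independent):

```python
# Hypothetical figure: probability any given worker is out on a given day.
p_unavailable = 0.02

# Twenty specialists, one per function: a total halt requires all
# twenty to be out at once; one absence leaves nineteen functions running.
p_halt_twenty = p_unavailable ** 20          # effectively zero

# One person covering all twenty functions: every function halts
# whenever that single person does.
p_halt_one = p_unavailable                   # 2% of days, portfolio-wide
```

The failure modes differ in kind, not just degree: the specialist organization degrades gracefully, losing one function at a time, while the single-worker organization is all-or-nothing.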
The concept emerges from the collision of Segal's productivity arithmetic in You On AI with Perrow's framework for analyzing correlated failures in complex systems. Neither Segal nor Perrow articulated it in this form; the multiplier's two-faced character becomes visible only when the two frameworks are held against each other. It resolves into five components.
- Correlated errors. AI-mediated cross-domain work converts statistically independent errors into correlated ones, eliminating natural diversification.
- Cognitive bottleneck. Every decision passes through a single mind, so every cognitive failure propagates across every domain that mind touches.
- Accelerated latent failure accumulation. Twenty-fold speed produces twenty times the error rate while compressing detection windows.
- Single-point fragility. The twenty-fold worker is an organizational single point of failure in a way twenty specialists were not.
- Productivity as risk measure. The same number quantifies the capability expansion and the failure-exposure concentration.