The mechanism Brooks identified is not a property of software but of human cognition and organization. Any task that requires coordination among multiple agents incurs a coordination cost, and that cost scales roughly with the square of the number of agents, because each agent must maintain awareness of every other agent's work. This is why small teams ship faster than large teams, why meetings consume more time as organizations grow, and why adding a new team member produces a temporary decrease in total output before the new member becomes productive.
Brooks's Law has been repeatedly tested and consistently confirmed in software contexts. Studies of team velocity, defect rates, and schedule performance have shown that the quadratic communication-overhead pattern holds across industries, programming languages, and organizational structures. The law's apparent exceptions — cases where adding people seemed to help — have generally turned out, on closer examination, to involve teams that were not yet communication-saturated or tasks that could be genuinely partitioned into independent subtasks requiring no coordination.
The You On AI moment is Brooks's Law taken to its limit. If communication overhead is the binding constraint, then the theoretically optimal team size is one — and AI has made one sufficient for a significant class of work that previously required teams. The Trivandrum training Segal describes, in which twenty engineers each became capable of the work of a full team, is empirical confirmation that the law's logic extends all the way down to the solo builder.
But the law's extension to AI collaboration reveals a subtlety. The solo builder communicating with an AI tool does not have zero communication overhead — she has the overhead of describing intention to the machine, evaluating its output, iterating on the description, and verifying the results. This overhead differs from inter-human communication overhead in structure, but it scales with the ambition of the project in ways that Brooks's original formulation did not anticipate and that the AI-month fallacy now makes explicit.
Brooks derived the law from his experience managing the IBM OS/360 project in the 1960s, where repeated attempts to rescue schedule slippage by adding developers consistently produced further slippage. He codified and named the pattern in The Mythical Man-Month (1975) and revisited it in the 1995 anniversary edition.
The mathematical form — that communication channels scale as n(n-1)/2 — was drawn from graph theory and had been known to operations researchers. Brooks's contribution was applying the graph-theoretic result to project staffing and demonstrating its managerial consequences through specific historical cases.
Quadratic overhead. Communication channels grow as n(n-1)/2; doubling the team roughly quadruples the coordination burden.
Ramp-up time. New team members require integration time during which existing members are less productive, compounding the delay they were hired to prevent.
Partitionable vs. unpartitionable tasks. The law applies to tasks requiring coordination; tasks genuinely decomposable into independent subtasks escape it.
The solo limit. If communication is the binding constraint, the optimal team size is one — a limit previously theoretical that AI tools have made practically achievable for a significant class of work.
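The channel-counting arithmetic behind the first point above can be sketched in a few lines; `channels` is a hypothetical helper name, not anything from Brooks:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling team size roughly quadruples the coordination burden
# (the ratio approaches exactly 4 as n grows).
for n in (4, 8, 16, 32):
    print(f"{n:2d} people -> {channels(n):3d} channels")
```

For a team of 4 this gives 6 channels; for 8 it gives 28, and for 16 it gives 120 — each doubling multiplies the burden by more than four at these small sizes, converging toward a factor of four for large teams.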