The Brooks volume proposes that the man-month's mythical status finds its contemporary analog in the AI-month: the managerial assumption that an AI tool can replace a human developer on a month-for-month basis, enabling headcount reduction without loss of capability. The AI-month is mythical for different reasons than the original man-month. The man-month was mythical because adding people added communication overhead that consumed the additional capacity. The AI-month is mythical because substituting AI for a human developer eliminates not just the person's implementation capacity but also the person's embodied understanding of the system, the person's capacity to catch errors through orthogonal expertise, the person's institutional memory, and the person's capacity to develop into a more senior contributor over time. The substitution appears equal on the ledger of hours; it is deeply unequal on the ledger of capabilities.
The fallacy operates in boardrooms and executive meetings whenever a productivity improvement from AI is translated into a headcount reduction. The executive sees that one developer with AI produces the output of twenty developers without AI. The arithmetic seems to license firing nineteen of the twenty. The arithmetic is wrong, but the wrongness is not obvious until the consequences have already materialized.
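The two ledgers can be made explicit with a toy sketch. All quantities below are hypothetical illustrations of the argument, not figures from the source: the hours ledger balances exactly, while the capabilities ledger, which scales with people rather than with output, does not.

```python
# Toy illustration of the two ledgers; all numbers are hypothetical.

TEAM_SIZE = 20
AI_MULTIPLIER = 20.0  # assumed: one augmented developer matches the old team's output

# Hours ledger: output measured in unaugmented-developer equivalents.
hours_before = TEAM_SIZE * 1.0
hours_after = 1 * AI_MULTIPLIER
assert hours_before == hours_after  # the equality that seems to license the cut

# Capabilities ledger: capacities that scale with people, not with output.
def capabilities(team_size: int) -> dict:
    return {
        "institutional_memory": team_size,        # holders of system history
        "perspective_diversity": team_size,       # independent error-catchers
        "developmental_pipeline": team_size - 1,  # juniors growing into seniors
    }

assert capabilities(20) != capabilities(1)  # the inequality the hours ledger hides
```

The point of the sketch is only that the two ledgers measure different things: the first equality holds by construction, and nothing in it constrains the second.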
The first consequence is the loss of institutional memory. The nineteen developers fired were not only producing code; they were carrying knowledge of why the system was built the way it was, which approaches had been tried and failed, which customers had quirks that required workarounds, and which integrations worked through undocumented conventions. None of that knowledge lives in the code. All of it resides in the heads of the people who did the work. When those people leave, the organization loses the knowledge permanently, and the AI tool, trained on general patterns rather than on the specific organization's history, cannot reconstruct it.
The second consequence is the loss of perspective diversity. The team of twenty caught errors that no single member would have caught, because each member brought different training, different experience, and different blind spots. The surviving developer, however capable, operates with her own blind spots uncorrected. The AI tool, trained on the dominant patterns in its training data, tends to reinforce rather than challenge her blind spots.
The third consequence is the loss of the developmental pipeline. The junior developers fired were not only producing current output; they were developing into the senior contributors who would carry the organization's capability forward. Firing them eliminates the pipeline through which the next generation of senior developers would have been produced. Five or ten years later, the organization will need senior contributors and will find none available internally, because the conditions for producing them were eliminated in the round of efficiency gains.
The Brooks volume argues that these three losses compound. Institutional memory loss makes perspective diversity loss harder to recover from, because the organization no longer has people who remember which perspectives used to be available. Pipeline loss makes institutional memory loss permanent, because there is no one being trained up to replace the departing holders of knowledge. The builder's wager in the original Orange Pill is Brooks's direct counter-argument: keep the team, deploy AI to multiply its capability, resist the arithmetic that converts productivity into headcount reduction.
The phrase 'AI-month' is a deliberate echo of Brooks's 'man-month' in The Mythical Man-Month. The Brooks volume proposes the corollary as the natural extension of Brooks's original framework into the age of AI-augmented development.
The argument draws on contemporaneous concerns in the SaaSpocalypse literature and on Prahalad and Hamel's framework for why core competencies cannot be purchased or compressed through capital investment.
Surface equivalence. On the ledger of hours, one AI-augmented developer appears to equal twenty unaugmented developers.
Deep inequivalence. The twenty developers carried capacities — memory, perspective, pipeline — that the AI tool cannot replicate.
Compounding losses. The three forms of loss reinforce each other across time horizons measured in years, not quarters.
The wager against arithmetic. The correct organizational response is to multiply existing team capability with AI, not to reduce headcount — a position the market rewards reliably only over long time horizons.
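The time-horizon claim above can be sketched as a toy projection. Every rate here is a hypothetical assumption chosen for illustration: the point is the shape of the comparison, not the numbers. On day one, cutting to one augmented developer and augmenting the whole team look identical on output per salary unit; the compounding losses only show up over years.

```python
# Toy model, purely illustrative: output per unit of salary cost for
# "keep the team and augment it" vs "cut to one augmented developer".
# All rates are hypothetical assumptions, not data from the source.

def output_per_cost(years: int, keep_team: bool) -> float:
    team_size = 20 if keep_team else 1
    ai_multiplier = 20.0          # assumed: AI multiplies each developer's output 20x
    capability = 1.0
    for _ in range(years):
        if keep_team:
            capability *= 1.05    # assumed: pipeline keeps producing senior contributors
        else:
            capability *= 0.85    # assumed: memory, perspective, pipeline losses compound
    output = team_size * ai_multiplier * capability
    return output / team_size     # cost proxy: one salary unit per developer

# On day one the two strategies look identical on the ledger...
assert output_per_cost(0, keep_team=True) == output_per_cost(0, keep_team=False)
# ...but the gap opens over horizons measured in years, not quarters.
assert output_per_cost(5, keep_team=True) > output_per_cost(5, keep_team=False)
```

Under these assumed rates the cut looks cost-neutral at first and only diverges later, which is the sense in which the market rewards the wager reliably only over long horizons.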
Whether the AI-month fallacy is as severe as the Brooks volume claims is contested. Optimistic analyses argue that sufficiently capable AI systems will eventually replace not only implementation capacity but also institutional memory and pipeline functions. The Brooks volume treats this as speculative and unlikely within the relevant time horizons, but acknowledges that the assessment depends on AI capabilities that are still actively evolving.