When IBM's Deep Blue defeated Garry Kasparov in 1997, the initial response was not substitution but complementarity. "Centaur chess" — human-computer teams — outperformed either humans or computers alone. The human provided strategic intuition, creativity, and the ability to identify positions where the computer's evaluation was unreliable. The computer provided tactical precision, endgame calculation, and the ability to evaluate millions of positions per second. The combination was stronger than either component. For approximately fifteen years, centaur teams dominated competitive chess analysis. Then the computers improved to the point where the human contribution became not just unnecessary but counterproductive. Adding a human to the loop introduced noise rather than signal. The relationship shifted from complementarity to substitutability, and the shift was complete.
The chess case is the clearest empirical illustration of a dynamic that Varian's framework identifies but that most discussions of AI and labor underplay. The relationship between human and machine capabilities is not static. It evolves as the technology evolves, and it evolves in a specific direction: from complementarity toward substitutability, as the machine's performance in the complementary task exceeds the human's performance by a margin large enough that human contribution introduces more error than it corrects.
In chess, this threshold was crossed when engines reached a playing strength approximately four hundred Elo points above the strongest human. At that gap, the human's occasional strategic insight could not compensate for the errors the human introduced through slower calculation and imprecise evaluation. Centaur teams, which had dominated from approximately 2000 to 2015, lost their competitive advantage. By 2020, the best pure engines were decisively stronger than any human-engine combination.
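The error-additive threshold can be made concrete with a toy model. Treat the human and the machine as two independent noisy estimators of a position's true value, and let the team blend them with a fixed weight on the human's opinion. This is a simplification I am introducing for illustration, not a model from the chess literature; the function name, weights, and noise levels are all hypothetical. The algebra behind it: with weight w on the human, the blend beats the machine alone only while the human's error stays below sqrt((2 - w) / w) times the machine's, so as machine noise shrinks, the same human contribution flips from error-correcting to error-additive.

```python
import random

def combined_error(machine_sd, human_sd, human_weight, trials=100_000, seed=0):
    """Monte Carlo estimate of the mean squared error of a human-machine
    blend evaluating a position whose true value is 0.

    machine_sd, human_sd: standard deviations of each estimator's noise.
    human_weight: fixed fraction of the blend taken from the human.
    All parameters are hypothetical, chosen for illustration only.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        machine = rng.gauss(0.0, machine_sd)  # machine's noisy estimate
        human = rng.gauss(0.0, human_sd)      # human's (noisier) estimate
        blend = (1 - human_weight) * machine + human_weight * human
        total += blend * blend
    return total / trials

# With a 20% human weight, the human helps only while human_sd is less
# than 3x machine_sd (sqrt(1.8 / 0.2) = 3):
helps = combined_error(1.0, 2.0, 0.2)   # below machine-alone error
hurts = combined_error(1.0, 4.0, 0.2)   # above machine-alone error
machine_alone = combined_error(1.0, 2.0, 0.0)
```

The point of the sketch is the direction of the shift: nothing about the human changes, yet the same contribution that once reduced error starts adding it once the machine's own noise falls far enough.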
The chess case is a warning, not a prediction. The domains in which AI currently operates are far broader and more complex than chess, and the human capacities that complement AI — judgment under uncertainty, ethical reasoning, emotional intelligence, the ability to formulate questions that have never been asked — are different in kind from the strategic intuition that human chess players once provided. The argument that these capacities will remain complementary for the foreseeable future is plausible. But the chess case demonstrates that "complementary for now" is not the same as "complementary forever," and any economic framework that assumes permanent complementarity is making an assumption that the technology may eventually invalidate.
The economic implications for the AI transition are specific. Workers whose work is currently complementary to AI — the senior engineers, the experienced lawyers, the strategic analysts — have a time horizon over which their complementarity is reliable. The length of that horizon depends on the rate at which AI performance improves in their specific domains, which is not uniform and not predictable with confidence. The prudent strategy is to treat current complementarity as a window of opportunity rather than a permanent condition, and to invest in the skills that remain complementary even as the technology's capability expands.
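The horizon logic above reduces to simple arithmetic under a deliberately crude assumption: that the machine's lead over the human grows at a constant rate until it crosses the error-additive threshold. Both the rate and the starting margin below are hypothetical inputs; only the 400-point threshold figure echoes the chess case discussed earlier, and nothing here predicts any real domain.

```python
def complementarity_horizon(current_margin, annual_gain, threshold):
    """Years until the machine's lead (current_margin, in arbitrary skill
    units) grows past the error-additive threshold, assuming a constant
    annual_gain. A linear-growth sketch, not a forecast: real improvement
    rates are domain-specific and, as the text notes, not predictable
    with confidence.
    """
    if annual_gain <= 0:
        return float("inf")  # no improvement: complementarity persists
    return max(0.0, (threshold - current_margin) / annual_gain)

# Hypothetical chess-flavored numbers: a 100-point lead growing by
# 40 points per year against a 400-point threshold gives 7.5 years.
horizon = complementarity_horizon(100, 40, 400)
```

Even this crude model captures the practical upshot: the window is finite whenever the gain is positive, and the prudent question is not whether the threshold exists but how many years of reliable complementarity remain before it is crossed.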
The chess complementarity shift unfolded over approximately two decades, with Kasparov's 1997 defeat marking the beginning and the marginalization of centaur chess in the mid-2010s marking the completion. Kasparov himself has written extensively on the transition in his 2017 book Deep Thinking, reflecting on both the human and machine sides of the evolution.
Complementarity is temporary. The relationship between human and machine capabilities evolves with the technology, not as a fixed property.
The threshold is specific. Substitution dominates when the machine's performance exceeds the human's by a margin that makes human contribution error-additive rather than error-correcting.
Centaur chess was a real era. For fifteen years, human-machine teams genuinely outperformed pure engines; the era's passing was empirically datable.
The pattern may generalize. Domains where AI now operates may follow the same trajectory; the only question is how quickly.
Investment strategy should reflect temporal limits. Current complementarity is an opportunity with a horizon, not a permanent endowment.
Some argue that chess is too narrow to generalize — the closed rule-set and clear objective function made the transition predictable in ways that open-ended cognitive work does not permit. Others argue that the chess pattern will repeat across domains, on timescales determined only by how quickly each domain crosses its own error-additive threshold.