The equal-odds baseline is Dean Keith Simonton's most counterintuitive and most thoroughly validated finding: across thousands of creators in dozens of domains, the ratio of masterpieces to total output hovers around a small, stubborn constant. The difference between Edison and a lesser inventor is not a higher hit rate per attempt but a larger denominator. Shakespeare's thirty-seven plays, Beethoven's 722 works, Picasso's 50,000 artifacts — the canonical masterpieces emerge from these large samples through the operation of a roughly constant probability of excellence. The name is deliberately provocative: it asserts that even at the creative peak, the creator cannot reliably produce excellence on demand. Genius is more at-bats, not better swings.
The baseline was established through decades of historiometric analysis in which Simonton and his students counted every work of hundreds of eminent creators, plotting quality ratings against total output across entire careers. The finding held across classical composers, scientific patent-holders, literary writers, painters, and choreographers. Domain-specific constants varied — Shakespeare's hit rate exceeded Edison's because the combinatorial space of Elizabethan theater differs from that of nineteenth-century invention — but the structural principle remained identical. Quality is an emergent property of quantity operating at sufficient scale.
Applied to the AI transition, the baseline generates the most consequential prediction in the literature on creativity and artificial intelligence. If AI multiplies creative output by an order of magnitude, and if the equal-odds baseline holds under the new conditions, then AI multiplies creative excellence by the same factor — not because each output is better, but because the larger denominator produces more hits at the same probability. This is the most optimistic reading of the AI moment framed in rigorous quantitative terms.
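The arithmetic behind this prediction can be made concrete with a short simulation. The sketch below assumes purely illustrative numbers (a 2% hit probability per attempt, careers of 500 and 5,000 attempts) that are not drawn from Simonton's data; it shows only the structural point that at a fixed per-attempt probability, a tenfold increase in attempts yields roughly tenfold the expected hits.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_career(attempts: int, hit_prob: float) -> int:
    """Count 'masterpieces' across a career of independent attempts,
    each succeeding with the same fixed probability (equal odds)."""
    return sum(random.random() < hit_prob for _ in range(attempts))

# Hypothetical numbers, not Simonton's: 2% per-attempt hit rate,
# before and after a tenfold increase in genuine attempts.
HIT_PROB = 0.02
baseline = simulate_career(500, HIT_PROB)    # expectation: 500 * 0.02 = 10
amplified = simulate_career(5000, HIT_PROB)  # expectation: 5000 * 0.02 = 100

print(baseline, amplified)
```

Nothing about any single attempt improves between the two runs; only the denominator changes, and the expected count of hits scales with it.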
But the baseline carries a condition that the optimistic reading tends to glide past: each unit of output must constitute a genuine creative attempt, involving full engagement in generation, evaluation, and iteration. AI makes production trivially easy while leaving creation as hard as it ever was. A developer clearing a backlog with AI produces more transactions but not more genuine attempts — the lottery tickets are counterfeit. The Berkeley study documented precisely this distinction operating at organizational scale.
The baseline's condition connects directly to the diagnostic questions that run through The Orange Pill: are the amplified outputs genuine creative attempts, or merely amplified transactions? Simonton's framework does not answer the question for any individual creator. It tells you the question matters more than volume-based triumphalism acknowledges, and specifies with mathematical precision what happens in each case.
Simonton formalized the equal-odds baseline in a series of papers through the 1970s and 1980s, building on earlier work by Havelock Ellis, Wayne Dennis, and Harvey Lehman on productivity-age relationships in creative careers. The empirical foundation came from his exhaustive catalogs of the complete works of eminent creators — a methodological commitment that required treating every forgotten composition or unperformed play as data, not as an embarrassment to be explained away.

The framework drew on Donald Campbell's 1960 blind-variation-and-selective-retention model, which provided the theoretical mechanism that the statistical regularities of the baseline demanded. If creativity is combinatorial variation followed by selection, then the probability of excellence per attempt should be roughly constant, and the distribution of quality across a career should follow the patterns Simonton's data documented.
Quality scales with quantity. The creator who produces more masterpieces does so by producing more of everything, not by having higher hit rates per attempt.
The constant is domain-specific. Probability per attempt varies by field — higher in lyric poetry, lower in prolific invention — but within a domain the constant holds across creators of different eminence levels.
Failure is not waste. The lesser works are the mechanism through which the masterpieces emerge, because the creator cannot know in advance which attempt will prove extraordinary.
The condition is genuine engagement. The baseline's predictive power depends on each unit of output constituting a real creative attempt, not merely a transaction or output-clearing exercise.
AI tests the condition at scale. The multiplier effect AI introduces either amplifies masterpiece production proportionally (if engagement holds) or multiplies adequacy without proportional gains in excellence (if engagement collapses).
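The fork in that last prediction can be written as a back-of-the-envelope expectation. The sketch below uses hypothetical numbers (a 2% hit rate per genuine attempt, tenfold output growth, and a 10% genuine-attempt fraction in the collapse case), none of which come from the source; it models "engagement collapses" simply as non-genuine outputs carrying zero probability of excellence.

```python
def expected_masterpieces(total_output: int, genuine_fraction: float,
                          hit_prob: float) -> float:
    """Expected hits when only genuine attempts carry the equal-odds
    probability; the rest are 'transactions' that contribute nothing."""
    return total_output * genuine_fraction * hit_prob

# Hypothetical numbers: 2% hit rate per genuine attempt.
HIT_PROB = 0.02
before = expected_masterpieces(500, 1.0, HIT_PROB)      # 10.0
holds = expected_masterpieces(5000, 1.0, HIT_PROB)      # 100.0 — tenfold excellence
collapses = expected_masterpieces(5000, 0.1, HIT_PROB)  # 10.0 — tenfold adequacy only

print(before, holds, collapses)
```

Under these assumptions, a tenfold output multiplier either multiplies expected masterpieces tenfold or leaves them flat, depending entirely on whether the genuine-attempt fraction survives the amplification.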
Critics including Robert Weisberg have argued that the equal-odds finding is an artifact of aggregating across creators with different quality distributions, and that within any single career quality varies systematically rather than randomly. Liane Gabora's critique of the underlying BVSR framework extends to the baseline: if creativity is not a selectionist process, then the probabilistic framing misdescribes what creators do. Simonton responded with additional historiometric data showing that within-creator variation, while present, was smaller than between-creator variation in productivity, consistent with the baseline's prediction. The AI era introduces a new empirical test that neither side anticipated.