You On AI Encyclopedia · Condorcet's Jury Theorem
CONCEPT

Condorcet's Jury Theorem

The 1785 mathematical result — now literally running inside modern AI ensemble systems — that proves a group of independent, informed judges converges on truth as it grows, and amplifies error in the same way when conditions fail.
Condorcet proved that if each member of a group has a probability greater than one-half of making a correct decision on a binary question, the probability that the majority decision is correct increases with group size, approaching certainty. The theorem has a dark mirror: if individual reliability is below one-half, larger groups converge on error with the same mathematical certainty. The theorem is therefore a double-edged sword — a justification for inclusive governance under specific conditions, and a diagnostic of how collective decision-making can systematically amplify mistakes. Modern machine learning ensembles — random forests, boosting algorithms, voting classifiers — operate on the same mathematical structure. The theorem is not a metaphor for AI ensembles; it is the literal foundation beneath them.
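Both directions of the theorem can be checked directly: the probability that a majority of n independent voters is correct is a binomial tail sum. A minimal sketch (the function name is illustrative):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent voters,
    each correct with probability p, reaches the right answer.
    Uses an odd n so there are no ties."""
    # Sum the binomial probabilities of every outcome in which
    # more than half the voters are correct.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Above chance: reliability grows with group size.
print(majority_correct(1, 0.6))    # 0.6
print(majority_correct(101, 0.6))  # ~0.98

# Below chance: the dark mirror — larger groups amplify error.
print(majority_correct(101, 0.4))  # ~0.02
```

A single 60%-reliable juror stays at 60%; a hundred of them, voting independently, are right almost 98% of the time — and a hundred 40%-reliable jurors are wrong with the same near-certainty.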

In The You On AI Encyclopedia

The theorem has two critical conditions: individual reliability greater than chance, and independence of errors among participants. The first condition is what makes universal education a mathematical necessity rather than merely a social good — the reliability of democratic decisions depends on the reliability of the individual judgments composing them.

The independence condition is what makes diversity a mathematical requirement rather than merely a political value. A 2024 study applying the theorem to ensembles of large language models found that majority voting across multiple LLMs produced only marginal improvements — the models, despite apparent diversity, had been trained on overlapping data and shared architectures, so their errors correlated. When errors correlate, the theorem's guarantee collapses. The fishbowl becomes the governing failure mode.

Condorcet Paradox

Researchers have explicitly deployed the theorem in neural network ensembles for medical diagnosis — combining outputs of multiple deep learning models trained on radiograph images, using majority voting to achieve diagnostic accuracy exceeding any individual model. The theorem provides the mathematical guarantee, and the guarantee depends on precisely the conditions Condorcet specified.
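The voting step such ensembles rely on is simple to state in code. A minimal sketch — the three threshold "models" are hypothetical stand-ins for trained classifiers, not any published diagnostic system:

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority vote across models: Condorcet's rule applied
    to classifier outputs."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Three hypothetical weak binary classifiers; each errs on a
# different region of the input, so their errors don't coincide.
m1 = lambda x: 1 if x > 0.3 else 0
m2 = lambda x: 1 if x > 0.5 else 0
m3 = lambda x: 1 if x > 0.7 else 0

# The vote recovers the middle threshold (0.5): the model that
# is wrong on each flank is outvoted by the other two.
print(ensemble_predict([m1, m2, m3], 0.6))  # 1
print(ensemble_predict([m1, m2, m3], 0.4))  # 0
```

The design point is the same as the theorem's: the ensemble beats its members only because the members fail in different places.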

The theorem's application to AI governance is direct. Decisions about AI development are currently made by a narrow group whose information, assumptions, and professional networks substantially overlap. Under the theorem's conditions, their collective reliability is constrained by the correlation of their errors. Broadening participation — genuinely diverse participation, not demographic tokenism — would, under the theorem, produce more reliable collective decisions, provided the new participants are adequately informed.

Origin

The theorem appeared in the Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix (1785), a 500-page treatise applying probability calculus to collective judgment — a founding document of social choice theory, decision theory, and the statistical analysis of testimony.

The result was rediscovered and formalized by Duncan Black in the 1950s, generalized in machine learning by Robert Schapire's 1990 paper 'The Strength of Weak Learnability,' and is now standard material in computer science curricula — usually without mention of the eighteenth-century mathematician whose proof it is.

Key Ideas

Universal Instruction

Individual reliability matters. Below chance, larger groups converge on error; above chance, they converge on truth.

Independence is essential. Correlated errors nullify the theorem's guarantee, regardless of group size.

Diversity is mathematical, not decorative. A homogeneous group produces a chorus, not a jury.

The theorem runs inside AI. Random forests, boosting, and ensemble voting are literal implementations of Condorcet's proof.

Debates & Critiques

Critics have noted that the theorem assumes binary decisions and independent errors — conditions rarely met exactly in practice. Defenders respond that approximate satisfaction of the conditions produces approximate benefits, and that the theorem's diagnostic value lies less in its exact predictions than in its identification of the conditions under which collective judgment is reliable or fails.

Further Reading

  1. Condorcet, Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix (1785)
  2. Robert Schapire, 'The Strength of Weak Learnability,' Machine Learning (1990)
  3. Christian List and Robert Goodin, 'Epistemic Democracy: Generalizing the Condorcet Jury Theorem,' Journal of Political Philosophy (2001)
  4. Scott Page, The Difference: How the Power of Diversity Creates Better Groups

Three Positions on Condorcet's Jury Theorem

From Chapter 15 — how the Boulder, the Believer, and the Beaver each read this concept
Boulder · Refusal
Han's diagnosis
The Boulder sees in Condorcet's Jury Theorem evidence of the pathology — that refusal, not adaptation, is the correct posture. The garden, the analog life, the smartphone that is not bought.
Believer · Flow
Riding the current
The Believer sees Condorcet's Jury Theorem as the river's direction — lean in. Trust that the technium, as Kevin Kelly argues, wants what life wants. Resistance is fear, not wisdom.
Beaver · Stewardship
Building dams
The Beaver sees Condorcet's Jury Theorem as an opportunity for construction. Neither refuse nor surrender — build the institutional, attentional, and craft governors that shape the river around the things worth preserving.

Read Chapter 15 in the book →
