The full title is Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix — 'Essay on the Application of Analysis to the Probability of Decisions Rendered by Majority Vote.' The work is 500 pages of dense probability calculus applied to the theory of collective judgment: how large must a jury be to reach a reliable verdict; how should voting procedures be designed to approximate rational collective choice; what is the probability that a democratically decided question is correctly decided. It is at once the most ambitious mathematical treatment of democracy ever attempted and the most sobering, producing results that both justify inclusive governance and expose its fundamental limitations.
There is a parallel reading that begins from the material conditions required for probabilistic democracy to function. Condorcet's theorem assumes voters exist as independent computational units, each processing information and reaching judgments that can be aggregated. But this independence is precisely what modern information infrastructure destroys. Voters don't form judgments through isolated reflection on evidence — they form them through algorithmically mediated information flows, social proof cascades, and platform-specific affordances that shape not just what they think but how thinking occurs. The theorem's mathematical elegance depends on assumptions about human cognition that were already questionable in 1785 and are now demonstrably false.
The deeper problem is that Condorcet's framework treats collective decision-making as a problem of aggregating pre-existing judgments rather than examining how those judgments are produced. When AI systems implement ensemble methods and majority-vote classifiers, they operate on data streams whose generation they control — a closed loop where the distinction between judgment and manipulation dissolves. The political economy of AI doesn't just violate the independence assumption; it weaponizes dependence as a feature. Every recommendation algorithm, every personalized feed, every predictive nudge creates correlations between voters that make the theorem's probabilistic guarantees meaningless. We don't have voters with variable but independent reliability; we have subjects of computational systems that manufacture consensus and discord according to optimization functions that have nothing to do with truth or collective welfare. The Essai remains mathematically correct but politically irrelevant — a beautiful theorem about a democracy that cannot exist under conditions of algorithmic mediation.
The Essai was not merely a mathematical exercise. It was the foundation of Condorcet's political philosophy. If democracy is to be justified as a method of governance, the theorem specifies the condition under which that justification holds: each participant must, on average, be more reliable than chance, that is, correct with probability greater than one-half. This condition creates the imperative for universal education: the quality of democratic decisions depends on the quality of the individual judgments composing them.
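The condition and its consequence can be stated compactly. The following is the standard modern statement of the jury theorem, consistent with the claim above, where p is each voter's independent probability of judging correctly and n is odd:

```latex
P_n = \sum_{k=\frac{n+1}{2}}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad
\lim_{n \to \infty} P_n =
\begin{cases}
1 & \text{if } p > \tfrac{1}{2},\\
0 & \text{if } p < \tfrac{1}{2}.
\end{cases}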
The work was ignored by most of Condorcet's contemporaries. Its mathematics was too dense for political theorists and its political implications too contested for mathematicians. It survived as a technical document studied by specialists until Duncan Black rediscovered it in the 1940s and Arrow generalized its results in 1951 — an intellectual transmission across a century and a half that illustrates Condorcet's own thesis about the survival of durable ideas across institutional collapse.
Its relevance to AI is not metaphorical. The theorem is implemented directly in ensemble methods, majority-vote classifiers, and the architecture of systems that combine diverse predictors. The paradox is the structural constraint behind every value-alignment effort. The work is not a historical curiosity — it is operational code for a large class of AI systems, whether or not the engineers building them recognize it.
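The claim that the theorem is operational code can be made concrete. Below is a minimal sketch of the theorem as it runs inside a majority-vote ensemble, assuming n independent predictors each correct with probability p; the function name and parameters are illustrative, not drawn from the Essai or any particular library:

```python
import math

def majority_correct(n, p):
    """Probability that a majority of n independent voters, each
    correct with probability p, reaches the right answer.
    n is assumed odd, so ties are impossible."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# Condorcet's result: for p > 1/2, collective reliability
# rises toward certainty as the ensemble grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 4))
```

The same calculation runs in reverse for p below one-half: adding voters then drives the collective toward certain error, which is why the independence and competence assumptions carry so much weight.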
Published in 1785 during Condorcet's tenure as Permanent Secretary of the Académie des Sciences, the work was his response to Jean-Charles de Borda's 1770 proposal for a voting method Condorcet regarded as theoretically inadequate.
The Essai represents the fullest integration of Condorcet's mathematical training with his political commitments — the moment when his identity as a probability theorist and his identity as a democratic reformer fused into a single intellectual project.
Probability applied to democracy. Every collective decision has a probability of being correct, computable from individual reliabilities.
The theorem and the paradox together. What collective decision-making can achieve (jury theorem) and what it cannot (paradox).
Mathematical foundations for political theory. Democracy is not merely a value but an empirical claim subject to mathematical evaluation.
Foundational for AI. The work's results now operate literally inside machine learning systems.
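The paradox half of the pairing can be exhibited with the smallest possible example. The three-voter cyclic profile below is the classic one associated with Condorcet; the code around it is an illustrative sketch:

```python
from itertools import combinations

# Three voters; each list ranks options from most to least preferred.
voters = [["A", "B", "C"],
          ["B", "C", "A"],
          ["C", "A", "B"]]

def prefers(ranking, x, y):
    """True if this voter ranks x above y."""
    return ranking.index(x) < ranking.index(y)

def pairwise_winner(x, y):
    """The option a strict majority prefers in a head-to-head vote."""
    votes_for_x = sum(prefers(r, x, y) for r in voters)
    return x if votes_for_x > len(voters) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y)}")
# A beats B, B beats C, yet C beats A: the majority relation cycles,
# so no option is a stable collective choice.
```

Every individual ranking is perfectly rational, yet the aggregate preference is intransitive; this is the structural constraint that Arrow later generalized.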
The tension between these readings depends entirely on which layer of the problem we examine. At the mathematical level, Edo's account is 100% correct — Condorcet's results are literally implemented in ensemble methods, and the theorem-paradox duality does structure both ML architectures and value-alignment problems. The contrarian view has no purchase here; the math is the math. But shift to the question of information independence, and the weighting reverses to 80% contrarian — modern information systems do create precisely the correlations that violate the theorem's assumptions, though some pockets of independent judgment persist in specialized domains.
The synthetic frame emerges when we ask not whether Condorcet's framework applies, but under what conditions it could be made to apply. The theorem doesn't require perfect independence, just sufficient independence — a threshold that becomes an engineering specification rather than a natural given. This reframes the entire project: instead of lamenting the death of independent judgment, we can specify the architectural requirements for systems that preserve enough independence for collective intelligence to function. The question becomes: what minimal independence must be protected for the theorem's guarantees to hold approximately? This is simultaneously a mathematical question (what correlation structures preserve the theorem's convergence properties), an engineering question (what information architectures enable sufficient independence), and a political question (what institutional arrangements can enforce these architectures).
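The mathematical sub-question (which correlation structures preserve the theorem's convergence) can be probed with a toy model. Below is a Monte Carlo sketch under one assumed correlation structure: with probability rho, a voter copies a single shared signal instead of judging independently. The model and its parameters are assumptions for illustration, not a result from the Essai:

```python
import random

def majority_accuracy(n, p, rho, trials=20000, seed=0):
    """Monte Carlo estimate of majority-vote accuracy for n voters,
    each correct with probability p, under a toy correlation model:
    with probability rho a voter copies one shared signal (itself
    correct with probability p) rather than judging independently."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        shared = rng.random() < p  # the common signal's verdict
        correct = sum(
            (shared if rng.random() < rho else rng.random() < p)
            for _ in range(n)
        )
        hits += correct > n / 2
    return hits / trials

# As rho rises, the ensemble collapses toward the reliability of a
# single voter, and the theorem's convergence guarantee evaporates.
for rho in (0.0, 0.5, 0.9):
    print(rho, round(majority_accuracy(101, 0.55, rho), 3))
```

In this toy model the "engineering specification" is visible directly: the sufficient-independence threshold is the largest rho at which majority accuracy still meaningfully exceeds individual accuracy.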
The deepest insight may be that Condorcet's work, read properly, already contained this reflexivity. His emphasis on education wasn't just about improving individual judgment but about creating the conditions for judgment to exist at all. The AI age doesn't invalidate his framework — it radicalizes his insight that democracy's mathematical foundations require active construction and maintenance of its epistemic preconditions.