In Rosanvallon's framework, these are the three practices through which citizens exercise sovereignty between elections. Vigilance is continuous monitoring of authority-holders through institutional mechanisms: free press, transparency laws, civil society organizations, whistleblower protections. Denunciation is the public naming of abuses and failures, requiring a public sphere where naming can reach an audience large enough to generate democratic pressure. Evaluation is ongoing assessment of whether governance produces outcomes the governed have the right to expect, requiring shared standards against which performance can be measured. The AI transition has structurally disabled all three: vigilance is blinded by technological opacity (citizens cannot see training data, audit models, evaluate alignment), denunciation is atomized (individual workers experience displacement individually, preventing aggregation into collective narratives), and evaluation lacks standards (no agreed criteria for assessing whether an AI company is governing its technology democratically).
These powers have deep historical roots. Vigilance descends from the French Revolution's popular societies and political clubs, which functioned as permanent monitoring bodies. Denunciation has a lineage running from medieval petitioning traditions through Enlightenment pamphleteering to twentieth-century investigative journalism. Evaluation emerged with the development of performance metrics, citizen report cards, and the broader apparatus through which democracies hold governance to account. Each power required institutional infrastructure to function effectively. A free press does not emerge spontaneously—it requires legal protections, economic sustainability, and professional norms distinguishing journalism from propaganda. Effective denunciation requires not just individual courage but institutional relays converting individual acts into collective pressure.
The AI context presents novel obstacles to each power. For vigilance: AI companies are private entities with no obligation to disclose training data, internal safety assessments, or deployment decisions. Voluntary transparency is unilateral—given at company discretion, framed in company terms, revocable at company convenience. The engineer at Segal's company who identified misuse risks performed counter-democratic vigilance—she watched, identified danger, attempted to hold power accountable. Her denunciation was suppressed because no institution existed to receive it, investigate it, amplify it, translate it into accountability. She was a sensor in a system with no nervous system.
For denunciation: social media has made it possible for anyone to publicize grievances to millions, but the same platforms fragment denunciation—dispersing it across algorithmic feeds that optimize for engagement rather than significance, burying structural critiques beneath the noise of individual complaints. The result: more denunciation, less accountability. Viral threads about AI bias reach millions and change nothing. Whistleblower disclosures are absorbed into news cycles and forgotten within weeks. The denunciation occurs; the accountability does not. Effective denunciation requires institutional relays (whistleblower protections, mandatory reporting, congressional hearings) that convert individual knowledge of abuse into collective democratic accountability. For AI, these relays are either absent or inadequate.
For evaluation: citizens lack the conceptual tools to perform the evaluative function counter-democracy requires. They can sense that something consequential is happening—the silent middle who feel both exhilaration and loss—but cannot translate that sense into democratic judgment because the categories of judgment have not been articulated. They know something has changed; they lack the vocabulary to say what, and the institutional framework to do anything about it. Standards exist for political governance (electoral accountability, rule of law, rights protection) and corporate governance (fiduciary duty, transparency). Almost none exist for AI governance—the criteria a democratic public would use to assess whether an AI company is governing its technology in ways that serve the common good, distribute benefits broadly, and respect affected communities' right to participate.
Rosanvallon's systematic articulation of these three powers appeared in Counter-Democracy (2006), though the concepts themselves have longer genealogies in democratic theory. Vigilance connects to Montesquieu's separation of powers and the institutional checks each branch exercises over others. Denunciation extends from classical republican virtue of exposing corruption (the Roman censor) through modern press freedom. Evaluation builds on the democratic principle that governors are servants of the governed and must continuously demonstrate they are serving well. What distinguished Rosanvallon's formulation was the integration of all three into a coherent alternative theory of democratic sovereignty—an account of how democracy actually operates between elections, when electoral accountability is dormant.
The application to AI governance is Rosanvallon's own work, developed in recent lectures and a 2025 report for the Global AI Summit in Paris co-authored with Yann Algan. The report proposed creating citizen intermediary bodies to oversee AI use, drawing inspiration from the Swiss citizen army model for democratic oversight. The recommendation represents institutional invention responding to governance failure—exactly the pattern Rosanvallon's historical work traces. When existing mechanisms prove inadequate, democracies must invent new ones. The question is whether invention can match the speed of the crisis.
Three powers constituting counter-democracy. Vigilance (continuous monitoring through institutional mechanisms), denunciation (public naming requiring relays that convert individual acts into collective pressure), evaluation (ongoing assessment requiring shared standards)—the practices through which citizens exercise sovereignty between elections.
Structural disabling by AI. Vigilance blinded by opacity (cannot see training data, audit models), denunciation atomized (individual experiences not aggregated into collective narratives), evaluation lacking standards (no agreed criteria for democratic assessment of AI governance quality).
Infrastructure requirements. Each power requires institutional support—free press, transparency laws, whistleblower protections for vigilance; public sphere, institutional relays for denunciation; shared evaluative frameworks, independent assessment bodies for evaluation—all either absent or inadequate for AI.
Sensor without nervous system. The engineer who identified AI misuse risks had no institution to receive her vigilance, investigate her denunciation, or translate her evaluation into accountability—performing counter-democratic function in a system lacking counter-democratic infrastructure.
More denunciation, less accountability. Social media enables publicizing grievances to millions while simultaneously fragmenting denunciation across engagement-optimized feeds—viral threads about AI bias reach millions, change nothing, demonstrating that publicity without institutional relays does not produce democratic accountability.