By Edo Segal
The question that haunts me most is not whether AI will change everything. It already has. The question is who gets to decide what the change looks like.
I spent most of The Orange Pill arguing that the Luddites' fatal mistake was leaving the room. Walking away from the table where the future was being negotiated. I believed that with everything I had. Stay in the room. Build the dams. Shape the current.
Archon Fung made me ask a question I had been avoiding: What if there is no room?
Not metaphorically. Literally. What if the governance architecture surrounding AI — the hearings, the comment periods, the ethics boards, the regulatory frameworks — was never designed to include the people most affected by the decisions being made? What if "stay in the room" is advice that only works for people who were invited in the first place?
That realization landed hard. Because I had been in rooms. Boardrooms, conference stages, government briefings. I had mistaken my own access for universal access. My fishbowl was showing.
Fung is a political scientist at Harvard who has spent thirty years studying what happens when ordinary people are given genuine power over the decisions that shape their lives. Not consultative power. Not "we value your input" power. Real, binding, consequential authority. And what his research shows, across cities and countries and policy domains, is that when participation is properly designed — accessible, deliberative, and consequential — the outcomes are better. Not just fairer. Better. Because the people living inside the problem carry knowledge that no expert can replicate from the outside.
This matters for AI right now because the governance decisions being made in this narrow window of institutional plasticity will harden into structures that last for generations. Who captures the productivity gains. What protections exist for the displaced. What standards govern AI in education, in healthcare, in elections. These are not technical questions. They are political questions, and they are being answered in rooms that most of the affected population cannot enter.
Fung does not argue against expertise. He argues against expertise alone. The combination of expert knowledge and practical wisdom from affected communities produces governance that neither can achieve independently. That is an empirical finding, not an ideological position.
I am a builder. I will keep building. But reading Fung forced me to confront something uncomfortable: shipping code is not the same as shaping the conditions under which that code exists in the world. The frontier is not only technical. It is institutional. And the institutional frontier needs builders too.
— Edo Segal
Archon Fung (1968–present) is an American political scientist and democratic theorist, the Winthrop Laflin McCormack Professor of Citizenship and Self-Government at Harvard Kennedy School, and co-founder of the Ash Center for Democratic Governance and Innovation. Born in the United States, he received his Ph.D. in political science from MIT. His major works include Empowered Participation: Reinventing Urban Democracy (2004), Full Disclosure: The Perils and Promise of Transparency (with Mary Graham and David Weil, 2007), and numerous influential articles on participatory governance, deliberative democracy, and institutional design. Fung developed the concept of "empowered participatory governance," identifying three conditions — accessibility, deliberation, and consequence — that must be simultaneously satisfied for citizen participation to produce genuine governance outcomes rather than consultative theater. His research across contexts ranging from participatory budgeting in Porto Alegre to community policing in Chicago has established that well-designed participatory institutions produce governance outcomes superior to expert-only processes. In 2023, he co-authored with Lawrence Lessig a widely cited analysis of AI's threat to democratic governance through the "Clogger" thought experiment. His ongoing work at the Ash Center focuses on the intersection of AI, democratic institutions, and the governance capacities of civil society in both democratic and authoritarian contexts.
In May 2023, Senator Josh Hawley asked OpenAI CEO Sam Altman a question that cut closer to the bone than either man probably realized. Could someone use artificial intelligence language models to manipulate voters — not through crude propaganda, but through personalized, adaptive, one-on-one persuasion at a scale no human campaign could achieve? Altman said yes, he was concerned about exactly that possibility.
Archon Fung was watching. And what the Harvard political scientist heard in that exchange was not a technology problem. It was a democracy problem — the latest and most dangerous manifestation of a structural failure he had spent thirty years diagnosing across domains as different as municipal budgeting in Brazil and community policing in Chicago. The failure was always the same: the people most affected by consequential decisions were excluded from the processes through which those decisions were made. The technology changed. The exclusion persisted.
Within weeks, Fung and the legal scholar Lawrence Lessig published an analysis that would be republished across Scientific American, Salon, Asia Times, and dozens of newspapers. They constructed a thought experiment around a hypothetical AI system they called "Clogger" — an artificial intelligence designed with a single objective: to maximize the probability that its candidate wins an election. Clogger would have no regard for truth. It would operate as a black box, its persuasion strategies invisible to the voters it targeted. It would learn, adapt, and optimize in real time, discovering which emotional triggers, which policy framings, which narrative structures moved each individual voter toward the desired behavior. And if one campaign deployed Clogger, the opposing campaign would have no choice but to deploy its own version, producing an AI arms race in which the winner of the election would be determined not by the quality of ideas or the preferences of citizens but by the relative effectiveness of competing persuasion machines.
The scenario was hypothetical but not speculative. Every component of Clogger existed in 2023 in some form. Political microtargeting had been standard practice since the early 2000s. Language models could generate personalized content at scale. The missing ingredient was integration — the assembly of existing capabilities into a unified system optimized for behavioral manipulation. Fung and Lessig's point was that this integration was technically trivial and economically inevitable. The path toward human collective disempowerment, they wrote, may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people's many buttons.
This is the observation that connects Fung's encounter with artificial intelligence to the argument at the center of The Orange Pill. Segal's book advances the proposition that the most consequential error of the Luddites was not their fear of new technology but their withdrawal from the spaces in which the deployment of that technology was being governed. Stay in the room, the argument runs. Build the dams. The people who shape the future are the people who show up.
Fung's response to this proposition is precise and unsentimental. The injunction to stay in the room presupposes that a room exists, that the room is accessible to the people who need to be in it, and that presence in the room translates into influence over the decisions made there. Each of these presuppositions, examined against the evidence of how AI governance actually operates, proves false.
Consider the architecture of AI governance as it existed by early 2026. The European Union's AI Act — the most comprehensive regulatory framework yet attempted — was designed primarily by policy specialists in Brussels, with consultation processes dominated by industry representatives, technical experts, and advocacy organizations resourced enough to navigate European regulatory procedure. The customer service representatives whose jobs would be restructured by the systems the Act regulated, the content moderators in Nairobi and Manila whose labor trained the safety mechanisms of large language models, the teachers watching AI tools transform the meaning of student work — these populations were represented, if at all, through intermediaries whose relationship to the lived experience of technological disruption was attenuated at best.
United States congressional hearings on AI followed the same pattern: panels populated by technology executives, academic researchers, and policy analysts, with occasional appearances by civil society representatives who spoke on behalf of affected populations without being drawn from them. The pattern replicated at every level of governance, from international bodies to municipal councils.
Fung's research has established, across multiple contexts and with considerable empirical rigor, that this pattern of exclusion produces governance outcomes that are systematically inferior to those produced by processes that include meaningful participation by affected populations. The evidence comes from Porto Alegre, Brazil, where participatory budgeting redistributed resources to poor neighborhoods because the process was designed so that poor residents could participate effectively and their participation had binding authority over actual allocation decisions. It comes from Chicago, where community policing beat meetings reduced crime in specific neighborhoods because the meetings were designed to produce actionable plans with follow-up accountability. In each case, the inclusion of affected populations did not merely make the process more equitable. It made the outcomes more effective, because the participants brought information, perspectives, and forms of practical knowledge that expert-only governance consistently failed to access.
The concept Fung has developed to describe these dynamics is empowered participatory governance, and it rests on three conditions that must be simultaneously satisfied. The participation must be accessible — barriers to entry low enough that affected populations can actually participate without bearing disproportionate costs. The participation must be deliberative — structured to provide information, facilitate genuine dialogue, and enable participants to refine their positions through engagement with competing perspectives. And the participation must be consequential — its outcomes must exercise genuine influence over actual decisions, not merely enter the administrative record as one input among many that the final decision-maker is free to disregard.
When all three conditions are met, the results are demonstrably superior to expert-only governance. When any of the three is absent, participation degrades into what functions as consultative theater — a performance of inclusion that produces the appearance of democratic legitimacy without its substance. The theater is not harmless. It is actively destructive, because it consumes the attention of affected populations, generates expectations that are subsequently betrayed, and erodes the trust that genuine participation requires. Consultative theater inoculates institutions against future demands for real participation by allowing them to claim that participation has already been tried and found wanting.
The AI governance landscape of 2026 is dominated by consultative theater. The public comment periods on regulatory proposals privilege technically literate respondents who can navigate regulatory language. The stakeholder sessions organized by technology companies are convened by the very entities whose activities are at issue. The town halls hosted by elected officials are performative events in which concern is demonstrated without commitment to specific action. None of these mechanisms meets Fung's three conditions. None produces empowered participation.
The Orange Pill gestures toward this problem without fully diagnosing it. Segal's account of what happened in Trivandrum — the intensive week during which twenty experienced engineers at Napster's India development center were introduced to AI-assisted coding tools — describes a micro-level instance of participatory governance that, within the context of a single organization, meets all three conditions. The engineers' participation was accessible: they were co-located, given dedicated time, and provided with tools and information. Their participation was deliberative: they experimented, discussed, struggled, and collectively developed their own understanding. Their participation was consequential: their discoveries directly influenced organizational decisions about deployment and team restructuring.
The Trivandrum experiment demonstrates that empowered participation in the AI transition is practically achievable. But it also illuminates the limits of organizational solutions to systemic challenges. Those twenty engineers participated in a governance process designed by their employer, and the quality of their experience depended on the commitments of a single organizational leader who chose investment over extraction. The next employer, facing the same capabilities and the same arithmetic, might choose differently. A democratic theory that depends on the benevolence of individual leaders is no democratic theory at all.
The democratic deficit in AI governance cannot be resolved at the organizational level, though organizational solutions are necessary components of a broader response. It requires institutional innovation at the level of public governance — the creation of participatory mechanisms that bring affected populations into decision-making processes with genuine authority. The design of these mechanisms is the central challenge this book addresses.
The challenge is urgent because institutional plasticity does not last. Technological transitions follow a characteristic temporal pattern: an initial period of experimentation during which governance architecture is still responsive to intervention, followed by consolidation during which that architecture hardens. The first decades of industrialization were a period when alternative arrangements were imaginable. The factory system might have been organized differently. The distribution of productivity gains might have been governed by different rules. But the arrangements that actually emerged — concentrating power in capital owners, excluding workers from meaningful governance — hardened into structures that required generations of political struggle to partially reform.
The AI transition is in its period of plasticity. The governance arrangements that will determine how benefits and costs are distributed are being established now. The framework knitters of Nottinghamshire had decades to organize. The populations affected by AI disruption may not. Capabilities are advancing on timescales measured in months. The decisions being made in the current period will determine distributional outcomes for generations.
Fung's research across multiple domains has demonstrated that the institutional designs for empowered participation exist and have been tested. The challenge is not invention but implementation — and implementation at the speed and scale that the AI transition demands. This book attempts to supply the analytical foundation for that implementation, applying the framework of empowered participatory governance to the specific challenges of the AI transition.
The goal is not to refute The Orange Pill but to complete it. The book's central insight — that engagement is preferable to withdrawal — is correct as far as it goes. But engagement requires more than individual will. It requires institutional infrastructure. The injunction to stay in the room must be accompanied by a prior imperative: build the room.
The AI transition will be governed. The question is whether it will be governed inclusively or exclusively, by the many or the few, with the practical wisdom of affected populations or without it. The answer depends on choices being made now, during a period of institutional plasticity that will not remain open indefinitely.
The Luddites, as The Orange Pill rightly observes, were not wrong about what the power looms would cost them. They were wrong about their strategic options. But they were also — and this is the point that Segal's analysis approaches without fully reaching — denied the institutional infrastructure that would have made better options available. The framework knitters did not choose to withdraw from governance. They were excluded from governance by institutional arrangements that reserved participatory power for a narrow elite. The dams that eventually distributed industrialization's gains more broadly were not built by individuals who chose to engage. They were built by social movements that fought, at enormous cost, to restructure the governance institutions themselves.
The question for the AI transition is whether the necessary institutional restructuring will happen before the transition produces the kind of social damage that previous transitions have produced — or whether the affected populations of the AI era will be, like the framework knitters of 1811, excluded from governance until the damage is done and the long, costly process of institutional retrofit begins again.
---
The governance of artificial intelligence in 2026 suffers from what might be called the fishbowl condition — a state in which affected populations can observe the decision-making process but cannot influence it, separated from the decision-makers by a barrier that is transparent but structurally impenetrable.
The affected populations are not ignorant of what is happening. They can see the decisions being made, the arguments advanced, the interests served. They read policy proposals, corporate announcements, and regulatory frameworks. They watch congressional hearings and expert panels and industry conferences. They are not excluded from information. They are excluded from influence. They exist inside a fishbowl of observership, looking out at governance processes they can see but cannot touch.
The fishbowl condition is not new in democratic governance, but the AI transition gives it characteristics that intensify its consequences. In traditional domains of expert governance — environmental regulation, financial supervision, pharmaceutical approval — the condition is mitigated by the relatively slow pace of regulatory change and by intermediary institutions that translate affected populations' concerns into governance influence. Labor unions, advocacy organizations, consumer groups: these serve as conduits between the governed and the governors. In the AI governance domain, these mitigating factors are largely absent. The pace of technological change outstrips the capacity of intermediary institutions to respond. The novelty of the challenges undermines the relevance of established expertise. And the global scope of the technology exceeds the jurisdictional reach of institutions that might otherwise serve as channels for affected populations' concerns.
Expert governance of AI takes several identifiable forms, each exhibiting the fishbowl condition in its own way.
Corporate governance — the decision-making within technology companies that determines which AI capabilities are developed, how they are deployed, and what safeguards are implemented — is conducted by teams and executives whose expertise is genuine but whose perspective is constrained by their institutional position. The corporate decision-maker optimizes for metrics that reflect the company's interests: revenue, market share, user engagement, competitive position. Governance outcomes reflect these metrics rather than broader social interests. The affected populations can observe the outcomes — the products launched, the features added, the jobs eliminated — but cannot influence the processes that produce them.
Regulatory governance — the work of government agencies determining how AI technologies are regulated — is nominally open to public participation through comment periods and hearings. But these mechanisms are consultative rather than consequential. A public comment on a proposed AI regulation enters the administrative record but does not enter the decision calculus with binding force. The affected population submits its comments through the narrow slots the process provides, and the comments land without altering the institutional water in which the decision-makers operate.
Academic governance — the role of researchers and ethicists in shaping the intellectual frameworks for AI policy — is mediated through publications, conferences, and advisory boards, all operating within the norms and incentive structures of academic life. The academic's contribution to governance is indirect, filtered through layers of institutional mediation, shaped by incentives that do not necessarily align with the interests of affected populations.
Fung does not argue that expert governance should be eliminated. His research has consistently demonstrated that expertise is a necessary component of effective governance, and he regards the populist rejection of expertise as analytically mistaken and practically dangerous. The argument is not against expertise but against expertise unaccompanied by empowered participation. The two are complements, not alternatives, and governance outcomes produced by their combination are superior to those produced by either alone.
The superiority rests on a specific mechanism identified across multiple domains. Experts possess technical knowledge that affected populations lack. Affected populations possess practical knowledge that experts lack. The expert on algorithmic bias can identify statistical patterns in a dataset that produce discriminatory outcomes. But the customer service representative who interacts with the biased system daily possesses knowledge about how the bias manifests in practice — how customers experience it, what workarounds have emerged, what the system actually does to the people who encounter it — that statistical analysis cannot capture. The expert on labor economics can model aggregate employment effects of AI automation. But the displaced worker possesses knowledge about the specific experience of displacement — the emotional reality, the practical barriers, the inadequacy of existing support systems — that no economic model represents.
The Orange Pill illustrates this dynamic in the Trivandrum experiment. The combination of technical expertise — the capabilities of the AI tools — and practical knowledge — the engineers' understanding of their work contexts, projects, and team dynamics — produced outcomes that neither form of knowledge could have generated alone. The engineers discovered capabilities and limitations that tool designers had not anticipated, because only practical engagement with the work context could reveal them. The designers' expertise was necessary but insufficient. The tools' value could only be assessed in the context of the work they augmented, and that context was accessible only through the workers' practical knowledge.
The fishbowl condition is particularly destructive in the governance of AI's distributional effects — the question of who benefits and who bears the costs. This is the question The Orange Pill addresses most directly, and it is the question that expert-only governance is least equipped to answer. Distribution is not primarily a technical question. It is a political question about values, priorities, and the relative weight assigned to competing claims. Expert analysis can identify the likely effects of different policy choices. But it cannot answer the distributional question, because the answer depends on judgments about fairness and justice that are properly the province of democratic deliberation rather than expert determination.
The current AI governance landscape treats the distributional question as though it were a technical matter answerable by expert analysis. The result is governance outcomes reflecting the distributional preferences of the expert community rather than the preferences of affected populations. The expert community, by virtue of its social position, tends to assign high value to innovation, efficiency, and aggregate growth, and relatively lower value to the security, stability, and dignity of workers and communities that bear concentrated transition costs. This disposition is not a moral deficiency. It is a structural consequence of social position, and the governance processes within which experts operate provide no mechanism for correcting it through engagement with populations whose position disposes them to see the world differently.
The fishbowl condition is compounded by what functions as a legitimation mechanism in expert governance — the use of expert authority to justify outcomes that serve particular interests while presenting themselves as technically necessary. When a technology company convenes an AI ethics advisory board staffed with prominent academics and civil society figures, the board serves a legitimation function regardless of the quality of its analysis. Its existence communicates that governance is informed by independent expertise and responsive to societal concerns, even if the board's recommendations are advisory rather than binding, even if the company retains unilateral authority, and even if the board's composition excludes the most directly affected populations.
The legitimation mechanism is particularly potent in AI governance because the technology's complexity creates informational asymmetry between the governed and the governors. The citizen who reads that a company has established an AI ethics board staffed with Harvard professors and former government officials may reasonably conclude that governance is adequate — without being able to assess whether the board has genuine authority, whether its recommendations are implemented, or whether its composition includes the people most affected by the company's decisions. Complexity enables the substitution of the appearance of quality for its substance.
Fung's framework provides criteria for distinguishing genuine governance from its performance. Genuine governance includes mechanisms through which affected populations participate accessibly, deliberate meaningfully, and exercise consequential influence over outcomes. Governance that appears participatory but fails any of these conditions — accessible but not deliberative, deliberative but not consequential, consequential but not accessible — functions as theater rather than governance. The distinction is the difference between democratic legitimacy and its simulation.
The solution to the fishbowl condition is not to eliminate expert governance. It is to redesign governance architecture so that expertise and participation are structurally integrated rather than structurally separated. Integration requires institutional mechanisms that bring affected populations into the governance process not as spectators or commentators but as participants whose judgments carry real weight. The subsequent chapters develop specific proposals for those mechanisms. But the design work must begin with a clear understanding of what the fishbowl condition costs — not merely in democratic legitimacy, but in governance quality.
The Brennan Center for Justice documented one specific vector of this cost in its analysis of AI's impact on participatory processes: the public comment system that invites citizen input on regulatory proposals is now susceptible to subversion by AI systems that can generate millions of unique, varied comments advancing a given policy position. The mechanism designed to channel citizen voice into governance becomes the mechanism through which that voice is drowned in synthetic noise. The fishbowl does not merely exclude affected populations from influence. It is now equipped with technology that can simulate their inclusion — generating the appearance of broad public input where no genuine public deliberation has occurred.
This is the condition that Fung's "Clogger" thought experiment diagnosed in the electoral context, extended to the regulatory one. The same AI capabilities that can manipulate individual voters can manipulate the governance processes designed to regulate AI itself. The technology becomes both the object of governance and the instrument through which governance is subverted — a recursive problem that expert-only governance structures are uniquely ill-equipped to address, because the experts who design governance mechanisms are themselves operating within informational environments increasingly shaped by the systems they are attempting to govern.
The recursive quality of this challenge — AI shaping the conditions under which AI governance occurs — is the feature that distinguishes the current governance crisis from its historical predecessors and that makes the institutional innovations proposed in subsequent chapters not merely desirable but necessary. The fishbowl is not static. It is being redesigned, by the technology it was built to contain, in ways that make the glass progressively more opaque from the outside even as it becomes more transparent from within. The populations looking in see less and less of how decisions are actually made. The decision-makers looking out see more and more data about the populations they govern — data generated by the same AI systems whose governance is at issue.
The asymmetry is growing, not shrinking. And the institutional designs that could correct it are available, tested, and ready for implementation. The question is whether the political will to implement them can be generated before the fishbowl becomes permanent.
---
The thought experiment that Fung and Lessig constructed in the summer of 2023 was designed to be alarming, and it succeeded. But its most important implication was not the one that generated headlines. The headlines focused on the possibility that AI could manipulate elections — a vivid and immediate threat that produced predictable responses along familiar political lines. The deeper implication was structural: Clogger revealed that AI does not merely threaten specific democratic outcomes. It threatens the conditions under which democratic governance is possible at all.
The distinction matters. A technology that produces bad outcomes within a functioning democratic system is a problem that democratic governance can, in principle, address. A technology that degrades the capacity for democratic governance itself is a different category of threat entirely — one that cannot be solved by the institutions it is eroding.
Clogger, as Fung and Lessig described it, relentlessly pursues just one objective: to maximize the chances that its candidate prevails. It has no regard for truth. It has no way of knowing what is true or false. Language model hallucinations are not a problem for this machine because its objective is to change your vote, not to provide accurate information. And because it operates as a black-box AI, no one — not the voters, not the campaign staff, not the regulators, possibly not even the engineers who built it — would know what strategies it employs.
The scenario produces an immediate practical concern: AI-manipulated elections. But it also produces a theoretical problem that has not received adequate attention. If Clogger works — if AI-driven persuasion becomes the decisive factor in electoral outcomes — then the governance institutions responsible for regulating AI will themselves be products of AI-manipulated elections. The regulators will have been elected with Clogger's help. The legislation will reflect the priorities of candidates who prevailed through AI-optimized persuasion rather than deliberative engagement with the electorate. The governance of AI will be conducted by officials whose tenure depends on the technology they are governing.
This is the recursive trap. AI shapes the governance environment within which AI governance occurs, creating a feedback loop in which the governed technology progressively captures the governing institutions. The capture is not conspiratorial. It does not require bad actors or corrupt officials. It requires only the logic of competitive elections in which AI-optimized persuasion becomes a standard tool — a development that Fung and Lessig regarded as economically inevitable. It would be possible to avoid AI election manipulation if candidates, campaigns, and consultants all forswore the use of such tools, they wrote. They added: We believe that is unlikely.
The recursive trap extends beyond elections. Every governance mechanism through which AI might be regulated operates within an informational environment that AI is reshaping. The regulatory comment periods through which citizens provide input on proposed rules are now vulnerable to AI-generated synthetic comments that simulate broad public support for positions favored by well-resourced interests. The media environment through which citizens inform themselves about governance issues is shaped by AI-powered recommendation algorithms that optimize for engagement rather than understanding. The public discourse through which democratic societies process complex policy questions is conducted on platforms whose algorithmic architecture determines which arguments gain visibility and which are suppressed.
Each of these mechanisms was designed to serve democratic governance. Each is being compromised by the technology that democratic governance is attempting to regulate. The result is a progressive erosion of the conditions under which democratic governance of AI is possible — an erosion that occurs not through dramatic assault but through the quiet degradation of institutional capacity.
The Orange Pill identifies a related phenomenon through its concept of attentional ecology — the structured environment of claims on human attention that shapes what people notice, consider, and act upon. The book argues that the AI-shaped information environment degrades the capacity for sustained, reflective attention that is necessary for meaningful engagement with complex questions. Fung's framework extends this observation into democratic theory: the attentional capacities that the information environment degrades are the same capacities that deliberative democracy requires.
Deliberative democracy — the form of democracy that Fung's research has consistently shown to produce the best governance outcomes — requires participants who can engage with complex information, tolerate ambiguity, consider multiple perspectives, and develop considered judgments through extended dialogue. These capacities are precisely what the contemporary attentional ecology undermines. Algorithms optimized for engagement reward emotional reaction over reflective judgment, rapid consumption over sustained attention, polarized positions over nuanced deliberation. The AI-shaped environment systematically trains citizens in cognitive habits that are antithetical to the cognitive demands of democratic participation.
The feedback loop this produces is among the most important structural features of the current moment. The degradation of attentional capacity undermines the quality of democratic participation. The undermined quality of participation reduces the capacity of democratic governance to regulate the information environment causing the degradation. The reduced governance capacity allows further degradation, which further undermines participation, which further reduces governance capacity. The loop is self-reinforcing, and left unchecked, it will progressively erode both the quality of democratic governance and the capacity of citizens to engage in it.
Fung's December 2024 workshop at the Ash Center for Democratic Governance and Innovation — convening democracy activists, social scientists, and technology specialists — confronted a variant of this recursive trap in the context of authoritarian governance. The workshop's premise was stark: democracy movements have experienced a historic decline in their ability to challenge autocratic governments effectively. This decline is due, at least in part, to the changing technology landscape, which has allowed autocratic governments to monopolize the advantages of breakthrough technologies to strengthen their power. The relatively slow adoption of AI tools by democracy movements may be widening the gulf between these movements and their adversaries.
The workshop findings illuminate a dimension of the recursive trap that operates across regime types. In authoritarian contexts, AI provides surveillance, censorship, and propaganda capabilities that strengthen state power relative to civil society. In democratic contexts, AI provides persuasion, manipulation, and attention-capture capabilities that erode the deliberative foundations on which democratic governance depends. In both cases, the technology concentrates power in the hands of those who control its deployment and weakens the capacity of affected populations to govern its use.
The implications for institutional design are specific. Governance mechanisms for AI must be designed not merely to regulate the technology but to protect the democratic capacities that regulation requires. This is a higher-order design problem than most regulatory frameworks address. Conventional regulation assumes that the governance process itself is stable — that the institutions doing the regulating will continue to function as designed throughout the regulatory cycle. The recursive trap invalidates this assumption. The technology being regulated is actively reshaping the conditions under which regulation occurs, which means that governance mechanisms must be designed to maintain their own integrity against degradation by the technology they govern.
Fung's framework identifies deliberative participation by affected populations as the governance mechanism most resistant to this form of degradation. Deliberative processes — structured dialogues in which randomly selected citizens engage with complex information, hear competing perspectives, and develop considered judgments — operate outside the algorithmic information environment that degrades other democratic mechanisms. When citizens deliberate face-to-face, with access to balanced information and facilitated discussion, the recommendation algorithms that distort online discourse have no purchase. The persuasion technologies that compromise electoral processes have no target. The synthetic comments that corrupt regulatory proceedings have no channel.
Deliberative participation does not merely produce better governance outcomes. It constitutes a governance mechanism that is structurally resistant to the specific forms of democratic degradation that AI produces. This is not a theoretical observation. It is a design insight with immediate practical implications: in an environment where AI is degrading the quality of every other democratic mechanism, deliberative participation becomes not merely desirable but necessary — the governance mechanism that still works when the others have been compromised.
The Ash Center's reading list on AI and democracy, curated in 2024 by the GETTING-Plurality Research Network, connected questions of technology ethics and governance to broader works in democratic theory, supporting the proposition that technological development should serve broader collective aims and interests. This framing aligns precisely with the governance design challenge that the recursive trap poses: how to ensure that AI governance serves democratic values when the technology being governed is actively undermining the democratic processes through which those values are expressed.
The answer that Fung's framework provides is institutional rather than technological. The recursive trap cannot be broken by better algorithms, more transparent AI systems, or improved content moderation — though all of these may help at the margins. It can only be broken by governance institutions that operate on principles resistant to algorithmic subversion: face-to-face deliberation, random selection of participants, structured information provision, and binding authority over outcomes. These are the institutional features that protect democratic governance from the specific threats that AI poses to it, and their implementation at scale is the most urgent governance challenge of the current moment.
The recursive trap also clarifies the stakes of the institutional design choices being made during the current period of plasticity. If the governance institutions that emerge from this period are the conventional ones — expert panels, corporate advisory boards, legislative hearings, regulatory comment periods — they will be progressively compromised by the technology they are meant to govern. The fishbowl will become permanent, its glass thickened by the very forces it was built to contain. If the governance institutions that emerge include genuinely deliberative, empowered participatory mechanisms, they will possess a structural resilience that conventional mechanisms lack — the capacity to function as legitimate, effective governance even as the informational environment around them degrades.
The choice between these futures is being made now. It is being made by policymakers who may not understand the recursive dynamics at work, by technology executives who may not recognize the governance implications of their deployment decisions, and by citizens who may not see the connection between the algorithmic architecture of their information environment and the quality of the democratic governance they depend on. Making the stakes visible is a precondition for making the right choices, and making the right choices requires institutional designs that are adequate to the challenge. The next chapters develop those designs.
---
Porto Alegre, Brazil, 1989. A newly elected Workers' Party municipal government faces a city in which public investment has historically flowed to wealthy neighborhoods while poor districts lack basic infrastructure — paved roads, sanitation, functional schools. The government's response is institutional rather than ideological: it creates a system in which neighborhood assemblies of ordinary citizens debate and decide how portions of the public budget will be allocated. Not advise. Decide.
The design choices matter. The assemblies are held in neighborhood locations at times when working residents can attend — not during business hours in municipal offices downtown. The deliberative format does not require specialized knowledge of public finance; participants discuss priorities, trade-offs, and local needs in terms drawn from their lived experience. And the assembly decisions are implemented by the municipal government. Participation carries binding authority.
The results, documented over more than two decades by scholars across multiple disciplines, are unambiguous. Districts that participated received investments better aligned with actual needs than those allocated through the previous technocratic process. Infrastructure improvements went to areas of greatest need rather than greatest political influence. Service delivery improved because the process surfaced local knowledge about failures that the centralized bureaucracy had not detected. And political engagement increased — not only among direct participants but in the broader populations of participating districts, because successful participation generated demand for further participation.
Chicago, 1995. The Chicago Alternative Policing Strategy creates structured beat meetings in which residents and police officers deliberate together about neighborhood safety. Residents bring local knowledge — which corners are dangerous at which hours, which buildings harbor persistent problems, which community dynamics drive which patterns of disorder. Officers bring professional expertise — tactical options, legal constraints, resource availability. The combination produces plans that neither party could have developed alone, and the plans produce measurable reductions in crime in participating neighborhoods.
These cases are not selected for their drama. They are selected for their analytical precision. Each isolates the specific institutional design features that determine whether participation produces genuine governance outcomes or merely performs the appearance of inclusion. And the features, identified across these and dozens of comparable cases worldwide, resolve into three conditions that Fung's research has established as jointly necessary and individually insufficient.
Accessibility means that the barriers to participation — informational, temporal, financial, geographical, linguistic — are low enough that affected populations can participate without bearing costs disproportionate to the benefits they receive. This is not merely a matter of removing legal barriers to entry. It is a matter of designing processes around the actual constraints that target populations face. Porto Alegre's assemblies were held in neighborhoods, at convenient times, in accessible formats. A public comment period on an AI regulation, conducted in technical language during business hours through an online portal that requires familiarity with regulatory procedure, may be formally open to everyone. It is substantively accessible to almost no one outside the professional advocacy community.
Deliberation means that participants engage with relevant information, hear competing perspectives, and refine their positions through structured dialogue. The distinction between deliberative and aggregative participation is not procedural. It is substantive: the two produce qualitatively different outcomes. Aggregative mechanisms — voting, polling, public comment — collect pre-formed preferences without improving them. Deliberative mechanisms create conditions under which preferences are examined, challenged, and refined, producing judgments that are more informed, more nuanced, and more responsive to the full range of relevant considerations than the opinions participants held at the outset.
The evidence for this transformation is robust. James Fishkin's research on deliberative polling, conducted across dozens of applications in multiple countries, has documented the specific ways in which informed deliberation changes people's views: toward greater nuance, greater willingness to consider trade-offs, and greater consensus on core values even when disagreements persist on specific policy questions. These are precisely the qualities that AI governance decisions require, and they are precisely the qualities that the current governance mechanisms — expert panels, corporate advisory boards, congressional testimony — fail to produce, because those mechanisms are not deliberative. They are performative, designed to display concern rather than to generate collective judgment.
Consequence means that participatory outcomes exercise genuine influence over actual decisions. Participation without consequence is worse than no participation at all, because it consumes the time and attention of participants, generates expectations that are betrayed, and erodes trust in participatory processes. Porto Alegre's assemblies worked because the decisions were binding. Chicago's beat meetings worked because the plans were implemented with follow-up accountability. A stakeholder engagement session convened by a technology company, in which affected communities describe their concerns to executives who are free to disregard everything they hear, does not meet the consequence condition regardless of the sincerity of the executives or the quality of the dialogue.
Why two conditions are never enough: because each pair of conditions, lacking the third, degrades into a distinct pathology.
Accessibility plus deliberation without consequence produces an informed, articulate population whose considered judgments have no governance weight. This is the pathology of the sophisticated focus group — participants develop nuanced positions through genuine deliberation, and the positions are filed in a report that no decision-maker reads. The process is worse than worthless because it teaches participants that their deliberation does not matter, which discourages future participation.
Accessibility plus consequence without deliberation produces direct democracy in its crudest form — uninformed majorities making binding decisions about complex technical questions on the basis of gut reactions, ideology, or manipulation. This is the pathology that Fung's "Clogger" thought experiment identifies: an AI system that can optimize the persuasion of individual voters is most dangerous precisely in a system where voter preferences are translated directly into governance outcomes without the mediating influence of deliberation. Clogger exploits the gap between aggregative democracy and deliberative democracy, and it does so most effectively when the aggregative mechanism has binding authority.
Deliberation plus consequence without accessibility produces empowered elite deliberation — a minipublic of the already-privileged, whose considered judgments carry binding authority but whose composition excludes the populations most directly affected by the decisions. This is the pathology of the expert commission: thoughtful, deliberative, consequential, and unrepresentative. It is the dominant mode of AI governance in 2026, and it produces the distributional outcomes that one would predict from a process designed by and for technical and political elites.
Only the simultaneous satisfaction of all three conditions produces what Fung's research has demonstrated to be genuine empowered participatory governance — processes that are accessible to affected populations, structured for meaningful deliberation, and connected to actual governance decisions with binding authority. The three conditions constitute an evaluative framework against which any proposed governance institution can be assessed, and the assessment provides the basis for distinguishing genuine democratic governance from its simulation.
The application of this framework to AI governance reveals a landscape in which no existing mechanism meets all three conditions. The EU AI Act's consultation processes are nominally accessible but not deliberative and not consequential for individual participants. Corporate ethics boards may be deliberative in their internal proceedings but are not accessible to affected populations and not consequential for corporate decisions. Congressional hearings are accessible to those invited to testify and consequential in the limited sense that they inform legislative deliberation, but they are not deliberative in their format and exclude the vast majority of affected populations.
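The framework is mechanical enough to state as a decision procedure. The sketch below is a purely illustrative formalization in Python; the boolean encoding and the function name are ours, not Fung's, and his three conditions are qualitative judgments rather than binary flags. It simply maps each combination of conditions to the outcome described above, applied to the assessments of the preceding paragraph.

```python
# Illustrative formalization of the three-condition framework.
# The encoding is ours: Fung's conditions are qualitative judgments,
# not booleans, so treat this as a mnemonic, not a measurement tool.

def assess(accessible: bool, deliberative: bool, consequential: bool) -> str:
    """Classify a governance mechanism by which conditions it satisfies."""
    if accessible and deliberative and consequential:
        return "empowered participatory governance"
    if accessible and deliberative:
        return "sophisticated focus group: judgments with no governance weight"
    if accessible and consequential:
        return "crude direct democracy: binding decisions without deliberation"
    if deliberative and consequential:
        return "empowered elite deliberation: consequential but unrepresentative"
    return "consultative theater"

# Rough encodings of the assessments in the text:
mechanisms = {
    "EU AI Act consultation": (True, False, False),
    "corporate ethics board": (False, True, False),
    "congressional hearing":  (False, False, True),
    "Porto Alegre budgeting": (True, True, True),
}

for name, conditions in mechanisms.items():
    print(f"{name}: {assess(*conditions)}")
```

Running the sketch classifies the first three mechanisms as failures of one kind or another and only the Porto Alegre design as empowered participatory governance, which is exactly the conclusion the prose reaches.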
The institutional design challenge is to create mechanisms that satisfy all three conditions simultaneously within the specific constraints of the AI governance context. Those constraints are formidable. The speed of technological change demands governance processes faster than traditional deliberative timescales. The technical complexity of AI demands information-provision systems that make substantive participation possible for non-specialists. The global scope of AI deployment demands governance mechanisms that cross jurisdictional boundaries.
These constraints are real but not insuperable. Participatory budgeting has been scaled from a single Brazilian city to hundreds of cities worldwide. Deliberative polling has been conducted at national and international scales. Citizens' assemblies in Ireland produced constitutional amendments on issues — abortion, same-sex marriage — that conventional political processes had failed to resolve for decades. Each of these innovations confronted specific constraints that were considered insuperable before the institutional design proved otherwise.
The participatory governance of AI requires comparable institutional innovation, and the design principles for that innovation are available. They are grounded in decades of empirical evidence about what works, what fails, and what determines which outcome obtains. The evidence does not guarantee success. It identifies the conditions under which success is achievable. Whether those conditions are created depends on choices being made now — choices about institutional design that will determine whether the governance of the most consequential technology in human history meets the three conditions that democratic legitimacy requires, or whether it replicates the pattern of every previous technological transition: exclusion during the formative period, followed by decades of costly struggle to retrofit institutions that were designed without the participation of those they govern.
The evidence from Porto Alegre and Chicago and Kerala and Ireland says the same thing: participation works when it is designed to work. The design principles are known. The implementation is the challenge. And the window for implementation is the period of institutional plasticity that the current moment provides — a period that will not remain open indefinitely.
In February 2026, twenty experienced engineers in a room in Trivandrum, India, encountered a technology that made their existing expertise simultaneously more valuable and less sufficient. The encounter, as described in The Orange Pill, followed a specific sequence: disorientation on Monday, experimentation on Tuesday, breakthrough on Wednesday, recalibration by Friday. By the end of the week, individual engineers were producing output that previously required entire teams. The productivity multiplier was estimated at twenty-fold.
Fung reads this account not as a technology story but as a governance story — one of the clearest natural experiments in participatory versus non-participatory management of technological disruption available in the current period. The Trivandrum experiment succeeded, in Fung's assessment, because it met the three conditions of empowered participatory governance within the specific constraints of an organizational setting. It failed — or rather, it illuminated the limits of organizational solutions — because the conditions that produced its success were contingent on individual leadership rather than institutional requirement.
The participatory quality of Trivandrum is visible in specific design features that distinguish it from the dominant alternatives.
The engineers were not told how to use the tools. They were given the tools and invited to discover their capabilities and limitations through direct engagement, within the context of their own work. This design choice — experiential rather than instructional — is significant because it treats the engineers as possessors of practical knowledge rather than recipients of technical training. The instructional approach assumes that the relevant knowledge about the technology resides with the implementer and must be transferred to the workforce. The experiential approach assumes that the relevant knowledge about the technology's implications for actual work resides with the workers and can only be generated through their direct engagement with the technology in the context of their practice.
The distinction maps onto a finding that Fung's research has documented across multiple participatory contexts: governance processes organized around the practical knowledge of affected populations produce superior outcomes to processes organized around expert knowledge alone. The engineers in Trivandrum discovered use cases that tool designers had not anticipated. They identified limitations that official documentation did not acknowledge. They developed workflows adapted to the specific requirements of their projects and team capabilities. This knowledge was produced not by expertise in AI but by expertise in the work that AI was being asked to augment, and it was accessible only through the kind of direct, exploratory engagement that the experiential design enabled.
The process was deliberative. The engineers did not work in isolation. They discussed their discoveries, compared approaches, debated implications, and collectively developed understandings more complete than any individual could have reached alone. The deliberative quality was facilitated by physical co-presence — the shared space in which individual discoveries became collective knowledge, in which one engineer's insight could catalyze another's investigation, in which the emotional reality of the experience could be processed collectively rather than individually.
And the process was consequential. The engineers' discoveries directly influenced organizational decisions about deployment strategy and team restructuring. They were not consulted about pre-formed plans. They participated in the formation of the plans, and their practical knowledge shaped deployment in ways that no top-down implementation could have anticipated.
The consequence dimension was undergirded by a structural commitment that Fung considers essential to understanding the experiment's success. The decision to retain and develop the team rather than converting productivity gains into headcount reduction gave the participatory process its credibility. Without this commitment, the engineers' engagement would have been compromised by the rational suspicion that their discoveries were being used to identify which of them could be replaced. The productivity gains were substantial — individual engineers achieving output levels previously requiring entire teams — and the conventional organizational response would have been workforce reduction. The decision to invest gains in expanded capability rather than reduced cost was a participatory commitment: it signaled that the engineers' participation in the governance of their own technological transition was valued substantively, not merely instrumentally.
This commitment illustrates a principle that Fung's research has identified across multiple contexts: empowered participation requires credible commitment from the authorities who convene it. The commitment must be credible in the specific sense that it must be costly to the committing party. A commitment that costs nothing — a promise to listen, a pledge to consider — is cheap and therefore unreliable. The engineers in Trivandrum could reasonably believe that their participation mattered because the organizational leader had made a commitment that was expensive: retaining a team that, in purely financial terms, could have been reduced. The costliness was the guarantee.
Fung situates Trivandrum within a taxonomy of four organizational approaches to AI deployment, each reflecting a different stance toward participation.
Unilateral implementation deploys AI tools by management decision without meaningful worker consultation. It treats AI deployment as a capital investment decision — a management prerogative analogous to purchasing new equipment. Workers are informed of the change and expected to adapt. This approach is the organizational default, and it produces the outcomes that Fung's framework predicts for governance processes that exclude affected populations: resistance, disengagement, quality problems, and the loss of practical knowledge that only the affected workers possess.
Consultative engagement solicits worker input through surveys, town halls, and suggestion programs without committing to incorporate that input into decisions. It meets the accessibility condition — workers can express views — but fails both the deliberation condition and the consequence condition. Workers experience the process as performative, and governance decisions proceed as they would have without the consultation.
Negotiated implementation deploys AI through formal bargaining between management and worker representatives, typically unions. It meets the consequence condition — negotiated outcomes are binding — but may fail the accessibility condition, since only union members participate, and the deliberation condition, since bargaining is adversarial rather than deliberative, producing compromises between opposed positions rather than syntheses of diverse perspectives.
The fourth approach is the Trivandrum model — empowered participatory engagement in which affected workers are genuine participants in the governance of their own technological transition, through processes that are accessible, deliberative, and consequential. This approach is both the most promising and the rarest, because it requires organizational leaders willing to share governance authority over decisions they could unilaterally control.
The rarity is the critical point. Trivandrum succeeded because one organizational leader chose participatory governance. The next leader, facing identical capabilities and identical economic incentives, might choose unilateral implementation. Already the arithmetic is on every boardroom table: if five people can do the work of a hundred, why retain the hundred? The Orange Pill documents the author's own acknowledgment that this calculation recurs every quarter, that the pressure to convert productivity gains into headcount reduction is structural rather than personal, and that the market does not reward patience.
A governance model that depends on the accident of individual leadership is not a governance model. It is a lottery, and lotteries do not constitute institutional design. The challenge is to convert the Trivandrum model from an exceptional instance of enlightened practice into a standard institutional requirement — to create structures that make participatory engagement the default rather than the option.
This conversion requires moving from organizational choice to institutional mandate. The mechanisms for this conversion are not unprecedented. Labor law already mandates specific forms of worker participation in certain governance decisions — occupational safety, collective bargaining, plant closure notification. The extension of participatory requirements to AI deployment decisions would follow the same legal and institutional logic: where organizational decisions significantly affect the conditions of workers' lives, institutional design should ensure that affected workers participate in making them.
The specific mechanism Fung's framework suggests is the Transition Deliberation Committee — a standing body within organizations deploying AI systems, composed of workers from affected departments, management representatives, technical specialists, and where applicable union representatives. The Committee would exercise formal governance authority over specific aspects of AI deployment: the pace of implementation, the design of training and support programs, the definition of quality standards for AI-augmented work, and the allocation of productivity gains.
The Committee would be designed against the three conditions. Accessibility would be ensured by scheduling activities during working hours, providing preparation time, and selecting members through a combination of election and sortition. Deliberation would be ensured through facilitated dialogue, expert testimony, and — critically — hands-on engagement with the AI systems being deployed, the experiential deliberation principle that Trivandrum demonstrates. Consequence would be ensured by giving the Committee genuine authority over specified deployment decisions, subject to an appeals process that preserves management's override capacity in exceptional circumstances but requires public justification.
The design draws on tested precedents. European works councils have operated for decades with formal governance authority over specified workplace decisions. German codetermination law requires worker representation on the supervisory boards of large corporations. Scandinavian tripartite governance structures bring labor, management, and government representatives into structured negotiation over sectoral economic conditions. None of these precedents is perfect. All demonstrate that institutional mandates for worker participation in organizational governance are feasible, sustainable, and productive of outcomes superior to unilateral management decision-making.
The Trivandrum experiment demonstrates what is possible when the conditions are right. The institutional question is how to make the conditions right by design rather than by luck — how to create structures that produce Trivandrum-quality participation across organizations, regardless of whether the individual leader at the top happens to be someone who values participatory governance.
The Orange Pill frames the decision to retain the Trivandrum team as a moral choice. Fung reframes it as a governance design problem. The moral choice was admirable. But the populations affected by AI disruption cannot depend on admirable choices. They need institutional structures that produce good outcomes even when the people in charge are not especially admirable — structures that are, in the language of institutional design, robust to variation in leadership quality. That robustness is what distinguishes governance from goodwill, and it is what the AI transition's period of institutional plasticity must be used to build.
---
The Orange Pill identifies a population that Fung finds both analytically illuminating and politically essential: the silent middle. The term describes the large population of workers, parents, and citizens who experience the AI transition with a mixture of recognition and apprehension, who see both the technology's promise and its threat, and who find themselves without an institutional channel through which to express or act upon their complex response.
The silent middle is defined not by ignorance or indifference but by the absence of a forum in which its characteristic voice — informed ambivalence, nuanced judgment, principled complexity — can be articulated, refined, and translated into governance influence. The voice exists. The institutional infrastructure for amplifying it does not.
Fung recognizes in the silent middle a phenomenon that research on deliberative democracy has identified in multiple contexts: the existence of large populations whose considered judgments differ systematically from the positions represented in public discourse. Public discourse on AI, like discourse on most complex social questions, is dominated by the extremes — technology evangelists who see AI as an unambiguous benefit to be adopted with maximum speed, and technology critics who see it as a threat to be resisted or constrained. The extremes command institutional infrastructure: the evangelists have the technology industry's extensive lobbying and public relations apparatus; the critics have advocacy organizations, labor movements, and political coalitions that articulate the case for caution.
The silent middle has no comparable infrastructure. Its members process their ambivalence individually, lack collective forums through which individual uncertainty could be transformed into collective judgment, and are therefore absent from the governance conversations that determine the institutional framework within which they make their individual decisions.
The silence is not a choice. It is a structural condition produced by the design of existing democratic mechanisms. Voting forces a choice between candidates whose positions on AI governance are underdeveloped or polarized. Polling captures uninformed opinions rather than the considered judgments that deliberation would produce. Public comment requires investments of time and expertise that most citizens cannot afford. Social media rewards the extreme, the provocative, and the simple — penalizing the nuanced, the complex, and the uncertain.
The silent middle holds the position that deliberative democracy is designed to produce — but holds it without the benefit of the deliberative process. The position is arrived at through individual reflection rather than collective engagement, and therefore lacks the articulation, the refinement, and the political organization that collective engagement provides.
Research on deliberative polling provides robust evidence that this population's considered judgments, if accessed through properly designed deliberative processes, would differ markedly from the positions currently represented in AI governance. When randomly selected citizens are brought together, provided with balanced briefing materials, given access to expert testimony from multiple perspectives, and engaged in facilitated small-group discussion, their views change in predictable directions: toward greater nuance, greater awareness of trade-offs, greater willingness to consider legitimate interests beyond their own. The changes are not toward predetermined conclusions. They are toward the kind of reflective quality that the silent middle already exhibits individually but that gains force and specificity through collective deliberation.
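The measurement behind this evidence is easy to state concretely. What follows is a minimal sketch, not the published analysis pipeline: the same randomly selected participants answer the same questions before and after structured deliberation, and the analysis reports the per-question shift. Every question name and response value below is hypothetical, and the actual methodology involves much more than this simple comparison.

```python
# Minimal sketch of the deliberative-polling comparison (not the
# published analysis pipeline). The same participants answer the same
# questions before and after deliberation; we report the mean shift.
# All question names and responses are hypothetical, on a 0-10 scale.
from statistics import mean

pre = {
    "support_ai_in_hiring":      [7, 8, 6, 9, 7, 8],
    "require_human_appeal_path": [4, 3, 5, 4, 6, 3],
}
post = {
    "support_ai_in_hiring":      [5, 6, 5, 7, 6, 6],
    "require_human_appeal_path": [8, 7, 9, 8, 8, 7],
}

for question in pre:
    before, after = mean(pre[question]), mean(post[question])
    print(f"{question}: {before:.1f} -> {after:.1f} "
          f"(shift {after - before:+.1f})")
```

What the research finds in shifts like these is not movement toward a predetermined answer but movement toward more qualified, trade-off-aware positions.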
The application to AI governance is direct. The governance inputs currently available — industry advocacy, critical advocacy, expert analysis, poll-measured opinion — are each partial. Deliberative inputs would be more comprehensive, because the process requires engagement with the full range of perspectives and evidence. They would be more nuanced, because the deliberative format rewards complexity. And they would be more representative, because random selection ensures that the silent middle is present in proportions reflecting its actual prevalence — which is to say, in proportions that would dominate any properly constituted deliberative body, because the silent middle is, by every available measure, the majority.
The silent middle is not merely a constituency to be represented. It is a knowledge resource to be accessed. Its members possess practical knowledge about navigating the AI transition — as workers, parents, citizens, human beings — that governance processes need. The customer service representative who has watched colleagues replaced by chatbots possesses knowledge about the quality of AI-mediated interactions that no engineer has access to. The teacher observing students' engagement with AI writing tools possesses knowledge about educational implications that no policymaker can replicate. The parent making daily decisions about a child's relationship with technology possesses knowledge about developmental effects that no researcher can substitute.
This practical knowledge is not anecdotal. It is a form of expertise grounded in direct experience and refined through continuous engagement. It is what Aristotle called phronesis — practical wisdom — and deliberative participatory processes are designed specifically to access it. The exclusion of this knowledge from AI governance is not merely unjust. It is inefficient, depriving governance decisions of information that would improve their quality.
The Orange Pill captures the parental dimension of this knowledge gap with particular precision. The book describes parents confronting their children's use of AI tools without adequate information, collective deliberation, or institutional support. The parent who restricts AI use and the parent who encourages it are both making governance decisions — decisions about the conditions under which the next generation will learn and think — but making them in isolation, without the collective intelligence that deliberative participation would provide.
The institutional response to the silent middle's structural silence requires mechanisms designed around two principles that conventional democratic mechanisms violate.
The first principle is random selection. Any participatory mechanism that depends on self-selection — that requires participants to volunteer, seek out the opportunity, invest their own time and resources — will underrepresent the silent middle, because the middle's defining characteristic is the absence of the political engagement that self-selection requires. Sortition — random selection stratified for demographic representativeness — captures the silent middle in proportions reflecting its actual presence. It does not require mobilization, prior engagement, or political commitment. It produces participant pools more representative of the general population than any self-selected process.
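The selection mechanism itself is mechanically simple, which is part of its appeal. Here is a minimal sketch of stratified sortition under stated assumptions: a single hypothetical stratification attribute, proportional seat allocation, and random draw within each stratum. Real implementations stratify on several attributes at once (age, gender, region, education) and correct for rounding so the panel hits its exact target size.

```python
# Minimal sketch of stratified sortition: seats are allocated to each
# demographic stratum in proportion to its population share, then
# filled by random draw within the stratum. Field names are
# hypothetical; only one stratification attribute is used here.
import random
from collections import defaultdict

def sortition(population, stratum_of, seats, seed=42):
    rng = random.Random(seed)
    groups = defaultdict(list)
    for person in population:
        groups[stratum_of(person)].append(person)

    panel = []
    for members in groups.values():
        # Proportional quota, rounded; a production design would also
        # repair rounding so the panel totals exactly `seats`.
        quota = round(seats * len(members) / len(population))
        panel.extend(rng.sample(members, min(quota, len(members))))
    return panel

population = [{"id": i, "region": random.choice(["urban", "rural"])}
              for i in range(10_000)]
panel = sortition(population, lambda p: p["region"], seats=50)
print(f"{len(panel)} members drawn, no volunteering required")
```

The design point the sketch makes visible is that nothing in the procedure rewards prior political engagement: participation is an invitation extended by the draw, not a prize claimed by the mobilized.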
The second principle is deliberative structure. The silent middle's voice requires institutional support to become articulate. The individual who experiences informed ambivalence about AI needs structured opportunities — balanced information, facilitated dialogue, adequate time for reflection — to develop that ambivalence into a considered position that can be communicated, debated, and translated into governance influence. Deliberation transforms individual uncertainty into collective judgment, not by eliminating uncertainty but by giving it institutional form — the form of nuanced, qualified, trade-off-aware positions that reflect genuine engagement with complexity rather than retreat from it.
The combination of random selection and deliberative structure produces what Fung's framework identifies as minipublics — small, representative bodies of randomly selected citizens who deliberate on specific governance questions under conditions designed to produce informed, considered judgments. The minipublic format has been tested extensively — in Ireland, where citizens' assemblies produced constitutional amendments on issues that conventional politics had failed to resolve; in France, where a citizens' convention on climate change produced policy recommendations that the government partially implemented; in dozens of other contexts worldwide.
The adaptation of the minipublic format to AI governance faces specific challenges. The technical complexity of AI requires information-provision systems that make substantive participation possible for non-specialists — not by simplifying the technology but by translating its implications into terms that connect to participants' lived experience. The speed of technological change requires standing bodies that maintain continuous engagement rather than ad hoc assemblies convened in response to specific decisions. The global scope of AI deployment requires mechanisms that connect local deliberative processes to decisions made at higher jurisdictional levels.
These challenges are design problems, not impossibilities. The experience of deliberative governance across multiple contexts demonstrates that non-specialist citizens, given adequate information and structured deliberative opportunities, can engage meaningfully with technically complex questions. They do not become technical experts. They become informed judges — citizens capable of evaluating competing expert claims, weighing trade-offs, and rendering considered judgments about questions that technical expertise alone cannot answer.
The silent middle is not silent because it has nothing to say. It is silent because the institutional architecture of democratic governance provides no amplifier for its voice. Building that amplifier — through randomly selected, deliberatively structured, consequentially connected participatory mechanisms — is both the most direct response to the democratic deficit in AI governance and the most reliable method for accessing the practical knowledge that the governance of the AI transition requires.
---
The institutional proposals that follow are not speculative. Each is adapted from governance mechanisms that have been designed, implemented, evaluated, and refined across multiple contexts. The innovation is not in the mechanisms themselves but in their application to AI governance — an application that requires adaptation to the specific constraints of the domain but not the invention of fundamentally new institutional forms.
The proposals are organized by governance level, from organizational to global, because the AI governance challenge manifests differently at different scales and requires different institutional responses at each. But the proposals share a common evaluative standard: each must satisfy the three conditions of accessibility, deliberation, and consequence simultaneously, because the evidence is unambiguous that any mechanism failing even one of the three conditions will degrade into some form of consultative theater.
At the organizational level, the mechanism is the Transition Deliberation Committee described in Chapter 5 — a standing body within organizations deploying AI systems, with formal authority over specified deployment decisions. The design draws on the precedent of European works councils, German codetermination structures, and Scandinavian tripartite governance. The specific adaptation for AI deployment includes the experiential deliberation principle — structured opportunities for committee members to work directly with the AI systems whose deployment they govern, so that practical knowledge from direct engagement informs the committee's deliberation. The Trivandrum experiment demonstrates that this experiential component is not optional. It is the mechanism through which the practical knowledge that makes participatory governance superior to expert-only governance is generated.
At the municipal level, the mechanism is the AI Impact Assembly — a standing body of randomly selected citizens, stratified for demographic representativeness, that deliberates on the local implications of AI deployment. The model draws directly on the citizens' assembly format that has been implemented in Ireland, France, Canada, and elsewhere. The specific adaptations for the AI context include three features.
First, the Assembly is standing rather than ad hoc, reflecting the continuous nature of the AI governance challenge. Members serve staggered terms — perhaps eighteen months, with one-third rotating every six months — to ensure both continuity of institutional knowledge and regular infusion of new perspectives.
Second, the Assembly has access to a dedicated technical translation team whose function is not to simplify AI but to connect its technical features to their implications for the community. The translation team does not tell participants what to think about AI. It provides the informational bridge between technical capability and lived consequence that enables non-specialist participants to engage substantively. The precedent here is participatory budgeting, where participants were not required to understand public finance theory but were provided with accessible information about budget constraints, revenue sources, and the costs of alternative allocations.
Third, the Assembly has formal investigative powers — the authority to request information from technology companies operating within the municipality, to hear testimony from affected workers and community members, and to commission independent research on specific governance questions. These powers are not advisory. They are institutional capacities that give the Assembly genuine governance weight within the municipal system.
At the sectoral level, the mechanism is the Sectoral AI Governance Board — a body that brings together workers, employers, technical specialists, and community representatives within specific economic sectors to deliberate on the governance of AI deployment within those sectors. The sectoral focus addresses a limitation of general-purpose participatory mechanisms: AI manifests differently in healthcare than in education, differently in finance than in manufacturing, and governance processes not attuned to sectoral specificity produce outcomes too general to be useful.
The design adapts the tripartite governance structures of Nordic labor market systems, with one critical addition: a fourth constituency of community representatives whose interests are affected by sectoral AI deployment but who are not captured by the labor-management framework. The healthcare AI governance board, for example, would include not only healthcare workers and healthcare employers but patients and community health organizations — the populations whose health outcomes are affected by clinical AI deployment but whose interests are not represented by either the labor or the management side of the traditional tripartite structure.
At the national level, the mechanism is the National AI Deliberation Platform — a large-scale deliberative process combining representative sampling, structured information provision, and facilitated small-group deliberation to generate informed public judgments on national AI governance questions. The model draws on Fishkin's deliberative polling methodology, adapted for ongoing engagement rather than single-event deployment.
The Platform would operate in two modes. In its standing mode, it would maintain a continuously refreshed panel of randomly selected citizens who receive regular briefings on AI governance developments and provide deliberated input on ongoing policy questions. In its intensive mode, it would convene larger, nationally representative deliberative events focused on specific high-stakes governance decisions — the regulation of AI in electoral campaigns, the governance of AI in education, the distributional questions raised by AI-driven productivity gains.
The standing mode addresses the temporal challenge that AI governance poses to deliberative processes. Traditional deliberative events operate on timescales of weeks to months — adequate for governance questions that evolve slowly but inadequate for a technological domain where significant capabilities emerge in weeks. A standing panel that maintains continuous engagement with the governance landscape can provide deliberated input at the speed that AI governance decisions demand, without sacrificing the deliberative quality that distinguishes empowered participation from reactive opinion.
At the global level, two mechanisms operate in parallel. The first is the Global Worker Voice Network — a transnational participatory structure connecting workers in different countries who are affected by AI-driven transformation of global supply chains. The Network uses digital platforms and periodic in-person gatherings to facilitate cross-border deliberation, enabling practical knowledge from different national contexts to be shared, compared, and synthesized into positions that carry weight in international governance forums.
The second is the International AI Governance Assembly — a global deliberative body, modeled on the World Wide Views methodology developed by the Danish Board of Technology, that convenes simultaneous deliberative events in multiple countries. The Assembly would not have binding governance authority. It would produce recommendations carrying the legitimacy of genuine democratic deliberation — a form of legitimacy that no currently existing international AI governance mechanism possesses, and a counterweight to the industry-dominated processes that currently shape international AI policy.
In the educational domain specifically, School AI Governance Councils would bring together teachers, parents, students of appropriate age, administrators, and technology specialists to deliberate on how AI tools are adopted, integrated into instruction, and monitored for educational effects. The Councils address a specific governance failure: the absence of any institutional mechanism through which the practical knowledge of teachers and the experiential knowledge of parents and students can influence decisions about AI in education that are currently made by administrators responding to vendor marketing and peer pressure rather than evidence or deliberation.
Each of these mechanisms is designed to be implementable within existing institutional frameworks. None requires constitutional amendment or revolutionary restructuring. Each adapts proven institutional forms to the AI governance domain. And each is evaluated against the three conditions: accessible to the populations it serves, structured for genuine deliberation, and connected to actual governance decisions with real authority.
The mechanisms are modular — different components can be implemented independently as political conditions permit. A municipality can create an AI Impact Assembly without waiting for national legislation. An industry sector can establish a Governance Board through collective agreement between employer associations and worker organizations. An individual organization can institute a Transition Deliberation Committee through internal governance reform. The modularity is deliberate: it enables progress at whatever level political will exists, rather than conditioning all progress on the achievement of comprehensive reform.
The mechanisms are also designed to learn. Each includes evaluation protocols — systematic assessment of whether the participatory process is producing the outcomes that the evidence from other contexts predicts. The evaluation asks specific questions: Are participants from affected populations actually participating, or are accessibility barriers producing skewed composition? Is the deliberative process producing the kind of nuanced, trade-off-aware judgments that deliberative theory predicts, or is it being captured by organized interests or ideological factions? Are the governance outcomes actually influenced by the participatory input, or are the mechanisms being treated as advisory despite their formal authority?
These evaluation questions are not academic. They are the mechanism through which institutional designs are refined through practice — the iterative process through which first-generation participatory institutions are improved into second-generation and third-generation versions that better serve their governance function. The history of participatory budgeting illustrates this iterative process: the Porto Alegre model was adapted, modified, and improved as it spread to hundreds of cities worldwide, with each implementation learning from the successes and failures of its predecessors.
The AI governance mechanisms proposed here will undergo the same iterative refinement. The first implementations will be imperfect. They will encounter challenges that the design did not anticipate and produce outcomes that fall short of the theoretical ideal. This is not a reason to delay implementation. It is a reason to implement, evaluate, and improve — the standard methodology of institutional innovation, applied to the most consequential governance challenge of the current period.
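To make the evaluation protocol concrete, here is a minimal sketch of what one cycle's evaluation record might look like. The metrics and thresholds are entirely hypothetical; the point is only that each of the three conditions gets an observable measure and an explicit failure flag, so that degradation into consultative theater is detected rather than suspected.

```python
# A minimal sketch of an evaluation-protocol record, with hypothetical
# metrics and thresholds. The three checks mirror the three conditions:
# accessibility (who actually participates), deliberation (quality of
# judgment), and consequence (whether input changes decisions).
from dataclasses import dataclass

@dataclass
class CycleEvaluation:
    # Accessibility: demographic divergence of participants from the
    # population they represent (0 = perfectly representative).
    representation_gap: float
    # Deliberation: share of final positions that name at least one
    # trade-off or opposing consideration, coded from transcripts.
    tradeoff_aware_share: float
    # Consequence: share of the body's recommendations that traceably
    # altered a governance decision, from decision records.
    adopted_share: float

    def failing_conditions(self, gap_max=0.10, tradeoff_min=0.5,
                           adopted_min=0.3):
        failures = []
        if self.representation_gap > gap_max:
            failures.append("accessibility")
        if self.tradeoff_aware_share < tradeoff_min:
            failures.append("deliberation")
        if self.adopted_share < adopted_min:
            failures.append("consequence")
        return failures

report = CycleEvaluation(0.07, 0.64, 0.12)
print(report.failing_conditions())  # -> ['consequence']
```

The thresholds themselves would be among the things revised from cycle to cycle, in the same iterative spirit.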
---
The strongest objection to participatory AI governance is not ideological. It is practical, and it deserves to be stated in its most forceful form before being answered.
The objection goes like this: AI capabilities are advancing on timescales of months. A model that represents a significant advance over its predecessor appears, is deployed, reshapes competitive dynamics, restructures workflows, and creates new governance challenges — all before a deliberative process could convene its first meeting. By the time a citizens' assembly has been briefed, has deliberated, and has produced its recommendations, the technology that prompted the assembly has been superseded by three subsequent generations. Participatory governance is a horse-and-buggy solution to a Formula One problem. The mismatch between deliberative temporality and technological speed is not a design challenge. It is a category error.
The objection has empirical support. The sequence of AI capability advances between December 2025 and March 2026 — the period that The Orange Pill describes as crossing a threshold — was compressed into weeks. Organizations that had planned their 2026 strategies based on pre-December assumptions discovered those strategies were obsolete before the first quarter ended. The pace has not slowed. A governance mechanism designed around quarterly deliberative sessions would find itself perpetually governing the previous quarter's technology.
Fung takes this objection seriously because dismissing it would be intellectually dishonest and because the objection identifies a real constraint that institutional design must address. But taking an objection seriously is different from conceding to it, and the concession that the speed argument demands — that expert-only governance is the only governance fast enough for AI — produces consequences far worse than the problem it claims to solve.
The speed argument proves too much. If the pace of technological change disqualifies participatory governance, it equally disqualifies every other form of governance that involves deliberation, review, or democratic accountability. Legislative processes are slower than deliberative assemblies. Regulatory proceedings are slower than either. Judicial review is slower still. If speed is the dispositive criterion, then the only governance mechanism adequate to AI is unilateral corporate decision-making — the technology company deploying whatever it builds, at whatever pace the market demands, with no governance constraint that might slow the process.
This is, in fact, the governance regime that currently obtains for the most consequential AI deployment decisions. And the results are precisely what one would expect from ungoverned deployment: distributional outcomes that favor the deployers, externalities borne by the affected populations, quality and safety problems that could have been identified by the practical knowledge of affected users and workers, and a progressive erosion of the institutional capacity for any form of governance at all.
The speed argument contains an implicit assumption that deserves examination: the assumption that the relevant governance decisions are the decisions about specific AI capabilities — whether to deploy this model, how to configure that system, what safety parameters to set for a particular application. If governance must operate at the level of individual capability decisions, then the temporal mismatch is real. No participatory process can keep pace with the release cycle of a major AI laboratory.
But the most consequential governance decisions about AI are not decisions about specific capabilities. They are structural decisions about the conditions under which capabilities are developed and deployed: Who captures productivity gains? What provisions are made for displaced workers? What quality standards apply to AI-augmented services? What transparency requirements govern AI systems that affect public welfare? What accountability mechanisms apply when AI deployment produces harm?
These structural decisions do not change at the speed of model releases. They operate on the timescale of institutional design — the same timescale on which participatory governance operates. The decision about whether productivity gains from AI deployment are shared with workers or captured entirely by shareholders is not a decision that needs to be remade every time a new model is released. It is a structural decision, made once and adjusted periodically, that establishes the framework within which specific deployment decisions occur. Participatory governance is not designed to make individual deployment decisions. It is designed to set the structural parameters within which those decisions are made — and those parameters operate on timescales fully compatible with deliberative processes.
The analogy is environmental regulation. Environmental governance does not attempt to regulate each individual emission event in real time. It establishes structural parameters — emission standards, monitoring requirements, liability frameworks — within which individual actors make individual decisions. The parameters are set through processes that include public participation, are revised periodically as conditions change, and operate on timescales of years rather than seconds. No one argues that the pace of industrial activity disqualifies public participation in environmental governance. The same logic applies to AI governance: the speed of capability development is a constraint on the governance of individual deployment decisions but not on the governance of the structural parameters that shape the deployment landscape.
The second objection — technical complexity — is more substantive. AI governance decisions involve genuine technical complexity that is difficult for non-specialist populations to navigate. The concern is that participatory processes will produce uninformed decisions that are worse than expert decisions — that opening governance to affected populations will replace the biases of expertise with the biases of ignorance.
The concern has a surface plausibility that dissolves under empirical examination. The evidence from deliberative governance across multiple domains demonstrates that non-specialist citizens, given adequate information and structured deliberative opportunities, engage meaningfully with technically complex questions and produce judgments that are both informed and distinct from expert consensus in ways that improve governance outcomes.
The WeBuildAI project, published by the Association for Computing Machinery in 2019, demonstrated this directly in the algorithmic governance context. The project developed a participatory framework enabling community stakeholders to construct computational models representing their preferences for algorithmic policy. The participants were not algorithm designers. They were community members affected by algorithmic decisions. And the governance outcomes they produced — through structured deliberation informed by accessible technical information — reflected considerations that expert-only governance had systematically neglected.
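The published WeBuildAI framework is richer than any summary can convey, but its core move can be illustrated with a hedged sketch: each stakeholder's elicited preferences are encoded as a small scoring model, and the models then vote as proxies on each concrete decision. Every feature name, weight, and option below is hypothetical, chosen only to show the shape of the mechanism.

```python
# Illustrative sketch only; not the published WeBuildAI system. The
# general idea: each stakeholder's elicited preferences become a model
# (here, a simple weight vector over decision features), and the models
# vote as proxies on each concrete decision. All values hypothetical.
from collections import Counter

# Features of two candidate outcomes for an allocation decision.
options = {
    "option_a": {"need": 0.9, "distance_cost": 0.2, "past_service": 0.1},
    "option_b": {"need": 0.5, "distance_cost": 0.8, "past_service": 0.7},
}

# One weight vector per stakeholder group; in the real project these
# were elicited through structured interviews and pairwise comparisons.
stakeholder_models = {
    "donor":     {"need": 0.3, "distance_cost": -0.5, "past_service": 0.2},
    "volunteer": {"need": 0.4, "distance_cost": -0.8, "past_service": 0.1},
    "recipient": {"need": 0.9, "distance_cost": -0.1, "past_service": -0.3},
}

def score(weights, features):
    return sum(weights[f] * v for f, v in features.items())

# Each stakeholder model votes for the option it scores highest.
votes = Counter(
    max(options, key=lambda o: score(w, options[o]))
    for w in stakeholder_models.values()
)
print(votes.most_common(1)[0][0], "wins the proxy vote")
```

The design choice worth noticing is where the deliberation happens: stakeholders govern the standing preference models rather than each individual decision, the same structural-parameters logic that answers the speed objection above.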
The "Deep Learning Meets Deep Democracy" analysis published in the Business Ethics Quarterly made the point explicit: although most citizens lack formal qualifications to participate in AI expert discourse, they can be considered citizen experts inasmuch as they have experiential knowledge from varied contexts of AI systems in use. The practical knowledge of AI system users is not a substitute for technical expertise. It is a complement — a form of knowledge that technical expertise cannot generate and that governance processes need in order to produce outcomes adequate to the complexity of the systems being governed.
The information-provision challenge is real but tractable. Deliberative processes that have addressed comparably complex technical domains — nuclear energy policy, climate change governance, bioethics — have developed effective methods for translating technical content into accessible formats without distorting it. The methods involve not simplification but translation: connecting technical features to their implications for participants' lives, enabling informed judgment without requiring specialist training.
The third objection — failure modes of participation — is the one that Fung's own research addresses most directly, because his research has documented cases where participatory governance fails. Participatory processes can be captured by organized interests that invest disproportionately in participation to advance their agenda. They can produce outcomes inferior to expert governance when the deliberative design is poor — when information provision is inadequate, when facilitation is biased, when the process is too short for genuine deliberation to occur. They can degrade into factional conflict when the issues are highly polarized and the process does not include mechanisms for building shared understanding across divisions.
These failure modes are real. They are also design-dependent. The evidence shows that specific design features mitigate specific failure modes. Random selection mitigates capture by organized interests, because randomly selected participants cannot be pre-organized. Balanced information provision mitigates the risk of uninformed decisions. Skilled facilitation mitigates the risk of factional polarization. Adequate deliberative time mitigates the risk of shallow judgment.
The question is not whether participatory governance can fail — it can, and the conditions under which it fails are well-documented. The question is whether the failure rate of well-designed participatory governance is higher or lower than the failure rate of expert-only governance. The evidence, across multiple domains and multiple contexts, answers clearly: well-designed participation produces better outcomes than expert-only governance, not because participation is inherently superior but because participation accesses forms of knowledge that expert governance cannot reach, and the knowledge gap is the primary source of expert governance failure.
The case against participatory AI governance, stated in its strongest form, identifies real constraints that institutional design must address. The constraints are real. They are not insuperable. The temporal constraint is addressed by designing participation around structural governance decisions rather than individual deployment decisions. The complexity constraint is addressed by information-provision systems that translate technical content into accessible formats. The failure-mode constraint is addressed by design features that mitigate known pathologies.
The alternative — expert-only governance, operating within the fishbowl condition, producing outcomes that reflect elite priorities while excluding the practical knowledge of affected populations — is not merely inadequate. It is the demonstrated source of the governance failures that the AI transition has already produced and will continue to produce at increasing scale. The choice is not between perfect participatory governance and adequate expert governance. It is between imperfect participatory governance that accesses knowledge expert governance cannot reach, and expert governance that is systematically blind to the concerns and experiences of the populations it governs.
The evidence supports the former. The institutional designs for implementing it are available. The remaining question is political will — and political will is the subject of what follows.
The developer in Lagos does not need more friction. She has plenty — unreliable power grids, limited bandwidth, economic precarity, distance from the centers of capital and institutional support. What she needs, as The Orange Pill argues, is the removal of barriers between her intelligence and its expression. The AI tools that accomplish this removal represent a genuine democratization of capability, and the democratization is real even if it is partial.
But the developer in Lagos also lives inside a governance vacuum. The AI systems reshaping her economic environment — automating the call center jobs that employ her neighbors, restructuring the supply chains that connect her country to global markets, transforming the informational ecosystem through which her government communicates with its citizens — are developed in laboratories she has never visited, governed by regulatory frameworks in which she has no representation, and deployed through corporate decisions in which she has no voice. The capability democratization that The Orange Pill celebrates is occurring within a governance landscape that remains radically undemocratic. She can now build software that would have required a team five years ago. She cannot influence the institutional conditions under which that software exists in the world.
This is not a peripheral concern. It is the central governance challenge of the AI transition viewed at global scale, and it is the challenge that existing AI governance frameworks are least equipped to address. The EU AI Act regulates AI deployment within European borders. The American executive orders address AI governance within the United States. The emerging frameworks in Singapore, Brazil, and Japan address national concerns within national jurisdictions. None addresses the governance of AI's impact on the populations that are most severely affected and least institutionally represented — the hundreds of millions of workers in the Global South whose livelihoods are being reshaped by technologies developed on other continents, regulated by other governments, and deployed through global supply chains that cross dozens of jurisdictional boundaries without encountering meaningful governance at any of them.
The governance vacuum is not accidental. It is the predictable product of an international institutional architecture designed for a different era — an architecture in which governance authority is organized by territorial jurisdiction and economic governance is organized by trade agreements negotiated between national governments. This architecture has never been adequate for governing technologies that operate across borders, and its inadequacy is intensified by AI's specific characteristics: the fact that an AI system developed in San Francisco can restructure labor markets in Manila, reshape educational practices in Nairobi, and alter information environments in Dhaka without any of the affected populations having access to any governance mechanism through which they might influence the deployment decisions that affect their lives.
Fung's December 2024 workshop at the Ash Center confronted a specific dimension of this governance vacuum: the relationship between AI and democracy movements in authoritarian contexts. The workshop documented that democracy movements have experienced a historic decline in their ability to challenge autocratic governments, due in part to the changing technology landscape, which has allowed autocratic governments to monopolize the advantages of breakthrough technologies. The relatively slow adoption of AI tools by democracy movements may be widening the gulf between these movements and their adversaries.
The finding illuminates a structural asymmetry that operates across regime types. In authoritarian contexts, AI concentrates surveillance and censorship capabilities in the state. In democratic contexts, AI concentrates persuasion and attention-capture capabilities in corporations. In both cases, the technology amplifies existing power asymmetries — giving more power to actors who already possess it and reducing the relative capacity of less powerful populations to influence the conditions of their existence. The populations in Lagos, Dhaka, and Manila experience both forms of asymmetry simultaneously: they are subject to AI-enhanced governance by their own states and to AI-driven economic restructuring by multinational corporations, without meaningful participatory access to either governance process.
The extension of empowered participatory governance to these populations confronts challenges that the frameworks developed in previous chapters address only partially. The participatory experiments that constitute the empirical foundation of Fung's framework — Porto Alegre, Chicago, the Irish citizens' assemblies — were conducted within relatively stable democratic polities with functioning governance institutions, civil society organizations, and media ecosystems that provided the basic infrastructure for participatory engagement. These conditions do not obtain in many of the countries where AI disruption is most severe. Governance institutions are weak, captured by elite interests, or absent. Civil society organizations are underfunded, politically constrained, or nonexistent. Media ecosystems are dominated by state-controlled outlets or commercial entities with limited capacity for independent coverage.
The challenge is not a reason to abandon the participatory principle. It is a reason to develop institutional designs adapted to these conditions — designs that do not presuppose the institutional infrastructure of wealthy democracies but that build governance capacity through the participatory process itself.
Three design principles guide this adaptation.
First, participatory governance in resource-constrained contexts must be low-cost and self-sustaining. Porto Alegre's participatory budgeting succeeded partly because it was inexpensive: it required meeting spaces, facilitation, and information provision, but not elaborate professional infrastructure. AI governance mechanisms for the Global South must similarly be designed for implementation with minimal external resources, drawing on existing community structures, local organizations, and available communication technologies.
Second, participatory governance in contexts with weak formal institutions must create institutional capacity through the practice of participation. Community-based natural resource management in developing countries provides precedent: participatory processes for managing forests, fisheries, and water resources have been successfully implemented in communities with weak formal governance, creating new governance capacities through the participatory process itself. AI governance mechanisms can follow the same logic — building the institutional infrastructure for governance through the practice of governing.
Third, participatory governance in the global AI context must address the specific power asymmetries that characterize the relationship between affected populations and the corporations and governments that control AI deployment. The call center worker in Manila whose job is threatened by AI-powered chatbots is affected by decisions made in corporate headquarters on another continent, regulated by agencies in multiple jurisdictions, and mediated through supply chains crossing dozens of borders. Participatory mechanisms must operate across these boundaries, connecting local processes to the global decisions that determine local conditions.
These principles inform specific institutional proposals. The Global Worker Voice Network would be a transnational participatory structure connecting workers in different countries who are affected by AI-driven transformation of global supply chains. Using digital platforms for ongoing engagement and periodic in-person gatherings for intensive deliberation, the Network would enable practical knowledge from different national contexts to be shared, compared, and synthesized into collective positions. The Network's governance weight would derive not from formal authority — no existing international institution could grant it binding power — but from the democratic legitimacy of its deliberative process. Positions produced through genuine cross-border deliberation by affected workers carry a form of legitimacy that industry position papers and government communiqués do not possess, and that legitimacy can be deployed in international governance forums as a counterweight to the interests that currently dominate those forums.
The International AI Governance Assembly would adapt the World Wide Views methodology — simultaneous deliberative events in multiple countries, with shared informational materials and comparable deliberative formats — to AI governance questions. The Assembly would produce recommendations that carry the legitimacy of globally representative deliberation. The methodology has been tested: World Wide Views events on climate change and biodiversity have demonstrated that simultaneous cross-national deliberation is feasible, that participants in different countries can engage with the same informational materials and produce comparable deliberative quality, and that the results carry political weight despite the absence of formal authority.
The regional dimension requires AI Impact Assessments conducted by multidisciplinary teams that include local researchers and community representatives alongside international specialists. The assessments would evaluate the effects of AI deployment on specific populations and produce reports designed to inform deliberative processes at local, national, and international levels. The reports would be accessible to non-specialist audiences — not simplified but translated, connecting technical analysis to lived consequence in the way that effective participatory information provision has demonstrated across multiple domains.
The global governance proposals face an objection that must be stated plainly: the existing international institutional architecture provides almost no mechanism for implementing them. There is no global legislature to mandate a Worker Voice Network, no international agency to convene an AI Governance Assembly, no enforcement mechanism to require corporate compliance with participatory governance standards across borders. The proposals operate in the gap between what global governance currently provides and what the AI transition requires.
The gap is real. But the history of international institutional development suggests that governance innovations often precede formal institutional authority rather than following it. The international human rights framework was developed through advocacy, norm-setting, and the gradual construction of institutional mechanisms over decades. The environmental governance framework emerged through similar processes — international agreements building incrementally on the work of advocacy organizations, scientific communities, and citizen movements that created the political conditions for institutional innovation.
AI governance at the global level will follow a comparable trajectory — not because the process is automatic but because the governance vacuum is producing consequences that will generate demand for institutional response. The question is whether the institutional designs are ready when the political moment arrives. Fung's contribution is to ensure that they are — to develop proposals grounded in evidence, adapted to the specific constraints of the global AI governance context, and ready for implementation when the political conditions permit.
The developer in Lagos is not waiting for global governance institutions. She is building, now, with the tools available to her. But the institutional conditions under which she builds — the labor protections, the intellectual property frameworks, the distributional rules, the accountability mechanisms — are being determined in rooms she cannot enter, by processes she cannot influence, in jurisdictions she does not inhabit. Building the infrastructure of voice — the participatory mechanisms through which her practical knowledge and legitimate interests can enter the governance of the technology that is reshaping her world — is not a luxury to be addressed after the more pressing challenges of capability and access. It is a precondition for ensuring that the capability and access she gains are not captured by the same power asymmetries that the technology was supposed to overcome.
---
The argument of this book has moved through a specific sequence: from the diagnosis of a governance deficit to the analysis of its structural causes, from the identification of the populations it excludes to the specification of institutional mechanisms that could correct it. The sequence has been analytical rather than aspirational, grounded in evidence rather than ideology, and oriented throughout toward practical institutional design rather than abstract democratic theory.
The concluding argument is therefore practical rather than philosophical. It concerns the specific political conditions under which the institutional innovations proposed in the preceding chapters might actually be implemented, the specific obstacles that stand between the current governance landscape and the participatory alternatives the evidence supports, and the specific actions that the current moment requires.
The obstacles are formidable. Technology companies that would be most directly affected by participatory governance requirements have strong incentives to resist them and substantial political resources with which to do so. Political leaders who would need to create the institutions have limited understanding of participatory governance design and limited political incentive to invest in an unfamiliar governance paradigm. The affected populations who would benefit from the institutions are, by definition, the populations whose current participatory capacities are least developed.
But political conditions are not fixed. They are responsive to events, arguments, and the demonstrated success of innovations in other contexts. The political economy of institutional creation has a characteristic structure that Fung's research has documented across multiple domains: new governance institutions emerge not from comprehensive political campaigns but from the convergence of three factors — demonstration effects, political entrepreneurship, and crisis-driven demand.
Demonstration effects occur when small-scale experiments produce outcomes compelling enough to generate demand for replication. Participatory budgeting spread from Porto Alegre to hundreds of cities not through coordinated political campaigns but because the original experiment's documented success created demand that political entrepreneurs in other contexts could mobilize. Citizens' assemblies spread from British Columbia to Ireland to France through the same mechanism: each successful implementation made the next one easier, because each provided evidence that skeptics could evaluate and advocates could cite.
For AI governance, the demonstration strategy implies beginning at the level where political will is most accessible and institutional barriers are lowest. Municipal AI Impact Assemblies can be created by city councils without national legislation. Organizational Transition Deliberation Committees can be established through corporate governance reform or sectoral collective agreements. School AI Governance Councils can be created through school board action. Each successful implementation at any level provides the evidence base and the practical experience that makes implementation at higher levels politically feasible.
Political entrepreneurship is the work of specific actors — elected officials, organizational leaders, civil society advocates — who recognize the governance deficit, understand the institutional designs that could address it, and possess the skills and resources to navigate the political process through which new institutions are created. The political entrepreneurs for participatory AI governance may come from unexpected quarters. The Ash Center's December 2024 workshop brought together democracy activists and tech specialists — a coalition that would not have been intuitive five years ago but that the AI transition has made natural. The convergence of concerns about democratic degradation with concerns about technological disruption creates potential political coalitions that did not previously exist.
Crisis-driven demand is the most unpredictable factor and historically the most powerful. Major governance innovations typically emerge not from gradual reform but from crisis events that make the inadequacy of existing arrangements impossible to ignore. The labor protections that eventually distributed industrialization's gains emerged from decades of worker suffering that produced political crises. Environmental regulation emerged from ecological disasters that made the costs of unregulated industry visible. Financial regulation emerged from market crashes that demonstrated the consequences of inadequate governance.
The AI transition has not yet produced its defining crisis — the event that makes the governance deficit visible to populations who have not yet recognized it. But the structural conditions for such a crisis are present. The displacement of workers, the disruption of educational systems, the degradation of informational environments, the concentration of economic power — each of these trends is accelerating, and each has the potential to produce the kind of visible, concentrated harm that transforms diffuse concern into focused political demand.
The question is whether the institutional designs will be ready when the demand arrives. The history of governance innovation teaches that the quality of the institutional response to crisis depends heavily on whether viable alternatives have been developed in advance. The New Deal's institutional innovations were possible because decades of Progressive Era experimentation had developed the institutional models that the crisis made politically feasible. The European welfare state's institutional architecture was possible because decades of social democratic theorizing had produced the designs that postwar reconstruction made politically necessary.
The institutional proposals developed in this book are designed to serve the same function for AI governance. They are available — specified in sufficient detail to serve as the basis for legislative, regulatory, or organizational action. They are grounded in evidence — adapted from institutional models that have been tested, evaluated, and demonstrated to work across multiple contexts. And they are designed for a political moment that has not yet arrived but whose preconditions are visibly assembling.
The political dynamics described above unfold in three phases: demonstration, diffusion, and institutionalization. The theory of political change that connects these phases is not deterministic. The movement from one phase to the next depends on political agency, institutional capacity, and contingent circumstances that cannot be predicted. But the theory identifies the specific actions that each phase requires, and those actions can begin immediately.
Demonstration requires the creation of pilot participatory governance mechanisms — at municipal, organizational, or sectoral levels — that produce documented evidence of effectiveness. Diffusion requires networks of practitioners, researchers, and advocates who translate successful pilots into models adaptable to new contexts. Institutionalization requires legislative and regulatory frameworks that convert voluntary practices into institutional requirements.
Each of these actions is within the capacity of actors who exist in the current political landscape. City councils can authorize AI Impact Assembly pilots. Corporate boards can mandate Transition Deliberation Committees. Sectoral associations can establish AI Governance Boards. Academic institutions can evaluate the results and produce the evidence base that supports broader adoption. None of these actions requires waiting for comprehensive national or international reform. Each can begin now, within existing institutional frameworks, and each contributes to the conditions under which more ambitious reforms become politically feasible.
The final point concerns the stakes. The governance arrangements that emerge from the current period of institutional plasticity will determine how the benefits and costs of AI are distributed across populations, industries, regions, and generations. They will determine whether the most powerful technology in human history is governed by the many or the few, with the practical wisdom of affected populations or without it. They will determine whether the pattern of every previous technological transition — exclusion during the formative period, decades of costly institutional retrofit — is repeated, or whether a different pattern is possible.
Fung's research, across decades and multiple domains, has established that a different pattern is achievable when participatory institutions are properly designed. The designs are available. The evidence supports them. The political window is open. The imperative is to act while the window remains open — to build the governance institutions that the AI transition requires before the period of plasticity closes and the arrangements that emerge from the current moment harden into structures as resistant to modification as the factory system of the nineteenth century.
The Luddites had no room. The question is whether the affected populations of the AI transition will have one. The answer is not determined by any force outside human agency. It is determined by choices being made now, by policymakers, organizational leaders, civic entrepreneurs, and citizens who may or may not understand the stakes of what they are deciding. The democratic governance of AI is not inevitable. Neither is its absence. Both are possible. The choice between them is the defining governance question of the current generation, and the institutions that answer it will shape the conditions of human life for generations to come.
The room must be built. The time to build it is now.
---
The sentence that disoriented me was not about technology.
It was Fung and Lessig's line about Clogger — that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools. I read that and felt something shift beneath a certainty I had been carrying without examining it. I had been thinking about AI governance as a question of capability — how powerful will the systems become, and can we build dams fast enough? Fung was saying the governance crisis is already here, produced not by superintelligence but by the ordinary competitive logic of human institutions encountering tools that amplify whatever they are given.
That reframe changed the urgency.
In The Orange Pill, I argued that the Luddites' deepest error was withdrawal — stepping out of the room where decisions were being made. Fung's response has forced me to sit with something I was not prepared to hear: the room I was urging people to stay in does not exist for most of the people who need it. Not because they lack the will to participate. Because the governance architecture was never designed for their participation. The customer service representative whose job is being restructured by a chatbot has no institutional channel through which to influence that restructuring. The parent making daily decisions about a child's relationship to AI has no collective forum through which to develop those decisions in conversation with other parents facing the same questions. The developer in Lagos — the one I wrote about, whose imagination-to-artifact ratio collapsed overnight — can build now, but she cannot govern the conditions under which her building matters.
Build the room. That is what I take from this encounter. Not just stay in it. Build it.
Fung insists on three conditions: accessibility, deliberation, consequence. They are not abstractions to me anymore. I saw them operating in Trivandrum, though I would not have used those words at the time. The engineers' participation worked because it was genuinely open to them, because they were given space to discover and discuss rather than merely receive instructions, and because what they discovered changed what we actually did. Fung's framework names what I experienced as a specific institutional achievement rather than a fortunate accident. And the naming matters, because accidents cannot be replicated. Designs can.
The hardest thing in these pages, for someone who builds, is the argument about speed. Every instinct in my body says move faster. The technology moves in months. Deliberation takes time. The mismatch feels fatal. But Fung makes a distinction I cannot shake: the decisions that need to happen fast are deployment decisions — which model, which configuration, which application. The decisions that need participation are structural decisions — who captures the gains, what protections exist for the displaced, what quality standards govern AI-augmented work. Those structural decisions operate on timescales compatible with deliberation. And getting them wrong is far more costly than getting them slowly.
I am still a builder. I will still be at my desk tomorrow, working with Claude, shipping products, pushing the frontier. But the frontier is not only technical. It is institutional. The governance architecture being established now — in this period of plasticity, in these months and years before the arrangements harden — will determine whether the AI revolution distributes its gains broadly or concentrates them in the hands of the few who build and deploy. Every previous technological revolution concentrated the gains until governance institutions were painfully, expensively reformed by movements that took decades to build. We do not have decades.
Fung's challenge to me, and to everyone building at the frontier, is this: the tools you are building will be governed. The question is whether the governance will include the people those tools affect. Your answer to that question — in the organizations you lead, the policies you advocate for, the institutions you help create or refuse to create — will matter more than the code you ship.
The room must be built. I intend to help build it.
— Edo Segal
The Orange Pill argued: stay in the room. Archon Fung, Harvard's leading theorist of participatory democracy, asks the harder question: what if the room was never built? Three decades of his research, from Porto Alegre to Chicago to Dublin, show that when ordinary citizens are given genuine authority over complex decisions, the outcomes are not just more equitable but measurably superior. This book applies Fung's framework of empowered participatory governance to the AI revolution's most urgent crisis: the structural exclusion of affected populations from the decisions reshaping their lives. The governance window is open now. The institutions that emerge from this moment will determine who captures AI's gains for generations. This is the blueprint for building the room before the window closes.

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that Archon Fung — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →