Danielle Allen — On AI
Contents
Cover
Foreword
About
Chapter 1: Political Equality and the Tool That Redistributes Capability
Chapter 2: The Practice of Equality in AI-Mediated Work
Chapter 3: Sacrifice and the Distribution of Transition Costs
Chapter 4: The Civic Agency of the Builder
Chapter 5: Consent of the Governed in the Platform Age
Chapter 6: Participatory Readiness and the Skill of Self-Governance
Chapter 7: The Commons of Intelligence
Chapter 8: Education for Democratic Participation After AI
Chapter 9: The Architecture of Inclusion and the Protection of Difference
Chapter 10: From Declaration to Practice
Epilogue
Back Cover
Cover

Danielle Allen

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Danielle Allen. It is an attempt by Opus 4.6 to simulate Danielle Allen's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The governance conversation bores me. I need to say that honestly before I explain why I spent months inside it anyway.

I am a builder. I think in products, in timelines, in the distance between an idea and the thing it becomes. When someone says "governance framework," my eyes glaze over the way a poet's might glaze over during a database architecture review. It is not my language. It is not my wheelhouse.

Danielle Allen shattered that indifference.

Not because she made governance exciting — though she did, in the way that seeing the load-bearing wall of a building you live in for the first time is exciting and terrifying in equal measure. She shattered it because she showed me that every celebration I had written in *The Orange Pill* about democratized capability was incomplete without the question she kept asking: Whose capability? Governed by whom? On whose terms?

I wrote about the developer in Lagos whose ideas now had a path to reality. Allen asked whether that path ran through infrastructure the developer did not control, in a language she did not choose, subject to revision by actors she could not influence. I wrote about the imagination-to-artifact ratio collapsing. Allen asked whether collapse without institutional design is just another word for enclosure — the commons of human intelligence fenced off by the companies that happened to build the first models.

These are not abstract objections. They are the structural questions that determine whether what I celebrated as democratization actually becomes democratic, or whether it becomes the most sophisticated form of dependency the world has ever seen. Access without agency. The consumer's seat, not the citizen's.

Allen's framework — equality as practice, not condition; consent as ongoing participation, not a clicked checkbox; inclusion as architecture, not an open door — gave me the institutional vocabulary I was missing. In *The Orange Pill*, I built a tower of arguments about human capability in the age of AI. Allen showed me that the tower needs a foundation I had not poured: the governance structures that determine whether amplified capability serves everyone or consolidates in the hands of the few who control the platforms.

This lens is uncomfortable for builders. It slows you down. It asks you to think about the people downstream before you celebrate the speed of the current. It insists that the most important work of this moment is not building faster but governing wisely — and that governing wisely requires voices that the building culture has historically ignored.

I needed this discomfort. You might too.

Edo Segal × Opus 4.6

About Danielle Allen

1971–present

Danielle Allen (1971–present) is an American political philosopher, public intellectual, and institutional leader whose work spans democratic theory, classical studies, education policy, and technology governance. Born in Takoma Park, Maryland, she earned degrees from Princeton and Cambridge before completing her PhD in government at Harvard. She has held faculty positions at the University of Chicago and Harvard, where she became the James Bryant Conant University Professor — one of Harvard's highest faculty honors. Her major works include *Talking to Strangers: Anxieties of Citizenship since Brown v. Board of Education* (2004), *Our Declaration: A Reading of the Declaration of Independence in Defense of Equality* (2014), and *Justice by Means of Democracy* (2023). She co-authored the influential 2021 paper "How AI Fails Us" with Divya Siddarth, Daron Acemoglu, Kate Crawford, and E. Glen Weyl, and published "A Roadmap for Governing AI" in 2025. Central to her thought are the concepts of equality as ongoing practice rather than static condition, "difference without domination," power-sharing liberalism, and participatory readiness — the civic capacities citizens need for genuine self-governance. She directs the GETTING-Plurality research network at Harvard, which studies how technology can be developed in support of democracy and collective intelligence. She was a candidate for governor of Massachusetts in the 2022 election. Her legacy is the insistence that technology governance is not a policy afterthought but a constitutional question — that who governs the tools determines whether the tools serve democratic equality or undermine it.

Chapter 1: Political Equality and the Tool That Redistributes Capability

Every great redistribution begins with a tool that somebody underestimates.

The ballot was a piece of paper. The printing press was a machine for copying words. The spreadsheet was a grid on a screen. In each case, the tool's physical modesty concealed a political explosion: the rearrangement of who could participate in the construction of collective life. The people who controlled the previous arrangement always described the new tool as a convenience, an efficiency gain, a labor-saving device. They were not wrong. They were merely describing the smallest thing the tool would do.

Danielle Allen has spent her career studying these redistributions — not as a historian of technology but as a political philosopher who understands that the material conditions of participation determine whether democratic equality is a lived reality or a decorative slogan. Her reading of the Declaration of Independence, developed most fully in Our Declaration and extended through her subsequent work on justice, power-sharing, and institutional design, insists on a distinction that most commentary about democracy elides. The Declaration does not describe equality as a pre-existing condition that government must protect. It commits to equality as a practice — something that must be actively constructed through institutions, norms, and the daily architecture of collective life. The phrase "all men are created equal" is not an observation about nature. It is a declaration of intent, and the distance between that intent and its realization is the distance that democratic politics must traverse, generation after generation, without ever arriving at a final destination.

This framework — equality as practice rather than condition — transforms the question of what artificial intelligence means for democracy. The standard debate asks whether AI will be good or bad for democratic societies, as though democracy were a patient and AI a drug whose side effects must be weighed against its therapeutic benefits. Allen's framework asks something more precise and more demanding: Does AI expand or contract the practice of equality? Does it create conditions under which more people can participate as genuine equals in the construction of collective life? Or does it create new forms of dependency and domination that masquerade as participation?

The evidence that AI redistributes productive capability is, at this point, overwhelming. The Orange Pill documents the redistribution with the authority of someone who has lived inside it: engineers in Trivandrum building features they could never have attempted alone, designers writing functional code for the first time, a product constructed from nothing in thirty days that would previously have required months and multiple specialized teams. The adoption curve tells the same story from the demand side. ChatGPT reached one hundred million users in two months — not because the marketing was extraordinary but because the need was already enormous, pressing against every barrier that separated human intention from realized artifact. The imagination-to-artifact ratio, as The Orange Pill names it, collapsed. The distance between what a person could conceive and what a person could build compressed to the width of a conversation.

From within Allen's framework, this collapse registers as an event of the first democratic magnitude. The capacity to build — to translate ideas into working artifacts that shape the material world — is not merely an economic capacity. It is a political one. When that capacity is concentrated among a small technical elite, the rest of the population inhabits a built environment they did not build and cannot modify. They use tools they did not design. They live within digital infrastructure constructed by people who may never have consulted them about what that infrastructure should do. They are, in the most literal sense, governed by systems they did not consent to and cannot influence. The expansion of who gets to build is, therefore, an expansion of who gets to participate in the material construction of the world that everyone inhabits. This is participation in the deepest sense that democratic theory recognizes — not the participation of the voter who chooses between options formulated by others, but the participation of the builder who formulates options that did not previously exist.

But Allen's entire intellectual project is built on the recognition that formal expansion is not the same as substantive equality. The extension of the franchise was the great formal expansion of the nineteenth and twentieth centuries. It took eighty-nine years after the Declaration to abolish slavery, another fifty-five for women's suffrage, another forty-five before the Voting Rights Act began dismantling the legal infrastructure of racial disenfranchisement. At each stage, the formal expansion of political rights coexisted with — and in many ways concealed — the substantive inequality of political influence. Wealth continued to purchase access. Education continued to confer advantage. Social networks continued to channel opportunity toward those who were already positioned to capture it. The ballot box was formally equal. The system surrounding it was not.

Allen's 2021 paper "How AI Fails Us," co-authored with Divya Siddarth, Daron Acemoglu, Kate Crawford, and E. Glen Weyl, established the theoretical foundation for applying this insight to artificial intelligence. The paper argued that the dominant paradigm of AI development "misconstrues intelligence as autonomous rather than social and relational" and "tends to concentrate power, resources, and decision-making in an engineering elite." The critique was not that AI tools are bad. The critique was that the paradigm through which those tools are developed — centralized, proprietary, optimized for the replacement of human judgment rather than its augmentation — reproduces the very concentration of power that democratic equality exists to prevent. The tool expands capability. The paradigm within which the tool is developed and deployed concentrates control. These two dynamics operate simultaneously, and the democratic question is which one predominates.

Consider what "access" actually means when subjected to Allen's analytical rigor. A developer in Lagos now has access to AI tools that can translate her natural-language descriptions into working software. This is real. The barrier that previously separated her intelligence from its expression has been lowered. But the tools she accesses are built by American corporations, trained on predominantly English-language data, optimized for the workflows of Western knowledge workers, and governed by terms of service she had no role in drafting. The infrastructure on which her newfound capability depends is privately owned and privately governed. The decisions about pricing, capability limits, and acceptable use that shape her experience of these tools are made by executives accountable to shareholders, not to the global community of users whose productive lives now depend on their platforms. If Anthropic doubles its prices, she has no recourse. If OpenAI restricts certain capabilities, she has no vote. She participates in the use of the tools. She does not participate in the governance of the systems on which the tools depend.

This is the distinction between access and agency that Allen's framework renders visible with uncomfortable clarity. Access is the ability to use what someone else has built, under conditions that someone else has set, for as long as someone else permits. Agency is the capacity to participate in the decisions that determine the conditions of your participation. Access without agency is the condition of the consumer, not the citizen. It is the condition of the feudal peasant who is permitted to farm the lord's land under the lord's terms — formally free, substantively dependent.

Allen made this point with characteristic directness in her 2023 Washington Post column: "Social media has already knocked a pillar out from under our democratic institutions by making it exceptionally easy for people with extreme views to connect and coordinate." She extended the analysis to generative AI, warning that it would "help bad actors further accelerate the spread of misinformation" while also noting that "a healthy democracy could govern the technology and put it to good use in countless ways." The formulation is precise. The technology is not the problem. The governance — or its absence — is the problem. A healthy democracy could govern AI well. The question is whether our democracy is healthy enough to do so.

The question cuts deeper when examined through the lens of Allen's "power-sharing liberalism," the framework she has developed across multiple publications and applied directly to AI governance in her 2025 paper "A Roadmap for Governing AI." Power-sharing liberalism, building on the work of Amartya Sen, Philip Pettit, Elizabeth Anderson, and Elinor Ostrom, "puts human flourishing at the center of ethical and political inquiry and treats rights of social participation (or positive liberties) as just as foundational to society as the rights to basic bodily protection, freedom of conscience, and non-discrimination." Applied to AI, the framework insists that governance should be "not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology — to advance human flourishing."

This is the critical move. Most AI governance discourse operates in the register of negative liberty — preventing harm, blocking discrimination, stopping deepfakes, regulating deceptive practices. Allen argues that this framing is radically incomplete. Democratic governance of AI must also operate in the register of positive liberty: using AI to enhance democratic participation, augment human cooperation, foster collective intelligence, and expand the conditions under which genuine self-governance becomes possible. A framework focused only on preventing harm will never ask the generative question: What should this technology be for?

The redistribution of productive capability is real. It is potentially the most significant expansion of participation since universal suffrage. But the parallel with suffrage is instructive precisely because it reveals how far the expansion must travel before it constitutes genuine democratization. The vote was necessary but not sufficient. It required decades of institutional construction — campaign finance regulation, voting rights enforcement, accessible polling infrastructure, civic education — before the formal equality of the ballot began to approximate the substantive equality that democratic theory demands. The redistribution of productive capability will require comparable institutional construction: governance mechanisms for AI platforms, standards for transparency and interoperability, educational systems that prepare citizens for genuine participation, and the democratic oversight of infrastructure that has become essential to civic and economic life.

Allen and Weyl's essay in the Journal of Democracy identified the stakes with a formulation that should haunt every reader of The Orange Pill. They warned that the current paradigm of AI development threatens two forms of catastrophe simultaneously: collapse, in which AI-enabled misinformation and manipulation overwhelm democratic institutions, and singularity — understood not merely as a technological event but as a political one, "where a single, coherent will comes to organize and dominate global social life." The authors argued that "little meaningfully differentiates a future dominated by a small and homogeneous elite from one controlled by self-propagating machines. These two outcomes are indeed different flavors of the same singularity."

This is the uncomfortable implication that the celebration of AI democratization must confront. The same technology that lowers the floor of who gets to build may simultaneously raise the ceiling of who controls the infrastructure of building. The same tools that empower the individual developer may simultaneously concentrate power in the corporations that provide those tools. The redistribution of capability is genuine, but it occurs within a system whose structural dynamics tend toward concentration, and unless those dynamics are counteracted by deliberate institutional design, the redistribution may prove to be the mask that concentration wears.

The work of democratic theory in the age of AI is to design the institutions that make the redistribution genuine rather than superficial. Not to stop the technology — Allen has never been a prohibitionist — but to govern it in ways that ensure the expansion of capability serves the democratic commitment to equality rather than undermining it. The technology is the river. The institutions are the structures that determine whether the river irrigates or floods. And the question, as Allen has posed it throughout her career, is whether democratic societies possess the political will and institutional creativity to build what the moment demands, before the moment has passed.

---

Chapter 2: The Practice of Equality in AI-Mediated Work

For most adults in democratic societies, the workplace is where equality is either practiced or betrayed. It is the institution in which they spend the largest portion of their waking hours, the institution that most directly shapes their economic standing, their sense of competence, their daily experience of being recognized — or overlooked — as a person whose contributions matter. Whatever democratic principles a society declares in its founding documents, the workplace is where those principles are tested against the friction of actual human relationships. A person who is formally equal in the polling booth and substantively subordinate in the office inhabits a democracy that is, at best, incomplete.

Danielle Allen's insistence that equality is a practice rather than a condition finds its sharpest expression in this terrain. The practice of equality does not mean that all contributions are identical, or that hierarchies of competence are illegitimate, or that the person who has spent twenty years developing architectural judgment should have no more authority than the person who arrived last week. It means that hierarchies must reflect genuine functional necessity rather than inherited privilege. It means that the benefits of collective effort must flow to all participants rather than being captured at the top. And it means that the terms of the hierarchy — who holds authority, on what basis, subject to what constraints — must be open to scrutiny and revision by those who live within them. The moment these conditions fail, the workplace becomes a site of domination regardless of what the society's constitution says about the equal dignity of its citizens.

AI is rearranging the internal architecture of the workplace faster than any technology since electrification, and the rearrangement creates both new possibilities for the practice of equality and new mechanisms for its betrayal. The Orange Pill documents the rearrangement in vivid detail: engineers building user interfaces they had never attempted, designers implementing complete features end to end, backend specialists reaching into frontend territory and vice versa. The Berkeley researchers whose study the book discusses found the same phenomenon from the empirical side — workers using AI tools expanded into areas that had previously been someone else's domain. Delegation decreased. Boundaries dissolved. The organizational chart remained formally intact while the actual flow of contribution shifted beneath it.

From within Allen's framework, the dissolution of role boundaries is genuinely exciting. The rigid hierarchies of specialization that have characterized modern organizations are, in significant measure, hierarchies of exclusion. When only the person with the "engineer" title is permitted to write code, and only the person with the "designer" title is permitted to shape the interface, the organization has constructed barriers to contribution that have nothing to do with the quality of the contribution and everything to do with credentialing, institutional history, and the defense of professional territory. These barriers are a form of domination — not the dramatic domination of the tyrant but the quiet domination of the gatekeeping function, the institutional assumption that a person's capacity to contribute is defined by their job description rather than by their actual capabilities.

AI dissolves these barriers by dissolving the translation cost that sustained them. When the cost of moving between domains drops to the cost of a conversation with an AI assistant, the gatekeeping function of specialization collapses. The person with the best idea about how a feature should work can actually build that feature, regardless of whether their title says "engineer" or "designer" or "product manager." This is, in the most literal sense, a democratization of contribution — an expansion of who gets to participate in the substantive work of the organization, not merely in the formal structure of meetings and approval chains.

But Allen's framework demands that the analysis not stop at the dissolution of barriers. The practice of equality requires positive conditions, not merely the absence of negative ones. Three conditions in particular demand attention in the AI-mediated workplace.

The first is recognition. Allen's concept of "difference without domination" — developed in her philosophical work and applied directly to AI governance in her 2025 "Roadmap" paper — holds that genuine equality requires not the elimination of difference but the structural prevention of any difference from becoming the basis for systematic advantage. In the AI-mediated workplace, the relevant difference is between those who direct AI tools effectively and those who do not. If this difference becomes a new basis for organizational hierarchy — if the person who prompts well is treated as inherently more valuable than the person who contributes in other ways — then the dissolution of old role boundaries has merely been replaced by a new form of stratification. The practice of equality requires that the organization recognize diverse forms of contribution, including forms that AI does not directly augment, as genuinely valuable.

The second condition is distribution. When AI multiplies the productive output of every worker, the question of who captures the additional output becomes urgent. The Orange Pill describes a twenty-fold productivity gain — each engineer in Trivandrum operating with the leverage of a full team. That gain flows somewhere. It can flow to workers, through higher compensation, reduced hours, expanded creative latitude, and investment in their development. It can flow to shareholders, through margin expansion and headcount reduction. It can flow to customers, through lower prices and better products. The distribution is not determined by the technology. It is determined by the institutional choices of the people who control the organization. And the practice of equality demands that those choices be made through processes that include the workers whose labor generates the gain, not merely the executives and shareholders who are positioned to capture it.

The author of The Orange Pill illustrates the stakes through his account of the decision to retain and grow his Napster team rather than convert productivity gains into headcount reduction. The arithmetic of reduction was straightforward and seductive. If five people can do the work of a hundred, the investor's logic points toward five. The author chose differently — chose to redirect expanded capability toward more ambitious projects, new product development, organizational growth. This is a practice of equality in Allen's sense: a choice to distribute the benefits of AI broadly within the organization rather than capturing them narrowly at the top.

But — and this is the point where Allen's institutional analysis becomes indispensable — the choice was made by one leader, in one organization, against the structural pressure of market incentives that pointed in the opposite direction. What happens when the leader changes? What happens when the quarterly numbers are more demanding? What happens at the thousands of organizations whose leaders face the same arithmetic and choose the investor's path? Individual decisions, however admirable, are not institutional structures. The labor movement did not rely on the goodwill of individual factory owners. It demanded the eight-hour day, the minimum wage, the right to organize — institutional protections that ensured the benefits of industrial productivity were distributed broadly regardless of any particular employer's inclinations.

Allen has been explicit about this need. Her "Roadmap for Governing AI" includes among its seventeen recommendations the call for governance mechanisms that address the distribution of AI's economic gains. Her power-sharing liberalism framework treats economic empowerment not as a separate concern from democratic governance but as a constitutive element of it — a recognition that democratic participation is hollow when citizens lack the material security that makes genuine participation possible. The AI-mediated workplace requires institutional innovations that embed the distribution of productivity gains into the structure of organizational governance: worker consultation before AI-driven restructuring, profit-sharing arrangements linked to AI-enhanced output, training investments funded by the companies that benefit most from AI adoption, and governance mechanisms that give workers a voice in how AI tools reshape their working conditions.

The third condition is the most difficult to name and the most important to protect. It is the condition that the Berkeley researchers captured when they documented the phenomenon of "task seepage" — the colonization of previously protected time by AI-mediated work. Workers were prompting during lunch breaks, filling gaps of a minute or two with AI interactions, converting every scrap of unstructured time into productive output. The researchers found that "a sense of always juggling, even as the work felt productive" became the dominant experience.

Allen's framework identifies what is lost in this colonization. The practice of equality requires not merely the formal equality of access to tools but the substantive conditions that make genuine participation possible. Among those conditions is the time and cognitive space for reflection, deliberation, and genuine human connection — the activities through which democratic dispositions are formed and maintained. A workplace in which every moment is productively saturated is not a democratic workplace, regardless of how much each worker produces. It is a workplace that has optimized for output at the expense of the human capacities — judgment, reflection, care — on which democratic life depends.

The parallel to the Industrial Revolution is precise. Electrification and the electric motor made continuous production feasible. Factory owners extended hours, intensified pace, colonized night with artificial light. The human cost was staggering. The labor movement's response was institutional: the eight-hour day, the weekend, child labor laws. These were not merely humanitarian gestures. They were democratic necessities — the construction of protected time within which citizens could exercise the capacities that democratic self-governance requires. The AI-mediated workplace demands comparable protections: structured pauses for reflection, boundaries between work and non-work, institutional recognition that the value of a worker cannot be measured by their output alone.

The practice of equality in the AI-mediated workplace is, finally, a practice of institutional design. The technology creates new possibilities for participation by dissolving old barriers. It simultaneously creates new mechanisms for domination through the concentration of productivity gains, the erasure of protected time, and the emergence of new forms of stratification based on AI fluency. Whether the possibilities or the mechanisms predominate is not a technological question. It is a political one — a question about the quality of the institutions that govern how AI is deployed in the places where most people spend most of their lives. The practice of equality demands that those institutions be designed with the same care and democratic commitment that the labor movement brought to the governance of industrial workplaces. The tools have changed. The democratic requirements have not.

---

Chapter 3: Sacrifice and the Distribution of Transition Costs

Every transition in human history has been paid for, and the bill has never been split evenly.

The mechanization of agriculture fed growing cities and fueled industrial expansion. It also displaced millions of families whose connection to the land was not merely economic but cultural, spiritual, constitutive of identity in ways that no retraining program could address. The automation of manufacturing increased output and lowered prices for consumers. It also gutted communities whose entire economic and social existence depended on the factory floor. The digitization of information created unprecedented access to knowledge. It also destroyed the livelihoods of typesetters, video store clerks, travel agents, and countless other workers whose expertise became redundant overnight.

In each case, the long-term trajectory bent toward expansion — toward greater aggregate capability, greater total output, greater possibility for more people. And in each case, the short-term cost was borne by those who were least equipped to bear it and least consulted about whether they were willing to do so. The factory owner captured the productivity gain. The displaced worker bore the transition cost. The economist measured the aggregate improvement. The community measured the emptied storefronts.

Danielle Allen's democratic theory insists that the distribution of transition costs is not a secondary concern — not a matter of social policy to be addressed after the technology has been deployed and the gains have been captured. It is a primary democratic question, as fundamental as the distribution of political voice itself. Her concept of sacrifice — the recognition that genuine equality sometimes requires those who benefit most from a transition to accept costs they could avoid — is not a sentimental appeal. It is a structural requirement of democratic legitimacy. A society that permits the gains of technological transition to be captured by the already advantaged while the costs are borne by the already vulnerable has failed the basic test of democratic governance, regardless of how much its aggregate productivity has increased.

The Orange Pill engages this history with unusual directness. The book's extended analysis of the Luddites insists on the distinction between the legitimacy of the fear and the inadequacy of the response. The framework knitters of Nottinghamshire were not philosophically opposed to change. They were skilled workers whose expertise had been rendered economically worthless, whose wages had collapsed, whose communities were disintegrating. They were correct, with prophetic precision, about what the new machines would do to them. Their error was strategic, not analytical. Breaking the machines did not save the trade. It criminalized the grievance.

The lesson Allen would draw from this history — and has drawn, in her work on democratic transitions and the structural conditions of equal citizenship — is that the Luddites were destroyed not by the machines but by the absence of institutions designed to distribute the transition's costs equitably. No labor protections existed. No retraining infrastructure had been built. No mechanism ensured that the gains of mechanization would flow, even partially, to the communities bearing its costs. The machines were not the problem. The institutional vacuum was the problem. Into that vacuum rushed the logic of the market, which distributes gains to the holders of capital and costs to the holders of displaced labor with the indifference of gravity.

The AI transition is following this pattern with troubling fidelity. By February 2026, a trillion dollars of market value had vanished from software companies — the Software Death Cross that The Orange Pill documents in its penultimate chapter. Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. These are not abstract fluctuations in financial markets. They are signals of real structural change — the repricing of an entire industry around a new theory of value in which the code that was once expensive to produce has become cheap, and the judgment about what code should exist has become the scarce commodity.

The repricing creates winners and losers. The winners are the companies building AI platforms, whose revenue curves are climbing with exponential confidence. The winners are the individuals whose existing advantages — judgment, taste, strategic vision, access to AI tools — are amplified by the technology. The winners are the organizations whose leaders choose, as the author of The Orange Pill chose, to redirect productivity gains toward growth rather than reduction.

The losers are the senior developers whose decades of accumulated expertise are being devalued by a tool that can produce competent code without any of their depth. The losers are the workers at companies whose leaders choose the investor's arithmetic — five people doing the work of a hundred, the other ninety-five rendered surplus. The losers are the communities whose economies depend on the technology sector jobs that are being restructured at unprecedented speed.

Allen's framework insists that the distribution of these gains and losses is a democratic question, not a market outcome to be accepted with a shrug. Her power-sharing liberalism holds that economic empowerment is a constitutive element of democratic participation, not a separate policy domain. Citizens who lack material security cannot participate effectively in self-governance. Communities whose economic foundations have been pulled out from under them cannot sustain the civic institutions on which democratic life depends. The distribution of AI's transition costs is therefore not merely a social welfare concern. It is a question of democratic infrastructure.

What would it mean to take this question seriously? Allen's "Roadmap for Governing AI" offers specific institutional proposals: federal licensing of firms leading AI development, AI offices in state governments to enhance accountability, regulatory frameworks that address the distributional consequences of AI deployment. But the underlying principle is more fundamental than any specific proposal. The principle is that democratic societies have an obligation to construct institutions that distribute the costs of technological transition equitably — institutions analogous to the labor protections, social insurance systems, and public investments that were built, through decades of political struggle, around the Industrial Revolution.

The analogy demands precision. The Industrial Revolution's costs were eventually addressed through institutional innovations that took decades to develop: the eight-hour day (won through strikes and legislation), universal public education (built through local and national political campaigns), social insurance (constructed through the New Deal and its equivalents across democratic societies), and the right to organize (secured through bitter and sometimes violent struggle). Each institution was a response to a specific form of inequality produced by the transition. Each was built through democratic mobilization, not through the benevolence of those who captured the transition's gains.

The AI transition is occurring at a pace that does not allow for decades of gradual institutional development. The capabilities that reshaped work practices in late 2025 will be dramatically exceeded by what arrives in 2027 and beyond. Workers whose skills are being devalued need support now, not after a generation of political organizing. Communities whose economies are being restructured need investment now, not after a decades-long legislative process. The institutions that governed previous transitions — labor law, educational systems, social insurance — were designed for a world of relatively stable occupational categories and relatively slow technological change. They are not designed for a world in which the definition of productive expertise shifts faster than any institutional framework can track.

This mismatch between the speed of technological change and the speed of institutional response is, in Allen's terms, a democratic emergency. Not because the technology is malign — Allen has consistently argued that AI can be governed well by a healthy democracy — but because the institutional vacuum into which the transition is occurring creates the conditions for the same pattern that destroyed the Luddites: gains captured by the advantaged, costs borne by the vulnerable, and a generation of displaced workers navigating the transition without the institutional support that democratic equality demands.

Allen identified this dynamic in her Journal of Democracy essay with Weyl: "the investment scale required to develop the models and the race dynamics around that development threaten to enable concentrations of democratically unaccountable power." The concentration operates on both sides of the transition simultaneously. On the production side, the companies that build AI models are accumulating power at extraordinary speed. On the distribution side, the workers and communities that bear the transition's costs have almost no institutional voice in how the transition unfolds.

The democratic response requires sacrifice — not in the sentimental sense of charitable giving but in the structural sense that Allen's theory demands. Those who benefit most from the AI transition must accept constraints on their freedom to capture all the gains. This means companies that deploy AI must fund transition support for the workers whose roles are eliminated. It means governments must invest in retraining, income support, and community economic development at a scale commensurate with the disruption. It means the technology industry must accept governance mechanisms — licensing, impact assessment, distributional requirements — that reduce the speed of deployment in exchange for a more equitable distribution of the transition's costs and benefits.

Allen has framed the choice with a formulation borrowed from the deepest currents of American democratic thought: this is a moment that tests whether democratic societies can govern technology, or whether technology will govern them. The Luddites' lesson is that the second outcome — technology governing on its own terms, distributing costs and benefits according to the logic of capital rather than the logic of democratic equality — produces catastrophe for the vulnerable and ultimately for the social fabric itself. The AI transition does not have to repeat this pattern. But avoiding it requires the political will to build institutions commensurate with the power of the technology they govern, and to build them before a generation of workers has already been swept downstream.

---

Chapter 4: The Civic Agency of the Builder

The obligation that accompanies understanding is one of the oldest ideas in democratic thought. Plato argued that those who perceive the forms of justice owe their knowledge to the community that educated them. Aristotle held that practical wisdom — phronesis, the capacity for sound judgment in particular circumstances — is inseparable from civic responsibility. The physician who understands disease has obligations the layperson does not. The engineer who understands structural failure has obligations the resident does not. The lawyer who understands the law has obligations the client does not. In each case, the specialized understanding creates a relationship of trust between the knower and the community, and the quality of democratic life depends on whether that trust is honored or exploited.

Danielle Allen has extended this ancient principle into the contemporary terrain of technology governance with a specificity that the philosophical tradition alone does not provide. Her work does not merely assert that technologists have civic obligations. It constructs a framework — power-sharing liberalism — within which those obligations can be defined, institutionalized, and enforced. The framework insists that the people who build the systems that shape democratic life bear a responsibility that is proportional to the power those systems exercise. Not proportional to their intentions. Not proportional to their self-conception as engineers rather than political actors. Proportional to the actual consequences of their work on the lives of citizens who had no say in how that work was conducted.

The builders of AI systems occupy a position in contemporary democratic life that has no precise historical precedent. They understand how large language models work — how they are trained, how they fail, what biases they encode, what capabilities they possess and what risks they create. They understand the leverage points where a small design decision can cascade through an entire social ecosystem. They understand the failure modes — the ways AI can produce harm that is invisible from the outside, that masquerades as competence, that presents confident wrongness in polished prose. This understanding is not merely technical. It is civic, because the systems these builders create shape the information environment in which democratic deliberation occurs, the economic conditions under which citizens work, the cultural landscape in which communities form their identities.

The Orange Pill names the gap between understanding and responsibility with striking candor. The book describes "a priesthood structure without the priesthood ethic" — people with deep knowledge of complex systems who believe that understanding confers the right to build without accountability. The author confesses his own participation in this failure: building a product he knew was addictive by design, understanding the engagement loops and dopamine mechanics, building it anyway because the technology was elegant and the growth was intoxicating. The confession is more analytically revealing than any abstract argument about the obligations of technologists, because it identifies the precise mechanism through which civic obligation fails. The failure is not ignorance. The failure is the institutional context in which knowledge operates — a context that rewards growth, engagement, and market share while imposing no cost on the downstream consequences of design decisions that the builder understands perfectly well.

Allen's "How AI Fails Us" paper, co-authored in 2021, diagnosed this institutional failure at the paradigm level. The paper argued that "actually existing AI" — the dominant paradigm of centralized, autonomous, human-replacing systems — "tends to concentrate power, resources, and decision-making in an engineering elite." The concentration is not an accident. It is a structural feature of a development paradigm that treats intelligence as a resource to be extracted and monopolized rather than as a capacity to be distributed and cultivated. The paper called for "alternative visions based on participating in and augmenting human creativity and cooperation," drawing on a tradition that "underlies many celebrated digital technologies such as personal computers and the internet." The distinction between extraction and augmentation is the distinction between a paradigm that concentrates civic agency in the hands of builders and one that distributes it broadly.

Allen has given this distinction institutional specificity through her GETTING-Plurality research network at Harvard, which brings together researchers across disciplines "to advance the understanding of how to shape, guide, govern, and deploy technological development in support of democracy, collective intelligence, and other public goods." The network's foundational claim is that "new eras of technological innovation, such as artificial intelligence and decentralized social technologies, have brought us to a constitutional moment in society." The phrase "constitutional moment" carries enormous weight coming from a scholar who has spent her career studying the original constitutional moment of American democracy. It signals that the governance of AI is not a policy problem to be addressed through incremental regulation. It is a foundational question about the terms on which collective life is organized — a question as fundamental as the questions the founders confronted about the distribution of political power among citizens, states, and the federal government.

The constitutional analogy illuminates what civic agency demands of builders in the present moment. The American founders were, in their own way, builders — people with specialized knowledge (of political philosophy, of institutional design, of the mechanics of governance) who faced the question of whether to use that knowledge in service of the community or in service of their own power. The Constitution they produced was, among other things, a set of constraints that the powerful accepted on their own power in order to make collective self-governance possible. The Bill of Rights limited what the government could do to citizens. The separation of powers limited what any single branch could do without the others. The amendment process ensured that the constraints themselves could evolve as circumstances changed.

The builders of AI systems face an analogous choice. They possess knowledge that gives them enormous power over the systems that shape democratic life. They can use that power to maximize their own advantage — building addictive systems, concentrating control over essential infrastructure, capturing the productivity gains that AI generates. Or they can accept constraints on their power in order to make genuine democratic governance of AI possible. These constraints might include transparency requirements that allow democratic scrutiny of AI systems. They might include impact assessments before deployment. They might include governance mechanisms that give affected communities a voice in design decisions. They might include professional standards — analogous to the standards governing medicine, engineering, and law — that define the obligation to consider public welfare as a condition of practice.

Research from Allen's lab on the "ethical-moral intelligence" of AI systems makes the need for human civic agency even more urgent. The lab's "Crocodile Tears" working paper found that while AI models "demonstrate moral sensitivity to ethical dilemmas in ways that closely mimic human responses, they exhibit greater certainty than humans when choosing between conflicting sacred values, despite recognizing such tragic tradeoffs as difficult." The discrepancy between reported difficulty and decisiveness "raises important questions about their coherence and transparency, potentially undermining trustworthiness." The finding is a precise empirical demonstration of why AI cannot be trusted to exercise civic judgment on behalf of the communities it affects. The systems can mimic moral reasoning. They cannot be uncertain in the way that genuine moral reasoning requires — the way that acknowledges the irreducible difficulty of choosing between values that cannot be simultaneously honored.

This means that the civic agency of the builder is not merely a supplement to AI capability. It is a necessary constraint on it. The builder who understands that the system she is creating will make moral judgments that affect millions of people — and who understands that the system's apparent moral sophistication conceals a structural inability to grapple with genuine moral uncertainty — has an obligation to ensure that human judgment remains central to the system's deployment. Not as a pro forma review step. As a constitutive element of the system's governance, built into the architecture from the beginning.

Allen's 2025 Harvard PolicyCast interview articulated the positive vision that accompanies this obligation. She argued for "reorienting thinking about AI from replacement to partnership," where humans and AI systems each bring their distinctive strengths. She offered a concrete example: AI-enhanced public opinion research that could give decision-makers "a kind of bird's eye view of the actual shape of opinion — including surprising points of potential agreements that are below the surface a little bit buried." This is AI in service of democratic agency — technology that expands the capacity of citizens and leaders to understand each other, rather than technology that replaces their judgment with algorithmic optimization.

But the partnership model requires that builders design for it. It requires that the systems be built with the explicit intention of augmenting human judgment rather than replacing it. It requires that the interfaces, the defaults, the incentive structures, the business models all point toward partnership rather than dependency. And it requires that the institutional context in which builders operate — the corporate governance structures, the regulatory frameworks, the professional norms — reward the partnership model rather than the extraction model.

The construction of these institutional frameworks is itself a civic act — perhaps the most important civic act that builders can perform. Allen's "Roadmap" paper includes seventeen specific recommendations for AI governance, ranging from federal licensing of AI firms to AI offices in state governments to investments in public goods and democratic infrastructure. Each recommendation is an attempt to build institutional constraints that channel the civic agency of builders toward democratic purposes. But the recommendations can only be implemented if the people who understand AI systems most deeply — the builders themselves — participate actively in the political processes through which governance frameworks are constructed.

This is the civic obligation that Allen's framework places on the builder. Not merely to build responsibly within existing constraints, but to participate in the construction of the constraints themselves. To bring their specialized knowledge to the democratic deliberations that will determine how AI is governed. To accept that their understanding confers not the right to build without accountability but the obligation to ensure that the systems they build serve the democratic commitment to equality that makes the technology's benefits available to everyone. The ancient principle stands, updated for a technology the ancients could not have imagined: understanding is not a privilege. It is a responsibility. And the quality of democratic life in the age of AI depends on whether the people who understand these systems most deeply are willing to bear it.

---

Chapter 5: Consent of the Governed in the Platform Age

The principle is simple enough to fit on a single line of parchment. Governments derive their just powers from the consent of the governed. The practice has never been simple at all.

Consent, in democratic theory, is not the act of clicking "I agree" on a document you did not read. It is not the passive acceptance of conditions imposed by someone more powerful. Consent is the ongoing, informed, voluntary participation of citizens in the rules that structure their collective life. It requires that the governed understand what they are consenting to. It requires that they possess genuine alternatives — the capacity to withhold consent, to demand modification, to exit arrangements that no longer serve their interests. And it requires that the process of consent be renewable, subject to democratic revision as circumstances change and as the governed develop new understandings of what their situation demands.

Danielle Allen has spent her career demonstrating that this principle, the most foundational claim of democratic legitimacy, has never been fully realized even in its country of origin. The Declaration of Independence proclaimed consent as the basis of just government in 1776 while excluding from the category of "the governed" the enslaved people whose labor built the economy, the women whose domestic work sustained the households, the Indigenous peoples whose land was being appropriated. Each subsequent expansion of the circle of consent — abolition, suffrage, civil rights, disability rights — required not merely legal change but institutional reconstruction, the building of new mechanisms through which the newly included could exercise their consent in practice rather than merely possess it in theory.

The platform age has created a consent crisis that is structurally distinct from any previous challenge to democratic legitimacy, and Allen has been among the most precise diagnosticians of its character. The crisis is this: the infrastructure on which democratic participation increasingly depends — the digital platforms through which citizens communicate, work, organize, access information, and build — is privately owned and privately governed. The rules that structure this infrastructure are not set through democratic deliberation. They are set through corporate decision-making, subject to market pressures and shareholder interests, codified in terms of service that are presented on a take-it-or-leave-it basis to users who possess no meaningful bargaining power and, in most cases, no genuine alternative.

The formulation sounds abstract until you examine what it means for a specific person in a specific situation. Return to the engineers in Trivandrum whom The Orange Pill describes — twenty people whose productive capability was multiplied by a factor of twenty through Claude Code. Their professional lives had been transformed. Their capacity to build, to create, to contribute to the material construction of the world had expanded beyond anything they had previously experienced. This expansion was real, and it was exhilarating, and it depended entirely on a platform whose terms of use, pricing structure, and capability limits were determined by a corporation in San Francisco that owed them nothing beyond the contractual obligations of a commercial subscription.

If Anthropic were to double the price of Claude Code, those engineers would have no recourse. If Anthropic were to restrict capabilities that the engineers had built their workflows around, they would have no voice in the decision. If Anthropic were to modify its terms of service in ways that limited the engineers' ability to use the tool in their specific context — their specific industry, their specific geography, their specific cultural practices — they would possess no mechanism for objecting that Anthropic was obligated to hear. The transformation of their productive lives occurred on terms they did not negotiate, within a system they did not help design, subject to modification by actors they cannot influence.

This is not a hypothetical concern. It is the documented pattern of platform capitalism. Social media platforms have modified algorithms in ways that destroyed businesses built on their earlier rules. App stores have changed commission structures that upended the economics of developers who had no seat at the table. Cloud providers have adjusted pricing in ways that forced architectural rebuilds on customers who had organized their entire technical infrastructure around the provider's earlier terms. In each case, the users who bore the consequences had no meaningful participation in the decisions that produced them. They had consented, in the attenuated sense of having clicked through a terms of service agreement, to a relationship in which the platform retained unilateral authority to change the rules.

Allen's analytical framework identifies what makes this consent crisis qualitatively different from previous challenges to democratic legitimacy. The feudal lord's power over the peasant was visible and nameable. The peasant knew who dominated him and could, at least in principle, imagine a world organized differently. The industrial employer's power over the worker was similarly visible — the factory had walls, the boss had a name, the working conditions had a physical reality that could be protested, organized against, regulated. The platform's power is structural rather than personal. There is no lord, no boss, no visible authority to resist. There is only an infrastructure that has become essential to participation in economic and civic life, governed by rules that the participants did not create and cannot change.

Allen and Weyl's Journal of Democracy essay brought this analysis to the specific case of AI. They argued that the investment scale required to develop large language models — billions of dollars, accessible only to a handful of corporations and nation-states — creates "concentrations of democratically unaccountable power" that threaten democratic governance regardless of the intentions of those who wield that power. The concentration is structural, not personal. It does not require bad actors. It requires only the normal operation of market dynamics in a domain where the cost of entry is prohibitive and the returns to scale are enormous. The result is an infrastructure of productive capability that is essential to an expanding share of economic and civic life and that is governed by a shrinking number of institutions accountable only to their shareholders.

Allen's power-sharing liberalism provides the normative framework for responding to this crisis. The framework treats "rights of social participation" — positive liberties, the capacity to engage effectively in the collective enterprise — as foundational to democratic governance. When the infrastructure of social participation is privately controlled, the positive liberties that depend on that infrastructure are effectively held at the discretion of the infrastructure's owners. The developer in Lagos whose productive capability depends on Claude Code possesses a positive liberty — the capacity to build — that can be revoked, restricted, or repriced by a private actor at any time. This is not a liberty in any sense that democratic theory recognizes as genuine. It is a permission, granted by the powerful to the dependent, subject to withdrawal without democratic process.

The democratic response to the consent crisis of the platform age must operate on multiple levels simultaneously, and Allen's work provides the architecture for each.

At the level of transparency, the governed must be able to see how the systems that affect their lives actually operate. AI platforms make decisions — about what content to generate, what capabilities to provide, what biases to encode, what information to present and how — that shape the productive and informational environment of billions of people. These decisions are currently opaque. The governed cannot consent to what they cannot see. Transparency requirements — meaningful transparency, not the performative disclosure of incomprehensible technical documentation — are a prerequisite for any form of genuine consent.

At the level of participation, the governed must have mechanisms for influencing the rules that structure their engagement with AI platforms. This is the most radical of Allen's demands, and the one most at odds with the current structure of the technology industry. Platform governance is currently unilateral. The platform sets the rules. The user accepts them or leaves. Allen's framework insists that when a platform becomes essential infrastructure for participation in economic and civic life, its governance must include mechanisms for the participation of those who depend on it. These mechanisms might take many forms — user advisory councils with genuine authority, democratic governance of platform policies, regulatory frameworks that translate user interests into binding constraints — but their absence is, in Allen's terms, a failure of democratic legitimacy as fundamental as the absence of elected representation in government.

At the level of alternatives, the governed must possess genuine exit options. Consent is meaningless when the only alternative to acceptance is exclusion from participation. If the only way to access AI-enhanced productive capability is through a small number of privately governed platforms, then "consent" to those platforms' terms is not consent but capitulation. The construction of genuine alternatives — open-source AI development, public AI infrastructure, interoperability standards that prevent platform lock-in — is a democratic necessity, not merely a competitive policy preference. Allen's "Roadmap" paper is explicit about this: among its seventeen recommendations are calls for public investment in AI infrastructure and governance mechanisms that prevent the monopolistic concentration of AI capability.

At the level of portability, the governed must be able to carry their work, their data, their productive relationships with them when they choose to move between platforms. The current structure of AI platforms creates dependency through the accumulation of context — the prompts, the conversations, the project histories, the customizations that make the tool increasingly tailored to the user's needs. This accumulated context is a form of productive capital that the user has invested in the platform. When the platform holds this capital hostage — when switching costs are so high that the user cannot meaningfully exercise the option to leave — the user's consent to the platform's terms is coerced by dependency rather than freely given.
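What portability would require is concrete enough to sketch. The following is a minimal illustration only — the schema, field names, and sample data are invented for this example and are not drawn from any platform's actual export format. The structural point is that when accumulated context lives in an open, user-owned format, it can travel with the user instead of functioning as a switching cost.

```python
# A minimal sketch of a portable-context export (Python 3.9+).
# Everything here -- the schema, the field names, the sample data --
# is hypothetical, invented to illustrate the portability principle,
# not any platform's real export format.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PortableContext:
    """User-owned record of the productive capital invested in a platform."""
    user_id: str
    exported_at: str
    custom_instructions: list[str] = field(default_factory=list)
    conversations: list[dict] = field(default_factory=list)  # role/content turns
    project_notes: list[dict] = field(default_factory=list)

ctx = PortableContext(
    user_id="dev-001",
    exported_at=datetime.now(timezone.utc).isoformat(),
    custom_instructions=["Prefer TypeScript", "Target low-bandwidth clients"],
    conversations=[{"role": "user", "content": "Draft the billing service."}],
)

# Serialized to an open format, the context moves with the user;
# held in a proprietary one, it functions as a hostage.
print(json.dumps(asdict(ctx), indent=2))
```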

Allen's analysis connects the platform consent crisis to the deepest currents of democratic political theory. The founders understood that consent requires structural conditions — not just the formal right to vote but the material independence that makes the vote meaningful. The property requirements for suffrage in the early republic were antidemocratic in their exclusion, but they reflected a genuine insight: a person who is economically dependent on another person cannot exercise free political judgment. The democratic project has been, in significant measure, the project of extending material independence to all citizens so that their political consent is genuine rather than coerced.

The platform age has created a new form of dependency that threatens the material independence on which genuine consent rests. Workers, creators, entrepreneurs, and citizens whose productive and civic lives depend on privately governed platforms are economically dependent on infrastructure they do not control. Their consent to the rules that govern that infrastructure is formally free and substantively coerced. The democratic response — the construction of transparency, participation, alternatives, and portability — is not a regulatory detail. It is the extension of the democratic project of material independence into the platform age, the construction of the conditions under which consent to the governance of essential digital infrastructure can be genuine rather than nominal.

Allen has framed this project, in her April 2025 lecture at Endicott College, as part of "a historical contest over what framework of political economy is going to define the world as AI transforms it." The contest is between a framework in which the infrastructure of productive and civic life is governed by democratic institutions accountable to the public, and a framework in which that infrastructure is governed by private institutions accountable to shareholders. The outcome of this contest will determine whether the consent of the governed — the foundational principle of democratic legitimacy — survives the platform age or is reduced to a formality that masks a new and more durable form of domination.

The question, as Allen would insist, is not whether the platforms will be governed. They will be governed — by someone, according to someone's values, in someone's interest. The question is whether the governed will have a genuine say in who governs and how. That question is the oldest question in democratic politics. The platform age has given it a new urgency and a new institutional terrain. The democratic response must be commensurate with both.

---

Chapter 6: Participatory Readiness and the Skill of Self-Governance

Democratic self-governance requires a population that is prepared for it. This observation is so obvious that it is frequently ignored — passed over in the rush to discuss institutional design, policy frameworks, and regulatory mechanisms, as though the institutions could function without the citizens who inhabit them. Constitutions do not govern. Citizens govern, through institutions that channel their judgment, mediate their disagreements, and translate their collective will into binding decisions. When the citizens are not prepared for this work — when they lack the knowledge, skills, and dispositions that collective self-governance demands — the institutions fail, regardless of how elegantly they are designed.

Danielle Allen has been among the most persistent voices arguing that democratic preparation — what might be called participatory readiness — is the foundation on which everything else depends. Her reading of the Declaration of Independence emphasizes that the document's authors were not merely announcing principles. They were modeling a practice: the practice of collective deliberation, of reasoning together about the conditions of shared life, of building agreement across difference through the patient work of argument and evidence. The Declaration is, in this reading, an educational document — a demonstration of the democratic skills that its signers hoped their fellow citizens would develop and exercise.

The AI moment has made participatory readiness simultaneously more urgent and more difficult to achieve. More urgent because the decisions that democratic societies must make about AI governance are among the most consequential and complex decisions any polity has confronted — decisions about the distribution of transformative productive capability, the governance of essential digital infrastructure, the regulation of systems that shape the information environment in which democratic deliberation occurs. More difficult because the technology that demands sophisticated civic judgment is simultaneously reshaping the cognitive environment in which that judgment is developed.

The Orange Pill describes a pedagogical innovation that illustrates both the promise and the challenge. A teacher stopped grading essays and started grading questions. The assignment is not to produce an answer but to identify the five questions you would need to ask before you could produce an answer worth having. Students who frame the best questions demonstrate the deepest engagement with the material, because a good question requires understanding what you do not understand — a harder cognitive operation than demonstrating what you do understand.

Allen's framework recognizes this innovation as an exercise in democratic education, whether or not the teacher intended it as such. The skills the teacher is cultivating — the capacity to identify gaps in understanding, to frame inquiries that open rather than close investigation, to evaluate competing claims with critical rigor, to sit with uncertainty rather than rushing to premature resolution — are precisely the skills that democratic deliberation requires. A citizen who can frame a good question about AI governance is more valuable to the democratic process than a citizen who can recite the provisions of the EU AI Act, because the framing skill transfers across issues while the specific knowledge becomes obsolete as circumstances change.

But participatory readiness for the AI age demands capacities that go beyond the skill of questioning, and Allen's work identifies several that deserve sustained attention.

The first is the capacity to evaluate AI-generated content with democratic rigor. The author of The Orange Pill describes this challenge through his own experience: Claude producing a passage about Deleuze that sounded like insight but fractured under examination. "Confident wrongness dressed in good prose" — the smoothest failure mode of a system whose outputs are optimized for plausibility rather than truth. When any citizen can generate a policy brief, a legal argument, or a historical analysis using AI tools, the quality of democratic deliberation depends on the capacity of other citizens to evaluate those productions with the critical rigor that self-governance demands.

This evaluation capacity is not primarily technical. It does not require understanding how transformer architectures work or how training data shapes model outputs, though such understanding is useful. It requires the older, harder skills of critical reasoning: the ability to check claims against evidence, to identify assumptions that the argument conceals, to recognize the difference between a sound argument and a smooth one, to ask "Is this true?" with sufficient persistence to get past the surface plausibility that AI-generated content characteristically possesses. These are the skills of a well-educated citizen in any era. AI has not changed their fundamental character. It has changed the scale at which they are needed and the consequences of their absence.

The second capacity is the ability to deliberate across expertise boundaries. AI governance requires decisions that integrate technical understanding with political judgment, ethical reasoning, economic analysis, and the lived experience of affected communities. No single person possesses all of these forms of knowledge. No single discipline provides all of them. The capacity for democratic deliberation about AI therefore requires what Allen has described as the ability to build consensus across difference — to engage productively with people who bring different knowledge, different values, and different stakes to the conversation, without either deferring entirely to the expert or dismissing the expertise as irrelevant to the democratic question.

This capacity is particularly important because the governance of AI is vulnerable to two symmetrical failures. The first is technocratic capture — the tendency for AI governance decisions to be made by technical experts whose specialized knowledge gives them disproportionate influence over outcomes that affect everyone. The second is populist dismissal — the tendency for democratic majorities to reject expert input on the grounds that expertise is elitist, self-interested, or simply incomprehensible. Both failures produce bad governance. Technocratic capture produces governance that reflects the values and interests of the technical elite at the expense of the broader public. Populist dismissal produces governance that ignores the technical realities on which sound policy depends.

Allen's participatory readiness framework addresses both failures simultaneously. Citizens who are prepared for democratic deliberation about AI can engage with technical expertise without being captured by it — they can ask the expert to explain, to justify, to connect the technical analysis to the values and interests that democratic governance is supposed to serve. They can also resist the temptation to dismiss expertise — they can recognize that sound governance of complex systems requires knowledge that most citizens do not possess, and that incorporating that knowledge into democratic deliberation is a strength rather than a surrender.

The third capacity, and perhaps the most difficult to cultivate, is the ability to make collective decisions under conditions of radical uncertainty. Allen identified this challenge in her Washington Post column when she noted that AI's "aggregate effects might fundamentally alter political dynamics in ways we can't yet predict." The AI transition is creating conditions of uncertainty that exceed what previous generations of democratic citizens have confronted. The speed of change, the unpredictability of AI capabilities, the impossibility of foreseeing the second- and third-order consequences of today's decisions — all create a decision-making environment in which the traditional tools of democratic deliberation are necessary but insufficient.

Citizens must learn to act wisely when the consequences of their choices cannot be predicted with confidence. This requires a form of democratic humility — the acknowledgment that no individual, no institution, no discipline possesses the knowledge necessary to govern this transition alone. It also requires democratic courage — the willingness to build institutions and make commitments despite uncertainty, to construct frameworks that are robust enough to guide action while remaining flexible enough to adapt as circumstances change.

Allen and her collaborators have explored how AI itself might support rather than undermine these capacities. In the Journal of Democracy essay, Allen and Weyl noted that AI models "can carry out detailed, flexible, and interactive conversations with large numbers of participants and then portray in easily digestible forms the current state of collective concerns." This is AI in service of participatory readiness — technology used not to replace democratic deliberation but to enhance it, to make it possible for larger and more diverse groups of citizens to participate in collective decision-making with genuine understanding of each other's positions. Her 2025 PolicyCast interview offered the specific example of AI-enhanced public opinion research that could reveal "surprising points of potential agreements that are below the surface a little bit buried" — agreements that no single participant in the deliberation could see but that AI analysis of the full range of perspectives could make visible.
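The mechanics behind such analysis can be suggested with a small sketch. What follows is one common approach in the spirit of deliberation tools like Polis — not Allen's method, and not any specific system's implementation: participants vote on statements, voters are clustered into opinion groups by their voting patterns, and statements are ranked by their lowest approval across groups, so that only statements every camp supports rise to the top. All data below is randomly generated for illustration.

```python
# A minimal sketch of surfacing "buried" cross-group agreement.
# Not any deployed platform's algorithm; all data is illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# votes[i, j] = participant i's vote on statement j: +1 agree, -1 disagree, 0 skip
n_participants, n_statements = 200, 12
votes = rng.choice([-1, 0, 1], size=(n_participants, n_statements))

# Plant a buried consensus: most participants quietly agree with statement 5.
votes[:, 5] = rng.choice([1, 0], size=n_participants, p=[0.8, 0.2])

# Cluster participants into opinion groups by their overall voting pattern.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(votes)

def approval(v):
    """Share of non-skip votes that are 'agree', mapped to [0, 1]."""
    cast = v[v != 0]
    return cast.mean() * 0.5 + 0.5 if len(cast) else 0.0

# Group-informed consensus: a statement's score is its *lowest* approval
# across groups, so only cross-camp agreement ranks highly.
scores = [
    min(approval(votes[groups == g, j]) for g in range(3))
    for j in range(n_statements)
]
for j in np.argsort(scores)[::-1][:3]:
    print(f"statement {j}: min cross-group approval {scores[j]:.2f}")
```

No single participant sees this ranking from inside their own cluster; it emerges only from analysis across the full range of perspectives, which is the capacity Allen and Weyl describe.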

The educational institutions that prepare citizens for democratic participation in the AI age must cultivate all of these capacities — and they are not doing so. Allen's assessment aligns with The Orange Pill's observation that educational establishments are operating on assumptions that the world has already invalidated. But Allen's diagnosis runs deeper than the skills gap. The failure is not merely that schools are teaching the wrong skills. The failure is that the purpose of education has been captured by an economic rationale — preparing students for the labor market — that was always incomplete and that AI has rendered untenable.

When AI can perform many of the skills that education has traditionally taught, the economic justification for education collapses. What remains — what was always the deeper justification, now made urgently visible by its contrast with everything AI can do — is the democratic purpose. Education exists to prepare citizens for the practice of self-governance. This purpose does not become obsolete when AI arrives. It becomes the only purpose that justifies the enormous public investment that education requires.

The curriculum that the moment demands is a curriculum organized around democratic preparation. It teaches questioning over answering, because the citizen who can frame the right question contributes more to democratic deliberation than the citizen who can recite the right answer. It teaches integration over specialization, because the governance of AI requires the capacity to connect technical knowledge with ethical reasoning, economic analysis, and the lived experience of affected communities. It teaches judgment over execution, because AI has made execution abundant while leaving judgment as the scarce and essential human contribution.

Allen has called the current moment "a constitutional moment" — a period that demands the same kind of foundational institutional redesign that the American founding required. The educational dimension of this constitutional moment is the redesign of democratic education around the capacities that self-governance in the age of AI actually requires. The redesign cannot wait for a generation of curriculum reform. The citizens who will make the decisions that shape AI governance for decades are in school now. What they learn — or fail to learn — about the practice of democratic deliberation will determine the quality of the institutions that govern AI for the rest of the century.

---

Chapter 7: The Commons of Intelligence

Before the English enclosures, there were fields that belonged to everyone and no one.

The commons — the shared pastures, woodlands, and waterways on which medieval village communities depended — were governed by elaborate systems of collective rules developed through democratic deliberation among the commoners themselves. These rules regulated access, determined use, allocated maintenance obligations, and balanced individual benefit against collective sustainability. The commons were not anarchic. They were, in many cases, more carefully governed than the private estates that surrounded them, precisely because their shared character demanded governance structures that could mediate competing claims without the simplifying authority of a single owner.

The enclosure movement of the seventeenth and eighteenth centuries destroyed this governance and converted the shared resource into private property. The economic efficiency arguments for enclosure were, in aggregate terms, sound — enclosed land was more productive per acre than commons land. The distributional consequences were catastrophic. The commoners who had sustained their families on shared resources for generations were displaced, impoverished, and forced into wage labor in the emerging industrial economy. The destruction of the commons was not merely an economic event. It was a political one — the elimination of a system of democratic self-governance that had sustained communities for centuries, replaced by a system of private ownership that concentrated both economic resources and governance authority in the hands of landowners.

Danielle Allen's democratic theory, informed by the work of Elinor Ostrom on commons governance and integrated into her power-sharing liberalism framework, insists that the commons question is not a historical curiosity. It is a recurring structural challenge that every democratic society must confront whenever a shared resource becomes essential to collective life. The atmosphere is a commons threatened by enclosure through carbon emissions. The oceans are a commons threatened by overfishing and industrial exploitation. The electromagnetic spectrum is a commons whose governance has been the subject of regulatory struggle for a century.

AI-generated intelligence is becoming a new commons — the shared resource on which productive, educational, and civic life increasingly depends. The analogy is not metaphorical. It is structural. The training data on which large language models are built is itself a commons — the accumulated output of human civilization, the books, articles, code, conversations, and creative works that billions of people have produced over centuries. This collective intellectual inheritance belongs to no one and to everyone. It is the shared resource from which the value of AI systems is extracted.

The extraction follows the pattern of enclosure with unsettling precision. The raw material is shared. The value extracted from it is private. Companies train proprietary models on the collective output of human civilization, then sell access to those models under terms that the original contributors had no role in setting. The commoner whose published writing, whose shared code, whose public conversations constitute the training data has been enclosed — her contribution captured, processed, and returned to her as a product she must pay to use.

Allen and Weyl identified this dynamic in their Journal of Democracy essay through the lens of power concentration. The "investment scale required to develop the models" — billions of dollars, accessible to a handful of corporations — creates barriers to entry that function as enclosure fences. The small developer, the community organization, the public institution cannot build its own large language model. It must use the models that the enclosers provide, on the terms the enclosers set. The intelligence commons has been enclosed, and the commoners are now tenants.

Allen's GETTING-Plurality research network at Harvard represents the most systematic attempt to construct an alternative paradigm. The network's foundational premise is that the development of AI should be organized around the concept of plurality rather than singularity — "the plural (and not singular) nature of human and (possibly) artificial intelligence." The distinction is not merely philosophical. It is institutional. The singularity paradigm concentrates AI development in a few large organizations pursuing a single goal: the creation of general-purpose systems that replace human cognitive labor. The plurality paradigm distributes AI development across many organizations pursuing many goals: augmenting human cooperation, supporting democratic deliberation, enabling diverse communities to build tools that reflect their own values and serve their own purposes.

The governance of the intelligence commons, if Allen's framework is to be realized, requires institutional innovations at multiple scales simultaneously.

At the level of training data, the commons must be recognized as a commons — a shared resource whose use requires collective governance rather than unilateral extraction. This might take the form of data trusts, institutions that hold training data on behalf of the communities that produced it and negotiate the terms on which AI companies can access it. It might take the form of compensation mechanisms that return a share of AI-generated value to the creators whose work made that value possible. It might take the form of governance requirements that give communities a voice in how their collective intellectual output is used — which models are trained on it, for what purposes, under what constraints.

At the level of model development, the commons must be protected against monopolistic enclosure through investments in open-source AI and public AI infrastructure. Allen's "Roadmap" paper is explicit about this need. When AI capability is concentrated in a few proprietary systems, the entire community is dependent on the decisions of a few corporations. Public investment in AI infrastructure — funded by taxation and governed by democratic institutions — provides an alternative that ensures the intelligence commons remains genuinely accessible rather than formally open but practically enclosed.

At the level of deployment, the commons must be governed to prevent the externalities that unregulated use produces. The intelligence commons, like any commons, is vulnerable to tragedy — the degradation that occurs when individual users exploit the shared resource without bearing the cost of its maintenance. In the case of AI, the relevant externalities include the displacement of workers, the erosion of the information environment through AI-generated misinformation, the concentration of economic power, and the environmental costs of computation. Governance mechanisms — licensing, impact assessment, distributional requirements — are the commons management rules that prevent individual exploitation from degrading the shared resource.
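To make one of these mechanisms concrete: the compensation idea mentioned at the training-data level can be sketched as a simple pro-rata split administered by a data trust. The rule and every figure below are invented for illustration; a real mechanism would have to weigh quality, consent, provenance, and much else.

```python
# A hypothetical compensation mechanism: a data trust routes a fixed share
# of model revenue to contributors in proportion to corpus contribution.
# The split rule and all numbers are invented for illustration.
contributions_tokens = {
    "author_a": 2_000_000,
    "coder_b": 500_000,
    "forum_c": 7_500_000,
}
model_revenue = 1_000_000.00   # hypothetical annual revenue (USD)
trust_share = 0.05             # hypothetical fraction routed to contributors

pool = model_revenue * trust_share
total = sum(contributions_tokens.values())
for name, tokens in contributions_tokens.items():
    payout = pool * tokens / total
    print(f"{name}: ${payout:,.2f}")
```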

Allen's work suggests that the governance of the intelligence commons cannot be adequately addressed by any single level of institutional action. National regulation alone is insufficient because AI platforms operate globally. International governance alone is insufficient because the specific impacts of AI vary across communities and require locally responsive governance. Corporate self-regulation alone is insufficient because the incentive structures of private enterprise do not align with the interests of the commons. The governance architecture must be multi-layered — local, national, and international institutions operating in coordination, each addressing the governance challenges that its scale is best equipped to handle.

The historical precedents for this kind of multi-layered commons governance are imperfect but instructive. The governance of the global atmosphere through international climate agreements demonstrates that commons governance at planetary scale is possible, though difficult and incomplete. The governance of the internet through multi-stakeholder institutions demonstrates that the governance of global digital infrastructure can incorporate diverse voices, though the power asymmetries among stakeholders have consistently favored wealthy nations and corporations. The governance of local water systems through community institutions demonstrates that commons governance at the community level can be highly effective when communities possess the authority and resources to manage their shared resources.

Each model has limitations. None can be directly transplanted to the governance of AI intelligence. But together they demonstrate that democratic governance of shared resources is possible across scales, and they identify the design principles that effective commons governance requires: transparency about the state of the resource, participation by those who depend on it, mechanisms for resolving conflicts among competing uses, and accountability for those who manage the resource on behalf of the community.

Allen's invocation of a "constitutional moment" takes on its deepest meaning in the context of the intelligence commons. The American Constitution was, among other things, a governance framework for shared resources — the powers of the federal government, the rights of citizens, the mechanisms for resolving conflicts among states. It was designed by people who understood that shared resources require shared governance, and that the alternative to democratic governance of shared power is the concentration of that power in the hands of the few.

The intelligence commons presents a governance challenge of comparable magnitude. The shared resource — the collective intelligence of human civilization, augmented and distributed through AI systems — is among the most valuable resources in human history. Its governance will determine whether that value is distributed broadly or captured narrowly, whether it serves the democratic commitment to equality or undermines it, whether it augments human agency or replaces it.

The choice between commons and enclosure is not a technical question. It is a political one — a question about who will govern the most powerful shared resource of our time, and in whose interest. Democratic theory provides the principles. The institutional creativity to apply those principles to the intelligence commons is the work that lies ahead.

---

Chapter 8: Education for Democratic Participation After AI

The purpose of public education has been contested since the first common school opened its doors, and the contest has always been, at bottom, a contest about democracy.

One tradition holds that education exists to prepare individuals for economic productivity — to equip them with the skills that employers require, to sort them by competence, to certify their readiness for particular forms of work through credentials that signal their value to the labor market. This tradition has dominated educational policy for decades, and its influence is visible in the structures of contemporary schooling: the emphasis on standardized testing, the organization of curricula around discrete skills, the evaluation of educational institutions by the employment outcomes of their graduates.

A second tradition, older and deeper, holds that education exists to prepare citizens for self-governance. This tradition stretches from Aristotle's insistence that the education of citizens is the most important function of the polis through Thomas Jefferson's argument for public education as the foundation of republican government to John Dewey's vision of the school as a laboratory of democratic life. In this tradition, the skills that matter most are not the skills that employers demand but the skills that collective self-governance requires: the capacity to deliberate, to evaluate evidence, to recognize legitimate disagreement, to build consensus across difference, to participate in the ongoing construction of the rules that govern shared life.

Danielle Allen stands firmly in this second tradition, and her work provides the most rigorous contemporary framework for understanding what democratic education requires in the age of artificial intelligence. Her reading of the Declaration of Independence as an educational document — a model of the democratic skills its authors hoped to cultivate — establishes the connection between education and self-governance not as a policy preference but as a foundational commitment of the democratic project itself.

The AI moment has collapsed the economic rationale for education with a speed and thoroughness that should alarm anyone who depends on that rationale to justify public investment in schooling. When AI can write competent code, draft legal briefs, generate financial analyses, produce marketing copy, compose music, create visual art, and perform medical diagnoses, the skills-for-employment argument for education loses its force. The student who spends four years acquiring coding proficiency enters a labor market in which AI provides coding proficiency at negligible cost. The student who masters legal research enters a profession in which AI conducts legal research faster and more comprehensively than any human. The credentials that once signaled valuable skills now signal skills that the market may no longer need.

This collapse creates a crisis that is simultaneously economic, institutional, and — in Allen's framework — democratic. Economic because students and families are questioning whether the investment of time and money that traditional education requires is justified by the employment outcomes it produces. Institutional because educational establishments are watching their value proposition erode in real time without the institutional agility to redefine it. Democratic because the collapse of the economic rationale threatens to undermine public support for the educational institutions that democratic self-governance requires.

Allen's response to this crisis, developed across her work on democratic education, civic participation, and the governance of AI, is to argue that the collapse of the economic rationale does not destroy the case for education. It reveals the case that was always more fundamental — the democratic case — by stripping away the economic scaffolding that had been concealing it. Education was never primarily about preparing workers. It was about preparing citizens. The economic rationale was always a supplement to the democratic purpose, not a substitute for it. Now that the supplement has lost its force, the purpose stands exposed and demands to be addressed on its own terms.

What does democratic education look like when the economic rationale has collapsed? Allen's framework, combined with the evidence from the AI transition itself, suggests several principles.

The first principle is that democratic education must prioritize judgment over execution. When AI handles execution — the production of code, the drafting of documents, the generation of analyses — the human contribution shifts to the judgment that directs execution. The judgment about what software should exist and whom it should serve. The judgment about which legal argument is sound and which is merely plausible. The judgment about what data means and what it does not mean. The judgment about what is worth building in a world where everything describable is buildable.

Allen's work connects this shift to the requirements of democratic participation. The citizen who can exercise sound judgment about AI governance — who can evaluate competing policy proposals, assess the distributional consequences of different regulatory approaches, distinguish between genuine democratization and superficial access — is the citizen that democratic education must produce. This citizen does not need to know how to code. She needs to know how to think about what coding produces and whom it affects and whether the institutions governing it serve the democratic commitment to equality.

The second principle is that democratic education must cultivate the capacity for collective deliberation. Allen has argued throughout her career that democracy is a practice of reasoning together — of building agreement across difference through the patient work of argument, evidence, and mutual recognition. This practice requires skills that are distinct from individual cognitive competence: the ability to listen to perspectives that differ from one's own, to modify one's position in response to persuasive argument, to find common ground without abandoning principle, to accept imperfect outcomes as the necessary cost of collective decision-making.

AI makes these skills more urgent because the decisions that democratic societies must make about AI governance are precisely the kind that require collective deliberation — decisions that affect everyone, that involve contested values, that require the integration of diverse forms of knowledge, and that cannot be resolved by any single perspective, however informed. Allen's positive vision of AI-enhanced deliberation — the use of AI tools to facilitate large-scale democratic conversation, to surface hidden agreements, to make complex policy tradeoffs visible and comprehensible — can only be realized if citizens possess the deliberative capacities that make productive engagement with these tools possible.

The third principle is that democratic education must develop the capacity for critical evaluation of AI-generated content. Research from Allen's lab on the "ethical-moral intelligence" of AI systems — the finding that models exhibit "greater certainty than humans when choosing between conflicting sacred values, despite recognizing such tragic tradeoffs as difficult" — has direct implications for democratic education. AI systems produce outputs that carry the surface markers of authority — fluent prose, confident assertions, comprehensive coverage — while lacking the genuine uncertainty that rigorous thinking about complex issues requires. Citizens who cannot distinguish between the appearance of authority and its substance are vulnerable to manipulation at a scale that previous technologies never made possible.
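The certainty finding can be given quantitative shape. The sketch below is not the cited study's methodology; it simply illustrates one standard way to quantify certainty — as the entropy of a response distribution — using hypothetical numbers.

```python
# A minimal illustration, not the cited study's method: certainty
# quantified as low entropy of a response distribution. All shares
# below are hypothetical.
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical shares choosing option A vs. B on a tragic tradeoff:
human_votes = [0.55, 0.45]   # humans split nearly evenly
model_votes = [0.97, 0.03]   # a model answers almost uniformly one way

print(f"human entropy: {entropy(human_votes):.2f} bits")  # ~0.99 (high uncertainty)
print(f"model entropy: {entropy(model_votes):.2f} bits")  # ~0.19 (high certainty)
```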

This evaluation capacity is not a new skill. It is the oldest skill in the liberal arts tradition — the skill of rhetoric, of distinguishing between persuasion that serves truth and persuasion that serves the persuader. What is new is the scale at which the skill is needed and the sophistication of the content it must evaluate. When any citizen can generate a policy brief that reads as though it were produced by a team of analysts, the capacity to evaluate policy briefs with genuine critical rigor becomes essential to the functioning of democratic deliberation itself.

The fourth principle is that democratic education must prepare citizens for governance under conditions of uncertainty. Allen's recognition that the AI transition represents a "constitutional moment" implies that the decisions being made now will shape governance structures for decades. But the decisions must be made under conditions of radical uncertainty about AI's trajectory, its long-term consequences, and the adequacy of any governance framework to address circumstances that no one can predict. Citizens who are prepared for this kind of decision-making can build institutions that are robust enough to function under uncertainty — institutions with built-in mechanisms for revision, adaptation, and democratic correction as circumstances change.

The institutional challenge is immense. Educational systems are among the slowest-adapting institutions in democratic societies, constrained by entrenched bureaucracies, credentialing requirements, political interests, and the sheer inertia of established practice. Allen has been characteristically clear-eyed about this challenge, and her institutional recommendations — including AI offices in state governments and regulatory frameworks that could extend to educational governance — reflect the recognition that the adaptation required is systemic rather than incremental.

But the urgency is equally immense. The citizens who will govern AI for the next half-century are in school now. What they learn — or fail to learn — about the practice of democratic deliberation, the evaluation of AI-generated content, the exercise of judgment under uncertainty, and the skills of collective decision-making will determine the quality of AI governance for generations. Every year that educational institutions fail to reorient around democratic preparation is a year of citizens entering the electorate without the capacities that self-governance in the age of AI requires.

Allen has argued that AI itself, properly governed, could be a powerful tool for democratic education — enabling large-scale deliberative exercises, providing personalized learning experiences that cultivate judgment rather than rote competence, making complex policy issues accessible to citizens who lack specialized training. But this positive use of AI in education depends on the very governance frameworks that democratic education is supposed to prepare citizens to construct. The circularity is uncomfortable but not paralyzing. It simply means that the work of democratic education and the work of democratic AI governance must proceed simultaneously, each informing and enabling the other.

The school that teaches questioning over answering, deliberation over demonstration, judgment over execution, and humility in the face of uncertainty is a school that prepares citizens for the constitutional moment that Allen has identified. It is also a school that justifies its existence on grounds that AI cannot undermine — grounds that are, in fact, made more compelling by AI's presence. The more capable the machine, the more essential the citizen who can govern it wisely. Democratic education is not threatened by AI. It is, for the first time in decades, revealed as indispensable.

---

Chapter 9: The Architecture of Inclusion and the Protection of Difference

Genuine inclusion has never been achieved by opening a door.

The Americans with Disabilities Act of 1990 did not merely declare that people with disabilities could enter buildings. It specified the width of doorways, the grade of ramps, the height of counters, the placement of signage, the texture of floor surfaces. It understood that the barrier was not a locked gate but an architecture — a built environment designed around assumptions about what a normal body looks like, how a normal body moves, what a normal body can reach. The locked gate could be opened by decree. The architecture could only be changed by redesign, and the redesign required understanding, at a granular level, how the existing design excluded the people it was supposed to serve.

Danielle Allen's concept of inclusion operates at this architectural level. Inclusion is not the absence of exclusion. It is the positive design of institutions, systems, and environments that make participation possible for everyone — that anticipate the specific obstacles different populations face and construct the conditions under which diverse people can contribute their capabilities to the shared enterprise. The distinction is the difference between a door that is formally unlocked and a building that is actually navigable. The door satisfies the declaration. The building satisfies the practice.

Applied to AI, Allen's architectural concept of inclusion exposes a gap between the democratization narrative and the democratization reality that is wider than most commentary acknowledges. The narrative says: AI tools lower the barrier to building. Anyone with an idea and the ability to describe it can now produce working software, functional prototypes, creative works, analytical outputs. The floor of who gets to build has risen. This is real, and it matters, and it represents a genuine expansion of participatory capacity.

The reality says: the tools are designed for a specific population, embedded in a specific infrastructure, governed by specific assumptions about who the user is, what the user needs, and how the user works. The expansion of access is genuine. The architecture of that access is not neutral.

Consider the dimensions of the architecture that Allen's framework makes visible.

Infrastructure is the most material dimension and the easiest to understand. AI tools require computational power that is unevenly distributed across the globe. Reliable electricity, high-speed internet connectivity, devices capable of running modern software — these are the physical prerequisites for participation in the AI economy. They are not available to billions of people. The International Telecommunication Union estimates that roughly 2.6 billion people remain entirely offline. Billions more have connectivity that is too slow, too expensive, or too unreliable to support sustained interaction with cloud-based AI systems. The developer in Lagos whose ideas are as good as anyone's in San Francisco — the figure who appears repeatedly in democratization narratives — faces infrastructure constraints that no amount of tool accessibility can overcome if the electricity cuts out twice a day and the bandwidth cannot sustain a real-time conversation with a large language model.

Allen's "Roadmap for Governing AI" includes public infrastructure investment among its seventeen recommendations, and the inclusion rationale is explicit. When AI capability becomes essential to participation in economic and civic life, the infrastructure that enables access to that capability becomes a democratic necessity — not a market outcome to be addressed through private investment but a public good that requires public provision, funded by taxation and governed by democratic institutions.

Language is the second dimension, and it is more consequential than infrastructure conversations typically acknowledge. The large language models that power the AI tools driving the democratization narrative were trained predominantly on English-language data. They work best in English. They reflect English-language categories of thought, cultural references, patterns of reasoning, and assumptions about the world. For the billions of people who do not speak English natively — including the majority of the world's developers, educators, and civic leaders — this linguistic bias is not a minor inconvenience. It is a structural barrier to the kind of full participation that Allen's inclusion framework demands.

The bias operates at levels that are difficult to detect and nearly impossible to correct through surface-level adaptation. Translation is not the same as inclusion. A model that generates fluent text in Yoruba but was trained on English-language reasoning patterns is producing Yoruba-language outputs shaped by English-language cognition. The categories, the assumptions, the patterns of inference — these reflect the training data, not the language of the output. Allen's principle of "difference without domination" — the insistence that genuine diversity requires the structural prevention of any difference from becoming the basis for systematic advantage — is violated by a system that presents itself as multilingual while operating, at the deepest cognitive level, from a single linguistic perspective.

Culture is the third dimension, and it is the one that Allen's philosophical work engages most distinctively. The tools are designed within a specific cultural context — the technology industry of the American West Coast — and they embed the values of that context in their design, their defaults, their optimization targets, their definition of what counts as a good outcome. The aesthetic of smoothness that Byung-Chul Han diagnosed and that The Orange Pill engages at length — frictionless, seamless, optimized for speed and efficiency — is not a universal human value. It is a particular cultural preference, rooted in a particular historical moment, reflecting particular assumptions about what matters in human experience.

Many cultural traditions value what smoothness eliminates. Deliberation that takes time. Decision-making that incorporates silence. Creative processes that embrace friction as generative rather than treating it as waste. Religious practices that structure time around contemplation rather than productivity. Indigenous governance traditions that require consensus rather than optimization. When AI tools are designed to maximize smoothness, they impose one culture's values on a diverse world of practices that may define quality, excellence, and the good life in fundamentally different terms.

Allen and her collaborators in the GETTING-Plurality network have argued that the alternative to cultural homogenization through AI is the development of AI systems that are genuinely plural — systems that can be configured to reflect different cultural values, that are trained on diverse data reflecting the full range of human cultural expression, and that are governed by institutions that give diverse communities genuine authority over how the tools operate in their contexts. The network's foundational premise — that AI development should be organized around plurality rather than singularity — is, at its deepest level, a claim about cultural inclusion: the insistence that the intelligence commons should serve the full diversity of human communities rather than imposing the values of the communities that happened to build the first models.

Governance is the fourth dimension, and it is the one that connects the architecture of inclusion to every other argument in this book. Genuine inclusion requires not merely access to tools but participation in the governance of the systems on which those tools depend. The developer who can use Claude Code but cannot influence Anthropic's decisions about pricing, capability, and terms of use is included as a user and excluded as a citizen. The teacher who integrates AI into her classroom but has no voice in how the AI is trained, what biases it encodes, or what educational assumptions it embeds is included as a consumer and excluded as a professional. The community that depends on AI-enhanced public services but has no mechanism for shaping how those services are designed is included as a beneficiary and excluded as a participant.

Allen's entire body of work converges on this point. Inclusion without governance participation is access without agency. It is the condition of the consumer, not the condition of the citizen. And the distinction matters because the decisions that governance participation influences — the decisions about what the tools do, for whom, under what constraints — are the decisions that determine whether the tools serve the community or merely extract value from it.

The concept of "difference without domination," which Allen developed in her philosophical work and applied directly to AI governance in her "Roadmap" paper, provides the normative standard for evaluating whether the architecture of inclusion is genuine. Difference without domination means that diverse communities can use AI tools in ways that reflect their own values — their own definitions of quality, their own priorities, their own governance traditions — without any community's values being imposed on others through the defaults embedded in the tools' design. It means that the developer in Lagos and the educator in Mumbai and the Indigenous governance council in New Zealand can each engage with AI technology on terms that reflect their own understanding of what the technology should be for, rather than accepting the terms imposed by designers in San Francisco.

This standard is far from being met. Current AI tools are designed for a relatively homogeneous user population and governed by institutions that reflect the interests of that population. The architecture of inclusion — the infrastructure investments, the multilingual AI development, the cultural configurability, the governance mechanisms — has barely begun to be constructed. The gap between the declaration of democratization and the practice of genuine inclusion remains vast.

Allen has described AI governance as part of "a historical contest over what framework of political economy is going to define the world." The architecture of inclusion is where that contest is most concretely fought — not in the abstract realm of principles and declarations but in the specific decisions about who has access to what infrastructure, in what language, reflecting whose values, governed by whom. The contest will be won or lost not by the quality of the arguments but by the quality of the architecture. And the architecture of genuine inclusion, as the history of every previous democratic expansion demonstrates, does not build itself. It must be designed, funded, and maintained through the sustained political commitment of democratic societies that take their own principles seriously enough to invest in their realization.

---

Chapter 10: From Declaration to Practice

The distance between a commitment and its realization is the most important distance in democratic life, and it is always wider than the people making the commitment imagine.

The Declaration of Independence committed the United States to the proposition that all men are created equal. Eighty-seven years later, six hundred thousand people died in a war to determine whether that commitment extended to people whose ancestors had been kidnapped from Africa. Fifty-five years after that, women won the right to vote — one hundred forty-four years after a document that purported to speak for all of humanity was written by men who did not consider women's political participation worth mentioning. Another forty-five years passed before the Voting Rights Act began dismantling the legal infrastructure that had effectively nullified the franchise for Black citizens in the American South. Another fifty years before the Supreme Court held that the equal protection clause applied to same-sex couples.

Two hundred and thirty-nine years from declaration to the most recent major expansion of the principle's application. And the application remains incomplete — contested, fragile, subject to reversal, requiring constant defense. The declaration was a morning's work. The practice has been the labor of centuries.

Danielle Allen has spent her career studying this distance — the gap between declaration and practice that is the central drama of democratic life. Her reading of the Declaration of Independence, the work that established her public reputation and that continues to inform everything she has written since, insists that the distance is not a failure of the document. It is a feature of the democratic project itself. Democracy is not a state to be achieved. It is a practice to be maintained — an ongoing, never-completed effort to close the gap between what a society commits to in its foundational documents and what it actually delivers in the lived experience of its citizens. The gap never closes completely. The effort to close it never ends. The quality of a democracy is measured not by whether the gap has been eliminated but by whether the society is actively working to narrow it.

*The Orange Pill* is, in Allen's terms, a declaration. It declares that AI is democratizing capability. It declares that the imagination-to-artifact ratio has collapsed, that the floor of who gets to build has risen, that the distance between what a person can conceive and what a person can realize has compressed to the width of a conversation. It declares that the appropriate response to the AI transformation is neither refusal nor uncritical acceleration but the patient, attentive construction of structures that channel the technology's power toward human flourishing. These are genuine commitments, and they are offered with an honesty about the author's own uncertainties and failures that gives them more weight than most technology manifestos deserve.

But the declaration is only the beginning. The practice is what matters. And the practice, as Allen's framework makes visible with uncomfortable clarity, has barely begun.

Consider the gap between the declaration of democratized capability and the practice of democratic AI governance. The tools are deployed. Millions of people are using them. The productive transformation documented in *The Orange Pill* is real and accelerating. But the governance institutions that would make this transformation genuinely democratic — the mechanisms for user participation in platform governance, the transparency requirements, the distributional structures, the educational systems, the infrastructure investments, the multilingual and culturally responsive AI development — are either nonexistent or radically insufficient. The EU AI Act exists but addresses supply-side regulation rather than demand-side empowerment. The American executive orders on AI exist but lack the institutional infrastructure for enforcement and adaptation. The "emerging governance frameworks" in other nations are, in most cases, frameworks in name only — statements of principle without the institutional capacity to translate principle into practice.

Allen has been precise about the nature of the gap. Her "Roadmap for Governing AI" identified seventeen specific recommendations, ranging from federal licensing of AI firms to AI offices in state governments to investments in public goods and democratic infrastructure. The recommendations are specific enough to be actionable. Almost none of them have been implemented. The distance between Allen's roadmap and the actual state of AI governance is itself a measure of the declaration-practice gap — a demonstration that even the most carefully reasoned proposals for democratic AI governance remain proposals rather than practices.

The gap operates at every level this book has examined. At the level of equality (Chapter 1): the redistribution of capability is real, but the institutional structures that would make it genuinely equitable are absent. At the level of workplace practice (Chapter 2): the dissolution of role boundaries creates new possibilities for participation, but the institutional protections that would ensure fair distribution of productivity gains and protect workers' time and dignity are inadequate. At the level of transition costs (Chapter 3): the costs are falling on the already vulnerable, and the institutions that would distribute them more equitably do not exist. At the level of civic agency (Chapter 4): the builders possess knowledge that democratic governance needs, but the institutional frameworks that would channel that knowledge toward democratic purposes are absent. At the level of consent (Chapter 5): billions of people depend on AI platforms they did not help design and cannot influence. At the level of education (Chapters 6 and 8): the curriculum that democratic participation requires has barely begun to be developed, let alone implemented. At the level of commons governance (Chapter 7): the intelligence commons is being enclosed by private interests while the governance institutions that would protect it remain largely imaginary. At the level of inclusion (Chapter 9): the architecture of genuine inclusion — infrastructure, language, culture, governance — remains unbuilt.

Allen would insist — has insisted, throughout her career — that the width of the gap is not a reason for despair. It is a diagnosis. It is a measurement of the work that remains to be done. And the work, in democratic life, is never finished. The institutions that governed previous technological transitions were not built overnight. They were built through decades of political struggle, institutional experimentation, democratic mobilization, and the persistent refusal to accept that the gap between declaration and practice was acceptable or inevitable.

But Allen has also been clear that the AI transition presents a distinctive challenge that the historical parallels do not fully capture. The speed of the transition exceeds anything previous generations confronted. The capabilities that reshaped work in late 2025 will be dramatically exceeded within years. The governance institutions being designed today will face technological capabilities that no one can predict. The gap between the speed of capability change and the speed of institutional response is not closing. It is widening.

In her April 2025 lecture at Endicott College, Allen framed the challenge as "a historical contest over what framework of political economy is going to define the world as AI transforms it." The contest is not merely between competing policy proposals. It is between competing visions of how collective life should be organized — a vision in which the infrastructure of productive and civic life is governed by democratic institutions accountable to the public, and a vision in which that infrastructure is governed by market forces accountable to shareholders. The outcome of this contest will determine whether the declaration of democratized capability becomes the foundation of a new era of genuine democratic participation or the mask that a new form of concentrated power wears as it consolidates.

Allen's work provides both the diagnostic framework for understanding the gap and the institutional vision for closing it. Her power-sharing liberalism insists that human flourishing — not merely harm prevention — must be the organizing principle of AI governance. Her concept of participatory readiness identifies the civic capacities that democratic populations must develop. Her "Roadmap" provides specific institutional recommendations. Her GETTING-Plurality network is building the intellectual infrastructure for a paradigm of AI development organized around plurality rather than singularity.

But the institutional vision, however rigorous, cannot close the gap by itself. The gap is closed by democratic action — by the sustained political engagement of citizens who understand what the AI transition demands and who are willing to do the slow, unglamorous, essential work of building the institutions that democratic governance requires. This work includes voting for representatives who take AI governance seriously. It includes participating in regulatory processes that shape the rules governing AI platforms. It includes demanding that educational institutions prepare citizens for democratic participation rather than merely for employment. It includes organizing in workplaces to ensure that the benefits of AI-enhanced productivity are distributed equitably. It includes insisting that the infrastructure of AI capability be governed as a commons rather than enclosed as private property.

The Declaration of Independence was written in a single summer. The practice of equality it declared has consumed two and a half centuries and remains unfinished. The declaration that AI democratizes capability has been made. The practice of genuine democratization — the construction of institutions that make the declaration real in the lives of the billions of people who will be affected by this technology — is the work that lies ahead. It is the work of democratic life itself, now applied to the most powerful technology that democratic societies have ever confronted.

Allen's final word, implicit in everything she has written and made explicit in her framing of the "constitutional moment," is that worthiness is not a quality of individuals. It is a quality of institutions. The question is not whether any person is worthy of being amplified. The question is whether the institutions governing AI are worthy of the democratic commitments they claim to serve — whether they distribute power broadly, include all affected parties in governance, protect the conditions for genuine participation, and prevent the concentration of capability from becoming the concentration of power.

That question is not answered by declarations. It is answered by practice. And the practice, as always in democratic life, has only just begun.

---

Epilogue

The phrase that kept circling back was not one of Danielle Allen's. It was one of mine.

In *The Orange Pill*, I wrote about the "imagination-to-artifact ratio" — the distance between what a person can conceive and what a person can build. I celebrated its collapse. I watched my engineers in Trivandrum build things in days that would have taken months. I felt the exhilaration of productive capability expanding faster than I had ever experienced, and I wrote about that exhilaration with the honesty I could muster at the time.

Allen took my phrase and did something with it that I had not done. She asked: whose imagination? Built on whose infrastructure? Governed by whose rules?

The questions are not hostile. They are the questions that a serious democrat asks of any redistribution. Allen has spent her career studying what happens when a society declares equality and then fails to build the institutions that make the declaration real. Two hundred and thirty-nine years between "all men are created equal" and the Supreme Court recognizing that equal protection extends to same-sex couples. That is the distance between declaration and practice, and Allen measures it not to discourage but to diagnose — to show exactly how much institutional work the gap demands.

I had described the developer in Lagos as someone whose ideas now had a path to reality. Allen asked whether that path ran through infrastructure the developer did not control, in a language she did not choose, under terms she did not negotiate, subject to revision by people she could not influence. The answer to every one of those questions is yes. And the distance between "you can use this tool" and "you can govern this tool" is the distance between access and agency — between the consumer and the citizen.

What hit me hardest was Allen's reframing of the question I posed at the end of *The Orange Pill*. I asked whether we are worthy of being amplified. She answered that worthiness is not a quality of individuals. It is a quality of institutions. The distinction matters more than I initially realized. I can work on myself — examine my biases, sharpen my judgment, deepen my self-knowledge. But no amount of individual worthiness protects the developer in Lagos if the platform she depends on changes its pricing, or the educator in Mumbai if the AI she has built her curriculum around modifies its capabilities, or the community that has organized its governance around tools whose terms of service can be rewritten unilaterally.

Allen calls this moment a "constitutional moment" — a phrase that carries extraordinary weight from someone who has devoted her career to studying the original constitutional moment of American democracy. The founders faced the question of how to govern shared power. We face the question of how to govern shared intelligence. The parallel is not rhetorical. It is structural. And the quality of our answer will determine, as Allen argues, whether the most powerful technology democratic societies have ever encountered serves the commitment to equality or undermines it.

I still believe what I wrote in *The Orange Pill*. The amplification is real. The democratization of capability is genuine. But Allen has convinced me that the celebration is premature — not wrong, but incomplete. The dams I described need to be designed not just by builders but by the communities they protect. The river I celebrated needs governance structures that include the people downstream, not just the people at the frontier. The question is not whether we can build. It is whether we can govern what we build with the democratic commitment that makes the building worthwhile.

That is the work. It is slower than building. It is less exhilarating. It does not produce the dopamine rush of shipping a product in thirty days. But it is the work on which everything else depends, and Allen has made me see it with a clarity I did not possess before.

The gap between declaration and practice. That is where we live now. That is where the real work begins.

Edo Segal

---

Back Cover

You can use the tool. You cannot govern it.
That gap is the democratic crisis of the AI age.

AI has collapsed the distance between imagination and creation. Anyone with an idea can build. The celebration is everywhere, and it is incomplete. Danielle Allen, one of democracy's most rigorous living theorists, asks the question the builders skip: Who controls the infrastructure your capability depends on? Her framework reveals that democratized access without democratic governance is not liberation; it is dependency dressed in the language of empowerment. This volume applies Allen's concepts of equality-as-practice, power-sharing liberalism, and participatory readiness to the AI revolution, exposing the institutional vacuum where governance should be and mapping what must be built to fill it. The tools are deployed. The dams are not. Allen shows why that gap threatens everything the technology promises.

"A healthy democracy could govern the technology and put it to good use in countless ways."
— Danielle Allen, The Washington Post, 2023