Power-sharing liberalism puts human flourishing at the center of political inquiry and treats positive liberties—the substantive capacity to participate in collective life—as no less fundamental than negative liberties. Building on Amartya Sen, Philip Pettit, Elizabeth Anderson, and Elinor Ostrom, Allen developed the framework as an alternative to both market liberalism (which reduces freedom to non-interference) and social-democratic paternalism (which substitutes state provision for citizen agency). Applied to AI, the framework insists that governance must be proactive and generative—asking what the technology should be for—rather than merely reactive and harm-preventing.
There is a parallel reading that begins not with democratic ideals but with the physical infrastructure AI requires: server farms consuming nation-state levels of electricity, rare earth mining devastating communities, fiber optic cables crossing sovereign boundaries, and computational resources concentrated in a handful of corporate data centers. From this vantage point, power-sharing liberalism appears as a noble abstraction floating above the material reality of how intelligence gets computed. The framework speaks of "distributing power broadly" and "collective governance," but the substrate of AI—the actual silicon, energy, and bandwidth—follows a physics of concentration that no amount of participatory theory can wish away.
The lived experience of those most affected by AI deployment tells a story quite different from one of expanding democratic participation. The Uber driver whose algorithmic boss changes payment rules without notice experiences not an absence of democratic input but the presence of a new form of economic precarity. The content moderator in Manila reviewing traumatic images for Silicon Valley platforms participates in AI's development, but as a human shock absorber, not a democratic agent. The framework's emphasis on "asking what technology should be for" assumes a collective "we" that can meaningfully pose this question, but the actual mechanisms of AI development—venture capital allocation, corporate R&D priorities, military procurement—operate through logics of accumulation and strategic advantage that structurally exclude the democratic deliberation Allen envisions. Power-sharing liberalism may provide the normative vocabulary for critique, but without engaging how AI's material requirements create systematic barriers to the very participation it champions, it risks becoming what it seeks to transcend: a formal framework that leaves actual power relations undisturbed.
Allen's framework emerged from sustained engagement with three traditions. From republicanism, she took the concept of non-domination—the insistence that freedom requires not merely the absence of interference but the absence of another's arbitrary power over you. From the capability approach, she took the focus on substantive freedom and the recognition that material conditions determine whether formal rights become lived capacities. From commons governance, she took the institutional imagination to think beyond the state-market binary that has dominated liberal theory for a century.
The AI application is decisive. Most AI governance discourse operates in the register of negative liberty: preventing discrimination, blocking deepfakes, regulating deceptive practices. Allen argues this framing is radically incomplete. It leaves unasked the generative question: what should the technology be for? A framework focused only on preventing harm will never ask whether AI should be used to enhance democratic deliberation, augment human cooperation, or expand the conditions for self-governance.
The framework has direct institutional implications. It justifies public investment in AI infrastructure as a democratic necessity, not merely a competitiveness-driven policy preference. It grounds the argument for worker participation in AI-driven workplace restructuring. It provides the normative foundation for treating the intelligence commons as a shared resource requiring collective governance rather than a raw material available for private extraction.
Power-sharing liberalism also supplies the standard by which Allen evaluates particular AI governance proposals. The test is whether a proposed institution distributes power broadly or concentrates it, whether it includes affected communities in decisions or relegates them to consumer status, whether it creates conditions for genuine participation or merely provides formal access to systems governed by others.
Allen developed power-sharing liberalism across a series of publications culminating in Justice by Means of Democracy (2023). The framework applies Aristotelian, republican, and capability-theoretic resources to the contemporary crisis of democratic governance, with direct implications for the technology domain she engages through her GETTING-Plurality research network. Five commitments anchor the framework.
Positive and negative liberties together. Democratic freedom requires both protection from interference and the substantive capacity to participate in collective life.
Generative governance. Asking what technology is for, not merely how to prevent its harms, is the precondition for governance that serves democratic flourishing.
Non-domination. Freedom means the absence of arbitrary power over you, including the power of private actors who control essential infrastructure.
Material conditions of participation. Citizens without material independence cannot exercise genuine democratic agency; economic empowerment is constitutive of democratic governance, not separate from it.
Beyond state-market binary. Democratic institutions must include commons-based, participatory, and hybrid forms that neither pure markets nor centralized states can provide.
Libertarian critics argue that treating positive liberties as foundational opens the door to expansive state claims that threaten the negative liberties the framework purports to include. Allen's response is that the distinction has always been unstable—negative liberty requires positive institutions (courts, police, property records) that presuppose collective action—and that the serious question is how to design institutions that protect both dimensions of freedom simultaneously.
The tension between Allen's power-sharing liberalism and the material critique resolves differently depending on which layer of the AI question we examine. At the level of normative foundations—what principles should guide AI governance—Allen's framework is essentially correct (90%). The insistence that we need both negative and positive liberties, and that governance must be generative rather than merely preventive, provides the right conceptual architecture for thinking about AI's role in democratic life. The contrarian's focus on material constraints doesn't invalidate these principles; it specifies the conditions under which they must operate.
At the implementation layer, however, the material critique gains force (70% contrarian). The physical infrastructure of AI does create systematic tendencies toward concentration that participatory frameworks struggle to address. Server farms and training clusters aren't easily democratized; the economics of scale in computation create natural monopolies that resist power-sharing arrangements. Yet Allen's framework still matters here (30%) because it provides the criteria for evaluating second-best solutions: if we can't democratize the infrastructure directly, what proxy mechanisms (regulation, public options, interoperability requirements) best approximate democratic control?
The synthesis emerges at what we might call the institutional design layer. Both views are right that current AI development excludes meaningful democratic participation, but for different reasons—Allen identifies the absence of participatory frameworks while the materialist identifies structural barriers to their implementation. The productive question becomes: given AI's tendency toward material concentration, what institutional forms can create countervailing democratic power? This might include public compute infrastructure, data trusts, algorithmic auditing bodies with genuine enforcement power, or requirements for worker participation in automated management systems. The framework succeeds not by denying material constraints but by providing the normative standards for evaluating institutional responses to them.