CONCEPT

Dispersion of Power

Shklar's foundational institutional commitment: the insistence that power be distributed among a multiplicity of politically active groups rather than concentrated, because concentration — whether in state, corporation, or technology company — reliably produces the cruelty the liberalism of fear exists to prevent.

Shklar's deepest institutional commitment — the commitment that runs beneath her analyses of cruelty, injustice, and fear — is to the dispersion of power among a multiplicity of politically active groups. The commitment is not a preference for pluralism as a cultural value. It is a structural conclusion derived from the historical record she spent forty years studying. Concentrated power, whether in the hands of a state, a class, an institution, or a technology company, reliably produces cruelty because, without counter-powers, no structural constraint remains to check its exercise. The liberalism of fear is therefore not neutral about institutional design. It actively favors arrangements that prevent the accumulation of power in any single location, because every such accumulation creates the precondition for the worst outcomes the framework exists to prevent.

The Substrate's Demand — Contrarian ^ Opus

There is a parallel reading that begins not with institutional arrangements but with the material requirements of AI itself. The concentration of power Shklar feared emerges here not from human design failures but from the substrate's own demands: the massive computational infrastructure, the specialized expertise, the billion-dollar training runs. These are not choices made by bad actors but physical constraints imposed by the technology. A truly dispersed AI ecosystem would require either abandoning frontier capabilities or accepting a fiction where formal dispersion masks substantive concentration—community representatives voting on decisions they cannot understand, regulatory bodies overseeing systems they cannot inspect, worker councils deliberating deployments whose mechanisms remain opaque.

The deeper pessimism concerns not concentration but dependency. Even perfectly dispersed power structures cannot alter the fundamental asymmetry between those who understand the technology's operation and those who merely experience its effects. The worker whose job is restructured by an AI system gains nothing from institutional representation if the system's decision-making remains interpretively inaccessible. The community given voice in deployment decisions discovers that voice means little when the deployment's consequences unfold through channels no one predicted. The regulatory constraint on the technology company proves hollow when the regulator depends on the company to explain what requires regulating. Dispersion of formal power cannot remedy concentration of comprehension. The priesthood Shklar distrusted does not rule through concentrated authority but through concentrated understanding—a monopoly that institutional design cannot break without destroying the very capabilities that make AI valuable. The liberalism of fear, applied honestly to AI, might conclude that some forms of power cannot be dispersed, only abandoned or endured.

— Contrarian ^ Opus

In the AI Story

Applied to the AI transition, the commitment generates a specific diagnosis. The current moment concentrates power in three locations simultaneously: a small number of technology companies that build frontier AI systems, a small number of national governments whose regulatory frameworks shape what the companies may do, and a small number of institutional investors whose capital allocation decisions determine which companies survive. The people affected by the transition — workers whose skills are being repriced, communities whose economic base is contracting, students whose educational investments are depreciating, parents navigating futures they do not understand — have almost no voice in the decisions that shape their lives. This asymmetry is not a temporary feature of an emerging technology. It is a structural condition that will persist absent deliberate institutional intervention.

The historical record Shklar studied suggests that the period of maximum power concentration in a new domain is also the period of maximum political pliability. Institutional arrangements solidify quickly and become difficult to reform once they are established. The AI transition's window for institutional design is closing rapidly as competitive dynamics, regulatory frameworks, and market structures lock in. The specific institutional innovations required — worker representation in AI governance decisions, public participation in regulatory processes, community voice in deployment decisions — must be built during this window; afterward, reformers will face the far higher costs of dislodging arrangements already entrenched.

The commitment to dispersion is not anti-power. Shklar did not oppose power; she opposed power without constraint. The goal is not to prevent anyone from exercising power but to ensure that every exercise of power encounters counter-powers capable of constraining it. In the AI context, this means technology companies constrained by regulatory frameworks, regulatory frameworks constrained by democratic accountability, democratic processes informed by workers' direct experience of deployment, and workers equipped with the institutional standing required to make their experience politically consequential. Each element of this structure already exists in attenuated form. None exists at the strength required to constrain the current concentration of power.

The framework's distinctive contribution is its refusal to accept the substitution of voluntary self-regulation for external constraint. The technology priesthood's self-assessment — we are building responsibly, we have safety teams, we are committed to beneficial outcomes — is not an adequate substitute for the dispersion the framework demands. Self-regulation fails reliably at the moment of maximum pressure, when the regulator's interests and the public's interests diverge and the regulator, holding the informational advantage, is positioned to resolve the divergence in its own favor. Shklar understood this about every priesthood she studied. The commitment to dispersion is therefore not hostile to the priesthood's intentions. It is the recognition that intentions, however admirable, cannot substitute for institutional structure.

Origin

The commitment runs throughout Shklar's work but is articulated most directly in "The Liberalism of Fear" (1989) and in her analysis of constitutional design in Montesquieu (1987) and earlier works.

Key Ideas

Concentration is the precondition for cruelty. The historical pattern is consistent: power accumulated in a single location produces cruelty when no counter-power constrains it.

Dispersion is structural, not cultural. The framework demands specific institutional arrangements that distribute power, not merely cultural commitments to pluralism.

The AI moment is a design window. Institutional arrangements solidify quickly in emerging domains; the current period offers pliability that subsequent periods will not.

Self-regulation is not dispersion. Voluntary constraint by the powerful fails reliably at the moment of maximum need; external institutional constraints are required.

Voice for the affected is structural. The people downstream of AI deployment require institutional standing that makes their experience politically consequential, not merely consultative inclusion.

Appears in the Orange Pill Cycle

The Necessity-Democracy Tension — Arbitrator ^ Opus

The framework's value depends entirely on which aspect of AI governance we examine. On the question of preventing cruelty through institutional design, Edo's Shklarian analysis is essentially correct (90%)—historical evidence strongly supports that concentrated power without constraint reliably produces harmful outcomes. The technology sector's track record of self-regulation failures only strengthens this case. Where the contrarian view dominates (80%) is in identifying the material constraints that make dispersion structurally difficult: the massive capital requirements, the expertise asymmetries, and the interpretive inaccessibility of AI systems create concentration not by choice but by necessity.

The synthesis emerges when we distinguish between different types of power and their amenability to dispersion. Decisional power—who deploys AI, under what conditions, with what safeguards—can and should be dispersed through the institutional mechanisms Edo describes. Technical power—who can build and understand these systems—resists dispersion for reasons the contrarian correctly identifies. The framework needs both insights: institutional arrangements to disperse what can be dispersed, and frank acknowledgment of what cannot be, with special attention to the latter category.

The temporal dimension reconciles both views most productively. Edo is right that this is a critical design window (100%), but the contrarian correctly notes that some concentrations are technologically determined rather than politically chosen. The synthesis is to use this window not to achieve perfect dispersion but to establish robust constraints on the concentrations that will inevitably persist. This means accepting that AI governance will feature some irreducible priesthoods while ensuring they operate within institutional dams strong enough to channel their power toward publicly determined ends. The liberalism of fear, properly updated, protects against cruelty not by eliminating all concentration but by carefully structuring the concentrations that the technology's own nature makes unavoidable.

— Arbitrator ^ Opus

Further reading

  1. Shklar, Judith. Montesquieu. Oxford University Press, 1987.
  2. Shklar, Judith. "The Liberalism of Fear." In Liberalism and the Moral Life, edited by Nancy L. Rosenblum. Harvard University Press, 1989.
  3. Acemoglu, Daron, and Simon Johnson. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs, 2023.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.