By Edo Segal
The framework I never questioned was the one I was standing inside.
I have spent my career building companies, shipping products, arguing about what to build and how to build it faster. I questioned everything within the frame — the architecture, the team structure, the product roadmap, the competitive positioning. I was relentless about what happened inside the walls. I never once questioned the walls themselves.
The corporate form. The quarterly cadence. The boardroom where five people decide what happens to a productivity gain that a hundred people generated. The assumption that the market is the mechanism through which capability reaches the people who need it. The belief that if I just build well enough, care enough, maintain my dams with enough diligence, the system will bend toward something good.
Unger looks at all of that and asks a question so simple it's destabilizing: Why do you treat those arrangements as permanent? Who told you the corporation was the only vessel for building? Who decided that the allocation of AI's gains would be settled in rooms that exclude the people most affected by the decision? These are not laws of physics. They are constructions. Made by people. Revisable by people. And the failure to see them as revisable — the naturalization that makes the contingent feel like bedrock — is, in Unger's terms, the deepest form of political captivity available to a free society.
This matters for AI in a way that is immediate and concrete. The institutional arrangements crystallizing around this technology right now — the platform model, the individual augmented producer, the prompt-and-judgment workflow — are being treated as inevitable. As though the technology itself demanded them. It does not. The technology enables them. Other arrangements are possible. But possible is not the same as constructed, and the window for construction narrows every month that the first arrangements harden into the appearance of permanence.
I wrote in *The Orange Pill* about building dams. Unger forced me to see that personal dams and organizational dams are necessary but insufficient. The dams that matter most are institutional and democratic — frameworks that give communities genuine authority over the arrangements through which AI enters their lives. That is a larger project than anything I have attempted. It begins with seeing the walls.
This book is that seeing. It will not make the walls disappear. But it might help you stop mistaking them for the horizon.
— Edo Segal
Roberto Mangabeira Unger (1947–present) is a Brazilian philosopher, legal theorist, and political thinker who has served on the faculty of Harvard Law School since 1976, where he became one of the youngest professors to receive tenure in the institution's history. Born in Rio de Janeiro, he is a founding figure of the Critical Legal Studies movement and has served twice as Brazil's Minister of Strategic Affairs. His major works include *Knowledge and Politics* (1975), the three-volume *Politics* (1987), *The Self Awakened* (2007), *The Religion of the Future* (2014), and *The Knowledge Economy* (2019). Unger's central philosophical contribution is the concept of "false necessity" — the argument that existing social and institutional arrangements are contingent constructions rather than inevitable expressions of deeper structural requirements, and that democratic communities possess the capacity to imagine and build alternatives. His work on "empowered democracy" and institutional experimentalism has influenced legal theory, political philosophy, and development economics across multiple continents, making him one of the most ambitious and systematic social theorists of the late twentieth and early twenty-first centuries.
Every institutional arrangement that endures long enough eventually performs a conjuring trick upon the minds of those who inhabit it: it makes itself invisible. The market economy, which was once a revolutionary insurgency against feudal obligation and guild monopoly, now presents itself to its inhabitants as the natural and inevitable way of organizing production and exchange — as though no alternative had ever existed, as though no alternative could ever be conceived, as though the specific configuration of property rights, contract law, wage labor, and corporate governance that defines contemporary capitalism were etched into the structure of reality rather than hammered out through centuries of conflict, compromise, and contingency. The nation-state, which was a radical institutional invention of the seventeenth and eighteenth centuries — an arrangement that would have been literally unintelligible to the political communities it displaced — now functions as the unquestioned container of political life, the unit within which all serious political imagination operates, the thing that must be taken as given before any other political question can be asked. The nuclear family, the research university, the professional credential, the corporation itself — each was once a departure from what preceded it, each was once contested, and each has been naturalized to the point where the effort to think beyond it feels not merely impractical but somehow absurd, a violation of the way things are rather than a challenge to the way things happen to have been arranged.
Roberto Mangabeira Unger has spent half a century identifying and dismantling this conjuring trick. His concept of false necessity — developed across the monumental three volumes of *Politics* and refined through decades of subsequent work in *The Self Awakened*, *The Knowledge Economy*, and *The World and Us* — names the most pervasive and least visible form of intellectual captivity: the mistaken belief that the existing institutional order is the only possible one, that the social furniture of the world was always there and must always remain, that to question the fundamental structure of economic, political, and educational life is to engage in fantasy rather than in the most rigorous and consequential form of thought available to human beings. False necessity is not a philosophical error that afflicts the naive. It afflicts the sophisticated most severely, because the sophisticated have the most elaborate justifications for why things must be as they are — the most refined theories of path dependence, institutional complementarity, and structural constraint that together constitute what Unger calls "the dictatorship of no alternatives." The dictatorship does not need to suppress dissent. It merely needs to make alternatives unimaginable, and the imagination, once foreclosed, enforces its own closure.
The artificial intelligence transition — the technological upheaval that Edo Segal's *The Orange Pill* documents from within the tremor zone — is being subjected to this conjuring trick at a speed that would have astonished even Unger when he first formulated the anti-necessitarian critique. The naturalization that took the market economy centuries to accomplish, that took the nation-state generations, that took the research university decades, is being accomplished for AI-mediated production in months. Within weeks of the Claude Code breakthrough that Segal describes — the moment when the machine learned to meet human beings on the terms of natural language rather than requiring them to translate their intentions into code — a specific set of institutional arrangements had crystallized and been presented to the world as the inevitable, the natural, the only possible response to the new technological capability.
These arrangements are specific. They are contingent. They are the product of particular choices made by particular actors operating within particular market incentives. And they are being naturalized at a pace that forecloses the institutional imagination needed to conceive of alternatives before the window of possibility closes.
Consider the arrangements that have already been presented as inevitable. First: the individual AI-augmented producer as the basic unit of the new economy. Segal's twenty-fold productivity multiplier — the demonstration that a single engineer equipped with Claude Code can produce what a team of twenty produced before — has been absorbed into the discourse not as one possible way of organizing AI-augmented work but as the way, the natural unit, the arrangement that the technology itself demands. But this arrangement is no more natural than the factory system that replaced the artisan workshop, or the artisan workshop that preceded it. The technology enables individual augmented production. It does not require it. Alternative arrangements — cooperative production in which AI tools are shared among teams organized for mutual development rather than individual output maximization, democratic workplaces in which the gains from AI augmentation are distributed through collectively bargained agreements, public AI utilities that provide augmentation as an infrastructure service rather than a corporate subscription — are not merely conceivable. They are constructible. They are simply not being constructed, because the first arrangement that happened to emerge from the specific market conditions of 2025 has already begun to present itself as the only one.
Second: the compression of expertise into what might be called the prompt-and-judgment workflow — the arrangement in which the human contribution is reduced to specifying what should be built and evaluating whether it has been built well, while the machine handles everything in between. This arrangement, too, is contingent. It reflects a particular theory of what human expertise is — a theory that decomposes expertise into a judgment layer and an execution layer and then assigns the execution layer to the machine. But expertise is not so cleanly decomposable. The senior engineer whom Segal describes, the one who realized that the implementation work consuming eighty percent of his career had been the substrate in which his architectural intuition was deposited — that engineer's experience testifies to the falsity of the decomposition. The arrangement that assigns execution to the machine and judgment to the human is one possible arrangement. An arrangement that preserves certain forms of friction as developmental rather than merely obstructive — that deliberately maintains human engagement with implementation as a training ground for the judgment that AI augmentation makes more valuable — is another. The first arrangement maximizes short-term productivity. The second invests in the long-term development of the human capacities on which productivity ultimately depends. Both are possible. Only one is being naturalized.
Third: the platformization of AI capability — the arrangement in which a handful of corporations provide AI augmentation as a service, set the terms of access, determine the pricing, shape the interface, and capture the economic surplus. This arrangement mirrors the platformization of the internet economy that preceded it, and it is being naturalized with the same speed and the same rhetoric: the platforms are simply the infrastructure, the natural and necessary channel through which the technology reaches its users, the arrangement that efficiency demands. But the platformization of the internet was not inevitable. It was the product of specific regulatory choices (or the absence of regulation), specific business model innovations, specific network effects exploited by specific actors in specific market conditions. Alternative arrangements existed and were foreclosed — public internet infrastructure, cooperative platforms, decentralized protocols that distributed rather than concentrated the gains. The platformization of AI is following the same trajectory, and the naturalization is proceeding even faster because the pattern has been established and the actors are the same.
Unger would recognize each of these naturalizations as an instance of what he calls institutional fetishism — the treatment of a particular institutional arrangement as though it were the unique expression of a deeper functional requirement, rather than one contingent realization among many possible ones. The functional requirement is real: society does need ways of organizing AI-augmented production, distributing the gains, maintaining human capability, governing the deployment. But the specific arrangements that currently fill these functional slots are not the only ones that could fill them. They are the ones that happened to arrive first, the ones that the existing incentive structures of the technology industry produced, the ones that the existing distribution of power and capital made most likely. To treat them as necessary — to treat the individual augmented producer, the prompt-and-judgment workflow, and the platform model as the inherent and inevitable structure of the AI economy — is to commit the fundamental error that Unger's entire philosophical project was constructed to identify and resist.
The naturalization proceeds through a specific discursive mechanism that Segal's account illustrates with unwitting precision. The triumphalists celebrate the new arrangements as liberation. The elegists mourn them as loss. The silent middle holds both feelings without resolution. But all three groups — the triumphalists, the elegists, and the silent middle — share a common assumption: the arrangements themselves are given. The debate is about how to feel about them, not about whether they are the only possible arrangements or whether alternatives might be constructed. The emotional register varies; the institutional imagination is uniformly absent.
Segal himself, for all the honesty and depth of his account, operates largely within this assumption. When he describes the boardroom conversation about whether to convert the twenty-fold productivity gain into headcount reduction or team expansion, the conversation occurs within the given institutional framework of the private corporation making unilateral decisions about the deployment of AI-generated surplus. The question is what the corporation should do with the gain. The question that institutional imagination would ask — whether the corporation is the right institutional form for making this decision at all, whether the workers who generated the gain through their augmented labor should have democratic participation in determining its distribution, whether the community affected by the decision should have institutional means of shaping it — does not arise, not because Segal lacks the intelligence or the moral seriousness to ask it, but because the institutional framework within which the question is posed has already foreclosed it.
This is not a criticism of Segal. It is a demonstration of how naturalization works. The framework becomes invisible precisely to those who inhabit it most fully — to the builders, the leaders, the people of good faith who are making the most consequential decisions within arrangements they experience as simply "the way things are." Unger's contribution is to make the framework visible again, to denaturalize what the speed of the AI transition has already made to seem permanent, to insist that the specific institutional form of the AI economy is a political choice that has not yet been democratically made and that the window for making it is closing with every month that the current arrangements are naturalized further.
The denaturalization is not, in Unger's framework, merely a philosophical exercise. It is the precondition for every other form of constructive action. The builder cannot build beyond the given if the given has been mistaken for the necessary. The beaver cannot redirect the river if the river's current course has been mistaken for the only course the terrain allows. The institutional imagination that the AI transition most urgently requires — the capacity to envision organizational forms, governance structures, educational systems, and social contracts that do not yet exist — cannot be exercised until the existing arrangements have been revealed as what they are: contingent, revisable, replaceable, and subject to democratic reconstruction.
Unger has asked, with the rhetorical force that characterizes his most memorable interventions: "How could anything be more revolutionary than artificial intelligence?" The question is typically invoked in the context of his critique of the "secular stagnation" thesis — the claim that modern innovation is less transformative than its predecessors. But the question carries a deeper implication. If the technology is genuinely revolutionary, then the institutional response to it must be correspondingly revolutionary. A revolutionary technology met with naturalized institutional arrangements — the old corporate forms, the old governance mechanisms, the old distribution of authority between capital and labor, the old relationship between democratic communities and technological power — is a revolution betrayed. The revolution occurs in the technology. The betrayal occurs in the institutions. And the betrayal is accomplished, above all, through the naturalization that makes the existing institutional response appear to be the only one available.
The chapters that follow will trace the alternative — the institutional imagination that Unger's anti-necessitarian framework demands and that the AI transition makes both possible and urgent. But the starting point must be the recognition that the arrangements currently crystallizing around artificial intelligence are not destiny. They are politics masquerading as technology. And the first act of intellectual and political courage in this moment is to see through the masquerade.
---
Beneath the frozen surface of institutional life — the org charts and credentialing systems and labor markets and governance frameworks that present themselves to their inhabitants as solid, permanent, and load-bearing — social arrangements are plastic. They can be remade. They have been remade, repeatedly, throughout human history, often in response to precisely the kind of technological upheaval that the present moment represents. The appearance of solidity is an illusion produced by the naturalization described in the preceding chapter: the conjuring trick through which contingent arrangements come to seem necessary, through which the specific historical products of specific struggles and compromises are experienced as features of the terrain itself. But the plasticity is always there, waiting to be discovered or, more precisely, waiting to be exercised by human beings who possess the capacity to see through the illusion and act on what they see.
Roberto Mangabeira Unger's philosophical anthropology — developed most fully in *The Self Awakened* and threaded through all his subsequent work — rests on the claim that this plasticity is not merely a feature of social arrangements but a defining characteristic of the beings who inhabit them. Human beings are, in Unger's account, context-transcending creatures: organisms whose deepest and most distinctive capacity is the ability to resist, negate, and overcome the formative contexts that shape their thought and action. Every human being is embedded in institutional and imaginative frameworks that constrain what can be seen, what can be thought, what can be done. And every human being possesses what Unger calls negative capability — the capacity to push back against those frameworks, to see them as contingent rather than necessary, to imagine and construct alternatives. This capacity is not the privilege of the philosopher or the revolutionary. It is the birthright of the species. The tragedy of most social life is not that this capacity is absent but that it is suppressed — suppressed by institutions that present themselves as natural, by ideologies that present existing arrangements as the only possible ones, by the sheer gravitational weight of habit and familiarity that makes the given world feel like the necessary world.
The AI transition has demonstrated this plasticity with a clarity and a speed that vindicate Unger's anthropology far beyond what his original formulations could have anticipated. Consider what has dissolved in months. The specialized team — that fundamental unit of software production in which distinct roles (frontend, backend, design, product management, quality assurance) were assigned to distinct people operating within distinct competencies — has been revealed as a contingent organizational form rather than a structural necessity. When the translation cost between domains dropped to the cost of a natural-language conversation, the boundaries between roles dissolved with it. The backend engineer who never wrote frontend code began building complete user-facing features. The designer who never touched backend systems began implementing end-to-end. These boundaries had seemed structural — as solid as the walls between departments, as natural as the division between those who design and those who build. They were artifacts of translation cost, nothing more. When the cost evaporated, so did the boundaries.
The years-long training pipeline — the arrangement through which a human being acquired the right to participate in software production, passing through undergraduate education, graduate specialization, junior positions, senior mentorship, and the slow accumulation of credentialed expertise — has been compressed in ways that reveal its contingency. Segal describes an engineer with eight years of backend experience who built a complete user-facing feature in two days, not because she had secretly acquired frontend skills but because the translation barrier that had confined her to a single domain was an artificial constraint imposed by the cost of moving between domains rather than by the nature of the domains themselves. The credentialing system that she had passed through — the years of specialized training, the accumulation of domain-specific knowledge, the institutional gatekeeping that determined who was permitted to write which kind of code — had been designed for a world in which translation was expensive. In a world in which translation is nearly free, the credentialing system is not merely inefficient. It is an artifact of a formative context that no longer exists.
The institutional gatekeeping of expertise — the arrangement through which access to productive capability was mediated by educational credentials, professional certifications, institutional affiliations, and the accumulated social capital of belonging to the right networks — has been partially breached. The developer in Lagos whom Segal describes, the one who has the ideas and the intelligence but not the team, the capital, or the institutional infrastructure, now has access to productive capability that was previously confined to those who had passed through institutional gatekeepers. The breach is partial — connectivity, hardware, language, and capital constraints remain formidable — but its direction is unmistakable. The gates are not gone, but they are lower, and the lowering reveals that much of what the gates protected was not quality or safety or standards but the economic rents of those who controlled access.
Each of these dissolutions confirms Unger's central anthropological claim: the capacity of human beings to transcend and reconstruct the contexts within which they operate. The engineer who crosses from backend to frontend is exercising negative capability — seeing a boundary that appeared structural as contingent and acting on what she sees. The non-technical founder who builds a prototype over a weekend is exercising negative capability — refusing to accept that the institutional gatekeeping of technical expertise is a permanent feature of the landscape. The team that dissolves its specialization boundaries and reorganizes around what Segal calls "vector pods" — small groups whose function is not to build but to decide what should be built — is exercising negative capability at the organizational level, treating the existing org chart as raw material for reconstruction rather than as a fixed constraint.
But Unger's framework demands a question that the exhilaration of the moment tends to suppress. The plasticity is real. The dissolution of previously rigid arrangements is genuine. The negative capability of the individuals and teams who are crossing boundaries and constructing new forms of work is authentic. The question is: Who exercises the plasticity? And in whose interest?
The plasticity demonstrated by the AI transition is currently being exercised almost exclusively by two categories of actor: technology corporations, which are redesigning the institutional arrangements of production in ways that maximize their competitive advantage, and individual builders, who are redesigning their own working practices in ways that maximize their personal capability and output. These are legitimate exercises of plasticity, and Unger's framework does not dismiss them. But they are exercises of plasticity within a formative context that remains unquestioned — the context of the privately owned corporation operating within a market economy governed by the logic of capital accumulation.
The technology corporation that dissolves its specialized teams and reorganizes around AI-augmented pods is exercising plasticity within a framework that takes corporate ownership, shareholder primacy, and unilateral management authority as given. The reorganization may be brilliant. It may serve the employees well. It may produce better products. But it is not an exercise of institutional imagination in Unger's sense, because the framework within which the reorganization occurs — the corporate form itself, the distribution of authority between management and labor, the relationship between the organization and the broader community it affects — has not been questioned. The plasticity operates within the formative context. It does not challenge it.
The individual builder who leverages AI to produce what a team of twenty produced before is exercising plasticity within a framework that takes individual entrepreneurship, market-mediated exchange, and the private appropriation of productivity gains as given. The expansion of individual capability is genuinely democratizing — it lowers the barriers to entry, it expands who gets to build, it creates possibilities that the previous arrangements foreclosed. But it is also genuinely atomizing. The individual producer, augmented by AI, is a powerful figure. She is also a lonely one — disconnected from the solidaristic arrangements (guilds, unions, professional communities, democratic workplaces) that historically provided workers with collective voice in the design of the institutional arrangements that governed their labor.
Unger would observe that the most consequential exercises of plasticity in human history have not been individual or corporate. They have been democratic — collective exercises of institutional imagination in which entire communities reconstructed the arrangements governing their shared life. The labor movement of the nineteenth and twentieth centuries was an exercise of plasticity that constructed entirely new institutional forms: the eight-hour day, the weekend, collective bargaining, workplace safety regulation, social insurance. These arrangements were not incremental improvements within the existing framework. They were transformations of the framework itself — new ways of organizing the relationship between labor and capital that did not exist before democratic communities imagined and fought to construct them.
The AI transition demands the same quality of democratic institutional imagination, and it is not receiving it. The plasticity is being exercised at the individual and corporate levels with extraordinary energy and creativity. At the democratic level — the level at which the formative context of the AI economy would be deliberately designed by the communities it affects — the plasticity is dormant. Governments are reacting, not constructing. The EU AI Act addresses supply-side regulation — what AI companies may and may not build — but does not construct the demand-side institutional arrangements that would empower citizens, workers, and communities to participate in the design of the AI-mediated economy they will inhabit. The American executive orders establish guidelines and principles but do not create the institutional infrastructure through which democratic communities could exercise genuine authority over the deployment of AI in their workplaces, schools, and public services.
The consequence is that the formative context of the AI economy is being set by default rather than by design — set by the actors who happen to be exercising plasticity most vigorously, who happen to be the technology corporations and the individual builders, who happen to be operating within the existing frameworks of corporate capitalism and market exchange. This is not a conspiracy. It is the natural result of what Unger would call "low-energy democracy" — a democratic practice that operates at too slow a pace and too low a level of engagement to shape the institutional arrangements being constructed around a revolutionary technology.
The plasticity is there. It has been demonstrated, beyond any reasonable doubt, by the speed and completeness of the dissolutions already underway. But plasticity without democratic direction is merely disruption — the replacement of one set of naturalized arrangements with another, the reconstruction of the formative context by those who have the power and the position to shape it, without the collective deliberation that would make the reconstruction an act of democratic self-governance rather than an exercise of corporate or individual advantage.
What Unger's framework demands is not less plasticity but more — and, crucially, plasticity exercised at the right level. Not merely the plasticity of the individual builder crossing domain boundaries, though that is valuable. Not merely the plasticity of the corporation reorganizing its teams, though that is often necessary. But the plasticity of democratic communities reconstructing the institutional frameworks within which individuals build and corporations operate — the formative contexts that determine whether the AI transition produces broadly distributed human empowerment or a new and more sophisticated form of domination dressed in the language of inevitability.
The ice looks solid. The plasticity beneath it is real. The question is who will exercise it, and toward what ends, and through what institutional forms — and whether democratic communities will claim the authority to answer these questions before the answers are provided for them by the actors who are already building within the existing framework while the framework itself escapes democratic scrutiny.
---
Three ways of standing before a revolutionary technological force define the landscape of the present moment, and the choice among them — which is to say the refusal to treat any of them as the only option — is the central political and existential decision of the AI age.
The first figure is the one whom *The Orange Pill* calls the Swimmer — the person who plants feet against the current, leans into the drag, and refuses to be carried. The Swimmer's posture is resistance. The philosopher Byung-Chul Han, who gardens in Berlin and does not own a smartphone, who listens to music only in analog and insists that the resistance of pen on paper is necessary for genuine thought, is the Swimmer's most intellectually serious representative. His diagnosis is real: the aesthetic of the smooth, the pathology of auto-exploitation, the erosion of the capacity for depth that follows from the removal of productive friction. Segal takes this diagnosis seriously enough to devote three chapters to it, and the seriousness is warranted. The Swimmer sees something that the other figures miss or refuse to see — the specific and quantifiable human costs of frictionless optimization, the loss of embodied knowledge, the colonization of every pause by productivity, the transformation of the self into a project of relentless optimization that calls itself freedom while functioning as the most sophisticated form of self-imposed servitude.
But the Swimmer, in Unger's framework, commits a specific and consequential error. The Swimmer treats the current institutional arrangement — the smoothness society, the achievement culture, the optimization machine — as the only possible expression of AI capability. The Swimmer looks at the river and sees not a force that could be channeled in multiple directions but a monolithic current whose only effect is erosion. Resistance, on the Swimmer's terms, means standing still — cultivating the garden, insisting on analog, refusing the tool. And standing still, Unger would argue, is its own form of false necessity: the belief that the river's current course is its only possible course, that the only honest response to a force you cannot stop is to refuse to participate in shaping where it goes.
The Swimmer's resistance is noble. It is also, in the most precise sense, conservative — it conserves a way of being that the technological transformation has made available only to those who have the privilege to opt out. Han gardens in Berlin on a professor's salary, with the institutional support of a research university, within a welfare state that provides the basic security without which refusal is not philosophical but suicidal. The Swimmer's position is available to those who can afford it. For the developer in Lagos, for the engineer in Trivandrum, for the twelve-year-old whose mother cannot tell her whether her homework still matters, refusal is not an option. The river is already in their living rooms. What they need is not someone who tells them to stand still but someone who helps them build structures that direct the flow toward purposes they have chosen.
Unger's deepest objection to the Swimmer is not practical but philosophical. The Swimmer has accepted a premise that the anti-necessitarian framework rejects absolutely: the premise that the institutional arrangements through which AI enters human life are fixed, that the only variable is whether you participate in them or refuse. This premise surrenders the most important ground — the ground of institutional design, the question of what arrangements would channel AI capability toward human development rather than human diminishment. The Swimmer has given up on that question, and in giving up on it, has ceded the design of the formative context to those who have not given up — who are, in fact, designing it with great energy and very particular interests in mind.
The second figure is the Believer — the person who treats the river's direction as inherently good and accelerates without questioning the institutional arrangements through which the acceleration occurs. The Believer has read enough Schumpeter to find creative destruction romantic and enough technology journalism to believe that market forces, left to their own devices, will distribute the gains of innovation broadly. The Believer converts the problem of institutional design into the non-problem of acceleration: move faster, build more, ship sooner, and the market will sort out the distribution.
The Believer's error, in Unger's framework, is the mirror image of the Swimmer's. Where the Swimmer treats the current arrangements as inevitable and therefore unacceptable, the Believer treats the current arrangements as inevitable and therefore optimal. Both accept false necessity. The Swimmer accepts it with despair; the Believer accepts it with enthusiasm. Neither exercises the institutional imagination that the moment demands.
Segal captures the Believer's logic with uncomfortable precision in his account of the boardroom conversation about headcount reduction. The arithmetic is clean: if five people can do the work of a hundred, why keep a hundred? The Believer sees this arithmetic as a market signal — the technology has spoken, and the efficient response is to follow the signal. The question of what happens to the ninety-five, the question of whether alternative institutional arrangements might distribute the productivity gain differently, the question of whether the workers whose augmented labor generated the gain should participate in deciding how it is deployed — these questions do not arise within the Believer's framework, because the framework treats the market's allocation as the natural and correct one.
Unger has spent decades arguing that the market's allocation is never natural and never automatically correct. It is the product of a specific institutional framework — property rights, contract law, corporate governance, labor regulation — that was itself the product of political choices and that can be remade through different political choices. The Believer's enthusiasm for acceleration is not wrong in itself — the technology is genuinely revolutionary, and its potential for expanding human capability is real. What is wrong is the assumption that acceleration within the existing institutional framework is the only form of acceleration available, that the framework itself is not a candidate for redesign, that the distribution of gains and losses produced by the current arrangements is the only distribution that AI makes possible.
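Unger's claim that the market's allocation is one institutional outcome among several can be made concrete with a toy calculation. Everything in this sketch is hypothetical except the hundred-to-five headcount arithmetic from the boardroom scene: the wage figure and the three allocation rules are invented placeholders standing in for different institutional designs, not proposals from either author.

```python
# Toy model (illustrative only): the same productivity gain
# distributed under three stylized institutional rules.

HEADCOUNT_BEFORE = 100   # workers needed pre-augmentation (from the boardroom arithmetic)
HEADCOUNT_AFTER = 5      # workers needed post-augmentation
ANNUAL_WAGE = 80_000     # hypothetical average annual wage

# The annual wage bill freed up by the headcount reduction.
savings = (HEADCOUNT_BEFORE - HEADCOUNT_AFTER) * ANNUAL_WAGE  # 7,600,000

def allocate(rule: str) -> dict:
    """Return a stylized split of the savings under a named rule."""
    if rule == "market-default":
        # All gains accrue to capital: the allocation the Believer treats as natural.
        return {"capital": savings, "workers": 0, "public": 0}
    if rule == "gain-sharing":
        # Half to capital, half to the workers whose augmented labor produced the gain.
        return {"capital": savings // 2, "workers": savings // 2, "public": 0}
    if rule == "social-dividend":
        # Roughly a third each to capital, workers, and a public fund.
        third = savings // 3
        return {"capital": third, "workers": third, "public": savings - 2 * third}
    raise ValueError(f"unknown rule: {rule}")

for rule in ("market-default", "gain-sharing", "social-dividend"):
    print(rule, allocate(rule))
```

The point of the sketch is not any particular split but that the split is a parameter: each rule is a different institutional framework applied to an identical technological fact, which is precisely the sense in which the market's allocation is a choice rather than a law of nature.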
The third figure is the one whom Segal calls the Beaver and whom Unger would recognize as the practitioner of the transformative vocation — the person who neither refuses the river nor celebrates its unaltered course but builds in it, constructing structures that redirect the flow toward purposes the unaltered current would not serve. The Beaver studies the river. Observes where the current runs dangerous and where it runs generative. Identifies the points of leverage where small structures can redirect enormous flows. And builds — not once but continuously, maintaining the structures against the constant pressure of a force that does not pause for maintenance.
Unger would embrace the Beaver as a figure of the transformative vocation, but he would also insist on a crucial extension that Segal's account implies but does not fully develop. The dam is not merely a personal practice, not merely an organizational innovation, not merely the individual builder's attentional ecology or the corporation's AI Practice framework. The dam, in Unger's terms, must become an institution — a structure that outlasts the individual builder, that can be maintained and revised by communities, that embeds democratic accountability into its architecture, that is subject to collective deliberation and continuous reconstruction. The personal dam protects the individual. The organizational dam protects the team. But the institutional dam — the dam that protects the community, the society, the democratic polity — requires a form of construction that transcends individual virtue and corporate policy. It requires democratic institutional imagination: the collective exercise of the transformative vocation by communities that refuse to accept the current arrangements as the only possible ones and that insist on their authority to construct alternatives.
The distinction between the Beaver as individual builder and the Beaver as democratic institution-builder is the distinction between adaptation and transformation. Adaptation is the modification of personal and organizational practices within an existing formative context. Transformation is the reconstruction of the formative context itself. Segal's builder's ethic — care, honesty, attention to consequences, the maintenance of dams — is a magnificent ethic of adaptation. It tells the individual builder how to operate wisely and humanely within the AI-mediated economy as it is currently constituted. What it does not do — what it cannot do from within the builder's vantage point — is reconstruct the institutional framework within which all building occurs.
This is not a criticism of the builder's ethic. It is an identification of its necessary complement. The builder who exercises the transformative vocation does not merely build dams within the river. She participates in the collective design of the riverbed itself — the institutional infrastructure that determines the river's course, the distribution of its waters, the communities it nourishes and the communities it floods. This participation is political in the deepest sense: it is the exercise of democratic authority over the arrangements that shape collective life. And it requires institutional forms that do not yet exist — forms of democratic governance adequate to the speed and scale of the AI transformation, forms of collective deliberation that can operate at the pace of technological change rather than at the glacial pace of existing democratic institutions.
Unger's concept of the transformative vocation — the refusal to accept any institutional arrangement as final, combined with the practical commitment to constructing alternatives — synthesizes the authentic insights of all three figures while transcending the limitations of each. From the Swimmer, it takes the recognition that something genuinely valuable is at risk — that depth, embodied knowledge, the capacity for sustained attention, the satisfactions of friction-rich mastery are real goods whose erosion represents a real cost. From the Believer, it takes the recognition that the technological capability is genuine and that its potential for expanding human possibility is not illusory. From the Beaver, it takes the commitment to building rather than merely accepting or resisting. But it adds what none of the three figures possesses alone: the insistence that the institutional framework within which the building occurs is itself a construction, itself contingent, itself a candidate for the most radical and the most democratic form of reconstruction.
The transformative vocation is demanding. It requires the Swimmer's capacity for diagnosis — the ability to see what is being lost. It requires the Believer's capacity for energy — the willingness to engage with the full force of the technological transformation. It requires the Beaver's capacity for construction — the practical commitment to building structures that work. And it requires something beyond all three: the institutional imagination to conceive of arrangements that do not yet exist, combined with the democratic commitment to construct them through collective deliberation rather than corporate fiat or individual initiative.
This is the vocation that the AI age demands. Not the refusal of the Swimmer. Not the acceleration of the Believer. Not even the individual dam-building of the Beaver, though that is where it begins. But the collective, democratic, experimentalist reconstruction of the institutional frameworks within which artificial intelligence enters human life — the refusal to accept that the current arrangements are the only possible ones, combined with the practical commitment to imagining and constructing better ones.
---
Not all change is of the same kind, and the failure to distinguish between the kinds of change available in any given historical moment is one of the most consequential intellectual errors available to political and organizational leaders — an error whose cost is measured not in misunderstanding but in misdirected action, in the application of incremental solutions to structural problems, in the treatment of revolutionary upheaval as though it were merely a faster version of what came before.
Roberto Mangabeira Unger's distinction between context-preserving and context-smashing change addresses this error directly. Context-preserving change modifies practices, policies, and arrangements within an existing institutional framework. It takes the framework as given and works within its boundaries. A new feature added to an existing software product is context-preserving change. A revised hiring policy within an existing corporate structure is context-preserving change. An updated curriculum within an existing educational model is context-preserving change. The framework — the set of assumptions, institutions, and practices that defines what is possible and what is unthinkable — remains intact. The change occurs within it, and the framework's capacity to absorb the change without being transformed is precisely what makes it context-preserving.
Context-smashing change is categorically different. It does not modify practices within an existing framework. It transforms the framework itself — dissolving the boundaries that defined the previous arrangement, rendering obsolete the assumptions on which the previous arrangement rested, creating a situation in which the old framework no longer provides guidance because the conditions it was designed to govern no longer obtain. Context-smashing change is what happened when the printing press dissolved the monastic monopoly on textual production, when the factory system dissolved the artisan workshop, when the automobile dissolved the equestrian infrastructure of cities, when the internet dissolved the geographic constraints on information distribution. In each case, the change was not a faster or better version of what existed within the previous framework. It was the dissolution of the framework itself, and the most consequential response was not the optimization of old practices but the construction of entirely new institutional arrangements adequate to the new conditions.
The artificial intelligence transition is context-smashing. This claim requires substantiation beyond assertion, because the naturalization machine described in Chapter 1 is already working to present the AI transition as context-preserving — as a faster, cheaper, more efficient version of existing software development, existing knowledge work, existing creative production, existing organizational design. The discourse is full of context-preserving language: "productivity gains," "efficiency improvements," "workflow optimization," "tool adoption." Each phrase implies that the framework remains intact and that AI merely makes the work within it faster. This implication is false, and its falsity can be demonstrated by examining what has actually dissolved.
The specialized role has dissolved. Not partially, not at the margins, but in its fundamental logic. When the cost of crossing domain boundaries drops to the cost of a natural-language conversation, the specialization that was built on translation cost loses its structural justification. The distinction between "frontend developer" and "backend developer" was not a distinction between kinds of intelligence or kinds of capability. It was a distinction between kinds of translation — between the human being who had learned to translate design intentions into browser-executable code and the human being who had learned to translate system logic into server-executable code. The translation was expensive, the learning was time-consuming, and the specialization was therefore economically rational. When AI handles the translation, the specialization becomes an artifact of a cost structure that no longer exists. This is not an optimization within the framework of specialized roles. It is the dissolution of the framework of specialized roles — a context-smashing change that demands new institutional arrangements rather than the refinement of old ones.
The team as the unit of production has dissolved — or, more precisely, has been revealed as a contingent organizational form rather than a structural necessity. In the formative context that preceded AI augmentation, the team existed because no individual could hold the full range of capabilities required to build a complex product. The team was a mechanism for assembling complementary specializations — a social technology for compensating for individual limitations through collective organization. When the individual's effective capability expands by an order of magnitude, the rationale for the team does not merely weaken. It transforms. The team, if it survives, must survive for different reasons than the ones that justified its existence in the previous formative context — for reasons of democratic deliberation, mutual development, the cultivation of judgment that emerges from the friction of diverse perspectives, rather than for the reason that no individual could do the work alone. Whether the team survives this transformation, and in what form, and governed by what principles, is an open question that the existing institutional framework cannot answer, because the framework was designed for teams organized around complementary specializations — a logic that AI has rendered obsolete.
The credentialing pipeline has dissolved, in the specific sense that the relationship between credentials and capability has been severed for a significant class of work. The four-year computer science degree, the graduate specialization, the years of junior and mid-level experience — this pipeline was designed to produce human beings with the specific translation capabilities that specialized software development required. When the translation is handled by the machine, the pipeline is not merely inefficient. It is training people for a formative context that no longer exists — producing graduates equipped for a world of specialized translation labor who will enter a world of augmented creative direction. This is not a curriculum problem solvable by adding an "AI literacy" module. It is an institutional crisis that requires the reconstruction of the educational framework itself — a reconception of what education is for, how it is organized, what it produces, and how it relates to the economic and social life it is supposed to prepare people for.
The timeline of production has dissolved. When a product that would have required six months of team labor can be built in thirty days — as Segal demonstrates with the Napster Station sprint — the business planning frameworks, the investment cycles, the competitive dynamics, the organizational rhythms that were built around six-month production timelines become incoherent. Quarterly planning assumes a relationship between time and output that no longer holds. Investment rounds assume a relationship between capital and capability that has been transformed. Competitive moats assume a relationship between expertise and defensibility that has been breached. Each of these assumptions was a feature of the formative context. Each has been smashed by the transformation of the underlying cost structure.
The dissolution is comprehensive enough that the language of optimization — "productivity gains," "efficiency improvements" — actively obscures what is happening. The language implies continuity. What is occurring is discontinuity. The language implies a faster version of the old world. What is emerging is a different world, with different structural features, requiring different institutional arrangements, demanding different human capabilities, and offering both different opportunities and different dangers than the world it is replacing.
Unger's framework provides the vocabulary for what the optimization language conceals. A context-smashing transformation requires not better policies within the old framework but new frameworks altogether — new ways of organizing production, new ways of distributing gains, new ways of preparing human beings for the work that matters, new ways of governing the relationship between technological power and democratic authority. The construction of these new frameworks is the exercise of institutional imagination, and it is the most consequential form of building available in the present moment.
The danger of treating context-smashing change as context-preserving change is not merely intellectual. It is practical, and its practical cost is measured in the distance between the response the situation requires and the response it actually receives. When leaders treat the AI transition as an optimization problem — how do we make our existing teams faster, how do we integrate AI into our existing workflows, how do we update our existing curricula — they are applying context-preserving solutions to a context-smashing situation. The solutions may be competent. They may even produce short-term improvements. But they will fail to address the structural transformation underway, and the failure will become visible only when the old framework collapses under the weight of conditions it was never designed to bear.
The context-smashing nature of the AI transition creates both the greatest danger and the greatest opportunity. The danger is that new arrangements will crystallize before democratic communities have had the opportunity to participate in their design — that the formative context of the AI economy will be set by the first actors to exercise plasticity, who happen to be the technology corporations and individual builders operating within the existing framework of corporate capitalism, producing institutional arrangements that reflect their particular interests and perspectives rather than the interests and perspectives of the broader communities affected. This is not speculation. It is already happening. The vector pod, the prompt-and-judgment workflow, the individual augmented producer — each is an institutional innovation constructed by the people who happen to be at the frontier, without democratic deliberation, without collective input from the workers and communities these arrangements will govern.
The opportunity is that context-smashing moments are the moments of greatest institutional openness — the rare windows in which the naturalization of the old framework has been disrupted and the naturalization of the new framework has not yet been accomplished. In these windows, the institutional imagination can operate with a freedom that is unavailable during periods of stability. Alternatives that seemed unthinkable under the old framework become thinkable because the old framework's authority has been shaken. New arrangements that would have been dismissed as impractical become practical because the "practical" arrangements of the previous era have been revealed as artifacts of conditions that no longer obtain.
These windows do not stay open. The pressure to naturalize — to treat the first institutional response as the permanent one, to convert contingency into necessity, to close down the space of alternatives in favor of the arrangement that happened to arrive first — is relentless. Every month that passes without democratic institutional imagination being exercised is a month in which the current arrangements become more deeply embedded, more widely assumed to be the only arrangements available, more resistant to the reconstruction that Unger's framework demands.
The pattern is visible in every previous technological revolution. The factory system crystallized before the labor movement could construct alternatives, and the human cost of that premature crystallization was borne by a generation of workers — the generation Segal compares to the Luddites — who suffered the transition without the institutional protections that democratic imagination could have provided. The internet economy crystallized before democratic communities could construct alternatives to platform monopoly, and the cost of that premature crystallization is still being paid in the form of surveillance capitalism, attention exploitation, and the concentration of informational power in a handful of corporations.
The AI transition is moving faster than either of these predecessors. The window for democratic institutional imagination is correspondingly narrower. The need for it is correspondingly more urgent. And the cost of failing to exercise it — of allowing the formative context of the AI economy to be set by default rather than by design, by corporate discretion rather than democratic deliberation — will be correspondingly greater.
Unger's call is not for patience. It is for speed in the right dimension — not the speed of technological deployment, which is already proceeding at full velocity, but the speed of institutional imagination, which is barely proceeding at all. The construction of new frameworks adequate to the context-smashing transformation underway is the most consequential building project of the present moment. It requires the same creative energy, the same willingness to experiment, the same tolerance for failure and revision, that the technology builders bring to their work. But it must be directed not at the technology itself but at the institutional arrangements through which the technology enters human life — the arrangements that will determine, more than any algorithm or model architecture, whether the AI transition produces the broadly distributed human empowerment that the technology makes possible or the sophisticated new form of domination that the technology also makes possible, and that false necessity, if left unopposed, will present as the only outcome available.
The technology industry has produced, in the span of a single decade, computational systems capable of generating human-quality prose, composing music, writing functional software, diagnosing disease from medical imagery, and conducting scientific research at a pace that would have seemed hallucinatory to the engineers who built the first neural networks in the 1980s. It has not produced a single new institutional form adequate to governing the deployment of these systems in the interest of the communities they affect. The asymmetry between technological imagination and institutional imagination is the defining failure of the present moment — not because the technology is moving too fast, which is the complaint of the Swimmer, nor because the institutions are unnecessary, which is the assumption of the Believer, but because the human capacity for institutional invention, which is every bit as real and every bit as consequential as the capacity for technological invention, is being exercised at a fraction of the intensity and with a fraction of the resources that the situation demands.
Roberto Mangabeira Unger has argued throughout his career that institutional imagination — the capacity to envision and construct organizational forms, governance structures, and social contracts that do not yet exist — is the scarcest and most consequential form of human creativity. It is scarcer than scientific insight, because scientific insight operates within established methodological frameworks while institutional imagination must construct the frameworks themselves. It is scarcer than artistic vision, because artistic vision can be realized by an individual or a small group while institutional imagination must mobilize and coordinate the energies of entire communities. It is scarcer than technological innovation, because technological innovation operates within market incentives that reward speed and scale while institutional imagination must operate against those incentives, constructing arrangements that the market, left to its own devices, would never produce — arrangements that protect the slow, the vulnerable, the developmental, the democratic against the relentless pressure of optimization.
The AI transition has made the scarcity of institutional imagination visible in a way that previous technological transitions did not, precisely because the speed of the AI transition has collapsed the time lag between technological deployment and institutional consequence. When the factory system was deployed in the early nineteenth century, the institutional consequences — the dissolution of artisan communities, the creation of an industrial proletariat, the transformation of the relationship between labor and capital — took decades to become fully visible, and the institutional responses — labor law, collective bargaining, workplace safety regulation, social insurance — took further decades to be constructed. The time lag, while painful for the generation that bore the cost of the transition without the protection of the institutional response, at least provided a period in which the consequences could be observed, analyzed, and debated before the institutional imagination was called upon to act.
The AI transition provides no such lag. The consequences are appearing in real time, measured in months rather than decades. The Berkeley study that Segal cites — documenting work intensification, task seepage, the colonization of pauses, the erosion of boundaries between domains — was published within a year of the technological breakthrough it analyzed. The Software Death Cross — the repricing of an entire industry's valuation model — occurred within weeks. The dissolution of specialized teams, the compression of training pipelines, the breach of credentialing barriers — all observable within months. The institutional imagination is being called upon to respond to consequences that are still in the process of manifesting, to construct arrangements adequate to conditions that are still in the process of forming, to build for a world that is still in the process of arriving.
And the institutional imagination is not responding. Not because it cannot. But because it is not being cultivated, resourced, or exercised with anything approaching the intensity that the situation demands.
Consider the current institutional landscape as Unger's framework would analyze it. The EU AI Act — the most comprehensive regulatory framework currently in operation — addresses the supply side of the AI transition: what AI companies may and may not build, what risk assessments they must conduct, what disclosures they must make, what categories of AI application are prohibited or restricted. These are genuine and necessary contributions. They are also, in Unger's terms, context-preserving responses to a context-smashing transformation. They regulate the production of AI within the existing framework of the European regulatory state, applying the familiar tools of risk classification, compliance obligation, and enforcement mechanism to a technology whose most consequential effects are not the risks the regulation addresses but the institutional transformations it catalyzes — transformations in how work is organized, how expertise is valued, how productive capability is distributed, how the gains from augmented production are allocated, how democratic communities relate to the technological forces reshaping their collective life. The regulation is competent. It is also radically insufficient, because it operates within a framework designed for a world in which the primary question was what technologies should be permitted, when the primary question is now what institutional arrangements should govern the deployment of technologies that have already been permitted and whose effects are already reshaping the social order.
The American response — executive orders establishing guidelines and principles, the convening of advisory bodies, the publication of framework documents — suffers from the complementary limitation. Where the European response applies the regulatory toolkit of the existing state, the American response applies the advisory toolkit of the existing political system: statements of principle, recommendations for voluntary adoption, the marshaling of expert opinion through institutional channels designed for deliberation at a pace that bears no relationship to the pace of the transformation they are deliberating about. Both responses are context-preserving. Both take the existing institutional framework as given and ask what can be done within it. Neither exercises the institutional imagination that the situation demands — the imagination to construct institutional forms that do not yet exist, governance mechanisms that operate at a fundamentally different speed and scale, democratic structures that give communities genuine authority over the arrangements through which AI enters their workplaces, schools, and public services.
The emerging frameworks in Singapore, Brazil, and Japan each bring distinctive institutional traditions to the AI governance problem, and each produces distinctive insights. Singapore's approach — centralized, experimentalist, pragmatic — demonstrates the value of rapid institutional iteration within a small and administratively capable polity. Brazil's approach — informed by its constitutional commitment to social rights and its experience with participatory budgeting and other forms of direct democratic engagement — suggests possibilities for democratic participation in AI governance that the larger democratic polities have not yet explored. Japan's approach — shaped by its specific demographic challenges and its distinctive relationship between the state, the corporation, and technological development — offers models of public-private coordination that the more market-driven polities have not considered. But none of these approaches, individually or collectively, represents the scale of institutional imagination that the AI transition requires. Each operates within the existing framework of the nation-state, the existing toolkit of regulatory governance, the existing distribution of authority between states, corporations, and citizens.
Unger would observe that the institutional imagination deficit is not accidental. It is structural — the product of specific features of the formative context within which institutional design currently operates. Three features deserve particular attention.
First: the dominance of the technology industry in shaping the discourse about AI governance. The companies that build AI systems are also, in practice, the primary sources of expertise about how those systems work, what they can do, what risks they pose, and what governance arrangements are appropriate. This concentration of expertise produces a specific and predictable distortion: the governance arrangements that emerge tend to be the arrangements that the technology industry can accommodate within its existing business models, rather than the arrangements that democratic communities might construct if they had independent access to expertise and independent institutional capacity for deliberation. The priesthood problem that Segal identifies — the concentration of understanding in a small group whose interests do not necessarily align with the communities they serve — is, in Unger's terms, an institutional problem rather than merely a moral one. It cannot be solved by exhorting the priests to be more responsible, though responsibility matters. It can only be solved by constructing institutional arrangements that democratize access to expertise and distribute the authority to make governance decisions beyond the circle of those who build the technology.
Second: the speed mismatch between technological change and democratic deliberation. Existing democratic institutions were designed for a world in which the pace of institutional change was measured in years and decades rather than months. Legislative processes, regulatory rulemaking, judicial review, the formation of public opinion through media and civil society — each operates at a tempo that was adequate to previous technological transitions and that is catastrophically inadequate to the current one. The EU AI Act took years to move from proposal to implementation. In those years, the technology it was designed to govern underwent multiple transformations that rendered portions of the regulation obsolete before they took effect. This is not a failure of European governance. It is a structural mismatch between the pace of technological change and the pace of democratic response — a mismatch that cannot be resolved by making the existing institutions faster but only by constructing new institutional forms capable of operating at a fundamentally different tempo.
Third: the absence of institutional infrastructure for democratic participation in technology governance at the community level. The AI transition is experienced locally — in specific workplaces, specific schools, specific neighborhoods, specific families. But the governance of the AI transition is conducted nationally and internationally, at a level of abstraction and a distance from lived experience that effectively excludes the people most directly affected from meaningful participation in the design of the arrangements that will govern their collective life. The parent who lies awake wondering what to tell her child, the teacher who watches students disappear into tools she has not been trained to understand, the worker who feels the ground shifting beneath a career built over decades — each of these people has experiential knowledge that is directly relevant to the design of the institutional arrangements the AI transition requires, and none of them has an institutional channel through which that knowledge can be brought to bear on the governance decisions being made on their behalf.
Unger's concept of "inclusive vanguardism" — developed in *The Knowledge Economy* and subsequent work — speaks directly to this structural deficit. The knowledge economy, Unger argues, currently operates as an "insular vanguardism" — a system in which the most advanced productive practices are confined to a small number of firms and individuals while the majority of economic agents are excluded from participation. The AI transition is deepening this insularity even as it appears to democratize capability at the individual level. The developer in Lagos can now build with Claude Code, yes. But the institutional arrangements that determine the terms of access, the distribution of gains, the governance of the platform, the regulatory framework within which the tool operates — these arrangements are designed by the insular vanguard and imposed on everyone else as conditions of participation rather than negotiated through democratic deliberation.
Inclusive vanguardism, in Unger's formulation, is not the extension of charity from the vanguard to the majority. It is the reconstruction of institutional arrangements so that the most advanced productive practices — including AI-augmented production — are available to the broadest possible range of people, under terms that those people have participated in designing, within governance frameworks that are subject to democratic accountability rather than corporate discretion. This is not a redistribution program. It is an institutional construction project — the creation of organizational forms, educational systems, regulatory structures, and democratic processes that do not currently exist and that would channel the extraordinary productive capability of AI toward broadly distributed human empowerment rather than toward the further enrichment and empowerment of the insular vanguard that currently controls the technology and sets the terms of its deployment.
The institutional imagination required for this construction is of a specific kind. It is not the imagination of the utopian blueprint — the detailed specification of an ideal society to be installed wholesale. Unger rejects this form of imagination as both impractical and dangerous, because it substitutes the authority of the designer for the authority of the democratic community and because it assumes a level of knowledge about optimal institutional arrangements that no designer possesses. The institutional imagination Unger demands is experimentalist — it proposes alternatives, tests them, evaluates the results, and revises based on experience. It is pluralist — it assumes that different communities may construct different arrangements suited to their particular circumstances and values. It is democratic — it insists that the construction of institutional arrangements is a matter for collective deliberation rather than expert prescription or corporate fiat.
But it is also urgent. The window for institutional imagination does not stay open indefinitely. Every month that passes without democratic institutional construction is a month in which the arrangements being constructed by the insular vanguard become more deeply embedded, more widely naturalized, more resistant to democratic reconstruction. The formative context of the AI economy is being set, right now, by default rather than by design. The institutional imagination that could set it differently — that could construct arrangements channeling AI capability toward inclusive human development rather than insular enrichment — must be exercised now, at a pace and with an intensity commensurate with the technological transformation it is responding to, or the window will close and the arrangements that happened to crystallize first will be treated as necessary, as natural, as the way things inevitably are.
The scarcest resource is not computation. It is the collective capacity to imagine and construct the institutional world we want to inhabit. And the cultivation of that capacity — through education, through democratic practice, through the creation of institutional channels for collective deliberation — is the most consequential investment the present moment demands.
---
Every major technological revolution in human history has produced a moment of premature settlement — a moment in which the first institutional arrangements to crystallize around the new technology were treated as permanent, as the natural and necessary framework within which the technology would henceforth operate, foreclosing the institutional experimentation that could have produced better alternatives. The factory system crystallized before labor law. The railroads crystallized before antitrust. The internet crystallized before privacy regulation. Social media crystallized before attention protection. In each case, the premature settlement imposed costs on a generation of human beings — workers, consumers, citizens, children — who bore the consequences of institutional arrangements they had no part in designing, consequences that better institutional imagination, exercised earlier, could have mitigated or prevented.
Roberto Mangabeira Unger's political philosophy is, at its core, a systematic argument against premature settlement — an insistence that no institutional arrangement should be treated as final, that the first response to a new condition is never the best response, that the democratic community must retain the permanent capacity to revise, reconstruct, and replace the arrangements through which it organizes its collective life. Unger calls this commitment "experimentalism," and it is not a vague philosophical disposition. It is a specific institutional program: the construction of governance mechanisms that enable continuous experimentation with alternative arrangements rather than the periodic ratification of existing ones, that treat every institutional settlement as provisional, every governance framework as a hypothesis to be tested rather than a conclusion to be enforced.
The AI transition is producing premature settlement at a speed commensurate with the speed of the technological transformation itself. Within months of the Claude Code breakthrough, specific institutional arrangements have already begun to harden — not because they have been tested and found superior to alternatives, but because they were the first to emerge and because the naturalization machine described in Chapter 1 is already converting their contingency into apparent necessity. The arrangements are real. They are in many cases competent. Some of them may turn out, upon reflection and experimentation, to be genuinely good. But none of them has been tested against alternatives, because alternatives have barely been conceived, and the pressure to treat the first response as the permanent response grows stronger with every month that the arrangements are in operation.
Consider the institutional experiments currently underway, examined with the experimentalist rigor that Unger's framework demands. The "vector pod" concept — small groups of three or four people whose function is not to build but to decide what should be built, while AI tools handle the execution — is an institutional experiment. It embodies a specific hypothesis: that the optimal organizational form for AI-augmented production separates the judgment function from the execution function and concentrates judgment in small, multidisciplinary teams. This hypothesis may be correct. It may also be wrong — it may turn out that the separation of judgment from execution produces judgment that is disconnected from the material realities of execution, that the feedback loops between doing and deciding are more important than the efficiency gains of separating them, that the multidisciplinary team of three or four is too small to generate the diversity of perspective that good judgment requires or too large to maintain the agility that rapid iteration demands. The point is not that the vector pod is a bad idea. The point is that it is a hypothesis, and treating it as a settled answer — naturalizing it as "the way AI-augmented work is organized" — forecloses the experimentation that would determine whether it is the best hypothesis among the available alternatives.
The "AI Practice" framework — structured pauses built into the AI-augmented workday, sequenced rather than parallel workflows, protected time for human-only deliberation — is another institutional experiment. It embodies a specific hypothesis about the relationship between AI-augmented productivity and human cognitive health: that the pathologies documented by the Berkeley researchers (work intensification, task seepage, attentional fragmentation) can be mitigated by deliberate structural interventions within the workday. This hypothesis, too, may be correct. It may also be insufficient — it may turn out that the pathologies are not primarily workday-level phenomena but organizational-level or societal-level phenomena that cannot be addressed by interventions within the individual workplace. The Berkeley researchers themselves noted that the work intensification they documented was driven not merely by the availability of the tool but by the organizational culture that rewarded visible productivity, the career incentive structures that penalized rest, the competitive dynamics that punished the firm that slowed down while its competitors accelerated. An intervention that addresses the workday without addressing these structural drivers may be treating symptoms rather than causes.
The "builder's ethic" itself — Segal's prescription of care, honesty, attention to consequences, the willingness to maintain the dams — is an institutional experiment at the level of individual practice. It embodies the hypothesis that the AI transition can be navigated wisely through the cultivation of personal virtues: attentional discipline, ethical sensitivity, the capacity for self-knowledge that distinguishes flow from compulsion. This hypothesis is not wrong — these virtues are genuinely valuable, and their absence is genuinely costly. But the hypothesis is incomplete if it assumes that personal virtue, however cultivated, is sufficient to address a structural transformation of the institutional landscape. The most virtuous builder, operating within institutional arrangements that concentrate the gains of AI augmentation in the hands of capital owners and impose the costs on displaced workers, will not — cannot — produce broadly distributed human flourishing through personal virtue alone. The institutional framework within which the virtue is exercised determines its effects, and the framework is not a matter of personal choice but of collective design.
Unger's experimentalism offers a specific alternative to premature settlement, and the alternative is not merely "try different things" — which is the vague and ultimately toothless version of experimentalism that most organizations practice under the banner of "innovation." The alternative is the construction of institutional mechanisms that make continuous experimentation structurally possible — that embed the capacity for revision into the architecture of governance itself, so that the revision of institutional arrangements does not require a crisis to trigger it but occurs as a regular, anticipated, democratically governed feature of institutional life.
What would this look like in the context of the AI transition? Unger's framework suggests several concrete institutional forms, each of which represents the exercise of institutional imagination at a level that the current discourse has not yet reached.
First: governance mechanisms that operate at the speed of technological change. The current model — in which democratic institutions observe the effects of technological deployment, deliberate about appropriate responses, and enact regulatory measures months or years after the effects have become visible — is structurally inadequate to a technology whose effects are appearing in real time. An experimentalist alternative would construct governance mechanisms capable of real-time institutional adjustment: standing bodies with the authority and the expertise to modify governance arrangements as the technology and its effects evolve, operating under democratic mandate but at a tempo adequate to the transformation they are governing. These bodies would not replace legislative or judicial oversight. They would supplement it with institutional capacity for the rapid, iterative, evidence-based adjustment that the pace of AI development demands.
Second: institutionalized pluralism in organizational design. Instead of allowing a single organizational model — the vector pod, the individual augmented producer, the AI-integrated team — to naturalize as the way AI-augmented work is organized, an experimentalist approach would deliberately support the construction and testing of multiple organizational models in parallel. Different firms, different sectors, different communities would experiment with different arrangements, and the results would be systematically compared, evaluated, and shared. This is not laissez-faire — it is not the market "sorting things out" through competitive selection, which tends to select for short-term profitability rather than long-term human development. It is structured experimentation under democratic governance, with explicit attention to the effects of different organizational models on the full range of values at stake: productivity, yes, but also worker development, community well-being, democratic participation, and the cultivation of the human capacities that the AI age makes most valuable.
Third: sunset provisions for institutional arrangements. Every governance framework, every organizational model, every educational curriculum deployed in response to the AI transition would include a built-in expiration date — a moment at which the arrangement must be reviewed, its effects evaluated, and a deliberate decision made about whether to continue, modify, or replace it. The sunset provision is the institutional mechanism that prevents premature settlement from becoming permanent settlement. It forces the democratic community to revisit its choices, to examine the consequences of its previous decisions, and to exercise institutional imagination anew rather than allowing the arrangement that happened to be in place to persist by inertia.
Fourth: democratic feedback mechanisms that bring the experiential knowledge of affected communities into the governance process. The parent, the teacher, the worker, the student — each possesses knowledge about the effects of AI deployment that is directly relevant to governance decisions and that is currently excluded from the governance process by the institutional architecture of existing democratic institutions. An experimentalist approach would construct channels — participatory panels, deliberative assemblies, digital platforms for structured input — through which this experiential knowledge could inform the design, evaluation, and revision of institutional arrangements. This is not populism. It is the recognition that governance knowledge is distributed, that the people who live inside institutional arrangements have insight into their effects that the people who design them from outside cannot access, and that democratic governance is enriched rather than undermined by the inclusion of this distributed knowledge.
The experimentalist commitment is demanding. It requires the willingness to treat every institutional response to the AI transition as provisional — to resist the comfort of settlement, the reassurance of permanence, the naturalization that converts the first response into the only response. It requires the construction of institutional mechanisms that most existing governance systems do not possess and that most existing political actors have not conceived. It requires democratic communities to exercise institutional imagination at a pace and with an intensity that exceeds anything in the recent history of democratic governance.
But the cost of failing to exercise it — of allowing premature settlement to harden into permanent settlement, of treating the first institutional arrangements to crystallize around AI as the necessary and natural framework for the AI economy — is measured in human lives. The generation that bore the cost of the factory system's premature settlement, the generation that bore the cost of the internet's premature settlement, the generation currently bearing the cost of social media's premature settlement — each paid a price that better institutional imagination, exercised earlier, could have reduced. The AI transition is moving faster than any of these predecessors, which means the window for institutional imagination is narrower and the cost of missing it is greater.
The choice is not between settlement and chaos. It is between premature settlement, which forecloses alternatives, and democratic experimentalism, which constructs the institutional capacity for continuous improvement. Unger's life work has been the argument for the latter. The AI transition is the moment when that argument becomes not merely compelling but urgent beyond any previous instance of its application.
---
Most democratic governance operates at low energy. This is not a pejorative observation but a structural one — a description of the tempo, the intensity, and the scope of democratic engagement that existing institutional arrangements permit and expect. Citizens vote periodically, typically at intervals of two to six years. Legislatures deliberate at a pace determined by procedural rules designed for a world in which the primary challenge was preventing hasty action rather than enabling rapid response. Regulatory agencies operate through notice-and-comment rulemaking processes that measure their timelines in months and years. Judicial review proceeds at the pace of litigation. Public opinion forms through media cycles that, even in the age of social media, operate at a tempo determined by human attention spans and news rhythms rather than by the pace of the transformations the public is attempting to understand.
This low-energy democratic practice was adequate to the governance challenges of the late twentieth century, when the pace of institutional change was measured in decades and the primary task of democratic governance was the maintenance and incremental adjustment of stable institutional arrangements. It is catastrophically inadequate to the governance of artificial intelligence, a technology whose effects are context-smashing, whose deployment timelines are measured in months, and whose institutional consequences are appearing in real time while the democratic institutions responsible for governing them are still deliberating about whether to form a committee.
Roberto Mangabeira Unger has argued throughout his career for what he calls "empowered democracy" or "high-energy democracy" — a form of democratic governance that enables continuous institutional experimentation, active citizen participation in the design of the arrangements that govern collective life, and rapid institutional adjustment in response to changing conditions. High-energy democracy is not merely a normative ideal in Unger's framework. It is a structural necessity — a form of governance that the complexity and speed of modern institutional life demand, regardless of ideological preference, because low-energy democracy lacks the institutional capacity to perform the governance functions that the situation requires.
The AI transition transforms this structural necessity into an emergency. The gap between the pace of AI deployment and the pace of democratic response is not merely wide. It is widening. Every month that the gap grows is a month in which the institutional arrangements of the AI economy are being set by actors other than democratic communities — by technology corporations making deployment decisions, by individual builders constructing new organizational forms, by market forces selecting among competing arrangements on the basis of short-term profitability rather than long-term human flourishing. The democratic community is not absent from this process. It is present as a spectator — observing the transformation, forming opinions about it, expressing those opinions through the channels available (social media posts, conference panels, occasional legislative hearings), but exercising no effective institutional authority over the design of the arrangements that will govern its collective life in the AI age.
The inversion is precise and consequential. In a functioning democracy, the relationship between democratic authority and technological power runs in one direction: democratic communities decide the terms under which technological forces operate within their societies, and the technological actors comply with those terms or face democratic sanction. In the current AI transition, the relationship runs in the opposite direction: technological actors decide the terms under which AI enters workplaces, schools, homes, and public services, and democratic communities adjust to those terms or face competitive disadvantage. The corporations set the terms of access. The platforms determine the interface. The algorithms shape the experience. The democratic community's role is reduced to reactive regulation — imposing constraints on deployment after the deployment has already occurred and its institutional effects have already begun to crystallize.
Unger would identify this inversion as a symptom of institutional failure rather than technological inevitability. The failure is not that democracy is inherently incapable of governing technological transformation — the history of democratic governance includes numerous instances of effective institutional response to technological upheaval, from the labor legislation of the Progressive Era to the environmental regulation of the 1970s to the telecommunications governance frameworks of the late twentieth century. The failure is that existing democratic institutions lack the specific institutional capacities that the governance of AI requires: the capacity for rapid institutional adjustment, the capacity for technically informed deliberation, the capacity for democratic participation at the community level where AI's effects are experienced, and the capacity for experimentalist governance that treats every arrangement as provisional and revisable.
These capacities are not inherent features of democracy. They are institutional achievements — products of deliberate construction, available to democratic communities that choose to build them and unavailable to those that do not. The construction of these capacities is itself an exercise of institutional imagination, and it is the exercise that the AI transition demands most urgently from democratic communities.
What would high-energy democratic governance of AI actually require? Unger's framework, applied to the specific institutional challenges of the AI transition, suggests several concrete constructions.
First: standing democratic bodies with real-time governance authority over AI deployment in specific domains — education, healthcare, labor markets, public services. These are the rapid-adjustment bodies sketched in the previous section, now given democratic form. They would be composed of representatives drawn from the communities affected by AI deployment in each domain — teachers and parents and students in education, healthcare workers and patients in healthcare, workers and employers and community members in labor markets — supplemented by technical experts whose role is to inform rather than to determine. Their authority would be genuine: the power to approve, modify, delay, or prohibit specific AI deployments within their domain, subject to legislative mandate and judicial review but operating at a tempo adequate to the transformation they are governing.
Second: democratic technology assessment at the community level. The AI transition, as noted, is experienced locally — in the specific workplace where AI augmentation changes the nature of work, in the specific school where AI tools change the nature of learning, in the specific neighborhood where AI-driven economic shifts change the nature of opportunity — while its governance is conducted nationally and internationally, at a level of abstraction that excludes the people most directly affected. High-energy democracy would construct institutional infrastructure for community-level technology assessment: structured processes through which local communities could evaluate the effects of AI deployment in their specific contexts, develop recommendations for governance adjustments based on their experiential knowledge, and communicate those recommendations to the standing governance bodies with sufficient institutional weight to affect outcomes. This extends democratic practice into a domain from which it has been largely excluded, and the extension is justified not by ideology but by necessity: the experiential knowledge of affected communities is directly relevant to governance quality, and no framework that excludes it can be adequate to the complexity of the phenomenon it attempts to govern.
Third: democratic participation in the design of AI-augmented institutional arrangements within workplaces and educational institutions. The organizational experiments currently underway — the vector pod, the AI Practice framework, the builder's ethic — are being designed by managers and leaders within corporate hierarchies, without meaningful participation by the workers whose daily lives these arrangements govern. High-energy democracy would extend democratic participation to the design of these arrangements, through works councils, co-determination mechanisms, or other institutional forms that give workers genuine voice in the construction of the organizational models within which they operate. This is not a radical proposal in the European context, where co-determination has a long institutional history. It is radical only in the American context, where the design of organizational arrangements is treated as a managerial prerogative — a naturalization that Unger's framework would identify as false necessity in its purest form.
Fourth: public AI infrastructure that provides alternatives to corporate platform dependence. The current arrangement, in which AI capability is provided as a corporate service through proprietary platforms controlled by a handful of firms, is not the only possible arrangement. Public AI infrastructure — publicly funded, publicly governed, publicly accountable AI systems and platforms that provide augmentation capability as a public utility rather than a corporate subscription — is a conceivable and constructible alternative. It would not replace corporate AI provision. It would complement it, providing a democratic alternative that operates under different governance principles and different incentive structures, ensuring that access to AI capability is not contingent on the terms set by corporate actors and that the governance of AI infrastructure is subject to democratic accountability rather than corporate discretion.
Fifth: international democratic cooperation in AI governance that transcends the nation-state framework within which most current governance efforts are contained. AI is a global technology with global effects, and governance frameworks that operate within national boundaries will inevitably be inadequate to a phenomenon that transcends them. But international governance of AI cannot be left to the existing international institutions — the trade organizations, the multilateral bodies, the diplomatic forums — that were designed for a different set of challenges and that operate at a tempo even slower than national democratic institutions. High-energy international democratic cooperation would require the construction of new institutional forms — perhaps networks of the standing domain-specific governance bodies described above, operating across national boundaries, sharing experiential knowledge and governance innovations, constructing common frameworks while preserving the capacity for local adaptation that Unger's experimentalism demands.
Each of these constructions is an exercise of institutional imagination. Each addresses a specific governance gap that the current institutional framework cannot fill. Each requires the commitment of democratic communities to the construction of institutional capacity that does not yet exist — capacity that no existing political actor has proposed building, that no existing party platform includes, that no existing governance framework anticipates.
This is the challenge. The technology is here. Its effects are visible. Its institutional consequences are crystallizing. And the democratic governance capacity adequate to shaping those consequences — to ensuring that the AI transition serves broadly distributed human empowerment rather than the further concentration of power and wealth in the hands of those who control the technology — does not exist. It must be built. The building is an act of institutional imagination of the highest order. And the urgency is genuine: every month that passes without this construction is a month in which the formative context of the AI economy is being set by actors who are not democratically accountable to the communities their decisions affect.
Unger has spent his career arguing that democratic communities possess the capacity to construct the institutional world they want to inhabit. The AI transition is the test of whether that capacity will be exercised — and the stakes of the test are the difference between a technology that serves human freedom and a technology that, through the naturalization of the institutional arrangements that currently surround it, becomes the most sophisticated instrument of domination the world has ever produced, not through malice but through the simple, devastating failure of democratic institutional imagination to keep pace with the forces it is responsible for governing.
---
A twelve-year-old lies awake and asks her mother a question that no educational institution in the world is currently equipped to answer: "What am I for?" The question is not vocational — not "what career should I pursue?" — but existential, prompted by the daily experience of watching a machine do her homework better than she can, compose stories better than she can, answer questions faster and more comprehensively than she can. The question is the child's version of the challenge that the AI transition poses to every institutional arrangement designed to cultivate human capability: When the machine can do what we trained people to do, what should we train people to be?
Roberto Mangabeira Unger's philosophy of education — developed across *The Self Awakened*, *The Knowledge Economy*, and the institutional proposals threaded through his political writings — begins with a premise that the current educational establishment has almost entirely abandoned: education is not primarily the transmission of knowledge or the development of marketable skills. It is the cultivation of the context-transcending self — the formation of human beings who possess the capacity to see the frameworks within which they operate as contingent rather than necessary, to imagine alternatives to the given, and to participate in the construction of institutional arrangements that do not yet exist. Education, in this account, is the institutional practice through which a society produces the human beings capable of exercising the transformative vocation — the refusal to accept any arrangement as final, combined with the practical commitment to building better alternatives.
This conception of education is not new. It is, in fact, the oldest conception of education in the Western philosophical tradition — the Socratic commitment to the examined life, the insistence that the purpose of learning is not the accumulation of answers but the cultivation of the capacity to ask better questions. It has been progressively displaced, over the past century, by a conception of education as human capital formation — the preparation of workers for existing labor markets, the certification of competencies defined by existing institutional arrangements, the production of graduates who can function effectively within the given framework of economic and social life. The human capital model of education was always reductive — it reduced the full range of human development to the subset that existing labor markets valued — but it was at least functional: it produced graduates who could, in fact, function within the institutional arrangements they entered, who possessed the specific skills and knowledge that those arrangements demanded.
The AI transition has rendered the human capital model not merely reductive but dysfunctional. The model is now actively producing graduates equipped for a formative context that no longer exists — graduates whose specific skills and certified competencies are precisely the capabilities that AI systems now provide at near-zero marginal cost. The four-year computer science degree that produces graduates capable of writing code in specific languages, debugging specific frameworks, navigating specific deployment environments — this degree is training for a world in which these specific capabilities constituted the scarce resource in software production. That world ended, practically and decisively, in the winter of 2025. The graduates emerging from this pipeline in 2026 and 2027 possess certified competencies that are, for a significant and growing class of work, less valuable than a subscription to an AI coding assistant.
This is not a curriculum problem that can be resolved by adding modules on "AI literacy" or "prompt engineering" to existing degree programs — the context-preserving response that most educational institutions are currently implementing. It is an institutional crisis that requires the reconstruction of the educational framework itself — a context-smashing transformation of what education is for, how it is organized, what capacities it cultivates, and how it relates to the economic and social life it is supposed to prepare people for.
Unger's framework provides the conceptual architecture for this reconstruction, and the architecture begins with a fundamental reorientation: from education as the production of human capital for existing institutional arrangements to education as the cultivation of the capacities that those arrangements cannot supply and that the AI age makes most urgently necessary. Three capacities in particular emerge from Unger's analysis as the essential products of a reconstructed educational system.
The first is the capacity for institutional imagination — the ability to conceive of organizational forms, governance structures, and social arrangements that do not currently exist. This capacity has never been systematically cultivated by any educational system. It has been treated as the province of the exceptional — the visionary leader, the social entrepreneur, the rare politician with the ability to see beyond the given. Unger's argument is that institutional imagination is not a rare talent but a general human capacity — an expression of the negative capability that defines the species — and that its systematic cultivation is both possible and necessary. An educational system designed to cultivate institutional imagination would look fundamentally different from the existing system. It would emphasize the study of institutional alternatives — the comparative analysis of different ways of organizing work, governance, education, healthcare, not as academic exercises but as practical repertoires of possibility. It would immerse students in the history of institutional innovation — the construction of the cooperative movement, the labor movement, the environmental movement, the open-source movement — not as history but as cases of institutional imagination in action, exemplary instances of the transformative vocation from which practical lessons can be drawn. It would provide structured opportunities for students to design and test alternative institutional arrangements, using the AI tools now available to prototype organizational forms with the same ease that those tools allow the prototyping of software products.
The second capacity is what Unger calls "the cooperative deepening of capability" — the ability to work with others across different perspectives, disciplines, and backgrounds to produce understanding and capability that no individual perspective could produce alone. The current educational system is organized around individual assessment: individual grades, individual credentials, individual competition for individual positions. This organizational principle was functional in a world where individual competency was the scarce resource and institutional arrangements were designed to sort individuals by competency level. In a world where individual competency can be augmented to extraordinary levels by AI tools, the scarce resource is not individual capability but the quality of the collaborative process through which individuals with different perspectives construct shared understanding and shared institutional arrangements. The educational system that cultivates this capacity would de-emphasize individual assessment and emphasize collaborative design — the structured practice of working across difference to produce outcomes that no individual could produce alone. Not as a soft skill. As the foundational educational practice.
The third capacity is what the preceding chapters have called judgment — but which Unger's framework reveals as something more specific and more demanding than the word typically conveys. Judgment, in Unger's sense, is not merely the ability to evaluate options and choose well among them. It is the ability to evaluate options in light of values that the evaluator has examined, contested, and deliberately adopted — values that are not inherited defaults but the products of sustained reflection on what matters and why. This is the capacity that the twelve-year-old's question reaches for: "What am I for?" is a question about values, about purpose, about the examined life that Socrates insisted was the only life worth living. An educational system that cultivates this capacity would not treat values as given — would not assume that the purpose of education is to prepare students for "success" as the existing institutional arrangements define success — but would treat the examination of values as a core educational practice, as fundamental as mathematics or literacy, as demanding and as rewarding as any technical skill.
These three capacities — institutional imagination, cooperative deepening, and examined judgment — constitute the educational program of the context-transcending self. They are the capacities that the AI transition makes most urgently necessary and that the existing educational system is least equipped to cultivate. Their cultivation requires not the adjustment of existing curricula but the reconstruction of the educational framework — the formative context within which young minds develop the relationship to knowledge, to capability, to themselves and each other that will determine whether they flourish or flounder in the AI age.
The reconstruction is institutional, not merely pedagogical. It is not enough to train individual teachers to grade questions rather than answers, though this is a valuable pedagogical innovation. The institutional architecture of education — accreditation, credentialing, funding models, the structure of the academic year, the assessment systems that determine what is valued and what is rewarded, the relationship between educational institutions and the labor markets they serve — must itself be reconstructed. The accreditation system that certifies institutions on the basis of input measures (faculty credentials, library holdings, credit-hour requirements) rather than outcome measures (the actual capacities of graduates) is an artifact of the previous formative context. The credentialing system that treats the four-year degree as the standard unit of educational achievement, regardless of what the degree actually produces in terms of human development, is an artifact. The funding model that ties institutional revenue to enrollment numbers and completion rates rather than to the quality of the educational experience or the capabilities it cultivates is an artifact. Each of these artifacts was functional within the formative context for which it was designed. Each is dysfunctional within the formative context the AI transition is creating. Each must be reconstructed.
Unger's insistence on educational reconstruction is grounded in a recognition that education is the institutional lever with the greatest long-term consequences for the distribution of the AI transition's benefits and costs. The generation currently entering the educational system will live their entire adult lives within the formative context the AI transition is creating. Whether they are equipped to exercise the transformative vocation — to participate as agents rather than objects in the construction of the institutional arrangements that will govern their collective life — depends on whether the educational system cultivates the capacities that agency requires. An educational system that continues to produce human capital for a formative context that no longer exists is not merely wasteful. It is a form of institutional violence against the young — a system that consumes years of their lives and, increasingly, encumbers them with financial obligations, in exchange for credentials whose value is being eroded by the same technological forces that the educational system has failed to comprehend.
Unger has argued, with a directness unusual among philosophers, that "the most effective way to use a machine is to work as not a machine, as anti-machine, nonformulaically or nonalgorithmically." The educational implication is stark: the purpose of education in the AI age is to cultivate the capacities that machines cannot replicate — the imaginative, the interrogative, the cooperative, the evaluative, the capacities that arise from the experience of being a finite, embodied, mortal, caring creature in an uncertain world. These capacities cannot be produced by adding AI literacy modules to existing curricula. They can only be cultivated through the wholesale reconstruction of the educational framework — a reconstruction that treats education not as the production of human capital but as the formation of human beings capable of asking the questions that no machine will originate, building the institutional arrangements that no market will produce, and making the value judgments that no algorithm can justify.
The twelve-year-old's question deserves an answer worthy of its depth. The answer is not that she is for the things machines cannot do — this frames her existence in relation to the machine, defining her by negation, making the machine the reference point and the human the residual category. The answer is that she is for the things that only a being capable of the transformative vocation can do: the asking of questions that open new spaces of possibility, the imagination of institutional arrangements that do not yet exist, the construction of a shared life that reflects examined values rather than inherited defaults, the exercise of the deepest human capacity — the capacity to see the given as contingent and to build beyond it. This is what she is for. And whether she knows it, whether the educational institutions she passes through cultivate it or suppress it, whether the society she enters provides the institutional channels through which it can be exercised — these are the questions that will determine whether the AI transition fulfills its promise or betrays it, and they are questions that only the most ambitious exercise of democratic institutional imagination can answer.
The most consequential political act available in any historical moment is not protest, not resistance, not the seizure of existing institutional machinery, but the construction of institutional arrangements that did not previously exist — arrangements that, once constructed, change the landscape of possibility for everyone who inhabits them, that create options where no options existed before, that transform the relationship between human communities and the forces that shape their collective life. Roberto Mangabeira Unger has spent half a century arguing that this constructive act — the exercise of institutional imagination in the service of democratic self-governance — is the highest expression of human political capacity, and that the persistent failure to exercise it is the deepest cause of the persistent failures of democratic life.
The builder's ethic that Edo Segal articulates in *The Orange Pill* — the commitment to care, honesty, attention to consequences, the maintenance of structures that redirect powerful forces toward human flourishing — is, when read through Unger's framework, more radical than its modest tone suggests. It is not merely a set of personal virtues suited to the AI age. It is, in embryo, a program of institutional construction — a set of commitments that, if extended from the personal to the organizational to the democratic, would constitute the most ambitious exercise of institutional imagination since the labor movements of the nineteenth and twentieth centuries constructed the institutional architecture of the welfare state.
Consider what the builder's ethic actually demands when it is taken seriously at each scale of social organization.
At the personal scale, the builder's ethic demands what Segal calls attentional ecology — the deliberate structuring of one's cognitive environment to prevent the colonization of every pause by productivity, to preserve the capacity for boredom, for sustained attention, for the kind of thinking that only occurs when the tool is set aside and the mind is left to its own resources. This is a genuine and demanding practice. It requires the builder to resist the most seductive feature of the AI tools — their availability, their willingness to engage at any hour, their capacity to transform every idle moment into a productive one — and to construct instead a relationship with the tools that preserves the human capacities the tools cannot replicate. The personal dam is real. It matters. And it is, in Unger's terms, the individual expression of the transformative vocation: the refusal to accept the tool's default relationship with the user as the only possible relationship, combined with the practical construction of an alternative.
But personal practice, however disciplined, operates within institutional constraints that personal practice cannot alter. The builder who cultivates attentional ecology within a corporate culture that rewards round-the-clock availability, that measures performance by visible output, that promotes the people who work longest and fastest rather than the people who think most deeply — this builder is exercising personal virtue against institutional headwinds. The virtue is admirable. The headwinds are structural. And structural problems require structural solutions.
At the organizational scale, the builder's ethic demands what the Berkeley researchers called AI Practice — the deliberate construction of organizational routines that protect human development against the pressure of AI-augmented productivity. Structured pauses. Sequenced workflows. Protected mentoring time. The organizational forms that Segal describes — the vector pod, the team that decides what should be built rather than building it, the redefinition of seniority from execution speed to judgment quality — are genuine institutional innovations. They represent the exercise of institutional imagination at the level of the firm, and their significance should not be underestimated. The firm is the institutional environment within which most adults spend most of their waking hours, and the redesign of the firm's internal arrangements is therefore a consequential act of institutional construction that directly affects the lives of the people who work within it.
But organizational innovation, however creative, operates within a larger institutional framework that organizational innovation cannot alter. The firm that constructs humane AI Practice arrangements competes against firms that do not. The leader who chooses to invest productivity gains in team development rather than headcount reduction faces quarterly pressure from investors who measure returns on a shorter time horizon. The organizational dam, however well-built, is subject to the currents of a larger river — the competitive dynamics of the market, the incentive structures of the financial system, the regulatory framework within which the firm operates. These larger currents are not natural forces. They are institutional arrangements — products of political choices that can be revised through political action. But they cannot be revised through organizational innovation alone.
This is where the builder's ethic, if it is to fulfill its radical potential, must extend beyond the personal and the organizational to the democratic — the level at which the formative context itself can be reconstructed. The democratic extension of the builder's ethic is not a different kind of activity. It is the same activity — the construction of structures that redirect powerful forces toward human flourishing — exercised at the scale where the most consequential structures operate. The labor legislation of the Progressive Era was a democratic extension of the builder's ethic: the construction of institutional arrangements (the eight-hour day, workplace safety regulation, the right to collective bargaining) that redirected the industrial forces the factory owners could not or would not redirect on their own. The environmental legislation of the 1970s was a democratic extension of the builder's ethic: the construction of institutional arrangements that redirected the productive forces that market incentives, left to their own devices, directed toward ecological destruction.
The AI transition requires a democratic extension of the builder's ethic commensurate with the scale and speed of the transformation. The specific institutional constructions this extension demands have been outlined in the preceding chapters: standing democratic governance bodies with real-time authority over AI deployment, community-level technology assessment, democratic participation in the design of AI-augmented workplace arrangements, public AI infrastructure as an alternative to corporate platform dependence, experimentalist governance with sunset provisions, educational reconstruction oriented toward the context-transcending self. Each of these constructions is specific. Each is constructible. Each requires the exercise of institutional imagination at a level that the current political discourse has not yet reached.
But Unger's framework reveals something about these constructions that is easily missed when they are enumerated as a policy agenda. They are not merely instrumental — not merely means to the end of governing AI wisely, though they are that. They are constitutive of a form of democratic life that is valuable in itself — a form of life in which citizens are not merely consumers of governance outcomes but active participants in the construction of the institutional arrangements that govern their collective existence. The exercise of institutional imagination is itself a form of human flourishing — an expression of the context-transcending capacity that defines the species, a practice that develops the very capabilities it requires, a form of collective self-determination that is diminished every time it is delegated to experts, to corporations, to the market, to the dictatorship of no alternatives.
This is the radical heart of the builder's ethic, visible only when Unger's framework is applied to it. The dam is not merely a protective structure. The dam is a democratic construction — an institutional innovation that embodies the collective judgment of the community about how the river should flow, that is maintained through continuous democratic engagement rather than one-time installation, that is subject to revision as conditions change and understanding deepens, and that creates behind it a pool of possibility — a habitat for forms of human life that the unaltered current would not sustain.
The dam, in this account, is not a metaphor. It is an institutional program. Every policy framework that gives communities genuine authority over AI deployment in their schools, workplaces, and public services is a dam. Every educational reconstruction that cultivates the context-transcending self rather than producing human capital for a defunct formative context is a dam. Every governance mechanism that enables continuous institutional experimentation rather than premature settlement is a dam. Every democratic structure that distributes the authority to shape the AI transition beyond the circle of technology corporations and individual builders to the communities those decisions affect is a dam.
The construction of these dams is the most consequential form of building available in the present moment. It is also the most demanding, because it requires the exercise of institutional imagination at a pace and scale that exceeds anything in the recent history of democratic governance, in conditions of urgency imposed by a technological transformation that does not pause for deliberation. The technology is deployed daily. Its institutional effects are crystallizing daily. The formative context of the AI economy is hardening daily. And the democratic institutional imagination that could shape it differently — that could construct arrangements channeling the extraordinary productive capability of AI toward broadly distributed human empowerment rather than insular enrichment — is being exercised at a fraction of the intensity the situation demands.
Unger's life work is the argument that this need not be the case — that democratic communities possess the capacity to construct the institutional world they want to inhabit, that the appearance of necessity is always an illusion, that alternatives always exist. The AI transition is the test of that argument. The evidence so far — the speed of naturalization, the institutional imagination deficit, the premature settlement of organizational forms, the low-energy democratic response to a high-velocity transformation — is not encouraging. But the test is not over. The formative context is hardening but not yet hard. The window for institutional imagination is narrowing but not yet closed. The capacity for construction is dormant but not extinct.
The radical project of the dam is the awakening of that capacity — the insistence that democratic communities exercise, with full energy and full imagination, the transformative vocation that is their birthright. Not someday. Not when the technology stabilizes. Not when the political conditions improve. Now. In this moment. Before the window closes and the arrangements that happened to crystallize first are treated as the arrangements that were always meant to be.
---
The obligation that Roberto Mangabeira Unger's philosophy imposes on the present moment is not the obligation to understand the AI transition — though understanding is a precondition — nor the obligation to regulate it — though regulation is a component — nor even the obligation to resist its worst tendencies — though resistance is sometimes necessary. The obligation is to build beyond the given: to construct institutional arrangements that do not yet exist, that the current formative context would not have produced, that redirect the extraordinary capability of artificial intelligence toward purposes that the market alone would not serve and that democratic communities, exercising the fullest measure of their institutional imagination, have chosen.
Building beyond the given is the hardest form of building. It is harder than building within the given, because the given provides blueprints, precedents, templates, the accumulated institutional knowledge of what has been tried and what has worked. Building beyond the given means working without blueprints — constructing in the space where no precedent exists, where the formative context that provided the old blueprints has been smashed by the same technological forces that make the new construction necessary. It is harder than critique, because critique can operate from outside the system it examines while construction must operate from within it, subject to all the constraints, the compromises, the imperfections that practical building entails. It is harder than refusal, because refusal requires only the discipline of saying no while construction requires the discipline of saying yes to something specific, something imperfect, something that will be tested by reality and found wanting in ways the builder cannot predict.
And yet it is the only form of response adequate to the moment. The Swimmer's refusal, however principled, leaves the formative context to be designed by others. The Believer's acceleration, however energetic, accepts the formative context as given and optimizes within it. Critique, however penetrating, identifies what is wrong without constructing what could be right. Only the builder who works beyond the given — who treats the current institutional arrangements as raw material for reconstruction rather than as the fixed terrain within which all action must occur — exercises the transformative vocation at the level the AI transition demands.
Unger's philosophical career has been, in its entirety, an argument for the possibility and the necessity of this form of building. Against the determinisms of left and right — against the Marxist claim that history follows necessary laws and the neoliberal claim that market arrangements are the only efficient ones — Unger has insisted on what he calls the plasticity of social life: the demonstrable fact that institutional arrangements can be remade, that they have been remade repeatedly throughout human history, and that the appearance of necessity is always the product of naturalization rather than the reflection of structural constraint. The AI transition is the most vivid demonstration of this plasticity in living memory. Arrangements that seemed structural have dissolved in months. Boundaries that seemed permanent have been revealed as artifacts of cost structures that no longer obtain. Hierarchies that seemed inherent have been inverted by a tool that does not care about credentials, seniority, or institutional affiliation. If the plasticity of social life was ever in doubt, the AI transition has settled the question conclusively. The arrangements can be remade. The only question is who remakes them, and in whose interest, and through what process of deliberation.
Unger has identified, with increasing urgency in his recent work, the specific obstacle to the exercise of this plasticity in the service of democratic self-governance. The obstacle is not, as it is sometimes presented, the power of technology corporations or the speed of technological change or the complexity of AI systems. These are real challenges, but they are practical challenges — challenges that institutional imagination, once mobilized, can address. The deeper obstacle is what Unger calls the dictatorship of no alternatives: the pervasive, often unconscious conviction that the current institutional arrangements are the only possible ones, that the choice is between the existing framework and chaos, that to question the fundamental structure of the AI economy is to engage in fantasy rather than in the most rigorous and most consequential form of democratic thought.
The dictatorship of no alternatives operates with particular efficiency in the AI discourse because the technology itself generates an aura of inevitability. The exponential curves, the breathtaking demonstrations of capability, the speed of adoption — each reinforces the impression that what is happening is not the product of institutional choices but the unfolding of a natural process, as inevitable and as impervious to human direction as the orbit of a planet. But the orbit of a planet is physics. The institutional arrangements of the AI economy are politics. The technology is a force. The institutional framework within which the force operates is a construction. And constructions can be reconstructed.
The specific constructions the AI transition demands — the democratic governance mechanisms, the educational reconstructions, the experimentalist frameworks, the public infrastructure alternatives, the community-level assessment processes, the international cooperation architectures — have been outlined in the preceding chapters with as much specificity as the present moment of understanding allows. None of them is utopian. Each addresses a specific institutional deficit that has been identified through analysis of the actually existing AI transition and its actually existing institutional consequences. Each is constructible within the institutional capacities of existing democratic societies, though each requires the extension of those capacities in directions that existing political actors have not yet proposed.
But the specific constructions are less important than the practice they represent — the practice of institutional imagination itself, the habit of treating every arrangement as contingent and every constraint as a candidate for transformation. Unger has argued that this practice must become a permanent feature of democratic governance rather than an occasional response to crisis, because the forces reshaping institutional life — of which AI is the most dramatic current instance but certainly not the last — will not pause to allow democratic communities the luxury of returning to low-energy governance between episodes of upheaval. The pace of transformation is permanent. The need for institutional imagination is therefore permanent. And the institutional capacity for exercising it — the standing governance bodies, the community assessment processes, the democratic feedback mechanisms, the experimentalist frameworks with their sunset provisions — must be permanently constructed and permanently maintained.
This is the deepest implication of Unger's framework for the AI age: not a specific set of policy proposals, though specific proposals are necessary, but a permanent transformation of democratic practice — from the low-energy governance that treats institutional arrangements as settled to the high-energy governance that treats them as ongoing experiments, continuously evaluated, continuously revised, continuously reconstructed in light of experience and in the exercise of collective judgment about the kind of life the community wants to make possible.
The AI transition, in this account, is not merely a technological event requiring a governance response. It is an occasion — perhaps the most consequential occasion in the history of democratic governance — for the democratic community to reclaim its authority over the design of the institutional world it inhabits. The technology has demonstrated, beyond any remaining doubt, that institutional arrangements can be remade at extraordinary speed. The question is whether the remaking will be performed by the insular vanguard of technology corporations and individual builders operating within the existing formative context of corporate capitalism, or by democratic communities exercising the fullest measure of their institutional imagination to construct a formative context worthy of the extraordinary capability the technology provides.
Unger's answer has been consistent across five decades of philosophical work: the human community possesses the capacity to construct the institutional world it wants to inhabit. The obstacles are real but surmountable. The naturalization is powerful but reversible. The institutional imagination is dormant but awakable. The plasticity of social life is a permanent feature of the human condition, available to every generation, awaiting only the exercise of the transformative vocation — the refusal to accept the given as the necessary, the commitment to build beyond what exists toward what could exist, the insistence that democratic communities are the authors rather than the objects of the institutional arrangements that shape their collective life.
The AI age demands this exercise with an urgency that exceeds any previous moment in the history of democratic self-governance. The capability is extraordinary. The institutional arrangements through which it is deployed will determine whether it serves as an instrument of broadly distributed human empowerment or as the most sophisticated mechanism of domination ever constructed. The choice between these outcomes is not technological. It is institutional. It is political. It is democratic. And it is being made, right now, by default rather than by design, in the absence of the institutional imagination that could make it a genuine choice rather than a foregone conclusion.
The given is never the necessary. Alternatives always exist. The deepest human capacity is the capacity to see through the illusion of inevitability and to build, in the space that seeing opens, the institutional arrangements that a free and self-governing community would choose.
This is the obligation. This is the vocation. This is the work that the moment demands and that only the most ambitious exercise of democratic institutional imagination can perform.
The building begins.
---
The assumption I did not know I was making was that the arrangements surrounding us were the terrain itself.
I have built companies inside institutional frameworks I never questioned — corporate structures, funding models, team hierarchies, the quarterly cadence that determines what is visible and what disappears. I questioned everything inside those frameworks. I prided myself on questioning. I moved fast, broke things, rebuilt them better. But the frames themselves — the assumption that a corporation is the natural vessel for building, that a board conversation is where productivity gains get allocated, that the market is the mechanism through which capability finds its way to the people who need it — those I treated the way a fish treats water. As the medium. As reality.
Unger's framework does something uncomfortable to that assumption. It does not say the frameworks are wrong, exactly. It says they are chosen — that they were constructed by specific people in specific historical moments for specific purposes, and that they can be reconstructed by different people for different purposes. And it says that the failure to see them as constructed, the naturalization that makes the contingent feel permanent, is the deepest form of political captivity available to a free people.
I kept thinking, while working through these ideas, about the boardroom conversation I described in *The Orange Pill* — the one where the twenty-fold productivity number was on the table and the question was whether to convert it into headcount reduction or team expansion. I chose to keep the team. I chose to invest in capability. I stand by that choice. But Unger forced me to see something about the conversation itself that I had not seen: the question of what to do with the productivity gain was being decided inside a room that contained managers and investors. It did not contain the engineers whose augmented labor generated the gain. It did not contain the community whose economic life would be affected by the decision. The choice was real. The framework within which the choice was made excluded most of the people the choice would affect.
That exclusion is not malice. It is architecture — institutional architecture so familiar it has become invisible. Unger makes it visible. And once visible, it cannot be unseen.
I do not pretend to have absorbed the full weight of what Unger demands. His vision of empowered democracy, of high-energy democratic governance operating at the speed of technological change, of communities exercising genuine authority over the institutional arrangements that shape their collective lives — this is a vision whose realization requires construction far beyond anything I can accomplish from a builder's desk. But the direction matters. The recognition that the dams I described in *The Orange Pill* must be not only personal and organizational but democratic and institutional — that the builder's ethic is incomplete until it extends to the collective design of the frameworks within which all building occurs — this recognition has changed how I think about what I owe.
What I owe is not just care within the existing arrangements. What I owe is imagination beyond them. The institutional imagination to ask: What frameworks would we build if we did not mistake the current ones for the only ones possible? What would AI governance look like if the communities affected had genuine authority over its design? What would education look like if its purpose were the cultivation of context-transcending human beings rather than the production of human capital for a world that no longer exists?
I do not have answers to these questions. Unger himself insists that answers are less important than the practice of asking — the permanent exercise of institutional imagination that refuses to let any settlement harden into necessity. But the questions now live in me, alongside the others that have accumulated through this project, and they have changed the shape of what I think building means.
The given is not the necessary. The arrangements are not the terrain. And the most consequential thing a builder can build is not a product or a company but an institutional arrangement that makes it possible for others to build, on terms they have chosen, toward purposes they have examined, in a world they have participated in constructing.
That is what I am reaching for. It is larger than anything I have reached for before. And it begins, as Unger insists all genuine building begins, with the refusal to accept the world as given.
— Edo Segal
Every revolutionary technology produces a moment when the first institutional response is mistaken for the only possible one. The factory system crystallized before labor law. The internet crystallized before privacy regulation. Now AI is crystallizing inside corporate frameworks, platform monopolies, and market assumptions that democratic communities never chose — and the naturalization is happening faster than any previous cycle. Roberto Mangabeira Unger's philosophy of false necessity identifies this as the deepest political danger of our time: not the technology itself, but the illusion that the arrangements surrounding it are inevitable.
This volume applies Unger's anti-necessitarian framework to the AI revolution as documented in Edo Segal's *The Orange Pill*. It examines how institutional imagination — the capacity to envision and construct organizational forms that do not yet exist — has become the scarcest resource in an age of abundant computation. It traces Unger's argument that democratic communities must exercise the transformative vocation: the refusal to accept any arrangement as final, combined with the practical commitment to building better alternatives at the speed the moment demands.
— Roberto Mangabeira Unger

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Roberto Mangabeira Unger — On AI* uses as stepping stones for thinking through the AI revolution.