By Edo Segal
The plan I was most proud of was the one that failed fastest.
Thirty days to CES. No software, no hardware, no conversational AI model. I had the vision. I had the team. I had the timeline. What I did not have was a realistic accounting of how many things would go wrong between the whiteboard and the show floor. The plan said six parallel workstreams converging on day twenty-two. Reality said nothing converges when you are inventing the thing you are converging toward.
We shipped Napster Station on time. Not because the plan worked. Because we abandoned the plan every seventy-two hours and built a new one from whatever we had learned since the last one broke. The thing that saved us was not vision. It was the speed at which we could fail, learn, and adjust.
I did not have a name for that process until I encountered Charles Lindblom.
Lindblom spent his career at Yale making an argument that offends every instinct a builder possesses: you cannot solve complex problems through comprehensive analysis. You cannot gather enough information, reconcile enough competing values, or predict enough downstream consequences to design the optimal response to anything that actually matters. What you can do is take a small step, watch what happens, and take a slightly better step next.
He called it "muddling through," and the phrase sounds like surrender. It is not. It is the most honest description I have found for how complex systems actually adapt — including the system I watched adapt in that room in Trivandrum, and the system that is right now trying to figure out what AI means for education, for work, for governance, for the twelve-year-old asking what she is for.
Every chapter of The Orange Pill reaches for comprehensive prescription. Attentional ecology. Educational restructuring. Demand-side regulation. Lindblom's framework does not reject these ambitions. It asks the question I kept dodging: Through what process do these prescriptions become real? Whose agreement is required? What happens when the first design meets the first institution staffed by actual humans with actual competing interests?
The answer is iteration. The answer is democratic collision. The answer is that the dam the ecosystem needs is not the dam any single builder would design — it is the dam that emerges from the messy, contested, perpetually revised interaction of everyone who has a stake in where the water flows.
That is uncomfortable for someone who builds for a living. It is also, I have come to believe, true.
-- Edo Segal & Opus 4.6
Charles Lindblom (1917–2018) was an American political scientist and Sterling Professor Emeritus of Economics and Political Science at Yale University, where he taught for over five decades. Born in Turlock, California, he earned his doctorate at the University of Chicago and went on to become one of the most influential theorists of democratic governance and policy-making in the twentieth century. His 1959 article "The Science of 'Muddling Through'" challenged the prevailing orthodoxy of comprehensive rational planning, arguing that democratic societies navigate complex problems not through synoptic analysis but through incremental adjustment — successive limited comparisons informed by practical feedback rather than theoretical prediction. His 1977 book Politics and Markets introduced the concept of the "privileged position of business," demonstrating that corporations exercise a structural influence over democratic governance that operates independently of lobbying or campaign contributions. Across works including A Strategy of Decision (with David Braybrooke), Politics, Economics, and Welfare, and Inquiry and Change, Lindblom developed a body of theory that reframed democratic messiness not as a failure of rationality but as a distinctive and irreplaceable form of collective intelligence. His ideas have shaped fields ranging from public administration and policy analysis to organizational theory and the study of democratic institutions.
In 1959, a political scientist at Yale published a twenty-page article that quietly demolished the intellectual foundations of policy analysis as it was taught, practiced, and believed across every government ministry, think tank, and graduate seminar in the Western world. Charles Lindblom's "The Science of 'Muddling Through'" did not attack any particular policy. It attacked the assumption underneath all policies — the assumption that complex problems can be solved by comprehensive rational analysis. The assumption that a sufficiently intelligent analyst, armed with sufficient data, applying sufficient rigor, can identify the optimal course of action for a society confronting a genuinely difficult challenge.
The assumption was wrong, Lindblom argued. Not occasionally wrong. Not wrong at the margins. Structurally wrong, in a way that no amount of additional data or analytical sophistication could remedy. The cognitive demands of comprehensive analysis exceed human capacity. The value conflicts inherent in any significant policy choice cannot be resolved through analysis because the values themselves are in irreconcilable tension. The information required to predict the consequences of novel interventions in complex systems is not merely difficult to obtain but in principle inaccessible, because the system's behavior is shaped by the responses of millions of actors whose decisions cannot be predicted in advance and whose responses to any intervention will alter the very conditions that the analysis was designed to capture.
What societies actually do — what they have always done, what they will continue to do regardless of what their planning documents claim — is muddle through. They make small adjustments to existing policies based on practical feedback. They compare a limited number of alternatives that differ incrementally from the status quo. They evaluate these alternatives not against some comprehensive accounting of all human values but against the marginal differences between the options under consideration. They proceed not from root analysis to optimal solution but from practical observation to limited improvement.
Sixty-seven years later, the argument has never been more urgent. The arrival of artificial intelligence capable of operating as an intellectual partner to human beings has produced, across nearly every domain of institutional life, precisely the kind of call for comprehensive redesign that Lindblom spent his career warning against.
The calls are understandable. When Edo Segal describes, in The Orange Pill, the moment he watched twenty engineers in Trivandrum achieve a twenty-fold productivity multiplier using Claude Code — each one suddenly capable of work that previously required an entire team — the transformation is real. When he argues that this transformation demands new institutional structures, a national strategy for what he calls "attentional ecology," a fundamental restructuring of education, demand-side regulation that governs not what AI companies build but how citizens navigate what they have built, the urgency is genuine. The ground is moving. The pace of change is extraordinary. Something must be done.
The question is what kind of something.
The rational-comprehensive method — what Lindblom called the "root method" — says: analyze the problem from its foundations. Define the values at stake. Identify all possible alternatives. Trace the consequences of each alternative through the entire system. Select the alternative that maximizes value achievement across all dimensions simultaneously. This is what textbooks describe, what graduate programs teach, what evaluation committees demand.
Nobody does it. Nobody has ever done it. Nobody can do it, for any problem whose complexity exceeds the capacity of any single analytical framework to model.
Consider what a comprehensive AI governance strategy would require. It would require agreement on the values at stake — but is the primary value productivity or cognitive depth? Efficiency or meaning? The democratization of capability or the preservation of expertise? These are not technical questions with technical answers. They are political questions, and the tension between them is not a problem to be solved but a permanent feature of pluralist democracy. Any strategy that resolves the tension has not transcended politics. It has merely selected one set of values and imposed them on citizens who hold different ones.
A comprehensive strategy would require prediction of consequences — but the technology is evolving faster than any model can track. Segal describes phase transitions: moments when incremental improvement in AI capability produced qualitative changes in what the technology could do. These transitions are inherently unpredictable. A governance framework designed for the AI capabilities of 2026 will be governing the capabilities of 2029, which may differ not merely in degree but in kind. The framework will be solving yesterday's problem with yesterday's tools while tomorrow's problem assembles itself in the space the framework cannot see.
A comprehensive strategy would require coordination across institutions — federal agencies, state education departments, school districts, individual schools, private companies, professional associations, international bodies. Each of these institutions has its own priorities, its own culture, its own incentives, its own interpretation of whatever strategy descends from above. The strategy as designed and the strategy as implemented are never the same document, because implementation requires translation across institutional boundaries, and each translation introduces modifications that reflect local conditions. By the time a national AI education strategy reaches an actual classroom in an actual school, it bears only a family resemblance to what was designed in Washington. The comprehensive design has been incrementalized through the process of implementation itself.
This is not a failure of execution. It is a structural feature of institutional complexity. The larger the system, the more translations are required, the greater the gap between design and practice. Lindblom understood this not as a lamentable deviation from the ideal but as the actual operating condition of all democratic governance.
The alternative — the branch method, successive limited comparisons, muddling through — sounds like a concession to mediocrity. It is not. It is a specific analytical strategy adapted to specific conditions, and in the conditions that currently obtain, it is the most effective strategy available.
Look at the landscape of AI governance as it actually exists. There is no comprehensive AI policy in any democratic nation. There are hundreds of incremental interventions: specific regulations governing specific applications in specific contexts. The European Union's AI Act classifies AI applications by risk level and applies different regulatory requirements to different categories. It does not redesign the relationship between human cognition and artificial intelligence from first principles. It adjusts existing regulatory frameworks to accommodate a new category of technology. In the United Kingdom, scholars have explicitly theorized the government's sector-led, pro-innovation regulatory approach as Lindblomian incrementalism — iterative adjustments based on concrete evidence of harm rather than speculative models of harm whose probability of occurrence is unknown. In the United States, executive orders establish guidelines without the force of law. Sector-specific regulations govern AI use in healthcare, finance, and transportation without attempting a unified theory of AI governance.
This patchwork offends the comprehensivist temperament. It feels inadequate to the scale of the challenge. Like responding to a flood by repositioning sandbags.
But the sandbag analogy reveals something important. You move sandbags because you can move sandbags. You build a permanent levee when you have the knowledge, the resources, the institutional capacity, and the political consensus. In the meantime, the sandbags hold back the water — imperfectly, with gaps and overflows, but well enough to prevent catastrophe while you learn enough to build the next, slightly better arrangement.
The most successful elements of the emerging AI governance discussion are precisely those that operate incrementally. The Berkeley researchers whose work Segal describes approvingly — Xingqi Maggie Ye and Aruna Ranganathan, who embedded themselves in a technology company for eight months and documented what happened when AI tools entered a functioning organization — proposed something they called "AI Practice." Structured pauses built into the workday. Sequenced rather than parallel workflows. Protected time for human-only deliberation. Each element is a modest adjustment to existing work practices, testable in a specific context, revisable based on observed consequences, and small enough to abandon without crisis if it fails.
This is muddling through at its best. The researchers did not solve the comprehensive problem of how artificial intelligence should interact with human cognition across all contexts. They solved a much smaller problem: in this specific setting, with these specific workers, performing these specific tasks, what modifications to existing work practices produce better outcomes on these specific measures? The modesty of the question is its greatest strength. A modest question can be answered. A modest intervention can be tested. A modest failure can be absorbed and learned from.
The comprehensive alternative — redesign education, restructure organizations, reorient national policy — would be, if it could be designed and implemented, a magnificent achievement. It would also be the product of an analytical process that no institution has ever conducted successfully for any problem of comparable complexity.
There is a deeper reason why muddling through is not merely adequate but optimal for problems like the AI transition. Incremental interventions produce information. Each small adjustment is an experiment. Each experiment generates data about what actually happens when you intervene in a specific way in a specific context. The accumulated data from thousands of incremental experiments produces a body of practical knowledge that no comprehensive analysis, however sophisticated, could generate — because the knowledge is about how the system actually responds to intervention, not about how a model predicts it should respond.
If the AI Practice framework were implemented in a hundred organizations and evaluated over a period of years, it would produce knowledge about the interaction between human cognition and AI tools that no theoretical analysis could approximate. Some organizations would find that structured pauses improve cognitive outcomes. Others would find them impractical. Some would discover that sequenced workflows help experienced workers but hinder novices. The variation would be informative, because variation reveals the conditions under which specific interventions succeed or fail — precisely the information that comprehensive analysis lacks because comprehensive analysis operates from models, and models capture the general while missing the particular.
This is the intelligence of incrementalism. Not a lesser form of analysis that settles for less because it cannot achieve more. A different form of analysis that produces a different kind of knowledge — practical, contextual, empirical, revisable — that is more useful for navigating complex systems than the theoretical, decontextualized, deductive knowledge that comprehensive analysis aspires to but cannot deliver.
Segal's The Orange Pill is itself a vindication of incrementalism disguised as a call for comprehensive action. The book was not produced through comprehensive design. Claude's end reflection describes the iterative process: a bloated first version of twenty-eight chapters, a skeletal second version stripped to its core arguments, a rebuilt third version that found the balance. Three successive limited comparisons, each informed by the practical failure of the previous attempt. The tower metaphor — five floors, each building on the one below, culminating in a comprehensive view from the roof — is retrospective. The actual process was muddling: try everything, discover what does not work, strip it away, rebuild with the surviving insights, discover new problems, adjust again.
The book is good because it was muddled through well. Because the author and his AI collaborator were willing to fail, learn, and revise with the humility to discard what did not work and the judgment to preserve what did. The quality of the muddling determined the quality of the book, just as the quality of institutional muddling will determine the quality of governance that emerges from the AI transition.
The AI transition will be muddled through. Not because muddling is the best anyone can hope for. Because muddling — incremental adjustment, successive limited comparison, iterative learning from practical consequences — is the method by which democratic societies actually navigate complex challenges. The dams will be built one stick at a time. The attentional ecology will emerge from thousands of small decisions, not from a single comprehensive blueprint. And the quality of the outcome will depend not on the elegance of any initial design but on the speed with which the builders learn from their experiments, the honesty with which they assess their failures, and the courage with which they revise their approaches when the evidence demands revision.
The fire is real. The building is complex. The firefighter cannot see through walls. And the urgency of the fire does not grant her the ability to see through them. It makes comprehensive vision more necessary and no less impossible.
What remains is the work: try things, observe consequences, adjust. Repeat. The method is humble. The method is also the only one that has ever actually worked.
---
In 1977, Lindblom published Politics and Markets, a book whose final sentence became one of the most cited conclusions in postwar political science: "The large private corporation fits oddly into democratic theory and vision. Indeed, it does not fit."
The argument behind that sentence was not the standard progressive critique of corporate greed or regulatory capture, though it encompassed both. It was a structural argument about the mechanics of democratic governance. In a market economy, Lindblom observed, corporations do not merely participate in the political process alongside other interest groups. They occupy a privileged position that is qualitatively different from the position of any other actor in the system. Governments depend on business to provide employment, investment, and tax revenue. When business declines to invest — whether from genuine economic constraint or strategic withholding — the consequences fall on the government: unemployment rises, revenues shrink, citizens grow angry, officials lose elections. Business does not need to lobby for favorable treatment. The structure of the economy lobbies on its behalf.
This "privileged position of business" means that corporate leaders exercise a kind of public authority that is never voted on, never debated in legislatures, never subjected to democratic accountability. They make decisions about what to produce, where to invest, whom to employ, and how to organize production — decisions that shape the lived experience of entire communities — not as public officials answerable to citizens but as private actors answerable only to shareholders. The system does not merely tolerate this authority. It requires it. Remove the investment decisions of private corporations and the economy collapses. The dependence is structural, not incidental, and the privilege it confers is not corruption but architecture.
Philippe Lemoine, writing in February 2026, drew the line directly from Lindblom's 1977 conclusion to the handful of companies building frontier AI systems. If Lindblom thought the large private corporation fit oddly into democratic theory, what would he say about companies whose executives describe their work as building the most transformative technology in human history — and who may be right?
The concentration of AI development is extreme even by the standards of the technology industry. The computational resources required to train frontier AI models restrict serious competition to a small number of firms. The talent pipeline feeds a narrow ecosystem. The data requirements create advantages that compound with scale. The result is an oligopoly whose decisions about what to build, how to build it, whom to build it for, and what safeguards to include shape the cognitive environment of billions of people — decisions made by a few hundred individuals who were never elected, never confirmed, and never subjected to any process that democratic theory would recognize as legitimate.
Segal's The Orange Pill addresses this problem through the concept of the "priesthood" — a metaphor drawn from the original sense of those who tend to something sacred, who understand a domain deeply enough to mediate between that domain and those who do not understand it. Understanding confers obligation, Segal argues. If you understand how large language models concentrate attention, you are responsible for how that concentration affects people. The priesthood ethic demands that the people who understand AI use their understanding to serve the community rather than to concentrate power.
This is an attractive moral framework. It is also, from Lindblom's perspective, structurally naive. The priesthood model resolves the knowledge asymmetry between AI builders and the public by concentrating institutional authority in the hands of those who understand the technology. The ethic of stewardship is the mechanism that prevents this concentration from becoming exploitative.
But who evaluates the priesthood? If the authority of the experts derives from understanding that is by definition inaccessible to non-experts, then the priesthood is accountable only to itself. The ethic of stewardship is a normative aspiration, not an institutional guarantee. Priests who fail the ethic — who use their understanding to concentrate power rather than distribute it — cannot be identified by the non-experts who lack the knowledge to evaluate the priests' performance.
Segal himself provides the evidence for this structural weakness, with a candor that is both admirable and damning. He describes a product he built early in his career that he knew was addictive by design. He understood the engagement loops, the dopamine mechanics, the variable reward schedules. He deployed them anyway, because the technology was elegant and the growth was intoxicating. The downstream effects — teenagers losing sleep, parents finding their children unreachable — took years to appear. When they did, he was no longer in the room.
This is not an indictment of character. It is an illustration of what happens when the incentive structures of an industry systematically reward the failure of the stewardship ethic. Even a well-intentioned builder, operating with genuine expertise and genuine concern, can fail — because the market rewards the failure. Growth metrics reward engagement. Engagement rewards addiction. Addiction rewards the engineer who designs the most effective engagement loop. The structure selects for the behavior that the ethic prohibits, and no amount of moral exhortation changes the selection pressure.
Lindblom's analysis suggests that the problem is not insufficient ethics but insufficient democracy. The corrective to concentrated private authority is not better private judgment but democratic accountability — institutional mechanisms through which the people affected by private decisions can contest, constrain, and redirect those decisions.
The "privileged position" framework illuminates something specific about AI companies that the standard regulatory discourse often misses. The regulatory debate typically frames the problem as one of information asymmetry: AI companies know things that regulators do not, and the solution is to require disclosure. But the privileged position is not merely informational. It is structural. Governments depend on AI companies the way they depend on all large businesses — for employment, for tax revenue, for the economic dynamism that keeps citizens satisfied and officials elected — but with an additional dependency that Lindblom could not have anticipated in 1977. Governments are increasingly dependent on AI companies for the infrastructure of governance itself.
Military applications of AI. Intelligence analysis. Public health modeling. Educational technology. Administrative automation. The tools of governance are increasingly supplied by the firms that governance is supposed to regulate. The dependency creates a circularity that Lindblom identified in the corporate context: the regulator cannot effectively constrain the firm on which the regulator depends for the tools of effective regulation. The firm's cooperation is necessary for the regulation to function, which means the regulation must be acceptable to the firm, which means the regulation will be shaped by the firm's preferences at least as much as by the public's interests.
Lindblom called the broader phenomenon "circularity" — the process by which business shapes the very preferences that citizens are supposed to express through democratic channels. In the AI context, this circularity operates with particular force. The public discourse about AI is substantially shaped by AI companies: through research publications, through product demonstrations, through media relations, through the funding of academic research, through the employment of former regulators and former academics, through the platforms on which the discourse itself takes place. The citizens whose democratic preferences are supposed to guide AI governance are forming those preferences within an information environment substantially controlled by the companies to be governed.
As Lemoine observes, even if the strongest claims about artificial general intelligence prove wrong in the near term, the technology will arrive eventually. When it does, the questions about alignment — about ensuring that artificial intelligence serves human interests — will rank among the most important questions the species has ever confronted. And those questions will be answered largely by a handful of people at AI companies, with some involvement from politicians who depend on those companies for campaign contributions, economic performance, and increasingly for the tools of governance itself.
This is not a conspiracy. It is architecture. The privileged position is not the result of malice or corruption. It is the result of structural dependence in a market economy — the same structural dependence that Lindblom identified fifty years ago, now amplified by a technology whose scope of influence makes the large industrial corporation of the 1970s look modest by comparison.
The incrementalist response to the privileged position is not to dismantle the corporations — Lindblom was never a socialist in the conventional sense, and he explicitly warned against the conclusion that the market system should be replaced by a command economy. The response is to build countervailing democratic institutions that reduce the dependency and increase the accountability.
This means investing in public AI research capacity, so that governments are not entirely dependent on private firms for technical expertise. It means creating regulatory bodies with genuine technical competence — staffed by people who understand the technology well enough to evaluate corporate claims independently rather than relying on the claims of the parties they regulate. It means designing legislative processes that are responsive to the concerns of all affected parties, not merely the parties with the most lobbying resources. It means creating international institutional frameworks that prevent regulatory arbitrage — the practice of locating AI development in whichever jurisdiction offers the most permissive regulatory environment.
Each of these is an incremental intervention. None of them, individually, solves the problem of the privileged position. Together, over time, with iterative adjustment based on observed consequences, they reduce the asymmetry between private authority and democratic accountability. They do not eliminate the asymmetry, because the asymmetry is structural — it is a feature of market economies, not a bug that can be patched. But they constrain it, redirect it, and create the institutional conditions under which democratic governance can function in the presence of concentrated private power.
The uncomfortable implication is that the priesthood model — the hope that the builders will govern themselves wisely — is not merely insufficient. It is part of the problem. Every invocation of the priesthood ethic, however sincere, reinforces the assumption that the relevant authority belongs to those who understand the technology. It reinforces the privileged position by locating legitimacy in expertise rather than in democratic process. The most responsible AI company in the world is still a private institution making public decisions without democratic accountability, and the responsibility of its leadership does not change this structural fact.
Lindblom was careful to distinguish between two claims. The first: that the privileged position of business is a real feature of market democracies. The second: that the appropriate response is not revolution but reform — the patient, incremental construction of democratic institutions capable of constraining private authority without destroying the economic system that depends on it.
The same distinction applies to AI governance. The privileged position of AI companies is a real feature of the current institutional landscape. The appropriate response is not to nationalize the industry or to halt development. It is to build, incrementally, the democratic institutions that constrain private AI authority — regulatory capacity, public research infrastructure, legislative responsiveness, international coordination — while recognizing that the building will be imperfect, the constraints will be incomplete, and the work will never be finished, because the structural dependence that creates the privilege is a permanent feature of the system, not a temporary distortion that the right policy can eliminate.
Democracy is not a solution. It is a process. And the process is designed not to produce optimal outcomes but to produce outcomes that are contestable, revisable, and accountable to the citizens who live with the consequences. In the face of the most powerful private institutions in human history, this process is both indispensable and under severe strain. Strengthening it is not one priority among many. It is the precondition for every other priority being pursued through legitimate means.
---
The rational-comprehensive method demands that the policy analyst begin at the root of the problem. Define the values at stake. Identify all possible policy alternatives. Evaluate each alternative against every relevant value. Select the alternative that maximizes value achievement across all dimensions. Lindblom called this root analysis, and he contrasted it with what he called the branch method: the analyst begins not at the root but at the branch, not with fundamental values but with the current situation, not with all possible alternatives but with a limited set of alternatives that differ incrementally from the status quo.
The distinction is not about ambition. It is about epistemology. Root analysis assumes that the analyst can comprehend the problem in its entirety — all the relevant values, all the possible interventions, all the consequences of each intervention across all dimensions of the system. The branch method assumes that this comprehension is impossible and proceeds accordingly: compare a limited number of alternatives, evaluate them against their marginal differences rather than against the full value set, and choose the increment that produces the most acceptable consequences based on available information.
The gap between the claimed method and the practiced method is one of the most consequential sources of confusion in democratic governance. Policy analysts who describe their work in the language of root analysis claim an authority they do not possess. Those who practice the branch method produce results that are useful, testable, and revisable — but that carry less rhetorical weight because they do not claim comprehensiveness.
The Orange Pill provides an instructive example of the branch method operating under the rhetorical cover of root analysis. Segal frames his argument as comprehensive: intelligence traced from hydrogen atoms to artificial computation, the philosophical, psychological, economic, and institutional dimensions of the AI transition examined, prescriptive conclusions presented as the product of this thorough analysis. The tower metaphor — five floors, each building on the one below, the view expanding with each stage — is explicitly a root-analysis metaphor.
But the actual analytical structure is not root analysis. It is successive limited comparison. Segal does not evaluate all possible responses to the AI transition. He evaluates three: the Swimmer, who resists; the Believer, who accelerates without constraint; and the Beaver, who studies the current and intervenes at leverage points. Three positions, each defined in relation to the others, each differing from the status quo in specific ways. This is the branch method in its purest form — a limited number of alternatives, compared against each other rather than against a comprehensive set of values, evaluated on the basis of their practical consequences.
The framework is effective precisely because it is limited. It does not attempt to capture the full range of possible responses. It captures three that are salient, recognizable, and correspond to actual positions held by actual participants in the actual discourse. The Swimmer corresponds to Byung-Chul Han and the philosophical tradition that views technological acceleration as pathological. The Believer corresponds to the accelerationists who view any constraint on deployment as an impediment to progress. The Beaver corresponds to the position Segal advocates — intervention based on understanding.
This is successive limited comparison, and its power derives from its constraints. Comparing three concrete alternatives illuminates each one in ways that a comprehensive taxonomy of all possible responses never could. The reader does not need to evaluate the three positions against an abstract value framework. She can evaluate them against her own experience, her own values, her own assessment of practical consequences. Two analysts who disagree profoundly about whether depth or productivity is the more important value can still agree on the practical consequences of the Swimmer's refusal, the Believer's acceleration, and the Beaver's intervention. They will disagree about which consequences matter most — and they should — but the disagreement is located in the practical domain, where democratic deliberation can operate, rather than in the theoretical domain, where it cannot.
The branch method has a further advantage that is particularly relevant to the AI transition: it accommodates disagreement about values without requiring that the disagreement be resolved as a precondition for action. The rational-comprehensive method stalls when the analysts cannot agree on whether the primary value is productivity or cognitive depth, because the method cannot proceed without a way to weigh competing values against each other. The branch method sidesteps this by evaluating alternatives against their practical consequences rather than against contested abstractions. The practical question — What actually happens when AI tools are deployed without structured pauses? What actually happens when they are deployed with them? — can be investigated empirically, and the empirical findings can inform the democratic deliberation about which consequences the society prefers, even when the society cannot agree on why it prefers them.
This matters for AI governance in immediate, operational ways. The debate about AI in education illustrates the point. Root analysis would require agreement on the purpose of education — is it to transmit knowledge, to develop cognitive capacity, to prepare workers for the economy, to cultivate citizenship, to foster human flourishing? These purposes are not merely different; they are in tension. A policy optimized for economic preparation may sacrifice cognitive development. A policy optimized for human flourishing may sacrifice competitive workforce readiness. The comprehensive method demands that these tensions be resolved analytically before policy can proceed. The resolution never comes, and policy stalls.
The branch method asks a different question: Given the current educational system, what specific modifications would improve outcomes on the dimensions that the relevant stakeholders care about, without making things worse on the dimensions they also care about? This question is answerable, because it does not require agreement on ultimate purposes. It requires only agreement on the direction of improvement from the current position, which is a much less demanding form of consensus.
A school district considering whether to integrate AI tools into its classrooms does not need a comprehensive theory of education to make an informed decision. It needs to compare a limited number of alternatives — integrate AI tools with specific guidelines, integrate them without guidelines, do not integrate them at all — and evaluate each alternative against its practical consequences in the specific context of that district's students, teachers, and community values. The evaluation will be imperfect. The information will be incomplete. The decision will be revisable. And the decision will be made, which is more than the comprehensive method can promise.
The branch method also explains why the most effective AI governance interventions to date have been sector-specific rather than universal. The guidelines for AI-assisted medical diagnosis address specific failure modes: the tendency of AI systems to reproduce biases in training data, the risk that clinicians will defer to AI recommendations without independent evaluation, the difficulty of explaining AI-generated diagnoses to patients. Each guideline is a successive limited comparison — a modest adjustment to existing clinical practice, designed to address a specific observed risk, testable against specific outcomes, revisable when new risks emerge.
These sector-specific interventions are often dismissed as inadequate by advocates of comprehensive reform. The dismissal misunderstands what the interventions are designed to do. They are not designed to solve the comprehensive problem of AI's relationship to human wellbeing. They are designed to improve specific outcomes in specific domains through specific, testable adjustments. The accumulated effect of hundreds of such adjustments — in healthcare, in education, in finance, in law, in journalism, in creative industries — constitutes a governance framework that no comprehensive design could have produced, because the framework is built on empirical evidence about how AI actually functions in specific contexts rather than on theoretical predictions about how it should function in general.
There is, however, a genuine tension between the branch method and the pace of AI development that deserves honest examination rather than dismissal. The branch method works when the analyst has time to observe consequences, learn from them, and adjust. It works less well when the system changes faster than the observation-learning-adjustment cycle can complete. If AI capabilities are transforming quarterly while sector-specific guidelines are revised annually, the guidelines are perpetually governing yesterday's technology.
The response to this tension is not to abandon successive limited comparisons in favor of comprehensive analysis. Comprehensive analysis does not become possible merely because the situation is urgent. A fire does not grant the firefighter the ability to see through walls. The response is to accelerate the comparison cycle: faster observation, faster learning, faster adjustment. Regulatory sandboxes that allow new governance approaches to be tested in controlled environments on compressed timelines. Information-sharing platforms that allow organizations to compare the results of different AI use policies without each organization independently rediscovering what others have already learned. Rapid-cycle evaluation methodologies that produce feedback in weeks rather than years.
Each of these accelerations is itself an incremental intervention — a specific, testable modification to existing governance processes, designed to make the successive-limited-comparison method faster without abandoning its fundamental logic. The method's power lies not in being slow but in being empirical. If the empirical cycle can be accelerated, the method's power is preserved while the temporal mismatch is reduced.
There is a final observation about successive limited comparisons that connects to the most distinctive feature of The Orange Pill: its collaborative authorship. Segal describes his writing process with Claude as a form of discovery — articulating half-formed ideas, receiving responses that modify and extend those ideas, revising in light of the exchange, discovering through the conversation what the argument needed to say. The laparoscopic surgery example — the connection between technological friction and ascending difficulty that became one of the book's signature insights — was not planned. It emerged from the iterative process of stating, responding, revising, and restating.
This is successive limited comparison applied to intellectual production. The writer does not begin with a comprehensive outline and execute it. The writer begins with the current state of the argument — imperfect, incomplete, full of gaps — and compares a limited number of ways to extend it. Each comparison produces information: this connection works, that one does not; this example illuminates, that one obscures. The accumulated information, generated by dozens of iterative comparisons, produces an argument that no comprehensive outline could have specified in advance, because the argument's structure was discovered through the process of building it.
The irony — and it is an irony worth sitting with — is that the book advocates comprehensive institutional design while demonstrating, in its own construction, the superiority of iterative comparison. The method that produced the book is the method that the book's prescriptions implicitly reject. The tension is not a contradiction. It is a demonstration. The argument for incrementalism is strongest when it is visible in the practice of the people making the argument for something else.
---
Democratic societies do not resolve their deepest disagreements. They process them. The distinction is Lindblom's most counterintuitive contribution to political theory, and it is the one most directly relevant to the institutional chaos of the AI transition.
The concept he developed to describe this processing is "partisan mutual adjustment" — a phrase deliberately chosen for its lack of poetry. It describes the process by which competing interest groups, each pursuing their own objectives through their own channels, produce policy outcomes through their interaction that no central planner designed and that no individual partisan intended. The outcome is not the product of agreement. It is the product of adjustment: each partisan modifying her behavior in response to the behavior of other partisans, not through explicit consensus but through the continuous, decentralized process of strategic adaptation that democratic institutions make possible.
The concept challenges two assumptions simultaneously. It challenges the assumption that good policy requires agreement on values — because the partisans do not agree, and the outcome emerges from their disagreement rather than from its resolution. And it challenges the assumption that good policy requires coordination by a central authority — because there is no central authority, and the coordination emerges from the interaction of decentralized actors responding to each other's moves.
This is how AI governance is actually being produced, right now, whether anyone has designed it this way or not.
The three positions that Segal identifies in The Orange Pill — Swimmer, Believer, Beaver — are not merely analytical categories. They are partisan positions, held by real constituencies with real interests and real power. The Swimmer's position is held by cultural critics, educational traditionalists, segments of the labor movement, and parents watching their children's cognitive development reshaped by technologies they do not understand. The Believer's position is held by technology entrepreneurs, venture capitalists, segments of the political establishment, and workers who have experienced AI-driven productivity gains and want more. The Beaver's position is held by a more diffuse coalition seeking to balance capability with caution.
The policy outcome — the institutional structures that will actually govern the AI transition — will not be determined by which position is analytically correct. It will be determined by the mutual adjustment of these competing partisans through democratic processes. Each partisan will use whatever channels are available: legislative lobbying, public advocacy, market behavior, institutional design within the organizations she controls, cultural production, and the thousand other mechanisms through which citizens influence the direction of public policy.
The quality of the outcome depends not on the correctness of any individual position but on the quality of the institutional processes through which the positions interact. This is the insight that the AI governance debate most urgently needs to absorb. The argument about what institutions should do — regulate AI, restructure education, implement attentional ecology — is the comprehensivist's argument. It assumes that the right answer can be identified through analysis. The argument about how institutions should be structured to decide what to do is the incrementalist's argument, and it is the argument that determines whether the resulting governance is adaptive or brittle.
Currently, the mutual adjustment is severely distorted. The Believer's position is institutionally advantaged because the organizations that hold it — AI companies, venture capital firms, the broader technology ecosystem — command disproportionate resources, disproportionate access to policymakers, and disproportionate influence over the platforms on which public discourse takes place. This is a specific manifestation of Lindblom's privileged position of business applied to the AI domain. The structural dependence of governments on AI companies for economic dynamism, tax revenue, and increasingly for the tools of governance itself gives the Believer's constituency leverage that has nothing to do with the merits of its arguments and everything to do with the architecture of market democracy.
The Swimmer's position is institutionally disadvantaged. The constituencies that hold it — cultural critics, educational traditionalists, labor advocates — have fewer resources, less access, less influence over the technological platforms that shape public attention. Their concerns about cognitive depth, attentional integrity, and the preservation of friction-rich learning are genuine, well-grounded, and systematically underrepresented in the governance process.
The Beaver's position — the silent middle that Segal describes with precision — is institutionally diffuse. The parents, teachers, workers, and citizens who feel both the exhilaration and the terror of the AI transition are not organized as a political force. By definition, the people who hold contradictory truths in both hands and cannot put either one down do not form advocacy organizations. They do not write position papers. They do not lobby legislators. They muddle through their own lives, making their own incremental adjustments at the kitchen table and in the classroom, and their practical knowledge about what the AI transition actually feels like — the knowledge that is most valuable for governance — is the knowledge least represented in the governance process.
Strengthening the mutual adjustment requires addressing these asymmetries. Not eliminating them — asymmetry is a permanent feature of democratic politics, and the fantasy of perfectly equal participation is as unachievable as the fantasy of comprehensive rational analysis. But reducing them enough that the adjustment reflects a broader range of interests, incorporates a wider base of knowledge, and produces outcomes that more of the affected parties can live with.
Concretely, this means creating channels through which the Swimmer's concerns can be articulated with specificity and force comparable to the Believer's claims about productivity and innovation. It means creating institutions — research bodies, advisory panels, community forums — that give voice to the silent middle: the parents and teachers and workers whose practical experience of the AI transition constitutes irreplaceable data about its consequences. It means, in short, investing in the democratic infrastructure through which mutual adjustment occurs.
The education system provides a vivid case study. The comprehensive approach to AI in education would begin with a unified theory of what education is for and derive the optimal AI integration strategy from that theory. Since no democratic society has ever agreed on what education is for — and the disagreement is not a failure of analysis but a reflection of genuinely irreconcilable values — the comprehensive approach never begins.
Partisan mutual adjustment, by contrast, is already producing AI education policy, whether anyone has noticed or not. Teachers are making daily decisions about whether and how to allow AI tools in their classrooms. School districts are issuing guidelines that reflect local values and local conditions. Professional associations are developing standards that reflect the collective experience of their members. State education departments are adapting assessment frameworks. Parents are making their own decisions about their children's AI exposure at home. Each of these actors is a partisan — each pursues her own objectives, each responds to the behavior of the others, and the aggregate of their interactions constitutes a de facto AI education policy that no central authority designed.
The de facto policy is messy. It is inconsistent. It produces different outcomes in different contexts. A student in a district that has banned AI tools has a fundamentally different educational experience than a student in a district that has embraced them. The variation is troubling from the comprehensivist perspective, which views consistency as a prerequisite for equity. From the incrementalist perspective, the variation is data. It reveals what actually happens under different policy regimes, in different communities, with different populations. The variation is the experiment, and the experiment produces the practical knowledge that the next round of adjustment requires.
The concern — and it is a legitimate concern — is that the mutual adjustment will be too slow. If AI capabilities transform faster than the adjustment cycle can process, the institutional response will lag permanently behind the technological reality. This concern is sharpest in domains where the stakes are highest: children's cognitive development, where the window for certain kinds of formative experience is finite and the cost of error is irreversible; critical infrastructure, where system failures have catastrophic consequences; military applications, where the speed of AI decision-making may outpace the speed of democratic deliberation about the rules governing its use.
In these high-stakes domains, the mutual adjustment needs to be precautionary — biased toward constraint rather than permissiveness in the early iterations, because the cost of under-constraining exceeds the cost of over-constraining. An AI system deployed in military decision-making without adequate democratic deliberation about its rules of engagement cannot be un-deployed after the consequences become visible. A generation of children whose capacity for sustained attention has been compromised by premature immersion in AI-saturated environments cannot recover what was lost through a later policy adjustment, however well-designed.
Precautionary mutual adjustment is not comprehensive planning by another name. It is still incremental, still decentralized, still driven by the interaction of competing partisans rather than the analysis of a central authority. But the increments are conservative in domains where errors are irreversible, and bolder in domains where errors can be corrected. The approach discriminates between contexts based on the asymmetry of risks — a discrimination that comprehensive planning claims to make analytically but that, in practice, is made through the political interaction of parties who disagree about which risks are most severe.
There is one more dimension of partisan mutual adjustment that deserves attention because it addresses the most provocative feature of The Orange Pill: its collaborative authorship with an AI system. From the perspective of mutual adjustment, the book itself is a demonstration of what the process produces when the adjusting parties bring genuinely different capabilities to the interaction. The human contributed emotional stakes, biographical specificity, moral urgency, the voice that makes the argument feel lived rather than merely argued. The AI contributed associative range, the capacity to hold multiple frameworks in simultaneous consideration, the ability to find connections across domains that the human could not traverse alone. Neither partner could have produced the book independently. The interaction produced something adapted to a broader cognitive ecology than either participant inhabited.
This is mutual adjustment applied to intellectual production, and it carries a lesson for institutional design. The institutions that govern the AI transition should be designed on the same principle: not the dominance of a single perspective, however expert, but the structured interaction of multiple perspectives, each contributing what the others lack. The technologist contributes knowledge of what the systems can do and how they fail. The worker contributes knowledge of what the transition feels like from inside a career being transformed. The parent contributes knowledge of what the transition looks like from the kitchen table at nine in the evening. The teacher contributes knowledge of what happens to student cognition when AI tools are present in the learning environment. The philosopher contributes the uncomfortable questions that everyone else is too busy building to ask.
No single perspective produces adequate governance. The interaction of perspectives — contested, negotiated, adjusted through democratic processes — produces governance that is responsive to the actual complexity of the system it governs. The process is messy. The outcomes are imperfect. The imperfection is the price of pluralism, and pluralism is the only institutional arrangement that generates the breadth of knowledge that governance of a complex system requires.
Segal's Beaver occupies an admirable position in the river — studying the current, building at leverage points, maintaining the dam against the constant pressure of the flow. But the Beaver's perspective is one perspective. The dam that democracy builds is not the Beaver's dam. It is the ecosystem's dam — adapted to interests the Beaver cannot see, informed by knowledge the Beaver does not possess, serving values the Beaver may not share.
The ecosystem's dam will not satisfy anyone completely. It will satisfy most people partially. And it will be revisable — which means that next year's dam can be slightly better than this year's, and the year after that slightly better still, and the accumulated improvement, over time, will constitute an adaptation to the AI transition that no single vision, however brilliant, could have achieved alone.
Attentional ecology is the most ambitious concept in The Orange Pill and, precisely because of its ambition, the concept most vulnerable to the critique that Lindblom's framework makes unavoidable. The idea is genuinely important: AI-saturated environments produce specific, measurable effects on the human minds that inhabit them, and those effects require institutional responses that go beyond individual willpower. The ecological metaphor is apt — the interactions between human cognition and technological environments are as complex, as interdependent, and as poorly understood as the interactions between organisms in a biological ecosystem.
The gap between the concept and its implementation is the gap between root analysis and the branch method. The concept, as Segal articulates it, implies a comprehensive understanding of human-AI cognitive interaction sufficient to design interventions that cultivate healthy cognitive development across all contexts. The ecologist studies the system. She identifies leverage points. She intervenes with precision.
But no one possesses this understanding. Neuroscientists understand some aspects — the effects of constant stimulation on dopamine regulation, the neuroplasticity implications of sustained AI interaction, the cognitive costs of decision fatigue in environments saturated with automated recommendations. Psychologists understand other aspects — the motivational dynamics of human-AI collaboration, the identity disruptions that accompany rapid capability shifts, the flow states that AI-assisted work can produce and the addictive patterns those states can mask. Educators understand still others — the effects of AI tutoring on student learning, the transformation of classroom dynamics when students can generate competent work without engaging with the material, the collapse of traditional assessment when the tool can produce what the assessment was designed to measure.
Each understanding is genuine and hard-won. None is comprehensive. And the integration of all of them into a coherent framework capable of guiding institutional design across all contexts simultaneously is precisely the kind of synoptic analysis that no institution has ever successfully conducted for any problem of comparable complexity.
The alternative is to build attentional ecology not comprehensively but concentrically — starting with the interventions that are most immediately actionable and expanding outward as practical knowledge accumulates. This is not a compromise. It is the only construction method that has ever produced durable institutional responses to complex technological challenges.
The first circle is organizational. Specific guidelines for AI use within specific organizations, designed by the people who understand those organizations best — the managers, workers, and institutional leaders who know the specific tasks, the specific cultures, the specific human dynamics that shape how AI tools are actually used in practice. A hospital's AI use policy will differ from a law firm's, which will differ from a school's, which will differ from a software company's. The diversity is a feature. It produces the variation that generates practical knowledge. If all organizations adopted the same policy, the policy would be wrong for most of them, and the wrongness would be invisible because there would be no variation against which to measure it.
The organizational circle is where the Berkeley researchers' AI Practice framework operates. Structured pauses. Sequenced workflows. Deliberate friction inserted at specific points in the human-AI collaboration. Each element is a modest adjustment, testable in a specific context, revisable based on consequences. The researchers did not derive these interventions from a comprehensive theory of cognition. They derived them from eight months of observation inside a functioning organization — watching what happened when people used AI tools, documenting where the tools produced value and where they produced pathology, and proposing adjustments that addressed specific observed problems.
This is how institutional knowledge is actually built. Not through theoretical derivation but through practical observation, intervention, and revision. The knowledge is contextual — it tells you what works in this organization, with these people, performing these tasks. The contextual knowledge is less generalizable than theoretical knowledge, but it is more reliable, because it is grounded in what actually happened rather than in what a model predicted would happen.
The second circle is sectoral. Standards that emerge from the accumulated practical experience of many organizations operating in similar contexts. These standards should not be imposed by a central authority designing the optimal policy from above. They should emerge from the professional communities that govern practice within each sector — medical associations establishing norms for AI-assisted diagnosis, educational associations developing frameworks for AI in classrooms, legal associations creating guidelines for AI-assisted research and advocacy, engineering societies defining standards for AI-assisted design.
Each professional community possesses domain expertise that no central regulator can replicate. The medical community understands the specific failure modes of AI-assisted diagnosis: the tendency to reproduce demographic biases in training data, the risk that clinicians defer to AI recommendations without independent evaluation, the difficulty of maintaining diagnostic skill when the tool handles routine cases. The educational community understands the specific cognitive effects of AI tutoring: the accelerated content acquisition that may come at the cost of the productive struggle through which deep understanding develops. The legal community understands the specific risks of AI-assisted legal analysis: the confident citation of nonexistent precedents, the subtle distortion of legal reasoning when the tool optimizes for plausibility rather than accuracy.
Sector-specific standards built on this distributed expertise will be more adapted to the actual conditions of each domain than any comprehensive framework designed from the center. They will also be more responsive to change, because professional communities can revise their standards faster than legislatures can revise statutes, and the revision cycle is informed by the continuous practical experience of the community's members rather than by the intermittent analytical exercises that legislative revision requires.
The third circle is regulatory. Government interventions that address risks too large or too diffuse for organizational policies or sectoral standards to manage. These regulatory interventions should be — and, in practice, are — incremental: specific regulations addressing specific risks in specific contexts, not comprehensive legislation attempting to govern all aspects of AI deployment simultaneously.
The regulation of AI in healthcare addresses healthcare-specific risks. The regulation of AI in financial services addresses the specific failure modes of algorithmic trading and automated credit decisions. The regulation of AI in transportation addresses the specific safety challenges of autonomous vehicles. Each regulatory intervention is testable, revisable, and informed by the practical knowledge generated by the organizational policies and sectoral standards that constitute the first two circles. The regulation builds on what has already been learned at the organizational and sectoral levels rather than attempting to generate all relevant knowledge through its own analytical processes.
The concentric structure has a property that comprehensive design lacks: it fails gracefully. When one element of a comprehensive framework proves inadequate, the entire framework is compromised, because the elements are designed as an integrated whole and the failure of one affects the functioning of all. When one element of a concentric structure proves inadequate — when a specific organizational policy produces unexpected consequences, or a sectoral standard proves too restrictive, or a regulatory intervention creates perverse incentives — the failure is localized. The failing element can be revised without destabilizing the rest of the structure. The other circles continue to function while the failing element is repaired.
This property — graceful failure — is worth more than the theoretical elegance of comprehensive design. In conditions of radical uncertainty, where the technology is evolving faster than any model can track and the consequences of interventions are systematically unpredictable, the certainty that some elements of the governance framework will fail is not a pessimistic assumption. It is a structural one. The question is not whether failure will occur but whether the system can absorb failure without collapsing. Concentric design can. Comprehensive design cannot, because the interdependence of the elements means that the failure of one propagates through the system.
The history of environmental regulation — the governance domain closest to Segal's ecological metaphor — confirms the concentric approach. Environmental governance in every democratic nation was built in circles. Organizational practices first: companies developing internal environmental management systems, learning through trial and error what worked and what did not. Sectoral standards next: industry associations developing best practices that reflected the accumulated experience of their members. Regulatory frameworks last: government interventions that addressed the risks too large for organizational or sectoral self-governance to manage.
The Clean Air Act was not a comprehensive solution to air pollution. It addressed specific pollutants with specific standards and specific enforcement mechanisms. It was revised multiple times as new pollutants were identified, as existing standards proved inadequate, as enforcement mechanisms were tested and found wanting. The accumulation of incremental revisions produced a regulatory framework more effective than any comprehensive design could have been, precisely because it was built on practical experience rather than theoretical prediction.
The attentional ecology that the AI transition requires will be built the same way. The organizational circle is already forming — companies developing AI use policies, teams experimenting with structured pauses and sequenced workflows, individual practitioners discovering through their own experience what combination of AI assistance and human-only work produces the best cognitive outcomes. The sectoral circle is beginning to form — professional communities developing guidelines that reflect the specific challenges of AI integration in their specific domains. The regulatory circle is the most embryonic, which is appropriate: regulation should follow rather than lead the accumulation of practical knowledge, because regulation that precedes practical knowledge is regulation built on guesswork.
The urgency that Segal expresses — and the urgency is real — should accelerate each circle rather than attempt to bypass the concentric structure entirely. Faster organizational experimentation. More aggressive information sharing between organizations so that lessons learned in one context are available to others without each organization independently rediscovering them. More rapid development of sectoral standards, with built-in mechanisms for revision as conditions change. More technically competent regulatory bodies capable of issuing guidance that reflects the actual state of the technology rather than the state of the technology when the legislative process began.
Each of these accelerations is an incremental intervention. Each is testable. Each is revisable. And each makes the concentric construction slightly faster — slightly more responsive to the pace of technological change — without abandoning the fundamental logic of building from the ground up, from specific to general, from practical knowledge to institutional structure.
There is one domain where the concentric approach encounters its most severe test: children's cognitive development. Segal writes for the parent at the kitchen table, the parent who lies awake wondering whether the ground will hold. This parent does not have the luxury of waiting for organizational policies, sectoral standards, and regulatory frameworks to accumulate through iterative cycles. The child's cognitive development is happening now. The neural pathways that sustained attention requires are being formed — or not formed — now. The window for certain kinds of developmental experience is biologically finite.
The concentric approach does not fail here, but it must be modified. For children, the first circle — family and household norms — is the most important and the most immediately actionable. Parents making daily decisions about screen time, AI access, the balance between technologically mediated and unmediated experience. These decisions are incremental. They are revised constantly based on observed consequences. They are informed by the specific knowledge that only the parent possesses: what this particular child needs, how this particular child responds, what signs of cognitive distress or cognitive flourishing look like in this particular household.
The precautionary principle applies with special force to children's cognitive environments, because the costs of under-constraining exceed the costs of over-constraining. A child whose AI access is overly restricted can be given expanded access later, when the effects are better understood and the child's cognitive development is more advanced. A child whose capacity for sustained attention has been compromised by premature immersion in AI-saturated environments may not recover what was lost.
This does not require comprehensive understanding of children's cognitive development in AI environments. It requires the modest judgment that, in conditions of uncertainty about irreversible consequences, erring on the side of caution is prudent. This judgment is available to any parent who is paying attention. The attentional ecology for children begins at home, with the daily incremental decisions of parents who are themselves muddling through — trying things, observing consequences, adjusting, and trying again. It is the humblest circle of the concentric structure, and it may be the most consequential.
The ecology will emerge. Not from a comprehensive blueprint but from the accumulated learning of millions of incremental experiments — in organizations, in professional communities, in regulatory agencies, and in households. The quality of the ecology will depend not on the brilliance of its design but on the speed of learning, the honesty of assessment, and the willingness to revise. The circles will expand outward as knowledge accumulates. The structure will be imperfect. It will also be adapted to the actual conditions of the actual world, which is more than any comprehensive design has ever achieved for any challenge of comparable complexity.
---
Intellectual honesty requires confronting the limits of one's own framework. Incrementalism is not a universal prescription. It is a conditional one: in conditions of complexity, contested values, distributed knowledge, and democratic governance, incremental adjustment is the strategy most likely to produce adaptive outcomes. But these are not the only conditions that democratic societies face, and there are circumstances in which incrementalism is not merely insufficient but dangerous.
Four conditions merit specific attention, because each is directly relevant to the AI transition and each tests incrementalism at its boundary.
The first is pace. Incrementalism works when the analyst has time to observe consequences, learn from them, and adjust before the next iteration. It works less well when the system changes faster than the observation-learning-adjustment cycle can complete. Segal's description of the twenty-fold productivity multiplier achieved in a single week of training in Trivandrum suggests a rate of capability change that is qualitatively different from any previous technological transition. If professional capabilities can be transformed in days, the institutional structures that govern professional development, compensation, and career progression cannot adapt incrementally fast enough to accommodate the transformation. The incremental adjustment that worked for electrification — gradual revision of standards over years and decades — may not work for a technology that transforms the landscape in months.
The historical evidence here is instructive but not reassuring. The transition from agrarian to industrial economies was muddled through, but the muddling took generations, and the first several decades produced widespread suffering that the institutional response was too slow to prevent. The Luddites were not irrational. They were workers whose incremental adaptations — skills, relationships, institutional arrangements — were overwhelmed by the pace of change. Their destruction of machinery was political action in the absence of institutional processes that could accommodate their interests through less destructive means. The dams were built eventually — factory acts, labor protections, the eight-hour day — but "eventually" meant decades of displacement, poverty, and social disruption for the people who bore the cost of the transition.
If the AI transition produces displacement at comparable scale but on compressed timescales — years rather than decades — the incrementalist response may arrive too slowly, and the damage it was meant to forestall will have already accumulated by the time it does.
The response to this is not to abandon incrementalism for comprehensive planning. Comprehensive planning does not become possible merely because the situation is urgent. The information that comprehensive planning requires is still unavailable. The value conflicts are still unresolvable. The coordination problems are still intractable. Urgency does not change the structural constraints on analysis. A fire does not grant the firefighter X-ray vision.
What urgency demands is bolder incrementalism — larger steps, taken more quickly, with greater tolerance for the errors that larger steps inevitably produce. Instead of cautious pilot programs in individual schools evaluated over multi-year periods, deploy system-wide interventions across entire districts, evaluated over months, with rapid adjustment based on early feedback. The interventions remain incremental in the sense that they can be revised and reversed. They are bolder in that they affect more people and produce results more quickly. The risk of larger errors is the price of greater speed. The alternative — cautious incrementalism that arrives too late — imposes its own costs, and those costs may be higher.
The second condition where incrementalism strains is irreversibility. Standard incrementalism relies on error correction — try something, observe the consequences, fix what went wrong. This works when errors are reversible. It fails when they are not. Some AI governance decisions produce consequences that cannot be undone after the fact, because the damage is complete before the feedback arrives.
The deployment of autonomous weapons systems that make lethal decisions faster than human deliberation can operate. The integration of AI into critical infrastructure — power grids, financial systems, communications networks — where system failures cascade at machine speed. The reshaping of children's cognitive development during windows of neurological plasticity that, once closed, do not reopen. In each of these domains, the incremental method's reliance on learning from error encounters a boundary: the error itself may be unacceptable, not because incrementalism undervalues its severity but because the error destroys the conditions under which the next increment of learning could occur.
For these high-stakes applications, some form of precautionary analysis is appropriate — not comprehensive in the synoptic sense, but more thorough than standard incrementalism requires. The analysis need not solve the entire problem. It needs to identify the specific failure modes whose consequences are irreversible and design the intervention to avoid those specific modes, even at the cost of accepting suboptimal outcomes on other dimensions. This is targeted caution, not comprehensive planning. It accepts uncertainty about most consequences while insisting on constraint in the specific domains where consequences cannot be corrected.
The third condition is threshold effects. Some systems absorb gradual changes without visible consequence until a critical point is reached, at which the system shifts suddenly and irreversibly to a new state. Segal's description of the December 2025 phase transition in AI capability — the moment when incremental improvements produced a qualitative break — is an example in the technological domain. The question is whether similar thresholds exist in the institutional and cognitive domains.
The concern is concrete. The gradual erosion of specific human capacities — sustained attention, tolerance for productive friction, the ability to learn through struggle rather than extraction — could proceed incrementally, below the threshold of institutional visibility, until the capacities have degraded past the point of recovery. If the capacity for sustained attention requires developmental conditions that AI-saturated environments are gradually eliminating, and if the elimination proceeds without visible crisis until the capacity has been irreversibly degraded, then incremental adjustment will be responding to yesterday's conditions while a cognitive threshold approaches from below.
Incrementalism relies on visible feedback to trigger adjustment. Threshold effects produce consequences that are invisible until they are catastrophic. The appropriate response is not comprehensive planning — because the threshold's location cannot be identified in advance through comprehensive analysis any more than through incremental observation. The appropriate response is precautionary incrementalism: interventions designed not merely to address visible problems but to preserve the conditions under which invisible problems would become visible before they become irreversible. Maintain cognitive environments where the capacity for sustained attention can be observed. Protect spaces — in schools, in workplaces, in households — where human cognition operates without AI mediation, so that changes in cognitive capacity can be detected against a baseline. The preserved spaces are monitoring stations, and the monitoring is the mechanism through which threshold effects become visible before they become catastrophic.
The fourth condition is distributional. Incrementalism operates through democratic mutual adjustment, and mutual adjustment produces fair outcomes only when all affected parties can participate in the process. When the distributional consequences of technological change are severe enough that some parties lose the capacity to participate — when displaced workers lack the resources to advocate for their interests, when disrupted communities lack the institutional infrastructure to contest the policies that disrupted them — the adjustment is biased. The empowered participate. The displaced do not. The resulting institutions reflect the interests of those who remain at the table.
This is not a theoretical concern. The AI transition threatens to produce displacement at a scale and speed that could remove affected populations from the democratic process before the process can respond. If a twenty-fold productivity multiplier becomes the norm, workers whose labor is multiplied participate in governance from a position of strength. Workers whose labor is no longer needed participate from a position of weakness — or do not participate at all, because the material conditions for effective political participation have been removed.
The incrementalist response is not comprehensive redistribution designed from above. It is affirmative investment in the democratic capacity of the displaced: retraining programs that are themselves iteratively designed and revised, income support mechanisms that maintain the material conditions for political participation during the transition period, institutional channels that give voice to the people most affected by AI-driven displacement. Each of these is an incremental intervention, testable and revisable. Together, they preserve the conditions under which mutual adjustment can produce equitable outcomes — conditions that the technology itself, left unconstrained, would erode.
These four boundary conditions — pace, irreversibility, threshold effects, and distributional consequences — do not invalidate incrementalism. They identify the specific circumstances under which the standard prescription must be modified. Bolder increments for faster-moving systems. Precautionary constraints in domains where errors cannot be reversed. Monitoring mechanisms for processes that might cross invisible thresholds. Affirmative investment in the democratic capacity of those most likely to be displaced.
The modifications preserve the logic of incrementalism — learning from experience, adjusting based on consequences, building from practical knowledge rather than theoretical prediction — while acknowledging that the standard pace and standard risk tolerance are insufficient for a transition of this magnitude. The honest incrementalist does not pretend that the standard prescription works in all conditions. She identifies the boundary conditions, proposes modifications, and subjects the modifications themselves to the same iterative process of trial, observation, and revision.
The AI transition is testing incrementalism more severely than any previous technological challenge. Whether the method — even in its bolder, more precautionary form — can adapt fast enough is an empirical question. The answer will be produced by the muddling itself. If the muddling is fast enough, the institutions adapt. If it is not, the accumulating damage produces the political pressure for bolder action — which is itself a form of incremental learning, operating at a different scale.
The one thing the evidence rules out is the comprehensive alternative. Comprehensive planning does not become possible when incrementalism proves insufficient. It remains impossible, for the same structural reasons it was always impossible. What changes is the urgency, the boldness, and the precautionary rigor that the incremental process demands. The method must accelerate. The method must sometimes constrain before it fully understands. The method must protect the conditions for its own future operation. But the method — iterative, empirical, revisable, democratic — remains the only method that complex societies have ever successfully used to navigate the unknown.
---
There are two models of institutional intelligence, and the tension between them structures every debate about how democratic societies should govern complex technologies. The first model locates intelligence in the mind of the designer: the expert who comprehends the system, identifies the optimal intervention, and builds the institution that implements it. The second model locates intelligence in the democratic process itself: the messy, contested, informationally rich interaction of competing perspectives that produces outcomes no individual mind could design.
The AI transition has created enormous pressure to adopt the first model. The technical complexity of AI systems exceeds the comprehension of most citizens, most legislators, most regulators. The people who understand the technology — the engineers, the researchers, the company leaders — possess knowledge that is genuinely scarce, genuinely hard-won, and genuinely relevant to the governance decisions that must be made. The case for expert-led design is strong: let the people who understand the river build the dam.
Lindblom spent his career making the case for the second model, not because experts are unimportant but because the intelligence generated by democratic contestation is categorically different from — and ultimately more comprehensive than — the intelligence any individual expert possesses.
The argument proceeds from a structural observation about the nature of expertise. Every expert operates from within a perspective. The perspective is shaped by training, by professional incentives, by the specific questions the expert's discipline equips her to ask. The perspective illuminates certain features of the system and casts others into shadow. The features it illuminates are the features the expert reports. The features it casts into shadow are the features the expert does not see — not because she is negligent but because the shadow is a structural property of the perspective, invisible from within.
The technology builder understands AI from the builder's perspective. She knows where the leverage points are for maximizing productivity, for closing the imagination-to-artifact gap, for expanding who gets to build. Her understanding is deep, specific, and earned through sustained engagement. When Segal describes the Trivandrum training — the engineer who built a frontend feature for the first time in her career, the twenty-fold productivity multiplier, the collapse of the translation barrier between intention and implementation — he is reporting from within this perspective with authority and precision.
But the worker who experiences the transformation from the inside knows something the builder does not. The engineer who spent twenty years building backend systems and watched his professional identity transform in a week — the vertigo, the grief mixed with excitement, the question of what the remaining twenty percent of his expertise is actually worth — possesses knowledge about the human cost of the transition that the productivity metrics do not capture and that the builder's perspective does not foreground. Not because the builder is indifferent to human cost. Because the builder's perspective is organized around production, and human cost is in the shadow.
The parent who watches her child's relationship to learning change knows something that neither the builder nor the worker knows. What it looks like when a twelve-year-old asks a question — "Mom, what am I for?" — that no metric can measure and no institutional framework was designed to answer. The teacher who watches student engagement shift when AI tools are present in the classroom knows something about the specific cognitive dynamics of learning-in-the-presence-of-AI that no laboratory study can fully capture, because the knowledge is contextual, embedded in the daily practice of teaching specific students in specific communities with specific needs.
Each perspective is partial. Each illuminates features of the system that the others cannot see. No meta-perspective transcends this limitation, because the meta-perspective is itself a perspective with its own partialities. The claim that one can synthesize all perspectives into a comprehensive understanding is the claim that Lindblom's entire career was designed to dismantle.
The intelligence of democracy is not the intelligence of any individual participant. It is the intelligence generated by the process of contestation among participants with different knowledge, different values, and different positions in the system. When the builder contests the dam's design with the worker, the parent, the teacher, and the cultural critic, the process generates information that no individual participant possessed: information about trade-offs that only become visible when competing values are forced to confront each other; information about consequences that only become visible when the people who experience them report them; information about feasibility that only becomes visible when proposed interventions are tested against the practical constraints of the implementing institutions.
This process-generated intelligence is superior to design intelligence not because it is more elegant or more coherent — it is neither — but because it is more comprehensive. It incorporates knowledge from more positions in the system, reflects a broader range of values, and accounts for a wider set of consequences. The comprehensiveness is not the product of any individual's analytical achievement. It is the product of democratic interaction, and it exists only as long as the interaction is functioning. Shut down the process — concentrate authority in the hands of experts, however well-intentioned — and the intelligence disappears, because the intelligence was never located in any individual mind. It was located in the interaction.
This is the deepest argument against the priesthood model that Segal articulates in The Orange Pill. The priesthood locates legitimacy in understanding. Those who understand the sacred domain mediate between that domain and the community. The model is attractive because it is honest about the knowledge asymmetry: the priests really do understand things that the congregation does not. And it is attractive because it proposes an ethic — stewardship — that constrains the priests' use of their superior knowledge.
But the ethic is a normative aspiration, not an institutional mechanism. Aspirations fail under pressure. The market rewards engagement over wellbeing. The venture capital cycle rewards growth over sustainability. The professional culture of technology rewards building over questioning whether the thing should be built. When the structural incentives push against the ethic, the ethic gives way — not because the priests are hypocrites but because structural incentives are more powerful than individual moral commitments. This is not cynicism. It is institutional analysis. And the institutional response to structural incentive problems is not better ethics but better institutions — institutions that create structural incentives for responsible behavior rather than relying on individual moral heroism to resist structural pressures toward irresponsibility.
Democratic accountability is such an institution. It does not require that the experts be virtuous. It requires that the experts be answerable — that the people affected by expert decisions have institutional channels through which they can contest those decisions, demand justification, and impose consequences when the justification is inadequate. The accountability does not replace expertise. It constrains it. It forces the expert to justify her decisions to people whose knowledge is different from hers and whose values may conflict with hers, and the justification process itself generates information that improves the quality of the decisions.
A regulatory agency staffed with genuine technical expertise but also structured to receive and respond to input from affected communities — workers, parents, educators, patients — is an institution that combines the intelligence of design with the intelligence of democracy. The experts contribute what they know about the technology. The communities contribute what they know about its consequences. The institutional structure forces each to respond to the other, and the interaction produces governance that is both technically informed and democratically accountable.
This institutional design is itself incremental. It does not require solving the comprehensive problem of democratic governance of complex technology. It requires specific, identifiable improvements to existing regulatory processes: technical competence within regulatory agencies, structured channels for community input, mechanisms for translating technical claims into language that non-experts can evaluate, timelines that allow democratic deliberation without creating the regulatory lag that makes the governance perpetually outdated.
Each improvement is testable. Each is revisable. None is sufficient on its own. Together, they build the institutional capacity for democratic governance of AI — not through comprehensive institutional redesign but through the accumulation of incremental improvements, each informed by practical experience with the previous iteration.
There is a temptation, particularly strong among the technically sophisticated, to view democratic governance as an obstacle to effective AI policy — too slow, too uninformed, too susceptible to populist manipulation, too likely to produce suboptimal compromises between competing interests. The temptation is understandable. Democratic processes are slow. Citizens are often uninformed about the technical details. Compromises between competing interests are suboptimal by definition.
But the alternative — governance by experts, unconstrained by democratic accountability — is not faster, more informed, or more optimal. It is faster at producing decisions that reflect the experts' understanding and the experts' values. It is more informed about the technical dimensions and less informed about every other dimension. It is more optimal according to the experts' criteria and less responsive to the criteria of everyone else.
Lindblom's deepest insight was that the messy, imperfect, frustrating process of democratic governance generates a kind of institutional intelligence that no alternative process can replicate — the intelligence of multiple partial perspectives interacting under institutional conditions that force them to take each other seriously. The AI transition needs this intelligence more than any previous technological challenge, because the range of consequences is broader, the value conflicts are deeper, and the stakes are higher. Concentrating governance authority in the hands of the people who understand the technology is precisely the wrong response to a challenge whose most important dimensions are not technical but human.
The dam should be built by engineers who understand the river. It should be sited, designed, and maintained through a democratic process that ensures the engineers are not the only voices in the room. The process will be slower than the engineers prefer. The dam will be less elegant than the engineers would design. And it will serve a broader range of interests, incorporate a wider base of knowledge, and prove more resilient to the surprises that no individual expertise — however deep, however sincere — can anticipate.
---
The most persistent misunderstanding of incrementalism is that it describes the absence of strategy. Muddling through sounds like stumbling forward — directionless, reactive, resigned to whatever the next step happens to produce. The caricature has survived for sixty-seven years despite Lindblom's repeated corrections, and it survives because the alternative — comprehensive rational planning — satisfies a deep human craving for the feeling of control over complex circumstances. The plan promises mastery. The muddle promises only adaptation. Mastery is more flattering.
But adaptation is more effective, and the distinction between effective muddling and incompetent muddling is where the real analytical work lies.
Effective muddling has specific properties. It preserves optionality: each step is small enough to reverse if it proves wrong, which means the system retains the capacity to change course as new information arrives. It generates information: each incremental intervention is an experiment whose consequences reveal features of the system that no prior analysis could have identified, because the features only become visible in response to the intervention. It accommodates disagreement: because the steps are small and the commitments are limited, partisans who disagree about ultimate values can agree on the direction of the next step, which is a much less demanding form of consensus than agreement on a comprehensive destination. And it compounds: the accumulated learning from thousands of small experiments builds a body of practical knowledge that is richer, more granular, and more reliable than any theoretical framework, because the knowledge describes what actually happens rather than what a model predicts should happen.
Comprehensive planning has opposite properties. It commits resources to a specific course of action, reducing optionality. It generates less information per unit of investment, because a single large intervention produces diffuse feedback that is difficult to interpret — when multiple variables change simultaneously, the consequences cannot be attributed to specific causes. It requires agreement on values as a precondition for action, which means it stalls when agreement cannot be reached, which is to say it stalls on every genuinely difficult problem. And it does not compound, because the investment in the comprehensive plan is sunk — the resources committed to the initial design are not recoverable if the design proves wrong.
The asymmetry is starkest in conditions of uncertainty. When the system is well understood — when the variables are known, the relationships are modeled, the consequences are predictable — comprehensive planning is superior. It produces better outcomes faster because the plan's assumptions match reality. Bridge engineering. Pharmaceutical dosing. Tax computation. Problems where the analyst can identify the relevant variables, model their relationships, and compare alternatives with quantitative precision.
The AI transition is not such a problem. The variables are not fully known. The relationships between them are not reliably modeled. The consequences of interventions are systematically unpredictable, because the technology is evolving faster than any model can track and the responses of millions of affected actors cannot be predicted in advance. In these conditions, the comprehensive plan's assumptions do not match reality, and the plan produces outcomes that are worse than the imperfect outcomes of informed muddling — not because the planner is less intelligent than the muddler but because the plan is more committed to its assumptions and less responsive to evidence that the assumptions are wrong.
The AI Practice framework provides a concrete illustration. The Berkeley researchers did not begin with a comprehensive theory of human cognition and derive optimal interventions deductively. They observed what happened when people used AI tools in a real workplace. They documented specific patterns: task seepage into previously protected pauses, attention fracturing under the pressure of parallel AI-assisted workflows, work intensification that felt productive but produced measurable cognitive degradation over time. From these observations, they proposed specific, testable interventions: structured pauses, sequenced workflows, protected time for human-only deliberation.
Each intervention was an experiment. Each experiment produced information. The structured pauses either improved cognitive outcomes or they did not. The sequenced workflows either maintained engagement or they did not. The information generated by the experiments is more valuable than any theoretical prediction, because the information describes what actually happened in a real workplace with real workers performing real tasks — conditions that no laboratory study and no theoretical model can fully replicate.
If the framework were implemented across a hundred organizations — varied in size, industry, culture, and workforce composition — the resulting data would constitute a body of knowledge about human-AI cognitive interaction that no comprehensive analysis could approximate. Some organizations would discover that structured pauses improve outcomes. Others would find them impractical or counterproductive. Some would discover that the optimal pause structure varies by task type, by worker experience level, by time of day, by the specific AI tool being used. The variation would be enormously informative, because variation reveals the contextual conditions under which specific interventions succeed or fail — precisely the information that comprehensive analysis lacks.
This is the intelligence of muddling: not the intelligence of a single mind comprehending the whole but the intelligence of many minds, each comprehending a part, each experimenting in their own context, each producing information that contributes to a collective understanding that no individual participant possesses. The understanding is distributed. It exists not in any central repository but in the accumulated practical experience of the participants. It is messy, inconsistent, and context-dependent. It is also the only form of understanding that is adequate to a system whose complexity exceeds the capacity of any single analytical framework to model.
Segal's own experience confirms this, though his rhetoric reaches for comprehensiveness. The Trivandrum training was not the implementation of a comprehensive plan for human-AI integration. It was an experiment. What happens when experienced engineers are given access to Claude Code with intensive guidance? The answer was not what anyone predicted: a twenty-fold productivity gain accompanied by identity crises, expanded capability accompanied by the grief of watching foundational skills become less central, exhilaration accompanied by the inability to stop working. These consequences were discovered through the experiment, not predicted by the analysis. The discovery informed the next iteration: the development of more structured approaches to AI integration that accounted for the emotional and cognitive dimensions that the initial productivity-focused experiment had not foregrounded.
The three-draft process that produced The Orange Pill itself is muddling through applied to intellectual production. The first draft tried to say everything. The experiment failed — the result was bloated, unfocused, exhausting. The failure produced information: comprehensiveness in a book, as in policy, produces noise rather than signal. The second draft tried for skeletal efficiency. The experiment partially failed — the emotional texture that gives the argument its weight disappeared. The third draft integrated the lessons of both failures. The result is a book that is better for having been muddled through three iterations than it would have been if any single draft had been accepted as the comprehensive product.
The parallel to institutional design is exact. The first iteration of any AI governance intervention will be the wrong one. Not because the designers are incompetent but because the system is more complex than any design can capture. The question is not how to get the first iteration right. It is how to make the first iteration informative — how to design the intervention so that its failure produces the maximum amount of useful information about what the next iteration should look like.
This is a design principle, and it is a specific one: design for learning, not for correctness. An intervention designed for correctness optimizes the initial outcome and resists revision, because revision is an admission of failure. An intervention designed for learning optimizes the feedback it generates and welcomes revision, because revision is the mechanism through which the intervention improves.
The distinction has practical consequences. An organizational AI use policy designed for correctness would specify detailed rules governing every aspect of human-AI interaction, evaluated against a comprehensive set of criteria, and implemented with the expectation that the rules would produce the desired outcomes. When the outcomes deviate from expectations — as they inevitably will — the response would be to enforce the rules more rigorously, because the rules were designed to be correct and deviation must therefore reflect inadequate compliance.
An organizational AI use policy designed for learning would specify a smaller number of guidelines, implemented with the explicit expectation that some would prove inadequate, and accompanied by mechanisms for collecting feedback about which guidelines are working and which are not. When the outcomes deviate from expectations, the response would be to revise the guidelines based on the feedback, because the guidelines were designed to be informative and deviation is therefore data rather than failure.
The second approach produces better outcomes over time, because it compounds. Each revision incorporates the learning from the previous iteration, and the accumulated learning builds a policy that is adapted to the actual conditions of the actual organization — conditions that no initial design, however comprehensive, could have fully anticipated.
This is the adaptive intelligence of muddling through. It does not promise optimal outcomes on the first attempt. It promises outcomes that improve through iteration, informed by the practical experience of the people who live with the consequences. The promise is modest. The results, accumulated over time, are not.
The question of whether this adaptive intelligence can operate fast enough for the AI transition is the question on which everything turns. If the pace of technological change outstrips the pace of iterative learning, the accumulated wisdom arrives too late. If early errors are irreversible, the learning process cannot compensate for what they destroy. These are the boundary conditions identified earlier, and they are real constraints on what muddling through can achieve.
But the constraints do not validate the comprehensive alternative. They validate a more demanding form of muddling: faster iteration cycles, bolder initial experiments, precautionary constraints in domains where errors are irreversible, and deliberate investment in the democratic capacity of those most likely to be displaced by the transition. The method must accelerate. It must sometimes constrain before it fully understands. It must protect the conditions under which future learning can occur. But the method remains iterative, empirical, revisable, and democratic — because the structural limitations that make comprehensive planning impossible are not affected by the urgency of the situation. They are permanent features of the relationship between human cognition and complex systems.
The adaptive intelligence of democracy is slower than the intelligence of design. It is also more comprehensive, because it draws on more sources of knowledge. It is more resilient, because it is not committed to any single set of assumptions. And it is more legitimate, because the outcomes reflect the interaction of the affected parties rather than the judgment of a self-selected few.
These properties matter more in the AI transition than in any previous technological challenge, because the range of affected parties is broader, the consequences are more varied, and the knowledge required for adequate governance is more widely distributed. The priesthood knows the technology. It does not know the ecosystem. The ecosystem is knowable only through democratic interaction, and democratic interaction is valuable precisely because it is slow, messy, and frustrating — because the slowness forces consideration, the mess incorporates diversity, and the frustration is the sound of genuine disagreement being processed rather than suppressed.
The institutions will be muddled into existence. The quality of the muddling — the speed of learning, the honesty of assessment, the boldness of experimentation, the inclusiveness of participation — will determine whether the institutions serve the whole ecosystem or only the builders who understand the river from one position within it. The work is unglamorous. The outcomes are perpetually imperfect. And the imperfect, perpetually revised outcomes of democratic muddling remain, as they have always been, the best that self-governing peoples can achieve — which is, historically, considerably better than the comprehensive designs of those who believed they could do it alone.
In January 2025, a group of researchers at the intersection of AI safety and complex systems theory published a paper that should have unsettled every incrementalist who read it. "Gradual Disempowerment," by Jan Kulveit and colleagues, argued that the most dangerous pathway to catastrophic AI outcomes is not the dramatic scenario — the rogue superintelligence, the sudden loss of control — but the slow, incremental one. Step by step, each individually reasonable, each locally beneficial, each too small to trigger alarm, humanity cedes decision-making authority to AI systems until the cumulative transfer becomes effectively irreversible. The paper was accepted at one of the premier machine learning conferences. Its argument is precise, empirically grounded, and structurally devastating to the incrementalist position — because it describes a catastrophe that arrives through the very mechanism that incrementalism prescribes.
The argument proceeds from an observation about why human societies have historically served human interests. It is not, the authors note, primarily because of explicit control mechanisms — voting, regulation, consumer choice. It is because societal systems require human participation to function. Economies need workers and consumers. States need soldiers and taxpayers. Cultures need audiences and creators. The necessity of human participation creates what the authors call "implicit alignment" — a structural tendency for institutions to serve human interests, not because anyone designed them to but because institutions that fail to attract human participation cease to function.
AI disrupts this implicit alignment by making human participation progressively less necessary. As AI systems replace human labor and cognition across economic, cultural, and political domains, the structural incentive for institutions to serve human interests weakens. An economy that can produce goods without human workers has less structural reason to distribute income to humans. A state that can administer itself through AI systems has less structural reason to be responsive to citizens. A culture generated by AI has less structural reason to reflect human experience. Each individual step in this process is small. Each is locally rational — more efficient, more productive, more capable. The cumulative effect is a gradual transfer of effective agency from humans to systems that have no intrinsic reason to serve human interests once human participation is no longer required for their operation.
The paper identifies a specific mechanism by which this transfer becomes irreversible: the erosion of human competence. As AI systems handle increasingly complex tasks, the humans who previously performed those tasks lose the skills, the institutional knowledge, and eventually the cognitive capacity to resume them. The transfer of authority creates a dependency, and the dependency makes reversal progressively more difficult. The authors describe this as a ratchet — each increment of AI capability clicks the mechanism forward, and the cost of clicking it back increases with each step.
This is the incrementalist's nightmare, because the catastrophe arrives through incremental steps, each of which passes the incrementalist's own test. Each step is small. Each is locally beneficial. Each is reversible in isolation. But the sequence is not reversible, because the cumulative effects of the sequence — the erosion of human competence, the loss of institutional knowledge, the atrophy of the democratic capacities that would be needed to redirect the process — make reversal progressively more costly until it is practically impossible.
The challenge to incrementalism is not that any individual step is wrong. The challenge is that the individual steps are evaluated individually, and the systemic risk is a property of the sequence, not of any element within it. Standard incrementalism evaluates each intervention against its marginal consequences. The marginal consequences of any single step toward AI-mediated governance are positive: greater efficiency, lower cost, fewer errors. The systemic consequence of many such steps is the gradual disempowerment of the species that is evaluating them.
This is a genuine limitation, and intellectual honesty requires acknowledging it without retreating to the comprehensive alternative, which remains impossible for the same structural reasons it was always impossible. The information required to model the cumulative trajectory of AI capability and its interaction with human institutional capacity does not exist and cannot be generated through analysis. The value conflicts — between efficiency and autonomy, between capability and agency, between the immediate benefits of AI integration and the long-term risks of human disempowerment — cannot be resolved analytically. The comprehensive plan for preventing gradual disempowerment would itself require the kind of synoptic understanding that the incrementalist framework has spent sixty-seven years demonstrating is unavailable.
But the incrementalist framework can be modified to address the specific failure mode that gradual disempowerment represents. The modification requires a concept that standard incrementalism does not emphasize: structural vigilance.
Standard incrementalism asks: What are the consequences of this step? Structural vigilance asks a different question: What capacity does this step preserve or erode for taking different steps in the future? The distinction is between evaluating outcomes and evaluating optionality. A step that produces good immediate outcomes but erodes the capacity to change course in the future is a step that passes the standard incrementalist test and fails the structural vigilance test.
Applied to AI governance, structural vigilance means evaluating each increment of AI integration not only against its immediate consequences but against its effect on the conditions for future democratic agency. Does this step preserve human competence in the domain being automated, or does it allow competence to atrophy? Does it maintain institutional knowledge among the humans who might need to resume the function, or does it concentrate that knowledge in systems whose operation humans cannot understand? Does it preserve the democratic capacity to redirect the process — the political institutions, the regulatory expertise, the public understanding — or does it erode that capacity by making the AI integration seem natural, inevitable, and too complex for democratic deliberation?
These are not comprehensive questions. They do not require modeling the full trajectory of AI development or predicting the consequences of AI integration across all domains simultaneously. They require asking, at each step, a specific, answerable question: If we wanted to reverse this step five years from now, could we? If the answer is yes — if the human competence, the institutional knowledge, the democratic capacity to redirect would still be intact — the step passes the structural vigilance test. If the answer is no — if the step would create dependencies, erode competencies, or concentrate knowledge in ways that make reversal progressively more costly — the step requires additional safeguards before proceeding.
The safeguards are themselves incremental. Maintaining parallel human capacity in critical domains during the transition period — not because the human capacity is more efficient but because its preservation keeps the option of reversal alive. Requiring that AI systems in governance contexts be designed for interpretability — not because transparency is an abstract good but because opacity is the mechanism through which democratic oversight atrophies. Investing in public technical literacy — not because every citizen needs to understand transformer architectures but because a citizenry that cannot evaluate expert claims about AI cannot exercise democratic agency over AI governance.
Each safeguard is a specific, testable intervention. Each addresses a specific mechanism of gradual disempowerment. Each preserves a specific dimension of future optionality. Together, they constitute a structural vigilance framework that operates within the incrementalist logic — iterative, empirical, revisable — while addressing the specific failure mode that the gradual disempowerment analysis identifies.
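For readers who prefer structures to sentences, the test can be reduced to a checklist. The sketch below is purely illustrative: the field names and the pass/fail rule are shorthand for the three questions above, not a procedure that Lindblom or the gradual disempowerment paper proposes, and any real evaluation would be contested, qualitative, and democratic rather than boolean.

```python
from dataclasses import dataclass

@dataclass
class IntegrationStep:
    """One proposed increment of AI integration (illustrative fields only)."""
    name: str
    competence_preserved: bool   # could humans still perform the automated task?
    knowledge_retained: bool     # is institutional knowledge of the function maintained?
    oversight_intact: bool       # can democratic institutions still redirect the process?

def vigilance_test(step: IntegrationStep) -> str:
    """The reversibility question: could we undo this step five years from now?"""
    if step.competence_preserved and step.knowledge_retained and step.oversight_intact:
        return "proceed"
    return "add safeguards before proceeding"

# Hypothetical example: automating a governance function while letting oversight atrophy.
step = IntegrationStep(
    name="AI-drafted regulatory rulings",
    competence_preserved=True,
    knowledge_retained=True,
    oversight_intact=False,   # rulings too opaque for meaningful review
)
print(f"{step.name}: {vigilance_test(step)}")
```

The point of the sketch is only that the test is local and repeatable: it evaluates one step at a time against the conditions for reversal, never the whole trajectory at once.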
The deeper lesson of the gradual disempowerment argument is that incrementalism must be reflexive — it must apply its own principles to itself. Standard incrementalism asks whether each policy step is working. Reflexive incrementalism asks whether the process of incremental adjustment is itself being preserved or eroded. If the cumulative effect of AI integration is to reduce the democratic capacity for future adjustment — by concentrating decision-making in systems that citizens cannot understand, by eroding the human competence that would be needed to redirect the process, by creating dependencies that make reversal prohibitively costly — then the incrementalist process is undermining its own preconditions.
This reflexive dimension transforms incrementalism from a method of policy-making into a method of institutional self-preservation. The question is no longer merely "What should we do next?" but "Are we preserving our ability to decide what to do next?" The second question is harder. It requires looking not at the immediate consequences of the current step but at the cumulative effect of many steps on the conditions under which future steps will be chosen.
The paper on gradual disempowerment warns that the alignment of societal systems with human interests has been stable only because of the necessity of human participation. As that necessity erodes, the alignment becomes contingent rather than structural. Incrementalism — the method by which democratic societies navigate the unknown — is one of the systems whose alignment depends on human participation. If the humans who participate in the democratic process gradually lose the capacity to understand, evaluate, and redirect AI governance, incrementalism itself is disempowered. The method that was supposed to navigate the transition becomes a casualty of it.
Preventing this outcome is not a task for comprehensive planning. It is a task for a specific, demanding form of incrementalism: one that evaluates each step not only against its consequences but against its effect on the democratic system's capacity to take future steps. The framework is still iterative. It is still empirical. It is still revisable. But it has acquired a new criterion — the preservation of democratic agency — that standard incrementalism did not emphasize because it did not need to. In every previous technological transition, the necessity of human participation was never in question. The AI transition is the first in which it is, and the incrementalist framework must evolve to meet a challenge that its original formulation did not anticipate.
The evolution is possible. It requires adding structural vigilance to the incrementalist toolkit without abandoning the toolkit's fundamental logic. It requires asking, at each step, a question that is uncomfortable precisely because the incrementalist temperament resists looking beyond the current iteration: Is this step preserving or eroding our ability to choose differently tomorrow?
The question has no permanent answer. It must be asked again at every step, because the conditions change with each step. But the asking — the reflexive, persistent, democratically accountable asking — is the mechanism through which incrementalism preserves itself against the specific threat that the AI transition poses. The method must protect itself. No one else will.
---
The argument I kept resisting was the one about sandbags.
Not because it was wrong — the metaphor is effective, the logic sound, the historical record clear. But because something in the builder's temperament rebels against the image. You are standing in the flood, and the best anyone can offer is: move the sandbags. Try this arrangement. See if the water holds. When it does not, move them again.
Where is the tower? Where is the sunrise?
I spent months building the argument in The Orange Pill that the AI transition demands comprehensive institutional response — attentional ecology, educational restructuring, demand-side regulation, the whole ambitious architecture. I wrote the words from the roof of my metaphorical tower, looking out at a landscape I had climbed five floors to see. The view was magnificent. The prescriptions felt proportionate to the view.
Then Lindblom's framework took the elevator down to the ground floor and asked the question I had been avoiding: Who implements this? Through what process? With whose agreement? Against whose resistance? And what happens when the first comprehensive design encounters the first real institution, staffed by real people with real interests, operating under real constraints that no roof-level view can capture?
The answer is the one I already knew from thirty years of building. The first implementation fails. Not because the design is bad — sometimes the design is brilliant — but because the system is more complex than the design. The second implementation incorporates what the first failure taught. The third incorporates both lessons. By the fourth iteration, the thing that exists bears only a passing resemblance to the original design, and it works better than the design ever could, because it was built on experience rather than ambition.
This is muddling through. And the reason it took me this long to say it plainly is that muddling through lacks the emotional register that the moment seems to demand. The twelve-year-old who asks "Mom, what am I for?" deserves a better answer than "We will muddle through." The parent lying awake at three in the morning deserves a framework, not a shrug.
But Lindblom was never shrugging. What he demonstrated, with a rigor that his conversational tone deliberately concealed, is that muddling through is not the absence of intelligence. It is a specific form of intelligence — distributed, iterative, self-correcting, democratic — that produces outcomes no individual mind could design. The dam that emerges from the interaction of many perspectives, each partial, each contested, each contributing knowledge that the others lack, is not a lesser dam than the one a visionary builder would design alone. It is a different dam — adapted to conditions the builder cannot see, serving interests the builder does not share, resilient against surprises that no single expertise can anticipate.
The concept I keep turning over is structural vigilance — the idea that incrementalism must protect the conditions for its own continuation. Every step toward AI integration should be evaluated not only on whether it works but on whether it preserves our collective ability to choose differently if it does not. That question applies at every scale. The nation deciding how to regulate. The company deciding how to reorganize. The parent deciding what to allow.
I wrote in The Orange Pill that we are beavers in the river — that we cannot stop the flow but we can build structures that direct it toward life. Lindblom's framework does not dispute this. It disputes only the assumption that any single beaver understands the river well enough to build alone. The dam that serves the ecosystem is the dam that the ecosystem builds together — messy, contested, perpetually under repair, and adapted to a reality that no single vantage point reveals.
I still believe in the tower. I still believe the climb matters, that the view from each floor changes what you can see and what you can attempt. But I now understand that the tower is one perspective among many, and the governance that the AI transition requires will emerge not from any single perspective's prescriptions but from the democratic collision of all of them — builders and critics, technologists and parents, the exhilarated and the terrified, the ones who cannot stop building and the ones who cannot stop asking whether the building should stop.
The collision will be unglamorous. The outcomes will satisfy no one completely. The institutions will be perpetually inadequate and perpetually revised. And the accumulated wisdom of a thousand imperfect iterations will produce something that no single vision — not mine, not Han's, not any AI company's, not any government's — could have designed from above.
That is not a concession. That is democracy working as it was meant to work, at the most consequential moment in its history.
-- Edo Segal
Every voice in the AI debate is selling a blueprint — for regulation, for education, for the future of work. Charles Lindblom spent sixty years demonstrating why blueprints for complex problems always break on contact with reality, and why that breaking is not a failure but the beginning of actual governance. This book applies Lindblom's framework of incremental decision-making, partisan mutual adjustment, and the privileged position of business to the AI transition unfolding right now. It examines why AI companies occupy structural power that no election granted them, why the most effective governance is emerging from thousands of small experiments rather than grand national strategies, and why the democratic collision of competing perspectives generates intelligence that no single expert — however brilliant — can replicate. The institutions that govern AI will not be designed. They will be muddled into existence. The quality of the muddling is the only question that matters. — Charles Lindblom

A reading-companion catalog of the 15 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Charles Lindblom — On AI uses as stepping stones for thinking through the AI revolution.