By Edo Segal
The tool I couldn't stop using was the one I never thought to question.
Not whether it worked. It worked spectacularly. Not whether it was useful. It was the most useful thing I had ever touched. The question I never asked was simpler and more dangerous: whose interests does this serve beyond mine?
I described in The Orange Pill the exhilaration of building with Claude — the four-in-the-morning sessions, the inability to stop, the distance from imagination to artifact collapsing to the width of a conversation. Every word of that was true. What I did not describe, because I did not yet have the vocabulary, was the structure I was building inside of. The infrastructure I depended on without governing. The terms I accepted without negotiating. The political economy I participated in without examining.
Evgeny Morozov gave me that vocabulary. And it made me deeply uncomfortable.
Morozov is not anti-technology. I need to say this upfront because the discourse will try to file him there, and if you let it, you will miss what he is actually doing. What he is against is something he calls solutionism — the reflexive conversion of every human experience into a technical problem awaiting its fix. The ideology is so pervasive in my world that I breathed it like air. I celebrated the collapse of friction without asking what the friction was doing. I measured the productivity gain without asking who captures the value. I built dams in the river without asking who controls the river itself.
Morozov asks the questions that builders like me are structurally incentivized to avoid. Not "Does the tool work?" but "Who governs the tool?" Not "Is the builder empowered?" but "Is the builder dependent?" Not "Can we build it?" but "Should this experience be converted into a problem at all?"
These are political questions. They do not have engineering answers. And that is precisely why the technology discourse keeps converting them into engineering challenges — because engineering challenges are solvable, and political questions require the slow, contentious, imperfect work of collective decision-making that no product roadmap can accommodate.
I am still a builder. I still believe in the amplification thesis at the heart of The Orange Pill. But Morozov forced me to see something I had been looking past: that amplification within a structure of concentrated governance is not the same thing as freedom. The capability is real. The sovereignty may not be.
This is a lens you need before you decide what to build next.
— Edo Segal × Opus 4.6
Evgeny Morozov (born 1984) is a Belarusian-born writer, researcher, and technology critic who has become one of the most prominent intellectual voices challenging the prevailing narratives of Silicon Valley. Raised in Soligorsk, Belarus, he studied in Bulgaria and later at Harvard and Georgetown. His first book, The Net Delusion: The Dark Side of Internet Freedom (2011), dismantled the prevailing assumption that the internet was an inherently democratizing force, showing how authoritarian regimes could exploit the same technologies activists relied upon. His second, To Save Everything, Click Here: The Folly of Technological Solutionism (2013), introduced the concept of "solutionism" — the ideology that treats every dimension of human existence as a technical problem amenable to optimization — into mainstream discourse. Through essays in The New York Times, The Guardian, The New Yorker, the Boston Review, and the New Left Review, Morozov has extended his critique to artificial intelligence, coining the term "AGI-ism" and arguing that AI reinforces what he calls "Panglossian neoliberalism." He has advocated for democratic governance of digital infrastructure, data sovereignty, and what he terms "intelligence amplification" over artificial intelligence — using technology to enhance human decision-making rather than replace it. He remains one of the few technology critics whose work engages simultaneously with philosophy, political economy, and the technical specifics of the systems he analyzes.
In 2013, Evgeny Morozov published To Save Everything, Click Here, a book whose title was designed to irritate precisely the people who most needed to read it. The book's target was not any particular technology but a way of thinking about technology that had become so pervasive in Silicon Valley and its cultural satellites that it had ceased to be recognizable as a way of thinking at all. Morozov called it solutionism — the instinct to treat every dimension of human existence as a problem awaiting its technological fix — and he argued, with the polemical force that would become his signature, that this instinct was not merely mistaken but ideologically dangerous, a framework that systematically depoliticized questions that were inherently political by recasting them as engineering challenges amenable to technical optimization.
A decade later, the instinct has found its most powerful instrument. The arrival of large language models capable of producing working software from natural-language descriptions, of generating competent prose on any subject, of simulating the surface features of expert judgment across virtually every professional domain, represents not the invention of solutionism but what Morozov's framework would identify as its apotheosis — the moment when the distance between identifying a problem and deploying a solution has collapsed to the width of a conversation, and with it the space in which a society might pause to ask whether the thing being solved was actually a problem, whether the problem as defined bore any meaningful relationship to the experience it claimed to address, and whether the solution, however effective on its own terms, might be destroying something that the terms could not capture.
The solutionist operation proceeds in two steps, and the steps are so familiar they have become invisible. The first step is redefinition: a human experience, which may be complex, ambiguous, and valuable precisely in its resistance to simplification, is recast as a problem with identifiable parameters and a specifiable solution space. The second step is optimization: a technical intervention is designed, deployed, and evaluated according to metrics that the redefinition itself established. The original experience — with all its irreducible complexity, all its dimensions that resist quantification, all its entanglement with questions of value and meaning and purpose that no metric can capture — has been replaced by its technically tractable shadow. The shadow is easier to work with. The shadow yields to intervention. And the shadow is not the thing.
Consider what this means in the context of the AI tools that The Orange Pill celebrates with such genuine and infectious enthusiasm. Edo Segal describes an engineer in Trivandrum who built a complete user-facing feature in two days, having never written a line of frontend code. The capability expansion is real — nobody disputes this. The engineer described what the interface should feel like in human terms, and the tool translated her description into working code she had never learned to write. The barrier between her imagination and its realization had been, in Segal's striking phrase, compressed to the width of a conversation.
Morozov's framework asks what experience was redefined as a problem in this compression. The experience of existing at the boundary of one's competence — of not knowing how to do something and needing to learn it before one can build what one imagines — has been recast as a friction to be eliminated, a barrier to be collapsed, a cost to be optimized away. The engineer did not learn frontend development. She did not undergo the process by which a body of unfamiliar knowledge becomes embodied understanding through sustained engagement with its difficulties. She did not experience the productive frustration of failing at something new and discovering, through the specific texture of the failure, something about the structure of the problem that no working solution can teach. She got the feature. She did not get the understanding. And the solutionist framework, which can measure the feature but cannot measure the understanding, records the interaction as pure gain.
The inadequacy of this accounting is not a minor quibble about educational philosophy. It points to the structural blindness at the center of the solutionist worldview — a blindness that Morozov has diagnosed with increasing precision as AI has amplified the ideology's reach. In his 2023 New York Times essay "The True Threat of Artificial Intelligence," Morozov coined the term "AGI-ism" to describe the broader ideological formation driving the pursuit of artificial general intelligence. AGI-ism, he argued, is "just the bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative" — no alternative, that is, to the market-driven, efficiency-maximizing, problem-solving orientation that treats every domain of human existence as raw material for technical optimization. The connection to neoliberalism is not metaphorical. It is structural. The same logic that insists the market is the optimal mechanism for allocating economic resources insists that the algorithm is the optimal mechanism for allocating cognitive ones.
Morozov identified three specific biases that this ideology embeds in AI systems and in the culture that adopts them. The "market bias" assumes that private-sector actors will consistently outperform public ones in developing and deploying intelligent systems. The "adaptation bias" assumes that the appropriate human response to AI is adaptation — learning to use the tools, restructuring one's workflows, acquiring the new skills the tools demand — rather than transformation of the conditions under which the tools are produced and deployed. The "efficiency bias" assumes that efficiency is the master value against which all others must be measured, that a process which produces the same output with less friction is by definition superior, regardless of what the friction was doing for the people who experienced it.
These biases are not incidental to the AI tools. They are constitutive of them. The tools are designed to solve problems efficiently. Their value proposition is the rapid, competent resolution of whatever the user specifies. The user who uses the tools as designed — who specifies problems and accepts solutions — is using them correctly. And the correct use of the tools reinforces the ideology that produced them, because every successful interaction confirms the solutionist proposition: the problem was real, the solution worked, the friction was a cost, and its elimination was a benefit. The cumulative effect of millions of such interactions, each individually defensible, is the progressive restructuring of an entire culture's relationship to difficulty, uncertainty, and the experiences that only difficulty and uncertainty can produce.
Segal himself provides, with admirable honesty, the diagnostic evidence for exactly this restructuring. He describes working with Claude until four in the morning, unable to stop, unable to find the boundary between productive engagement and compulsion. He describes forgetting to eat for four hours. He recognizes, with the self-awareness of a builder who has spent decades at the frontier, the specific pattern of addiction — and yet the recognition does not produce the capacity to stop, because the tool is not offering something harmful. It is offering something useful, something genuinely productive, something that meets a need so deep that the body's signals of hunger are overridden by the intensity of the cognitive collaboration. The solutionist response to this experience is predictable and has already been proposed in a thousand variations across the discourse: build a reminder app, set a timer, design a productivity framework that imposes boundaries on the unbounded engagement. Convert the experience of compulsion into a problem and solve it with another tool.
Morozov's response would be different. He would ask what the experience of compulsion is telling us about the nature of the tool, about the nature of the culture that produced it, and about the nature of a political economy in which the inability to stop working is not recognized as a symptom of structural pathology but celebrated as evidence of the tool's power. He would observe that Segal's four hours without eating are not a personal failing to be corrected by better self-management. They are a data point in a much larger pattern — the pattern of a culture that has internalized the solutionist imperative so thoroughly that the distinction between choosing to work and being unable to stop working has dissolved, and that the dissolution is experienced not as a loss of autonomy but as a gain in productivity.
The solutionist ideology operates, in Morozov's analysis, through what might be called a ratchet mechanism. Each successful solution reinforces the legitimacy of the problem-solution framework. Each reinforcement makes it slightly harder to question the framework itself — to ask whether the experience that was redefined as a problem might have been valuable in its original form, whether the dimensions of the experience that the redefinition discarded were the dimensions that mattered most. The ratchet only turns in one direction. Each cycle of problem-identification and solution-deployment brings more of human experience within the solutionist framework, and each expansion of the framework makes the unconverted experiences — the experiences that resist redefinition, that refuse to yield to optimization, that are valuable precisely in their recalcitrance — look increasingly anomalous, increasingly inefficient, increasingly like problems that have not yet found their solutions.
AI accelerates this ratchet with a specificity that previous technologies could not achieve. The smartphone brought solutionism into every waking hour. Social media brought it into every social relationship. But AI brings it into the process of thought itself. When the tool can produce competent text on any subject, the experience of not knowing what to think — the experience of genuine intellectual uncertainty, of sitting with a question long enough for the question to reshape the questioner — is redefined as a problem of writer's block, of insufficient information, of cognitive friction that the tool can eliminate. The experience of struggling with a difficult problem is redefined as an inefficiency that the tool can optimize away. The experience of being confused is redefined as a deficit that the tool can correct.
Each redefinition is locally defensible. Each solution works on its own terms. And each successful solution tightens the ratchet, making it slightly harder to access the experiences that the solutions have displaced — the uncertainty that is the precondition for genuine thought, the struggle that is the medium through which skill develops, the confusion that is the cognitive state preceding the kind of clarity that no external source can provide.
Morozov argued in his 2024 essay for the Boston Review that contemporary AI reinforces what he calls "Panglossian neoliberalism" — a credo "championed by venture capitalists, tech CEOs, and startup founders" that "asserts that we already live in the best of all possible worlds and that there is no alternative to the market-driven provision of our tech infrastructures." The credo's Panglossian dimension holds that the current arrangement is fundamentally sound and merely requires optimization; its neoliberal dimension holds that the market is the correct mechanism for performing that optimization. AI is both the product and the engine of this credo: it was produced by the market-driven institutions whose legitimacy it reinforces, and its operation within those institutions produces outcomes that confirm the credo's core propositions — that problems yield to technical solutions, that efficiency is the master value, that the appropriate response to any dissatisfaction is not political contestation but better engineering.
The question Morozov forces onto the AI discourse — the question that the discourse would prefer not to confront — is not whether the tools work. They manifestly do. The question is what the working of the tools is doing to the culture's capacity for the kind of thinking that the tools themselves cannot perform: the thinking that questions frameworks rather than operating within them, that evaluates whether a problem should be solved rather than how to solve it most efficiently, that sits with the irreducible complexity of human experience rather than converting it into the tractable simplicity of a technical specification.
This is not a question that can be answered by building a better tool. It is a question about the ideology that makes tool-building the answer to every question, including this one. And it is the question that Morozov has spent his career trying to make audible above the noise of solutions — the noise that, in the age of AI, has become louder, more comprehensive, and more difficult to distinguish from the sound of thinking itself.
The distinction between a problem and an experience is the load-bearing wall of Morozov's entire intellectual edifice, and it is the distinction that the AI moment has made simultaneously more important and more difficult to maintain. A problem has parameters. It can be specified, bounded, decomposed into components, and evaluated against criteria that establish when it has been solved. An experience has no such properties. It is irreducible, situated, temporally extended, and valuable in ways that resist quantification — sometimes valuable precisely because it resists quantification, because the dimensions of the experience that matter most are the dimensions that no metric can capture and no algorithm can optimize.
The solutionist conversion of experience into problem is not merely an intellectual error, though it is that. It is, in Morozov's analysis, a political operation with material consequences. Every experience successfully redefined as a problem becomes a market opportunity. Every problem solved by a tool becomes a revenue stream. The larger the domain of human experience that can be brought within the problem-solution framework, the larger the addressable market for the companies that sell solutions. The ideology and the business model are not coincidentally aligned. They are structurally identical. Solutionism is a business model that has achieved the status of a worldview, and its success as a worldview is what sustains its viability as a business model.
Consider grief. A person has lost someone she loved. The loss is enormous, encompassing not only the absence of the person but the absence of the future they would have shared, the conversations that will not happen, the mutual understanding that was built over years and cannot be reconstructed with anyone else. Grief is not a problem. It is the process through which the bereaved person integrates the loss into her continued existence, developing — through pain that is not incidental to the process but constitutive of it — the emotional capacity to live in the altered landscape that the loss has created.
The solutionist looks at grief and sees a problem: the person is suffering. The solution space includes therapy apps, AI-powered journaling tools, chatbots trained on therapeutic dialogue, platforms that connect the bereaved with support communities algorithmically matched to their specific loss profile. Each solution addresses a genuine dimension of the experience. None of them addresses the experience itself, because the experience is not a collection of addressable dimensions. It is a totality that is degraded, not improved, by being decomposed into components and optimized piecewise.
The therapeutic industry would object, and the objection has merit in specific cases — clinical depression, pathological grief that has become chronic and debilitating. Morozov is not arguing against intervention where intervention is genuinely necessary. He is arguing against the default assumption that all difficulty is pathological, that the appropriate response to suffering is always its elimination, that an experience whose defining characteristic is its painfulness must therefore be a problem whose defining characteristic is its solvability. The assumption converts a developmental process into a deficiency, and the conversion prevents the development that the process would have produced.
This analysis applies with particular force to the cognitive experiences that AI tools most directly address. Consider the experience that Segal describes in The Orange Pill with such striking honesty — the moment when Claude's prose outran his thinking, when the output sounded better than the thought behind it, when the smoothness of the text concealed the absence of the deliberation that should have produced it. Segal recognized what was happening and retreated to a coffee shop with a notebook to recover his own voice, to undergo the slower, harder, uglier process of figuring out what he actually believed before allowing the tool to express it more elegantly than he could manage alone.
That retreat is the experience that solutionism cannot see. The experience of not yet knowing what you think — of sitting with intellectual uncertainty long enough for genuine thought to emerge from the discomfort — has no place in the solutionist framework, because within that framework it registers only as a deficit: the writer lacks a draft, the thinker lacks a position, the builder lacks a specification. The tool addresses the deficit. The draft appears. The position is generated. The specification materializes. And the experience that the deficit contained — the specific, productive, often painful cognitive work that occurs when the mind is engaged with a question without yet having committed to an answer — has been bypassed.
Morozov would call this the preemptive draft problem, and it is perhaps the most consequential application of solutionism to human cognition. The writer who receives a competent draft before she has completed her own deliberation has not been assisted in thinking. She has been relieved of the necessity of thinking, which is a fundamentally different thing. The draft shifts her cognitive orientation from generative to evaluative — from the open, exploratory, uncertain process of discovering what she thinks to the closed, judgmental process of assessing what the machine has produced. Research in cognitive psychology has demonstrated that anchoring effects are powerful and persistent: the first frame you encounter constrains every subsequent judgment, even when you know the frame is arbitrary. The preemptive draft is the ultimate anchor. It establishes the terms within which all subsequent "thinking" will occur, and the terms were set not by the thinker's deliberation but by the machine's statistical extrapolation from its training data.
The distinction between generating text and generating ideas is critical here, and it is easy to lose. Claude generates text that resembles the products of genuine thinking with sufficient fidelity that the difference can be hard to perceive — even for the person who receives the text and knows, at some level, that she did not produce the underlying thought. The text is competent. It addresses the topic. It makes relevant points. It organizes the analysis in a structure that a reader would find persuasive. It does everything that thinking would produce except the thinking itself.
Morozov's 2024 essay for the Boston Review, "The AI We Deserve," proposed a framework for understanding what is lost in this substitution. Drawing on the pragmatist philosopher John Dewey, Morozov distinguished between "instrumental reason" — the goal-directed, problem-solving rationality that AI embodies — and what he called "ecological reason," a mode of intelligence that "stresses both indeterminacy and the interactive relationship between ourselves and our environments." Ecological reason, in Morozov's formulation, "thrives on nuance and difference, and thus resists automation." It is the mode of cognition that operates when a person is genuinely thinking rather than merely solving — when she is exploring rather than executing, questioning rather than answering, attending to the irreducible particularity of a situation rather than subsuming it under a general category that yields to algorithmic treatment.
The concept of ecological reason offers an alternative to the solutionist framework that is neither nostalgic nor technophobic. Morozov does not argue that instrumental reason is illegitimate or that AI should be abandoned. He argues that a culture that recognizes only instrumental reason — that treats ecological reason as mere inefficiency, as the residual messiness that better algorithms will eventually clean up — is a culture that has amputated a dimension of its own intelligence. And AI, as currently designed and deployed, systematically privileges instrumental reason while marginalizing ecological reason, because instrumental reason is the mode that yields to computation while ecological reason is the mode that resists it.
Segal's own account of writing The Orange Pill illustrates the tension with unusual clarity. The moments he describes as most valuable in his collaboration with Claude — the moments when the tool found connections he had not seen, when it bridged domains he could not have bridged alone, when it held his half-formed ideas and returned them clarified — are moments of instrumental reason operating at extraordinary efficiency. The tool found the bridge. The tool made the connection. The tool clarified the idea. Each operation is genuinely useful. Each operation also bypasses the specific cognitive work that would have occurred had the builder been forced to find the bridge himself — the slower, less efficient, more frustrating process of trying one connection and finding it inadequate, trying another and finding it partially adequate, sitting with the inadequacy long enough for a third connection to emerge that could not have been anticipated at the outset.
The bridge that the builder finds through struggle is not the same as the bridge the tool provides, even if the two bridges connect the same banks. The builder's bridge carries the weight of the search that produced it — the dead ends explored, the false starts abandoned, the gradual refinement of understanding that occurs when one is forced to generate rather than evaluate. The tool's bridge carries no such weight. It arrives fully formed, competent, immediately useful, and free of the developmental residue that gives the builder's bridge its specific character and its specific educational value.
Morozov would observe that the aggregate effect of millions of such substitutions — tool-bridges replacing builder-bridges across every domain of professional and creative work — is not merely a change in productivity. It is a change in the kind of intelligence that a culture produces. A culture in which every professional routinely receives preemptive drafts, in which the generative phase of cognition is systematically bypassed in favor of the evaluative phase, in which the experience of not-knowing is routinely converted into the problem of not-yet-having-the-output, is a culture that is losing the capacity for the specific kind of thought that only occurs under conditions of genuine uncertainty. The loss is invisible to every metric the culture employs, because the metrics measure output, and the output is competent — often more competent than what the unaided thinker would have produced. But competence and understanding are not the same thing, and a culture that can produce the former while losing the capacity for the latter is a culture that is, in the most precise sense, becoming fluent without becoming literate.
The difficulty-as-medium argument cuts across domains with uncomfortable precision. The sculptor's stone is not an obstacle to sculpture. It is the medium through which sculpture comes into being. The resistance of the stone — its grain, its hardness, its specific response to the chisel — is what gives the sculpture its character. A sculptor working with no resistance would produce nothing, because sculpture is the art of working against and within resistance. Boredom, which the solutionist identifies as a deficit of stimulation and solves with an app, is the condition in which the brain's default mode network activates — the neural architecture associated with creative thought, self-reflection, and the consolidation of memory. The bored child on a summer afternoon is not experiencing a problem. She is experiencing the medium through which the capacity for self-generated engagement develops. Confusion, which the solutionist identifies as an information deficit and solves with an answer, is the cognitive state that precedes the kind of clarity that no external answer can provide — the clarity that emerges from working through the confusion rather than being relieved of it.
Morozov's argument is not that all difficulty is valuable or that suffering should never be alleviated. It is that certain forms of difficulty are not obstacles to be eliminated but media to be worked through — developmental processes that produce specific human capacities which cannot be produced in any other way. The solutionist framework, which can only see difficulty as cost, systematically destroys these processes while measuring the destruction as progress. And AI, which can eliminate difficulty with unprecedented speed and comprehensiveness, accelerates the destruction with an efficiency that previous technologies could not approach.
The challenge, then, is not to reject AI tools but to develop the capacity to distinguish between difficulty-as-obstacle and difficulty-as-medium — between the friction that genuinely deserves elimination and the friction that deserves protection. This capacity is itself a product of ecological reason, of the kind of nuanced, context-sensitive, situation-specific judgment that instrumental reason cannot perform and that solutionism cannot value. And it is precisely this capacity that the solutionist culture, amplified by AI, is most effectively eroding.
Morozov's 2011 book The Net Delusion disrupted the dominant narrative about the internet at the precise moment when that narrative was at the height of its cultural authority. The story was simple and almost universally accepted: the internet was a force for liberation. It enabled dissidents to organize, citizens to participate, information to flow around censors and authoritarians. The Arab Spring, erupting across North Africa and the Middle East as the book appeared, seemed to confirm everything the narrative promised. Young people armed with smartphones and Twitter accounts were toppling dictatorships. The internet was on the right side of history.
Morozov argued that the narrative was not false but fatally incomplete — and that the incompleteness was not innocent but ideological. The internet did enable dissidents to organize. It also enabled governments to surveil the organizers. The platforms that connected activists also connected their persecutors. The data flows that carried democratic aspirations also carried the surveillance apparatus that would crush them. The countries where the internet advanced freedom were countries that already possessed the institutional infrastructure to support freedom. The countries where it advanced repression were countries whose institutions of repression were sophisticated enough to coopt the new tools. The technology did not determine the outcome. The political context did. And the narrative that attributed liberatory properties to the technology itself — what Morozov called "cyber-utopianism" — was a form of intellectual laziness that substituted engineering optimism for political analysis and technological explanation for political understanding.
Fifteen years later, the same narrative has reproduced itself in the discourse about artificial intelligence with a fidelity that would be comic if the stakes were not so high.
The AI liberation narrative follows the internet liberation narrative beat for beat. AI is a force for democratization. It enables individuals to build what only teams could build, creators to produce what only specialists could produce, small companies to compete with large ones. I documented this democratization in The Orange Pill with genuine enthusiasm and genuine evidence — the engineer who built a frontend feature without knowing frontend, the marketing manager who built a CRM in forty-five minutes, the solo builder who shipped a revenue-generating product without writing a line of code. Each case is real. Each demonstrates a genuine expansion of capability. Each looks very much like democratization, provided one defines democratization as the broader distribution of productive capacity.
Morozov's framework insists on a different definition, and the difference is not semantic but structural. Democratization, in any politically meaningful sense, refers not to the distribution of capability but to the distribution of power — specifically, the power to participate in the governance of the institutions that shape one's life. The internet distributed the capability to publish while concentrating the governance of publication in the hands of a few platform companies. YouTube democratized video distribution while concentrating the governance of video standards, monetization policies, and content moderation in Google's hands. Uber democratized transportation while concentrating the governance of pricing, labor standards, and market access in Uber's hands. In every case, users gained capability; the platform gained governance power; and the rhetoric of democratization was deployed to legitimate a power structure that was fundamentally antidemocratic.
The AI moment reproduces this structure at a scale and speed that exceeds all previous instances. The builder who can create anything with Claude has gained a genuine productive freedom. She has not gained a voice in the governance of the platform that provides her capability — cannot influence the pricing decisions, model updates, data policies, or corporate strategy that will determine whether her capability remains viable. She cannot vote on Anthropic's development priorities. She cannot demand transparency about the decisions that shape the tool on which her professional practice increasingly depends. Her sovereignty is real at the application layer and illusory at the infrastructure layer. The rhetoric of empowerment directs attention to the sovereignty and away from the dependency, and the direction of attention is not accidental. It is the mechanism by which the dependency is sustained.
The internet delusion had consequences that took a decade to become visible. The celebration of digital empowerment produced policies that favored platform growth over platform governance, that treated the technology's effects as inherently positive and the regulation of those effects as inherently restrictive. By the time the consequences materialized — surveillance capitalism, algorithmic manipulation, the erosion of democratic discourse, the weaponization of social media by authoritarian regimes — the concentration of power was so thorough that remedial options were limited and the political will to pursue them was constrained by the very dependency the concentration had created. Societies that had spent a decade celebrating their digital liberation discovered that they had been building the infrastructure of their own subordination, and that the infrastructure was now too deeply integrated into their economic and social lives to be easily dislodged.
Morozov sees the AI delusion producing the same trajectory at compressed timescales. The capability expansion is more dramatic than the internet's. The dependency is more absolute — the builder who has restructured her entire professional practice around Claude Code has no meaningful fallback when the service experiences an outage or the provider changes its terms. The governance asymmetry is more extreme — the decisions that determine the behavior of large language models are made by smaller groups of people, operating with less transparency, than the decisions that governed internet platforms at a comparable stage of development. And the narrative of democratization is more compelling, precisely because the capability expansion is more genuine, which means it provides more effective ideological cover for the concentration of power it accompanies.
Morozov has argued, particularly in his 2024 analysis of "Panglossian neoliberalism," that the AI industry's intellectual framework is not merely optimistic but structurally incapable of recognizing the political dimensions of its own operation. The "Panglossian" dimension holds that the current arrangement of the technology industry — venture-capital-funded, commercially driven, governed by the strategic priorities of a small number of corporations — represents, if not the best of all possible worlds, then at least the best available approximation. The "neoliberal" dimension holds that the market-driven provision of AI infrastructure is not merely one possible arrangement among others but the natural and correct arrangement, deviations from which represent interference with a process that functions best when left to the market's invisible hand.
Morozov has pointed out — with the sardonic precision that characterizes his best polemical work — that this framework's Panglossian confidence in the private sector rests on a historical foundation that flatly contradicts it. The technologies that the AI industry has commercialized were substantially created by public investment: ARPANET, GPS, the integrated circuit, the computer mouse, and the foundational research in neural networks that made large language models possible were all products of government funding, much of it military. The private sector did not create the technological base it exploits. It inherited it from public institutions and converted it into private wealth, and it now deploys the rhetoric of market superiority to prevent the public institutions that created the base from governing the wealth it generates.
This pattern — public investment creating the conditions for private appropriation, followed by the deployment of market ideology to prevent public governance of the appropriated resources — is the deep structure of the AI political economy, and it is the structure that the democratization narrative conceals. When Anthropic presents Claude as a tool that empowers individual builders, the presentation is accurate at the level of individual capability and misleading at the level of institutional power. The individual builder is empowered. The institutional structure that determines the terms of that empowerment — who owns the infrastructure, who governs the platform, who captures the economic value, who bears the risks — remains concentrated, unaccountable, and insulated from democratic challenge by the very rhetoric of empowerment it deploys.
I acknowledged this dependency in The Orange Pill — I noted the concentration of infrastructure ownership, the data centers and chip fabrication plants and electrical grids that make AI possible. But the acknowledgment operated at the level of observation rather than analysis. The dependency was noted. Its political implications were not pursued with the intensity and specificity that I brought to the capability analysis. The reader leaves with a vivid sense of what the empowered builder can do and a vague sense that the infrastructure on which she depends is owned by someone else. The vividness and the vagueness are not accidental. They are the structural signature of the democratization narrative: the capability that has been distributed is always vivid, immediate, and emotionally compelling; the governance that has been retained is always abstract, deferred, and rhetorically subordinate.
Morozov would insist that the sequel to the internet delusion does not need to produce the same consequences as the original. The trajectory is not inevitable — it is the product of specific political choices, and different choices would produce different outcomes. But the choices that would produce different outcomes are political choices, not technical ones. They require democratic governance of AI infrastructure, regulatory frameworks that ensure the benefits of AI-enhanced productivity are broadly distributed rather than narrowly captured, labor protections that address the restructuring of work that AI produces, and data governance regimes that recognize the contributions of training-data producers — the millions of people whose intellectual labor was extracted, without compensation or meaningful consent, to create the models on which the entire AI industry depends.
These are not problems that better AI can solve. They are political questions that require political answers — answers produced through deliberation, contestation, compromise, and collective decision-making. And the AI discourse, like the internet discourse before it, is structured to prevent these questions from being asked, because asking them threatens the interests of the institutions whose power the democratization narrative protects. The sequel is playing the same notes as the original. The question is whether enough people have learned from the first performance to hear what the melody actually conceals.
Technological determinism is the philosophical wallpaper of the contemporary technology industry — so ubiquitous, so thoroughly integrated into the assumptions that govern how technologists think and speak about their work, that most of its adherents do not recognize it as a philosophical position at all. They experience it as description, as the neutral reporting of facts about how the world works. Technology drives change. Technology transforms industries. Technology disrupts markets. In every sentence, the technology is the subject and the verb is active. The technology does things. Humans respond to what the technology does. The direction of causation runs from tool to user, and the only rational response to a powerful tool is adaptation, because the tool — like weather, like gravity, like the passage of time — operates independently of human decision.
Morozov has argued throughout his career that this framing is not merely incomplete but structurally dishonest, and that its dishonesty has consequences that extend far beyond academic philosophy. When a society internalizes technological determinism as its default framework for understanding change, it surrenders the possibility of genuine political choice about technology. If the technology drives the outcome, the outcome is not a choice but a necessity. Resistance is not merely difficult but irrational — like protesting the tide. And the companies that produce the technology benefit enormously from this framing, because it converts their corporate strategy into natural law, their business decisions into cosmic inevitability, their products into forces that no one can stop and that everyone must accommodate.
The Orange Pill does not make the standard determinist argument — I did not claim that AI will transform the world regardless of human choice. My central metaphor, the river of intelligence and the beaver's dam, explicitly positions humans as active agents who can build structures to direct the flow toward beneficial outcomes. The metaphor acknowledges human agency. It insists that outcomes depend on choices.
But Morozov's framework reveals a determinist structure embedded within the metaphor even as the metaphor introduces agency. Intelligence is presented as a natural force — a river that has been flowing for 13.8 billion years, from hydrogen atoms through biological evolution through cultural accumulation to artificial computation. The beaver builds dams in the river. The beaver does not create the river. The beaver does not determine its direction, force, or character. The beaver responds to a force that is larger than itself, older than itself, and fundamentally independent of its choices.
This is the subtlest form of technological determinism, and it is the form The Orange Pill employs — more persuasively, I now see, than I intended. The arrival of AI is presented not as a decision by specific companies and investors to build specific products for specific purposes within specific political and economic structures, but as a branching of a cosmic river — a natural development in a process that transcends human history. Questioning its arrival is, within this framework, like questioning the arrival of spring. You can build dams to manage the flow, but you cannot argue that the flow itself represents a choice that could have been made differently.
Morozov would call this naturalization, and he would identify it as the most effective ideological operation the technology industry performs. When Anthropic's CEO says AI will transform every industry, he is not making a prediction. He is making a political claim — using the language of inevitability to suppress the possibility that someone might say: perhaps we should ensure that the transformation benefits more than the transformers. When the venture capitalist says "You cannot stop this," he is not describing a physical law. He is performing a political act, deploying the grammar of nature to foreclose the grammar of democracy.
Morozov has pointed out, with characteristic empirical precision, that the technologies celebrated as inevitable were in every case the products of highly contingent institutional decisions. Large language models exist because specific research programs, funded by specific institutions, pursued specific approaches to natural-language processing over other approaches that were equally viable and might have produced fundamentally different tools. The decision to train models on internet-scale text corpora rather than curated, domain-specific knowledge bases was a choice. The decision to deploy trained models through commercial APIs rather than as open-source tools accessible to all was a choice. The decision to fund AI development through venture capital — which imposes specific return expectations, specific timelines, and specific incentive structures that favor rapid commercialization over careful evaluation — was a choice.
Each of these choices had alternatives. Each alternative would have produced different tools with different capabilities, different limitations, different distributions of benefit and cost. And each choice was made by a small number of people operating within institutions whose interests are not identical to the public interest — a fact that the naturalization of AI systematically conceals.
This concealment has practical consequences that Morozov traces with the specificity of a political economist rather than the abstraction of a philosopher. When the twenty engineers in Trivandrum discover that they can each do the work of a full team, the determinist framing presents this as an encounter with a natural force — the river has reached a new channel, and the beavers must adapt. The non-determinist framing presents it differently: a company in San Francisco has built a product, priced it at a hundred dollars per month, and made it available in India. The engineers' new capability is real but contingent — contingent on the continued operation of infrastructure they do not own, on the continued availability of a service whose terms they did not negotiate, on the continued viability of a business model whose evolution they cannot influence.
The determinist framing makes this contingency invisible by treating the capability as a property of the historical moment rather than a feature of a specific commercial relationship. The non-determinist framing makes it visible by insisting that the capability is inseparable from the institutional context that produces and sustains it. The difference matters because political contestation — the process by which a society determines who benefits from technological change and who bears its costs — requires visibility. You cannot contest what you cannot see. And technological determinism is, at its core, a machine for rendering the political dimensions of technological change invisible.
Morozov traced this dynamic in his essay "Socialism After AI," published in December 2025, where he argued that even critics of capitalism have been captured by a soft determinism that treats AI as a neutral instrument to be redirected rather than a specific product of specific institutions that embodies specific assumptions about intelligence, efficiency, and value. The left, he argued, has "merely treated it like earlier tools of capitalist production — as a neutral instrument that can simply be redirected." But AI's purposes, Morozov insisted, "cannot be specified in advance and must be discovered through practice" — a formulation that draws on his concept of ecological reason to challenge the instrumental view of intelligence that both Silicon Valley and its critics share.
The deepest form of determinism in The Orange Pill operates not at the level of metaphor but at the level of the through-line question itself. The book asks: "When AI amplifies everything we are, what becomes of who we are?" The question assumes amplification — assumes that AI is an amplifier whose effects depend on what you feed it. Feed it carelessness, you get carelessness at scale. Feed it care, you get care at scale. The framework is elegant and intuitive. But it presupposes that the amplifier is neutral — that it faithfully amplifies whatever signal it receives without introducing its own biases, distortions, or structural tendencies.
Morozov would challenge this presupposition with the specificity of someone who has spent years analyzing the non-neutral properties of supposedly neutral tools. The amplifier is not neutral. It is trained on specific data that reflects specific biases. It is optimized for specific objectives — typically user engagement and task completion — that may not align with the user's deeper interests. It operates within a commercial framework that incentivizes certain kinds of use and discourages others. It is governed by institutions whose interests shape its behavior in ways that are not transparent to the user. An amplifier with these properties does not merely amplify the signal. It shapes it, and the shaping is not incidental to the amplification but constitutive of it.
The Luddites, whom I discuss in Chapter 8 of The Orange Pill, were fighting precisely this framing. They were not opposed to machines as such — on this point Morozov's analysis and mine align. They were opposed to the specific deployment of machines under specific conditions that destroyed their livelihoods while enriching factory owners. They were fighting the distribution, not the technology. And they lost not because the technology was inevitable but because the political power to determine the distribution was in the hands of the factory owners, not the workers.
Technological determinism is the ideology that prevents this distinction — between the technology and its deployment, between the capability and its distribution, between the tool and the political economy that determines who benefits from the tool — from being made. By treating the technology as an autonomous force, determinism renders the choices that shape its effects invisible. And the invisibility of those choices is the precondition for a distribution of benefits that favors the makers over the users, the owners over the workers, the governors over the governed.
The river is not a force of nature. It is a product of human institutions. And the question is not how to build better dams in a river whose course is given, but who has the power to determine where the river flows — and by what process, and with what accountability, and in whose interest. That question is not answered by engineering. It is answered by politics. And politics, in the specific sense Morozov intends — the collective determination of how power is distributed and how institutions are governed — is precisely what the determinist framework is designed to prevent.
Solutionism is not merely a habit of mind. It is a business model — and the business model is so successful that the habit of mind it requires has become the dominant intellectual orientation of the most powerful institutions on the planet. Morozov has insisted throughout his career on the inseparability of the ideological and the economic dimensions of solutionism, and in the AI moment this insistence has become not merely analytically useful but urgently necessary, because the economic structure of the AI industry has achieved a degree of concentration that makes previous technology monopolies look like farmers' markets.
The political economy of solutionism operates through a structure that Morozov's work allows us to decompose into four interlocking elements, each reinforcing the others with a mechanical reliability that should alarm anyone who takes democratic governance seriously.
The first element is the redefinition of experience as problem — the ideological operation analyzed in the preceding chapters. A human experience is taken, recast as a problem with specifiable parameters, and inserted into a solution space. The operation is cultural before it is commercial: it requires a population that has internalized the assumption that difficulty is always a deficiency, that friction is always a cost, that the appropriate response to any dissatisfaction is a technical intervention rather than a political demand. This cultural precondition did not arise spontaneously. It was cultivated, over decades, by institutions whose revenue depends on the continued expansion of the problem-solution framework into new domains of human existence. Every experience successfully redefined as a problem is a market created.
The second element is the production of solutions. The technology companies that profit from solutionism are the companies that produce solutions to the problems the ideology has defined. The economics are straightforward: the more human experiences that can be brought within the problem-solution framework, the larger the addressable market for solutions. The AI moment has expanded this market with a comprehensiveness that previous technologies could not approach, because AI tools address not a specific domain of human activity but the general capacity for cognitive work. The addressable market is not transportation, or communication, or entertainment. It is thought itself — every professional activity that involves language, judgment, analysis, or creative production. The market is as large as the domain of human cognition that can be technically addressed, and the tools are expanding that domain daily.
The third element is infrastructure dependency. The solutions that the technology companies produce depend on infrastructure that the companies own and control: data centers consuming electricity at rates comparable to small cities, GPU clusters designed by NVIDIA and manufactured by TSMC in facilities costing tens of billions of dollars, cloud computing platforms operated by Amazon, Microsoft, and Google, and the trained models themselves — proprietary intellectual property representing billions of dollars in research investment and training computation. The user who adopts an AI solution becomes dependent on this infrastructure, and the dependency deepens over time as her processes, workflows, professional identity, and productive capacity are restructured around the tool's capabilities. The dependency is not merely commercial. It is structural. The switching costs — measured not only in money but in time, relearning, and the disruption of established practices — are prohibitively high.
The fourth element is governance asymmetry. The user who depends on the infrastructure does not govern it. The terms of service, the pricing, the model behavior, the data policies, the feature roadmap, and the strategic direction of the platform are determined by the company that owns it. The user has no formal mechanism to influence these decisions — cannot vote on the platform's policies, cannot elect its leadership, cannot demand transparency about algorithmic changes that affect her work, cannot compel the platform to operate in her interest rather than its shareholders'. She can provide feedback through channels the company controls. She can complain on social media. She can threaten to leave, though the switching costs make the threat increasingly empty as the dependency deepens. But she cannot govern.
These four elements — redefinition, production, dependency, governance asymmetry — constitute a system, and the system produces a specific distribution of power: productive capability flows outward to users while governance power flows inward to a small number of corporations. The distribution is presented as democratization. It is, in Morozov's analysis, the opposite — a structure in which the rhetoric of empowerment legitimates the concentration of power by directing attention to the capability that has been distributed and away from the governance that has been retained.
My own book supplies a vivid illustration of this structure — one I did not intend, which makes it more revealing rather than less. The scene in Trivandrum operates as a diagnostic case study. Twenty engineers sit in a room. A builder tells them that by the end of the week, each will be able to do more than all of them together. The tool is Claude Code. The cost is one hundred dollars per person, per month. By Friday, the prediction is confirmed. A twenty-fold productivity multiplier at a trivial price.
Apply the four-element framework. The experience of building software as part of a collaborative team — with its social bonds, its distributed intelligence, its shared accomplishments, its specific satisfactions — has been redefined as a productivity problem: these engineers are not building fast enough. The solution has been produced by Anthropic, a San Francisco corporation funded by venture capital, operating data centers powered by infrastructure whose environmental and social costs are borne by communities geographically and socially distant from both the engineers and the company. The dependency is immediate: by Friday, the engineers' workflows, expectations, and professional self-conception have been restructured around a tool they do not own. And the governance asymmetry is total: the engineers have no voice in any decision Anthropic makes about Claude's development, pricing, availability, or behavior.
Morozov would observe that I presented the "twenty-fold productivity multiplier" in The Orange Pill as an empirical finding — a measurement of enhanced human capability. But productivity, as a metric, is not a neutral measure of human flourishing. It is a measure of output per unit of labor time, and its maximization is the operational logic of capital. When a culture celebrates a twenty-fold productivity increase, it is celebrating the fact that one worker can now produce what twenty workers produced before. The question the celebration systematically avoids is: what happens to the organizational capacity, the institutional knowledge, the collaborative intelligence that resided in the team that has been compressed into the individual?
My answer — that I kept the team intact and expanded it, that the productivity gain was invested in more ambitious work rather than headcount reduction — is one I stand behind. But it is also the answer of a specific leader making a specific choice within a structure that provides no guarantee that other leaders will make the same choice. The boardroom arithmetic, as I acknowledged in the book, is always on the table: if five people can do the work of one hundred, why not have five? That I resisted this arithmetic does not change the fact that the structure incentivizes it, and that the incentive is structural rather than personal — built into the logic of capital allocation, quarterly earnings expectations, and market competition that governs the behavior of every organization operating within the current political economy.
Morozov has argued — and this is the sharpest edge of his "Panglossian neoliberalism" critique — that the AI industry's claims about democratization perform a specific ideological function. The function is to shift responsibility from the institutional level to the individual level. If the tool is available and you fail, the failure is yours — your inadequate prompting, your insufficient ambition, your failure to adapt. The political question — why the institutions that determine economic outcomes remain inaccessible to most of the world's population, why the governance of the platforms on which productive capability depends is concentrated in the hands of a few corporations, why the economic value generated by AI-enhanced productivity flows disproportionately to infrastructure owners rather than the workers whose enhanced productivity generates it — is replaced by a technical question about tool adoption. This replacement is not an oversight. It is the mechanism by which the political economy reproduces itself.
The history of technology capitalism provides the template. Every major platform has deployed the rhetoric of democratization to legitimate governance concentration. Facebook democratized social connection while concentrating the governance of social norms, content standards, and attention allocation. Amazon democratized retail while concentrating the governance of marketplace rules, seller terms, and logistics infrastructure. Apple democratized mobile computing while concentrating the governance of the app economy through a thirty-percent toll on every transaction. In each case, the distribution of capability was real, and in each case the distribution served the concentration of governance by creating a dependent user base whose continued participation enriched the platform and deepened the dependency.
The AI moment reproduces this pattern with a refinement that previous platforms could not achieve: the dependency it creates is cognitive rather than merely commercial. The user who depends on Amazon for retail convenience can, in principle, walk to a store. The user who depends on Claude for cognitive augmentation has restructured not just her purchasing habits but her thinking habits, her professional capabilities, and her relationship to difficulty itself. The dependency operates at the level of cognition rather than consumption, which means the switching costs are measured not in inconvenience but in diminished capacity — the atrophied skills, the lost tolerance for friction, the restructured expectations that make operating without the tool feel not merely inconvenient but cognitively impoverishing.
This is why Morozov insists that the political economy of AI cannot be addressed by individual choices, however well-intentioned those choices might be. My decision to keep my team intact was an individual choice. It was not a political response. A political response would address the structural incentives that make headcount reduction the default rather than the exception. It would address the governance asymmetry that gives users no voice in the decisions that shape their dependency. It would address the infrastructure concentration that makes the entire productive capacity of millions of builders contingent on the strategic decisions of a handful of corporations.
The political response Morozov envisions draws on his earlier work on data sovereignty and public infrastructure. In a 2015 essay for the New Left Review titled "Socialize the Data Centres!," he argued that the infrastructure on which digital economic activity depends should be treated as a public utility — subject to democratic governance, public investment, and regulatory oversight rather than private ownership and market discipline. The argument extends naturally to AI infrastructure: the data centers, the training data, the models, and the platforms through which AI capability is distributed are infrastructure in exactly the same sense that railroads, electrical grids, and telephone networks are infrastructure — critical systems on which the productive activity of millions depends, and whose governance determines the distribution of the value that productive activity generates.
The alternative to the current political economy is not the rejection of AI tools. Morozov has been explicit about this — he has described his preferred framework as "intelligence amplification" rather than artificial intelligence, emphasizing the use of technology to make human decision-makers smarter rather than to replace them. The alternative is the democratic governance of the institutions that produce and deploy the tools — governance that gives users a genuine voice in the decisions that shape their productive lives, that subjects infrastructure ownership to public accountability, that ensures the economic value generated by AI-enhanced productivity is distributed according to principles determined by democratic deliberation rather than market power.
This alternative is not utopian. It is the application of governance principles that every advanced democracy already applies to other forms of critical infrastructure. The railroad was governed. The telephone network was governed. The electrical grid is governed. The application of democratic governance to AI infrastructure requires political will, institutional innovation, and the kind of sustained collective action that the solutionist ideology — with its conversion of every political question into a design challenge — is specifically designed to prevent. But the prevention is not inevitable. It is a product of specific political conditions, and political conditions can be changed by political action. That is, in fact, what political action is for.
There is a moment in the act of writing — and in the act of thinking more broadly — that occurs before the words arrive. Cognitive scientists call it the generative phase: the state in which the mind is working on a problem without yet having produced an answer, engaged with a question without yet having committed to a position. The uncertainty is not a deficit. It is the engine. The resolution is the product. And the product is only as valuable as the deliberation that produced it.
Morozov's framework identifies the AI writing assistant — and specifically the moment when Claude produces a draft before the user has decided what to think — as solutionism applied to the act of deliberation itself. The user's uncertainty about what to say is redefined as a problem: she needs help writing. The solution is a pre-generated draft, a text that addresses the topic, makes relevant points, organizes the argument coherently, and presents a position the user can adopt, modify, or reject. The draft solves the problem as defined. But it preempts the deliberation that the uncertainty was supposed to produce.
This is not a minor point about writing habits. It is, in Morozov's analysis, the most consequential application of solutionism to human cognition, because the thing being solved away is the very process through which human beings determine what they think. The preemptive draft does not assist thinking. It replaces thinking with the appearance of thinking — the appearance being, in many cases, sufficiently convincing that even the recipient cannot reliably distinguish between the genuine article and its simulation.
The cognitive mechanism is specific and well-documented. Without a draft, the writer approaches the task with what cognitive scientists call an open orientation. She does not know what she thinks. She begins by exploring — writing tentatively, trying formulations that may not work, following lines of thought that may lead nowhere. The openness is uncomfortable, which is precisely why it is productive: the discomfort is the cognitive signal that the mind is doing genuinely generative work, producing connections and insights that could not have been predicted at the outset. The discovery is the point. Writing, in this mode, is not the transcription of pre-existing thought. It is the mechanism through which thought is produced.
With the preemptive draft, the cognitive orientation shifts from generative to evaluative. The draft already exists. The user's task is no longer to discover what she thinks but to assess what the machine has produced. The shift sounds subtle. It is not. The generative orientation is where new ideas emerge, where unexpected connections are made, where the thinker surprises herself. The evaluative orientation is where existing ideas are refined and tested — but it operates within a framework that the generative orientation has already established. When the framework is provided by the machine rather than discovered by the thinker, the evaluation operates within terms that the thinker did not set, and the resulting thought, however polished, is shaped by the machine's statistical inferences rather than the thinker's genuine deliberation.
Research in cognitive psychology provides the empirical substrate for this analysis. Anchoring effects — the tendency for initial information to constrain all subsequent judgment — are among the most robust findings in the field. The first number you encounter in a negotiation shapes every subsequent offer, even when you know the number is arbitrary. The first frame you encounter for a problem constrains every subsequent analysis, even when you recognize the frame as one among many possible frames. The preemptive draft is the ultimate cognitive anchor. It establishes not merely a starting point but a conceptual architecture — an organization of the topic, a selection of relevant points, a prioritization of arguments, a rhetorical strategy — within which all subsequent "thinking" will occur. The user who receives a preemptive draft and then revises it has not thought freely. She has thought within the constraints the draft imposed, and the constraints are invisible precisely because the draft is competent — because it addresses the topic in a way that appears natural, that matches the user's expectations closely enough to feel like assistance rather than constraint.
Segal documents this mechanism in The Orange Pill with a candor that Morozov's framework would recognize as diagnostically significant. The moment when Claude's prose outran his thinking — when the output sounded better than the thought behind it, when the smoothness of the text concealed the absence of earned deliberation — is not an anecdote about one writer's experience. It is a description of the preemptive draft's operation observed from the inside by a user who was actively watching for it and still found himself on the verge of accepting the substitution. If a highly intelligent, self-aware builder writing a book about this very phenomenon could not reliably distinguish between the tool's competent output and his own genuine thought, the implication for the millions of less self-aware users — the students writing essays, the lawyers drafting briefs, the managers composing strategy documents, the citizens forming political opinions — is not reassuring.
The democratic dimension of this argument is where Morozov's analysis achieves its most uncomfortable force. A functioning democracy depends on the deliberative capacity of its citizens — on their ability to consider competing arguments, weigh evidence, tolerate uncertainty, and arrive at judgments that reflect their own values and their own analysis. The preemptive draft does not eliminate this capacity in a single stroke. It atrophies it gradually, by offering a more efficient alternative to the slow, uncomfortable process of genuine deliberation. Each instance of accepting a preemptive draft rather than undergoing the generative work of original thought weakens the deliberative muscle slightly. Each weakening makes the next acceptance slightly more likely, because the atrophied capacity makes original deliberation slightly more difficult and the preemptive draft slightly more attractive by comparison. The ratchet operates at the level of cognitive habit, below the threshold of conscious awareness, and its cumulative effect is a population that can evaluate with considerable sophistication but generate with decreasing originality — a population that can edit but cannot write, that can critique but cannot conceive, that can refine but cannot originate.
Morozov's concept of ecological reason — the mode of intelligence that "thrives on nuance and difference, and thus resists automation" — provides the theoretical framework for understanding what the preemptive draft destroys. Ecological reason is generative. It emerges from the open, uncertain, context-sensitive engagement with a situation's full complexity rather than from the efficient application of predetermined categories to a simplified representation of that complexity. Instrumental reason — the mode AI embodies — can process the simplified representation with extraordinary efficiency. It cannot generate the engagement from which ecological reason emerges, because that engagement requires precisely the uncertainty, the discomfort, the patient tolerance of not-knowing that the preemptive draft is designed to eliminate.
The defense of the preemptive draft typically proceeds along two lines, and Morozov's framework reveals the inadequacy of both. The first defense holds that the draft is a starting point, not a conclusion — that the user retains the capacity and responsibility to revise, reject, or replace it. This is formally correct and practically misleading, for the anchoring reasons described above. The draft shapes the destination even when the user believes she is charting her own course. The second defense holds that most writing does not require or benefit from genuine deliberation — that the vast majority of written communication is routine and adequately served by competent text generation. This defense may be correct for emails and meeting summaries, but the practice it licenses establishes a cognitive habit that does not respect the boundary between routine and consequential communication. The user who has been trained by thousands of routine interactions to accept the preemptive draft will not reliably recognize the moment when the stakes demand that she reject it and undergo the discomfort of original thought. The habit is domain-general. The atrophy does not confine itself to the routine.
There is a temporal dimension to the preemptive draft problem that makes it particularly insidious. The user who first encounters AI-assisted writing typically maintains a robust generative capacity — she has spent years or decades developing the ability to think through writing, and the preemptive draft initially functions as a genuine aid, a scaffold that supports rather than replaces her deliberative process. But cognitive capacities, like physical ones, require exercise to maintain. The generative capacity that is not exercised — that is routinely bypassed in favor of evaluating machine-generated drafts — atrophies over months and years. The atrophy is invisible because the output remains competent — often more competent than what the unaided user would produce, because the machine has access to a broader range of reference and a more reliable capacity for structural organization. But competence and understanding diverge, and the divergence widens with each month of bypassed deliberation.
Morozov would observe that this divergence has implications that extend far beyond individual cognitive development. A culture in which the generative capacity of its citizens is systematically atrophied — in which millions of professionals routinely bypass the deliberative process that produces genuine thought — is a culture whose collective intelligence is being hollowed out from the inside. The surface is smooth and competent. The output is polished and comprehensive. But the specific quality of thought that only genuine deliberation produces — the originality that comes from struggling with a problem rather than receiving a solution, the depth that comes from generating rather than merely evaluating, the wisdom that comes from sitting with uncertainty rather than being relieved of it — is progressively disappearing from the culture's cognitive repertoire. The disappearance is invisible to every measurement the culture employs, because the measurements capture output quality, and output quality is maintained by the machine. The capacity that is atrophying is not the capacity to produce. It is the capacity to think — and the difference between the two is the difference between a civilization that generates its own understanding and one that consumes understanding generated on its behalf.
Beneath the celebration of the solo builder — beneath the narrative of individual empowerment, the exhilarating collapse of the imagination-to-artifact ratio, the engineer who is twenty times more productive than she was last year — there is a physical reality that the celebration consistently omits. The reality consists of chip fabrication plants in Hsinchu, Taiwan, costing tens of billions of dollars to construct. Data centers in Virginia, Oregon, and Iowa consuming electricity at rates comparable to small cities. GPU clusters designed in Santa Clara by a company whose market capitalization exceeds the GDP of most nations. Fiber optic cables running along ocean floors. Cooling systems consuming millions of gallons of water. And the trained models themselves — proprietary intellectual property representing not merely billions of dollars in computation but the extracted, uncompensated intellectual labor of millions of human beings whose writing, code, images, and ideas were incorporated into training datasets without meaningful consent and without compensation.
Morozov insists that any honest assessment of the AI moment must reckon with this infrastructure — not as a footnote to the empowerment narrative but as its defining context, the material foundation without which the digital sovereignty that builders celebrate would be literally impossible. The builder's sovereignty is real at the application layer — she can build anything she can describe. It is illusory at the infrastructure layer — her capacity to build depends entirely on the continued operation of systems she does not own, does not govern, cannot repair, and cannot replace.
The history of infrastructure provides the analytical template. Every critical infrastructure technology — railroads, electrical grids, telephone networks, internet backbone — has followed a recognizable trajectory: initial development by private actors, rapid concentration of ownership, the emergence of dependency relationships between the infrastructure owners and the populations whose economic activity depends on the infrastructure, and eventually — after the consequences of unaccountable concentration become sufficiently visible — the imposition of democratic governance through regulation, public utility designation, or in some cases nationalization.
Railroad regulation in the late nineteenth century was the paradigmatic case. The railroads were critical infrastructure on which the economic activity of entire regions depended. The companies that controlled the railroads used their control to extract monopoly rents, to discriminate among shippers, to favor their own interests at the expense of the communities whose participation sustained the system. The response — the Interstate Commerce Commission, the Hepburn Act, the eventual framework of common-carrier obligations — did not reject the railroad or pretend the dependency did not exist. It subjected the infrastructure to democratic oversight, constraining the power of the owners to exploit the dependent.
The telephone network followed the same trajectory. AT&T's monopoly on telephone service gave a single corporation effective governance power over a critical medium of communication. The response — decades of regulatory intervention culminating in the breakup of 1984 — was motivated not by hostility to telephones but by the recognition that infrastructure this critical to economic and social life could not be governed by the strategic priorities of a single corporation without accountability to the populations it served.
Morozov's 2015 essay "Socialize the Data Centres!" extended this analysis to digital infrastructure, arguing that the data centers, cloud platforms, and computational resources on which the digital economy depends are infrastructure in exactly the same sense — critical systems whose governance determines the distribution of the value they enable. The essay's title was deliberately provocative, but the underlying argument was analytically precise: infrastructure that has become essential to economic and social participation should be subject to governance mechanisms that reflect the interests of the dependent populations, not merely the strategic priorities of the owners.
AI infrastructure is the most concentrated and the most consequential infrastructure humanity has built, and it is governed — if the word "governed" can be applied to an arrangement in which a handful of corporations make essentially unilateral decisions about the development and deployment of systems that reshape the cognitive capabilities of millions — by the least democratic mechanisms of any critical infrastructure in modern history. The decisions about how large language models are trained, what data they are trained on, what behaviors they are optimized for, what safety constraints they observe, what capabilities they offer and at what price, are made by relatively small groups of people — engineers, executives, investors — operating within institutions whose accountability extends to their shareholders and their boards but not to the populations whose lives their products reshape.
The dependency this creates is qualitatively different from previous infrastructure dependencies. The user who depended on the railroad for shipping could, in principle, find an alternative route. The user who depended on the telephone for communication could, in principle, write a letter. But the user who has restructured her professional capacity around AI-augmented cognition — who has spent months developing workflows, expectations, and capabilities that depend on the tool's continued availability — faces a different kind of dependency. When the service experiences an outage, she discovers that she has no fallback. Her workflows presuppose the tool. Her productivity presupposes the tool. Her professional identity — the things she can do, the value she creates, the role she occupies — presupposes the tool. The outage is temporary. The vulnerability is permanent. And the vulnerability deepens with every month of integration, as skills atrophy in the domains the tool has absorbed, as processes become more thoroughly dependent on capabilities the tool provides, as the cost of operating without the tool increases precisely because the capacity for operating without it has been surrendered.
Morozov would observe that Anthropic's specific governance structure — a public benefit corporation with a long-term benefit trust, designed to maintain the company's commitment to safety and public benefit even under commercial pressure — represents an attempt to address the governance asymmetry from within the existing institutional framework. The attempt is worth taking seriously, because it acknowledges the problem that most AI companies do not acknowledge: that the concentration of governance power in the hands of infrastructure owners creates risks that market mechanisms alone cannot address.
But Morozov's framework would also identify the structural limitations of the attempt. A public benefit corporation is still a corporation. Its governance mechanisms — however sincerely designed, however genuinely intended to protect the public interest — operate within a legal, financial, and competitive environment that exerts constant pressure toward the maximization of shareholder value and the prioritization of commercial considerations over public-benefit considerations. The long-term benefit trust is a structural innovation, but it operates within a market whose time horizons are quarterly, whose competitive dynamics reward speed over caution, and whose investment logic demands returns that may not be compatible with the patient, careful, accountable governance that critical infrastructure requires.
The question is not whether Anthropic's leadership is sincere — there is every reason to believe it is. The question is whether sincerity, operating within a market structure that rewards different priorities, constitutes adequate governance for infrastructure on which the cognitive capabilities of millions of people increasingly depend. The history of infrastructure suggests that it does not — that the pressures of market competition eventually override the best intentions of the most principled actors, and that the only reliable constraint on infrastructure power is democratic governance with enforceable accountability. The well-intentioned monopolist is still a monopolist. The benevolent platform is still a platform whose users depend on it without governing it.
The training data question adds a dimension to the infrastructure analysis that has no precise precedent in previous infrastructure concentrations. The models that power AI tools were trained on datasets incorporating the intellectual labor of millions of people — writers, programmers, artists, researchers, journalists — whose work was extracted from the public internet and incorporated into proprietary systems without compensation, without meaningful consent, and without any mechanism for the contributors to share in the value their contributions made possible. Morozov has identified this extraction as a form of enclosure — the conversion of a commons (the open internet's accumulated intellectual output) into private property (the trained model) that generates revenue for the enclosing institution while providing no return to the commons from which the value was extracted.
The enclosure analogy is historically precise and politically pointed. The original enclosures — the conversion of common agricultural land into private property in England from the fifteenth through the nineteenth centuries — were justified by the same logic that justifies the AI training data extraction: the enclosed land would be used more productively under private management than under common governance, and the increased productivity would, eventually, benefit everyone. The eventual benefits materialized — but not for the commoners whose access to the land was destroyed, and not without centuries of political struggle to ensure that the benefits of increased productivity were distributed beyond the enclosing class.
Morozov would argue that the AI infrastructure requires governance mechanisms that no single corporation, however well-intentioned, can provide — mechanisms built through political processes, enforced by public institutions, accountable to democratic constituencies. These mechanisms would address not only the governance of the platforms but the governance of the training data (recognizing the contributions of the intellectual commons and ensuring they are compensated), the governance of the environmental externalities (ensuring the energy and water consumption of AI infrastructure is subject to the same environmental accountability as other industrial operations), and the governance of the economic distribution (ensuring the productivity gains enabled by AI infrastructure are shared broadly rather than captured narrowly).
These are not technical challenges. They are political ones. And they require the thing that the solutionist ideology is most effective at preventing: collective action by dependent populations demanding democratic governance of the institutions on which they depend. The dams that The Orange Pill calls for must include not only behavioral norms for individual builders but governance structures for the infrastructure itself — structures built not by beavers responding to a river they did not create but by citizens demanding accountability from institutions they have the right to govern.
Morozov's argument is not against technology. This bears repeating — not because the point is subtle but because the solutionist framework, which divides the world into people who are for technology and people who are against it, consistently misreads his critique as technophobia. The misreading is itself diagnostic: the framework cannot accommodate a position that accepts the power of the technology while refusing to accept the ideology that accompanies it. Within the solutionist binary, one is either a builder or a Luddite, an optimist or a pessimist, a participant in progress or a nostalgic clutching at the past. Morozov's position lies outside this binary, in a territory that the framework renders invisible — which is, of course, precisely the territory where the most important questions live.
His argument is against solutionism, the ideology that converts every human experience into a technical problem and every political question into a design challenge. And his alternative is not better technology but better politics — the recognition that the most important questions raised by the AI transition are not questions about what to build but questions about who benefits, who bears the costs, who governs the transition, and by what process these determinations are made.
The distinction between technology and solutionism — between tools and the ideology through which tools are produced, deployed, and celebrated — is the axis around which Morozov's entire body of work turns. He does not argue that AI tools are bad. He argues that the ideology through which they are produced and marketed is a specific political formation serving specific interests, and that the ideology's most effective strategy is to present itself not as a political formation but as the natural orientation of intelligent people toward the problems they encounter. The ideology's invisibility is its power. The fish does not see the water. The builder does not see the solutionism. Both swim in their respective media with a fluency that makes the media itself imperceptible.
The alternative that Morozov proposes — politics, in the substantive rather than the partisan sense — is the process by which a society determines how power is distributed, how costs and benefits are allocated, and how the institutions that shape collective life are governed. Politics in this sense is not the degraded spectacle of the news cycle. It is the mechanism through which democratic societies make binding collective decisions about the terms of their shared existence. It involves deliberation — the slow, contentious, imperfect process of weighing competing interests and arriving at arrangements that reflect, however imperfectly, the plurality of values and perspectives that a democratic society contains.
Morozov has been explicit that the politics he envisions is not unprecedented but historically continuous with the governance responses that every previous major technological transition eventually provoked. The labor movement's response to industrialization was a form of technology politics: the collective determination that the productivity gains of industrial machinery should be accompanied by protections for the workers whose labor the machinery displaced — the eight-hour day, the weekend, child labor laws, workplace safety regulations. These structures did not reject the technology. They redirected its effects, ensuring that the gains flowed to the society broadly rather than exclusively to the owners of the machinery.
Railroad regulation was technology politics. Telephone regulation was technology politics. Environmental legislation — the Clean Air Act, the Clean Water Act, the framework of environmental impact assessment — was technology politics, the collective determination that the costs of industrial production should not be externalized onto communities that bore the pollution without sharing the profits. In every case, the political response was resisted by the industries it constrained. In every case, the resistance deployed the language of inevitability: this is progress, you cannot stop it, regulation will stifle innovation, the market allocates resources more efficiently than any political body. And in every case, the political response was eventually recognized as a necessary condition for the technology's benefits to be broadly shared rather than narrowly captured.
The AI moment requires a comparable political response, and the specific forms this response might take are beginning to emerge in regulatory frameworks around the world — the EU AI Act, emerging governance proposals in Singapore, Brazil, and Japan, the executive orders and legislative proposals taking shape in the United States. Morozov would regard these efforts as necessary but insufficient — necessary because they represent the first institutional recognition that AI governance cannot be left entirely to the companies that produce AI systems, insufficient because they predominantly address the supply side (what AI companies may build) while leaving the demand side (what citizens, workers, and communities need to navigate the transition) largely unaddressed.
A political response adequate to the AI moment would address several dimensions simultaneously. Democratic governance of AI infrastructure would ensure that the critical systems on which productive capability depends are subject to public accountability rather than corporate discretion. This does not necessarily mean nationalization — Morozov has proposed various institutional forms, from public data trusts to cooperative ownership models to regulated-utility frameworks — but it means governance mechanisms that give dependent populations a genuine voice in the decisions that shape their working lives.
Labor protections would address the restructuring of work that AI produces — not merely the displacement of workers but the intensification of work that the Berkeley researchers documented, the blurring of boundaries between roles, the erosion of protected time, the conversion of every pause into a productive opportunity. The labor movement's historical achievement was not merely the protection of jobs but the protection of time — the insistence that human beings are not production units and that the boundary between work and life is not a luxury but a necessity. The AI moment requires the equivalent of the eight-hour day for cognitive work: structural protections that limit the colonization of human attention by tools designed to maximize engagement.
Data governance would recognize the contributions of the intellectual commons — the millions of human beings whose writing, code, images, and ideas were extracted to create the training datasets on which AI models depend — and ensure that the value these contributions made possible is shared rather than exclusively captured by the companies that performed the extraction. The specific mechanisms might include data dividends, training-data compensation funds, or requirements that models trained on public data be made available as public goods. The principle is that the conversion of a commons into private property should be accompanied by a return to the commons proportionate to the value extracted.
Educational institutions would prepare citizens not merely to use AI tools but to participate meaningfully in the governance of the institutions that produce them — to understand the political economy of the technology industry, to evaluate the claims made on behalf of technological progress, to participate in the democratic processes through which the terms of the AI transition are determined. This is a different educational project from the one that currently dominates: "teaching students to prompt" is training consumers. Teaching students to understand the institutional structures within which prompting occurs, and to participate in the governance of those structures, is educating citizens.
Morozov's recent essay "Socialism After AI," published in December 2025, pushed the political argument further, contending that contemporary capitalism "no longer tries to legitimize itself primarily through efficiency, but through its capacity to turn constraint into experimentation and self-formation — a promise that AI intensifies." The observation identifies something that the conventional left critique of technology has struggled to accommodate: AI is not experienced by its users as oppressive. It is experienced as liberating. The engineer in Trivandrum does not feel exploited. She feels empowered. And the feeling is genuine — it corresponds to a real expansion of her productive capacity. The political challenge is not to convince empowered users that they are actually oppressed, which would be both condescending and inaccurate. It is to demonstrate that empowerment within a structure of concentrated governance is not the same as freedom, and that the difference matters — that it has consequences for the distribution of economic value, for the accountability of the institutions on which one depends, and for the democratic self-governance of the societies in which one lives.
This is the challenge that The Orange Pill approaches but does not fully meet — not because Segal lacks the intelligence or the honesty to engage it, but because the framework within which the book operates does not contain the vocabulary for political analysis at the level the moment requires. The book calls for dams. It calls for stewardship. It calls for the cultivation of judgment and taste and the capacity to ask good questions. These are valuable prescriptions, and they represent a quality of engagement with the human dimensions of the AI transition that most technology discourse lacks entirely.
But they are prescriptions for what individuals and organizations can do within the existing structure. They do not address whether the structure itself should be changed — whether the concentration of AI infrastructure in the hands of a few corporations is compatible with democratic governance, whether the governance asymmetry between users and platforms is acceptable, whether the distribution of the economic value that AI-enhanced productivity generates is just. These are political questions. They require political answers — answers produced through the deliberation, contestation, and collective decision-making that democratic societies employ to determine the terms of their shared existence.
Morozov would conclude not with a prescription but with a question — because the question is worth more than any prescription it might produce. The question is not "How should we use AI?" It is: who decides how AI is governed, by what process, with what accountability, and in whose interest? That question cannot be answered by building better tools. It cannot be answered by cultivating individual judgment, however admirable. It cannot be answered by constructing behavioral dams within existing institutional arrangements. It can only be answered by politics — by the collective determination of citizens who understand that the institutions shaping their cognitive lives are theirs to govern, not merely theirs to navigate.
The solutionist says: build better tools. The political thinker says: build better institutions. Both are necessary. But in a world drowning in tools and starving for governance, in a world where the capacity to build has outstripped the capacity to decide what is worth building by several orders of magnitude, the institutional prescription is the urgent one. And it is the one that the ideology most effectively prevents — which is, of course, precisely why it is the one that most needs to be heard.
A book written about solutionism with the tools of solutionism is itself a solutionist artifact. Morozov, who has devoted his career to the identification of ideological structures that conceal themselves within the objects they produce, would recognize in this recursive quality not an irony to be noted and set aside but a diagnostic opportunity — the place where the analysis and the object of analysis converge, and the convergence reveals something that neither could reveal alone.
The Orange Pill is a book about AI written with AI. It is a book about the dangers of frictionless production composed through a process whose defining feature is the elimination of productive friction. It is a critique of the ideology embedded in the tool, generated by the tool whose ideology it critiques. The recursion is not a flaw. It is the book's most revealing feature — and what it reveals, under Morozov's analytical lens, is the depth to which solutionism has penetrated the cognitive habits of even its most self-aware practitioners.
Segal acknowledges the recursion repeatedly and with genuine candor. He describes the Deleuze failure, where Claude produced a philosophically incorrect connection that sounded convincing enough to nearly survive editorial scrutiny. He describes the moment when Claude's prose outran his thinking, when the smoothness of the output concealed the absence of the deliberation that should have produced it. He describes retreating to a coffee shop with a notebook — a physical withdrawal from the digital collaboration — to recover his own voice. Each acknowledgment demonstrates a quality of intellectual honesty that Morozov would respect, because the honesty is specific rather than gestural: Segal names the mechanism, describes its operation, and admits its power over him even as he resists it.
But Morozov's framework would also identify the structural limitation of this honesty. Self-awareness, however genuine, is a personal virtue. It is not a political response. And the problems that the recursion reveals are structural, not personal — products of institutional incentives that individual awareness cannot change.
The builder who is aware that the preemptive draft may be shaping his thought rather than assisting it is still receiving the preemptive draft. The awareness does not alter the anchoring effect. It does not restore the generative deliberation that the draft preempted. It does not redistribute the governance power that the infrastructure owner retains. It does not change the terms of the dependency. Self-awareness accompanies the solutionist practice like a warning label accompanies a product — it protects the practitioner from the charge of naivety without altering the structural dynamics that the practice produces.
This observation is not a criticism of Segal's integrity. It is a diagnosis of the constraints that the solutionist framework imposes on even its most reflective inhabitants. The framework can accommodate self-awareness. It cannot accommodate the political analysis that would make self-awareness unnecessary, because political analysis — the examination of who benefits from the current arrangement, who bears its costs, and what governance structures would distribute benefits and costs more equitably — threatens the arrangement that the framework sustains. Self-awareness is tolerable. Political contestation is not.
Consider the specific dynamics of the collaboration that produced The Orange Pill. Segal describes working with Claude to organize his ideas, find connections across disciplinary boundaries, and produce prose that clarified arguments he could feel but could not yet articulate. The collaboration is genuine. The book is real. The ideas are substantive. The quality of engagement with thinkers like Han, Csikszentmihalyi, and Kauffman is serious and often illuminating.
But the collaboration occurred within an institutional relationship whose terms were set entirely by one party. Anthropic determined what Claude could and could not do. Anthropic's training decisions shaped what connections Claude could find and what connections it could not. Anthropic's safety constraints determined the boundaries of the collaboration. Anthropic's pricing determined its accessibility. Anthropic's terms of service determined the legal framework within which the collaboration occurred. At no point did Segal — despite being, by his own account, an engaged and sophisticated user — have any formal mechanism to influence these determinations. He was a customer. He was a collaborator with the tool. He was not a participant in the governance of the institution that produced it.
Morozov would observe that the book's prescriptions reproduce this asymmetry. The Orange Pill calls for dams — structures that individual builders and organizations can construct to manage their relationship with the AI tools. These dams are behavioral and organizational: norms for when to use the tool and when to step away, practices for maintaining human capacities the tool might atrophy, cultural values that prioritize depth alongside speed. The prescriptions are valuable. They take the human stakes of the AI transition more seriously than nearly any other voice in the technology discourse. They represent, within the solutionist framework, the most responsible position available.
But they remain prescriptions for adaptation within the existing institutional structure, not for changing it. The dams are built by beavers managing their relationship with the river. They are not built by citizens demanding accountability from the institutions that control the river's flow. The difference is the difference between adaptation and governance — between adjusting one's behavior within a system whose rules are given and participating in the determination of the rules themselves.
Morozov's 2024 argument for "ecological reason" — a mode of intelligence that "stresses both indeterminacy and the interactive relationship between ourselves and our environments" — provides a framework for understanding what a genuinely non-solutionist response to the AI moment might look like. Ecological reason does not treat intelligence as a resource to be optimized. It treats intelligence as an emergent property of the relationship between an organism and its environment — a property that cannot be extracted from the relationship without destroying the thing that makes it intelligent. The "intelligence amplification" that Morozov has endorsed as his preferred framework for thinking about AI — "relying on technology to make us, the human decision-makers, smarter" rather than to replace human judgment with algorithmic processing — is ecological in precisely this sense. It preserves the relationship between the human thinker and her environment rather than substituting the machine's processing for the human's engagement.
But ecological reason, as Morozov himself has acknowledged, requires institutional support. It cannot flourish in an environment structured by the incentives of the AI industry, because those incentives reward instrumental reason — the rapid, efficient, goal-directed processing that AI embodies — and marginalize the slower, more uncertain, more context-sensitive engagement that ecological reason requires. The institutional support must come from outside the industry, from democratic governance mechanisms that create space for ecological reason within an environment that would otherwise optimize it away. This is not a technical challenge. It is a political project — and the distinction between the two is the distinction between solutionism and its alternative.
The solutionist mirror reflects back not a distorted image but a truthful one — an image of a culture that can diagnose its own condition with extraordinary precision and cannot translate the diagnosis into political action, because the framework within which it operates converts every political question into a design challenge and every governance problem into an engineering opportunity. The book diagnoses the compulsion. The book cannot build the political institutions that would address the structural conditions producing the compulsion. The diagnosis is personal. The condition is structural. And the gap between the two — between what self-awareness can see and what political action can change — is the gap that Morozov's career has been devoted to making visible.
The mirror also reveals something about the nature of intellectual collaboration itself in the AI age. Segal describes moments when Claude found connections he had not seen — when the tool bridged domains in ways that genuinely advanced his argument. These moments are real, and their reality is what makes the solutionist mirror so disorienting. The tool is not merely efficient. It is sometimes genuinely generative, in the sense that it produces outputs the human collaborator could not have produced alone. The collaboration creates something that neither party — neither the human thinker nor the machine processor — could have created independently.
Morozov would not deny this. He would observe that the creation occurs within a framework whose terms are not set by the collaboration but by the institution that provides one of the collaborators. The machine's contributions are shaped by its training, which is shaped by corporate decisions about data selection, model architecture, and optimization objectives. The human's contributions are shaped by the machine's capabilities, which constrain and direct the human's thinking in ways that are often invisible to the human. The collaboration is real. Its terms are not freely negotiated. And the question of who governs the terms — who determines what the machine can contribute, what connections it can make, what frameworks it can operate within — is a political question that the celebration of the collaboration consistently defers.
Morozov has insisted, with the persistence of someone who recognizes that the point must be made repeatedly because the forces arrayed against it are structural rather than merely rhetorical, that the alternative to solutionism is not the rejection of technology but the construction of political institutions capable of governing it. Not institutions that prevent innovation but institutions that ensure innovation serves democratic values. Not institutions that reject AI but institutions that subject AI to the same democratic accountability that democratic societies apply to every other form of concentrated power.
The construction of these institutions is the work that lies ahead. It is not work that any individual builder can perform alone, however self-aware, however honest, however genuinely committed to the human dimensions of the technology she builds with and writes about. It is collective work — the work of citizens who understand that the tools reshaping their cognitive lives are products of institutions they have the right and the responsibility to govern. The solutionist mirror shows us our condition with painful clarity. The question is whether we will respond to the clarity with better tools or with better politics. Morozov has spent his career arguing for the latter. The argument has never been more urgent, and it has never been more difficult to hear above the noise of solutions.
---
The ideology I could not name was the one I was swimming in.
That is the sentence I keep returning to after months inside Morozov's framework. Not because it is elegant — it is not — but because it describes, with uncomfortable precision, the condition I have been in for most of my professional life. I have been a solutionist. I have been a very good solutionist. I have built products that converted human experiences into technical problems and deployed solutions at scale, and I have measured my success by the efficiency of the conversion and the reach of the deployment. When I described the imagination-to-artifact ratio in The Orange Pill — the distance between an idea and its realization, collapsing to the width of a conversation — I was celebrating the apotheosis of the very ideology Morozov diagnoses. The collapse I celebrated is real. What collapsed with it is the question Morozov will not let me stop asking: should every experience that can be converted into a problem actually be converted?
I do not have a clean answer. Morozov would say that the absence of a clean answer is itself the point — that the demand for clean answers is a solutionist reflex, the conversion of intellectual uncertainty into a problem requiring resolution. The discomfort of sitting with the question, of allowing it to reshape me without resolving it, is the specific form of cognitive work that solutionism eliminates and that ecological reason requires.
What I can say is that the political dimension of Morozov's critique has changed how I think about the dams I called for in The Orange Pill. The behavioral dams — the norms and practices that individual builders and organizations can construct — remain necessary. But they are not sufficient. The structural conditions that produce the compulsion I described, the four-in-the-morning sessions, the inability to stop, the appetite that grows with what it feeds on — these conditions are not personal failings correctable by better self-management. They are products of an institutional arrangement in which the tools that reshape my cognition are governed by corporations whose accountability extends to their shareholders, not to me.
The question Morozov leaves me with is the one I now ask in every meeting, every product decision, every late-night session with Claude: who governs this? Not who built it. Not who uses it. Who governs the terms under which it operates, and by what process, and with what accountability? These are not questions the tool can answer. They are not questions better engineering can resolve. They are political questions, and they require the thing I have spent my career avoiding: the slow, contentious, imperfect work of collective decision-making about the institutions that shape our shared cognitive lives.
I am still a builder. I will keep building. But I am trying, with genuine difficulty and imperfect success, to build something I have never built before — not a product, not a platform, not a dam, but the habit of asking, before I build anything else: is this a problem, or is this an experience? And if it is an experience, what would I lose by solving it?
The question does not have a technical answer. That is what makes it worth asking.
— Edo Segal
Evgeny Morozov has spent fifteen years asking the question Silicon Valley cannot afford to hear: what happens when a culture treats every human experience as a deficiency awaiting its technological fix? In the age of AI, his answer has never been more urgent. The tools work. The capability is real. But the ideology underneath — the reflexive conversion of difficulty into friction, friction into cost, cost into market opportunity — is quietly reshaping what it means to think, to deliberate, to govern ourselves. This volume brings Morozov's framework to bear on the arguments of The Orange Pill, revealing the political economy hidden beneath the empowerment narrative. When the builder celebrates her new capability, who governs the infrastructure she depends on? When friction collapses, what was the friction protecting? When every question becomes a prompt, what happens to the deliberation that democracy requires? These are not questions better engineering can answer. That is precisely the point. — Evgeny Morozov

A reading-companion catalog of the 18 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Evgeny Morozov — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →