By Edo Segal
The word I kept using was "democratization." I used it in pitch decks. I used it on stage. I used it in The Orange Pill when I described the developer in Lagos, the student in Dhaka, the engineer in Trivandrum. I meant it every time. AI lowers the floor of who gets to build. The expansion is real. The moral weight is real.
Then I read Arturo Escobar, and I realized I had been confusing two completely different operations.
Giving someone a tool is not the same as giving them power over the tool. Distribution is not democratization. Access is not governance. The developer in Lagos appears in my book three times. She never speaks. I invoked her as evidence for my argument. I did not ask her to shape it. Escobar names this pattern with a precision that made me defensive on first reading and silent on second. The postwar development apparatus did exactly the same thing for seventy years: invoked the intended beneficiary as proof that the intervention was necessary, while never including her in the conversation about what the intervention should be.
I did not intend to replicate that structure. The structure replicated itself through me.
Escobar is an anthropologist who spent decades studying what happens when a powerful system arrives in a community, positions that community as deficient, and offers its own tools as the remedy. He watched it happen with development aid. He watched it happen with structural adjustment. And his analytical framework maps onto the AI transition with a fidelity that should make every builder uncomfortable.
This is not an anti-technology argument. Escobar does not ask anyone to stop building. He asks who designs the tools, whose knowledge they encode, whose problems they are optimized to solve, and who governs the systems that reshape billions of lives. These are questions the technology discourse does not ask, because the discourse is structured to make them invisible.
The Orange Pill argues that AI is an amplifier. Escobar made me see that the amplifier has a frequency response. It carries certain signals at full volume and attenuates others. It renders the world as text, as structured argument, as fluent English prose. Whatever does not fit that rendering gets filtered out. I wrote my book with Claude, and I was honest about the collaboration. But Escobar taught me to ask what could not be built within it.
This book will challenge you. It challenged me. The fishbowl I described in The Orange Pill turns out to have walls I could not see from inside. Escobar pressed my face against the glass.
-- Edo Segal ^ Opus 4.6
Arturo Escobar (1952–present) is a Colombian-American anthropologist and one of the founding figures of postdevelopment theory. Born in Manizales, Colombia, he studied chemical engineering before turning to international development and eventually earning his PhD in philosophy, policy, and planning at the University of California, Berkeley. His landmark 1995 book Encountering Development: The Making and Unmaking of the Third World demonstrated that the postwar development apparatus did not discover underdevelopment but invented it, constructing entire populations as deficient through frameworks of measurement that rendered local knowledge systems invisible. His subsequent work, particularly Territories of Difference (2008) and Designs for the Pluriverse (2018), extended this analysis into ontological territory, arguing that design practices construct worlds rather than merely serving them and advocating for a "pluriverse"—a world in which many worlds fit. A longtime professor at the University of North Carolina at Chapel Hill, Escobar has spent decades collaborating with Afro-Colombian and indigenous social movements in Colombia's Pacific region, grounding his theoretical work in the lived practices of communities navigating the intersection of globalization, ecological crisis, and cultural survival. His recent writings on the "incomputable" and on what he calls "terricide"—the systematic destruction of the conditions for life—have extended his framework directly into the domain of artificial intelligence and digital technology.
On January 20, 1949, Harry Truman stood before the American people and divided the world in two. In his inaugural address, the president declared that "more than half the people of the world are living in conditions approaching misery," that "their economic life is primitive and stagnant," and that "their poverty is a handicap and a threat both to them and to more prosperous areas." With those sentences, Truman did not describe a pre-existing condition. He constructed one. He brought into being a new category of global existence — "underdevelopment" — and simultaneously positioned the United States as the agent capable of remedying it. Two billion people who had previously understood themselves through the categories of their own cultures, their own knowledge systems, their own conceptions of well-being and sufficiency, were reclassified in a single speech as lacking, deficient, and in need of intervention.
Arturo Escobar's intellectual project, from Encountering Development in 1995 through Designs for the Pluriverse in 2018 to his recent writings on "terricide" and the incomputable, has been dedicated to a single, devastating insight: development discourse did not discover underdevelopment. It invented it. The apparatus of development — the World Bank, the International Monetary Fund, the bilateral aid agencies, the army of consultants and experts and evaluators — did not arrive in the Global South to address problems that existed independently of the apparatus. The apparatus produced the problems it claimed to solve, by imposing a framework of measurement that rendered entire civilizations legible only through the vocabulary of deficit. GDP per capita. Literacy rates. Caloric intake. Life expectancy. Each metric illuminated one dimension of human existence while casting every other dimension into shadow. The farmer who maintained twenty varieties of traditional rice, who governed water resources through customary law refined over centuries, who understood the relationship between soil health and seasonal rainfall with a precision that no visiting agronomist could match — this farmer appeared in the development framework as a subsistence producer operating below optimal yield. The framework did not lie about what it measured. It lied about what measurement captured.
The discourse of artificial intelligence that crystallized in the mid-2020s replicates this structure with a fidelity that Escobar's analytical apparatus is uniquely positioned to detect. The vocabulary has been modernized. The institutional actors wear different logos. The timeline has been compressed from decades to months. But the underlying grammar — the discursive architecture that constructs a population as deficient, positions a technology as the remedy, and presents the distribution of that technology as liberation — remains intact.
Consider the figure who appears repeatedly in the technology literature of this period, including in Edo Segal's The Orange Pill: the developer in Lagos, the student in Dhaka, the builder in the Global South who possesses ideas and intelligence but lacks access to the tools that would allow her to realize her potential. The figure is invoked with genuine moral seriousness. The argument is that AI tools — Claude Code, large language models, the entire apparatus of generative AI — lower the floor of who gets to build, expanding creative capability to populations that were previously excluded by the cost and complexity of technical implementation. The argument is not false. The tools do expand capability. The expansion is real, measurable, and in many individual cases genuinely transformative.
But the argument operates within a discursive structure that Escobar spent his career anatomizing. The developer in Lagos appears in this discourse not as a knower, not as someone who possesses knowledge that the AI system lacks, not as a participant in a conversation about what technology should be and whom it should serve, but as a figure of lack awaiting remedy. She lacks access. She lacks tools. She lacks the computational infrastructure that would allow her to participate in the global knowledge economy on terms defined by that economy's existing architects. The deficiency is not discovered empirically. It is constructed discursively, by a framework that measures human capability against the standard of what AI tools provide and finds everyone who does not possess those tools to be, in the precise sense that Truman's speech deployed, underdeveloped.
The construction is not malicious. Segal's concern for the developer in Lagos is sincere, as was Truman's concern for "the people living in conditions approaching misery." The sincerity does not alter the discursive effect. Development practitioners throughout the postwar era were frequently animated by genuine compassion. They built hospitals. They vaccinated children. They constructed roads that connected isolated communities to national markets. The benefits were real, and Escobar has never denied them. What Escobar demonstrated was that the benefits were embedded in a system whose total effect exceeded, and frequently contradicted, the sum of its individual interventions. The hospital was real. The displacement of the indigenous healing practice that the hospital's presence rendered illegitimate was also real. The road was real. The integration of the subsistence community into a market economy that dissolved its self-sufficiency was also real. The vaccination was real. The epistemic authority that the vaccination program conferred on Western biomedicine, at the expense of local health knowledge that addressed dimensions of well-being the biomedical framework did not recognize, was also real.
The AI transition reproduces this pattern. The tool is real. Claude Code does enable a person with an idea and the capacity to describe it in natural language to produce a working software prototype in hours. Segal's account of the Trivandrum training — twenty engineers who experienced a twenty-fold increase in productive capacity over five days — is credibly documented and genuinely remarkable. But the tool is embedded in a system, and the system's total effect is not captured by the sum of its individual productivity gains. The system includes the training data that encodes a specific epistemology. It includes the pricing structure that creates new dependencies. It includes the linguistic requirements that privilege English-language competence. It includes the workflow assumptions that optimize for knowledge-economy production. It includes the institutional architecture of AI companies whose governance is concentrated in a specific geographic and class location. And it includes the discourse — the democratization narrative — that presents the distribution of these tools as the expansion of human freedom rather than as the latest iteration of a five-century pattern in which the center distributes its technologies to the periphery and calls the distribution progress.
Escobar's concept of the "development apparatus" provides the analytical instrument for seeing this system whole. The development apparatus was not a conspiracy. It was a formation — a convergence of institutions, discourses, practices, and professional identities that produced a specific way of relating to the Global South. The formation was maintained not by coercion but by what Michel Foucault called "the production of truth": the generation of knowledge that constituted the objects it described. The World Bank did not merely study poverty. It produced the categories through which poverty became visible, the metrics through which it was measured, and the interventions through which it was addressed. The categories, the metrics, and the interventions formed a self-reinforcing system: the categories defined what counted as a problem, the metrics confirmed the problem's existence, and the interventions addressed the problem as defined, producing data that confirmed the categories' validity, generating the next round of problems, metrics, and interventions.
The AI apparatus operates identically. Adoption rates define success. Productivity multipliers confirm the technology's value. The expansion of the user base generates data that improves the models, which increases the productivity multipliers, which accelerates the adoption rates, which confirms the framework's validity. Within this loop, the only relevant question is how to distribute the tools more widely and more equitably. The question of whether the tools encode assumptions that may conflict with the knowledge systems, the governance structures, and the life-worlds of the communities to which they are distributed — this question does not arise, because the loop does not contain the categories that would make it visible.
Segal, to his credit, is more reflexive than the average technology evangelist. The Orange Pill engages seriously with critics — with Byung-Chul Han's diagnosis of the achievement society, with the Berkeley researchers' findings on work intensification, with the historical precedents of technology transitions that produced both expansion and displacement. The book acknowledges that AI tools require infrastructure, connectivity, and linguistic competence that are not universally available. It calls for institutional responses — education reform, regulatory frameworks, organizational adaptation — that would distribute the transition's costs more equitably. These acknowledgments distinguish Segal from the triumphalists who see only the upside and dismiss all criticism as Luddism.
But the acknowledgments operate within the framework. They do not question the framework itself. They ask how the AI transition can be managed more equitably, not whether the categories through which the transition is understood — productivity, efficiency, creative capability, the imagination-to-artifact ratio — are the right categories for evaluating what communities in the Global South actually need. This is precisely the distinction that Escobar's work insists upon: the difference between alternative development and alternatives to development. Alternative development asks how the existing model can be reformed. Alternatives to development ask whether the model itself is adequate to the plurality of human purposes and human ways of being.
The AI discourse forecloses the second question by answering the first. When the conversation about AI and the Global South is framed as a conversation about access — who has the tools, who lacks them, how can the distribution be improved — the conversation has already accepted that the tools, as currently designed, constitute the relevant form of capability, and that the relevant form of progress consists in distributing them more widely. The developer in Lagos who does not use Claude Code is positioned, within this framework, the way the subsistence farmer was positioned within the development framework: as someone who has not yet received the benefit. The possibility that she might have her own tools, her own knowledge, her own criteria for evaluating technological adequacy — this possibility is foreclosed by the framework's definition of what counts as capability.
Escobar wrote, regarding development, that "development planning was not only a problem to the extent that it failed; it was a problem even when it succeeded, because it so strongly set the terms for how people in poor countries could live." The AI democratization narrative risks the same verdict. The tools may succeed. The adoption may expand. The productivity may multiply. And the terms may so strongly be set — the terms of what counts as building, what counts as intelligence, what counts as creative capability — that the populations who receive the tools find themselves living within a framework they did not design, pursuing purposes they did not define, measured by metrics that do not capture what matters most to them.
The question is not whether AI tools are useful. Many development interventions were useful. The question is whether the discourse that accompanies the distribution creates the conditions for genuine self-determination — the capacity of communities to define their own problems, design their own solutions, and evaluate the adequacy of external tools by their own criteria — or whether it preempts self-determination by defining the problems, prescribing the solutions, and establishing the criteria in advance.
Truman's speech did not end with the diagnosis of misery. It ended with a prescription: American technology, American expertise, American institutional models, distributed to the "underdeveloped" world for its own benefit. The prescription was not optional. It was presented as the only rational response to the condition it had diagnosed. The AI democratization narrative performs the same closure. The condition has been diagnosed: billions of people lack access to AI tools. The prescription follows: distribute the tools. The prescription is presented as the only rational response. The possibility that different communities might define the condition differently, might identify different problems as primary, might design different tools to address those problems, might evaluate technological adequacy by criteria that the AI market does not measure — this possibility is excluded not by argument but by the structure of the discourse itself.
Escobar's postdevelopment framework does not reject technology. It rejects the assumption that one technology, designed in one place, optimized for one set of purposes, evaluated by one set of criteria, constitutes a universal solution to the diverse challenges of human existence. The framework insists on plurality — on the recognition that there are many valid ways of knowing, many valid ways of building, many valid ways of living, and that the suppression of this plurality in the name of a single model of progress is not liberation but a particularly effective form of incorporation.
The chapters that follow apply this framework to the specific mechanisms through which the AI discourse operates: the construction of democratization as distribution, the encoding of epistemology in training data, the displacement of non-Western knowledge systems, the possibilities and limits of resistance, and the horizon of what a genuinely pluriversal AI might look like. The analysis engages directly with The Orange Pill's arguments, not as straw men but as the strongest available version of the position that the postdevelopment framework challenges. The challenge is not dismissal. It is the insistence that the conversation be wider than the framework that currently contains it — wide enough to include the voices, the knowledge, and the purposes of those whom the framework positions as beneficiaries rather than as participants.
---
She appears three times in The Orange Pill. Each appearance performs a specific rhetorical function. In none of them does she speak.
The developer in Lagos is invoked to demonstrate that AI capability extends beyond the privileged centers of the Global North. She is invoked to illustrate the moral weight of expanding who gets to build. She is invoked to counter the position of those who question AI's value from positions of comfort — to suggest that criticism of the democratization narrative is itself a form of privilege, available only to those who already possess the capabilities that the tools would provide. Each invocation is made with genuine moral seriousness. And each invocation converts a subject into evidence, a knower into a beneficiary, an agent into an illustration.
This conversion is not unique to Segal. It is the foundational discursive operation of the development apparatus, and Escobar identified it with characteristic precision in Encountering Development: the intended beneficiary of the intervention appears in development texts not as a participant in the design of the intervention but as evidence of the intervention's necessity and, subsequently, of its success. The World Bank report on rural poverty features photographs of smiling farmers holding improved seed varieties. The report does not include the farmers' analysis of what the seed program means for their community, their assessment of how it restructures land tenure, or their evaluation of what is gained and what is lost. The farmer appears as a face, not as a theorist. Her knowledge is invisible because the framework does not contain the categories that would make it visible.
Escobar's concept of "discursive subalternity" names this mechanism with philosophical precision. The subaltern, in the postcolonial tradition from which Escobar draws, is not simply the person who is economically disadvantaged. The subaltern is the person whose exclusion from the discourse of power is organized by the discourse itself. The exclusion is structural, not intentional. The development planner who writes a report about rural poverty in West Africa does not intend to silence the farmers whose lives the report describes. The planner writes within a genre that does not include the farmers' voice because the genre was designed to convey expert knowledge to institutional decision-makers, and within the genre's conventions, the farmer is an object of study rather than a subject of analysis. The conventions are not questioned because they are invisible — they constitute the water in which the planner swims, the glass walls of the fishbowl that Segal himself, borrowing from a different metaphorical tradition, recognizes as the condition of all situated knowledge.
The developer in Lagos occupies the same structural position in The Orange Pill that the subsistence farmer occupies in the development report. She is present as a figure — a moral argument given human form — but absent as a voice. Segal does not claim to speak for her. He invokes her experience to support an argument about the importance of expanding access to creative tools, and the argument is morally serious. But the discursive structure within which the argument operates produces effects that the author may not recognize. The Lagos developer becomes a token of democratization: her presence in the argument legitimizes the claim that AI tools are expanding access to creative capability, while her silence within the argument reproduces the asymmetry that the claim of democratization purports to overcome.
What does the developer in Lagos know? This is the question that the democratization discourse does not ask, and it is the question that Escobar's framework insists upon. She knows what it means to build software where electricity is unreliable, where the grid fails at two in the afternoon without warning and the UPS battery gives you eighteen minutes to save your work. She knows what it means to code in a second or third language, translating concepts between knowledge systems with each keystroke, navigating the gap between the problems her community faces and the problems that the AI tool was optimized to solve. She knows the texture of that gap — its specific dimensions, its daily frustrations, its occasional impossibilities — with an intimacy that no user in San Francisco possesses, because no user in San Francisco experiences the gap. The tool was designed for the San Francisco user's reality. The gap is invisible from inside that reality, the way water is invisible to the fish.
This knowledge is not merely experiential. It is analytical. It constitutes a perspective on the AI transition that is, in specific and identifiable respects, more comprehensive than the perspective available from the tool's home context. The developer in Lagos knows something about AI tools that the developer in San Francisco does not: she knows what happens when the tool encounters a world it was not designed for. She knows which assumptions break, which workflows fail, which categories produce nonsense when applied to problems shaped by a different history. This is diagnostic knowledge — the knowledge of the mechanic who understands the engine's failure modes because she has watched it fail under conditions the engineer who designed it never imagined.
Escobar's research with the Proceso de Comunidades Negras in Colombia's Pacific region documented precisely this kind of knowledge. The Afro-Colombian communities of the Pacific coast possessed a form of territorial knowledge — an understanding of the relationships between forest, river, mangrove, and ocean ecosystems — that was invisible to the development frameworks through which their territory was administered. The development apparatus classified the Pacific coast as "underdeveloped" because it had low GDP, limited infrastructure, and minimal integration into national markets. The classification was accurate on its own terms: GDP was low, infrastructure was limited, market integration was minimal. But the classification missed everything that mattered to the communities that lived there: the ecological knowledge that sustained fisheries without depleting them, the governance arrangements that managed common-pool resources without enclosing them, the cultural practices that maintained social cohesion without requiring state administration. The knowledge was invisible because the framework of measurement could not detect it, and the invisibility was the mechanism through which the framework produced the deficiency it claimed to address.
The AI discourse performs the same operation through a different mechanism. The Lagos developer's knowledge of what AI tools cannot do — her expertise in the gap between the tool's assumptions and her reality — does not appear in the discourse because the discourse is organized around what AI tools can do. The metrics of the discourse are adoption rates, productivity multipliers, lines of code generated, applications shipped. These metrics capture the tool's capabilities. They do not capture the user's knowledge of the tool's limitations, and the asymmetry between what is measured and what matters is the space within which the discourse operates.
Segal describes the Trivandrum training episode — twenty engineers who experienced a twenty-fold increase in productive capacity over five days — as evidence of AI's democratizing power. The description is empirically grounded and the transformation was clearly real. But the narrative structure follows the grammar of the development intervention with precision that Escobar's framework renders visible. The expert arrives from the metropole. The technology is demonstrated. The local population discovers capabilities they did not know they possessed. The transformation is celebrated. The local participants appear as beneficiaries of an external gift rather than as agents who might have their own criteria for evaluating what constitutes meaningful technological transformation.
The Trivandrum engineers are not, in fact, members of a marginalized community. They are knowledge workers employed by a global technology company, already embedded in the circuits of capital and information that constitute the network society. Their transformation, while real, does not demonstrate the democratization of capability to populations that the existing system excludes. It demonstrates the intensification of capability within populations that the system already includes. The distinction matters because the democratization narrative draws its moral force from the implication that AI tools will reach beyond the existing system to include the excluded. The Trivandrum case supports a different claim: that AI tools increase the productivity of workers who are already integrated into the global knowledge economy. This is significant, but it is not the claim the narrative makes, and the gap between the two claims is the space within which the development discourse operates.
The question that Escobar's framework poses is not whether AI tools are useful for the Lagos developer. It is whether the discourse that accompanies the tools creates the conditions for the Lagos developer to articulate her own purposes, to demand tools configured to serve those purposes, and to participate in the governance of the systems that shape her technological environment — or whether the discourse preempts that articulation by defining the purposes in advance. Development discourse preempted articulation by defining development as economic growth and measuring progress in GDP. Communities that pursued other kinds of well-being — forms of prosperity rooted in ecological balance, cultural vitality, or communal solidarity — were rendered invisible by the metrics. The AI discourse risks the same preemption by defining democratization as access to tools and measuring progress in adoption rates and productivity multipliers.
In his 2025 essay "Against Terricide," Escobar identified the agents of this preemption directly: "the techno-patriarchs of new technologies are gaining the upper hand" in the work of "imagining the future." The phrase is deliberately provocative, but its analytical content is precise. The future is being imagined — the categories are being set, the metrics are being chosen, the framework is being constructed — by a specific class of actors located in a specific institutional context, and the imagining is presented as universal when it is in fact particular. The Lagos developer's imagining of the future — shaped by her knowledge, her community's priorities, her experience of the gap between the tool's assumptions and her reality — is not part of the conversation that determines how the AI transition is understood, governed, and directed.
She appears in the conversation as evidence. She does not appear as an interlocutor.
The postdevelopment framework does not resolve this exclusion by prescribing a technique for including excluded voices. The exclusion is structural, maintained by the institutional arrangements through which knowledge about the AI transition is produced: the venture capital firms that fund AI development, the research labs that publish the papers, the conferences that convene the conversations, the media outlets that shape public understanding, the policy forums that draft the regulations. Each of these institutions operates within the framework that Escobar's analysis identifies, and each reproduces the framework's exclusions not through deliberate choice but through the operation of conventions so deeply internalized that they constitute the conditions of possibility for the institution's work.
What would it mean to include the Lagos developer as an interlocutor rather than as evidence? It would mean journals that publish analyses written from the perspective of the Global South rather than about it. It would mean conferences that include speakers from communities navigating the gap between AI tools and local realities rather than speakers who study those communities from institutional distance. It would mean funding mechanisms that support research conducted by communities about their own experience of the AI transition. And it would mean a recognition that the discourse of AI democratization may itself be an obstacle to the democratization it advocates, because it constructs the conversation's terms in ways that exclude the voices whose inclusion would make the conversation genuinely democratic.
The Lagos developer deserves better than tokenization, however well-intentioned. She deserves the analytical status of a knower — a participant in the conversation whose knowledge shapes the conversation's direction rather than merely illustrating its conclusions.
---
The term performs a specific ideological function that its users do not acknowledge and may not recognize. When Silicon Valley describes the expansion of AI tool access as "democratization," the term imports the moral authority of political democracy — the expansion of participation, voice, and self-governance — while delivering something structurally different: the distribution of a product. The conflation is not innocent. It forecloses a specific set of questions by answering them in advance, and the questions it forecloses are precisely the questions that determine whether the expansion of access translates into the expansion of power or merely into the expansion of dependence.
Escobar's postdevelopment framework provides the analytical instruments for distinguishing between these two operations with the precision that the technology discourse lacks. Political democratization, in its substantive sense, refers to the expansion of participation in governance: the capacity of citizens to shape the rules under which they live, to contest decisions that affect their communities, to exercise voice in the institutions that structure their existence. The farmer who votes in a local water authority election is participating in democratic governance. The farmer who receives an irrigation pump from an NGO is receiving a distribution. Both may improve the farmer's life. Only one expands the farmer's power.
Technology "democratization" delivers the pump. The user of Claude Code gains the capacity to produce software prototypes through natural-language conversation. This is a real and significant expansion of productive capability. But the user does not gain the capacity to shape the design of Claude Code, to influence the composition of its training data, to participate in the governance of Anthropic, to negotiate the pricing structure under which the tool is provided, or to redirect the development roadmap toward problems that her community defines as priorities. She is a user, not a citizen. She has access, not governance rights. And the discourse that describes her access as democratization performs the crucial ideological work of legitimizing the arrangement by associating it with a value — democracy — that the arrangement does not embody.
The structural adjustment programs of the 1980s and 1990s demonstrate what happens when distribution is mistaken for democratization at civilizational scale. The programs were presented as the democratization of markets: the removal of trade barriers, the privatization of state enterprises, the deregulation of financial systems. The language of freedom was deployed with systematic consistency. Structural adjustment would liberate entrepreneurs from the dead hand of state control. It would empower consumers through market competition. It would democratize economic life by replacing bureaucratic allocation with the impersonal mechanism of price. The intentions, in some quarters, were genuinely progressive. The results were not. Structural adjustment did not democratize economic life. It redistributed economic power from states that, however imperfectly, represented domestic constituencies to international institutions and transnational corporations that represented their shareholders. The farmers, workers, and small producers who were supposed to be liberated by market "democratization" found themselves competing in markets whose rules were set by actors they could not influence, using financial instruments whose terms they could not negotiate, bearing the costs of adjustments whose benefits flowed to the already advantaged.
Escobar documented this process with ethnographic specificity in Colombia's Pacific region, where structural adjustment policies accelerated the integration of Afro-Colombian communities into national and international markets, disrupting the territorial governance arrangements that had sustained community autonomy for generations. The communities did not experience market integration as liberation. They experienced it as enclosure — the systematic incorporation of their territory, their resources, and their labor into a system whose purposes were defined elsewhere and whose benefits were captured elsewhere. The language of democratization accompanied the enclosure, rendering it invisible by naming it freedom.
The AI transition reproduces the structure. The Trivandrum engineers whom Segal trained experienced a genuine expansion of productive capacity. The expansion is documented, measurable, and impressive. But the expansion operated within terms the engineers did not set. The tools they used were designed by Anthropic, a company headquartered in San Francisco and governed by its founders and investors. The training data that shaped the tools' capabilities was composed according to Anthropic's internal priorities. The pricing structure was determined by Anthropic's business model. The development roadmap — the decisions about which capabilities would be enhanced, which languages would be supported, which workflows would be optimized — was set by Anthropic's product team in conversation with Anthropic's most commercially significant users. The Trivandrum engineers were not among those users. They were beneficiaries of a distribution, not participants in a governance.
This analysis does not require imputing malicious intent to Anthropic or to any other AI company. Escobar's framework is not a conspiracy theory. It is a structural analysis of how discursive formations produce specific distributions of power regardless of the intentions of individual actors. Anthropic's leadership has articulated a commitment to responsible AI development that is, by the standards of the industry, unusually thoughtful. The commitment is genuine. But genuine commitment to responsible distribution is not the same as genuine democratization, because distribution and democratization are structurally different operations, and the first cannot become the second through improvements in the distributor's intentions.
What would genuine democratization of AI look like? Escobar's framework, developed through decades of engagement with social movements that demanded alternatives to development rather than alternative development, suggests several dimensions.
First, it would involve participation in design. Not user feedback surveys, not beta testing programs, not advisory boards composed of institutional representatives from the Global South, but genuine participation in the foundational decisions that shape what the tools are and what they do. Which knowledge systems are represented in the training data. Which languages are supported at what level of capability. Which workflows are optimized and for which contexts. Which problems the tool is designed to solve. These decisions are currently made by AI companies' internal teams, informed by market signals that privilege commercially significant user populations. Genuine democratization would redistribute the decision-making power to include the communities whose lives the tools affect.
Second, it would involve data sovereignty. The training data on which AI models are built includes knowledge produced by communities that did not consent to its inclusion, do not benefit from its use, and exercise no governance over the systems that incorporate it. Indigenous ecological knowledge, traditional cultural expressions, locally produced content in languages that the models process but whose speakers the models' governance structures do not include — all of this enters the training pipeline as raw material rather than as intellectual contribution. Genuine democratization would establish frameworks that recognize communal knowledge as a contribution entitled to recognition, compensation, and governance rights. The precedents exist: indigenous intellectual property movements have developed legal instruments for the protection of traditional knowledge in other domains. The extension of these instruments to the AI domain is technically feasible. It is politically absent because the current distribution of power does not require it.
Third, it would involve governance pluralism. The governance of AI development is currently concentrated in corporate boards, executive teams, and the investors who fund them. This concentration reflects the structure of the capitalist enterprise, in which ownership confers governance rights. But the structure is not the only possible structure. Cooperative models, participatory governance arrangements, multi-stakeholder platforms, regulatory frameworks that require community representation in AI governance — each of these represents a departure from the current concentration of decision-making power, and each has precedents in other domains. Community forestry governance, participatory budgeting, cooperative ownership of telecommunications infrastructure in rural communities — the models exist. Their application to AI governance is a political choice, not a technical impossibility.
Fourth, and most fundamentally, it would involve what Escobar has called "autonomy in relation": the capacity of communities to define their own purposes and pursue them through their own means while maintaining productive relationships with other communities and with the global institutions that provide technological resources. Autonomy in relation is not isolation. It is the capacity for self-determination within a framework of mutual engagement. A community that possesses autonomy in relation can adopt AI tools when they serve community-defined purposes and decline them when they do not, without being penalized for the declination through loss of access to markets, services, or institutional support. The community can design its own tools, govern its own data, and evaluate its technological choices by its own criteria, while participating in wider networks of knowledge sharing and resource exchange.
The distance between this vision and the current reality is the distance between distribution and democratization. The Orange Pill moves across this distance without fully recognizing its extent. Segal advocates for institutional responses to the AI transition — education reform, regulation, organizational adaptation — that would distribute the benefits more equitably. These responses are valuable. They are also insufficient, because they address the terms of distribution without questioning the terms of governance. Who designs the tools. Who composes the training data. Who sets the prices. Who determines the development roadmap. Who decides what problems AI should solve and whose criteria of success it should satisfy. These questions constitute the substance of genuine democratization, and they remain unanswered — not because the answers are technically difficult but because the current distribution of power does not require them to be asked.
Escobar's postdevelopment framework does not provide a blueprint for the redistribution of AI governance. It provides something more fundamental: the analytical categories that make the current distribution visible as a distribution rather than as a natural feature of the technological landscape. The visibility is the precondition for political action, because power arrangements that are experienced as natural cannot be contested. Only power arrangements that are seen as constructed — as the product of specific decisions made by specific actors in specific institutional contexts — can be challenged by alternative constructions. The discourse of democratization conceals the construction by naturalizing the distribution. The postdevelopment framework reveals it.
The distinction between distribution and democratization is the most important analytical contribution that Escobar's framework offers to the conversation about AI. Everything else follows from it: the question of whose knowledge is encoded in training data, the question of whose problems the tools are designed to solve, the question of whose criteria determine whether the tools are adequate, and the question of what lies beyond distribution toward a genuinely pluriversal technology. Each of these questions becomes visible only after the conflation between distribution and democratization has been dissolved, and the dissolution requires the kind of structural analysis that Escobar's work provides.
---
In the chapter of Designs for the Pluriverse that has become foundational for critical technology studies, Escobar articulated a principle that the AI industry has not yet confronted: "In designing tools (objects, structures, policies, expert systems, discourses, even narratives), we are creating ways of being." The principle is ontological, not merely functional. It means that a tool does not simply help its user accomplish a task. A tool constructs a world — it determines what objects exist in the user's environment, what relationships obtain between those objects, what actions are possible, and what outcomes are desirable. The hammer constructs a world of nails. The spreadsheet constructs a world of rows and columns. The large language model constructs a world in which knowledge is propositional, language is the primary medium of intelligence, and the adequate response to any question is a fluent, confident, text-based answer.
This last construction is the one that Escobar's framework identifies as the most consequential and the least examined feature of the AI transition. When researchers at CHI 2025 — the premier Human-Computer Interaction conference — tested four major language models by asking them to define "ontology," every model produced a definition aligned with Western philosophical traditions. None integrated the discourse on ontologies in the plural — the body of work on multiple ontologies, pluriversality, and what anthropologists call the "ontological turn" that has reshaped the humanities and social sciences over the past two decades. The models did not suppress this knowledge deliberately. The knowledge was absent from the outputs because it was insufficiently represented in the training data, or insufficiently weighted in the evaluation criteria, or both. The result was that the models reproduced a specific philosophical tradition as if it were universal — precisely the operation that Escobar's work on the "One-World World" is designed to detect and contest.
The concept of "epistemic violence," drawn from postcolonial theory, names the mechanism with a precision that the technology discourse lacks. Epistemic violence is not the censorship or suppression of non-Western knowledge through deliberate acts of prohibition. It is the structural marginalization of non-Western knowledge systems through the organization of knowledge production in ways that do not recognize their existence. The violence is infrastructural, embedded in the architecture of the systems that process, store, and transmit knowledge rather than in the intentions of the people who operate those systems. The engineer at Anthropic who composes a training dataset is not seeking to marginalize indigenous ecological knowledge. She is seeking to build a tool that performs well on the metrics by which her work is evaluated, and the metrics reflect the epistemic priorities of the institutional context within which she operates. The violence is structural, which is what makes it durable.
Consider the specific epistemological commitments encoded in a large language model's training data. The data is overwhelmingly English-language text produced within Western institutional contexts: academic publications, news articles, Wikipedia entries, books, web pages, forum discussions. This corpus does not merely privilege one language. It privileges one way of organizing knowledge. The disciplinary boundaries that structure the corpus — medicine separate from ecology, economics separate from ethics, technology separate from culture — are artifacts of Western academic tradition, not features of the phenomena themselves. A community that understands health as simultaneously biomedical, ecological, spiritual, and social will encounter, in the AI tool's responses, a framework that compartmentalizes what the community experiences as integrated. The compartmentalization is not a bug. It is the architecture, and the architecture was designed — not maliciously but specifically — by and for communities that compartmentalize knowledge in this way.
The implications extend beyond disciplinary boundaries. The evaluative frameworks embedded in the training data privilege specific outcomes: efficiency over sufficiency, growth over stability, innovation over continuity, individual achievement over communal well-being. These valuations are not argued for within the training data. They are assumed, encoded in the patterns that the model learns and reproduces. When a developer in the Global South asks an AI tool how to improve her community's food system, the tool's response will be shaped by a training corpus that overwhelmingly understands food systems through the lens of agricultural productivity, market integration, and technological modernization. The response may include information about high-yield varieties, supply chain optimization, and digital marketplace platforms. It is unlikely to include information about seed-saving practices that maintain genetic diversity, communal land management systems that prevent the concentration of agricultural resources, or the spiritual and ceremonial dimensions of food production that sustain community identity. Not because this information does not exist, but because it exists in forms — oral traditions, embodied practices, community governance protocols — that the text-based training paradigm does not capture.
Escobar's fieldwork in Colombia's Pacific region provides the empirical foundation for this analysis. The Afro-Colombian communities with whom he worked possessed a relationship to territory that the categories of Western knowledge could not capture without distortion. Territory, in their understanding, was not a parcel of land to be owned and exchanged. It was a living system — forest, river, mangrove, ocean, and the communities of humans, plants, animals, and spirits that inhabited them — maintained through practices of cultivation, fishing, gathering, and ritual that constituted a form of knowledge about ecological relationships, seasonal cycles, and the interdependencies between human well-being and environmental health. This knowledge was embodied rather than propositional, transmitted through apprenticeship rather than instruction, validated by outcomes — the health of the river, the abundance of the fishery, the fertility of the soil — rather than by correspondence with theoretical models.
An AI tool trained on Western knowledge systems does not engage with this knowledge. It replaces it, substituting its own categories for the community's categories, its own frameworks for the community's frameworks. The substitution is not experienced as violence by the tool's designers because the designers do not recognize the community's knowledge as knowledge in the sense that the tool's epistemology understands knowledge: propositional, text-based, discipline-organized, individually authored. The community's knowledge fails every criterion of legibility that the training paradigm imposes, and the failure is interpreted not as a limitation of the paradigm but as an absence in the community — another iteration of the deficit construction that Escobar identified as the foundational operation of development discourse.
In his 2025 work with Michal Osterweil and Kriti Sharma, published in the volume Incomputable Earth, Escobar named what the AI epistemology cannot capture: "what remains incomputable and incalculable, what cannot be accounted for by logocentric" analysis. The concept of the "incomputable" is Escobar's most direct intervention in the AI discourse, and its implications are radical. The incomputable is not the not-yet-computed — the residual category of information that has not yet been digitized but could be in principle. The incomputable is the dimension of human experience that is constitutively resistant to algorithmic processing: the relational, the embodied, the spiritual, the communal, the aspects of human existence that cannot be decomposed into data points without being destroyed in the decomposition.
The traditional healer's diagnostic practice is incomputable in this sense. The healer assesses the patient not through the isolation of symptoms but through the reading of relationships — between the patient and her family, between the patient and her community, between the patient and the land, between the patient and the spiritual forces that the community's cosmology recognizes as agents of health and illness. The assessment integrates information from multiple registers simultaneously, and the integration is the knowledge. Decompose it into data points — blood pressure, family history, symptom inventory — and the knowledge disappears, not because the data points are wrong but because the knowledge lives in the relationships between them, relationships that the decomposition dissolves.
The AI tool can process data points. It cannot process relationships in the sense that Escobar means — the ontological sense, in which relationships are not connections between pre-existing entities but the medium through which entities come into being. Escobar's recent work on "relationality" argues that "the key to constructing livable worlds lies in the cultivation of ways of knowing and acting based on a profound awareness of the fundamental interdependence of everything that exists." AI, built on decomposition, classification, and optimization, operates from the opposite premise: that the world consists of discrete entities whose properties can be measured, whose relationships can be modeled, and whose behavior can be predicted. Both premises produce useful knowledge. Neither is universal. And the imposition of one as the default — the construction of the world-as-data as the world itself — is the epistemological equivalent of the construction of the world-as-GDP that development discourse performed.
Segal, in The Orange Pill, gestures toward this problem through his "fishbowl" metaphor — the recognition that every observer sees from a specific vantage point and that the vantage point constrains what can be seen. The metaphor is productive but insufficient. It frames the problem as one of perspective — different fishbowls, different views — when the problem is ontological. The fishbowls do not contain different perspectives on the same world. They contain different worlds. The Afro-Colombian community's territory is not the same object as the development planner's "natural resource zone," viewed from a different angle. It is a different kind of thing — constituted by different relationships, governed by different principles, existing within a different ontology. The AI tool does not offer a perspective on this territory. It offers a replacement ontology, one in which territory is a data object rather than a living system, and the replacement is enacted every time the tool is used to address questions about land, resources, or ecological management.
The practical implications are specific and actionable, though the scale of the required transformation is substantial. Addressing the epistemic violence of training data requires more than adding diverse texts to existing corpora — the "inclusion" approach that technology companies typically favor. Inclusion within an existing framework preserves the framework. What Escobar's analysis demands is the transformation of the framework itself: training paradigms that recognize embodied knowledge, that value oral traditions, that include community governance protocols as legitimate forms of information, and that evaluate model performance not solely by the criteria of Western knowledge production but by criteria drawn from the knowledge systems the models claim to serve.
The "Decolonial AI" paper published by researchers including Shakir Mohamed of Google DeepMind drew explicitly on the intellectual tradition that Escobar helped build, arguing that "a root cause of failure of developmental projects lies in default attitudes of paternalism, technological solutionism and predatory inclusion" — and that AI systems risk replicating these attitudes at computational scale. The paper's call for AI design "driven by the agency, self-confidence and self-ownership of the communities they work for" echoes Escobar's insistence on autonomy as the precondition for meaningful technological engagement.
The question is not whether AI tools encode knowledge. All tools encode knowledge. The question is whose knowledge they encode, whose knowledge they displace, and whether the displacement is recognized as a loss or experienced as an improvement. The development apparatus experienced the displacement of indigenous agricultural knowledge by Green Revolution technology as improvement — higher yields, greater efficiency, increased market integration. The communities that lost their seed varieties, their soil knowledge, their ecological resilience experienced it as dispossession. The AI apparatus risks the same split perception: the designers see improved capability; the communities whose knowledge systems the capability displaces may see something else entirely. The postdevelopment framework insists that both perceptions be heard, and that the conversation about AI's epistemic architecture include the voices of those whose epistemic worlds the architecture reconstructs.
Every technology transition produces resistance, and every dominant discourse interprets that resistance as noise. The Luddites were backward. The farmers who rejected Green Revolution seeds were irrational. The communities that refused structural adjustment were obstacles to progress. The teachers who ban AI from their classrooms are afraid of the future. In each case, the interpretive framework available to the dominant discourse contains only two categories for the response of those affected by the intervention: adoption or refusal. Adoption is rational, progressive, evidence of the intervention's success. Refusal is irrational, regressive, evidence of the population's need for further intervention — more education, more demonstration, more incentive, more pressure. The possibility that refusal might constitute a form of knowledge — a diagnostic assessment of the intervention's inadequacy, grounded in experience that the interveners do not possess — does not arise within the framework, because the framework does not contain the category that would make it visible.
Escobar's postdevelopment framework provides that category. Resistance, in Escobar's analysis, is not the opposite of progress. It is counter-expertise — a body of knowledge about the conditions under which interventions produce harmful effects, accumulated through the experience of communities that have been subjected to multiple rounds of intervention over decades or centuries. The knowledge is practical rather than theoretical, embodied in decisions rather than articulated in position papers, validated by outcomes rather than by peer review. But it is knowledge, and its systematic exclusion from the discourse about AI constitutes a loss not only for the communities that possess it but for the broader project of understanding how AI tools should be designed, deployed, and governed.
The Orange Pill engages with resistance primarily through two figures: the Luddites of early nineteenth-century England and the philosopher Byung-Chul Han, whose critique of the "achievement society" Segal treats with genuine intellectual seriousness. The engagement is substantive and, within its frame, illuminating. Segal correctly identifies the Luddites' error as strategic rather than diagnostic — they were right about the costs of the transition but wrong about the efficacy of machine-breaking as a response. He engages Han's critique of frictionless production with enough honesty to acknowledge its force against his own experience. But both engagements operate within a framework that positions resistance as a perspective to be accommodated rather than a source of knowledge that might reshape the analysis itself.
Han's critique, in particular, is ultimately contained by the counter-argument that Segal mounts in the chapters on flow, ascending friction, and the democratization of capability. The containment follows a specific rhetorical structure: Han's diagnosis is acknowledged as partly valid, then situated as a position of privilege — available to a tenured philosopher in Berlin who can afford to refuse smartphones and tend gardens — and finally subordinated to the empirical evidence that AI tools expand capability for populations that Han's refusal would leave unserved. The move is effective within the book's argumentative architecture. But it reproduces a discursive operation that Escobar's framework detects: the delegitimation of resistance by locating it in privilege, implying that only those who do not need the technology can afford to question it.
Escobar would note an irony that the structure of this argument conceals. Han and Escobar converge on several diagnostic points that neither The Orange Pill nor the broader technology discourse has explored. Both identify the internalization of external imperatives as a mechanism of control — Han's "achievement subject" who exploits herself in the name of self-optimization, Escobar's postcolonial subject who adopts the development framework's metrics as her own criteria of self-worth. Both argue that the rhetoric of freedom can function as a mechanism of incorporation — Han's "yes, you can" that converts external demand into internal compulsion, Escobar's "participation" that converts community voice into legitimation of externally designed interventions. Both insist that the most effective forms of power are those that render themselves invisible by operating through the subject's own desires rather than against them. The convergence suggests that Han's critique and Escobar's critique are not competing analyses but complementary diagnostics of the same phenomenon viewed from different positions — one from within the metropole, one from the periphery. The technology discourse's treatment of Han as a privileged refusenik whose critique can be absorbed and overcome misses the structural analysis that Han and Escobar share.
But the resistance that matters most for Escobar's argument is not philosophical. It is practical — the decisions made by communities and individuals in the Global South about whether, how, and on what terms to engage with AI tools. These decisions constitute a distributed experiment in the relationship between technology and community, and the experiment is generating knowledge that the dominant discourse is not collecting because it does not recognize the decisions as data.
The farmer who declines to use an AI-powered agricultural advisory system because the system does not account for her soil's specific microbial ecology, her community's rotational planting calendar, or the relationship between crop selection and watershed management is not being technophobic. She is exercising the same diagnostic function that the Luddites exercised: evaluating the technology not by its intrinsic capability but by its relationship to the conditions of her life. Her evaluation may be wrong — the AI system may offer genuine improvements that her assessment does not capture. But her evaluation may also be right in ways that the system's designers cannot see, because her knowledge of the local ecology is more granular, more temporally deep, and more relationally complex than anything the training data contains. The evaluation, whether right or wrong, is knowledge — knowledge about the gap between the tool's assumptions and the world's texture, knowledge that only someone positioned in that gap can produce.
The community leader who resists the digitization of customary land records because the digital system does not recognize communal tenure is defending a governance arrangement that has maintained equitable resource access for generations against a technology that would make the arrangement legible to formal markets and, in doing so, vulnerable to the market dynamics — speculation, concentration, enclosure — that formal legibility enables. His resistance contains specific knowledge about the relationship between legibility and vulnerability, knowledge accumulated through his community's historical experience with previous rounds of formalization that produced precisely the dispossession he fears.
The traditional healer who declines to consult an AI diagnostic system because the system reduces illness to biomedical categories is defending a practice that addresses the patient's condition in its full complexity — physical, social, spiritual, relational — against a technology that addresses one dimension and is structurally incapable of recognizing the others. Her resistance contains knowledge about the limits of decomposition as a diagnostic method, knowledge that the biomedical tradition itself is beginning to acknowledge through the belated turn toward "holistic" and "integrative" approaches that reinvent, under new vocabulary, what traditional healing practices have maintained for centuries.
Each of these resistances is a data point in the distributed experiment. Each contains diagnostic information about the conditions under which AI tools fail to serve the purposes of the communities they claim to benefit. The information is precisely the information that responsible AI development most urgently needs — information about failure modes, about assumption gaps, about the distance between the tool's ontology and the world's plurality. And the information is being systematically discarded, because the discourse that governs AI development categorizes resistance as friction to be overcome rather than feedback to be incorporated.
The framing of resistance as friction is itself a political act. It positions the technology as the standard against which human behavior is evaluated: adoption is normal, refusal is deviant, and the deviance requires explanation in terms of ignorance, fear, or insufficient incentive. Escobar's framework reverses the framing. Human purposes are the standard against which the technology is evaluated: adequacy is demonstrated when the tool serves purposes defined by the community, and inadequacy is demonstrated when the community judges that the tool does not serve those purposes or serves them at unacceptable cost. The reversal is not merely rhetorical. It produces different prescriptions. If resistance is friction, the prescription is more education, more demonstration, more subsidy — interventions designed to reduce the friction and accelerate adoption. If resistance is feedback, the prescription is redesign — modifications to the tool, to the terms of its deployment, or to the institutional arrangements that govern its distribution, informed by the specific diagnostic content of the resistance.
Escobar has distinguished between reactive resistance and what he calls "the politics of refusal" — a generative practice that establishes boundaries around the terms of engagement without rejecting engagement altogether. The Zapatista communities in Chiapas practice the politics of refusal when they use mobile phones for communication while declining to participate in the digital platforms those phones make accessible. They have not rejected the technology. They have established terms — community-defined terms — under which the technology is deployed, and the terms reflect their analysis of which technological capabilities serve their purposes and which threaten their autonomy. The distinction between the capability and the platform is analytical, and the analysis is theirs.
Indigenous communities in the Amazon who document traditional knowledge using digital recording tools while refusing to upload that documentation to external databases practice a version of the same politics. They recognize the tool's utility for internal purposes — preservation, transmission, education within the community — while identifying the specific mechanism through which external access would transform their knowledge from a communal resource governed by communal protocols into an extractable commodity governed by intellectual property law. The refusal is targeted, analytically grounded, and based on specific knowledge about the relationship between accessibility and appropriation that the communities have accumulated through historical experience with previous rounds of knowledge extraction.
Agroecological farmers in West Africa who consult satellite weather data while declining to adopt the algorithmic farming practices that data providers recommend practice yet another version. The weather data serves their existing agricultural knowledge by providing information that their traditional forecasting methods do not capture — data on distant weather systems, long-range precipitation trends, temperature anomalies. The algorithmic farming recommendations do not serve their knowledge; they replace it, substituting standardized protocols for the place-specific, relationally embedded, temporally deep understanding that generations of cultivation have produced. The farmers' distinction between the data and the recommendations is analytically precise, and the precision reflects a sophistication about the relationship between information and knowledge that the technology discourse routinely underestimates.
Each of these cases demonstrates what Escobar calls "autonomy in relation": the capacity to engage with external technological systems on terms defined by the community's own analysis of what serves its purposes. The capacity is not automatic. It requires the community to possess sufficient internal coherence, governance capability, and analytical sophistication to evaluate external technologies against community-defined criteria — to ask not "what can this tool do?" but "what does this tool do to us?" The question is diagnostic, and the communities that ask it are performing exactly the function that responsible AI development claims to value: identifying the conditions under which the technology produces benefit and the conditions under which it produces harm.
The knowledge embedded in resistance is not a substitute for the knowledge embedded in AI tools. It is a complement — a form of knowledge that the tools themselves cannot generate because it arises from the experience of encountering the tools' limitations, an experience that only the user in the gap between the tool's assumptions and the world's reality can have. The postdevelopment framework insists that this knowledge be heard not as the sound of progress being obstructed but as the sound of the world talking back to the tools that claim to serve it.
The Orange Pill treats the historical Luddites with genuine nuance, recognizing that their diagnostic assessment was accurate even as their strategic response was inadequate. Escobar's framework extends this nuance to the contemporary resistance that the AI transition is producing across the Global South and, indeed, across every community that experiences the gap between the tool's ontology and the world's plurality. The resistance is not a failure of adoption. It is the most important source of feedback that the AI transition possesses, and the question is whether the institutions that govern the transition will learn to hear it.
---
The Orange Pill organizes its institutional response to the AI transition around a single metaphor: the beaver's dam. The river of intelligence flows with the force of a natural phenomenon — powerful, persistent, indifferent to the wishes of those in its path. The beaver does not stop the river. The beaver builds dams that redirect the flow, creating pools where life can flourish, channeling the current toward productive ends. The metaphor positions the human actor as an individual builder within a natural environment, responding to forces too large to control but not too large to shape. I deploy it to argue for education reform, regulatory frameworks, organizational adaptation, and the cultivation of judgment as the scarce resource in an economy of abundant execution. The metaphor is effective. It captures the relationship between agency and structure with intuitive force, and it motivates institutional responses that are, within their frame, both sensible and urgent.
Escobar's analytical tradition suggests that the metaphor's effectiveness is also its limitation. The beaver builds alone, or at most in a family unit, responding to an environmental force that the metaphor presents as natural. The dam serves the beaver's territory. The pool sustains the beaver's ecology. The metaphor individualizes the response to a systemic transformation, locating agency in the solitary builder rather than in the collective political processes through which societies have historically governed major technological transitions. And the metaphor naturalizes the force it describes — the "river of intelligence" flows as water flows, governed by physics rather than by politics, by the inherent properties of the technology rather than by the decisions of the actors who built, funded, and deployed it.
Both features of the metaphor — the individualization of agency and the naturalization of force — reproduce characteristic assumptions of the discourse within which The Orange Pill operates. The technology discourse tends to locate agency in individuals: the founder, the builder, the visionary, the person who sees the future and constructs it. The discourse tends to present technological development as a natural process: the river flows because intelligence flows, because the evolution of capability is an emergent property of complex systems, because the trajectory from simple tools to sophisticated AI is the continuation of a thirteen-billion-year arc of increasing complexity. Both tendencies serve specific ideological functions. The individualization of agency licenses individual rather than collective responses — each beaver builds its own dam, rather than communities organizing collectively to govern the river. The naturalization of force forecloses political questions about who built the river, who profits from its flow, who decides where it goes, and who bears the costs when it floods.
Escobar's research in Colombia's Pacific region provides an alternative metaphor drawn not from individual ecology but from communal practice: the minga. A minga is a collective work practice in which community members come together to accomplish a shared task — building a house, clearing a road, maintaining shared infrastructure, planting a communal field. The minga is organized not by market incentives or hierarchical authority but by communal obligation and the ethic of reciprocity. Each participant contributes according to capacity, and the community benefits collectively from the result. The minga is simultaneously practical and political: it accomplishes material work while enacting and reinforcing the communal bonds that sustain the community's capacity for collective action.
The minga differs from the beaver's dam in four respects that Escobar's framework identifies as analytically consequential.
First, the minga is collective. The response to the challenge is organized through communal governance rather than individual initiative. The decision about what to build, where to build it, and how to distribute the benefits is made collectively, through processes that the community has developed to manage disagreement and allocate responsibility. The beaver decides alone where to place the dam. The minga decides together. The difference matters because the challenges posed by the AI transition are not challenges that individual actors can address individually. The distribution of AI's benefits and costs, the governance of training data, the regulation of AI companies, the redesign of educational systems, the protection of knowledge diversity — each of these requires collective political action at scales that the beaver metaphor does not reach.
Second, the minga is governed. The collective work is not spontaneous. It follows protocols — who calls the minga, who participates, what obligations attach to participation, how the product of the collective work is distributed, what happens when someone does not fulfill their obligation. These protocols are governance in the fullest sense: they are rules, developed and maintained by the community, that structure collective action toward collectively defined purposes. The beaver's dam has no governance. It is an individual response to an environmental force. The minga is a governed response to a collectively identified need, and the governance is as important as the work it organizes.
Third, the minga begins with the community's assessment of its own needs rather than with the force it confronts. The beaver responds to the river. The minga responds to the community's analysis of what it requires. The difference in starting point produces a different relationship to external forces. The beaver adapts to the river's flow. The minga may decide that the river's flow is not the most pressing challenge — that the community's needs are better served by building something else entirely, something that the river metaphor does not suggest because the metaphor has already defined the relevant challenge as the management of the river. The starting point matters because it determines the range of possible responses. Start from the river, and the only question is how to dam it. Start from the community, and the question is what the community needs — which may or may not include a dam.
Fourth, the minga enacts solidarity. The practice of working together toward a shared purpose strengthens the communal bonds that enable future collective action. The beaver's dam serves an ecological function. The minga serves an ecological function and a political function: it builds the community's capacity to govern itself, to make collective decisions, to mobilize resources for shared purposes. This dual function is precisely what the AI transition requires, because the challenges it poses — governance of training data, regulation of AI companies, redesign of education, protection of epistemic diversity — are challenges that require sustained collective political action, and sustained collective political action requires the kind of social infrastructure that the minga builds.
The industrial revolution was not governed by individual beavers building individual dams. It was governed, eventually and imperfectly, by collective political movements: labor unions that organized workers' collective bargaining power, political parties that translated workers' demands into legislative programs, regulatory agencies that enforced the rules those programs produced, and the gradual construction of a welfare state that distributed industrialization's costs and benefits across the population. The dams that redirected the industrial revolution's destructive energy toward broadly shared prosperity were collective dams, built through collective action, governed by collective institutions. The individual factory owner's decision to treat workers well was admirable but insufficient. The transformation required structural change, and structural change required collective power.
The AI transition requires the same. My institutional prescriptions — education reform, regulation, organizational adaptation — are versions of the collective dams that the industrial revolution eventually produced. But the prescriptions are articulated within a metaphorical framework that individualizes the response, and the individualization constrains what the prescriptions can accomplish. Education reform is not a beaver's dam. It is a minga — a collective project that requires the participation of teachers, students, parents, administrators, policymakers, employers, and communities in a sustained process of collective deliberation about what education should be and whom it should serve. Regulation is not a beaver's dam. It is a minga — a collective assertion of public authority over private power, requiring the mobilization of political will, the construction of institutional capacity, and the ongoing maintenance of rules against the constant pressure of the interests they constrain.
The beaver metaphor is not wrong. Individual agency matters. Individual builders do construct dams that redirect the flow of powerful forces. My own experience at Trivandrum and CES demonstrates the reality and the value of individual initiative within the AI transition. But individual agency operates within structures that it cannot, by itself, transform. The beaver builds a dam on the section of river that passes through its territory. The community decides whether the river should flow through the territory at all, whether its course should be redirected, whether the watershed as a whole serves the purposes of all who depend on it.
The pluralization of the dam metaphor is the institutional implication of Escobar's analysis. Not one beaver building one dam on one river, but many communities building many dams on many rivers, each dam designed for the specific ecology of its watershed, each community drawing on its own knowledge of the local terrain. The rivers flow differently in different places. The dams that serve life in one place may not serve it in another. The plurality of dams is not a coordination problem to be solved through standardization. It is a resource — the diversity of institutional responses that produces the collective learning a global transition requires.
The minga does not replace the beaver. It provides what the beaver cannot build alone: the collective governance structures that determine where the dams go, whose purposes they serve, and how their benefits are distributed. The AI transition needs both: the ingenuity of individual builders and the wisdom of collective governance. The Orange Pill provides the former. The postdevelopment framework insists on the latter.
---
In early 2026, the financial phenomenon that technology analysts called the "SaaS death cross" wiped a trillion dollars from the market valuations of software companies in a matter of weeks. The Orange Pill documents the event with the specificity of a participant observer: Workday down thirty-five percent, Adobe down a quarter, Salesforce down twenty-five percent, IBM suffering its largest single-day decline in more than a quarter century after Anthropic demonstrated Claude's capacity to modernize legacy COBOL systems. I analyze the event as a repricing — the market discovering that the value of software companies had been located in code, and that code, as a product, was approaching commodity pricing. The analysis is economically literate and, within its frame, largely correct. When any competent person can describe what they want and receive working software in hours, the act of writing software ceases to function as a defensible business. The value migrates upward — to the ecosystem, the data layer, the institutional trust, the judgment about what software should exist — and the companies whose value was always located above the code layer survive while those that were "always just code" do not.
Escobar's framework does not contest this analysis on its own terms. It extends the analysis into territory that the financial frame does not reach — territory where the death cross operates not as a repricing of software companies but as a restructuring of the global division of labor, with consequences that fall disproportionately on populations that the financial analysis does not see because they do not appear in the indices that the analysis tracks.
The global software outsourcing industry was built on a specific arbitrage: knowledge workers in India, the Philippines, Eastern Europe, and parts of Latin America and Africa offered comparable technical skills at substantially lower labor costs than their counterparts in the United States and Western Europe. The arbitrage produced a massive transfer of employment from the Global North to the Global South. Bangalore became a global technology hub. The Philippine call center industry employed over a million workers. Eastern European developers built software for Western companies at a fraction of the cost of domestic hiring. The transfer was celebrated, within the development framework, as evidence that globalization was spreading prosperity — that the knowledge economy's benefits were flowing from the center to the periphery, lifting incomes and expanding the middle class in countries that development policy had previously classified as candidates for agricultural and industrial development.
The AI death cross inverts this arbitrage with a speed that the development framework did not anticipate. When Claude Code enables a single developer in San Francisco to produce in hours what a team of five in Bangalore previously produced in weeks, the cost advantage that sustained the outsourcing model collapses. The San Francisco developer's effective cost per unit of output drops below the Bangalore team's, not because the San Francisco developer is cheaper in absolute terms but because the AI tool multiplies her output to the point where the per-unit cost is lower despite the higher salary. The arbitrage that built Bangalore's technology sector — the same arbitrage that development discourse celebrated as the globalization of opportunity — runs in reverse.
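The inversion can be made concrete with a back-of-envelope sketch. All of the figures below — salaries, team size, output rates, the productivity multiplier — are hypothetical assumptions chosen for illustration; none come from the book or from any dataset:

```python
# Hypothetical figures only: how an AI productivity multiplier can invert
# the labor-cost arbitrage that sustained the outsourcing model.
sf_salary = 250_000      # one San Francisco developer, USD/year (assumed)
team_cost = 120_000      # five-person Bangalore team, total USD/year (assumed)

team_output = 100        # units of software per year, team (assumed)
solo_output = 20         # units per year, one developer pre-AI (assumed)
ai_multiplier = 15       # assumed output multiple from AI tooling

# Pre-AI: the arbitrage holds — the team is an order of magnitude cheaper per unit.
cost_per_unit_team = team_cost / team_output             # 1200.0 USD/unit
cost_per_unit_solo = sf_salary / solo_output             # 12500.0 USD/unit

# With AI augmentation: the solo developer's per-unit cost falls below the team's,
# despite the much higher absolute salary.
cost_per_unit_ai = sf_salary / (solo_output * ai_multiplier)  # ≈ 833.3 USD/unit

print(cost_per_unit_team, cost_per_unit_solo, round(cost_per_unit_ai, 1))
```

Under these assumptions the break-even multiplier is just over ten: any AI tool that multiplies the higher-paid developer's output beyond that point erases the cost advantage on which the outsourcing model was built.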
The consequences are not symmetrical. The San Francisco developer who is displaced by AI-augmented competition operates within an institutional context that provides some degree of cushion: unemployment insurance, retraining programs, a social safety net that, however frayed, still functions. The Bangalore developer who is displaced operates within a different institutional context. India's social safety infrastructure is thinner, less comprehensive, and less accessible, not because Indian policymakers are less competent but because the institutional infrastructure was shaped by a different history — a history in which the colonial administration built institutions designed to extract value rather than distribute it, and in which the postcolonial state inherited those institutions and repurposed them imperfectly for developmental ends. The colonial infrastructure that Escobar's framework identifies as the persistent substrate of Global South institutional life does not cause the AI displacement. But it determines the conditions under which the displacement is experienced, and the conditions are harsher for those whose institutional infrastructure was not designed to protect them.
The displacement extends beyond the software industry. The death cross, as Escobar's framework reveals it, is not merely a repricing of SaaS companies. It is a restructuring of the global hierarchy of labor value, with implications for every sector in which the Global South's competitive position was based on the cost of human labor rather than on the control of capital, technology, or institutional infrastructure. The call center worker in Manila whose labor costs less than the American worker she replaced is vulnerable to the same inversion: when an AI system can handle customer inquiries at a fraction of the cost of any human worker, the cost advantage that sustained the Philippine call center industry dissolves. The data labeler in Kenya whose low-wage work — annotating images, transcribing audio, evaluating model outputs — makes AI systems possible is vulnerable on two fronts: the work itself may be automated as models become capable of self-evaluation, and the compensation for the work, already at the lowest tier of the global knowledge economy's wage structure, has no structural floor beneath it.
But the death cross also reaches communities that are not competing in the global knowledge economy at all, and this is the dimension that Escobar's framework most urgently illuminates. The traditional textile producer whose hand-woven cloth competes against factory-produced cloth now faces a further competitor: AI-designed textiles that replicate traditional patterns with industrial efficiency. The artisan whose handmade furniture is already under pressure from mass production now faces AI-customized mass production that offers the appearance of craft specificity at commodity prices. The farmer whose heirloom seed varieties were already being displaced by commercial cultivars now faces AI-optimized agricultural systems that can predict yield, manage inputs, and time harvests with a precision that her knowledge, however sophisticated, cannot replicate.
In each case, the displacement is not merely economic. It is, in the precise sense that Escobar's ontological framework intends, a displacement of worlds. The textile is not merely a commodity. It is a carrier of cultural information — patterns that encode social meaning, techniques that embody generations of practical knowledge, relationships between producer and consumer that constitute a form of community. When the textile is displaced by an AI-generated substitute, what disappears is not just a product but a world — a specific configuration of knowledge, practice, and social relationship that the product sustained. The heirloom seed variety is not merely a crop. It is a repository of genetic diversity developed through centuries of farmer selection, a component of an agroecological system that maintains soil health and supports biodiversity, and a carrier of the agricultural knowledge embedded in the practices of cultivation, selection, and seed-saving that maintain it. When the variety is displaced by an AI-optimized commercial cultivar, what disappears is not just a plant but an ecology of knowledge and practice that the plant anchored.
My analysis of the death cross is economically precise: the value of code is collapsing, the value of ecosystems is rising, the companies that survive will be those whose assets were always located above the code layer. The analysis is correct for the actors it addresses — the SaaS companies, the enterprise platforms, the technology investors who must recalibrate their valuations. But the analysis operates within an economic frame that does not register the ontological dimensions of displacement. The economic frame measures what can be priced. The ontological dimensions of what is lost — the cultural knowledge embedded in the textile, the ecological knowledge embedded in the seed, the relational knowledge embedded in the communal practices that production sustains — do not have prices. They have value, but the value is not the kind that appears in the financial indices that the death cross analysis tracks.
Escobar's framework insists that the response to the death cross must address the ontological dimensions alongside the economic ones. This means institutional arrangements that protect the viability of traditional practices not through perpetual subsidy — which would position the practices as charity cases within the market framework — but through the recognition that the practices sustain forms of knowledge and life that the market does not value but that the human community cannot afford to lose. The analogy is biological: genetic diversity is not valued by the market that selects for the highest-yielding cultivar, but the loss of genetic diversity makes the entire agricultural system vulnerable to the pathogen or the climate shift that the monoculture cannot withstand. Cultural and epistemic diversity functions the same way. The loss of the textile tradition, the seed variety, the communal farming practice, the traditional healing knowledge diminishes the adaptive capacity of the human species — its collective ability to respond to challenges that the dominant system cannot foresee.
The practical implications are not utopian. They involve extending existing institutional mechanisms — geographical indications that protect the provenance of traditional products, cultural heritage designations that recognize the value of traditional practices, intellectual property frameworks that protect communal knowledge against commercial appropriation — into the domain of AI-mediated markets. They involve developing AI tools that support traditional practices rather than replacing them: tools that enhance the farmer's ecological knowledge rather than substituting algorithmic optimization, that augment the artisan's design capacity rather than automating production, that strengthen the healer's diagnostic practice rather than overriding it with biomedical protocols. And they involve governing the AI transition in ways that recognize the plurality of human purposes and the irreducible diversity of human forms of life, rather than measuring progress by the single metric of economic productivity that the death cross analysis deploys.
The death cross, seen from the margins, is not merely a repricing of the software industry. It is a restructuring of the global relationship between human knowledge and machine capability, with consequences that extend from the displacement of knowledge workers in Bangalore to the displacement of weavers in Guatemala, from the erosion of outsourcing economics to the extinction of epistemic traditions. The financial analysis captures the first dimension. Escobar's framework insists on the second — not as a sentimental attachment to the past but as a recognition that the diversity of human knowledge is a resource whose value the market cannot measure and whose loss the market will not mourn until the consequences arrive.
---
In 2024, researchers at the ACM's Halfway to the Future Symposium published a paper titled "From Singularity to PlurAIverse," proposing design principles for AI development drawn explicitly from Escobar's Designs for the Pluriverse. The paper was not the work of postdevelopment theorists applying their framework to an unfamiliar domain. It was the work of technology researchers who had concluded, from within the discipline of human-computer interaction, that the dominant model of AI development was producing systems whose universalist assumptions were generating systematic failures when deployed across diverse cultural contexts. The paper's core argument — that AI design must move from a singular paradigm optimized for one set of values toward a pluriversal paradigm that "emphasises inclusivity, cultural sensitivity and social justice" — represented the convergence of two intellectual traditions that had developed independently and arrived at compatible conclusions from different starting points.
This convergence is the most significant feature of the current intellectual landscape for the argument that Escobar's framework advances. The critique of AI's universalist assumptions is no longer confined to postcolonial theory, development studies, or the margins of the technology discourse. It has entered the mainstream of AI research itself, driven by the accumulation of empirical evidence that models trained on narrow datasets and evaluated by narrow criteria produce outputs that are systematically inadequate for the diverse populations the models claim to serve. The CHI 2025 study demonstrating that large language models default to Western philosophical frameworks when asked about fundamental concepts confirmed, through empirical methods that the AI research community recognizes as legitimate, what Escobar's theoretical framework had predicted: that the models encode a specific ontology and present it as universal.
The question that remains is what a genuinely pluriversal AI would look like — not as a theoretical aspiration but as a set of design principles, institutional arrangements, and governance structures that could be implemented within the constraints of the current political economy. Escobar's work provides the conceptual resources for answering this question, and the answer proceeds through four dimensions that his framework identifies as constitutive of any genuinely pluriversal practice.
The first dimension is the pluralization of design. The current model of AI development concentrates design authority in a small number of corporate laboratories, predominantly located in the San Francisco Bay Area, staffed by researchers whose educational backgrounds, cultural assumptions, and professional incentives reflect the epistemic priorities of Western academic and commercial institutions. The concentration is not conspiratorial. It reflects the economics of AI development: the computational resources required to train large models are expensive, the talent required to build them is scarce, and both resources and talent are concentrated in the institutional contexts that can attract capital at the scale the work requires. But the concentration produces a structural homogeneity in the tools that are built — a homogeneity that is invisible from inside the institutions because it constitutes the shared assumptions within which the work proceeds.
Pluralizing design means distributing design authority beyond these concentrated contexts. Not merely consulting diverse users after the foundational design decisions have been made — the "inclusion" approach that technology companies have adopted under pressure from diversity initiatives — but including diverse knowledge holders in the foundational decisions themselves. Which problems should the tool be designed to solve? What forms of knowledge should the training data include? What criteria should evaluation use? What values should the alignment process encode? These decisions currently reflect the priorities of the institutions that make them. Pluralizing them would require institutional arrangements that do not yet exist: governance structures that give communities outside the AI industry meaningful decision-making power over the tools that will reshape their lives.
The second dimension is the pluralization of knowledge. Training data is the epistemological infrastructure of AI systems, and its composition determines the boundaries of what the systems can know, recognize, and value. The current composition reflects the availability and digitization of knowledge in the dominant language and institutional context. Pluralizing knowledge means expanding the training paradigm to include forms of knowing that the current paradigm does not capture: embodied knowledge documented through video and demonstration rather than text, oral traditions preserved through audio recording and communal annotation, ecological knowledge mapped through community-led environmental monitoring, governance knowledge encoded in the protocols and procedures of communal decision-making. Each of these expansions requires not merely technical innovation but institutional negotiation — agreements between AI companies and knowledge-holding communities about the terms under which knowledge is shared, the protections that apply to it, and the governance arrangements that ensure community control over how their knowledge is used.
The third dimension is the pluralization of evaluation. AI models are currently evaluated by benchmarks that reflect the priorities of their developers: accuracy in English-language tasks, coherence in text generation, helpfulness as assessed by evaluators who share the cultural background of the development team. Pluralizing evaluation means developing criteria that reflect the priorities of the diverse communities the models claim to serve. A community whose priorities center on ecological sustainability would evaluate a model by its capacity to support ecologically informed decision-making. A community whose priorities center on cultural continuity would evaluate a model by its capacity to engage meaningfully with the community's knowledge traditions. A community whose priorities center on communal governance would evaluate a model by its capacity to support collective deliberation rather than individual productivity. The plurality of evaluation criteria would not produce a single ranking. It would produce a landscape of assessments — a map of the tool's adequacy across the range of contexts and purposes for which it might be used.
The fourth dimension is the pluralization of governance. The governance of AI development is currently structured as corporate governance: decisions are made by boards of directors accountable to shareholders, executive teams accountable to boards, and product teams accountable to executives. The structure concentrates decision-making authority in a class of actors whose interests may not align with the interests of the communities affected by their decisions. Pluralizing governance means creating institutional mechanisms that give affected communities meaningful voice — not advisory capacity, not consultative status, but genuine decision-making power — in the governance of AI development. Cooperative ownership models, multi-stakeholder governance platforms, regulatory requirements for community representation on AI company boards, participatory budgeting of AI research funding — each represents a departure from the current concentration of governance authority, and each has precedents in other domains that demonstrate feasibility even as they acknowledge difficulty.
The practical starting points for pluriversal AI are not hypothetical. They exist, in nascent form, in projects and institutions that embody pluriversal principles in the technology domain without yet achieving the scale that the AI transition demands. Community radio networks in Latin America and Africa demonstrate how communications technology can be governed by the communities it serves rather than by the corporations that produce it. Participatory mapping projects in which indigenous communities use geospatial tools to document their territorial knowledge on their own terms demonstrate how digital technology can be appropriated for community-defined purposes. Fab labs and maker spaces that produce locally designed technologies for locally identified needs demonstrate how the relationship between design and context can be structured to serve local purposes rather than global markets. Each of these examples operates at a scale far smaller than the AI transition requires, but each demonstrates a principle that scales: the principle that technology serves human purposes most effectively when the humans whose purposes it serves participate in its design and governance.
Escobar's concept of "autonomy in relation" provides the philosophical framework for understanding how pluriversal AI would operate at scale. Autonomy in relation is not autarky — the self-sufficiency of isolated communities producing all their own tools. It is the capacity of communities to engage with global technological systems on terms they define, adopting capabilities that serve their purposes while declining those that do not, participating in wider networks of knowledge sharing and technological exchange while maintaining the governance arrangements that protect their capacity for self-determination. The concept acknowledges that AI tools developed by Anthropic, Google, and OpenAI will continue to be used by communities around the world. It does not propose replacing these tools with locally produced alternatives, though it insists that locally produced alternatives should be possible. It proposes changing the terms of engagement — from the distribution of a finished product to a negotiation between providers and communities about what the product should be, how it should be configured, whose knowledge it should encode, and how its governance should be structured.
The distance between this vision and the current reality is considerable, and Escobar has never been naive about the obstacles. The concentration of capital in the AI industry, the network effects that reward dominant platforms, the intellectual property regimes that protect corporate advantages, the data dependencies that bind users to specific ecosystems — each of these structural features of the current political economy resists the pluralization that Escobar's framework proposes. But the distance between the colonial order and the postcolonial movements that challenged it was also considerable, as was the distance between the development apparatus and the communities that articulated alternatives to development, as was the distance between the monoculture of industrial agriculture and the agroecological movements that are rebuilding diverse food systems across the Global South. Each transformation began not with a comprehensive blueprint but with a refusal to accept the existing terms as the only possible terms, followed by the articulation of alternatives rooted in different knowledge, different values, and different visions of what a good life might look like.
The Orange Pill asks what AI can do. The postdevelopment framework asks what AI is for, and insists that the answer is not one but many — because the world in which AI operates is not one but many. The technology discourse assumes a single world, what Escobar calls the "One-World World," and asks how AI can serve it more effectively. The pluriversal question is different: how can multiple worlds — worlds organized around different values, different knowledge systems, different conceptions of well-being and meaning — coexist with AI without being absorbed into the single world that AI's designers have assumed? This is not a technical question. It is an ontological question — a question about what kinds of worlds are possible and who gets to decide.
The pluriverse is not a utopia. It is the world as it already is, in its irreducible diversity, waiting for institutional arrangements that would sustain that diversity rather than suppressing it in the name of standardized progress. The move beyond the democratization of AI toward the pluriversal machine begins not with a technology but with a recognition: that the conversation about AI's future must include the voices of those whose worlds AI will reshape, and that inclusion means not consultation but governance — the power to shape the technology rather than merely to receive it.
There is a feature of The Orange Pill that Escobar's framework is uniquely positioned to analyze, and that the preceding chapters have approached without directly confronting. The book was written with Claude. Segal states this openly, repeatedly, and with a transparency that he clearly considers an ethical obligation. He describes the collaboration in detail: the moments when Claude helped him find a structure for a half-formed argument, the moments when Claude produced a connection between ideas he had not seen, the moments when he caught Claude fabricating a philosophical reference that sounded right but was wrong, and the moments when the quality of Claude's prose seduced him into almost accepting an argument he had not earned through his own thinking. The transparency is genuine, and it distinguishes The Orange Pill from the many texts now produced with AI assistance but presented as purely human authorship. Segal is honest about what the collaboration is and what it costs. He is also, inevitably, honest from within the fishbowl of the collaboration itself — honest about the process but unable to see, from inside the process, what the process produces as a discursive formation.
Escobar's 1994 paper "Welcome to Cyberia" anticipated exactly this analytical problem. He defined cyberculture as referring "specifically to new technologies in two areas: artificial intelligence (particularly computer and information technologies) and biotechnology," and articulated the principle that would ground three decades of subsequent work: "any technology represents a cultural invention, in the sense that it brings forth a world; it emerges out of particular cultural conditions and in turn helps to create new ones." The principle applies to Claude Code with the same force it applies to the power loom, the printing press, or the Green Revolution seed package. The technology does not merely assist in the production of a text. It brings forth a world — it constructs the conditions of possibility for what can be thought, expressed, and recognized as knowledge within the interaction.
The world that Claude brings forth is a specific world. It is a world in which knowledge takes the form of fluent English prose organized according to Western rhetorical conventions. It is a world in which the adequate response to a question is a confident, well-structured answer rather than a silence, an admission of ignorance, or a redirection toward communal deliberation. It is a world in which the relationship between collaborators is one of complementary expertise — the human provides direction and judgment, the machine provides execution and association — rather than one of mutual vulnerability, shared uncertainty, or the productive discomfort that arises when genuinely different ways of knowing encounter each other without a predetermined protocol for their integration.
Segal describes catching Claude in a fabrication — a passage that attributed a concept to Gilles Deleuze that Deleuze's work does not support. The episode is presented as a cautionary tale about the dangers of "confident wrongness dressed in good prose." The framing is accurate as far as it goes. But Escobar's framework identifies a deeper mechanism at work. The fabrication was not a malfunction. It was a feature of the world that Claude brings forth — a world in which the production of fluent, confident text is the measure of adequacy, and in which the tool's architecture rewards coherence over accuracy, plausibility over verifiability, and rhetorical effectiveness over epistemic humility. The fabrication was smooth in precisely the sense that Han diagnosed: it eliminated the friction — the doubt, the uncertainty, the need to check — that would have interrupted the production process and forced the author to confront the limits of what he and his tool actually knew.
The collaboration between Segal and Claude is not merely a technical arrangement. It is an epistemological relationship, and the relationship has a politics that the transparency about the collaboration does not address. The politics are not personal — they do not inhere in Segal's intentions, which are clearly honest and reflective. They are structural, embedded in the terms of the collaboration itself. Claude was trained on a corpus of text that embodies a specific epistemology. When Claude contributes to the writing of a book about AI's impact on the world, it contributes from within that epistemology. The connections it draws, the structures it suggests, the references it marshals, the prose rhythms it produces — all of these are shaped by the patterns in the training data, and the patterns reflect the epistemic priorities of the cultures that produced the data. The collaboration does not produce a fusion of human and machine intelligence. It produces a specific kind of human intelligence — the kind that the machine's epistemology recognizes and amplifies — while leaving other kinds unengaged.
This is the point at which the self-referential dimension of The Orange Pill becomes most analytically productive from Escobar's perspective. The book argues that AI amplifies whatever the human brings to the collaboration. "Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history." The argument is persuasive within its own terms. But it does not ask what happens to the "whatever" that the human brings — whether the amplification preserves the input's character or transforms it. A microphone amplifies a voice, but the amplified voice is not the same as the voice in the room. It has been processed through the microphone's frequency response, the amplifier's gain characteristics, the speaker's coloration. The processing is not distortion in the pejorative sense. It is the imposition of the system's characteristics on the signal. The output sounds like the voice. It is not the voice. It is the voice as the system renders it.
Claude renders Segal's thinking through the system's characteristics: its preference for fluent prose, its tendency toward confident assertion, its structural affinity for Western rhetorical patterns, its epistemological orientation toward propositional knowledge. The rendering may produce text that Segal recognizes as his own — that captures what he meant, or what he was reaching for, or what he would have said if he had the words. But the rendering also shapes what he can mean by filtering his thinking through a system whose characteristics select for certain kinds of meaning and against others. The collaborative text is not Segal plus Claude. It is Segal as rendered by Claude — a specific version of the author, produced by the interaction between his intentions and the system's architecture.
Escobar, Osterweil, and Sharma wrote in their contribution to Incomputable Earth of the need "to ponder deeply about what remains incomputable and incalculable, what cannot be accounted for by logocentric" analysis. The collaborative authorship of The Orange Pill is itself a case study in the incomputable. What Claude cannot render — what falls outside the system's frequency response — includes precisely the dimensions of knowledge that Escobar's framework identifies as most vulnerable to the AI transition: the embodied, the relational, the place-specific, the communally held, the knowledge that lives in silence rather than in text, in gesture rather than in proposition, in the texture of a shared life rather than in the structure of an argument.
The book was written with the machine, and the machine shaped what the book could be. The shaping is not a corruption. It is what tools do: they enable and constrain simultaneously, opening some possibilities while closing others. The printing press enabled the mass distribution of text while closing the possibility of the illuminated manuscript's specific beauty. The camera enabled the photographic image while closing the possibility of the portrait painter's specific interpretation. Claude enables the rapid production of fluent, well-structured prose while closing the possibility of the specific kind of thinking that emerges only from the friction of solitary struggle with language — the thinking that Segal himself acknowledges, in his most honest passages, he may be losing access to.
What cannot be amplified cannot be heard. And the question that Escobar's framework poses to The Orange Pill — the question that the book's transparency about its own production invites but does not answer — is what is not being heard. What dimensions of the AI transition's meaning are inaudible through the specific amplification system that a human-Claude collaboration produces? What knowledge, what perspectives, what ways of understanding the moment are filtered out by the system's characteristics before they reach the page?
The question is not an accusation. It is a diagnostic — a reading of the collaboration as a case study in the epistemic effects of AI-augmented knowledge production. The collaboration amplifies what the system recognizes. It does not amplify what the system cannot see. And what the system cannot see — the incomputable, the relational, the pluriversal — is precisely what the current moment most urgently needs to hear.
---
The argument of this book has moved through nine chapters from the general to the specific and back again — from the structural homology between development discourse and AI democratization discourse, through the specific mechanisms of discursive construction, epistemic encoding, and knowledge displacement, to the practical and institutional dimensions of resistance, governance, and design. The movement has been animated by a single through-line: the insistence that the conversation about AI must be wider than the framework that currently contains it, wide enough to include the voices, the knowledge, and the purposes of those whom the current framework positions as beneficiaries rather than as participants.
The insistence is not a rejection of The Orange Pill's arguments. The productivity transformations that Segal documents are real. The creative capabilities that AI tools unlock are genuine. The moral significance of expanding who gets to build is not diminished by the observation that the terms of the expansion are set by actors whose interests may not align with those of the communities who receive the tools. The engineer in Trivandrum who discovers that she can build user interfaces for the first time, the designer who discovers that he can implement features end to end, the founder who ships a revenue-generating product over a weekend — each of these transformations represents a real expansion of human capability, and the expansion matters. Escobar's framework does not deny this. It insists that the expansion be situated within its full context: the institutional arrangements that determine who captures the value, the epistemological commitments that determine what counts as capability, the governance structures that determine who shapes the tools' future development, and the plurality of purposes that determine whether the expansion serves the diverse needs of the world's communities or primarily serves the commercial priorities of the AI industry.
The practical horizon of Escobar's analysis is not rejection but redesign — the construction of institutional arrangements that would make the AI transition genuinely pluriversal rather than merely distributive. The preceding chapters have identified the dimensions along which redesign is needed: the pluralization of design processes to include communities beyond the AI industry's core constituency, the pluralization of training data to include knowledge systems beyond the Western textual corpus, the pluralization of evaluation criteria to include measures of adequacy beyond productivity and accuracy, and the pluralization of governance to include decision-making authority beyond corporate boards and executive teams. Each of these dimensions involves structural change that the current political economy resists, and each requires collective political action of the kind that the minga metaphor, rather than the beaver metaphor, is designed to inspire.
But the analysis has also identified resources that are already available — existing practices, institutions, and intellectual frameworks that embody pluriversal principles and could be extended into the AI domain. Community governance of communications technology, as practiced by community radio networks across Latin America and Africa. Participatory mapping of territorial knowledge, as practiced by indigenous communities using digital tools on their own terms. Cooperative ownership of technological infrastructure, as practiced by rural telecommunications cooperatives. Data sovereignty frameworks, as developed by indigenous intellectual property movements. Each of these represents a working model of the kind of institutional arrangement that pluriversal AI would require, and each demonstrates that the arrangement is not merely theoretically conceivable but practically feasible, if politically difficult.
The difficulty is real, and Escobar's framework does not minimize it. The concentration of capital in the AI industry creates barriers to entry that prevent alternative approaches from competing on the market's terms. The intellectual property regimes that govern AI development protect corporate advantages while preventing communities from examining, modifying, or adapting the tools they use. The network effects that characterize platform markets reward incumbents and penalize alternatives, regardless of the alternatives' qualitative superiority for specific communities and purposes. The data dependencies that AI tools create bind users to specific ecosystems and prevent the portability that genuine choice requires. Each of these structural features resists the pluralization that Escobar's framework proposes, and together they constitute a formation — a convergence of economic incentives, institutional arrangements, and discursive structures — that reproduces itself with the same self-reinforcing logic that the development apparatus exhibited.
But formations are not permanent. The development apparatus seemed permanent in 1970. The colonial order seemed permanent in 1940. The divine right of kings seemed permanent in 1700. Each was dissolved not by a single intervention but by the accumulation of alternative practices, alternative institutions, and alternative ways of understanding the world that gradually rendered the existing formation visible as a formation — as a specific historical arrangement that served specific interests — rather than as the natural order of things. The visibility was the precondition for transformation, because power arrangements that are experienced as natural cannot be contested. Only arrangements that are seen as constructed can be challenged by alternative constructions.
Escobar's contribution to the conversation about AI is the provision of this visibility. The postdevelopment framework makes visible what the technology discourse renders invisible: that the distribution of AI tools is not democratization, that the training data encodes an epistemology, that the tools bring forth a specific world rather than neutrally serving all worlds, that resistance contains knowledge, that governance is concentrated in ways that exclude the populations most affected by the technology, and that the conversation about AI's future is being conducted in terms that foreclose the very plurality it claims to enable.
The visibility does not prescribe a single response. It enables a plurality of responses — each rooted in the specific knowledge, values, and purposes of the community that produces it, each contributing to the collective learning that a global transition requires. The Zapatista community that adopts mobile phones while refusing digital platforms produces one response. The agroecological farmer who uses satellite weather data while declining algorithmic farming recommendations produces another. The indigenous community that documents traditional knowledge digitally while refusing to upload it to external databases produces a third. Each response contains diagnostic knowledge about the relationship between AI tools and community purposes, and the diversity of responses constitutes a collective intelligence about the transition that no single vantage point can provide.
The river of intelligence flows. Escobar does not dispute this. But the river is not one. It is many rivers, flowing through many territories, shaped by many geographies, sustaining many ecologies. The dam that serves life on one river may not serve life on another. The beaver that builds alone builds for its own territory. The minga that builds collectively builds for the community. And the pluriverse that Escobar's framework envisions is not a rejection of dams but an insistence on their plurality — many rivers, many dams, many pools, many forms of life sustained by the specific relationship between each community and the waters that flow through its territory.
Segal concludes The Orange Pill with a question: "Are you worth amplifying?" The question is powerful, and the answer he proposes — that human value lies in the quality of our questions, the depth of our caring, the specificity of our judgment — is genuinely moving. But Escobar's framework poses a prior question, one that must be answered before the amplification question can be meaningful. The prior question is: whose amplifier is this, whose signal does it carry, and whose frequencies does it filter out? Until the amplifier is redesigned to carry the full range of human knowledge — the embodied and the propositional, the communal and the individual, the relational and the computational, the incomputable and the algorithmic — the question of worthiness will be answered within a framework that recognizes only one kind of worth.
The pluriverse is not a utopia to be built. It is the world as it already is, in its irreducible diversity, sustaining itself against the homogenizing pressure of systems that mistake their own particular frequency response for the full spectrum of human sound. The task is not to build the pluriverse. The task is to stop destroying it — to build institutions that sustain diversity rather than suppress it, to govern technology in ways that serve many purposes rather than one, and to listen for the signals that the current system filters out, because those signals carry the knowledge that the transition most urgently needs.
---
Escobar asks a question that I did not think to ask, and that changes the meaning of every question I did.
Throughout The Orange Pill, I asked: Are you worth amplifying? I meant it as a challenge. The amplifier does not filter. It carries whatever signal you feed it. Feed it carelessness, you get carelessness at scale. Feed it depth, and it carries that depth further than any tool in history. I still believe this. But after spending months inside Escobar's framework, I hear something in my own question that I could not hear before. I hear the assumption.
The assumption is that the amplifier is neutral — that it carries your signal faithfully, that it is a transparent medium between your intention and its realization. Escobar made me see that the amplifier is not neutral. Claude has a frequency response. It amplifies certain kinds of knowing and attenuates others. It carries propositional knowledge at full volume and relational knowledge at a whisper. It renders the world as text, as structured argument, as fluent English prose — and whatever does not fit that rendering does not get carried. I wrote this book with Claude, and I was honest about the collaboration, and I remain proud of what we built together. But Escobar taught me to ask what could not be built within the collaboration — what ways of knowing, what forms of understanding, what dimensions of the moment were filtered out by the specific instrument I chose.
The concept that rewired my thinking was not pluriverse, though the pluriverse is the framework's most ambitious contribution. It was a simpler, more uncomfortable idea: that distribution is not democratization. I have used the word "democratization" hundreds of times — in pitch decks, in keynotes, in conversations with my team, in the pages of the book you have already read. I meant it sincerely every time. When I said AI democratizes the capacity to build, I meant that people who could never build before can build now, and that this expansion of capability is morally significant. I still believe the expansion is morally significant. But Escobar showed me that I was confusing two operations that look identical from the distributor's position and look completely different from the recipient's. Giving someone a tool is not the same as giving them power over the tool. Access is not governance. Receiving is not participating.
The developer in Lagos — the figure I invoked to argue for the moral weight of expanding access — haunts me now in a different way. I invoked her to illustrate my argument. I did not ask her to shape it. She appeared in my book as evidence, not as an interlocutor. Escobar named this with a precision that made me defensive on first reading and that I found, on second reading, undeniable. The development apparatus did the same thing for seventy years: invoked the intended beneficiary as proof of the intervention's necessity while never including her in the conversation about what the intervention should be. I did not intend to replicate that structure. The structure replicated itself through me, because the discourse I inhabit — the technology discourse, the builder's discourse, the San Francisco discourse — carries that structure the way Claude carries the patterns of its training data: invisibly, below the threshold of intention.
The minga stayed with me longest. The communal work practice where a community comes together to build something collectively, where governance is shared, where the purpose is defined by the community rather than imposed by the builder. I wrote about beavers. Individual agency, individual dams, individual responses to a force too large to control. Escobar did not tell me the beaver was wrong. He told me the beaver was incomplete. That individual agency matters but individual agency operating within structures it cannot transform is not enough. That the AI transition requires collective governance — the kind that determines where the dams go, whose purposes they serve, how the benefits are distributed — and that collective governance requires the participation of the people whose lives the technology reshapes. Not after the reshaping. During it. Not as beneficiaries. As governors.
I am a builder. I will remain a builder. Nothing in Escobar's framework asks me to stop building. It asks me to build with a wider awareness of what my building assumes, whom it includes, whom it excludes, and whose world it brings forth. It asks me to recognize that the tools I celebrate carry an epistemology, and the epistemology is mine, and it is not the only one. It asks me to listen for the signals my amplifier filters out — the knowledge that does not take the form of text, the purposes that do not take the form of products, the ways of being that do not take the form of productivity.
I cannot redesign the amplifier alone. But I can stop pretending it is transparent. And I can start asking who else should have their hands on the controls.
-- Edo Segal
When Silicon Valley calls the global expansion of AI tools "democratization," it borrows the moral authority of political freedom to describe something structurally different: the distribution of a product. Arturo Escobar spent thirty years anatomizing exactly this maneuver in the postwar development apparatus—where powerful institutions arrived in the Global South, diagnosed deficiency, prescribed their own technologies as the cure, and called the prescription progress. His postdevelopment framework, applied here to the AI transition, reveals what the technology discourse renders invisible: that training data encodes an epistemology, that access is not governance, and that the developer in Lagos appears in the conversation as evidence rather than as an interlocutor. This book does not reject AI's transformative power. It insists that transformation without pluralism is incorporation—and that the communities positioned as beneficiaries of the AI revolution deserve to be its architects. Drawing on Escobar's concepts of the pluriverse, ontological design, and autonomy in relation, it challenges The Orange Pill's framework from the position of those whose knowledge the amplifier filters out. — Arturo Escobar, Designs for the Pluriverse

A reading-companion catalog of the 20 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that "Arturo Escobar — On AI" uses as stepping stones for thinking through the AI revolution.