By Edo Segal
The frequency I never checked was my own.
I built an amplifier. That is the argument of *The Orange Pill* — that AI amplifies whatever signal you feed it, and the question is whether you are worth amplifying. I believed that framing was honest. I still do. But there is a question that comes before it, one I did not think to ask because the answer seemed obvious from where I was standing.
Does the amplifier hear you?
I assumed it did. I assumed it heard everyone. The natural language interface abolished the translation barrier — that was my claim, and I meant it. You could describe what you wanted in plain English, and the machine would meet you there. The democratization of capability. The floor rising for everyone.
Ramesh Srinivasan's work stopped me cold. Not because he disputes the capability expansion — he does not. He is an engineer by training. He directs UCLA's AI Futures Lab. He is not a Luddite, not a refuser, not someone who mistakes nostalgia for analysis. What he does, with a precision that is difficult to sit with comfortably, is ask whose English. Whose plain language. Whose definition of a useful output. Whose problems were baked into the training data, and whose were never in the room when the design decisions got made.
I wrote about the developer in Lagos. I mentioned her twice. I named the caveats — bandwidth, power grids, economic precarity — and moved on, because the thrust of my argument was about what the tools make possible. Srinivasan does not let you move on. He stays. He asks about the language she thinks in before she translates. He asks whether her community's way of organizing knowledge fits the categories the machine was built to process. He asks who decided what counts as a good answer.
These are not hostile questions. They are the questions that genuine democratization requires. And I had not asked them — not because I was hiding from them, but because the water I swim in made them invisible.
This book applies Srinivasan's framework to the AI moment I described in *The Orange Pill*. It does not replace my argument. It completes it. The amplifier is real. The question of whether it can hear the full range of human intelligence — every language, every epistemology, every form of knowledge that lives in practice rather than in text — is the question that determines whether democratization becomes real or remains a promise extended mostly to the already powerful.
The lens he offers is one I needed. I suspect you do too.
— Edo Segal × Opus 4.6
Ramesh Srinivasan (1976–present) is an American scholar of Indian descent whose work spans information studies, design, and political theory. A professor at UCLA, where he holds appointments in the Department of Information Studies and the Department of Design Media Arts and directs the Digital Cultures Lab (now the AI Futures Lab), Srinivasan was trained as an engineer at Stanford and MIT before turning to ethnographic research on how technology intersects with power, culture, and community governance. His major works include *Whose Global Village? Rethinking How Technology Shapes Our World* (2017) and *Beyond the Valley: How Innovators around the World Are Overcoming Inequality and Creating the Technologies of Tomorrow* (2019). His fieldwork with indigenous communities — including the Zuni Pueblo, Zapotec and Mixtec villages in Oaxaca, and cooperatives in Kenya and Detroit — has produced influential frameworks for participatory design and indigenous data sovereignty. A frequent policy adviser, media commentator, and host of the podcast *Utopias*, Srinivasan is among the most prominent voices arguing that the AI revolution must be governed not only by the corporations that build the tools and the states that regulate them, but by the communities whose lives those tools reshape.
In December 2025, a Google principal engineer sat down with Claude Code and described, in plain English, a problem her team had spent a year trying to solve. One hour later, the machine produced a working prototype. She posted about it publicly. "I am not joking," she wrote, "and this isn't funny."
That moment, and the thousands like it that followed in the winter of 2025, is what Edo Segal calls the orange pill — the instant of irreversible recognition that something genuinely new has arrived. The metaphor is vivid and useful. It captures the vertigo, the exhilaration, the sense that the ground has shifted permanently beneath one's feet. But the metaphor carries an assumption so deeply embedded that it functions as invisible architecture: the assumption that the pill is the same pill for everyone. That the recognition it produces is universal. That the ground shifts in the same direction, at the same speed, for a software engineer in Mountain View and a farmer in Tamil Nadu.
Ramesh Srinivasan's life work has been the systematic excavation of exactly this kind of assumption — the kind that presents itself as neutral description and functions as cultural prescription. Trained as an engineer at Stanford and MIT, Srinivasan spent the early part of his career inside the very institutions that produce these assumptions. His trajectory from engineering to ethnography, from building systems to documenting how systems land in communities whose realities differ fundamentally from the realities of their designers, is itself a kind of orange pill — though the recognition it produced was not that something marvelous had arrived but that something marvelous for some people could be something quite different for others.
The geography of the orange pill matters because position determines perception. Segal experienced the orange pill from inside the technology industry, surrounded by engineers, with access to frontier tools, deep familiarity with their predecessors, and the cultural capital to interpret their significance within a narrative of progress. His recognition — that the imagination-to-artifact ratio had collapsed, that a conversation with a machine could now produce working software, that the translation barrier between human intention and computational execution had been abolished — was genuine. The capability expansion was real. The vertigo was earned.
But consider the same moment from a different position. Consider a software developer in Kampala, Uganda, who has been building mobile applications for East African agricultural markets. She has spent years developing solutions for problems that Silicon Valley does not know exist: applications that work on low-bandwidth networks, that accommodate intermittent power supply, that serve users whose relationship to technology is mediated not by individual ownership of high-end devices but by shared access to basic smartphones in community settings. Her expertise is deep, contextually specific, and genuinely innovative in ways that the mainstream technology industry would not recognize as innovation because her work does not fit the template of scalable, venture-backed, English-language products designed for markets with reliable infrastructure.
When Claude Code arrives, what does she recognize? Not simply expanded capability. She recognizes a tool built by an American company, trained on predominantly English-language data, optimized for the workflows of Western knowledge workers, and designed to solve problems defined by the priorities of Silicon Valley. The tool can write code. It can write code impressively well. But the code it writes most fluently is code that addresses the problems its training data reflects — problems of enterprise software, of consumer applications for affluent markets, of the specific technical stack that dominates American technology companies. The problems she solves daily — offline-first architectures for unreliable networks, interfaces designed for shared devices, payment integrations for mobile money systems that most American developers have never encountered — sit at the margins of the tool's competence, not because the tool is incapable in principle but because the training data that shaped its capabilities reflects a world in which these problems are peripheral.
Her orange pill, if she takes one, tastes different. The recognition is not simply that something new has arrived. It is that something new has arrived from elsewhere, carrying assumptions about what matters that do not match her reality, and that the global discourse celebrating this arrival speaks as though her reality is an edge case rather than the condition of the majority of humanity.
Srinivasan has documented this pattern across decades of fieldwork — in indigenous communities in Oaxaca, Mexico, building their own cellular infrastructure because the telecommunications companies that were supposed to serve them found their communities insufficiently profitable; in Detroit neighborhoods creating worker-owned digital platforms because the platforms designed in San Francisco extracted value from their communities without returning it; in Zuni Pueblo, where knowledge systems organized around communal ownership and oral transmission resist the individualist, text-based architecture of every major technology platform. In each case, the technology arrived with the promise of inclusion and delivered something more ambiguous: access to a system whose fundamental assumptions had been set without the input of the people now being asked to use it.
The pattern is not conspiracy. It is not the deliberate exclusion of non-Western perspectives by malicious actors. It is something more structural and therefore more difficult to address: the natural consequence of building tools inside a particular cultural context and then distributing them globally as though cultural context were a variable that could be patched in later, like a language pack or a regional setting.
Segal himself names this limitation with admirable honesty. "It requires English-language fluency," he writes of the AI tools, "because the tools are built by American companies, trained on predominantly English data, and optimized for the workflows of Western knowledge workers." He names the barrier. He acknowledges it as real. And then the argument moves on, because the book's central concern is with what the tools make possible, not with the conditions that determine for whom they make it possible.
Srinivasan's framework insists on staying with the conditions. Not because the capability expansion is illusory — it is not — but because the conditions determine whether the expansion is experienced as liberation or as a new form of dependency. The developer in Kampala who adopts Claude Code gains leverage. She also gains a dependency on an American company's infrastructure, an American company's pricing decisions, an American company's definition of what constitutes good code, and an American company's training data that reflects American priorities. The leverage is real. The dependency is also real. And the relationship between them is not captured by the word "democratization."
Srinivasan's concept of digital cultures — his insistence that technology is never culturally neutral, that the interface, the workflow, the definition of a useful output all carry embedded cultural assumptions — provides the analytical frame that the democratization thesis requires but does not supply. When Srinivasan argues that "technology can really amplify biases because we create technologies based on who we are," the claim extends beyond the familiar critique of biased training data. The bias is not only in the data. It is in the architecture. In the conversational interface that assumes a particular model of knowledge exchange — question-and-answer, individual expertise, explicit instruction — that is culturally specific to Western professional settings. In the definition of "working software" that privileges deployable artifacts over the communal processes that produce them. In the implicit theory of creativity that assumes individual authorship and measurable output.
The Makerere University AI Lab in Uganda illustrates the tension with painful clarity. Srinivasan visited this lab and documented its predicament: the challenges it faces are local, the questions it wishes to answer are local, the communities and ecosystems it serves are local. But the funding models are not. The lab's survival depends on grants from international organizations whose priorities may not align with the questions that matter most to Ugandan communities. The researchers find themselves answering the "right" questions — right according to the funders — rather than the questions their neighbors are asking. The tools they use were designed for problems defined elsewhere. The metrics by which their success is measured were established by institutions whose understanding of innovation is shaped by Silicon Valley's template.
This is not an argument against the AI tools. It is an argument about the conditions under which the tools arrive and the power structures that determine whose questions they are optimized to answer. The orange pill is real. The capability expansion is real. The vertigo is real. But the experience of all three is shaped by where you stand when the ground shifts, and the majority of humanity stands in places that the designers of the tools have never visited, whose problems the training data does not reflect, and whose definitions of a good outcome the interface cannot express.
Srinivasan's most pointed intervention is his insistence that the response to this asymmetry cannot be merely technical. Adding more languages to the interface does not address the epistemological assumptions embedded in the interface's design. Including more non-Western data in the training set does not address the question of who decides what counts as data, what counts as knowledge, what counts as a useful output. The asymmetry is structural, and structural problems require structural responses — changes not just in what the tools can do but in who decides what they should do, who defines the problems they are meant to solve, and whose definition of success determines whether they have solved them.
The geography of the orange pill is not a minor caveat to the democratization thesis. It is the question the thesis must answer to be credible. If the pill produces different recognition depending on where you stand, then the claim that AI democratizes capability must specify: capability for what? Defined by whom? Measured how? And if the answers to those questions are, as they currently are, capability for building software that fits Silicon Valley's definition of useful, defined by the companies that built the tools, and measured by the metrics those companies establish, then the democratization is real but radically incomplete.
Srinivasan would not reject the orange pill. He would ask whose hand is offering it, what assumptions are baked into its chemistry, and whether the person swallowing it had any voice in determining what recognition it would produce. These are not hostile questions. They are the questions that genuine democratization requires, and that the current discourse, in its exhilaration, has not yet learned to ask.
---
For the entire history of computing, using a computer required translation. Segal traces this trajectory with precision: from assembly language to graphical interfaces to touchscreens, each transition moved the human closer to the machine, each reduced the friction of translation, each was celebrated as a step toward a more natural relationship between human intention and computational execution. And then, in 2025, the final barrier fell. The machine learned to speak human language. Natural language became the interface. The translation cost that had governed every interaction between humans and computers since the first command line was abolished.
Except it was not abolished. It was relocated. And the relocation followed a pattern that five centuries of colonial history would have predicted.
The natural language interface is a natural English-language interface. The models that power it were trained on corpora that are overwhelmingly English — not because the designers harbor linguistic prejudice but because the internet, the source of most training data, is dominated by English-language content. English constitutes roughly sixty percent of all web content. The next largest language, Russian, accounts for approximately five percent. Languages spoken by billions — Hindi, Bengali, Yoruba, Swahili, Tagalog — are represented in fractions of a percent. The models learn what the data teaches, and the data teaches English.
This is not a transitional problem that will resolve as models improve. Srinivasan's framework identifies it as a structural feature that reproduces colonial language hierarchies through technological infrastructure. The British Empire established English as the language of administration, commerce, education, and social advancement across a quarter of the globe. Post-colonial nations retained English as a prestige language, a gateway to international markets, a requirement for participation in global institutions. The linguistic hierarchy was never merely linguistic. It was economic, political, and epistemological — a system that determined not just which words could be spoken in which rooms but which forms of knowledge counted as legitimate, which modes of argument were considered rigorous, which ways of organizing thought were recognized as rational.
The AI interface inherits this hierarchy and amplifies it. When Segal celebrates the abolition of the translation barrier — the moment when a builder could describe what she wanted "in the same language you'd use with a brilliant colleague" — the claim is true for English speakers. For the more than seven billion people whose first language is not English, the interface creates a new translation barrier that operates at a deeper level than any programming language ever did.
Consider the developer in Lagos. She speaks Yoruba at home, Pidgin English in the market, and formal English in professional settings. She can write competent English. But competent English is not the same as native fluency, and the difference matters in ways that the interface's designers may not have considered. The conversational AI that Segal describes — the one that responds "not with a literal translation of my words, but with an interpretation, a reading, an inference about what I was actually trying to do" — performs its interpretive magic most effectively when the input matches the linguistic patterns it has been trained on most extensively. The nuances of native English expression — the idioms, the implied context, the cultural references, the pragmatic conventions that determine how a request is understood — are captured in the training data with a density that non-English expression cannot match.
The developer in Lagos who formulates her ideas in English before the tool can process them is not merely translating words. She is translating conceptual frameworks. The Yoruba concept of àṣà — which encompasses custom, tradition, and the normative expectations of community life in ways that have no single English equivalent — must be compressed into English terms that lose essential dimensions of meaning. The relational logic that structures Yoruba thought, in which the community is the primary unit of analysis rather than the individual, must be reformulated into the individualist framework that English professional discourse assumes. The translation is not just linguistic. It is epistemological. Information is lost at every step, and the information that is lost is precisely the information that makes her perspective distinctive and potentially most valuable.
Srinivasan has documented this epistemological compression across his fieldwork. In indigenous communities in Mexico, knowledge about land management is embedded in narrative traditions that encode ecological relationships through stories passed between generations. These narratives do not separate "data" from "context" in the way that Western scientific discourse does. The knowledge is the story. Extract the data points and you lose the relational structure that gives them meaning. Feed the data points to an AI system trained on Western scientific literature and you get outputs that are formally correct and contextually meaningless — recommendations that work in the abstract and fail in the specific ecological, social, and cultural conditions they are meant to address.
The same dynamic operates in the AI coding interface. The problems that the tool solves most effectively are the problems that its training data represents most richly: the architectural patterns of Western enterprise software, the design conventions of Silicon Valley consumer products, the technical vocabulary of American engineering culture. When the developer in Lagos asks Claude to help her build an application for a local agricultural cooperative — an application that must work offline, accommodate shared device usage, integrate with mobile money systems, and respect communal decision-making processes that do not map onto Western user-experience conventions — she is asking the tool to operate at the margins of its training. The tool will try. It will produce something. But the something will carry the implicit assumptions of its training: individual user accounts rather than shared access patterns, continuous connectivity rather than offline-first architecture, payment integrations designed for credit cards rather than M-Pesa, interfaces that assume private device ownership rather than community access points.
The developer can correct these assumptions. She can prompt iteratively, specifying the local conditions the tool does not know to assume. But each correction is a cost — cognitive labor that the developer in San Francisco does not have to expend because the tool's default assumptions already match his reality. The translation barrier has not been abolished. It has been transformed from a syntactic barrier (learning a programming language) to an epistemological one (reformulating your reality into the tool's default assumptions). And the epistemological barrier may be harder to see, harder to name, and harder to overcome, precisely because the tool appears to speak your language while actually requiring you to speak its.
Srinivasan and Dipayan Ghosh, in their 2023 paper proposing a new social contract for technology, argued that digital rights must go beyond viewing data as property to recognizing it as an extension of human agency and dignity. The linguistic dimension of this argument is underexplored but critical. Language is not merely a medium of communication. It is a medium of thought. The Sapir-Whorf hypothesis, in its moderate form, holds that the language you speak shapes the cognitive categories available to you — not determining thought but influencing its texture, its grain, its default assumptions. If this is true of natural languages, it is emphatically true of the language in which you must formulate your prompts to an AI system.
The developer who thinks in Yoruba and prompts in English is not merely inconvenienced. She is cognitively taxed in a way that subtly degrades the quality of her interaction with the tool. The ideas that come most naturally to her — the ones that emerge from the conceptual structures of her first language, the ones that reflect her community's way of organizing knowledge and defining problems — must be translated before the tool can process them. And translation, as any bilingual speaker knows, is not a lossless operation. Something always stays behind. The untranslatable residue is often the most culturally specific, the most contextually rich, the most potentially innovative part of the thought.
Srinivasan's critique of Facebook's internet.org initiative illuminates the pattern. Facebook offered free internet access to communities in the Global South — but only to Facebook's platform and its selected partners. The access was real. The constraint was also real. The communities gained connectivity and lost the ability to determine what that connectivity was for. The gift came with an architecture, and the architecture came with assumptions, and the assumptions came with a business model that served the giver more reliably than the recipients. Srinivasan documented communities in Bolivia and India where the "free internet" was experienced not as liberation but as a new form of dependency — access to a system designed elsewhere, for others, whose terms of engagement had been set without consultation.
The AI language interface follows the same structural pattern. The access is real. The capability expansion is genuine. And the architecture carries assumptions — about what language is the medium of thought, about what forms of knowledge are legitimate inputs, about what constitutes a useful output — that were established without the input of the majority of the world's population. The interface speaks English. The communities it reaches speak thousands of languages. The gap between them is not a technical limitation awaiting a patch. It is a reproduction, in digital infrastructure, of the linguistic hierarchies that colonialism established and that post-colonial economies have maintained.
The response cannot be merely multilingual support, though multilingual support is necessary and overdue. The response must address the epistemological architecture of the interface itself — the model of knowledge exchange it assumes, the definition of useful output it enforces, the forms of thought it rewards and the forms it cannot parse. Srinivasan's insistence on participatory design — on building tools with communities rather than for them — points toward a different model of AI development, one in which the interface is shaped by the epistemological diversity of its users rather than by the epistemological monoculture of its designers.
The natural language revolution is real. Its promise is genuine. And its current implementation reproduces, at global scale, a hierarchy of whose natural language counts.
---
Segal mentions her twice. Once as a beneficiary of democratization — the developer in Lagos who "can now access the same coding leverage as an engineer at Google." Once as a reminder that barriers remain — "not the same salary, not the same network, not the same institutional support, not the same safety net if the project fails." The mentions are honest. They are also brief. The developer in Lagos appears as a figure in an argument about capability expansion, a data point in the case for democratization, and then the argument moves on to the engineers in Trivandrum, the solo builder in America, the SaaS companies watching their valuations crater.
Srinivasan's methodology requires staying longer. His fieldwork tradition — the ethnographic commitment to understanding a community's reality on its own terms before interpreting it through external frameworks — demands that the developer in Lagos be more than a rhetorical figure. She must be a person in a place, with conditions that shape what the tools can and cannot do for her, with a history that explains why those conditions exist, and with a perspective on the technology that may differ fundamentally from the perspective of the people who built it.
So consider her conditions. Not as barriers to be listed and moved past, but as the environment in which the tools actually function.
The power grid in Lagos is unreliable. The Nigerian Electricity Regulatory Commission's own data documents an average of approximately four thousand megawatt-hours of unserved energy daily. For a developer, this means interrupted work sessions. A coding conversation with Claude that requires sustained context — the kind of extended, iterative dialogue that Segal describes as the heart of the new workflow — is vulnerable to power cuts that reset the session, lose the context, and require the developer to reconstruct from memory what the machine had been holding for her. The developer in San Francisco does not think about electricity. It is infrastructure so reliable it has become invisible. For the developer in Lagos, electricity is a variable that shapes every working day, and the AI tools were designed for an environment in which it is not a variable at all.
Bandwidth costs compound the problem. A 2024 Alliance for Affordable Internet report found that the cost of one gigabyte of mobile data in Nigeria represented a significantly higher percentage of average monthly income than the same gigabyte in the United States. Cloud-based AI tools — tools that require continuous connectivity to remote servers, that transmit substantial data with every prompt and response — are designed for environments where bandwidth is cheap and plentiful. In Lagos, every prompt has a cost beyond the subscription fee: the data cost of the transmission, priced at rates that make sustained AI-assisted development a meaningful line item in a budget constrained by economic conditions that the tool's pricing model does not acknowledge.
Srinivasan's research on Kenya's M-Pesa mobile banking system provides a counterexample that illuminates the structural issue. M-Pesa succeeded precisely because it was designed for local conditions — for users who did not have bank accounts, who transacted in small amounts, who accessed the system through basic feature phones rather than smartphones, who needed the service to work in environments where infrastructure was unreliable and connectivity was intermittent. M-Pesa was not a global product adapted for Kenya. It was a Kenyan product designed for Kenyan conditions. The difference is not cosmetic. It is architectural. And it explains why M-Pesa transformed East African financial inclusion in ways that no Western banking platform, however generously priced, could have achieved.
The AI tools arriving in Lagos are the opposite of M-Pesa. They are global products designed for the conditions of their origin — continuous high-bandwidth connectivity, reliable power, individual device ownership, English-language fluency — and offered to the world on the assumption that the world's conditions either match or will eventually converge toward the conditions the tools were designed for. The assumption is not articulated. It does not need to be. It is built into the architecture.
Economic precarity adds a dimension that the democratization thesis does not address. The developer in San Francisco who builds a prototype over a weekend and watches it fail has lost a weekend. The developer in Lagos who builds a prototype over a weekend and watches it fail may have lost the money she needed for rent, for food, for her children's school fees. The safety net that allows experimentation — the financial cushion, the social network of other developers who can offer temporary employment, the venture capital ecosystem that funds failure as a cost of finding success — is thin or absent. Every project carries stakes that the discourse about rapid prototyping and fail-fast iteration does not acknowledge because the discourse assumes a baseline of economic security that most of the world's developers do not possess.
Srinivasan has argued repeatedly that the digital divide is not merely about access. It is about the conditions that determine whether access translates into agency. A community can have access to a tool and lack the conditions that make the tool meaningful: the infrastructure, the economic security, the cultural fit, the institutional support, the network connections that determine whether a good product finds users and funding or dies in obscurity. Access without these conditions is formal equality without substantive equality: everyone can use the tool, but the tool works differently depending on where you use it.
The institutional infrastructure of the technology industry reinforces the asymmetry. Venture capital is concentrated geographically — overwhelmingly in the United States, with secondary concentrations in China, the United Kingdom, and a handful of other nations. African startups received less than two percent of global venture funding in recent years. The developer in Lagos who builds something brilliant with Claude Code enters a funding landscape in which her geographic location is a liability, in which investors' pattern-matching algorithms (literal and cognitive) are calibrated to founders who look, sound, and build like the founders who have succeeded before — predominantly English-speaking, Western-educated, operating in markets that venture capitalists understand from personal experience.
Srinivasan documented this pattern in *Beyond the Valley*, his 2019 fieldwork-based study of innovation outside Silicon Valley. In community after community — in India, in Mexico, in Kenya, in Detroit — he found people building remarkable solutions to genuine problems, solutions that the mainstream technology industry would not recognize as innovations because they did not fit the template: they were not designed for scale in the Silicon Valley sense, they served local communities rather than global markets, they were built through collective processes rather than individual genius, and they measured success in terms of community benefit rather than revenue growth.
The AI tools could, in principle, amplify these innovations. A communal agricultural decision-making platform built by a cooperative in Nigeria. A health information system designed for community health workers who share devices and operate in areas without reliable connectivity. A local language education tool built for a specific linguistic community whose language is not supported by any major technology platform. Each of these represents the kind of problem that the democratization thesis celebrates — a problem that AI tools could help solve by lowering the cost of development.
But the tools, as currently designed, are optimized for a different class of problems. The training data does not include examples of communal device-sharing interfaces. The models do not have rich representations of Nigerian mobile money integration patterns. The default architectural recommendations assume continuous connectivity, individual user accounts, and the technical stack that dominates American development. The developer who wants to build for her community must first teach the tool about her community's conditions, and the teaching is a cost — in time, in cognitive labor, in the iterative prompting required to override default assumptions that do not match her reality.
Srinivasan's concept of participatory design offers an alternative model. Rather than building tools in one context and distributing them globally, participatory design begins with the community's own articulation of its needs, values, and constraints. The design process includes the people who will use the tool as co-designers, not as end-users who receive a finished product and are asked for feedback. The difference is not procedural. It is epistemological. Participatory design assumes that the community possesses knowledge that the designers do not — knowledge about local conditions, about cultural norms, about what constitutes a useful outcome — and that this knowledge must shape the tool's architecture, not just its surface features.
Applied to AI development, participatory design would mean including developers from Lagos, from Kampala, from Dhaka, from Oaxaca not as beta testers but as co-architects. It would mean training models on data that reflects the problems these developers actually solve, the languages they actually think in, the conditions they actually work within. It would mean defining "working software" not solely as code that compiles and runs but as code that serves the community it was built for, measured by the community's own criteria of success.
This is not currently happening at any meaningful scale in the AI industry. The reasons are structural, not conspiratorial. The economic incentives of the AI industry point toward the largest markets, the wealthiest users, the problems that generate the most revenue. The developer in Lagos is a future market, not a current priority. Her inclusion is aspirational — something the industry intends to address eventually, after the technology matures, after the costs come down, after the language support improves.
Srinivasan's entire career has been an argument against "eventually." The communities that are told to wait for inclusion are the communities whose exclusion becomes structural — baked into the architecture, the training data, the default assumptions, the economic models — so that by the time inclusion arrives, the terms have been set without them and the cost of changing those terms has become prohibitive.
The developer in Lagos deserves more than a mention. She deserves a seat at the table where the tools are designed. And the orange pill, if it is to mean what Segal wants it to mean — an irreversible recognition that something genuinely new has arrived — must include the recognition that the new thing has arrived unevenly, on terms set by its creators, carrying assumptions that serve some communities and constrain others, and that completing the promise of democratization requires not just distributing the tool but redesigning it.
---
In 2019, a team of researchers at the University of Virginia published a study examining bias in image recognition systems. They found that AI models trained on standard datasets associated images of kitchens with women at rates that significantly exceeded the actual gender distribution in the training images. The models had learned not just to recognize objects but to reproduce and amplify the stereotypical associations embedded in their training data. The bias was not introduced by the researchers. It was inherited from the data, which reflected the world as it had been photographed and captioned — a world in which cultural assumptions about gender and domestic space were encoded in millions of image-text pairs, invisible to the humans who created them, visible to the statistical patterns the machine extracted.
This study is one among hundreds that have documented bias in AI training data. Safiya Umoja Noble's *Algorithms of Oppression* demonstrated how search engine algorithms reproduced racial stereotypes. Ruha Benjamin's *Race After Technology* coined the term "the New Jim Code" to describe how ostensibly neutral technological systems encode and reinforce existing racial hierarchies. Joy Buolamwini's work at MIT showed that facial recognition systems performed significantly worse on darker-skinned faces, particularly women, because the training data was disproportionately composed of lighter-skinned subjects.
These findings are now well-established in the AI ethics literature. What Srinivasan's framework adds is not a repetition of the bias critique but a deepening of it — from bias as a technical problem with a technical solution (more representative data, better debiasing algorithms, improved evaluation metrics) to bias as an epistemological problem that reflects fundamental questions about whose knowledge counts, whose ways of organizing the world are treated as legitimate, and whose reality the machine is trained to recognize.
The distinction matters because it changes the nature of the response. A technical problem has a technical solution. An epistemological problem requires a rethinking of the categories themselves.
Consider what is in the training data. The large language models that power contemporary AI systems were trained on text scraped from the internet — books, articles, websites, forums, code repositories, social media posts, encyclopedias. The internet, despite its appearance of boundless diversity, is a profoundly skewed sample of human knowledge. Srinivasan has emphasized that the data represents the digitized, English-language, predominantly Western knowledge that happens to be available online. The knowledge systems of indigenous communities, oral cultures, non-Western philosophical traditions, and communities that have not been digitized are systematically underrepresented — not because they lack knowledge but because their knowledge exists in forms the training pipeline cannot ingest.
The Zuni people of New Mexico, whom Srinivasan has studied extensively, maintain knowledge systems that are communally held, orally transmitted, and embedded in ceremonial practices that are not public. Zuni astronomical knowledge — sophisticated understandings of celestial cycles developed over centuries of careful observation — is encoded in ritual narratives that integrate what Western science would separate into astronomy, ecology, agriculture, and social organization. This knowledge is not less rigorous than Western scientific knowledge. It is differently organized, differently transmitted, and differently authorized. It is also entirely absent from the training data of any major AI model, because it was never digitized, because digitizing it would violate the cultural protocols that govern its transmission, and because the categories of Western knowledge organization that structure the training pipeline have no place for it.
The absence is not trivial. When Segal argues that creativity is relational — that it lives in connections between things, in the synthesis of inputs through a particular lens — the immediate question is: whose inputs? The amplifier amplifies what it has been trained on. What it has been trained on reflects the epistemic priorities of the cultures that produced the training data. The synthesis the machine performs is a synthesis of Western knowledge, predominantly English-language knowledge, knowledge that was digitized because it existed within institutional structures — universities, publishers, media companies, technology platforms — that value digitization and have the resources to perform it.
The knowledge that was not digitized — because it is oral, because it is communal, because it is sacred, because it exists in languages that lack substantial digital presence, because the communities that hold it lack the institutional infrastructure to digitize it — is invisible to the machine. And invisibility in the training data is not neutral. It means that the machine's outputs systematically favor the epistemological frameworks that are represented in its training at the expense of those that are not. The machine does not know what it does not know. It does not flag its own ignorance. It produces confident outputs from a partial dataset and presents them as comprehensive.
Srinivasan's concept of epistemic justice — borrowed from philosopher Miranda Fricker and applied to digital systems — holds that justice requires not merely the fair distribution of material resources but the fair treatment of people as knowers. Epistemic injustice occurs when a person's testimony is discounted because of prejudice against their social identity (testimonial injustice) or when a person lacks the conceptual resources to make sense of their own experience because the dominant culture does not provide them (hermeneutical injustice). AI systems can commit both forms at scale.
Testimonial injustice in AI occurs when the system's training data discounts or excludes the testimony of entire communities. An AI system asked about agricultural best practices will draw on the scientific literature that dominates its training data — literature produced primarily by Western research institutions, written in English, published in journals that reflect the epistemological conventions of Western agronomy. The sophisticated agricultural knowledge of indigenous communities — knowledge developed over centuries of careful observation and experimentation in specific ecological contexts — is largely absent. The system's recommendation may be scientifically sound in the Western sense and ecologically disastrous in the local context, because it does not account for the soil conditions, the microclimatic patterns, the crop rotation practices, the water management systems that the indigenous knowledge system encodes. Srinivasan warned of exactly this kind of failure when he discussed AI in drug discovery: "If you don't include the right data, you might recommend a drug to a given patient that might help them with cancer but might make their heart condition worse if you don't account for that data."
Hermeneutical injustice in AI occurs when the system's conceptual categories do not accommodate the ways that non-Western communities organize knowledge. Consider a public health AI system deployed in a community where health is understood not as an individual biological state but as a relational condition embedded in family, community, and spiritual life. The system's categories — symptoms, diagnoses, treatments, outcomes — reflect the Western biomedical model. They have no place for the relational dimensions of health that the community considers central. The system can process the community's health data only by stripping it of the relational context that gives it meaning, producing recommendations that are formally correct within the Western model and culturally incomprehensible within the community's own framework.
The training data problem is often framed as a representation problem: the data does not include enough examples from non-Western sources, and the solution is to include more. Srinivasan's framework suggests that this framing is insufficient. The problem is not only that the data is unrepresentative. The problem is that the categories used to organize the data — the ontologies, the taxonomies, the classification systems — reflect Western epistemological assumptions that may not accommodate non-Western knowledge forms even when those forms are included.
Western knowledge organization tends to be taxonomic: knowledge is divided into discrete categories arranged in hierarchical structures. Biology is separate from ecology is separate from cultural practice. Indigenous knowledge systems often organize knowledge relationally: the boundaries between categories are porous, and meaning is constituted by connections between domains that Western taxonomy separates. Including indigenous agricultural knowledge in an AI training set organized by Western categories is like translating poetry by preserving the dictionary meanings of the words and losing the rhythm, the allusion, the cultural resonance that makes the poem a poem.
Segal's *Orange Pill* offers a suggestive parallel. He describes how Dylan's "Like a Rolling Stone" emerged not from solitary genius but from the synthesis of a vast implicit training set of cultural experience — Woody Guthrie, Robert Johnson, the Beat poets, the British Invasion, the Greenwich Village coffee shops. The synthesis was Dylan's. The inputs were relational. Now extend the analogy. If Dylan's training set had included only one musical tradition — only European classical music, for instance — the synthesis would have been fundamentally different. Not necessarily worse in some abstract aesthetic sense, but impoverished in a specific way: lacking the cross-cultural collisions that produced the distinctive energy of American popular music. The richness of the output depended on the diversity of the inputs.
The same principle applies to AI. A model trained predominantly on Western knowledge produces outputs that reflect Western epistemological assumptions with extraordinary fluency and sophistication. It can synthesize within that tradition with remarkable power. But it cannot synthesize across traditions that are not represented in its training — cannot connect Western ecological science with indigenous land management, Western pharmacology with traditional medicine, Western computer science with the communal knowledge-production practices that might reveal entirely different approaches to the problems AI is being asked to solve.
Srinivasan's work with the CARE Principles for Indigenous Data Governance — Collective benefit, Authority to control, Responsibility, Ethics — points toward a framework for addressing epistemic justice in AI training. The CARE Principles hold that indigenous communities have the right to control their own data, to determine how it is collected, stored, used, and shared, and to ensure that its use serves the collective benefit of the community. Applied to AI training, this would mean that indigenous knowledge could not simply be scraped from available sources and fed into models without the consent and active participation of the communities that hold it.
This is not a minor procedural requirement. It represents a fundamental challenge to the AI industry's standard practice of treating all available data as raw material for training, regardless of its provenance, its cultural significance, or the wishes of the communities that produced it. The CARE Principles insist that data is not a commodity. It is an expression of community identity, knowledge, and authority. Treating it as raw material for extraction is, in Srinivasan's framing, a form of digital colonialism — the appropriation of communal resources by external actors for purposes determined without communal consent.
The epistemological critique does not require rejecting AI. It requires redesigning the systems that produce it — not just the training data but the ontological frameworks that organize it, the evaluation metrics that measure its quality, and the governance structures that determine whose knowledge counts and whose does not. The amplifier is powerful. Its power makes the question of what it amplifies — whose knowledge, whose categories, whose epistemological assumptions — not a technical footnote but a question of justice.
And justice, as Srinivasan has argued throughout his career, cannot be achieved by optimizing the existing system. It requires changing who sits at the table where the system is designed.
---

In the foreword to *The Orange Pill*, Segal introduces a metaphor that may be the book's most quietly powerful contribution: the fishbowl. "The set of assumptions so familiar you've stopped noticing them. The water you breathe. The glass that shapes what you see. Everyone is in one." The scientist's fishbowl is shaped by empiricism. The filmmaker's by narrative. The builder's by the question, "Can this be made?" Every fishbowl reveals part of the world and hides the rest.
Segal encourages the reader to press her face against the glass and see, even for a moment, the world beyond the water she has always breathed. The instruction is generous and genuinely offered. But it carries an irony that Segal himself half-acknowledges: the author is inside the fishbowl he is describing. He knows this. He names it. And then the book proceeds, largely, from within the fishbowl it has named, because naming the glass is not the same as stepping through it.
Srinivasan's career has been an extended attempt to describe what the technology industry's fishbowl looks like from the outside — from the villages, the cooperatives, the indigenous communities, the urban neighborhoods where the industry's products land and where the assumptions embedded in those products meet realities they were not designed to accommodate. His perspective is that of a scholar who was trained inside the fishbowl — Stanford engineering, MIT — and who chose to leave it, not in the sense of abandoning technology but in the sense of refusing to accept its self-description as universal truth. The water he breathed as an engineering student, the assumptions about what constitutes progress and who defines it, became the object of his study rather than its invisible medium.
What are the assumptions? Srinivasan's work identifies several that operate with particular force in the AI industry, each so deeply embedded in the culture that produced these tools that they function not as beliefs but as the architecture of thought itself.
The first assumption is that productivity is the primary measure of human value. Segal's *Orange Pill* is saturated with this assumption — not because Segal is uncritical of it (he engages Han's critique of the achievement society with genuine seriousness) but because the book's central metric of AI's value is the expansion of what individuals can produce. The twenty-fold productivity multiplier. The imagination-to-artifact ratio. The capacity to ship in a weekend what once took a team of five six months to build. These are the measures by which the technology is evaluated, and they reflect a specific cultural commitment: the belief that making more, faster, is the fundamental good that tools should serve.
This commitment is not universal. Srinivasan's fieldwork has documented communities whose relationship to production is organized around principles that the productivity framework cannot capture. In indigenous communities in Oaxaca, Mexico, the concept of tequio — communal labor performed for the benefit of the community, not measured in individual output, not compensated by wages, not optimized for efficiency — represents an alternative understanding of what work is for. Work in this framework is not primarily about producing artifacts. It is about maintaining relationships, fulfilling obligations to the community, participating in a social fabric that is valued not for what it produces but for the form of life it sustains.
An AI tool optimized for productivity — for maximizing the output of individual users, for collapsing the time between idea and artifact, for eliminating the friction that slows production — is a tool built for a culture that values productivity above other goods. Deployed in a community organized around tequio, it does not merely fail to help. It introduces a metric of value that is foreign to the community's self-understanding, a metric that implicitly devalues the communal labor, the slow relationship-building, the non-productive social time that the community considers essential to its identity.
The second assumption is that individual agency is the unit of analysis. The hero of *The Orange Pill* is the individual builder — the engineer whose capability is amplified, the solo founder who ships a product without a team, the parent at the kitchen table wondering what to tell her child. The book's emotional power derives from its focus on individual experience: the vertigo, the exhilaration, the terror of watching your skills become less scarce. This focus is not wrong. Individual experience matters. But it is a focus shaped by a specific cultural commitment to methodological individualism — the belief that society is best understood as a collection of individuals whose choices, capabilities, and experiences are the fundamental units of social analysis.
Srinivasan's research on community networks in Latin America, cooperative technology platforms in Detroit, and communal decision-making processes in indigenous communities documents a different unit of analysis: the collective. In these contexts, the relevant question is not what the technology enables an individual to do but what it enables a community to become. The cooperative in Kenya that uses M-Pesa for community savings does not measure its success in individual productivity. It measures success in collective resilience — the community's capacity to absorb shocks, to support members in crisis, to make decisions that reflect shared values rather than individual optimization.
An AI industry built on the assumption of individual agency produces tools designed for individual users. The interface is a conversation between one person and one machine. The account is individual. The output belongs to the prompter. The pricing model charges per user, per seat, per individual subscription. The entire architecture assumes that the fundamental relationship is between a single human mind and a computational system, and that the value of the interaction is measured by what the individual produces.
Communities that organize knowledge and labor collectively — communities where the relevant unit is not the individual but the family, the cooperative, the village, the tribe — find that the tools do not fit their social architecture. A shared device used by multiple community members cannot maintain the individual conversation history that the AI interface assumes. A collective decision-making process that involves extended deliberation among community members cannot be compressed into a single prompt-response cycle. A communal knowledge system that distributes expertise across multiple people rather than concentrating it in individual specialists cannot be represented by a single user's interaction with the tool.
The third assumption is that the English-speaking knowledge worker is the default user. This assumption has been explored in the preceding chapter on language, but Srinivasan's framework extends it beyond linguistics to the broader category of what counts as knowledge work. The AI tools are optimized for the workflows of a specific professional class: people who write code, draft documents, analyze data, design interfaces, manage projects. These are the activities that the tools perform most impressively, because these are the activities represented most richly in the training data, because these are the activities of the people who built the tools.
The assumption that knowledge work is the paradigmatic form of labor — that it represents the frontier of human capability and that tools designed for it are tools designed for the future — reflects the class position of the technology industry's workforce. The majority of the world's workers are not knowledge workers in this sense. They are farmers, care workers, manual laborers, artisans, traders, community organizers, religious leaders, traditional healers. Their work involves knowledge — deep, sophisticated, contextually specific knowledge — but it does not take the form that the AI tools are designed to augment. A traditional healer in rural India possesses diagnostic knowledge that has been refined over generations of careful observation. An experienced farmer reads soil, weather, and ecological conditions with a sophistication that Western agronomy is only beginning to formalize. These forms of knowledge are invisible to the tools because they are invisible to the culture that built the tools.
Srinivasan has been explicit about this blind spot. "We see all these announcements with people like Larry Ellison and Sam Altman, side by side, offering tens of thousands of jobs," he observed at the 2026 Web Summit Qatar. "What jobs? Where? What we see is economic precarity more and more." The promise of AI-generated employment, in Srinivasan's analysis, reflects the fishbowl of the technology industry itself — a world in which "jobs" means knowledge-economy positions in developed nations, in which "opportunity" means access to tools that serve the workflows of the already-privileged, in which the vast majority of the world's workers are afterthoughts in a narrative about the future of work.
The fourth assumption is that building means producing codifiable artifacts. Segal's central narrative arc traces the collapse of the imagination-to-artifact ratio — the distance between an idea and its realization in working code, a deployed product, a functional system. The celebration is of making things: software that runs, interfaces that respond, products that ship. The assumption is that the fundamental creative act is the production of a discrete, deployable artifact, and that the value of AI lies in reducing the cost of that production.
Srinivasan's fieldwork documents creative processes that produce no artifacts in this sense. The communal deliberation that produces a decision about resource allocation. The storytelling tradition that transmits ecological knowledge across generations. The ritual practice that maintains social cohesion and collective identity. The participatory design process in which a community articulates its own needs and values — a process whose output is not a product but a shared understanding. These processes create value. They produce knowledge. They solve problems. But they do not produce codifiable artifacts, and the tools designed to accelerate artifact production cannot accelerate them — not because the tools are inadequate but because the processes are not the kind of thing the tools were built to serve.
The fishbowl of Silicon Valley is not a conspiracy. It is a culture — a set of assumptions about what matters, how to measure it, and who the relevant actors are. These assumptions are not false within their domain. Productivity matters. Individual capability matters. Knowledge work matters. Codifiable artifacts matter. The fishbowl is a fishbowl, not a delusion. What it reveals is real. What it hides is also real, and what it hides is the majority of human experience.
Srinivasan's challenge to the technology industry is not that it sees falsely but that it universalizes its partial vision. When a tool built for the fishbowl is distributed globally as though the fishbowl's water were the atmosphere — as though productivity, individual agency, English-language knowledge work, and artifact production were universal human priorities rather than the specific priorities of a specific culture — the distribution is not democratization. It is the globalization of a particular worldview, carried by tools so powerful that the worldview becomes infrastructure, embedded in the architecture of systems that billions of people will use without having been consulted about the assumptions those systems encode.
Segal asks the reader to look outside the fishbowl. Srinivasan has spent his career doing exactly that — not from a position of refusal but from a position of empirical engagement with the communities that live outside the glass. What he has found, consistently, is that the water inside the fishbowl is not the only water available. Other cultures breathe different assumptions, organize knowledge differently, measure value differently, define creativity differently, and understand the relationship between technology and human flourishing in terms that the fishbowl's inhabitants have not yet learned to hear.
The task is not to break the fishbowl. It is to acknowledge that it is a fishbowl — not the ocean — and to design tools that can function in waters the designers have never swum in. That requires, as Srinivasan has argued with increasing urgency, not just better engineering but a fundamental expansion of who gets to define what the tools are for.
---
The amplifier is the central metaphor of The Orange Pill. AI amplifies whatever signal you feed it. Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history. "The question this book is trying to answer," Segal writes, "is not 'Is AI dangerous?' or 'Is AI wonderful?' It's: 'Are you worth amplifying?'"
The question is powerful. It places responsibility on the person, not the tool. It reframes the AI debate from a question about the technology's nature to a question about the user's quality. It is, in its way, a democratizing move — it says the tool is neutral, and the outcome depends on you.
Srinivasan's framework accepts the question's power and asks a prior question that the metaphor conceals: Is the amplifier designed to hear you?
An amplifier, in the literal sense, is not neutral. It is tuned. It has a frequency response — a range of signals it can receive and reproduce faithfully, and a range it cannot. A guitar amplifier designed for electric guitars will distort an acoustic guitar's signal. Not because the acoustic guitar's signal is less worthy of amplification but because the amplifier's circuitry was designed for a different input. The distortion is not in the signal. It is in the mismatch between signal and amplifier.
The AI amplifier has a frequency response. It is tuned to certain inputs — English-language text, Western epistemological frameworks, the problem-solution structures of professional knowledge work, the individual-user interaction model — and it reproduces these inputs with extraordinary fidelity. Feed it a well-formulated prompt in English about a software architecture problem, and it will return a response of remarkable sophistication. The amplification is genuine.
Feed it a prompt in Yoruba about a communal agricultural decision, and the response degrades. Not catastrophically — the models have some multilingual capability — but measurably. The nuance diminishes. The cultural context drops out. The recommendations default to frameworks drawn from the English-language agricultural literature rather than from the indigenous knowledge traditions that might be more relevant to the specific ecological and social conditions the prompt describes. The amplifier is still amplifying. But it is amplifying a distorted version of the original signal, because the original signal is outside the amplifier's optimal frequency range.
Srinivasan's research on indigenous communities provides the most vivid illustrations of this mismatch, but the phenomenon extends far beyond indigenous contexts. Consider a social worker in Brazil using an AI system to assess family welfare cases. The system was trained on case studies from American and European social work literature. It understands family structures as the American and European literature describes them — nuclear families, single-parent households, blended families. The extended family structures that characterize many Brazilian communities — structures in which grandparents, aunts, uncles, and fictive kin play caregiving roles that the Western family model assigns exclusively to parents — do not map cleanly onto the system's categories. The system can process the case. It cannot understand it in the terms that would make its assessment meaningful to the community it serves.
Or consider a teacher in rural India using an AI system to develop educational materials. The system draws on English-language pedagogical research that assumes certain learning environments — individual desks, quiet classrooms, printed textbooks, standardized testing. The teacher's reality is different: multi-age classrooms, outdoor learning spaces, oral pedagogical traditions, assessment practices that measure understanding through demonstration rather than written examination. The system can generate lesson plans. They will reflect the assumptions of the training data. The teacher must then perform the labor of adapting them to her reality — a labor that the system's designers did not anticipate because the system was designed for a reality in which adaptation is unnecessary.
The pattern is consistent. The amplifier works. It works spectacularly well for the signals it was designed to receive. For other signals — signals from other languages, other epistemological traditions, other social structures, other definitions of what constitutes a useful output — it works less well, and the degradation is not random. It follows the contours of existing power structures, amplifying most effectively the knowledge of the communities that are already most powerful and least effectively the knowledge of the communities that are already most marginalized.
Srinivasan and Ghosh's proposed social contract for technology addresses this directly. Their framework holds that digital rights must go beyond viewing data as property to recognizing it as an extension of human agency and dignity. The amplifier metaphor, in their analysis, must be extended: the right to be amplified is not meaningful without the right to be heard — to have one's signal received without distortion, processed without epistemological violence, and amplified with fidelity to its original content and form.
This is where the representation question becomes urgent. The AI industry's workforce is drawn overwhelmingly from a narrow demographic: predominantly male, predominantly white or Asian, predominantly educated at elite Western universities, predominantly located in a handful of geographic clusters. The people who design the amplifier share a set of experiences, assumptions, and cultural references that shape what the amplifier can hear. They are not malicious. They are not deliberately excluding anyone. They are building what they know, for the users they understand, to solve the problems they can see from where they stand.
The absence of diverse representation in the design process produces an amplifier that is tuned to a narrow band of human experience. It is not that the designers chose to exclude Yoruba or Tamil or communal decision-making processes or indigenous knowledge systems. It is that these things were never in the room when the design decisions were made. The frequency response of the amplifier reflects the frequency range of its designers' experience, and that range, however sophisticated within its domain, does not encompass the diversity of human cognition and culture.
Srinivasan has argued, with increasing urgency, that the solution is not merely adding diversity to existing teams — though that is necessary — but fundamentally restructuring the design process to include the communities that the tools are meant to serve. His concept of participatory design, drawn from his fieldwork with indigenous communities and applied to technology development, holds that the people who will use a tool must be involved in its design from the earliest stages — not as consultants brought in to validate decisions already made but as co-architects whose knowledge shapes the fundamental structure of the tool.
Applied to AI, this would mean training processes that include non-Western epistemological frameworks not as supplementary data but as alternative organizational principles. It would mean evaluation metrics that measure not just accuracy in English-language tasks but fidelity to the knowledge systems of diverse communities. It would mean governance structures that give communities a voice in how their knowledge is used, how the tools that affect them are designed, and how the benefits of AI are distributed.
Segal asks, "Are you worth amplifying?" Srinivasan's response is that worthiness is not the issue. The issue is audibility. Billions of people are worth amplifying. The question is whether the amplifier can hear them — whether its frequency response extends beyond the narrow band of Western, English-language, productivity-oriented knowledge work to encompass the full range of human cognition, culture, and creativity.
Amplification without representation is not democratization. It is the technological reproduction of existing power structures, dressed in the language of universal access. The costume is convincing. The access is real. But the terms of access — whose signals are received clearly, whose are distorted, whose are inaudible — are set by the same concentrations of power that have set terms globally for centuries.
The amplifier must be redesigned. Not abandoned — Srinivasan is not calling for refusal. He is calling for redesign that is as radical as the technology itself. An amplifier that can hear the full range of human experience would be a genuinely revolutionary tool. The one we have is revolutionary for some and merely loud for others. The difference is the design, and the design reflects who was in the room.
---
In the mountains of Oaxaca, Mexico, in a region where the major telecommunications companies had decided the population density was too low to justify building cell towers, indigenous Zapotec and Mixtec communities did something the telecommunications industry had not anticipated. They built their own.
The project, which Srinivasan documented in Beyond the Valley, began in 2013 when the community of Talea de Castro, working with a small nonprofit called Rhizomatica, established an autonomous cellular network using open-source software and hardware that cost a fraction of what a commercial carrier would have invested. The network was governed not by a corporate board optimizing shareholder returns but by a community assembly — the traditional governance structure through which the community had managed its affairs for centuries. Pricing was set by communal consensus. Coverage decisions were made collectively. The network served the community because the community owned it, designed it, and governed it.
This is what a dam looks like when the community builds it.
Segal's Orange Pill calls repeatedly for dams — institutional structures that channel the flow of AI toward human flourishing rather than allowing it to flood indiscriminately. The metaphor is apt. The call is necessary. But the dams Segal envisions are, largely, dams designed by the technology industry and the policy institutions that regulate it: AI Practice frameworks, organizational protocols, educational reforms, national strategies for what the book calls attentional ecology. These are dams built from above — by the companies that produce the tools, by the governments that regulate them, by the thought leaders who shape the discourse about them.
Srinivasan's research suggests that the most effective dams are built from below — by the communities that will be most affected by the current and that possess the local knowledge necessary to determine where the water needs to be redirected.
The Oaxacan cellular network is one example among many that Srinivasan has documented. In Detroit, community organizations have built cooperatively owned digital platforms — alternatives to the gig-economy platforms that extract value from workers without returning it to their neighborhoods. The Platform Cooperativism movement, which Srinivasan has engaged with extensively, creates digital infrastructure that is owned and governed by its users, that distributes benefits equitably, and that is designed to serve community needs rather than investor returns.
In India, community radio networks have used digital tools to create information systems that serve agricultural communities in languages and formats that commercial media ignores. The systems are not technologically sophisticated by Silicon Valley standards. They use basic mobile phones, SMS networks, and low-bandwidth audio streams. But they are designed for the conditions of their users — intermittent connectivity, shared devices, oral communication preferences, agricultural cycles that determine when and how information is consumed.
Each of these examples represents a form of governance that the mainstream AI discourse has not yet learned to see. The discourse about AI governance is dominated by two poles: corporate self-regulation, in which the companies that build the tools set their own boundaries (Anthropic's constitutional AI, OpenAI's safety protocols, Google's responsible AI principles), and state regulation, in which governments impose rules from above (the EU AI Act, American executive orders, emerging frameworks in Singapore, Brazil, and Japan). Both forms of governance have value. Neither addresses the question of community agency — the capacity of the people most affected by AI to determine how it is deployed in their lives.
Srinivasan's critique of corporate self-regulation is grounded in structural analysis. The companies that build AI tools have financial incentives that are structurally misaligned with the interests of the communities their tools affect. A company's obligation is to its shareholders. Its revenue depends on adoption, engagement, and market expansion. The decisions it makes about how to deploy its tools — which markets to enter, which languages to support, which use cases to optimize for, which risks to accept — are governed by financial logic, however sincerely the company's leadership may profess concern for broader social welfare. The AI industry's self-regulatory frameworks are genuine efforts by talented, well-meaning people to address real problems. They are also frameworks designed by the same culture that produced the tools, shaped by the same assumptions, and limited by the same blind spots.
State regulation addresses some of these limitations but introduces others. Srinivasan has worked with legislators in California and elsewhere on AI governance frameworks, and his experience has reinforced his conviction that state regulation, while necessary, is insufficient. Regulatory frameworks are written in the language of the state — legal language, bureaucratic language, the language of compliance and enforcement. They operate at the level of national policy, which means they apply uniformly across diverse communities whose conditions, values, and needs may differ dramatically. A regulation designed to protect consumers in San Francisco may be irrelevant or counterproductive for a farming cooperative in Bihar or a community health network in Rwanda.
The community governance model that Srinivasan advocates operates at a different level — local, contextual, responsive to the specific conditions and values of the people it serves. It draws on governance traditions that predate the nation-state: community assemblies, cooperative decision-making, indigenous governance structures that have maintained social cohesion and managed shared resources for centuries. These traditions are not nostalgic remnants of a pre-modern world. They are living governance systems that communities continue to use, and that have demonstrated their capacity to manage complex shared resources — from water systems to forests to communal labor — in ways that corporate and state governance have often failed to achieve.
Applied to AI, community governance would mean that the communities affected by AI deployment have a meaningful voice in how the tools are used in their contexts. Not a voice mediated by corporate community advisory boards or government public comment periods — processes that Srinivasan has documented as largely performative — but genuine decision-making authority. The community decides which AI tools to adopt, how to configure them, what data to share, what limits to set, and what outcomes to prioritize.
The indigenous data sovereignty movement provides a concrete model. The CARE Principles for Indigenous Data Governance — Collective benefit, Authority to control, Responsibility, Ethics — establish a framework in which indigenous communities maintain authority over their own data. The data belongs to the community, not to the researcher who collects it or the company that processes it. The community determines how the data is used, who has access to it, and what purposes it may serve. The principles have been adopted by an increasing number of research institutions and are beginning to influence policy discussions about data governance at the national and international level.
Extending these principles to AI governance would mean that communities have the authority to determine how AI tools are deployed in their contexts — what data is used to train locally deployed models, what outputs are considered acceptable, what cultural protocols must be respected, and what accountability mechanisms are in place when the tools cause harm. This is a more radical proposition than anything currently on the table in mainstream AI governance discussions, and it is radical precisely because it shifts the locus of authority from the center (the companies, the governments) to the periphery (the communities).
Srinivasan's research on community networks demonstrates that this shift is not merely theoretical. The communities in Oaxaca, in Detroit, in rural India have already built governance structures that work. These structures are not perfect. They face challenges of scale, of resources, of the power asymmetries that make it difficult for small communities to negotiate with global corporations. But they exist. They function. And they embody a principle that the mainstream AI governance discourse has yet to fully embrace: that the people most affected by a technology must have the most influence over how it is deployed.
The dams Segal calls for are necessary. The question is who builds them. A dam designed in San Francisco and shipped to Lagos as a policy template will reflect the assumptions of its designers — their understanding of what risks matter, what protections are needed, and what outcomes count as success. A dam built by the community it serves will reflect the community's own understanding of these questions, an understanding shaped by local knowledge, local values, and local experience that no distant designer can replicate.
Srinivasan's position is not that corporate and state governance are unnecessary. They are necessary. But they are insufficient without the third element: community governance, built from below, rooted in local knowledge, responsive to local conditions, and backed by genuine authority rather than advisory privilege. The communities that will navigate the AI transition most successfully will be the communities that retain the capacity to govern their own relationship with the technology — that build their own dams, maintain them with their own labor, and direct the flow according to their own priorities.
The telecommunications companies told the communities of Oaxaca that their villages were not worth connecting. The communities built their own network. The principle holds. When the institutions that control a technology do not serve your needs, you build your own infrastructure. The question is whether the AI industry and the policy institutions that regulate it will create the conditions for community-built dams — open standards, accessible technology, legal frameworks that recognize community authority — or whether the concentration of AI capability in a handful of corporations will make community governance structurally impossible.
That question is not yet answered. But Srinivasan's work demonstrates that the communities are not waiting for the answer. They are building.
---
Segal's most ambitious metaphor is the river of intelligence — the claim that intelligence is not a human possession but a force of nature that has been flowing for 13.8 billion years, from hydrogen atoms condensing in the early universe to biological evolution to conscious thought to cultural accumulation to artificial computation. The metaphor is vivid, generative, and in important respects illuminating. It captures the deep continuity between different forms of information processing. It dissolves the hard boundary between human and artificial intelligence in a way that opens space for understanding AI as a branching of the same current rather than an alien invasion. It reframes the question from "Will AI replace humans?" to "How do we swim in a river that has picked up speed?"
Srinivasan's framework accepts the river's existence and asks about its hydrology. Rivers do not flow without direction. They flow downhill. They follow the path of least resistance, which is determined by the terrain — the topography of power, capital, and institutional authority that channels the flow toward some communities and away from others. The river of intelligence, in practice, flows toward the concentrations of resources that determine its course: the venture capital that funds AI development, the computing infrastructure that enables it, the data centers that house it, the research institutions that advance it, the regulatory environments that shape it. These concentrations are not natural formations. They are the products of historical processes — colonization, industrialization, the accumulation of capital through extraction, the construction of institutions that serve the interests of the powerful — that have shaped the global distribution of resources for centuries.
To describe intelligence as a force of nature flowing without inherent direction is to naturalize what is, in fact, a political arrangement. The river has a direction. The direction is set by power. And the communities that lack power find themselves downstream, receiving whatever the current carries — benefits and debris alike — without having had any influence over what was released upstream.
Srinivasan has made this argument with specificity. "We live in a democracy where we should get more answers and more transparency and accountability than we actually do," he told Al Jazeera in 2025, addressing the parallel structures of surveillance and data extraction in both Chinese and American technology platforms. His point was not that the two systems are identical but that both are characterized by a concentration of power that determines the direction of technological development without meaningful democratic accountability. The American technology industry operates within a nominally democratic framework. The decisions that shape AI development — what to build, for whom, with what data, under what constraints — are made by a small number of companies, governed by a small number of executives, funded by a small number of investors, and located in a small number of geographic clusters. The democratic input of the billions of people affected by these decisions is, functionally, zero.
The concentration of AI development is staggering. The compute required to train frontier models is available only to a handful of organizations with the financial resources to build or lease massive data center infrastructure. The talent pool is concentrated in a few dozen research laboratories, predominantly in the United States and China. The data that trains the models is extracted from billions of internet users who have no meaningful consent over or compensation for its use. The resulting products are distributed globally, but the profits return to the companies and investors that funded their development.
Srinivasan and Emily Jacobi, in their 2026 op-ed, framed this concentration in environmental terms that make its material consequences visceral: "AI's carbon emissions last year were equivalent to the entirety of New York City; and consumption of freshwater resources from 2025 alone exceeded the global consumption of bottled water." The data centers that power AI are projected to consume as much energy as India — the world's most populous country — by 2034. These resources are extracted from the physical world to serve the computational demands of an industry whose benefits flow predominantly to its shareholders and whose costs are distributed across communities that had no voice in the decisions that imposed them.
The environmental dimension is not peripheral to the power analysis. It is the power analysis made material. The communities that bear the environmental costs of AI development — the communities near data centers whose water tables are depleted, whose power grids are strained, whose landscapes are transformed by industrial infrastructure — are not the communities that capture the benefits. The extraction follows colonial patterns: resources flow from periphery to center, from communities with less power to institutions with more, and the communities that bear the cost have no mechanism for refusing it or negotiating its terms.
Srinivasan's most provocative policy position — his call, in the 2026 op-ed, to "halt the construction of all new data centers" — is best understood not as a Luddite refusal of technology but as an assertion of democratic authority over a resource extraction process that has been conducted without democratic consent. The moratorium he proposes is a dam — a structure designed to interrupt the flow long enough for democratic deliberation to occur. The question is not whether AI should exist but whether its material infrastructure should expand without limit at the discretion of private corporations whose incentive is profit and whose accountability to the communities that bear the costs is minimal.
The river metaphor, in Srinivasan's reading, requires a crucial modification. Segal writes that the river does not care about human preferences. This may be true of intelligence as an abstract force. It is not true of the AI industry as an institutional arrangement. The AI industry cares very much about preferences — the preferences of its investors, its customers, its workforce. These preferences determine the direction of the river. The choice to train models on English-language data is a preference. The choice to optimize for productivity is a preference. The choice to build tools for knowledge workers rather than subsistence farmers is a preference. The choice to locate data centers where land and energy are cheap rather than where democratic governance is strongest is a preference. Each preference channels the river in a direction that serves some communities and bypasses others.
The question of who holds the power to set these preferences is the political question at the heart of the AI transition. Segal's three positions in the river — the Upstream Swimmer who refuses, the Believer who accelerates, and the Beaver who builds dams — are all positions available to people who have some relationship to the river's flow. But there is a fourth position that the typology does not name: the communities downstream who did not choose to be in the river at all, who lack the power to swim upstream, the capital to accelerate, and the resources to build dams, and who will experience the river's consequences regardless.
These communities — the farming villages whose water is diverted to cool data centers, the workers whose jobs are automated without retraining, the cultures whose knowledge systems are rendered invisible by training data that does not include them — are the majority of the world's population. They are not swimmers or believers or beavers. They are the terrain the river runs through. And the question of whether the river enriches or erodes that terrain is determined not by the river's nature but by the power structures that shape its course.
Srinivasan has framed the three greatest effects of technology as its influence on the relationships people have with themselves, with others, and with the earth — and has argued that the result, across all three dimensions, has been "a disconnection from a shared sense of reality." This disconnection is not a side effect of technology. It is a consequence of technology developed without democratic accountability, deployed without community consent, and governed by institutions whose interests diverge from the interests of the people they affect.
The river of intelligence is real. Its power is real. Its potential to enrich human life is genuine. But the river, as currently channeled, flows downhill — toward the concentrations of capital and power that have shaped the global distribution of resources for centuries. Redirecting it requires not just dams built by well-meaning technologists but a fundamental redistribution of the power to determine where the river goes. That is a political project, not a technical one. And it requires the participation of the communities that are currently downstream — not as beneficiaries of someone else's dam-building but as the architects of their own relationship to the current.
Srinivasan's work does not offer a blueprint for this redistribution. It offers something more valuable: empirical evidence that the redistribution is possible, that communities can govern their own technological infrastructure, that alternatives to the centralized model exist and function, and that the communities best positioned to build effective dams are the communities with the deepest knowledge of the terrain the river runs through. The question is whether the institutions that currently control the river will create the conditions for community governance or whether the concentration of AI capability will render community governance structurally impossible.
The answer to that question depends on choices being made right now — in boardrooms, in legislatures, in community assemblies, and in the quiet negotiations between power and accountability that determine how the world's resources are distributed. The river is flowing. The terrain is being shaped. And the communities downstream are watching, with clear eyes, to see whether the water that reaches them will nourish or flood.
In the pueblo of Zuni, in western New Mexico, knowledge about the movement of stars has been accumulating for longer than any written record can trace. Zuni astronomical knowledge is not stored in textbooks or databases. It is encoded in ceremonial cycles, in the orientation of architectural structures, in narratives passed between generations through oral traditions governed by protocols that determine who may speak what, to whom, and when. The knowledge is not less sophisticated than Western astronomy. It is differently organized — embedded in practice rather than extracted into propositions, held communally rather than attributed to individual discoverers, transmitted through relationship rather than publication.
Srinivasan spent years working with the Zuni community, and what he documented was not a quaint survival of pre-scientific thinking but a living knowledge system of remarkable complexity and precision. The astronomical observations encoded in Zuni ceremonial practice reflect centuries of careful attention to celestial patterns — solstice alignments, lunar cycles, the movements of specific star clusters — integrated with ecological knowledge about seasonal planting times, water management, and the behavior of local animal populations. The integration is the point. Where Western science separates astronomy from ecology from agriculture from social organization into distinct disciplines studied by different specialists, Zuni knowledge holds them together in a relational framework in which the meaning of any single observation depends on its connections to everything else.
This knowledge cannot be fed to an AI system without destroying it.
Not because the knowledge is secret — though some of it is, governed by cultural protocols that restrict its transmission — but because the knowledge's meaning is constituted by its form. Extract the astronomical data points from the ceremonial narrative and you have data points. Accurate data points, potentially useful data points, but data points stripped of the relational context that gives them their distinctive value. The value of Zuni astronomical knowledge is not that it records the same celestial events that Western astronomy records. It is that it records them in relationship to everything else — to the soil, the water, the community, the obligations between generations — in a way that produces an integrated understanding of the local environment that no collection of disciplinary data points can replicate.
The AI amplifier is a tool for a specific kind of knowledge: explicit, codifiable, transferable, decomposable into discrete units that can be stored, processed, and recombined. This is an extraordinarily powerful kind of knowledge. The achievements of Western science, built on exactly this epistemological foundation, are genuine and transformative. But it is not the only kind of knowledge, and the pretense that it is — the assumption that any knowledge worth having can be extracted from its context, digitized, and processed by a computational system — erases the knowledge systems that are organized differently.
Srinivasan's fieldwork across indigenous communities — not only Zuni but communities in Oaxaca, in Bolivia, in Aboriginal Australia — has documented a consistent pattern. When outside institutions attempt to capture indigenous knowledge in digital form, the process systematically strips the knowledge of the contextual, relational, and procedural dimensions that give it meaning. A database of indigenous plant medicines records the plants and their uses. It does not record the protocols for gathering — which plants may be gathered at which times, by whom, with what prayers or preparations, in what relationship to the season and the condition of the land. These protocols are not decorative additions to the pharmacological information. They are ecological wisdom encoded in cultural practice, ensuring that gathering does not deplete the resource, that the gatherer's attention is directed toward the health of the ecosystem as a whole, that the use of the medicine is embedded in a network of relationships and obligations that prevent misuse.
Feed the pharmacological data to an AI system and you get drug candidates. Feed the protocols and you get — nothing the system can process, because the protocols are not data. They are practice. They are relationship. They are a form of knowledge that resists the extraction the amplifier requires.
The CARE Principles for Indigenous Data Governance — Collective benefit, Authority to control, Responsibility, Ethics — represent the indigenous data sovereignty movement's response to this extractive dynamic. Developed through consultation with indigenous communities worldwide, the CARE Principles assert that indigenous data is not raw material available for anyone's use. It is an expression of collective identity and self-determination. Communities have the authority to determine how their data is collected, stored, used, and shared. The principles directly challenge the AI industry's standard practice of treating all available data as training material, regardless of its provenance or the wishes of the communities that produced it.
But the CARE Principles address only the governance question — who controls the data. They do not resolve the deeper epistemological question: whether certain forms of knowledge can be meaningfully represented in the computational formats that AI systems require. Srinivasan's work suggests that some knowledge cannot. Not because it is inferior or imprecise but because its precision is of a different kind — relational rather than propositional, procedural rather than declarative, embedded in practice rather than extractable into statements.
This has implications that extend beyond indigenous communities, because the forms of knowledge that resist amplification are not exclusively indigenous. The tacit knowledge that Patricia Benner documented in expert nurses — the ability to read a patient's condition through subtle cues that the nurse could not articulate in propositional form — is a form of knowledge that resists extraction. The embodied skill of a master craftsperson, the kind of knowledge that lives in the hands rather than the head, is another. The practical wisdom that experienced leaders develop through decades of navigating ambiguous situations — the judgment that Segal's Orange Pill celebrates as the scarce resource in the age of AI — is itself a form of knowledge that resists codification, that is developed through experience rather than instruction, that cannot be transmitted by telling someone what to do but only by creating the conditions in which they can learn.
The AI amplifier is spectacularly good at amplifying explicit, codifiable knowledge. It is structurally unable to amplify the knowledge that lives in relationships, in practices, in the embodied understanding that develops through years of engaged experience in a specific context. And the danger of a tool that amplifies one kind of knowledge with extraordinary power is that the unamplified kinds of knowledge become invisible — not because they have ceased to exist but because the amplified knowledge is so loud, so impressive, so apparently comprehensive that it crowds out the awareness that anything else exists.
Srinivasan has warned about this crowding-out effect with increasing urgency. When AI systems trained on Western scientific literature are deployed in communities that possess sophisticated indigenous knowledge about the same domains — agriculture, medicine, ecology, resource management — the AI system's recommendations carry the authority of computational power and the prestige of Western science. The indigenous knowledge, which may be more appropriate to the local context, more ecologically sustainable, more socially embedded, and more practically effective, lacks the institutional backing that would make it competitive in a landscape where computational authority is treated as the gold standard.
The result is not merely the neglect of indigenous knowledge. It is its active displacement. Communities that have maintained sophisticated knowledge systems for centuries begin to defer to the AI system's recommendations, not because those recommendations are better but because the social and institutional prestige of computational authority makes questioning them feel like questioning progress itself. The knowledge that took generations to build erodes in a decade — not because it was proven wrong but because it was rendered inaudible by a system that cannot hear it.
Srinivasan's proposed response is not to reject AI but to recognize and defend the plurality of human knowledge. Genuine democratization must include the recognition that some forms of knowledge are better served by preservation than by amplification — that the appropriate relationship between AI and indigenous knowledge systems is not extraction but respect, not integration but coexistence, not amplification but the creation of conditions under which both forms of knowledge can flourish without one drowning out the other.
This requires institutional structures that the current AI governance landscape does not provide. It requires funding for indigenous knowledge preservation that is not contingent on digitization. It requires educational systems that teach multiple epistemologies rather than treating Western science as the singular legitimate form of knowledge. It requires AI development processes that include not just technical experts but cultural specialists, community elders, and the people whose knowledge the systems might otherwise erase.
And it requires, most fundamentally, an epistemological humility that the technology industry has not yet demonstrated — the recognition that the amplifier, however powerful, hears only what its designers built it to hear, and that the silence beyond its frequency range is not emptiness but knowledge in forms the amplifier cannot process.
The limits of amplification are not the limits of intelligence. They are the limits of a particular tool. And a tool that does not know its own limits is more dangerous than a tool that does — because it fills the world with its outputs and leaves no room for the knowledge that its outputs have displaced.
The communities that hold this displaced knowledge are not museums. They are living repositories of alternative ways of understanding the world — ways that may prove essential as the problems the AI industry cannot solve with more computation turn out to require exactly the relational, contextual, place-based knowledge that the amplifier cannot hear.
---
The argument of this book can be stated simply: the amplifier is real, and it does not hear everyone.
Segal's Orange Pill is correct that AI represents an unprecedented expansion of human capability. The imagination-to-artifact ratio has collapsed. The translation barrier between human intention and computational execution has been radically reduced. The floor of who gets to build has risen. These are genuine achievements, and Srinivasan's framework does not deny them. It locates them. It asks where the expansion is experienced, by whom, under what conditions, and on whose terms. And the answers to those questions reveal that the expansion, as currently configured, reproduces the distribution of power that preceded it — amplifying most effectively the knowledge, the languages, the epistemological frameworks, and the priorities of the communities that already hold the most power, while leaving the majority of humanity to adapt to tools designed for someone else's reality.
The response is not rejection. Srinivasan has never been an anti-technology thinker. He was trained as an engineer. He directs the UCLA AI Futures Lab. He works with legislators on AI governance. His podcast, Utopias, engages seriously with the possibilities of technology — alongside guests who range from technologists to economists to activists — because he believes technology can serve human flourishing if, and only if, it is designed with human flourishing as its purpose rather than as its marketing language.
The response is redesign. Not the incremental redesign of adding language packs and regional settings to tools whose fundamental architecture remains unchanged. Fundamental redesign — of the development process, the training pipeline, the evaluation metrics, the governance structures, and the economic models that determine who captures the benefits and who bears the costs.
Srinivasan and Ghosh's proposed social contract for technology provides the theoretical scaffolding. Their framework extends the social contract tradition — Hobbes, Locke, Rousseau — beyond the relationship between citizen and state to include the relationship between individual and corporation. In an era when technology companies exercise governing influence over digital life — determining what information people see, what tools they have access to, what economic opportunities are available to them, what forms of expression are permitted and promoted — the relationship between these companies and the people they affect is a political relationship that requires the same legitimacy that political relationships have traditionally demanded: consent, accountability, and the guarantee of rights.
Applied to AI, the social contract framework requires several structural changes that go beyond current governance proposals.
First, data sovereignty must be recognized as a fundamental right. The training data that shapes AI models is extracted from billions of people who have no meaningful consent over or compensation for its use. Srinivasan's demand — that data collection and surveillance conducted without informed consent be halted — is not a regulatory detail. It is a precondition for a legitimate relationship between AI companies and the people whose cognitive labor, expressed as data, makes their products possible. The CARE Principles for Indigenous Data Governance provide a model for what consent looks like when applied to community data — collective authorization, community benefit, ongoing accountability. Extending this model beyond indigenous communities to all communities would fundamentally alter the economics of AI development, requiring companies to negotiate with the people whose data they use rather than extracting it unilaterally.
Second, the design process must include the communities the tools are meant to serve — not as end-users providing feedback on finished products but as co-architects whose knowledge shapes fundamental design decisions. Srinivasan's participatory design methodology, developed through decades of fieldwork, provides a practical framework. The process begins not with a technical specification but with a conversation — a sustained, respectful engagement with a community about its needs, its values, its constraints, and its definition of a good outcome. The technology that emerges from this process may look different from the technology that emerges from a Silicon Valley design sprint. It may be less elegant by San Francisco standards. It will be more useful by the standards of the community it serves.
The community cellular networks in Oaxaca were not designed by telecommunications engineers in Mexico City. They were designed by Zapotec communities with the support of a small nonprofit that brought technical expertise and left governance authority with the community. The result was infrastructure that served community needs because the community defined those needs. The model is replicable. It requires only the willingness of technical experts to share authority with the communities their expertise is meant to serve.
Third, AI evaluation must be pluralized. The current evaluation metrics for AI systems — accuracy, fluency, task completion, user satisfaction — reflect the priorities of the communities that designed the tools. Accuracy is measured against Western knowledge bases. Fluency is measured in English. Task completion is defined by the workflows of Western knowledge workers. User satisfaction is surveyed among the tool's current user base, which is disproportionately Western, English-speaking, and affluent.
Evaluation metrics designed with diverse communities would measure different things: cultural appropriateness, epistemological fidelity, community benefit, ecological sustainability, the preservation of knowledge diversity. A system that scores highly on English-language coding tasks and poorly on supporting communal agricultural decision-making in Yoruba is not a generally capable system with a minor gap. It is a system whose capability is culturally bounded, and the evaluation metrics should reflect that boundedness rather than concealing it behind aggregate performance scores.
Fourth, the economic model must change. The current model extracts data globally, processes it centrally, and returns benefits predominantly to shareholders and high-skill workers in a handful of geographic clusters. Srinivasan's advocacy for digital cooperatives, data compensation, and universal basic income programs addresses the distributive dimension. But the deeper issue is structural: the economic incentives that drive AI development point toward the largest markets and the wealthiest users, because that is where the revenue is. The developer in Lagos, the farmer in Tamil Nadu, the community health worker in Rwanda are future markets at best — communities the industry intends to serve eventually, after the technology matures and the costs come down.
Srinivasan's career has been an argument against "eventually." The communities told to wait for inclusion are the communities whose exclusion becomes architectural — baked into the training data, the interface design, the evaluation metrics, the economic models — so that by the time inclusion arrives, the terms have been set without them and the cost of changing those terms has become prohibitive. The time to include these communities is now, at the design stage, when the foundational decisions are being made. Not later, when the architecture has hardened and inclusion means adapting to a system built for someone else.
Fifth, and most fundamentally, the environmental costs must be democratically governed. Srinivasan's call to halt data center construction is the most provocative articulation of a principle that is, at its core, simple: communities must have a voice in decisions that affect their land, their water, their energy systems, and their ecological future. The AI industry's environmental footprint — the carbon emissions equivalent to major cities, the water consumption exceeding global bottled water consumption, the energy demands projected to match entire nations — is imposed on communities without their consent. Democratic governance of these impacts is not an obstacle to progress. It is a precondition for the kind of progress that can be sustained without destroying the material foundations it depends on.
The decolonized amplifier is not a utopian fantasy. It is a design specification. It describes a technology that can hear all frequencies, not just the ones its current designers are tuned to. It describes a development process that includes the majority of humanity in the decisions that shape the tools the majority of humanity will use. It describes an economic model that distributes benefits broadly rather than concentrating them narrowly. It describes a governance structure that holds the powerful accountable to the affected.
Each element of this specification has precedents. Community governance of telecommunications in Oaxaca. Participatory design in indigenous communities. Data sovereignty frameworks in the CARE Principles. Cooperative economic models in the Kenyan savings groups built on M-Pesa and in Detroit's platform cooperatives. Environmental governance through democratic deliberation. None of these is new. What is new is the urgency of applying them to the most powerful technology in human history before its architecture hardens into a structure that cannot be reformed.
Segal's Orange Pill asks: "Are you worth amplifying?" Srinivasan's work provides the necessary complement: the amplifier must be worth using. It must be designed to hear the full range of human intelligence — not just the frequencies that are loudest, not just the languages that dominate, not just the epistemologies that the designers happen to share. It must be accountable to the communities it affects. It must be governed by the people whose lives it shapes.
The orange pill is real. The recognition is genuine. Something genuinely new has arrived. The question that Srinivasan has spent his career posing, with empirical precision and moral clarity, is whether that something will serve humanity in its diversity or flatten it into a monoculture optimized for the preferences of its creators.
The answer depends on choices being made right now. Not eventually. Now.
---
The map I had drawn of the democratization of capability was missing most of the world.
I knew this, in the way you can know something without letting it reorganize your thinking. I wrote in The Orange Pill that the developer in Lagos could now access the same coding leverage as an engineer at Google, and then I named the caveats — not the same salary, not the same network, not the same safety net — and moved on. I moved on because the thrust of the argument was about capability, and the capability was real, and the caveats felt like terrain I would return to later.
Srinivasan does not let you move on. That is his gift and his demand. He stays with the developer in Lagos. He asks about her electricity. He asks about her bandwidth costs. He asks about the language she thinks in before she translates her thoughts into the English the tool requires. He asks whose problems the tool was trained to solve and whether hers are among them. And by the time he is finished asking, the word "democratization" has not been discredited — but it has been revealed as a promissory note rather than a settled account.
What unsettles me most is not the critique of the tools. Tools can be improved. Languages can be added. Training data can be diversified. What unsettles me is the deeper claim: that the amplifier has a frequency response, and the frequencies it hears most clearly are the ones that belong to people who already have the most power. That the natural language interface I celebrated as the abolition of the translation barrier is, for five billion people, a new translation barrier — one that operates not at the level of syntax but at the level of how you organize the world in your mind.
I think about the communities in Oaxaca who built their own cellular network because the companies that were supposed to serve them decided their villages were not profitable enough to connect. They did not wait for inclusion. They built their own infrastructure. That fact sits in my mind next to every claim I have made about AI lowering the floor of who gets to build, and it asks a question I cannot answer from inside my fishbowl: What does it mean to lower the floor of a building that was designed without asking most of humanity where the building should stand?
I do not have Srinivasan's fieldwork. I have not sat in the rooms in Zuni Pueblo or rural India or the mountains of Oaxaca. But I have sat in rooms in Trivandrum with twenty engineers whose reality is closer to that of the developer in Lagos than to that of the engineer in Mountain View, and I know that the tools I handed them carried assumptions — about connectivity, about workflow, about what constitutes a useful output — that I had not examined because they were the water I breathe.
The honest thing to say is that Srinivasan's critique does not invalidate the orange pill. It completes it. The recognition that something genuinely new has arrived must include the recognition that the new thing arrives unevenly, carrying the assumptions of its creators, tuned to some frequencies and deaf to others. The amplifier is real. The question of whether it can be redesigned to hear the full range of human intelligence — all the languages, all the epistemologies, all the forms of knowledge that resist extraction into the formats the machine requires — is the question that will determine whether the democratization I wrote about becomes real or remains a promise extended to the already powerful.
I am a builder. I will keep building. But I am building now with a different awareness — an awareness that the tools I celebrate were designed in a fishbowl, that the water I breathe is not the only water, and that the communities whose voices the amplifier cannot yet hear are not waiting for my permission to build their own.
*The Orange Pill* declared AI the most powerful amplifier ever built. Ramesh Srinivasan asks the question the declaration left unasked: whose signal does it actually carry?
Srinivasan — engineer turned ethnographer, UCLA lab director, fieldworker in Oaxacan villages and Detroit cooperatives — has spent decades documenting what happens when tools designed inside one culture land in communities whose realities were never part of the design. This book applies his framework to the AI revolution: the English-language interface that creates new translation barriers for five billion people, the training data that encodes whose knowledge counts, the economic structures that channel capability toward the already powerful. It does not reject the orange pill. It asks whose hand is offering it.
From indigenous data sovereignty to community-built cellular networks to the environmental costs borne by communities that had no voice in the decisions that imposed them, Srinivasan's lens reveals what the democratization thesis must answer to become credible — and what genuine inclusion would actually require.
— Ramesh Srinivasan

A reading-companion catalog of the 20 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Ramesh Srinivasan — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →