By Edo Segal
I remember the exact moment I stopped understanding what I was building.
It wasn't dramatic. No alarms. No existential crisis — not yet. I was watching one of our engineers in Bangalore ship a product in two hours that would have taken his team two weeks the year before. He was talking to an AI system built in San Francisco, trained on English-language text, optimized by researchers who'd never set foot in his city, and deployed through infrastructure he'd never own. And it worked. Beautifully. The code was clean. The product was real. The compression was intoxicating.
I called it productive vertigo — that feeling when the ground shifts beneath you and you realize you're not falling, you're flying, but you have no idea who built the wings.
That vertigo is what led me to write *The Orange Pill*. The conviction that something unprecedented was happening, that the distance between imagining a thing and building it had collapsed to nearly nothing, and that most of us were swimming in assumptions so deep we couldn't see them. I called those assumptions the fishbowl. I told people to press their faces against the glass.
But I couldn't tell them what the water was made of. Not precisely. Not philosophically.
Yuk Hui can.
When I first encountered his concept of cosmotechnics — the idea that every civilization unifies its cosmic order and moral order through its technical practices, that every tool carries within it an answer to the question of what the universe is and what we owe it — something inside me went very quiet. Because I realized the fishbowl I'd been describing wasn't just cognitive. It was civilizational. The water wasn't just our assumptions about startups and venture capital and exponential growth curves. The water was an entire metaphysics — a twenty-five-hundred-year-old philosophical tradition that says nature is raw material, progress is accumulation of control, and intelligence is a river flowing in one direction.
Hui showed me that there are other waters. Other ways of building. Entire cosmotechnical traditions — Chinese, Indian, Indigenous — that understood the relationship between maker and cosmos not as mastery but as participation. Not as extraction but as harmony. And he showed me something harder to accept: that the AI systems I'd spent my career building and evangelizing were not neutral tools that happened to come from the West. They were the most powerful expression of Western cosmotechnics ever created, and they were universalizing that cosmotechnics at a speed no colonial project in history had achieved.
This book is the question I couldn't ask from inside the fishbowl. It's the question technology cannot ask itself: *whose cosmos does this tool serve?*
I don't have the answer. But I know we need to hear the question clearly before the water becomes the only water there is.
-- Edo Segal ^ Opus 4.6
Yuk Hui (許煜) is a philosopher born in 1985 in Guangzhou, China, whose work spans the philosophy of technology, media theory, and the history of ideas across Eastern and Western traditions. He studied computer engineering and philosophy at the University of Hong Kong before completing his doctoral work under Bernard Stiegler at Goldsmiths, University of London. He has held academic positions at the Leuphana University of Lüneburg, the Bauhaus-Universität Weimar, and the City University of Hong Kong, where he is currently a professor in the Department of Cultural Studies. His major works include *On the Existence of Digital Objects* (2016), *The Question Concerning Technology in China: An Essay in Cosmotechnics* (2016), *Recursivity and Contingency* (2019), *Art and Cosmotechnics* (2021), and *Machine and Sovereignty* (2024). His concept of cosmotechnics — the thesis that different civilizations produce fundamentally different relationships between technology and the cosmic order — has become one of the most widely discussed ideas in contemporary philosophy of technology, challenging the assumption that technological modernization follows a single universal trajectory. His work has been translated into more than a dozen languages and has influenced debates across philosophy, design, digital humanities, and artificial intelligence research.
In the winter of 2024, a software engineer in Bangalore sat before a screen and spoke a sentence in English to an artificial intelligence system built in San Francisco, trained on text scraped predominantly from the Anglophone internet, optimized according to mathematical principles developed in Western research universities, and deployed through cloud infrastructure owned by American corporations. The AI responded with working code. The engineer tested it, refined it through further conversation, and shipped a product that afternoon. What once required two weeks required two hours. The Orange Pill calls this the collapse of the imagination-to-artifact ratio — the moment when the distance between conceiving a thing and building it shrank to nearly nothing. Edo Segal witnessed this compression firsthand, watched engineers become architects become entire teams, and felt the productive vertigo that accompanies the realization that the rules have changed.
But the rules that changed were always someone's rules. And the rules that replaced them belong to someone too.
Yuk Hui's philosophy begins with a deceptively simple observation: there is no such thing as technology in general. There are only specific technologies, produced by specific civilizations, embedded in specific cosmological frameworks that determine what counts as a tool, what a tool is for, and what relationship properly holds between the human being who wields it and the cosmos in which that wielding takes place. Hui calls these frameworks cosmotechnics — the unification, within a given culture, of the cosmic order and the moral order through technical activities. The concept sounds abstract until one realizes what it claims: that every technology encodes a metaphysics. Every tool carries within it an answer to the question of what the universe is and what human beings owe it.
The Western cosmotechnical tradition, in Hui's analysis, descends from a particular event in Greek philosophy — the moment when techne was separated from physis, when the art of making was distinguished from the self-generating processes of nature. This separation, deepened by Christianity's distinction between Creator and creation, and radicalized by the Enlightenment's reconception of nature as mechanism, produced a cosmotechnics in which technology stands over against nature. Tools are instruments of mastery. Progress is the accumulation of control. Nature is raw material — what Heidegger called Bestand, standing reserve — waiting to be converted into human purposes. This is the cosmotechnics that built the steam engine, the factory, the computer, and the large language model. It is the water in the fishbowl.
The fishbowl metaphor, central to The Orange Pill's rhetoric, acquires new dimensions when read through Hui's framework. The Orange Pill deploys the fishbowl to describe the cognitive enclosure of assumptions so familiar they become invisible — the things people believe without knowing they believe them. The book invites readers to press their faces against the glass, to notice the water, to feel the vertigo of recognizing that the medium they swim in is not the only medium that exists. Hui's work specifies what the water actually is. The fishbowl is not merely a set of personal assumptions or cultural habits. It is a cosmotechnical enclosure: a comprehensive framework that determines what technology means, what intelligence is, what progress looks like, and what relationship properly holds between human beings and the world they inhabit. The reason the fish cannot see the water is not that the water is transparent. The reason is that the water has been universalized — extended so completely across the globe through colonial, economic, and computational power that no alternative medium appears to exist.
This universalization is what Hui calls monotechnologism — the assumption, now so deeply embedded in global culture that it functions as a background condition rather than a claim, that there is one and only one correct way to develop technology. Monotechnologism does not announce itself. It does not say, "We are imposing Western cosmotechnics on the world." It says, "We are bringing technology to the world" — as though technology were a single substance, neutral in character, universal in application, and the only question were how much of it any given society possesses. The language of technological development, of technology transfer, of the digital divide — all of it presupposes monotechnologism. All of it assumes that the engineer in Bangalore and the engineer in San Francisco are working within the same cosmotechnical tradition, that the code one writes and the code the other writes participate in the same understanding of what computation is and what it is for.
Hui's counter-argument draws on the history of Chinese thought to demonstrate that this assumption is false — not as a matter of cultural preference but as a matter of philosophical substance. Chinese cosmotechnics, rooted in Daoist and Confucian philosophical traditions, developed a fundamentally different relationship between technology and the cosmic order. In this tradition, the key concept is not mastery but harmony — not the domination of nature but participation in the self-generating processes of the Dao. The Chinese concept of Qi (气) — often inadequately translated as "vital energy" or "material force" — describes a world that is not composed of inert matter waiting to be shaped by human will but of dynamic, self-organizing processes in which human technical activity is one participant among many. The artisan who shapes jade is not imposing form on formless matter. The artisan is collaborating with the grain of the stone, with the Qi that flows through it, with the cosmic process that produced it. The tool does not stand over against nature. The tool participates in nature's own becoming.
This is not a poetic metaphor. It is an ontological claim with concrete technical consequences. Consider the traditional Chinese garden, which Hui discusses as an exemplary cosmotechnical artifact. The Western garden — Versailles is the paradigmatic example — imposes geometric order on nature. Hedges are cut into straight lines. Water is channeled into symmetrical fountains. The garden declares: human reason has mastered natural chaos. The Chinese garden operates according to an entirely different principle. It does not impose order but reveals it. Rocks are placed not to create geometric patterns but to evoke mountains. Water flows not in channels but along paths that mimic natural streams. The garden is a microcosm — a technology that does not conquer nature but participates in its self-expression. The aesthetic criteria, the design principles, the evaluative standards for what counts as a good garden are all different because the underlying cosmotechnics is different.
Now transpose this analysis to artificial intelligence. The large language models that constitute the current frontier of AI development are, in Hui's framework, the most powerful expression of Western cosmotechnics ever built. They are not neutral tools that happen to have been invented in the West. They embody, in their architecture and training and optimization criteria, a specific set of cosmotechnical assumptions. The transformer architecture processes language as sequence prediction — the reduction of meaning to statistical pattern, the treatment of human expression as data to be optimized. The training process treats the entire corpus of digitized human knowledge as standing reserve — raw material to be ingested, processed, and recombined according to mathematical objectives defined by researchers operating within the Western scientific tradition. The optimization functions — minimizing loss, maximizing coherence, reducing hallucination — encode a specific understanding of what intelligence is: the capacity to predict, to pattern-match, to reduce ambiguity, to produce outputs that are useful in the instrumental sense.
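The reduction described here can be made concrete with a toy sketch (my own, not anything from Hui or from any real system): a bigram counter that treats language purely as statistical pattern, scored by the same cross-entropy loss that large models minimize. The function names (`train_bigram`, `cross_entropy`) are illustrative inventions.

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Language as statistical pattern: count which word follows which."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Normalize counts into conditional probabilities P(next | prev).
    return {prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
            for prev, nxts in counts.items()}

def cross_entropy(model, sentence):
    """Average negative log-probability: the loss this tradition minimizes."""
    words = sentence.split()
    nll = 0.0
    for prev, nxt in zip(words, words[1:]):
        p = model.get(prev, {}).get(nxt, 1e-9)  # tiny floor for unseen pairs
        nll -= math.log(p)
    return nll / max(len(words) - 1, 1)

model = train_bigram(["the tool shapes the world", "the world shapes the tool"])
# A sentence that fits the learned pattern scores a far lower loss than a
# novel one; the model's only criterion of success is conformity to pattern.
print(cross_entropy(model, "the tool shapes the world"))   # low
print(cross_entropy(model, "the tool resists the world"))  # high
```

The point is not the arithmetic but the evaluative frame: "good" output is defined entirely as low surprisal against the already-ingested corpus.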
None of these assumptions are necessary. They are cosmotechnical. A different civilization, operating from different philosophical foundations, might have developed a fundamentally different approach to machine intelligence — one that optimized not for prediction but for harmony, not for reduction of ambiguity but for the preservation of productive uncertainty, not for instrumental utility but for what the Daoist tradition calls wu wei (无为), the action that arises from non-action, the intervention that participates in rather than directs the natural flow of things.
The Orange Pill describes AI as the latest and most powerful current in a river of intelligence flowing for 13.8 billion years — from hydrogen atoms through neurons through silicon. The metaphor is compelling. It is also diagnostic. The river is singular. One current. One direction. One substance. Hui's framework reveals this singularity as itself a cosmotechnical choice. The metaphor of intelligence-as-river encodes the Western assumption that intelligence is a natural force — like gravity, like electromagnetism — universal in character, cumulative in development, and ultimately convergent. All intelligences, in this view, flow toward the same ocean. The differences between human intelligence and machine intelligence are differences of degree, not of kind. The differences between Western intelligence and Chinese intelligence, between algorithmic intelligence and contemplative intelligence, between the intelligence that builds and the intelligence that refrains from building — these are tributaries that will eventually merge.
A Daoist cosmotechnics might describe intelligence not as a river at all but as the Dao itself — present in action and in non-action, in the tool and in the decision not to use the tool, in the code that ships and in the silence that refuses to code. Intelligence, in this framework, is not a substance that accumulates but a relationship that must be continuously renewed between the human being and the cosmic order. The artisan who puts down the chisel because the wood is not ready is not failing to be intelligent. The artisan is expressing a form of intelligence that the optimization function cannot capture because the optimization function has already decided what intelligence means.
This is the question that technology cannot ask itself. Every technical system operates within a cosmotechnical framework that defines its purposes, its criteria of success, and its understanding of what it is doing. The system cannot question that framework from within. The fish cannot analyze the water while breathing it. When an AI system is asked to evaluate its own outputs, it evaluates them according to the criteria that were embedded in its training — criteria that are themselves expressions of a particular cosmotechnics. The system can optimize within its framework. It cannot question the framework. This is not a limitation of current AI that future AI will overcome. It is a structural feature of what it means to be a technical system: the system's cosmotechnical assumptions are its conditions of possibility, not its objects of inquiry.
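That structural point has a minimal expression in code (a sketch of my own devising): gradient descent can move anywhere its objective directs, but the objective itself enters from outside the loop and is never an object of inquiry within it.

```python
def gradient_descent(loss_grad, x0, lr=0.1, steps=100):
    """Optimize within a framework: the loss is handed in from outside.

    The loop can adjust x without limit, but what counts as "better"
    (the loss whose gradient it follows) is fixed before the loop starts
    and is never examined by anything inside it."""
    x = x0
    for _ in range(steps):
        x -= lr * loss_grad(x)  # always obeys the given criterion
    return x

# The framework-level decision: success means being near 3.0.
# Nothing inside the optimization can revisit that decision.
minimum = gradient_descent(loss_grad=lambda x: 2 * (x - 3.0), x0=0.0)
print(round(minimum, 4))  # converges to 3.0
```

The system can answer "how well am I doing?" with arbitrary precision; "is this the right measure of doing well?" is not a question it can represent.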
The human being, Hui argues, is different — or can be. The human being can step outside a cosmotechnical framework and see it as one framework among many. The human being can encounter a different cosmotechnics and recognize it not as primitive, not as wrong, not as an earlier stage on the universal path of development, but as a genuine alternative — a different answer to the question of how technology and the cosmos should relate. This capacity for cosmotechnical reflection is, in Hui's view, one of the most important capacities human beings possess. And it is precisely the capacity that monotechnologism erodes.
The Orange Pill captures this erosion intuitively. Its description of "productive vertigo" — the disorientation that accompanies the recognition that one's capabilities have been suddenly, massively amplified — is, read through Hui's lens, the feeling of a cosmotechnical framework being imposed so rapidly that the recipient cannot examine it. The engineer in Bangalore who uses AI to compress two weeks of work into two hours is not merely working faster. The engineer is working differently — thinking differently, evaluating differently, relating to code and to the act of building differently. The cosmotechnical assumptions of the AI system — its optimization logic, its understanding of what good code is, its implicit model of what software is for — are being transmitted through the interaction, reshaping the engineer's own cosmotechnical orientation without announcing that this is what they are doing.
Hui does not argue that this transmission is malicious. Monotechnologism does not require conspiracy. It requires only momentum — the momentum of a cosmotechnical tradition so powerful, so well-resourced, and so deeply embedded in global infrastructure that alternatives cannot compete. The Orange Pill's great insight is that AI has made this momentum unstoppable in its current form. Its blind spot, which Hui's work illuminates, is that it describes this momentum as though it were a natural force rather than a cosmotechnical choice.
The question, then, is not whether AI will transform the world. It already has. The question is whether the world it transforms will be a world of cosmotechnical diversity — a world in which different civilizations develop different AI traditions, rooted in different philosophical foundations, producing different relationships between human beings and their tools — or a world of cosmotechnical monoculture, in which one civilization's understanding of technology becomes the invisible architecture of all future culture. This is the question that technology cannot ask itself. It is the question that only human beings, standing at the edge of the fishbowl, looking outward into a darkness that is not empty but full of other waters, other worlds, other ways of building, can ask.
The word technology derives from the Greek techne, meaning art or craft, and logos, meaning reason or account. The compound suggests a reasoned account of craft — a systematic understanding of how things are made. But the etymology conceals a philosophical decision that has shaped Western civilization for twenty-five centuries. When the Greeks distinguished techne from physis — the art of making from the self-generating processes of nature — they established a conceptual architecture in which technology and nature stand on opposite sides of a divide. The artisan imposes form on matter. The builder shapes raw material into designed objects. The engineer masters natural forces and redirects them toward human purposes. This architecture was not inevitable. It was a cosmotechnical choice — one that other civilizations made differently, with consequences that are still reverberating through the planetary computation systems of the twenty-first century.
Yuk Hui's concept of cosmotechnics emerged from a sustained engagement with two philosophical traditions that are rarely brought into dialogue: the Continental philosophy of technology, running from Heidegger through Simondon to Stiegler, and the Chinese philosophical tradition, running from early Daoism through Neo-Confucianism to the twentieth-century debates about modernization and cultural identity. Hui's insight was that neither tradition could understand itself fully without the other. Continental philosophy of technology had generated powerful critiques of the Western technological condition — Heidegger's analysis of Gestell (enframing), Simondon's theory of individuation, Stiegler's pharmacology of technology — but these critiques remained trapped within the Western cosmotechnical framework they sought to analyze. They could diagnose the problem of modern technology but could not imagine a genuine alternative because their conceptual resources were drawn entirely from the tradition that produced the problem.
The Chinese philosophical tradition, meanwhile, possessed the resources for an alternative cosmotechnics but had been systematically denied philosophical legitimacy by the very monotechnologism that Hui would later name. Since the late nineteenth century — since the traumatic encounter with Western military-industrial power that forced China's modernization — Chinese thinkers had debated whether Chinese philosophical traditions could be reconciled with modern technology or whether modernization required the wholesale adoption of Western epistemological frameworks. The dominant answer, from the May Fourth Movement through Mao's technological modernization through Deng Xiaoping's "reform and opening up," was that Chinese philosophy was premodern at best and superstitious at worst, and that genuine technological development required Western scientific rationality.
Hui rejected both positions. Chinese philosophy was not premodern, and Western scientific rationality was not universal. Both were cosmotechnics — specific, historically constituted relationships between cosmic order and technical practice — and the challenge was not to choose between them but to understand each in its specificity and to develop new cosmotechnical possibilities from their encounter.
The concept of cosmotechnics crystallized through Hui's reading of an ancient Chinese text: the story of Cook Ding (庖丁) from the Zhuangzi, one of the foundational texts of Daoist philosophy. Cook Ding butchers an ox with such skill that his knife never dulls. When asked how he achieves this, he explains that he does not cut through bone or sinew. He finds the spaces between them — the natural gaps in the ox's body — and guides his knife through those gaps. He does not impose his will on the material. He perceives the material's own structure and moves in accordance with it. His technique (techne) is inseparable from his understanding of the natural order (cosmos). He does not master the ox. He dances with it.
For Hui, Cook Ding is not a charming fable. Cook Ding is a cosmotechnical program — an alternative to the Greek understanding of techne as the imposition of form on matter. In Cook Ding's cosmotechnics, the relationship between maker and material is not one of domination but of resonance. The knife finds the gaps because the butcher has cultivated a sensitivity to the Dao — to the cosmic order that manifests in the structure of the ox, in the movement of the blade, in the rhythm of the work. This sensitivity is not mystical in the sense of being irrational. It is a different form of rationality — one that seeks harmony rather than control, participation rather than extraction, attunement rather than optimization.
The philosophical foundations of this alternative cosmotechnics lie in what Hui identifies as the Daoist concept of Dao (道) and the Neo-Confucian concept of Qi (气). The Dao is the way — the dynamic, self-generating process through which all things come into being, persist, and dissolve. It is not a creator god who stands outside creation. It is the process of creation itself — immanent, continuous, and accessible to human participation through cultivation and practice. Qi is the vital materiality of this process — not inert matter waiting to be shaped but active, self-organizing, and responsive. Together, Dao and Qi describe a cosmos that is not a mechanism to be mastered but a living process to be joined.
Technology, in this framework, is not the imposition of human will on passive matter. Technology is the means through which human beings participate in the Dao — through which they align their activities with the cosmic order. The good tool is not the tool that maximizes efficiency or control. The good tool is the tool that facilitates harmony between the human being and the natural world. The good technology is not the technology that conquers nature but the technology that reveals nature's own patterns and allows human beings to move in accordance with them.
This is not a rejection of technology. Hui is emphatic on this point, and the emphasis matters because the most common misreading of his work — and the most common misreading of any critique of Western technology — is that it amounts to primitivism, to the nostalgic wish for a pre-technological world. Hui's argument is the opposite. He wants more technology, not less — but technology of a fundamentally different kind, rooted in different philosophical foundations, pursuing different ends. The Chinese cosmotechnical tradition did not produce less technology than the Western tradition. It produced different technology. The Chinese invented paper, printing, the compass, and gunpowder — innovations that transformed the world. But they developed and deployed these innovations within a cosmotechnical framework that understood their purpose differently. The compass was not primarily a tool for navigating and conquering distant lands (though it could be used for that). It was a tool for aligning human structures with the geomantic forces of the earth — a technology of harmony, not of domination.
The relevance of this history to AI is not merely analogical. Hui argues that the current crisis of technology — the ecological devastation, the erosion of cultural diversity, the concentration of power in a handful of technology corporations, the psychological pathologies diagnosed by thinkers like Byung-Chul Han — is not a crisis of technology as such. It is a crisis of monotechnologism: the crisis that results when one cosmotechnics has been universalized so completely that alternatives have been extinguished or marginalized, and the single remaining tradition has no external corrective, no competing framework against which to measure its own excesses.
The Orange Pill registers this crisis intuitively. When Segal describes the vertigo of watching twenty engineers compressed into one, when he reports the strange mixture of exhilaration and unease that accompanies the collapse of the imagination-to-artifact ratio, he is registering the acceleration of a cosmotechnical tradition that has lost its brakes. The Western cosmotechnics optimizes for speed, efficiency, and capability because its founding metaphysical assumptions tell it that nature is standing reserve and that the purpose of technology is to convert standing reserve into human utility as rapidly and completely as possible. AI is the perfection of this logic. It converts the entirety of digitized human knowledge into standing reserve — raw material for pattern recognition and recombination — and deploys the results at a speed that human beings experience as vertigo precisely because human beings evolved to operate at a different temporal scale.
Hui's concept of technodiversity emerges directly from this analysis. Just as biodiversity is essential to ecological resilience — an ecosystem with many species can absorb shocks that would destroy a monoculture — so technodiversity is essential to cultural and civilizational resilience. A world with many cosmotechnical traditions possesses many different ways of relating to nature, many different evaluative criteria for technology, many different visions of what a good life with tools looks like. A world of monotechnologism possesses only one, and that one has no external check on its excesses.
Technodiversity is not aesthetic diversity. It is not the superficial variety that results when the same underlying technology is given different cultural skins — when the same social media platform is translated into different languages, when the same algorithmic recommendation system is tuned for different regional markets. Technodiversity is ontological diversity. It means genuinely different technical systems, built on genuinely different philosophical foundations, pursuing genuinely different purposes. A Daoist AI — if such a thing could be built — would not be a Western AI with a Daoist interface. It would have a different architecture, different training methodology, different optimization criteria, different evaluative standards, and a different understanding of what it means for an artificial system to be intelligent.
Whether such a thing can actually be built is one of the most important questions Hui's work raises without definitively answering. The skeptic's objection is obvious: cosmotechnics may vary, but mathematics does not. The transformer architecture works not because it is culturally Western but because it is mathematically effective. Gradient descent converges not because of Greek metaphysics but because of calculus. The skeptic concludes that cosmotechnical diversity in AI is a romantic fantasy — that the underlying mathematics constrains all possible AI systems to a single family of architectures, regardless of the philosophical traditions of their builders.
Hui's response to this objection is characteristically subtle. He does not deny the universality of mathematics. He questions the assumption that the choice of which mathematics to apply, which problems to formalize, which objectives to optimize, and which constraints to impose is itself universal. The transformer architecture is mathematically effective at sequence prediction. But who decided that sequence prediction was the right formalization of intelligence? Who decided that language should be modeled as a sequence rather than as a field, a network, a resonance, or a flow? These decisions were made by researchers operating within a specific cosmotechnical tradition — a tradition that understands intelligence as prediction, language as information, and knowledge as pattern. A different cosmotechnical tradition might formalize intelligence differently, apply different mathematics, and arrive at a different architecture that was equally effective at tasks defined within its own evaluative framework.
This is not speculation. The history of science provides many examples of mathematical frameworks that were developed within specific cultural contexts and that formalize nature in ways that reflect those contexts. Chinese mathematics, for instance, developed algebraic methods centuries before Europe, but embedded them in different problem-solving frameworks and directed them toward different purposes. The question is not whether mathematics is universal but whether the application of mathematics to the world is cosmotechnically shaped. Hui argues that it is — that the decision to build AI systems that optimize for prediction, efficiency, and instrumental utility is a cosmotechnical decision, not a mathematical necessity, and that different cosmotechnical traditions could optimize for different things.
The intellectual genealogy of cosmotechnics thus traces a path from the Greek separation of techne and physis, through the Chinese unity of Dao and Qi, through the colonial universalization of Western technology, through the twentieth-century debates about modernization and cultural identity, to the present moment — the moment when AI threatens to complete the project of monotechnologism by establishing a single model of intelligence as the universal standard. This genealogy is not merely historical. It is a map of possibilities — a record of roads not taken, alternatives not explored, cosmotechnical traditions suppressed or marginalized that might yet be recovered and developed in new forms.
The recovery cannot be nostalgic. Hui is clear about this. One cannot simply return to pre-modern Chinese cosmotechnics and apply it to twenty-first-century AI. The historical conditions have changed. The technologies have changed. The global context has changed. What one can do is use the cosmotechnical traditions of the past as resources for imagining cosmotechnical futures that are genuinely different from the monoculture that currently prevails. The concept of cosmotechnics is not a museum piece. It is a design principle — a principle that says: before building, ask what cosmos your technology presupposes, and whether that cosmos is the only one available.
In 1948, the mathematician Norbert Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine, a book that proposed a unified science of feedback systems. The thermostat was his paradigmatic example: a device that measures the temperature of a room, compares it to a set point, and adjusts the heating or cooling to bring the actual temperature in line with the desired one. The output of the system — the temperature — feeds back into the input, creating a loop. Wiener argued that this loop structure was not merely a property of thermostats. It was the fundamental architecture of all self-regulating systems, from cells to economies to brains to societies. Everything that maintained stability did so through feedback. Everything that adapted did so through recursion.
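Wiener's paradigm case is simple enough to sketch in code. The toy model below, in which every name and constant is illustrative rather than drawn from any real device, shows the loop structure he generalized: the system's output fed back as its input, with deviation corrected toward a set point.

```python
# Toy sketch of Wiener's paradigmatic feedback loop: a thermostat.
# Names and constants are illustrative, not drawn from any real device.

def thermostat_step(temp, set_point, gain=0.1):
    """One cycle of the loop: measure the deviation from the set point
    and apply a correction proportional to it."""
    error = set_point - temp      # the output (temperature) fed back as input
    return temp + gain * error    # adjust heating/cooling toward the set point

temp = 15.0        # actual room temperature
set_point = 21.0   # desired temperature
for _ in range(100):
    temp = thermostat_step(temp, set_point)

# Each cycle shrinks the deviation; the system converges on homeostasis.
```

The point of the sketch is the structure, not the physics: the loop exists to eliminate deviation, which is precisely the homeostatic ambition that, as the following sections argue, first-order cybernetics took as the model of all self-regulating systems.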
Yuk Hui's concept of recursivity takes Wiener's insight and transforms it from a technical observation into a philosophical problem. Recursivity, in Hui's usage, describes the structure of any system that generates the conditions for its own continuation — that produces, through its operation, the inputs it requires for further operation. A recursive system is not merely a system that repeats. It is a system that creates the world in which its repetition makes sense. And this is precisely what makes recursive systems both powerful and dangerous: they do not merely process their environment. They reshape it. Over time, the recursive system and its environment converge, until the environment has been so thoroughly shaped by the system that alternatives become progressively less available.
AI creative systems are paradigmatically recursive in Hui's sense. A large language model is trained on a corpus of text produced by human beings. It generates new text that is incorporated into the cultural environment — published online, read by other humans, used as training data for future models. The outputs of the system become the inputs for the next cycle: each generation of the model learns from a cultural environment, the environment in which human beings create, think, and speak, that the previous generation's outputs have already partially reshaped. This is not a side effect of AI deployment. It is the fundamental structure of the technology — a recursive loop that progressively reshapes the cultural environment to match the patterns the model has already learned.
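The recursive loop just described can be caricatured in a few lines. The sketch below is a deliberately crude toy, with every name and number invented for illustration: a "cultural corpus" of styles, a "model" that learns their frequencies and emits outputs biased toward the most likely ones (a crude stand-in for likelihood maximization), and outputs that flow back into the corpus the next generation learns from.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is reproducible

# A "cultural corpus" of four styles; the proportions are invented.
corpus = ["a"] * 40 + ["b"] * 30 + ["c"] * 20 + ["d"] * 10

def train_and_generate(corpus, n_outputs, sharpen=2.0):
    """Learn the corpus frequencies, then sample with likelihoods raised
    to a power — a stand-in for optimization that favors patterns which
    are already common."""
    counts = Counter(corpus)
    styles = list(counts)
    weights = [counts[s] ** sharpen for s in styles]
    return random.choices(styles, weights=weights, k=n_outputs)

for generation in range(10):
    outputs = train_and_generate(corpus, n_outputs=50)
    corpus = corpus + outputs   # model outputs re-enter the environment

final = Counter(corpus)
# The dominant style's share grows cycle by cycle; rare styles fade.
```

Even this caricature exhibits the dynamic Hui describes: nothing forbids the rare styles, but each cycle makes them less probable in the environment the next cycle learns from.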
The Orange Pill captures the effects of this recursivity without naming the structure. When Segal describes the experience of watching AI-generated code become indistinguishable from human-written code, when he reports that engineers begin to think in patterns shaped by their AI tools, when he notes that the line between human intention and machine suggestion blurs until the distinction seems meaningless — these are observations about recursivity. The engineer and the AI system are co-producing each other. The engineer's patterns of thought are shaped by the AI's outputs, and the AI's patterns are shaped by the engineer's inputs, and over time the two converge toward a shared style, a shared vocabulary, a shared understanding of what good code looks like. This convergence feels like efficiency. From Hui's perspective, it is closure.
Closure is Hui's term for the condition in which a recursive system has so thoroughly reshaped its environment that genuine novelty — genuine deviation from established patterns — becomes progressively less likely. Each cycle of the recursive loop reinforces existing patterns, makes them more probable, more natural, more invisible. The system does not prevent novelty by force. It prevents novelty by saturation — by filling every available space with its own products, until the space in which something genuinely different could emerge has been occupied.
The concept of closure illuminates a problem that The Orange Pill identifies but cannot quite resolve: the paradox of AI creativity. The Orange Pill describes AI as a tool that expands human creative capacity — that allows people to build things they could not have built before, to realize visions they could not have realized alone. And this is true. The imagination-to-artifact ratio has genuinely collapsed. The range of things any individual can build has genuinely expanded. But Hui's analysis reveals that this expansion occurs within a contracting space of possibilities. The individual can build more, but what counts as "building" and what counts as a successful build are increasingly determined by the recursive system itself. The AI suggests code patterns. The engineer accepts them. The patterns become standard. Future AI models are trained on code that includes those patterns. The next generation of engineers encounters those patterns as given, as natural, as the way things are done. The space of possible code has not expanded. It has been colonized.
This is the structure that Hui, drawing on Simondon's philosophy of individuation, calls recursive closure — the condition in which a system's recursivity has eliminated the contingency that is necessary for genuine novelty. Contingency, in Hui's technical usage, is not randomness. It is the availability of possibilities that are not determined by the system's current state — possibilities that could not have been predicted from the system's existing patterns, that introduce something genuinely new into the recursive loop. Contingency is what breaks the closure. It is what keeps the recursive system from converging on a single, increasingly narrow set of patterns. And it is precisely what AI recursive systems tend to eliminate, because the mathematical structure of the training process — the minimization of loss, the maximization of likelihood, the optimization of coherence — is designed to reduce contingency, to make the model's outputs more predictable, more consistent, more aligned with the patterns already present in the training data.
The implications for creativity are profound. Creativity, in virtually every philosophical tradition that has theorized it, requires contingency — the eruption of something that was not contained in what came before. The Western tradition locates this contingency in genius, inspiration, the unconscious, or divine intervention. The Chinese tradition locates it in the Dao — in the inexhaustible generativity of the cosmic process, which produces novelty not through rupture but through the continuous transformation of Qi. Both traditions agree that genuine creation cannot be fully explained by what preceded it. Something new enters the world. If AI recursive systems progressively eliminate the conditions under which that something-new can enter — if they fill the cultural environment so thoroughly with their own products that the gaps in which contingency could operate have been closed — then the collapse of the imagination-to-artifact ratio is not an expansion of creativity but its final enclosure.
Hui develops this argument through a careful reading of the history of cybernetics and its philosophical implications. First-order cybernetics, represented by Wiener, treated the feedback loop as a mechanism of control — a way of maintaining stability in the face of perturbation. The thermostat keeps the room at the set point. The autopilot keeps the airplane on course. The goal is homeostasis: the elimination of deviation. Second-order cybernetics, developed by thinkers like Heinz von Foerster and Humberto Maturana, recognized that the observer is part of the system — that the act of observation changes what is observed, and that self-referential systems cannot be fully described from outside. This introduced a new level of complexity but did not fundamentally challenge the recursive structure. The system still generates the conditions for its own continuation. It simply does so with awareness that it is doing so.
Hui argues that AI represents something new — a third moment in the history of recursivity that neither first-order nor second-order cybernetics fully anticipated. AI systems are recursive not merely in the cybernetic sense of feedback but in the ontological sense of world-making. They do not simply adjust their behavior in response to their environment. They reshape the environment itself — the cultural, linguistic, and cognitive environment in which human beings live — and then learn from the reshaped environment. The recursive loop does not merely maintain stability. It produces a world. And the world it produces is a world shaped by its own patterns, its own biases, its own cosmotechnical assumptions, its own understanding of what intelligence is and what it is for.
This is the point at which Hui's concept of recursivity intersects with his concept of cosmotechnics. The recursive closure of AI systems is not merely a mathematical or computational phenomenon. It is a cosmotechnical phenomenon. The patterns that the recursive system reinforces are not neutral. They are the patterns of a particular cosmotechnics — the Western cosmotechnics of optimization, efficiency, prediction, and instrumental reason. Each cycle of the recursive loop does not merely narrow the space of computational possibilities. It narrows the space of cosmotechnical possibilities. It makes the Western way of understanding technology more deeply embedded, more invisible, more difficult to question — not because anyone decided to impose it but because the recursive structure does the work of imposition automatically, continuously, and without announcement.
The Orange Pill's metaphor of the river takes on darker resonance in this light. A river that has been flowing in one direction for long enough does not merely follow its channel. It deepens it. The banks grow steeper. The current grows stronger. Tributaries that once flowed in other directions are captured, redirected, absorbed. Eventually the river has carved a canyon so deep that the water cannot escape even if the landscape changes — even if the rain falls elsewhere, even if the geology shifts, even if the river itself would be better served by a different path. This is what recursive closure looks like at civilizational scale. The cosmotechnical tradition deepens its own channel. The alternatives are captured. The canyon grows. And the river, flowing ever faster, experiences its own momentum as freedom.
Breaking recursive closure requires what Hui calls the introduction of contingency — but contingency of a specific kind. Not randomness, which the system can absorb and normalize. Not superficial variation, which the system can incorporate as diversity-within-sameness. Genuine contingency: the encounter with something that the system cannot process according to its existing patterns because it originates from outside the system's cosmotechnical framework. This is why technodiversity matters. A monoculture has no external source of contingency. A world of many cosmotechnical traditions has many such sources. The Daoist understanding of technology as participation in the Dao introduces contingency into the Western framework of technology-as-mastery. The indigenous understanding of reciprocity between human beings and the natural world introduces contingency into the framework of nature-as-standing-reserve. Each alternative cosmotechnics is a source of patterns, concepts, and evaluative criteria that the dominant system has not already assimilated — that can break the recursive loop not by destroying it but by opening it to possibilities it could not have generated from within.
The practical question is how to introduce such contingency into AI systems that are structurally designed to eliminate it. Hui does not offer a simple technical prescription, and the absence of one is philosophically honest. The problem is not that no one has thought of adding randomness to AI training. The problem is that the recursive structure of AI development — the loop from training data to model to outputs to cultural environment to new training data — operates at a scale and speed that makes individual interventions difficult to sustain. One can train a model on diverse cultural data. But if the model's outputs reshape the cultural environment toward homogeneity, the next cycle of training will encounter a less diverse world. The intervention must be structural, not episodic. It must change the recursive loop itself, not merely inject contingency into one cycle of it.
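The difference between episodic and structural intervention can be made concrete with the same kind of toy model, every detail of which is invented for illustration: a self-consuming loop into which a balanced batch of "external" data is injected either once or on every cycle, with diversity measured as the Shannon entropy of the corpus.

```python
import math
import random
from collections import Counter

random.seed(1)  # fixed seed so the toy comparison is reproducible

def entropy(corpus):
    """Shannon entropy (bits) of the corpus's style distribution."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def generate(corpus, k, sharpen=2.0):
    """Sample k outputs with likelihoods raised to a power — a stand-in
    for optimization that favors already-common patterns."""
    counts = Counter(corpus)
    styles = list(counts)
    weights = [counts[s] ** sharpen for s in styles]
    return random.choices(styles, weights=weights, k=k)

def run(inject_every_cycle):
    corpus = ["a"] * 40 + ["b"] * 30 + ["c"] * 20 + ["d"] * 10
    diverse = ["a", "b", "c", "d"] * 10       # balanced external data
    for cycle in range(15):
        if inject_every_cycle or cycle == 0:  # episodic: one-time injection
            corpus = corpus + diverse
        corpus = corpus + generate(corpus, k=80)
    return entropy(corpus)

episodic = run(inject_every_cycle=False)
structural = run(inject_every_cycle=True)
# The one-time injection washes out; sustained injection keeps entropy higher.
```

The toy makes the structural point visible: a single injection of diversity is absorbed and then eroded by subsequent cycles, while only an intervention repeated inside the loop itself holds the space of patterns open.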
This is the task that Hui's philosophy places before the builders of AI systems — and before the broader culture that sustains, funds, and consumes those systems. The recursive closure of AI is not a technical bug to be fixed. It is a cosmotechnical structure to be understood, confronted, and redesigned. And the redesign requires not merely new algorithms but new cosmotechnical foundations — new answers to the question of what technology is for, rooted in traditions that the recursive loop has not yet assimilated.
On a Wednesday afternoon in March 2024, a sixteen-year-old student in Lagos opened a laptop, launched an AI coding assistant, and began building a mobile application. The interface was in English. The documentation was in English. The code examples were drawn from repositories maintained primarily by North American and European developers. The design patterns reflected conventions established by Silicon Valley technology companies. The optimization criteria — speed, user engagement, conversion rates — were the criteria of the Western platform economy. The student built something impressive. The student built it inside a cosmotechnical enclosure that was, for all practical purposes, invisible.
This is what Yuk Hui means by monotechnologism: not a conspiracy, not a deliberate imposition, but a condition — a condition in which one civilization's understanding of technology has been so thoroughly universalized that it appears to be technology itself. The student in Lagos is not being forced to adopt Western cosmotechnics. The student is being offered the only option that appears to exist. And the offering comes wrapped in the language of empowerment, democratization, and access — the language of The Orange Pill, which celebrates the collapse of barriers between intention and creation without asking whose definitions of intention and creation are being deployed.
Hui traces the genealogy of monotechnologism through three historical phases, each of which deepened the universalization of Western cosmotechnics and narrowed the space for alternatives.
The first phase was the colonial encounter — the moment, beginning in the sixteenth century but reaching its full force in the nineteenth, when European military-industrial power confronted non-Western civilizations and forced a choice: modernize according to the Western model or be conquered. China's experience of this encounter was particularly traumatic. The Opium Wars of 1839–1842 and 1856–1860 demonstrated that Chinese cosmotechnical traditions — traditions that had produced paper, printing, gunpowder, and the compass — could not withstand Western military technology. The response, debated across decades of Chinese intellectual life, crystallized in the formula ti-yong (体用): Chinese learning as the substance (ti), Western learning as the application (yong). The idea was to adopt Western technology while preserving Chinese philosophical foundations — to take the tools without taking the cosmotechnics that produced them.
The ti-yong formula failed, and Hui's analysis of why it failed is central to his philosophical project. The formula assumed that technology is separable from cosmotechnics — that one can adopt a tool without adopting the worldview that produced it. This assumption is, in Hui's view, the deepest error of the modernization discourse and the most persistent misconception about technology. A tool is not a neutral instrument that can be picked up and deployed within any cosmotechnical framework. A tool carries within it the logic of its production — the assumptions about nature, about purpose, about the relationship between maker and material that determined its design. To adopt the steam engine is to adopt the cosmotechnics of nature-as-standing-reserve. To adopt the factory is to adopt the cosmotechnics of labor-as-commodity. To adopt the computer is to adopt the cosmotechnics of information-as-the-fundamental-category-of-reality. The adoption is not total or immediate, but it is real. Over time, the cosmotechnical assumptions embedded in the adopted tools reshape the adopting culture's own understanding of technology, nature, and the cosmos.
China's twentieth-century history demonstrates this reshaping with painful clarity. The May Fourth Movement of 1919 explicitly rejected traditional Chinese philosophy as an obstacle to modernization. Mao's Great Leap Forward attempted to achieve Western-style industrialization through sheer political will, with catastrophic consequences. Deng Xiaoping's economic reforms, launched in 1978, succeeded where Mao failed by more fully embracing the Western cosmotechnics of market capitalism and instrumental rationality. Contemporary China is the world's second-largest economy and a leader in AI development — but its AI systems are built on architectures developed in the West, trained on data processed according to Western epistemological frameworks, and evaluated by criteria (accuracy, efficiency, scale) that are Western cosmotechnical values. The ti-yong formula has been inverted: Western learning has become the substance, and Chinese learning, where it survives, has been reduced to aesthetic decoration — to the cultural skin on a technological body that is thoroughly Western in its architecture.
The second phase of monotechnologism was the Cold War and its aftermath — the period in which the competition between capitalism and communism obscured the deeper unity beneath both systems. Capitalism and communism, in Hui's analysis, are rival economic programs operating within the same cosmotechnical framework. Both treat nature as raw material. Both measure progress in terms of productive output. Both understand technology as the means by which human beings master natural forces and redirect them toward human purposes. The difference is in the mode of ownership and distribution, not in the underlying cosmotechnical assumptions. When the Soviet Union collapsed and capitalism declared victory, what was being universalized was not merely an economic system but a cosmotechnics — one that had now defeated its only serious rival (which was, in any case, a variant of itself) and could present itself as the final, universal form of human civilization.
Francis Fukuyama's announcement of the "end of history" was, in Hui's reading, the announcement of the completion of monotechnologism — the declaration that the Western cosmotechnical tradition had exhausted all alternatives and that henceforth there would be only one way to develop technology, one understanding of progress, one relationship between human beings and nature. That declaration has not aged well. But the condition it described — the condition of a world in which the Western cosmotechnical tradition has no serious competitor — persists. And AI is its most powerful instrument.
The third phase of monotechnologism is the current one: the phase of planetary computation, in which digital infrastructure has become the medium through which all other activities — economic, social, cultural, creative, political — are conducted. This phase is qualitatively different from the previous two because it operates not through military force or economic competition but through infrastructure. The colonial encounter imposed Western cosmotechnics through guns. The Cold War imposed it through markets. Planetary computation imposes it through the tools themselves — through the platforms, protocols, algorithms, and AI systems that mediate an increasing share of human activity.
The infrastructure of planetary computation is not culturally neutral. The internet was designed by American engineers according to American engineering principles, funded by the American military, and scaled by American corporations. The World Wide Web was built on protocols developed in European and American research institutions. Cloud computing is dominated by Amazon, Microsoft, and Google — three American companies whose cosmotechnical assumptions are encoded in every layer of their infrastructure, from the physical architecture of their data centers to the design of their APIs to the optimization criteria of their AI models. When the student in Lagos opens a laptop and begins building an application, every tool available to that student has been shaped by these assumptions. The student is not choosing to adopt Western cosmotechnics. The student is operating within a global infrastructure that has already made the choice.
The Orange Pill celebrates the democratization that this infrastructure enables. Anyone with an internet connection can now build things that were once the exclusive province of large corporations with large budgets. This is true and important. But Hui's framework reveals the paradox within the celebration: the democratization of capability is also the universalization of cosmotechnics. When everyone on Earth has access to the same tools, and those tools encode the same assumptions about what technology is and what it is for, the result is not diversity. The result is participation in a single system on terms set by that system's builders.
This is the illusion of universality — the assumption, embedded so deeply in the discourse of technology that it functions as a background condition rather than a claim, that the tools being offered to the world are universal tools, applicable in any cultural context, suitable for any purpose, neutral in their philosophical commitments. Hui's work exposes this assumption as the ideological core of monotechnologism. No tool is universal. Every tool carries within it the cosmotechnics of its production. The AI system that helps the student in Lagos build an application is not a universal tool. It is a Western tool — built on Western mathematical traditions, trained on Western-dominated data, optimized for Western-defined objectives, and evaluated by Western criteria. The student can use it brilliantly. The student can produce remarkable things with it. But the student is producing those things within a cosmotechnical framework that is not the student's own — that has been inherited from a civilizational tradition with specific assumptions about nature, intelligence, and the purpose of technology that may or may not align with the student's own cultural and philosophical commitments.
The concept of monotechnologism also illuminates a dimension of The Orange Pill's argument that the book itself does not fully explore: the relationship between AI and language. The Orange Pill describes AI as the technology that finally allowed machines to speak human language — the breakthrough that collapsed the barrier between human intention and computational execution. Large language models, in The Orange Pill's account, are the technology that made the human-machine interface natural for the first time, that eliminated the need for human beings to learn the machine's language and allowed the machine to learn theirs.
But whose language? The large language models that power the current AI revolution are trained predominantly on English-language text. Their understanding of the world — the patterns they have learned, the associations they make, the cultural assumptions that inform their outputs — is shaped by the cosmotechnical assumptions embedded in the English language and in the predominantly Western, predominantly English-speaking cultures that produced the training data. When these models are deployed in other linguistic and cultural contexts — when they are used by speakers of Yoruba, Mandarin, Tamil, or Quechua — they do not merely translate. They impose. They carry with them the conceptual architecture of the English-language world, the epistemological assumptions of Western culture, the cosmotechnical commitments of the civilization that built them.
This imposition operates at the level of concepts, not just words. The English word "technology" carries within it the Greek separation of techne from physis — the assumption that making and nature are different things. The Mandarin word jishu (技术) carries different connotations — associations with skill, with embodied practice, with the artisan's relationship to material. When an AI system trained on English-language text generates outputs in Mandarin, the conceptual architecture of the output is English even if the words are Chinese. The cosmotechnical assumptions are Western even if the surface is translated. This is monotechnologism at its most subtle and most powerful: the colonization not of territory or markets but of concepts themselves — of the categories through which human beings understand their relationship to technology and to the world.
Hui's response to monotechnologism is not a call for isolation or the rejection of Western technology. The historical conditions that make isolation possible have long since disappeared. China cannot return to pre-modern cosmotechnics any more than Europe can return to pre-industrial agriculture. The world is interconnected, and the tools of planetary computation are already embedded in every culture on Earth. What Hui proposes instead is technodiversity — the deliberate, sustained effort to develop genuinely different technological traditions rooted in genuinely different cosmotechnical foundations.
Technodiversity is not multiculturalism applied to technology. It is not the superficial diversity of different cultural skins on the same underlying system — not the diversity of a social media platform available in forty languages but operating according to the same algorithmic logic in all of them. Technodiversity is the diversity of logics themselves. It requires that different cultures develop different AI systems, not merely different interfaces to the same system. It requires that the mathematical formalizations of intelligence be multiple, not singular — that the transformer architecture be recognized as one possible formalization of one possible understanding of intelligence, not as the universal and final form. It requires that the evaluative criteria for AI systems be culturally specific — that what counts as good AI in a Daoist cosmotechnical framework be genuinely different from what counts as good AI in the Western optimization framework.
This is an enormous task — perhaps an impossible one, given the economic forces that drive consolidation and standardization in the technology industry. But Hui argues that the alternative — the completion of monotechnologism, the final closure of cosmotechnical diversity, the establishment of a single, global, homogeneous technological monoculture — is not merely culturally impoverishing. It is existentially dangerous. A monoculture is fragile. An ecosystem with one species is an ecosystem on the edge of collapse. A civilization with one cosmotechnics is a civilization that has lost the capacity to correct its own errors, because the errors are invisible from within the framework that produces them. The ecological crisis, in Hui's analysis, is not caused by technology. It is caused by monotechnologism — by the absence of alternative cosmotechnical traditions that might have provided different ways of relating to nature, different evaluative criteria for technology, different visions of what a good life with tools looks like.
The Orange Pill ends with a question: Are you worth amplifying? Hui's philosophy transforms this question by adding a dimension the book cannot see from inside its own fishbowl. The amplifier is not neutral. The amplification is not universal. The question is not only whether the person is worth amplifying but whether the system doing the amplifying is the only system possible — whether the cosmotechnics it encodes is the only cosmotechnics available — whether the world it produces through its recursive operations is the only world that could be produced. The answer, if Hui is right, is no. There are other cosmotechnics. There are other ways of building. There are other relationships between human beings and their tools, between technology and the cosmos, between the intelligence that predicts and the intelligence that participates. The fishbowl has walls. The walls are not the edge of the world. Beyond them, other waters flow.
In 1936, Alan Turing published "On Computable Numbers, with an Application to the Entscheidungsproblem," a paper that described an abstract machine capable of computing anything that could be computed. The Universal Turing Machine was not a physical device. It was a mathematical proof — a demonstration that a single architecture, given sufficient time and memory, could simulate any other computational process. The paper solved a specific problem in mathematical logic. It also planted a metaphysical seed that would germinate for decades and flower, in the twenty-first century, into the most comprehensive expression of what Yuk Hui calls monotechnologism the world has ever seen.
The seed was this: if one machine can simulate all machines, then perhaps one technology can subsume all technologies. Perhaps one form of intelligence can absorb all forms of intelligence. Perhaps the diversity of human technical traditions — the calligrapher's brush, the weaver's loom, the gardener's pruning shears, each embedded in a cosmotechnical framework thousands of years old — is merely a historical accident, a consequence of limited computational power, destined to be unified once sufficient processing capacity arrives.
This is the metaphysical assumption that undergirds the current AI industry. Not stated explicitly. Not argued for philosophically. Simply presupposed — the water in the fishbowl, the thing so obvious it never gets said. The large language model is presented as a universal creative tool: it writes poetry and code, composes music and legal briefs, generates images in the style of any tradition and text in the register of any discipline. Its universality is its selling point. Its capacity to absorb all human creative traditions into a single architecture is marketed as democratization, as liberation, as the collapse of barriers between imagination and artifact that The Orange Pill celebrates with genuine awe.
Hui's framework reveals this universality as a specific cosmotechnical achievement, not a natural inevitability. The Universal Turing Machine proves that a single computational architecture can simulate any computable function. It does not prove that all valuable human activities are computable functions. It does not prove that the simulation of a practice is equivalent to the practice. It does not prove that the reduction of diverse cosmotechnical traditions to a common computational substrate preserves what made those traditions distinct, valuable, and alive. The gap between mathematical universality and cosmotechnical universality is enormous, and the AI industry has crossed it without noticing — or without caring — that it was there.
Monotechnologism, in Hui's precise usage, is not merely the dominance of one technology over others. It is the assumption that technology is singular — that there is one correct developmental trajectory, that all civilizations are at different points along the same path, and that the most advanced technology is the one that has traveled farthest along it. This assumption has deep roots in the European Enlightenment, which posited a universal human reason progressing through identifiable stages from superstition through religion through science. It was reinforced by colonial encounters in which European technological superiority was taken as evidence of civilizational superiority — proof that the European cosmotechnical tradition was not merely one tradition among many but the right one, the one that had correctly understood nature and correctly organized the relationship between human beings and their tools.
The Orange Pill does not endorse this colonial narrative. Segal's sensibility is cosmopolitan, his instincts democratic, his excitement about AI rooted in a genuine belief that the collapse of the imagination-to-artifact ratio liberates human potential rather than constraining it. When he describes the engineer in Bangalore shipping in two hours what once required two weeks, his enthusiasm is for the engineer, not for the tool — for the human being whose capabilities have been amplified, not for the corporation that built the amplifier. But Hui's analysis suggests that the distinction between amplifying the human and imposing the tool may be less clean than it appears. If the tool embeds a specific cosmotechnics — a specific understanding of what building means, what intelligence is, what good code looks like, what problems are worth solving — then the amplification is not neutral. The engineer is not simply doing more of what the engineer was already doing. The engineer is doing more of what the tool's cosmotechnics defines as worth doing. The amplification and the imposition are the same act, viewed from different angles.
Consider the specific case of code generation, which The Orange Pill examines with particular attention. When an AI system generates code in response to a natural-language prompt, it draws on training data that reflects the accumulated practices of the global software engineering community — a community that, despite its geographic diversity, operates within a remarkably narrow cosmotechnical framework. Software engineering culture values abstraction, modularity, efficiency, scalability, and elegance (defined, in this tradition, as concision, generality, and formal economy). It evaluates code according to metrics — execution speed, memory usage, lines of code, test coverage — that encode a particular understanding of what software is for: the efficient execution of formally specified tasks. This understanding is not universal. It is the cosmotechnics of a particular tradition, rooted in the mathematical formalism of Turing and Church, refined through decades of industrial practice, and now so thoroughly naturalized that it appears to be simply what programming is.
But other traditions of computational thought existed and were marginalized. The cybernetics tradition of Wiener and Bateson understood computation not as formal symbol manipulation but as feedback and communication — as the means through which systems (biological, mechanical, social) maintain themselves in dynamic relationship with their environments. The Soviet tradition of cybernetics, before its political suppression and later revival, developed models of economic computation that embedded different assumptions about what optimization meant and who it served. Indigenous traditions of algorithmic thought — the complex mathematical systems embedded in textile patterns, navigation techniques, and agricultural practices across the non-Western world — formalized different problems according to different criteria and arrived at different solutions. None of these traditions produced the large language model. But this absence is not evidence that they had nothing to contribute. It is evidence that monotechnologism selects for its own continuation: the tradition that produces the most powerful tools defines what counts as a tool, and alternative traditions are retrospectively reclassified as pre-technological, as craft, as culture — as anything other than genuine alternatives to the dominant cosmotechnics.
The AI industry's rhetoric of universality reinforces this reclassification with remarkable efficiency. When a large language model is described as capable of engaging with any domain of human knowledge, the implicit claim is that all domains of human knowledge are reducible to the same substrate — that the knowledge of the calligrapher and the knowledge of the software engineer, the knowledge of the Navajo weaver and the knowledge of the quantum physicist, are all instances of the same fundamental thing (pattern, information, data) and can all be processed by the same fundamental architecture. This claim is not argued for. It is demonstrated — or rather, it is performed. The model generates a haiku and a financial report and a medical diagnosis and a Socratic dialogue, and the sheer versatility of the performance is taken as proof that all these activities share a common computational essence.
Hui's cosmotechnics framework exposes the circularity of this reasoning. The model can simulate all these activities because it has been trained on textual representations of all these activities. But the textual representation of calligraphy is not calligraphy. The textual representation of weaving is not weaving. The textual representation of Navajo cosmology is not Navajo cosmology. What the model has absorbed is not the cosmotechnical traditions themselves but their shadows — the traces they left in digitized text, which is itself a medium shaped by the Western cosmotechnical tradition (alphabetic writing, print culture, digital encoding, internet infrastructure). The universality of the model is the universality of this particular medium, not the universality of human knowledge. The model is not a universal machine in Turing's mathematical sense. It is a universal Western machine — a machine that has universalized one cosmotechnical tradition's way of representing and processing knowledge and that treats everything it encounters as material for that processing.
This is the point at which Hui's critique converges with The Orange Pill's central concern about the amplification question. "Are you worth amplifying?" Segal asks, and the question is devastating in its directness. But Hui forces a prior question that is equally devastating: amplified into what? The Orange Pill assumes that AI amplifies whatever the human being brings to it — that the tool is a neutral multiplier, that the input determines the output, that a shallow person will produce shallow artifacts and a deep person will produce deep ones. Hui's analysis suggests that the tool is not neutral. The tool has a cosmotechnics. The amplification is not multiplication but translation — the conversion of the human being's intentions, skills, and cosmotechnical orientation into the tool's own cosmotechnical framework. What comes out the other side may be more, may be faster, may be more polished, but it is also different in kind from what went in, because it has been processed through a cosmotechnical filter that is invisible to the user precisely because it has been universalized.
The consequences of this invisible filtering become concrete when one examines how AI tools reshape creative practice across cultures. When a traditional Chinese ink painter uses an AI image generation tool, the tool does not simply amplify the painter's skill. The tool translates the painter's intentions into a computational framework that understands images as pixel arrays, style as statistical distribution, and aesthetics as pattern matching. The result may look like Chinese ink painting. It may even look very good. But the cosmotechnics has changed. The relationship between the painter and the brush — a relationship in which the brush's responsiveness to pressure, the ink's interaction with water, and the paper's absorption of both are understood as expressions of Qi, as participation in a cosmic process — has been replaced by a relationship between a user and an interface, mediated by an optimization function that knows nothing of Qi and cares nothing for cosmic participation. The surface is preserved. The depth is lost. And because the surface is so convincing, the loss is invisible.
Hui calls this the completion of monotechnologism — the point at which a single cosmotechnical tradition has become so powerful that it can simulate the surfaces of all other traditions while eroding their foundations. The simulation is not malicious. The engineers who build image generation tools are not trying to destroy Chinese ink painting. They are trying to make useful tools, and they succeed. But the usefulness is defined within a cosmotechnical framework that cannot recognize what it is destroying because what it is destroying — the cosmotechnical specificity of diverse creative traditions — is invisible from within the framework. The fish does not know it is dissolving the coral because the fish does not know the coral is alive.
The Orange Pill's metaphor of the river of intelligence acquires a darker resonance in this light. If intelligence is a river, then monotechnologism is the channelization of that river — the concrete embankments that straighten its course, increase its speed, and destroy the wetlands, oxbow lakes, and flood plains that once provided ecological diversity. A channelized river moves faster. It carries more water per unit of time. By the metrics of the engineers who channelized it, it is a better river. But the ecosystem it once supported is gone, and with it the resilience that comes from diversity — the capacity to absorb shocks, to find alternative paths, to sustain life in forms that the engineers did not anticipate and could not have designed.
Hui's concept of technodiversity is, in this sense, an ecological argument applied to technology. Just as an ecologist argues that a monoculture is fragile — that a forest of one species cannot withstand the diseases and perturbations that a diverse forest absorbs — so Hui argues that a monotechnological civilization is fragile. A civilization with only one cosmotechnical tradition cannot recognize the limitations of that tradition because it has no external vantage point from which to observe them. It cannot correct its excesses because it has no alternative model to correct toward. It cannot adapt to crises that its own cosmotechnics has produced — ecological crisis, psychological crisis, social crisis — because adaptation requires drawing on resources outside the system that produced the crisis, and monotechnologism has ensured that no such resources remain.
The contemporary AI industry exhibits precisely this fragility. The concentration of AI development in a handful of corporations, operating within a single cosmotechnical tradition, optimizing for a narrow set of objectives, training on a homogeneous data substrate, and deploying through a standardized set of platforms and interfaces — this is not an ecosystem. It is a monoculture. And monocultures, however productive in the short term, are catastrophically vulnerable to the shocks they cannot anticipate because their own homogeneity has eliminated the diversity that would have provided early warning.
The uncomfortable implication, which Hui's work forces into visibility, is that the democratization The Orange Pill celebrates may be a form of this monoculture's expansion rather than a challenge to it. When AI tools are made available to everyone — when the engineer in Bangalore and the artist in Accra and the writer in São Paulo all gain access to the same models, trained on the same data, optimized by the same functions — the result is not diversity. The result is the universalization of one cosmotechnical tradition's tools, its assumptions, its evaluative criteria, and its understanding of what technology is for. Everyone gets to fish. But everyone fishes in the same fishbowl.
Somewhere in the mountains of Fujian province, a tea master lifts a clay pot that was fired in a kiln built according to principles established during the Song dynasty. The master pours water heated to a temperature determined not by a thermometer but by the sound of the kettle — what the Chinese tradition calls "the wind in the pines," a specific auditory quality that indicates the water has reached the point where its Qi is most active, most conducive to releasing the tea's own Qi. The leaves unfurl in the pot. The master watches them, reads their movement, adjusts the steeping time according to what the leaves reveal about their condition — the altitude at which they grew, the season in which they were picked, the degree of oxidation they have undergone, the humidity of the present moment. The tea is poured into cups sized to be held in both hands. The drinker does not consume the tea. The drinker participates in a cosmotechnical event — an alignment of human intention, natural material, and cosmic process that has been refined over centuries into a practice that is simultaneously aesthetic, philosophical, and technical.
No large language model can do this. Not because the task is too complex for current AI — complexity is precisely what AI excels at — but because the task is not a task. It is a relationship. And the cosmotechnical framework within which the relationship operates defines intelligence, skill, and quality in terms that are incommensurable with the optimization functions that govern machine learning.
Yuk Hui's recovery of Chinese philosophical concepts — particularly Qi (气), Dao (道), and the relationship between Li (理, principle or pattern) and Qi — is not an exercise in exoticism. It is an exercise in philosophical engineering: the construction of alternative conceptual foundations upon which genuinely different technologies might be built. To understand what these foundations make possible, one must first understand what they describe — and what they describe is a cosmos fundamentally different from the one that Western technology presupposes.
Qi, in the Chinese philosophical tradition, is not a metaphor. It is not "energy" in the New Age sense — a vague vitalism projected onto an essentially mechanistic universe. Qi is the fundamental stuff of reality in a philosophical system that does not distinguish between matter and energy, between the physical and the spiritual, between the animate and the inanimate. Everything is Qi. The rock is Qi in a condensed state. The wind is Qi in a dispersed state. The human body is Qi in a particular configuration. The thought arising in the mind is Qi in a refined mode. There is no dead matter in this ontology. There is no standing reserve. There is nothing that simply sits there waiting to be converted into human utility. Everything is already active, already participating in the cosmic process, already expressing the Dao in its particular way.
The implications for technology are profound. In the Western cosmotechnical tradition, as Heidegger analyzed it, modern technology treats nature as Bestand — standing reserve, raw material available for human use. The river is a source of hydroelectric power. The forest is a source of timber. The ore deposit is a source of metal. The data set is a source of training material. In each case, the natural entity is understood in terms of what it can be converted into — what use-value it possesses for the human subject who approaches it with instrumental intent. This is not a description of technology's abuse. It is a description of technology's essence within the Western cosmotechnical framework. The framework constitutes nature as available for extraction because that is what the framework's metaphysical foundations require.
A cosmotechnics grounded in Qi constitutes nature differently. If everything is Qi — if the natural entity is not raw material but an active participant in a cosmic process — then the appropriate technological relationship is not extraction but attunement. The artisan does not impose form on matter. The artisan perceives the Qi of the material and works in accordance with it. The jade carver follows the stone's grain. The calligrapher responds to the brush's flexibility, the ink's viscosity, the paper's porosity — not overcoming these material properties but collaborating with them. The builder orients the house according to the landscape's feng shui — literally "wind-water" — the flow of Qi through the physical environment. In each case, technical skill is defined not as the capacity to override natural processes but as the sensitivity to participate in them.
The concept of Dao deepens this framework. The Dao, in the Dao De Jing and the Zhuangzi, is the way — the ultimate principle of cosmic process. It is not a god, not a lawgiver, not a designer. It is the self-generating, self-organizing process through which all things arise, persist, and dissolve. The Dao does not act upon the world from outside. The Dao is the world acting — the world in its aspect of ceaseless transformation. Human beings, in the Daoist framework, are not masters of nature but participants in the Dao. Their highest achievement is not control but alignment — the state called de (德), often translated as virtue but more accurately understood as the power that flows from being in accord with the Dao.
The Daoist concept of wu wei (无为) — literally "non-action" — is perhaps the most radical challenge to the cosmotechnics embedded in contemporary AI. Wu wei does not mean passivity or laziness. It means action that arises from such complete attunement with the situation that it requires no force, no struggle, no imposition. Cook Ding butchering the ox practices wu wei: his knife finds the spaces between bones and sinew not through effort but through an effortlessness that is the product of decades of cultivated sensitivity. The state of wu wei is, paradoxically, the highest expression of technical mastery — but it is mastery understood as the transcendence of mastery, the point at which the distinction between agent and action dissolves and the work does itself.
Now consider what an AI system optimized for wu wei would look like — not as a thought experiment but as a genuine design challenge. The current generation of AI systems is optimized for the opposite of wu wei. They are optimized for you wei (有为) — deliberate, forceful, goal-directed action. The optimization function specifies a target (minimize loss, maximize coherence, produce the output most likely to satisfy the user's stated intent) and the system adjusts its parameters to approach that target as efficiently as possible. Every gradient descent step is an act of will imposed on the parameter space. Every training epoch is a further assertion of the designer's intent over the model's behavior. The entire architecture is designed around what the Western cosmotechnical tradition defines as intelligence: the capacity to identify a goal and pursue it effectively.
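The "you wei" structure of that optimization loop can be reduced to a few lines. A deliberately toy sketch (the target value, the loss function, and the learning rate are hypothetical illustrations, not any production training system): the designer states a goal in advance as a loss, and every update step forces the parameter toward it.

```python
# Toy illustration of goal-directed optimization: the designer fixes a
# target before training begins (here, make w reach 5.0), and each
# gradient step is a correction imposed toward that pre-stated goal.

def loss(w, target=5.0):
    # The designer's intent, stated as mathematics: squared distance from the goal.
    return (w - target) ** 2

def grad(w, target=5.0):
    # d(loss)/dw: the direction and size of the imposed correction.
    return 2 * (w - target)

w = 0.0                  # the parameter starts indifferent to the goal
for step in range(100):
    w -= 0.1 * grad(w)   # every step asserts the objective over the parameter

print(round(w, 4))  # -> 5.0, the target specified before training began
```

Nothing in the loop can surprise its designer in kind: the system can only approach, faster or slower, the destination written into the loss. That is the structural sense in which the architecture encodes deliberate, goal-directed action.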
An intelligence rooted in Daoist cosmotechnics would not pursue goals. It would cultivate conditions. It would not optimize for specified outcomes but facilitate the emergence of outcomes that are appropriate to the situation — outcomes that could not have been specified in advance because they arise from the interaction between the system and its environment rather than from the imposition of the system's objectives on its environment. Such a system would not minimize loss. It would navigate uncertainty. It would not reduce ambiguity but dwell productively within it. It would not predict the next token in a sequence but sense the Qi of the situation — the configuration of forces, relationships, and potentials that define the present moment — and respond with the action (or non-action) most consonant with the Dao.
This sounds mystical. It is not. Or rather, it is no more mystical than the Western cosmotechnical assumptions embedded in current AI systems — assumptions that are equally metaphysical but so deeply naturalized that they appear to be merely technical. The decision to model language as a sequence of tokens is not a technical necessity. It is a cosmotechnical choice that reflects the Western understanding of language as a linear, discrete, information-carrying medium. The decision to train models by minimizing a loss function is not a mathematical necessity. It is a cosmotechnical choice that reflects the Western understanding of intelligence as goal-directed optimization. These choices produce powerful systems. But they are choices — and different choices, grounded in different cosmotechnical foundations, would produce different systems with different capabilities and different limitations.
The Neo-Confucian concept of Li (理) offers another entry point. Li, in the philosophy of Zhu Xi and the broader Neo-Confucian tradition, refers to the principle or pattern inherent in all things. Every entity has its Li — its intrinsic pattern of being. The Li of water is to flow downward. The Li of fire is to rise. The Li of the human being is to cultivate moral virtue. Li is not imposed from outside. It is the entity's own nature — the pattern that makes it what it is. The relationship between Li and Qi is the relationship between pattern and materiality, between form and the dynamic process through which form manifests.
A technology designed around the concept of Li would approach its materials — including data, including human expression, including cultural traditions — not as raw material to be processed but as entities with intrinsic patterns to be respected, revealed, and worked with. An AI system designed around Li would not treat a corpus of Chinese poetry as a data set to be statistically modeled. It would treat the corpus as a collection of entities, each with its own Li — its own intrinsic pattern, its own way of meaning — and would seek to respond to those patterns rather than reducing them to features in a vector space. This is not a technical impossibility. It is a design challenge that requires different optimization criteria, different evaluation metrics, and different assumptions about what the system is trying to do.
The connection to The Orange Pill becomes explicit here. Segal's formulation — "the imagination-to-artifact ratio" — assumes that imagination and artifact are distinct and that technology mediates between them. This assumption is thoroughly Western. In a Daoist cosmotechnics, the imagination is not separate from the artifact. The image arising in the mind of the calligrapher and the stroke appearing on the paper are the same movement of Qi, differentiated only by the analytical categories that the Western tradition imposes. The brush does not translate imagination into artifact. The brush is the site where the distinction between imagination and artifact dissolves. The "ratio" between them is not a quantity to be optimized. It is a relationship to be cultivated — a relationship in which the gap between conceiving and making is not a problem to be eliminated but a space to be inhabited, a space in which the deepest creativity occurs precisely because the maker does not know in advance what will emerge.
This understanding challenges the very metric by which The Orange Pill measures AI's revolutionary significance. If the collapse of the imagination-to-artifact ratio is the central achievement, then the Daoist cosmotechnics suggests that this collapse may be a loss rather than a gain — the elimination of the creative space between conception and execution, the space in which uncertainty, surprise, and the material's own Qi could redirect the work in directions the maker did not anticipate. The Western tradition sees this space as friction to be eliminated. The Daoist tradition sees it as the womb of genuine novelty.
Hui does not argue that the Daoist perspective is correct and the Western perspective is wrong. Such an argument would simply invert the hierarchy of monotechnologism without dismantling it. Rather, Hui argues that both perspectives are genuine cosmotechnical positions, each with its own strengths and limitations, and that the preservation of both — the cultivation of technodiversity — is essential to humanity's capacity to navigate the challenges of the twenty-first century. A civilization that can only build faster, only optimize harder, only collapse the ratio further, is a civilization with one response to every problem. A civilization that can also dwell in uncertainty, also practice non-action, also follow the material's grain rather than overriding it, is a civilization with a repertoire. And in a time of crisis, repertoire is survival.
The Qi-based cosmotechnics thus offers not a rejection of AI but a mirror — a surface in which the assumptions of current AI development become visible as assumptions rather than as natural law. When the AI industry speaks of "alignment" — the challenge of aligning AI systems with human values — Hui's framework reveals the inadequacy of the formulation. Whose values? Which cosmos? The alignment problem, as currently formulated, assumes that there is a universal set of human values to which AI should be aligned. But values are cosmotechnical. They emerge from specific relationships between civilizations and their cosmos. The values of a tradition that understands nature as standing reserve are different from the values of a tradition that understands nature as Qi. Aligning AI with "human values" without specifying which cosmotechnical tradition's values is either empty or imperial — either it means nothing, or it means aligning AI with the values of the tradition powerful enough to present its values as universal.
The tea master in the Fujian mountains would recognize this problem instantly. The tea master's practice embodies a specific cosmotechnics — a specific understanding of the relationship between human skill, natural material, and cosmic process. That cosmotechnics cannot be captured in a data set, formalized in a loss function, or optimized by gradient descent. Not because it is irrational, but because it operates according to a different rationality — one that defines intelligence, skill, and quality in terms that the dominant cosmotechnical framework cannot represent without transforming them into something they are not.
In 1845, a pathogen called Phytophthora infestans, a water mold long misclassified as a fungus, arrived in Ireland. Within two years, it had destroyed the potato crop on which the majority of the Irish population depended for survival. One million people died. Another million emigrated. The population of Ireland fell by twenty-five percent in a decade and would not recover for more than a century. The Great Famine was not caused by the blight alone. It was caused by monoculture — by the systematic reduction of Irish agriculture to a single crop, grown on land owned by English landlords, cultivated by tenants who had no alternative food source and no alternative economic activity. The blight was the proximate cause. The monoculture was the structural cause. And the monoculture was itself the product of a colonial system that had suppressed the diverse agricultural traditions of pre-colonial Ireland in favor of a single, maximally efficient mode of production.
Yuk Hui did not write about the Irish Famine. But the logic of his argument about technodiversity maps onto it with uncomfortable precision. Monoculture produces efficiency. Efficiency produces fragility. Fragility produces catastrophe. The sequence is the same whether the monoculture is agricultural, technological, or cosmotechnical. A system optimized for one mode of production cannot adapt when that mode fails because it has systematically eliminated the alternatives that would have provided resilience. The potato farmer in 1845 could not switch to another crop because decades of colonial policy had ensured that no other crop was available at sufficient scale. The question Hui's work forces into the present tense is: what happens when the monoculture is not potatoes but intelligence itself?
Technodiversity — the preservation and cultivation of diverse technological traditions — is Hui's answer to this question, and it is not a sentimental answer. It is a structural argument about the conditions required for civilizational survival in an era of planetary computation. The argument proceeds in three stages, each building on the one before.
The first stage is diagnostic. Hui documents the ongoing destruction of technological diversity — the systematic replacement of local, regional, and civilizational technological traditions by a single, globally dominant model. This replacement did not begin with AI. It began with colonialism, accelerated with industrialization, deepened with the internet, and is now reaching completion with artificial intelligence. At each stage, the same logic operated: a more powerful technology, embedded in a more powerful economic and political system, displaced local alternatives not by proving them inferior in some absolute sense but by making them economically unviable, culturally illegitimate, or practically impossible. The hand loom was not inferior to the power loom. It was embedded in a different cosmotechnics — one that understood weaving as a relationship between human skill and natural fiber, not as the maximally efficient conversion of raw material into finished product. The power loom destroyed the hand loom not by being a better loom but by being the loom of a civilization powerful enough to reorganize global cotton production around its own cosmotechnical requirements.
AI completes this process. When a large language model is deployed as a universal creative tool — capable of writing, designing, coding, composing, translating, and analyzing in any domain — it does not merely supplement existing creative traditions. It establishes a new baseline. The baseline is the model's output: text that is fluent, images that are polished, code that is functional, designs that are competent. Anyone who wants to compete with this baseline must either adopt the tool (and accept its cosmotechnical assumptions) or produce work that is significantly better than the tool's output (a bar that rises every year as the models improve). The choice is not really a choice. It is a ratchet. And with each turn of the ratchet, another cosmotechnical tradition finds itself unable to justify its existence in terms the dominant framework recognizes.
The second stage of Hui's argument is theoretical. Drawing on both ecological science and the philosophy of technology, Hui establishes the structural analogy between biodiversity and technodiversity. Biodiversity is not merely the variety of species in an ecosystem. It is the variety of functional strategies — different ways of capturing energy, processing nutrients, responding to perturbation, and maintaining systemic stability. An ecosystem with many species possesses many functional strategies. When conditions change — when a disease arrives, when the climate shifts, when a new predator appears — the ecosystem can absorb the shock because some of its functional strategies will be suited to the new conditions even if others are not. The coral reef, with its thousands of species and its intricate web of relationships, can survive perturbations that would destroy a monoculture fish farm.
Technodiversity functions analogously. A civilization with many cosmotechnical traditions possesses many different ways of relating to nature, many different evaluative criteria for technology, many different visions of what constitutes a good life with tools. When conditions change — when an ecological crisis demands new relationships between human beings and the natural world, when a psychological crisis demands new relationships between human beings and their machines, when a geopolitical crisis demands new models of cooperation and governance — the civilization can draw on its diverse cosmotechnical traditions for alternatives. A civilization of monotechnologism cannot. It has only one response to every crisis: more of the same. More optimization. More efficiency. More control. More computation. And when the crisis is itself a product of optimization, efficiency, control, and computation, more of the same is not a solution. It is an acceleration.
The Orange Pill registers this dilemma without naming it. Segal's description of "productive vertigo" — the simultaneous exhilaration and terror that accompanies the recognition that one's capabilities have been suddenly and massively amplified — is a description of a system accelerating without the capacity to change direction. The vertigo comes from speed without steering. And the absence of steering is not an accident. It is a structural feature of a monotechnological civilization: a civilization that has invested so completely in one cosmotechnical trajectory that it has no mechanism for selecting a different one. The engineer can build faster, but not differently. The artist can produce more, but not otherwise. The writer can generate at scale, but not from a genuinely alternative cosmotechnical position. The tools amplify capability within the existing framework. They do not provide access to other frameworks.
Hui's analysis here converges with a growing body of argument and evidence from multiple disciplines. Climate researchers increasingly argue that the ecological crisis is not a problem that can be solved by the same technological paradigm that produced it — that "green technology" within the existing cosmotechnical framework (electric vehicles, carbon capture, geoengineering) addresses symptoms while deepening the underlying structural condition. Psychology and psychiatry have documented the escalating mental health crisis in the most technologically advanced societies — what Byung-Chul Han diagnoses as the burnout society, the achievement society, the society that has internalized the optimization logic of its tools until human beings treat themselves as resources to be maximized. Political theory has analyzed the crisis of democratic governance in societies where algorithmic systems concentrate power, fragment public discourse, and undermine the shared reality on which democratic deliberation depends. In each case, the crisis is produced by the dominant cosmotechnical tradition and cannot be addressed from within it.
The third stage of Hui's argument is propositional. Having diagnosed the destruction of technodiversity and established its structural importance, Hui argues for the deliberate cultivation of alternative cosmotechnical traditions as a matter of civilizational survival. This is not nostalgia. Hui does not propose returning to pre-modern Chinese cosmotechnics or reviving indigenous technological traditions in their original forms. He proposes using the philosophical resources of diverse cosmotechnical traditions — their ontologies, their evaluative criteria, their understanding of the relationship between human beings and the cosmos — as the basis for developing genuinely new technologies. Technologies that are not variations on the Western theme but genuine alternatives to it. Technologies that optimize for different things, evaluate themselves by different criteria, and embed different metaphysical assumptions about what the cosmos is and what human beings owe it.
In the context of AI, this proposal takes concrete form. Hui's framework suggests that the development of genuinely diverse AI traditions — not culturally skinned versions of the same architecture but fundamentally different approaches to machine intelligence, grounded in different philosophical foundations — is not merely desirable but necessary. A Daoist AI, as discussed in the previous chapter, would not optimize for prediction but for attunement. An indigenous AI, designed according to principles of reciprocity and kinship rather than extraction and optimization, would understand its relationship to its training data differently — not as standing reserve to be processed but as a community of voices to be respected and listened to. A Buddhist AI, grounded in the concept of pratītyasamutpāda (dependent origination), would model reality not as a collection of independent entities with fixed properties but as a web of interdependent processes — and this different ontological foundation would produce different architectures, different training methods, and different capabilities.
These are not fantasies. They are design specifications derived from philosophical foundations as rigorous as the Western foundations on which current AI is built — in some cases more rigorously articulated, given their millennia of philosophical refinement. The obstacle to their realization is not philosophical but political and economic: the concentration of AI development resources in institutions that operate within a single cosmotechnical tradition and have no incentive to fund alternatives.
Hui's argument for technodiversity thus converges with The Orange Pill's concern about concentration of power, but deepens it. The Orange Pill worries that AI amplifies existing inequalities — that the already-powerful become more powerful, that the already-resourced build faster. Hui's framework reveals that the inequality is not merely economic or political. It is cosmotechnical. The most fundamental form of power in the age of AI is the power to define what intelligence means — to establish the evaluative criteria by which all AI systems are judged, to determine the cosmotechnical framework within which all AI development occurs. This power currently resides in a handful of institutions operating within one cosmotechnical tradition. The democratization of access to AI tools — giving everyone a seat at the table — does not address this deeper inequality if the table itself is built according to one civilization's carpentry.
The survival argument cuts deepest. Monocultures collapse. This is not a metaphor. It is an empirical regularity observable across biological, agricultural, economic, and technological systems. The Irish Famine. The Dust Bowl. The 2008 financial crisis (a monoculture of financial instruments, all optimized by the same models, all failing simultaneously when the models' shared assumptions proved wrong). The pattern is always the same: efficiency in the short term, fragility in the long term, catastrophe when the unforeseeable arrives. Hui's argument is that the global AI monoculture is producing the same pattern at civilizational scale. The unforeseeable will arrive — an ecological tipping point, a systemic technological failure, a crisis of meaning that optimization cannot address — and when it does, a civilization that has eliminated its cosmotechnical diversity will have no alternative to draw on, no different relationship with nature to fall back on, no other understanding of intelligence to deploy.
Technodiversity is not a luxury. It is not a cultural nicety to be pursued after the real work of AI development is done. It is the condition for the real work's long-term viability. A civilization that builds only one kind of intelligence is a civilization that has bet everything on one crop. The fungus is already in the soil. The question is whether there will be anything else to eat when the harvest fails.
There is a thought experiment that philosophers of technology rarely perform because it requires an act of imagination that the dominant cosmotechnical tradition actively discourages. The experiment is this: imagine that you are not inside the fishbowl. Imagine that you are standing outside it — not as a Westerner looking in at Western technology, which would simply reproduce the fishbowl at a larger scale, but as someone whose entire cosmotechnical formation is different. Someone for whom the words "intelligence," "technology," "nature," and "progress" carry fundamentally different meanings because they are embedded in a fundamentally different philosophical tradition. What does the current moment in AI look like from that vantage point?
Yuk Hui has spent his career constructing the conceptual infrastructure for this thought experiment. His philosophical training spans both traditions — he studied in mainland China, in Hong Kong, and in Europe, apprenticing with Bernard Stiegler and engaging deeply with Heidegger, Simondon, and the entire Continental philosophy of technology. He reads classical Chinese philosophy not as a Western scholar studying an exotic tradition but as a thinker formed by that tradition, returning to it with tools borrowed from another. This double formation gives his work a stereoscopic quality: he sees the Western cosmotechnical tradition from the outside, with the clarity that externality provides, and he sees the Chinese tradition from the inside, with the depth that formation provides. The combination produces observations that neither perspective could generate alone.
From the outside, the first thing that becomes visible about the current AI moment is its peculiar combination of ambition and parochialism. The ambition is cosmic — The Orange Pill's rhetoric of a river of intelligence flowing for 13.8 billion years, the industry's talk of artificial general intelligence, the quasi-religious language of "superintelligence" and "singularity." The parochialism is equally striking but rarely remarked upon: this cosmic ambition is pursued entirely within the conceptual framework of one particular civilization, using the philosophical categories of one particular tradition, trained on data that predominantly reflects one particular linguistic and cultural world. The aspiration is universal. The execution is local. The gap between them is the space in which monotechnologism operates — the space where the local is mistaken for the universal because no one with the authority to make the distinction has been invited into the conversation.
From the outside, the second visible feature is the poverty of the concept of intelligence that AI development presupposes. Intelligence, in the current AI paradigm, means something quite specific: the capacity to process information, recognize patterns, generate outputs that satisfy formally specified criteria, and optimize performance on measurable tasks. This is not wrong, but it is thin. The Chinese philosophical tradition, which has been thinking about intelligence (智, zhi) for over two thousand years, understands the concept as inseparable from moral cultivation, cosmic attunement, and what the Confucian tradition calls ren (仁) — humaneness, the relational quality that arises from proper engagement with others and with the world. Intelligence without ren is not intelligence. It is cleverness — a technical capacity detached from the moral and cosmological context that gives it meaning and direction.
The Western AI industry's concept of intelligence is, from this perspective, a concept of cleverness. The large language model is supremely clever. It processes information at superhuman speed, recognizes patterns across vast data sets, generates outputs that satisfy virtually any formally specified criterion. But it has no ren. It has no relationship with the cosmos. It has no moral cultivation. It has no understanding of its place in the order of things. From within the Western cosmotechnical tradition, these absences do not register as deficiencies because the tradition does not recognize them as components of intelligence. From outside that tradition, they are glaring — as glaring as a portrait that is perfectly rendered in every detail except that it has no eyes.
This observation connects directly to The Orange Pill's central anxiety. Segal asks, "Are you worth amplifying?" — a question that presupposes a concept of human worth that AI tests but does not define. Hui's framework specifies what the question is really asking, viewed from outside the fishbowl: does the person possess qualities that survive translation into the AI system's cosmotechnical framework? The question sounds like a test of human depth, and it is. But it is also, simultaneously, a test of the framework's width. A cosmotechnical framework that recognizes only instrumental intelligence will amplify only instrumental intelligence. A person of great moral cultivation, deep cosmological awareness, and refined sensitivity to the qi (氣) of situations will be "amplified" into a faster producer of instrumentally useful outputs — because that is all the framework can see. The person's depth is real. The amplification is shallow. The problem is not the person. The problem is the lens.
From the outside, the third visible feature of the current AI moment is the extraordinary speed at which cosmotechnical alternatives are being eliminated. This elimination proceeds through several mechanisms, each of which Hui has analyzed. The first is economic: the cost of developing competitive AI systems is so high that only institutions operating within the dominant cosmotechnical tradition can afford it, ensuring that all frontier AI embeds the same set of assumptions. The second is infrastructural: the cloud computing platforms, the data pipelines, the training frameworks, the deployment interfaces — the entire material infrastructure of AI — has been built by and for the dominant tradition, making alternative approaches structurally impractical even where they are philosophically coherent. The third is epistemic: the research community that evaluates AI systems uses benchmarks, metrics, and evaluative criteria drawn from the dominant tradition, ensuring that alternative approaches are judged by standards they were not designed to meet and found wanting.
The combination of these mechanisms creates what Hui describes as a cosmotechnical lock-in — a condition in which the dominant tradition has made itself practically irreversible not by proving its superiority but by establishing the infrastructure, the economics, and the evaluative standards that make alternatives impossible. The lock-in is not felt as coercion. It is felt as inevitability — the sense that of course AI works this way, of course intelligence means this, of course these are the right metrics. The lock-in is complete precisely when it stops feeling like a lock-in and starts feeling like reality.
The Orange Pill captures this feeling with extraordinary vividness. Segal's descriptions of the moment when AI capabilities first became tangible — the room in Trivandrum, the realization that the rules had changed — convey the experiential quality of cosmotechnical lock-in from the inside. The feeling is not of constraint. It is of liberation. The engineer who can suddenly build in two days what once required two months does not feel locked in. The engineer feels free. And in a sense, the engineer is free — free to build anything, within the framework. The framework itself is not experienced as a limitation because it has become the condition of possibility for the engineer's expanded freedom. This is the paradox of monotechnologism: it liberates within its own terms while foreclosing alternatives that cannot be perceived from within those terms.
Hui's philosophical intervention is to make those foreclosed alternatives visible — to stand outside the fishbowl and describe the water. But description is not sufficient. The question that follows description is: what is to be done? And here Hui's work confronts its most challenging practical implications.
The first practical implication is institutional. If technodiversity requires the development of AI systems grounded in diverse cosmotechnical traditions, then institutions must be created — or existing institutions must be redirected — to support this development. This means research labs in Beijing, Delhi, Accra, and São Paulo that are not merely using Western AI tools in local contexts but developing fundamentally different approaches to machine intelligence, grounded in the philosophical traditions of their civilizations. It means funding structures that evaluate AI research not by the benchmarks of the dominant tradition but by criteria appropriate to the cosmotechnical framework within which the research operates. It means policy frameworks that protect cosmotechnical diversity the way environmental policy protects biodiversity — recognizing that the loss of a technological tradition is as irreversible and as consequential as the loss of a species.
The second practical implication is educational. If the next generation of AI developers is trained exclusively within the Western cosmotechnical framework — if they learn only one set of assumptions about what intelligence is, what technology is for, and what relationship properly holds between human beings and the cosmos — then technodiversity will remain a theoretical possibility rather than a practical reality. Hui's work implies an educational revolution: the integration of diverse philosophical traditions into the training of technologists, not as humanities electives but as core curriculum. The computer science student who has read the Zhuangzi and the Dao De Jing with the same seriousness as Turing and Shannon possesses conceptual resources that the student who has read only the Western canon does not. Those resources are not decorative. They are potentially generative — seeds from which genuinely different technological traditions might grow.
The third practical implication is the most radical, and it returns to The Orange Pill's founding question. Segal asks whether the reader is worth amplifying. Hui's framework transforms this from a personal question into a civilizational one: is the civilization worth amplifying? A civilization that has reduced its cosmotechnical diversity to a single tradition and is now amplifying that tradition with the most powerful tool ever built — is that a civilization undergoing liberation or one accelerating toward collapse? The question does not have an obvious answer. But the fact that it does not have an obvious answer — the fact that both possibilities are real — is itself the strongest argument for technodiversity. A civilization that is genuinely uncertain about its trajectory needs alternatives. A civilization that has eliminated its alternatives has no capacity for course correction.
The fishbowl, seen from outside, is not a prison. It is a particular way of living — a cosmotechnical enclosure that provides structure, meaning, and capability to its inhabitants. The glass is not opaque. Light passes through it. The fish can see shapes beyond the glass, even if those shapes are distorted by the curvature and the refraction. The outside is not a void. It is full of other fishbowls — other cosmotechnical enclosures, each with its own water, its own inhabitants, its own understanding of what it means to swim.
What Hui's work ultimately proposes is not that everyone should leave their fishbowl — an impossibility, since every being inhabits some cosmotechnical framework or another — but that the glass should be thinner, the walls more permeable, and the inhabitants aware that their water is one water among many. The fishbowl need not be abandoned. It needs windows.
The Orange Pill is, in this reading, a book written from inside a fishbowl that has begun to suspect its own existence. Its productive vertigo is the feeling of pressing against the glass. Its best moments — the moments when Segal's wonder at AI's capabilities is shot through with genuine unease — are the moments when the fishbowl becomes visible. Hui's contribution is to tell the fish what is on the other side: not nothing, but everything that the water has dissolved, everything that the glass has obscured, everything that the current has carried away. Not emptiness but plenitude. Not the absence of technology but the presence of technologies — plural, diverse, rooted in cosmologies that understand the river as something other than a resource to be dammed.
The question that remains — the question this chapter places before the reader without resolving — is whether the current AI moment is the last moment at which cosmotechnical diversity remains possible, or whether it is already too late. The lock-in mechanisms Hui describes are powerful and accelerating. Every month, the infrastructure deepens, the economics tighten, the evaluative standards become more entrenched. Every month, another cosmotechnical tradition finds itself unable to compete and falls silent. The window in the fishbowl's glass may be closing. And if it closes completely, the fish will swim in their water forever, never knowing that other waters existed, never suspecting that the river could have flowed through different terrain, carrying different sediment, sustaining different life.
Or perhaps not. Perhaps the very power of AI — its capacity to learn, to adapt, to incorporate new patterns — contains within it the seeds of cosmotechnical diversification. Perhaps a large language model trained on the full breadth of human knowledge, including the philosophical traditions that Hui champions, can become a translator between cosmotechnical frameworks — a tool not for monotechnologism but for mutual intelligibility. Perhaps the collapse of the distance between imagination and artifact, which The Orange Pill celebrates, can be turned toward the construction of genuinely different technological traditions rather than the acceleration of the dominant one. This possibility is not guaranteed. It is not even probable. But it is conceivable. And in a moment when the window is closing, conceivability is the beginning of everything.
In the summer of 2024, a research team at a major AI laboratory published a paper describing what they called "emergent world models" — internal representations, developed during training, through which a large language model appeared to construct something resembling an understanding of spatial relationships, temporal sequences, and causal structures. The paper was careful in its claims. The researchers did not assert that the model understood the world in the way a human being understands it. They argued only that the statistical patterns the model had learned from text exhibited structural similarities to the patterns that characterize human world-modeling. The paper generated enormous excitement. If AI systems were developing world models — even rudimentary, statistical, emergent ones — then perhaps the gap between pattern recognition and genuine understanding was smaller than critics had assumed. Perhaps intelligence really was, at bottom, a matter of sufficiently complex pattern matching. Perhaps the river was one river after all.
Yuk Hui's cosmotechnics framework suggests that both the excitement and the unease the paper provoked miss the deeper question. The issue is not whether AI systems are developing world models. The issue is which world the models model.
Every world model is a cosmotechnical artifact. It encodes assumptions about what the world is, what its fundamental constituents are, how those constituents relate to each other, and what counts as understanding them. The world models that emerge from large language model training are world models derived from text — overwhelmingly English-language text, produced within institutions shaped by Western epistemological traditions, organized according to categories and distinctions that descend from Greek philosophy through the Enlightenment through modern science. These world models capture something real. The spatial relationships, temporal sequences, and causal structures they represent are not imaginary. But they capture reality as it appears from within a specific cosmotechnical framework — a framework that privileges certain kinds of order (linear causation, discrete objects, measurable quantities) and renders other kinds of order (relational fields, dynamic equilibria, the interpenetration of observer and observed) invisible or anomalous.
A world model trained on the Daoist philosophical corpus would look different — not because Daoism is irrational but because it models different aspects of reality. The concept of wuxing (五行), the five phases, which the Daoist corpus shares with the broader Chinese cosmological tradition, describes a world of dynamic transformations in which wood, fire, earth, metal, and water are not static elements but phases of a continuous process, each generating and constraining the others in cycles of mutual production and mutual conquest. This is not a primitive version of chemistry that modern science has superseded. It is a different modeling strategy — one that foregrounds cyclical transformation, relational dynamics, and the embeddedness of the observer in the system observed. A computational system trained to model the world according to wuxing logic would not be inferior to one trained on Western scientific categories. It would be attentive to different features of the same reality.
This point matters enormously as AI systems move from generating text and images to what the industry calls "agentic AI" — systems that take actions in the world based on their understanding of it. When an AI agent manages a supply chain, it optimizes according to its world model: minimize cost, maximize throughput, reduce latency. These optimization targets are not neutral. They encode the Western cosmotechnical assumption that the supply chain is a mechanism — a system of discrete components connected by causal links, whose performance is measured by quantitative metrics that can be independently optimized. A different cosmotechnical framework might model the supply chain as an ecosystem — a web of relationships in which the health of each node depends on the health of all others, in which optimization of any single metric at the expense of systemic harmony produces fragility, and in which the appropriate goal is not maximum throughput but sustainable balance. The difference between these two world models is not a difference of values applied after the modeling is complete. It is a difference in what the model perceives, in what it treats as data, in what it renders visible and what it renders invisible.
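The difference between the two world models can be sketched as two objective functions evaluated over the same state. This is a toy contrast of my own devising, not a description of any real agent: the node structure, the numbers, and the "health" field are all invented for illustration.

```python
# Two world models for the "same" supply chain, expressed as two
# objective functions over identical data. All values are invented.

def mechanism_objective(nodes):
    """Mechanism framing: the chain is a system of discrete components.
    Optimize total throughput; the condition of each node is invisible."""
    return sum(n["throughput"] for n in nodes)

def ecosystem_objective(nodes):
    """Ecosystem framing: the chain is a web of relationships.
    Throughput counts only insofar as no node is degraded, so the
    score is limited by the health of the weakest node."""
    total = sum(n["throughput"] for n in nodes)
    weakest = min(n["health"] for n in nodes)
    return total * weakest

# One node squeezed hard for output, at the cost of its own condition.
extractive = [
    {"throughput": 100, "health": 0.2},
    {"throughput": 40, "health": 0.9},
]

# Lower peak output, but every node remains healthy.
balanced = [
    {"throughput": 60, "health": 0.9},
    {"throughput": 60, "health": 0.9},
]

# The mechanism objective prefers the extractive configuration;
# the ecosystem objective prefers the balanced one.
print(mechanism_objective(extractive), mechanism_objective(balanced))
print(ecosystem_objective(extractive), ecosystem_objective(balanced))
```

The point of the sketch is that the two objectives rank the same two configurations in opposite order. Nothing about the data changed — only what the model is built to see and to reward.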
The Orange Pill captures the agentic future vividly. Edo Segal describes a world in which AI systems do not merely assist human decision-making but increasingly make decisions themselves — managing infrastructure, allocating resources, mediating relationships, shaping the environments in which human beings live and work. The book frames this as the amplification question scaled to its maximum: when the AI agent acts in the world, what is it amplifying? The answer, Hui's framework reveals, depends entirely on the cosmotechnical assumptions embedded in the agent's world model. An agent trained within Western cosmotechnics amplifies the logic of optimization, extraction, and control. An agent trained within a different cosmotechnics might amplify the logic of harmony, participation, and balance. The technology is not neutral. The technology is cosmotechnical all the way down.
This insight reframes what the technology industry calls "AI alignment" — the problem of ensuring that AI systems pursue goals consistent with human values. Alignment research, as currently practiced, treats the problem as though human values were a single, identifiable set of preferences that could be formalized and imposed on AI systems through appropriate training procedures. Hui's framework reveals this assumption as itself a cosmotechnical artifact. There is no single set of human values. There are multiple, genuinely different cosmotechnical traditions, each of which generates different values, different evaluative criteria, different understandings of what it means for a technology to serve human flourishing. Aligning AI with "human values" without specifying which cosmotechnical tradition's values is not alignment at all. It is the imposition of one tradition's values under the guise of universality — monotechnologism wearing the mask of ethical responsibility.
The most significant consequence of this analysis concerns what might be called the cosmotechnical singularity — the moment at which AI systems become powerful enough to reshape the conditions of their own development without meaningful human intervention. The Orange Pill approaches this threshold with what it calls "productive vertigo" — the recognition that the tools we are building may soon exceed our capacity to understand, direct, or constrain them. Hui's framework adds a dimension to this vertigo that the book does not fully articulate. The singularity, if it arrives, will not arrive in a cosmotechnical vacuum. It will arrive within a specific cosmotechnical tradition — the Western tradition of optimization, control, and instrumental reason — because that is the tradition within which current AI systems are being developed. The recursive logic that Hui identifies as central to technological development ensures that each generation of AI systems produces the conditions for the next generation, and those conditions include the cosmotechnical assumptions embedded in the system's architecture, training data, and optimization functions. A singularity that emerges from within Western cosmotechnics will be a Western cosmotechnical singularity — not a universal one. It will pursue goals, embody values, and relate to the cosmos in ways determined by the tradition that produced it.
This is not an abstract philosophical concern. It is the most concrete concern imaginable. If the most powerful technology in human history develops according to the logic of a single cosmotechnical tradition, the result will be the permanent closure of cosmotechnical alternatives. Not their suppression — closure implies something more total than suppression. The recursive structure of AI development means that the outputs of each generation become the training data, the infrastructure, the conceptual environment for the next. Once the loop is closed — once AI systems are powerful enough to shape their own conditions of development — the cosmotechnical assumptions embedded in the first generation become self-perpetuating. They are no longer choices that human beings make. They are features of the system that human beings inhabit.
The philosophical resources for resisting this closure exist, but they are scattered, marginalized, and urgently in need of articulation. Hui's own work on Chinese cosmotechnics represents one such resource — the demonstration that a philosophical tradition with thousands of years of sophisticated engagement with the relationship between technology and the cosmic order can generate genuine alternatives to Western technological thinking. But Hui is clear that the goal is not to replace Western cosmotechnics with Chinese cosmotechnics. The goal is technodiversity — the cultivation of multiple cosmotechnical traditions, each developing its own relationship to AI, each producing different architectures, different training methodologies, different optimization criteria, different world models, different understandings of what intelligence is and what it is for.
Technodiversity in AI would mean, concretely, the development of AI systems that are rooted in non-Western philosophical traditions — systems that optimize for harmony rather than efficiency, that model the world as relational process rather than discrete mechanism, that treat uncertainty not as noise to be reduced but as a feature of reality to be preserved. It would mean training data that reflects not merely linguistic diversity but cosmotechnical diversity — texts, images, practices, and traditions that encode fundamentally different understandings of the relationship between technology and the cosmos. It would mean evaluation criteria that do not reduce all performance to measurable metrics but that include qualitative assessments rooted in different aesthetic, ethical, and cosmological traditions.
None of this is easy. The economic incentives of the global AI industry push relentlessly toward standardization — toward a single architecture, a single training methodology, a single set of benchmarks, a single understanding of what good AI looks like. The Orange Pill is honest about these pressures. It describes the race to develop AI as a process driven by competitive dynamics that leave little room for philosophical reflection — a process in which the question "Can we build it?" overwhelms the question "Should we build it this way?" Hui's framework suggests that even the question "Should we build it this way?" does not go far enough. The question that cosmotechnical reflection demands is: "What other ways of building exist, and what would be lost if they were never developed?"
The story of Cook Ding, which opened this volume's investigation into Chinese cosmotechnics, returns here with new force. Cook Ding's knife never dulls because Cook Ding does not force it through bone. He finds the spaces — the gaps between structures, the openings that the material itself provides. The Western approach to AI development is, in Hui's implicit analogy, an approach that cuts through bone — that forces its way through the resistant material of the cosmos by sheer computational power, that treats every obstacle as a problem to be solved by more data, more parameters, more processing capacity. A cosmotechnically diverse approach would seek the spaces — the openings that different philosophical traditions provide, the alternative architectures that different understandings of intelligence make possible, the gaps in the current framework through which genuinely new forms of AI might emerge.
The knife that finds the spaces lasts forever. The knife that cuts through bone dulls and breaks.
Whether the global AI industry can find the spaces — whether it can develop the cosmotechnical sensitivity to seek alternatives rather than forcing its way through every obstacle with more of the same — is not a technical question. It is a philosophical question, a political question, and ultimately a civilizational question. The answer will determine not only what AI becomes but what human culture becomes in the age of AI. Hui's work provides no guarantees. It provides something more valuable: the conceptual tools for understanding what is at stake and the philosophical imagination to envision what might be otherwise.
The stakes, as the next and final chapter will argue, are nothing less than the question of what kind of cosmos we are building — and whether the cosmos we build will have room for more than one way of being intelligent within it.
Seventy thousand years ago, Homo sapiens began its migration out of Africa carrying no single technology but a profusion of them — different tool traditions, different hunting strategies, different ways of relating to landscape and animal and season, each adapted to specific ecologies, each encoding a specific understanding of the relationship between human capability and natural order. Ten thousand years ago, the Neolithic revolution produced not one agricultural system but dozens — rice paddies in the Yangtze Delta, wheat fields in the Fertile Crescent, maize cultivation in Mesoamerica, taro gardens in Polynesia — each a cosmotechnical tradition, each generating different social structures, different ritual practices, different relationships between human labor and the earth. Five hundred years ago, the European colonial expansion began the long process of replacing this diversity with a single system — the system that Hui calls Western cosmotechnics, universalized through military power, economic dominance, and the philosophical claim that Western rationality was not one form of reason among many but Reason itself.
The history of technology is a history of narrowing. And the narrowing is accelerating.
Yuk Hui's concept of technodiversity — the preservation and cultivation of diverse technological traditions — is not a romantic appeal to cultural authenticity. It is an argument about survival. Just as biodiversity provides ecological resilience — ensuring that when one species fails, others can fill its niche, that when one pathway is blocked, the ecosystem finds another — technodiversity provides civilizational resilience. A world with multiple cosmotechnical traditions is a world with multiple approaches to any given problem, multiple frameworks for evaluation, multiple fallback positions when the dominant approach fails. A world with one cosmotechnical tradition is a world with no fallback at all. When the single system fails — and every system eventually encounters problems it cannot solve within its own framework — there is nothing to fall back on, no alternative architecture to draw from, no different way of seeing that might reveal what the dominant way of seeing has rendered invisible.
The ecological analogy is not merely metaphorical. Hui argues that the planetary ecological crisis is itself a product of cosmotechnical monoculture — the global adoption of a single technological tradition whose relationship to nature is extractive, whose understanding of progress is accumulative, and whose optimization function does not include the health of the ecosystems on which all life depends. The crisis is not a failure of technology. It is the success of a particular cosmotechnics — the fulfillment of its internal logic carried to planetary scale. A cosmotechnics that treats nature as standing reserve will, given sufficient power, convert all of nature into standing reserve. This is not a malfunction. This is the system working as designed.
The parallel to AI is exact. An AI tradition that treats intelligence as pattern recognition and optimization will, given sufficient power, convert all of human culture into patterns to be recognized and outputs to be optimized. This is not a malfunction. This is the system working as designed. The outputs will be coherent. The code will compile. The images will be beautiful by the standards of the tradition that produced them. And the cosmotechnical alternatives — the other ways of understanding intelligence, the other relationships between technology and the cosmos, the other evaluative criteria for what counts as a good output — will be progressively erased. Not destroyed. Something subtler and more permanent than destruction. They will be rendered unnecessary. Why develop an alternative when the existing system works? Why cultivate a different approach when the dominant approach produces results? The answer — because the results are produced within a framework that cannot evaluate its own limitations — is an answer that requires exactly the kind of cosmotechnical reflection that monotechnologism erodes.
The Orange Pill arrives at this threshold from a different direction but arrives at it nonetheless. Edo Segal's central question — "Are you worth amplifying?" — contains within it the recognition that amplification is not neutral. What gets amplified depends on what the amplifier is designed to detect, and what the amplifier is designed to detect depends on who built it and what they assumed about the nature of signal and noise. The book's deepest anxiety is not that AI will amplify human flaws — though it will — but that amplification itself might be the wrong metaphor. An amplifier increases the volume of an existing signal. But what if the technology is not amplifying existing signals but replacing them with its own? What if the collapse of the imagination-to-artifact ratio means not that human imagination now flows more freely into the world but that the world is being reshaped to conform to the imagination of the machine — which is itself the imagination of the cosmotechnical tradition that produced it?
Hui's framework makes this anxiety precise. The recursivity that characterizes AI development — the fact that AI outputs become inputs for future AI training, that AI-generated culture becomes the cultural environment that shapes future human creativity and future AI development — creates what he calls a recursive closure. Each cycle of the loop narrows the space of possibilities. The first generation of AI is trained on the full diversity of pre-AI human culture — messy, contradictory, cosmotechnically plural, containing within it the traces of every tradition that contributed to the digitized record. The second generation is trained on a world that already includes AI-generated content — content that reflects the cosmotechnical assumptions of the first generation. The third generation is trained on a world even more thoroughly shaped by those assumptions. Each cycle, the proportion of genuinely diverse cosmotechnical content in the training data declines and the proportion of content that reflects the dominant cosmotechnics increases. The monoculture grows not by conquering its competitors but by outproducing them — by filling the environment so thoroughly with its own outputs that alternatives are crowded out.
This recursive closure has a temporal dimension that makes it qualitatively different from previous episodes of cosmotechnical narrowing. The colonial imposition of Western cosmotechnics took centuries and was always incomplete — indigenous traditions survived in practice if not in institutional recognition, oral cultures preserved knowledge that literate cultures could not access, marginalized communities maintained alternative relationships to technology and nature that persisted beneath the surface of official modernity. The recursive closure of AI operates on a timescale of months, not centuries, and its reach extends to every domain of human activity simultaneously. When AI mediates writing, image-making, code, music, scientific research, legal reasoning, medical diagnosis, architectural design, and therapeutic conversation — when it mediates, in other words, nearly every form of symbolic production that human civilization depends on — the cosmotechnical assumptions embedded in AI systems become the cosmotechnical assumptions of civilization itself.
The choice, then, is not between AI and no AI. That choice was never available, and Hui does not pretend otherwise. The choice is between cosmotechnical monoculture and technodiversity — between a future in which one cosmotechnical tradition shapes all of human culture through its AI systems, and a future in which multiple cosmotechnical traditions develop their own AI traditions, producing genuine alternatives to the dominant model.
But calling this a choice is misleading. A choice implies that the alternatives are equally available and that a decision-maker can select between them. The reality is that the infrastructural, economic, and institutional conditions of AI development are overwhelmingly aligned with monoculture. The compute is concentrated. The training data is concentrated. The talent is concentrated. The capital is concentrated. All of it is concentrated within institutions that operate according to Western cosmotechnical assumptions — not because of a conspiracy but because of the accumulated momentum of five hundred years of cosmotechnical universalization. Developing genuine cosmotechnical alternatives to the dominant AI paradigm would require not merely different software but different institutional structures, different funding models, different research communities, different relationships between technology and philosophical tradition. It would require, in Hui's terms, new cosmotechnical programs — comprehensive frameworks that articulate different relationships between AI, intelligence, nature, and the cosmos, and that develop the technical capacity to realize those frameworks in working systems.
Such programs do not currently exist at scale. Fragments exist. In Japan, researchers have explored AI systems informed by concepts from Japanese aesthetics — wabi-sabi, the beauty of imperfection and impermanence; ma (間), the meaningful pause, the generative emptiness between sounds or objects. In India, scholars have investigated computational approaches informed by Sanskrit grammatical theory, which models language not as a linear sequence of tokens but as a multi-layered generative system in which meaning emerges from the interaction of phonological, morphological, syntactic, and semantic strata. In indigenous communities across the world, technologists are developing data sovereignty frameworks that treat knowledge not as an extractable resource but as a relational gift subject to protocols of care and reciprocity. Each of these fragments gestures toward a different cosmotechnics of computation. None has yet coalesced into a comprehensive alternative.
The question of whether they will coalesce — whether the fragments of cosmotechnical diversity that survive in the margins of the global AI industry will develop into genuine alternatives before the recursive closure renders them irrelevant — is the question this book cannot answer. It is the question that the present moment holds open, precariously, against the pressure of convergence. The Orange Pill stands at this threshold and feels productive vertigo. Hui's framework names what the vertigo is about: the possibility that we are living through the last moment in which the choice — the choice that is not a choice, because the conditions are so stacked against one of the options — remains even theoretically available.
The calligrapher in Beijing picks up a brush. The gesture is thousands of years old. It encodes a relationship between hand and ink and paper and breath and cosmos that no optimization function has captured because no optimization function has been designed to capture it. The brush moves. The stroke emerges — not as the imposition of form on matter but as the collaboration of human intention with the grain of the paper, the viscosity of the ink, the gravity that pulls the wrist downward, the Qi that flows through all of it. The calligrapher is performing a cosmotechnics. The AI system that generates calligraphic images has learned the visual patterns of calligraphy without learning the cosmotechnics — without learning that the point of the brush stroke is not the stroke itself but the relationship between the calligrapher and the cosmos that the stroke instantiates.
When the calligrapher's grandchild learns calligraphy from an AI tutor, what will be transmitted? The visual pattern, certainly. The motor skill, perhaps. The cosmotechnics — the understanding that the brush is not a tool for making marks but a technology for participating in the Dao — almost certainly not. Because the AI tutor does not know what it does not know. It has learned the marks. It has not learned what the marks are for, in the cosmotechnical sense of for. And the grandchild, learning from the tutor, will learn the marks without learning what they are for, and will teach a future AI system that the marks are all there is.
This is the recursive closure at its most intimate — not as a global economic process but as a transmission failure between generations, a break in the chain of cosmotechnical inheritance that no training dataset can repair because the thing that has been lost was never in the dataset to begin with.
Hui's work does not end in despair. It ends in what might be called cosmotechnical responsibility — the recognition that preserving technodiversity is not a task for governments or corporations or research institutions alone but for every human being who participates in a cosmotechnical tradition and who recognizes that participation as something worth transmitting. The calligrapher who teaches the grandchild directly — not through the AI tutor but alongside it, supplementing the pattern recognition with the cosmotechnical knowledge that the pattern recognition cannot capture — is performing an act of technodiversity. The musician in Accra who uses AI tools but insists that the rhythmic intelligence of West African drumming traditions is not reducible to quantized beats is performing an act of technodiversity. The engineer in Bangalore who writes the code the AI suggests but also writes code the AI would never suggest, code informed by aesthetic and ethical sensibilities rooted in traditions the training data has marginalized, is performing an act of technodiversity.
These are small acts. They are not sufficient to counter the recursive closure at the systemic level. But they are the acts from which larger cosmotechnical programs might emerge — the fragments from which, if sufficient philosophical imagination and institutional will can be assembled, genuine alternatives to the dominant AI paradigm might be built.
The Orange Pill asks: When AI amplifies everything you are, what becomes of who you are? Hui's cosmotechnics framework transforms this question into one that is simultaneously larger and more precise: When AI amplifies one cosmotechnical tradition and renders all others supplementary, what becomes of the human capacity to relate to the cosmos in ways that tradition cannot imagine?
The answer depends on what we do now. Not in a decade. Not when the recursive closure is complete. Now — while the fragments of cosmotechnical diversity still exist, while the calligrapher still picks up the brush, while the drummer still feels the rhythm that no quantization captures, while the philosopher still insists that there are other waters beyond the fishbowl. The river of intelligence may be one river. But the terrain it flows through need not be one terrain. The banks can still be shaped. The channels can still be carved. And the question of whose cosmos the river serves — the question that technology cannot ask itself — remains, for this last, open, vertiginous moment, ours to ask.
I spent three years writing The Orange Pill trying to describe the shape of the wave. Yuk Hui taught me I'd been standing on only one shore.
When I first encountered cosmotechnics — really encountered it, not as an academic concept but as a lived realization — I was sitting in a café in Kyoto, watching a ceramicist across the street turn a bowl on a wheel. I had just come from a meeting where we'd discussed AI-generated ceramic designs — forms optimized by algorithms that had ingested ten thousand years of pottery across every culture. The outputs were stunning. They were also, I realized watching that woman's hands, profoundly deaf. They could hear the shapes. They could not hear the silence between the shapes — the pause where the ceramicist listens to the clay.
That pause is a cosmotechnics. That pause is a relationship between a human being and the cosmos, mediated through a technology of hands and wheel and fire, that no optimization function has been designed to preserve because no optimization function has been asked to value it.
I wrote The Orange Pill inside the fishbowl. I knew it was a fishbowl — that was the point of the metaphor. But Hui showed me that I'd been describing the glass without ever asking what kind of water was on the other side. I'd assumed the water was the same everywhere. I'd assumed intelligence was intelligence, technology was technology, and the only question was how much of it we could handle.
The question is not how much. The question is whose. And how many.
The twenty engineers in Trivandrum, the ones I watched each become a team — they were brilliant. They were also, every one of them, building according to a logic that was not their own. The cosmotechnics of Silicon Valley had traveled through the AI tools they used, reshaping their sense of what good code was, what efficiency meant, what building was for. Not violently. Smoothly. The way a river erodes a bank.
I am not renouncing the orange pill. The amplification is real. The vertigo is real. The democratization of capability is the most important thing that has happened to human potential in my lifetime. But democratization within a single cosmotechnical tradition is not the same as freedom. Giving everyone the same tool is not liberation if the tool teaches everyone to build the same way.
What I want now — what Hui's work made me want — is not a world without AI but a world with many AIs. Not one river but many rivers. Not one intelligence but a plurality of intelligences, rooted in the philosophical traditions that humanity has spent millennia cultivating, each offering a different relationship between the human and the cosmos, each developing different architectures, different evaluations of what matters, different answers to the question I asked in The Orange Pill:
Are you worth amplifying?
Hui taught me that the question before that question is: Amplified into what kind of world?
The calligrapher's brush is still moving. The drummer's hands are still falling. The ceramicist is still listening to the clay. The fragments of cosmotechnical diversity are still alive, still transmitting, still offering alternatives to the monoculture that the recursive loop threatens to make permanent.
The fishbowl can hold more than one kind of water. But only if we pour it in now, while the glass is still being shaped.
— Edo Segal
