By Edo Segal
The map I trusted most was the one I drew myself.
Every product I've ever built started the same way. Whiteboard. Boxes. Arrows connecting them. Architecture diagrams, org charts, roadmaps — clean representations of messy realities, drawn from above, where everything looks governable. I never questioned whether the picture was complete. I questioned whether the boxes were in the right order. I optimized the map. I almost never asked what the map was missing.
James C. Scott spent his entire career asking what the map was missing.
Scott was a political scientist who studied what happens when powerful institutions — governments, corporations, planning authorities — impose rational, well-designed, scientifically informed plans on the complex, living systems they govern. The Prussian state looked at its tangled, thriving forests and saw an accounting problem. So its foresters simplified: cleared the old growth, planted monoculture rows of spruce, measured and projected with precision. First-generation yields were spectacular. The second generation died. The Germans coined a word for it. *Waldsterben*. Forest death. The soil, stripped of the organic complexity that had sustained it, could no longer support the trees.
The foresters were not stupid. They were applying the best science of their era with genuine skill and sincere intentions. They understood the trees. They did not understand the forest — the invisible network of relationships that made the visible system work. Their knowledge was precise. It was also catastrophically incomplete.
I brought Scott into this series because every comprehensive AI strategy I have read — every corporate governance framework, every government regulation, every university policy — carries the structural fingerprint of those Prussian foresters. Rational. Well-intentioned. Designed from above by people who understand AI as a technical phenomenon. And missing the knowledge that matters most: the local, embodied, contextual understanding of the practitioners who actually use these tools eight hours a day and know, in their bones, where the systems work and where they quietly fail.
Scott called that missing knowledge *métis* — practical wisdom built through sustained engagement with a specific domain. It is the senior engineer who can feel when a codebase is fragile before she can explain why. It is the teacher who knows when a student has wrestled with an idea versus received it prepackaged. It is everything the productivity dashboard cannot measure and the compliance audit cannot detect.
The AI transition is being planned. The question Scott forces you to ask is whether the planners are listening to the people who live inside the plan.
The map is not the territory. It never was.
— Edo Segal × Opus 4.6
James C. Scott (1936–2024) was an American political scientist, anthropologist, and Sterling Professor of Political Science at Yale University. Born in Mount Holly, New Jersey, he spent decades conducting fieldwork across Southeast Asia, producing landmark studies of peasant politics, state power, and the dynamics of resistance. His major works include *The Moral Economy of the Peasant* (1976), *Weapons of the Weak: Everyday Forms of Peasant Resistance* (1985), *Domination and the Arts of Resistance* (1990), *Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed* (1998), *The Art of Not Being Governed* (2009), and *Two Cheers for Anarchism* (2012). Scott's concepts — legibility, high modernism, métis, and the "weapons of the weak" — reshaped how scholars and practitioners understand the relationship between institutional power, local knowledge, and the unintended consequences of top-down planning. His influence extended well beyond academia into urban design, technology criticism, development policy, and organizational theory. He died in July 2024, months before the AI transition his frameworks so precisely illuminate.
In the late eighteenth century, the Prussian state looked at its forests and saw a problem. Not an ecological problem — the forests were thriving, a tangled riot of oak, beech, pine, spruce, fungi, lichens, deadfall, underbrush, deer, boar, songbirds, and the thousand interrelationships between them that no single mind could catalog. The problem was administrative. The state needed to know how much timber it had, how much revenue the forests would produce, and how to maximize that production over time. The forests, in their natural complexity, resisted this knowledge. They were illegible — too various, too interconnected, too alive to be reduced to a ledger entry.
So the foresters simplified. They cleared the old-growth tangle. They planted monocultures of Norway spruce in neat, evenly spaced rows. They measured growth rates, calculated harvest cycles, projected yields with scientific precision. The *Normalbaum* — the standardized tree — became the unit of account. Everything that was not the Normalbaum was waste: the underbrush that harbored insects and small mammals, the deadfall that returned nutrients to the soil, the biodiversity that maintained the ecological relationships on which the forest's health depended. All of it was cleared in the name of legibility.
The first generation of the managed forest produced spectacular results. Yields exceeded projections. Revenue flowed. The Prussian model was exported across Europe and then the world. Scientific forestry became the gold standard of rational resource management.
The second generation died.
The Germans coined a word for what happened: *Waldsterben* — forest death. The soil, stripped of the organic complexity that had maintained its fertility for millennia, could no longer support the trees that grew in it. The pest-control services that biodiversity had provided for free collapsed when the biodiversity was removed. Monocultures, it turned out, were exquisitely vulnerable to the diseases and parasites that complex ecosystems suppress through the sheer variety of their inhabitants. The forests that had been made legible — countable, predictable, governable — were also made fragile. The simplification that enabled governance destroyed the conditions that enabled life.
James C. Scott opened *Seeing Like a State* with this parable because it contained, in miniature, the structure of every catastrophe his career would document. The Prussian foresters were not stupid. They were applying the best science of their era with genuine skill and sincere intentions. They understood the trees. What they did not understand — what the structure of their knowledge prevented them from understanding — was the forest. The relationships between things. The invisible network of dependencies that made the visible system work. They could see the trees because trees are countable. They could not see the mycorrhizal networks beneath the soil, the nutrient cycling, the predator-prey dynamics, the moisture regulation, the windbreak effects, the ten thousand interactions that constituted the forest as a living system rather than a timber farm. Their knowledge was precise. It was also catastrophically incomplete. And because they possessed the institutional power to act on their incomplete knowledge at scale, the incompleteness became a death sentence.
Scott called this pattern high modernism: the ideology that complex, functioning human arrangements can be redesigned from above by administrators armed with technical knowledge and rational planning. High modernism is not ignorance. It is a particular kind of intelligence — the intelligence of the administrator, the planner, the systems architect — applied with such confidence that it overrides the messier, less articulable, but often more complete knowledge of the people who actually inhabit the systems being redesigned.
The high modernist looks at a medieval city — its winding streets, its irregular lot sizes, its organic accumulation of centuries of human activity — and sees chaos. An inefficiency to be rationalized. Le Corbusier looked at Paris and proposed demolishing it, replacing the tangle with geometric towers set in open parkland, connected by superhighways, organized according to function: living here, working there, recreation in this quadrant, commerce in that one. Brasília was built on these principles. It is, by most accounts of the people who live in it, a failure — a city that functions on paper and suffocates in practice because the planners who designed it could not see, from where they sat, the informal social infrastructure that makes cities livable: the chance encounters on crowded sidewalks, the corner shops that serve as community gathering points, the mixed-use buildings where living and working and socializing interpenetrate in ways that no zoning map can capture.
Soviet collectivization followed the same logic. The peasant agriculture of the Russian countryside was, from the planner's perspective, appallingly inefficient. Small plots, irregular boundaries, traditional techniques, local customs governing crop rotation and grazing rights — all of it resistant to the kind of centralized management that industrialized agriculture required. The state simplified. It consolidated the small plots into collective farms. It imposed standardized techniques. It replaced the peasant's accumulated, local, experiential knowledge with the agronomist's scientific models. The result was the Ukrainian famine of 1932-33, which killed millions. Not because the science was wrong in the abstract, but because the science did not contain the knowledge that the peasants possessed — the knowledge of this specific piece of land, its micro-variations in drainage, its particular soil composition, the frost patterns that shifted from one hillside to the next. The plan was rational. The reality was local. And when the rational plan was imposed on local reality with sufficient force, local reality broke.
The pattern — technical knowledge overriding local knowledge, imposed by institutional power, producing catastrophe — is not a historical curiosity. It is the structural template for how complex systems fail when governed from above. And it is the structural template that the artificial intelligence transition is poised to reproduce.
Consider the comprehensive AI strategies now being drafted in every government ministry, every corporate boardroom, every university administration building in the developed world. The EU AI Act — 458 pages of regulation, years in the drafting, the most ambitious attempt to govern artificial intelligence ever undertaken — categorizes AI systems by risk level, mandates transparency requirements, imposes compliance obligations, creates oversight bodies. It is a serious, thoughtful, well-intentioned document. It is also, in Scott's terms, a Normalbaum: a simplified, standardized, legible representation of a reality that is orders of magnitude more complex than the representation can capture.
The AI Act classifies. It creates categories — "high-risk," "limited-risk," "minimal-risk" — and assigns regulatory requirements accordingly. But the actual effects of AI systems do not sort neatly into risk categories. The same system can be low-risk in one context and catastrophic in another. A language model that generates competent marketing copy is a different beast from the same language model deployed to generate medical advice, legal analysis, or educational curricula — not because the model is different, but because the context is different. The institutional setting, the users' expertise, the stakes of error, the feedback mechanisms that catch mistakes before they propagate — all of these are local, contextual, and invisible from the altitude at which the regulation was written.
Corporate AI governance frameworks exhibit the same structural features. A Fortune 500 company's "Responsible AI Framework" typically includes risk assessment matrices, ethical guidelines, review boards, and compliance checkpoints. These artifacts are legible. They can be presented to investors, regulators, and the public as evidence that the company is governing its AI responsibly. They are the equivalent of the forester's tree count: precise, measurable, and missing the forest entirely.
The people who understand how AI actually affects their organization — the engineers who know where the model hallucinates, the customer service representatives who know which AI-generated responses enrage callers, the product managers who know which use cases work and which produce subtle harm that no metric captures — possess knowledge that no governance framework solicits. Their knowledge is métis: practical, contextual, embodied, built through daily engagement with the systems the framework is supposed to govern. The framework governs without this knowledge. It governs like the Prussian forester: precisely, confidently, and blind to the relationships that determine whether the system actually works.
University AI policies follow the same pattern with even less self-awareness. A dean's office issues a policy on "appropriate use of generative AI in academic work." The policy categorizes: AI-assisted drafting is permitted in these courses, prohibited in those, restricted in others. The categories are clean. They are also useless to the instructor who watches a student submit an essay that is technically original — the student prompted, revised, re-prompted, integrated, restructured — but that represents no genuine encounter with the material. The policy cannot see what the instructor can see, because what the instructor sees is not a violation of a category but a quality of engagement that is visible only to someone who knows this student, this assignment, this subject, and the specific cognitive labor that genuine learning requires. That knowledge is local. It lives in the teacher's body, in the accumulated experience of watching thousands of students struggle with ideas, in the ability to distinguish between a student who has wrestled with a concept and a student who has received the concept prepackaged. No policy framework can capture it. But without it, the policy is the monoculture spruce plantation: legible, manageable, and dead.
The high modernist temptation is not the temptation of the foolish. It is the temptation of the competent. The Prussian foresters were excellent scientists. The Soviet planners were often brilliant theorists. Le Corbusier was a genuine visionary. Robert Moses built infrastructure of astonishing ambition. Each possessed real knowledge and real capability. The catastrophe did not arise from ignorance. It arose from the conviction that the knowledge possessed by the planner — technical, abstract, systematic — was sufficient. That it did not need to be supplemented by the messy, inarticulate, context-dependent knowledge of the people who would live inside the planned system. The conviction that the view from above was not merely useful but complete.
This conviction is rampant in the AI discourse. The technologists who build AI systems understand those systems with genuine depth — the architecture, the training dynamics, the failure modes, the capability frontiers. The policymakers who regulate AI systems understand governance with genuine sophistication — the institutional mechanisms, the enforcement challenges, the international coordination requirements. The executives who deploy AI systems understand markets with genuine acuity — the competitive pressures, the adoption dynamics, the revenue implications.
None of them — not the technologists, not the policymakers, not the executives — possess the knowledge of the practitioner who uses AI eight hours a day and has developed, through that sustained engagement, an embodied sense of where the tool is trustworthy and where it is not, when its outputs feel genuine and when they feel hollow, how it changes the texture of the work in ways that no metric captures and no framework anticipates.
The practitioner's knowledge is the underbrush. It is the deadfall, the mycorrhizal network, the songbird population, the thousand interactions that make the forest a forest rather than a timber farm. It is what the high modernist plan cannot see, because it is too local, too contextual, too embodied to survive the abstraction required to make it legible to the governing authority.
And yet it is precisely this knowledge — this métis, to use the term Scott adopted — that determines whether the AI transition produces a second-generation forest or a second-generation graveyard. The plan will be implemented regardless. The institutional power exists. The comprehensive strategies will be published, the compliance frameworks will be deployed, the risk categories will be imposed. The question is whether the knowledge that the plans cannot contain will be sought, valued, and integrated — or whether it will be cleared like underbrush, dismissed as anecdotal, overridden by the confident rationality of people who can see the trees but not the forest.
Scott spent his career documenting what happens when the underbrush is cleared. The record is unambiguous. The forests die. The cities suffocate. The farms produce famine. The plans, executed with precision and confidence, destroy the conditions for the life they were supposed to optimize.
The AI transition does not have to follow this pattern. But it will follow this pattern unless the people who govern it learn the lesson that Scott spent forty years trying to teach: that the knowledge required to govern complex systems well is not concentrated at the top. It is distributed among the people who inhabit those systems daily. And the governance that ignores this knowledge — however rational, however comprehensive, however well-intentioned — is not governance at all. It is the imposition of a simplified map on a territory that the map was never adequate to describe.
---
There is a kind of knowing that resists formalization. Not because it is primitive or pre-scientific, but because it is too finely adapted to its context to survive extraction from that context. The Greeks had a word for it: métis. Odysseus possessed it — not the brute strength of Achilles or the strategic brilliance of Agamemnon, but the cunning, the adaptability, the feel for the situation that allowed him to navigate circumstances no amount of planning could anticipate. Métis is the knowledge of the sailor who reads the sea by its color and the pattern of its swells. The knowledge of the midwife who feels the position of the child through the mother's abdomen. The knowledge of the carpenter who can tell by the sound of a saw whether the blade is set correctly.
In every case, the knowledge is real — as real as any theorem or technical specification. But it differs from formal knowledge in three ways that make it systematically invisible to the institutions that govern complex systems.
First, métis is local. It applies to this piece of land, this stretch of river, this codebase, this team, this market. The farmer who knows when to plant by the feel of the soil does not know when all farmers should plant. She knows when she should plant, on this hillside, given this year's frost pattern and this season's moisture. Her knowledge is precise, but its precision is contextual. It cannot be generalized without losing the very specificity that makes it valuable.
Second, métis is embodied. It lives in the practitioner's body — in the hands, the eyes, the nervous system — not in a document or a database. The glassblower who can feel when the melt has reached the right viscosity, the pilot who senses the aircraft's attitude before the instruments register the change, the surgeon whose hands know the difference between healthy tissue and diseased tissue — all of them possess knowledge that was deposited through years of physical engagement with a resistant medium. The knowledge is in the muscle memory, the calibrated senses, the reflexes that have been shaped by thousands of iterations of practice. Attempts to codify this knowledge into explicit rules invariably produce something that looks like the knowledge from the outside but lacks its essential quality: the ability to respond to the unexpected.
Third, métis is dialogical. It develops through a conversation between the practitioner and the environment — a back-and-forth in which each action produces a response, and the response informs the next action. The potter does not plan the pot and then execute the plan. She begins with an intention and then adjusts, continuously, as the clay responds to her hands and her hands respond to the clay. The plan and the execution are not sequential. They are simultaneous, each informing the other in real time. This is fundamentally different from the engineering model — design, then build — that formal knowledge assumes.
Scott adopted the concept of métis because it named something his research kept encountering: the gap between what institutions knew about the systems they governed and what the inhabitants of those systems knew. The gap was not a failure of data collection. It was structural. The knowledge that institutions needed most — the fine-grained, contextual, adaptive knowledge that determined whether their policies would work or fail — was precisely the knowledge that institutional structures could not accommodate. It was too local to aggregate. Too embodied to document. Too dialogical to capture in a policy framework. It existed only in the people who had developed it, and it could not be extracted from them without being destroyed.
The software engineer whom Edo Segal describes in *The Orange Pill* — the one who could "feel a codebase the way a doctor feels a pulse" — possesses métis. Her knowledge of the system is not primarily analytical. It is primarily embodied. She has spent years inside this particular codebase, debugging its failures, tracing its dependencies, building mental models of its behavior that are too complex and too context-specific to be fully articulated. When she says the system "feels fragile," she is not speaking metaphorically. She is reporting a genuine perceptual experience — the output of a pattern-recognition system (her nervous system) that has been trained on thousands of hours of engagement with this particular domain. She knows things about the system that the system's own documentation does not record, because the documentation was written to be legible and her knowledge was built to be true.
This distinction — between knowledge that is legible and knowledge that is true — is the crux of Scott's contribution to the AI discussion. AI systems produce knowledge that is extraordinarily legible. Their outputs are textual, structured, searchable, comparable. They can be evaluated, audited, and optimized by anyone with access. This legibility is one of AI's greatest strengths. It is also, from Scott's perspective, its greatest danger — because the legibility of AI output creates the illusion that the output captures everything that matters, when in fact it captures only the portion of reality that can survive the translation into legible form.
Consider what happens when an organization measures the productivity of AI-assisted engineers. The metrics are clear: lines of code produced, features shipped, tickets closed, time-to-completion ratios. These metrics are legible. They can be aggregated, compared across individuals and teams, presented in dashboards that make individual performance visible to management with unprecedented granularity. The twenty-fold productivity multiplier that Segal describes is visible precisely because it is legible — it shows up in the metrics that organizations already track.
What the metrics do not show is the métis that the engineer possesses or fails to possess. The judgment about which feature to build and which to defer. The architectural intuition about which design will scale and which will collapse under load. The sense — developed through years of friction-rich engagement with systems that resist — of when an AI-generated solution is genuinely sound and when it merely looks sound, when the code will hold under stress and when it is a brittle structure that will shatter at the first unexpected input. This knowledge does not appear in any productivity dashboard. It is invisible to the governance framework. And because it is invisible, it is systematically undervalued — treated as a nice-to-have rather than the critical infrastructure it actually is.
Scott documented this dynamic across dozens of domains. In agriculture, the Green Revolution introduced high-yield crop varieties that performed spectacularly under controlled conditions and failed catastrophically when deployed in the variable, unpredictable environments where actual farming occurs. The varieties were engineered for optimal conditions. The farmer's métis was adapted to actual conditions — the drought that comes every seven years, the pest that appears when the rains arrive late, the particular disease that afflicts this variety of rice on this particular soil type. The formal knowledge produced higher yields on the research station. The local knowledge produced reliable yields in the field. When the formal knowledge overrode the local knowledge — when the planners mandated the new varieties and prohibited the traditional ones — the reliable yields disappeared and the spectacular yields never materialized, because the conditions that produced them existed only on the research station.
In urban planning, the comprehensive redesigns that razed "slums" and relocated their inhabitants to modern housing projects produced environments that were physically superior — better plumbing, more light, more space — and socially devastating. The planners could see the physical infrastructure. They could not see the social infrastructure: the informal networks of mutual aid, the eyes on the street that provided safety, the mixed-use spaces that allowed communities to form organically. The social infrastructure was métis — built through years of inhabitation, invisible to anyone who did not inhabit the space, and destroyed the moment the inhabitants were displaced.
The pattern repeats with eerie precision in the AI-assisted workplace. When Claude Code handles the implementation — the syntax, the debugging, the mechanical labor of translating design into working software — it removes the friction through which a specific form of métis is developed. The junior engineer who used to spend hours debugging a null pointer exception and, in the process, came to understand how memory allocation actually works in this particular system, now receives the working code directly. The tedium is gone. The learning that was embedded in the tedium is also gone.
This is not an argument against AI. It is an argument for understanding what AI removes when it removes friction — and for recognizing that the thing removed is not merely inefficiency. It is the soil in which a particular kind of knowing grows. The friction was not just an obstacle between the engineer and the code. It was the medium through which the engineer's relationship to the code was built — the resistance that deposited, layer by layer, the embodied understanding that eventually became the ability to feel whether a codebase was sound.
When Marion Fourcade and Kieran Healy examined what they called "high-tech modernism" in a landmark 2023 paper, they identified a genuine complication in Scott's framework. AI systems, they observed, "incorporate tacit information in ways that are sometimes spookily right, and sometimes disturbing and misguided." A language model trained on billions of words has, in some sense, absorbed the accumulated métis of millions of practitioners — their patterns of reasoning, their domain-specific vocabularies, their implicit models of how systems behave. The model cannot feel a codebase the way the senior engineer feels it. But it can produce outputs that resemble what the senior engineer would produce, because it has been trained on the traces that millions of senior engineers have left in the written record.
This creates an epistemic situation that Scott did not anticipate and that his framework struggles to accommodate. The model's outputs are not métis — they are not local, not embodied, not developed through sustained engagement with a specific domain. But neither are they pure techne — formal, rule-based, context-independent knowledge. They occupy what Henry Farrell, writing after Scott's death, called an uncanny middle ground: "not techne, even if they are not métis either." They are pattern-matched approximations of métis, produced at scale, without the contextual specificity that makes genuine métis reliable.
The danger is not that these approximations are worthless. They are often remarkably good — good enough to pass as genuine understanding in contexts where the difference does not matter. The danger is that the approximation displaces the real thing. When the AI-generated output looks like the output of métis, organizations stop investing in the conditions that produce actual métis. Why wait years for a junior engineer to develop embodied knowledge of the system when the AI can produce competent code today? Why invest in the slow, expensive, friction-rich process of developing practitioner expertise when the productivity dashboard shows that AI-assisted workers are already twenty times more productive?
The answer — Scott's answer, refined across four decades of fieldwork — is that the productivity dashboard is the Normalbaum. It counts the trees. It cannot see the forest. And the forest's health depends on exactly the relationships that the count cannot capture: the mycorrhizal networks of institutional knowledge, the biodiversity of cognitive approaches, the deadfall of failed experiments that returns nutrients to the organizational soil. Strip these away in the name of legible productivity, and the first generation will look spectacular. The second generation will reveal what was lost.
The métis of AI use is developing right now, in the hands of millions of practitioners who are learning, through daily engagement, things about these tools that no policy document captures. Where the model hallucinates with dangerous confidence. Where its outputs are trustworthy and where they require verification. How to read the subtle difference between an AI response that reflects genuine reasoning and one that merely pattern-matches toward plausibility. This knowledge is local, embodied, and dialogical — precisely the kind of knowledge that Scott spent his career defending against the planners who believed their abstract models could replace it.
The question is whether this knowledge will be sought, valued, and integrated into the governance structures that shape the AI transition — or whether it will be cleared like underbrush, dismissed as anecdotal, and overridden by the confident rationality of comprehensive strategies designed by people who have never spent an afternoon watching what actually happens when a human being and a language model try to solve a hard problem together.
---
Before the cadastral map, land tenure in much of Europe was a thicket of local arrangements so complex that only the people who lived inside them could navigate them. A family might hold seasonal grazing rights on one hillside, cultivation rights on a strip of bottomland, and gleaning rights on a neighbor's field after harvest — all governed by oral agreements, customary law, and the collective memory of the community. The arrangements were adaptive: they had evolved over centuries to accommodate the specific ecology, social structure, and economic needs of each locality. They were also illegible — invisible to any authority that did not possess the intimate local knowledge required to interpret them.
The state needed legibility. Legibility was the precondition for taxation, conscription, and governance of every kind. A state that cannot see its citizens cannot tax them, draft them, regulate them, or serve them. So the state simplified. It replaced the tangle of customary tenure with standardized property maps: each parcel assigned to a single owner, each boundary surveyed and registered, each transaction documented. The cadastral map made land visible. It made land governable. And in the process, it destroyed the complex, adaptive, locally evolved system of land use that had sustained communities for generations.
Scott did not argue that legibility is always wrong. States need to see, and much of what states accomplish — infrastructure, public health, education — requires the ability to aggregate, categorize, and act on simplified representations of complex realities. The problem is not legibility itself. The problem is legibility as a substitute for understanding. The map that replaces the territory. The metric that replaces the knowledge. The dashboard that replaces the judgment.
Every act of legibility, Scott demonstrated, involves a double movement. First, simplification: the complex reality is reduced to a set of categories that the governing authority can process. Second, inscription: the simplified categories are treated as the reality itself, and the complex reality they were derived from is forgotten, dismissed, or actively suppressed. The forest becomes a timber count. The city becomes a zoning map. The worker becomes a productivity metric. In each case, the simplification produces a representation that is useful for governance and catastrophically incomplete as a description of what is actually happening.
AI does not merely continue the legibility project that Scott documented. It accelerates legibility to a velocity and granularity that previous administrative technologies could not approach.
Consider what a language model makes legible about knowledge work. Before AI coding assistants, a software engineer's cognitive process was opaque to everyone except the engineer herself. She sat at her desk, stared at the screen, typed, deleted, typed again, consulted documentation, walked to the coffee machine, stared at the ceiling, returned to the desk, and eventually produced code. The process was visible only in its output. The thinking was private. The dead ends, the false starts, the moments of insight that arrived after twenty minutes of apparently doing nothing — all of it was illegible. A manager could see that the engineer produced a feature in three days. The manager could not see what happened inside those three days, what cognitive labor produced the feature, what knowledge was tested and refined and deposited in the process of building it.
Claude Code changes this. When the engineer works with an AI assistant, the interaction is logged. Every prompt is recorded. Every response is traceable. The sequence of questions the engineer asks, the approaches she considers, the mistakes she catches and the ones she does not — all of it becomes, in principle, available for inspection. The cognitive process that was previously opaque has been rendered transparent. The thinking has been made legible.
This legibility is attractive to organizations for the same reasons that the cadastral map was attractive to states. It makes the previously ungovernable governable. A manager can now observe, in real time, how an engineer approaches a problem — which questions she asks, which strategies she pursues, how she responds to the AI's suggestions. Performance evaluation, once dependent on output metrics supplemented by subjective managerial assessment, can now incorporate process data of unprecedented richness. The engineer's thinking has become visible to the institution in a way that was previously impossible.
And this is precisely where the legibility trap springs.
Because what the logged interaction makes visible is not the engineer's thinking. It is a trace of her thinking — a simplified record of a cognitive process that is vastly more complex than the record can capture. The prompt she typed is not the full extent of her reasoning. It is the portion of her reasoning that she chose to externalize, shaped by the constraints of the interface, the habits of interaction she has developed, and the implicit model she carries of what the AI needs to hear in order to produce useful output. Behind the prompt is everything she considered and rejected, every association her mind made that she did not articulate, every piece of contextual knowledge so deeply embedded that she could not have verbalized it even if she tried.
The log captures the transcript. It does not capture the thought. And when organizations treat the transcript as equivalent to the thought — when they optimize for the legible trace rather than the illegible process that produced it — they reproduce, at the level of individual cognition, the same error that the Prussian foresters committed at the level of forest management. They substitute the map for the territory and then wonder why the territory stops functioning as expected.
Chuncheng Liu's 2022 study of China's COVID-19 Health Code system — an AI-powered contact-tracing algorithm that assigned risk levels to citizens based on location data, travel history, and health records — provides a striking illustration. The system was designed to make epidemic risk legible: to reduce the complex, contextual, constantly shifting reality of infection transmission to a simple three-color classification (green, yellow, red) that could be read by any checkpoint guard in the country. The legibility was real. Citizens could see their status on a screen. Authorities could aggregate risk data at the municipal and provincial level. The system was, in Scott's terms, a masterpiece of administrative simplification.
But as Liu documented, the simplification produced absurdities that only local knowledge could detect and correct. Citizens who had been nowhere near an outbreak zone received red codes because a cell tower they passed happened to overlap with a quarantine area. People who had recovered weeks ago remained flagged because the system's update cycle lagged behind the biological reality. The algorithm, working from simplified inputs, produced simplified outputs that bore an increasingly tenuous relationship to the actual distribution of risk in the population. The system saw what it was designed to see: location data, timestamps, categorized health records. It could not see what the people in the system could see: that the cell tower overlap was meaningless, that the recovered patient was healthy, that the risk classification was an artifact of the simplification rather than a reflection of reality.
The correction required human intervention — what Liu called "enacting like a human" in contrast to "seeing like an algorithm." At every checkpoint, guards exercised discretion. They looked at the person standing in front of them and made judgments that the algorithm could not make: Does this red code reflect actual risk or a data artifact? Is this person symptomatic? Do the circumstances justify overriding the system's classification? The guards possessed métis — practical, contextual knowledge of the local conditions that the algorithm's simplified categories could not accommodate. The system worked only because the humans inside it were constantly correcting for its limitations. When the humans were removed from the loop — when the algorithm's output was treated as self-executing rather than advisory — the system produced the administrative equivalent of Waldsterben: a governance structure that was legible, manageable, and systematically wrong.
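Liu's finding reduces to a structural claim that can be sketched in a few lines of code. The fragment below is a hypothetical illustration, not Liu's system or any real deployment: the tower IDs, thresholds, and function names are invented. It shows the shape of the problem: a classifier that sees only legible inputs treats a data artifact as a verdict, and the only place contextual knowledge can correct it is an explicit human-override step whose status, advisory or self-executing, is itself a design choice.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Illustrative sketch only: tower IDs, thresholds, and names are invented,
# and this is not the actual Health Code system described in Liu's study.

@dataclass
class Ping:
    tower_id: str
    timestamp: datetime

@dataclass
class Citizen:
    pings: List[Ping] = field(default_factory=list)   # the legible trace
    last_positive_test: Optional[datetime] = None

QUARANTINE_TOWERS = {"tower_417"}   # towers whose coverage overlaps an outbreak zone
LOOKBACK = timedelta(days=14)

def algorithmic_code(citizen: Citizen, now: datetime) -> str:
    """'Seeing like an algorithm': classify from simplified inputs only."""
    if citizen.last_positive_test and now - citizen.last_positive_test <= LOOKBACK:
        return "red"
    recent = (p for p in citizen.pings if now - p.timestamp <= LOOKBACK)
    if any(p.tower_id in QUARANTINE_TOWERS for p in recent):
        # A single tower overlap is enough: the data artifact becomes the verdict.
        return "red"
    return "green"

def checkpoint_decision(citizen: Citizen, now: datetime,
                        guard_assessment: Optional[str] = None) -> str:
    """The override hook: the one point where local judgment re-enters.

    If the guard's assessment is treated as advisory input, artifacts can be
    corrected on the spot; if the algorithmic code is self-executing, they cannot.
    """
    code = algorithmic_code(citizen, now)
    return guard_assessment if guard_assessment is not None else code

# A commuter who merely drove past tower_417 is flagged red by the algorithm;
# only the guard's judgment can distinguish the artifact from actual risk.
now = datetime(2022, 4, 1)
commuter = Citizen(pings=[Ping("tower_417", now - timedelta(days=2))])
assert algorithmic_code(commuter, now) == "red"
assert checkpoint_decision(commuter, now, guard_assessment="green") == "green"
```

The design choice lives entirely in that last parameter: keep it, and the system depends on the humans inside it; remove it, and the simplification governs alone.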
The AI-assisted workplace is reproducing this dynamic at every level. Productivity dashboards that measure lines of code generated, features shipped, and tickets closed make individual performance legible with unprecedented precision. But the metrics capture only the outputs that are amenable to measurement. They cannot capture the engineer who spent two hours thinking about an architectural problem and produced no measurable output — no code, no tickets, no features — but arrived at an insight that will prevent six months of technical debt. They cannot capture the designer who rejected the AI's first three suggestions not because the suggestions were wrong but because they were boring — because the designer's taste, developed through years of engagement with users, told her that the competent solution was not the right solution. They cannot capture the product manager who overrode the data to make a decision based on a conversation she had with a customer last Tuesday — a conversation that the metrics could never have predicted or recommended, but that revealed a need the data had entirely missed.
These acts of judgment are invisible to the legibility apparatus. They show up in the dashboard as idle time, rejected output, unexplained deviations from the optimized path. In an organization that governs by legibility — that treats the dashboard as the territory rather than the map — these acts of judgment are not merely invisible. They are penalized. The engineer who thinks for two hours without producing code looks less productive than the engineer who prompts the AI continuously and generates a steady stream of output. The designer who rejects competent suggestions looks slower than the designer who accepts the first result. The product manager who overrides the data looks irrational.
The legibility trap operates at the organizational level when institutions restructure around what AI makes visible, promoting and rewarding the behaviors that produce measurable output while systematically devaluing the behaviors that produce unmeasurable judgment. But it operates at a more insidious level, too — at the level of the individual practitioner's relationship to her own work.
When every interaction with the AI is logged, when the cognitive process has been externalized into a traceable sequence of prompts and responses, the practitioner begins to reshape her thinking to fit the medium. She starts to think in prompts. She begins to articulate her intentions in the form the AI expects rather than the form her mind naturally produces. The internal monologue — the messy, associative, frequently irrational process through which genuine insight is produced — is compressed into the linear, verbal, structured format that the tool requires. The thinking becomes legible. And in becoming legible, it loses something that the practitioner may not recognize she has lost until she tries to solve a problem that the AI cannot help with and discovers that the cognitive muscles she once relied on have atrophied from disuse.
Fourcade and Gordon's concept of "inductive statecraft" adds a further dimension to the legibility trap. Classical high modernism, they argue, required the state to impose simplification on complex realities — to flatten the land into a cadastral grid, to sort the population into census categories, to organize the economy into measurable sectors. AI-era governance, by contrast, can let simplification emerge from data: categories are induced from patterns rather than imposed from above. The algorithm does not need to define risk categories in advance; it discovers them in the data. This appears to solve Scott's problem. If the categories emerge from reality rather than being imposed on it, surely they capture more of reality's complexity?
But the appearance is deceptive. The categories that emerge from AI analysis are still simplifications — still reductions of complex, contextual, local reality to patterns that the system can process. They are more sophisticated simplifications than the cadastral grid, but they are simplifications nonetheless. And because they are inductively derived rather than administratively imposed, they carry an aura of objectivity that makes them harder to challenge. The cadastral map was visibly a human creation — obviously a simplification, open to dispute on the grounds that the simplification missed important features of the territory. The AI-derived pattern presents itself as a discovery — as something found in the data rather than imposed on it — and this presentation makes it far more resistant to the kind of challenge that Scott insisted was essential: the challenge from below, from the practitioners whose local knowledge reveals what the pattern missed.
The legibility trap is not a conspiracy. No one sets out to replace understanding with measurement. It happens because measurement is easier, because measurement is scalable, because measurement produces the kind of knowledge that institutions can act on. The practitioner's métis — her feel for the system, her sense of what the metrics miss, her ability to read the situation rather than the dashboard — is the corrective that keeps the legibility from becoming toxic. But the corrective works only if the institution values it. And institutions, by their nature, value what they can see. The legibility trap is the tendency of institutions to optimize for the visible — and, in doing so, to destroy the invisible infrastructure on which the visible depends.
---
Scott did not argue that all centralized planning fails. He argued that a specific combination of conditions produces catastrophe — and that the combination is identifiable in advance, which means it is, in principle, preventable. The conditions are four, and all four must be present simultaneously for the catastrophe to occur. Remove any one, and the outcome may still be suboptimal, but it will not be ruinous. The four elements form not a prediction but a diagnostic — a framework for examining the AI transition and asking, with clinical specificity, where the danger concentrates.
The first element is a high modernist ideology: the sincere, often passionate belief that centralized rational planning, informed by the best available science, can redesign complex human systems more effectively than the organic, evolutionary processes through which those systems actually develop. The key word is sincere. High modernism is not cynicism dressed as rationality. It is genuine conviction. The Soviet agronomists who designed collectivization believed, with the fervor of the truly committed, that scientific agriculture would outperform peasant tradition. The urban planners who demolished neighborhoods believed, with equal conviction, that modern housing would produce better lives. The faith was real. The science was real. The confidence was earned through genuine achievement in other domains. What was missing was not intelligence or good will but epistemic humility — the recognition that the knowledge possessed by the planner, however sophisticated, was incomplete in ways that the planner's own framework could not detect.
The AI discourse is saturated with high modernist ideology. Not in the crude form of the technologist who claims AI will solve all problems — that variety is easy to identify and relatively easy to resist. In the subtler form of the policymaker who believes that the right regulatory framework, designed by the right experts, can anticipate and manage the technology's effects. In the form of the corporate executive who believes that the right governance structure, implemented by the right compliance team, can ensure that AI deployment produces the intended benefits without the unintended costs. In the form of the university administrator who believes that the right academic integrity policy can preserve the value of education in an AI-saturated environment.
Each of these beliefs is held sincerely by intelligent, well-informed people who have studied the technology carefully. Each is also structurally identical to the belief that scientific forestry would outperform the evolved complexity of the old-growth forest. The plans are rational. The science is real. The confidence is earned. What is missing is the knowledge that the planner cannot possess — the local, contextual, embodied knowledge of the practitioners who will live inside the planned system. And the absence of that knowledge is not a minor gap that better research can close. It is a structural feature of the relationship between centralized planning and complex systems. The gap exists because the knowledge is local, and the plan is not.
The second element is a state — or corporation, or institution — powerful enough to impose the plan. High modernist ideology without institutional power is merely bad theory. It does damage only when the theorist possesses the capacity to reshape reality according to the theory's specifications. Soviet collectivization required the coercive apparatus of the Stalinist state. Tanzanian villagization required the administrative machinery of the post-independence government. The comprehensive urban renewal programs that destroyed American neighborhoods required the legal authority of eminent domain and the financial muscle of federal housing policy. In each case, the plan's catastrophic effects were proportional to the power behind its implementation. A weak institution with a bad plan produces inefficiency. A powerful institution with a bad plan produces disaster.
The technology companies deploying AI at scale possess institutional power of a historically novel kind. They do not wield the coercive power of the state — they cannot draft citizens or seize property. But they wield something that may be more consequential for the AI transition: platform power, the ability to reshape the conditions of work, creativity, communication, and cognition for billions of people simultaneously through the design choices embedded in their tools. When Anthropic ships an update to Claude Code, the change affects every engineer who uses the tool the next morning. When OpenAI modifies GPT's default behavior, the change ripples through every application built on the API. When Google adjusts its search algorithm — which now integrates generative AI responses — the change alters how hundreds of millions of people encounter information.
These are not policy decisions subject to democratic deliberation. They are product decisions made by small teams of engineers and executives, implemented globally, often without public notice, and adjusted based on metrics that the affected populations have no role in defining. The speed and scale of deployment mean that the effects of each decision are felt before they can be assessed — before the practitioners who experience those effects have developed the métis required to understand them, much less the institutional channels through which to communicate that understanding to the people making the decisions.
The third element is a prostrate civil society — a population that is too atomized, too disorganized, or too demoralized to resist the imposition of the plan. Scott was careful to distinguish between populations that are unable to resist and populations that are unwilling. The distinction matters because it identifies different points of intervention. A population that is organizationally capable but politically disengaged needs different support from a population that is politically motivated but structurally atomized.
The "silent middle" that Segal identifies in The Orange Pill — the majority of workers, students, and parents who feel both the exhilaration and the terror of the AI transition but cannot articulate their experience in the binary terms the discourse rewards — is, in Scott's framework, a prostrate civil society in formation. These people are not powerless. Many of them possess exactly the practitioner knowledge that effective AI governance requires. They are silent not because they have nothing to say but because the institutional channels for saying it do not exist, and the public discourse does not reward the ambivalence that constitutes their honest assessment of the situation.
The silence is compounded by the nature of the workplace itself. Workers who depend on their employer for income, health insurance, and professional identity are structurally inhibited from publicly criticizing their employer's AI deployment decisions. The engineer who knows that the AI-generated code her team is shipping contains subtle architectural flaws that will produce technical debt in two years faces a choice between raising the concern — and being perceived as a Luddite, a blocker, someone who doesn't get it — and staying silent, collecting the productivity bonus, and waiting for the technical debt to materialize on someone else's watch. The incentive structure rewards silence. The organizational culture, in most technology companies, penalizes the specific kind of cautionary expertise that effective governance requires.
The fourth element is the absence of practical feedback mechanisms that would reveal the plan's failures before they become structural. This is the element that transforms bad policy into catastrophe, because it is the element that prevents self-correction. A plan that is wrong but correctable produces suboptimal outcomes. A plan that is wrong and uncorrectable produces irreversible harm. The difference between the two is the presence or absence of feedback — the signals from the affected population that reach the planners in time to adjust course.
Soviet agriculture lacked feedback mechanisms because the institutional structure punished the messenger. Local officials who reported crop failures were accused of sabotage. Agronomists who questioned the central plan were dismissed or worse. The information that would have revealed the plan's failures existed — it existed in every village, in the direct experience of every peasant who watched the collective farm's yields collapse — but it could not travel from the periphery to the center because every node in the transmission chain had an incentive to filter, distort, or suppress it.
The AI transition's feedback mechanisms are failing for different reasons but with structurally similar effects. The speed of deployment outpaces the speed of assessment. Segal observes in *The Orange Pill* that regulations arrive eighteen months after the tools they govern have already reshaped the workforce. But the problem is more fundamental than regulatory lag. The practitioners who possess the most relevant knowledge — the embodied understanding of how AI tools actually affect cognition, judgment, and the development of expertise — have no institutional channel through which to communicate that knowledge to the people making deployment decisions. The channels that exist — customer feedback forms, employee surveys, social media commentary — are designed for legible input: quantifiable satisfaction scores, binary feature requests, character-limited complaints. They cannot accommodate the kind of nuanced, contextual, often inarticulate knowledge that constitutes the practitioner's métis.
A senior engineer notices that her team's architectural judgment has degraded over six months of heavy AI use — that they are making decisions faster but with less consideration of edge cases, less awareness of downstream dependencies, less of the embodied caution that years of debugging had deposited in their reflexes. This observation is real. It is important. And it has no place to go. It does not fit in a satisfaction survey. It cannot be expressed as a feature request. It is too qualitative for the analytics dashboard and too institutional for the social media post. It exists as an inchoate sense of unease in the mind of a practitioner who has been doing this work long enough to know that something has changed but cannot locate the change with sufficient precision to make it legible to the people who would need to act on it.
This is the feedback mechanism failing — not because the signal does not exist, but because the institutional infrastructure for receiving, interpreting, and acting on it has not been built. The knowledge is there, distributed among millions of practitioners who are developing it in real time through daily engagement with the tools. The channels for transmitting it are not.
When all four elements converge — high modernist ideology, institutional power sufficient to impose the plan, a prostrate civil society unable to resist, and the absence of feedback mechanisms that would reveal failures in time to correct — the result is what Scott documented across cultures and centuries: the organized destruction of functioning systems by people who sincerely believed they were improving them. Not malice. Not incompetence. Structural blindness — the inability of the planning authority to perceive the local knowledge that would have revealed the plan's fatal flaws.
The AI transition is not yet at this convergence. The elements are present, but they are not yet fully assembled. High modernist ideology is pervasive but not yet hegemonic — the discourse still includes dissenting voices, skeptics, practitioners who report honestly on the costs as well as the benefits. Institutional power is vast but not yet unchecked — regulatory frameworks are emerging, however slowly, and the technology companies face competitive and reputational pressures that constrain their worst impulses. Civil society is atomized but not yet prostrate — the silent middle is silent, not absent, and the practitioner knowledge that effective governance requires exists in abundance even if the channels for transmitting it do not. Feedback mechanisms are inadequate but not yet absent — researchers like the Berkeley team are documenting what AI does to work, scholars are applying Scott's framework to digital governance, practitioners are sharing their experience in forums and conversations even if those observations rarely reach the people making deployment decisions.
The window for intervention is open. It will not remain open indefinitely. Each month that passes without the construction of adequate feedback channels — institutional structures that solicit, transmit, and act on practitioner knowledge — is a month in which the fourth element advances from inadequate toward absent. Each quarter in which the silent middle remains silent is a quarter in which the third element advances from atomized toward prostrate. Each policy cycle in which comprehensive strategies are drafted without practitioner input is a cycle in which the first and second elements consolidate.
Scott did not predict catastrophe. He identified the conditions that produce it and traced their convergence across historical cases with the care of a pathologist documenting the stages of a disease. The pathology is identifiable. The intervention is possible. But only if the intervention is informed by the knowledge that the pathology systematically excludes — the local, contextual, embodied knowledge of the people who live inside the systems being planned. The people at the periphery. The practitioners. The inhabitants of the territory that the map was drawn to govern but was never adequate to describe.
In the early 1980s, James C. Scott lived in a Malaysian village called Sedaka, in the Muda region of Kedah, and watched a revolution happen without anyone raising a flag.
The Green Revolution had arrived. Double-cropping of rice, enabled by new irrigation infrastructure and high-yield varieties, was transforming the Muda plain from a single-harvest economy into one of the most productive rice regions in Southeast Asia. The aggregate statistics were triumphant. Yields doubled. National rice self-sufficiency came within reach. The planners in Kuala Lumpur celebrated.
In Sedaka, the celebration looked different from below. The combine harvesters that made double-cropping efficient eliminated the manual harvesting work that had sustained the village's poorest families. The landlords who could afford the new inputs — fertilizers, pesticides, the varieties themselves — captured the productivity gains. The tenants and landless laborers who could not afford them were squeezed out. The village's informal economy of reciprocal obligation — the understanding that the wealthy farmer would hire his poorer neighbors for harvest work, that the surplus would be shared through customary arrangements calibrated over generations — was dismantled not by decree but by the simple fact that machines were cheaper than people, and the new varieties required inputs that only the wealthy could purchase.
The poor of Sedaka did not revolt. They did not organize. They did not march on the landlord's house or petition the district officer. They did something that Scott recognized as far more common and far less visible: they resisted in the ordinary course of daily life. They dragged their feet on work they found exploitative. They pilfered small quantities of rice from the fields of employers who had cut their wages. They spread gossip about the moral failings of the wealthy — stories designed not to overthrow the social order but to contest the legitimacy of the new arrangements. They feigned ignorance when asked to comply with procedures they found objectionable. They engaged in what Scott would later call "the prosaic but constant struggle between the peasantry and those who seek to extract labor, food, taxes, rents, and interest from them."
These were not revolutionary acts. They were survival strategies — the repertoire of resistance available to people who lack the organizational capacity, the institutional protections, or the simple physical safety required for open confrontation. Scott called them "weapons of the weak," and the concept reshaped how political scientists understood power, resistance, and the relationship between domination and the daily practices through which domination is contested, absorbed, and endured.
The weapons are specific. Foot-dragging: doing the work slowly enough to impose a cost on the employer without providing a justification for dismissal. False compliance: following the letter of the instruction while violating its spirit, producing outcomes that technically satisfy the demand while undermining its purpose. Feigned ignorance: claiming not to understand the new system, the new tool, the new procedure, thereby forcing the authority to invest additional resources in explanation, training, and supervision. Pilfering: extracting small benefits from the system in ways that are individually insignificant but collectively substantial. Character assassination: contesting the moral authority of those in power through gossip, rumor, and the manipulation of the community's narrative about who deserves respect and who does not.
Each of these weapons operates beneath the threshold of open confrontation. Each imposes costs on the dominant party without triggering the repressive response that open resistance would provoke. And each preserves the practitioner's sense of agency — the conviction that she is not merely a passive recipient of forces beyond her control but an active participant in a contest whose outcome is not yet determined.
The contemporary Luddites whom Segal describes in The Orange Pill — the senior engineers quietly refusing to adopt AI tools, the professionals maintaining practices the market no longer rewards, the workers who drag their feet on AI integration mandates while publicly declaring their enthusiasm — are deploying these weapons with a precision that Scott would recognize instantly.
The foot-dragging is ubiquitous. An engineering manager in a Fortune 500 company mandates that all teams adopt AI coding assistants by the end of the quarter. The mandate is clear. The metrics are defined. The dashboards are built. And yet, three months later, adoption rates in certain teams remain stubbornly low. The teams are not refusing. They are complying — slowly, partially, with frequent reports of "technical issues" and "workflow integration challenges" that require additional time, additional training, additional support. Each individual delay is minor and explicable. The aggregate effect is that the teams most resistant to the tools are operating six months behind the mandate's timeline, and by the time the mandate is enforced with sufficient rigor, the tools have been updated, the mandate has been revised, and a new round of slow compliance has begun.
The false compliance is subtler and more corrosive. A developer required to use Claude Code for all new feature development uses it — nominally. She prompts the tool, receives the output, and then rewrites the output from scratch, producing code that is functionally identical to what she would have written without the tool. Her workflow appears, to the productivity dashboard, to be AI-assisted. The prompts are logged. The tool's contribution is recorded. But the actual cognitive work — the thinking, the design, the architectural judgment — was entirely hers. She has complied with the mandate without submitting to the tool. The dashboard shows adoption. The reality is performance.
Scott would not be surprised. He documented precisely this pattern in Sedaka, where peasants who were required to participate in new agricultural programs participated — in form. They attended the meetings. They accepted the seeds. They planted the new varieties in the designated plots. And then they planted their traditional varieties in plots that the extension agents did not inspect, hedging their compliance with the prudence of people who knew from experience that the planner's confidence exceeded the planner's understanding.
The feigned ignorance is perhaps the most diagnostic weapon in the current AI transition, because it reveals the gap between the official narrative of adoption and the actual distribution of competence. In every organization that has mandated AI tool adoption, there are individuals who claim, with varying degrees of plausibility, that they "don't get it." The tool is confusing. The prompting is unintuitive. The outputs are unreliable. These claims are sometimes genuine — the tools do have learning curves, and not everyone learns at the same pace. But they are also, in many cases, strategic — a way of forcing the organization to invest resources in training and support, thereby slowing the pace of adoption and creating spaces in which the old way of working can persist under the cover of legitimate difficulty.
The character assassination is quieter still, but its effects on organizational culture are significant. In the informal conversations that constitute the hidden transcript of every workplace — the hallway exchanges, the after-meeting debriefs, the Slack channels where the real opinions live — the early adopters of AI tools are subjected to a specific kind of reputational challenge. They are characterized as shallow. As people who "don't really understand the code." As practitioners who have traded depth for speed, substance for surface, the real work for the appearance of productivity. The characterization is not always fair. It is not always unfair, either. What matters, from Scott's perspective, is not its accuracy but its function: it contests the legitimacy of the new dispensation by questioning the character of its most visible beneficiaries.
These weapons work. They preserve the practitioner's sense of agency. They impose real costs on the organizations that mandate adoption. They slow the pace of transition in ways that create space for adjustment, reflection, and the development of new norms. They are not nothing.
But Scott was honest about their limitations, and the honesty matters here. Weapons of the weak are substitutes for power, not expressions of it. They are the repertoire of people who cannot fight openly — who lack the organizational capacity, the institutional protections, or the structural leverage to contest, through collective action, the forces acting on them. They preserve dignity. They do not change outcomes.
The peasants of Sedaka who dragged their feet and pilfered rice remained poor. The combine harvesters came. The traditional arrangements dissolved. The resistance slowed the transition and extracted small concessions from the powerful, but it did not alter the structural forces driving the change. The weapons of the weak bought time. They did not build dams.
This is the critical limitation that the contemporary Luddite faces. The senior engineer who quietly refuses to adopt AI tools preserves his sense of professional identity — his belief that his expertise has value, that his way of working is legitimate, that the new methods are inferior in ways the productivity metrics cannot capture. This preservation is psychologically necessary. It may, in some cases, be epistemically correct — the engineer may genuinely know something about the value of friction-rich practice that the adoption mandate ignores. But the preservation does not change the structural conditions that are making his expertise less scarce. The tools continue to improve. The organizations continue to adopt. The new generation of practitioners continues to develop their skills in the AI-augmented environment, and their métis — the embodied knowledge they are building through engagement with the tools — will be the métis that matters going forward.
Scott's work contains a further insight that the contemporary resistance literature tends to overlook. The weapons of the weak are most effective when they are deployed within a system that the resisters continue to inhabit. The peasant who drags his feet is still working in the field. The developer who feigns ignorance is still sitting at her desk. The resistance is embedded in participation. It is a mode of engagement, not a mode of withdrawal.
When the resister leaves — when the senior engineer moves to "the woods," as Segal describes some doing, lowering his cost of living in anticipation of a livelihood he fears is disappearing — the weapons become unavailable because the system no longer contains the resister. His knowledge, his judgment, his dissent — all exit with him. The organization loses the very perspective that Scott's framework identifies as most valuable: the practitioner's métis, including the métis of knowing where the tools fail.
This is why Segal's call for engagement over disengagement aligns precisely with the logic of Scott's analysis. Not because resistance is wrong — sometimes resistance is the only honest response to a structural injustice — but because the most valuable thing the resister possesses is not his refusal but his knowledge. The knowledge of how the work actually works. The knowledge of what the tools miss. The knowledge of where the plan diverges from reality. That knowledge is useful only if it remains inside the system, available to influence the institutions that are shaping the transition.
The peasants of Sedaka could not build institutions. They lacked the resources, the literacy, the political access. Their weapons were the only weapons available, and Scott honored them for what they were: the rational response of the structurally powerless to conditions they did not create and could not control.
The contemporary Luddite is not structurally powerless. She possesses expertise, institutional standing, professional networks, and — crucially — the métis of her domain, which is the knowledge that effective AI governance most desperately needs. Her weapons of the weak are available to her, and she is right to use them when open dissent is punished. But they are not her only weapons. She has the option, if she chooses to exercise it, of converting her resistance into construction — of building the institutional channels through which practitioner knowledge can enter the governance conversation, of organizing with other practitioners to articulate the concerns that the silent middle holds but cannot express, of insisting that the comprehensive strategies being drafted in boardrooms and ministry offices include the perspective from below.
This conversion — from resistance to construction, from weapons of the weak to the building of institutional infrastructure — is the hardest step in Scott's entire framework. It requires the resister to move from a posture that is psychologically satisfying (refusal, the clean lines of principled opposition) to a posture that is psychologically uncomfortable (engagement, the messy compromise of working inside a system you distrust in order to shape it from within). It requires the faith that the system is capable of being shaped — a faith that the evidence does not always support and that the resister's experience may have actively undermined.
But the alternative — continued resistance without institutional construction — produces the outcome Scott documented across every case he studied. The weapons of the weak buy time. The time runs out. The structural forces proceed. And the knowledge that could have informed the transition exits with the practitioners who chose refusal over engagement.
The dams that Segal calls for cannot be built by people who have left the river.
---
In the central highlands of Ethiopia, farmers have cultivated teff — the grain that produces injera, the spongy flatbread that is the foundation of Ethiopian cuisine — for at least three thousand years. Teff is a tiny grain, smaller than a poppy seed, with a growth habit so fine-tuned to the Ethiopian highland environment that agronomists long considered it primitive, a relic of pre-scientific agriculture that would inevitably be replaced by higher-yielding cereals once the Ethiopian peasantry was properly educated.
The agronomists were not wrong about the yields. Teff produces less grain per hectare than wheat, maize, or rice under optimal conditions. Under optimal conditions — meaning controlled irrigation, standardized fertilizer application, predictable weather, and the kind of soil management that research stations can provide and highland farmers cannot.
Under actual conditions — meaning the erratic rainfall, the variable soils, the altitude fluctuations, the pest pressures, and the economic constraints that characterize Ethiopian highland agriculture — teff outperforms the higher-yielding alternatives with a consistency that the agronomists' models could not explain. It tolerates waterlogging that drowns wheat. It survives drought that kills maize. Its root system stabilizes soils on the steep hillsides where erosion destroys other crops within a few seasons. Its straw provides superior animal fodder and building material. Its grain stores without refrigeration in conditions that rot maize within weeks.
The Ethiopian farmer who plants teff possesses knowledge that the agronomist's model does not contain. Not because the agronomist is ignorant — the model is sophisticated, peer-reviewed, built on decades of research — but because the model describes an environment that does not exist in the farmer's field. The model describes optimal conditions. The farmer lives in actual conditions. And the gap between optimal and actual is precisely the space where métis operates — the space where local, experiential, contextually specific knowledge produces outcomes that formal knowledge cannot match, because formal knowledge has abstracted away the very variables that determine success or failure in the field.
Scott documented this pattern — the gap between the agronomist's model and the farmer's practice — across dozens of agricultural systems, from the polyculture gardens of Southeast Asia to the pastoral grazing patterns of East Africa to the rice varieties maintained by Balinese water temple networks. In every case, the local knowledge contained information that the formal model lacked. In every case, when the formal model was imposed and the local knowledge suppressed, the outcomes deteriorated. And in every case, the deterioration was invisible to the planners who imposed the model, because the metrics they tracked — yield per hectare, production per input unit, efficiency ratios — were designed to measure the model's performance, not the farmer's.
The parallel to AI governance is not analogical. It is structural. The farmer and the agronomist are not metaphors for the practitioner and the policymaker. They are instances of the same relationship — the relationship between knowledge that is local, embodied, and adapted to specific conditions and knowledge that is formal, abstract, and designed for general application. The policymaker who drafts an AI governance framework possesses real knowledge — knowledge of institutional mechanisms, enforcement challenges, international coordination requirements, risk categorization systems. This knowledge is valuable. It is also structurally identical to the agronomist's model: it describes what AI governance should look like under optimal conditions — conditions that do not exist in any specific workplace, classroom, or community where AI is actually being used.
The practitioner — the developer who uses Claude Code eight hours a day, the teacher who watches her students interact with AI tools, the customer service representative who knows which AI-generated responses enrage callers and which calm them — possesses knowledge that the governance framework does not contain. Not because the framework's authors were careless, but because the knowledge is local. It applies to this team, this classroom, this call center, with this set of users, these specific failure modes, these particular cultural and institutional conditions. The knowledge cannot be generalized without losing the specificity that makes it useful. It cannot be captured in a compliance checklist without being reduced to a form that strips away the contextual richness that constitutes its value.
Consider the teacher who has spent twenty years in classrooms and who is now watching AI reshape the learning environment. The governance framework says: AI may be used for drafting but not for final submission. The policy draws a line. The line is legible, enforceable, and wrong — not wrong in the sense of being poorly drawn, but wrong in the sense of addressing the wrong problem. The teacher's métis tells her that the issue is not whether the student used AI for the final draft. The issue is whether the student engaged with the material — whether the cognitive labor of wrestling with an idea, failing to articulate it, trying again, and eventually arriving at an understanding that feels earned rather than extracted actually occurred. This distinction cannot be captured in a policy. It can only be perceived by someone who knows this student, this subject, and the specific quality of attention that genuine learning produces — a quality that is as unmistakable to the experienced teacher as the feel of the soil is to the experienced farmer, and equally impossible to measure.
The EU AI Act, the most comprehensive attempt at AI governance yet undertaken, classifies AI systems by risk level and assigns regulatory requirements accordingly. The classification is intelligent, evidence-informed, and the product of genuine expertise. It is also, in Scott's terms, a monoculture — a uniform framework applied across the extraordinary diversity of contexts in which AI systems are deployed. A "high-risk" AI system in a German hospital operates in a different institutional, cultural, and practical environment from a "high-risk" system in a Brazilian school or a Nigerian bank. The regulatory requirements are identical. The conditions they must address are not. The farmers are different. The soils are different. The rainfall patterns are different. And the framework, designed at the altitude from which these differences are invisible, treats them as though they do not exist.
Fourcade and Healy's analysis of "high-tech modernism" adds a dimension that complicates even Scott's framework. They observe that AI systems do not merely impose categories from above, as traditional bureaucratic governance does. They induce categories from below — discovering patterns in data rather than prescribing them from theory. This inductive capacity appears to solve the problem Scott identified: if the categories emerge from the data rather than being imposed on it, surely they capture the local variation that top-down classification misses?
The appearance is misleading. The patterns that AI systems discover in data are patterns in recorded data — data that has already been through its own legibility filter. Medical AI trained on hospital records discovers patterns in the population that visits hospitals, not in the population that avoids them. Criminal justice AI trained on arrest records discovers patterns in policing behavior, not in criminal behavior. Educational AI trained on graded assignments discovers patterns in what teachers evaluate, not in what students learn. The data is not raw reality. It is reality that has been pre-filtered through institutional processes that determine what gets recorded and what does not — processes that systematically exclude the kind of local, contextual, informal knowledge that Scott's métis describes.
The farmer plants teff because she knows — through embodied, experiential, contextually specific knowledge transmitted across generations — that teff works here, in these conditions, for these purposes. No data set captures this knowledge, because the knowledge was never recorded in a form that data collection instruments can detect. It lives in practice, not in records. When the agronomist's model overrides it, the model is not overriding ignorance. It is overriding a different knowledge system — one that is less legible, less scalable, less amenable to centralized administration, but more adapted to the conditions it addresses.
The governance that Scott's framework demands is not governance that replaces the formal model with local knowledge. That would simply reverse the hierarchy without resolving the structural problem. The governance that works is governance that includes both: the policymaker's systemic perspective and the practitioner's contextual knowledge, combined through institutional mechanisms that allow genuine exchange rather than ceremonial consultation. Not the town hall meeting where the minister listens politely and then implements the plan she arrived with. Not the customer feedback survey that solicits opinions in a format too constrained to accommodate the knowledge that matters most. But the institutional structure that gives the farmer a seat at the table where agricultural policy is made — not as a token representative of "local perspective" but as a possessor of knowledge that the policy cannot succeed without.
What this means for AI governance is specific and immediate. It means that the governance frameworks being drafted in Brussels, in Washington, in corporate boardrooms — however technically sophisticated — will fail in the ways Scott documented unless they include structural mechanisms for incorporating practitioner knowledge. Mechanisms that are not cosmetic. Mechanisms that give the developer, the teacher, the customer service representative, the content moderator genuine influence over the policies that govern their daily engagement with AI tools. Mechanisms that treat the practitioner's métis not as a supplement to expert knowledge but as an indispensable component of it — the component that contains the information about actual conditions that the formal model, by design, cannot capture.
The farmer knows what the agronomist does not. The practitioner knows what the policymaker does not. And the governance that excludes this knowledge — however comprehensive, however well-intentioned, however technically sophisticated — is governance that has planted a monoculture where a polyculture is required. The first generation will look productive. The yield metrics will be impressive. And the *Waldsterben* will follow — not because anyone intended it, but because the knowledge that would have prevented it was not in the room when the decisions were made.
---
The impulse to plan is not pathological. It is, in many contexts, the most rational response to uncertainty: gather information, analyze options, design a course of action, implement. The impulse has produced bridges, vaccines, electrical grids, and legal systems — achievements of coordinated human intelligence that no spontaneous process could have generated. Scott never argued against planning as such. He argued against a specific relationship between the plan and the reality it addresses — the relationship in which the plan is treated as authoritative and the reality as raw material to be reshaped accordingly.
The alternative is not the absence of planning. It is a different kind of structure — one that emerges from engagement with conditions rather than from analysis conducted at a distance. The beaver's dam is such a structure. It is not designed from a blueprint. It is built through a continuous, responsive interaction between the builder and the environment — an interaction in which the builder studies the current, tests materials, observes how the water behaves around partial structures, and adjusts constantly as conditions change.
The dam is local. It exists in a specific stretch of river, responding to specific hydrological conditions. A dam that works brilliantly in one location would fail catastrophically fifty meters upstream, where the current is faster, the riverbed is rockier, and the bank is composed of different soil. The builder's knowledge is not transferable in the abstract. It applies here — to this current, this bank, this set of materials available within swimming distance.

The dam is responsive. It is not built once and left. The river pushes against it every hour of every day, testing every joint, exploiting every gap. The builder returns, daily, to repair what the current has loosened, to reinforce what the water has tested, to adjust the structure as the river's behavior shifts with the seasons, the rainfall, and the thousand variables that determine how water moves through a landscape.

The dam is a relationship, not an artifact. It persists only through sustained engagement. The moment the builder stops maintaining, the structure begins to fail.
The dam is distributed. No single dam controls the river. Each dam affects the flow in its immediate vicinity, creating a pool behind it and altering the current downstream. The cumulative effect of many dams, each built and maintained by a local practitioner responding to local conditions, produces an outcome — a network of pools, wetlands, and redirected currents — that no centralized plan could have designed, because the outcome depends on interactions between specific local conditions that are invisible from any single vantage point.
Scott's later work, particularly *Two Cheers for Anarchism*, articulated the principles that distinguish this kind of structure from the comprehensive plan. The principles are not complicated. They are, in fact, deceptively simple — simple enough to seem obvious, and therefore simple enough to be systematically ignored by institutions that mistake complexity of design for quality of governance.
Start small and observe. The comprehensive AI strategy begins with a theory of what AI will do and designs governance to manage the predicted effects. The anti-plan begins with observation of what AI is actually doing — in this team, this classroom, this community — and builds a response calibrated to the observed reality rather than the predicted one. The observation is not passive. It is the active, engaged, métis-rich observation of the practitioner who is embedded in the system and can perceive effects that no external assessment, however well-designed, can detect.
Prefer reversible interventions. The comprehensive strategy produces regulations that are, by institutional nature, difficult to amend. The EU AI Act took years to draft and will take years to revise. By the time the revision addresses the current generation of AI tools, those tools will have been superseded. The anti-plan favors interventions that can be adjusted quickly — organizational norms rather than legal mandates, team practices rather than company policies, experimental structures that are explicitly provisional and subject to modification based on what the practitioners observe.
Build in feedback. The comprehensive strategy produces compliance mechanisms — audits, certifications, reporting requirements — that measure whether the plan is being followed, not whether the plan is working. The anti-plan builds in mechanisms that measure effects — not the effects that the plan predicted, but the effects that the practitioners actually experience. What has changed about the quality of the work since the tool was adopted? What has happened to the development of junior practitioners' expertise? What kinds of problems are being solved better, and what kinds are being solved worse? These questions cannot be answered by a compliance audit. They can only be answered by practitioners who possess the métis to perceive the answers — and who are situated within institutional structures that solicit and value their observations.
Tolerate messiness. The comprehensive strategy aspires to consistency — the same rules applied everywhere, the same metrics tracked across all teams, the same expectations governing all uses. This aspiration is understandable. Consistency is a precondition for fairness, and fairness is a legitimate governance concern. But consistency applied to conditions that are fundamentally inconsistent produces not fairness but absurdity — the regulatory equivalent of planting Norway spruce in a tropical rainforest because the yield data from Prussia looked good. The anti-plan tolerates variation. It allows different teams, classrooms, and communities to develop different norms based on different conditions. It accepts that the resulting landscape will be messy — harder to manage, harder to audit, harder to present in a quarterly report — and recognizes that the messiness is not a failure of governance but a feature of governance that actually works.
These principles are not new. They are, in fact, very old — as old as the practice of building structures in rivers, as old as the agricultural knowledge that sustained human communities for millennia before the planners arrived. What makes them urgent is that they are precisely the principles that the dominant mode of AI governance is structured to ignore.
The Berkeley researchers whose work Segal examines in The Orange Pill proposed something they called "AI Practice" — structured pauses built into the workday, sequenced rather than parallel workflows, protected time for human-only engagement with the work. These are dams. Small, local, responsive structures built by practitioners who have observed, through direct engagement, what AI does to the texture of cognitive work and who are designing interventions calibrated to the specific effects they have witnessed. The structures are not comprehensive. They do not scale automatically from one organization to another. They require local knowledge to implement well and continuous maintenance to sustain. They are, in every respect, the opposite of the comprehensive AI strategy.
They are also, if Scott's framework holds, far more likely to succeed — precisely because they are built on the knowledge that comprehensive strategies systematically exclude.
Consider what an anti-plan approach to AI governance in education might look like. Not a university policy that categorizes permitted and prohibited uses of AI in academic work — a legibility project that simplifies a complex reality into manageable categories — but a structure that gives individual teachers the authority and the institutional support to develop AI norms calibrated to their specific subjects, their specific students, and their specific pedagogical goals. The medieval history teacher whose students need to develop the capacity for close textual analysis will arrive at different norms from the computer science teacher whose students need to develop the capacity for systems thinking. Both sets of norms will differ from those of the creative writing teacher whose students need to develop voice — that elusive quality that distinguishes writing that matters from writing that merely performs competence.
No centralized policy can accommodate this variation. A centralized policy must, by its nature, simplify — must treat medieval history, computer science, and creative writing as instances of a single category ("academic work") and apply a uniform rule. The simplification is not a choice. It is a structural requirement of centralized governance. The policy's authors may be perfectly aware that the subjects differ. They simplify not because they are ignorant but because the institutional form demands it. A policy that said "every teacher should figure it out based on their specific context" would not be a policy. It would be an abdication.
But Scott's work suggests that what looks like abdication from above may be governance from below — that the messy, inconsistent, locally variable landscape produced by distributed practitioner judgment may be more functional than the clean, consistent, centrally designed landscape produced by comprehensive planning. Not because practitioners are infallible — they are not, and their local knowledge can be as parochial and self-serving as any other form of knowledge — but because their errors are local, correctable, and subject to the feedback that direct engagement with the affected population provides. The teacher who gets it wrong will hear about it from students and colleagues in real time. The policymaker who gets it wrong will hear about it in an assessment report published two years after the policy was implemented — by which time the students affected by the error have graduated and the conditions that produced it have changed.
The dam is not a metaphor for doing nothing. It is a metaphor for doing something specific: building local, responsive, practitioner-informed structures that redirect the flow of a powerful force toward conditions that support life. The dam requires more knowledge than the plan, not less — more attention to local conditions, more willingness to adjust, more tolerance for the ambiguity that comprehensive plans are designed to eliminate. It requires, in short, the métis that institutional governance is structurally inclined to suppress.
Building governance from below is slower, messier, and less legible than designing governance from above. It produces landscapes that are harder to audit, harder to standardize, and harder to present to investors or regulators or accreditation bodies as evidence that the institution is "doing AI responsibly." It requires the institution to trust its practitioners — to believe that the people who interact with AI daily possess knowledge that the institution needs, and to create structures that allow that knowledge to inform institutional behavior.
This trust is the scarcest resource in the AI transition. Not trust in AI — the discourse is saturated with debates about whether to trust the tools. Trust in people. Trust that the practitioners who use AI daily know things about it that no assessment, no audit, no comprehensive strategy can capture. Trust that their knowledge, however local and inarticulate and resistant to the formats that institutional governance demands, is the knowledge without which governance is merely the imposition of a simplified map on a territory the map was never adequate to describe.
---
In September 2022, a research team at the University of Washington published a study examining how organizations develop AI ethics guidelines. The findings confirmed what Scott's framework would predict: the guidelines were produced through a process that systematically excluded the knowledge most relevant to their success. Executive committees drafted principles. Legal teams translated principles into policies. Compliance officers designed enforcement mechanisms. At no point in the process were the practitioners who would be governed by the guidelines — the engineers who wrote the code, the designers who shaped the interfaces, the customer-facing workers who dealt with the consequences of AI decisions — included in the design.
The guidelines were not bad, in the sense that their principles were defensible and their language was careful. They were disembodied — produced at a distance from the reality they addressed, by people who understood AI governance as a conceptual problem rather than a practical one. They governed in the abstract what could only be governed in the specific. They were, in Scott's vocabulary, a plan — comprehensive, rational, institutionally legible — imposed on a territory whose actual topography the planners had never walked.
Every government, every multinational corporation, every major university in the developed world has now produced or is producing a comprehensive AI strategy. The strategies vary in sophistication, scope, and ambition. The EU AI Act is 458 pages of risk-tiered regulation. The American executive orders are more modest in scope but significant in signaling. Corporate "Responsible AI Frameworks" range from thoughtful institutional documents to marketing artifacts dressed in ethical language. University policies range from outright prohibition to enthusiastic embrace, with every gradation of uncertainty in between.
What they share — every one of them — is a structural feature that Scott spent his career identifying as the precondition for failure: they are designed from above, by people who possess expert knowledge of AI as a technical and legal phenomenon, and they are imposed on practitioners who possess a different kind of knowledge — the local, contextual, embodied knowledge of what AI actually does when it enters a specific workplace, classroom, or community. The strategies are comprehensive. The reality they address is not.
The EU AI Act is the most ambitious attempt and therefore the most instructive example. The Act classifies AI systems into risk categories — unacceptable, high, limited, and minimal — and assigns regulatory obligations accordingly. The classification is based on the system's intended use, its potential for harm, and the vulnerability of the affected population. These are reasonable criteria. They produce a framework that is internally consistent, legally enforceable, and administratively manageable.
They also produce a framework that cannot accommodate the reality of how AI systems are actually used. The same language model — the same model, running on the same servers, producing outputs from the same training data — is "minimal risk" when generating marketing copy and "high risk" when generating medical advice. The risk does not reside in the model. It resides in the context — the institutional setting, the user's expertise, the stakes of error, the feedback mechanisms that catch mistakes before they propagate. The Act addresses the model. The risk lives in the context. And the context is local in precisely the way that Scott's métis is local: it varies from one deployment to the next, from one user to the next, from one moment to the next, in ways that no static classification can capture.
A 2023 analysis of AI governance frameworks across twenty major corporations found a consistent pattern: the frameworks were detailed on principles and vague on implementation. They specified what the organization valued — fairness, transparency, accountability, human oversight — but provided little guidance on how these values should be operationalized in specific workflows by specific practitioners facing specific decisions. The principles were legible. The implementation was left to the people who possessed the local knowledge required to translate principles into practice — and those people were not given the institutional support, the decision-making authority, or the feedback channels necessary to do the translation well.
This is the structural irony that Scott's framework exposes. The comprehensive strategy acknowledges, in its principles, that AI governance requires contextual sensitivity. It then produces a governance structure that is, by design, context-insensitive — because context-sensitivity is what the comprehensive plan cannot provide. A plan that said "respond to local conditions" would not be a plan. The institutional form demands simplification. The simplification produces uniformity. The uniformity is imposed on conditions that are anything but uniform. And the people who possess the knowledge required to navigate the non-uniformity — the practitioners — are the people the plan has, structurally, excluded from its design.
Scott was sometimes criticized for being better at identifying what goes wrong with comprehensive plans than at proposing what should replace them. The criticism was not entirely unfair. His prescriptive work — particularly *Two Cheers for Anarchism* — was thinner than his diagnostic work, in part because the prescription is genuinely harder. It is easier to demonstrate that a plan has failed than to specify what should have been done instead, because the alternative is not a different plan. It is a different kind of governance — one that is messier, less legible, harder to describe in a policy document, and dependent on the distributed judgment of practitioners whose knowledge cannot be centralized without being destroyed.
But the principles are identifiable, even if their implementation varies from one context to the next.
Create channels, not mandates. The comprehensive strategy mandates behavior: use the tool in this way, classify risk according to this matrix, report outcomes in this format. The alternative creates channels through which practitioner knowledge can travel: regular forums where engineers, teachers, and knowledge workers describe what AI is actually doing in their specific domains, institutional mechanisms that translate these descriptions into actionable adjustments, and decision-making structures that give practitioners genuine authority over the norms that govern their own work. The channel approach does not produce a governance document that can be audited for compliance. It produces a governance process that is responsive to the reality it addresses.
Embrace provisional governance. The comprehensive strategy aspires to permanence — or at least to the kind of institutional stability that justifies the years of work and the millions of dollars invested in its production. The alternative embraces provisionality — the explicit acknowledgment that any governance structure designed for AI in 2026 will be inadequate for AI in 2028, and that the structure's value lies not in its durability but in its adaptability. This means building governance mechanisms that are designed to be revised — sunset clauses, mandatory review cycles, explicit triggers for reassessment based on observed effects rather than calendar dates. And it means treating each revision not as a failure of the original design but as a feature of a governance approach that is responsive to a rapidly changing domain.
Measure effects, not compliance. The comprehensive strategy produces compliance metrics — adoption rates, policy adherence scores, audit results — that measure whether the plan is being followed. The alternative measures effects — what has actually changed in the quality of work, the development of expertise, the wellbeing of practitioners, and the outcomes for the people the work is supposed to serve. Effects are harder to measure than compliance, because they are qualitative, contextual, and resistant to standardization. But effects are what governance is supposed to produce. A governance framework that achieves perfect compliance while the quality of work deteriorates, the development of expertise stalls, and the practitioners burn out is not succeeding. It is succeeding at the wrong thing.
Trust practitioners. This is the hardest principle and the one that institutional governance is least equipped to enact, because institutional governance exists in significant part to compensate for the unreliability of individual judgment. The answer to that objection is not that practitioners are infallible — they are not, and distributed governance creates its own failure modes. The answer is that the knowledge required for effective AI governance is distributed among practitioners, and governance that does not access this knowledge is structurally blind in ways that no amount of expert analysis can compensate for. Trust is not the absence of accountability. It is the allocation of authority to the people who possess the knowledge required to exercise it well — combined with the feedback mechanisms that allow errors to be detected and corrected before they compound.
These principles do not produce a strategy. They produce conditions — conditions under which the people who understand AI from the inside, from the daily practice of using it, can develop and share the norms that effective governance requires. The conditions are institutional, not aspirational: they require organizations to create structures, allocate resources, and distribute decision-making authority in ways that the comprehensive strategy actively discourages, because the comprehensive strategy concentrates authority at the center and distributes compliance to the periphery.
Andrej Karpathy — a co-founder of OpenAI and one of the most technically sophisticated minds in the AI field — observed in early 2026 that he was "bullish on people empowered by AI increasing the visibility, legibility and accountability of their governments." The observation inverts Scott's framework in a genuinely novel way: instead of the state making citizens legible, citizens use AI to make the state legible — to read the four-thousand-page bills, analyze the budgets, parse the regulatory documents, and hold the institutions accountable in ways that previously required an army of lawyers and policy analysts. This inversion is hopeful. It suggests that the legibility project can be redirected — that the tools developed to make populations visible to power can be repurposed to make power visible to populations.
But the inversion works only if the populations possess the institutional capacity to act on what they see. Legibility without agency is surveillance by another name. Making the state's AI policies visible to citizens is valuable only if citizens have channels through which to contest, modify, and reshape those policies based on what the visibility reveals. The comprehensive strategy does not create these channels. It creates compliance mechanisms. The difference is the difference between governance that is responsive to the governed and governance that is imposed on the governed with sufficient transparency that the imposition looks democratic.
Scott's life's work points to a single conclusion about the AI transition. The governance that works will not look like a strategy. It will look like a thousand local experiments, conducted by practitioners who possess the knowledge required to calibrate the experiments to their specific conditions, connected by channels that allow successful experiments to propagate and failed experiments to be identified and abandoned. It will be messy. It will be inconsistent. It will resist the legibility that institutions crave. And it will work — not because messiness is a virtue, but because the alternative is the Norway spruce plantation: clean, consistent, legible, and dead within a generation.
The comprehensive AI strategy will be published. The compliance frameworks will be deployed. The risk categories will be imposed. These are institutional certainties, as predictable as the Prussian forester's decision to plant monocultures. The question — the only question that matters, from Scott's perspective — is whether the practitioners whose local knowledge the strategies exclude will find ways to build their own structures within and alongside the comprehensive plans. Whether the dams will be built despite the blueprints. Whether the farmers will plant their teff in the plots the agronomists do not inspect.
Whether the knowledge that cannot be centralized will, nevertheless, find its way into the governance of the systems that affect everyone.
There is a way of looking at the world that Scott practiced for fifty years and never quite named until late in his career. He called it "the anarchist squint" — not the ideology of anarchism, with its flags and manifestos, but a habit of perception. A deliberate reorientation of the gaze. The anarchist squint looks at any institution, any plan, any technology, any arrangement of power and asks a question that the institution's own self-description is designed to make invisible: Who is being governed here, and do they see what the governors see?
The squint is not cynicism. It does not assume that power is always corrupt or that institutions are always harmful. It assumes something more modest and more subversive: that the view from above and the view from below are genuinely different — that they reveal different features of the same landscape — and that the view from below is systematically absent from the conversations in which the landscape is designed. The anarchist squint corrects for this absence. It asks, of every plan: What does this look like from the position of the person who did not make it but must live inside it?
Applied to the AI transition, the anarchist squint produces a picture that is almost unrecognizably different from the one visible in earnings calls, product launches, and policy white papers.
From above, the transition looks like a productivity revolution. The twenty-fold multiplier. The collapsing cost of software. The expansion of who gets to build. The trillion-dollar revaluation of the technology sector. The numbers are extraordinary. The trajectory, from the vantage point of the people who build the tools and the investors who fund them and the executives who deploy them, is unmistakably upward.
From below, the transition looks like something else entirely.
From below, it looks like the combine harvester arriving in Sedaka. Not because the analogy is perfect — the AI transition is producing genuine, broadly distributed benefits that the Green Revolution in the Muda plain did not — but because the structural dynamics are the same. A technology that increases aggregate productivity while reshaping the distribution of whose labor is valued. A transition whose benefits are captured disproportionately by those who own the technology and whose costs are borne disproportionately by those who compete with it. A discourse that measures the gains in aggregate and experiences the losses in particular.
The developer in Lagos whom Segal invokes as evidence of democratization — the one who can now access coding leverage comparable to a Google engineer's — looks different through the anarchist squint. From above, she is a beneficiary. The floor has risen. Her capabilities have expanded. She can build things that were previously beyond her reach. All of this is true, and none of it is false.
From below, she is also a participant in a system whose rules she did not set and whose governance she cannot influence. The tool she uses was built by an American company, trained predominantly on English-language data, optimized for the workflows and assumptions of Western knowledge workers. Her interaction with it is logged, analyzed, and used to improve the product — a product she pays for but does not own, whose behavior can change overnight without her consent, whose pricing can shift at any time, and whose continued availability depends on the business decisions of a corporation headquartered eight thousand miles away. The democratization is real. The dependency is also real. And the dependency is invisible from the altitude at which democratization is celebrated.
The anarchist squint does not claim that the view from below is the only valid perspective. It claims that it is the perspective most likely to be absent from the governance conversation — and therefore the perspective whose inclusion is most urgently needed.
Consider how the AI transition looks from the position of the customer service representative whose employer has deployed an AI system to generate response templates. From above, the deployment is a productivity enhancement: the representative handles more calls per hour, the average resolution time decreases, the customer satisfaction scores hold steady or improve slightly. The metrics are legible. The dashboard shows green.
From below, the representative's experience is different. The templates are competent but generic. They handle the standard cases well — the cases that were already easy — and fail on the complex cases that require the representative's judgment, her knowledge of this particular customer's history, her feel for the emotional register of the conversation, her ability to deviate from the script in ways that the script's designers could not anticipate. The AI has not augmented her expertise. It has partitioned it. The easy work has been automated. The hard work remains, but now it arrives without the warm-up that the easy cases used to provide — without the rhythm of alternating between routine and difficulty that allowed the representative to calibrate her attention across the shift. The work is more intense. The support structures have been removed. And when the next round of headcount reduction arrives, the metrics that justified the deployment will be used to argue that fewer representatives can handle the same volume — because the metrics measure calls handled, not the quality of judgment exercised on the calls that the AI could not handle.
From below, the productivity revolution looks like intensification. The democratization of capability looks like the erosion of the conditions that sustain expertise. The expanding frontier looks like an accelerating treadmill. None of these perceptions are the whole truth. All of them are part of the truth — the part that is invisible from above and that the comprehensive strategy, by structural necessity, cannot incorporate.
Scott's most profound contribution to political theory was not a concept but a methodological commitment: the insistence that understanding power requires spending time with the people who are subject to it. He lived in Sedaka for two years. He did not parachute in for a survey. He learned the language, attended the weddings and funerals, participated in the daily labor of rice cultivation, and earned — through the slow, friction-rich process of becoming known — the trust required for people to tell him what they actually thought rather than what they thought he wanted to hear.
This methodological commitment is almost entirely absent from AI governance. The policymakers who draft regulations consult experts — technologists, ethicists, legal scholars, industry representatives. The corporate leaders who design governance frameworks consult their legal and compliance teams. The university administrators who write AI policies consult their faculty senates. These consultations are not meaningless. They bring genuine expertise to the table.
But they do not bring the view from below. They do not bring the customer service representative's experience of partitioned work. They do not bring the junior developer's experience of capability without understanding. They do not bring the teacher's experience of watching students produce articulate work without engaging with the ideas the work represents. They do not bring the parent's experience of watching a child lose the capacity for boredom — that neurologically essential state in which attention and imagination germinate — because every idle moment is now filled with a device that provides answers before questions can form.
The anarchist squint insists that these experiences are not anecdotal. They are data — data of a kind that the governance framework cannot accommodate because it is too local, too contextual, too resistant to the standardization that institutional processing requires. But it is the data that determines whether the governance framework works or fails, because the framework's effects are felt at exactly the level of granularity that the framework cannot perceive.
Farrell observed, in his assessment of AI and authoritarianism, that AI-empowered state control would be "more radically monstrous and more radically unstable" than the efficient techno-authoritarianism that the popular imagination anticipated. The instability is Scottian: the feedback loops of simplification and error compound. The state that sees its citizens through algorithmic legibility makes decisions based on what the algorithm reveals — and the algorithm reveals only what the data contains, which is a simplified, partial, systematically biased representation of a reality that the simplification cannot capture. The decisions produce consequences that the simplified model did not predict. The consequences generate data that the model interprets through its existing categories, reinforcing the simplification rather than correcting it. The system becomes more confident as it becomes less accurate — a dynamic that Scott documented in every comprehensive plan he studied, from Soviet agriculture to Tanzanian villagization.
The democratic version of this dynamic is subtler but no less concerning. Democratic governments that adopt AI for public services — welfare administration, criminal justice, educational assessment, public health — face the same structural problem: the AI sees what the data contains, the data contains what the institutional processes have recorded, and the institutional processes have systematically excluded the local knowledge that would reveal where the AI's outputs diverge from the reality they are supposed to represent. The welfare recipient who is denied benefits because the algorithm scored her risk profile incorrectly possesses knowledge that the algorithm does not: knowledge of her actual circumstances, her actual constraints, her actual needs. This knowledge is invisible to the system. And when the system is designed to be self-correcting — to learn from its own outputs — the corrections are derived from the same incomplete data that produced the errors, which means the system learns to be more efficiently wrong.
The anarchist squint does not produce a plan. It produces a question — the question that every plan must be subjected to before implementation and that most plans are never asked: What does this look like from below? The question is not sufficient for governance. Governance requires structure, resources, authority, and the willingness to make decisions that some people will disagree with. But the question is necessary for governance — necessary because without it, the governance operates in a perceptual field that is systematically missing the information it most needs.
Scott spent his career in the company of people who were subject to plans they did not make. The peasants of Sedaka. The inhabitants of razed neighborhoods. The farmers whose polycultures were replaced by monocultures. The communities whose informal institutions were dismantled in the name of rationalization. He found, in every case, that the people who lived inside the planned system possessed knowledge that the planners lacked — knowledge that would have improved the plan, moderated its worst effects, and in some cases prevented the catastrophe entirely.
The AI transition will be planned. It is already being planned. The question the anarchist squint poses is whether the planning will include the view from below — the experience of the practitioners, the workers, the students, the citizens who live inside the system being designed — or whether it will proceed, as so many plans have proceeded before it, in the confident conviction that the view from above is sufficient.
The history Scott assembled is unambiguous on what happens when it is not.
---
After Scott's death in July 2024, the tributes that accumulated from across the academic world shared a common observation: his ideas had traveled further than almost any political theorist of his generation, reaching not just the scholars who cited him but the practitioners, activists, designers, and technologists who found in his framework a language for something they had experienced but could not name. The concept of legibility became common currency in Silicon Valley — adopted, sometimes glibly, by the very industry whose products most aggressively extended the state's capacity to see. The concept of métis circulated among educators, urban planners, software developers, and organizational theorists as a defense of the knowledge that their institutions systematically undervalued. *Seeing Like a State* became one of those rare academic works that escapes its discipline and enters the broader culture as a way of seeing.
The irony of this reception was not lost on Scott. A thinker who spent his career arguing that practical knowledge cannot be separated from its context found his ideas separated from their context and applied, with varying degrees of fidelity, to situations he had never examined. The legibility concept was particularly vulnerable to this treatment. In Scott's hands, legibility was a precise analytical tool — a way of understanding how institutions simplify complex realities in order to govern them, and how the simplification produces consequences that the simplifying authority cannot foresee. In the broader culture, legibility sometimes degenerated into a general suspicion of measurement, a reflexive hostility toward any institutional effort to make complex phenomena visible. This was not what Scott argued. He argued that legibility is dangerous when it is treated as sufficient — when the simplified representation replaces the complex reality it was derived from. He did not argue against the act of simplification itself.
The distinction matters for AI governance, because the governance that the AI transition requires is not the absence of institutional structure. It is the presence of a different kind of institutional structure — one that draws on distributed practitioner knowledge without pretending that distributed knowledge is self-organizing. Scott knew that local knowledge, left entirely to its own devices, can be as parochial, self-serving, and resistant to necessary change as any centralized plan. The peasant who insists on traditional varieties when improved varieties genuinely outperform them is not practicing wise resistance. She is practicing conservatism — and conservatism, like high modernism, can be catastrophic when conditions change faster than local practices adapt.
The governance that Scott's framework implies is neither pure central planning nor pure local autonomy. It is something harder to describe and harder to build: an institutional architecture that creates the conditions for local knowledge to aggregate, circulate, and inform decisions at every level, while maintaining the coordination capacity that large-scale challenges require.
Elinor Ostrom — whose Nobel Prize–winning work on the governance of common-pool resources is the most rigorous institutional complement to Scott's anarchist politics — demonstrated that communities can govern shared resources sustainably without centralized authority, but only when specific institutional conditions are met. The conditions are not mysterious: clearly defined boundaries, rules that match local conditions, collective choice arrangements that include the affected parties, monitoring by people who are accountable to the community, graduated sanctions for violations, accessible conflict-resolution mechanisms, and recognition by external authorities of the community's right to self-organize. The conditions are demanding. They require sustained institutional work. And they produce governance that is, by the standards of comprehensive planning, inelegant — variable across communities, difficult to audit from the outside, resistant to the standardization that large-scale assessment requires.
But it works. Ostrom documented cases spanning centuries — fisheries, irrigation systems, forests, grazing lands — where communities that met these conditions sustained their shared resources indefinitely, while communities that were subjected to centralized management by external authorities, or left to unstructured individual exploitation, depleted those resources to collapse.
The application to AI governance is direct. The "shared resource" that AI affects is not a fishery or a forest. It is something harder to name and harder to delimit: the cognitive commons — the shared capacity for attention, judgment, expertise development, and meaningful work that AI both augments and threatens. The erosion of this commons cannot be measured by a productivity dashboard any more than the erosion of a fishery can be measured by catch statistics alone. The catch may remain high even as the breeding population declines — and the productivity may remain high even as the capacity for judgment that produces it degrades.
Governing this commons requires the institutional conditions Ostrom identified, translated into the specifics of the AI context.
Clearly defined boundaries. Not boundaries on what AI can do — those shift too fast for any boundary to hold — but boundaries on the conditions under which AI is deployed in specific contexts. The team that decides, collectively, that AI-generated code must be reviewed by a human who understands the system well enough to evaluate it meaningfully has drawn a boundary. The school that decides, collectively, that AI may be used for research but not for the production of assessed work has drawn a boundary. The boundaries are local. They are drawn by the people who possess the knowledge required to draw them well. And they are enforceable because the people who drew them are the people who inhabit the bounded space.
Rules that match local conditions. The engineering team working on safety-critical systems will develop different norms from the marketing team generating campaign copy. The medical school will develop different standards from the creative writing program. The customer service center in Lagos will operate under different constraints from the one in San Francisco — not because the work is less important but because the conditions are different. Rules that match local conditions cannot be written at the center. They can only be developed by the practitioners who understand the conditions — and then shared, compared, and refined through horizontal exchange with practitioners in other contexts, producing not a uniform policy but a distributed body of practice that is richer, more adaptive, and more responsive than any centralized framework.
Collective choice arrangements that include the affected parties. This is the condition most absent from current AI governance. The people affected by AI deployment decisions — the workers whose jobs are restructured, the students whose learning is reshaped, the communities whose information environment is transformed — are almost never included in the decisions that affect them. They are consulted, sometimes, through mechanisms designed to produce legible input: surveys, focus groups, comment periods. But consultation is not inclusion. Inclusion means that the affected parties have genuine decision-making authority — the power to shape the norms that govern their engagement with AI, not merely to comment on norms designed by others.
Monitoring by people who are accountable to the community. The productivity dashboard monitors from above. The compliance audit monitors from above. Neither is accountable to the people being monitored. The alternative is monitoring from within: practitioners who observe the effects of AI on their own work and their colleagues' work and who report those observations through channels that reach decision-makers in time to influence decisions. The Berkeley researchers' concept of "AI Practice" is a monitoring structure of this kind — designed by practitioners, based on observed effects, calibrated to specific conditions, and embedded in the daily rhythm of the work rather than imposed from above at annual review.
Graduated sanctions. When norms are violated — when the AI-generated code is deployed without review, when the student submits AI-produced work as original, when the organization optimizes for productivity metrics at the cost of practitioner wellbeing — the response should be proportional, escalating, and administered by the community rather than by a distant authority. The first violation produces a conversation. The second produces a consequence. The escalation is managed by people who understand the context well enough to distinguish between a genuine norm violation and a reasonable adaptation to changing conditions.
Recognition by external authorities. This is the condition that connects governance from below to governance from above — that prevents distributed governance from degenerating into a patchwork of isolated experiments with no mechanism for coordination. External authorities — governments, corporations, professional associations — must recognize the legitimacy of practitioner-developed norms. Not rubber-stamp them. Recognize that the practitioners possess knowledge the external authority does not, and that governance informed by that knowledge is likely to be more effective than governance designed without it. Recognition means creating institutional space for practitioner norms to operate — declining to override them with centralized mandates except when the local norms demonstrably fail to protect the interests of the affected community.
None of this is easy. Governance from below is slower than governance from above. It produces landscapes that are messy, variable, and resistant to the standardization that assessment requires. It demands institutional investment — forums, channels, decision-making structures, conflict-resolution mechanisms — that comprehensive planning does not require, because comprehensive planning concentrates authority at the center and distributes only compliance.
But the alternative — the comprehensive plan, the uniform framework, the centralized strategy designed by experts who possess technical knowledge and lack local knowledge — is the alternative whose track record Scott spent forty years documenting. The record is unambiguous. The monoculture fails. The cadastral map destroys the tenure system it was supposed to rationalize. The collective farm produces famine. The planned city suffocates. The legible forest dies.
The AI transition does not have to end this way. The conditions for catastrophe are present but not yet assembled. The window for building governance from below — for creating the institutional structures through which practitioner knowledge can inform the decisions that shape the transition — remains open. The practitioners possess the knowledge. The frameworks for translating that knowledge into institutional governance exist, in Ostrom's work, in Scott's work, in the accumulated experience of communities that have governed shared resources sustainably for centuries.
What is needed is not a better plan. What is needed is the institutional will to trust the people who live inside the system being planned — to believe that their knowledge matters, that their judgment has value, and that the governance built from their experience will be more durable, more adaptive, and more just than any governance designed without it.
The dam is not a plan. It is a relationship between the builder and the river. It persists through attention, not authority. And the ecosystem it sustains — the pool behind the dam where life flourishes in conditions the unimpeded current would not permit — depends, every day, on the builder's willingness to return, observe what the current has done overnight, and place the next stick where the water tells her it is needed.
---
The map I have been using all my life was drawn from above.
I mean that literally. Every time I assessed a product, a team, a market, I did what builders do: I went to the whiteboard, drew boxes, connected them with arrows, and believed the picture. Architecture diagrams. Org charts. Product roadmaps. Business model canvases. Every one of them a simplification — a way of making a messy, living, breathing reality into something I could see and govern. I never questioned whether the picture was complete. I questioned whether the boxes were in the right order, whether the arrows pointed in the right direction, whether the labels were accurate. I optimized the map. I rarely asked what the map was missing.
Scott's work is about what the map is missing.
What stayed with me — what I have not been able to put down since I first encountered it through the scholars who applied it to our moment — is not the grand thesis, though the grand thesis is powerful. It is a detail, almost a throwaway, from his research in Malaysia. The peasants of Sedaka had a practice of gleaning — gathering the leftover rice from the fields after harvest. It was not charity. It was not welfare. It was a customary right, woven into the social fabric so deeply that it was invisible to everyone except the people who depended on it. When the combine harvesters arrived, gleaning became impossible — not because anyone prohibited it, but because the machines left nothing to glean. The right was not revoked. It was rendered meaningless by a change in the technology of production that the planners who introduced the technology had not considered, because the planners were looking at yield-per-hectare and the gleaning was not on their map.
I think about the gleaners when I think about what we are doing with AI.
Not the people whose jobs are being displaced — the discourse covers them, imperfectly but audibly. I think about the people whose informal rights are being rendered meaningless by a change in the technology of production that nobody designed and nobody is governing. The junior developer whose right to learn through struggle — to develop the embodied understanding that only comes from hours of friction-rich engagement with a resistant system — is not being revoked. It is being rendered meaningless by a tool that eliminates the friction before the learning can occur. The teacher whose right to assess genuine understanding — to distinguish between a student who has wrestled with an idea and one who has received it prepackaged — is not being revoked. It is being rendered meaningless by a tool that produces articulate output regardless of whether the articulacy reflects actual thought.
These are informal rights. They appear on no charter and no policy document. They exist in the texture of daily practice, in the customs and expectations that constitute the lived reality of work, education, and community. They are what Scott would call métis — and they are being dismantled not by anyone's intention but by a change in the conditions of production that the planners of the AI transition do not see, because the planners are looking at their maps, and the maps do not show the gleaning.
In *The Orange Pill*, I wrote about the obligation to build dams — structures that redirect the flow of intelligence toward conditions that support life. Scott's work sharpened that obligation into something more specific and more uncomfortable. The dam cannot be designed from above. It must be built by the people who feel the current — the practitioners whose daily engagement with AI gives them knowledge that no policy document can contain. And building it requires not just the will to act but the institutional infrastructure that allows practitioner knowledge to travel from the periphery, where it is generated, to the center, where the decisions are made.
We are not building that infrastructure fast enough. The comprehensive strategies are being published. The compliance frameworks are being deployed. The risk categories are being imposed. And the practitioners — the people who know what the maps are missing — are being consulted through mechanisms too narrow to accommodate what they know.
What Scott teaches is that the knowledge is there. It is always there, distributed among the people who inhabit the system, developed through the friction of daily engagement, too local and too embodied to survive the translation into institutional language. The tragedy is not that the knowledge does not exist. The tragedy is that the institutions tasked with governing the transition do not possess the channels to receive it.
Build the channels. That is the imperative. Not better maps. Better channels — institutional structures through which the people who live inside the system being designed can communicate what they see to the people making the decisions. Not as consultation. As authority. As the recognition that the view from below is not a supplement to governance but its foundation.
The forester who counts the trees cannot save the forest. The farmer who feels the soil can. But only if someone builds the table where the farmer sits beside the forester, and only if both of them listen.
The most dangerous AI strategies are the most rational ones.
The best-designed plans destroy what they cannot see.
The knowledge that saves us lives in the people no one is asking.
Every government, every corporation, every university is drafting a comprehensive AI strategy right now — rational, well-funded, designed by experts. James C. Scott spent forty years documenting what happens when powerful institutions impose elegant plans on complex living systems. The Prussian foresters who planted perfect rows of spruce and watched the forest die. The Soviet planners whose scientific agriculture produced famine. The urban visionaries whose model cities suffocated the communities they replaced. The pattern is always the same: the plan succeeds on paper while the reality it governs collapses, because the local knowledge that would have revealed the plan's fatal flaws was never in the room. This book applies Scott's devastating framework to the AI revolution and asks the question no strategy document contains: what does this transition look like from below — from the position of the practitioners, workers, teachers, and parents who must live inside plans they did not make?

A reading-companion catalog of the 17 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *James C. Scott — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →