Classical high modernism required the state to impose simplification on complex realities — to flatten the land into a cadastral grid, to sort the population into census categories, to organize the economy into measurable sectors. AI-era governance, by contrast, can let simplification emerge from data: categories are induced from patterns rather than imposed from above. The algorithm does not need to define risk categories in advance; it discovers them in the data. Marion Fourcade and Jeff Gordon's concept of inductive statecraft names this mode of governance and captures both its promise and its peril. If categories emerge from reality rather than being imposed on it, surely they capture more of reality's complexity? The appearance is deceptive. The categories that emerge from AI analysis are still simplifications — still reductions of complex, contextual, local reality to patterns that the system can process. They are more sophisticated simplifications than the cadastral grid, but they are simplifications nonetheless. And because they are inductively derived rather than administratively imposed, they carry an aura of objectivity that makes them harder to challenge.
The concept extends Scott's framework into territory he did not fully explore. Scott had diagnosed the classical legibility trap — the institutional tendency to treat simplified representations as equivalent to the realities they describe. Inductive statecraft introduces a new variation of the same trap. Because the categories appear to emerge from the data rather than from institutional theory, they seem to carry the authority of empirical discovery rather than administrative imposition. This appearance gives them rhetorical protection against the kinds of challenges that classical administrative categories face.
But the appearance is misleading. AI-derived categories are patterns in recorded data — data that has already been through its own legibility filter. Medical AI trained on hospital records discovers patterns in the population that visits hospitals, not in the population that avoids them. Criminal justice AI trained on arrest records discovers patterns in policing behavior, not in criminal behavior. Educational AI trained on graded assignments discovers patterns in what teachers evaluate, not in what students learn. The data is not raw reality. It is reality that has been pre-filtered through institutional processes that determine what gets recorded and what does not — processes that systematically exclude the kind of local, contextual, informal knowledge that Scott's métis describes.
The result is that inductive statecraft inherits all the blind spots of the data collection processes it depends on, while appearing to transcend them. The cadastral map was visibly a human creation — obviously a simplification, open to dispute on the grounds that the simplification missed important features of the territory. The AI-derived pattern presents itself as a discovery — as something found in the data rather than imposed on it — and this presentation makes it far more resistant to the kind of challenge that Scott insisted was essential: the challenge from below, from the practitioners whose local knowledge reveals what the pattern missed.
Applied to the AI transition, inductive statecraft names a specific governance pathology that the comprehensive AI strategies of 2023–2026 exhibit in various forms. When regulators define AI 'risk categories' through analysis of AI behavior patterns, when corporations define employee 'performance categories' through analysis of productivity data, when universities define student 'success patterns' through analysis of learning metrics — all of these operations are inductive statecraft. All of them produce categories that appear objective because they emerge from data. All of them inherit the blind spots of the data collection processes on which they depend. And all of them reproduce, in more sophisticated form, the structural failure Scott documented across classical administrative contexts.
The concept was introduced by Marion Fourcade and Jeff Gordon in a 2020 paper in the Journal of Law and Political Economy, 'Learning Like a State: Statecraft in the Digital Age.' Fourcade, a French-American sociologist at UC Berkeley, had been extending Scott's framework to digital governance for over a decade. The term 'inductive statecraft' captures the specific way AI-era governance differs from the administrative legibility Scott had diagnosed, while insisting that the difference is not a transcendence of the legibility trap but its reproduction in more sophisticated form.
Categories from data, not theory. Inductive statecraft lets classification emerge from algorithmic pattern recognition rather than from administrative definition in advance.
The appearance of objectivity. Because the categories are discovered rather than imposed, they present themselves as findings rather than choices — which makes them harder to contest.
Data is pre-filtered reality. The data from which patterns emerge has already been shaped by institutional processes that determine what gets recorded and what does not. Inductive categories inherit the exclusions and blind spots of these upstream filters.
Scott's trap, renewed. Inductive statecraft does not escape the legibility trap. It reproduces the trap in a form that is more sophisticated, more protected from challenge, and therefore potentially more damaging.
Some scholars have argued that inductive statecraft genuinely does capture more of reality's complexity than classical administrative categories, even if it remains imperfect — that its sophistication is a real improvement, not merely a rhetorical mask. Defenders of Scott's framework respond that the sophistication masks rather than resolves the underlying problem, and that the rhetorical authority of AI-derived categories makes them more resistant to the kinds of corrective feedback that classical administrative categories at least nominally permit.