Scott traced the legibility project through the cadastral map, which converted the customary land arrangements of medieval Europe into standardized property parcels; through the census, which reduced populations to countable categories; through the urban grid, which replaced the organic accumulation of centuries of human activity with geometric rationality; through the collective farm, which destroyed the small-plot agriculture that had sustained peasant communities for generations. In each case, legibility was the precondition for centralized governance — but the governance it enabled produced consequences the planners had not foreseen.
The AI era introduces a qualitatively new form of legibility. Previous administrative technologies made populations visible to the state at the level of aggregate statistics and individual records. AI makes cognitive processes visible at unprecedented granularity: every prompt, every interaction, every sequence of questions a user asks a language model becomes, in principle, available for inspection. The cognitive process that was previously opaque to employers, educators, and institutions has been rendered transparent. The thinking has been made legible. The older dynamic has not disappeared, however. Chuncheng Liu's 2022 study of China's COVID-19 Health Code system illustrated the classic Scottian pattern: an algorithm that simplified complex epidemic risk into three colors, produced absurd outputs when the simplification hit edge cases, and worked only because humans at the checkpoints constantly corrected for its failures.
What makes AI legibility particularly insidious is that it operates on cognition itself. When every interaction with an AI tool is logged, when the cognitive process has been externalized into a traceable sequence of prompts and responses, the practitioner begins to reshape her thinking to fit the medium. She starts to think in prompts. The internal monologue — the messy, associative, frequently irrational process through which genuine insight is produced — is compressed into the linear, verbal, structured format the tool requires. The thinking becomes legible. And in becoming legible, it loses something the practitioner may not recognize she has lost until she confronts a problem the AI cannot help with and discovers that the cognitive muscles she once relied on have atrophied from disuse.
Fourcade and Gordon's concept of inductive statecraft adds a further dimension. Classical high modernism required the state to impose simplification on complex realities. AI-era governance can let simplification emerge from data: categories are induced from patterns rather than imposed from above. This appears to solve Scott's problem. If the categories emerge from reality rather than being imposed on it, surely they capture more of reality's complexity? The appearance is deceptive. AI-derived categories are still simplifications — still reductions of complex, contextual, local reality to patterns that the system can process. They are more sophisticated simplifications than the cadastral grid, but they are simplifications nonetheless. And because they are inductively derived rather than administratively imposed, they carry an aura of objectivity that makes them harder to challenge.
The analytical framework for the legibility trap was developed in the opening chapters of Seeing Like a State, building on Scott's earlier observations about how state institutions engage with peasant populations. The concept drew on a long tradition in critical sociology, including Weber's analysis of bureaucratic rationality, Foucault's work on governmentality, and Bourdieu's studies of how state classifications shape the realities they purport to describe.
The double movement. Legibility operates in two steps — simplification, then inscription. The trap is sprung when the simplified representation is treated as equivalent to the reality it describes.
Legibility is not the problem; the assumption of sufficiency is. Scott did not argue against administrative visibility as such. He argued against the assumption that what is legible is all that matters.
AI as cognitive legibility. Previous legibility projects made populations visible. AI makes thinking visible — an extension of the legibility project into cognitive territory it has never before reached.
Inductive legibility is still legibility. Categories discovered in data are still simplifications of the reality the data was collected from, and they inherit the blind spots of the collection process.