The AI discourse that crystallized in the mid-2020s performs the same discursive operation that Truman performed in 1949. Billions of people who have lived within their own knowledge systems are now reclassified as lacking AI capability, awaiting the remedy that the technology apparatus is uniquely positioned to provide. The vocabulary has been modernized. The institutional actors wear different logos. But the underlying grammar — the discursive architecture that constructs a population as deficient, positions a technology as the remedy, and presents distribution as liberation — remains intact. The category is not discovered empirically. It is constructed discursively, by a framework that measures human capability against the standard of what AI tools provide.
The construction operates through specific mechanisms. Adoption rates define success. Productivity multipliers confirm the technology's value. The expansion of the user base generates data that improves the models, which increases the productivity multipliers, which accelerates the adoption rates. Within this loop, the only relevant question is how to distribute the tools more widely and more equitably. The question of whether the tools encode assumptions that conflict with the knowledge systems and governance structures of the communities to which they are distributed does not arise.
The figure who appears most prominently in this discourse is the developer in Lagos, the student in Dhaka, the builder in the Global South who possesses ideas and intelligence but lacks access to tools. The figure is invoked with genuine moral seriousness. The argument is not false — the tools do expand capability, and the expansion is measurable. But the argument operates within a discursive structure that Escobar spent his career anatomizing: the intended beneficiary appears not as a knower but as a figure of lack awaiting remedy.
The construction is maintained by a specific rhetorical move. Populations that decline to adopt AI tools, or adopt them on terms different from those the industry prescribes, are classified as backward, fearful, or technologically illiterate — the same vocabulary that development discourse deployed against communities that resisted structural adjustment or refused Green Revolution seeds. Resistance is converted into evidence of deficiency rather than read as diagnostic knowledge about the tool's inadequacy.
The AI-underdeveloped category is not homogeneous. It contains wildly different populations — knowledge workers in Bangalore facing wage arbitrage collapse, traditional artisans facing AI-generated competition, rural communities whose knowledge does not fit the training corpus, elderly populations whose literacy was formed in pre-digital contexts. The homogenization of these populations under a single category is itself the work of the apparatus, making them legible as a market while erasing the specificity of their conditions.
The concept is implicit in Escobar's extension of his postdevelopment framework into the domain of digital technology, and explicit in his 2025 writings and in the collaborative work Incomputable Earth with Michal Osterweil and Kriti Sharma.
It draws on the foundational analysis of Encountering Development (1995), now applied to the specific discursive operations through which AI companies, research institutions, and technology media construct the populations they claim to serve.
Discursive construction. The category of AI-underdeveloped is produced by the framework that claims to address it, not discovered as a pre-existing condition.
Productivity as metric of deficiency. Communities are classified as deficient against the standard of AI-augmented productivity, a metric whose universality is assumed rather than demonstrated.
Resistance pathologized. Communities that decline AI tools on their own terms are classified as technologically backward rather than as diagnosticians of the tool's inadequacy.
Homogenization of difference. The category treats radically different populations as a single market, erasing the specificity of their conditions and knowledge systems.
Preemption of alternatives. Framing the conversation as a conversation about access forecloses the question of whether the tools, as designed, constitute the right form of capability.