Invisible labor in AI refers to the global workforce — estimated at hundreds of thousands, concentrated in Kenya, the Philippines, Venezuela, and other low-wage economies — whose work annotating data, moderating content, flagging toxicity, and performing other forms of human judgment makes AI systems possible. The International Labour Organization has documented these workers as the invisible labor force behind AI's sleek interfaces and impressive capabilities. The wages often fall below even the threshold Ehrenreich documented for American low-wage work in Nickel and Dimed. The conditions include documented psychological harm from sustained exposure to traumatic content. Ehrenreich's method — immersion, class analysis, attention to where the costs are borne — is the appropriate framework for making this invisibility visible.
The pattern follows the template Ehrenreich established in her career-long documentation of invisible labor: the women cleaning offices, watching children, and caring for the elderly, whose complex work was rendered invisible by the professional class's accounting practices. AI has extended the pattern to a new class of workers — the annotators and moderators whose human judgment is required to make machine learning possible — and the invisibility is not merely an oversight. It is structural.
The technology discourse describes AI as a machine learning from data, as though the data arrived pre-labeled and the machine did the work of extracting patterns. This description is false. The data must be labeled, and the labeling is performed by humans, and the humans are employed under conditions designed to minimize both their visibility and their compensation. The image recognition system requires someone to label images. The content moderation system requires someone to review flagged content. The chatbot's safety requires someone to rate outputs against guidelines. At every layer, human labor is the substrate on which the AI's apparent autonomy rests.
Achille Mbembe's plantation logic framework extends Ehrenreich's class analysis geographically: the organization of AI production around the extraction of maximum value from labor that is simultaneously essential and invisible follows a colonial template. The workforce is concentrated in formerly colonized regions. The wages reflect local labor markets. The value flows to metropolitan capital. The pattern is continuous with five centuries of extractive economic relations, not a break from them.
The content moderation labor is particularly harrowing. Workers reviewing the most disturbing material the internet produces — child abuse, graphic violence, torture — report elevated rates of PTSD, anxiety, and other trauma-related conditions. The outsourcing of this work to low-wage economies is not incidental. It is a deliberate choice that externalizes the psychological costs of AI safety onto workers whose labor markets provide them no alternative. The AI systems marketed as 'safe' are safe because specific humans, employed under specific conditions, absorbed the exposure that would otherwise reach the end user.
The phenomenon has been documented by Time magazine's 2023 investigation into OpenAI's Kenyan content moderation contractors, by Mary Gray and Siddharth Suri's Ghost Work (2019), by the International Labour Organization's sustained reporting on platform work, and by journalists including Billy Perrigo and Niamh Rowe.
The framework for analyzing this labor as continuous with the invisible labor Ehrenreich documented comes from extending her Nickel and Dimed method to the global scale of the AI economy — treating the annotation worker in Nairobi as the twenty-first-century counterpart of the hotel housekeeper in Key West.
Human substrate of machine learning. AI systems depend on continuous human labor at every layer — annotation, moderation, rating — despite being marketed as autonomous.
Geographic externalization. The labor is concentrated in low-wage economies by deliberate design, producing a cost structure that would be impossible if the same work were performed in metropolitan labor markets.
Psychological externalization. Content moderation workers absorb the psychological costs of AI safety, suffering documented trauma that never surfaces in the finished product.
Ehrenreich method required. Making this labor visible requires the immersive journalistic method Ehrenreich developed — not the remote analysis typical of technology commentary.
Continuous with colonial pattern. The extractive structure is not a novel feature of the AI economy but a continuation of five centuries of colonial labor relations.