Efficient inefficiency names the central paradox of AI deployment within institutions that produce bullshit work. The technology that could eliminate pointless activity is instead applied to perform pointless activity more rapidly. The AI that automates compliance documentation does not eliminate the documentation requirement; it generates more documentation, faster, in more formats, with better grammar. Stuart Mills and David Spencer coined the term in their 2024 Journal of Business Research essay applying Graeber's framework to AI. The concept captures something that productivity metrics systematically miss: doing pointless work efficiently is not progress. One study they examined found that programmers using AI co-pilots wrote more code but also increased code churn, meaning code that is revised or reverted shortly after being written. Writing more bad code, faster.
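For readers unfamiliar with the metric, code churn can be made concrete with a small sketch. The data model, the `LineChange` record, and the two-week window below are illustrative assumptions, not the methodology of the study Mills and Spencer cite; the point is only that churn measures how much freshly written code is soon rewritten or thrown away.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class LineChange:
    """Hypothetical record of one line's history (assumed shape, for illustration)."""
    line_id: str                  # stable identity of a line of code
    written: datetime             # when the line was first committed
    revised: Optional[datetime]   # when it was next modified or deleted, if ever

def churn_rate(changes: list[LineChange], window_days: int = 14) -> float:
    """Fraction of lines revised or deleted within `window_days` of being written."""
    if not changes:
        return 0.0
    window = timedelta(days=window_days)
    churned = sum(
        1 for c in changes
        if c.revised is not None and c.revised - c.written <= window
    )
    return churned / len(changes)

# Three lines of code; two are reworked within two weeks of being written.
history = [
    LineChange("a", datetime(2024, 1, 1), datetime(2024, 1, 5)),
    LineChange("b", datetime(2024, 1, 1), datetime(2024, 3, 1)),
    LineChange("c", datetime(2024, 1, 2), datetime(2024, 1, 10)),
]
print(round(churn_rate(history), 2))  # 0.67
```

A rising churn rate under AI assistance is the pattern the essay describes: more lines written per hour, but a larger share of them discarded or repaired soon afterward.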
The mechanism is institutional rather than technological. AI is deployed within organizational structures that generate bullshit. The structures persist; the technology amplifies their output. The compliance department receiving AI tools generates more compliance reports. The marketing team receiving AI tools generates more marketing content. The legal department receiving AI tools generates more contractual obstruction. Each is more 'efficient' by the metric of output volume. None addresses whether the output should exist.
The phenomenon connects directly to task seepage documented in the Berkeley study. AI freed minutes and hours that were immediately colonized by additional tasks of the same type. The freed time was not redeployed toward higher-value work. It was filled with more of the same lower-value work, performed faster. The institution captured the productivity gain. The worker captured the intensification.
Mills and Spencer's analysis identifies AI as the most powerful tool yet built for industrializing bullshit. Previous office technologies — word processors, email, spreadsheets — also accelerated administrative work without eliminating it. AI represents a step-change in scale. The volume of plausible-sounding documentation, marketing content, and procedural output that a single AI system can generate exceeds the productive output of entire pre-AI organizations.
The pattern reveals something Graeber's framework anticipated: that the institutional logic generating bullshit work is more powerful than the technological capability that could eliminate it. The technology adapts to the institution, not the reverse. The bullshit changes form. The volume increases. The function remains.
Stuart Mills and David Spencer introduced the term in their 2024 essay 'Bullshit Jobs, Bullshit Tasks, and Artificial Intelligence' in the Journal of Business Research. The article explicitly extends Graeber's framework into the AI era, identifying the specific mechanisms by which AI deployment within bullshit-generating institutions produces efficient inefficiency rather than genuine elimination of waste.
Speed without elimination. AI accelerates pointless activity rather than ending it.
Institutional adaptation. Organizations deploy AI within existing structures, preserving the bullshit logic those structures generate.
Task seepage as mechanism. Freed time is colonized by more bullshit, not redeployed to meaningful work.
Code churn as case study. Programmers using AI write more code, but more of it is revised or reverted soon after being written.
Industrial-scale bullshit. AI produces volumes of plausible-sounding output that exceed pre-AI institutional capacity entirely.