Artificial Stupidity is Hao Ma's 2024 term, published in Long Range Planning, for a specific category of AI deployment failure: systems that replace human judgment rather than augmenting it, producing outcomes that harm both the organization and the populations it serves. Ma identifies two types: replacement, in which human sensitivity and contextual judgment are eliminated rather than enhanced, and enslavement, in which human operators are dehumanized and alienated by the systems they operate. Both map onto Cipolla's stupid quadrant with a precision that confirms the framework's applicability to a technology its author never encountered.
There is a parallel reading that begins from the market's point of view rather than from concern for the organization's long-term health. What Ma frames as drift toward replacement may instead be evolution toward the correct pricing of human judgment in commodity contexts. The quarter-to-quarter incentive structure he describes isn't pathological myopia; it is the continuous discovery of where contextual sensitivity actually creates value and where it is an expensive theatrical performance of care.
The replacement pattern Ma documents may reflect accurate learning: that in most service interactions, human "judgment" functioned primarily as costly variance rather than valuable adaptation. Call center workers exercising discretion weren't typically improving outcomes—they were introducing inconsistency that raised operational costs while creating customer confusion about what the organization would actually do. The algorithmic system that replaces them doesn't eliminate valuable judgment; it eliminates expensive judgment theater while revealing which contexts genuinely require human intervention. The organizations adopting replacement aren't drifting into stupidity—they're learning to separate the 5% of cases where human contextual sensitivity matters from the 95% where it was always costly noise. The harm Ma identifies may be transitional pain from finally pricing human labor at its marginal value rather than its historical theatrical value.
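To see the arithmetic this reading relies on, consider a minimal expected-cost sketch in Python. Every figure in it, from the per-case costs to the 5/95 split and the failure rates, is a hypothetical illustration chosen for this entry, not a number from Ma's paper.

```python
# Back-of-envelope comparison of staffing models for a service queue.
# Every number here is a hypothetical illustration, not data from Ma (2024).

HUMAN_COST = 8.00        # assumed cost per human-handled interaction
ALGO_COST = 0.40         # assumed cost per algorithmically handled interaction
COMPLEX_SHARE = 0.05     # share of cases that genuinely need contextual judgment
FAIL_COST = 60.00        # assumed downstream cost when a complex case is mishandled
ALGO_FAIL_RATE = 0.70    # assumed algorithm failure rate on complex cases
HUMAN_FAIL_RATE = 0.10   # assumed human failure rate on complex cases

def expected_cost(algo_share_routine: float, algo_share_complex: float) -> float:
    """Expected per-case cost for a given mix of algorithmic and human handling."""
    routine = (1 - COMPLEX_SHARE) * (
        algo_share_routine * ALGO_COST + (1 - algo_share_routine) * HUMAN_COST
    )
    complex_ = COMPLEX_SHARE * (
        algo_share_complex * (ALGO_COST + ALGO_FAIL_RATE * FAIL_COST)
        + (1 - algo_share_complex) * (HUMAN_COST + HUMAN_FAIL_RATE * FAIL_COST)
    )
    return routine + complex_

# All-human, all-algorithm, and the triaged split the paragraph describes.
print(f"all human: {expected_cost(0.0, 0.0):6.2f}")   # 8.30
print(f"all algo:  {expected_cost(1.0, 1.0):6.2f}")   # 2.50
print(f"triaged:   {expected_cost(1.0, 0.0):6.2f}")   # 1.08
```

Under these illustrative numbers, blanket replacement beats blanket human handling even after counting its failures on complex cases, and the triaged split beats both, which is precisely the separate-the-5%-from-the-95% claim. The ranking reverses only as FAIL_COST or ALGO_FAIL_RATE grows large, which is the complex-domain situation taken up below.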
Ma's analysis grew from empirical work on AI adoption in organizations, where he observed a consistent pattern: deployments initially framed as augmentation drifted toward replacement as operators discovered cost savings from eliminating human judgment. The drift follows a predictable incentive structure: quarterly reporting rewards visible cost reduction, while the costs of lost judgment manifest slowly and are hard to attribute.
The replacement pattern produces harm to organizations (decisions that worsen over time as contextual sensitivity is lost), harm to customers (whose problems receive algorithmic rather than situated response), and harm to employees (whose expertise becomes economically marginal). The enslavement pattern adds a specific form of worker alienation, in which human operators become extensions of algorithmic systems they cannot influence or override.
The concept explicitly extends Cipolla's framework into organizational theory, demonstrating that the quadrant structure applies to institutional actors as cleanly as to individual ones. The Research Society of Australia's parallel concept of Artificial Banditry covers the bandit quadrant; Ma's Artificial Stupidity covers the structurally more dangerous category of AI deployments that produce mutual loss.
Ma, a professor at Peking University, published 'Artificial Stupidity' in Long Range Planning in 2024. The paper synthesized organizational case studies with Cipolla's theoretical framework, producing one of the first rigorous academic extensions of the Cipolla laws to AI deployment.
Two failure types. Replacement eliminates human judgment; enslavement dehumanizes human operators.
Institutional drift. AI systems deployed for augmentation migrate toward replacement under ordinary market incentives.
Triple harm. Organizations, customers, and employees are damaged simultaneously — the signature of Cipolla's stupid quadrant.
Cipolla confirmed. The empirical pattern validates the framework's applicability to technologies its author never encountered.
The right weighting depends entirely on which service context you're examining. In genuinely commoditized interactions—password resets, routine claims processing, standard shipping queries—the contrarian view carries 80% weight: human judgment was often expensive variance, and algorithmic consistency improves both cost structure and customer experience. Ma's framework overclaims harm in these domains. But in domains requiring interpretation of ambiguous situations—medical triage, loan exceptions, child welfare assessment—Ma's framing is 90% correct: the replacement pattern produces compounding harm as edge cases accumulate and organizational learning capacity atrophies.
The deeper issue Ma identifies holds across both contexts: the economic pressure is to treat all domains as if they were the first kind. The market mechanism that correctly prices judgment in commodity contexts creates systematic mispricing in complex ones, because the costs of lost sensitivity in the complex domain appear slowly and attribute poorly. This is Ma's actual insight—not that replacement is always stupid, but that the replacement decision process itself is structurally biased.
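The structural bias is easy to exhibit in a toy model. The Python sketch below, whose parameters are hypothetical choices for this entry rather than estimates from the paper, gives a decision-maker visible savings each quarter and judgment costs that accrue slowly and attribute poorly, then applies the same approval rule in a routine and a complex domain.

```python
# Toy model of the quarterly replacement decision Ma describes.
# Parameters are hypothetical illustrations, not estimates from the paper.

def replacement_looks_profitable(
    quarterly_saving: float,   # visible labor saving booked each quarter
    hidden_cost_ramp: float,   # judgment cost added per quarter as edge cases accumulate
    attribution: float,        # fraction of hidden cost traced back to the AI decision
    horizon_quarters: int,     # how far ahead the decision-maker actually looks
) -> bool:
    """Net benefit as seen by the decision-maker, not as borne by the organization."""
    seen = 0.0
    for q in range(1, horizon_quarters + 1):
        hidden = hidden_cost_ramp * q          # harm compounds as sensitivity is lost
        seen += quarterly_saving - attribution * hidden
    return seen > 0

# Routine domain: small hidden costs. Complex domain: large, compounding ones.
for name, ramp in [("routine", 5.0), ("complex", 60.0)]:
    myopic = replacement_looks_profitable(100.0, ramp, attribution=0.2, horizon_quarters=4)
    full = replacement_looks_profitable(100.0, ramp, attribution=1.0, horizon_quarters=12)
    print(f"{name:8s} myopic view: {myopic}  full-cost view: {full}")
```

With these numbers the myopic, under-attributing rule approves replacement in both domains, while full attribution over a longer horizon rejects it in the complex one. The bias lives in the decision process, not in any particular decision, which is exactly the point.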
The synthesis Ma's work points toward: judgment value varies wildly by context, but the forces driving replacement decisions are largely context-blind. Organizations need economic structures that can price judgment differently in routine versus complex domains—and current quarterly reporting frameworks systematically prevent this discrimination.