Polanyi's Paradox, named by MIT economist David Autor in 2014, formalizes philosopher Michael Polanyi's insight that "we can know more than we can tell" into an economic principle governing automation. The paradox explains why certain tasks—diagnostic reasoning, skilled craftsmanship, contextual judgment—proved stubbornly resistant to computerization even as routine cognitive work was automated. These resistant tasks require tacit knowledge: embodied sensitivities built through practice that operate below conscious awareness and resist specification in algorithms. Autor's framework predicted that AI would struggle most with precisely these tacit-knowledge-intensive domains. The arrival of large language models in the 2020s appeared to overcome the paradox through statistical pattern-matching at scale, but critics argue this represents "Polanyi's Revenge"—systems that capture patterns without understanding produce new categories of failure invisible to their operators.
Autor developed the paradox while analyzing why computerization had not produced the mass unemployment economists predicted in the 1960s. He found that computers automated tasks that could be specified in logical rules while creating demand for tasks requiring flexibility, judgment, and contextual adaptation—precisely the capacities Polanyi identified as irreducibly tacit. The paradox became a central framework in labor economics for understanding the skill-biased nature of technological change: automation displaced routine work while complementing, and thereby increasing demand for, non-routine work requiring tacit knowledge. From 1980 to 2010, this pattern held with remarkable consistency across developed economies.
The paradox's contemporary relevance intensified when deep learning systems began demonstrating competence in domains previously thought to require irreducible human judgment. AlphaGo mastered the game of Go—whose strategic depth exceeded any rule-based specification—through pattern recognition rather than explicit programming. Large language models produced sophisticated text, code, and analysis without being programmed with explicit rules of grammar, logic, or reasoning. These achievements suggested that statistical learning from sufficient data could capture the tacit patterns that rule-based systems could not. The triumphalist interpretation declared Polanyi's Paradox solved: machines had learned to learn tacit knowledge from examples.
The critical response, articulated most forcefully by AI researcher Subbarao Kambhampati, identified what he called "Polanyi's Revenge." The apparent success in capturing tacit patterns created new problems: systems that produced plausible outputs across domains where their reliability was unknown, that hallucinated with the same confidence with which they reasoned, and that lacked any mechanism for recognizing the boundaries of their own competence. The paradox had not been solved—it had been displaced upward. The machine could now produce outputs that looked like the products of tacit knowledge without possessing the evaluative capacity, the self-awareness of limits, or the commitment to truth that makes tacit knowledge reliable. The new paradox was more dangerous than the old one: rule-based systems failed obviously when they encountered situations outside their rules, but pattern-based systems fail subtly, producing sophisticated nonsense indistinguishable from genuine insight except to evaluators who possess the tacit knowledge the machine lacks.
David Autor introduced the term in his 2014 paper "Polanyi's Paradox and the Shape of Employment Growth," prepared for the Federal Reserve Bank of Kansas City's Jackson Hole economic policy symposium. Autor had been studying the polarization of the U.S. labor market—the simultaneous growth of high-skill and low-skill employment alongside declining middle-skill jobs—and realized that Polanyi's 1958 insight about tacit knowledge supplied the missing theoretical explanation. Tasks in the middle of the skill distribution were being automated because they involved routine cognitive work that could be specified explicitly. Tasks at the top and bottom resisted automation because they required either abstract reasoning and complex communication (high-skill) or physical adaptability and situational awareness (low-skill)—both forms of tacit knowledge. Autor's formalization made Polanyi required reading in labor economics and technology policy.
Tacit knowledge resists codification. The knowledge that enables adaptive performance—recognizing patterns, making contextual judgments, improvising responses—cannot be fully captured in explicit rules or algorithms.
Routine cognitive work automates first. Tasks specifiable in logical procedures (accounting, data entry, routine analysis) were automated earlier than tasks requiring tacit flexibility, explaining the historical pattern of computerization.
Deep learning appeared to solve it. Pattern-matching systems learned tacit regularities from data rather than from explicit rules, seemingly overcoming the barrier that had protected tacit-knowledge work from automation.
Polanyi's Revenge emerged. The apparent solution created new problems—systems with no mechanism for self-evaluation, producing outputs of uniform confidence across domains of variable reliability, and requiring human oversight grounded in precisely the tacit knowledge they were automating.
Evaluative capacity becomes critical. In the AI age, the human contribution shifts from execution to evaluation—and evaluation depends entirely on tacit knowledge built through the friction AI tools eliminate.