Hirschman introduced the concept in his 1967 book Development Projects Observed, drawing on fieldwork for the World Bank in which he had observed the systematic gap between project plans and project realities. The gap was too large and too persistent to be explained by planner incompetence. Something structural was going on, and that something turned out to be benign: the underestimation of difficulty was what allowed projects to be undertaken at all, and the discovery of real difficulty forced the creativity that produced unanticipated solutions.
The argument has a shadow, which Bent Flyvbjerg and Cass Sunstein identified in their work on megaproject failure: a malevolent hiding hand that blinds optimistic planners not only to unexpectedly high costs but to unexpectedly low benefits. Some projects succeed because the hiding hand was benevolent; others fail catastrophically because it was malevolent. Which hand is at work cannot be known in advance — only after the project is complete, which is precisely when the answer is no longer useful for the decision it was supposed to inform.
AI tools like Claude Code partially collapse the hiding hand in software and adjacent domains. The builder who describes a project to the AI receives, within minutes, a working prototype or detailed implementation plan that reveals the project's actual complexity. She can see, before investing anything beyond a conversation, what the project requires. This is, in one reading, an unambiguous improvement — better planning, more accurate resource allocation, fewer catastrophic cost overruns. It also eliminates the benevolent function of the hiding hand.
The benevolent function operated through a specific psychological mechanism: commitment under uncertainty produces determination that commitment under certainty does not. The builder who begins in ignorance is forced, when difficulty emerges, to draw on reserves of creativity she did not know she possessed. The difficulty is the stimulus; the creativity is the response. AI eliminates the stimulus, and the question is whether the response — the capacity for creative resilience under unexpected obstacles — can be developed through other means. The projects AI makes transparent are the ones whose difficulty is technical; the projects that remain opaque (human, institutional, political) still require the capacity the hiding hand used to build.
The concept appeared in Hirschman's Development Projects Observed (Brookings Institution, 1967), part of a broader argument that development economics' assumption of rational planning under conditions of good information was empirically wrong and, more importantly, normatively misguided. Projects that proceeded from accurate assessments were often less successful than projects that proceeded from optimistic misjudgments, because the optimistic misjudgment produced the over-commitment that creativity required.
The hiding hand is productive self-deception. Ignorance of difficulty enables commitment; discovered difficulty forces creativity; creativity produces the solutions that full foresight would have made unnecessary.
It has a malevolent twin. Some projects fail because the hiding hand concealed not creativity-provoking difficulty but fatal business-case errors.
AI eliminates the benevolent function for a class of projects. Transparent project difficulty produces better planning but reduces formative over-commitment.
The capacity the hiding hand built is still needed. Human and institutional complexity remains opaque even when technical complexity becomes transparent.
The removal is invisible until the capacity is called upon. Builders accustomed to difficulty that is known in advance may find that the resilience required when unexpected obstacles arrive has atrophied.
Flyvbjerg and Sunstein's 2017 critique argued that Hirschman's empirical claims were selective — that benevolent hiding hands were rare and malevolent ones common, and that the concept had done harm by encouraging over-commitment to projects that should have been abandoned. Defenders of Hirschman reply that the critique misses the developmental point: even projects that overran their budgets often produced institutional capacities that justified the overrun in retrospect, and the planning orthodoxy Flyvbjerg advocates would have prevented many of the twentieth century's most consequential achievements. The AI transition complicates both sides.