CONCEPT

Technologies of Humility

Jasanoff's institutional practices for governing under uncertainty — framing, vulnerability, distribution, and learning — designed to detect what prediction cannot anticipate.

Technologies of humility are institutional practices designed to acknowledge the limits of prediction, incorporate diverse knowledge, and create governance mechanisms that can detect and respond to emergent consequences. Introduced by Jasanoff in 2003 as a counterpoint to 'technologies of hubris' — the quantitative risk assessments and cost-benefit analyses that dominate technology governance — the framework consists of four components. Framing asks how a problem is defined and what definitions exclude. Vulnerability asks who is most exposed to harm and how they differ from the populations designers imagined. Distribution asks who benefits and who bears costs. Learning asks how institutions detect their own errors and revise course. Together, these practices constitute an institutional posture capable of governing technologies whose most important consequences are uncertain — not merely unknown but unknowable in advance, emerging from interactions no model can capture.

In the AI Story


Jasanoff introduced technologies of humility in a 2003 essay that has become foundational to science and technology studies. The essay emerged from her observation that every major technology controversy she had studied — nuclear power, biotechnology, nanotechnology, climate intervention — exhibited the same pattern: governance institutions confidently predicted manageable risks, deployed technologies on the basis of those predictions, and then confronted consequences that fell outside the prediction models. The consequences were not black swans — wildly improbable events — but gray swans: outcomes that should have been considered but were excluded by the framing choices embedded in the risk assessment.

The first technology of humility is framing. How a problem is defined determines what solutions are imaginable and what consequences are anticipated. The dominant framing of AI governance treats it as a safety problem: How do we prevent harmful outputs? This framing admits certain risks (toxic content, privacy violations, discriminatory decisions) while excluding others (the erosion of professional identity, the atrophy of cognitive capacity, the restructuring of democratic culture). A humble framing would ask broader questions: What kind of society are we building with AI? What values should guide its development? What constitutes harm when the harm is not in outputs but in the relationship between humans and their work?

The second technology is vulnerability analysis. Who is most exposed to consequences, and how do they differ from the populations the technology's designers had in mind? AI tools are designed by people with specific profiles — predominantly English-speaking, highly educated, employed in knowledge-economy occupations. Vulnerability analysis asks what happens when the tools reach populations that do not share that profile. The developer in Lagos is vulnerable in ways the San Francisco engineer is not — not because she has less capability but because she has less infrastructure, less institutional support, less economic cushion when the platform she depends on changes its terms of service or pricing model. The displaced expert is vulnerable because her identity is organized around expertise that AI has commoditized. The child whose school has no AI governance framework is vulnerable because the adults responsible for her education are making decisions about AI tools without the knowledge or institutional capacity to evaluate their developmental consequences.

The third technology is distributional inquiry. Who captures the gains and who absorbs the costs? The twenty-fold productivity multiplier is real. The distribution of that multiplier — whether it flows to workers, to shareholders, to customers, to communities — is a political decision disguised as a technical outcome. Segal's account of the boardroom arithmetic reveals this clearly: the twenty-fold gain could be captured as margin (reducing headcount) or reinvested in human capability (expanding what the team builds). The choice is not determined by the technology. It is shaped by institutional incentives, cultural values, and individual leadership. A technology of humility would make the distributional question explicit and subject it to democratic deliberation rather than leaving it to market dynamics and individual conscience.
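
As a toy illustration of how the same multiplier can be allocated in opposite ways, the sketch below runs the boardroom arithmetic under assumed numbers; the team size, baseline output, and salary are invented for illustration, not figures from Segal's account.

# Toy sketch of the distributional choice behind a productivity multiplier.
# All figures (team size, baseline output, salary) are assumptions for illustration.
MULTIPLIER = 20          # productivity gain per worker
TEAM_SIZE = 20           # workers before the gain (assumed)
BASELINE_OUTPUT = 1.0    # output units per worker per year (assumed)
SALARY = 100_000         # annual cost per worker in dollars (assumed)

# Option A: capture the gain as margin. Hold output constant and shrink the team.
workers_needed = TEAM_SIZE / MULTIPLIER
margin_captured = (TEAM_SIZE - workers_needed) * SALARY

# Option B: reinvest the gain in capability. Hold the team constant and expand output.
output_before = TEAM_SIZE * BASELINE_OUTPUT
output_after = output_before * MULTIPLIER

print(f"Option A: {workers_needed:.0f} worker(s), same output, "
      f"${margin_captured:,.0f}/yr captured as margin")
print(f"Option B: {TEAM_SIZE} workers, output grows from "
      f"{output_before:.0f} to {output_after:.0f} units/yr")

The arithmetic is identical in both branches; nothing in the multiplier itself selects between them, which is why the allocation is a political decision rather than a technical outcome.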

The fourth technology is learning mechanisms. How do institutions detect consequences they did not predict and revise their assumptions accordingly? The AI governance landscape is characterized by a nearly total absence of learning infrastructure. The EU AI Act was drafted before the generative AI explosion. American executive orders reflect a specific political moment. Corporate governance frameworks are designed by the companies they govern. None of these structures is designed for the continuous, humble, evidence-based revision that governing under uncertainty requires. A learning mechanism would monitor consequences systematically — not just the quantifiable risks but the slow accumulation of experiential evidence from affected communities — and would incorporate that evidence into governance revisions at a pace matched to the evidence's emergence, not the political calendar's convenience.

Origin

Jasanoff introduced the technologies of humility framework in 'Technologies of Humility: Citizen Participation in Governing Science' (Minerva 41, no. 3, 2003). The essay responded to the National Research Council's 1996 framework for risk characterization, which Jasanoff found insufficiently attentive to the role of framing, the knowledge of affected communities, and the limits of prediction. Her framework has been widely adopted across science and technology studies, environmental governance, and — increasingly — AI ethics and policy.

Key Ideas

Prediction fails for emergent phenomena. The most important consequences of AI arise from interactions between the technology and social order — interactions that generate outcomes no participant can specify in advance because the outcomes are genuinely emergent.

Framing determines what is governable. A governance framework that frames AI as a safety problem will address safety risks and miss identity erosion, cognitive atrophy, and democratic culture transformation — consequences that fall outside the frame.

Vulnerability is not uniform. The populations most exposed to AI's harms are often the populations least represented in governance conversations — and identifying who is vulnerable requires asking people who are not in the room.

Distribution is a political question. How productivity gains are allocated between capital and labor, between present and future, between powerful and vulnerable — this is the central governance question, and it is being answered by default rather than by deliberation.

Further reading

  1. Sheila Jasanoff, 'Technologies of Humility: Citizen Participation in Governing Science,' Minerva 41, no. 3 (2003): 223-244
  2. Sheila Jasanoff, The Ethics of Invention: Technology and the Human Future (W.W. Norton, 2016), Chapter 1
  3. National Research Council, Understanding Risk: Informing Decisions in a Democratic Society (National Academies Press, 1996)
  4. Matthias Gross, Ignorance and Surprise: Science, Society, and Ecological Design (MIT Press, 2010)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.