CONCEPT

Sociotechnical Imaginaries

Collectively held visions of desirable futures shaped by science and technology — not predictions but blueprints that organize action and determine what gets built.

Sociotechnical imaginaries are collectively held, institutionally stabilized visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology. Developed by Sheila Jasanoff and Sang-Hyun Kim, the concept explains how societies organize their relationship to technological change through shared stories about what a technology will do and what kind of world it will create. An imaginary is not marketing or propaganda; it is deeper and more structural, shaping not just public opinion but institutional priorities, funding decisions, regulatory frameworks, and the design choices of builders. The AI moment has produced competing imaginaries: the productivity imaginary (AI as capability amplifier and democratizer), the existential risk imaginary (AI as civilizational threat), and the democratic imaginary (AI as constitutional question). These imaginaries are not true or false but generative: they determine what gets built, what gets funded, and what gets governed.

In the AI Story

Jasanoff and Kim introduced sociotechnical imaginaries in their 2009 essay 'Containing the Atom' and developed the concept fully in Dreamscapes of Modernity (2015). The framework emerged from their observation that nations pursuing the same technological capabilities — nuclear energy, biotechnology, space exploration — embedded those technologies in dramatically different narratives about national identity, social purpose, and the role of the state. These narratives were not mere rhetoric but constitutive: they shaped what got built, how it was governed, and how publics responded to it.

The productivity imaginary dominates the technology industry and organizes much of the public discourse about AI. In this vision, articulated most fully in The Orange Pill, AI is an amplifier of human capability: a tool that collapses the distance from imagination to artifact, democratizes the capacity to build, and frees humans from mechanical labor to concentrate on judgment, creativity, and meaning-making. The imaginary is compelling because it captures real phenomena: the capability expansion is measurable, the democratization is partial but genuine, and the reallocation of human attention from execution to judgment is visible in every AI-augmented workplace. But the imaginary also suppresses questions: it treats productivity as self-evidently valuable, democratization as complete when it is only partial, and the reallocation of attention as liberation when it may also be displacement.

The existential risk imaginary positions AI as a civilizational threat requiring extraordinary governance measures. Jasanoff observed in her 2024 Harvard podcast that this imaginary couples the idea of extinction with AI 'but very little specificity about the pathways by which the extinction is going to happen.' The vagueness is functional: a threat that cannot be specified cannot be governed through normal democratic institutions but only through expert-dominated emergency frameworks. The imaginary serves a political function whether or not its adherents intend it — delegating governance authority to the small community of researchers who claim unique understanding of the threat.

The democratic imaginary, least developed but most essential to Jasanoff's project, treats AI governance as a constitutional question about the kind of society that should exist on the other side of the transition. This imaginary does not begin with capabilities or risks but with values: What do we want to preserve? What are we willing to sacrifice? Who should decide? It positions the public not as beneficiaries of expert governance or victims of corporate deployment but as the legitimate authority for decisions about how AI should reshape social life. This imaginary is harder to articulate and harder to institutionalize because it requires slowing down, including voices that complicate consensus, and treating governance as an ongoing practice rather than a one-time design problem.

Origin

The concept builds on Benedict Anderson's imagined communities (1983) and Charles Taylor's social imaginary (2004), extending both into the domain of science and technology. Jasanoff and Kim's innovation was to show that imaginaries are not merely cultural background but active forces shaping technological development — embedded in design choices, funding priorities, regulatory frameworks, and the everyday practices of builders, users, and governors.

Key Ideas

Imaginaries organize collective action. A shared vision of the technological future determines what projects attract funding, what applications seem natural, what risks seem acceptable, and what consequences seem intolerable.

Imaginaries are constitutive, not descriptive. They do not predict what will happen but shape what gets built by organizing the decisions of thousands of actors who share the vision — often without recognizing they are participating in its realization.

Competing imaginaries produce governance conflict. The AI governance debate is not merely a disagreement about policy but a collision of incommensurable visions — productivity versus precaution, acceleration versus deliberation, expert authority versus democratic participation.

The imaginary choice is political. Deciding which sociotechnical imaginary should guide AI development is the most consequential political decision of the transition, and it is being made by builders and investors rather than through democratic deliberation.

Further reading

  1. Sheila Jasanoff and Sang-Hyun Kim, 'Containing the Atom: Sociotechnical Imaginaries and Nuclear Power in the United States and South Korea,' Minerva 47, no. 2 (2009): 119-146
  2. Sheila Jasanoff and Sang-Hyun Kim, eds., Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (University of Chicago Press, 2015)
  3. Matti Rautiainen et al., 'AI Imaginaries: Shaping Perceptions Through Narratives,' AI & Society 39 (2024): 2067-2078
  4. Baki Cakici and Morana Alac, 'Algorithmic Imaginaries,' in The Routledge Companion to Media Technology and Obsolescence (Routledge, 2022)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.