CONCEPT

Framing (Technologies of Humility)

The first of Jasanoff's four practices: asking how a problem is defined — because the frame determines what solutions are imaginable and what consequences are governable.

Framing is the deliberate examination of how a governance problem is defined — what it includes, what it excludes, and what assumptions about values and priorities the definition embeds. The dominant framing of AI governance treats it as a safety problem: preventing harmful outputs, managing algorithmic discrimination, protecting privacy, ensuring transparency. This framing is not wrong — these are genuine problems requiring genuine solutions. But the framing excludes from consideration the harms that do not arise from outputs but from AI's integration into human life: the restructuring of professional identity, the erosion of cognitive capacities, the displacement of human relationships by more-convenient machine interactions, the transformation of what it means to know something when knowledge can be borrowed rather than earned. These excluded consequences are not less real or less important; they are ungovernable within a safety frame because they are not safety problems. Humble framing asks: What are we not seeing because of how we have defined the problem?

In the AI Story


Jasanoff's attention to framing emerged from her study of how scientific controversies become tractable or intractable. In disputes over biotechnology, nuclear waste, and climate policy, she observed that positions hardened not because evidence accumulated but because the problem was framed in ways that made certain evidence relevant and other evidence irrelevant. The frame determined the fight. A biotechnology debate framed as 'Is this organism safe?' admits toxicology data and excludes considerations of agricultural labor, farmer autonomy, and the relationship between eaters and their food. A climate debate framed as 'What is the optimal carbon price?' admits economic modeling and excludes questions about historical responsibility, international justice, and what obligations the present owes the future.

Applied to AI, the safety framing has become so dominant that alternatives are difficult to articulate. When governance institutions ask 'How do we make AI safe?,' they have already determined that the relevant governance instruments are technical — safety standards, testing protocols, alignment research. Questions that fall outside the safety frame — Should this application exist at all? Who should decide what gets built? How should productivity gains be distributed? What happens to human capabilities when machines perform the work? — are treated as philosophical or political rather than governance questions, and they are delegated to other forums where they receive less institutional attention and produce no binding decisions.

The twelve-year-old's question — 'What am I for?' — cannot be addressed within a safety frame because it is not a safety problem. No amount of alignment research, benchmark testing, or responsible deployment will answer the question, because the question concerns meaning and purpose in the presence of capable machines, and meaning is not a variable that safety governance operates on. A humble framing would reframe the governance question: not merely 'Is AI safe?' but 'What kind of relationship between humans and machines do we want to build, and is AI as currently designed and deployed building that relationship?'

Jasanoff's framing practice is not relativism. It does not claim that all frames are equally valid or that governance can proceed without defining problems. It claims that every frame reveals and conceals, that the concealment is consequential, and that governance institutions must be capable of examining their own frames and revising them when revision is justified. The practice requires what she calls 'frame reflexivity' — the institutional capacity to ask regularly and seriously whether the problem as defined is the problem that matters most, or whether the most important dimensions of the problem are the ones the frame excludes.

Origin

Framing as an analytical concept has a long history in sociology (Erving Goffman), political communication (George Lakoff), and policy analysis (Deborah Stone). Jasanoff's distinctive contribution was to make framing a governance practice rather than merely an analytical observation — to argue that institutions must be designed to interrogate and revise their own frames rather than treating frames as given.

Key Ideas

The frame determines what is governable. A problem defined as a safety issue will be governed through safety instruments, excluding consequences that are real but do not fit the safety category.

Every frame embeds values. Defining AI governance as a safety problem prioritizes risk mitigation over distributional justice, technical competence over democratic legitimacy, prediction over learning.

Excluded consequences compound invisibly. What the frame does not capture — identity erosion, cognitive atrophy, meaning displacement — accumulates outside institutional attention until it manifests as crisis.

Humble framing is a practice. Institutions must be designed to ask regularly: What are we not seeing because of how we have defined the problem? Whose knowledge does the frame exclude? What values does it privilege?

Further reading

  1. Sheila Jasanoff, 'Technologies of Humility: Citizen Participation in Governing Science,' Minerva 41, no. 3 (2003): 223-244
  2. Donald Schön and Martin Rein, Frame Reflection: Toward the Resolution of Intractable Policy Controversies (Basic Books, 1994)
  3. George Lakoff, Don't Think of an Elephant! (Chelsea Green, 2004)
  4. Deborah Stone, 'Causal Stories and the Formation of Policy Agendas,' Political Science Quarterly 104, no. 2 (1989): 281-300
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.