Sociomaterial Assemblage — Orange Pill Wiki
CONCEPT

Sociomaterial Assemblage

The STS concept — central to Suchman's analysis of AI — that technologies are not autonomous objects but configurations of hardware, software, data, labor, corporate decisions, and institutional contexts that can only be understood in their specific arrangements.

Sociomaterial assemblage is the concept, developed across the STS tradition in work by Bruno Latour, Annemarie Mol, and Donna Haraway and sharpened in Suchman's analyses of AI, that technologies are not autonomous objects with stable properties but configurations of heterogeneous elements whose behavior emerges from their specific arrangement. An AI system is not just a model but the model plus the training data, the corporate decisions that selected the data, the labor that annotated it, the deployment context, and the user practices. The concept is central to Suchman's argument against the reification of AI as a thing, because accountability, understanding, and effective governance all require engaging with the assemblage rather than with a hypostatized entity.

In the AI Story

The concept emerges from the long tradition in STS that refuses the separation of social and technical, treating technologies as configurations in which human and non-human elements are interwoven and mutually constituted. Latour's actant framework insists that any element that makes a difference in the network is an actor; Haraway's cyborg figure names the constitutive entanglement of human and machine. Suchman's contribution has been to apply this tradition with unusual rigor to the specific case of contemporary AI, where the temptation to reification is especially strong.

The practical bite of the concept is analytical. When a self-driving car kills a pedestrian, the question 'what failed?' can be answered at many levels: the sensor, the classifier, the training data, the deployment decision, the regulatory framework, the operator's attention, the pedestrian's position, the lighting, the engineering culture that decided what to test. Treating the car as a sociomaterial assemblage means that all of these are candidate answers and that the investigation must range across them rather than settling at any single level. Treating the car as a thing — 'the AI failed' — forecloses the investigation prematurely and distributes responsibility conveniently.

For AI specifically, the concept is the analytical antidote to the reification Suchman diagnosed in her 2023 essay. When we ask what a large language model is, the sociomaterial answer is: the model weights plus the training corpus plus the RLHF annotations plus the corporate choices about alignment plus the deployment API plus the user's prompts plus the institutional context of use. Each element is shapeable by specific choices by specific actors, and specific accountability attaches to each.

The concept also has a specifically critical function. Power in AI systems operates through the concealment of the assemblage. When Anthropic, OpenAI, or Google releases a system, the marketing emphasizes the model; the labor of the content moderators, the labor conditions in which training data is annotated, the environmental costs of training, the data provenance questions — all the elements that constitute the assemblage but are politically inconvenient — recede. Sociomaterial analysis insists on bringing these elements back into view.

Origin

The concept has roots in actor-network theory (Latour, Callon, Law) and in feminist STS (Haraway, Barad, Mol), developed across the 1980s and 1990s. Suchman's application of it to AI and computational systems has been sustained since Plans and Situated Actions (1987) and sharpened in her post-2000 work at Lancaster University.

The concept has become increasingly central as the AI discourse has intensified, and scholars such as Kate Crawford (in Atlas of AI) and Ruha Benjamin have made sociomaterial analysis foundational to contemporary critical AI studies.

Key Ideas

Technologies are configurations. The properties that matter emerge from the specific arrangement of elements, not from any element in isolation.

Heterogeneity is constitutive. The elements of an assemblage are diverse — hardware, software, data, labor, decisions, contexts — and none can be reduced to others.

Boundaries are choices. Where one draws the boundary of an assemblage (does the AI include its training data? its operators? its regulatory environment?) is an analytical choice with political consequences.

Accountability distributes. Questions about what an AI system did are questions about the entire assemblage, not about the model alone. Accountability must follow the distribution.

Anti-reification. Treating AI as a thing conceals the specific choices by specific actors that produced the outputs. Sociomaterial analysis restores the specificity that reification erases.

Appears in the Orange Pill Cycle

Further reading

  1. Bruno Latour, Reassembling the Social (Oxford University Press, 2005)
  2. Donna Haraway, 'A Cyborg Manifesto' (Socialist Review, 1985)
  3. Kate Crawford, Atlas of AI (Yale University Press, 2021)
  4. Annemarie Mol, The Body Multiple (Duke University Press, 2002)
  5. Lucy Suchman, 'The Uncontroversial "Thingness" of AI' (Big Data & Society, 2023)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.