Sociomaterial assemblage is the concept, developed across the STS tradition (including work by Bruno Latour, Annemarie Mol, and Donna Haraway) and sharpened in Lucy Suchman's analyses of AI, that technologies are not autonomous objects with stable properties but configurations of heterogeneous elements whose behavior emerges from their specific arrangement. An AI system is not just a model but the model plus the training data plus the corporate decisions that selected the data plus the labor that annotated it plus the deployment context plus the user practices. The concept is central to Suchman's argument against the reification of AI as a thing: accountability, understanding, and effective governance all require engaging with the assemblage rather than with a hypostatized entity.
The concept emerges from the long tradition in STS that refuses the separation of the social and the technical, treating technologies as configurations in which human and non-human elements are interwoven and mutually constituted. Latour's actant framework insists that any element that makes a difference in the network is an actor; Haraway's cyborg figure names the constitutive entanglement of human and machine. Suchman's contribution has been to apply this tradition with unusual rigor to the specific case of contemporary AI, where the temptation to reify is especially strong.
The practical bite of the concept is analytical. When a self-driving car kills a pedestrian, the question 'what failed?' can be answered at many levels: the sensor, the classifier, the training data, the deployment decision, the regulatory framework, the operator's attention, the pedestrian's position, the lighting, the engineering culture that decided what to test. Treating the car as a sociomaterial assemblage means that all of these are candidate answers and that the investigation must range across them rather than settling at any single level. Treating the car as a thing ('the AI failed') forecloses the investigation prematurely and lets responsibility settle wherever it is most convenient.
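To make the point concrete in code, here is a minimal sketch of what it means for 'what failed?' to keep the whole assemblage in scope rather than settling on the model. All element names, layers, and actors are hypothetical, not drawn from any real investigation:

```python
# A sketch only: element names, layers, and actors are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str       # the element under scrutiny
    layer: str      # where it sits in the assemblage
    shaped_by: str  # the actor whose choices configured it

CAR_ASSEMBLAGE = [
    Element("lidar/camera sensing", "hardware", "component vendor"),
    Element("pedestrian classifier", "model", "perception team"),
    Element("training data coverage", "data", "data pipeline team"),
    Element("decision of what to test", "organization", "engineering culture"),
    Element("safety-driver attention protocol", "operations", "fleet operator"),
    Element("permit and oversight rules", "regulation", "regulator"),
]

def candidate_answers(assemblage):
    """'What failed?' returns every element, not a single culprit.

    A reified framing would stop at the model; an assemblage framing
    keeps every layer live as a candidate for the investigation."""
    return [(e.layer, e.name, e.shaped_by) for e in assemblage]

for layer, name, actor in candidate_answers(CAR_ASSEMBLAGE):
    print(f"{layer:>12}: {name} (shaped by {actor})")
```

The design choice carrying the argument is the flat list: no element is privileged in advance as 'the' failure site.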
For AI specifically, the concept is the analytical antidote to the reification Suchman diagnosed in her 2023 essay. When we ask what a large language model is, the sociomaterial answer is: the model weights plus the training corpus plus the RLHF annotations plus the corporate choices about alignment plus the deployment API plus the user's prompts plus the institutional context of use. Each element is shaped by specific choices made by specific actors, and distinct accountability attaches to each.
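The decomposition above can be written down as a data structure. The following sketch is illustrative only (the elements, actors, and choices are stand-ins, not an audit of any real system), but it shows how accountability can be read off per element once the assemblage is made explicit:

```python
# Illustrative only: these entries are stand-ins, not a real audit.
LLM_ASSEMBLAGE = {
    "model weights":         ("training lab", "architecture and training run"),
    "training corpus":       ("data team", "what was collected and filtered"),
    "RLHF annotations":      ("annotation workforce", "labeling guidelines and conditions"),
    "alignment choices":     ("corporate leadership", "which behaviors were targeted"),
    "deployment API":        ("platform team", "access, limits, and logging"),
    "user prompts":          ("users", "how the system is actually used"),
    "institutional context": ("deploying organization", "where outputs carry authority"),
}

def accountability(element: str) -> str:
    """Read accountability off one element of the assemblage."""
    actor, choice = LLM_ASSEMBLAGE[element]
    return f"{element}: shaped by {actor} via {choice}"

print(accountability("RLHF annotations"))
# -> RLHF annotations: shaped by annotation workforce via labeling guidelines and conditions
```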
The concept also has a specifically critical function. Power in AI systems operates through the concealment of the assemblage. When Anthropic, OpenAI, or Google releases a system, the marketing emphasizes the model, while the elements that constitute the assemblage but are politically inconvenient recede: the labor of content moderators, the conditions under which training data is annotated, the environmental costs of training, the provenance of the data. Sociomaterial analysis insists on bringing these elements back into view.
The concept has roots in actor-network theory (Latour, Callon, Law) and in feminist STS (Haraway, Barad, Mol), developed across the 1980s and 1990s. Suchman's application of it to AI and computational systems has been sustained since Plans and Situated Actions (1987) and sharpened in her post-2000 work at Lancaster University.
The concept has become increasingly central as AI discourse has intensified; scholars such as Kate Crawford (in Atlas of AI) and Ruha Benjamin have made sociomaterial analysis foundational to contemporary critical AI studies.
Technologies are configurations. The properties that matter emerge from the specific arrangement of elements, not from any element in isolation.
Heterogeneity is constitutive. The elements of an assemblage are diverse — hardware, software, data, labor, decisions, contexts — and none can be reduced to others.
Boundaries are choices. Where one draws the boundary of an assemblage (does the AI include its training data? its operators? its regulatory environment?) is an analytical choice with political consequences; the sketch after this list illustrates the point.
Accountability distributes. Questions about what an AI system did are questions about the entire assemblage, not about the model alone. Accountability must follow the distribution.
Anti-reification. Treating AI as a thing conceals the specific choices by specific actors that produced the outputs. Sociomaterial analysis restores the specificity that reification erases.
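As noted above, here is a minimal sketch of the boundary point: the same system yields different accountability pictures depending on where the analyst draws the line around 'the AI'. All elements and actors are illustrative placeholders:

```python
# Illustrative placeholders; draw_boundary is the analytical choice itself.
ASSEMBLAGE = {
    "model weights": "training lab",
    "training corpus": "data team",
    "annotation labor": "annotation workforce",
    "deployment context": "deploying organization",
    "regulatory environment": "regulator",
}

def draw_boundary(assemblage: dict, included: set) -> dict:
    """Keep only the elements the analyst counts as 'the AI'."""
    return {k: v for k, v in assemblage.items() if k in included}

# Narrow boundary: 'the AI' is just the model; one actor stays visible.
narrow = draw_boundary(ASSEMBLAGE, {"model weights"})
# Wide boundary: the full assemblage; every shaping actor stays visible.
wide = draw_boundary(ASSEMBLAGE, set(ASSEMBLAGE))

print(sorted(set(narrow.values())))  # ['training lab']
print(sorted(set(wide.values())))    # all five actors
```

The narrow boundary makes a single actor visible; the wide one makes all of them visible. That difference is the political consequence the principle names.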