'The Uncontroversial "Thingness" of AI' is Suchman's 2023 essay in Big Data & Society, a compressed diagnostic of how contemporary discourse, including critical discourse, reifies AI as a coherent, autonomous entity. 'How is it,' she asks, 'that AI has come to be figured uncontroversially as a thing, however many controversies "it" may engender?' The essay argues that this reification obscures what actual AI systems are: sociomaterial assemblages of hardware, software, training data, corporate decisions, user practices, and institutional contexts that cannot be meaningfully reduced to a single agent. The grammatical construction 'AI does X' smuggles in precisely the assumptions the essay painstakingly unpacks, and the consequences of that smuggling run through every debate about AI safety, governance, and impact.
The essay's central observation is grammatical and therefore foundational. When commentators — enthusiasts, critics, regulators, journalists — write sentences of the form 'AI will do X' or 'AI threatens Y' or 'AI promises Z,' they attribute agency to a system that is better understood as a configuration. The attribution feels natural, because AI outputs are sophisticated enough to sustain the social-intelligence projection Suchman documented in her PARC work. But the naturalness conceals analytical work the essay insists must be done: what specific system, trained on what specific data, deployed by what specific organization, in what specific institutional context, is producing what specific outputs for what specific users?
Suchman's target is not opponents of AI but the framing shared across the debate. Triumphalists and critics alike tend to treat AI as a thing — a singular, coherent entity whose trajectory can be debated. The shared framing benefits the specific companies and institutions producing AI systems, because the frame abstracts away from questions about their particular choices, biases, and power. A debate about whether 'AI' should be regulated is easier to navigate than a debate about whether specific deployments by specific companies should be accountable to specific communities.
The essay connects to a broader tradition in STS, including Bruno Latour's actant framework, Donna Haraway's cyborg figure, and the materialist analysis that refuses to treat technologies as autonomous forces. Suchman's contribution is to apply this tradition specifically to contemporary AI discourse, where the reification is most pronounced and most consequential. Her argument has influenced subsequent critical work on large language models, algorithmic governance, and the political economy of AI.
The essay's practical implications are significant. If AI is not a thing but an assemblage, then debates about 'AI safety' that treat the AI as the locus of risk miss the distributed character of the actual risks: the labor conditions of content moderators, the training data's origins, the corporate incentive structures, the deployment contexts, the user practices. Each requires specific analysis and specific accountability. The reification of AI as a thing flattens this distribution into a single entity whose properties can be debated abstractly, which is convenient for the organizations whose specific choices produced the system in the first place.
Published as part of Suchman's continuing critical engagement with AI discourse, the essay draws on her decades of STS work and on the broader tradition of materialist analysis of technology. Its influence has grown steadily as that discourse has intensified and the reification the essay diagnoses has become more pronounced.
The grammar matters. 'AI does X' attributes agency to a system better understood as an assemblage. The attribution feels natural; the naturalness is what needs interrogation.
Reification serves power. The flattening of specific deployments into generic 'AI' abstracts away from accountability for specific organizational choices.
Critical discourse repeats the error. Even critics of AI often begin with the proposition that AI is a thing with properties — unintentionally reinforcing the framing the critique needs to break.
The assemblage is the reality. Actual AI systems are configurations of hardware, software, training data, labor, corporate decisions, and institutional contexts. No such configuration can be coherently analyzed under the single label 'AI.'
Accountability requires specificity. If AI is not a thing, then AI governance must address specific deployments by specific organizations for specific users — not a generic entity.