Dan McQuillan — Orange Pill Wiki
PERSON

Dan McQuillan

British physicist-turned-social computing scholar whose Resisting AI (2022) applied Barad's framework to machine learning, arguing that AI produces the world it claims to represent.

Dan McQuillan is a lecturer in creative and social computing at Goldsmiths, University of London, whose background as a physicist (he holds a PhD in experimental particle physics, with research conducted at CERN) informs an unusually rigorous engagement with the technical details of AI systems. His 2022 book Resisting AI: An Anti-fascist Approach to Artificial Intelligence provides the most sustained application of Barad's agential realism to contemporary machine learning, arguing that AI systems are not tools that represent the world but apparatuses that produce the world they claim to represent. McQuillan's work has become central to the Baradian interpretation of AI, providing the analytical bridge between Barad's philosophical framework and the specific technical and political realities of contemporary AI deployment.

In the AI Story


McQuillan's unusual trajectory — from experimental physics at CERN to community computing with refugees and asylum seekers, then to academic social computing — gives his AI analysis an empirical grounding that purely philosophical treatments often lack. His scholarship consistently insists that AI must be understood not as abstract technology but as specific apparatuses deployed in specific contexts with specific effects on specific populations.

His central argument, developed across multiple essays in The Sociological Review and culminating in Resisting AI, is that machine learning systems enact what Barad would call agential cuts — producing the categories they appear to merely identify. A facial recognition system does not objectively detect gender, race, or emotion; it produces these categorizations through the specific architectures and training data that configure it. The system creates the effects it names. How the system is configured determines what becomes naturalized and what becomes problematized, which makes the question of who gets to configure it a fundamental question of power.

McQuillan's approach to AI governance follows directly from this analysis. Because AI is constitutive rather than representational, governance that treats AI as a neutral tool to be regulated after the fact is structurally inadequate. What is required is democratization of the apparatus itself — participatory design, community governance, what he calls machine learning for the people — a countercultural data science that builds different apparatuses producing different phenomena.

His work has been central to the emergence of critical AI studies as a distinct field, influencing scholars including Kate Crawford, Meredith Whittaker, and the broader community of researchers who treat AI as a social-political phenomenon rather than a purely technical one. The influence runs in both directions: McQuillan has drawn extensively on Barad's framework, and his work has helped extend Barad's reception into direct engagement with contemporary AI.

Origin

McQuillan trained as a physicist at Oxford and CERN, then worked for over a decade in NGO settings, including work on Amnesty International's technology team and community technology projects with migrant communities. He joined Goldsmiths in 2012, where his teaching and research focus on the social implications of machine learning, data, and algorithmic governance.

Key Ideas

AI is an apparatus, not a tool. It participates in producing the phenomena it claims to identify.

Machine learning enacts agential cuts. The categories it recognizes are produced through its operations, not discovered as pre-existing facts.

Setting up the AI is a political act. Different configurations naturalize different realities and problematize others.

Governance requires democratization of the apparatus. Regulating outputs while leaving the apparatus's configuration to a small number of corporations is structurally inadequate.

Resistance is possible and necessary. Alternative configurations — community-controlled, democratically governed, serving different ends — can produce different phenomena.

Appears in the Orange Pill Cycle

Further reading

  1. Dan McQuillan, Resisting AI: An Anti-fascist Approach to Artificial Intelligence (Bristol University Press, 2022)
  2. Dan McQuillan, 'People's Councils for Ethical Machine Learning' (Social Media + Society, 2018)
  3. Dan McQuillan, 'Manifesto on Algorithmic Humanitarianism' (The Sociological Review, 2018)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.