The machine's gaze is the cognitive equivalent of what feminist film theory, following Laura Mulvey, identified as the gaze of the camera — the specific position from which the apparatus sees, which is never neutral and always politically consequential. For a large language model, the gaze is constituted by the training corpus, the design choices of the builders, and the institutional context of deployment. The corpus overrepresents English, overrepresents the educated and connected, overrepresents propositional over embodied knowledge, overrepresents the recent over the deep past, overrepresents the powerful over the marginalized. These are not technical limitations. They are the situated perspective of the machine, and they travel into every output.
The concept applies Haraway's situated knowledges directly to AI systems. The machine does not see the world as it is. It sees the world as its training data can show it. And the training data is not a neutral map of human knowledge; it is a specific, situated, politically consequential slice of the textual record, heavily weighted toward the perspectives that had the resources, infrastructure, and cultural capital to produce text on the internet.
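What a corpus overrepresents can, in principle, be counted. Here is a minimal sketch of a composition audit, in the spirit of the datasheets practice associated with Gebru and colleagues, using a fabricated five-document corpus whose metadata fields and values are invented for illustration:

```python
from collections import Counter

# Fabricated corpus sample: each entry stands in for a document's metadata.
# Real audits do the same counting at scale over real provenance records.
corpus = [
    {"lang": "en", "source": "forum"},
    {"lang": "en", "source": "news"},
    {"lang": "en", "source": "wiki"},
    {"lang": "en", "source": "forum"},
    {"lang": "sw", "source": "wiki"},
]

def composition(docs, field):
    """Share of the corpus carrying each value of a metadata field."""
    counts = Counter(d[field] for d in docs)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

print(composition(corpus, "lang"))    # {'en': 0.8, 'sw': 0.2}
print(composition(corpus, "source"))  # {'forum': 0.4, 'news': 0.2, 'wiki': 0.4}
```

The skew described above is exactly what such counts surface. The hard part is not the arithmetic but deciding which fields get recorded at all, and that decision is itself a situated one.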
Computer vision researchers have extended the framework to image-based systems, showing how traditional computer vision pipelines perform a double god trick: treating image sets as objective recordings of reality, detached from the cameras and photographers who produced them, and treating model performance on those sets as objective truth about the world. The consequences for bias and injustice, as these researchers document, have been severe: facial recognition systems that fail more often on darker skin tones, content moderation systems that disproportionately flag African American Vernacular English as toxic, medical imaging systems that underdiagnose women.
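The Gender Shades work also models the remedy: disaggregate. A minimal sketch of per-subgroup evaluation follows, with fabricated labels, predictions, and group annotations; nothing below comes from the actual study.

```python
from collections import defaultdict

# Fabricated data: a single aggregate score can hide a total failure
# on one subgroup. Groups "A" and "B" are hypothetical placeholders.
labels = [1, 1, 0, 0, 1, 0, 1, 0]
preds  = [1, 1, 0, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(ls, ps):
    return sum(l == p for l, p in zip(ls, ps)) / len(ls)

print("aggregate:", accuracy(labels, preds))          # 0.5

by_group = defaultdict(lambda: ([], []))
for label, pred, group in zip(labels, preds, groups):
    by_group[group][0].append(label)
    by_group[group][1].append(pred)

for group, (ls, ps) in sorted(by_group.items()):
    print(group, accuracy(ls, ps))                    # A 1.0, B 0.0
```

The aggregate number performs the god trick; the disaggregated numbers restore the positions from which it was assembled.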
The Deleuze error from The Orange Pill is the machine's gaze caught in the act. Claude's knowledge of Deleuze is statistical — it reflects how Deleuze is discussed in the training corpus, which may bear little relation to what Deleuze actually argued. The machine does not read Deleuze. It reads the internet's representation of Deleuze. The situated gaze produces a plausible-sounding but philosophically hollow output, and the fluency of the presentation conceals the specificity of the position from which the claim is being made.
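The mechanism is easy to exhibit in miniature. The sketch below fits a bigram model to a fabricated stand-in for web text about Deleuze; the sentences are invented, and the point is purely structural: the model's continuations echo how the corpus talks about a thinker, weighted by how often, with no access to anything the thinker wrote.

```python
import random
from collections import Counter, defaultdict

# Fabricated stand-in for "the internet's representation" of a philosopher.
web_text = (
    "deleuze argues that desire is productive . "
    "deleuze argues that identity is fluid . "
    "deleuze argues that everything is rhizomatic ."
).split()

# For each word, count what follows it and how often.
follows = defaultdict(Counter)
for a, b in zip(web_text, web_text[1:]):
    follows[a][b] += 1

def continue_from(word, steps=5):
    """Sample a continuation, each step weighted by corpus frequency."""
    out = [word]
    for _ in range(steps):
        options = follows[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_from("deleuze"))
# e.g. "deleuze argues that identity is fluid": a fluent echo of the
# corpus's discourse, not a reading of any source text.
```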
The ethical implication is that evaluating AI output requires what Donna Haraway calls accountability — the disciplined practice of tracing the conditions under which knowledge is produced and remaining open to perspectives the dominant system renders invisible. This is not a technical fix. It is a cognitive practice, and it requires exactly the kind of situated knowledge on the user's side that the machine's gaze lacks.
The framework of the machine's gaze has been developed across computer vision, natural language processing, and AI ethics over the past decade, drawing directly on Haraway's 1988 Situated Knowledges essay. Key contributions include work by Kate Crawford, Ruha Benjamin, and Safiya Noble, alongside the 2018 Gender Shades study by Joy Buolamwini and Timnit Gebru, which documented racial and gender performance gaps in commercial facial analysis systems.
The gaze is constituted by data. What the machine sees is determined by what it has been shown, weighted by how often.
The gaze is constituted by design. Architectural and alignment choices shape what kinds of outputs the machine produces and which kinds of questions it can and cannot engage, as the sketch below illustrates.
The gaze performs universality. Fluent outputs present situated knowledge as neutral information.
Bias is structural, not incidental. The patterns in the gaze reflect the patterns in the training corpus and the institutional context of production.
Accountability is the response. Evaluation requires tracing the conditions under which the gaze was constituted, not merely assessing the outputs it produces.
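Of these principles, the design one is the easiest to leave abstract, so here is a deliberately crude sketch: a deny-list gate in front of generation. The topic labels, the keyword classifier, and the canned refusal are all invented placeholders, and real alignment stacks are far more elaborate, but the structural point holds: a builder's choice, upstream of any training data, bounds which questions the system will engage.

```python
# Crude sketch of a design/alignment layer. "topic_a" and "topic_b" are
# hypothetical placeholder labels, not categories from any real system.
DENIED_TOPICS = {"topic_a", "topic_b"}

def classify_topic(prompt: str) -> str:
    """Stand-in for a learned classifier; here, a trivial keyword match."""
    for topic in DENIED_TOPICS:
        if topic in prompt.lower():
            return topic
    return "allowed"

def generate(prompt: str) -> str:
    if classify_topic(prompt) in DENIED_TOPICS:
        return "I can't help with that."  # the gaze's designed silence
    return f"[model continuation of: {prompt!r}]"

print(generate("tell me about topic_a"))    # refused by design
print(generate("tell me about gardening"))  # engaged
```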