CONCEPT

Epistemic Justice (AI)

The fair treatment of communities as knowers—requiring AI systems to recognize diverse knowledge forms without testimonial or hermeneutical violence.

Epistemic justice, adapted by Ramesh Srinivasan from philosopher Miranda Fricker's framework, demands that technology treat people fairly as producers and holders of knowledge. Testimonial injustice occurs when AI training data systematically excludes or discounts the knowledge of marginalized communities—indigenous agricultural practices absent from models trained on Western scientific literature, for instance. Hermeneutical injustice occurs when AI's conceptual categories cannot accommodate non-Western ways of organizing knowledge—when relational knowledge systems must be forced into taxonomic structures that strip away the connections constituting their meaning. The AI age intensifies both forms at global scale.

In the AI Story

Miranda Fricker's 2007 Epistemic Injustice identified two forms of unfair treatment of people as knowers. Testimonial injustice occurs when prejudice causes a hearer to assign deflated credibility to a speaker's word. A doctor dismissing a Black patient's pain reports because of racial stereotypes commits testimonial injustice. Hermeneutical injustice occurs when a gap in collective interpretive resources prevents someone from making sense of their own experience—as when sexual harassment had no name and women lacked the conceptual tools to articulate what they were suffering. Srinivasan's extension recognizes that AI systems, trained on partial datasets reflecting partial worldviews, systematically commit both forms at unprecedented scale.

Consider an AI system trained on Western agricultural science and asked to recommend crop management for a farm in Tamil Nadu. The training data reflects research from American and European institutions—controlled experiments, chemical inputs, mechanized production, monoculture practices. The sophisticated agricultural knowledge Tamil farmers have developed through generations of careful observation—intercropping patterns specific to local soil conditions, integrated pest management using locally adapted species, water conservation techniques evolved over centuries—is largely absent from the training corpus. The system's recommendations may be scientifically valid within Western agronomy yet ecologically disastrous in the local context because they ignore knowledge the training data never captured. This is testimonial injustice: the farmer's knowledge is treated as non-knowledge because it was never encoded in forms the system can process.

Hermeneutical injustice in AI operates more subtly. An AI health system deployed in a community where wellness is understood relationally—as connection to family, land, ancestors, and community—cannot recognize this understanding because its categories are biomedical. It has slots for 'symptoms,' 'diagnoses,' 'treatments,' 'outcomes'—the atomistic framework of Western medicine. The relational dimensions that the community considers central to health have no corresponding categories. The system can process health data only by stripping away the relational context, producing recommendations formally correct within Western biomedicine and culturally incomprehensible within the community's own framework. The community lacks the conceptual resources within the system to articulate what the system is failing to capture.
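
The category problem can be made concrete with a minimal sketch. Everything in the Python illustration below is hypothetical—the HealthRecord type, its fields, and the intake routine stand in for no deployed system—but the structural point holds: whatever the schema has no slot for cannot enter the record at all.

    from dataclasses import dataclass, field

    @dataclass
    class HealthRecord:
        """Hypothetical biomedical schema: only atomistic slots exist."""
        symptoms: list = field(default_factory=list)
        diagnoses: list = field(default_factory=list)
        treatments: list = field(default_factory=list)
        outcomes: list = field(default_factory=list)

    def intake(statement: str, record: HealthRecord) -> None:
        """Sort a community health statement into the schema's slots.

        A relational claim ("I am unwell because I am cut off from my
        land") has no slot of its own, so it is dropped or mislabeled.
        The loss happens at the schema, before any model runs.
        """
        if "pain" in statement or "fever" in statement:
            record.symptoms.append(statement)
        else:
            pass  # no category for relational wellness: silently discarded

    record = HealthRecord()
    intake("I am unwell because I am cut off from my land", record)
    print(record.symptoms)  # prints []: the statement left no trace

The injustice in this sketch is not a biased weight or a faulty inference; it is the type definition itself, which decides in advance what can count as health information.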

The epistemological architecture of AI—what counts as data, how knowledge is organized, what constitutes a valid inference—reflects Western Enlightenment commitments to universal categories, taxonomic organization, and explicit propositional knowledge. These are powerful epistemological tools. They are not the only ones. Indigenous knowledge systems often organize knowledge holistically rather than taxonomically, encode understanding in narrative and practice rather than propositions, and locate authority in community consensus rather than individual expertise. Including this knowledge in AI systems requires not merely adding data points but fundamentally rethinking the ontological frameworks that organize what the system treats as knowledge. Without this rethinking, inclusion becomes assimilation—the knowledge is included only by being stripped of the features that made it distinctive.
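
The assimilation mechanism can be sketched the same way. In the illustration below, knowledge_graph and ingest_as_taxonomy are invented for this example; the sketch assumes relational knowledge arrives as nodes plus edges, and shows how a pipeline that ingests only the nodes "includes" the knowledge while discarding exactly the relations that constituted it.

    # Hypothetical relational knowledge: the meaning lives in the edges
    # (what is planted with what, when, on which soil), not in the nodes.
    knowledge_graph = {
        "nodes": ["millet", "pigeon pea", "monsoon onset", "red soil"],
        "edges": [
            ("millet", "intercropped_with", "pigeon pea"),
            ("millet", "sown_after", "monsoon onset"),
            ("pigeon pea", "suited_to", "red soil"),
        ],
    }

    def ingest_as_taxonomy(graph: dict) -> list:
        """Flatten relational knowledge into isolated records.

        Every node is 'included' in the dataset, but the edges,
        the part the community would call the knowledge, are gone.
        """
        return [{"item": node, "category": "crop_practice"}
                for node in graph["nodes"]]

    print(ingest_as_taxonomy(knowledge_graph))
    # Four rows survive; none says anything about intercropping,
    # timing, or soil. Inclusion became assimilation.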

Origin

The concept of epistemic justice originated with Miranda Fricker's 2007 Epistemic Injustice: Power and the Ethics of Knowing, which synthesized feminist epistemology, virtue ethics, and social philosophy into a framework for understanding unfair treatment of people as knowers. Ramesh Srinivasan's application to technology emerged from his fieldwork with indigenous communities in the early 2000s, where he documented how digital systems systematically failed to recognize or accommodate non-Western knowledge forms. His collaborations with Zuni Pueblo, Zapotec and Mixtec communities in Oaxaca, and indigenous groups in Bolivia provided empirical cases of both testimonial and hermeneutical injustice operating through technological systems. The framework became central to the indigenous data sovereignty movement and the development of the CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) for indigenous data governance.

Key Ideas

Testimonial injustice at scale. AI training data's systematic underrepresentation of non-Western knowledge means that entire epistemological traditions are treated as non-knowledge—present in human communities but absent from the machine's world-model.

Hermeneutical violence through categories. When AI's organizational frameworks cannot accommodate relational, narrative, or practice-embedded knowledge, communities lose the ability to articulate their own understanding within the system.

The extraction-erasure mechanism. Attempts to include indigenous knowledge in AI often require stripping it of contextual, relational, and procedural dimensions—including the knowledge only by destroying what made it valuable.

CARE Principles as structural response. Indigenous data sovereignty frameworks establish community authority over data collection, use, and benefit—challenging the AI industry's treatment of all digitized knowledge as available raw material.

Epistemic humility as design requirement. Recognizing the amplifier's limits—the knowledge forms it cannot process—is the precondition for building systems that do not claim comprehensive authority while operating from partial knowledge.

Further reading

  1. Miranda Fricker, Epistemic Injustice: Power and the Ethics of Knowing (Oxford University Press, 2007)
  2. Ramesh Srinivasan, Beyond the Valley (MIT Press, 2019)
  3. Tahu Kukutai and John Taylor, eds., Indigenous Data Sovereignty (ANU Press, 2016)
  4. Ruha Benjamin, Race After Technology (Polity, 2019)
  5. Ramesh Srinivasan, 'Whose Artificial Intelligence?,' The Economist (2018)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.