CONCEPT

AI as Boundary Object

The thesis that large language models function as reifications and boundary objects rather than as community members — and that treating them as participants produces specific, predictable pathologies.

In 2023, Wenger and collaborators published an analysis of generative AI through the communities of practice framework, arriving at a precise conclusion: AI systems are sophisticated reifications incapable of participatory engagement in communities of practice. They function as the most powerful boundary objects the organizational world has produced — translating instantaneously between community vocabularies, coordinating work that previously required human brokers, generating outputs that have the form of communal knowledge. What they cannot do is participate. They have no self-authorship, no stakes, no identity implicated in the quality of a community's work. The distinction is not academic. It determines whether AI enhances communities of practice or substitutes for them — and the substitution, when it occurs, erodes the social infrastructure of learning itself.

In the AI Story


Wenger's 2023 analysis was grounded in the three-decade theoretical framework he had developed: participation and reification as complementary processes, communities of practice as the primary sites of professional learning, the crucial role of mutual engagement and shared repertoire in generating knowledge that exceeds individual capability. AI, within this framework, is clearly a reification — training data is participation frozen into a generative system, and output is reification that mimics the form of participation.

The risk the analysis identified is that communities will mistake the reification for participation, accept AI-generated outputs as though they were products of genuine social engagement, and allow the reification to substitute for the participatory processes that generate meaning. The Deleuze error in The Orange Pill is the diagnostic case — an AI-generated philosophical connection that had the form of insight without its substance.

The practical implication is that AI's role in a community must be explicitly framed as reification, not participation. The builder who uses Claude should understand she is working with a high-quality reification — an artifact representing patterns extracted from millions of practitioners — and should apply the critical distance appropriate to any reified artifact. The manual is useful but not the practice. The specification is useful but not the design. The AI output is useful but not the community's collective judgment.

The institutional response emerging in 2024 and 2025 — the GSA's federal AI Community of Practice, Columbia's interdisciplinary AI community, Harvard's Digital Data Design Institute — represents an implicit recognition that the challenges of AI adoption are fundamentally social. The tool for addressing the disruption is the very social structure that the disruption threatens: a community of practice organized around the shared domain of responsible AI use.

Origin

The analysis emerged from Wenger's ongoing collaboration with the Wenger-Trayner network of practice-based learning researchers, in response to the rapid post-ChatGPT deployment of generative AI in educational and organizational settings. The collaborators recognized that existing frameworks — primarily treating AI as an information source or productivity tool — failed to capture what was distinctive about generative systems as participants in social learning.

The 2023 paper was published in the context of an intense debate among educational theorists about whether AI should be integrated into, excluded from, or carefully bounded within learning communities. Wenger's framework offered a principled basis for the last position: AI as tool within a community of practice, with explicit recognition of what it can and cannot do.

Key Ideas

AI systems are reifications. Sophisticated ones, but reifications nonetheless — frozen participation, not participation itself.

Lack of self-authorship. No identity, no stakes, no vulnerability — the constituents of participation that AI structurally lacks.

Most powerful boundary object yet built. Translates between community vocabularies faster and more consistently than any human broker.

Substitution produces erosion. When AI mediation replaces mutual engagement, the social infrastructure of learning thins.

Requires community practices around it. Critical reflection on AI outputs, collective consent about when to trust, community-level evaluation — all necessary to prevent reification from swallowing participation.

Debates & Critiques

The sharpest debate concerns whether future AI systems that are more embodied, continuously learning, and more situated in ongoing human relationships might develop functional analogs of participation that would complicate the reification/participation distinction. Wenger's framework leaves open the possibility that such systems could exist, while insisting that current systems do not and that the distinction matters for how we use what we have.

Appears in the Orange Pill Cycle

Further reading

  1. Étienne Wenger-Trayner et al., "Generative AI and communities of practice" (2023)
  2. Étienne Wenger, Communities of Practice (Cambridge, 1998)
  3. Shannon Vallor, The AI Mirror (Oxford, 2024)
  4. U.S. General Services Administration, "Federal AI Community of Practice" documentation (2020-present)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.