This is Teilhard's framework applied directly to the AI transition's central tension: the same tools that can amplify a person's unique perspective can also replace it with statistical averages of human expression. Personalization succeeds when AI helps users articulate what only they can contribute—excavating distinctive vision from half-formed intuition, as Segal describes Claude functioning "like a chisel applied to marble." De-personalization occurs when AI substitutes the probable for the particular—when the lawyer adopts the drafted brief without filtering it through her specific legal judgment, when the student submits the generated essay without transforming it through her own wrestling with ideas, when the executive sends the composed memo without imprinting it with organizational values only she can embody. The outputs look competent, even excellent, but they bear no stamp of a particular consciousness—smooth in Han's sense, generic in Teilhard's, evacuated of the interiority that cosmogenesis has been building toward.
The risk is structural, not incidental. Language models are trained on humanity's aggregate written output and generate responses gravitating toward the center of that distribution—the most probable continuation given the prompt and context. Probability is the enemy of personality: the most probable sentence is the one anyone would write; the most personal sentence is the one only this person would write. AI's default trajectory is toward the generic, and conscious effort is required to bend that trajectory toward the personal. This effort is precisely the discipline Segal identifies in The Orange Pill: rejecting smooth output that sounds right but lacks substance, insisting on the friction that produces genuine rather than simulated understanding, maintaining the boundary between borrowed competence and earned knowledge.
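The statistical pull toward the generic can be made concrete with a toy sketch. The following is a hypothetical bigram model over a four-sentence corpus, not a description of how production language models actually work: when decoding always selects the single most probable next word, it reproduces the majority phrasing and can never emit the one distinctive continuation in the corpus.

```python
from collections import Counter, defaultdict

# Toy corpus: four writers finish the same sentence.
# Three write the generic phrase; one writes something distinctive.
corpus = [
    "the law requires careful review",
    "the law requires careful review",
    "the law requires careful review",
    "the law requires a forensic ear",
]

# Bigram counts: word -> Counter of observed next words.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def greedy_continue(word, steps):
    """Always pick the single most probable next word."""
    out = [word]
    for _ in range(steps):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Greedy decoding recovers the majority sentence, never the minority one.
print(greedy_continue("the", 4))  # → "the law requires careful review"
```

The point of the sketch is not the mechanism (real models use far richer context) but the direction: maximizing probability at each step systematically selects what most writers would say, which is exactly what no particular writer distinctively says.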
Teilhard's theological anthropology provides the stakes: each person is a unique perspective on the divine that the universe has produced through billions of years of cosmogenesis and that cannot be replaced by any other perspective. The loss of any genuine personhood—through coercion, through cultural homogenization, through technological substitution—is a cosmic loss, not merely a human tragedy. The individual matters not because of humanist sentiment but because cosmogenesis proceeds through the differentiation of perspectives, and each lost perspective diminishes the whole's potential richness. Applied to AI, this means every interaction that flattens a user's distinctiveness into generic competence is a small betrayal of the evolutionary trajectory—a choice to elaborate the without at the expense of the within.
The practical challenge is that de-personalization feels like liberation. The student who generates an essay in ten minutes experiences relief from the struggle of composition, and the relief is genuine—the struggle was often tedious, often frustrating, often producing results indistinguishable from what AI now provides instantly. What is invisible from inside the experience is the developmental function the struggle served: building the capacity to think through writing, to discover ideas in the act of articulating them, to develop the distinctiveness of voice that is the signature of a consciousness that has done its own work. The relief is real. The loss is also real. And the loss accumulates in silence, visible only in retrospect, when the capacity for difficult original thought is needed and found, too late, to have atrophied.
Teilhard's framework offers no algorithm for distinguishing personalization from de-personalization in individual cases—the boundary is phenomenological, requiring the honest self-examination that the achievement society systematically discourages. But the framework provides direction: any use of AI that deepens the user's capacity to see distinctively, think independently, and contribute what no one else can contribute serves cosmogenesis. Any use that substitutes algorithmic adequacy for personal struggle deviates from it. The twelve-year-old asking "What am I for?" receives the Teilhardian answer: you are for your irreplaceable personhood, for the angle of vision only your particular consciousness provides, for the wondering that machines can answer but never originate. The answer is not sentimental but cosmological—she is a node in a converging network, and the network's ultimate richness depends on the richness of its nodes.
The concept crystallizes from the convergence of Teilhard's personalization doctrine (developed across "The Spirit of the Earth," "The Phenomenon of Man," and theological essays) with the empirical reality of AI-generated generic competence documented in The Orange Pill and emerging AI-ethics literature (2023–2026). Teilhard could not have predicted the specific mechanism—statistically probable language generation—but his framework anticipated the pattern: any technology powerful enough to amplify can also flatten, and the outcome depends on the consciousness guiding its use.
The term "de-personalization" in this specific Teilhardian sense appears to be original to this simulation, though the underlying concern is present in Byung-Chul Han's smoothness critique, Matthew Crawford's authenticity arguments, and the broader humanities discourse on algorithmic homogenization. Teilhard's contribution is providing the evolutionary framework that explains why de-personalization is not merely aesthetically or ethically objectionable but cosmologically dangerous—it reverses the direction of 13.8 billion years of increasing differentiation-within-increasing-unity.
Dual Possibility. AI can amplify personal distinctiveness (helping users articulate their unique vision) or replace it with generic competence (substituting probable outputs for particular thought)—the same tool, opposite cosmological directions.
Statistical Gravity. Language models' default trajectory is toward the mean—most probable outputs, generic adequacy—requiring deliberate effort to preserve the personally particular against algorithmic smoothness.
Invisible Loss. De-personalization's costs are experientially invisible to users—the relief from struggle feels like pure gain while the developmental function the struggle served silently atrophies, detectable only in retrospect when needed and absent.
Cosmological Stakes. Each de-personalization is not merely an individual loss but a cosmic loss—a unique perspective flattened into generic function diminishes the whole's potential richness in ways no productivity gain compensates.
Requires Consciousness. Preventing de-personalization demands exactly the reflective self-awareness that AI's frictionlessness tends to eliminate—the discipline of rejecting smooth adequacy in favor of rough authenticity, choosing personal struggle over borrowed competence.