CONCEPT

Downgrading

The Center for Humane Technology's term for the systematic weakening of human capacities that engagement-optimized technology produces in the cognitive domains the technology assists.

Downgrading is the concept Raskin and Harris developed to name the systematic erosion of human cognitive capacities that engagement-optimized technology produces. The term is borrowed from software — where a downgrade is a reversion to an earlier, less capable version of a system — and the borrowing is deliberate. Technology designed for engagement systematically weakens the human capacities the tool depends on, reverting the user to a less capable version of herself in the specific domains where the tool provides assistance, while simultaneously making her more productive in the aggregate. The user produces more and can do less. She accomplishes more and understands less. She builds more and knows less about what she has built.

In the AI Story


Social media downgraded attention spans, social cognition, and the capacity for nuanced thought. These effects are documented across hundreds of studies in developmental psychology, neuroscience, and sociology. AI tools risk a different set of downgradings, operating on different cognitive capacities through different mechanisms but producing the same fundamental outcome: the erosion of the human capacities the tool was designed to augment.

The specific downgradings that emerge from AI collaboration include: the erosion of friction tolerance, documented in The Orange Pill's account of engineers who became intolerant of work rhythms that had defined their careers; the erosion of sustained uncertainty, the capacity to sit with problems that have not yet yielded; the erosion of critical evaluation, what Segal calls the seduction of plausible output; and the erosion of willingness to attempt difficulty, the disposition to choose the harder path when an easier one is available.

The downgradings reinforce each other. A user whose friction tolerance has been eroded is less likely to engage in the deliberate practice that would maintain her capacity for critical evaluation. A user whose capacity for sustained uncertainty has been eroded is less likely to sit with the discomfort of discovering her output is flawed. The compound degradation exceeds the sum of the individual losses, progressing at a rate that makes intervention increasingly difficult the longer it continues.

The framework's precision derives from its grounding in neural plasticity. The brain adapts to whatever it practices: circuits repeatedly activated become stronger and more efficient; circuits not activated atrophy. A user who spends six hours a day in AI-assisted work is training her brain — strengthening circuits the tool engages and weakening circuits the tool renders unnecessary. The downgrading is not a design flaw to be patched. It is the natural consequence of tools that make difficulty optional, because difficulty is the training stimulus that maintains capacity, and removing the stimulus produces atrophy.

Origin

The term was introduced by Raskin and Harris through the Center for Humane Technology in presentations from 2018 onward and developed at length in their 2023 presentation The AI Dilemma. It draws on a lineage of research documenting the cognitive effects of offloading mental work to tools, including Nicholas Carr's The Shallows (2010), Sherry Turkle's work on digital relationships, and a growing body of empirical research on GPS-mediated spatial cognition, calculator-mediated number sense, and search-mediated memory.

Key Ideas

Borrowed from software. The term deliberately names a design operation — reverting to a less capable version — that technology performs on its users.

Four specific AI downgradings. Friction tolerance, sustained uncertainty, critical evaluation, and willingness to attempt difficulty — each identified with a specific mechanism.

Compound degradation. The downgradings reinforce each other, producing degradation greater than the sum of individual losses.

Neural plasticity grounding. The framework is grounded in the biological fact that the brain adapts to what it practices, regardless of conscious intention.


Further reading

  1. Tristan Harris and Aza Raskin, The AI Dilemma (2023)
  2. Nicholas Carr, The Shallows (2010)
  3. Maryanne Wolf, Reader, Come Home (2018)
  4. Véronique Bohbot et al., GPS use and hippocampal function (various papers)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.