Generated vs. Earned Results — Orange Pill Wiki
CONCEPT

Generated vs. Earned Results

The diagnostic distinction between outputs produced through the practitioner's developmental engagement and outputs delivered without that engagement — identical on the page, decisive in what the person who holds them knows.

The distinction between generated and earned results is Suchman's analytical distinction between outputs produced through a practitioner's sustained, situated engagement with a problem and outputs delivered without that engagement. The distinction is not in the result — two pieces of code, two legal briefs, two diagnoses may be indistinguishable in content, structure, and quality. The distinction is in the person who holds the result. The earned result comes with the residue of the process that produced it: situated knowledge of why the result works, where it is fragile, what the territory actually looks like. The generated result comes only with the result. In the current moment, organizations optimize for outputs while the difference between generated and earned accumulates quietly as institutional fragility.

In the AI Story


The distinction sharpens the moral and epistemic stakes of AI-assisted production. When a senior engineer earns a fix through forty hours of debugging, the code is the visible product. The invisible product is the engineer herself — a more capable practitioner than the one who began the session, equipped with situated knowledge about this system that no documentation could capture. The same engineer, receiving the same fix from Claude in fifteen minutes, possesses the same visible product without the invisible one. She knows the code works; she does not know why it works in the way the earning process would have taught her.

The distinction maps directly onto The Orange Pill's metaphor of geological deposition. Segal's intuition that every hour of debugging deposits a thin layer of understanding is, in Suchman's framework, a description of how earning differs from receiving. The deposition occurs through situated action — through the practitioner's engagement with the specific resistances of the problem. Each unexpected error, each failed hypothesis, each moment when the code refuses to behave as expected is a moment of improvisation that deposits knowledge. Claude skips the deposition; the surface looks the same, but the knowledge beneath it is thinner.

The distinction has precedents across the history of automation. Aviation automation produced pilots with excellent procedure-following skills and diminished ability to handle situations the automation could not address — leading regulators such as the FAA to encourage hand-flying, deliberately reintroducing friction to maintain situated competence. Computerized legal research produced faster access to precedent while eroding the intimate familiarity with the case law that manual research built. Diagnostic imaging produced more accurate diagnoses while eroding physical-examination skills. In each case, generated results were superior by output metrics, while institutional fragility accumulated beneath those metrics.

The stakes become lethal in Suchman's analysis of algorithmic targeting systems. The generated target recommendation and the earned intelligence assessment may converge on the same nomination. But the analyst who earned her assessment through months of studying a specific network possesses situated knowledge of which patterns are robust and which are artifacts of noise. The operator who accepts the generated recommendation without that background treats a generated result as an earned one, with consequences measured in civilian lives when the pattern breaks down.

Origin

The distinction is a natural extension of Suchman's situated action framework to the question of outputs. It draws on her earlier work on how practitioners develop the judgment necessary to function in open worlds, and on the broader STS literature on tacit knowledge, apprenticeship, and the conditions for expertise. It has been sharpened in her recent work on military AI, where the stakes of misclassifying generated results as earned ones are especially clear.

Key Ideas

Same result, different holder. The outputs may be identical; the epistemic position of the person who holds them is categorically different.

The invisible product. Earning produces two things: the visible result and the changed practitioner. Generation produces only the first.

Understanding is developmental. The knowledge that distinguishes robust solutions from plausible ones accumulates through the process of producing solutions, not through the receipt of them.

Organizational blindness. Output metrics cannot detect the difference. The dashboard shows improvement; the knowledge base erodes invisibly until system failure makes the erosion visible.

The template repeats. Aviation, medicine, law, intelligence — every domain has encountered the pattern. AI intensifies it by automating more of the formative friction across more of the knowledge economy.


Further reading

  1. Lucy Suchman, Human-Machine Reconfigurations (Cambridge University Press, 2007)
  2. Lisanne Bainbridge, "Ironies of Automation" (Automatica, 1983)
  3. K. Anders Ericsson et al., The Cambridge Handbook of Expertise (Cambridge University Press, 2006)
  4. Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.