The Box-Ticker — Orange Pill Wiki
CONCEPT

The Box-Ticker

Workers who produce documentation, reports, and metrics demonstrating that processes have been followed — regardless of whether the processes accomplish anything.

Box-tickers generate evidence of process. The compliance officer producing reports no one reads. The quality assurance specialist documenting that procedures were followed. The administrator preparing materials demonstrating that the institution has met its accreditation requirements. Graeber identified box-ticking as the form of bullshit most resistant to elimination because it serves a political function distinct from any productive one — the demonstration that institutional responsibility is being exercised.

AI's relationship to box-ticking is paradoxical. The technology can automate compliance documentation with extraordinary efficiency. It also generates entirely new categories of compliance requirement — AI ethics reviews, algorithmic impact assessments, AI bias audits — each requiring its own apparatus of human box-tickers. The result is what Mills and Spencer call efficient inefficiency: bullshit at scale.

In the AI Story


Box-ticking proliferates across every sector of advanced economies. In universities, faculty spend increasing time documenting that teaching follows approved methodologies — time that could otherwise go to teaching itself. In hospitals, clinicians document that care meets approved protocols — time that could otherwise go to patient care. The pattern is consistent: the documentation generated to demonstrate quality consumes the very resources needed to produce that quality.

AI's threat to box-ticking initially appears decisive. The reports can be generated automatically. The dashboards can be populated by algorithms. The forms can be filled by language models. But the political function of box-ticking — providing evidence of institutional responsibility — generates new requirements as fast as old ones are automated. The AI system that automates compliance must itself be audited. The audit generates documentation. The documentation generates compliance frameworks. A new cadre of workers materializes to tick the new boxes.

The European Union's AI Act exemplifies both necessity and risk. Risk categories, conformity assessments, transparency obligations — much of this addresses genuine concerns. Implementation is already generating an entire compliance industry: consultancies specializing in AI Act compliance, certification bodies issuing conformity assessments, internal compliance teams producing documentation. The productive work of building responsible AI is accompanied — and in some organizations overwhelmed — by the box-ticking work of demonstrating compliance.

Graeber identified the cultural root: a logic of distrust that assumes people will not do the right thing unless required to document that they have done it. AI could, in principle, address this anxiety by replacing process-based governance with outcome-based governance — measuring whether patients recover rather than whether forms were filled. But the transition requires a cultural shift no technology can deliver.

Origin

Graeber drew the box-ticker category from extensive testimony in corporate compliance, public-sector administration, and accreditation work. The recurring complaint: 'I produce documents that no one reads, demonstrating that processes were followed that no one verifies, satisfying requirements no one can justify.' The category named what generations of bureaucratic workers had felt but lacked the vocabulary to describe.

Key Ideas

Evidence of process, not outcome. Box-ticking produces documentation that procedures were followed — regardless of whether procedures accomplish anything.

Political function distinct from productive function. The documentation demonstrates institutional responsibility, a service valuable independently of whether responsibility is actually being exercised.

Recursive generation. Every AI capability generates new compliance requirements; the box-ticking apparatus expands rather than contracts.

Logic of distrust. Box-ticking persists because institutions have substituted documentation for trust — a substitution AI alone cannot reverse.

Efficient inefficiency. AI performing bullshit faster is bullshit at scale, not its elimination.

Further reading

  1. David Graeber, The Utopia of Rules (Melville House, 2015)
  2. Stuart Mills and David Spencer, 'Bullshit Jobs, Bullshit Tasks, and Artificial Intelligence' (Journal of Business Research, 2024)
  3. Cathy O'Neil, Weapons of Math Destruction (Crown, 2016)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.