Pseudo-Attempts — Orange Pill Wiki
CONCEPT

Pseudo-Attempts

The structural problem of AI-era creative production: outputs that take the form of creative attempts while lacking the evaluative engagement the equal-odds baseline requires to convert volume into quality.

The pseudo-attempt is the diagnostic category Simonton's framework generates for the AI moment. The equal-odds baseline predicts that excellent work scales with creative volume — but the scaling depends on each unit of volume constituting a genuine creative attempt, involving generation, evaluation, and iteration. When AI makes production trivially easy while leaving creation as hard as it ever was, the risk is that volume increases without the engagement that gives the baseline its predictive power. The lottery tickets are counterfeit. They look real. They occupy space. But they do not carry the probability of genuine tickets because the creative process that loads the probability was never engaged.
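The conditional prediction can be made concrete with a toy simulation. Everything here is an illustrative assumption, not Simonton's model: a genuine attempt is sketched as a generate-evaluate-iterate loop that keeps the best of several candidates, a pseudo-attempt as accepting the first draw, and "excellent" as an arbitrary quality threshold.

```python
import random

random.seed(42)

EXCELLENT = 0.95  # illustrative quality threshold for a "hit"

def genuine_attempt(candidates=5):
    # Generation, evaluation, iteration: draw several candidates
    # and keep the one the creator judges best.
    return max(random.random() for _ in range(candidates))

def pseudo_attempt():
    # Accept the first workable output: one draw, no iteration.
    return random.random()

def hits(attempt, n):
    # Count units of output that cross the excellence threshold.
    return sum(attempt() > EXCELLENT for _ in range(n))

base = 50            # pre-AI volume a creator can genuinely engage with
amplified = 20 * base  # AI-amplified production volume

print("genuine attempts, base volume:", hits(genuine_attempt, base))
print("genuine attempts, 20x volume: ", hits(genuine_attempt, amplified))
print("pseudo-attempts, 20x volume:  ", hits(pseudo_attempt, amplified))
```

Under these toy assumptions a genuine attempt hits with probability 1 − 0.95⁵ ≈ 0.23 and a pseudo-attempt with probability 0.05, so at the same amplified volume the engaged process yields several times the excellent work: multiplying volume multiplies hits only for the process that loads the probability.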

In the AI Story

Hedcut illustration for Pseudo-Attempts

The distinction operates at the level of the individual creative act. A developer using AI to explore problems that interest her — generating multiple architectural approaches, evaluating outputs against her judgment, iterating until solutions satisfy her aesthetic sense — is producing genuine attempts. Her output involves full creative engagement, and the equal-odds baseline predicts her rate of excellent work will scale with her output. A developer using the tool to clear a backlog — describing tasks, accepting first workable outputs, moving to the next item — is producing pseudo-attempts. Her production has increased twentyfold. Her creative engagement per unit has decreased proportionally. The baseline does not predict twenty times the excellence for her output; it predicts twenty times the adequate.

The pseudo-attempt framework explains why the Berkeley study found mixed empirical results in AI adoption. Some of the expanded output involved genuine creative engagement — designers writing code for the first time, engineers exploring architectural problems they had never had bandwidth to consider. Some of it was queue-clearing at scale. Aggregate statistics cannot distinguish between these categories, which is precisely why Simonton's framework matters: it provides the diagnostic vocabulary for asking whether a given instance of AI-amplified output constitutes genuine creative engagement or mere production.

The framework extends beyond individual creators to organizational and cultural scale. When AI tools are deployed in ways that pressure workers toward production velocity rather than creative engagement, the aggregate output of the organization consists disproportionately of pseudo-attempts. The volume multiplies. The creative value does not. The organization appears to be doing more creative work because its output has increased, but the output's quality distribution reflects the equal-odds baseline operating on pseudo-attempts rather than genuine ones — twenty times the adequate, not twenty times the excellent.

The distinction also clarifies what institutions must protect to preserve creative productivity in AI environments. Not the tool. Not the volume. The engagement — the organizational and cultural conditions that ensure each unit of output involves genuine creative investment. This requires structures that resist the optimization pressure to convert engagement into transaction, that reward evaluative discipline over raw velocity, that preserve the friction the baseline requires for its mechanics to operate.

Origin

The pseudo-attempt concept is an extrapolation from Simonton's equal-odds framework applied to the AI transition. The framework itself does not use the term, but the diagnostic category follows necessarily from the baseline's conditional prediction: if quality scales with quantity only under the condition of genuine creative engagement, then quantity without engagement produces a categorically different output distribution — pseudo-attempts rather than attempts.

The term names a failure mode that Simonton's framework implies but that could not arise at scale before AI. Pre-AI, the opportunity constraint limited total output to roughly the volume a creator could genuinely engage with. AI breaks that coupling: production capacity now vastly exceeds engagement capacity, making pseudo-attempts possible at scales no previous era could generate.

Key Ideas

The baseline requires genuine engagement. Quality scales with quantity only when each unit involves real creative investment — generation, evaluation, iteration.

AI separates production from engagement. Tools that produce acceptable outputs with minimal creator engagement enable volume that does not involve genuine attempts.

Pseudo-attempts produce adequate at scale. The equal-odds baseline applied to pseudo-attempts yields twenty times the adequate, not twenty times the excellent.

Aggregate statistics hide the distinction. Output volume cannot distinguish genuine attempts from pseudo-attempts — the category requires examination of the creative process, not the creative artifact.

Institutional design determines the ratio. Organizations can structure AI deployment to preserve engagement or to convert it to transaction — the choice has first-order consequences for the quality of aggregate output.

Debates & Critiques

The framework raises empirical questions that remain unresolved: how large is the pseudo-attempt share of AI-amplified output in practice? How malleable is engagement — can workers producing pseudo-attempts shift toward genuine engagement through institutional redesign, or is the pressure toward transaction irreversible once the tool's capabilities are available? Can external evaluators distinguish genuine from pseudo-attempts by examining outputs, or does the distinction require knowledge of the creative process? These questions matter for the diagnostic utility of the framework: if pseudo-attempts cannot be identified empirically, the framework describes a theoretical concern but offers no actionable guidance.

Appears in the Orange Pill Cycle

Further reading

  1. Simonton, D.K. (2004). Creativity in Science: Chance, Logic, Genius, and Zeitgeist. Cambridge University Press.
  2. Ye, X.M. & Ranganathan, A. (2026). AI Doesn't Reduce Work — It Intensifies It. Harvard Business Review.
  3. Segal, E. (2026). The Orange Pill. Chapter 11: What the Data Shows.
  4. Simonton, D.K. (1988). Scientific Genius. Cambridge University Press.
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.