The lolcat problem is the name this book gives to the specific form of critical dismissal that treats a creative medium as evaluable by its median output. When Shirky described the first cognitive surplus, the rebuttal that circulated among skeptics came in one word: lolcats. The word was shorthand for the argument that when you give humans tools for participation and time to deploy them, they do not produce Tolstoy or Linux; they produce misspelled captions on photographs of cats. The argument seemed to dismantle Shirky's case. His response was that the critics were evaluating the surplus by its median rather than by its distribution. The median output of any creative medium, at any point in its history, is mediocre; the value of the medium is determined by its tail. The lolcats were not counter-evidence to participatory culture; they were its experimental substrate: millions of people learning new creative habits that a fraction of them would eventually deploy for purposes of greater consequence.
The second cognitive surplus will produce its own lolcats, and their volume will dwarf anything the first surplus generated. As the cost of creation approaches zero, the quantity of creation grows past anything the existing infrastructure is prepared to absorb. The experimental substrate of the second surplus will consist of billions of personal utilities, clumsy prototypes, and tools that solve one person's problem and no one else's. The professional class will look at this output, pronounce judgment (amateur hour), and commit the same category error the original lolcat critics committed.
The lolcat problem at the scale of the second surplus is genuinely harder than it was for the first, for three reasons. First, the stakes are different. A lolcat that is poorly made wastes seconds of the viewer's time. A software application that is poorly made can cause real damage: code that mishandles user data, contains security vulnerabilities the creator does not recognize, or fails silently in contexts the creator did not test. Second, the learning trajectory is different. The experimental phase of the first surplus taught productive skills (writing, editing, video) that transferred directly to more valuable applications. The experimental phase of the second surplus teaches directive skills (prompting, evaluation, iteration) that are valuable but more dependent on domain knowledge. Third, the volume is different: when a billion people can build software through conversation, the signal-to-noise ratio deteriorates past the point where discovery mechanisms designed for the first surplus remain adequate.
The resolution of the original lolcat problem came through institutions that aggregated, curated, and evaluated contributions: Wikipedia aggregated individual edits into a comprehensive encyclopedia; GitHub aggregated individual code contributions into functional projects; Stack Overflow aggregated individual answers into a searchable knowledge base. The second surplus requires analogous institutions designed for the specific characteristics of AI-enabled creation, and these institutions have barely begun to be built.
The term lolcat as shorthand for participatory triviality circulated widely in technology discourse between roughly 2007 and 2012, during the first wave of reflection on user-generated content. Shirky's treatment of the rebuttal in Cognitive Surplus (2010) and in subsequent talks crystallized the argument about median versus distribution that this book extends.
The distributional principle. A creative medium is evaluated by its tail, not its median; critics who judge by average output commit a category error.
The experimental substrate. The mediocre bulk of participatory output is not the counter-evidence to participatory culture; it is the substrate from which extraordinary contributions emerge.
Scale-dependent stakes. The second surplus produces higher-stakes artifacts (functional software) than the first (text and media), so the institutional response must account for consequential failure modes.
The skill trajectory divergence. Lower-level implementation skills transfer across domains; higher-level directive skills are more domain-specific, making the experimental phase less automatically generative.
The discovery problem. At sufficient volume, the challenge shifts from producing participation to finding value within abundance; search and recommendation mechanisms designed for earlier phases become inadequate.