By Edo Segal
Gessner's catalog was obsolete before the ink dried.
That sentence hit me harder than any technical benchmark I encountered in 2025. In 1545, the Swiss naturalist Conrad Gessner tried to list every book ever printed in Latin, Greek, and Hebrew. He got to about ten thousand titles. The printing presses kept running. His life's work was out of date the moment he finished it.
I know that feeling. I lived it in Trivandrum, watching twenty engineers produce more in a week than they could have produced in months. I lived it writing The Orange Pill with Claude, generating and discarding and evaluating and discarding again, the discard pile growing faster than the finished pages. I lived it every time I opened my laptop and felt the specific exhaustion that comes not from doing too much but from judging too much.
Every book in this series hands you a lens. Csikszentmihalyi gave us flow. Han gave us the warning about smoothness. The Luddites gave us the cost of refusal. Ann Blair gives us something none of them could — the six-hundred-year view. The view that says this has happened before. Not vaguely, not as a comforting analogy, but structurally. The flood of print after Gutenberg produced the same disorientation, the same collapse of filtering mechanisms, the same desperate scramble to build tools for navigating abundance that we are living through right now.
But Blair does not offer comfort. She offers precision. Her research shows exactly what happened in the gap between the old filters breaking and the new ones being built. People suffered in that gap. Whole generations bore the cost of a transition whose benefits would arrive later, for other people. The tools that eventually resolved the crisis — the index, the encyclopedia, the book review, the scholarly journal — were invented by specific humans making deliberate choices. They did not emerge automatically from the technology that created the problem.
That is the lesson that matters most. The curatorial practices we need for the AI age will not build themselves. Someone has to forge them. Someone has to teach them. Someone has to insist that abundance without judgment is just noise at higher volume.
Blair gave me a word I did not have before: iudicium. The Renaissance humanists' term for the cultivated capacity for judgment that no rule can capture. It is the muscle I have been trying to describe throughout The Orange Pill — the thing that makes you worth amplifying. Blair showed me it has a name, a history, and a pedagogy that is six centuries old.
The flood is here. The tools are waiting to be forged.
-- Edo Segal ^ Opus 4.6
Ann Blair (1961–present) is an American historian of early modern European intellectual culture and the Carl H. Pforzheimer University Professor at Harvard University. Educated at Harvard and Princeton, she returned to Harvard's faculty in 1996 and has taught there since. Her landmark work, Too Much to Know: Managing Scholarly Information before the Modern Age (2010), traces the history of information overload from antiquity through the early modern period, demonstrating that the sense of having too much to read and process is not a modern phenomenon but a recurring structural condition accompanying every major expansion of information technology. Blair's research on commonplace books, florilegia, encyclopedias, and reference works has revealed the sophisticated curatorial practices that scholars developed to navigate abundance — practices she argues are the ancestors of modern information management. Her concept of "infolust," the culturally driven appetite for comprehensive knowledge that feeds and is fed by each new technology, reframes the relationship between tools and cognitive overload. Blair's work has become an essential reference point for historians, librarians, information scientists, and increasingly for technologists grappling with the implications of AI-generated content at scale.
In 1545, the Swiss naturalist Conrad Gessner published his Bibliotheca Universalis, an attempt to catalog every book ever printed in Latin, Greek, and Hebrew. The project was both heroic and doomed. Gessner listed approximately three thousand authors and roughly ten thousand titles, and the catalog was out of date before the ink dried. New books appeared faster than any individual could track them, let alone read them. Gessner described the situation in his preface with a mixture of scholarly determination and something close to despair: the abundance of books had become "confusing and harmful" to scholarship, and without systematic methods of navigation, the flood of print threatened to bury the very knowledge it was supposed to disseminate.
Ann Blair, the Harvard historian whose career has been devoted to understanding how early modern scholars managed exactly this crisis, has documented Gessner's predicament as one instance of a pattern that extends across centuries. The pattern is deceptively simple: every major expansion of the information supply produces a corresponding sense of overload, and every episode of overload eventually produces new tools and practices for managing the abundance. The tools change with each episode. The underlying dynamic does not. What the printing press was to the sixteenth century — a technology that expanded the supply of available text so dramatically that existing methods of navigation became inadequate — artificial intelligence is to the twenty-first. The content of the crisis is different. The structure is the same.
Blair's framework, developed across decades of archival research and articulated most fully in her 2010 study Too Much to Know: Managing Scholarly Information before the Modern Age, rests on a historical observation that the contemporary discourse about AI has largely failed to absorb: the feeling that there is too much to know is not a modern affliction. It is a recurring structural condition that has accompanied every significant increase in the technologies of textual production — from the accumulation of manuscript rolls in the ancient Library of Alexandria, through the multiplication of codices in late medieval monasteries, to the explosion of printed books in the decades after Gutenberg. Each expansion produced complaints that sound startlingly familiar to anyone who has spent time in the contemporary discourse about AI-generated content. There were too many books, the quality was uneven, the old methods of determining what was worth reading had broken down, and the pace of production outstripped any individual's capacity to keep up.
The printing press is the episode Blair has studied most closely, and the one whose structural parallels to the AI moment are most instructive. Before Gutenberg, a book was a rare and expensive artifact. The labor of copying ensured that only texts judged worthy of reproduction would be reproduced. The expense of production ensured that only wealthy individuals and institutions could accumulate substantial collections. These constraints functioned as a quality filter: if someone had invested the considerable resources required to copy a text by hand, the text was probably worth reading. The rarity of the book conferred a presumption of authority on its contents.
The press destroyed this economy within decades. After Gutenberg, a book was a common, cheap, and potentially unreliable object. The economic barrier to publication dropped so sharply that the filtering function of cost evaporated. Anyone with access to a press and paper could produce a book, and many did, regardless of whether the content merited wide circulation. The humanist scholar Erasmus, writing in the early sixteenth century, complained that printers published whatever came to hand, filling the world with "stupid, ignorant, slanderous, scandalous, raving, irreligious and seditious books." The complaint was not about the printing press itself. It was about the collapse of the filtering mechanisms that manuscript culture's economics had provided. The press produced abundance. Abundance overwhelmed the existing methods of evaluation. The reader could no longer assume that a printed book was worth reading simply because it existed in printed form. She had to evaluate it — and the evaluation required skills, standards, and institutional supports that the manuscript era had not demanded, because the manuscript era's bottleneck had been access rather than assessment.
The structural parallel to the AI transition is precise, once the surface differences are set aside. The large language model does not produce books. It produces text, code, analysis, design, strategic recommendations, creative writing — a heterogeneous flood of intellectual artifacts whose volume, speed, and superficial quality exceed any individual's capacity for evaluation. The filtering mechanisms that the pre-AI economy provided — the time and expense of producing software, the difficulty of writing competent prose, the specialized training required for professional analysis — have been dramatically reduced. A person with an idea and access to an AI assistant can now produce, in hours, artifacts that previously required teams, training, and months of labor. The result is an expansion of the intellectual supply that mirrors the expansion of the textual supply that the printing press produced — and that creates the same downstream challenge: how to evaluate the abundant output when the economic and practical barriers that once served as rough quality filters have been removed.
Edo Segal's The Orange Pill documents this expansion from the perspective of a practitioner who has lived through it. The account of training twenty engineers in Trivandrum, India, to use Claude Code — and watching each engineer's productive capacity multiply by a factor of twenty within five days — is a contemporary instance of the abundance shock that Gessner documented in 1545. The numbers are different. The medium is different. The experience is structurally identical: a technology has collapsed the cost of production, the volume of output has exploded, and the existing methods of navigation — in this case, the organizational structures, role definitions, and quality-assessment practices of a functioning software team — have been rendered inadequate to the new volume.
What Blair's framework reveals about this moment is something the contemporary discourse has been slow to recognize: the abundance does not reduce cognitive labor. It shifts it. In the manuscript era, the scholar's primary cognitive labor was acquisition — finding and accessing scarce texts. In the print era, the labor shifted from acquisition to evaluation — determining which of the many available texts were worth reading and studying. The shift was not a reduction. The evaluation labor was, in many respects, more cognitively demanding than the acquisition labor, because it required the exercise of critical judgment rather than merely the expenditure of effort. The scholar who could copy a text needed patience and a steady hand. The scholar who could evaluate a text needed discernment, taste, and a grasp of the intellectual landscape broad enough to situate any given text within the larger field.
The AI transition produces the same shift at a different level. Before AI, the knowledge worker's primary cognitive labor was execution — writing the code, drafting the document, building the model, producing the analysis. After AI, the execution labor is dramatically reduced, but the evaluation labor intensifies. The practitioner who uses AI to generate code must still evaluate whether the code is correct, elegant, maintainable, and appropriate to the project's architecture. The practitioner who uses AI to draft a document must still evaluate whether the document says what needs to be said, in the right tone, with the right emphasis, for the right audience. The practitioner who uses AI to produce an analysis must still evaluate whether the analysis addresses the right question, draws on the right evidence, and reaches conclusions that withstand scrutiny. In each case, the AI has reduced the labor of production while intensifying the labor of assessment — precisely as the printing press reduced the labor of copying while intensifying the labor of reading.
The Berkeley study that The Orange Pill cites — Xingqi Maggie Ye and Aruna Ranganathan's eight-month observational study of AI adoption in a two-hundred-person technology company — confirms this shift with empirical specificity. The researchers found that AI tools did not reduce the total amount of work. They intensified it: workers took on more tasks, expanded into domains that had previously been someone else's responsibility, and filled previously protected cognitive pauses with additional AI-assisted activity. The finding is consistent with Blair's historical analysis to a degree that deserves emphasis. Every expansion of information supply has produced this intensification. The printing press did not give scholars more leisure. It gave them more to read, more to evaluate, more to organize, and more to worry about. AI is doing the same thing to knowledge workers — not because the technology is flawed, but because the relationship between abundance and cognitive labor is structural. More supply means more evaluation, and more evaluation means more work for the human minds that must perform it.
Blair has argued, in a formulation with direct bearing on the AI moment, that overload is driven not only by technology but by what she calls "infolust" — an information obsession, a cultural appetite for more knowledge that predates and outlasts any particular technology. The printing press did not create the desire for comprehensive knowledge. The desire was already present in the encyclopedic ambitions of medieval compilers, in the universalizing aspirations of classical scholarship, in the human appetite for understanding that no amount of information has ever fully satisfied. The press fed an existing hunger. The hunger intensified because the feeding was easier, not because the hunger itself had changed.
The AI moment displays the same dynamic. The productive compulsion that The Orange Pill documents — the builder who cannot stop, the developer who works through the night not because anyone demands it but because the tool makes the next step so easy — is the contemporary expression of the same infolust that drove Gessner to attempt his impossible catalog. The tool feeds an appetite that was already there: the desire to build, to create, to externalize intention into artifact. The appetite intensifies because the feeding is frictionless, not because the appetite has changed in character. The Renaissance scholar who complained about too many books was not complaining about books. She was complaining about the mismatch between her appetite for knowledge and her capacity to absorb it. The AI-era practitioner who reports working harder than ever before, with more intensity and less rest, is not complaining about AI. She is experiencing the mismatch between her appetite for creation and her capacity to evaluate what she creates.
This mismatch is what Blair calls the information management problem: the challenge of developing techniques adequate to the abundance of available information at any given historical moment. The techniques are always specific to their moment — the commonplace book is not the same tool as the search engine — but the challenge is perennial. The challenge has been met, at every previous juncture, by the invention of new curatorial practices: new methods of selecting, organizing, and evaluating the abundant material so that human judgment can operate effectively within conditions that would otherwise overwhelm it. The printing press produced the index, the bibliography, the encyclopedia, the book review, the library catalog. Each of these was an invention — not a natural outgrowth of the technology, but a deliberate response by scholars and institutions who recognized that the abundance required new methods of navigation.
The AI transition demands analogous inventions. The curatorial practices that will allow human judgment to operate effectively within the abundance that AI creates have not yet been fully developed. They are being improvised, as The Orange Pill documents, by practitioners who are learning through trial and error how to evaluate, select, and direct the AI's output. The improvisation is necessary and valuable, but it is not sufficient. The historical pattern suggests that the full resolution of an information crisis requires not just individual ingenuity but institutional investment — the development of shared methods, professional standards, educational curricula, and cultural norms that support the curatorial labor on which the conversion of abundance into value depends.
The institutions that performed this function after the printing press — the university, the scholarly journal, the professional editor, the public library — took generations to develop. The AI transition cannot afford to wait that long, because the pace of change is faster and the scale of the abundance is larger. But the lesson of Blair's research is that the institutional development is neither automatic nor optional. The abundance will not curate itself. The tools that convert abundance into value are human inventions, developed through deliberate effort, and the effort must be made if the abundance is to produce intellectual flourishing rather than intellectual chaos.
The lesson of 1545 is not that everything will be fine. The lesson is that the crisis has a structure, the structure has been seen before, and the resolution depends on the quality of the curatorial response — the tools, practices, and institutions that human beings develop to navigate what no single mind can encompass.
John Locke kept a commonplace book whose indexing method he considered important enough to publish as a freestanding treatise. The Méthode nouvelle de dresser des recueils, published in French in 1685 and translated into English in 1706 as A New Method of a Common-Place-Book, laid out a system of extraordinary specificity. Locke's method assigned each entry to a heading based on the first letter of the keyword and the first vowel that followed it, creating a two-dimensional index that allowed rapid retrieval from a book of any size. The system was a response to a practical problem: Locke had accumulated so many excerpts from his reading that he could no longer find what he needed without a navigational apparatus. The apparatus was itself an intellectual artifact — a theory of organization embodied in a set of rules that reflected Locke's understanding of how knowledge was structured and how his own mind worked.
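Locke's rule is concrete enough to state as a small function. The sketch below is illustrative, not a reproduction of the treatise: it captures only the core rule (index each keyword by its initial letter and the first vowel that follows it), and the fallback for keywords with no later vowel is an assumption of this sketch rather than Locke's own convention.

```python
VOWELS = "aeiou"

def locke_index_key(keyword: str) -> tuple[str, str]:
    """Map a keyword to Locke's two-part index cell:
    (initial letter, first vowel that follows it)."""
    word = keyword.lower()
    initial = word[0].upper()
    # Scan the letters after the initial for the first vowel.
    for ch in word[1:]:
        if ch in VOWELS:
            return (initial, ch)
    # Assumption: keywords with no later vowel fall back to the
    # initial letter alone; Locke's treatise handles such cases
    # with conventions not reproduced here.
    return (initial, "")
```

So "Epistola" files under the cell (E, i) and "Justice" under (J, u) — a two-dimensional grid of roughly a hundred cells, each locatable at a glance, which is what made retrieval fast regardless of how large the book grew.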
The commonplace book was the central information management technology of the early modern period, and Ann Blair's research has demonstrated that its significance extends far beyond the history of note-taking. The commonplace book was an epistemological practice — a way of knowing that shaped the knowledge it produced. The scholar who kept one was not passively transcribing passages from books. She was constructing a personal knowledge architecture, making decisions at every stage that reflected and reinforced her intellectual priorities. The decisions were cumulative: over months and years, the commonplace book became a mirror of the scholar's mind, revealing not merely what she had read but what she had judged worth remembering — and the distinction between the two was the scholar's intellectual identity made material.
Blair has identified three features of the practice that are essential to understanding its intellectual function. First, the practice was selective. The commonplace book did not aim to capture everything the scholar read. It captured what the scholar's judgment identified as valuable for her purposes. The judgment of value was the scholar's primary contribution — more important than the reading itself, because reading without selection produced a formless accumulation, while careful selection, even from a narrower range of reading, could still produce a focused and useful compilation. The printed book provided the raw material. The commonplace book refined it. And the refining process — the act of reading a passage, evaluating its worth, deciding to excerpt it or pass it by, choosing where to place it within the organizational scheme — was where the scholar's intellectual character was forged.
Second, the practice was organizational. The excerpts were not collected randomly but arranged under headings — topics, themes, categories — that reflected the scholar's understanding of how knowledge was structured. The choice of headings was an intellectual act of the first order. A passage placed under the heading "Justice" would generate different associations than the same passage placed under "Government" or "Virtue." The organizational scheme was the scholar's theory of knowledge made visible, and its revisions over time recorded the evolution of her thinking. Locke's index was one solution to the organizational problem. Other scholars developed different solutions — alphabetical arrangements, thematic trees, numbered cross-reference systems — and each solution embodied a different theory of how knowledge should be structured for retrieval and use.
Third, the practice was generative. The commonplace book was not an end in itself. It was a resource for the production of new work. The scholar consulted her commonplace book when writing, drawing on the accumulated excerpts and the connections between them to construct arguments, marshal evidence, and discover relationships that the original sources, read in isolation, would not have revealed. The commonplace book functioned as an amplifier of intellectual capacity: it allowed the scholar to bring a wider range of material to bear on any given question than memory alone could support, while the organizational scheme ensured that the material remained accessible in a form that supported the work of thinking rather than impeding it.
The structural parallel between commonplace book practice and AI collaboration is not metaphorical. It is operational. The practitioner who works with a large language model performs the same three operations — selection, organization, and generative use — that the Renaissance scholar performed with her commonplace book. The AI generates abundant material. The practitioner selects from it, retaining what meets her standards and discarding what does not. The practitioner organizes the selected material according to her own intellectual architecture — arranging it in a sequence that serves her argument, connecting it with her own analysis, situating it within a structure that reflects her purposes rather than the AI's default organizational tendencies. And the practitioner uses the curated material generatively, as the foundation for new work that goes beyond what either she or the AI could have produced independently.
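The three operations can be sketched as a minimal data structure. Everything here is illustrative: the names (`Excerpt`, `CommonplaceBook`) are assumptions of this sketch, and the `keep` predicate is deliberately left as a placeholder, because it stands in for the very judgment — iudicium — that the text argues cannot be formalized into a rule.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Excerpt:
    text: str
    source: str

@dataclass
class CommonplaceBook:
    # The curator's headings map her categories to filed excerpts.
    headings: dict = field(default_factory=dict)

    def select(self, candidates: list, keep: Callable) -> list:
        """Selection: retain only what the curator's judgment approves.
        `keep` is a stand-in for iudicium; no rule fully captures it."""
        return [c for c in candidates if keep(c)]

    def organize(self, excerpt: Excerpt, heading: str) -> None:
        """Organization: file an excerpt under a heading of the
        curator's choosing -- the heading itself is an intellectual act."""
        self.headings.setdefault(heading, []).append(excerpt)

    def draw_on(self, heading: str) -> list:
        """Generative use: retrieve curated material as raw input
        for new composition."""
        return self.headings.get(heading, [])
```

The design choice worth noticing is that only `organize` and `draw_on` are mechanical; `select` takes its criterion as an argument, supplied from outside the structure. That division of labor is the point of the parallel: the apparatus stores and retrieves, while the judgment that makes the apparatus worth consulting never lives inside it.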
The Orange Pill describes this process with the specificity of direct experience. Edo Segal recounts working with Claude on the book's structure: the AI generated multiple organizational possibilities, drawing connections between ideas from different chapters, suggesting frameworks that Segal had not considered. Segal then curated the output — keeping the connections that "felt true," discarding those that imposed a false coherence, rearranging the surviving elements into an architecture that reflected his sense of what the book needed to accomplish. The language is significant: "felt true" is not a computational criterion. It is a judgment criterion, the kind of evaluative response that emerges from a deep familiarity with one's own intellectual purposes and cannot be formalized into a rule that a machine could follow. The AI could generate structural possibilities. Only the author could determine which possibilities served the book's actual needs.
This curatorial process raises a question that the early modern scholars debated with intensity: what is the authorial status of compilatory work? The humanist tradition produced vigorous arguments about whether a scholar who assembled a work from excerpted material was truly an "author" or merely a "compiler" — a term that carried connotations of intellectual inferiority. The debate was eventually resolved in favor of nuance: the compiler who exercised original judgment in the selection, organization, and arrangement of excerpted material was performing a genuinely creative act, even if the raw materials were drawn from other sources. The compilation was not a copy. It was a new intellectual artifact, shaped by the compiler's judgment at every stage, bearing the imprint of a specific mind's engagement with the material. The same excerpts, selected and arranged by a different scholar, would have produced a different compilation — and the difference was the measure of the compiler's creative contribution.
The contemporary anxiety about AI-assisted authorship replays this debate. The practitioner who produces a finished work through collaboration with AI is performing a form of compilatory authorship: she did not generate the raw material (the AI produced that), but she selected, evaluated, organized, and transformed the material through an exercise of judgment that determined the finished work's character and quality. The judgment is the authorship. The raw material is what judgment operates upon. And the quality of the finished work depends on the quality of the judgment — not on whether the raw material was produced by a human hand, a printing press, or a neural network.
Blair's research reveals a further dimension of commonplace book practice that illuminates the contemporary experience: the practice was pedagogical. The humanist educators of the Renaissance did not merely use commonplace books themselves. They taught their students to keep them, and the teaching was considered central to intellectual formation. The student who learned to excerpt well — to identify the valuable within the voluminous, to organize the identified material according to a coherent scheme, to use the organized material as the basis for original composition — was learning the core intellectual skill that the abundance of print demanded. The skill was not reducible to a set of rules. It required practice, mentorship, and the gradual development of what the humanists called iudicium — judgment, the cultivated capacity for intellectual discernment that allowed the scholar to navigate the world of abundant information with confidence and purpose.
The pedagogical dimension is where Blair's framework becomes most urgent for the contemporary moment. Iudicium — curatorial judgment — is the intellectual virtue that the AI age demands above all others. The AI provides abundant raw material. The human provides the judgment that converts abundance into value. The judgment encompasses the ability to evaluate accuracy, to detect superficiality beneath a fluent surface, to recognize what is missing as well as what is present, to assess whether a piece of output serves the project's actual needs or merely its stated specifications. These capacities are not automated by AI. They are made more necessary by AI, precisely because AI increases the volume of material upon which judgment must operate.
Teaching iudicium was difficult in the sixteenth century, and it is difficult now. The Renaissance educators understood that judgment could not be transmitted through lectures or absorbed from textbooks. It had to be cultivated through the kind of sustained, mentored practice that the commonplace book tradition embodied: the student worked alongside a teacher who modeled expert curation, attempted curation under the teacher's guidance, and received feedback that gradually developed the student's capacity for independent discernment. The process was slow, individualized, and resistant to the economies of scale that modern educational institutions prize. But it was effective — and its effectiveness is confirmed by the extraordinary intellectual productivity of the generations that practiced it.
The AI transition requires an analogous pedagogy. Students and professionals need to develop the curatorial skills that effective AI collaboration demands — the skills of evaluation, selection, organization, and integration that transform the AI's abundant output into work of genuine quality. These skills cannot be acquired by reading a manual on prompt engineering. They must be developed through practice, guided by practitioners who have themselves mastered the art of AI curation and who can model, for their students, the exercise of judgment that separates competent AI use from excellent AI collaboration.
The commonplace book tradition offers a model — not a recipe, but a structural precedent for how curatorial skill has been taught in previous eras of information abundance. The model's core insight is that curation is not a mechanical operation but an intellectual practice, shaped by the curator's purposes, refined through experience, and constitutive of the curator's intellectual identity. The Renaissance scholar's commonplace book was not a filing system. It was a portrait of a mind at work — selective, organized, purposeful, always in the process of being revised. The AI practitioner's curatorial practice, however it may differ in medium and pace, performs the same intellectual function: it is the process through which abundant material is transformed, by human judgment, into coherent thought.
The question that remains — and that the historical parallel does not resolve — is whether the speed and fluency of AI-generated material change the curatorial dynamic in ways that the commonplace book tradition did not anticipate. The scholar who excerpted from printed books was working with material that had already been filtered, however imperfectly, by the economics of publication: someone had decided that the text was worth printing, and that decision, however commercially motivated, imposed a minimum standard. AI output carries no such presumption. It is generated on demand, in any quantity, at any level of apparent sophistication, with no external filter between the generation and the practitioner's evaluation. The curatorial burden is therefore both heavier and less supported by external signals of quality — a condition that the historical tradition illuminates but does not fully resolve.
The florilegium — literally a "gathering of flowers" — was the medieval ancestor of the commonplace book, and its history reveals a principle that the contemporary discourse about AI has been slow to recognize: the most successful responses to information abundance have never been technologies of reduction. They have been technologies of navigation.
The distinction matters. A technology of reduction aims to shrink the information supply back to manageable proportions — to eliminate the excess, to enforce scarcity, to return to a simpler state. A technology of navigation accepts the abundance as given and develops methods for moving through it productively — finding what is needed, evaluating what is found, and organizing the evaluated material for future use. The history of information management, as Blair has documented it across six centuries, is overwhelmingly a history of navigation rather than reduction. The scholars who confronted the flood of printed books did not try to stop the presses. They invented indexes.
The florilegium exemplified this navigational logic. Produced in enormous numbers throughout the Middle Ages, florilegia were compilations of the most instructive passages from a larger body of work — the Church Fathers, classical authors, scriptural commentaries — selected and arranged for the convenience of a reader who lacked the time or resources to read the originals. The compiler of a florilegium exercised the same curatorial judgment that the keeper of a commonplace book would later exercise: she decided what was worth preserving, how to organize the preserved material, and — crucially — what to leave out. The leaving-out was at least as important as the including. A florilegium that included everything would have been useless, because it would have reproduced the very problem of abundance that it was designed to solve. The technology's value was entirely dependent on its selectivity, and its selectivity was entirely dependent on the compiler's judgment about what mattered.
Blair traces a lineage from the medieval florilegium through the Renaissance commonplace book, the early modern encyclopedia, the Enlightenment bibliography, the nineteenth-century library catalog, and the twentieth-century database to the contemporary search engine. The lineage is not a simple line of descent — each technology introduced distinctive features that its predecessors lacked — but the underlying logic is constant across the series. Each technology was developed in response to an expansion of the information supply. Each embodied a principle of selection that reduced the total supply to a navigable subset. And each depended on human judgment to set the criteria of selection, evaluate the results, and determine how the selected material should be organized and deployed.
What changes across this lineage is not the logic but the mode of curation. Blair's framework suggests a typology: each major technology of information management shifts the primary mode of curatorial activity to a new register. In the manuscript era, the dominant curatorial mode was preservation — deciding which texts were worth the enormous labor of copying and thereby saving from the entropy of time. In the print era, the mode shifted to selection — deciding which of the many available texts were worth the reader's limited attention. In the digital era, the mode shifted again to navigation — finding relevant material within the vast, poorly organized landscape of the World Wide Web. And in the AI era, the mode is shifting to what might be called direction — guiding the production of new intellectual material by specifying purposes, evaluating outputs, and iteratively refining results through conversational judgment.
Each shift preserved the previous modes while adding a new layer. The AI-era practitioner still preserves (archiving valuable outputs for future reference), still selects (choosing among the AI's multiple offerings), and still navigates (searching within the AI's vast implicit knowledge base). But the distinctive curatorial challenge of the AI era is directorial: the practitioner must guide a generative process, shaping the production of material that does not yet exist, rather than merely selecting from material that has already been produced. The patron who commissioned a work of art in the Renaissance was exercising a form of directorial curation — specifying subject, scale, mood, and purpose while leaving execution to the artist — and the parallel is instructive, though imperfect. The AI practitioner's directorial role is more continuous and more granular than the patron's: she does not issue a commission and wait for the result, but engages in an ongoing conversation that shapes the output at every stage through iterative acts of evaluation and redirection.
This directorial mode demands a specific form of judgment that the previous modes did not. The scholar who selected from existing texts could evaluate each text against stable criteria — accuracy, relevance, quality of argument, reliability of evidence. The practitioner who directs AI output must evaluate material that is being produced in response to her own specifications, which means she must simultaneously assess the quality of the output and the quality of her own direction. The question is not only "Is this good?" but "Did I ask for the right thing?" The evaluative loop is reflexive in a way that earlier modes of curation were not, and the reflexivity introduces a form of cognitive complexity that the historical precedents illuminate but do not fully prepare the practitioner to manage.
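The reflexive loop described above can be sketched in miniature. This is an illustrative model only, not a real API: the function names (`generate`, `evaluate_output`, `evaluate_direction`) are hypothetical stand-ins for the practitioner's acts of prompting, judging the output, and judging the prompt itself.

```python
# Illustrative sketch only; all function names are hypothetical.
def directorial_loop(direction, generate, evaluate_output,
                     evaluate_direction, max_rounds=5):
    """Model the reflexive loop: each round judges both the output
    ("Is this good?") and the direction itself
    ("Did I ask for the right thing?")."""
    for _ in range(max_rounds):
        output = generate(direction)
        if evaluate_output(output):
            return output, direction
        # Reflexive step: a rejected output may mean the output was weak
        # OR the direction was wrong; revise the direction and try again.
        direction = evaluate_direction(direction, output)
    return None, direction  # judgment exhausted without an acceptable result
```

The point of the sketch is the second evaluation: unlike the print-era selector, who only judged finished texts, the loop revises its own specification on every failed round.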
The Orange Pill provides a precise illustration. Segal describes a moment of impasse during the writing of the book: he was attempting to articulate why Byung-Chul Han's critique of frictionless culture was partly right and partly wrong, and he could not find the structural pivot that would allow the argument to turn from acknowledgment to counter-argument. He described the problem to Claude. The AI responded with a connection Segal had not made — a parallel to laparoscopic surgery, in which the removal of one kind of friction (the surgeon's direct tactile contact with tissue) created a different and harder kind of friction (the cognitive challenge of operating through a two-dimensional image of a three-dimensional space). The connection unlocked the argument. But the connection was produced through directorial curation: Segal had specified the problem with enough precision that the AI could generate a useful response, and he had the judgment to recognize the response's value — to see that the surgical parallel was not merely clever but analytically productive, that it captured something true about the relationship between friction and depth that his argument required.
The directorial mode also demands a heightened capacity for what might be called negative curation — the recognition and rejection of material that is superficially adequate but substantively deficient. In earlier modes, the signals of deficiency were often visible on the surface: the badly printed pamphlet, the poorly organized bibliography, the reference work that lacked an index. In the AI era, as later chapters will explore in detail, the surface quality of generated material is uniformly high regardless of the substantive quality beneath it. The practitioner who directs AI output must learn to read past the fluent surface to the analytical depth — or shallowness — below, and this reading requires a form of expertise that is more demanding, not less, than the expertise required to evaluate material whose surface quality varied with its substance.
The lineage from florilegium to AI filter supports a conclusion that the AI discourse has been reluctant to accept: the curatorial imperative is not a temporary feature of the current transition. It is a permanent structural condition of any information-rich environment. As long as the supply of information or intellectual production exceeds an individual's capacity for direct engagement — and it has exceeded that capacity continuously since at least the invention of printing — the demand for curatorial judgment will persist. The tools of curation will change with each technological generation. The necessity of curation will not. And the human capacity for curatorial judgment, far from being rendered obsolete by AI, is made more necessary by it, because AI represents the largest expansion of the material upon which judgment must operate in the history of intellectual production.
The economics of this relationship deserve attention, because Blair's historical work reveals that curatorial labor has been undervalued at every stage of its history — a pattern with direct implications for the AI era. The medieval compiler of a florilegium received less credit than the original author from whom she excerpted. The Renaissance editor received less recognition than the writer whose work she prepared for publication. The nineteenth-century librarian received less prestige than the scholar whose research her catalog made possible. In each case, the curatorial contribution was real, consequential, and institutionally undercompensated — because the curation was invisible. The finished product appeared to the reader as a seamless whole, and the labor that had produced it — the reading, the evaluating, the selecting, the organizing, the arranging — was hidden behind the smooth surface of the published page.
AI collaboration reproduces this invisibility with a new intensity. When a practitioner publishes code, or a document, or a strategic plan that was produced through iterative collaboration with an AI system, the published artifact appears as a finished product. The curatorial labor that produced it — the prompts tried and abandoned, the outputs generated and rejected, the subtle redirections that shaped the AI's contribution toward the practitioner's vision — is invisible to anyone who encounters only the result. The invisibility leads to systematic undervaluation: observers credit the AI with capabilities it does not possess (because they do not see the human judgment that directed the output), practitioners undervalue their own skill (because the skill is exercised in private and never displayed), and institutions misunderstand the nature of AI-assisted work (treating it as automation rather than collaboration). These misunderstandings are not new. They are the same misunderstandings that have attended curatorial labor throughout its history, amplified by a technology that makes the curation more consequential and simultaneously less visible than ever before.
The lesson of the lineage is that the solution to abundance has never been restriction — has never been fewer books, fewer documents, fewer outputs. It has always been better navigation: better methods for finding, evaluating, selecting, and organizing the abundant material so that human judgment can operate within conditions that would otherwise overwhelm it. The AI era requires better navigation, not less abundance. And the navigational tools that will resolve the current crisis — whatever their specific form — will share with the florilegium, the commonplace book, the encyclopedia, and the search engine the fundamental feature that all effective information management technologies share: dependence on human judgment to set the terms of selection and to evaluate the results.
The analogy between the printing press and artificial intelligence is the most frequently invoked historical comparison in the contemporary AI discourse, and it is drawn with a regularity that has made it both ubiquitous and shallow. The standard version runs: the printing press disrupted existing practices, displaced some workers, created new opportunities, and eventually produced a net benefit; therefore AI will follow the same trajectory; therefore the disruption will be temporary and the outcome benign. The analogy, so stated, functions as a sedative. It converts a genuine and unresolved crisis into a comfortable story whose ending is already known.
Ann Blair's research provides the materials for a more rigorous and less comfortable version of the comparison — one that takes the printing press seriously as a historical event rather than as a reassuring precedent, and that identifies specific structural features of the print transition whose contemporary parallels are both more illuminating and more unsettling than the standard analogy suggests.
The first feature: the printing press did not merely produce more copies of existing books. It created entirely new genres of publication that had no manuscript-era equivalent. The pamphlet, the broadside, the periodical, the reference handbook, the practical manual, the school textbook — each was a product of the economics of print, not merely a cheaper version of something scribes had previously produced. Manuscript economics made these genres impractical: hand-copying offered no economies of scale, so every additional copy of a pamphlet cost as much in scribal labor as the first, and no patron would fund that labor for an ephemeral text. Print economics made them trivial to produce, and the genres proliferated accordingly. The proliferation was not a simple expansion of volume. It was a diversification of type that transformed the intellectual landscape.
AI is producing an analogous genre explosion. The Orange Pill documents forms of intellectual production that have no pre-AI equivalent: the weekend prototype, the solo-built SaaS product, the rapid conversational research synthesis, the AI-assisted architectural exploration. These are not faster versions of things that were previously done slowly. They are new kinds of intellectual activity, enabled by the economics of AI collaboration in the same way that the pamphlet was enabled by the economics of print. The software engineer who builds a complete application over a weekend is not doing in two days what a team previously did in six months. She is doing something different in kind — engaging in a form of rapid creative iteration that the old economics of software production made impractical, and that the new economics of AI collaboration makes routine. The genre is new, and its implications are still being discovered.
The second feature: the printing press created a crisis of authentication that took centuries to resolve. In the manuscript era, the provenance of a text provided a reasonable guarantee of reliability. If a text had been copied in a monastery, preserved in a library, and cataloged by a scholar, the institutions of production and preservation conferred credibility. The press destroyed these institutional guarantees by enabling anyone with capital and access to a press to publish anything, under any name, with any claims to authority. The result was an epistemic crisis that Blair has documented in detail: readers could no longer trust the surface features of a text — its format, its binding, its publisher's mark — as reliable indicators of its content's quality. The institutional filters that manuscript culture provided had been bypassed, and new filters had to be invented.
The invention took generations. The peer review process, the scholarly journal, the university press, the critical edition, the footnote, the book review — each emerged gradually, through trial and error, as scholars and institutions developed methods for authenticating content in a medium that no longer provided authentication by default. The process was neither smooth nor inevitable. Decades of intellectual chaos intervened between the collapse of manuscript-era authentication and the establishment of print-era equivalents, and the individuals who lived through those decades bore real costs: they were exposed to unreliable information without adequate institutional support for evaluating it.
AI is producing a comparable authentication crisis, though with a distinctive feature that has no precise historical precedent. The printing press disrupted the correlation between the economics of production and content quality: cheap production could yield good or bad content, and the reader could not infer quality from cost. AI disrupts the correlation between the surface quality of content and its substantive depth: AI-generated text is uniformly fluent, well-organized, and apparently authoritative regardless of whether the content is accurate, insightful, or intellectually sound. The surface cues that readers have historically used to make preliminary quality judgments — the quality of the prose, the organization of the argument, the apparent command of the subject — are no longer reliable, because the AI produces these surface features automatically, independently of the content's actual merit.
This is a more radical disruption of authentication than the printing press produced. The printing press required the reader to evaluate content without relying on the economics of production as a quality signal. AI requires the reader to evaluate content without relying on surface quality as a signal — a harder task, because surface quality is more intimately connected to the reader's experience of the text than production economics ever was. A reader who picks up a cheaply printed pamphlet can hold the object at arm's length and assess it as a physical artifact before engaging with its content. A reader who encounters AI-generated text has no comparable distance: the text arrives in the same format, with the same fluency, and the same apparent authority as text produced by a human expert who has spent decades developing genuine understanding. The authentication challenge is not merely harder. It is harder in a way that demands new evaluative capacities — capacities that the print-era institutional innovations (peer review, scholarly journals, university presses) addressed at the institutional level and that AI may require at the individual level as well.
The third feature: the printing press created a new social role — the editor — that had no direct manuscript-era equivalent. The manuscript-era scribe was primarily a reproducer. Her task was faithful copying, and her professional identity was defined by the accuracy of her reproductions. The print-era editor was something categorically different: a curator whose task was to select, evaluate, prepare, and present texts for publication, and whose contribution was measured not by the fidelity of her reproduction but by the quality of her judgment about what was worth publishing and how it should be presented. The editor stood between the abundant supply of potential publications and the reading public, filtering the supply through professional judgment that the technology of printing could not itself provide.
AI is creating an analogous new role, though the role does not yet have a settled name or a stable institutional identity. The practitioners described in The Orange Pill — engineers, writers, entrepreneurs, educators who have learned to collaborate effectively with AI — are performing editorial functions with respect to AI output: evaluating quality, selecting what meets professional standards, rejecting what does not, and shaping retained material into finished work through revision and refinement. The curatorial labor they perform is the AI era's equivalent of editorial labor in the print era — necessary, consequential, and currently under-recognized because the institutional frameworks that would formalize and support the role have not yet been developed.
Blair's research indicates that the editor's role took decades to crystallize after the invention of printing. The early printers were often their own editors, combining the functions of selection, production, and distribution in a single enterprise. Gradually, as the volume of potential publications overwhelmed any single individual's evaluative capacity, the editorial function separated from the production function and became a distinct profession with its own standards, training, and institutional structures. The AI era may follow a comparable trajectory: the early AI practitioners combine the functions of direction, evaluation, and production in a single workflow, but as the volume and complexity of AI-assisted work increase, the evaluative function may separate and professionalize — creating new institutional roles whose defining skill is curatorial judgment applied to machine-generated output.
The fourth feature — and the one most often overlooked in the standard analogy — is temporal. The printing press produced its most significant intellectual effects not in the decades immediately following its invention but over several generations. The first century of print was characterized more by confusion than by progress. Scholars complained about the flood of mediocre publications. Religious authorities worried about the spread of heresy. Political leaders feared the destabilizing effects of widely distributed information. The Reformation, the Scientific Revolution, and the Enlightenment — the transformative movements that the standard analogy invokes as evidence of print's benign trajectory — emerged only after institutional innovations had been developed to manage the abundance that the press created.
This temporal dimension is the standard analogy's most significant omission. The individuals who lived through the transitional period — the decades during which the old authentication mechanisms had collapsed and the new ones had not yet been established — did not experience the transition as a story with a happy ending. They experienced it as a crisis with real costs: intellectual, economic, psychological, and social. The costs were eventually mitigated by institutional innovations that redirected the abundance toward productive ends. But the mitigation was not automatic. It required deliberate effort by scholars, educators, publishers, and policymakers who recognized that the technology alone would not produce the desired outcome — that the abundance needed to be channeled through institutions designed to support the curatorial labor on which intellectual quality depended.
Blair wrote in the Harvard Business Review that "information overload has very deep roots: signs of information overload were present already in the accumulation of manuscript texts in pre-modern cultures and were further accelerated by the introduction of printing." The statement, published in 2011, was addressed to a business audience that tended to treat information overload as a problem created by email and smartphones. Blair's point was that the problem was structural, not technological — that it recurred with each expansion of the information supply and was resolved not by the technology that created it but by the institutional responses that human beings developed to manage it.
The point applies with redoubled force to the AI transition. The technology will not resolve the crisis it has created. The resolution depends on institutional development: new methods of authentication for AI-generated content, new pedagogies for cultivating the evaluative skills that AI collaboration demands, new professional standards for AI-assisted work, new cultural norms that value curatorial judgment alongside productive capacity. The printing press analogy, properly understood, does not promise a benign outcome. It promises that a benign outcome is possible — but only if the institutional work is done. The press produced the Enlightenment in societies that built the institutional infrastructure to support critical inquiry. It produced propaganda and superstition in societies that did not. The technology was the same. The institutions made the difference.
The most sobering lesson of the printing press parallel is that the institutional development cannot be rushed, but neither can it be indefinitely deferred. The transitional generation — the generation that lives between the collapse of old institutional supports and the maturation of new ones — bears costs that subsequent generations, benefiting from the institutional innovations the transition eventually produces, do not fully appreciate. The contemporary parallel is exact: the current generation of knowledge workers, students, and citizens is navigating an abundance of AI-generated material without the institutional supports that would allow them to do so effectively. The supports are being developed — but the development is outpaced by the technology, and the gap between the two is where the real costs of the transition are being paid.
In 1255, the Dominican friar Vincent of Beauvais completed his Speculum Maius, an encyclopedia that attempted to compile all human knowledge into a single work. The result ran to roughly 4.5 million words across eighty books — and Vincent, in his preface, apologized. Not for the length, but for the incompleteness. The known world of texts had grown so vast that no compilation, however ambitious, could encompass it. Vincent described himself as overwhelmed by the "multitude of books, the shortness of time, and the slipperiness of memory," and he offered his encyclopedia not as a definitive statement but as a navigational aid — a device for finding one's way through a forest of knowledge that had become too dense for any single mind to traverse.
Ann Blair cites Vincent's preface as evidence for a claim that cuts against the grain of contemporary assumptions about information technology: the experience of having too much to know is not a consequence of any particular technology. It is a consequence of a set of cultural attitudes that she has called "infolust" — an appetite for comprehensive knowledge that predates the printing press, predates the internet, and will outlast any particular technological regime. The appetite is human, not technological. The technologies feed it. They do not create it.
The distinction matters because it reframes the relationship between AI and cognitive overload. The popular account treats AI as the cause of a new form of information crisis: the technology generates so much content, so fast, that human beings cannot keep up. Blair's framework suggests a different causal story. The appetite for more — more knowledge, more production, more creation — was already present before AI arrived. The printing press fed the appetite by making books cheap. The internet fed it by making information accessible. AI feeds it by making intellectual production frictionless. In each case, the technology satisfies an existing hunger, and the satisfaction intensifies the hunger rather than sating it, because the hunger was never for a specific quantity of information. It was for the feeling of comprehensive command over the available knowledge — a feeling that recedes with every expansion of what is available.
The Orange Pill documents this dynamic with the precision of self-observation. Segal describes the compulsive quality of AI-assisted building — the inability to stop, the sense that the next prompt might unlock something extraordinary, the feeling that stepping away from the tool is stepping away from possibility itself. The description maps with uncomfortable accuracy onto Blair's account of Renaissance scholars who responded to the flood of print not by reading less but by reading more, not by accepting the limits of individual comprehension but by developing ever more elaborate systems for extending those limits. The commonplace book, the index, the bibliography, the encyclopedia — each was a tool for doing more, not for accepting less. The appetite drove the tool development, and the tools, in turn, fed the appetite by making more knowledge accessible, which revealed how much more remained beyond reach.
This is what Blair's framework identifies as the abundance paradox: every expansion of the information supply reduces the labor of acquisition while increasing the labor of evaluation, with the net effect that the cognitive demands on the individual increase rather than decrease. The paradox is counterintuitive because it violates the logic of material abundance. When food becomes plentiful, the labor of obtaining food decreases. When information becomes plentiful, the labor of obtaining information decreases — but the labor of determining which information is worth obtaining increases by a greater amount, because the ratio of valuable to valueless information typically worsens as the total supply grows. The individual's total cognitive burden increases, not despite the abundance, but because of it.
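The paradox's arithmetic can be made concrete with a toy model. All numbers here are hypothetical, chosen only to illustrate the structure of the claim: the cost of obtaining an item falls, but every candidate must still be screened, and the fraction of candidates worth keeping collapses as the supply grows.

```python
# Toy model of the abundance paradox; all numbers are hypothetical.
def cognitive_burden(useful_fraction, screen_cost, acquire_cost):
    """Total effort to end up with ONE useful item: screen enough
    candidates to find it, then acquire it."""
    candidates_screened = 1 / useful_fraction
    return candidates_screened * screen_cost + acquire_cost

# Scarce era: half of what you encounter is useful, but each item
# is expensive to obtain.
scarce = cognitive_burden(useful_fraction=0.5, screen_cost=1.0,
                          acquire_cost=10.0)   # 12.0 units of effort

# Abundant era: acquisition is nearly free, but only 1 in 100
# candidates is worth keeping, so screening dominates.
abundant = cognitive_burden(useful_fraction=0.01, screen_cost=1.0,
                            acquire_cost=0.1)  # 100.1 units of effort

assert abundant > scarce  # burden rises BECAUSE of the abundance
```

The model omits everything interesting about real judgment, but it isolates the structural point: cheaper acquisition cannot compensate for a worsening signal-to-noise ratio, because screening cost scales with the number of candidates, not with the number of keepers.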
The Berkeley study cited in The Orange Pill confirms this paradox with empirical specificity that Blair's historical analysis could not provide. The researchers found that AI tools did not reduce the total cognitive load of the workers who used them. Workers completed more tasks, expanded into new domains, filled previously protected pauses with additional work, and reported higher levels of intensity despite — or rather because of — the productivity gains the tools provided. The tools had reduced the friction of execution. The friction of judgment, evaluation, and decision-making had not been reduced. It had been amplified, because each unit of reduced execution friction produced multiple new occasions for evaluative judgment: Is this output correct? Is it appropriate? Does it serve the project's needs? Should I accept it, revise it, or reject it entirely?
The paradox has implications for how the value of human expertise is understood in the AI era. The standard account suggests that AI "democratizes" expertise by making expert-level performance available to non-experts. Blair's framework complicates this claim without rejecting it outright. The printing press also democratized knowledge — it made books available to readers who had previously lacked access. But the democratization of access did not eliminate the value of expertise. It transformed expertise from the mastery of scarce content to the mastery of judgment about abundant content. The scholar who had merely memorized texts lost value when printed reference works made memorization unnecessary. The scholar who could evaluate, synthesize, and apply textual knowledge gained value, because the evaluation, synthesis, and application required capacities that the reference works themselves could not supply.
AI is producing the same transformation at a different level. The knowledge worker whose primary value lay in executing well-defined technical tasks — writing boilerplate code, producing routine analyses, drafting standard documents — faces genuine displacement, because AI executes these tasks with comparable or superior competence. The knowledge worker whose primary value lies in evaluating, directing, and curating — assessing whether code is architecturally sound, whether an analysis addresses the right question, whether a document achieves its communicative purpose — faces not displacement but elevation, because AI increases the volume of material that requires exactly these forms of judgment.
The elevation is real but not automatic. Blair's historical research reveals that the transition from execution-based to judgment-based expertise has, at every previous juncture, been accompanied by a period of painful adjustment during which the individuals whose execution skills had been devalued struggled to develop the judgment skills that the new environment demanded. The scribes displaced by the printing press did not seamlessly become editors. The calculators displaced by the spreadsheet did not instantly become analysts. The transition required time, institutional support, and a willingness to invest in forms of skill development that the old economic structures had not incentivized. The individuals who managed the transition most successfully were those who recognized earliest that the ground had shifted, and who began developing the judgment-based capacities that the new environment rewarded before the old execution-based capacities had fully lost their value.
The temporal dimension is critical, and the contemporary discourse handles it badly. The optimists point to the long-term pattern — every previous expansion of abundance eventually increased the total demand for intellectual labor — and conclude that the AI transition will follow suit. The pessimists point to the transitional costs — the displacement, the disorientation, the period of inadequate institutional support — and conclude that the human costs are unacceptable. Both are right about the evidence they cite and wrong about the evidence they ignore. The long-term pattern is real: abundance has consistently increased the demand for judgment, and there is no structural reason to expect that AI will be the exception. The transitional costs are also real: the individuals and communities that bear those costs are not consoled by the knowledge that their grandchildren may benefit from institutional innovations that do not yet exist.
Blair's framework holds both truths without forcing a premature synthesis. The abundance paradox is not a problem with a solution. It is a structural condition of information-rich environments — a condition that recurs with every expansion of the supply and that must be managed, at each recurrence, through the development of curatorial practices and institutional supports adequate to the new volume. The management is never final, because the supply continues to expand and the practices must continue to evolve. The condition is permanent. The responses are provisional. And the quality of the responses — the speed with which adequate curatorial practices are developed, the breadth with which they are disseminated, the institutional robustness with which they are supported — determines whether a given expansion of abundance produces intellectual flourishing or intellectual degradation.
What this means in practice, for the organizations and individuals navigating the AI transition, is that the investment in evaluative capacity must be treated as urgent rather than deferrable. The historical pattern does not guarantee a benign outcome. It establishes that a benign outcome is achievable — but only through deliberate investment in the judgment-based capacities that abundance makes more valuable and that the disappearance of execution-based constraints makes more necessary. The investment takes time. The abundance is already here. The gap between the two is where the costs are paid, and the cost of delay compounds with every month that the institutional response lags behind the technological capability.
Vincent of Beauvais, composing his encyclopedia in the thirteenth century, understood something that the contemporary AI discourse has been slow to absorb: the problem of abundance is not solved by more abundance. It is addressed — never finally, always provisionally — by the human capacity to evaluate, select, and organize. Vincent could not read everything. Neither can anyone today. The question that matters is not how much can be produced but how well the produced material can be judged. The answer to that question has always depended on the quality of the curatorial practices that human beings develop, and the institutional structures that support those practices against the constant pressure of an abundance that does not curate itself.
In 1638, the Jesuit scholar Jeremias Drexel published Aurifodina Artium et Scientiarum Omnium — "The Gold Mine of All Arts and Sciences" — a treatise on the art of excerpting that reads, across four centuries, as an uncannily precise manual for the cognitive demands of AI collaboration. Drexel was writing for students and scholars who faced the challenge of extracting value from the flood of printed books, and his instructions were detailed, practical, and grounded in a sophisticated understanding of how intellectual judgment operates under conditions of abundance. Read carefully before you excerpt, he advised. Do not excerpt mechanically but with attention to your purposes. Organize your excerpts under headings that reflect your own intellectual priorities, not the structure of the source text. And return to your compiled excerpts regularly, not merely to retrieve specific items, but to discover connections between items that the act of compilation has brought into proximity — connections that the original sources, read in isolation, would never have revealed.
Drexel's instructions capture, in the vocabulary of early modern pedagogy, the core operations that contemporary AI practitioners perform daily. Ann Blair's research on treatises like Drexel's reveals that the early modern scholars who theorized the art of excerpting understood something that the contemporary AI discourse has largely failed to articulate: that the cognitive demands of working with abundant material are not a single undifferentiated challenge but a sequence of distinct operations, each requiring a specific form of judgment, and each capable of being practiced and refined independently. The operations can be disaggregated, and the disaggregation is both historically informative and practically useful for understanding what AI collaboration actually requires of the human participant.
Blair's analysis of early modern information management practices, combined with the evidence that The Orange Pill provides about contemporary AI collaboration, suggests a taxonomy of four curatorial operations that together constitute the skilled practitioner's cognitive contribution: prompting, evaluating, selecting, and integrating.
Prompting is the practice of formulating the question, specification, or instruction that directs the productive process. The early modern scholars called this the ars interrogandi — the art of asking well — and they considered it a skill of the highest order, not reducible to rules and not acquirable through instruction alone. The quality of the scholar's research depended on the quality of her questions, because the question determined what the search would find. A question too broad would yield a superficial survey. A question too narrow would miss essential connections. The art consisted in finding the level of specificity that was productive — broad enough to capture the genuine complexity of the subject, narrow enough to generate actionable inquiry.
The contemporary AI practitioner faces a structurally identical challenge. The quality of the AI's output depends critically on the quality of the prompt — the specification that directs the AI's generative process. Segal's account of writing with Claude illustrates the point: vague specifications produced fluent but unfocused output; overly constrained specifications prevented the AI from contributing connections and possibilities that the human had not anticipated. The most productive prompts occupied a middle ground — specifying the intent with precision while leaving the method open enough for the AI to bring its distinctive associative capacities to bear. The skill of finding this middle ground is the contemporary expression of Drexel's ars interrogandi, and like its historical antecedent, it must be developed through sustained practice rather than acquired through reading a manual.
Evaluating is the second operation, and it corresponds to what Renaissance scholars practiced under the name ars critica — the art of critical reading. The critical reader did not accept a text's claims on the basis of their fluent presentation. She interrogated them: examining evidence, assessing logic, weighing the source's reliability, comparing claims against her own knowledge and against alternative sources. The critical reader was an active evaluator, and her evaluative labor was the mechanism by which the raw abundance of printed material was converted into reliable knowledge.
The AI practitioner performs an analogous evaluation, but under conditions that make the evaluation both more necessary and more difficult than the print-era equivalent. The early modern reader could draw on surface cues to make preliminary quality judgments: the reputation of the publisher, the format of the text, the quality of the Latin, the presence or absence of a scholarly apparatus. These cues were imperfect, but they provided a first filter that reduced the evaluative burden. AI-generated content provides no comparable surface cues. A passage that is factually wrong is presented with the same fluency and apparent confidence as a passage that is factually right. An argument that is logically unsound is structured with the same clarity as an argument that is sound. The evaluative burden falls entirely on the practitioner's substantive judgment, unsupported by the surface signals that have historically assisted the critical reader's preliminary assessment.
The Orange Pill describes a specific failure that illustrates this challenge: Claude produced a passage connecting Csikszentmihalyi's concept of flow to a concept attributed to Gilles Deleuze, and the passage was rhetorically elegant, structurally coherent, and philosophically inaccurate. The reference to Deleuze was wrong in a way that only someone who had actually read Deleuze would recognize — which is to say, in a way that the AI's fluent surface actively concealed. Segal caught the error the next morning, upon reflection. The episode demonstrates both the intensity of the evaluative demand and the specific difficulty that AI output presents: the better the surface, the harder the evaluation, because the surface quality induces a trust that the substance may not warrant.
Selecting is the third operation, and it corresponds directly to the excerpting practice that Drexel codified and that Blair has documented across the entire early modern period. The scholar read widely and copied selectively, preserving only passages that survived the filter of her judgment. The selectivity was the core of the practice: a commonplace book's intellectual value was determined not by its comprehensiveness but by the quality of its inclusions, and the quality of the inclusions was determined by the quality of the excluding. What the scholar chose to leave out was as important as what she chose to keep, because the leaving-out was the mechanism by which the abundant was converted into the essential.
The AI practitioner selects from a different kind of abundance — not a shelf of printed books but a stream of generated possibilities. Given a prompt, the AI can produce multiple variations, each technically adequate but different in emphasis, structure, depth, and alignment with the practitioner's purposes. The practitioner must compare, assess, and choose — and the choosing is the practitioner's primary creative contribution, because the choice determines the character and quality of the finished work. Segal's description of selecting among Claude's structural suggestions for the book — keeping what "felt true," discarding what imposed a false symmetry or failed to match his intellectual voice — is excerpting practice translated into the AI medium. The criterion of selection is not formal correctness, which the AI reliably provides, but a less articulable quality of authenticity, relevance, and alignment with purposes that the practitioner holds but may not have fully specified. The judgment is qualitative, contextual, and resistant to formalization — which is precisely why it remains a human contribution even when the production it judges has been automated.
Integrating is the fourth operation, and it corresponds to the final stage of commonplace book practice: the composition of new work from compiled materials. The scholar who consulted her commonplace book when writing did not merely reassemble excerpts. She integrated them — weaving selected passages into a new argument, connecting them with original analysis, arranging them in an order that served her purposes rather than reflecting the sequence of the original sources. The integration was the most creative phase of the curatorial process, because it required the scholar to impose her own intellectual architecture on material produced by others, creating coherence where there had been only collection.
The AI practitioner performs the same integration with generated material. The AI produces components — passages of text, segments of code, elements of analysis. The practitioner integrates these components into a coherent whole, connecting them with her own thinking, arranging them according to her own logic, and revising the AI's output until it serves a vision that the AI does not share and cannot independently assess. The integration often requires substantial transformation: rewriting for voice, restructuring for argument, supplementing with material the AI did not provide, and sometimes abandoning generated material entirely when it fails to serve the larger purpose. The finished work is not a compilation. It is a synthesis — and the synthesis is the product of the integrative judgment that the practitioner brings to the collaboration.
These four operations — prompting, evaluating, selecting, and integrating — constitute a curatorial method that is structurally continuous with the information management practices that Blair has documented across the early modern period. The continuity is not metaphorical. The cognitive demands are the same: the practitioner must direct a process of knowledge production, evaluate the results against criteria that she sets, select what serves her purposes, and integrate the selected material into a coherent intellectual artifact. The medium has changed. The speed has changed. The volume of material under consideration has changed by orders of magnitude. The underlying cognitive architecture has not.
The pedagogical implication is direct. If the curatorial method can be disaggregated into four distinct operations, it can also be taught as four distinct skills, each with its own exercises, standards, and developmental trajectory. The Renaissance educators who taught excerpting understood this: they did not simply instruct students to "take good notes." They broke the practice into components — reading with attention, identifying the valuable, choosing appropriate headings, revising the organizational scheme — and they provided guided practice in each component, with feedback from a mentor who could model expert performance. The AI era requires an analogous pedagogical disaggregation: training in the art of prompting (formulating productive specifications), in the art of evaluating (detecting deficiencies beneath fluent surfaces), in the art of selecting (choosing among adequate alternatives on the basis of qualitative judgment), and in the art of integrating (assembling curated components into coherent wholes that exceed the sum of their parts).
The training does not yet exist in mature form. It is being improvised, as The Orange Pill documents, by practitioners who are developing their curatorial skills through direct experience rather than through systematic instruction. The improvisation is valuable but insufficient. The historical precedent suggests that curatorial skills, once theorized and codified, can be taught more efficiently and more broadly than individual experimentation allows — and that the societies and institutions that invest in systematic curatorial education will navigate the abundance more effectively than those that leave the development of curatorial skill to chance.
Denis Diderot's Encyclopédie, published in seventeen volumes of text and eleven volumes of plates between 1751 and 1772, was designed to do something that no previous reference work had attempted at comparable scale: not merely to compile knowledge but to reveal the connections between different domains of knowledge through an elaborate system of cross-references. Diderot called these cross-references renvois, and he considered them the Encyclopédie's most important innovation. A reader consulting the entry on "Agriculture" would find, at the entry's end, pointers to "Chemistry," "Botany," "Commerce," and "Political Economy" — connections that the alphabetical arrangement of the encyclopedia had severed and that the cross-references restored. The system of renvois was, in effect, a theory of knowledge overlaid on an alphabetical index: it asserted that knowledge was not a collection of independent facts but a network of relationships, and that the encyclopedia's purpose was not merely to store knowledge but to make the relationships visible.
Ann Blair has situated the Encyclopédie's cross-reference system within a long-running debate in the history of reference works: the tension between alphabetical and systematic organization. The tension is more than a librarian's quarrel. It reflects a fundamental disagreement about the nature of knowledge and the purpose of its organization. Alphabetical order is arbitrary — the letter with which a word begins bears no relationship to the word's meaning — and its arbitrariness is both its weakness and its strength. Its weakness: it separates related topics and juxtaposes unrelated ones, obscuring the conceptual connections that systematic organization would reveal. Its strength: it imposes no interpretive framework on the material, allowing the reader to find information without first mastering the compiler's theory of how knowledge is structured. The organizational scheme is transparent to anyone who knows the alphabet.
Systematic organization embeds an interpretive framework. Gregor Reisch's Margarita Philosophica (1503) arranged knowledge according to the medieval liberal arts curriculum — the trivium and quadrivium — and a reader who consulted it was not merely finding information but absorbing, at the structural level, a theory of how the different branches of learning related to each other. The theory might be wrong — the liberal arts curriculum was already under challenge in 1503, and it would be substantially revised by the end of the century — but it was present, and its presence gave the reference work an intellectual depth that alphabetical arrangement could not match.
Every subsequent reference technology has navigated this tension. Library classification systems (Dewey, Library of Congress) are systematic — they group related materials together according to an explicit theory of knowledge — but they supplement the systematic arrangement with alphabetical indexes that allow retrieval without mastery of the classificatory scheme. Databases are structurally systematic (organized by tables, fields, and relationships) but queryable through interfaces that shield the user from the underlying structure. Search engines are neither alphabetical nor systematic: they are algorithmic, ranking results by a combination of relevance, authority, and user behavior that constitutes, in effect, a theory of what the searcher is most likely to want — though the theory is statistical rather than intellectual, and the user has no access to its logic.
Large language models represent a new position in this long-running tension. The model's internal organization is neither alphabetical (there is no index) nor systematic (there is no explicit classificatory scheme) nor even algorithmic in the search-engine sense (there is no ranking function that the user can interrogate). It is emergent: the product of a training process that has captured the statistical structure of a vast text corpus without imposing any explicit theory of how the knowledge in that corpus is organized. Words that frequently co-occur in the training data are represented by proximate internal states. Concepts that are frequently discussed together are more readily generated in combination. The organization reflects the patterns of human discourse about knowledge rather than the structure of knowledge itself — and the distinction between the two is consequential.
The consequence is that the model's connections between concepts are statistical rather than intellectual. Two ideas that are frequently discussed together in the training data will be readily connected by the model, even if the connection is superficial or misleading. Two ideas that are genuinely related but rarely discussed together — because they belong to different disciplinary traditions, or because the connection has not yet been widely recognized — may not be connected at all. The model's implicit cross-reference system is richer than Diderot's renvois in volume but poorer in intellectual curation. Diderot's cross-references reflected deliberate editorial judgment about which connections mattered; the model's connections reflect the statistical frequency of co-occurrence in the training corpus, which is a measure of cultural habit rather than intellectual significance.
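The distinction between statistical and intellectual connection can be made concrete with a toy sketch. The corpus and pairing rule below are hypothetical, chosen purely for illustration: a frequency-based system connects whatever happens to appear together, with no regard for whether the connection is sound — which is precisely the mechanism behind the plausible-but-vacuous associations described above.

```python
from collections import Counter
from itertools import combinations

# Hypothetical miniature corpus: each string stands in for a document.
# "deleuze" and "flow" co-occur here because essays happen to pair them,
# not because the pairing is philosophically valid.
corpus = [
    "flow state attention deleuze rhizome",
    "flow state attention csikszentmihalyi",
    "flow state attention deleuze rhizome",
    "rhizome network deleuze guattari",
]

counts = Counter()
for document in corpus:
    words = sorted(set(document.split()))
    # count every unordered pair of words appearing in the same document
    for a, b in combinations(words, 2):
        counts[(a, b)] += 1

# A frequency-based system would readily connect "deleuze" and "flow",
# since they co-occur twice -- regardless of intellectual merit.
print(counts[("deleuze", "flow")])  # → 2
# Genuinely related terms that are never discussed together score zero.
print(counts[("csikszentmihalyi", "rhizome")])  # → 0
```

The sketch is a caricature of what a trained model does, but it preserves the essential point: co-occurrence counts measure cultural habit, and a system built on them inherits that measure as its implicit cross-reference scheme.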
Blair's framework illuminates this difference by placing it in the context of a recurring choice that every reference technology must make: between comprehensiveness and curation. The comprehensive reference work — the work that attempts to include everything — sacrifices the editorial judgment that would distinguish the important from the merely available. The curated reference work — the work that selects and arranges according to an explicit intellectual framework — sacrifices comprehensiveness in favor of depth and structure. No reference technology has fully resolved this trade-off, and each has managed it differently, according to the capacities of the medium and the needs of the users.
The large language model manages the trade-off by maximizing comprehensiveness and minimizing explicit curation — relying instead on the emergent statistical structure of the training data to provide a form of implicit organization. The result is a knowledge resource of extraordinary breadth whose organizational principles are invisible to the user. This invisibility creates a distinctive challenge that Blair's research helps to name. In previous reference technologies, the organizational principles were visible, even when they were imperfect. The user of Diderot's Encyclopédie could see the system of renvois, could evaluate whether a given cross-reference was illuminating or misleading, and could supplement the renvois with her own connections based on her independent knowledge. The user of a library catalog could see the classification scheme, could evaluate whether a book had been correctly classified, and could navigate among related classifications based on her understanding of the subject matter.
The user of a large language model cannot see the organizational principles, because the principles are not explicit. They are distributed across billions of numerical parameters whose individual values have no human-interpretable meaning. The user interacts with the model's output — the generated text — without access to the organizational structure that produced it. She cannot evaluate whether the model's connections between concepts reflect genuine intellectual relationships or merely statistical co-occurrence. She cannot determine whether the model's emphasis on certain aspects of a topic reflects the topic's actual structure or merely the biases of the training corpus. She cannot assess the model's coverage — whether its knowledge of a given domain is comprehensive or patchy — because she has no map of what the model knows and does not know.
This is what might be called the problem of the hidden index: the organizational structure of the model's knowledge is real and consequential — it shapes every output the model produces — but it is invisible to the user, who must infer the structure from the outputs rather than inspecting it directly. The hidden index has practical consequences that The Orange Pill documents at the level of specific experience. Segal describes working with Claude on connections between ideas drawn from different intellectual traditions — philosophy, neuroscience, evolutionary biology, software engineering — and finding that some connections were brilliantly generative while others were plausible but intellectually empty. The difference could not be predicted in advance, because the surface quality of both kinds of connection was identical: fluent, well-structured, apparently authoritative. The distinction between the valuable and the vacuous could only be made through the practitioner's independent judgment — a judgment that required, in addition to domain knowledge, a meta-awareness of the model's organizational limitations: the understanding that statistical co-occurrence is not the same as intellectual significance, and that the model's connections must be independently validated rather than accepted on the basis of their fluent presentation.
Diderot understood this problem in its eighteenth-century form. He wrote, in his article "Encyclopédie," that the system of cross-references was designed to "indicate the close connections among human knowledge," but he acknowledged that some connections were more illuminating than others, and that the reader must exercise judgment in following the renvois — that not every suggested connection would prove productive. The reader's judgment was the final filter, even in a reference work whose explicit purpose was to make the connections visible. In the AI era, the reader's judgment bears an even heavier burden, because the connections are not explicitly curated but emergently generated, and the basis for the generation is opaque.
The historical lesson is not that opaque organization is inherently deficient. Every organizational scheme has limitations, and transparency does not guarantee quality — a visible but wrong classification is no better than an invisible but wrong one. The lesson is that the user's evaluative role intensifies when the organizational scheme is hidden, because the user cannot rely on the scheme itself to signal when its outputs are unreliable. The user of Diderot's Encyclopédie could evaluate the renvois against her knowledge of the subject and against the intellectual coherence of the suggested connection. The user of a large language model can evaluate the output's surface quality but not the organizational process that produced it — and the evaluation of surface quality alone is, as preceding chapters have argued, inadequate to the task.
The practical implication is that effective AI collaboration requires the practitioner to develop a personal map of the model's capabilities and limitations — an informal, experience-based understanding of where the model's knowledge is deep and where it is shallow, where its connections are genuinely illuminating and where they are merely statistically frequent, where its outputs can be trusted and where they require independent verification. This map cannot be acquired from documentation, because the model's capabilities are not fully documented and are in any case constantly evolving. It must be built through sustained interaction — through the accumulation of experience with the model's strengths and failures across a range of tasks and domains.
Blair has documented an analogous process in the history of scholarly reference use. The Renaissance scholar who used a printed reference work developed, over time, an understanding of the work's reliability — which entries were strong, which were weak, which reflected the compiler's particular expertise and which reflected her particular ignorance. This understanding was not available from the reference work itself. It was built through repeated use, cross-checked against other sources, and refined through the scholar's growing familiarity with the work's implicit editorial standards. The process was slow, individualized, and dependent on the scholar's own expertise — and it was essential, because no reference work, however carefully compiled, was uniformly reliable across all its entries.
The same process, adapted to the distinctive features of AI systems, is essential for effective AI collaboration. The practitioner who has developed an experiential map of the model's capabilities — who knows from practice where the model excels and where it fails, what kinds of prompts produce reliable output and what kinds produce plausible nonsense — is a more effective collaborator than one who treats every output with undifferentiated trust or undifferentiated suspicion. The experiential map is the contemporary equivalent of the Renaissance scholar's hard-won familiarity with her reference library — and like its historical antecedent, it cannot be shortcut. It must be earned through the sustained exercise of evaluative judgment that turns raw interaction into reliable understanding.
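An experiential map of this kind is informal, but its logic can be sketched in a few lines. Everything here is hypothetical — the class name, the domains, the idea of reducing reliability to a ratio — and a real practitioner's map is far richer than a tally; the sketch only makes the underlying bookkeeping visible: a running record of how often a model's output in each domain survived independent verification.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExperientialMap:
    """Hypothetical sketch: per-domain [verified, failed] tallies."""
    outcomes: dict = field(default_factory=lambda: defaultdict(lambda: [0, 0]))

    def record(self, domain: str, verified: bool) -> None:
        # index 0 counts outputs that survived independent verification,
        # index 1 counts outputs that failed it
        self.outcomes[domain][0 if verified else 1] += 1

    def reliability(self, domain: str) -> Optional[float]:
        verified, failed = self.outcomes[domain]
        total = verified + failed
        # no experience yet means no judgment, not perfect trust
        return verified / total if total else None

m = ExperientialMap()
m.record("software engineering", True)
m.record("software engineering", True)
m.record("continental philosophy", False)  # e.g. the Deleuze misattribution
print(m.reliability("software engineering"))   # → 1.0
print(m.reliability("continental philosophy"))  # → 0.0
```

The design choice worth noting is the `None` for unvisited domains: the map distinguishes "verified reliable" from "never tested," which is exactly the distinction that undifferentiated trust or suspicion erases.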
In 1620, Francis Bacon published the Novum Organum, a work whose central argument was that the human mind, left to its own devices, systematically deceives itself. Bacon cataloged the forms of self-deception under the heading of "idols" — the Idols of the Tribe (errors inherent in human cognition), the Idols of the Cave (errors arising from individual temperament and experience), the Idols of the Marketplace (errors arising from the imprecision of language), and the Idols of the Theatre (errors arising from received philosophical systems). The catalog was intended not as a curiosity but as a practical instrument: Bacon believed that the identification of systematic errors was the first step toward their correction, and that a mind aware of its tendencies toward self-deception was better equipped to resist them than a mind that operated in blissful ignorance of its own distortions.
Ann Blair has situated Bacon's project within the broader history of information management by noting that the identification of error was itself a curatorial practice — a method for navigating the abundant and unreliable information landscape of the early seventeenth century. The printing press had not merely produced more books. It had produced more wrong books, more misleading books, more books that presented the appearance of authority without the substance. Bacon's catalog of idols was, among other things, a reader's guide: a framework for detecting the specific ways in which a text could go wrong, tailored to the distinctive failure modes of the print medium.
The AI era needs its own catalog of idols — its own framework for identifying the specific ways in which AI-generated content can mislead — and Blair's historical methodology suggests that the framework must begin with the distinctive surface characteristics of the medium. Every information technology produces content with characteristic surface features, and those features shape the reader's evaluative experience in ways that the reader may not consciously recognize. The manuscript page, with its visible corrections and marginal annotations, communicated its own provisionality: the reader could see that the text had been produced by a human hand, was subject to error, and had been modified over time. The printed page, uniform and authoritative in appearance, concealed these features of production: the editorial decisions, the compositor's errors, the proofreader's oversights were hidden beneath a smooth typographic surface that conveyed an impression of finality and reliability that the manuscript page had not.
AI-generated content carries this concealment to an extreme. The output of a contemporary large language model is uniformly fluent, consistently formatted, and grammatically impeccable regardless of its substantive quality. A passage containing a factual error is presented with the same typographic confidence as a passage containing a verified truth. An argument with a logical gap is structured with the same apparent rigor as an argument that is watertight. A connection between ideas that is statistically frequent but intellectually vacuous is articulated with the same eloquence as a connection that is genuinely illuminating. The surface is smooth in a historically unprecedented way: no medium in the history of information technology has produced content whose surface quality is so thoroughly independent of its substantive quality.
This independence creates a specific form of epistemic risk that Bacon would have recognized as a new idol — an Idol of the Machine, perhaps, a systematic source of error arising not from human cognition but from the characteristics of the medium through which human cognition now operates. The idol works through a mechanism that is as old as rhetoric: the conflation of fluency with truth. Human beings have always been susceptible to the persuasive power of well-expressed ideas, and the history of rhetoric is in part the history of efforts to distinguish between the well-expressed and the well-reasoned. But AI intensifies this susceptibility by producing fluency at industrial scale, without the effortful human processes that have historically correlated fluency with understanding. A human author who writes fluently about a subject has, in most cases, spent considerable time understanding the subject, and the fluency is a byproduct of the understanding. An AI system that generates fluent text about a subject has processed statistical patterns in a training corpus, and the fluency is a byproduct of the pattern-processing — which may or may not correspond to genuine understanding of the domain.
The Orange Pill captures the practical consequences of this decorrelation. Segal recounts the episode of the Deleuze misattribution — a passage produced by Claude that connected Csikszentmihalyi's concept of flow to a concept attributed to Deleuze, presented with full rhetorical confidence and philosophical polish, and substantively wrong. The passage worked as prose. It had the rhythm and vocabulary of genuine philosophical analysis. It would have passed the scrutiny of a reader unfamiliar with Deleuze. Only a reader who had actually engaged with Deleuze's work — who possessed the domain knowledge to evaluate the substance independently of the surface — could detect the error. And the detection required active resistance to the surface quality: the willingness to question a passage that sounded right, to check the reference, to hold the fluent presentation at arm's length and ask whether the substance matched the style.
This is the evaluative challenge that the smooth surface creates: the reader must develop the capacity to distrust fluency — to treat the quality of the prose as orthogonal to the quality of the thinking, and to evaluate the thinking on its own terms rather than inferring its quality from the quality of its expression. The capacity runs against deep cognitive habits. Centuries of correlation between fluency and expertise have trained readers to use fluency as an evaluative shortcut — a heuristic that was imperfect but generally reliable in a world where fluent expression required effortful understanding. AI has broken the heuristic, and the break has not yet been widely internalized. The result is a systematic vulnerability: readers who continue to use fluency as a proxy for quality will be systematically misled by AI-generated content, because the content is optimized for fluency independently of its optimization for accuracy or depth.
Blair's framework suggests that this vulnerability is not merely an individual cognitive failing. It is a feature of a transitional period in which the evaluative practices appropriate to the old medium have not yet been replaced by practices appropriate to the new one. The print era produced its own transitional vulnerabilities: readers who had developed evaluative habits in the manuscript era — where the physical quality of a text was a reasonable indicator of its intellectual quality — were systematically misled by the early products of the printing press, which could present poorly researched material in a format that looked authoritative. The vulnerability was eventually addressed by the development of new evaluative practices (critical reading, source verification, peer review) and new institutional structures (scholarly journals, university presses, review publications) that provided the quality signals that the print medium itself did not.
The AI era requires an analogous development. The evaluative practices appropriate to the AI medium must include, at minimum, the following capacities. First, the capacity to separate surface quality from substantive quality — to read AI-generated content with the understanding that fluency, organization, and apparent confidence are not evidence of accuracy, depth, or insight. Second, the capacity to identify what is absent — to notice not only what the AI has produced but what it has failed to produce, because the smooth surface conceals omissions as effectively as it conceals errors. The AI that produces a comprehensive-seeming analysis of a problem may have omitted a crucial consideration, and the omission will not be signaled by any feature of the output: the analysis will read as complete even when it is not, because the AI has no mechanism for flagging its own gaps. Third, the capacity to test connections independently — to treat the AI's associations between ideas not as established relationships but as hypotheses that require verification, because the basis for the association (statistical co-occurrence in training data) does not guarantee intellectual validity.
These capacities are not new in kind. They are extensions of the critical reading practices that scholars have developed over centuries of engagement with unreliable sources. But they must be adapted to the specific characteristics of AI-generated content — particularly the uniform surface quality that deprives the reader of the preliminary quality signals that previous media provided. The adaptation requires explicit attention, because the old habits are strong: the tendency to trust fluent, well-organized text is deeply ingrained, and resisting it requires a conscious effort that must be sustained across every interaction with AI-generated material.
The institutional dimension of the problem is at least as important as the individual one. Blair's research reveals that individual evaluative skill, however well developed, has historically been insufficient without institutional support. The Renaissance scholar who could evaluate a printed text competently still benefited from the institutional infrastructure of scholarly communication — the reviews, the citations, the scholarly apparatus — that provided collective quality assessment to supplement individual judgment. The AI era requires analogous institutional support: mechanisms for collective evaluation of AI output, standards for AI-assisted work in professional contexts, and cultural norms that treat the critical evaluation of AI-generated content as a professional obligation rather than an optional enhancement.
The development of these institutional supports is the work of the coming decade. Blair's historical research indicates that institutional responses to new information technologies follow the technology's adoption by years or decades — a lag that is painful for the individuals who must navigate the new medium without adequate support but that appears to be structurally unavoidable, because institutional innovation requires the accumulation of experience with the new technology that can only be acquired over time. The lag is not a reason for complacency. It is a reason for urgency: the sooner the institutional development begins, the shorter the period during which practitioners must rely on individual judgment alone, and the lower the cumulative cost of the transition.
Bacon identified the idols of the mind as a practical problem — not a philosophical curiosity but a source of real error with real consequences. The smooth surface of AI-generated content is a practical problem of the same kind: a systematic source of evaluative error that affects every practitioner who engages with AI output, and that can be addressed — not eliminated, but mitigated — through the deliberate development of evaluative practices adapted to the medium's distinctive features. The development is the curatorial challenge of the moment, and the quality of the response will determine whether AI-generated abundance is converted into genuine knowledge or merely into a more sophisticated and harder-to-detect form of noise.
The transition from intensive to extensive reading is one of the most consequential shifts in the history of Western intellectual life, and its mechanism illuminates the AI moment with a precision that the participants in the current transformation have not yet fully absorbed. Before the printing press, reading was predominantly intensive: the reader engaged deeply with a small number of texts, rereading them many times, memorizing passages, annotating margins, treating each text as an object of sustained contemplation. The scarcity of books enforced the practice — when you owned three books, you read them thoroughly — but the practice also reflected a theory of knowledge that valued depth of engagement over breadth of coverage. To know a text was to have internalized it, to carry it in memory, to be able to deploy it in conversation and argument without consulting the physical object.
The printing press made extensive reading possible and, over the course of two centuries, dominant. The reader no longer needed to internalize each text, because the text would be available for consultation when needed. She could read more books less thoroughly, surveying the literature rather than mastering individual works, extracting what she needed and moving on. The shift was not universal — scholars continued to read intensively within their specialties — but the general trajectory was clear: more material, engaged with less deeply, navigated through the curatorial technologies (indexes, bibliographies, reviews) that the previous chapters have described.
Ann Blair has documented this shift with particular attention to its cognitive consequences. The extensive reader developed intellectual capacities different from those of the intensive reader. She was better at comparison, synthesis, and the identification of patterns across multiple sources. She was weaker at the kind of deep, embodied knowledge that comes from prolonged engagement with a single text — the knowledge that allows the Talmudic scholar to cite a passage from memory with its surrounding context, or the classicist to hear a Virgilian echo in a line of Dante. The shift was not a simple gain or a simple loss. It was a reallocation of cognitive resources: capacities that the old reading practice had cultivated were allowed to atrophy, while capacities that the new practice demanded were developed in their place.
The AI era is producing a third mode of reading that is neither intensive nor extensive but something for which the existing vocabulary is inadequate. The practitioner who works with AI does not read a fixed text, either deeply or broadly. She reads a dynamically generated text — a text that is being produced in response to her direction, that changes with each interaction, and that must be evaluated not as a finished artifact but as a provisional output in an ongoing process. The reading is simultaneous with the writing: the practitioner reads the AI's output, evaluates it, responds with a revision or redirection, reads the revised output, evaluates again. The cycle is continuous, and each iteration requires a distinct form of evaluative attention.
This interactive mode of reading demands capacities that neither intensive nor extensive reading fully developed. The intensive reader cultivated depth of engagement with a fixed text. The extensive reader cultivated breadth of coverage across many fixed texts. The interactive reader must cultivate something different: the capacity for rapid, iterative evaluation of provisional material — material that is not fixed, not final, and not the product of a human intelligence whose interpretive methods the reader shares. The evaluation must be fast enough to maintain the momentum of the collaboration but careful enough to catch the errors, omissions, and superficialities that the AI's smooth surface conceals.
The Orange Pill documents the development of this reading capacity as an experiential process. Segal describes how his ability to evaluate Claude's output evolved over weeks and months of collaboration — how he learned to recognize the patterns that signaled genuine insight versus statistical recombination, to detect the moments when the AI's fluency was masking a gap in substance, to distinguish between connections that illuminated and connections that merely sounded illuminating. The development was not linear. There were episodes of misplaced trust (the Deleuze error) and episodes of excessive skepticism (rejecting output that was, on reflection, genuinely valuable). The calibration was gradual, iterative, and dependent on Segal's growing familiarity with the specific AI system's characteristic strengths and failures.
Blair's research on the history of reading practices suggests that this kind of calibration — the development of evaluative habits appropriate to a specific medium — has accompanied every major shift in reading technology. The transition from scroll to codex required new navigational habits (the codex allowed random access; the scroll demanded sequential reading). The transition from manuscript to print required new evaluative habits (the printed text could not be assessed by the quality of its handwriting or the reputation of its scriptorium). The transition from print to digital required new attentional habits (the hyperlinked text demanded constant decisions about whether to follow a link or continue on the current path). Each transition produced a period of maladjustment — a period during which readers applied the habits of the old medium to the new one, with predictably poor results — followed by a period of adaptation in which new habits were developed, practiced, and eventually internalized.
The AI transition is in its period of maladjustment. Practitioners are applying the evaluative habits of previous media — habits calibrated to fixed texts produced by human authors — to dynamically generated text produced by statistical models. The habits are inadequate. They lead to the two characteristic errors of the maladjustment period: excessive trust (accepting AI output because it looks and reads like competent human-authored text) and excessive distrust (rejecting AI output categorically because it is machine-generated rather than evaluating it on its substantive merits).
Blair's framework suggests that the resolution will follow the historical pattern: the development of new reading practices specifically adapted to the characteristics of AI-generated text. The practices will not emerge spontaneously. They must be theorized, taught, and institutionally supported, just as the reading practices appropriate to the print era were theorized in the humanist pedagogical tradition, taught in the educational institutions of the early modern period, and supported by the institutional infrastructure of scholarly communication.
The four-part taxonomy proposed in the preceding chapter — prompting, evaluating, selecting, and integrating — provides a framework for the practice-level skills. But the taxonomy operates within a broader cognitive disposition that is harder to codify: the disposition of active, critical engagement with generated material. This disposition includes the willingness to question fluent output, the patience to verify connections independently, the self-awareness to recognize when one's own evaluative judgment is being compromised by the AI's surface quality, and the intellectual humility to acknowledge uncertainty about whether a given output reflects genuine insight or sophisticated pattern-matching.
The disposition is not a technique. It is a stance — a way of relating to AI-generated material that treats every output as provisional until validated by independent judgment. The stance is cognitively expensive: it requires sustained attention at a moment when the AI's fluency is designed to reduce the reader's sense that sustained attention is necessary. The effort of maintaining critical engagement against the AI's smooth surface is the interactive reader's equivalent of the intensive reader's effort to memorize and internalize a text: it is the cognitive labor that the medium demands and that produces, over time, the specific form of expertise that the medium rewards.
Blair's work on the humanist educators reveals that the most effective pedagogies for developing reading skills were not those that instructed students in rules but those that modeled expert practice. The teacher who demonstrated how she read a text critically — who showed the student where she paused, what she questioned, how she evaluated a claim, why she excerpted one passage and not another — was more effective than the teacher who simply told the student to "read critically." The modeling made visible the cognitive processes that expert reading involved, and the visibility allowed the student to imitate, practice, and gradually internalize the processes.
The same pedagogical principle applies to AI-era reading. The practitioner who models effective AI collaboration — who shows how she prompts, where she pauses to evaluate, what criteria she applies in selecting among the AI's outputs, how she integrates curated material into finished work — provides a form of pedagogical demonstration that no manual or course can replace. The demonstration makes visible the curatorial judgment that the smooth surface of AI output conceals, and the visibility transforms the curatorial judgment from an invisible private process into a teachable public practice.
The history of reading is the history of adaptation to new media, and each adaptation has produced new forms of literacy — new constellations of cognitive capacities appropriate to the specific demands of the medium. The AI era demands its own form of literacy: a curatorial literacy whose core capacities are the ability to direct generative processes through well-formulated prompts, to evaluate generated output against standards of accuracy, depth, and purpose, to select from abundant alternatives on the basis of qualitative judgment, and to integrate curated material into coherent work that exceeds the sum of its parts. The literacy is demanding. Its development is urgent. And its historical antecedents, documented across six centuries of adaptation to information abundance, confirm that the development is both necessary and achievable — provided the effort is made.
The argument of this book can be stated in a single sentence, though the sentence requires the preceding nine chapters to bear its full weight: every major expansion of the information supply has increased the value of human curatorial judgment, and the AI expansion is no exception.
The sentence is simple. Its implications are not. The simplicity has been a liability in the contemporary discourse, because the claim sounds like reassurance — a comfortable assertion that human beings will remain valuable despite the machines — and reassurance is what the anxious audience wants to hear. But the claim is not reassurance. It is a structural observation about the relationship between abundance and judgment, drawn from six centuries of historical evidence, and its practical implications are demanding rather than comforting. The observation says that judgment will be more valuable. It does not say that the development of adequate judgment will be easy, or automatic, or universally achieved.
Ann Blair's research supports the structural claim with an accumulation of evidence that is difficult to dismiss. The medieval florilegium compiler's judgment about what to include and what to omit determined the intellectual character of the compilation. The Renaissance scholar's judgment about what to excerpt and how to organize the excerpts shaped the knowledge that the commonplace book could support. The early modern editor's judgment about what to publish and how to present it determined what the reading public encountered. The Enlightenment encyclopedist's judgment about how to connect different domains of knowledge shaped the intellectual landscape for generations. In every case, the curatorial judgment was exercised by a human intelligence operating within conditions of abundance — an intelligence that could not engage with every available item and that therefore had to select, evaluate, and organize according to criteria that the intelligence itself supplied.
The criteria were never fully explicit. This is a point that Blair's research establishes with particular force and that the contemporary discourse about AI has largely failed to absorb. The Renaissance humanists who theorized the art of excerpting could describe some of the criteria that guided their selections — relevance to the topic, quality of expression, novelty of the insight, reliability of the source — but they could not reduce the practice to an algorithm. There was always a residual element of judgment that resisted codification: the capacity to recognize significance in a passage that met none of the explicit criteria, to detect the subtly misleading in a passage that met all of them, to sense the productive connection between two passages that no explicit rule linked. The humanists called this residual capacity iudicium, and they considered it the highest intellectual virtue — higher than memory, higher than diligence, higher than any technical skill — because it was the capacity upon which all other intellectual activities depended.
Iudicium is the capacity that the AI era makes most valuable and that AI is least capable of providing. The large language model can retrieve information, generate text, produce code, execute analyses, and suggest connections across domains — but it cannot evaluate whether its own outputs are genuinely significant or merely plausible, whether its connections illuminate or merely juxtapose, whether its analyses address the right question or merely answer the question as posed. The evaluation requires a form of judgment that is grounded not in statistical patterns but in purposes, values, and an understanding of what matters — an understanding that the AI does not possess because it has no purposes of its own and no stake in the outcomes its outputs produce.
The Berkeley study confirms this at the empirical level: AI intensifies work rather than reducing it, because the reduction of execution friction exposes the full weight of evaluative judgment that execution friction had partially concealed. When writing code was slow and difficult, the engineer's judgment about what code to write was partly embedded in the process of writing it — the difficulty imposed its own form of evaluation, forcing the engineer to think carefully before committing to a direction because the cost of changing direction was high. When writing code is fast and easy, the evaluative judgment must be exercised independently of the process, and the exercise is more demanding because the friction that once forced deliberation has been removed. The paradox is structural: the easier the production, the harder the evaluation, because the evaluation can no longer hitchhike on the difficulty of the production process.
Blair's framework illuminates one further dimension of the curatorial imperative that the contemporary discourse has not adequately addressed: the economics of curatorial labor. Throughout the history of information management, curatorial work has been systematically undervalued relative to the work it curates. The compiler received less credit than the original author. The editor received less recognition than the writer. The librarian received less prestige than the researcher. The indexer, the bibliographer, the reviewer — each performed work that was essential to the intellectual ecosystem and each was compensated, both materially and in terms of professional recognition, below the level that the work's actual contribution warranted. The undervaluation reflected the invisibility of curatorial labor: the finished product appeared to the reader as a seamless whole, and the curation that produced it — the reading, evaluating, selecting, organizing, revising — was hidden behind the surface of the published work.
AI collaboration reproduces this invisibility and intensifies the undervaluation. When a practitioner publishes code, a document, or an analysis produced through iterative collaboration with an AI system, the curatorial labor that produced the output is invisible to anyone who encounters only the finished artifact. The prompts tried and abandoned, the outputs generated and rejected, the evaluative judgments that shaped the AI's contribution toward the practitioner's vision — none of this is visible in the result. The invisibility leads to systematic misunderstanding: observers attribute the output's quality to the AI's capability rather than to the practitioner's judgment, and the practitioner's curatorial contribution is neither recognized nor compensated at its actual value.
The misunderstanding has practical consequences. Organizations that treat AI-assisted work as automated production rather than curated collaboration will structure their workflows, their compensation, and their professional development in ways that undervalue the curatorial judgment on which the quality of the output depends. They will reward speed and volume — the metrics that AI optimization naturally produces — rather than the evaluative depth that distinguishes excellent AI collaboration from merely competent AI use. The result will be organizations that produce more but produce worse: abundant in output, impoverished in judgment, unable to distinguish between the adequate and the excellent because the institutional structures do not reward the distinction.
Blair would note — with her characteristic scholarly understatement — that this outcome is not inevitable. It is a choice. The institutions that recognized and supported curatorial labor in previous eras of information abundance produced the intellectual achievements that the historical record celebrates: the great encyclopedias, the scholarly editions, the research libraries, the review journals that maintained intellectual standards across entire disciplines. The institutions that failed to support curatorial labor produced the intellectual detritus that the historical record has mercifully forgotten: the compilations without judgment, the publications without standards, the abundant but worthless output that filled warehouses and contributed nothing to human understanding.
The choice between these outcomes is being made now, in the organizational decisions, educational policies, and cultural norms that are being developed — or not developed — in response to the AI transition. The historical evidence does not determine which choice will prevail. It establishes that the choice exists, that its consequences are significant, and that the outcome depends on whether human societies invest in the curatorial capacity that abundance makes necessary or allow that capacity to be eroded by the same abundance that makes it valuable.
The permanence of the curatorial contribution is not a guarantee of individual security. It is a structural feature of the relationship between information abundance and human judgment — a feature that has held across six centuries of technological change and that the AI expansion is confirming rather than contradicting. The feature tells us what kind of human contribution will remain valuable. It does not tell us that every individual will develop that contribution, or that every institution will support it, or that every society will make the investments necessary to cultivate it at scale. The development, the support, and the investment are human choices — choices that the historical evidence illuminates but does not make for us.
The printing press made curation more valuable. The internet made curation more valuable. AI makes curation more valuable still. The pattern is six centuries old and still accelerating. The question is not whether human judgment will matter. The question — the one that the historical evidence poses but cannot answer, because it concerns the future rather than the past — is whether we will build the institutions, develop the pedagogies, and cultivate the practices that allow human judgment to operate at the level the moment demands.
The reference works will be reorganized. The reading practices will evolve. The educational institutions will adapt or be replaced by institutions that do. The curatorial technologies of the AI era — whatever their specific form — will join the florilegium, the commonplace book, the encyclopedia, and the search engine in the long lineage of human responses to the permanent condition of having too much to know. The responses have always been adequate, eventually. The question is how long "eventually" takes, and who bears the cost of the interval.
My desk has a problem that would have made Conrad Gessner weep.
Not the clutter — though there is that — but the ratio. For every document I finish, every piece of code I ship, every decision I commit to, Claude has generated dozens of alternatives I had to evaluate and discard. The discard pile is invisible. Nobody sees it. Nobody counts the outputs I rejected, the structural suggestions I overruled, the passages I read three times before deciding they were fluent and empty. The finished work looks like it emerged cleanly. It did not. It emerged through a process of relentless selection that consumed more of my cognitive energy than any previous phase of my career — more than the years of managing engineering teams, more than the months of building Napster Station for CES, more than the frantic all-nighters of my twenties writing games in Assembler.
This is the paradox that Ann Blair's work forced me to see clearly for the first time: AI did not give me more leisure. It gave me more to judge.
I had been living inside that paradox for months before I encountered Too Much to Know, and living inside a paradox without a name for it is a particular kind of disorientation. I knew the work was intensifying. I could feel it in the quality of my exhaustion — not the exhaustion of execution, which is physical and resolves with sleep, but the exhaustion of evaluation, which is cognitive and accumulates. The Berkeley study confirmed what my body already knew. But Blair's framework did something the study could not: it showed me the structure beneath the feeling. The structure is six centuries old. The feeling is the same feeling Gessner had in 1545, staring at a catalog that was obsolete before the ink dried. The flood is different. The drowning is the same.
What changed my thinking was not the historical parallel itself — I had heard the printing press analogy a hundred times before — but the specificity of Blair's analysis of what actually happened after the flood. Not reassurance. Not the vague claim that everything worked out eventually. But the granular documentation of how specific people developed specific practices for navigating abundance: Locke with his vowel-based index, Drexel with his excerpting manual, Diderot with his cross-references. Real people building real tools for a real problem. The tools were not automatic. They were inventions — acts of intellectual labor as creative, in their way, as the texts they helped to navigate.
That is what the AI moment needs, and what it largely lacks: invented practices for navigating the abundance, developed with the same deliberate intelligence that the abundance itself displays. Not prompting tips. Not productivity hacks. Genuine curatorial methods — theorized, taught, institutionally supported — that allow human judgment to operate at the level the moment demands. Blair's history shows that these methods have always been developed, eventually. But "eventually" is not a timeline. It is a hope. And the people living inside the transition — my engineers in Trivandrum, my collaborators at Napster, my children at the dinner table asking what they are for — cannot wait for eventually. They need the methods now.
The concept from Blair's work that I cannot stop turning over is iudicium — the humanist term for the cultivated capacity for judgment that no rule can capture and no machine can replicate. Not because machines are stupid. Because judgment is grounded in caring about the outcome, and caring is not a computational operation. The practitioner with iudicium is the one who rejects the fluent passage because it does not say what needs to be said, who catches the elegant error that the smooth surface conceals, who knows — through some combination of expertise, taste, and stubborn commitment to getting it right — that this output serves the work and that one merely fills space. The knowledge is not algorithmic. It is earned. And it is the thing that makes the collaboration worth having.
I wrote The Orange Pill about amplification — about what happens when human intelligence meets machine capability and the signal gets louder. Blair's framework taught me what I should have seen from the beginning: that amplification without curation is just noise at higher volume. The signal matters only if someone is listening carefully enough to distinguish it from the static. That listener — critical, selective, purposeful, refusing to be seduced by the smooth surface — is the curator. Has always been the curator. Will always be the curator, as long as there is more to know than any single mind can hold.
The printing press did not replace the scholar. It replaced the scribe and elevated the scholar's judgment to the center of intellectual life. AI will not replace the builder. It will replace the executor and elevate the builder's judgment to the center of everything worth making. The elevation is real. It is also terrifying, because judgment is harder than execution, and the institutions that should be teaching it are still debating whether to allow calculators in the classroom.
Six centuries of evidence say the same thing: abundance is not the enemy. The enemy is the failure to build the curatorial practices that convert abundance into value. The practices must be invented, taught, and maintained — like Gessner's catalog, like Locke's index, like the dams I keep building in the river.
The flood is here. The tools are waiting to be forged. The judgment is yours.
-- Edo Segal
Every expansion of the information supply — from Gutenberg's press to the large language model — has overwhelmed existing methods of navigation and demanded the invention of new ones. Ann Blair spent decades proving that the crisis of "too much to know" is not a modern affliction but a permanent structural condition, and that the resolution has never come from the technology that caused the flood. It has come from the human practices of selection, evaluation, and judgment that convert abundance into knowledge.
This book channels Blair's historical framework through the lens of The Orange Pill and the AI revolution of 2025–2026. It reveals that what AI practitioners are improvising daily — the prompting, evaluating, discarding, and integrating — has deep roots in the commonplace book traditions of the Renaissance. And it argues that the curatorial judgment the humanists called iudicium is the skill the AI age rewards above all others.
The printing press did not replace the scholar. It replaced the scribe. AI will not replace the builder. It will replace the executor. What remains — what has always remained — is the human capacity to judge what is worth keeping.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Ann Blair — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →