By Edo Segal
The building I keep thinking about is not a server farm or a startup office. It is a university.
Not any particular university. The idea of one. The institution that has survived every previous technological rupture for eight hundred years and is now facing the one that might actually break it.
I did not expect to end up here. I am a builder, not an academic. My relationship with universities has been the same as most people in technology — grateful for the foundation, impatient with the pace, and increasingly unsure whether the thing I am paying for my children to attend will exist in recognizable form by the time they graduate.
Then I started building with Claude, and the question changed shape.
In Trivandrum, I watched engineers acquire, in days, capabilities that used to take semesters. In my own work, I watched the gap between imagining a thing and building a thing collapse to a conversation. And the question that kept surfacing was not about productivity or even about jobs. It was about formation. If the skills a university teaches can be acquired through a tool that costs a hundred dollars a month, what is the university actually for? What was it ever for?
Clark Kerr answered that question sixty years ago, and almost nobody listened. Not because his answer was wrong. Because it was inconvenient. He described the modern university as an institution that had buried its deepest purpose under layers of function — credentialing, workforce preparation, research production, athletic entertainment — until the purpose itself was invisible. He called it the multiversity, a word he invented because no existing term could contain the contradictions.
What draws me to Kerr is not nostalgia for the institution he built. It is the precision of his diagnosis. He saw that the university's measurable outputs were not its purpose. The credits, the degrees, the publications — these were the scaffolding. The purpose was the formation of judgment. The development of minds capable of navigating complexity without reducing it.
That purpose is exactly what AI cannot replicate. And it is exactly what the institutions responsible for the next generation are least equipped to deliver, because they optimized for the scaffolding and forgot the building.
This book applies Kerr's framework to the moment we are living through. Not because the university is the only institution that matters, but because it is the institution where the stakes are clearest. The twelve-year-old who asks "What am I for?" will walk through those doors in six years. What she finds there — or fails to find — depends on whether we understand what the building was always meant to be.
— Edo Segal ^ Opus 4.6
Clark Kerr (1911–2003) was an American economist, labor mediator, and higher education leader who served as the first chancellor of the University of California, Berkeley (1952–1958) and as the twelfth president of the University of California system (1958–1967). Born in Stony Creek, Pennsylvania, Kerr was educated at Swarthmore College, Stanford University, and the University of California, Berkeley, where he earned his doctorate in economics. He was the principal architect of the 1960 California Master Plan for Higher Education, a three-tier framework — research universities, state colleges, and community colleges — that became the most influential model of public higher education in the twentieth century and expanded university access to millions. His 1963 Godkin Lectures at Harvard, published as *The Uses of the University*, introduced the concept of the "multiversity" — the modern research university as a sprawling, multi-constituency institution serving students, faculty, government, and industry simultaneously, held together not by a unified mission but by mediation among competing purposes. The book was revised through five editions over four decades and remains the foundational text in the study of American higher education. Kerr was dismissed as UC president in 1967 under political pressure from Governor Ronald Reagan, and he spent his remaining decades as chairman of the Carnegie Commission on Higher Education and the Carnegie Council on Policy Studies in Higher Education, producing landmark studies on access, equity, and institutional governance. His legacy endures in the institutional architecture of public higher education systems worldwide and in the analytical framework that continues to define how scholars and administrators understand the university's contradictory purposes.
In 1963, Clark Kerr stood before a Harvard audience and described an institution that did not yet have a name. He called it the multiversity — not because the word was elegant, but because no existing term could contain the thing he was describing. The modern American university was not a university at all, not in any sense that John Henry Newman or Wilhelm von Humboldt would have recognized. It was a collection of communities and activities held together by a common name, a common governing board, and related purposes. It was, as Kerr quipped with characteristic bluntness, "a series of individual faculty entrepreneurs held together by a common grievance over parking."
The joke landed because it was true. The multiversity that Kerr described — and that he had spent a decade building as president of the University of California — served undergraduates who wanted an education, graduate students who wanted research training, faculty who wanted intellectual autonomy, government agencies that wanted applied science, industries that wanted workforce preparation and technology transfer, hospitals that wanted clinical training, alumni who wanted football, and a public that wanted all of these things simultaneously and was willing to pay for them only intermittently. Each constituency believed its demand was the institution's primary purpose. Each was partly right. None had the complete picture.
Kerr's genius was not in resolving these contradictions. It was in recognizing that the contradictions were the institution. The multiversity did not have a single animating purpose. It had many purposes, and the tension between them was not a flaw to be corrected but a condition to be managed. The president's job was not to lead in the traditional sense — not to articulate a vision and drive the institution toward it — but to mediate. To keep the peace among competing factions. To ensure that the research enterprise did not cannibalize undergraduate teaching, that the athletic program did not embarrass the medical school, that the state legislature's demand for workforce production did not overwhelm the faculty's commitment to basic research.
The California Master Plan for Higher Education, which Kerr authored in 1960, was the institutional expression of this philosophy. It created a three-tier system — research universities, state colleges, community colleges — that distributed the multiversity's contradictory functions across different institutional types. Research happened at Berkeley and UCLA. Workforce preparation happened at the state colleges. Open access happened at the community colleges. The system was not perfect, but it was coherent, and it educated more human beings at greater scale and with greater social mobility than any educational structure in history.
Sixty years later, the institution Kerr built faces the most consequential challenge since its creation. The challenge is not financial, though financial pressures are real. It is not political, though political pressures are intensifying. It is technological, and the technology in question does not merely alter one of the multiversity's functions. It challenges all of them simultaneously.
When Segal describes the moment "the machines learned to speak our language" — the arrival of large language models that could engage in natural conversation, produce expert-level analysis, generate working code, and synthesize knowledge across domains — he is describing a capability that strikes at the foundations of every function the multiversity performs. Teaching undergraduates: when any student can access expert-level explanation through conversation with an AI, available at any hour, tailored to their individual level of understanding, patient beyond any human instructor's capacity for patience, the lecture hall's informational function becomes redundant. Training graduate students: when AI can produce literature reviews in minutes, generate analytical frameworks, and draft manuscript sections, the apprenticeship model that justified years of doctoral training requires fundamental redefinition. Conducting research: when AI can process datasets, identify patterns, and generate hypotheses at speeds no human researcher can match, the researcher's value migrates from computation to curation — from the ability to process information to the judgment about which information deserves processing. Serving industry: when industry can build its own tools through AI-augmented development, the technology-transfer pipeline that once justified billions in federal research funding becomes less central to the economic argument for public investment in universities. Contributing to public life: when any citizen can access knowledge that once required institutional intermediation, the university's role as knowledge gatekeeper dissolves.
No single function escapes. That is what makes the current moment different from every previous technological challenge the university has absorbed.
Kerr himself anticipated, in the broadest terms, that technology would reshape the institution. In the final edition of *The Uses of the University*, published two years before his death in 2003, he noted the emergence of what he called the first revolution in educational technology in five centuries — referring to the early promise of online learning. He warned that "education is not only about the transfer of factual knowledge," a statement that reads, two decades later, as both prescient and insufficient. Prescient because the distinction between knowledge transfer and something deeper — what this analysis will call judgment development — turns out to be the hinge on which the university's survival turns. Insufficient because Kerr could not have foreseen the scale of the challenge. Online courses in 2001 were clumsy, asynchronous, text-heavy affairs that replicated the lecture in a less effective medium. The AI systems of 2025 do not replicate the lecture. They render it unnecessary for informational purposes, and they do so with a conversational fluency that makes the comparison humiliating for anyone who has sat through a poorly organized fifty-minute presentation on introductory statistics.
Yet Kerr's framework — the multiversity as a multi-constituency organism, "partially at war with itself," requiring mediation rather than direction — remains the most useful lens for understanding what the university faces now. The war has not ended. It has acquired new fronts.
Consider the fault lines that AI has opened within the institution. Faculty are divided between those who embrace AI tools as research accelerants and those who view them as threats to intellectual integrity. The division does not follow predictable disciplinary lines: some humanists are early adopters, recognizing that AI can handle the mechanical aspects of literary analysis and free them for interpretive work; some computer scientists are skeptics, understanding better than anyone the limitations of systems they helped build. Administrators are caught between the imperative to modernize — every strategic plan now contains an AI section — and the governance structures that make rapid change nearly impossible. Students are, as always, the constituency with the least institutional power and the most at stake: they are being educated for a labor market that is transforming faster than the curriculum can track, and they know it. The twelve-year-old in Segal's narrative who asks her mother "What am I for?" will, in six years, be choosing whether a university education is worth four years and two hundred thousand dollars of her life.
The multiversity's competing constituencies now face a shared crisis, but they face it with contradictory expectations. Students expect AI tools to be integrated into their education; many already use them, often in ways that faculty find threatening to academic integrity. Faculty expect their expertise to remain the foundation of teaching and research, even as the information asymmetry that undergirded their authority dissolves. Government expects the university to produce a workforce adapted to AI-transformed industries, which requires curriculum changes that faculty governance processes were designed to delay. Industry expects the university to produce research that advances AI capability, and simultaneously expects the university to produce graduates who can work alongside AI systems that did not exist when those graduates began their degrees. The public expects all of this at a cost that has not increased, in an institution whose overhead has grown relentlessly for four decades.
These expectations conflict. The conflicts are structural, not personal. They arise from the institutional architecture of the multiversity itself — the same architecture that Kerr designed for stability in an era when stability was the highest institutional value.
Stability, in 1960, meant the capacity to absorb growth. The University of California was expanding at a rate that would have destroyed a less carefully designed system. Kerr's Master Plan managed that growth by creating clear institutional roles: research universities conducted research and trained the most advanced students, state colleges prepared the professional workforce, community colleges provided open access. Each tier had a defined mission, a defined student population, and a defined relationship to the others. The system was a masterpiece of institutional engineering, and its stability was its greatest achievement.
That stability is now the obstacle. The governance structures that prevented hasty decisions during periods of growth now prevent necessary decisions during a period of transformation. Faculty senates that were designed to protect academic freedom against administrative overreach now function, in practice, as brakes on curriculum reform. Tenure systems that were designed to ensure intellectual independence now protect, along with genuine scholarly freedom, a resistance to pedagogical change that hardens into institutional inertia. Accreditation processes that were designed to ensure quality now enforce compliance with standards developed for a world that no longer exists.
Gerald Chan, delivering the inaugural Dean's Distinguished Lecture at UC Berkeley's College of Computing, Data Science, and Society in February 2025, framed the problem in explicitly Kerrian terms. His lecture was titled "Rethinking Clark Kerr: The Uses of the University in the Age of Generative AI," and his central argument was that AI represents both a fulfillment and a disruption of Kerr's vision. A fulfillment because Kerr had always insisted that the university's deepest purpose was the production and dissemination of knowledge, and AI promised to accelerate both beyond anything Kerr could have imagined. A disruption because the acceleration threatened to bypass the institution entirely — to deliver knowledge directly to the learner, conduct research without the university's infrastructure, and prepare workers without the university's curriculum.
Chan cited a 2023 Harvard experiment in which students taught by an AI tutor learned faster, with learning gains that doubled those of students taught by a university instructor. The students also reported feeling more engaged and motivated. The implications were uncomfortable for an audience of university administrators and faculty. If AI-tutored students learn faster and feel more engaged, what is the university providing that justifies its extraordinary cost?
The answer — and it is the answer this entire analysis will develop across the remaining chapters — is that the university provides something the AI tutor cannot: the formation of judgment. Not the delivery of information, which AI handles more efficiently. Not the certification of competence, which portfolio-based assessment may soon handle more accurately. But the development of the capacity to evaluate, to discern, to make decisions under conditions of uncertainty, to ask the questions that determine whether the information is worth having in the first place.
Chan put it directly: "The challenge for education is one of knitting together what technology is good at and what humans are good at." Kerr would have recognized the formulation. It is the language of a mediator — someone who does not choose between competing goods but finds the arrangement that allows them to coexist. The multiversity was always about knitting together competing purposes. The AI-era multiversity must knit together human and artificial intelligence, and it must do so under conditions of extreme time pressure, because the alternative — the gradual erosion of the institution's relevance as students, employers, and the public find cheaper, faster, more convenient ways to acquire what the university once monopolized — is already underway.
Kerr ended his career with a statement that reads, in 2026, as both reassurance and challenge: "Higher education has been very resilient in turning fears into triumphs. I expect that this will continue." The resilience is real. The university has survived the Reformation, the Enlightenment, the Industrial Revolution, two world wars, the GI Bill's explosive expansion, the social upheavals of the 1960s, the defunding of the 1980s, and the commercialization of the 1990s. Each crisis forced a reinvention, and each reinvention produced an institution more capable than the one it replaced.
But resilience is not destiny. The university survived previous crises because it possessed something no alternative could provide. The medieval university possessed the library. The Humboldtian university possessed the laboratory. The American multiversity possessed the research infrastructure that government and industry needed and could not replicate elsewhere. In each case, the institution's survival depended on a function that could not be performed outside its walls.
The question for the multiversity in the age of AI is whether such a function still exists, whether the university possesses something that no combination of AI tools, online platforms, corporate training programs, and self-directed learning can replicate.
Kerr's framework insists that the answer depends not on what the university has been, but on what it chooses to become. The multiversity was never a fixed entity. It was always an institution in motion, pulled in different directions by different forces, held together by the skill of its mediators and the inertia of its traditions. The directions have changed. The forces are different. The mediator's task is harder than it has ever been, because the window for mediation is narrower than any the institution has previously faced.
The next chapter examines the function the university must abandon — the one that AI performs better, cheaper, and more personally than any lecture hall ever could.
For eight hundred years, the university's primary delivery mechanism has been a person standing in front of a room, talking.
Describe this to someone who has never encountered it — an anthropologist from a distant culture, or simply a thoughtful twelve-year-old — and the strangeness becomes visible. A single individual, selected for expertise in a narrow domain, speaks for fifty to seventy-five minutes while a hundred or three hundred or seven hundred other individuals sit in rows, facing the same direction, taking notes. The speaker may be brilliant or tedious; the format does not distinguish between the two. The listeners may be engaged or asleep; the format cannot tell. The information flows in one direction. Questions are permitted at the end, if time allows, which it usually does not. The process repeats three times a week for fifteen weeks, at the conclusion of which the listeners are tested on their retention of what the speaker said, and a letter grade is assigned that will follow them for the rest of their professional lives.
This format — the lecture — predates the printing press. It was invented in an era when books were rare, when the professor's primary function was literally to read aloud from the only available copy of a text. The students' notes were the means of textual reproduction. The lecture was, in its origin, a technology for copying information in an age before mechanical copying existed. The printing press should have ended it. Inexpensive textbooks should have ended it. The internet should have ended it. Recorded video lectures, available free from the world's best universities through platforms like MIT OpenCourseWare and Coursera, should have ended it.
None of them did. The lecture persisted through five centuries of technological change, absorbing each new medium without fundamentally altering its form. Professors who could have assigned the reading and used class time for discussion continued to lecture. Universities that could have replaced large lecture halls with smaller seminar rooms continued to build auditoriums. The format persisted not because it was the most effective way to transmit knowledge — decades of educational research have demonstrated that it is not — but because it was the most efficient way to scale faculty labor. One professor, three hundred students, fifty minutes. The economics were irresistible, and the economics drove the architecture, and the architecture reinforced the pedagogy.
AI does not compete with the lecture on the lecture's terms. It does not offer a better version of one person talking to many. It offers something categorically different: a conversational partner that responds to the individual student's level of understanding, available at any hour, with infinite patience, capable of explaining the same concept in seventeen different ways until the student finds the one that clicks. The AI tutor is not better at lecturing. It renders lecturing unnecessary for the purpose lecturing was designed to serve, which is the transmission of established knowledge from expert to novice.
The Harvard experiment that Gerald Chan cited at Berkeley — in which students taught by an AI tutor learned faster, with double the learning gains of students taught by a human instructor — measured precisely this function: the informational function, the transfer of established concepts and methods from a source that possesses them to a learner who does not. On that metric, the AI won decisively. The result should surprise no one who has thought carefully about what a lecture actually delivers. A lecture delivers information at the speed of the lecturer's speech, at the level of the median student's preparation, with no capacity to adjust in real time to the particular confusions of the particular student sitting in row seventeen. An AI tutor delivers information at the speed of the student's comprehension, at precisely the level that student needs, with the capacity to detect confusion and adjust explanation instantly.
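A note on measurement, since "learning gains" carries a precise meaning in education research. Studies of this kind commonly report Hake's normalized gain, the fraction of the available improvement a student actually achieves. Whether the Harvard team used exactly this convention is an assumption here, not something the lecture specified:

$$
g \;=\; \frac{\text{post} - \text{pre}}{100 - \text{pre}}
$$

where pre and post are percentage scores on the same assessment before and after instruction. On this convention, doubling the learning gains means that students starting from the same baseline closed twice as much of the remaining distance to mastery.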
The informational contest is over. AI wins. The question is what remains.
What remains is everything that was never about information in the first place.
The best lectures — the ones students remember decades later, the ones that changed how they saw their field — were never primarily informational. They were performances of disciplined inquiry. They modeled how an expert mind works: how it formulates questions, evaluates evidence, navigates uncertainty, arrives at conclusions while acknowledging their provisionality. The great lecturer did not merely transmit what was known. She demonstrated, in real time, the process by which knowing happens — the weighing of competing interpretations, the recognition of a gap in the evidence, the intellectual courage required to follow an argument to an uncomfortable conclusion.
This modeling function is distinct from the informational function, and it is the function the university must learn to see, name, and protect. A student who watches a historian work through a primary source in real time — pausing at an inconsistency, considering what the author's silence about a particular event might mean, connecting a marginal annotation to a political context the student had not considered — is learning something no AI tutor currently provides. Not the history. The AI provides that more efficiently. The capacity for historical thinking. The embodied demonstration of what it looks like when a trained mind encounters complexity and does not reduce it.
Kerr's framework illuminates why the university has been so slow to recognize this distinction. The multiversity optimized for the informational function because the informational function was the one that scaled. Lectures to three hundred students. Standardized exams graded by teaching assistants. Credit hours accumulated toward a degree. Every institutional metric — enrollment numbers, graduation rates, credit-hour production — measured the informational function. The modeling function, which requires small groups, sustained interaction, and faculty who are evaluated on the quality of their intellectual mentoring rather than the quantity of their publications, was always present in the university's rhetoric and largely absent from its incentive structures.
The research university compounded the problem. Kerr observed, with the frankness that made him both admired and disliked by faculty, that the multiversity's incentive structures drew the best minds away from teaching and toward research. Tenure depended on publication. Prestige depended on grants. The faculty member who spent twenty hours a week mentoring undergraduates in small-group seminars — doing precisely the work that AI cannot replicate — was, in the multiversity's reward system, making a career-ending choice. The faculty member who spent those hours in the laboratory, producing the publications and grants that the institution counted, was making the rational one.
This misalignment was always a problem. Kerr identified it in 1963. It has been identified, lamented, reported on, and unsuccessfully addressed in every decade since. AI makes it an existential crisis.
When the informational function was the university's primary offering, the misalignment was tolerable. Students received information in lectures; they received mentoring, if they were fortunate, in office hours and seminars. The lecture cross-subsidized the seminar. The large enrollment class generated the revenue that supported the small advanced course. The system was inefficient but functional, because no alternative existed for either the informational or the modeling function.
Now an alternative exists for the informational function. It is cheaper, more convenient, more personalized, and — the Harvard data suggests — more effective. The lecture's competitive advantage in knowledge transmission has collapsed to zero. What remains is the modeling function: the small-group experience of watching and participating in disciplined inquiry, guided by a mind that has spent decades developing the judgment the student is trying to acquire.
But the modeling function requires exactly the resources the multiversity has spent fifty years defunding. It requires small classes. It requires faculty who are rewarded for teaching quality rather than research productivity. It requires a pedagogical culture that values the seminar over the lecture, the question over the answer, the process of inquiry over the product of information. Building this culture requires inverting the institutional incentive structures that Kerr's multiversity created and that fifty years of research-university competition have reinforced.
Segal's observation that AI "shifted the premium and offered you a promotion" — that human value moves from execution to judgment — applies with special force to the university's pedagogical function. The university's informational function was execution: the delivery of established knowledge to students who needed it. AI handles this execution better than any lecturer. The university's modeling function is judgment: the demonstration and cultivation of the intellectual capacities that determine whether knowledge is used wisely. The promotion is real, but accepting it requires the institution to reorganize itself around a function it has undervalued for half a century.
The practical implications are specific and uncomfortable. Large lecture courses, which generate the majority of credit-hour production at most research universities, must be reconceived. Not eliminated — there is still value in the shared experience of a large group engaging with a compelling speaker, the same way there is value in attending a concert even though the recording is available. But the lecture's purpose must shift from information delivery to intellectual provocation. The lecture becomes the opening move, the posing of a question that the students then explore in smaller groups, with AI tools, with faculty mentors, through the friction of attempting to answer a question that does not have a clean answer.
Assessment must change correspondingly. The examination that tests recall — which facts did the student retain from the lecture — measures the informational function that AI has made redundant. Assessment that measures judgment — how well the student evaluates evidence, formulates questions, navigates disagreement, synthesizes across sources that conflict — measures the function the university must now cultivate. Segal's teacher who grades questions instead of essays is not employing a clever pedagogical trick. She is practicing the assessment philosophy the entire institution must adopt.
The resistance to this transformation is not irrational. Faculty who have built careers around the lecture format are being asked to abandon a practice that has defined their professional identity. Departments that have built their budgets around large enrollment courses are being asked to reduce the revenue-generating mechanism on which their existence depends. Administrators who have built their metrics around credit-hour production are being asked to measure something harder to count and harder to report to a state legislature.
Kerr would recognize every one of these resistance patterns. The multiversity has always been "partially at war with itself," its constituencies defending their interests against changes that serve the institution's long-term survival but threaten their short-term position. The faculty senate that blocks curriculum reform is not obstructing progress out of malice. It is protecting the institutional arrangements that protect faculty autonomy, and faculty autonomy was, for most of the university's history, genuinely worth protecting.
But the arrangements that protect autonomy also protect inertia. The governance structures that prevented hasty, ill-considered changes during periods of stability now prevent necessary, urgent changes during a period of transformation. The window for adaptation is closing. Students are already making choices based on their assessment of the university's relevance. Enrollment declines at institutions that have failed to adapt are not hypothetical; they are measurable, and they are accelerating at institutions that offer nothing an AI tutor and a certification program cannot provide more cheaply.
Kerr insisted that the university's deepest purpose was never the transfer of factual knowledge. Knowledge, he argued, was the university's medium, not its product. The product was something harder to name: the capacity to work with knowledge, to evaluate it, to generate it, to connect it across domains, to apply it with judgment. AI has now made the medium freely available. The product remains scarce, and the institution that learns to produce it at scale — judgment, not information — will define the next era of higher education.
The institution that continues to sell the medium will find that its customers have discovered a cheaper supplier, and that supplier never cancels office hours.
The professor's authority has always rested on a simple asymmetry: she knows things the student does not. The asymmetry was informational, but it expressed itself as something larger — as intellectual authority, the capacity to distinguish the important from the trivial, the rigorous from the sloppy, the genuinely new from the merely novel. The student who sat in the professor's office during the third week of a graduate seminar, struggling to articulate why a particular theoretical framework felt inadequate, was not receiving information. She was receiving something for which the English language has no precise word — a demonstration of taste, of intellectual discrimination, of the judgment that comes from having read three thousand papers in a field and knowing, without being able to fully explain how, which ones matter and which ones do not.
This asymmetry — the gap between the professor's knowledge and the student's — was the foundation of the pedagogical relationship. The student came to the university because the professor possessed something the student could not acquire alone. The professor's expertise justified the tuition, the years of coursework, the deference that the academic hierarchy demanded.
AI has dissolved the informational component of this asymmetry with a speed and thoroughness that would have been inconceivable five years ago. A first-year graduate student in political science can now receive, through a fifteen-minute conversation with an AI system, an overview of her field's major theoretical frameworks, a summary of the key debates, a bibliography of essential readings, and a preliminary analysis of the gaps in the existing literature — all tailored to her specific research interests, delivered with a clarity and patience that many professors cannot match, and available at three in the morning when the idea strikes.
The student's phone now knows more facts than the professor. This is not an exaggeration. It is a straightforward description of the informational capacity of large language models relative to any individual human mind, however learned. The professor who has spent thirty years studying the political economy of Southeast Asia has read perhaps four thousand papers, attended two hundred conferences, supervised fifty dissertations. The AI system has been trained on the full text of millions of papers, the proceedings of every conference whose records are digitized, and the dissertations of every university that has made its archives accessible. On the dimension of informational breadth, the contest is not close.
And yet the professor's value has not decreased. It has migrated — upward, from what she knows to how she thinks. From the informational layer to the judgment layer. From the capacity to recall and recite to the capacity to evaluate and discern.
Segal describes an episode that serves as an almost perfect illustration of this migration. Working on his book with Claude, the AI produced a passage connecting Csikszentmihalyi's flow state to a concept it attributed to Gilles Deleuze — "smooth space" as the terrain of creative freedom. The passage was elegant, structurally sound, and rhetorically convincing. It connected two intellectual threads in a way that felt like insight. Segal read it twice, liked it, and moved on.
The next morning, something nagged. He checked. Deleuze's concept of smooth space has almost nothing to do with how the AI had used it. The passage worked as prose. It failed as philosophy. The AI had produced "confident wrongness dressed in good prose" — a plausible synthesis that dissolved under disciplinary scrutiny.
What caught the error was not superior factual knowledge. The AI possessed far more facts about Deleuze than Segal did. What caught it was disciplinary judgment — the embodied sense, built through years of intellectual engagement, that something was off. The reference did not fit. The concept was being used in a way that a reader who had actually engaged with Deleuze's thought, rather than merely processed summaries of it, would immediately recognize as wrong.
This is what the faculty possess, and it is the thing the university must learn to name, value, and transmit. Not knowledge. Judgment. The capacity to sense when a plausible argument is wrong. The ability to distinguish between a synthesis that illuminates and a synthesis that merely sounds illuminating. The taste that separates an important question from a trivial one dressed in important-sounding language.
The challenge for the university is that judgment, unlike knowledge, cannot be transmitted through lectures, textbooks, or — crucially — AI tutors. Judgment is transmitted through apprenticeship: the sustained, personal, often uncomfortable process of watching an expert mind work and gradually internalizing its patterns of evaluation. The graduate student who sits in her advisor's office for an hour while the advisor thinks aloud about a paper's weaknesses — identifying the unstated assumptions, probing the methodology, asking the question the author did not ask — is learning judgment in the only way it can be learned: by observing its practice in a mind that has developed it through decades of disciplined engagement with a field.
This form of teaching — mentoring, modeling, apprenticeship — is the one the university has most systematically undervalued. Kerr diagnosed the problem with his characteristic directness. The multiversity's incentive structures drew the best minds away from teaching and toward research. The faculty member who spent her time mentoring was, within the institutional reward system, making an economically irrational choice. Publications counted toward tenure. Grants counted toward prestige. The hours spent in one-on-one conversations with students, modeling the very judgment that AI now makes more valuable than ever, counted toward nothing measurable.
The irony is structural and deep. The function that AI has made most valuable — the cultivation of judgment through mentored apprenticeship — is the function the university has most neglected. The informational function that AI has made redundant is the function the university has most invested in: large lectures, standardized curricula, examination systems that measure recall.
The faculty retraining challenge this creates is enormous, and it cannot be addressed honestly without acknowledging why the resistance will be fierce. Many faculty members have built their professional identities around expertise understood as knowledge. They are the people who know things. Their authority in the classroom derives from the demonstration of that knowledge — the capacity to answer questions, to provide explanations, to display mastery of a body of information that the student does not yet possess. When the student's device can provide the same information, often more efficiently, the ground shifts beneath that identity.
This is not a comfortable realization, and the discomfort it produces is not trivial. Professional identity is not a surface phenomenon. It is the structure around which careers, self-conceptions, and decades of investment are organized. The professor who has spent twenty-five years building expertise in eighteenth-century British literature — who can recite from memory the publication history of every major novel, who knows the secondary literature as one knows the streets of a childhood neighborhood — is not going to respond to the suggestion that her informational expertise has been commoditized with equanimity. The response will be defensive, and the defensiveness will be rational, because the loss is real.
But the loss, while real, is partial. The knowledge that the professor possesses is not merely informational. It is structured by judgment — by the thousands of evaluative decisions that shaped which papers she read closely, which she skimmed, which she dismissed, which she returned to years later with new understanding. The knowledge is inseparable from the taste that curated it, and the taste is the thing the AI does not possess and the student desperately needs.
The faculty member who can make this distinction — who can say, "My value is not what I know but how I think, and let me show you how I think by working through this problem with you in real time" — is the faculty member who will thrive in the AI-era university. The one who cannot make the distinction, who continues to derive authority from the recitation of knowledge that the student's phone can provide, will find the authority increasingly contested and the classroom increasingly empty.
The tenure system, which is typically analyzed as an obstacle to institutional change, turns out to offer an unexpected asset in this context. Tenure was designed to protect intellectual freedom — to ensure that faculty could pursue controversial research and express unpopular ideas without fear of termination. This protective function has always coexisted with a less discussed effect: tenure insulates the faculty from market pressures. A tenured professor cannot be fired for failing to adapt to new teaching methods, which is why tenure is often blamed for pedagogical stagnation.
But insulation from market pressure also means insulation from the short-term thinking that market pressure produces. The tenured professor who decides to invest five years in developing a new model of mentored, judgment-centered teaching — a model whose results will not be visible in this year's student evaluations or next year's enrollment numbers — can make that investment without fear that a disappointing quarter will end her career. The market would never fund this kind of long-term pedagogical development. Tenure can.
The asset is real but conditional. Tenure protects long-term investment only if the institution signals clearly that the investment is valued. A tenure system that continues to reward publication above teaching, that continues to count articles and grants while ignoring the quality of mentoring, will produce faculty who use their protection to double down on research at the expense of the very function that justifies the university's continued existence.
Faculty development programs — the university's mechanism for retraining its workforce — must be reconceived with the same urgency as curriculum reform. The current model treats AI as a research tool: workshops on how to use language models for data analysis, grant writing, literature review. This is necessary but radically insufficient. Faculty need training not in how to use AI for their work, but in how to teach in an environment where students have constant access to AI for theirs. What does it mean to lead a seminar when every student can produce a competent first draft of any analytical essay in ten minutes? How does the professor structure discussion when the informational baseline has shifted from "the students have not done the reading" to "the students' AI has done all the reading and produced a summary more comprehensive than anything the professor could generate extemporaneously"?
The pedagogical challenge is to make the classroom the place where what AI produces is subjected to the scrutiny it cannot apply to itself. The seminar becomes a space not for the delivery of knowledge but for its interrogation — where the student brings the AI's output and the professor demonstrates, in real time, how an expert mind evaluates it. Where does the synthesis hold? Where does it break? What assumptions are embedded in the framing? What questions has it failed to ask? This is the modeling function at its most valuable: the professor as practiced evaluator, demonstrating judgment in the act of exercising it.
The transition will not be smooth. It requires a renegotiation of the implicit contract between faculty and institution — a contract that has, for fifty years, said: "Produce research, and we will tolerate your teaching." The new contract must say something closer to: "Develop judgment in the next generation, and we will value that development as highly as we value your publications." The renegotiation will be contested at every stage by faculty who have built careers under the old contract and see the new one as a betrayal.
Kerr would recognize the dynamic. The multiversity has always been an institution where change happens not through consensus but through the gradual, often painful adjustment of competing interests. The faculty will not unanimously embrace a new model of teaching. Some will lead the transition. Some will resist it. Some will retire before it reaches their department. The administrator's task is not to achieve consensus but to create the conditions — the incentive structures, the support systems, the protected spaces for experimentation — that allow the transition to proceed at the pace the moment demands, which is faster than any pace the university has previously sustained.
The pace matters because the students are not waiting. They are already using AI in ways that the faculty have not yet imagined, and the gap between what the students are doing and what the institution has prepared the faculty to address is widening with each semester.
There is a bargain at the heart of American higher education, and for most of the twentieth century it held. The bargain went like this: the student would invest four years of her life and an escalating sum of money — adjusted for inflation, the cost of a bachelor's degree at a public university roughly tripled between 1980 and 2020 — and in exchange, the university would provide two things. The first was knowledge and skill: the capacity to do something the student could not do before. The second was a credential: a signal to employers that the student possessed the knowledge and skill the degree certified.
The two components were bundled so tightly that most students could not distinguish between them. You went to college to learn things, and the proof that you had learned them was the degree, and the degree opened doors that would otherwise remain closed. The bundling was the bargain's genius: it aligned the student's intellectual motivation (the desire to learn) with her economic motivation (the desire to earn) so seamlessly that neither the student nor the institution needed to examine whether the two were actually connected.
They were connected, but not in the way most people assumed. The economic value of the degree was never primarily about what the student had learned. It was about what the degree signaled to employers: that this person had been selected by a competitive institution, had endured four years of intellectual and social demands, and had emerged with credentials that served as a proxy for qualities — intelligence, persistence, the capacity to meet deadlines and navigate bureaucracies — that employers valued but could not easily measure directly.
The economist Michael Spence formalized this insight in signaling theory: the degree's value lay not in the knowledge it certified but in the selection and endurance it demonstrated. The fact that many graduates entered careers with little direct connection to their field of study — English majors in consulting, philosophy majors in technology, biology majors in finance — was not a failure of the educational system. It was evidence that the system's primary economic function was signaling, not training.
This analysis, uncomfortable as it may be for university administrators who prefer to believe their institutions exist to educate, explains a great deal about the university's behavior over the past four decades. If the degree's primary economic value is as a signal, then the institution's primary economic incentive is to make the signal more expensive — harder to obtain, more exclusive, more prestigious — because a signal's value depends on its scarcity and its cost. This incentive explains the relentless escalation of admissions selectivity, the proliferation of extracurricular requirements, the inflation of résumé expectations, and the steady increase in tuition that has outpaced inflation in every decade since 1980. The university was not primarily selling knowledge. It was selling a signal, and the signal's value depended on being expensive and difficult to obtain.
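Spence's logic can be stated compactly. In a minimal sketch of the model (the notation is illustrative, not drawn from Spence's paper or from this book): employers pay wage $w_H$ to applicants they believe are high-productivity and $w_L$ otherwise, and acquiring a credential of size $e^*$ costs a high-productivity worker $c_H$ per unit of credential and a low-productivity worker $c_L$ per unit, with $c_H < c_L$. The signal separates the two types exactly when only the high type finds it worth buying:

$$
c_H \, e^* \;<\; w_H - w_L \;<\; c_L \, e^*
$$

The inequality makes the institutional incentive visible. The signal works only if it is costly, and differentially costly: cheap enough for the able to acquire, expensive enough for everyone else to forgo. Raise $e^*$ and the signal sharpens, which is precisely the escalation of selectivity, requirements, and tuition described above. And if a new technology lets anyone demonstrate capability at roughly the same low cost, so that $c_H$ and $c_L$ converge, the separating equilibrium unravels.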
AI attacks both components of the bargain simultaneously. The knowledge component: when the skills the degree certifies can be acquired through AI-augmented learning in weeks rather than years, the knowledge justification for four years of enrollment erodes. Segal describes engineers in Trivandrum who expanded their capabilities across entirely new domains — backend specialists building user interfaces, designers implementing complete features — in days, not semesters. If a working professional can acquire new capabilities at this speed, the student who spends four years in a classroom acquiring the same capabilities at a slower pace is making an increasingly difficult cost-benefit calculation.
The signaling component: when demonstrated capability can be assessed directly through portfolio, project, or AI-augmented assessment, the degree's role as proxy diminishes. The non-technical founder who prototypes a product over a weekend, using AI tools, has produced a more legible signal of capability than a computer science degree from a middling university. The employer who can see what the candidate built, how they built it, and whether it works has less need for the degree's reassurance that the candidate probably possesses the underlying skills.
Segal puts the student's dilemma bluntly: young people will increasingly refuse to "waste years of their life acquiring student debt or arcane skills that the world does not need." This is not a prediction about a distant future. Enrollment data already reflects the shift. Total undergraduate enrollment in the United States has declined by approximately fifteen percent since its 2010 peak. The decline is concentrated at institutions that offer neither the prestige signal of elite universities nor the practical skill development that students increasingly expect. The middle of the market — the regional universities, the non-selective four-year colleges — is being hollowed out, not because students have stopped valuing education, but because students have started distinguishing between the education and the credential, and they are no longer willing to pay for the credential alone.
The credential crisis accelerates under AI. When the skills that once required a four-year degree can be demonstrated through a project portfolio built with AI tools in months, the four-year timeline becomes harder to justify. The student who can show an employer a working application she built, a marketing campaign she designed and executed, an analytical framework she developed and tested — all augmented by AI but directed by her judgment — possesses a more convincing signal of capability than a transcript full of letter grades in courses the employer has never heard of.
This is not an argument that the degree is worthless. The degree continues to signal selection, persistence, and social integration. Elite credentials retain their value precisely because their selectivity signals intelligence and ambition in ways that a project portfolio cannot replicate — the admissions process itself is the filter, and employers know it. The crisis is not at the top. It is in the broad middle of American higher education, where institutions that lack the prestige signal are selling, in effect, knowledge and certification — both of which AI is commoditizing.
What, then, motivates the student to attend a university rather than assembling her education from AI tools, online resources, boot camps, and project-based work?
The answer, if there is one, lies in what the university provides that these alternatives cannot: the developmental experience of sustained intellectual engagement within a community of inquiry. Not the knowledge, which is now free. Not the credential, which is eroding. The experience — the four years of encounter with minds that think differently, with problems that resist easy solutions, with mentors who model intellectual judgment, with peers who challenge comfortable assumptions.
This is not a new argument. Liberal education's defenders have made it for centuries. What makes the argument urgent now is that it can no longer coast on the bundled bargain. When the knowledge and the credential were packaged with the developmental experience, the university did not need to justify the experience separately. Students came for the bundle, and the experience was included, often without the student or the institution recognizing its value. Now the bundle is unbundling. The knowledge is free. The credential is weakening. The experience must justify itself on its own terms, at its own price, to a student population that is more skeptical, more indebted, and more aware of alternatives than any generation in the university's history.
Justifying the experience requires the university to understand what the experience actually provides — which is harder than it sounds, because the developmental effects of a university education are diffuse, long-delayed, and notoriously difficult to measure. The skills that a graduate identifies, twenty years later, as the most valuable things she gained from college — the capacity to evaluate competing arguments, the ability to communicate complex ideas clearly, the confidence to enter unfamiliar domains and orient herself quickly, the network of relationships that shaped her professional trajectory — are skills that no single course produced and no transcript records.
These are the outcomes of what the developmental psychologist might call cognitive and social maturation in a structured environment: the process of being exposed, repeatedly and with increasing sophistication, to complexity that exceeds your current capacity to comprehend it, in the presence of people — faculty, peers, mentors — who model more sophisticated ways of engaging with that complexity. The process is slow. It is frustrating. It requires the student to sit with confusion long enough for understanding to develop, and to discover, through repeated experience, that confusion is not the enemy of learning but its precondition.
AI threatens this developmental process in a subtle way that the previous chapter's discussion of knowledge transmission only partially captures. The threat is not that AI provides answers — that much is obvious. The threat is that AI provides answers at the moment of maximum cognitive discomfort, the precise moment when the student is confused, frustrated, and tempted to give up, which is also the moment when the deepest learning occurs. The student who turns to AI at this moment — not to cheat, but genuinely seeking understanding — receives an immediate, clear, patient explanation that resolves the confusion. The confusion is eliminated. The answer is obtained. And the developmental experience of sitting with confusion, of working through it with the resources of her own mind, is bypassed.
This maps directly onto the critique from Byung-Chul Han that Segal takes seriously: the removal of friction eliminates the struggle that produces depth. The student who has never sat with intellectual confusion long enough to develop her own strategies for resolving it — who has always had a patient, infinitely available AI tutor to smooth the difficulty away — arrives at graduation with more information and less cognitive resilience than her predecessors. She knows more facts. She has less capacity to function in the absence of a system that provides them.
The university's response cannot be to ban AI — that battle is already lost, and banning would deprive students of the most powerful learning tool available to them. The response must be to design the educational experience so that the developmental friction is preserved even as the informational friction is removed. This means creating assignments that cannot be completed by AI alone — not because they are designed to defeat AI (an arms race the university will always lose) but because they require the student to exercise judgment that AI cannot provide. Assignments that require the student to evaluate AI output, identify its limitations, challenge its assumptions, and produce something that reflects not just competence but discernment.
It means restructuring the social architecture of the university so that the encounters that produce developmental growth — the late-night argument about whether a theory holds, the seminar where a peer's question reframes everything, the mentoring relationship where a faculty member's challenge forces the student to think more rigorously — are not incidental features of the campus experience but its deliberate, designed, central offering.
Kerr's Master Plan distributed the multiversity's functions across institutional tiers: research at the top, workforce preparation in the middle, open access at the base. The AI era may require a redistribution of a different kind. Not a distribution of functions across tiers, but a redistribution of emphasis within each tier — away from the informational function that AI handles and toward the developmental function that justifies the student's presence on campus.
The student's bargain must be renegotiated. The new bargain cannot promise that four years of university education will provide knowledge the student could not obtain elsewhere, because that promise is no longer credible. It cannot promise that the degree will open doors that remain closed to the uncredentialed, because that promise is weakening with each hiring manager who asks to see the portfolio instead of the transcript.
The new bargain must promise something harder to deliver but more genuinely valuable: that four years of sustained engagement with a community of inquiry, guided by mentors who model intellectual judgment, surrounded by peers who challenge comfortable assumptions, will develop capacities in the student that she could not develop alone. The capacity to ask questions that matter. The capacity to evaluate answers with disciplinary rigor. The capacity to function — to think, to decide, to act — in conditions of uncertainty, ambiguity, and competing values.
This is a harder sell. It requires the student to value something she has not yet experienced and cannot easily preview. It requires the university to deliver something it has not yet learned to measure. And it requires both parties to trust that the investment will pay returns that are real but delayed, significant but difficult to quantify, and visible only in the long retrospect of a life well-lived.
Kerr, who spent his career managing institutions that served constituencies with contradictory expectations, would recognize the difficulty. He would also, one suspects, recognize the opportunity. The university has always been at its best when it provides something no one else can, and what no one else can provide in the age of artificial intelligence is the deliberately structured, face-to-face, friction-rich, mentored developmental experience that produces not knowledge but judgment.
The institutions that learn to articulate this promise — and, more importantly, to deliver on it — will find their students. The ones that continue to sell knowledge and credentials in a market where both are commoditizing will find their lecture halls emptying and their balance sheets deteriorating, and they will have earned the reckoning that follows.
The research university was the multiversity's crown jewel, and Clark Kerr knew it. When he described the modern American university as the engine of the "knowledge industry," he was not speaking metaphorically. The phrase was deliberate, almost provocative — designed to make humanists uncomfortable and policymakers attentive. Knowledge, Kerr argued, had become the most important factor in economic and social growth. What the railroads did for the second half of the nineteenth century, and the automobile for the first half of the twentieth, the knowledge industry would do for the second half of the twentieth: serve as the focal point for national growth. The university was where that knowledge was produced, and the production justified everything — the federal funding, the tax exemptions, the public patience with an institution whose internal workings were opaque and whose costs were relentless.
The production model worked spectacularly. Between 1945 and 2000, the American research university system generated more basic scientific knowledge than any institutional arrangement in human history. The laser, the internet's foundational protocols, recombinant DNA, the Human Genome Project, the algorithms that undergird modern machine learning — all emerged, in whole or in large part, from university laboratories funded by the federal compact that Vannevar Bush articulated in 1945 and Kerr's multiversity institutionalized. The compact was simple: the government would fund basic research without directing it, and the university would produce knowledge that eventually, through pathways no one could predict in advance, would generate economic and military advantage. The compact held for half a century, and the returns were extraordinary.
AI does not break the compact. It transforms the production process so fundamentally that the compact's terms require renegotiation.
Consider what a graduate student in molecular biology does in 2025 versus what her predecessor did in 2005. In 2005, the student spent months learning laboratory techniques — pipetting, gel electrophoresis, PCR, the manual skills whose mastery was the entry price of the discipline. She spent additional months learning to read the literature — developing the capacity to identify relevant papers, evaluate their methodology, synthesize their findings, and locate the gaps that her own research would address. She spent further months designing experiments, running them, analyzing data with statistical tools she had to learn from scratch, and writing up results in the peculiar prose style that scientific journals demand. The entire process, from entry to first publication, typically consumed three to five years.
In 2025, AI compresses nearly every phase. Literature review that once required months can be conducted in hours: AI systems can process thousands of papers, identify patterns across studies, highlight methodological inconsistencies, and generate preliminary syntheses that would have taken a human reader weeks. Experimental design can be augmented by AI systems that suggest protocols based on the full corpus of published methodology, flagging potential confounds the student might not have considered. Data analysis that once required weeks of statistical programming can be performed through conversational interaction with AI tools that explain their methods as they execute them. Manuscript preparation — the formatting, the citation management, the generation of first drafts that capture the logical structure of the findings — can be accelerated by an order of magnitude.
The graduate student of 2025 can, in principle, produce more research, faster, across a wider range of questions than her 2005 predecessor could have imagined. The production capacity of the research university has expanded enormously.
But production capacity is not the same as production quality. And quality, in research, depends on something that AI acceleration does not automatically improve and may, under certain conditions, actively degrade: the quality of the questions being asked.
Segal's argument that questions diverge while answers converge — that the history of human progress is a history of great questions, not great answers — applies with particular force to the research enterprise. Newton did not begin with the law of gravity but with a question about falling objects. Darwin did not begin with evolution but with a puzzle about birds. Einstein did not begin with relativity but with a teenager's thought experiment about riding a beam of light. In each case, the question was worth more than any answer it produced, because the question opened a field of inquiry that generated thousands of subsequent answers, each of which generated further questions, in a cascade that continues to this day.
The research university's deepest function is not the production of answers — the papers, the patents, the data — but the cultivation of the capacity to ask questions that open new fields of inquiry. This capacity is what distinguishes the researcher who advances knowledge from the researcher who merely adds to its volume. It is what the tenure system was designed to protect: the freedom to pursue questions whose value cannot be assessed in advance, whose payoff may take decades to materialize, whose importance is legible only to other researchers who possess the disciplinary judgment to recognize them.
AI accelerates the answering machinery. It does not originate the questions. Not yet, and perhaps not in any meaningful sense, because the questions that matter in research arise not from the processing of existing knowledge but from the recognition that existing knowledge is insufficient — that it fails to explain something the researcher has observed, or that it rests on assumptions the researcher has found reason to doubt, or that it neglects a connection between domains that the researcher's particular biography and intellectual formation have made visible.
This is the judgment function that the previous chapter identified as the faculty's irreplaceable contribution, now applied specifically to the research enterprise. The researcher's judgment determines which questions are worth asking. AI can help answer them, often with extraordinary efficiency, but the question itself — the identification of the problem, the framing that makes it tractable, the intuition that this particular line of inquiry will prove fertile — remains a human capacity, developed through the specific, slow, friction-rich process of apprenticeship that the research university was designed to support.
The risk is that AI's acceleration of the answering process creates an incentive to ask answerable questions rather than important ones. When a literature review can be completed in hours and a preliminary analysis in days, the temptation is to pursue questions that yield to this acceleration — questions whose answers can be generated quickly, published rapidly, and added to the researcher's growing list of outputs. The incentive structure already pushes in this direction: publications count, impact factors matter, and the quantitative assessment of research productivity rewards volume over depth.
AI intensifies this existing pressure. The researcher who uses AI to produce five publishable papers in the time her pre-AI predecessor would have produced one is, by every metric the institution measures, five times as productive. Whether those five papers represent five genuine contributions to understanding or one contribution sliced into five publishable units is a question the metrics do not address and the institutional incentive structure does not reward anyone for asking.
The distinction between quantity and quality in research is not new. Kerr himself noted that the multiversity's reward system created incentives for publication volume that did not necessarily align with intellectual significance. The pressure to publish — the "publish or perish" imperative that has governed academic careers since the mid-twentieth century — was always a distortion, valuing the countable over the consequential. AI does not create this distortion. It amplifies it, making volume cheaper to produce while leaving the assessment of significance as difficult as it has ever been.
The research university's AI-era function, then, is not to produce more research — the machinery for that is already in place and accelerating — but to cultivate the judgment that determines whether more research is better research. This means training the next generation of researchers not in the mechanics of the research process, which AI increasingly handles, but in the intellectual taste that distinguishes important questions from trivial ones.
Intellectual taste in research is a real phenomenon, well recognized within scientific communities even if poorly understood from outside them. The senior researcher who reads a graduate student's proposal and says, "This is technically feasible but not interesting" — or, conversely, "This is wildly ambitious but if it works it changes the field" — is exercising taste. The judgment is not arbitrary. It is informed by decades of engagement with the field, by an intimate knowledge of what has been tried, what has failed, what remains unexplained, and where the pressure points are that, if probed, might yield fundamental insight. It is the kind of knowledge that cannot be transmitted through a lecture or a textbook, because it is not propositional knowledge — not a set of facts that can be stated and memorized — but a form of embodied understanding that develops only through sustained practice.
The apprenticeship model of doctoral training was designed, whether its designers recognized it explicitly or not, to transmit precisely this kind of understanding. The graduate student who works in a senior researcher's laboratory for five years is not merely learning techniques. She is absorbing, through daily proximity, a way of seeing the field — a set of instincts about what matters, what doesn't, where the real puzzles lie, and which apparent puzzles are merely artifacts of inadequate methodology. This absorption cannot be accelerated. It requires time, friction, and the specific intimacy of a mentoring relationship in which the senior researcher's judgment becomes gradually legible to the apprentice.
AI changes the content of the apprenticeship without eliminating its necessity. The graduate student no longer needs to spend months learning to run a gel or months learning to navigate the literature manually. Those months can now be redirected — toward deeper engagement with the conceptual foundations of the field, toward exposure to adjacent disciplines that might illuminate her research from unexpected angles, toward the sustained conversation with her advisor that develops the judgment no tool can provide.
If the university uses the time wisely. The risk, documented by the Berkeley researchers and consistent with every previous technology adoption cycle, is that the freed time fills with more production rather than deeper formation. The graduate student who no longer needs to spend months on literature review may spend those months producing more papers rather than developing the taste that determines whether those papers contribute to understanding or merely to her curriculum vitae.
The institutional structures that determine which path the freed time follows are the dams that matter most. Doctoral programs that assess students on the quality of their questions rather than the volume of their publications. Qualifying examinations that test not what the student knows but how she thinks — her capacity to identify the weaknesses in an argument, to connect findings across subdisciplines, to articulate what would change her mind. Dissertation committees that evaluate the significance of the research question at least as rigorously as they evaluate the methodology. Funding structures that reward high-risk, high-potential inquiry rather than the incremental, safe, and easily publishable.
These reforms would have been valuable before AI. They become urgent after it, because AI dramatically increases the cost of getting the incentives wrong. When the machinery for producing mediocre research at scale is freely available, the institution that fails to redirect its evaluation criteria toward quality will drown in a flood of technically competent, methodologically sound, intellectually empty publications. The flood is already visible: preprint servers are filling with AI-augmented papers whose contribution to understanding is, in many cases, indistinguishable from zero. The research university's responsibility is not to dam the flood — the publications will continue regardless — but to ensure that its own researchers are trained to swim above it, asking the questions that matter rather than the questions that publish.
Kerr's knowledge industry has acquired a new and extraordinarily powerful production facility. The university's role is no longer to operate the facility — AI does that more efficiently — but to ensure that what the facility produces is worth producing. The foreman has been replaced by the curator. The curriculum must follow.
The multiversity's relationship with industry was, from the beginning, a marriage of convenience that both parties occasionally mistook for love. Kerr was characteristically clear-eyed about the arrangement. The university needed industry's money — research funding, endowment gifts, the hiring of graduates that justified tuition investment. Industry needed the university's knowledge — the basic research that no corporation could justify funding internally, the trained workforce that no corporate training program could produce at scale, the prestige association that made a company's philanthropic arm look good in the annual report. Each party gave what the other could not provide for itself, and the exchange worked because the university's monopoly on knowledge production and credentialing was unchallenged.
That monopoly has ended. Not completely — monopolies rarely dissolve all at once — but sufficiently to restructure the terms of the arrangement.
The first disruption is to the workforce preparation function. For fifty years, industry relied on the university to produce workers with specific technical skills: programmers who could write code in particular languages, engineers who could design particular systems, analysts who could build particular models. The university obliged, creating departments and degree programs that mapped, with varying degrees of precision, to industry's skill requirements. Computer science departments taught the languages industry used. Business schools taught the analytical frameworks industry employed. Engineering programs taught the design methodologies industry specified.
This mapping worked because the skills industry needed were difficult and time-consuming to acquire. A competent software engineer required four years of undergraduate training and, often, additional years of graduate work or on-the-job experience before she could contribute independently. The university's value proposition was the production of this competence at scale — thousands of graduates per year, each certified by the degree as possessing the baseline skills the employer required.
When AI makes those baseline skills acquirable in weeks rather than years, the value proposition collapses for the specific category of skills AI can develop. Segal's Trivandrum narrative makes this concrete: backend engineers building user interfaces in days, designers implementing complete features, the boundaries between specialized roles dissolving as AI tools make cross-domain work accessible to anyone with the judgment to direct it. If a working professional can expand her capabilities this rapidly using AI tools, the four-year degree as a pathway to the same capabilities becomes harder to justify on purely economic grounds.
Industry's response is already visible in hiring patterns. The major technology companies — the same companies that once recruited exclusively from top-tier computer science programs — have been moving toward skills-based hiring for several years. Portfolio assessment, project-based evaluation, and technical interviews that test problem-solving capacity rather than credential pedigree are increasingly common. The trend predates AI, but AI accelerates it dramatically, because AI makes it possible for candidates to demonstrate capability through completed projects that would previously have required institutional support to produce.
The second disruption is to the technology transfer function. The university's role as the origin point of commercially valuable knowledge — the research laboratory where the breakthrough happens, the technology transfer office that licenses the patent, the faculty startup that commercializes the discovery — was one of Kerr's most celebrated contributions to the multiversity model. The university produced knowledge; industry consumed it; the technology transfer pipeline connected the two. Stanford's relationship with Silicon Valley, MIT's relationship with Route 128, the entire venture-capital ecosystem that grew around university research laboratories — all of this was the knowledge industry in action, precisely as Kerr had envisioned.
AI complicates this pipeline in two directions. From the industry side, companies no longer depend on universities for the fundamental research that drives AI development to the degree they once did. The most consequential AI research of the past decade has come as much from corporate laboratories — Google DeepMind, OpenAI, Anthropic, Meta AI — as from university departments. The corporate labs can offer salaries that university departments cannot match, computing resources that university budgets cannot fund, and deployment opportunities that academic publication cannot provide. The talent pipeline has reversed in significant segments: instead of the university training researchers who then move to industry, industry is increasingly recruiting researchers away from universities before their academic training is complete.
From the university side, the knowledge being produced in AI research raises governance questions that the technology transfer model was not designed to address. When a university laboratory produces a breakthrough in large language model capability, the commercial applications are immediate and enormous — but so are the social implications. The technology transfer office that licenses the patent is optimizing for revenue. The faculty researcher who developed the technology may be concerned about its social effects. The university's public mission — to serve the common good — may conflict with the commercial terms that maximize licensing income. These tensions existed before AI, but AI's scale and social impact make them acute.
Kerr's framework predicts that the university will manage these tensions the way it has always managed tensions: through mediation, through the balancing of competing interests, through governance structures that give each constituency a voice without giving any constituency a veto. But the pace of AI development makes the multiversity's characteristic speed of governance — deliberative, consultative, glacial by corporate standards — a potentially fatal liability. By the time the faculty senate has debated the ethical implications of a particular AI application, the application has been deployed, scaled, and normalized.
The third and most consequential disruption is to the nature of what industry needs from the university at all. The historical demand was for specialists: people trained in specific technical domains who could fill specific organizational roles. The emerging demand is for integrators: people who can direct AI tools across multiple domains, evaluate their output with disciplinary judgment, synthesize technical, ethical, and practical considerations, and make decisions that require the kind of broad intellectual formation that specialization, by definition, does not provide.
Segal describes this shift through the concept of "vector pods" — small groups whose function is not to build but to decide what should be built. The vector pod's value lies in integrative judgment: the capacity to see how a technical decision affects user experience, company culture, and competitive position simultaneously. This capacity is not the product of specialist training. It is the product of broad intellectual formation — the kind that general education, at its best, was always designed to provide.
Industry's evolving demand, in other words, is converging with the university's oldest and most neglected offering. The industry that needs narrowly trained specialists — the industry the university has been serving for fifty years — is giving way to an industry that needs broadly formed integrators, and the university that has been defunding general education in favor of specialist programs is, inadvertently, defunding its own relevance to the market it claims to serve.
The implications for the university-industry relationship are structural. Career services offices that place graduates into specialist roles must reconceive their function. Advisory boards composed of industry leaders must communicate the shift in demand clearly enough for curriculum committees to act on it. Internship programs that slot students into narrow functional roles must expand to include the integrative experiences that develop cross-domain judgment.
The research collaboration model must adapt as well. When industry's most pressing need is not for specific technical knowledge but for the broad-based research that addresses complex, multi-dimensional problems — AI ethics, the social effects of automation, the design of human-AI collaboration systems, the regulatory frameworks that emerging technologies require — the university's comparative advantage reasserts itself. Corporate laboratories optimize for deployable products. University laboratories can afford to pursue questions whose answers may take years to emerge and whose applications cannot be specified in advance. This was always the university's distinctive contribution to the knowledge industry, and it remains valuable precisely because no other institution can make the same commitment to long-term, curiosity-driven inquiry.
But the university must learn to communicate this value in terms the industry partnership can sustain. The old pitch — "Fund our research, and your graduates will be better trained" — worked when both the research and the training were visible, measurable, and connected to industry's immediate needs. The new pitch is harder: "Fund our research, and the broad intellectual formation it contributes to will produce people who can navigate the complexities that your AI tools create but cannot resolve." This pitch requires industry to value something it has historically undervalued — the integrative judgment that comes from broad formation rather than narrow training — and it requires the university to demonstrate that it can actually produce this judgment at scale, which it has not yet proven.
Kerr designed the multiversity to serve industry without being captured by it — to maintain the intellectual independence that makes basic research possible while producing the economic returns that justify public investment. That balance was always precarious. AI makes it more so, because the economic pressures are more intense, the talent competition is fiercer, and the social stakes of the research itself are higher. The university that allows industry to dictate its research agenda will produce commercially useful work at the expense of the fundamental inquiry that is its distinctive contribution. The university that ignores industry entirely will lose the funding, the talent, and the social relevance that the industry connection provides.
The mediator's task, as Kerr would recognize, is not to choose between these poles but to find the arrangement that allows them to coexist — an arrangement that is, like everything in the multiversity, imperfect, contested, and in constant need of renegotiation.
The most important curriculum in the American university is the one that nobody defends at budget time.
General education — the set of requirements that compel the engineering student to take a literature course, the English major to study statistics, the premed to sit through a semester of philosophy — has been under siege for as long as the multiversity has existed. Faculty view it as a distraction from their departmental mission. Students view it as an obstacle between them and their major. Administrators view it as a scheduling problem. State legislators, when they notice it at all, view it as an inefficiency — credit hours that could be redirected toward workforce preparation, semesters that could be eliminated if the university would simply teach students what they need to know for their first job.
Kerr defended general education against these pressures, but even he acknowledged that the defense was losing. The research university's reward system concentrated prestige and resources in the departments, and the departments' incentive was to claim as many student credit hours as possible for their own courses. General education was everybody's responsibility, which meant it was nobody's priority. The courses that satisfied the requirements were often taught by the department's least enthusiastic faculty, staffed by teaching assistants, and designed to be endured rather than engaged with. A generation of students came to associate "gen ed" with the intellectual equivalent of eating their vegetables — a mandated unpleasantness that preceded the real meal of the major.
This was always a failure. It is now a catastrophe, because the broad intellectual formation that general education was supposed to provide turns out to be the single most valuable capability the university can develop in the age of artificial intelligence.
The argument is straightforward, though its implications are radical. When specialist knowledge can be acquired through AI tools in weeks — when the backend engineer can learn frontend development through conversational interaction with an AI system, when the marketing analyst can acquire data science capabilities through AI-augmented tutorials, when the lawyer can generate competent first drafts of briefs in areas of law she has never practiced — the economic premium on specialist training diminishes. Not to zero; deep expertise retains its value as an input to judgment. But the premium shifts, decisively, from the capacity to perform within a single domain to the capacity to integrate across domains.
Integration — the ability to see how a technical decision affects user experience, how a business model shapes organizational culture, how a regulatory framework constrains engineering choices, how an ethical principle applies to a design decision — is not a specialist skill. It is the product of broad formation: the experience of having engaged seriously with multiple disciplines, multiple modes of thinking, multiple frameworks for understanding the world. The engineer who has studied philosophy brings to her design work a sensitivity to ethical implications that her purely technical training would not have produced. The literature student who has studied statistics brings to her interpretive work a rigor about evidence that her purely humanistic training would not have demanded. The breadth is not a luxury. It is the connective tissue that makes integration possible.
Segal identifies this shift explicitly. His emphasis on integrative thinking — the capacity of the leader who "can see how a technical decision affects user experience, company culture, and competitive position in a single glance" — describes the outcome of general education at its best. His "vector pods," the small groups whose function is to decide what should be built rather than to build it, are composed of people whose value lies precisely in their capacity to synthesize across domain boundaries. This capacity does not emerge from specialist training. It emerges from the repeated experience of encountering unfamiliar intellectual territory and developing the facility to navigate it — which is exactly what general education, when it works, provides.
The irony is so precise it approaches the structural. The university has spent fifty years defunding, neglecting, and apologizing for the one component of its curriculum that the AI era makes most valuable. The specialist programs that consumed the resources — the engineering departments, the business schools, the computer science programs that grew relentlessly as their graduates commanded the highest starting salaries — are the programs whose core skills AI is commoditizing fastest. The general education program that was starved of resources, taught by adjuncts, and dismissed as a relic of a premodern educational philosophy is the program whose outcomes the market is beginning to reward.
But recognizing the value of general education is not the same as delivering it effectively, and the current model of general education at most American universities is, bluntly, inadequate to the task the moment demands.
The distribution requirement — "take one course from each of six categories: humanities, social sciences, natural sciences, quantitative reasoning, arts, and diversity" — is not an education. It is a menu. The student checks boxes. She takes Introduction to Astronomy to satisfy the natural science requirement, not because she is interested in astronomy but because the course has a reputation for generous grading. She takes Introduction to Philosophy to satisfy the humanities requirement, writes two papers she will forget by the following semester, and moves on. The courses exist in isolation from each other. No one — not the student, not the faculty, not the institution — treats them as components of an integrated intellectual formation. They are checkboxes, and they produce checkbox thinking.
A rebuilt general education curriculum would look fundamentally different. It would be organized not by disciplinary category but by the integrative capacities it aims to develop. Instead of "take one course in the humanities," the requirement might be: "Demonstrate the capacity to evaluate competing interpretive frameworks applied to a complex text, artifact, or situation." Instead of "take one course in quantitative reasoning," the requirement might be: "Demonstrate the capacity to assess quantitative evidence, identify its limitations, and communicate its implications to a non-specialist audience." The focus shifts from coverage — how many disciplines has the student sampled? — to capability: what can the student do with what she has encountered?
This is the pedagogical application of Segal's ascending friction principle. The friction of acquiring disciplinary knowledge — reading the textbook, memorizing the formulas, passing the exam — is the friction that AI removes. The student who needs to know the basics of statistical inference can learn them through a fifteen-minute conversation with an AI tutor more efficiently than through a semester-long course. Removing this friction does not eliminate all friction. It reveals a harder, more valuable kind: the friction of integrating knowledge across domains, of recognizing that a statistical finding and a philosophical argument and a historical pattern are all illuminating the same phenomenon from different angles, of developing the judgment to know when a quantitative analysis is sufficient and when it must be supplemented by qualitative understanding.
This higher-order friction cannot be smoothed away by AI, because it is not an informational problem. It is a judgment problem — the problem of how to think across boundaries, how to hold multiple frameworks in mind simultaneously, how to make decisions when the relevant considerations come from different disciplines and cannot be reduced to a single metric. This is the friction the university should be cultivating, and it is the friction that a well-designed general education curriculum can produce.
Consider a concrete example. A course organized around a single complex problem — the design of a city's response to climate change, say — that requires students to engage simultaneously with atmospheric science (the physical mechanisms of climate change), economics (the costs and distribution of mitigation strategies), ethics (the obligations of current generations to future ones), political science (the governance structures that enable or prevent collective action), engineering (the technical feasibility of proposed solutions), and literature (the narratives that shape public understanding and political will). The students are not sampling six disciplines. They are using six disciplines to address a single problem, and the intellectual difficulty lies not in mastering any one discipline — the AI can provide the disciplinary knowledge — but in integrating them. In recognizing where the economic analysis conflicts with the ethical analysis. In understanding why the engineering solution that is technically optimal is politically impossible. In developing the judgment to navigate these conflicts rather than resolving them prematurely.
This kind of teaching is harder than lecturing. It requires faculty who are themselves capable of thinking across disciplinary boundaries, which is to say it requires faculty who have experienced the general education the university claims to provide. It requires institutional structures — team-taught courses, interdisciplinary appointments, evaluation criteria that reward integrative pedagogy — that the departmental organization of the university actively discourages. It requires an assessment framework that can evaluate integrative judgment, which is harder to grade than factual recall and harder to standardize than a multiple-choice exam.
The difficulty is real, and the institutional resistance will be formidable. Departments will resist losing credit hours to interdisciplinary courses. Faculty will resist team-teaching arrangements that dilute their disciplinary authority. Assessment offices will resist evaluation frameworks that cannot be reduced to numerical scores. Every constituency that Kerr identified as a participant in the multiversity's internal war will have reasons to oppose the transformation, and many of those reasons will be locally rational even as they are globally catastrophic.
The administrator who navigates this resistance must understand that the resistance is not mere obstruction. It reflects genuine concerns about disciplinary rigor, faculty autonomy, and the institutional arrangements that have sustained the research university for half a century. The solution is not to override the resistance but to demonstrate that integrative general education can coexist with disciplinary depth — that the engineer who has studied philosophy is not a worse engineer but a more capable one, and that the philosopher who has engaged with engineering is not a dilettante but a thinker whose range makes her analysis more relevant to a world where philosophical questions are embedded in technological decisions.
Kerr understood, from decades of institutional leadership, that the multiversity changes not through revolution but through the accumulation of small, demonstrable successes that shift the institutional consensus gradually. The general education curriculum will not be rebuilt overnight. But it can be rebuilt — through pilot programs that demonstrate the value of integrative pedagogy, through faculty champions who model cross-disciplinary teaching, through assessment innovations that make integrative judgment visible and measurable.
The university that succeeds will have recovered something it had nearly abandoned: the conviction that breadth is not the enemy of depth but its necessary companion, and that the educated person is not the person who knows one thing completely but the person who can connect many things intelligently. This conviction was always the philosophical foundation of general education. AI has made it the economic foundation as well.
If the argument of the preceding chapters holds — that the university's knowledge-transmission function is obsolete, that its credentialing function is eroding, that its remaining value lies in the cultivation of judgment — then the institution faces a design problem of the first order. How does an organization built to deliver information at scale reorganize itself to develop judgment at scale? The two activities require different architectures, different incentive structures, different assessment methods, and different conceptions of what a successful graduate looks like. The transition from one to the other is not an upgrade. It is a reconstruction.
The term "judgment factory" is deliberately uncomfortable. Factories mass-produce standardized outputs. Judgment is, by definition, the capacity to respond to non-standardized situations — to evaluate, discern, and decide when the relevant considerations are ambiguous, the evidence is incomplete, and the stakes are real. The tension between the factory metaphor and the nature of judgment is the tension the university must navigate: developing this irreducibly human capacity at an institutional scale that serves tens of thousands of students, across hundreds of programs, with the efficiency that public funding demands.
Kerr would appreciate the tension. The multiversity was always a factory of sorts — a knowledge factory, a credential factory, a research factory — and Kerr was honest about the industrial metaphors that described it. He called the university president a "captain of bureaucracy" and described the institution's operations in terms that would have been recognizable to a manufacturing executive. The question now is whether the factory can retool — whether the institutional machinery that was optimized for knowledge production and credential certification can be adapted for judgment cultivation.
The redesign begins with the classroom, because the classroom is where the institution meets the student, and the quality of that meeting determines everything that follows.
The knowledge-transmission classroom had a clear architecture: one expert, many novices, information flowing in one direction. The judgment-cultivation classroom requires a different architecture: fewer students, more interaction, information flowing in multiple directions, and a faculty member whose role is not to deliver content but to create the conditions under which judgment develops. The conditions are specific. They include exposure to genuine complexity — problems that do not have clean answers, situations where reasonable people disagree, evidence that supports contradictory conclusions. They include the experience of being wrong — making a judgment, defending it, encountering a counterargument that undermines it, and revising. They include the modeling of expert judgment by a faculty member who thinks aloud in real time, making her evaluative process visible to students who are trying to develop their own.
This is the seminar model, and it is old. The Socratic method, which is a particular form of judgment cultivation through guided questioning, has been practiced for twenty-four centuries. The Oxford tutorial, in which one or two students meet weekly with a faculty member to present and defend their analysis, has been practiced for eight. The problem has never been knowing what the judgment-cultivating classroom looks like. The problem has been scaling it.
The multiversity solved the scaling problem for knowledge transmission by inventing the large lecture course: one professor, three hundred students, standardized exams. It never solved the scaling problem for judgment cultivation, because judgment cultivation resists standardization. You cannot develop the capacity to evaluate complex situations by watching someone else evaluate them in a lecture hall. You develop it by doing it yourself, in real time, with feedback from someone who has done it longer and better.
AI offers the university something it has never had: a partial solution to the scaling problem for judgment cultivation. Not a complete solution — the human mentor remains irreplaceable for reasons the previous chapters have examined. But a partial one, because AI can serve as the first layer of intellectual engagement that prepares the student for the more demanding, more valuable engagement with a human mentor.
Consider the pedagogical sequence. The student encounters a complex problem — a policy question, a design challenge, an ethical dilemma. She engages first with an AI system that provides background information, outlines the major positions, presents the evidence on each side, and generates preliminary analysis. This first engagement replaces the lecture: it provides the informational foundation that the student needs before she can exercise judgment. The AI performs this function more efficiently than any lecture, because it tailors the information to the student's existing knowledge and responds to her questions in real time.
The student then brings her AI-augmented understanding to the seminar, where the human faculty member does something the AI cannot: challenges the student's judgment. Not her facts — the AI has provided those. Her evaluation of the facts. Her framing of the problem. Her assumptions about what matters and what does not. The faculty member asks the question the AI did not ask, because the question arises not from the information but from the faculty member's judgment about what the student's analysis is missing — a judgment that requires the disciplinary taste and interpersonal perception that human mentoring provides and AI does not.
This sequence — AI for information, human for judgment — allows the university to scale the judgment-cultivating classroom by compressing the informational phase. The seminar that once surrendered the first thirty minutes of a fifty-minute session to background context can now begin at the level of judgment, because the students arrive already informed. The human contact hours become more valuable, not less, because they are devoted entirely to the function that justifies their cost.
The assessment challenge is equally fundamental. The examination that tests recall — the blue-book essay, the multiple-choice midterm, the problem set with deterministic solutions — measures the informational function. It asks: what does the student know? A judgment-oriented assessment asks a different question: how well does the student evaluate? How rigorously does she distinguish strong evidence from weak? How effectively does she identify the assumptions embedded in an argument? How capably does she navigate disagreement between credible sources?
Segal describes a teacher who stopped grading her students' essays and started grading their questions. The shift is instructive. An essay demonstrates what the student knows and can articulate. A question demonstrates what the student recognizes she does not know — which requires a more sophisticated form of intellectual engagement, because identifying the right gap in one's understanding is harder than filling a gap that someone else has identified. The student who produces the five questions she would need to answer before she could write an essay worth reading has demonstrated a level of engagement with the material that no essay, however well-written, can match.
Scaling this form of assessment requires new tools and new training. Rubrics that evaluate the quality of questions are harder to develop and harder to apply consistently than rubrics that evaluate the correctness of answers. Faculty who are accustomed to grading for content must learn to grade for judgment — to distinguish between the student whose question reveals genuine engagement with the complexity of the material and the student whose question, though well-phrased, does not penetrate beyond the surface. AI can assist in this process — AI systems can provide preliminary assessment of student work, freeing faculty time for the higher-order evaluation that requires human judgment — but the design of the assessment framework itself requires the kind of pedagogical expertise that no AI currently possesses.
Faculty development is the institutional mechanism on which everything else depends. A faculty member who has spent twenty years teaching through lectures cannot be expected to adopt judgment-centered pedagogy without support. The support must be substantive, not performative — not a two-hour workshop on "teaching with AI" but a sustained program that helps faculty reconceive their role, develop new pedagogical methods, and build the capacity to facilitate the kind of intellectual engagement that judgment cultivation requires.
The investment is large. A serious faculty development program for a research university of thirty thousand students might require dedicated staff, protected time for participating faculty, stipends that compensate for the opportunity cost of the research hours redirected toward pedagogical development, and a multi-year timeline that allows the program to iterate and improve. This investment competes with every other demand on the university's budget — laboratory equipment, building maintenance, athletic facilities, administrative salaries — and in the current budget environment, it is unlikely to win the competition without a compelling argument that the investment is existentially necessary.
The argument is this: without faculty who can cultivate judgment, the university has nothing to sell that AI cannot provide more cheaply. The lecture is redundant. The credential is weakening. The research function, while still valuable, does not directly serve the undergraduate student whose tuition pays the bills. What the undergraduate student receives — or should receive — is the developmental experience of sustained engagement with faculty who model and cultivate the intellectual judgment that no AI can provide. If the faculty cannot deliver this experience, the student's rational choice is to go elsewhere, and the elsewhere — AI tutors, bootcamps, project-based learning platforms, corporate training programs — is multiplying rapidly.
The institutional metrics must change correspondingly. A university that measures its success by graduation rates, credit-hour production, and post-graduation employment statistics is measuring the outputs of the knowledge-transmission model. A university that aims to produce judgment must measure judgment — which requires developing new assessment instruments, new institutional metrics, and new relationships with employers and graduate programs that can provide longitudinal data on whether graduates are, in fact, exercising the judgment the university claims to have cultivated.
This measurement problem is genuine and should not be minimized. Judgment is harder to measure than knowledge. Its effects are delayed, diffuse, and context-dependent. A graduate may exercise excellent judgment in her career for twenty years before encountering the situation that reveals its full value. No institutional metric can capture this on a timeline that satisfies a state legislature or an accreditation body.
But the difficulty of measurement does not excuse the absence of attempt. Proxy measures exist: the quality of questions students ask on qualifying examinations, the rigor of analysis in capstone projects, the evaluations of mentors who observe student judgment in clinical, laboratory, or fieldwork settings, the assessments of employers who supervise graduates in their first professional roles. None of these measures is sufficient alone. Together, they provide a picture — imperfect, impressionistic, but real — of whether the institution is producing the capacity it claims.
Kerr designed the multiversity to be measured by the metrics of its era: enrollment growth, research expenditure, patent revenue, Nobel laureates. The AI-era university must be measured by different metrics, because its product is different. The product is not knowledge, which is now free. The product is not credentials, which are eroding. The product is judgment — the capacity to evaluate, to discern, to decide wisely under conditions of uncertainty. And the institution that learns to produce this capacity, and to demonstrate that it has produced it, will have found the function that justifies its existence in an age when every other function can be performed more efficiently elsewhere.
The factory metaphor breaks down at the final stage, and the breakdown is the point. A factory produces identical outputs. Judgment is, by its nature, individual — the product of a particular mind engaging with particular circumstances through a particular set of evaluative capacities. The university cannot mass-produce judgment the way it mass-produced credentials. It can only create the conditions under which judgment develops, and then trust the process. The trust is harder than the engineering. It always has been.
The degree has a history, and the history is not what most people assume. The modern university credential — the bachelor's degree as a standardized signal of competence, recognized across institutions, industries, and national borders — is a surprisingly recent invention. For most of the university's eight-hundred-year existence, the credential was incidental to the experience. Medieval students attended a university to study under particular masters, not to receive a standardized certificate. The master's designation, when it existed, certified the right to teach, not the possession of a body of knowledge. The credential-as-hiring-filter — the degree as the thing an employer checks before reading the rest of the résumé — is largely a twentieth-century phenomenon, and its universalization as the minimum threshold for professional employment is a development of the past fifty years.
The credentialing function grew in lockstep with the expansion of the professional labor market. As the economy shifted from manufacturing to services, from physical labor to knowledge work, employers needed a mechanism for sorting an increasingly large and undifferentiated pool of applicants. The degree served this purpose efficiently. It did not tell the employer exactly what the candidate knew. It told the employer that the candidate had been selected by an institution with standards, had endured four years of intellectual and social demands, and had emerged with a certification that served as a proxy for the qualities the employer actually cared about: intelligence, persistence, the capacity to navigate complex institutional environments, and the basic reliability of someone who shows up and completes what she starts.
The signaling function was always more important than the training function. The evidence for this is abundant and uncomfortable. Studies of earnings premiums consistently show that the value of the degree varies dramatically by institution — a Harvard degree and a degree from an unranked regional university certify the same nominal competence but command vastly different market premiums. If the degree's value were primarily about the knowledge it certified, the premium would reflect the quality of instruction. It does not. The premium reflects the selectivity of admission, which is a signal of the student's pre-existing capability, not the institution's pedagogical contribution. Economists Stacy Dale and Alan Krueger demonstrated in a landmark study that students who were admitted to highly selective universities but chose to attend less selective ones earned roughly the same as those who attended the elite institutions — suggesting that the selection itself, not the education, was the primary driver of earnings.
The credential, in other words, was always a somewhat circular institution: it certified that the student possessed qualities she already possessed before enrolling, and the certification's value depended on the institution's prestige, which depended on its selectivity, which depended on the quality of the students it admitted, which had nothing to do with what the institution taught them after they arrived.
This circularity was sustainable as long as the credential had no competition. The degree was the only widely recognized, broadly portable signal of competence available to the labor market. Employers who wanted to assess capability directly — through project evaluation, skill testing, or extended observation — faced costs that made the degree, for all its imprecision, the most efficient sorting mechanism available.
AI dissolves this monopoly in two directions simultaneously.
First, AI makes the skills the credential certifies acquirable outside the institution. When a motivated individual can learn data analysis through conversational interaction with an AI tutor, build a working software application through AI-augmented development, or produce professional-quality analytical writing with AI assistance — all without institutional enrollment — the credential's claim that its holders possess these skills becomes less distinctive. The skills are no longer scarce. The credential's value as a signal of their possession diminishes accordingly.
Second, AI makes direct capability assessment cheaper and more reliable. An employer who can evaluate a candidate's portfolio of AI-augmented projects — seeing what she built, how she built it, the decisions she made, the judgment she exercised — has a more informative signal than a transcript from a university the employer has never visited. Portfolio-based assessment was always possible in principle. In practice, it was expensive and inconsistent, which is why the degree survived as the default filter. AI reduces the cost of assessment by making project creation cheaper (the candidate can produce more impressive portfolio pieces) and by providing tools that can evaluate technical competence at scale (AI-powered coding assessments, for instance, that test problem-solving ability directly rather than accepting the degree's certification that the candidate probably possesses it).
The credential reckoning does not mean the death of the degree. It means the end of the degree's monopoly as the default signal of competence, and the beginning of a market in which the degree competes with alternatives that are cheaper, faster, and in some cases more informative.
The reckoning will not be uniform. At the top of the prestige hierarchy, elite credentials will retain their value for reasons that have little to do with education and everything to do with network effects: the Harvard degree opens doors not because it certifies knowledge but because it certifies membership in a network of ambitious, capable people, and the value of that network is self-reinforcing. The employer who hires a Harvard graduate is not primarily betting on what Harvard taught her. She is betting on the judgment of Harvard's admissions office, which selected this person from thirty thousand applicants, and on the network of other Harvard graduates who will be this person's colleagues, collaborators, and sources of opportunity throughout her career. AI does not diminish this value. If anything, it increases it, because in an environment where technical skills are commoditized, the social capital that elite credentials provide becomes relatively more important.
At the bottom of the prestige hierarchy, community colleges and vocational programs serve students whose primary goal is practical capability development, not credentialing. These institutions were never primarily in the signaling business. They were in the training business, and their value will be determined by how effectively they integrate AI tools into their curricula — a question the next chapter addresses directly.
The crisis is in the middle. The regional universities, the non-selective four-year colleges, the institutions that serve the broad majority of American college students — these are the institutions whose value proposition was always a blend of moderate training and moderate signaling, and both components are eroding simultaneously. The student who enrolls at a mid-tier university is not receiving the elite credential that opens doors through network effects. She is not receiving the practical, AI-integrated training that develops immediately deployable capability. She is receiving four years of moderately effective instruction, certified by a moderately valued credential, at a cost that has grown faster than her family's income for four consecutive decades.
This student is the one who is most likely to ask whether the investment is worth it. And she is the student the university is most likely to lose.
Segal's parallel to the software industry's death cross illuminates the structure of the credential reckoning. In the software death cross, code's value as a product approached commodity pricing when AI made code production cheap. The companies that survived were those whose value was never primarily in the code — it was in the ecosystem, the data layer, the institutional relationships that the code enabled. Similarly, the university whose value was never primarily in the credential — whose actual product was the developmental experience, the judgment cultivation, the intellectual community — will survive the credential reckoning. The university whose value was primarily in the credential, whose actual offering was the signaling function dressed in the language of education, will find that the signal is weakening and the market is tuning to other channels.
The practical implications are immediate and specific. Universities must invest in the assessment infrastructure that makes judgment visible. Not just to accreditation bodies, but to employers, to graduate programs, to the public that funds the institution. If the degree's value is migrating from knowledge certification to judgment certification, the university must demonstrate that it produces judgment — not through rhetoric but through evidence. Capstone projects that demonstrate integrative thinking. E-portfolios that document the student's intellectual development over four years. Assessments designed by faculty and validated by employers that measure the evaluative capacities the institution claims to cultivate. Alumni tracking that provides longitudinal data on whether graduates exercise the judgment the university claims to have developed.
These investments are expensive. They compete with every other demand on the university's budget. They require institutional will — the willingness to be held accountable for outcomes that are harder to measure than graduation rates and harder to manipulate than satisfaction surveys. But they are the only credible response to a market that is beginning to ask, with increasing insistence, what exactly the credential certifies that cannot be demonstrated more directly.
Kerr's multiversity was built on the assumption that the credential's monopoly would hold — that the degree would remain the indispensable passport to professional life, and that the university's role as credential-granting authority would ensure its institutional relevance regardless of how well it performed its other functions. That assumption held for sixty years. It is breaking now, not in one dramatic collapse but through the steady accumulation of individual decisions — each student who chooses a boot camp over a bachelor's degree, each employer who hires on portfolio rather than transcript, each parent who calculates the return on four years of tuition and finds the arithmetic less compelling than it was a decade ago.
The university that responds by defending the credential — by insisting that the degree remains essential, that alternatives are inferior, that the market will eventually recognize the irreplaceable value of the four-year experience — is engaging in the institutional equivalent of denial. The credential is not going to disappear. But its monopoly is over, and the institutions that built their business models on that monopoly must find new foundations or face the enrollment declines that are already, for many of them, well underway.
Clark Kerr left the presidency of the University of California the way he arrived — "fired with enthusiasm," as he put it, deploying the pun with the precision of a man who had learned to use humor as a shield and a weapon simultaneously. The Board of Regents dismissed him in 1967, and the dismissal had less to do with his management of the university than with the political dynamics of a state in upheaval. Ronald Reagan had campaigned for governor on a promise to "clean up the mess at Berkeley," and Kerr's removal was the first installment on that promise.
The irony was brutal and persistent. The man who had designed the most successful public higher education system in history — the California Master Plan, the three-tier structure, the expansion that brought university education to millions who would otherwise never have had access to it — was fired by politicians who understood the university's value primarily as a campaign issue. Kerr spent the rest of his career studying the institution he had built, producing four more editions of *The Uses of the University* over the next thirty-six years, each one updated to reflect the new pressures the multiversity faced and each one informed by the experience of having built the thing he was analyzing.
In his final edition, published in 2001, Kerr made a statement that reads, in the light of 2026, as both a benediction and a challenge: "Higher education has been very resilient in turning fears into triumphs. I expect that this will continue."
The resilience is real. The university has survived challenges that would have destroyed any institution with less adaptive capacity. The Reformation, which stripped the medieval university of its monopoly on theological authority. The Enlightenment, which challenged the scholastic methods that had defined the curriculum for centuries. The Industrial Revolution, which created new forms of knowledge the old university was not equipped to produce. The GI Bill, which flooded the postwar university with a student population it had not been designed to serve. The social upheavals of the 1960s, which challenged the institution's claims to political neutrality. The defunding of public higher education in the 1980s and 1990s, which forced a cost shift from taxpayers to students that has produced the debt crisis now threatening the entire financial model.
Each crisis forced a reinvention. The medieval guild of scholars became the Humboldtian research university. The Humboldtian model became the American multiversity. The multiversity adapted to mass enrollment, to federal research funding, to the commercialization of knowledge, to the digital revolution. At each turn, the institution shed functions that had become obsolete and developed new ones that the changed environment required.
The pattern is consistent, and it is the pattern Segal identifies across every major technological transition: threshold, exhilaration, resistance, adaptation, expansion. The university has moved through this cycle before. It is moving through it again now. The question is not whether the institution will survive — the question, as always, is which version of it will emerge from the transformation, and whether that version will serve the people it exists to serve.
The argument of this book is that the version that emerges must be organized around a function the university has always possessed but rarely prioritized: the cultivation of judgment. Not judgment in the narrow sense of professional decision-making, though that is included. Judgment in the broader sense of the capacity to evaluate, to discern, to navigate complexity with intellectual rigor and ethical awareness, to ask the questions that determine whether information is worth having and whether capability is worth deploying.
This function is not new. It is the university's oldest function, predating the research mission, predating the credentialing function, predating the knowledge-transmission model that AI has now rendered obsolete. The Socratic method, which is a particular technology for cultivating judgment through guided questioning, is twenty-four centuries old. The Oxford tutorial, which develops judgment through sustained one-on-one intellectual engagement, is eight centuries old. The liberal arts curriculum, which cultivates judgment through broad exposure to multiple modes of inquiry, has been a philosophical foundation of Western education since antiquity.
What is new is the urgency. When the university's other functions were viable — when knowledge transmission, credentialing, and specialist training all justified the institution's claim on resources and social attention — the judgment function could be undervalued without immediate consequence. The university could neglect small seminars in favor of large lectures. It could reward publication over teaching. It could treat general education as a checkbox rather than a core offering. The other functions were generating sufficient value to keep the institution afloat.
Now those functions are eroding simultaneously. Knowledge transmission is redundant. Credentialing is losing its monopoly. Specialist training is being commoditized. The judgment function is the one that remains — the one that no AI tutor, no boot camp, no online platform, no corporate training program can replicate, because judgment develops through a process that requires human mentoring, social interaction, sustained engagement with complexity, and the experience of being challenged by minds that think differently from one's own.
The recovery is institutional, but it begins with a recognition that is almost philosophical: the university's deepest purpose was never the things it measured. It was never the credit hours produced, the degrees conferred, the papers published, the patents licensed, the grants awarded. These were the measurable outputs of an institution whose most important output was always immeasurable — the development of minds capable of engaging with a complex world with intelligence, discernment, and care.
Gerald Chan, standing in a Berkeley auditorium sixty-two years after Kerr's Harvard lectures, identified the operational principle: "The challenge for education is one of knitting together what technology is good at and what humans are good at." Education, he insisted, "is always a human and social activity." The technological component — the AI tools, the digital platforms, the computational resources — serves the human activity. It does not replace it.
The university that enacts this principle will look different from the university of 2020. Its classrooms will be smaller. Its faculty will be evaluated on the quality of their mentoring as well as the volume of their publications. Its assessment methods will measure the quality of questions asked rather than the correctness of answers given. Its general education curriculum will be integrated rather than distributed — organized around complex problems that require students to synthesize across domains, rather than around disciplinary checkboxes that students satisfy and forget. Its relationship with industry will emphasize the production of broadly capable integrators rather than narrowly trained specialists. Its credentials will be supplemented by portfolio-based evidence of the judgment the institution claims to cultivate.
These changes are specific enough to implement, and they threaten enough entrenched interests that implementation will be contested at every stage. Faculty governance processes will slow the curriculum reform. Departmental budgets will resist the reallocation of resources toward interdisciplinary programs. Assessment offices will struggle with the measurement of judgment. Accreditation bodies will require evidence that the new model produces outcomes at least as good as the old one, even though the outcomes being measured are different.
Kerr's multiversity was designed to manage precisely this kind of contested transformation. The governance structures that now impede change were designed to ensure that change, when it came, reflected the input of all constituencies — faculty, students, administrators, the public. The structures are slow. The slowness was, for most of the university's history, a feature: it prevented hasty decisions that served one constituency at the expense of others. It is now a liability, because the pace of AI development means that the window for adaptation is narrower than any the institution has previously faced.
The administrator who navigates this transformation must be the mediator Kerr described — not the visionary leader who articulates a direction and drives the institution toward it, but the negotiator who finds the arrangements that allow competing interests to coexist while the institution moves, incrementally but steadily, toward its new center of gravity. The mediator's tools are pilot programs, faculty champions, demonstrated successes, budgetary incentives, and the patient accumulation of evidence that the new model works. Not revolution. Evolution — but evolution at a pace that matches the external pressure, which is faster than the institution's natural metabolic rate.
The university does not need to become something it has never been. It needs to recover something it has nearly forgotten. The cultivation of judgment, the development of intellectual character, the formation of identity within a community of inquiry — these are the functions that justify the university's existence in an age when every other function can be performed more efficiently elsewhere. They are the functions the university was founded to perform, before the research mission, before the credential, before the knowledge industry.
The medieval university's highest aspiration was not the production of knowledge. It was the formation of a particular kind of person — a person capable of engaging with the deepest questions of human existence with rigor, humility, and care. The AI-era university's highest aspiration is, remarkably, the same. The technology is different. The institutional context is vastly more complex. The pressures are more intense and the timeline more compressed. But the core purpose — the development of minds that can think with discipline, evaluate with discernment, and decide with wisdom — has survived eight centuries of technological change.
Kerr built the multiversity to serve the knowledge economy. The knowledge economy has been disrupted by its own most powerful product. What remains, beneath the disrupted surface, is the oldest thing the university has to offer — and, as it turns out, the newest thing the world needs.
The institution that recovers it will find that it has not merely survived the transformation. It has been returned, by the force of the transformation itself, to the purpose it was always meant to serve.
---
The parking lot.
That was the detail I could not get past. Clark Kerr — architect of the most ambitious public university system in human history, author of the California Master Plan, chancellor of Berkeley, president of the entire University of California — described the modern multiversity as "a series of communities and activities held together by a common grievance over parking."
I laughed the first time I read it. The tenth time, I stopped laughing. It stopped being a joke and became a diagnosis.
The parking lot is what happens when an institution becomes so large, so multi-purpose, so committed to serving everyone that it forgets why it exists. The parking lot is the thing every constituency shares — not a mission, not a vision, not a commitment to the cultivation of human capacity, but a logistical frustration. The parking lot is what you get when the purpose has been buried under so many layers of function, credential, metric, and institutional accretion that the only thing everyone can agree on is that there aren't enough spaces.
I have been building organizations for three decades, and I recognize the parking lot. Every company I have ever built eventually developed its own version — the operational detail that became the shared complaint, the thing everyone talked about because the actual thing, the mission, the reason we were all in the room, had gotten too complicated or too contested to discuss directly. The parking lot is a symptom of institutional forgetting.
When I was in that room in Trivandrum, watching twenty engineers discover they could each do what all of them used to do together, I was watching one kind of institution — the development team organized around specialist silos — discover that its organizing principle had become obsolete overnight. The moment was electric and terrifying. What struck me was not the productivity gain. It was the look on people's faces when the categories that had defined their professional identities dissolved and something harder, something more human, was left standing in the clearing.
What was left standing was judgment. Taste. The capacity to decide what should exist in the world.
Kerr saw this exact structure in the university, sixty years before AI made it visible to everyone else. He saw that the university's measurable outputs — the credits, the degrees, the publications, the patents — were not the institution's purpose. They were the parking lot. The purpose was the thing the parking lot had buried: the formation of minds capable of engaging with complexity, with uncertainty, with the hardest questions human beings face.
AI has now stripped away the layers. Knowledge transmission: the machine does it better. Credentialing: the market is finding alternatives. Specialist training: acquirable through AI in weeks. What remains, when you remove everything the machine can do, is the thing the machine cannot do — the thing Kerr always insisted was the university's deepest function, even as the university itself forgot it.
The cultivation of judgment. The development of the capacity to ask questions that matter.
The same thing I have spent this entire Orange Pill journey trying to understand.
We are all in the parking lot. The question is whether we stay there, arguing about spaces, or whether we walk inside and remember what the building was for.
The institution that survived the printing press, the Industrial Revolution, and two world wars may not survive a chatbot. Unless it remembers what it was built for.
AI has made the university's most visible functions obsolete overnight. Knowledge transmission: redundant when any student can converse with a system that knows more than any professor. Credentialing: eroding as employers learn to read portfolios instead of transcripts. Specialist training: acquirable in weeks through tools that cost less than a single textbook.
Clark Kerr saw this reckoning sixty years early. His framework for the "multiversity," the modern university as a collision of competing purposes masquerading as a single institution, is the sharpest lens available for understanding what happens when every measurable function of higher education is commoditized simultaneously. What remains is the one function Kerr always insisted mattered most and the institution always funded least: the cultivation of judgment.
This book applies Kerr's architecture to the AI crisis in higher education, arguing that the university's survival depends not on defending what it has been but on recovering what it was always meant to be: a place where human beings learn not what to think, but how to decide.

A reading-companion catalog of the 18 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that *Clark Kerr — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →