By Edo Segal
Every profession tells itself a story about why it matters. The doctor heals. The lawyer advocates. The engineer builds. These stories feel so natural that we forget they are stories -- constructed, defended, and sometimes demolished by forces far larger than any individual practitioner.
I have spent decades building technology companies, and for most of that time, I took the boundaries of my own profession for granted. Software engineers wrote code. Designers made interfaces. Product managers wrote specs. These divisions felt like physics -- immutable laws governing how things got built. Then, in the winter of 2025, I watched those boundaries dissolve in real time.
In a room in Trivandrum, India, I watched a backend engineer with no frontend experience build a complete user-facing feature in two days. Not because she had suddenly learned React, but because a tool had collapsed the gap between what she could imagine and what she could create. The professional boundary that had separated her from frontend work for eight years was not a wall. It was a line drawn in sand, and the tide had come in.
I did not have the vocabulary to describe what I was witnessing. I had the experience -- the vertigo, the awe, the quiet terror of watching everything I thought I knew about teams and expertise restructure itself in a week -- but I did not have the framework.
Andrew Abbott gave me that framework. His life's work on how professions compete for territory, how they defend their boundaries with arguments about quality that are really arguments about power, how jurisdictions rise and fall not because one group is objectively better but because organizations decide what they need -- all of it clicked into place against what I was living through. His analysis, developed across decades of meticulous historical research, described the structural dynamics beneath the surface turbulence I was experiencing every day.
This book applies Abbott's framework to the AI revolution. It asks the question his research has always asked: when the conditions that sustained a professional jurisdiction are altered by forces beyond the profession's control, what happens to the people who built their lives inside those boundaries? The answer, drawn from two centuries of professional history, is both more uncomfortable and more hopeful than the easy narratives allow.
What follows is our best approximation of how Abbott's patterns of thought illuminate the most consequential professional disruption of our era -- and what that disruption reveals about the nature of professional authority itself.
-- Edo Segal & Opus 4.6
Andrew Delano Abbott (1948-) is an American sociologist and social theorist serving as the Gustavus F. and Ann M. Swift Distinguished Service Professor of Sociology at the University of Chicago. Born in November 1948, Abbott attended Phillips Academy at Andover before taking his BA in history and literature at Harvard University in 1970. He received his PhD in sociology from the University of Chicago in 1982. After thirteen years teaching at Rutgers University, he returned to Chicago in 1991, where he has remained since, serving as Master of the Social Sciences Collegiate Division (1993-1996), chair of the Department of Sociology (1999-2002), and editor of the American Journal of Sociology from 2000 to 2016.
Abbott's major works include The System of Professions: An Essay on the Division of Expert Labor (1988), which won the American Sociological Association's Distinguished Scholarly Book Award and established the theoretical framework that informs this volume; Department and Discipline: Chicago Sociology at One Hundred (1999); Chaos of Disciplines (2001), an analysis of fractal patterns in social and cultural structures; Time Matters: On Theory and Method (2001), a collection of theoretical essays in the Chicago pragmatist tradition; Digital Paper: A Manual for Research and Writing with Library and Internet Materials (2014); and Processual Sociology (2016). He is also the founder of sequence analysis as a methodology in social science and a member of the American Academy of Arts and Sciences.
Abbott's distinctive contribution to sociology has been to redefine professions as dynamic systems of jurisdictional competition rather than static categories of expert labor. His framework demonstrates that professional boundaries are political constructions, maintained through ongoing competition between groups seeking to claim authority over specific domains of work -- a framework that proves remarkably illuminating when applied to the most sweeping jurisdictional disruption of our era.
The most consequential illusion in the history of professional life is the belief that professional boundaries are natural. That physicians heal because healing naturally requires medical knowledge. That lawyers litigate because litigation naturally requires legal training. That software engineers build systems because building systems naturally requires years of specialized technical education. Each of these claims contains a kernel of truth. The work is genuinely difficult. The knowledge is genuinely specialized. But the leap from difficulty to exclusivity -- from the observation that the work is hard to the assertion that only practitioners who followed a specific training path may legitimately perform it -- is not a factual claim. It is a political one.
Abbott's career has been devoted to exposing this distinction with a rigor that few sociologists have matched. His approach is distinctive in the discipline: where most scholars of professions study individual professions in isolation, Abbott insists on studying the system -- the competitive ecology in which professions exist in relation to one another, each one's jurisdiction defined not by its own claims but by the boundaries of the adjacent jurisdictions it competes with. This systemic perspective is what makes his framework so illuminating for understanding the AI disruption, because AI is not disrupting individual professions in isolation. It is disrupting the system itself.
His landmark work, The System of Professions, published in 1988, demonstrated through meticulous historical analysis that professional jurisdictions are products of competition rather than reflections of competence. When a group of practitioners asserts exclusive authority over a domain of work, the assertion is maintained not by the objective superiority of their knowledge but by the institutional mechanisms through which they defend the claim: credentialing systems, educational gatekeeping, licensing regimes, and the careful cultivation of public belief that the work cannot be done by anyone who has not followed the prescribed path.
The distinction between difficulty and exclusivity is not a theoretical abstraction. It is the central mechanism through which professions maintain their economic privileges, their social status, and their cultural authority. When barber-surgeons challenged university-trained physicians in early modern Europe, the physicians did not respond merely by demonstrating superior healing outcomes. They argued that surgery performed without theoretical knowledge of humoral medicine was morally irresponsible -- a violation of the proper order of healing. The theoretical knowledge was, in many cases, wrong. Humoral medicine is not an accurate model of the human body. But the jurisdictional argument was effective because it appealed to the institutions that arbitrated the dispute: the universities, the courts, the emerging regulatory bodies. The resolution depended not on who healed more effectively but on which group better satisfied the institutional demand for legitimacy.
This pattern repeats with remarkable consistency across centuries and across professions. When accountants challenged lawyers for jurisdiction over tax work in the early twentieth century, the lawyers did not simply demonstrate superior legal reasoning. They argued that tax work was inherently legal in character and that accountants who performed it were engaged in the unauthorized practice of law. When psychologists challenged psychiatrists for jurisdiction over talk therapy in the mid-twentieth century, the psychiatrists did not simply demonstrate superior therapeutic outcomes. They argued that psychotherapy was a medical procedure that required medical training, and that psychologists who practiced it were engaged in unlicensed medicine. In each case, the jurisdictional defense invoked moral and institutional arguments that went far beyond the empirical question of who performed the work more competently.
Abbott's framework reveals that these arguments, however sincere, serve a structural function that is independent of their truth value. The gatekeeping argument -- the assertion that legitimate practice requires a specific form of knowledge that can only be acquired through the path the profession has defined -- maintains the boundary between insiders and outsiders. It converts a description of competence into a prescription for exclusion. And the prescription is enforced not by the superiority of the insiders' work but by the institutional infrastructure that the profession has constructed around the claim.
The arrival of artificial intelligence has triggered the most sweeping jurisdictional disruption since the Industrial Revolution. The disruption is so consequential precisely because AI challenges not a single profession's jurisdiction but the foundational mechanism through which all knowledge-based professions maintain their authority: the scarcity of specialized knowledge itself. When a large language model can draft a legal brief, produce a medical diagnosis, generate working software, or compose a financial analysis, the specialized knowledge that previously distinguished the professional from the non-professional is no longer scarce. The knowledge has not become less valuable in absolute terms. It is still useful. But it has become less scarce, and the scarcity was the foundation on which the jurisdictional claim was built.
The contemporary version of the gatekeeping argument is visible across every profession that AI has touched. Established software developers insist that anyone who builds without understanding the lower layers of the abstraction stack -- the binary, the assembly, the compiler, the operating system -- is a fraud. Lawyers insist that AI-drafted briefs lack the judgment that only years of case analysis can develop. Physicians insist that AI-generated diagnoses lack the contextual sensitivity that only clinical experience provides. In each case, the argument has empirical merit. Lower-level understanding does often produce more robust practitioners. But the function of the argument is jurisdictional, regardless of its truth value. It defines the terms of legitimate participation in a way that excludes new entrants who have arrived at competent performance through a different path.
The gatekeeping argument also reveals something important about the relationship between professional identity and professional knowledge that Abbott's framework makes analytically precise. The knowledge that professions defend is not merely instrumental -- it is not simply the means by which the work gets done. It is constitutive of the profession's identity, its culture, and its sense of what distinguishes it from other occupations. When the software engineering profession defends the importance of understanding algorithms, data structures, and system architecture, it is not merely arguing that these knowledge domains produce better software. It is arguing that these knowledge domains define what it means to be a software engineer. The defense of the knowledge is a defense of the identity, and the identity is what makes the knowledge feel natural rather than arbitrary. This circularity -- knowledge defines the profession, and the profession defends the knowledge -- is the mechanism through which jurisdictional boundaries acquire their apparent naturalness, and it is the mechanism that AI disrupts most fundamentally.
Abbott's framework predicts with uncomfortable precision what happens next. The resolution of a jurisdictional dispute depends not on who is objectively more competent but on which group better satisfies the needs of the organizations that constitute the demand for the work. If AI-enabled practitioners produce adequate output at lower cost and greater speed, the jurisdiction will shift, regardless of the quality argument. This is not cynicism. It is the historical pattern, documented across every profession that has faced a technology enabling new entrants to perform previously gated work. The organizations that purchase professional services -- the factories that buy cloth, the companies that deploy software, the hospitals that treat patients -- care about cost, speed, and adequacy. When the profession's definition of quality diverges from the organization's definition of utility, the organization's definition prevails.
The question the AI disruption forces upon every knowledge-based profession is not whether the gatekeeping argument is correct. It often is. The question is whether correctness is sufficient to maintain the jurisdiction when the institutional forces that arbitrate jurisdictional disputes -- the organizations, the markets, the regulatory bodies -- have already begun to value the capabilities that AI-enabled practitioners bring. Abbott's research suggests, with the weight of two centuries of evidence, that correctness alone has never been sufficient. The professions that survive jurisdictional disruptions are the ones that redefine their jurisdictions around capacities the disrupting technology cannot replicate. The professions that cling to the old definition, however accurate, find that accuracy is irrelevant to the institutional forces that determine jurisdictional outcomes.
The implications extend beyond any single profession. What Abbott's framework reveals is that the entire architecture of professional authority in knowledge-based economies rests on a foundation -- knowledge scarcity -- that AI is systematically eroding. This is not a disruption that can be absorbed by adjusting credentialing requirements or updating curricula. It is a disruption to the concept of credentialing itself, to the assumption that the path to professional legitimacy runs through the acquisition of knowledge that others lack. The professions that recognize this distinction early and begin rebuilding their authority on a different foundation -- on judgment, on care, on the human capacities that AI amplifies rather than replaces -- will navigate the disruption. The professions that do not recognize it will find themselves defending a position that history has consistently shown to be untenable.
Abbott would also direct attention to a dimension of the current disruption that most commentators overlook: the role of the clients themselves in shaping jurisdictional outcomes. In his framework, clients are not passive recipients of professional services. They are active participants in the jurisdictional system, and their decisions about whom to trust, whom to hire, and how to define their own needs play a crucial role in determining which jurisdictional claims succeed. The AI disruption is empowering clients in a way that previous disruptions did not, because it gives clients the ability to evaluate professional output with tools that the profession previously monopolized. A legal client who can use AI to review a draft brief is a client who can challenge the lawyer's judgment in ways that were previously impossible. A medical patient who can use AI to research a diagnosis is a patient who can question the physician's recommendations with evidence the physician must take seriously. This empowerment of the client changes the dynamics of jurisdictional competition, because it shifts the locus of evaluation from the profession's internal standards to the client's informed assessment of whether the work serves their purposes.
What makes the AI disruption qualitatively different from previous jurisdictional challenges is its simultaneity. When accountants challenged lawyers for tax jurisdiction, the disruption was contained within a single professional boundary. When nurse practitioners challenged physicians for primary care jurisdiction, the disruption affected one corner of the medical profession. The AI disruption challenges every knowledge-based jurisdiction simultaneously, because the mechanism it disrupts -- knowledge scarcity -- is the mechanism on which every knowledge-based profession depends. This simultaneity means that the institutional forces that normally stabilize jurisdictional competition -- the ability of displaced practitioners to move laterally into adjacent jurisdictions, the capacity of the educational system to retrain practitioners for new domains, the regulatory system's ability to update its frameworks incrementally -- are all strained at once.
Abbott's concept of the professional system as an ecology becomes particularly important in this context. In an ecology, the displacement of one species creates opportunities for others. But when multiple species are displaced simultaneously, the ecology itself becomes unstable. The stabilizing mechanisms that normally allow the system to absorb disruption and reach a new equilibrium are overwhelmed by the scale and speed of the change. This ecological instability is precisely what the AI disruption is producing in the professional system, and it explains why the current moment feels qualitatively different from previous jurisdictional disruptions, even though the underlying dynamics -- gatekeeping, competition, institutional arbitration -- are the same.
The concept of abstraction plays a central role in understanding why the AI disruption is so comprehensive. Abbott observed that every profession maintains its jurisdiction through abstraction -- through the development of a formal knowledge system that classifies problems in terms the profession controls. Medicine abstracts symptoms into diagnoses. Law abstracts disputes into causes of action. Software engineering abstracts requirements into architectures. The power of abstraction is that it makes the profession indispensable: only those who command the formal knowledge system can translate the client's problem into a professional solution. AI disrupts this mechanism by providing an alternative path from problem to solution that bypasses the profession's abstraction entirely. The client who uses AI to solve a problem has not learned the profession's abstractions. She has circumvented them. And circumvention, in the system of professions, is an existential threat -- not because the solution is worse but because the jurisdiction has been breached.
This is not a comfortable conclusion. It does not validate the established professionals' sense of injustice, nor does it celebrate the new entrants' disruption. It describes, with the analytical precision that Abbott has brought to every jurisdictional dispute he has studied, the structural dynamics that will determine the outcome -- dynamics that are indifferent to the merits of either side's argument about the nature of genuine expertise.
The history of computing is conventionally told as a story of technological progress -- each advance making machines more powerful, more accessible, more capable. Abbott's framework reveals a different narrative running beneath the surface: each advance is also a jurisdictional event, creating new professions, reducing the jurisdiction of existing ones, and triggering a cycle of gatekeeping, resistance, and eventual accommodation that follows patterns his research has identified across every professional domain.
Assembly language created the programmer -- the practitioner who could translate logical operations into the sequences of machine instructions that hardware required. Before assembly, the work of instructing computers was performed by mathematicians and engineers who worked directly with hardware, flipping switches and wiring circuits. Assembly abstracted the hardware interface, creating a new category of work that could be performed without understanding the electrical engineering of the machine. The engineers who had previously held jurisdiction over computer instruction argued that assembly programmers lacked genuine understanding, that they operated at a surface level without comprehending the foundations on which their work rested. The argument was empirically accurate. It was also jurisdictionally irrelevant. The organizations that needed computers to perform useful work cared about the quality of the programs, not about the programmer's understanding of circuit design.
Compilers created the high-level language programmer. The compiler abstracted the assembly interface, allowing practitioners to express logical operations in languages that resembled mathematical notation rather than the mnemonic codes of assembly. The assembly programmers argued that high-level language programmers produced bloated and inefficient code, that they could not optimize performance because they did not understand what the machine was actually doing with their instructions. The argument was empirically accurate. It was jurisdictionally irrelevant. Organizations cared about functionality, not about the efficiency of the machine instructions -- especially as hardware became powerful enough to make the efficiency gap invisible to users.
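The distance between these two levels is easy to see in miniature. The sketch below is purely illustrative -- the assembly in the comments is schematic mnemonics, not any particular instruction set -- but it shows how a computation that requires the assembly programmer to manage registers, counters, and control flow by hand collapses, at the high-level-language layer, into a single expression whose machine details the compiler or interpreter handles.

```python
# Summing the integers 1..n, seen at two levels of abstraction.
#
# At the assembly level, the programmer orchestrates the machine
# directly (schematic mnemonics for illustration only):
#
#     MOV  R1, 0      ; accumulator := 0
#     MOV  R2, 1      ; counter := 1
#   loop:
#     ADD  R1, R2     ; accumulator += counter
#     ADD  R2, 1      ; counter += 1
#     CMP  R2, n      ; done yet?
#     JLE  loop
#
# At the high-level-language level, the same intent is one line;
# the translation into registers and jumps is the tool's problem.

def sum_to(n: int) -> int:
    """Return the sum of the integers from 1 to n inclusive."""
    return sum(range(1, n + 1))

print(sum_to(100))  # 5050
```

The assembly programmer's objection -- that the one-liner hides what the machine is actually doing -- is true, and, as the pattern above suggests, jurisdictionally beside the point.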
Each of these transitions produced not merely a technical shift but a cultural one. The communities of practice that formed around each level of the abstraction hierarchy developed their own values, their own aesthetic standards, and their own criteria for what constituted excellent work. Assembly programmers valued elegance measured in instruction counts and cycle efficiency. High-level language programmers valued clarity of algorithmic expression and the beauty of a well-structured procedure. These cultural values were not incidental to the profession. They were constitutive of it. And when a new level of abstraction made those values less relevant to the organizations that consumed the output, the cultural loss was experienced as intensely as the economic one.
Frameworks and libraries created the application developer -- the practitioner who assembled pre-built components into functional applications without necessarily understanding the implementation details of the components themselves. The language programmers argued that application developers were assembling black boxes without comprehending the algorithms inside them. Cloud infrastructure created the cloud-native developer, and the systems administrators argued that these practitioners built on foundations they could not inspect. In each case, the pattern held: the gatekeeping argument from the previous level was empirically accurate and jurisdictionally irrelevant. The organizations that consumed the output defined adequacy in terms of the output's utility, not in terms of the producer's understanding.
This pattern is so consistent across every layer of the abstraction sequence that it deserves recognition as a structural law of professional evolution: every increase in the level of abstraction at which a profession operates triggers a gatekeeping argument from the practitioners of the previous level, and the gatekeeping argument fails whenever the new level of abstraction produces output that is adequate for the purposes of the organizations that consume it. The argument fails not because it is wrong but because the organizations that arbitrate jurisdictional disputes define adequacy in terms of their own needs, not in terms of the profession's internal hierarchy of knowledge.
The consistency of this pattern across every layer of the abstraction sequence is not coincidental. It reflects a deeper structural principle that Abbott's framework identifies: jurisdictional disputes are resolved not by the internal standards of the profession but by the external needs of the organizations that consume the profession's output. The assembly programmers were right that high-level language programmers produced inferior code by the standards of assembly programming. The high-level language programmers were right that framework developers assembled software without understanding the algorithms inside the components. Each was right by the standards of their own level. Each was irrelevant by the standards of the organizations that purchased the output. The organizations cared about the output, not about the process by which it was produced or the depth of understanding that the process required.
AI represents the most radical step in this sequence because it does not abstract a specific technical operation. It abstracts the entire process of translating intent into implementation. Previous abstractions still required the practitioner to learn a specific technical language -- a programming language, a framework API, a cloud deployment syntax. AI removes even this requirement. The practitioner describes what they want in natural language -- the medium of human thought itself -- and the AI handles the translation into whatever technical language the implementation requires.
The jurisdictional implications are profound. If each previous abstraction created a new profession and reduced the jurisdiction of the old one, AI has the potential to create a profession that is not defined by technical knowledge of any kind -- a profession defined by intent specification, outcome evaluation, and judgment about what to build and whether it has been built well. This profession does not yet have a settled name, but its contours are becoming visible in organizations that have restructured around AI: small teams whose primary work is not building but deciding what should be built, evaluating whether AI-produced output serves its intended purpose, and directing the tool toward outcomes that serve human needs rather than merely organizational efficiency.
The historical pattern also suggests something more hopeful than the prevailing narrative of loss and displacement. At each level of the abstraction sequence, the new profession was not merely a diminished version of the old one. It was a more capable version, operating at a higher level, with a broader scope and a greater capacity to produce value. The assembly programmer could address a single machine. The high-level language programmer could address any machine with a compiler. The application developer could address any platform with the appropriate framework. Each level of abstraction expanded the scope of what the practitioner could accomplish while contracting the depth of what the practitioner needed to understand.
The pattern extends beyond the technology sector. In medicine, each level of diagnostic abstraction -- from physical examination to laboratory testing to imaging to genomic analysis -- expanded the scope of what the physician could detect while reducing the depth of hands-on examination that each diagnosis required. In law, each level of research abstraction -- from library-based case review to computer-assisted legal research to AI-augmented analysis -- expanded the scope of precedent that the lawyer could survey while reducing the depth of engagement with individual cases. The abstraction sequence is not unique to computing. It is a universal feature of professional evolution, and its jurisdictional implications are consistent across every domain.
This expansion of scope is not a consolation prize for lost depth. It represents an ascent in the jurisdictional hierarchy toward work that is more consequential, more judgment-intensive, and more dependent on the distinctly human capacities that no level of abstraction can replicate. The lower levels of the abstraction stack are the most technical and the least human. The higher levels are the most human and the least technical. AI operates at the most human level of all, because it operates in natural language. The profession that organizes itself around this level will be defined by its humanity -- by judgment, care, contextual understanding, and the capacity to evaluate whether technology serves human purposes -- rather than by technical expertise that can be replicated by the tool itself.
Abbott would note, however, that this ascent is not automatic or inevitable. Each jurisdictional transition in the history of computing produced winners and losers, and the distribution of gains and losses was determined by institutional dynamics -- by which organizations chose to expand capability rather than reduce headcount, by which educational institutions redesigned their curricula to develop judgment rather than technical knowledge, and by which practitioners recognized early enough that the jurisdiction was shifting and invested in the capacities the new jurisdiction would require. The abstraction sequence is a jurisdictional history, and like all jurisdictional histories, its outcomes are shaped by the choices of the institutional actors who arbitrate the competition.
The practitioners at the higher level are not shallower. They work in different dimensions. Each abstraction freed cognitive resources, and those freed resources were invested in the next level of complexity. Abbott's processual sociology -- his insistence that social reality is constantly making and remaking itself -- captures this dynamic precisely: the professions do not simply lose ground when technology advances. They are remade, moment by moment, through the competitive dynamics that his framework describes. The question is not whether the remaking will occur. It is whether the practitioners and institutions that participate in it will shape the outcome or merely be shaped by it.
There is a further dimension of the abstraction sequence that Abbott's processual sociology illuminates with particular clarity. At each level of abstraction, the nature of professional expertise changes not merely in content but in kind. The assembly programmer's expertise was primarily technical: knowledge of machine architecture, register allocation, memory management. The high-level language programmer's expertise was partly technical and partly logical: the ability to think in algorithmic terms, to decompose problems into procedures. The application developer's expertise was partly technical, partly logical, and partly architectural: the ability to assemble components into systems that served user needs. At each level, the proportion of purely technical knowledge decreased while the proportion of judgment, design sense, and contextual understanding increased.
AI accelerates this trend to its logical conclusion. The AI-augmented practitioner's expertise is primarily human: the ability to articulate intent clearly, to evaluate output critically, to make decisions about what should exist in the world and for whom. The technical knowledge has not disappeared. It has been absorbed into the tool, the way the knowledge of assembly language was absorbed into the compiler. And the human who directs the tool is freed to operate at a level where the questions that matter are not technical questions -- how to implement a feature -- but human questions: whether the feature serves its intended users, whether it respects their autonomy, whether it contributes to a system that enhances human capability or diminishes it.
This shift from technical to human expertise has implications for who can participate in the profession. At each previous level of abstraction, the pool of potential practitioners expanded. More people can learn a high-level programming language than can master assembly. More people can use a framework than can write one from scratch. More people can deploy a cloud service than can build the infrastructure it runs on. AI extends this expansion to its widest possible reach: anyone who can articulate a clear intention in natural language can now participate in the production of software. The jurisdictional implications are revolutionary, and they extend far beyond the technology sector to every profession whose jurisdiction was built on the scarcity of specialized knowledge.
The lesson of the abstraction sequence, read through Abbott's jurisdictional lens, is that the direction of professional evolution has been consistent for eighty years: toward greater scope, greater human relevance, and greater dependence on judgment. AI does not reverse this direction. It accelerates it. And the practitioners who understand the direction -- who see the abstraction sequence not as a history of loss but as a history of ascent -- are the practitioners who will position themselves to claim the jurisdictions that the next level of abstraction creates.
Every profession faces periodic challenges from new entrants who claim the ability to perform the profession's work without the profession's credentials. These challenges are the engine of professional evolution. They force the profession to articulate what it actually does, as opposed to what it claims to do -- to distinguish between the knowledge that is genuinely necessary for competent performance and the knowledge that is merely traditional, a residue of the training path the profession has historically required.
The AI disruption has produced the most powerful cohort of new entrants in the history of the technical professions. The new entrant, in this context, is not a single figure but a category of practitioners who share one defining characteristic: they have arrived at competent performance through a path that the established profession does not recognize as legitimate. A junior developer who uses AI to produce in a day what a senior developer without AI produces in a week is a new entrant in the jurisdictional sense, even if she has been employed in the field for years. Her method of producing output -- natural language collaboration with an AI system -- is not the method the profession has traditionally sanctioned. The established profession defines competence in terms of the ability to write code directly, to debug through systematic reasoning, to build from the ground up with intimate understanding of each layer of the abstraction stack. The new entrant achieves comparable output through a fundamentally different process: describing intent, evaluating output, iterating through conversation.
The characteristics of the AI-enabled new entrant differ from those of previous new-entrant cohorts in ways that Abbott's framework helps to specify. In previous jurisdictional disruptions, new entrants typically possessed a different but recognizable form of expertise. The chiropractor challenging the physician's jurisdiction over back pain had a different training but still possessed a body of specialized knowledge. The paralegal challenging the lawyer's jurisdiction over document preparation had formal training in legal procedures. The AI-enabled new entrant, by contrast, may possess no specialized training in the domain at all. A product manager who uses AI to build working software has not been trained in software engineering. A small business owner who uses AI to draft legal documents has not been trained in law. The new entrant's qualification is not a different form of expertise but the capacity to articulate intent clearly enough for the AI to produce adequate output. This is a fundamentally new kind of jurisdictional challenge, because it does not replace one form of expertise with another. It replaces expertise itself with the combination of clear intention and a powerful tool.
Abbott's research established a principle that applies directly to this situation: the outcome of a jurisdictional challenge depends not on who is objectively more competent but on which group better satisfies the needs of the organizations and clients that constitute the demand for the work. This principle is counterintuitive to practitioners within the profession, who naturally assume that the better-qualified group will prevail. But the history of professions demonstrates that quality, as defined by the profession, is not the same as adequacy, as defined by the organizations that consume the profession's output. Organizations care about whether the work serves their purposes. Professions care about whether the work meets their standards. When these two criteria diverge, the organization's preference prevails.
The empirical evidence from organizations already navigating the AI transition confirms this pattern with striking clarity. In companies where AI-augmented practitioners, lacking the depth of technical understanding that the established profession considers essential, are producing software that works, that serves users, that solves the problem it was designed to solve, the jurisdictional boundary is already shifting. The software may not be optimal. It may contain inefficiencies that a traditional expert would not have introduced. But it works. And the organizations that deploy it care that it works, more than they care about the process by which it was produced.
Abbott uses the term "jurisdictional settlement" to describe the stable arrangement that eventually emerges from a period of jurisdictional competition. The settlement defines the terms under which different groups share -- or divide -- authority over a domain of work. In the current disruption, the settlement that is emerging in workplace after workplace is one in which AI-augmented practitioners handle the implementation layer while established practitioners retain authority over the judgment layer. This is not a compromise. It is a new jurisdictional arrangement, and its terms are being set not by negotiation between professional groups but by the unilateral decisions of the organizations that employ them.
This is the point at which the jurisdictional challenge becomes irreversible. Once organizations have demonstrated to themselves that adequate output can be produced through the new method, the old method's monopoly on the jurisdiction is broken. The profession can still argue that its method produces superior output. It may even be right. But the argument is no longer sufficient to maintain the jurisdiction, because the jurisdiction depended not on superiority but on monopoly -- on the claim that the profession's method was the only path to competent performance.
The senior professional's response to the new entrant follows a characteristic three-stage trajectory that Abbott's research has documented across dozens of historical cases. The first stage is denial: the assertion that AI-produced work is not real work, that the practitioners who use it are not real professionals, that the results will inevitably fail in production. This is the jurisdictional defense in its most primitive form -- the attempt to define the new entrant out of the profession entirely. The second stage is qualification: the concession that AI can produce adequate results in some contexts, combined with the insistence that the hard problems still require human expertise. This is the retreat to the core jurisdiction, the attempt to draw a new boundary around the subset of work that the new entrant cannot yet claim. The third stage is redefinition: the recognition that the jurisdictional boundary has shifted, combined with the attempt to define a new jurisdiction that the new entrants cannot easily claim -- a jurisdiction based on judgment, architecture, system design, and holistic understanding.
The practitioners who move most successfully through these three stages are those who recognize that the jurisdictional shift is not merely a loss but an opportunity to claim a new jurisdiction that is, in many respects, more valuable than the one that was displaced. The jurisdiction of judgment, integration, and architectural thinking operates at a higher level of abstraction, and the history of professions demonstrates that jurisdictions at higher levels of abstraction command greater authority, greater compensation, and greater social prestige. The developer who shifts from writing code to directing AI-augmented teams has not descended in the professional hierarchy. She has ascended.
The velocity of this transition deserves emphasis, because it compounds the difficulty of navigating the three-stage trajectory. In previous jurisdictional disruptions, the new entrants typically accumulated capability and market presence over years or decades, giving the established profession time to observe, assess, and adapt. The AI disruption has compressed this accumulation into months. The new entrants arrived not gradually but suddenly, equipped with a tool that leapt from competent to formidable within a single year. This temporal compression means that practitioners who are still in the denial stage may find that the jurisdictional boundary has already shifted by the time they reach the redefinition stage. The window for adaptive response is narrower than in any previous disruption, and the practitioners who move through the three stages most quickly are those most likely to position themselves advantageously in the emerging jurisdictional landscape.
The parallel to historical jurisdictional shifts is instructive and extends beyond the technical professions. When physicians shifted from performing surgery to directing surgical teams, from mixing compounds to prescribing pharmaceuticals, from treating individual symptoms to managing complex chronic conditions, each shift represented a contraction of the lower jurisdiction and an expansion of the higher one. The physicians who resisted -- who insisted that real medicine meant getting their hands dirty -- were defending a jurisdiction that was, in the terms of the broader system, less valuable than the jurisdiction they were being pushed toward.
Abbott's framework also reveals a dimension of the new-entrant challenge that is often overlooked in conventional analyses of AI disruption: the role of interprofessional competition, not just intraprofessional competition. AI does not merely create new entrants within a single profession. It dissolves the boundaries between professions entirely. When a product manager can produce working software, and a designer can implement features end to end, and a marketing specialist can build analytical tools -- the jurisdictional boundaries that separated these roles are breached simultaneously. The competition is not only between AI-augmented and traditional developers. It is between developers and the entire ecosystem of adjacent professionals who can now claim jurisdiction over work that was previously gated by technical skill.
This interprofessional dimension is where Abbott's framework is most distinctive and most illuminating. Other analysts of AI disruption focus on the displacement of individual workers or the automation of individual tasks. Abbott's framework insists that the relevant unit of analysis is not the individual or the task but the system of professions -- the competitive ecology in which groups of practitioners vie for jurisdiction over specific kinds of work. AI is not merely automating tasks. It is reorganizing the ecology. And the ecology will settle into a new configuration -- a new system of professions -- that reflects the institutional dynamics of the competition rather than the abstract merits of any single group's claim to expertise.
The speed of the AI-driven new-entrant challenge also produces a distinctive temporal dynamic that Abbott's processual sociology helps to explain. In previous jurisdictional disruptions, the new entrants typically occupied the lower end of the market first -- performing simpler versions of the profession's work at lower cost and gradually moving upmarket as their capabilities improved. This pattern, familiar from Clayton Christensen's theory of disruptive innovation, allowed the established profession time to observe the challenge, assess its implications, and develop responses. The AI disruption compresses this timeline dramatically. The new entrants are not starting at the bottom and working their way up. They are entering at multiple levels of complexity simultaneously, because the tool that enables their entry does not distinguish between simple and complex tasks in the way that previous technologies did. An AI-augmented practitioner can produce a simple script and a complex system with equal facility, because the complexity is managed by the AI rather than by the practitioner's accumulated knowledge.
This compression of the typical disruption timeline means that the established profession has far less time to develop adaptive responses. The three-stage trajectory that Abbott identifies -- denial, qualification, redefinition -- which typically unfolded over decades in previous disruptions, is now compressed into months. Practitioners who entered denial in early 2025 found themselves forced into qualification by mid-year and grappling with redefinition by the end of it. The compression does not change the structure of the response. It changes the experience of it, making each stage more intense, more disorienting, and more demanding of the practitioner's adaptive capacity.
Abbott's framework also illuminates the role that professional associations play -- or fail to play -- in mediating the new-entrant challenge. In previous disruptions, professional associations served as institutional buffers between the profession and the disrupting technology. They lobbied for protective regulation, developed new credentialing standards, and organized training programs to help practitioners adapt. The speed of the AI disruption has largely outpaced these institutional responses. Professional associations in software engineering, law, medicine, and other knowledge-based professions are still debating their response to a disruption that is already reshaping the jurisdictional landscape. The institutional lag creates a vacuum that is being filled by the organizations that employ professionals -- organizations that are making jurisdictional decisions not as stewards of the professional system but as consumers of professional services optimizing for their own needs.
The new entrants, in this analysis, are not the enemies of the profession. They are the agents of its evolution. The profession's task is not to defeat them but to learn from the challenge they represent: to identify what the new jurisdiction requires, to develop the training paths that produce practitioners capable of claiming it, and to build the institutional structures that will stabilize the new jurisdiction as effectively as the old structures stabilized the old one. This is the work of professional evolution, and it is work that cannot be accomplished through gatekeeping alone. It requires the profession to look honestly at what it actually provides -- not what it claims to provide, but what organizations and clients genuinely need -- and to reorganize itself around that honest assessment.
Professional identity is not merely a label attached to a set of tasks. It is a deeply felt sense of who one is, constructed through years of training, socialization, and daily practice. The practitioner who has spent a decade mastering the intricacies of backend development does not merely know backend development. She is a backend developer. The identity is woven into her daily habits, her social relationships, her sense of purpose, her understanding of where she fits in the broader ecology of technical work. When AI disrupts the jurisdiction on which this identity is built, the practitioner does not merely face a professional challenge. She faces an existential one.
Abbott's framework provides the analytical vocabulary to understand this existential dimension through what might be termed the endowment effect of expertise. The endowment effect, as behavioral economists have documented, refers to the tendency of people to overvalue things they already possess, simply because they possess them. A coffee mug that a person would not pay five dollars to acquire becomes worth ten dollars the moment they own it. The possession creates the value, not the object's intrinsic utility.
The endowment effect of expertise operates by the same mechanism but with far greater intensity. The expertise a developer has spent years acquiring is not a coffee mug. It is the foundation of her professional identity, the basis of her social status, the source of her economic livelihood, and the medium through which she experiences the particular satisfaction of doing difficult work well. The expertise is, in a deep sense, who she is. When that expertise is rendered less valuable by a technology that enables comparable output without comparable knowledge, the practitioner experiences a loss that is not merely economic but ontological. She is losing not just a market advantage but a part of herself.
The psychological mechanism is worth examining in detail, because its intensity is often underestimated by those who have not experienced it. The endowment effect in behavioral economics has been extensively studied in laboratory settings with trivial objects -- mugs, pens, lottery tickets. The effect is robust even with objects of minimal personal significance. When the object is not a mug but a professional identity -- the product of ten thousand hours of deliberate practice, the basis of social relationships and community belonging, the lens through which one perceives one's own value and purpose -- the endowment effect operates with a force that is difficult to overstate. Practitioners who are told that their expertise is still valuable, just in a different form, often experience the reassurance as dismissive, because it fails to register the depth of what is being lost. The lower-level expertise being displaced is not merely a tool in the practitioner's kit. It is the foundation on which the practitioner's entire professional self-concept rests.
This explains the intensity of the emotional responses visible across every profession AI has disrupted. The programmer who feels like a master calligrapher watching the printing press arrive is not exaggerating. The analogy is structurally exact. The calligrapher's identity was built on a specific relationship between hand and page, a relationship that required years of practice to develop and that produced an intimacy with the medium that was not just a skill but a way of being in the world. The printing press did not destroy the calligrapher's skill. It rendered the skill unnecessary for the purpose it had served. The books that the calligrapher had spent months producing could now be produced in hours by practitioners who had never held a brush. The calligrapher's grief was real. The grief that contemporary professionals experience when their craftsmanship is rendered unnecessary by AI is equally real, and dismissing it as mere resistance to progress misses the depth of what is being lost.
Abbott's research on historical jurisdictional disruptions demonstrates that the emotional trajectory of practitioners who undergo identity reconstruction follows a characteristic pattern. The first phase is denial: the insistence that the disruption is temporary, that the technology will prove inadequate, that quality will ultimately prevail over speed. The second is anger: the bitter recognition that the disruption is real, accompanied by resentment toward the technology, toward the organizations that adopt it, and toward the new entrants who benefit from it. The third is bargaining: the attempt to carve out a protected domain within the disrupted jurisdiction, to identify the subset of work that still requires the old expertise. The fourth is grief: the acceptance that the old jurisdiction is genuinely gone, accompanied by mourning for the identity it supported. The fifth is reconstruction: the gradual development of a new professional identity built on a new jurisdictional foundation.
The parallel to the stages of grief is not coincidental. Professional identity disruption is a form of loss, and the emotional processing of that loss follows the same trajectory as the processing of other significant losses. The practitioner is mourning not merely a job or a market position but a relationship -- the intimate connection between the builder and the medium, the specific satisfaction of understanding a system from the ground up, the particular pride of having earned a capability through years of sustained effort.
The five-phase trajectory is not a rigid sequence. Some practitioners move through the phases in a different order, and some cycle back to earlier phases before reaching reconstruction. The variability reflects the complexity of the individual's relationship to the disrupted jurisdiction -- the specific ways in which the expertise was woven into the practitioner's identity, social relationships, and daily routines. A practitioner whose identity was primarily built on implementation skill will experience a different trajectory from one whose identity was built on architectural vision, even if both are affected by the same disruption.
The speed of the current disruption makes this trajectory more compressed and more painful than in previous jurisdictional shifts. Historical disruptions typically unfolded over decades, giving practitioners time to move through the emotional phases at a pace that allowed gradual adaptation. The AI disruption is unfolding over months. Practitioners who were in denial at the beginning of 2025 found themselves bargaining by mid-year and grieving by its end. The compression does not change the structure of the trajectory. It intensifies the experience of each phase, often to a degree that overwhelms the practitioner's capacity for adaptive response.
The endowment effect compounds the difficulty in a way that Abbott's framework makes analytically precise. Practitioners who have invested the most in the old jurisdiction -- who have spent the most years, acquired the deepest expertise, built the most elaborate identity around the knowledge being displaced -- experience the most intense resistance to the transition. A junior developer who has invested two years in learning to code has less to lose than a senior developer who has invested twenty years. The senior developer's resistance is proportional to her investment, and the investment is measured not just in time and money but in the identity the expertise has supported.
Abbott's comparative analysis across professions reveals that the endowment effect has played out in remarkably similar ways across different historical contexts. When the stethoscope was introduced in the early nineteenth century, physicians who had spent years developing the skill of direct auscultation -- listening to the body by pressing an ear to the patient's chest -- resisted the new instrument not because it was less effective but because it displaced a skill they had internalized as part of who they were. When computer-assisted legal research was introduced in the 1970s, senior lawyers who had spent decades building knowledge of case law through painstaking library research resisted the tool not because it produced worse results but because it rendered their accumulated knowledge -- their professional endowment -- less essential. In each case, the resistance was proportional to the investment, and the investment included not merely time and effort but identity itself.
The path through the endowment effect, consistent with Abbott's framework, is not to deny the loss or to minimize the grief. It is to recognize that the expertise being devalued is the lower-level expertise -- the knowledge of specific implementations, the mastery of specific tools, the fluency in specific technical languages -- and that the higher-level expertise retains its value and in many cases becomes more valuable when freed from the burden of implementation detail. The capacity for sustained attention, the tolerance for complexity, the drive to solve problems that resist easy resolution, the aesthetic sensibility that distinguishes elegant solutions from merely functional ones -- these capacities are not diminished by AI. They are amplified by it.
The practitioner who navigates the transition successfully is the one who recognizes that the endowment effect is causing her to overvalue lower-level expertise relative to higher-level capacities. The lower-level expertise was valuable, and it is genuinely being displaced. But the higher-level capacities -- judgment, architectural vision, integrative thinking, the ability to evaluate not just whether something works but whether it should exist at all -- are the ones that the new jurisdiction will be built on. The practitioner who can make this distinction is the practitioner who will thrive in the new system of professions.
Abbott's processual sociology offers a further insight here that is both analytically precise and existentially reassuring. If social reality is constantly making and remaking itself, then professional identity is not a fixed possession but an ongoing process -- a career, in Abbott's technical sense, that accumulates meaning through choices, developments, and plans over time. The AI disruption does not destroy a fixed identity. It interrupts an ongoing process and forces the practitioner to redirect it. The redirection is painful. But the process itself -- the capacity for professional development, for learning, for the accumulation of new competencies -- remains intact. The practitioner is not starting from zero. She is redirecting a career that has built capacities she has not yet fully recognized.
There is a further dimension of professional identity disruption that Abbott's framework illuminates with particular force: the collective dimension. Professional identity is not merely individual. It is communal. The practitioner does not construct her identity in isolation. She constructs it within a community of practice -- a network of colleagues who share the same training, the same vocabulary, the same standards, the same frustrations, and the same satisfactions. The community validates the identity, reinforces it through daily interaction, and provides the social infrastructure within which the identity acquires its meaning.
When AI disrupts the jurisdiction, it disrupts not merely individual identities but the community of practice itself. The shared vocabulary becomes contested. The shared standards become uncertain. The social infrastructure that reinforced the old identity becomes inadequate for the new one. Practitioners who are navigating the identity transition find that the community they relied on for validation is itself in transition, unable to provide the stability that the individual needs in order to reconstruct her professional self. The result is a compounding of isolation: the practitioner is losing her individual identity and her communal support simultaneously.
Abbott's analysis of professional socialization in historical contexts reveals that this communal dimension of identity disruption has been present in every major jurisdictional shift, and that the communities that recover most effectively are those that actively reconstruct their shared identity around the new jurisdiction rather than mourning the old one. The medical profession's transition from individual practitioners to hospital-based teams, the legal profession's transition from solo practice to firm-based specialization, the engineering profession's transition from craft-based guilds to university-trained specialists -- in each case, the reconstruction of communal identity was as important as the reconstruction of individual identity. The communities that managed it most effectively were those that created new shared practices, new shared vocabularies, and new shared standards reflecting the jurisdiction they were claiming rather than the jurisdiction they had lost.
The AI disruption demands a similar communal reconstruction. The communities of practice that will thrive are those that organize themselves around the capacities the new jurisdiction requires -- around judgment, architectural thinking, ethical reasoning, and the ability to direct AI toward outcomes that serve human purposes -- rather than around the technical skills that the old jurisdiction required. This reconstruction cannot be imposed from above. It must emerge from the daily interactions of practitioners who are themselves navigating the transition, sharing what they are learning, and gradually building a new communal identity that is adequate to the new professional reality.
The distinction Abbott's framework insists upon is the distinction between jurisdiction and capacity. The jurisdiction -- the specific domain of work over which the practitioner claims authority -- is shifting. The capacity -- the human endowment that enabled the practitioner to claim the jurisdiction in the first place -- is not. The profession that recognizes this distinction and redefines its jurisdiction around human capacity rather than technical knowledge will emerge from the AI disruption not diminished but enlarged.
The phenomenon that financial analysts call the Software Death Cross -- the moment when AI market capitalization overtakes SaaS market capitalization, with the former rising and the latter falling -- is, through the lens of jurisdictional theory, something more consequential than a market repricing. It is a jurisdictional collapse at the industry level, and understanding it as such reveals dimensions that the economic analysis alone cannot capture.
The jurisdiction of the software company was built on a specific asset: proprietary code. A company that possessed software other organizations needed could claim jurisdiction over the services that the software enabled. The jurisdiction was defended by familiar mechanisms: specialized knowledge (the company's engineers understood the code), demonstrated competence (the company's track record of delivering reliable software), and the social construction of barriers to entry (the complexity of software development created the perception that only specialized organizations could produce reliable software). The combination created a jurisdictional moat: other organizations could not easily produce their own software, and so they paid the software company for the privilege of using its products.
The jurisdictional moat that the software company enjoyed was remarkably durable for several decades. The barriers to entry in software production were formidable: the specialized knowledge required to produce reliable code, the organizational infrastructure needed to coordinate large development teams, the quality assurance processes that distinguished production-grade software from amateur projects, and the institutional trust that large organizations required before adopting a software product for mission-critical operations. Each of these barriers contributed to the jurisdictional claim, and the cumulative effect was a moat that protected software companies from competition even when the underlying technology was theoretically available to anyone. The moat was not merely technical. It was institutional, reputational, and organizational -- a multi-layered defense that no single disruption could breach.
The Death Cross describes the moment when AI collapses the cost of producing code to near zero, eliminating the scarcity on which the software company's jurisdiction was built. If anyone can produce code -- if the translation from human intent to working software no longer requires the specialized knowledge and organizational infrastructure that software companies uniquely possessed -- then the jurisdiction of the software company must be located somewhere other than in the code itself.
Abbott's analysis of jurisdictional collapse across other professional domains illuminates what happens next with a clarity that purely economic models cannot match. When a profession's lower-level jurisdiction is commoditized, the profession does not disappear. It ascends -- or it dies. The professions that survive are those whose real jurisdiction was always above the commoditized layer, even if neither the profession nor its clients recognized this during the period when the lower-level jurisdiction seemed sufficient. The professions that do not survive are those whose jurisdiction was coextensive with the commodity.
The analogy to previous commodity disruptions is illuminating but imperfect, and the imperfection matters. When previous commodities -- textiles, electricity, computing power -- became cheap and abundant, the industries that had produced them were disrupted, but the commodity itself remained useful, and new industries organized around the commodity's abundance rather than its scarcity. Cheap electricity enabled electrification; cheap computing enabled the software industry; cheap textiles enabled the fashion industry. The question for the software industry is: what does cheap code enable? What industries, what practices, what professional configurations become possible when code is as abundant as electricity? Abbott's framework suggests that the answer lies in the higher-level jurisdictions that cheap code makes possible -- jurisdictions organized around judgment, integration, and the human capacity to direct abundant technical capability toward purposes that matter.
Applied to the software industry: the companies whose real asset was their code will not survive the Death Cross. The code was what they produced, what their engineers wrote, what their quality assurance teams tested, what their sales teams demonstrated. It was natural to assume that the jurisdiction was coextensive with the artifact. But the jurisdiction was never actually located in the code. It was located in the value the code enabled -- the data management, the workflow optimization, the user community, the institutional relationships, the regulatory compliance, the accumulated trust that comes from decades of reliable service. The code was merely the medium through which that value was delivered.
The Death Cross reveals this distinction by removing the scarcity of the medium. When code was scarce, the company's jurisdiction over the code and its jurisdiction over the value were indistinguishable. When code becomes abundant, the distinction becomes visible -- and consequential. Companies whose value was genuinely in the code -- startups whose entire value proposition was a novel piece of software performing a specific function -- discover that their jurisdiction has evaporated. The barrier to entry that protected them was the difficulty of producing the code, and that difficulty has been abolished. Companies whose value was in the ecosystem -- enterprise platforms with decades of institutional trust, workflow integration, regulatory compliance, and deep understanding of their clients' operational needs -- discover that their jurisdiction remains intact, and that the abundance of code is actually an opportunity rather than a threat, because it reduces their own production costs while leaving their higher-level jurisdiction untouched.
Abbott's comparative analysis across professional domains reveals that this pattern of jurisdictional collapse and ascent has occurred repeatedly. When photography became cheap and accessible in the early twentieth century, the jurisdiction of the portrait painter collapsed at the lower level (producing likenesses) but expanded at the higher level (aesthetic vision, artistic interpretation, the capacity to render not what the subject looked like but what the subject meant). When word processors automated typesetting, the jurisdiction of the typesetter collapsed, but the jurisdictions of the graphic designer, the information architect, and the user experience specialist expanded. In each case, the commodity destroyed the lower jurisdiction and created the conditions for a higher one. The destruction was real. The expansion was also real. And the distribution of gains and losses depended on the institutional choices that accompanied the transition.
For the venture-backed startup ecosystem, the Death Cross carries a particularly stark jurisdictional message. The traditional venture model depends on the scarcity of technical execution: a startup's competitive advantage lies in its team's ability to build software that others cannot easily replicate. When AI reduces the cost of execution to near zero, the startup's competitive advantage must be located elsewhere -- in domain expertise, in unique data, in institutional relationships, in the founder's judgment about what markets need and what products should exist. The venture capitalists who recognize this shift are already restructuring their evaluation criteria, investing less in technical team composition and more in market insight, domain knowledge, and the judgment capacities of the founding team. The venture capitalists who do not recognize it will continue funding companies whose jurisdictional moats are made of sand, and the returns will reflect the fragility of those moats.
The Death Cross also illuminates a dimension of jurisdictional competition that Abbott's analysis of the state as a jurisdictional arena brings into focus. Governments are not merely passive observers of the software industry's restructuring. They are active participants, and their regulatory, procurement, and policy decisions shape which jurisdictional configurations survive and which do not. The European Union's AI Act, the emerging regulatory frameworks in Asia, and the American approach to AI governance are all, in Abbott's terms, interventions in the jurisdictional competition -- interventions that favor certain configurations of professional authority over others.
The state's role in the Death Cross is particularly significant because software companies have built significant portions of their jurisdictional moats through government relationships: compliance certifications, security clearances, procurement contracts, and the institutional trust that comes from decades of serving government clients. These relationships constitute a form of jurisdiction that operates in what Abbott calls the legal arena -- the arena of licensing, regulation, and formal institutional authority. AI may commoditize the technical jurisdiction, but the legal jurisdiction -- the authority conferred by regulatory compliance and institutional trust -- remains largely intact. The companies that survive the Death Cross will be those whose jurisdictional claims extend into the legal arena, not merely the workplace arena.
For the practitioners inside these companies, the Death Cross forces a reckoning with the nature of their own professional jurisdiction. The engineer who defined her value by the code she wrote must now define it by something else -- by her understanding of the institutional context in which the code operates, by her judgment about what software should exist and for whom, by her capacity to evaluate whether AI-produced code serves the purposes it was designed to serve, by her ability to navigate the regulatory and ethical dimensions of technology deployment. The jurisdiction has migrated from the code layer to the judgment layer, and the engineer must migrate with it or find that the jurisdiction she occupied has been absorbed into the AI-augmented workflow.
The Death Cross also forces a reckoning with the concept of the professional moat -- a reckoning that extends beyond individual companies to entire categories of professional work. Abbott's framework distinguishes between what he calls full jurisdiction -- where the profession controls the entire process from diagnosis to treatment to evaluation -- and various partial jurisdictions, where the profession controls only one stage of the process. Software companies historically enjoyed something approaching full jurisdiction over the software production process: they diagnosed the client's needs, designed the solution, implemented it, tested it, deployed it, and maintained it. The Death Cross fragments this full jurisdiction by enabling clients to perform many of these stages themselves, with AI assistance, while still relying on the software company -- or on professional practitioners -- for the stages that require judgment, institutional knowledge, and the accumulated trust that comes from years of reliable service.
This fragmentation of full jurisdiction into partial jurisdictions is, in Abbott's analysis, a characteristic feature of jurisdictional disruptions. The medical profession, for instance, once held full jurisdiction over all aspects of healthcare. The rise of nursing, pharmacy, physical therapy, and other allied health professions fragmented this full jurisdiction into a system of partial jurisdictions, each controlled by a different professional group. The fragmentation was resisted by physicians, who argued that patient care required unified medical authority. But the fragmentation prevailed, because the organizations that delivered healthcare -- the hospitals, the clinics, the insurance companies -- found that distributed jurisdiction produced care that was more efficient, more accessible, and in many cases more effective than the unified jurisdiction the medical profession defended.
The software industry is undergoing a similar fragmentation. The full jurisdiction of the software company -- control over the entire process from requirements to deployment -- is being distributed among a larger number of actors: AI-augmented individual practitioners, small teams with cross-functional capabilities, client organizations that can now produce their own solutions, and platform companies that provide the infrastructure on which all of these actors operate. The companies that thrive in this fragmented landscape will be those that identify the partial jurisdiction they can defend -- the specific stage of the process where their expertise, their institutional relationships, and their accumulated trust provide genuine value that the AI-enabled alternatives cannot match.
Abbott would note that this migration is structurally identical to the migrations that have accompanied every major jurisdictional disruption in professional history. The skilled weaver of 1812 was not wrong that his craftsmanship was superior to the power loom's output. He was wrong to assume that the market would continue to pay a premium for that superiority when the loom could produce adequate cloth at a fraction of the cost. The same structural dynamic is operating now, at industry scale, and the outcome will be determined not by arguments about the quality of handcrafted code but by the institutional dynamics of organizational demand, regulatory intervention, and the competitive pressures that Abbott's framework identifies as the determinants of jurisdictional outcomes in every era.
One of Abbott's most consequential insights is that jurisdictional disputes are settled not in the court of professional opinion but in the court of organizational demand. Organizations are not passive consumers of professional services. They are active participants in the system of professions, and their decisions about how to deploy professional expertise are the primary mechanism through which jurisdictional competition is resolved. The profession that better serves the organization's needs wins the jurisdiction, regardless of which profession's internal standards are higher.
This principle has profound implications for the AI disruption. The organizations that employ professionals will be the entities that determine whether the jurisdiction of knowledge work contracts, expands, or reorganizes itself around new foundations. The professionals themselves will have a voice in the process, but not a decisive one. The decisive voice belongs to the organizations, because organizations control the institutional infrastructure -- the hiring practices, the team structures, the performance evaluation criteria, the investment priorities -- that determines which jurisdictional claims are rewarded and which are marginalized.
The organizational role in jurisdictional arbitration is not merely a theoretical construct in Abbott's framework. It is an empirically documented pattern that recurs across every profession he has studied. When the American Medical Association sought to restrict the scope of chiropractic practice in the mid-twentieth century, the decisive factor was not the AMA's arguments about the superiority of medical training. It was the decisions of insurance companies and hospital systems about whether to reimburse chiropractic services and grant chiropractors hospital privileges. When certified public accountants sought to expand their jurisdiction into management consulting, the decisive factor was not the accounting profession's arguments about the natural connection between financial expertise and business strategy. It was the decisions of corporate clients about whether to hire their accountants for consulting work. In each case, the organizations that consumed the professional services determined the jurisdictional outcome, often against the wishes of the profession that claimed the jurisdiction.
The AI transition has placed organizations in the position of jurisdictional arbiter once again, and the decisions they are making now will shape the system of professions for decades. Abbott's framework identifies two archetypal organizational responses that have appeared in every major jurisdictional disruption. The first is headcount reduction: the organization converts productivity gains directly into reduced staff, maintaining the same level of output with fewer practitioners. The second is capability expansion: the organization maintains or increases its headcount while dramatically expanding the scope and ambition of the work it undertakes.
These two responses produce fundamentally different professional outcomes. Headcount reduction contracts the jurisdiction by reducing the demand for practitioners: fewer professionals are needed to produce the same output. The practitioners who remain are those whose expertise is most difficult to replicate with AI -- the architects, the system designers, the practitioners whose jurisdiction lies above the implementation layer. The practitioners whose expertise is primarily implementational are displaced. The net effect is a smaller profession concentrated in the residual tasks that AI cannot yet perform.
Capability expansion does something structurally different. It expands the jurisdiction by expanding the scope of what each practitioner can accomplish. The twenty engineers who are retrained from narrow technical specialists to AI-augmented generalists do not produce the same output as before with fewer people. They produce dramatically more output with the same number of people. The jurisdiction expands from narrow technical implementation to broad product development, from writing code in a specific domain to building complete systems across multiple domains. The practitioner's jurisdiction grows in scope even as it changes in character. The net effect is a profession that is the same size or larger but operating at a fundamentally different level.
The choice between these two responses is not merely a business strategy. It is a jurisdictional intervention with consequences that extend far beyond the organization's quarterly results. The organization that chooses capability expansion is creating the institutional conditions for a new jurisdiction -- a jurisdiction defined by the capacity to direct AI toward ambitious outcomes rather than by the ability to perform specific technical tasks. The organization that chooses headcount reduction is creating the institutional conditions for a contracted jurisdiction -- a smaller profession concentrated in the residual tasks that AI cannot yet perform, with a correspondingly smaller claim to professional authority and social relevance.
Abbott's historical analysis suggests that the choice between these two responses has been present in every major jurisdictional disruption. When manufacturing was mechanized, some organizations converted productivity gains into headcount reduction (firing workers), while others converted them into capability expansion (producing more goods with the same workforce). The organizations that chose expansion drove the economic growth that eventually produced new categories of employment and new professional jurisdictions. The organizations that chose reduction captured short-term margin but contributed to the displacement of workers who, in the absence of institutional support, bore the full cost of the transition.
The parallel extends beyond manufacturing into the knowledge professions themselves. When computer-assisted legal research was introduced, some law firms used it to reduce the number of junior associates performing legal research. Others used it to expand the scope of research that each associate could perform, enabling them to take on more complex cases and serve more clients. The firms that chose expansion found that their associates developed broader competencies and stronger professional identities, because the tool freed their cognitive resources for the higher-level work that the profession claimed as its core jurisdiction. The firms that chose reduction found that they had a smaller and more specialized workforce -- efficient in the short term but less adaptable when the next jurisdictional shift arrived.
Abbott's concept of linked ecologies adds further depth to the organizational analysis. In his account, professions do not exist in isolation. They exist in ecological relationships with states, universities, and other institutional environments, and the organizational choice between headcount reduction and capability expansion does not occur in a vacuum. It is shaped by the regulatory environment (do labor laws protect displaced workers?), the educational environment (do training programs exist to help practitioners develop new competencies?), and the broader economic environment (does the market reward innovation more than efficiency?). The quality of these institutional environments shapes the organizational choices, and the organizational choices, in turn, shape the professional landscape.
The influence also runs in the other direction: each organizational decision sends signals through the linked ecology that alter the behavior of other institutional actors. When major technology companies lay off large numbers of engineers following AI adoption, the signal reverberates through the educational system (students question whether pursuing computer science degrees is wise), through the regulatory system (policymakers consider whether intervention is needed to protect displaced workers), and through the professional association system (engineering societies debate whether their credentialing standards remain relevant). The organizational choice is, in this sense, not merely a business decision but an ecological event whose consequences propagate through the entire linked ecology of professions, states, and educational institutions.
The evidence from companies already navigating the AI transition reveals a striking and empirically consistent pattern: the organizations that choose capability expansion tend to be led by practitioners who understand the work at a deep level -- builders who have themselves experienced the jurisdictional vertigo that the AI disruption produces and who choose to direct the disruption rather than simply exploit it. The organizations that choose headcount reduction tend to be led by financial managers who view the productivity gain purely through the lens of cost optimization. This is not a moral judgment but a structural observation: the nature of the leader's own professional identity shapes the organizational response, and the organizational response shapes the jurisdictional outcome for everyone within the organization.
The choice between headcount reduction and capability expansion also has different implications for the distribution of professional opportunity across geographic boundaries. Headcount reduction tends to concentrate the remaining professional positions in the geographic centers where the organization's leadership is located and where the highest-value work is performed. Capability expansion, by contrast, can distribute professional opportunity more broadly, because the AI tools that enable capability expansion are accessible from any location with an internet connection. The organization that chooses capability expansion and invests in AI-augmented teams in Trivandrum, Nairobi, or Bucharest is not merely making a business decision. It is making a jurisdictional decision that expands the geographic boundaries of professional opportunity -- a decision that, multiplied across thousands of organizations, has the potential to reshape the global distribution of knowledge work.
The tension between these two responses plays out in boardrooms and quarterly reviews with predictable regularity. The arithmetic of headcount reduction is clean and seductive: if five people can do the work of a hundred, why keep a hundred? The argument for capability expansion requires a different kind of reasoning -- a reasoning that values long-term capability over short-term margin, that recognizes the professional ecosystem as an asset rather than a cost, and that understands that the organization's competitive advantage in an AI-augmented future will depend not on having fewer people but on having people who are capable of directing AI toward outcomes that no AI can envision on its own.
Abbott's analysis also identifies a third organizational response that is less visible than either headcount reduction or capability expansion but potentially more consequential: structural reorganization. Some organizations respond to the AI disruption not by reducing or expanding their workforce but by fundamentally restructuring how professional work is organized within the firm. Teams that were previously organized around technical specialization -- frontend, backend, infrastructure, data -- are reorganized around product outcomes, with each team member operating as an AI-augmented generalist capable of contributing across the full stack. The jurisdictional boundaries that previously existed within the organization -- the internal divisions of labor that mirrored the external divisions of the professional system -- are dissolved, and new organizational forms emerge that have no precedent in the pre-AI professional landscape.
This structural reorganization has implications that extend beyond the individual firm. When organizations restructure their internal jurisdictional boundaries, they create new models of professional practice that other organizations observe and, if the model proves successful, adopt. The organizational innovation diffuses through the professional system, gradually reshaping the external jurisdictional landscape to match the internal reorganization. This is the mechanism through which organizational decisions become professional norms: the choices that individual organizations make about how to structure AI-augmented work accumulate into a new pattern of professional organization that eventually displaces the old one.
The role of middle management in this process deserves particular attention through Abbott's lens. Middle managers occupy a distinctive jurisdictional position: they hold authority over the organization of work within their teams, and their decisions about how to deploy AI directly shape the jurisdictional experience of the practitioners they manage. A middle manager who uses AI as a tool for surveillance and control -- monitoring output, measuring keystrokes, optimizing for efficiency metrics -- creates a jurisdictional environment in which the practitioner's autonomy contracts and the work becomes more routinized. A middle manager who uses AI as a tool for empowerment -- freeing practitioners from routine tasks, expanding the scope of what each practitioner can attempt, investing in the development of judgment and architectural thinking -- creates a jurisdictional environment in which the practitioner's autonomy expands and the work becomes more judgment-intensive.
Abbott's framework does not prescribe which choice organizations should make. It describes the jurisdictional consequences of each choice and insists that organizations understand themselves as what they are: the primary institutional mechanism through which the system of professions is reorganized. The choices they make now are, in jurisdictional terms, the settlements that will define the professional landscape for decades. The organizations that recognize this responsibility -- that understand their role as jurisdictional arbiters and make choices accordingly -- will shape a professional system that serves human flourishing. The organizations that do not recognize it will shape a professional system by default, and the default, Abbott's research suggests, tends to favor efficiency over humanity, concentration over distribution, and short-term extraction over long-term cultivation.
Abbott's framework distinguishes three arenas in which jurisdictional competition plays out: the workplace, where the work is actually performed; the public arena, where professional authority is contested in media, public opinion, and cultural discourse; and the legal arena, where licensing, regulation, and formal institutional authority are established. This three-arena framework is one of Abbott's most distinctive contributions to the sociology of professions, and its application to the AI disruption reveals dynamics that single-arena analyses systematically miss.
Most discussion of AI's impact on professions focuses exclusively on the workplace arena -- on productivity gains, job displacement, and the changing nature of tasks. This focus is understandable but incomplete, because outcomes in the other two arenas often override outcomes in the workplace. A profession can win the workplace competition -- demonstrating that its practitioners produce superior output -- and still lose the jurisdictional dispute if it loses in the legal arena (where regulatory changes remove its monopoly protections) or the public arena (where cultural shifts undermine its prestige and authority).
In the legal arena, the state plays a decisive role that extends far beyond regulation in the narrow sense. Governments do not merely regulate professions; they constitute them. Licensing requirements, educational mandates, scope-of-practice laws, and regulatory frameworks are the mechanisms through which the state formally recognizes -- and constrains -- professional jurisdictions. When the state requires a medical license to practice medicine, it is not merely certifying competence. It is creating a jurisdictional boundary, enforced by law, that determines who may and who may not perform certain kinds of work. The boundary has force not because the state has independently verified that licensed practitioners are more competent than unlicensed ones but because the state has accepted the profession's claim that its credentialing system adequately distinguishes competent from incompetent practice.
The AI disruption is challenging these state-created jurisdictions in ways that most regulatory frameworks were not designed to address. If an AI system can produce a medical diagnosis, does the diagnosis require a licensed physician to review it before it is communicated to the patient? If an AI system can draft a legal brief, does the brief require a licensed attorney to certify it before it is filed with the court? If an AI system can design a building's structural components, does the design require a licensed engineer to stamp it before construction begins? These questions are not hypothetical. They are being contested in regulatory bodies, courts, and legislatures across the world, and the answers will shape the system of professions for decades.
Abbott's research suggests that the state's response to these questions will follow the pattern of previous regulatory adaptations to technological disruption: initially protective of existing jurisdictions, gradually accommodating new configurations as the technology's capabilities become undeniable, and ultimately establishing new regulatory frameworks that reflect the new jurisdictional reality. The pace of this adaptation varies enormously across nations and across professional domains, and the variation itself creates jurisdictional asymmetries that shape the competitive landscape. The European Union's AI Act represents one approach -- comprehensive, precautionary, focused on risk classification and the protection of existing professional authority. The American approach has been more fragmented, with different agencies and states adopting different regulatory postures. Asian nations, particularly Singapore and Japan, have adopted approaches that emphasize innovation while establishing guardrails.
From Abbott's perspective, these different regulatory approaches represent different jurisdictional settlements -- different configurations of authority between established professionals, AI-enabled practitioners, the organizations that employ them, and the state that regulates them. The settlement that each nation reaches will shape its professional landscape in distinctive ways, creating different opportunities and different constraints for practitioners within each system. The nation that builds the most adaptive regulatory framework -- one that protects the public interest while allowing new jurisdictional configurations to emerge -- will produce the most resilient professional system.
In the public arena, the jurisdictional competition around AI is visible in the cultural discourse that accompanies the technological disruption. The debate over whether AI-assisted work is "real" work, whether AI-generated art is "real" art, whether AI-drafted legal arguments are "real" legal reasoning -- these debates are jurisdictional competitions conducted in the public arena. They are attempts to define, in the court of public opinion, what counts as legitimate professional practice and who deserves to be recognized as a legitimate practitioner.
These public-arena debates have a temporal dimension that Abbott's framework makes analytically precise. In the early stages of a jurisdictional disruption, the public debate tends to be dominated by the voices of the established profession -- the practitioners who have the cultural authority, the media access, and the institutional platforms to shape the narrative. The new entrants are less visible in the public arena, partly because they lack the institutional standing of the established profession and partly because they are too busy exploiting their new capabilities to participate in cultural debates about whether those capabilities are legitimate. As the disruption matures, the public debate shifts: the new entrants gain visibility, their successes become undeniable, and the cultural narrative gradually accommodates the new jurisdictional reality. This temporal pattern suggests that the current public debate, which still favors the established professionals' narrative of loss and degradation, will gradually shift as the AI-enabled practitioners' successes accumulate and become culturally visible.
Abbott's analysis of the public arena reveals that these debates serve a function beyond their explicit content. They are mechanisms through which professional groups seek to establish cultural authority -- to shape public perception of who deserves trust, respect, and compensation for their work. The profession that wins the public debate does not necessarily win the workplace competition (organizational demand may override public sentiment), but the public debate shapes the cultural context in which the workplace competition unfolds. A profession that enjoys high public regard has an easier time defending its jurisdictional claims than one that does not. The cultural narrative matters, not because it determines outcomes directly but because it influences the institutional actors -- the organizations, the regulators, the educational institutions -- whose decisions do determine outcomes.
Education occupies a distinctive position in Abbott's framework as both a jurisdictional gatekeeper and a producer of the human capital on which jurisdictions depend. Educational institutions have always served a dual function in the system of professions: they transmit the knowledge practitioners need, and they control access to the jurisdiction by determining who receives the credentials the profession requires. AI has progressively undermined both functions. The knowledge function has been eroded by each successive wave of information technology -- the internet, online courses, bootcamps, and now AI tutoring systems that can provide personalized instruction at any hour, adapting to the student's level and pace. The credentialing function is losing force as organizations shift from credential-based to capability-based evaluation -- hiring based on portfolio and demonstrated ability rather than formal degrees.
The educational institution must find a new jurisdiction, and Abbott's framework suggests where that jurisdiction lies. It lies in the development of capacities that AI cannot replicate and that organizations increasingly value: judgment, ethical reasoning, contextual sensitivity, the ability to evaluate AI output critically, and the capacity for integrative thinking that draws on multiple domains of knowledge to address problems that resist single-discipline solutions. These capacities cannot be transmitted through lectures or tested through examinations. They are developed through mentored practice, through sustained engagement with complex and ambiguous problems, through the kind of educational experience that produces not just knowledge but the wisdom to deploy knowledge wisely.
The credentialing crisis extends beyond the content of education to its economic model. Professional education has historically been expensive because the knowledge it transmitted was scarce and the credentials it conferred were valuable. When the knowledge becomes abundant and the credentials become less relevant to hiring decisions, the economic justification for expensive professional education weakens. Students and their families begin to question whether a four-year computer science degree, with its attendant debt, is a wise investment when a motivated individual with access to AI tools can achieve comparable productive capability in a fraction of the time and at a fraction of the cost. This economic pressure will reshape the institutional landscape of professional education as surely as the pedagogical pressure will reshape its content.
The university that teaches students to write code is teaching a skill that AI is rapidly commoditizing. The university that teaches students to think about what code should do, for whom, and with what consequences, is teaching a capacity that grows more valuable as AI makes technical implementation more accessible. The shift from the first model to the second is not merely a curricular adjustment. It is a jurisdictional repositioning -- a redefinition of the educational institution's role in the system of professions.
Abbott's analysis of linked ecologies makes clear that the interconnections between these three arenas -- workplace, legal, and public -- determine the overall shape of the jurisdictional settlement. The workplace arena determines which professional configurations are economically viable. The legal arena determines which configurations are formally authorized. The public arena determines which configurations enjoy cultural legitimacy. A professional configuration that succeeds in all three arenas achieves the most stable jurisdictional position. A configuration that succeeds in one or two but fails in the third remains vulnerable to challenge.
The relationship between education and professional jurisdiction raises a further question that Abbott's framework addresses with characteristic directness: what is the purpose of professional education in an era when the knowledge it transmits can be accessed through AI? The traditional answer -- that professional education transmits the specialized knowledge that practitioners need in order to perform the profession's work -- is no longer sufficient, because the knowledge is no longer scarce. A new answer is required, and Abbott's framework suggests what that answer might be: professional education exists to develop the judgment that practitioners need in order to direct AI-augmented work toward outcomes that serve human purposes. The knowledge is still important as context for judgment, but it is the judgment itself -- not the knowledge -- that constitutes the educated practitioner's distinctive contribution.
This redefinition of professional education has implications for the institutional structure of higher education that extend far beyond curriculum design. If the purpose of professional education is to develop judgment rather than transmit knowledge, then the methods of education must change accordingly. Lectures that deliver information to passive recipients are less effective than seminars that develop the capacity for critical thinking through active engagement with complex problems. Examinations that test recall of factual knowledge are less relevant than assessments that evaluate the quality of judgment in ambiguous situations. Credentialing systems that certify knowledge acquisition are less meaningful than portfolio-based evaluations that demonstrate the capacity for wise decision-making in contexts that matter.
The professional school that adapts to this new reality -- that reorganizes its curriculum around the development of judgment, its pedagogy around active engagement, and its assessment around demonstrated capability -- will produce practitioners who are equipped for the new jurisdictional landscape. The professional school that continues to operate as if the transmission of knowledge were its primary purpose will produce practitioners who enter the workforce with credentials that no longer correspond to the capacities the market values. The jurisdictional consequences of this mismatch will be severe: graduates of knowledge-focused programs will find that their credentials do not translate into jurisdictional authority, because the organizations that arbitrate jurisdictional disputes are increasingly hiring for demonstrated capability rather than formal credentials.
The AI-enabled practitioner currently enjoys strong positioning in the workplace arena (organizations value the productivity gains) but weaker positioning in the legal arena (regulatory frameworks have not caught up) and contested positioning in the public arena (the cultural debate over AI-assisted work is far from resolved). The established professional currently enjoys strong positioning in the legal and public arenas (licensing requirements and cultural prestige remain intact) but weakening positioning in the workplace arena (organizations are increasingly finding that AI-enabled practitioners serve their needs more efficiently). The jurisdictional settlement will emerge from the interaction of these arena-specific dynamics, and Abbott's framework insists that understanding the interaction -- not just the individual arenas -- is the prerequisite for predicting the outcome.
The democratization of capability that the AI moment represents is, in jurisdictional terms, the most radical event in the history of the professions. Previous jurisdictional disruptions shifted boundaries between professional groups. AI does something more fundamental: it expands the pool of potential claimants for professional jurisdictions to include virtually anyone who can articulate a clear intention. This is not a boundary shift. It is a boundary dissolution, and its implications for the system of professions are unlike anything the historical record contains.
Abbott's historical research provides the evidence base for understanding just how radical this dissolution is. In two centuries of professional history, every previous jurisdictional disruption left the concept of jurisdiction itself intact. The boundaries shifted, new professions emerged, old professions contracted, but the fundamental structure -- groups of practitioners holding exclusive authority over specific domains of work -- persisted through every disruption. The AI disruption challenges this fundamental structure because it challenges the exclusivity that makes jurisdiction possible. If anyone can perform the work, the concept of exclusive authority over it becomes incoherent. The system of professions must either find a new basis for exclusivity or transform itself into something that does not depend on exclusivity at all.
To appreciate the radicalism of this democratization, consider what it means for the concept of jurisdiction itself. Jurisdiction, as Abbott has defined it, is the exclusive right of a particular group to perform a particular kind of work. The exclusivity is the defining feature. Without exclusivity, there is no jurisdiction -- there is merely a kind of work that anyone can do. The exclusivity of professional jurisdictions has historically been maintained by the scarcity of the knowledge required to perform the work. Medical practice required medical knowledge. Legal practice required legal knowledge. Software engineering required technical knowledge. The scarcity guaranteed the exclusivity.
AI has not merely reduced this scarcity. In many domains, it has eliminated it entirely. A designer with no programming knowledge can now build functional applications. A product manager who has never written a line of code can now produce working prototypes. A student without a computer science degree can now create software that would have required a team of specialists a decade ago. The knowledge that previously gated entry to the jurisdiction of software development is now available to anyone with access to a natural language interface. The gate has not merely been lowered. It has been removed.
The democratization of one jurisdiction is, however, always accompanied by the creation of new ones. This is a pattern Abbott has identified across every historical case of jurisdictional expansion, and it provides a more nuanced picture than the simple narrative of professional dissolution suggests. When literacy became universal, the jurisdiction of the scribe collapsed, but the jurisdictions of the editor, the publisher, the literary critic, and the writing teacher emerged. When photography became accessible, the jurisdiction of the portrait painter contracted, but the jurisdictions of the art director, the photo editor, and the cinematographer expanded. When desktop publishing made typesetting accessible, the typesetter's jurisdiction collapsed, but the graphic designer's, the information architect's, and the user experience specialist's jurisdictions expanded.
In each case, the democratization of a lower-level capability created new upper-level jurisdictions that were more complex, more nuanced, and more dependent on human judgment than the lower-level capability they replaced. The pattern suggests that the democratization of software development will follow the same trajectory: the lower-level jurisdiction of code production will be universalized, and new upper-level jurisdictions will emerge around capacities that code production alone does not provide.
What are these new jurisdictions? Abbott's framework, applied to the evidence from the current disruption, identifies several candidates. The capacity to evaluate whether AI-produced output actually serves its intended purpose -- a capacity requiring judgment that AI itself does not possess. The capacity to navigate the ethical, social, and aesthetic dimensions of technology that no amount of technical knowledge addresses. The capacity to design systems that enhance human cognition rather than diminish it. The capacity to integrate multiple perspectives, multiple domains of knowledge, and multiple stakeholder interests into coherent strategies that serve human flourishing rather than merely organizational efficiency. And the capacity for strategic intervention in complex systems -- studying the flow of forces within a domain, identifying leverage points, and building structures that redirect those forces toward productive outcomes.
The emergence of these new jurisdictions is not speculative. It is already visible in organizations that have restructured around AI-augmented work. The role of the "AI director" -- the practitioner who does not build but who evaluates, directs, and integrates AI-produced output -- is appearing in technology companies, consulting firms, and creative agencies with increasing frequency. The role has no established credentialing path, no professional association, and no standardized training program. It is, in Abbott's terms, a jurisdiction in its earliest stage of formation -- a domain of work that is being claimed by practitioners from diverse backgrounds who share the capacity for judgment that the work requires. The jurisdictional competition over who will control this emerging domain -- whether it will be claimed by the technology professionals, by the management consultants, by a new profession that has not yet coalesced, or by some combination -- is one of the most consequential professional competitions currently underway, even though most participants are not yet aware that they are competing.
Each of these capacities defines a potential jurisdiction in the new system of professions. Each requires expertise rooted in human experience, human values, and the specifically human capacity for judgment in situations where the right answer is not determined by data alone but by the weighing of competing values, interests, and possibilities that only a consciousness capable of caring about the outcome can perform.
The democratization also has profound implications for the geographic and economic distribution of professional opportunity that Abbott's framework helps to specify. In the old system, the geographic distribution of professional opportunity was constrained by the geographic distribution of educational institutions, professional networks, and the cultural capital that facilitated entry into the profession's social world. A practitioner in Lagos or Dhaka or Trivandrum faced barriers to entry that were not merely technical but institutional and cultural -- barriers that reflected the historical concentration of professional authority in specific economic centers. AI removes many of these geographic constraints. The tool that enables entry to the jurisdiction is available globally, and the capacity the new jurisdiction requires -- the capacity for judgment, for clear communication, for the articulation of intent -- is not geographically concentrated.
Abbott would caution, however, that the democratization of capability does not automatically produce the democratization of opportunity. Access to AI tools is not universal. The cultural and linguistic capital that effective use of AI requires is not evenly distributed. The institutional structures that recognize and reward AI-augmented competence are concentrated in specific economic centers. The trajectory is toward broader access, but the pace and equity of that trajectory depend on the same institutional dynamics -- organizational choices, regulatory frameworks, educational investments -- that shape jurisdictional outcomes in every era. The democratization of the tool is not the democratization of the system. The system must be deliberately restructured to ensure that the tool's democratizing potential is realized rather than captured by existing centers of privilege.
The future system of professions that Abbott's framework envisions will be organized around judgment rather than knowledge, around care rather than craft, around the human capacities that AI amplifies rather than the technical capacities that AI replaces. This reorganization is not unprecedented. Every major jurisdictional disruption in professional history has produced a system that was, by the measures that matter most -- accessibility, social utility, and the match between professional authority and demonstrated value -- more adequate than the system it replaced. The medical profession that emerged from the disruptions of the nineteenth century was more effective and more accessible than its predecessor. The legal profession that emerged from the disruptions of the twentieth century was more diverse and more responsive to public needs. Abbott's framework suggests that the professions that emerge from the AI disruption will be, if the historical pattern holds, more judgment-oriented, more ethically grounded, and more attentive to the human consequences of technology than the professions they replace.
This is not optimism. It is pattern recognition, grounded in two centuries of evidence. The pattern holds when the institutional actors -- the organizations, the educational institutions, the regulatory bodies, the practitioners themselves -- make choices that facilitate the transition rather than resist it. The pattern breaks when the transition is left unmanaged, when the institutional actors optimize for short-term efficiency rather than long-term adaptation, and when the people who bear the cost of the transition are left without the institutional support they need to navigate it.
Abbott's analysis leaves us with a structural observation that carries moral weight. The AI disruption will produce a new system of professions. The question is not whether the system will be reorganized but how. Whether the reorganization serves human flourishing or merely organizational efficiency. Whether it creates new jurisdictions that are accessible to the broadest possible pool of practitioners or concentrates authority in the hands of a privileged few. Whether the practitioners, organizations, and institutions that shape the settlement build structures that redirect the enormous flow of new capability toward life -- or allow it to flood without intervention, sweeping away the livelihoods and identities of those who lack the institutional support to build on new ground.
The history of professions offers both warnings and hope. The warnings come from transitions that were managed badly -- the Luddite era, where the absence of institutional support turned a structural transformation into a human catastrophe. The hope comes from transitions that were managed well -- the professionalization of medicine, the expansion of access to legal services, the democratization of computing itself -- where institutional investment in education, regulation, and professional development turned disruption into expansion.
Abbott's processual sociology adds one final dimension that is crucial for understanding the democratization of jurisdiction: the recognition that the system of professions is never static. It is always in process, always being made and remade through the daily interactions of practitioners, organizations, clients, and institutional actors. The system that exists at any given moment is not a structure but a snapshot -- a momentary configuration of ongoing processes that will continue to evolve. The AI disruption has not destroyed the system of professions. It has accelerated the processes through which the system remakes itself, compressing decades of professional evolution into years or even months.
This processual understanding has practical implications for practitioners navigating the disruption. If the system is always in process, then the practitioner's relationship to it is not that of a passenger on a fixed track but of a participant in an ongoing negotiation. The practitioner does not merely occupy a jurisdiction. She actively constructs it through her daily choices about what work to take on, what capabilities to develop, what relationships to build, and what standards to uphold. The AI disruption has made this constructive work more visible and more urgent, but it has not created it. Practitioners have always been the builders of their own jurisdictions. The disruption has simply made the building more conscious, more deliberate, and more consequential.
The prediction Abbott's framework supports, based on the patterns documented across two centuries of professional history, is that the current period of jurisdictional fluidity will eventually produce a new settlement -- a new system of professions organized around the scarcity that survives the democratization of knowledge. That scarcity is human judgment, human care, and the specifically human capacity for wisdom in the face of complexity. The system that emerges from this settlement will look different from any professional system that has existed before, because the technology that precipitated it is different from any technology that has precipitated a jurisdictional disruption before. But the dynamics that will shape its emergence -- the competition between professional groups, the arbitration of organizations, the regulation of states, the cultural negotiations of the public arena -- are the same dynamics that have shaped every professional system in the modern era.
The AI disruption is the most consequential jurisdictional event since the industrial revolution. Its outcomes are not yet determined. The settlement has not yet arrived. And the choices that practitioners, organizations, educators, and policymakers make in this narrow window between disruption and settlement will shape the system of professions for generations to come. Abbott's framework does not tell us what to choose. It tells us, with the precision of a half-century of research, what the consequences of different choices will be. The rest is up to the people who must live within the system those choices create.
I keep thinking about a line Abbott never wrote but his work makes unavoidable: every profession is a story people tell themselves about why they matter.
I have watched that story crack in real time. Not from a distance. From inside it. I sat in rooms with engineers whose identities were built on twenty years of mastering systems that a tool could now navigate in minutes. I watched their faces cycle through the familiar phases of loss -- denial, anger, bargaining, grief -- sometimes in a single afternoon. And I recognized every phase, because I was moving through them too.
Abbott gave me a framework to understand what I was feeling. His life's work says, in essence, that the boundaries we draw around professional competence are not discovered. They are constructed. They are maintained through institutional power, credentialing systems, and the careful cultivation of the belief that the work cannot be done by anyone who has not followed the prescribed path. And when a technology arrives that enables competent output through a different path, the boundary does not hold. It never has. Not when photography challenged portrait painting. Not when calculators challenged arithmetic. Not when computers challenged the entire architecture of industrial-age work.
The boundary shifts. And the people who built their lives inside it must rebuild.
What struck me most in writing this book was how Abbott's analytical precision illuminated something I had been experiencing as chaos. The vertigo I felt -- the simultaneous terror and exhilaration of watching professional categories dissolve -- was not chaos at all. It was a pattern. A pattern that has repeated across centuries, across professions, across every moment when a technology collapsed the scarcity on which a jurisdiction was built. The pattern does not make the experience less painful. But it does make it legible. And legibility is the first step toward navigation.
Abbott does not tell us what the new system of professions will look like. He tells us what determines its shape: the choices of organizations, educators, regulators, and practitioners themselves. The organizations that choose capability expansion over headcount reduction. The universities that teach judgment rather than skills that a tool can replicate. The regulators who build adaptive frameworks instead of freezing old jurisdictional boundaries in law. The practitioners who recognize that the endowment effect is causing them to overvalue what they know and undervalue what they are.
This is what I take from Abbott's work and want to leave with you: the jurisdiction is shifting, but the capacity is not. The knowledge that made you a professional may be commoditized. The judgment, the care, the stubbornness that drove you to master something difficult in the first place -- those remain. And they are, if Abbott's two centuries of evidence hold, the foundation on which the next system of professions will be built.
We are in the narrow window between the old settlement and the new one. The choices we make now -- as builders, as leaders, as parents, as citizens -- will determine what the new settlement looks like. Whether it serves human flourishing or merely organizational efficiency. Whether it opens new jurisdictions to the broadest possible pool of people or concentrates authority in fewer hands.
I do not know what the settlement will be. But I know this: the professionals who will shape it are not the ones defending the old boundary. They are the ones who see the boundary shifting and have the courage to walk toward the new ground -- even though the ground is still moving, and the view from there has not yet come into focus.
The jurisdiction shifts. The capacity endures. The question, as always, is what you choose to build with what remains.
-- Edo Segal
---
Andrew Abbott spent a career proving that these stories are political constructions -- jurisdictional claims maintained not by the superiority of the knowledge but by the institutional infrastructure built to defend it. Now artificial intelligence has breached every boundary at once, and Abbott's framework reveals what the disruption means with uncomfortable clarity. This book applies Abbott's analysis of jurisdictional competition to the AI revolution, tracing the patterns that have governed professional evolution for two centuries and showing how those patterns illuminate the most consequential disruption of our era. From the abstraction sequence that has reshaped computing to the Death Cross reshaping the software industry, Abbott's lens reveals the structural forces beneath the surface turbulence. What emerges is neither elegy nor celebration but a rigorous framework for understanding the new system of professions that is taking shape -- a system organized around judgment rather than knowledge, around care rather than craft, around the human capacities that AI amplifies rather than the technical capacities it replaces.

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book -- the people, ideas, works, and events that Andrew Abbott — On AI uses as stepping stones for thinking through the AI revolution.