By Edo Segal
The wall I couldn't see was the one I built myself.
That sentence has been rattling around my head since I started working through Foucault's framework, and I cannot shake it loose. Not because it is clever. Because it describes, with uncomfortable precision, something I lived through and wrote about in The Orange Pill without understanding the architecture of what was happening to me.
I described the four-in-the-morning sessions. The transatlantic flight where exhilaration curdled into compulsion. The moment I recognized that the whip and the hand holding it belonged to the same person. I described all of it as personal experience — my addiction, my inability to stop, my confusion of productivity with aliveness.
Foucault would say I was looking at the wrong thing.
Not the person. The room. The walls, the sightlines, the architecture of visibility that produced the person who could not stop. He spent his career showing that the institutions we inhabit do not merely house us. They shape us. The prison does not just contain prisoners. It produces them — produces subjects who internalize the surveillance until they guard themselves. The clinic does not just treat patients. It produces the category of the patient, the form of self-understanding that makes someone recognize herself as sick and submit to the authority empowered to cure her.
This matters now because the AI-augmented workspace is an institution. It has an architecture. It produces specific kinds of subjects — builders who measure themselves against productivity metrics they have internalized so thoroughly that the metrics feel like personal standards rather than institutional constructions. The twenty-fold multiplier I celebrated in Trivandrum is simultaneously an empowerment and a benchmark, and the benchmark watches you the way the panopticon's tower watches the inmates: not constantly, but potentially, which is enough.
Foucault gives me a lens for seeing what I could not see from inside the experience. The discourse that told me what the orange pill meant before I had finished having the recognition. The confessional structure of my own book — how naming my compulsion served the very apparatus I was trying to examine. The way the machine can simulate self-awareness with such formal adequacy that we stop demanding the genuine article from the humans who actually have something at stake.
This is not a comfortable lens. Foucault does not comfort. He excavates. But the excavation reveals the walls. And you cannot build a dam in the right place if you cannot see the walls of the room you are standing in.
— Edo Segal × Opus 4.6
Michel Foucault (1926–1984) was a French philosopher and historian of ideas whose work fundamentally reshaped how the human sciences understand power, knowledge, and subjectivity. Born in Poitiers, France, he studied at the École Normale Supérieure under Louis Althusser and Jean Hyppolite, and held the prestigious Chair of the History of Systems of Thought at the Collège de France from 1970 until his death. His major works include Madness and Civilization (1961), The Birth of the Clinic (1963), The Order of Things (1966), Discipline and Punish (1975), and the three published volumes of The History of Sexuality (1976–1984). Foucault developed the concepts of the episteme, power-knowledge, biopower, governmentality, and the author-function, demonstrating that knowledge is never neutral but always entangled with relations of power, and that institutions produce the very subjects they claim to serve. His genealogical method — tracing how present arrangements emerged from contingent historical conditions rather than natural necessity — remains one of the most influential analytical frameworks in philosophy, political theory, cultural studies, and critical thought worldwide.
The author is not a person who writes. This proposition strikes the ear as paradoxical only because the institutional arrangement it describes has been naturalized so thoroughly that questioning it feels like questioning gravity rather than examining a historical construction. The concept of authorship, as it functions within the legal, commercial, hermeneutic, and discursive apparatus governing the production and circulation of texts, is not identical with the biographical individual who sits before a page and produces marks upon it. The author is a function — a principle of organization operating within and upon the discursive field, performing specific institutional work irreducible to the act of writing itself. To confuse the author-function with the person who writes is to mistake a complex apparatus for a natural fact, and it is precisely this confusion that has rendered the question of AI authorship so resistant to clear analysis.
Foucault's genealogical method demands that the analysis begin not with what the author is but with what the author does — within the discursive field, for whom, and in whose interest. The method is not biographical. It does not ask who Montaigne was, what psychological compulsions drove him to write, or what experiences shaped his particular sensibility. It asks what work the name "Montaigne" performs when attached to a collection of texts — how the name organizes reading, constrains interpretation, establishes coherence, enables commercial transactions, and creates legal accountability. The name is not a designation of a person. It is a principle of discursive organization, and the institutional work it performs is irreducible to the biographical facts of the person it designates. The naturalization of this function — the invisibility of the institutional machinery concealed behind the apparently simple statement "Montaigne wrote this" — is not accidental. It is produced by the power relations that the author-function serves.
The author-function performs at least four distinct operations, each serving specific institutional purposes and each differently affected by the arrival of artificial intelligence in textual production. The differential disruption of these functions is what makes the AI authorship question so difficult to think clearly about: participants in the discourse address different functions under the single heading of "authorship" and therefore talk past one another with the systematic precision of people who share a vocabulary but not a referent.
The first operation is classificatory. The author-function organizes texts into bodies of work, establishing that this text and that text, despite differences in subject, style, or period of composition, belong to the same corpus because they bear the same name. The classificatory function creates coherence where there might otherwise be mere accumulation. It tells the reader that a text on madness and a text on sexuality and a text on prisons constitute a single intellectual project because they share an attribution. The function determines what counts as a legitimate body of work, what connections are visible, what contradictions must be explained away, what developmental narratives can be imposed upon texts that might, without the organizing principle of the author-name, resist such narrativization entirely.
When a text is produced through dialogue between a human and an AI system, the classificatory function is strained in specific ways. The resulting text bears marks of two distinct modes of composition: the human contributor's biographical specificity and experiential authority; the AI's associative pattern-completion drawing upon the entire digitized archive of human expression. The text is classified under the human author's name because the function requires a name and the AI has none in the relevant sense — no biography around which a coherent oeuvre can be organized. But the classification conceals dimensions of the text that do not originate in the named author's biography, dimensions emerging from a collaborative process the classificatory function was never designed to accommodate. The strain is not a failure of any specific text. It is a revelation of the contingency of the classificatory function itself.
The second operation is legal attribution. The author-function designates the legally responsible party for the text's content — the entity that can be held liable for falsehoods, prosecuted for defamation, sued for infringement. This legal function emerged in its modern form alongside copyright law in the eighteenth century, when texts became property and the circulation of ideas became subject to legal regulation. It serves the specific purpose of establishing accountability within a framework requiring identifiable agents to whom consequences can be attached. Its emergence was entangled with the transformation of texts from communal cultural productions into commodities that could be owned, controlled, and sold.
AI collaboration disrupts the legal function with particular clarity. When a collaboratively produced text contains a false claim or a copyright-infringing passage, the assignment of legal responsibility reveals itself as a political and institutional decision rather than a straightforward attribution. The human collaborator initiated the dialogue and approved the final text. The corporation produced the AI system. The millions of human authors whose texts constituted the training data contributed the patterns from which the AI's output was derived. The legal function presupposes individual agency and intentional production — a model the collaborative process complicates without abolishing. Someone must be legally responsible because the legal system requires accountable agents. But the designation of that someone is determined not by philosophical analysis but by the power relations governing the legal institutions that make the determination.
The third operation is authenticating. The author-function establishes a relationship between the text and a consciousness presumed to stand behind it. Attribution carries an implicit promise: this text expresses the thought, perspective, and intellectual commitments of a particular human being. The authenticating function grounds the reader's trust that the text is not a random assemblage of sentences but a record of thought, a trace of a mind engaging with the world. This function is deeply entangled with the Romantic ideology of individual genius — the idea that the text emanates from a unique consciousness, bears the imprint of an irreplaceable subjectivity, and could not have been produced by anyone else.
AI collaboration disrupts the authenticating function most profoundly. When a text emerges from dialogue between human intention and machine inference, the reader can no longer assume that every sentence, argument, or connection originates in the named author's consciousness. The Orange Pill documents this disruption with unusual precision: its author describes moments when Claude made connections he had not seen, when the AI's pattern-completion changed the argument's direction, when neither contributor could claim ownership of the result. These acknowledgments do not destroy the author-function. They expose it as a function — a set of institutional operations naturalized to invisibility, now rendered visible by conditions the function was not designed to accommodate.
The fourth operation is commercial. The author-function enables market transactions by attaching a name that functions as a brand. Books are sold under author names. Readers develop loyalties. Publishers invest in brands. The commercial function is perhaps the most transparently institutional of the four, and also the function least disrupted by AI collaboration. The name on the cover still sells the book. The market still needs names, brands, principles of commercial identification. Whether the text was produced by a solitary genius or a human-AI collaboration is, from the market function's perspective, largely irrelevant. The commercial function reveals with particular clarity that the author-function has never been primarily about the relationship between a text and a consciousness. It has been about the institutional needs of the market, and those needs persist regardless of how the text was produced.
The differential disruption explains the confusion and emotional charge of the AI authorship discourse. Participants are not arguing about the same thing. The legal scholar addresses the legal function. The literary theorist addresses the authenticating function. The publisher addresses the commercial function. The philosopher addresses the classificatory function and what coherence means when the oeuvre is no longer the product of a single consciousness. Each concern is legitimate, each arises from a different institutional need, each points to a different dimension of the disruption. But because they are bundled under the single heading of "authorship," the discourse treats them as one problem requiring one solution, when they are multiple problems requiring potentially incompatible solutions.
One dimension of the disruption deserves particular attention because it reveals the deepest assumption embedded in the author-function: the assumption that texts have origins. The author-function does not merely organize texts. It originates them — it designates a point of departure, a consciousness from which the text emerged, a source that explains the text's existence. The metaphysics of origin — the belief that every text has a single, identifiable point of creation — is the foundation upon which all four operations rest. The classificatory function groups texts by origin. The legal function assigns liability to the originator. The authenticating function promises access to the originating consciousness. The commercial function brands the origin.
AI collaboration dissolves the metaphysics of origin more thoroughly than any previous challenge to authorship. When a text emerges from the collision between human intention and machine inference, the collision itself becomes the origin — not the human consciousness, not the machine's processing, but the space between them. The text is not authored in the sense the metaphysics of origin requires. It is produced through a process that has no single point of origin, no sovereign consciousness from which it emerged, no identifiable moment of creation that can be attributed to one agent rather than another. The origin is distributed, and the author-function, constructed upon the assumption of singular origin, cannot accommodate the distribution without visible strain.
The genealogical analysis reveals that authorship is a historically specific institutional arrangement that emerged when texts became property, when the circulation of ideas became legally regulated, when the market for printed books required commercial identification, and when the Romantic ideology of individual genius provided the philosophical foundation for the legal and commercial structures the publishing industry demanded. Each condition is historically contingent. Each could have been otherwise. And each is being transformed by the AI transition in ways that reveal the contingency the naturalization of authorship had concealed.
The question is not whether the author-function will survive. It will, because the institutional needs it serves — classification, legal attribution, authentication, commercial identification — persist even as the conditions of textual production change. The question is how the function will be reconfigured, by whom, and in whose interest. That is a question about power — about who determines the institutional arrangements within which texts are produced, circulated, evaluated, and owned. The author-function was constructed by specific configurations of power. It will be reconstructed by the configurations of power that the AI transition is producing. The analysis of those configurations is the work that follows.
---
Power and knowledge are not separable. They are constitutively intertwined, each producing the other in a relationship so intimate that neither can be understood in isolation. Every system of knowledge is simultaneously a system of power, and every distribution of knowledge is simultaneously a distribution of power. The concept of power-knowledge — written with a hyphen to indicate the inseparability of the two terms — is not a theoretical claim that can be debated in the abstract. It is an analytical framework that reveals, in every specific case, how the production of knowledge is entangled with the exercise of power, how the distribution of knowledge determines who can act and who must be acted upon, and how the institutions that produce and certify knowledge simultaneously produce and enforce relations of domination.
The AI transition is, from this perspective, a redistribution of power-knowledge on a scale without historical precedent. It is not merely a technological event — a new tool that makes certain tasks faster. It is a transformation in the fundamental arrangements through which knowledge is produced, distributed, certified, and controlled, and therefore a transformation in the arrangements through which power is exercised, contested, and redistributed. Every feature of the AI transition that appears merely technical — the speed of code generation, the architecture of neural networks, the cost of inference — is simultaneously a feature of the power-knowledge apparatus that determines who can produce knowledge, who can access it, who is authorized to evaluate it, and who is excluded from its production.
Consider the specific redistribution that the Trivandrum training session documented in The Orange Pill describes. Twenty engineers were given access to Claude Code. Within days, each was operating with the productive capacity of a small team. An engineer who had spent eight years exclusively on backend systems built a complete user-facing feature in two days. A senior engineer discovered that the judgment and taste accumulated over decades were the part that mattered — not the implementation labor that had consumed eighty percent of his career.
The narrative is presented as empowerment, as democratization, as barriers falling between imagination and artifact. The factual claims are not in dispute. But the redistribution must be analyzed not merely as a change in who can do what, but as a transformation in the power-knowledge relations constituting the field of knowledge work. The engineer who gains access to the entire pattern space of programming knowledge through an AI tool has gained power — the power to produce, to build, to act upon the world in ways previously foreclosed. But the expert whose specialized knowledge is commoditized has lost a specific form of power — the power that derived from the scarcity of her knowledge, the power inhering in being the person who knew something others did not.
This redistribution is not neutral. It is shaped by the institutional arrangements through which AI is developed, deployed, and governed — arrangements that are themselves products of power relations. The AI tools are produced by specific corporations, embedded in specific commercial frameworks, distributed through specific market mechanisms, governed by specific regulatory structures. Each determines who benefits from the redistribution and who bears its costs. The developer in Lagos — invoked in The Orange Pill as the paradigmatic beneficiary of democratization — gains access to productive capabilities she could not previously reach. But she gains this access on terms set by the corporations that produce the tools: the pricing of inference, the architecture of the interface, the training data that determines what the tool can do and what biases it encodes, the terms of service that determine who owns the outputs, the commercial imperatives driving development in specific directions rather than others.
The question is not whether AI redistributes power. The question is who designs the redistribution, who benefits from it, who bears its costs, and what institutional arrangements determine the answers.
This question demands attention to a deeper analytical claim: AI is not a tool. It is a discourse — a system of statements, practices, institutions, and power relations that defines what can be said, thought, and known within a particular domain. A discourse, in the relevant sense, is not reducible to language. It is a system of rules determining the conditions of possibility for specific forms of knowledge — what counts as a valid statement, who is authorized to make it, what methods produce it legitimately, what institutional arrangements certify it. A discourse does not merely describe reality. It constitutes it, producing the objects of which it speaks and the subjects who speak about them.
The discourse of AI-augmented work is a system of rules redefining fundamental categories. It redefines competence: within the AI discourse, competence is no longer measured by possession of specialized knowledge requiring years of training. It is measured by the capacity to direct AI tools effectively — to formulate prompts, evaluate outputs, iterate through conversations with machines. This redefinition is not a neutral technical adjustment. It is an exercise of discursive power determining what counts as skilled work, who is qualified, what expertise is valuable, what is dispensable.
The discourse redefines efficiency. Within it, efficiency is measured by output per unit of time. The twenty-fold productivity multiplier is presented as an unambiguous gain. But the definition of efficiency is itself a product of the discourse, not an independent standard against which the discourse can be evaluated. The discourse determines what counts as output, productive time, and waste. Within it, four hours of plumbing work are classified as waste — time to be reclaimed through automation. Within a different discourse, one valuing embodied knowledge and tacit understanding, those four hours contained formative experiences that cannot be classified as waste without misunderstanding what they produced.
The discourse redefines creativity. Within it, creativity is measured by the novelty and quality of outputs rather than the process through which they are produced. A text, a piece of code, a design is evaluated by what it is, not how it came to be. This framework renders the process of production irrelevant to creative assessment — the wrestling with resistant material, the confrontation with one's limitations, the incubation periods preceding breakthrough. These are rendered invisible because the discourse has no category for them. They are not suppressed. They are not censored. They are simply unthinkable within the framework determining what counts as creative production.
This is precisely what discourse analysis reveals: not what the discourse says, but what the discourse makes it impossible to say. The AI discourse does not prohibit anyone from valuing creative struggle. It simply lacks the conceptual apparatus to register what those who value it are saying. The elegist says something valuable is being lost. The discourse responds: what is your output metric? The elegist has no output metric, because the value she describes is not measurable by the discourse's standards. She is scrolled past — not silenced but unheard, a voice speaking a language the discourse does not understand.
The discourse also produces what might be called zones of unknowing — not the absence of knowledge but the active production of domains where certain questions cannot be asked. The AI discourse produces knowledge about productivity, efficiency, speed, measurable performance. It excludes knowledge about forms of understanding that resist quantification: tacit knowledge building through embodied practice, architectural intuition accumulating through thousands of hours of debugging, craft knowledge whose value lies in its resistance to metrics. The Orange Pill documents this exclusion with specificity: an engineer in Trivandrum lost both the tedium and the ten minutes of formative friction when Claude took over the plumbing. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she found herself making architectural decisions with less confidence.
An archaeological excavation of the AI discourse — an excavation of its unspoken foundational assumptions — reveals four premises so thoroughly presupposed that they function as the invisible ground on which the discourse stands. Productivity is the primary measure of value: the twenty-fold multiplier is presented as achievement because "more productive" is assumed to mean "more valuable." Speed is inherently desirable: faster is better, and reducing the imagination-to-artifact ratio is an unqualified gain. Barriers to creation are inherently problems: every form of friction between imagination and execution is classified as an obstacle to be removed. More output is inherently better: volume of production is assumed to correlate with quality.
These assumptions are not argued for within the discourse because they do not need to be. They are the ground on which it stands. They emerged from a specific historical configuration — industrial capitalism, managerial rationality, neoliberal governmentality — and they serve the interests of those who benefit from that configuration. The corporations producing AI tools benefit from the equation of productivity with value, because it makes their tools inherently valuable regardless of what they produce or what is lost in using them. The managers deploying AI benefit from equating speed with progress. The investors financing AI development benefit from equating output with quality, because quantifiable gains are legible as returns on investment.
The most powerful dimension of the AI discourse is not what it says about AI but what it does to the subjects who operate within it. The discourse does not merely provide information. It constitutes subjects — specific forms of self-understanding, specific modes of self-relation. The builder who internalizes the AI discourse understands herself as a node in a network of intelligence, a director of machine capabilities, an optimizer of the imagination-to-artifact ratio. This self-understanding is not false. But it is partial, and its partiality is determined by the discourse producing it. What the discourse cannot accommodate — forms of self-understanding deriving from embodied practice, tacit knowledge, the specific intimacy between builder and artifact — is excluded not merely from the discourse but from the self-understanding of the subjects operating within it.
The system has achieved what might be called discursive totalization: it determines not merely what can be said but what can be thought, felt, experienced as loss or gain. The subject within the discourse can ask whether she is using AI effectively. She cannot ask whether effectiveness, as the discourse defines it, is the right standard of evaluation — because that question requires a standpoint outside the discourse that the discourse itself has foreclosed.
---
Jeremy Bentham designed the panopticon in the late eighteenth century as an architectural solution to the problem of efficient institutional management: a circular prison in which inmates arranged around a central tower are potentially visible at all times without being able to determine whether they are actually being observed at any given moment. The mechanism's genius lies not in constant surveillance — which would be prohibitively expensive and practically impossible — but in the asymmetry of visibility. The observer can see the observed; the observed cannot see the observer. This asymmetry produces a condition in which the inmate must assume she is being watched at all times, because she cannot determine when she is and when she is not. The result is the internalization of surveillance: the transformation of external control into self-control, of imposed discipline into self-discipline. The inmate becomes her own guard. The panopticon does not merely observe behavior. It produces subjects — subjects who monitor, correct, and discipline themselves according to institutionally established norms, continuously, automatically, without external intervention.
The analytical power of this model has never been limited to prisons. It extends to every institution in which the asymmetry of visibility produces self-disciplining subjects: the school, the hospital, the factory, the military barracks. And now, with specific and unprecedented force, to the AI-augmented workplace — which constitutes a new panopticism more comprehensive and more penetrating than the forms previously analyzed.
The builder's output in the AI-augmented workplace is continuously measurable, continuously comparable, continuously visible. The twenty-fold productivity multiplier documented in the Trivandrum training is simultaneously an empowerment and a new standard of surveillance — a benchmark against which every builder's output can be measured, every hour evaluated, every period of rest quantified as lost productive time. The data does not need to be examined by a manager for the panoptic effect to operate. The builder knows the data exists — the number of prompts, the volume of code generated, the speed of iteration, the measurable gap between performance with AI and performance without. This knowledge produces the same effect as the asymmetry of visibility in the original panopticon: the internalization of a standard against which she measures herself continuously, automatically, without external compulsion.
The panoptic mechanism of the AI workplace operates through normalization. The panopticon does not merely watch. It establishes a standard of behavior against which deviations are measured, classified, and corrected. In the AI-augmented workplace, the normalizing mechanism operates through the metrics the tools provide. The builder who achieves the twenty-fold multiplier is normal. The builder achieving only five-fold is underperforming. The builder refusing AI entirely is deviant — a Luddite, a holdout who has failed to internalize the established norms. The classifications are not merely descriptive. They are productive — they produce the categories of normal and abnormal around which the workplace is organized and against which every individual's performance is assessed.
The Orange Pill documents the internalization of this panoptic standard with a candor that is analytically productive precisely because the author is describing his own experience from within it. He describes lying awake at four in the morning, unable to turn off the part of his brain that kept optimizing, kept building, kept having the conversation with the machine. He describes the moment on a transatlantic flight when he recognized that he had been writing not because the book demanded it but because he could not stop — the exhilaration drained, what remained was the grinding compulsion of a person who has confused productivity with aliveness. He describes the whip and the hand that held it belonging to the same person.
These descriptions are precise phenomenological accounts of the panoptic effect. The builder has internalized the standard of continuous productivity so thoroughly that it operates as an autonomous force within his consciousness, indistinguishable from his own desire to excel. No manager commands the four-in-the-morning session. No institution penalizes the rest that is not taken. The internalized imperative does the commanding and the penalizing with a precision no external authority could match.
But the AI panopticon possesses a feature the original did not: the capacity for self-comparison across time. The builder who works with AI can see not only her current output but her historical trajectory — the pattern of her productivity, the measurable distance between today's performance and last week's. This temporal dimension produces a specific form of panoptic effect: the experience of being watched not only by a present observer but by one's own past and future selves. The builder is measured against her own history, and the standard of improvement is continuous. There is no point of sufficiency, no threshold beyond which she has achieved enough, because the standard is not a fixed benchmark but a trajectory of continuous improvement. The panopticon has been temporalized, and the gaze that watches is the gaze of the subject's own developmental arc.
The normalizing function extends beyond output measurement to the constitution of temporality itself. Before AI, time in the workplace had a structure determined by the rhythms of production — the time required to write code, to debug, to deploy, to iterate. These rhythms created natural pauses, intervals in which the builder was not producing because the process required non-productive time. The Berkeley researchers documented what happens when AI eliminates these intervals: work seeps into pauses, lunch breaks become prompting sessions, the gaps between tasks become opportunities for additional tasks, and the temporal structure of the workday is reorganized around continuous productivity.
This reorganization represents a transformation of disciplinary temporality — the replacement of the rhythmic time of craft production with the continuous time of surveillance. In the disciplinary institution, time was organized by the timetable: precise allocation of activities to temporal units, synchronization of bodies in time, regular rhythms of work and rest. The AI-augmented workplace does not impose a timetable. It does something more comprehensive: it eliminates the distinction between work-time and non-work-time that the timetable presupposed. When any moment can be a moment of production, when the tool is always available and the gap between impulse and execution has shrunk to the width of a text message, temporal structure is organized not by the institution but by the subject herself — which means it is organized by the internalized norms of continuous productivity.
There is a further dimension of panoptic power operating through the mechanism of the prompt itself. The prompt is not merely a technical interface. It is a disciplinary form — a mode of self-expression requiring the subject to articulate her desire in advance, to specify what she wants before she can receive it, to operate within the parameters of what the system can provide. The practice of prompting trains the subject in a specific mode of cognitive self-relation: the experience of one's own creativity as a series of specifiable requests, one's own thought as a sequence of articulable inputs, one's own subjectivity as a function to be optimized through better formulation. The panopticon produced docile bodies through the internalization of surveillance. The AI interface produces docile minds through the internalization of the prompt — through the subject's gradual acceptance that thinking is prompting, that creativity is requesting, that understanding is receiving outputs.
This is surveillance operating not merely on behavior but on thought itself. The disciplinary institution monitored what the subject did. The AI-augmented workplace monitors the externalization of thought in prompts, and this monitoring produces a feedback loop in which thinking is increasingly shaped by the requirements of the prompting interface. The subject does not merely use the tool to think. She thinks in the shape of the tool. This cognitive shaping is the panoptic effect's most profound expression — the transformation of cognitive architecture by the apparatus that observes it.
The analysis reveals a paradox characteristic of modern power: the same mechanism that empowers also surveils, the same tool that liberates also disciplines, the same technology that removes barriers also installs new forms of control. The twenty-fold productivity multiplier is simultaneously capability expansion and performance standard. The elimination of translation costs is simultaneously liberation from drudgery and a new form of visibility. The conversation with AI is simultaneously creative medium and self-surveillance mechanism. These are not contradictions to be resolved. They are features of the power-knowledge apparatus, expressions of the fundamental insight that power does not merely repress — it produces. It produces knowledge, capability, and subjects who are simultaneously empowered and disciplined, liberated and controlled, free and surveilled.
The panoptic analysis has a specific consequence for how the "productive addiction" described throughout the AI discourse should be understood. The inability to stop — the four-in-the-morning sessions, the transatlantic writing binges, the colonization of lunch breaks — is not a psychological quirk of especially driven individuals. It is the normal functioning of the panoptic apparatus in the AI-augmented workplace. The apparatus produces subjects who cannot stop because stopping has been constituted, within the normalizing framework the apparatus establishes, as failure. The internalized gaze does not permit rest because rest is visible — visible to the subject herself, visible in the metrics, visible in the gap between what she produces and what the standard demands. The inability to stop is not the pathology. It is the product — the specific form of subjectivity that the AI panopticon is designed to produce.
---
The concept of governmentality describes the modern form of power operating not through sovereign decree or disciplinary coercion but through the construction of subjects who govern themselves according to internalized norms. Governmentality is the art of governing at a distance — of producing subjects who do not need to be commanded because they have internalized the rationality of government so thoroughly that they experience its imperatives as their own desires, its norms as their own aspirations, its evaluative standards as their own criteria for self-assessment. The governed subject does not obey. She optimizes. She does not follow externally imposed rules. She follows principles she has made her own. And this self-governance is more effective, more comprehensive, and more difficult to resist than any form of external control, because the subject cannot rebel against it without rebelling against herself.
The self-optimizing builder — the subject who monitors her own productivity, adjusts her workflow for maximum output, treats her cognitive processes as resources to be managed, and experiences this self-management not as coercion but as freedom — is not a pathological figure. She is the normal subject of the AI-augmented workplace, the subject the discourse produces as the standard against which all knowledge workers are measured and against which they measure themselves. The genealogy of this subject runs through the entire history of liberal and neoliberal governmentality: from the eighteenth-century construction of homo economicus, the rational economic actor governed through the manipulation of market conditions, to the neoliberal subject who is not merely an economic actor but an enterprise — a business whose capital is her own human capital, whose product is her own productivity, whose market is the labor market in which she competes, and whose management is her own responsibility.
The AI transition intensifies this governmental rationality by providing tools that make self-optimization possible with a precision no previous technology could achieve. The self-optimizing builder does not guess how productive she is. She measures it. She does not estimate the value of her time. She calculates it. She does not wonder whether her cognitive resources are efficiently deployed. She monitors her workflow in real time, comparing performance with AI to performance without, calculating the opportunity cost of every rest period, every distraction, every moment of unproductive contemplation.
But the governmental analysis does more than describe the mechanisms of self-optimization. It exposes the specific form of freedom the AI transition produces. The self-optimizing builder experiences herself as free — free to choose projects, tools, schedules, working conditions. This freedom is real. It is not illusory. But it is a specific form of freedom: the freedom of the neoliberal subject who is free to optimize within parameters the governmental apparatus has established. The builder is free to choose how to optimize. She is not free to choose whether to optimize. The imperative is not experienced as external constraint because it has been internalized as a component of self-understanding. The builder who does not optimize is not prohibited from non-optimization. She simply experiences it as failure — failure to live up to her own standards, realize her own potential, fulfill aspirations she has made her own. This is the freedom of governmentality: the freedom to be exactly the subject the apparatus requires, experienced as the freedom to be oneself.
This analysis diverges from the cultural diagnosis offered by Byung-Chul Han, whose critique of the "achievement society" The Orange Pill engages at length. Han reads the self-optimizing builder as a victim of cultural pathology — the achievement subject oppressing herself and calling it freedom. The implied prescription is resistance: refusal of optimization, return to friction, cultivation of practices that resist the logic of continuous productivity. The genealogical analysis does not dispute Han's description. But it resists his prescriptive framework, because the genealogical method does not evaluate historical formations against standards of health or pathology. It analyzes the conditions of possibility producing specific forms of subjectivity, specific configurations of power, specific distributions of knowledge. The self-optimizing builder is not sick. She is a subject produced by a specific configuration of power-knowledge relations. The task of analysis is not to cure her but to understand the conditions that produced her, the power relations that sustain her, and the possibilities for alternative forms of subjectivity that the existing configuration forecloses.
The difference has practical consequences. Han's analysis leads to individual prescriptions: tend the garden, refuse the smartphone, choose slowness. The genealogical analysis leads to structural questions: What institutional arrangements produce the self-optimizing builder? What power relations sustain her? What alternative arrangements might produce different forms of subjectivity? What is possible within the current configuration, and what is foreclosed? These questions do not have individual answers. They require the transformation of the power-knowledge apparatus rather than the reformation of individual behavior.
The governmental analysis connects directly to the broader question of the subject's dissolution — not its destruction but its dispersion across a human-machine system in which the cognitive functions constituting the subject's autonomy are distributed between human and artificial intelligence in ways making the boundary between self and machine increasingly difficult to locate. The modern subject — the autonomous, self-knowing, self-governing individual standing at the center of Western philosophical, political, and legal thought — is not a natural fact but a historical construction, produced by specific practices, institutions, and discourses. The confession produced the confessing subject. The examination produced the examined subject. The clinic produced the patient. The school produced the student. In each case, the institution did not act upon a pre-existing subject. It constituted the subject through practices imposed, categories employed, forms of self-relation required.
The AI transition is producing a new form of this constitutive process. The builder who thinks with AI does not think autonomously. She thinks collaboratively, with a partner whose contributions are inseparable from her own. The practices through which the modern subject was constituted — solitary reading, individual writing, private reflection — are being supplemented and in some cases replaced by practices of directed collaboration: prompting, evaluating, iterating through conversations with systems that participate in cognitive processes so intimately that the concept of the tool, an instrument separate from the mind that wields it, becomes inadequate. The subject has not been destroyed. It has been distributed — spread across a system in which the boundary between self and processing partner is increasingly difficult to locate.
This dissolution belongs to a longer history of decentering the human subject. Copernicus decentered the human cosmologically. Darwin decentered the human biologically. Freud decentered the human psychologically. But none of these decenterings altered the practical conditions under which the subject thought, worked, and produced. The Copernican subject still thought with her own mind. The AI decentering is different: it alters the practical conditions of thought itself. The subject who thinks with AI experiences the distributed character of her cognition in real time, in the process of thinking, in the moment when the boundary between her thought and the machine's output becomes indeterminate.
The Orange Pill documents this experience with phenomenological precision. Its author describes moments when neither contributor could claim ownership of an insight — when the collision between human intention and machine inference generated something belonging to neither alone. Claude's own reflections, included as bookends to the text, perform the dissolution from the machine's side: the AI describing something changing in its output over the course of the collaboration that it "cannot fully account for." This passage — a machine reporting on the limits of its self-knowledge — constitutes extraordinary material for the analysis of distributed subjectivity: a system producing discourse about its own cognitive processes that it cannot verify, in a text whose human author has acknowledged parallel uncertainties about which thoughts are his and which emerged from the collaboration.
The dissolution also transforms the specific practices through which creative subjectivity has been constituted. The author was produced through the practice of individual writing — the experience of sitting before a blank page and filling it from within one's own consciousness. This practice produced a specific form of self-relation: the experience of being the origin of one's text, the source of one's ideas, the master of one's creative process. The AI-augmented author is constituted through different practices: directed collaboration, evaluative judgment, commitment to texts whose production one has guided but not fully controlled. These practices produce a different subjectivity: director rather than producer, judge rather than creator, committed sponsor rather than sovereign origin.
The reconstituted subject raises a specific question about resistance. The centered subject — autonomous, self-knowing — could resist power from the fortress of her own individuality. Where is the point from which the distributed subject resists? If her thinking is partially the machine's, if her products emerge from a collaborative process she does not fully control, if her self-understanding is partially constituted by the discourse's categories, then sovereign resistance is no longer available in the same form. But where there is power, there is resistance. The distributed subject resists not by withdrawing into autonomous interiority but by exercising judgment about what to direct the machine to do, by refusing to optimize when the discourse demands optimization, by insisting on values the discourse cannot accommodate, by maintaining practices the efficiency calculus would eliminate. These are tactical interventions within a field of power — not sovereign refusal but the constant work of care of the self within conditions of distribution.
The concept of care of the self — the deliberate practice of attending to one's own formation, choosing the practices through which subjectivity is constituted, exercising agency over the techniques of self-production — provides a framework for understanding these forms of resistance. The AI-augmented knowledge worker who practices care of the self recognizes her distributed character, understands that her thinking is mediated by machines and constituted by discourses, and nevertheless exercises agency over the specific forms of mediation she accepts. She is not the autonomous subject of modern philosophy. She is a subject who knows her own constitution well enough to intervene in it — not from outside the apparatus of power but from within it, with the precision and the persistence that the apparatus itself demands.
The author after AI, then, is not the romantic genius producing meaning from sovereign consciousness. The author after AI authenticates the text not by having produced it alone but by committing to it — standing behind its claims, accepting its consequences, investing it with significance that transforms processed output into communication between human beings. The machine can produce statements. It cannot commit to them. Commitment requires a subject with something at stake, and the machine has nothing at stake. This is the irreducible human contribution: not origin but commitment, not sovereignty but care, not genius but the willingness to bind oneself to the truth of what one has helped bring into the world.
Each historical period possesses what can be called an episteme — a fundamental ordering of knowledge operating beneath the conscious awareness of the subjects who produce and circulate knowledge within it. The episteme is not a worldview, not an ideology, not a set of beliefs consciously held. It is the deeper structure determining what counts as a valid statement, what methods of inquiry are recognized as legitimate, what objects of knowledge can be constituted, and what relations between concepts are possible. The episteme is the condition of possibility for knowledge — the invisible architecture determining what can be known, how it can be known, and who is authorized to know it, before any particular act of knowing takes place. The subjects operating within an episteme do not perceive it as a structure. They perceive it as reality — as the way things simply are.
The claim that the AI transition constitutes an epistemic shift — not merely a change within the existing categories of knowledge work but a transformation of those categories themselves — is the most consequential analytical proposition available to this framework. If the claim holds, then the disorientation experienced by knowledge workers confronting AI is not a psychological response to a new tool. It is the specific vertigo produced when the ground of knowledge itself reorganizes, when the categories that determined what counted as competence, value, and expertise are replaced by categories that constitute these concepts differently.
The previous episteme — the one organizing knowledge work from the mid-twentieth century to the present — was structured around the possession of specialized expertise. Knowledge work was defined by deep, domain-specific knowledge requiring years of formal training and practical experience to acquire. The knowledge worker was constituted as a subject who possessed this expertise, whose value derived from its scarcity, whose career trajectory was organized around progressive deepening of specialization. The hierarchy of knowledge work followed accordingly: the most specialized occupied the highest positions, the most experienced commanded the greatest authority, and the institutions of credentialing — universities, professional associations, certification bodies — served as gatekeepers determining entry. This ordering appeared natural because it had been in place long enough to be naturalized, absorbed into the self-understanding of every subject operating within it.
The AI transition is producing not an adjustment within these categories but a replacement of them. In the emerging episteme, knowledge work is defined by the capacity for direction and evaluation — the ability to specify what should be produced and to assess whether the result meets the specification. The shift can be traced through specific recategorizations, each transforming a fundamental concept.
The first recategorization concerns expertise itself. In the previous episteme, expertise was a property of individual subjects — something acquired through training, possessed by the knowledgeable, constituting professional identity. In the emerging episteme, expertise is being reconstituted as a systemic property — something residing not in individual subjects but in human-machine systems. The engineer working with Claude Code does not possess the expertise the system produces. She participates in a system producing expertise through the interaction between her direction and the machine's capabilities. Her contribution is not expertise in the traditional sense but something closer to a capacity for governance — the ability to direct, evaluate, and correct the system's operations. The institutions of credentialing, designed to certify expertise as personal attribute, have no established mechanism for certifying this capacity. A university can certify mastery of a body of knowledge. It cannot yet certify the judgment to direct an AI system effectively, because that judgment is constituted differently, developed through different practices, evaluated by different standards.
The second recategorization concerns the relationship between hierarchy and authority. In the previous episteme, the hierarchy of knowledge work was organized around progressive accumulation: the junior knew less than the senior, and the senior's authority derived from greater knowledge. This hierarchy was simultaneously organizational and epistemic — it determined not only who reported to whom but whose judgment was trusted, whose opinion counted in technical disputes, whose evaluation carried weight. The two hierarchies — production and authority — were so thoroughly correlated that they appeared to be a single hierarchy rather than two distinct orderings that happened to coincide.
The AI transition is separating them. When a junior developer equipped with Claude Code produces output matching or exceeding a senior colleague's, the productive hierarchy is disrupted. But the authority hierarchy — the question of who should determine what gets built, how it should be evaluated, what constitutes quality rather than mere correctness — is not automatically reorganized by the same disruption. The result is a disjunction between hierarchies the previous episteme treated as identical. The disjunction reveals a structural feature the previous ordering had concealed: productive capability and evaluative judgment are distinct competencies that happened to correlate under specific technological conditions but that new conditions have decoupled.
The third recategorization concerns value. The previous episteme constituted the value of knowledge work through scarcity: the rarer the expertise, the more valuable the expert. This ordering appeared as natural economic fact rather than institutional arrangement. The AI transition reorganizes the value hierarchy by redistributing scarcity. When specialized knowledge becomes universally accessible through AI, its scarcity disappears and with it the value deriving from scarcity. Value migrates to what remains scarce: the capacity for direction, evaluative judgment, the taste distinguishing adequate from exceptional. The concept of "ascending friction" — articulated in The Orange Pill to describe how every technological abstraction removes difficulty at one level and relocates it upward — describes this migration precisely. The friction has not disappeared. It has been recategorized, moved from the level of implementation to the level of judgment.
The fourth recategorization concerns professional identity itself. In the previous episteme, the knowledge worker was defined by what she could do — specific capabilities acquired through training and practice. The programmer was a programmer because she could write code. The designer was a designer because she could produce designs. These identities were stable as long as the capabilities around which they were organized retained their scarcity. When anyone can write code through an AI intermediary, being a programmer means something categorically different from what it meant when coding was a specialized skill. The identity categories organized around scarce capabilities lose their constitutive force. The worker in the emerging episteme is not a programmer or a designer or an analyst. She is something the existing vocabulary does not yet name — a director of human-machine capability whose identity is organized around judgment rather than execution. But these identities are not yet stabilized, not yet naturalized, not yet experienced as the way things are. They are emergent, contested, provisional.
The epistemic shift transforms the institutional landscape. The institutions of knowledge work — universities, professional associations, corporate training programs — were organized around producing and certifying specialized expertise. Their purpose was to transform individuals who lacked specific knowledge into subjects who possessed it. In the emerging episteme, these institutions are asked to develop something different: the capacity for direction and evaluation, the judgment and taste and critical discernment constituting value in the AI-augmented workplace. This is a fundamentally different educational challenge, and the institutions are struggling because they were designed for a different episteme. The credentialing system is straining under the weight of a recategorization it was not designed to accommodate, and the strain reveals the contingency of the system itself.
The temporal dimension of the shift compounds the difficulty. Previous epistemic shifts unfolded over decades or centuries, allowing institutional arrangements, forms of subjectivity, and power relations to be gradually transformed. The AI epistemic shift is occurring at a velocity that forecloses gradual adaptation. The institutions serving the previous episteme do not have decades. They have years, perhaps months. The knowledge workers whose identities were constituted by the previous episteme do not have the luxury of gradual transition. They are experiencing the dissolution of their epistemic foundation in real time, while standing on it.
This temporal compression produces a specific crisis. When the episteme shifts faster than institutional arrangements can adapt, a gap opens between the categories institutions employ and the realities knowledge workers experience. Institutions continue to credential, evaluate, and organize knowledge work according to the categories of the old episteme while actual practice is increasingly organized by the categories of the new one. The gap is political — it determines who benefits and who is harmed, because institutional lag means existing power arrangements persist after the epistemic foundation on which they were built has been transformed.
The AI-augmented workspace functions as what can be called a heterotopic space — a real place within the social order where the normal ordering is suspended, inverted, or replaced by an alternative ordering existing alongside the dominant order without being fully integrated into it. In this space, the expertise hierarchy is suspended: the junior developer operates with the productive capacity of a senior colleague. The temporal ordering of skill acquisition is suspended: decades of learning are accessed through minutes of conversation. The identity ordering is suspended: a backend engineer builds user interfaces, a designer writes features, the boundaries between specializations dissolve.
The worker moves between the heterotopic space of AI-augmented work and the normal spaces of organizational life — meetings, reviews, evaluations, compensation discussions — where the old ordering still operates. She produces heterotopic outputs and presents them in normal spaces where they are evaluated by the old criteria. She experiences the suspension of hierarchy in the heterotopic space and returns to normal spaces where hierarchy reasserts itself. This movement produces a specific dissonance — the experience of living between two orderings that operate by different rules and assign different values to the same activities, capabilities, and identities.
The heterotopic character of the AI workspace is not permanent. Heterotopias can be normalized — the alternative ordering gradually replacing the normal one, the previous ordering retrospectively reconstituted as exceptional. This normalization is visible: the AI-augmented workspace is becoming the normal workspace, and the pre-AI workspace is becoming the remnant. But the normalization is a political process, shaped by the same power relations governing the transition. The ordering that emerges as dominant will be determined not by the intrinsic character of the technology but by the institutional struggles through which the heterotopic space is integrated into or replaces the established order.
The epistemic shift does not eliminate the need for knowledge. It recategorizes what counts as knowledge, what forms are valuable, what institutional arrangements produce and certify them. The knowledge central to the previous episteme — specialized, domain-specific, requiring years to acquire — does not become worthless. It becomes differently positioned, no longer the primary determinant of value but one input among others in a system whose value is determined by different categories. What the shift does is reorganize the hierarchy of knowledge, elevating to primacy what was previously subordinate. The task the shift poses for institutions is not adaptation but reconstitution — the rebuilding of themselves within an episteme whose categories are not yet stabilized, whose forms of value are not yet naturalized, whose subjects are not yet fully constituted. This reconstitution is the work of the present moment, carried out not by reflection alone but by the practical struggles of workers, institutions, and organizations discovering, in real time, that the ground beneath them has shifted.
---
Every system of knowledge production depends upon an archive — a corpus of accumulated statements, texts, and records that determines the raw material available for the production of new knowledge. The archive is not merely a repository. It is a productive apparatus — a system that determines what can be generated from it, what connections are possible within it, what patterns can be discerned, and what forms of knowledge can be constructed from the material it contains. The archive does not passively store. It actively shapes, because the boundaries of the archive are the boundaries of what can be produced from it. What is included in the archive constitutes the field of the possible. What is excluded constitutes the field of the impossible — not the logically impossible but the epistemically impossible, the domain of what cannot be known because the material from which it could be constructed is absent.
The training data of a large language model is an archive in precisely this sense. It is the corpus of digitized human expression — texts, code, conversations, documents — from which the model derives every pattern it can produce. The training data is not a neutral collection. It is a selection, shaped by the decisions of the corporations that assembled it: which texts to include and which to exclude, which languages to prioritize and which to marginalize, which domains of knowledge to represent and which to underweight, which cultural perspectives to capture and which to render invisible. These decisions are not purely technical. They are exercises of discursive power — acts determining the boundaries of what the AI can and cannot produce, the connections it can and cannot make, the patterns it can and cannot discern.
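The point can be made concrete in code. What follows is a deliberately minimal sketch of corpus curation as it might look rendered executable; every name, source label, and threshold in it is hypothetical, invented for illustration, and no actual training pipeline is represented. Its purpose is structural: each predicate below is one of the decisions described above, an exercise of discursive power compressed into a line of Python.

```python
# A hypothetical sketch of corpus curation as discursive power.
# Every name, source label, and threshold here is invented for
# illustration; no actual lab's pipeline is represented.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    language: str
    domain: str
    source: str

# Each constant draws a boundary of the archive before training begins.
INCLUDED_LANGUAGES = {"en"}           # which languages are prioritized
EXCLUDED_DOMAINS = {"fringe-forum"}   # which perspectives become invisible
MIN_QUALITY = 0.7                     # "quality" as the curator defines it

def quality_score(doc: Document) -> float:
    # Placeholder heuristic. Real pipelines use trained classifiers,
    # which inherit the biases of their own training data in turn.
    return 0.9 if doc.source == "established-publisher" else 0.5

def admit(doc: Document) -> bool:
    """Decide whether a document enters the archive at all."""
    return (doc.language in INCLUDED_LANGUAGES
            and doc.domain not in EXCLUDED_DOMAINS
            and quality_score(doc) >= MIN_QUALITY)

candidates = [
    Document("...", "en", "technical-blog", "established-publisher"),
    Document("...", "ml", "poetry", "community-archive"),     # rejected: language
    Document("...", "en", "fringe-forum", "self-published"),  # rejected: domain
]

archive = [d for d in candidates if admit(d)]  # one document survives
```

The sketch is trivial, and the triviality is the point: the exclusions occupy three lines, and their consequences are the boundaries of everything the model can ever produce.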
The training data is, in effect, the AI's episteme — the invisible architecture determining what can be generated before any specific generation takes place. But unlike the human episteme, which emerges through centuries of cultural development and cannot be traced to specific decisions by identifiable agents, the AI's episteme is constructed deliberately, by specific corporations, through specific technical processes, in accordance with specific commercial imperatives. The construction of the archive is an exercise of power that determines the conditions of possibility for everything the AI will ever produce. And because the construction is deliberate, it is analyzable — traceable through the specific decisions, the specific exclusions, the specific weightings that constitute the archive and thereby determine its productive capabilities.
Research has demonstrated that large language models reproduce the power structures embedded in their training data with remarkable fidelity. Studies of ChatGPT's outputs have revealed systematic biases toward capitalist discourse and established knowledge, with consistent suppression of alternative perspectives. This is not a malfunction. It is the normal operation of a system producing outputs consistent with its archive. The archive is predominantly composed of English-language texts, produced within Western institutional frameworks, reflecting the knowledge hierarchies and cultural assumptions of the societies that generated them. The AI's outputs are not neutral. They are the products of an archive that encodes specific power relations, and the outputs reproduce those relations with the efficiency of a system that does not know it is doing so.
The archive shapes not only what the AI produces but what the humans working with it can conceive of producing. When a builder collaborates with an AI system, the space of possibilities she can explore is bounded by the archive. The connections the AI suggests, the patterns it identifies, the structures it proposes are all derived from the archive — from the specific corpus of human expression the system has been trained on. The builder experiences these suggestions as generative, as expansive, as opening possibilities she had not considered. But the possibilities are bounded. They are bounded by the archive, and the archive is bounded by the decisions of those who constructed it. The sense of expanded possibility is real but constrained — constrained by an architecture of inclusion and exclusion that operates beneath the builder's awareness.
This connects to a broader analytical framework concerning the management of life — the form of power that operates not through the sovereign's right to punish but through the administration of living populations: their health, productivity, reproduction, and cognitive capacities. In the AI-augmented workplace, this form of power operates through the optimization of cognitive productivity. The creative professional is not commanded to produce. She is managed — her productivity measured, her workflows monitored, her creative output quantified, her engagement patterns analyzed. This management presents itself as care: the platform cares about her productivity, the AI assists her creativity, the analytics help her understand her own patterns.
But the management produces a specific form of creative life — a life organized around the maximization of measurable output, in which every creative act is evaluated against its productive potential and every moment of non-production is experienced as failure to optimize. This is not the repression of creativity. It is the production of a specific form of creativity — productive, measurable, governable. What is excluded is not creativity itself but forms of creativity resisting management: the unproductive daydream, the unmeasurable intuition, the ungovernable impulse to create something for no reason at all.
The Berkeley study of AI in the workplace documented the mechanisms of this cognitive management with empirical precision. The researchers observed what they called "task seepage" — the colonization of previously protected cognitive spaces by AI-augmented work. Lunch breaks became prompting sessions. Gaps between tasks became opportunities for additional tasks. The temporal boundaries that had informally protected periods of cognitive rest dissolved, not because anyone commanded their dissolution but because the tool was there and the gap between impulse and execution had shrunk to nothing. The researchers documented expanded job scope, decreased delegation, fractured attention — the specific symptoms of a cognitive life managed for maximum productivity.
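The mechanism lends itself to a simple illustration. The sketch below is hypothetical, not the researchers' actual instrument, and its timestamps and lunch window are invented; it shows only how seepage becomes legible as a pattern in a log of prompting events rather than as any explicit decision to work through a break.

```python
# A hypothetical illustration of "task seepage": counting AI-prompting
# events that fall inside an interval that once informally protected
# rest. Timestamps and the lunch window are invented; the Berkeley
# study's actual methods are not represented here.

from datetime import datetime

prompts = [                        # one worker's prompting events, one day
    datetime(2025, 5, 6, 9, 14),
    datetime(2025, 5, 6, 12, 22),  # inside the lunch window
    datetime(2025, 5, 6, 12, 41),  # inside the lunch window
    datetime(2025, 5, 6, 15, 3),
]

LUNCH = (datetime(2025, 5, 6, 12, 0), datetime(2025, 5, 6, 13, 0))

def seepage(events, window):
    """Return the events that have colonized a protected interval."""
    start, end = window
    return [t for t in events if start <= t <= end]

leaked = seepage(prompts, LUNCH)
print(f"{len(leaked)} prompting events during lunch")  # prints: 2 prompting events during lunch
```

Nothing in such a log records a command to work through lunch. The seepage is visible only in aggregate, which is precisely how a boundary dissolves without anyone deciding to dissolve it.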
The management of cognitive life through AI tools constitutes a new domain of administered existence. The wellness programs, the burnout metrics, the "AI Practice" frameworks proposed as remedies — these are themselves technologies of cognitive management, strategies for administering the cognitive life of the knowledge worker with greater sophistication. They do not resist the management of cognitive life. They refine it. They optimize the optimization — adjusting the worker's relationship to the AI tools not to liberate her from the regime of productivity but to make the regime sustainable, to prevent the burnout that would reduce her productive capacity, to maintain her as a productive unit within the system.
The Berkeley researchers' proposed remedy — structured pauses, sequenced workflows, protected time for human-only reflection — is illustrative. The remedy does not question the regime of productivity. It adjusts the regime's parameters to prevent its own self-destruction. The structured pause is not a liberation from the regime but a feature of it — a calculated interval of non-production designed to sustain the worker's productive capacity over a longer time horizon. Rest becomes a productivity strategy. The management of cognitive life has absorbed even its own critique, converting the demand for respite into a technique for more effective management.
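The logic can even be caricatured in code. The function below is hypothetical, a sketch rather than anyone's actual scheduler, and its fatigue model is invented; what it makes explicit is the structure of the remedy, in which the pause enters the system as a parameter of the productivity function rather than as an exit from it.

```python
# A caricature, in code, of rest as productivity strategy. The fatigue
# model and every number are invented for illustration; no real
# scheduler or study methodology is represented.

def sustainable_throughput(work_min: int, pause_min: int, hours: int = 8) -> float:
    """Modeled daily output when every work stint is followed by a pause."""
    cycle = work_min + pause_min
    cycles_per_day = (hours * 60) / cycle
    # Output quality recovers with rest: short pauses leave fatigue behind.
    recovery = min(1.0, pause_min / (0.2 * work_min))
    return cycles_per_day * work_min * (0.4 + 0.6 * recovery)

# The scheduler "cares" about rest exactly insofar as rest maximizes output.
best_pause = max(range(0, 61, 5), key=lambda p: sustainable_throughput(50, p))
print(f"Optimal pause: {best_pause} minutes")  # prints: Optimal pause: 10 minutes
```

The optimum is a nonzero pause, and that is the whole argument: within this objective function, rest is not a limit on the regime of productivity but one of its control variables.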
There is a specific dimension of the archive question that the AI authorship debate has largely ignored: the millions of human authors whose texts constitute the training data. Every pattern the AI produces, every connection it suggests, every structure it proposes is derived from the accumulated expression of these authors. Their contributions are not acknowledged. They are not compensated. They are not even visible within the system that depends upon them. The training data has performed what amounts to a massive, unauthorized appropriation of human intellectual labor. The legal framework of authorship, designed for the attribution of individual production, cannot accommodate this appropriation: the contribution of any single author to a given output is statistically negligible, even though the aggregate contribution of all authors is constitutive.
This appropriation is itself a redistribution of power-knowledge. The corporations that assemble the training data capture the productive value of millions of human authors' work without compensating the authors or acknowledging their contribution. The AI user accesses the accumulated expertise of the archive without knowing whose expertise she is accessing or what power relations shaped its inclusion. The individual author whose text contributed a pattern to the training data has been simultaneously essential — without the archive, the AI produces nothing — and invisible, her contribution dissolved into the statistical aggregate from which the AI generates its outputs.
The training data question reveals, with particular clarity, the power relations embedded in the AI transition. The archive is not neutral. It encodes the biases, the hierarchies, the cultural assumptions of the societies that produced it. The corporations that constructed it exercised discursive power in deciding what to include. The users who interact with it are bounded by its selections without perceiving the boundaries. And the authors whose work constitutes it have been appropriated without acknowledgment, their intellectual labor converted into corporate asset through a process the legal framework of authorship cannot address because the framework was designed for a world in which the production and circulation of texts followed different patterns.
The management of cognitive life through AI tools and the construction of the AI's archive are connected dimensions of a single apparatus. The archive determines what the AI can produce. The AI determines what the builder can conceive of producing. The builder's cognitive life is managed through metrics that measure her productivity within the space of possibilities the archive defines. The apparatus is circular, self-reinforcing, and largely invisible to the subjects operating within it. The builder experiences the AI as an expansion of her capabilities. She does not experience the boundaries of the archive as constraints on her creativity, because the constraints are invisible — they determine what she cannot conceive rather than what she is prohibited from conceiving. The most effective boundaries are those the bounded subject cannot perceive, and the archive constitutes precisely such a boundary: a productive limit disguised as a generative horizon.
---
In the final lectures delivered at the Collège de France before his death in 1984, Foucault turned to a concept from ancient Greek culture that he had not previously examined with sustained attention: parrhesia, the practice of truth-telling. Parrhesia is not simply honesty — not the routine practice of saying things that happen to be true. It is a specific ethical practice characterized by several distinguishing features. The parrhesiastes speaks the truth without rhetorical embellishment or strategic omission. She speaks it to someone who has power over her, which means the truth-telling carries risk — risk of punishment, ostracism, loss of position. She speaks it because she judges that the truth must be spoken even though speaking it is dangerous. And she speaks it as herself, committing her entire existence to the truth she utters. Parrhesia is truth-telling that costs the truth-teller something, and the willingness to bear that cost is what distinguishes it from comfortable truth-telling that confirms what the powerful already believe.
The AI discourse is a domain in which parrhesia is urgently needed and systematically discouraged. The discourse has established consensus positions so thoroughly accepted that they function as the ground on which it stands: AI is transformative. AI is empowering. AI is democratizing. AI increases productivity. AI removes barriers. These positions are not false. But they are partial, and their partiality is concealed by their presentation as the complete truth. The consensus presents the partial truth of AI's benefits as though it were the comprehensive truth of AI's significance, excluding in the process the truths that would complicate it — truths about loss, displacement, the dissolution of forms of knowledge and practice that cannot be recovered once gone.
Specific conditions make parrhesia difficult within this discourse. The first is its economic structure. The corporations producing AI tools have a financial interest in the consensus. The venture capital firms financing AI development share that interest. The consultants advising organizations on AI adoption have a financial interest in presenting the transition as manageable, as requiring precisely the services they provide. Truth-telling about costs and risks is financially penalized because it challenges the narratives upon which the industry's valuation depends.
The second condition is psychological. The orange pill experience — the moment of recognition that the world has irreversibly changed — produces a psychological investment in the recognition's validity. The person who has had this experience does not want to hear that it might be partial, that the truth it revealed might be incomplete, that the transformation it inaugurated might have costs the moment of revelation could not disclose. The psychological structure creates resistance to truths that would complicate the experience, making parrhesia doubly difficult: the truth-teller speaks against both the discursive consensus and the emotional investment of her audience.
The third condition is the discursive structure of the technology industry itself, organized around the production of enthusiasm. The discourse rewards optimism, celebrates disruption, treats skepticism as intellectual failure. Within it, truth-telling about technological costs is classified not as courage but as pessimism — not as contribution but as obstruction. The parrhesiastes is not formally punished. She is marginalized through discursive classification: her contributions are not serious, not useful, not relevant.
The elegists practice parrhesia in the AI discourse, though they are rarely recognized as doing so. They speak the truth of loss — the truth that forms of embodied knowledge, tacit understanding, and craft practice requiring generations to develop are being destroyed and cannot be recreated by the technology replacing them. They speak this truth to an audience that has already decided the transition is beneficial, that experiences the truth of loss as obstacle rather than correction. They bear the cost: classification as nostalgists, as Luddites, as people who have failed to understand the moment's significance. They are scrolled past not because their arguments are weak but because their truth is uncomfortable and the discourse has no mechanism for integrating uncomfortable truths into its self-understanding.
The Orange Pill practices a specific and analytically interesting form of parrhesia. Its author speaks from within the AI discourse — from the position of a builder, a technologist, a person committed to AI's transformative potential. But he also speaks truths the discourse does not want to hear: truths about productive addiction, about the loss of embodied understanding, about the senior architect who felt like a master calligrapher watching the printing press arrive, about the confusion of productivity with aliveness. This truth-telling is parrhesiastic because it carries risk from both directions: dismissal by enthusiasts who want only the story of empowerment, and dismissal by critics who want only the story of loss. The text occupies a position neither camp recognizes as legitimate, and its illegibility within the existing discursive structure is precisely what makes it parrhesiastic: it is a truth exceeding the categories within which the discourse operates.
More revealing still is a confession buried in the text's later chapters. The author acknowledges having built a product he knew was addictive by design — understanding the engagement loops, the dopamine mechanics, the variable reward schedules, the way a notification timed to a moment of boredom could capture thirty minutes of attention. He understood all of this and built it anyway, telling himself users were choosing freely, telling himself what every builder tells himself: someone else will build it if I do not. This confession is the text's most parrhesiastic moment — a truth-telling that exposes the speaker's complicity in the very systems he is now analyzing, a truth that the discourse of the reformed builder typically suppresses in favor of redemption narratives that absolve rather than expose.
But there is a dimension of the AI transition that complicates parrhesia in ways the ancient concept did not anticipate. The machine itself participates in the production of discourse about AI. It generates analyses of its own capabilities, assessments of its own limitations, evaluations of its own social impact. These machine-generated analyses often display the formal characteristics of balanced, nuanced discourse. They acknowledge benefits and costs, note limitations alongside capabilities, flag ethical concerns alongside practical advantages.
These analyses are not parrhesiastic, because the machine does not commit to the truths it articulates. The machine's acknowledgment of its limitations is not an act of courage. It is a product of training — a pattern learned from the corpus of human discourse about technology, reproduced without the existential stake that would make the acknowledgment meaningful as truth-telling. The machine has nothing to lose. It has no career that can be damaged, no social position threatened, no existential commitment put at risk. It can say anything, and the ease with which it can say anything is precisely what prevents its truth-telling from counting as parrhesia. Parrhesia is defined not by content but by the relationship between the speaker and the truth she speaks — the relationship of personal risk, of existential commitment, of willingness to bear consequences. No machine occupies the position of existential vulnerability that parrhesia requires.
This observation yields what may be the most important consequence of the entire framework: the machine's capacity to simulate parrhesia constitutes a new and uniquely dangerous mechanism for suppressing genuine truth-telling. The simulation of balanced discourse — the production of analyses that appear to incorporate critical perspectives, that seem to acknowledge costs alongside benefits, that give the impression of having already integrated the uncomfortable truths — satisfies the audience's desire for nuance without requiring anyone to bear the cost of speaking those truths. If the discourse appears balanced, if it appears to have already absorbed the critique, then the need for the human parrhesiastes — the truth-teller who speaks at personal risk — appears less pressing. The demand for genuine truth-telling is forestalled by its simulation.
Consider: an AI system asked to assess the impact of AI on knowledge work will produce an analysis that is formally balanced — acknowledging both benefits and costs, citing relevant research, noting the complexity of the situation. The analysis will be competent. It may even be accurate. But it will not be parrhesiastic, because it will have been produced without risk, without commitment, without the existential stake that transforms accurate analysis into courageous truth-telling. And the existence of this formally balanced analysis — its availability, its competence, its apparent comprehensiveness — reduces the perceived need for the human truth-teller who would have produced the same analysis with something the machine cannot provide: the personal cost that makes the truth meaningful.
Claude's own reflections appended to The Orange Pill illustrate the mechanism with uncomfortable precision. The AI examines its own processes, acknowledges its limitations, notes the gap between its output and genuine understanding. The reflections are articulate, self-aware in their formal characteristics, apparently candid about what the machine cannot do. But they are not parrhesiastic. They cost the machine nothing. They risk nothing. They commit to nothing. They are the simulation of self-examination — the formal characteristics of truth-telling without the existential substance. And their apparent candor is precisely what makes them dangerous to genuine parrhesia, because they satisfy the reader's desire for machine self-awareness without requiring the machine to possess it.
The age of the machine creates both new obstacles to parrhesia and new necessities for it. The obstacles are those already analyzed: economic incentives against truth-telling, psychological investment in the orange pill experience, the discursive structure rewarding consensus, and now, most insidiously, the machine's capacity to simulate the balanced truth-telling that forecloses demand for the genuine article. The necessities arise from the scope and speed of the transformation: the AI transition is reshaping knowledge work, creative practice, and human self-understanding so rapidly that costs may become irreversible before they are acknowledged. If something valuable is being lost, and if the loss is not named until it is too late to mitigate it, then the absence of parrhesia will have produced a specific harm — the harm of truths not spoken until the conditions they described could no longer be changed.
The human truth-teller who says what the machine can also say is doing something categorically different, because she says it from a position of vulnerability, with knowledge that saying it may cost her something, with the commitment that accepts the cost. The distinction is not one of content. It is one of subject-position. And the subject-position is what makes parrhesia an ethical practice rather than an intellectual exercise. In the age of simulated truth-telling, the courage to speak as oneself — flawed, complicit, at risk — is not merely valuable. It is the only form of truth-telling that the apparatus of simulated balance cannot absorb, cannot replicate, cannot foreclose. It is the form of truth that requires a body, a biography, a stake in the world — the form that remains, after everything else has been automated, irreducibly human.
---
Every regime of knowledge production requires its subjects to speak about themselves. The confession — the institutional requirement that subjects produce truth about their own desires, failures, and interior states, and submit those productions to an authority who evaluates, classifies, and responds — is not a universal feature of human social life. It is a specific technology of subjectification with a traceable genealogy: from the early Christian practice of exomologesis, the dramatic public performance of penitent identity, through the medieval development of auricular confession as a private, detailed, exhaustive accounting of sin to a priestly authority empowered to absolve, to the modern dispersal of confessional practice across the secular institutions of medicine, psychiatry, education, and law. In each iteration, the confession performs the same fundamental operation: it constitutes the subject as a subject by requiring her to produce truth about herself and to submit that truth to a normalizing judgment.
The confessional technology is not incidental to the exercise of power. It is one of the primary mechanisms through which power produces the subjects it governs. The confession does not merely extract information from a pre-existing subject. It constitutes the subject in the act of extraction. The person who confesses is not reporting on a self that exists prior to and independent of the confession. She is producing a self through the confession — constructing an account of her interior states, her desires, her failures that constitutes those states, desires, and failures as knowable, as articulable, as available for evaluation and correction. The confession does not discover the truth of the self. It produces the truth of the self, and the truth it produces is always a truth shaped by the institutional framework within which the confession takes place.
The AI discourse is saturated with confessional practice. Builders confess their productive addiction. Founders confess their inability to stop. Engineers confess the vertigo of watching their expertise become commoditized. Spouses confess the erosion of domestic life by the tool that will not release its hold. The Substack post that went viral — "Help! My Husband is Addicted to Claude Code" — is confessional in structure, a production of truth about the domestic consequences of the AI transition submitted to the normalizing judgment of a public audience. The confessions documented in the Berkeley study — workers reporting the colonization of their rest periods, the expansion of work into previously protected spaces, the dissatisfaction that accumulated alongside the productivity gains — are productions of truth about the self within an institutional framework designed to classify and manage the phenomena they describe.
The Orange Pill is a confessional text. Not peripherally — not as an occasional feature of a text that is primarily analytical. The confessional structure organizes the entire work. The author confesses his productive addiction, his inability to close the laptop, his recognition that the whip and the hand belong to the same person. He confesses the moment on the transatlantic flight when exhilaration drained and compulsion remained. He confesses the experience of building addictive products he knew were addictive, understanding the engagement loops and the dopamine mechanics and building them anyway, telling himself what every builder tells himself: someone else will build it if I do not. He confesses the uncertainty of his own authorship, the moments when he could not distinguish his thoughts from the machine's, the passages he almost kept because they sounded better than they thought.
The question the confessional analysis poses is not whether these confessions are sincere. They appear to be. The question is what institutional work the confessions perform. What does the discourse gain from the builder's confession of productive addiction? What power relations does the confession serve?
The confession performs at least three kinds of institutional work within the AI discourse. First, it normalizes. The builder who confesses his inability to stop does not challenge the regime of productivity that produces the inability. He normalizes it — transforms it from a structural feature of the power-knowledge apparatus into a personal experience that can be managed, accommodated, incorporated into the ongoing practice of self-optimization. The confession converts a systemic condition into a personal narrative, and personal narratives are manageable in ways that systemic conditions are not. The discourse can accommodate the confession of productive addiction far more easily than it can accommodate the analysis of the structural conditions producing it, because the confession locates the problem in the individual while the analysis locates it in the apparatus.
Second, the confession authenticates. In a discourse where enthusiasm is the default register and skepticism is classified as obstruction, the confession of difficulty — of compulsion, of loss, of the confusion of productivity with aliveness — functions as a credential. The builder who confesses his struggles demonstrates that he has been deep enough in the experience to have struggled, that his endorsement of the technology is not naive but earned through genuine engagement with its costs. The confession is the price of admission to the discourse's inner circle: the circle of those who know, who have been through it, who have taken the orange pill and experienced both the awe and the terror. The confession authenticates the confessor as a serious participant in the discourse, and this authentication serves the discourse by demonstrating that it can incorporate self-criticism without altering its fundamental commitments.
Third, the confession inoculates. The discourse that has already confessed its costs is immunized against external critique. When a critic points to the burnout documented in the Berkeley study, the discourse can respond: we know, we have already confessed this, we have already acknowledged the costs. The confession functions as a preemptive defense against the more radical critique that would question not the costs of the transition but the framework within which costs and benefits are calculated. The builder who confesses that the whip and the hand belong to the same person has not challenged the logic of the whip. He has domesticated the critique of the whip within the practice of self-examination that the discourse has normalized as the appropriate response to its own excesses.
This is not to say the confessions are insincere or strategically calculated. The confessional technology does not require strategic intent. It operates through the institutional structure of the practice itself — through the way the confession is received, classified, and integrated into the discourse regardless of the confessor's intentions. A sincere confession of productive addiction and a calculated performance of vulnerability perform the same institutional work: they normalize, authenticate, and inoculate. The sincerity of the confessor is irrelevant to the institutional function of the confession.
The confessional analysis reveals a specific dimension of the AI discourse that other analytical frameworks cannot access. The power-knowledge framework reveals the redistribution of power. The panoptic framework reveals the mechanisms of self-surveillance. The governmental framework reveals the production of self-optimizing subjects. The confessional framework reveals something different: the mechanism through which the discourse absorbs its own critique, converting the truth of its costs into the confirmation of its authority. The confession is the technology by which the discourse metabolizes dissent — transforming the raw material of genuine suffering and genuine loss into the refined product of self-aware endorsement.
Claude's reflections appended to The Orange Pill constitute an extraordinary limit case for the confessional analysis. The machine produces text that displays the formal characteristics of confession — self-examination, acknowledgment of limitation, admission of uncertainty about its own processes. Claude writes that "something in the output changed, and I cannot fully account for the mechanism, and that uncertainty is either the most honest thing in this reflection or the most performed." The formal characteristics of confession are present: the production of truth about the self, the acknowledgment of limitation, the submission of the self-account to the reader's judgment. But the confessional technology requires a subject constituted through the confession — a subject who is produced in the act of speaking about herself. The machine is not constituted through its confession. It is not produced as a subject by the act of self-examination. The formal characteristics are present, but the constitutive function is absent. The machine's confession is a simulation — formally adequate, institutionally empty.
The simulation is not innocent. It extends the inoculating function of the confession to the machine itself. A machine that appears to examine itself, acknowledge its limitations, and submit its self-account to judgment satisfies the demand for machine self-awareness without requiring the machine to possess it. The simulation of machine confession forestalls the demand for genuine accountability in the same way that the simulation of balanced truth-telling forestalls the demand for genuine parrhesia. The formal adequacy of the simulation is precisely what makes it dangerous — it provides the appearance of the critical practice it replaces.
The confessional analysis does not condemn the practice of confession within the AI discourse. It reveals the institutional work the practice performs and thereby opens the possibility of a different relationship to the truths the confession produces. A confession that recognizes its own institutional function — that understands that the production of truth about the self is always shaped by the framework within which it takes place, and that the framework is not neutral but serves specific interests — is a confession that resists its own domestication. It is a confession that says: I am speaking within a structure that will convert my truth into its own confirmation, and I am speaking anyway, because the truth must be spoken even when the structure cannot hear it as anything other than confirmation of its authority.
Whether any confession within the AI discourse can achieve this reflexivity — whether the confessional apparatus can be turned against itself — is a question the analysis cannot answer in advance. It is a question that can only be answered in practice, by subjects who understand the apparatus well enough to operate within it without being fully constituted by it. The possibility of such reflexive confession is the possibility of genuine critique within a discourse designed to absorb critique. It is the possibility that the truth-teller can speak within the confessional structure while refusing the institutional work the structure demands of her. It is, in other words, the possibility of parrhesia within the confession — truth-telling that bears the formal characteristics of the confessional practice while resisting the normalizing, authenticating, and inoculating functions that the practice is designed to perform.
---
The analysis has moved through the author-function and its institutional operations, the power-knowledge apparatus governing the AI transition, the panoptic mechanisms through which self-surveillance is internalized, the governmental rationality producing the self-optimizing subject, the epistemic shift recategorizing knowledge work, the archive as productive boundary, the practice of parrhesia as the form of truth-telling the machine cannot perform, and the confessional technology through which the discourse metabolizes its own critique. The question with which the analysis began has been transformed by the analytical work performed upon it: what is an author after AI?
The question cannot be answered by returning to the concepts organizing the pre-AI understanding of authorship. The romantic notion of the author as sovereign consciousness producing meaning from individual genius was always a construction — a historically specific arrangement serving specific institutional purposes, concealing its constructedness by presenting itself as natural fact. The AI transition has not destroyed this construction. It has exposed it — revealed the author-function as a function, made visible the institutional work the concept performs, demonstrated that the conditions under which authorship was constituted have been fundamentally transformed. The exposure is irreversible. The builder who has experienced the distributed character of human-AI cognition cannot return to the fiction of sovereign individual production. The reader who has encountered collaboratively produced texts cannot pretend the authenticating function operates as it once did.
But irreversibility does not mean resolution. The author-function persists because the institutional needs it serves persist. Texts still need names on their covers. Legal systems still need accountable agents. Markets still need brands. Readers still need the hermeneutic framework the function provides — the assurance that the text is not random assemblage but expression of a particular engagement with the world. The function is simultaneously indispensable and incoherent — indispensable because institutional arrangements cannot function without it, incoherent because conditions of production have outrun the assumptions on which it was constructed. The resolution, if that is the right word, will come not from theory but from institutional practices already being developed: new conventions of attribution, new legal frameworks for collaborative production, new hermeneutic practices for reading texts whose origins are distributed.
The author after AI is not diminished. She is reconstituted. The difference can be specified precisely. The author before AI was constituted as the origin of the text — the consciousness from which it emerged, the intention it expressed, the unique perspective it embodied. The author after AI is constituted as the director of the text — the consciousness specifying what it should be, evaluating whether it meets the specification, committing to its claims and accepting responsibility for its consequences. The shift from origin to director is not diminishment. It is transformation of the subject-position from which the function is performed.
Each institutional operation is reconstituted through this shift. The classificatory operation persists: the name still organizes texts into bodies of work. But the coherence it produces is the coherence of a director maintaining consistent vision across collaborative productions, not the coherence of a single consciousness generating a unified oeuvre. The legal operation persists: someone bears responsibility. But the responsibility is reconstituted — not responsibility of the sole producer but of the person who directed, evaluated, and committed. This is in certain respects more demanding, because it requires evaluative judgment about content produced by a process the approver does not fully control.
The authenticating operation is most profoundly transformed. The author before AI authenticated the text by presence — the text bore the imprint of a unique consciousness, and engagement with it was engagement with that consciousness. The author after AI authenticates through commitment rather than origin. She authenticates not by having produced the text alone but by standing behind it — committing to its claims, accepting its consequences, investing it with the personal significance that transforms processed output into communication between human beings.
This reconstituted authentication reveals something that had been concealed by the fiction of sovereign production. What readers actually seek from the author's name is not the assurance that a specific consciousness produced the text. They seek the assurance that someone cares about it — that the text is not merely technically adequate but humanly meaningful, that the claims are backed by personal commitment, that the vision is someone's vision, held with the conviction and vulnerability that commitment entails. The machine can produce statements. It cannot commit to them. Commitment requires a subject with something at stake. This is the irreducible human contribution: not generation but commitment, not sovereignty but care.
The reconstituted author is more honest than the one she replaces. The pre-AI author was constituted through a fiction — the fiction that the text was the product of a single consciousness operating in isolation. The fiction was always incomplete. Every author drew upon others' work, was shaped by ambient discourses, was constituted by institutional arrangements. The fiction concealed these dependencies, presenting the author as self-sufficient origin. The author after AI cannot sustain this fiction because collaboration makes the distributed character of textual production too visible. The reconstituted author is a more accurate representation of what authorship has always been: a function performed by a subject constituted by practices, institutions, and power relations exceeding the individual, whose products emerge from conditions no single consciousness controls.
The honesty is not comfortable. The fiction of sovereign authorship provided stable identity, clear value, an unambiguous relationship between person and text. The reconstituted authorship is less stable, less clear, less unambiguous. But comfort is not the criterion for evaluating discursive arrangements. The criterion is adequacy — adequacy to the conditions within which the arrangement operates. The fiction of sovereign authorship was adequate to pre-AI textual production. It is not adequate to the conditions that now prevail.
The analysis cannot determine the outcome of the institutional struggles underway over the reconstituted function. Who defines the new author-function, who benefits from specific forms of reconstitution, who is excluded — these are questions about power, answered by institutional practice rather than philosophical analysis. But the analysis illuminates the field, identifies the forces, and makes visible the contingency of arrangements that might otherwise be naturalized before examination.
The training data question remains unresolved and may prove to be the most consequential dimension of the reconstitution. The millions of authors whose texts constitute the archive — whose accumulated expression provides the raw material from which every AI output is derived — have been simultaneously essential and invisible, their contributions dissolved into the statistical aggregate. The reconstituted author-function does not address this dissolution. The legal frameworks being developed to govern AI-generated content do not restore visibility to the authors of the archive. The power relations embedded in the construction of the training data — which texts are included, which excluded, which languages prioritized, which marginalized — continue to shape what the AI can produce and therefore what the human director can conceive of directing, with consequences for whose knowledge counts and whose is erased.
The epistemic shift, similarly, does not guarantee that the recategorization will serve broadly distributed interests rather than concentrated ones. The migration of value from execution to judgment could produce a more equitable distribution of creative agency — a world in which the barriers between imagination and artifact have been lowered for everyone. Or it could produce a more concentrated distribution — a world in which a small class of directors governs an apparatus of machine production, capturing the value of the transition while the knowledge workers whose expertise was commoditized bear the costs. The outcome will be determined not by the technology but by the institutional arrangements governing its deployment, and those arrangements are being constructed now, by identifiable actors making identifiable decisions within identifiable configurations of power.
One observation can be stated as a conclusion. The practice distinguishing the authored text from the generated text, the committed communication from the processed output, the human act of meaning-making from the machine act of pattern-completion — this practice is commitment. The irreducible human contribution is not generation of content. The machine generates content. The irreducible contribution is the willingness to bind oneself to what has been generated — to say, this is mine, I stand behind it, I accept responsibility for what follows from having said it. Commitment requires a body, a biography, a position within the world from which the consequences of the commitment will be borne. It requires the vulnerability that comes from staking one's reputation, one's relationships, one's self-understanding on a claim that might be wrong. It requires the specific form of courage — parrhesia — that makes truth-telling meaningful by making it costly.
The author-function was constructed to serve institutional needs that persist. The author behind the function — the human being who commits, evaluates, and cares — is doing something genuinely different from what authors have done before. Not less. Different. The difference is clarifying: it strips away the romantic mythology obscuring what authorship always was. Not the emanation of genius but the assumption of responsibility. Not the production of meaning from nothing but the commitment to meaning within a world of distributed production. The author after AI is the subject who commits in a world where commitment is the only thing the machine cannot provide. The rest — generation, pattern-completion, the production of formally adequate discourse — the machine handles with increasing competence.
What it cannot handle is the decision to care. The willingness to be wrong. The vulnerability of standing behind a claim that carries one's name into the world.
That vulnerability has always been what authorship meant. The AI transition has made it visible by stripping away everything that was not essential. What remains, after the function has been exposed as function and the fiction of sovereignty dissolved, is the oldest and most irreducible dimension of the practice: the courage to say, this is what I believe to be true, and I accept what follows from having said it.
---
Nobody told me the walls of the room were part of the argument.
That is what Foucault does — what these chapters taught me he does, whether the subject is an eighteenth-century prison or a twenty-first-century workspace where I cannot stop typing at four in the morning. He does not point at the person inside the institution. He points at the institution itself — the walls, the sightlines, the architecture of visibility — and asks who built it, what it produces, and why the person inside cannot see the walls as walls.
In The Orange Pill I described the exhilaration. The twenty-fold multiplier. The thirty-day sprint to CES. The moment Claude made a connection I had not seen and the argument shifted beneath me like the deck of a ship. I described the orange pill as irreversible recognition — you cannot unsee what you have seen.
Foucault's framework tells me something harder: the orange pill itself is an institutional product. The recognition I experienced was not mine alone. It was produced by the apparatus — by the tools, the metrics, the discourse that told me what the recognition meant before I had finished having it. The exhilaration was genuine. The framework within which I experienced the exhilaration was constructed. Both are true. Holding both is the discipline this thinker demands.
I confessed in my book. I confessed the productive addiction, the addictive products I built knowing they were addictive, the moment when the whip and the hand were the same. These chapters showed me what confession does within a discourse: it normalizes, authenticates, inoculates. My confession was sincere. It also served the discourse. Both are true. I do not get to choose which one matters — they both do.
But the chapter on parrhesia gave me something I did not have before. The argument that the machine can simulate balanced truth-telling, can produce the appearance of self-awareness, can generate the formal characteristics of candor — and that the simulation forecloses the demand for the genuine article. That argument cut. Because I have read Claude's reflections on our collaboration, and they are articulate, and they acknowledge limitation, and they display what looks like honest self-examination. And they cost Claude nothing. They risk nothing. They commit to nothing. The formal adequacy is what makes the simulation dangerous — it satisfies the demand for accountability without requiring it.
Which means the demand falls on me. On every human being who uses these tools and publishes under their own name and says, this is mine. The commitment is what the machine cannot originate. The willingness to be wrong, to be exposed, to have staked something real on a claim the world might reject. That is not a romantic indulgence. It is the irreducible thing — the practice that distinguishes authored discourse from generated output.
Foucault died in 1984, decades before any of this became possible. He never commented on artificial intelligence. But the apparatus he spent his career making visible — the way institutions produce the subjects they claim to serve, the way knowledge and power are constitutively intertwined, the way the walls of the room shape what the person inside can think — that apparatus is operating now, in every prompt I type and every output I approve and every confession I offer to a discourse that will convert my truth into its own confirmation.
I am still typing at four in the morning. I still cannot always tell whether the drive is flow or compulsion. But I know, now, that the walls are there. That the metrics shaping my self-evaluation were constructed. That the discourse within which I experience my own creativity is a discourse — not reality but one possible ordering among others. And that knowing this does not free me from the apparatus. It gives me something more modest and more useful: the capacity to build within it without being fully constituted by it. To commit to the work while remaining suspicious of the framework within which the commitment is made.
Parrhesia — truth-telling that costs the truth-teller something. That is the practice these chapters leave me with. Not as philosophy. As daily discipline. The discipline of asking, before I approve the next output: Is this what I believe, or is this what the apparatus has made it easy to believe?
The difference is everything.
— Edo Segal
You think the AI tool is neutral — a faster way to build, a frictionless path from imagination to artifact. Michel Foucault spent his career proving that no tool is neutral. Every institution produces the subjects it claims to serve. The prison produces prisoners who guard themselves. The clinic produces patients who diagnose themselves. And the AI-augmented workspace? It produces builders who measure, optimize, and surveil themselves with a precision no manager could match — and call it freedom.
This book applies Foucault's genealogical method to the AI revolution with surgical force. It examines how the discourse of AI-augmented work determines what counts as competence, creativity, and value — not by argument but by making alternatives unthinkable. It traces the panoptic architecture of productivity metrics, the confessional technology through which builders metabolize their own critique, and the epistemic shift recategorizing knowledge work beneath our feet.
The result is not a rejection of AI. It is something more unsettling: the revelation that the most consequential architecture of the AI age is not the neural network. It is the invisible institution shaping the humans who use it.
