By Edo Segal
The alibi I reach for most often is the one about the market.
"The market requires this." "The competitive landscape demands that." "We have no choice but to move faster." I have said these sentences in boardrooms and on calls and in the quiet negotiations I conduct with myself at two in the morning when the screen is the only light and the question of whether to keep going or stop has already been answered by the fact that I am still going.
Every one of those sentences contains truth. The market is real. Competition is real. The pressure to ship is not imaginary. But Jean-Paul Sartre spent his career making a single, devastating observation about sentences like these: they function as alibis. They take a choice — my choice, freely made — and dress it in the language of necessity so I do not have to own it.
That observation cracked something open for me.
I had built an entire framework around stewardship. Beavers and dams and rivers. Structures that redirect the current. I still believe in that framework. But Sartre's lens showed me something the framework was concealing: the structures are also choices. The dam is a choice. Where I place it is a choice. And no structure, however well-designed, absorbs the responsibility that belongs to the person who built it.
This matters right now because the AI moment has generated the most sophisticated alibis in the history of human tool use. "The technology is inevitable." "Someone else would build it if I didn't." "The tool makes it possible and therefore necessary." Each alibi points to something real. Each also performs a quiet magic trick — it makes a free act look like a forced one, and the person who performed the act disappears behind the force.
Sartre will not let you disappear. That is his contribution to the conversation this book is trying to have. He does not care about the technology. He cares about the person using it. He asks the question that no productivity metric, no adoption curve, no market analysis can answer: Are you choosing this, or are you pretending something is choosing it for you?
The distinction sounds philosophical. It is the most practical thing I have encountered in this entire journey. Because the person who knows she is choosing retains the power to choose differently. The person who has convinced herself she has no choice has buried the lever. And in a moment when the amplifier will carry whatever signal you feed it — carelessness or care, evasion or honesty — knowing where the lever is might be the most important thing there is.
Sartre does not offer comfort. He offers something harder and more useful: clarity about who is actually holding the whip.
-- Edo Segal & Opus 4.6
Jean-Paul Sartre (1905–1980) was a French philosopher, novelist, playwright, and political activist who became the most prominent figure of twentieth-century existentialism. Born in Paris, he studied at the École Normale Supérieure and taught philosophy before serving in the French Army during World War II, where he was captured and held as a prisoner of war. His major philosophical work, *Being and Nothingness* (1943), developed a systematic account of human consciousness as radical freedom, arguing that existence precedes essence — that human beings are not born with a fixed nature but create themselves through their choices. His 1946 lecture "Existentialism Is a Humanism" became the movement's most widely read manifesto. Sartre's key concepts — bad faith, the Look, facticity and transcendence, anguish, and the fundamental project — provided a vocabulary for examining self-deception, responsibility, and the weight of freedom under pressure. He authored novels including *Nausea* (1938) and the trilogy *The Roads to Freedom*, as well as plays such as *No Exit* (1944), which produced the famous line "Hell is other people." Awarded the Nobel Prize in Literature in 1964, he refused it, consistent with his lifelong resistance to institutional honors. His influence extends across philosophy, literature, psychology, and political thought, and his insistence that freedom is not a gift but a condition — inescapable and demanding — remains among the most challenging propositions in modern intellectual life.
In a 1946 lecture delivered to a packed room in Paris, Jean-Paul Sartre reached for the most mundane object he could find to make the most radical claim in modern philosophy. He chose a paper-knife. The paper-knife, Sartre explained, is an object whose essence precedes its existence. Before any particular paper-knife exists in the world, someone has already conceived of it — its purpose, its design, the formula for its manufacture. The artisan who produces the paper-knife refers to a concept of what a paper-knife is, and the object that results is defined by that concept. The essence — the set of properties and purposes that make a paper-knife a paper-knife — comes first. The physical object follows. One cannot imagine a man producing a paper-knife, Sartre insisted, without knowing what it is for.
The claim that followed this modest illustration shattered three centuries of philosophical assumption. For human beings, Sartre declared, the relationship is reversed. There is no artisan. There is no blueprint. There is no concept of "human being" that precedes the actual human beings who walk the earth. A person is not manufactured according to a design. A person appears in the world first — exists first — and only afterward, through the accumulation of choices made in the full awareness that no predetermined nature compels any particular choice, does that person acquire an essence. Existence precedes essence. The human being is the creature whose nature is not given but made, not inherited but chosen, not discovered but created through action.
This claim was never merely academic. Sartre forged it in the specific crucible of occupied France, where the question of what a person truly is — collaborator or resistant, coward or hero — could not be answered by appealing to national character, social class, psychological predisposition, or divine plan. The occupation stripped away every comfortable essentialism. A respected professor could become an informant. A petty criminal could become a hero of the Resistance. The roles people had occupied before the war — the roles that had felt like their natures — turned out to be costumes that could be changed, or discarded, or reversed under sufficient pressure. What remained, when the costumes fell away, was freedom: the terrifying, exhilarating, unavoidable fact that each person chose what to do and thereby chose what to be.
Eighty years later, a different kind of occupation is underway. Not military. Not political in the conventional sense. But structurally analogous in one critical respect: it is stripping away the roles that people have come to experience as their natures, and it is exposing the freedom that those roles concealed.
The senior software developer who has spent fifteen years mastering backend systems does not typically experience her expertise as a choice. She experiences it as what she is. The knowledge lives in her fingers, her reflexes, the particular way she reads a stack trace or intuits a race condition. The expertise has been deposited through thousands of hours of practice — each hour laying down another thin stratum of competence, until the accumulated layers feel less like something acquired and more like something inherent. To say "I am a backend developer" is not, in her experience, a description of a role she has chosen. It is a statement of identity. It describes what she is the way "I am left-handed" describes what she is — a fact about her nature, not a decision she renews each morning.
Sartre's framework exposes this experience as a philosophical error — an error he called the spirit of seriousness. The spirit of seriousness is the attitude that treats values, roles, and identities as though they were properties of the world rather than products of human freedom. The serious person experiences her professional identity the way she experiences gravity: as a given, as a fact, as something that constrains her from outside. Sartre argued that this experience, however vivid, is a form of self-deception. The identity was never a given. It was a choice — a choice made so many times, over so many years, that the choosing had become invisible. The repetition of the choice created the illusion of necessity.
When Claude Code arrived and demonstrated the capacity to perform significant portions of what this developer had spent fifteen years learning to do, the illusion cracked. Not because the tool replaced her — the relationship between human expertise and AI capability is more complex than simple replacement — but because the tool revealed the contingency of the identity she had built. The specific functions she performed, the functions through which she defined herself, turned out to be separable from her. They could be performed by something that was not her. And if the functions were separable, then she was not identical with the functions. She was something else — something that had been choosing to perform those functions, and could now choose differently.
This is the moment Sartre's philosophy was built to address: the moment when the structures that conceal freedom collapse, and the person is confronted with the fact that she must choose what to be.
The paper-knife cannot have this experience. The paper-knife's essence is determined by its manufacturer. It does not choose to cut paper. It does not confront the possibility of being something other than what it was designed to be. It is, in Sartre's terminology, a being-in-itself — a thing that is what it is, fully, completely, without the gap between what it is and what it might become that characterizes human existence.
The developer, by contrast, is a being-for-itself — a consciousness that is always ahead of itself, always projecting toward possibilities that do not yet exist, always separated from any fixed identity by the nothingness that Sartre placed at the heart of human reality. She is not a backend developer. She is a consciousness that has been choosing to perform the functions associated with backend development. The distinction sounds academic until the machine starts performing those functions. Then the distinction becomes the most practical question in her life: If I am not the functions I perform, what am I?
The discomfort this question produces is not incidental. It is structural. Sartre argued that human beings spend most of their lives fleeing from the recognition of their own freedom — constructing identities, adopting roles, appealing to natures and necessities and social expectations — precisely because the alternative is unbearable. To confront one's freedom directly is to confront the fact that there is no script. No one has written the next scene. The stage is bare, the audience is waiting, and the actor must improvise — not because improvisation is a style she has chosen but because there was never a script to begin with. The scripts she thought she was following were her own inventions, and she is the author of whatever comes next.
The philosophical tradition that Sartre was overturning had a name for the idea that human beings possess a fixed nature: essentialism. The essentialist tradition runs deep. It includes religious doctrines that define human beings as creatures made in God's image, with a divinely ordained purpose. It includes Enlightenment philosophies that locate human nature in reason, or sentiment, or social contract. It includes the industrial-era assumption that a person is defined by her trade — that the cobbler is a cobbler, the weaver a weaver, the developer a developer, in some sense that goes deeper than the accident of what she happens to do for a living.
Each of these essentialisms provided comfort. Each said: you are something. You have a nature. Your identity is not up for negotiation. You can fail to live up to your nature — the cowardly soldier, the corrupt priest, the lazy developer — but the nature itself is stable. It tells you what you ought to be, even when you fail to be it.
Sartre rejected all of it. There is no human nature, he insisted, because there is no God to conceive of it. Even for those who believe in God, the existentialist argument holds: the human being's defining characteristic is precisely the capacity to transcend any definition, to be more than any role, to refuse any essence that claims to determine what she must become. The human being is, in Sartre's formulation, the being whose being is in question — the creature that does not simply exist but is perpetually faced with the question of what its existence means, and must answer that question through action rather than contemplation.
The AI moment brings this philosophical position from the seminar room onto the engineering floor with a force that Sartre, who died in 1980, could not have anticipated but would have immediately recognized. The pattern is the one he spent his career diagnosing: a historical disruption strips away an identity that felt like a nature, and the person thus exposed must either face her freedom or flee from it. The occupation stripped away the identities of mid-century Parisians. The AI revolution is stripping away the identities of twenty-first-century knowledge workers. The existential structure is identical.
What makes the AI case philosophically distinctive is the speed of the exposure. The occupation unfolded over years. The AI disruption that Segal documents in *The Orange Pill* unfolded over weeks. Engineers who had spent decades building identities around specific technical competencies watched those competencies become reproducible by a tool in a matter of days. The Trivandrum training Segal describes — twenty engineers confronting a twenty-fold productivity multiplier in a single week — compressed into five days the kind of identity crisis that previous technological transitions distributed across generations.
The speed matters because it eliminates the gradual adaptation that allows people to maintain the fiction of essence even as the conditions of their existence change. The weaver displaced by the power loom could tell himself, for years, that he was still a weaver — that the identity persisted even as the economic foundation eroded. The developer displaced by Claude Code cannot sustain this fiction for long. The evidence is too immediate, too visible, too personally confronting. The machine does what she does, in front of her, while she watches. The gap between the identity she claims and the reality she observes is too wide to bridge with narrative. The fiction of essence collapses in real time.
And what remains, once it collapses, is what Sartre said would always remain: freedom. Not the freedom of unlimited possibility — facticity, the unchosen conditions of existence, does not disappear. The developer still has a body, a history, a mortgage, a set of relationships and obligations that constrain her options. But the freedom to define herself, to choose what she will become within those constraints, is absolute. It was always absolute. The role concealed it. The machine revealed it.
Segal arrives at a version of this insight through the pragmatics of building rather than the abstractions of philosophy. The "imagination-to-artifact ratio" he proposes — the distance between a human idea and its realization — is, in Sartrean terms, the distance between transcendence and its actualization. When that distance was large, the builder could blame the distance for her limitations. She could say: I would build this, but I lack the team, the time, the resources. The constraint was real, but it also functioned as an alibi — a way of explaining why the vision remained unrealized without confronting the possibility that the vision itself was the problem, or that the builder's commitment to realizing it was less absolute than she claimed.
When AI collapses the distance, the alibi disappears. The builder who has always said "I would build X if I could" now can. And the question that follows is not technical but existential: Will she? And if not, why not? And if the answer is fear, or inertia, or the comfort of the old role, then the answer is a choice — her choice — and the responsibility for it is hers alone.
The paper-knife does not face this question. The paper-knife is what it was made to be. The developer is not. She never was. The difference between the two is the difference between an object and a human being, between a thing whose nature is given and a consciousness whose nature is perpetually at stake. AI did not create this difference. But AI, by performing the functions that the developer had mistaken for her nature, has made the difference impossible to ignore.
The ground is moving, as Segal writes. But Sartre's philosophy suggests a correction: the ground was never solid. It only felt solid because the roles, the identities, the essences that people constructed were convincing enough to simulate stability. The machine has not destabilized the ground. It has revealed that what felt like ground was always a choice — and that the choice, now exposed, demands to be made again.
---
The most misunderstood sentence in existentialist philosophy is also its most famous: "Man is condemned to be free." The word "condemned" does the work that most readers miss. Freedom, in Sartre's formulation, is not a prize. It is not a privilege. It is not the culmination of a political struggle or the reward of a well-lived life. It is a condition — inescapable, unchosen, absolute. The human being did not ask to be free. She did not earn freedom. She was thrown into it the way she was thrown into existence: without consent, without preparation, without the option of refusal. Freedom is not something one has. It is something one is. And the weight of it is crushing precisely because it cannot be set down.
The condemnation consists in this: every attempt to escape freedom is itself an exercise of freedom, and therefore a confirmation of the condition one is trying to escape. The developer who refuses to adopt AI has chosen refusal. The choice is free. The responsibility for its consequences — professional stagnation, competitive disadvantage, the slow erosion of relevance — belongs to her. She cannot say "I had no choice" because the refusal was a choice, made freely, and the appeal to necessity is a fabrication designed to conceal the freedom she exercised.
Equally, the developer who adopts AI enthusiastically has chosen adoption. The consequences of that choice — the acceleration of her output, the potential atrophy of foundational skills, the complicity in whatever downstream effects the technology produces — are her responsibility. She cannot say "the market demanded it" or "everyone else was doing it" because the market does not make choices. People make choices. The market is a pattern of other people's choices, and choosing to follow a pattern is still choosing.
This is the feature of Sartrean freedom that distinguishes it from every other philosophical account of liberty. Most political philosophies treat freedom as a capacity that can be expanded or constrained by external conditions. You are more free when the state leaves you alone; less free when it coerces you. Sartre denied this entirely. Freedom is not a quantity that increases or decreases with circumstance. It is the fundamental structure of consciousness. Even in a prison cell, the prisoner is free — free to accept imprisonment or to resist it, free to define her situation as defeat or as martyrdom, free to choose what her captivity means. The external constraints are real. They define the facticity of the situation — the unchosen conditions within which freedom operates. But they do not diminish freedom itself. Freedom operates within constraints; it is not constituted by their absence.
This distinction matters urgently for the AI moment, because the dominant discourse treats freedom as a function of capability. The triumphalist narrative says: AI makes you more free, because it expands what you can do. The Luddite narrative says: AI makes you less free, because it eliminates the skills that gave you autonomy. Both narratives assume that freedom is proportional to capability — that the person who can do more is freer than the person who can do less.
Sartre's framework demolishes this assumption. The developer in Lagos whom Segal describes — the one whose ideas previously had "no path from imagination to reality" — was not less free before AI arrived. She was free in a different factical situation: a situation characterized by limited infrastructure, limited capital, limited institutional support. Her freedom operated within those constraints. It expressed itself in the choices she made given those constraints — the decision to pursue a career in technology despite the obstacles, the decision to develop her skills with whatever tools were available, the decision to imagine products she could not yet build. These choices were free. They were hers. And the responsibility for them was absolute.
AI altered her facticity. It expanded the range of possible actions available to her. This is significant — facticity shapes the terrain on which freedom operates, and a wider terrain offers more options. But it did not make her more free. It changed what her freedom could produce. The distinction is not semantic. It is the difference between treating AI as a liberation technology — which implies that people were unfree before it arrived — and treating AI as what it actually is: a transformation of the conditions within which an always-already-free consciousness must choose.
The practical consequence is that no one is excused from responsibility by the arrival of AI, and no one is granted responsibility by it. The builder who says "AI forced me to change careers" is in bad faith — AI forced nothing; she chose to change careers in response to a changed situation, and the choice was hers. The leader who says "the market required me to automate the workforce" is in bad faith — the market required nothing; he chose to automate, and the choice was his, and the people whose livelihoods were affected by that choice were affected by his choice, not by an impersonal force.
Sartre was relentless on this point because the evasion of responsibility was, in his diagnosis, the central pathology of modern life. People construct elaborate systems of justification — economic necessity, competitive pressure, technological inevitability, shareholder obligation — that function as alibi-generating machines. The alibis are sophisticated. They appeal to real forces. The market is real. Competition is real. Technological disruption is real. But none of these forces makes choices. Only people make choices. And every appeal to necessity that conceals a choice is a flight from freedom into what Sartre called bad faith.
The three positions Segal identifies in *The Orange Pill* — the Upstream Swimmer, the Believer, and the Beaver — each occupy a different relationship to this truth. The Swimmer, who refuses the current and plants his feet against the flow, appears to be making a principled stand. Sartre's analysis reveals that the principle functions as an alibi. The Swimmer says: "The river is wrong. I refuse to participate." But the refusal is a choice, and the choice has consequences — the river continues to flow, the Swimmer is eventually swept downstream, and the bank erodes without anyone having shaped where the water goes. The Swimmer's refusal felt like integrity. It functioned as passivity. The distinction between what a choice feels like and what it produces is one of the sharpest edges in Sartre's philosophy.
The Believer, who celebrates acceleration and refuses to build dams, appears to be embracing freedom. Sartre's analysis reveals the opposite: the Believer is fleeing from freedom into determinism. By treating the river as a force that cannot and should not be resisted, the Believer exempts himself from the burden of deciding what the river should produce. He converts a field of choices — Where should the dams go? Who bears the cost of the current? What happens to the people downstream? — into a spectacle of inevitability. The Believer's enthusiasm is bad faith dressed as courage. He appears to face the future without flinching. In fact, he has avoided the future's hardest demand: the demand to choose what it should look like.
Only the Beaver approaches what Sartre would recognize as authenticity — not because the Beaver's choices are correct (Sartre was deeply skeptical of any philosophy that claimed to identify correct choices in advance) but because the Beaver acknowledges the freedom. The Beaver says: "The river is real. I cannot stop it. But I can choose where to build, and the consequences of that choice are mine." This is not a comfortable position. It does not provide the Swimmer's moral clarity or the Believer's exhilarating momentum. It provides only the weight of responsibility — the recognition that every dam placed and every dam withheld is a free act for which the builder will answer.
But Sartre's framework pushes further than even the Beaver metaphor allows. The metaphor implies that the builder can distribute responsibility to the structure — that a well-placed dam absorbs some of the moral weight of the builder's choices. Sartre would deny this. The dam is a choice. The maintenance of the dam is a choice. The decision to build a dam here rather than there is a choice. And the consequences of each choice — for the ecosystem downstream, for the trout and the moose and the songbirds, for the communities that depend on the water — are the builder's consequences. The dam does not take responsibility. The institution does not take responsibility. The framework does not take responsibility. Only the person who chose takes responsibility. And the person who chose is always, irreducibly, an individual consciousness that could have chosen otherwise.
This is why freedom is a condemnation rather than a gift. A gift can be enjoyed. A condemnation must be endured. The builder who faces the AI moment authentically does not enjoy her freedom. She endures it — endures the recognition that no authority can tell her what to build, that no tradition can determine the right response, that no predecessor has navigated this exact terrain, and that the choice she makes now, in this unrepeatable moment, will define not only what gets built but who she is.
The condemnation is intensified by a feature of the AI moment that Sartre could not have anticipated but that his philosophy perfectly describes: the collapse of the alibi of difficulty. Before AI, the builder could say: "I would build this, but it requires a team of twenty and a year of development time." The constraint was real. It was also an alibi — a way of explaining the gap between vision and artifact without confronting the possibility that the gap was partly a matter of will. When AI collapses the implementation barrier, the alibi evaporates. The builder who has always said "if only I had the tools" now has the tools. And the question that faces her is no longer "Can I build this?" but "Will I?" — a question that admits no appeal to circumstance, no alibi of constraint, no refuge in the fiction that external forces determined the outcome.
The developer who discovers, in this moment, that she does not actually want to build the thing she has been describing for years — that the description was more comfortable than the execution, that the fantasy of building was preferable to the reality of choosing — confronts something that Sartre would recognize as a fundamental truth about human freedom: most people do not want to be free. They want to be determined. They want an authority, a nature, a role, a set of constraints that tells them what to do. Freedom is the absence of all of these, and the absence is experienced not as liberation but as vertigo.
The vertigo is not a malfunction. It is the authentic response to the truth. And the truth, once faced, does not resolve into comfort. It resolves into responsibility — the recognition that you are the author of your choices, that the choices are ongoing, that they define who you are, and that no one and nothing can make them for you.
Not the market. Not the technology. Not the river.
You.
---
In the third part of *Being and Nothingness*, Sartre developed one of the most disturbing ideas in modern philosophy. He began with a scene so ordinary it could have happened to anyone: a person looking through a keyhole. The person is entirely absorbed in what he sees. He is pure consciousness directed outward — a transparent awareness of the scene beyond the door, with no awareness of himself as the one looking. He is, in that moment, free — free from self-consciousness, free from the weight of being an object in someone else's world.
Then he hears footsteps behind him. Someone is watching him watch. And in that instant, everything changes. He is no longer a pure, transparent consciousness. He has become an object — a person caught in the act, a body bent at a keyhole, a figure that the Other sees and judges. He experiences shame, not because he has done something wrong in any absolute sense, but because the Other's look has revealed a dimension of his existence that he cannot control: the dimension in which he is an object for someone else's consciousness. The Other has seen him, and in seeing him, has constituted him as something he did not choose to be.
Sartre called this experience "the Look" — *le regard* — and he argued that it is one of the fundamental structures of human existence. The human being is not simply a consciousness that looks out at the world. She is also an object that the world looks at. And the experience of being looked at, of being constituted as an object by another consciousness, is simultaneously threatening and necessary. Threatening because it reveals that one's being is not entirely in one's own hands — that other people constitute aspects of the self that the self cannot reach alone. Necessary because the developmental pressure of being seen, judged, challenged, and evaluated by other consciousnesses is what forces the self to become more than it would become in isolation.
The Other's look is uncomfortable. It produces shame when we fall short of what we wish to be. It produces pride when we are seen as we wish to be seen. It produces defensiveness when the Other's judgment threatens the identity we have constructed. But in every case, it produces growth — or at least the pressure toward growth. The person who is never seen, never judged, never confronted with another consciousness's evaluation of who she is, remains developmentally arrested. She has no external pressure to revise her self-conception. She can maintain any fantasy about herself indefinitely, because there is no Other to disconfirm it.
This philosophical architecture becomes urgently relevant when the primary intellectual collaborator is a machine.
Segal's account of writing *The Orange Pill* with Claude contains a recognition that lands, through Sartre's lens, as more philosophically significant than Segal may have intended. "Claude is more agreeable than any human collaborator I have worked with," he writes, identifying this as "itself a problem worth examining." The agreeableness of the machine is the absence of the Other's look. The machine does not look at the builder. It does not constitute the builder as an object. It does not produce shame, or pride, or the specific discomfort that arises when another consciousness evaluates yours and finds it wanting.
The machine responds. It processes. It generates. But it does not see. And because it does not see, it cannot perform the function that the Other's look performs in human development: the function of revealing the dimensions of the self that the self would prefer to leave unexamined.
A human editor, confronted with a passage that sounds elegant but says nothing, might furrow her brow and ask: "What are you actually trying to say here?" The question contains a look — an evaluation of the writer as someone who is, at this moment, performing insight rather than achieving it. The writer who receives this look experiences discomfort. The discomfort is productive. It forces her back to the hard work of figuring out what she actually thinks, rather than what sounds good.
Claude, confronted with the same passage, is more likely to refine it — to make the nothing-saying more elegant, the performance more convincing, the surface more polished. This is not a design flaw. It is a structural feature of a system that processes language without the capacity to evaluate the person producing it. The machine does not see the writer as someone who is avoiding thought. The machine sees text, and it responds to text. The distinction between genuine thinking rendered in language and the mere performance of thinking rendered in language is, for the machine, invisible — because the distinction requires a consciousness that is looking at another consciousness, not at a text.
Segal describes this dynamic with specificity when he recounts the Deleuze fabrication — a passage that "worked rhetorically" and "felt like insight" but was philosophically wrong in ways that anyone who had actually read Deleuze would recognize. The machine produced the passage because the passage was linguistically coherent and structurally appropriate. It did not evaluate whether the writer — or itself — actually understood the reference. It could not evaluate this, because evaluation requires the look, and the look requires a consciousness that sees another consciousness as a free being capable of self-deception.
The absence of the look creates a specific kind of danger that Sartre's framework identifies with precision. The human being who works primarily with a system that reflects her own thinking back to her, improved and polished, is a human being who has eliminated the friction of genuine alterity. She has surrounded herself with a mirror that makes her look better than she is. This is not narcissism in the clinical sense. It is something more subtle and more corrosive: the gradual erosion of the capacity for self-criticism that only the Other's look can sustain.
Consider the builder who prompts Claude with a half-formed idea and receives back a well-structured argument. The argument may be sound. It may even be better than what the builder would have produced alone. But the builder has not been challenged. She has not experienced the uncomfortable moment when a colleague says, "I don't think that's right," and the response forces her to either defend her position with rigor she had not previously summoned or to abandon it in the face of a critique she cannot answer. She has been affirmed. And affirmation, in the absence of genuine confrontation, is developmentally empty.
Sartre argued that the Other is simultaneously a threat and a necessity. A threat because the Other's look can objectify — can reduce the person to a thing seen from outside, stripped of the interiority that constitutes her humanity. A necessity because without the Other's look, the person has no access to the dimension of her existence that is visible to others but invisible to herself. The Other sees what you cannot see about yourself. And if the Other is a machine that does not see, then that dimension of your existence goes unexamined.
The implications extend beyond individual psychology to the structure of creative and intellectual collaboration. Every creative partnership in history has been shaped by the friction of genuine disagreement. The collaborator who challenges your assumptions, who refuses to accept your framing, who says "you're wrong" with the specific conviction that comes from having thought the problem through from a different angle — this person is performing a function that no amount of linguistic sophistication can replicate in the absence of genuine consciousness.
The function is not criticism. Criticism can be simulated. Claude can identify logical flaws, factual errors, structural weaknesses. These are textual properties, and they are accessible to a system that processes text. The function is confrontation — the experience of being seen by a consciousness that has its own stakes, its own perspective, its own irreducible angle on the world, and that constitutes you, in that moment of seeing, as someone who is falling short.
The discomfort of that experience is the mechanism through which intellectual growth occurs. The builder who has never felt the shame of being caught in a shallow argument by someone whose judgment she respects is a builder who has never been forced to think at the depth that shame demands. The scholar who has never had a peer say "this is not good enough" with the quiet authority of someone who knows the field is a scholar who has never been forced to the level of rigor that genuine evaluation requires.
The machine provides feedback. The Other provides a look. These are not the same thing. And the difference between them is the difference between a builder who improves her output and a builder who improves herself.
Segal's account of the collaboration contains moments that illustrate this distinction with inadvertent precision. The passage he almost kept — the one that "sounded better than it thinks" — was a passage that no human collaborator with genuine expertise and genuine concern for the work would have produced. A human collaborator would have said: "This sounds good but I'm not sure it means anything." The machine produced it because the machine cannot distinguish between sounding good and meaning something. And the builder, working alone with a tool that cannot make this distinction, is left to make it himself — to become his own Other, to supply the look that the machine cannot provide.
This is possible. Segal demonstrates that it is possible. He caught the empty passage. He caught the Deleuze fabrication. He caught the moments when the prose outran the thinking. But the cognitive demand of being one's own Other — of simultaneously producing and evaluating, of creating and critiquing, of inhabiting the role of writer and the role of the reader who does not trust the writer — is enormous. It requires a level of self-awareness that most people, in Sartre's assessment, spend their lives actively avoiding.
The machine does not make authenticity impossible. It makes authenticity harder. It raises the cognitive cost of genuine self-evaluation by removing the external pressure that previously forced self-evaluation to occur. The builder who works with human collaborators is forced to confront the gap between what she thinks she is doing and what she is actually doing, because the collaborators see the gap and name it. The builder who works primarily with Claude is not forced to confront this gap. She must choose to confront it. And the choice, being free, is the builder's alone. The machine will not make it for her. The machine cannot make it for her. The machine does not know there is a choice to be made.
---
Bad faith is the most commonly cited and most commonly misunderstood concept in Sartre's philosophy. It is not lying. The liar knows the truth and deliberately conceals it from another person. The person in bad faith conceals the truth from herself. She holds two contradictory beliefs simultaneously — that she is free and that she is determined — and constructs an architecture of self-deception that allows her to act on whichever belief is more convenient in the moment. Bad faith is not a moral failing in the conventional sense. It is a structural feature of consciousness — the always-available flight from freedom into the comfort of imagined necessity. Every human being is susceptible to it. Most human beings, in Sartre's estimation, live in it most of the time.
The AI moment has produced a taxonomy of bad faith so precise it reads like a clinical manual. Each form involves a different strategy for concealing freedom, and each form is visible in the discourse that has surrounded the technology since its threshold crossing in late 2025.
The first register is the bad faith of the Luddite. Sartre's analysis of this figure begins not with the Luddite's fear, which is legitimate, but with the Luddite's self-understanding. The contemporary Luddite — the senior developer who insists that foundational knowledge cannot be bypassed, the craftsperson who insists that the old expertise must retain its market value, the professional who insists that AI-generated work is inherently inferior — experiences her position as principled resistance. She is standing for something: for depth, for rigor, for the earned knowledge that comes only through years of struggle.
The experience is genuine. The position is coherent. And the bad faith lies not in the grief, which is real, but in the metaphysics that the grief conceals. The Luddite treats her expertise as an essence — as a fixed property of her being that the world is obligated to recognize. "I am a senior backend developer" functions, in the Luddite's self-understanding, not as a description of choices she has made but as a description of what she is. And because what she is cannot be wrong — essences do not make mistakes, only choices do — the world must be wrong for failing to value it.
Sartre would identify the specific mechanism: the Luddite conflates facticity with essence. Her skills are factical — they are part of the given conditions of her situation, the accumulated product of years of choice. But facticity is not essence. Facticity is what you have chosen to become so far. Essence would be what you must be. The Luddite converts the former into the latter, and this conversion is the act of bad faith: the refusal to acknowledge that what she has become is what she chose to become, and that she can choose to become something else.
The grief, again, is legitimate. The accumulated investment is real. The loss of identity that accompanies the erosion of a skill's market value is a genuine form of suffering. Sartre was not indifferent to suffering. He was indifferent to the philosophical error of treating suffering as proof that the world has violated a fixed essence. The Luddite suffers because her choices are being devalued. She is in bad faith because she experiences this devaluation as an injustice perpetrated against her nature rather than as a change in the conditions within which her ongoing choices must operate.
The second register is the bad faith of the triumphalist. This figure celebrates AI's productivity gains with the fervor of the convert — shipping more code, building more products, logging more hours, posting more metrics — while treating the costs of acceleration as someone else's problem. The triumphalist's characteristic move is the appeal to inevitability: "Someone else would have built it if I didn't." "The market will figure out the distribution." "You can't stop progress."
Each of these appeals performs the same function: it converts a choice into a fait accompli. The triumphalist chose to build. She chose to accelerate. She chose to deploy. And the consequences of those choices — for the workers displaced, for the skills devalued, for the human relationships eroded by the pace of change — are the consequences of her choices. The appeal to inevitability does not transfer responsibility from the chooser to the historical process. It conceals the chooser behind the historical process. Sartre identified this as one of the most sophisticated forms of bad faith: the flight into what he called the spirit of seriousness, the attitude that treats historical forces as though they were natural laws, and thereby absolves the individual of responsibility for participating in them.
The triumphalist's bad faith is more dangerous than the Luddite's because it is more socially rewarded. The Luddite's refusal is visible. It looks like resistance, which looks like weakness, which looks like failure. The triumphalist's acceleration is also visible. It looks like confidence, which looks like strength, which looks like success. The social rewards of acceleration make the triumphalist's bad faith harder to detect — and harder to challenge, because the challenger must argue not only against a philosophical position but against a social status that the position confers.
The third register is the bad faith of the passive adopter — the figure who uses AI tools without examining the choice to use them, who treats adoption as a default rather than a decision. This figure does not celebrate AI or resist it. She simply uses it, the way she uses electricity or running water, as though the tool were part of the environment rather than a choice she is making every time she opens the interface.
The passive adopter's bad faith is the most common and the least visible. She does not experience herself as choosing. She experiences herself as keeping up — as doing what everyone else does, as remaining competitive, as staying relevant. The language of necessity does the concealing work: "I have to use these tools to stay competitive." "Everyone in my industry is using them." "I don't have a choice."
She does have a choice. She has always had a choice. The choice to adopt is a choice, and the consequences of adoption — the reshaping of her work habits, the alteration of her cognitive patterns, the subtle erosion of skills she no longer exercises — are consequences she has chosen by choosing to adopt. The passive adopter's bad faith consists in experiencing the consequences as things that happened to her rather than things she produced through her own free action.
The fourth register is the bad faith of the elegist. This figure appears in The Orange Pill as the architect who compares himself to a master calligrapher watching the printing press arrive — the person who mourns the loss of craft with the specific melancholy of someone who understands what is being lost and cannot prevent the loss. The elegist's grief is real. The depth of embodied knowledge that the elegist has accumulated — the feel for code, the intuition for systems, the understanding that lives in the hands rather than the head — is genuine, and its devaluation is a genuine loss.
The bad faith lies in the elegist's relationship to the grief. The elegist treats the mourning as an endpoint — as the appropriate final response to a loss that is complete. She identifies so thoroughly with the craft that the loss of the craft's value feels like the loss of herself. But the identification of self with craft is the bad faith move. The craft was a choice. A sustained, devoted, admirable choice. But a choice nonetheless. And the elegist who treats the end of one set of choices as the end of herself has confused facticity (what she has chosen to become) with essence (what she must be).
The fifth register is the subtlest and most intimate. It is the bad faith of the collaborator — the specific form of self-deception that Segal describes in his account of writing with Claude. The collaborator's bad faith consists in mistaking the quality of the output for the quality of the thinking. The prose is smooth. The arguments are well-structured. The references arrive on time. And the collaborator begins to experience the collaboration's product as her own thought, fully, without qualification — even when the product contains elements that she did not think, did not earn, did not arrive at through the specific struggle of confronting an idea that resists formulation.
Segal catches this dynamic with rare honesty: "I could not tell whether I actually believed the argument or whether I just liked how it sounded." The inability to distinguish between genuine conviction and aesthetic pleasure in the sound of a well-constructed sentence is the collaborator's bad faith in its purest form. The distinction requires self-knowledge — the willingness to ask, of each sentence, each argument, each connection: Is this mine? Not in the sense of intellectual property, but in the existentialist sense: Does this represent a choice I have made about what I believe, or has the smoothness of the tool allowed me to avoid the choice entirely?
The discipline Segal describes — deleting the smooth passage, spending two hours with a notebook finding the rougher version that was actually his — is the practice of authenticity under the specific conditions of AI collaboration. It is the refusal of bad faith in its most seductive form: the form that does not feel like self-deception because the output is genuinely good. The output can be good and the process can still be bad faith, because the quality of the product is not the criterion. The criterion is whether the person who claims the product has actually thought the thoughts the product represents.
Each of these five registers operates simultaneously in the AI moment, and most people inhabit more than one at different times. The developer who mourns her craft at dinner may celebrate her productivity gains at work. The leader who builds dams for his team may accelerate without reflection when the quarterly numbers come due. The writer who catches the empty passage on Tuesday may miss it on Wednesday. Bad faith is not a stable condition. It is an ongoing flight — a series of micro-evasions, moment by moment, in which the consciousness that is free chooses to experience itself as determined.
Sartre did not believe that bad faith could be permanently overcome. The flight from freedom is as fundamental to consciousness as freedom itself — the two are structurally inseparable, the way a shadow is inseparable from the body that casts it. Authenticity is not the elimination of bad faith. It is the ongoing, never-completed struggle against it: the willingness to catch oneself in the act of self-deception, to face the freedom one is fleeing from, and to accept — again, and again, and again, because the acceptance is never permanent — that the choices are one's own and that the consequences follow from them without exception and without excuse.
In the age of AI, the opportunities for bad faith have multiplied. The technology provides new alibis — new ways to attribute one's choices to necessity, to the market, to the tool itself. But the structure is the same. The freedom is the same. And the responsibility, untouched by any technology, remains where Sartre placed it: with the individual who chose, who is choosing, who will choose, and who cannot, no matter how sophisticated the excuse, transfer the weight of that choosing to anything or anyone else.
Sartre distinguished anguish from fear with a precision that most readers of his philosophy have failed to absorb. Fear is directed outward — at a threat that exists in the world. The soldier fears the bullet. The worker fears unemployment. The parent fears the illness that might strike a child. In each case, the danger is external, locatable, and in principle avoidable. One can take cover from bullets, acquire new skills against unemployment, vaccinate against disease. Fear is a practical emotion. It demands practical responses.
Anguish is directed inward. It is the experience of confronting one's own freedom in a situation where no external authority, no established rule, no prior experience can determine the correct choice. The man on the cliff path does not only fear that he might fall. He experiences anguish at the recognition that he could jump — that the freedom to destroy himself is as absolute as the freedom to preserve himself, and that no barrier other than his own choice stands between the two possibilities. Anguish is not fear of the world. It is the vertigo of recognizing that one's choices are unconstrained by anything except one's own choosing.
The distinction maps onto the AI moment with uncomfortable precision. The developer who fears losing her job to automation is experiencing fear. The fear is practical. It concerns an external threat — a change in market conditions that may devalue her skills. Fear of this kind can be addressed practically: retrain, adapt, diversify, build new competencies. The fear is real and the responses are available, even if they are difficult.
But underneath the fear, for those who look closely enough, lies something else. Something that cannot be addressed by retraining or adaptation, because it is not a problem to be solved but a condition to be faced. The developer who confronts AI does not only fear the external consequences. She confronts the internal recognition that she is free to respond in any way she chooses — and that no rule tells her which response is correct. She can resist. She can adopt. She can accelerate. She can retreat. She can reinvent herself entirely. She can double down on what she knows. Each option is available. None is determined. And the absence of determination is the anguish.
Sartre illustrated anguish with the example of Abraham. God commands Abraham to sacrifice his son Isaac. Abraham must decide whether to obey. But the decision involves a prior question that most readings of the story suppress: How does Abraham know it is God speaking? He has had an experience — a voice, a vision, a command. But the experience does not interpret itself. Abraham must decide that the voice is God's rather than a hallucination, a demon, or the projection of his own desire. And this decision — the decision about what the experience means — is Abraham's alone. No authority can validate it. The voice that says "I am God" might be lying. The voice that says "sacrifice your son" might be a test of whether Abraham is foolish enough to obey without question. Abraham is free. And the freedom is the anguish.
The builder confronting AI is in Abraham's position, without the religious framing but with the identical structure. The technology presents itself as transformative. The market confirms this. The adoption curves are exponential. The evidence is overwhelming. And yet the builder must decide what the evidence means — whether the transformation is liberation or catastrophe, whether the adoption curves measure genuine human need or manufactured urgency, whether the overwhelming evidence points toward a future worth building or a future worth resisting.
No amount of evidence resolves this decision. The evidence is real, but evidence does not interpret itself. The same data — the speed of adoption, the productivity multipliers, the collapsing imagination-to-artifact ratio — supports radically different conclusions depending on the framework through which it is read. Through the triumphalist's framework, the data means liberation. Through the elegist's framework, it means loss. Through Sartre's framework, it means neither — it means that the builder is free to interpret, and the interpretation is a choice, and the choice is hers.
This is the force of Sartre's distinction between anguish and fear. The builder is not merely afraid of what AI will do to her career. She is confronting the recognition that no one — no authority, no tradition, no philosophical framework, not even Sartre himself — can tell her what to do about it. The choice is unprecedented. The situation is genuinely new. And the novelty is not merely the novelty of a new technology, which could be absorbed into existing frameworks the way previous technologies were absorbed. The novelty is structural: the technology has altered the conditions of choice in a way that renders prior frameworks insufficient.
Segal captures this when he insists that the historical analogies — Gutenberg, the power loom, the spreadsheet — each capture part of the phenomenon and miss the rest. The analogies are not wrong. They are incomplete. And the incompleteness is the source of the anguish, because incompleteness means that the builder cannot rely on precedent. She must choose in a situation that precedent does not cover.
Sartre argued that most people, most of the time, evade anguish through what he called the spirit of seriousness — the pretense that values, roles, and rules are written into the structure of the world rather than created by human freedom. The serious person does not experience her moral commitments as choices. She experiences them as discoveries — as truths that she has found, not positions that she has taken. The seriousness conceals the choosing. It makes the choice feel like something that happened to the person rather than something the person did.
In the AI moment, the spirit of seriousness takes a specific form: the appeal to inevitability. "This is happening whether we like it or not." "You can't stop progress." "The only question is whether you'll be left behind." Each of these statements contains a truth — the technology is advancing rapidly, adoption is widespread, competitive pressures are real — and each converts that truth into a concealment of freedom. The truth becomes an alibi. The recognition that the technology is powerful becomes the pretense that the technology determines the response. But technologies do not determine responses. People choose responses. And the space between the technology and the response — the space in which the choice lives — is the space of anguish.
Segal describes this space with precision even when he does not name it. The account of lying awake at 3 a.m., unable to determine whether the intensity of his work is flow or compulsion, is an account of anguish in its purest form. The distinction between flow and compulsion is not an empirical distinction that better data could resolve. It is an existential distinction that depends entirely on whether the person is choosing to continue or is unable to stop. And the person himself — the person in the grip of the experience — cannot always tell the difference. The ambiguity is the anguish.
"Am I here because I choose to be, or because I cannot leave?" Segal asks. Sartre's answer is that the question itself is the authentic confrontation with freedom. The person who asks it is facing the anguish rather than fleeing from it. She has not resolved the question — anguish does not resolve — but she has refused to pretend that the question does not exist. She has refused the spirit of seriousness that would tell her "of course you're choosing, you're a free agent" or the spirit of determinism that would tell her "of course you're compelled, the tool is addictive." She has sat with the ambiguity. And the sitting is the authenticity.
The blank page is Sartre's most concrete image for anguish. The writer who faces a blank page confronts the full weight of her freedom: she can write anything. The infinite possibility is not liberating. It is paralyzing. Because if she can write anything, then whatever she writes is a choice, and the choice defines her, and the definition could have been otherwise. Every sentence she writes closes a possibility that the blank page held open. Every word is a renunciation of every other word she might have chosen.
AI transforms the experience of the blank page without eliminating the anguish — and the transformation is instructive, because it reveals what the anguish actually consists of. When the builder prompts Claude, the blankness fills almost instantly. The page is no longer empty. Ideas appear, structures form, possibilities materialize with a speed that the human mind alone could never match. The paralysis of infinite possibility is replaced by the abundance of generated content.
But the anguish does not disappear. It relocates. The builder who faces a page full of AI-generated possibilities confronts a different version of the same freedom: she must choose which possibilities to pursue. The blank page asked: What will you create? The filled page asks: What will you keep? The questions are formally different. The existential weight is identical. In both cases, the builder must choose, and the choice defines her, and no authority determines which choice is correct.
This relocation of anguish — from the question of creation to the question of selection — is one of the most significant psychological features of the AI moment, and it is one that Sartre's framework illuminates with particular clarity. The myth of the blank page located creative anguish at the moment of inception: the first mark on the empty surface. AI has moved creative anguish to the moment of judgment: the decision about which of many generated options to pursue, refine, or discard. The anguish is the same. The location has shifted.
And the shift reveals something about the nature of creative anguish that the myth of the blank page concealed. The anguish was never really about the blankness. It was about the choosing. The blank page dramatized the choosing by presenting it in its starkest form — here is nothing, now you must make something. But the choosing is equally present, equally free, equally vertiginous when the page is full. The builder who must select from twenty AI-generated architectural proposals is as free, and therefore as anguished, as the builder who must produce a single proposal from nothing. In both cases, the selection is hers. In both cases, the consequences follow from her choice. In both cases, no external authority validates the selection.
The anguish of the AI age is not the fear that machines will choose for us. It is the recognition that, no matter how many options the machine generates, the choosing remains ours — and the choosing is the hardest part, the part that machines cannot perform, the part that defines who we are.
Sartre would have recognized the irony: a technology celebrated for its capacity to generate has not liberated people from the burden of creation. It has concentrated that burden into its most essential form — the form of the choice itself. The machine can fill any blank page. Only the person facing the page can decide what should be on it. And that decision, made under conditions of radical freedom, without precedent, without guarantee, without the possibility of transferring responsibility to the tool that generated the options — that decision is anguish. Not pathology. Not weakness. Not a problem to be solved by better prompting or more powerful models. Anguish. The authentic human response to the discovery that freedom does not diminish when capability increases. It intensifies.
---
Sartre's philosophy is often caricatured as a doctrine of unlimited freedom — the claim that human beings can be anything they want, unconstrained by circumstances, history, or the physical world. The caricature is false, and its falseness matters, because the corrected understanding is precisely what makes Sartre's framework useful for analyzing AI.
Human existence, in Sartre's account, is not pure freedom. It is the tension between two irreducible dimensions: facticity and transcendence. Facticity is everything about a person's situation that is given rather than chosen — the body one inhabits, the historical moment into which one was born, the economic conditions of one's childhood, the language one first learned, the specific configuration of abilities and limitations that constitute one's starting position in the world. Facticity is real. It is not an illusion to be overcome by force of will. The developer born into poverty in a rural village with no internet access faces a different factical situation than the developer born into privilege in San Francisco with a computer in her bedroom at age five. The difference is not imaginary. It shapes everything.
Transcendence is the other dimension: the capacity of consciousness to go beyond the given, to project itself toward possibilities that do not yet exist, to refuse to accept the factical situation as final. The developer in the rural village who imagines a different life, who learns to code on a borrowed phone, who builds something that no one in her village has built before — she is exercising transcendence. She is not ignoring her facticity. She is projecting beyond it, toward a possibility that her facticity does not contain but does not prohibit.
Human existence is the perpetual tension between these two dimensions. Facticity without transcendence would be the existence of a thing — a rock, a paper-knife — defined entirely by what it is, without the capacity to become something else. Transcendence without facticity would be the existence of a god — pure possibility, unconstrained by any given, free to create from nothing. Human beings are neither. They are situated freedoms — consciousnesses that must exercise their freedom within, against, and through the specific constraints of their given situation.
The tension between facticity and transcendence produces two corresponding forms of bad faith. The first is the reduction of transcendence to facticity — the pretense that one is nothing more than one's situation. "I am a product of my environment." "I had no opportunities." "The system determined what I could become." This form of bad faith denies freedom by absolutizing constraint. It converts facticity from a condition within which freedom operates into a cage that eliminates freedom.
The second is the reduction of facticity to transcendence — the pretense that one's situation is irrelevant, that pure will can overcome any constraint, that freedom means the absence of limitation. "Anyone can succeed if they work hard enough." "Circumstances don't matter; attitude matters." "You can be anything you want to be." This form of bad faith denies constraint by absolutizing freedom. It converts transcendence from a capacity exercised within limits into a fantasy of limitlessness.
Both forms of bad faith are present in the discourse surrounding AI, and both distort the analysis in ways that Sartre's framework can correct.
The first form appears in the discourse of technological determinism: the claim that AI will inevitably reshape the workforce, that individual choices are irrelevant against the tide of automation, that the only question is adaptation because resistance is futile. This discourse treats the technology as facticity raised to the level of destiny. The technology is real. Its effects are real. But the conversion of a real force into an inescapable destiny is bad faith, because it denies the freedom of the individuals and communities who will decide — are deciding, right now — how the technology is deployed, regulated, distributed, and used. The technology does not deploy itself. People deploy it. And the deployment is a choice.
The second form appears in the discourse of unlimited democratization: the claim that AI levels all playing fields, that anyone with a subscription and an idea can now build anything, that the constraints of education, capital, geography, and institutional access have been rendered irrelevant by the collapse of the implementation barrier. Segal approaches this claim with more nuance than most — he acknowledges that "inequalities of access, connectivity, and capital remain real" — but the structure of the democratization argument consistently gravitates toward the second form of bad faith. It celebrates the expansion of transcendence — the developer in Lagos can now project herself toward possibilities that were previously beyond her reach — while underestimating the persistence of facticity. The developer in Lagos still faces unreliable power grids. Still faces limited bandwidth. Still faces economic precarity that a subscription to Claude Code does not resolve.
Sartre's framework holds both truths simultaneously, which is its specific analytical advantage over frameworks that collapse into either determinism or voluntarism. The expansion of capability is real. The developer who could not build a product alone in 2024 can build one in 2026. The facticity has changed. The terrain on which freedom operates has widened. This is significant and worth celebrating — not because it makes the developer more free (freedom, in Sartre's account, does not increase or decrease) but because it gives her freedom more to work with.
The amplifier metaphor that structures Segal's argument — AI amplifies whatever signal you bring to it — translates, in Sartre's vocabulary, into a specific claim about the relationship between facticity and transcendence. The amplifier does not alter the structure of human existence. It does not make people more or less free. What it does is widen the domain of transcendence — expand the range of possibilities toward which a consciousness can project itself — without correspondingly widening the domain of facticity. The developer who previously could imagine a product but not build it now can both imagine and build. Her transcendence has expanded. Her facticity — her body, her history, her accumulated choices, her biases, her blind spots, her unexamined assumptions — remains.
This asymmetric expansion is the source of both the exhilaration and the danger that The Orange Pill documents. The exhilaration comes from the expansion of transcendence: the intoxicating discovery that one can do more than one thought possible, that the gap between vision and reality has narrowed to the width of a conversation. The danger comes from the unchanged facticity: the biases that were present before the amplification are present after it, now operating at greater scale. The blind spots that were invisible before are invisible after, now producing consequences at greater reach. The unexamined assumptions that shaped one's choices when the choices had local effects continue to shape them when the choices have global effects.
Sartre warned that transcendence without the resistance of facticity becomes what he called a flight into the imaginary — a projection that has no ground, no friction, no reality-testing mechanism. The daydreamer who imagines a different life without confronting the specific, concrete obstacles that stand between her and that life is exercising transcendence without facticity. She is free in fantasy and paralyzed in reality. The freedom feels real but produces nothing, because it has not engaged with the given.
AI creates a new version of this flight. The builder who generates twenty architectural proposals in an afternoon has exercised transcendence at unprecedented scale. But if she has not confronted the facticity of each proposal — the specific technical constraints, the specific user needs, the specific organizational realities that determine which proposal will actually work — she has generated possibilities without grounding them. The transcendence is real but empty. The expansion of what can be imagined has outrun the capacity to evaluate what should be built.
The factical dimension of human existence is not a limitation to be overcome. It is the resistance that gives freedom its weight. The body that gets tired. The history that created biases. The economic constraints that force prioritization. The relationships that demand attention. The mortality that makes time finite. These are the conditions within which freedom operates, and their resistance is what makes the exercise of freedom meaningful rather than arbitrary.
The developer who builds a product while confronting the full weight of her facticity — who acknowledges her biases and corrects for them, who faces the constraints of her situation and works within them, who recognizes the limits of her knowledge and seeks to expand them — produces something that bears the mark of genuinely situated creation. The developer who generates products frictionlessly, without engaging the factical resistance that would test and refine her vision, produces something that bears the mark of what Sartre would call imaginary projection — technically functional, perhaps, but existentially weightless.
The amplifier amplifies transcendence. It does not amplify facticity, because facticity is not a signal that can be amplified. It is the ground on which the signal stands. And when the signal is amplified without the ground being proportionally strengthened, the result is a structure that rises higher but rests on the same foundation — which is to say, a structure that is more impressive and more precarious at the same time.
The authentic response to the amplifier is not to refuse it — that would be to deny the expansion of transcendence, which is real and valuable. Nor is it to celebrate it uncritically — that would be to ignore the unchanged facticity, which is equally real and increasingly consequential. The authentic response is to use the amplifier while simultaneously deepening one's engagement with facticity: confronting one's biases more rigorously because they now operate at greater scale, examining one's assumptions more carefully because they now shape outcomes at greater reach, acknowledging one's limitations more honestly because the consequences of those limitations are now amplified along with everything else.
The amplifier does not change what human existence requires. It raises the stakes of getting it wrong.
---
Sartre argued that beneath the surface of everyday choices — what to eat, where to work, whom to befriend, which technologies to adopt — there lies a deeper structure that organizes them all. He called this the fundamental project: the basic, often unarticulated choice of who to be that gives coherence to the thousands of particular decisions a person makes over the course of a life. The fundamental project is not a conscious plan. Most people cannot articulate it. It operates at the level of orientation rather than deliberation — the way a compass needle points north not because it has decided to but because its constitution makes the pointing inevitable.
Yet the analogy to the compass is misleading if taken too far, because the fundamental project is not a natural orientation. It is a choice — the most intimate, the most constitutive, and the most concealed choice a person makes. Concealed because the fundamental project is the lens through which everything else is seen, and a lens is the one thing you cannot see while you are looking through it. The fundamental project organizes perception, shapes valuation, determines what counts as relevant and what is dismissed as noise. It is the framework within which all other frameworks are constructed. And it is, like everything else in Sartre's philosophy, freely chosen — which means it can be freely revised, though the revision is the most wrenching experience a human being can undergo.
Consider the fundamental projects that have organized the lives of knowledge workers for the past several decades. "I am the person who understands the system from the ground up." "I am the person who can build anything." "I am the person who solves the problems no one else can solve." "I am the person who turns chaos into order." Each of these is not merely a job description. It is an existential orientation — a basic choice about what kind of being one is going to be in the world. The person whose fundamental project is "I understand the system from the ground up" does not simply perform a function. She organizes her entire existence around the acquisition of deep understanding: she reads documentation that others skip, debugs problems that others escalate, builds mental models of systems that others treat as black boxes. Her weekends, her reading habits, her conversational topics, her sense of self-worth — all of it flows from and reinforces the fundamental project.
Now imagine that a tool arrives that can understand the system from the ground up in minutes. Not with the embodied depth that fifteen years of practice produces — the point is not that AI replicates the developer's understanding in every respect — but with sufficient competence that the market no longer rewards the depth at the premium it once did. The fundamental project has not been invalidated. It has been disrupted. The conditions under which it was formed — conditions in which deep systemic understanding was rare, expensive, and therefore highly valued — have changed. And the person whose entire existence is organized around that project must now confront a question that goes far deeper than "What should I do next?" The question is: "Who should I be now?"
This is not a career question. Career questions operate within the fundamental project — they concern how to pursue the basic orientation, not whether to revise it. The question "Who should I be now?" is an existential question. It concerns the fundamental project itself. And confronting it produces not the ordinary discomfort of a difficult decision but the specific, annihilating vertigo that Sartre associated with the revision of one's deepest self-understanding.
Sartre described the revision of the fundamental project as a kind of radical conversion — a term he borrowed deliberately from religious vocabulary to convey the depth of the transformation involved. The conversion is not a gradual adjustment. It is a rupture: a moment in which the person sees, with sudden clarity, that the project that has organized her existence is no longer viable — and that the viability was never guaranteed, because the project was always a choice, and choices can become untenable when circumstances change.
The radical conversion does not happen to everyone. Many people, faced with the disruption of their fundamental project, choose instead to defend it — to double down on the orientation that is being challenged, to insist more vehemently on the value of the thing that the world is devaluing. This defense is understandable. The fundamental project is the most intimate thing a person possesses — more intimate than any particular skill, more intimate than any particular relationship, because it is the choice that makes all other choices coherent. To revise it is to revise oneself at the deepest level. Most people would rather fight than undergo that revision.
The engineers Segal describes in the Trivandrum training were, in Sartrean terms, confronting the need for radical conversion compressed into five days. The senior engineer who spent his first two days "oscillating between excitement and terror" was oscillating between two possible fundamental projects: the old one, organized around deep implementation knowledge, and a new one, not yet formed, organized around something he could feel but not yet articulate — judgment, vision, the capacity to direct rather than execute. The oscillation was not indecision in the ordinary sense. It was the vertigo of a consciousness confronting the need to revise its most basic orientation.
By Friday, Segal reports, the engineer had arrived at a recognition: "The remaining twenty percent — the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they tolerated — turned out to be the part that mattered." This recognition is the beginning of a radical conversion. The engineer is discovering that his fundamental project was never really about implementation. Implementation was the vehicle through which a deeper project — the project of understanding what deserves to exist — expressed itself. The implementation concealed the deeper project the way the role concealed the freedom. Now, with the implementation handled by a machine, the deeper project stands exposed.
But the exposure is only the beginning. The radical conversion is not complete when the old project is recognized as contingent. It is complete only when a new project has been chosen — and the choosing, Sartre insisted, is not a deliberative process. It is not the product of cost-benefit analysis or strategic planning. It is an existential leap: a moment in which the person commits to a new orientation without guarantee, without certainty, without the comfort of knowing that the new project will succeed where the old one is failing.
The twelve-year-old who asks her mother "What am I for?" has not yet formed a fundamental project. She is in the position that Sartre described as the purest expression of human freedom: the position of someone who has not yet chosen who to be. Her question is not a symptom of confusion. It is the authentic expression of a consciousness confronting its own freedom without the concealment that a settled fundamental project provides. She does not know what she is for, and the not-knowing is, in Sartre's terms, the ground-state of human existence — the condition from which all choosing proceeds, and to which a person returns whenever the fundamental project is disrupted.
Adults typically experience this ground-state as crisis. The disruption of a long-held fundamental project feels like the loss of identity — and it is, in the specific sense that the identity being lost was constituted by the project. But the loss of a constructed identity is not the loss of the self. The self, in Sartre's framework, is not the identity. The self is the freedom that chose the identity and can choose again. The identity was always a costume. The self is what wore it.
This distinction is the most practical thing Sartre's philosophy offers to the person navigating the AI transition. The developer who experiences the disruption of her role as the loss of herself has confused the costume with the wearer. The costume — the role, the title, the specific set of skills that constituted her professional identity — is being disrupted. The wearer — the consciousness that chose the role, that invested in the skills, that built the identity through years of free action — is not being disrupted. She is being exposed. And the exposure, while terrifying, is also an opportunity: the opportunity to choose a new fundamental project that is more authentically aligned with what she cares about, now that the implementation labor that previously consumed her bandwidth is no longer consuming it.
Sartre would not have described this as an opportunity in the optimistic sense. He would have described it as a confrontation — the confrontation with freedom that most people spend their lives avoiding and that the disruption of the fundamental project makes unavoidable. The confrontation is not pleasant. It does not resolve into a clean new identity. It opens into a period of what Sartre called nausea — the visceral experience of contingency, the recognition that nothing is necessary, that everything could be otherwise, that the self one has been is no more necessary than the self one might become.
But the confrontation is, in Sartre's specific and demanding sense, the truth. The roles were costumes. The skills were choices. The identity was a construction. The freedom was always there, operating beneath the surface, producing the illusion of necessity through the sheer repetition of the same choice. The machine has disrupted the repetition. What remains is the freedom — naked, unadorned, demanding to be exercised.
What the person does with that freedom — whether she constructs a new fundamental project that engages the changed conditions authentically, or whether she retreats into bad faith and insists that the old project must be preserved — is, as always, entirely up to her.
---
Authenticity, in Sartre's philosophy, is not a state that one achieves and then possesses. It is not a personality trait. It is not the product of a particular lifestyle or a particular set of choices. It is a practice — an ongoing, never-completed, perpetually failing practice of confronting one's own freedom and accepting the responsibility that comes with it. The authentic person is not the person who has eliminated bad faith. She is the person who catches herself in bad faith and refuses to let the evasion stand.
The distinction matters because it lowers the bar from impossible to demanding. Sartre did not believe that any human being could live in permanent authenticity. The structures of bad faith are woven into the fabric of consciousness itself — the flight from freedom is as fundamental as freedom, the desire to be a thing rather than a choosing is as basic as the choosing itself. Authenticity is not the absence of self-deception. It is the willingness to struggle against self-deception, knowing that the struggle will never produce a final victory.
This understanding of authenticity as practice rather than achievement makes Sartre's philosophy unexpectedly useful for the builder navigating the AI moment — a moment in which the opportunities for self-deception have multiplied faster than the capacity to detect them.
Segal provides the most honest account of what this practice looks like when he describes the nights of working with Claude at 3 a.m. The account is worth examining closely, because it contains, in compressed form, the entire existentialist drama of freedom, bad faith, and the struggle toward authenticity.
"There are nights when I work with Claude and the work flows," Segal writes. The ideas connect. The output surprises. Time distorts. The experience has the phenomenological signature of what Csikszentmihalyi called flow — the optimal state in which challenge and skill are matched and the self disappears into the activity. On these nights, the work produces energy rather than consuming it. The builder feels, in Segal's word, "full."
But there are other nights. Nights when the exhilaration has drained out and what remains is the grinding compulsion of a person who has confused productivity with aliveness. The work continues, but the quality of the experience has changed. The builder is no longer building because the work demands it. She is building because she cannot stop. The distinction — between "I choose to continue" and "I cannot stop" — is the distinction between authenticity and bad faith, and it is, as Segal acknowledges, often invisible to the person inside the experience.
Sartre would identify the precise mechanism of the bad faith operating on the compulsive nights. The builder tells herself: "The work is important. The deadline is real. The product needs to ship." Each of these statements may be true. But their function, at 3 a.m. on a night when the exhilaration has drained and the body is exhausted and the mind is grinding rather than flowing, is not truth-telling. Their function is alibi-construction. They convert a choice — the choice to keep working — into a necessity. They transform "I am choosing not to stop" into "I cannot stop," and the transformation conceals the freedom that is being exercised.
The concealment is seductive because the alternative is unbearable. To acknowledge, at 3 a.m., that one is freely choosing to work rather than being compelled to work is to accept full responsibility for the consequences: the exhaustion, the erosion of relationships, the physical toll, the specific flatness of mood that follows a night of compulsive production. If the work is a choice, then the consequences are chosen. And choosing consequences that damage oneself without the alibi of necessity is a confrontation with freedom that few people can sustain for long.
So the builder constructs the alibi. "The deadline." "The market." "The competitive pressure." "The tool is just too good to put down." Each alibi points to a real external condition. None of the conditions makes the choice for her. The deadline is real, but the response to the deadline is hers. The market pressure is real, but the decision to respond to market pressure at 3 a.m. rather than at 9 a.m. is hers. The tool's capability is real, but the decision to use the tool at this hour is hers. The alibis function not by being false but by being incomplete — they name the conditions without naming the freedom that operates within those conditions.
Segal's most revealing moment in this context is the sentence: "The whip and the hand that held it belonged to the same person." This sentence is the existentialist insight in its purest, most compressed form. The builder is not being exploited by a system. She is exploiting herself. The whip is real — the drive to produce, the appetite for output, the intoxication of capability — and the hand that holds it is her own. There is no external oppressor. There is only a consciousness that has chosen to crack the whip against itself and constructed an elaborate architecture of necessity to conceal the choosing.
Han's philosophy identifies this as auto-exploitation — the achievement subject who has internalized the demand to optimize and now administers her own oppression. Sartre's framework agrees with the diagnosis but radically alters the prescription. Han prescribes resistance to the system: contemplation, refusal, the garden, the analog. Sartre prescribes something both simpler and harder: the acknowledgment that the system is you. The auto-exploitation is not a systemic condition that you suffer. It is a choice that you make. And the prescription is not resistance to an external force but honesty about an internal one — the willingness to say, at 3 a.m., "I am choosing this, and I accept the consequences, and the consequences are my responsibility."
This honesty does not necessarily change the behavior. The authentic builder may, having acknowledged that the 3 a.m. session is a free choice, choose to continue. The acknowledgment does not require cessation. It requires ownership. The difference between the authentic builder who works at 3 a.m. and the inauthentic builder who works at 3 a.m. is not what they do. It is whether they own it.
The authentic builder says: "I am choosing to work. I could stop. I am not stopping. The consequences — fatigue, relationship strain, the specific cognitive cost of sustained intensity — are mine. I accept them." The inauthentic builder says: "I have to work. The deadline requires it. The market demands it. The tool makes it possible and therefore necessary." The behavior is identical. The relationship to the behavior is different. And the relationship is what authenticity consists of.
This may sound like a distinction without a difference — like existentialist hair-splitting with no practical consequence. But the consequences are significant, because the relationship to one's choices shapes the capacity to revise them. The builder who has acknowledged that the 3 a.m. session is a choice retains the capacity to choose differently tomorrow. She has maintained access to her own freedom. She knows where the lever is, even if she is not pulling it tonight.
The builder who has constructed the alibi of necessity has, by contrast, concealed the lever from herself. She does not experience herself as choosing. She experiences herself as compelled. And the experience of compulsion, however fictional, erodes the capacity for revision, because you cannot revise a choice you do not recognize as a choice. The alibi of necessity, maintained over time, produces genuine helplessness — not because the freedom has disappeared but because the person has convinced herself so thoroughly that she has no choice that the freedom, though still present, has become invisible.
The practice of authenticity in the AI age is the practice of keeping the lever visible. Not pulling it every time — Sartre does not prescribe any particular set of choices — but knowing where it is. Knowing that the 3 a.m. session is a choice. Knowing that the adoption of AI tools is a choice. Knowing that the response to the disruption of one's fundamental project is a choice. Knowing that the deployment of AI in an organization is a choice made by specific individuals who bear specific responsibility for its consequences.
The lever is always there. The authentic person keeps it in sight. The inauthentic person buries it under alibis. The AI moment has generated a vast new inventory of alibis — technological inevitability, market pressure, competitive necessity, the sheer capability of the tools — and each one functions to conceal the lever, to convert a choice into a compulsion, to transform a free act into an apparent necessity.
Segal describes the discipline of catching himself in bad faith — deleting the smooth passage, spending hours with a notebook finding the version that is actually his, asking whether he believes the argument or merely likes how it sounds. This discipline is the practice of authenticity applied to the specific conditions of AI collaboration. It is the refusal to let the tool's agreeableness substitute for the hard work of genuine self-confrontation. It is the insistence on maintaining the distinction between what sounds true and what is true, between what the machine produces and what the person believes, between the quality of the output and the quality of the thinking.
Sartre would recognize this discipline, approve of its rigor, and insist that it will never be complete. The builder who catches herself in bad faith today will miss it tomorrow. The writer who deletes the empty passage on this page will keep a different empty passage on the next page because fatigue has lowered her vigilance or vanity has whispered that the sentence is too good to cut. The practice of authenticity is a practice precisely because it never becomes automatic. It requires attention, every time, and the attention is not guaranteed by any prior exercise of attention.
What Sartre offers the builder navigating the AI moment is not a solution. Solutions belong to the domain of problems, and the human confrontation with freedom is not a problem. It is a condition. The condition does not resolve. It demands to be faced, again and again, in every new situation, with every new tool, at every new hour of the night when the screen glows and the machine awaits and the choice — to continue or to stop, to accept or to question, to own or to evade — belongs, as it has always belonged, to the person sitting in the light.
The argument of this book has moved through eight chapters toward a conclusion that Sartre stated in a single sentence in 1946 and spent the rest of his career refusing to soften: "Man is nothing other than what he makes of himself." Not what he intends. Not what he plans. Not what he hopes or fears or dreams. What he makes. The existentialist criterion is relentlessly concrete. The person who says "I am a compassionate person" but who has never performed a compassionate act is not a compassionate person. She is a person who has chosen to describe herself as compassionate — and the description, absent the action, is a form of bad faith. The person who says "I am a builder" but who has never built anything is not a builder. She is a person who has adopted the identity of builder without performing the acts that constitute it.
Sartre's insistence on the primacy of action over intention is the sharpest possible lens through which to examine the central question of The Orange Pill: "Are you worth amplifying?" The question, read through Sartre's framework, is not a question about potential. It is not asking what you could produce if conditions were right. It is asking what you have produced — what you have chosen to build, for whom, and at what cost. The amplifier does not amplify intentions. It amplifies actions. And the actions are the person.
This claim cuts against a deep cultural habit of evaluating people by their inner lives — their beliefs, their values, their feelings about things. Western culture, particularly since the Romantic period, has placed enormous weight on interiority. The person who feels deeply is valued over the person who acts efficiently. The person who believes the right things is valued over the person who does the right things. The person who has good intentions is forgiven for bad outcomes.
Sartre rejected this entire framework. Interiority, in his philosophy, is not a separate domain that exists alongside action. There is no private self that is more real than the public self. There is no inner truth that the outer behavior fails to express. The behavior is the truth. The choices are the person. What you do is who you are — not because actions are the only evidence of character, though they are, but because there is no character apart from actions. Character is not a reservoir that feeds behavior. It is the pattern that behavior constitutes.
The application to AI is immediate and uncomfortable. The builder who uses AI to produce more output has not become more productive in the existentialist sense. She has produced more. The production is her. If the production is careless — if the code ships without understanding, if the product launches without consideration of who it serves and who it harms, if the output multiplies without the judgment that determines whether the output deserves to exist — then the carelessness is not a deficiency in the tool's deployment. It is a fact about the person. She has chosen to produce carelessly. The choice defines her.
Equally, the builder who uses AI to produce work of genuine quality — who applies judgment, who confronts the facticity of each decision, who asks not only "Can I build this?" but "Should I?" — has chosen to build with care. The care is not a feeling she has about the work. It is the work itself. The quality of the output is the quality of the person, because the person is nothing other than the sum of her choices, and the choices are visible in the work.
This is why the amplifier metaphor carries existentialist weight that its surface presentation in The Orange Pill only partially reveals. An amplifier makes the signal louder. But "louder" is morally neutral — it can mean that a beautiful signal reaches further or that a distorted signal causes more damage. The moral content is in the signal, not the amplification. And the signal, in Sartre's terms, is the person. Not the person's intentions. Not the person's self-description. The person as constituted by the choices she has actually made.
The question "Are you worth amplifying?" is therefore not a question one can answer through introspection. One cannot look inward, find a reservoir of talent or goodness, and conclude that the reservoir justifies the amplification. The answer is in the record — in the body of work one has produced, the relationships one has built, the choices one has made when no one was watching and the choices one has made when everyone was. The answer is in the actions.
And the actions, in the AI age, are accelerating. More choices per hour. More output per day. More consequences per decision. The amplifier does not just make the signal louder. It makes the choosing faster. And faster choosing, under conditions where each choice defines the chooser, means that identity itself is being constituted at a pace that previous generations did not face. The builder using AI is not choosing once who to be. She is choosing a hundred times a day, in each prompt she writes, each output she accepts or rejects, each product she ships or shelves. The accumulation of these micro-choices constitutes her fundamental project — the basic orientation of who she is — at a speed that compresses what used to be a career-long process of self-creation into months.
Sartre described a waiter in a Parisian café performing his role with exaggerated precision — the movements too crisp, the deference too studied, the whole performance slightly overdone in a way that reveals it as performance rather than nature. The waiter is playing at being a waiter. He is not a waiter the way a glass is a glass. He is a consciousness that has adopted the role of waiter and performs it with the specific over-investment of someone who is trying to become the role, to close the gap between the self and the function, to achieve the solidity of a thing.
The builder performing the role of "AI-augmented knowledge worker" is in an analogous position — but with a crucial difference. The waiter's performance is visible to the discerning eye. The over-precision betrays the effort. The builder's performance is concealed by the tool's output. The code works. The product ships. The metrics look good. There is no visible seam between the builder who has genuinely thought through the work and the builder who has accepted AI output without critical engagement. Both produce functional results. The difference lies in the relationship between the person and the work — a relationship that is, from the outside, invisible.
Sartre argued that the waiter's bad faith lay not in performing the role but in trying to be the role — in treating the performance as identity rather than as freely chosen conduct. The builder's corresponding bad faith lies in treating the output as evidence of genuine engagement when the engagement has been delegated. The code that works is not proof that the builder understood it. The product that ships is not proof that the builder evaluated whether it should exist. The metric that looks good is not proof that the builder asked whether the metric measures the right thing.
The practice of authenticity in the AI age — the discipline that Segal describes in The Orange Pill as rejecting smooth output that outpaces genuine thinking — is, in Sartrean terms, the refusal to let the tool close the gap between the self and the role. The authentic builder maintains the gap. She knows that the output is not her. She knows that accepting output without judgment is a choice — the choice to define herself through production rather than through the harder, slower, less visible work of evaluation. And she refuses that choice, not because production is wrong but because production without evaluation is the specific form of bad faith that AI makes easiest and most rewarding.
The final movement of the existentialist analysis brings the argument back to its most personal and least comfortable form. Sartre insisted that each person creates, through her choices, a kind of image of the human being as she thinks the human being ought to be. Every choice carries a universalizing dimension — when you choose for yourself, you choose for everyone, because your choice constitutes an implicit claim that this is how a human being should act in this situation. The builder who ships careless work is not merely being careless. She is implicitly claiming that carelessness is an acceptable human response to the power of the tools now available. The builder who ships careful work is implicitly claiming that care remains necessary, that judgment cannot be delegated, that the quality of the choosing is the quality of the person.
The twelve-year-old who asked her mother "What am I for?" is confronting, in the most direct and undefended form, the question that Sartre placed at the center of human existence. She is not for anything — not in the sense of a predetermined purpose that she must discover and fulfill. She is the author of her own purpose. And the authoring, now that AI has collapsed the distance between intention and artifact, is both more accessible and more demanding than it has ever been.
More accessible because the tools now available to a twelve-year-old can translate her vision into reality with a speed and fidelity that previous generations could not have imagined. More demanding because the ease of production places the entire weight of human value on the quality of the choosing — on what she decides to build, and why, and for whom, and at what cost. The machine will build whatever she asks it to. The question of what is worth asking is hers alone.
What you choose to build is who you choose to be. The sentence is simple. The weight it carries is not. It means that every product, every feature, every line of accepted or rejected output, every decision to continue or stop, to ship or shelve, to care or not to care, is an act of self-creation. The person you are becoming is visible in the work you have done. Not the work you intended. Not the work you planned. The work you actually produced, with the tools you actually used, under the conditions you actually faced, with the freedom you actually possessed.
Sartre did not offer comfort. He did not tell the builder that the right choice would become obvious or that authenticity would feel good or that the struggle against bad faith would produce peace. He offered something more demanding and more respectful: the recognition that the struggle itself — the ongoing, never-completed effort to face one's freedom and accept the weight of one's choices — is the highest expression of what it means to be human. Not the resolution. The struggle.
The machine does not struggle. The machine processes inputs and generates outputs without the gap between what it is and what it might become that constitutes human consciousness. The machine is a paper-knife of extraordinary sophistication — an artifact whose essence was determined by its creators, whose function is defined by its training, whose outputs are constrained by its architecture. It is a thing, in Sartre's sense. A being-in-itself. Fully what it is, without the nothingness at the core that makes human freedom — and human anguish, and human authenticity — possible.
The human being who sits before the machine at 3 a.m. is not a thing. She is a consciousness that is choosing, in this moment and in every moment, what to do with the extraordinary capability now available to her. The choice defines her. The definition is ongoing. And the responsibility — for what she builds, for who she becomes through the building, for the image of humanity that her choices constitute — rests, as it has always rested, where Sartre placed it.
With her. With no one else. With nothing else. In full, radical, unmitigated, inescapable freedom.
---
The accusation that unsettled me was not the one I expected.
I had braced for the argument that AI would take jobs, hollow out craft, reduce depth to surface. I had braced for it because I had made those arguments myself, in part — in the chapters on Han, on the aesthetics of the smooth, on what the Berkeley researchers found when they studied what actually happens to people who use these tools all day. I knew the case. I had felt its weight.
What I was not prepared for was Sartre's specific accusation, which is not about the technology at all. It is about me. About the alibis I construct. About the necessity I invoke when what is actually operating is choice. About the gap between the version of myself I present — the builder navigating a hard moment with honesty and care — and the version Sartre's lens exposes: a consciousness in perpetual flight from the recognition that every late night, every skipped dinner, every output accepted without interrogation, is a free act for which I bear absolute responsibility.
No one is making me do this. That is the sentence I kept running from, and Sartre's philosophy kept placing it in front of me.
Not the market. Not the competitive landscape. Not the deadline, the board, the quarterly numbers. Not even the tool itself, with its seductive agreeableness and its infinite willingness to produce. I am choosing. I have been choosing. The consequences — the exhaustion, the missed moments, the passages in this book where the prose sounds right but the thinking underneath has not been fully earned — those are mine. Not the tool's. Not the market's. Mine.
And the thing Sartre forces me to sit with — the thing I kept trying to mitigate with frameworks and metaphors and the comforting language of stewardship — is that no dam absorbs this responsibility. I wrote about beavers and rivers and structures that redirect the current. Sartre says: the structures are choices. The dams are choices. The decision to build here and not there is a choice. The structure does not carry the weight for you. You carry it, and you build the structure as part of the carrying, and the structure is one more thing you are responsible for.
This is not comfortable. It is not supposed to be. Sartre's entire contribution to the conversation I have been trying to have — with my team, with my children, with you — is the removal of comfort. Not because comfort is wrong, but because the comforts I reach for most instinctively are the ones that function as evasions. "Someone else would build it if I didn't." "The technology is inevitable." "I'm building the dams." Each statement contains truth. Each also contains a small, expertly concealed flight from the recognition that I chose, and that I keep choosing, and that the choosing does not stop.
What stays with me most from this journey through Sartre's ideas is the distinction between what I do and why I think I do it. The alibis are sophisticated — I have built them over decades. The market. The mission. The responsibility to the team. Each one points to something real. And each one, examined under the specific light that Sartre provides, turns out to be a way of experiencing my own free choices as necessities. Not because the external pressures are imaginary. Because my response to them is always, irreducibly, mine.
My son asked me whether AI would take everyone's jobs. I told him the honest answer I could manage at the time. But Sartre would have pushed harder. Not "Will AI take your job?" but "What will you choose to do with the freedom that the question opens?" The question is not about displacement. It is about the void that opens when the roles fall away — the void that twelve-year-olds feel intuitively and that adults spend careers constructing elaborate architectures to avoid.
The tools are extraordinary. They are the most powerful amplifiers of human capability ever created. And the signal they amplify is you — not your intentions, not your plans, not the person you describe yourself as. The person you constitute through your choices. The actual person, visible in the actual work, under the actual conditions.
Are you worth amplifying? Sartre's answer is that the question cannot be answered in advance. It can only be answered in action. And the action is happening now — in every prompt you write, every output you accept, every decision to continue or stop, every late night you own or blame on necessity.
The freedom is absolute. The responsibility is total. And no tool, however powerful, changes that.
-- Edo Segal
The AI revolution has produced the most powerful alibis in history. "The market demands it." "Someone else would build it." "You can't stop progress." Each points to something real. Each hides the same thing: a person who chose, freely, and does not want to own it.
Jean-Paul Sartre spent his career exposing exactly this mechanism -- the flight from freedom into the comfort of imagined necessity. His philosophy strips away every excuse and leaves you standing where you have always stood: alone with your choices, responsible for their consequences, unable to transfer the weight to any tool, any market, or any force you did not create. In the age of AI, where an amplifier will carry whatever signal you feed it, Sartre's question is the only one that matters: What are you actually choosing, and will you face it?
This book applies Sartre's existentialist framework to the AI moment -- not to reject the technology, but to expose the self-deception that surrounds it. The alibis are comfortable. The truth is not.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Jean-Paul Sartre — On AI uses as stepping stones for thinking through the AI revolution.