By Edo Segal
The sentence that rearranged everything was not about artificial intelligence. It was about driving.
Rebecca Solnit pointed out that driverless cars are called autonomous vehicles, but driving is not an autonomous activity. It is a cooperative social activity. The person behind the wheel communicates with other people on the road through eye contact, through gesture, through the thousand micro-negotiations that allow millions of strangers to share asphalt without killing each other in numbers far greater than they do.
Remove the human from the car, and you do not get a more efficient driver. You get something that cannot make eye contact.
I read that and the ground shifted. Not about cars. About everything I had been building.
The entire premise of the AI acceleration I describe in *The Orange Pill* rests on a claim about what can be automated and what cannot. I drew the line at questions, at judgment, at caring. Solnit draws the line somewhere more uncomfortable. She draws it at the social. At the cooperative. At the fact that most of what we call work is not an information-processing task performed by an individual but a negotiation conducted between people who bring different contexts, different needs, different stakes to the interaction.
Teaching is not transmitting data. It is reading a student's face. Medicine is not outputting a classification. It is managing fear. Building a product is not generating code. It is the thousand conversations in which a team discovers what the product should be. These activities can be assisted by machines. They cannot be replaced by them, because replacement eliminates the very thing that makes the activity valuable.
Solnit is not a technologist. She is a writer and historian of activism who has spent decades studying how power moves, who captures it, and what ordinary people do when the ground shifts beneath them. Her framework offers something the technology discourse cannot generate from inside itself: the recognition that the AI transition is not primarily a technology story. It is a power story. And power stories are decided not by the capabilities of the tools but by the institutional choices of the people who deploy them.
She also offers something I needed personally. A vocabulary for the compound emotional state I described as falling and flying simultaneously. She calls it hope — not the passive hope that expects a good outcome, but the demanding hope that acts without guarantees because the outcome is genuinely uncertain and the uncertainty is what makes your participation matter.
The next ten chapters will take you through her patterns of thought. They will not make you comfortable. They will make you more honest about what this moment demands.
— Edo Segal × Opus 4.6
Rebecca Solnit (1961–present) is an American writer, essayist, and historian of activism whose work spans politics, landscape, memory, art, and the exercise of power. Born in Bridgeport, Connecticut, and long based in San Francisco, she is the author of more than twenty books, including *Hope in the Dark: Untold Histories, Wild Possibilities* (2004), which distinguishes hope from optimism, treating hope as a practice rather than a disposition; *A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster* (2009), documenting how catastrophe produces cooperation rather than chaos; *A Field Guide to Getting Lost* (2005), on disorientation as a creative condition; and *Men Explain Things to Me* (2014), whose title essay helped catalyze the concept of "mansplaining" in public discourse. Her 2024 *London Review of Books* essay "In the Shadow of Silicon Valley" offered one of the sharpest critiques of the technology industry's reshaping of urban life and democratic governance. A contributing editor at *The Guardian* and recipient of the National Book Critics Circle Award and a Guggenheim Fellowship, Solnit is recognized as one of the most influential public intellectuals of her generation, known for insisting that the future is genuinely undetermined and that uncertainty is the precondition for meaningful human agency.
In the autumn of 2023, Rebecca Solnit discovered that approximately half her published books had been scraped into a dataset used to train artificial intelligence systems. Her words — decades of carefully constructed arguments about hope, power, landscape, memory, and the politics of who gets to tell the story — had been ingested by machines designed to produce fluent text without understanding any of it. The irony was exquisite and, for anyone paying attention, diagnostic. A writer who had spent her career arguing that meaning is made through human relationship, through the slow accumulation of context and commitment and care, found her life's work converted into training data for systems that simulate meaning without possessing it.
Solnit's response was not the howl of a Luddite. It was the precise anger of someone who understands systems. She aligned herself publicly with Artists Against Generative AI, not because she opposed technology in the abstract, but because she recognized a familiar pattern: the extraction of value from those who create it, by those who build the infrastructure to capture it, justified by a rhetoric of progress that obscures the distribution of the gains. The pattern was older than AI. It was older than Silicon Valley. But AI had given it a new instrument and an unprecedented velocity.
This distinction — between opposing a technology and opposing the power structure that deploys it — is the first and most important distinction in Solnit's intellectual framework. And it maps, with uncomfortable precision, onto the central tension of the AI moment.
The dominant discourse about artificial intelligence in 2025 and 2026 organized itself into two camps with the efficiency of a political rally. On one side, the accelerationists: technologists, investors, and enthusiasts who believed AI represented an unambiguous expansion of human capability, a democratization of creative power, the most important tool since the printing press. Their rhetoric was the rhetoric of inevitability. The river is flowing. Get in or get left behind. The future belongs to those who build. On the other side, the catastrophists: critics, displaced professionals, and cultural commentators who believed AI represented an existential threat to creativity, to employment, to the structures of meaning that make human life worth living. Their rhetoric was the rhetoric of doom. The machines are coming. Nothing can stop them. Prepare for the worst.
Both camps shared a hidden premise that neither acknowledged. Both assumed the outcome was determined. The accelerationists assumed it was determined toward the good. The catastrophists assumed it was determined toward the bad. And both, by assuming the outcome was fixed, arrived at the same practical conclusion: there was nothing to be done. The accelerationists did nothing because the future was already bright. The catastrophists did nothing because the future was already dark. Passivity was the product of both positions, disguised in one case as confidence and in the other as grief.
Solnit's life's work is a sustained argument against exactly this structure. The distinction she draws in *Hope in the Dark*, first published in 2004 and updated in 2016, is between optimism and hope — and the distinction is not semantic. It is operational. Optimism is a disposition. It is the expectation that things will turn out well regardless of what one does. It requires no action because it guarantees a good outcome. Hope is a practice. It is the recognition that the outcome is genuinely uncertain, that the uncertainty is not a deficiency in one's analysis but the actual structure of reality, and that this uncertainty is precisely what makes human action meaningful. If the future were already written, there would be no point in showing up. It is because the future is unwritten that showing up matters.
The accelerationist who insists that AI will democratize creativity is an optimist. The catastrophist who insists that AI will destroy it is a pessimist. Neither is exercising hope, because neither believes the outcome depends on what they do next. Hope is the third position — the position that says the outcome is undetermined, that multiple futures are possible, and that the choices made by real people in real institutions in real time will help determine which future arrives.
This is not a comfortable position. Certainty, even the certainty of catastrophe, provides a kind of rest. If the worst is guaranteed, one can stop struggling. If the best is inevitable, one can stop worrying. Hope offers neither rest nor certainty. It offers only the burden of participation — the recognition that what happens next depends, in part, on you.
*The Orange Pill* arrives at this position through a different route — through the experience of building, through the vertigo of watching machines do in hours what used to take months, through the specific emotional compound of exhilaration and terror that Segal describes as the orange pill moment. But the destination is the same. Segal's book does not promise that AI will produce a good outcome. It does not promise that the builder's ethic will prevail over the extractor's logic. It identifies the conditions under which a good outcome is possible and insists that those conditions require the reader's participation. "The system does not need to collapse," Segal writes in his final chapter. "It needs to grow up and become worthy of the tools it possesses." That sentence is a hope sentence, not an optimism sentence. It acknowledges that the system might not grow up. It acknowledges that worthiness is not guaranteed. And it insists that the possibility of worthiness is enough to demand the effort.
Solnit's framework reveals something about the AI discourse that the discourse itself cannot see: the debate between acceleration and resistance is a debate between two forms of passivity. The accelerationist surrenders agency to momentum. The catastrophist surrenders agency to grief. Neither builds. Neither tends. Neither makes the specific, granular, daily choices that determine whether a powerful technology is deployed for extraction or for flourishing.
The people who matter most in the AI transition are the people Segal calls the silent middle — the parents at kitchen tables, the teachers watching their students, the professionals who feel both the exhilaration and the loss but remain quiet because the discourse rewards clean narratives and punishes ambivalence. Solnit has spent her career writing for exactly this population. Her readers are not ideologues. They are people who hold contradictory truths simultaneously, who feel the weight of the moment without collapsing into naivety or despair, who want to act but do not know how to act in conditions of genuine uncertainty.
Hope in the dark is the emotional address of the silent middle. It is the discipline of people who build without blueprints.
Solnit's distinction also illuminates a structural feature of the AI moment that the technology industry has been remarkably slow to recognize: the difference between the technology and the political economy that deploys it. When Solnit found her books in an AI training dataset, her objection was not to the existence of machines that process language. Her objection was to the specific arrangement by which her labor was captured without consent, without compensation, and without recourse — an arrangement made possible not by the technology itself but by the legal and economic frameworks surrounding it. The technology was the instrument. The extraction was the choice.
This distinction matters enormously because it determines where the intervention goes. If the problem is the technology, the solution is refusal — smash the looms, unplug the machines, return to the garden. If the problem is the political economy, the solution is governance — build institutions, establish norms, create the legal and cultural structures that channel the technology's power toward broadly distributed benefit. Solnit is unambiguous on this point. In a 2026 interview, she argued that search engines, social media, and now AI could have taken a different course — that they should have been managed as public commons for the collective good, but were instead driven by profit through the harvesting of user data, a model now replicated by AI. The technology is not the problem. The ownership structure is the problem. The absence of democratic governance is the problem. The ideology that treats technological deployment as a natural force rather than a political choice is the problem.
Segal reaches a compatible conclusion through a different vocabulary. His concept of ascending friction — the principle that every technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor — implies that the friction worth caring about is not mechanical but institutional. The question is not whether to use AI tools. The question is who governs them, who benefits from them, and whether the institutions surrounding them are adequate to the power they channel. These are political questions, not technical ones. And they are answered not by the technology's capabilities but by the choices of the people and institutions that deploy it.
Solnit would push further than Segal does, and the push matters. Segal's builder ethic — the insistence that the person who understands the technology bears a special responsibility for how it is used — is admirable, but it locates agency primarily in the hands of builders. Solnit's framework distributes agency more broadly. The suffragists were not builders of political technology. The civil rights workers were not architects of legal infrastructure. They were people who showed up, who refused to accept that the current arrangement was the only possible arrangement, who acted in conditions of genuine uncertainty because the uncertainty meant their action might matter. The AI transition needs builders. It also needs citizens, voters, organizers, teachers, parents, and communities who insist that the technology serve broadly distributed human flourishing rather than narrowly concentrated private gain.
Hope is not a feeling. It is a commitment — the commitment to act as though the outcome depends on what you do, even when you cannot prove that it does. In the AI moment, that commitment takes specific forms. It means showing up to the governance conversation rather than assuming the technologists will sort it out. It means demanding transparency in how AI systems are trained, deployed, and evaluated. It means insisting that the people who bear the costs of the transition — the displaced workers, the scraped authors, the communities whose data was harvested without consent — have a voice in determining how the gains are distributed. It means refusing the seductive passivity of both optimism and despair.
The distinction between hope and optimism is not academic. It is the difference between a population that watches the AI transition happen to it and a population that participates in determining what the transition becomes. The outcome is not written. The story is not finished. And the fact that it is not finished is not a source of anxiety. It is the precondition for everything that matters.
Certainty is a cage. This is Solnit's most counterintuitive and most important claim, and it cuts against every instinct the contemporary mind has been trained to follow. The culture of optimization, the culture that Byung-Chul Han diagnoses as the achievement society, treats uncertainty as a problem to be solved. The entire architecture of modern professional life is designed to reduce uncertainty: strategic plans, five-year roadmaps, market projections, risk assessments, scenario analyses. Each instrument promises to convert the unknown into the known, to replace the darkness with light, to give the decision-maker the certainty that what she is building will work.
AI tools amplify this promise by orders of magnitude. Feed the machine enough data and it will predict demand, optimize supply chains, identify market opportunities, draft strategies. The imagination-to-artifact ratio collapses. The gap between question and answer shrinks. The darkness recedes.
And yet. The people who inhabit the AI frontier most fully — the builders Segal describes, the engineers in Trivandrum, the solo founders shipping products over weekends, the developers who cannot stop prompting at three in the morning — report not the comfort of certainty but a radical, destabilizing uncertainty about the most basic questions of professional identity. What am I for? What is my expertise worth? What will the landscape look like in six months? These questions have no answers, not because the analysis is insufficient, but because the situation is genuinely unprecedented. No amount of data resolves the uncertainty, because the uncertainty is structural. It is built into the nature of a transition this fast and this deep.
Solnit argues, across multiple books but most explicitly in *A Field Guide to Getting Lost*, that this kind of uncertainty is not a deficiency. It is a precondition. Not a precondition for comfort, but for freedom. Her argument runs as follows: If the future were determined — if the outcome of the AI transition were already written, either in the triumphalist's narrative of democratized capability or in the catastrophist's narrative of civilizational collapse — then human choice would be irrelevant. The determined future does not need your participation. It arrives regardless. Uncertainty means the future is genuinely open, which means the choices made by real people in real institutions in real time will help determine which future arrives. The uncertainty that feels like vertigo is actually the space in which agency operates.
This is not a rhetorical trick. It is an epistemological claim with practical consequences. Consider the senior software architect Segal describes in *The Orange Pill* — the man who spent twenty-five years building systems, who could feel a codebase the way a doctor feels a pulse. His expertise is not obsolete. But its market value is shifting in ways no one can predict, because the terrain is genuinely new. He faces a choice. He can treat this uncertainty as a threat — a darkness to be feared, a signal that the worst is coming — and retreat. Many have. The engineers moving to the woods to lower their cost of living are executing this strategy. Or he can treat the uncertainty as an opening — a space in which his judgment, taste, and architectural intuition might be more valuable than ever, deployed in new configurations that did not previously exist. Both responses are rational. Neither is guaranteed to succeed. The difference between them is not the quality of the analysis but the relationship to uncertainty itself.
Solnit draws a distinction between the kind of not-knowing that produces paralysis and the kind that produces exploration. The first is the not-knowing of the catastrophist: I do not know what will happen, therefore nothing I do matters, therefore I will not act. The second is the not-knowing of the explorer: I do not know what will happen, therefore what I do might matter enormously, therefore I will act with care and attention and the willingness to be surprised by the result. The distinction is emotional before it is intellectual. It lives in the body before it lives in the argument. The catastrophist's not-knowing feels like falling. The explorer's not-knowing feels like setting out.
The AI moment is producing both kinds of not-knowing simultaneously, often in the same person. Segal describes this with characteristic honesty — the sensation of falling and flying at the same time, the compound emotion of terror and awe that has no clean name. Solnit's framework provides the name: it is the experience of confronting genuine uncertainty without the armor of a predetermined narrative. The accelerationist's armor is the story of progress. The catastrophist's armor is the story of doom. Both stories protect the person inside them from the full weight of not knowing. The person without armor — the person Segal calls the silent middle — feels the weight in full.
Solnit insists that this weight is not a burden to be avoided. It is the texture of a life lived honestly in conditions of radical change. The desire to escape uncertainty — to find the expert who knows, the model that predicts, the narrative that resolves — is understandable. It is also, Solnit argues, a form of self-diminishment. The person who demands certainty before acting has contracted the scope of her own agency. She has decided that she will only participate in outcomes she can foresee, which means she will only participate in outcomes that would have happened without her. The genuinely new — the outcome that no one predicted, the possibility that no model contained — requires the willingness to act without foreknowledge.
The history that Solnit draws on to make this case is not speculative. It is densely documented. The suffragists who organized at Seneca Falls in 1848 did not know that women would vote in 1920. The civil rights workers who sat at lunch counters in 1960 did not know that the Civil Rights Act would pass in 1964. The activists who organized around HIV/AIDS in the 1980s did not know that antiretroviral therapies would transform the disease from a death sentence into a manageable condition. In each case, the action preceded the outcome by years, sometimes decades. The people who acted did not act because they knew the outcome. They acted because the outcome was uncertain, and the uncertainty meant their action might contribute to determining it.
Applied to the AI transition, this history produces a specific and actionable insight. The institutional structures that will determine whether AI produces broadly distributed flourishing or narrowly concentrated extraction do not yet exist. The labor protections, the educational frameworks, the governance mechanisms, the cultural norms that will shape how this technology is deployed are being built right now — in corporate boardrooms, in legislative chambers, in classrooms, in conversations between parents and children. The people who participate in building these structures cannot know whether their efforts will succeed. The teacher who redesigns her curriculum around questioning rather than answering cannot know whether her students will carry that capacity into their professional lives. The executive who chooses to keep and expand her team rather than converting productivity gains into headcount reduction cannot know whether the market will reward or punish her for that choice.
But the uncertainty is not a reason to abstain. It is the reason to participate. If the outcomes were already determined, participation would be irrelevant. It is precisely because the outcomes are genuinely open that the teacher's curriculum redesign and the executive's hiring decision and the parent's conversation at the dinner table have the potential to matter.
Solnit's concept of productive uncertainty also challenges a specific feature of the AI discourse that has gone largely unexamined: the worship of prediction. The AI industry is, at its core, a prediction industry. Large language models predict the next token. Recommendation engines predict user preference. Market models predict demand. The entire technical architecture is organized around the reduction of uncertainty, the conversion of the unknown into the probabilistically known.
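The mechanics of that conversion are simple enough to show. What follows is a minimal sketch in Python (a toy vocabulary and invented scores, illustrative only, not any production system) of the single operation a language model repeats at every step: turning raw scores over candidate next tokens into a probability distribution and sampling one.

```python
import math
import random

# Toy vocabulary and invented "logits": the raw scores a model might
# assign to each candidate next token. Values are illustrative only.
vocab = ["the", "future", "is", "uncertain", "determined"]
logits = [1.2, 0.4, 0.1, 2.0, -0.5]

# Softmax: convert raw scores into a probability distribution.
# This is the step that turns the unknown into the probabilistically known.
max_logit = max(logits)  # subtract the max for numerical stability
exps = [math.exp(x - max_logit) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]

for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.3f}")
print("sampled next token:", next_token)
```

Everything the model can say about what comes next is contained in that distribution; an outcome its training never encoded receives no probability at all. That is the formal face of the limit described above: prediction operates only inside the already known.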
This technical capability is extraordinary. It is also, applied to the question of AI's social consequences, misleading. The models that predict user behavior cannot predict the institutional responses to those predictions. The algorithms that predict market demand cannot predict the political movements that will emerge to challenge the distribution of the gains. The systems that generate fluent text cannot predict what a twelve-year-old will make of the fact that machines can write her homework. The uncertainty that matters most — the uncertainty about what kind of society AI will produce — is not reducible to prediction, because it is shaped by human choices that have not yet been made.
The freedom in this uncertainty is real, but it is not automatic. It must be claimed. The person who recognizes that the AI future is undetermined and chooses to participate in determining it has claimed a form of agency that neither the optimist nor the pessimist possesses. The optimist has outsourced her agency to the technology. The pessimist has surrendered her agency to despair. The person who acts in genuine uncertainty — who builds without guarantees, who tends the institutional structures without knowing whether they will hold — has done something harder and more valuable than either.
Solnit would not use the word "builder." Her vocabulary is drawn from activism, from social movements, from the history of people who opposed power rather than exercised it. But the structural logic is identical. The activist who organizes a community around environmental justice does not know whether the campaign will succeed. The builder who designs governance frameworks for AI deployment does not know whether the frameworks will hold. Both are acting in the dark. Both are placing bets on a future they cannot see. And both are doing so because the alternative — the certainty of inaction, the comfort of knowing that the outcome is not their responsibility — is worse than the discomfort of engagement without guarantees.
The AI moment is an encounter with the kind of uncertainty that Solnit describes as the precondition for genuine freedom. The ground is moving. The old rules are dissolving. No one knows what comes next. This is not a crisis. It is a clearing — a space in which the future is genuinely open and the choices of the people who show up will help determine what grows there.
The question is not whether the uncertainty will resolve. It will, eventually, into some arrangement that will then seem inevitable in retrospect. The question is whether the people who feel the uncertainty most acutely — the silent middle, the parents, the teachers, the professionals caught between exhilaration and terror — will participate in shaping the resolution or will leave that work to the people who are already certain about what the future should look like.
Solnit's answer is not ambiguous. Show up. The uncertainty is your invitation.
In February 2024, Solnit published "In the Shadow of Silicon Valley" in the *London Review of Books*, a long essay that functioned simultaneously as a eulogy for the San Francisco she had known since 1980 and as a dissection of the ideology that was displacing it. The essay was not primarily about artificial intelligence. It was about power — about the specific form of power that accumulates when a small number of people control the infrastructure through which everyone else communicates, works, and makes sense of the world.
Solnit's description of driverless cars became the essay's most quoted passage, and the reason it resonated so widely is that it crystallized something the broader AI discourse had been unable to articulate. "Driverless cars are often called autonomous vehicles," she wrote, "but driving isn't an autonomous activity. It's a co-operative social activity, in which part of the job of whoever's behind the wheel is to communicate with others on the road." San Francisco Airport had signs telling people to make eye contact before crossing the street outside the terminals. "There's no one in a driverless car to make eye contact with," Solnit observed, "to see you wave or hear you shout or signal back."
The observation is about cars. It is also about everything else. The word "autonomous" — applied to vehicles, to AI agents, to systems that operate without human intervention — conceals a deep misunderstanding of the activities it claims to automate. Driving is not an information-processing task that happens to involve a human body. It is a social negotiation conducted through eye contact, gesture, timing, and the implicit agreements that allow millions of people to share roads without killing each other in numbers far higher than they do. When the human is removed from the vehicle, the social negotiation does not become more efficient. It becomes impossible. The machine can process the data. It cannot make eye contact.
Jean Burgess, a professor at Queensland University of Technology, drew the implication explicitly: Solnit's observation about driving "applies to so many more of AI's current and proposed application domains." The activities that AI developers frame as autonomous — writing, teaching, diagnosing, designing, managing — are, like driving, cooperative social activities. They involve not just the processing of information but the negotiation of meaning between participants who bring different contexts, different needs, and different stakes to the interaction. The teacher who explains a concept to a student is not transmitting data. She is reading the student's face, adjusting her pace, choosing her metaphors based on what she knows about this particular student's experience and struggles and aspirations. The doctor who delivers a diagnosis is not outputting a classification. He is managing fear, calibrating hope, navigating the specific terrain of this patient's capacity to hear and absorb and act on difficult information.
These activities can be assisted by AI. They cannot be replaced by it, because the replacement would eliminate not the mechanical component of the work but the social component — the eye contact, the adjustment, the care — that constitutes the work's actual value.
Solnit's critique is not anti-technology. It is anti-ideology — specifically, the ideology that frames human social activities as information-processing tasks that can be optimized through automation. This ideology is not new. It is the same ideology that produced the factory system, that treated workers as interchangeable units of labor to be optimized for throughput. The technology is new. The logic is ancient. And the logic produces the same consequences it has always produced: efficiency gains captured by the owners of the infrastructure, social costs borne by the people whose cooperative activities have been reframed as friction to be eliminated.
Segal's *Orange Pill* operates inside this tension without always naming it. The engineer in Trivandrum who discovers she can build frontend interfaces is experiencing a genuine expansion of capability. The twenty-fold productivity multiplier is real. The products shipped in thirty days instead of twelve months are real. But the question Solnit would press — and does press, in her LRB essay and in every subsequent interview — is: Who captures the gain? When five people can do the work of a hundred, and Segal describes the boardroom conversation about headcount reduction, the question is not whether the technology works. The question is whether the institutional arrangements surrounding the technology are adequate to ensure that the gains flow broadly rather than narrowly.
Segal answers this question with the builder's ethic — the choice to keep and grow the team, to invest the productivity gains in expanded capability rather than reduced headcount. This choice is admirable. It is also, as Segal acknowledges, fragile. The arithmetic of headcount reduction will be on the table again next quarter. The market rewards efficiency more reliably than it rewards vision. The builder who chooses to share the gains is swimming against a current that runs toward concentration.
Solnit's framework enriches this picture by expanding the concept of who counts as an agent of change. Segal's builder is the person at the frontier — the technologist, the founder, the team leader who makes choices about how the technology is deployed. Solnit's activist is anyone who refuses to accept that the current arrangement is the only possible arrangement. The two categories are not identical, but they overlap in ways that neither tradition has fully explored.
The activist, in Solnit's rendering, is not necessarily the person who marches in the street. Activism is any sustained engagement with the question of how power is distributed and how it might be distributed differently. By this definition, the teacher who redesigns her curriculum around questioning rather than answering is an activist. The parent who creates spaces for boredom in a child's attention-saturated life is an activist. The executive who chooses to invest in human capability rather than converting productivity gains into margin is an activist. Each is intervening in the distribution of power — the power to think, the power to create, the power to benefit from technological change — in ways that resist the default arrangement.
The default arrangement, in Solnit's analysis, is always extraction. Left to its own dynamics, a powerful technology concentrates its benefits among the people who control its infrastructure and distributes its costs among everyone else. This is not a conspiracy theory. It is the pattern that has repeated at every major technological transition in recorded history, from the enclosures that dispossessed English peasants to the factory system that turned craftsmen into wage laborers to the platform economy that converted users into data sources. The pattern is not inevitable, but it is default. It is what happens when no one intervenes.
Intervention takes different forms at different scales. At the policy level, it means governance — the legal and regulatory frameworks that determine who can build AI systems, how they must be tested, who bears liability when they fail, and how the gains are distributed. At the institutional level, it means organizational design — the choices that leaders make about team structure, compensation, training, and the distribution of the productivity gains that AI enables. At the individual level, it means the daily practice of care — the attention to consequences, the willingness to ask whether the thing that can be built should be built, the refusal to outsource judgment to a tool that does not possess it.
Solnit's contribution is not to specify the content of these interventions. It is to insist that intervention is possible and necessary. The most dangerous feature of the AI discourse is not any particular prediction about the technology's capabilities. It is the implicit assumption — shared by accelerationists and catastrophists alike — that the technology will determine its own social consequences, that human agency is irrelevant to the outcome, that the river flows where it flows and the creatures in it can only adapt or drown.
This assumption is factually wrong. The history of every major technology demonstrates that social consequences are determined not by the technology itself but by the institutional arrangements that surround it. Electricity could have produced either democratic prosperity or corporate feudalism. It produced elements of both, and the mix was determined by decades of political struggle — labor movements, progressive legislation, the specific and contested construction of the institutions that channeled electrical power toward broadly distributed benefit. The printing press could have produced either the democratization of knowledge or new forms of censorship and control. It produced both, and the balance was shaped by centuries of institutional evolution.
AI is no different. The technology is extraordinarily powerful. Its social consequences are genuinely undetermined. And the determination will happen not in the labs where the models are trained but in the institutions — legal, educational, cultural, political — that govern how the models are deployed. The people who build those institutions are activists in Solnit's sense, whether they identify as activists or not. They are the people who refuse to accept that the default arrangement is the only possible arrangement and who intervene, at whatever scale they can reach, to shape the distribution of power in a moment of radical technological change.
Solnit's San Francisco essay ends with a passage about fire and water — the fire chief who opposed the deployment of driverless cars because autonomous vehicles had been blocking firetrucks, parking on hoses, interfering with emergency response. His opposition was overridden by the California Public Utilities Commission, which granted the companies expanded permits. The fire chief had direct, embodied knowledge of the consequences. The regulatory body had institutional authority. The knowledge lost. The authority won. And people were hurt.
The lesson is not that authority always wins. The lesson is that the contest between knowledge and authority is ongoing, and the outcome depends on who shows up. The fire chief showed up. The regulatory framework was not adequate to the knowledge he brought. The institutions failed. But the failure was not inevitable. It was the product of a specific institutional arrangement that could have been — and still could be — different.
Every Solnitian activist begins from this recognition: the arrangement is contingent. It was made by human choices. It can be remade by human choices. The question is whether enough people who understand the stakes will choose to participate in the remaking.
There is a moment in the experience of an earthquake that seismologists describe but that only people who have lived through one truly understand. It is not the shaking itself. It is the half-second before the shaking, when the ground beneath your feet communicates, through some channel below conscious awareness, that the thing you have always treated as solid is not. The assumption of solidity — the premise on which every step, every building, every plan depends — turns out to have been provisional all along. The ground was not solid. It was stable. Stability is a condition, temporary and contingent. Solidity is a property, permanent and inherent. The earthquake reveals that what you thought was solidity was only stability, and that stability has ended.
Solnit, who has lived in San Francisco since 1980 and survived the Loma Prieta earthquake of 1989, writes about this distinction with the authority of someone who has felt it in her body. The earthquake is not merely a seismic event. It is an epistemological one. It changes not just the landscape but the relationship between the person and the landscape. After the earthquake, you walk differently. You notice the ground. You understand, in a way that no amount of geological education could produce, that the surface you depend on is contingent.
The AI transition of 2025–2026 was an epistemological earthquake. The ground that moved was not geological but professional, creative, and cognitive. The assumptions that had organized working life for decades — that expertise takes years to develop, that execution requires specialized training, that the gap between imagination and artifact is wide and expensive to cross, that professional identity is built on a foundation of hard-won, domain-specific skill — were revealed, in the space of months, to be contingent rather than necessary. They were true for a specific technological era. That era ended.
Segal describes the moment with the specificity of someone who was standing on the ground when it moved. Twenty engineers in a room in Trivandrum. A tool that cost a hundred dollars a month per person. By Friday, each of them could do what all of them together had done before. The experience was not theoretical. It was visceral. "I could not tell whether I was watching something being born or something being buried," he writes. "Both, probably."
The compound sensation — birth and burial simultaneously — is the signature experience of epistemic disruption. Solnit identifies it in every major social transformation she has studied. The suffragists experienced it: the exhilaration of a new possibility combined with the terror of abandoning the known arrangement. The civil rights workers experienced it: the liberation of a new vision combined with the mortal danger of challenging the old one. The communities that Solnit studied in the aftermath of disasters — the subject of *A Paradise Built in Hell* — experienced it most acutely. The earthquake destroys. It also clears. And in the clearing, forms of human organization become possible that were impossible before, not because the destruction created them but because the destruction removed the institutional structures that had been preventing them.
Solnit's research into disaster communities reveals a pattern that the AI moment is beginning to replicate. In the immediate aftermath of catastrophe — the San Francisco earthquake of 1906, the Mexico City earthquake of 1985, Hurricane Katrina in 2005, the September 11 attacks in 2001 — the institutional structures that normally organize human interaction collapsed. Governments were overwhelmed. Supply chains were broken. The routines of daily life were suspended. And in the gap, something unexpected emerged. Not chaos, which is what the authorities feared and what the media predicted. Community. People organized themselves, spontaneously and with remarkable effectiveness, into networks of mutual aid. They fed each other, sheltered each other, tended to each other's injuries, built temporary structures, shared resources, made collective decisions. The evidence, assembled across multiple disasters and multiple cultures, was unambiguous: when the old structures fell, the default human response was not competition but cooperation.
Solnit called this phenomenon "disaster community," and she was careful to distinguish it from utopianism. The disaster communities were temporary. They were imperfect. They did not solve the structural problems that had made the communities vulnerable in the first place. And they were fragile — easily displaced when the old institutional structures reasserted themselves, when the government arrived with its protocols and the corporations arrived with their contracts and the normalcy of the previous arrangement was restored, now fortified against the cooperative impulse that the disaster had temporarily liberated.
The AI transition is not a natural disaster. No one has died. No buildings have collapsed. But the structural analogy holds, because the AI transition shares the essential feature that Solnit identifies in all disasters: the sudden collapse of the institutional assumptions that organized daily life. The assumptions about what expertise is worth, what skills are scarce, what constitutes professional identity — these assumptions are the institutional structures of knowledge work, and they are collapsing.
In the gap, the same spontaneous cooperation that Solnit documented in disaster communities is emerging. The open-source AI movements, the maker communities, the collaborative experiments in which builders share tools and knowledge and code without the mediation of traditional institutional structures — these are disaster communities of the knowledge economy. They are temporary, imperfect, and fragile. They will likely be displaced when the institutional structures of the AI economy reassert themselves, when the venture capital arrives with its term sheets and the corporations arrive with their proprietary platforms and the normalcy of concentrated ownership is restored.
But the fact that the disaster communities existed at all — that people, when the old structures collapsed, spontaneously organized around cooperation rather than competition — tells us something about human default settings that the dominant AI narrative ignores. The narrative of both the accelerationists and the catastrophists assumes that people are passive in the face of technological change — that they will either surf the wave or be drowned by it, but that in neither case will they shape the wave itself. Solnit's evidence suggests otherwise. When the ground moves, people do not merely adapt. They build. And what they build, in the first spontaneous flush of mutual aid, often looks nothing like what the institutional authorities would have prescribed.
The question — Solnit's question, and the question that *The Orange Pill* arrives at through its own route — is whether the cooperative impulse that emerges in the gap can be institutionalized before the old structures reassert themselves. Whether the disaster community can become a permanent community. Whether the mutual aid can be formalized into governance. Whether the spontaneous sharing of the gap can survive the return of the proprietary norm.
History suggests that this institutionalization is possible but difficult, and that it requires specific, sustained, deliberate effort by people who understand what is at stake. The labor movement institutionalized the cooperative impulse of early industrial workers into unions, collective bargaining, and labor law. The civil rights movement institutionalized the cooperative impulse of lunch counter sit-ins into legislation and institutional norms. Each institutionalization was partial, contested, and ongoing. None produced a permanent victory. Each required continuous maintenance — the political equivalent of the beaver's daily attention to the dam.
The AI transition is in its gap period. The old institutional structures of knowledge work — the hierarchy of expertise, the premium on execution, the organizational charts that separated domains into silos — are collapsing. New forms of cooperation are emerging in the clearing. The question is not whether these new forms will be displaced by the reassertion of institutional power. They will be. The question is whether enough of the cooperative impulse will be captured in durable institutional structures — governance frameworks, educational systems, cultural norms, legal protections — before the window closes.
Solnit's disaster research provides one additional insight that the AI discourse has almost entirely missed: the concept of elite panic. In every disaster Solnit studied, the authorities — government officials, corporate executives, media commentators — predicted that the collapse of institutional order would produce chaos, looting, violence. The prediction was almost always wrong. The chaos, when it occurred, was more often produced by the authorities' panicked response to the imagined chaos than by the population's actual behavior. In New Orleans after Katrina, the National Guard was deployed to prevent looting that, in many cases, was actually residents sharing supplies from flooded stores. The military response to imagined disorder produced the disorder it was meant to prevent.
Elite panic in the AI context takes a familiar form: the conviction, held by those who control the infrastructure, that the population cannot be trusted to use powerful tools wisely and that therefore the tools must be controlled from the top. The calls for AI regulation that emerge from the very companies that stand to benefit from regulatory capture. The insistence that AI is too dangerous for general deployment, issued by the executives whose business models depend on concentrating AI capabilities in their own platforms. The paternalism of the powerful, dressed as concern for the public.
Solnit does not oppose regulation. She opposes the assumption that regulation must come from the same institutions whose interests are served by concentration. She insists that democratic governance — governance shaped by the people who bear the costs, not only the people who capture the gains — is both possible and necessary. The disaster communities demonstrate that people are capable of self-organization, cooperation, and collective decision-making when the institutional constraints are removed. The challenge is not to prevent that self-organization but to support it, to create the conditions under which it can persist beyond the gap, to build the institutions that protect the cooperative impulse rather than suppress it.
The ground has moved. The assumptions that organized professional life for decades are no longer solid. In the clearing, new forms of cooperation are possible. The question that Solnit's framework places before every reader is not whether the earthquake has happened. It is what you will build in the clearing before the old structures reassert themselves and the window closes.
The most seductive error in thinking about technological change is the assumption that history has a direction. The metaphor is so embedded in ordinary language that it passes without examination: the "march" of progress, the "arc" of history, the "trajectory" of development. Each implies movement along a line — perhaps curved, perhaps occasionally interrupted, but fundamentally linear, fundamentally forward, fundamentally culminating in something better than what came before. The metaphor is comforting. It is also, by the evidence of every century on record, wrong.
Solnit has spent her career dismantling this metaphor, not because she opposes progress but because she understands that the belief in linear progress produces a specific and dangerous form of passivity. If history moves forward on its own — if the arc bends toward justice without human intervention, as a popular misreading of Martin Luther King Jr.'s famous line suggests — then there is no urgent need to intervene. The future will arrive. It will be better than the present. One need only wait. This belief is the secular equivalent of providence, and it produces the same result: a population that watches history happen rather than participating in its construction.
The actual shape of history, as Solnit documents it across multiple books, is nothing like a line. It is a landscape — irregular, unpredictable, full of reversals and dead ends and sudden openings that no one anticipated. The gains of one generation can be erased by the next. The freedoms that seem permanent can be revoked overnight. The technologies that look liberating in one decade become instruments of surveillance and control in the next. The printing press produced both the Enlightenment and the propaganda pamphlet. Radio produced both the fireside chat and the fascist broadcast. The internet produced both the democratization of information and the algorithmic feed that fragments shared reality into a million personalized hallucinations.
Applied to the AI moment, this historical awareness produces a sobriety that neither the accelerationists nor the catastrophists possess. The accelerationist narrative is a progress narrative: AI represents the next step in a linear ascent from the abacus to the calculator to the computer to the large language model, each step producing more capability, more democratization, more human flourishing. The catastrophist narrative is a decline narrative: AI represents the next step in a linear descent from authentic human culture to machine-mediated simulacrum, each step eroding depth, autonomy, and meaning. Both narratives are linear. Both assume that the direction is set. And both are wrong, not because the evidence supports a third direction, but because the evidence supports no direction at all. The outcome is genuinely undetermined, shaped by choices that have not yet been made, by institutions that have not yet been built, by contests of power that have not yet been resolved.
Solnit illustrates this indeterminacy with historical examples whose resonance with the current moment is difficult to ignore. Consider the trajectory of the labor movement in the wake of industrialization. The power loom arrived. The Luddites resisted. The resistance was crushed. The factory system expanded. Wages collapsed. Working conditions deteriorated. Children entered the mills. The trajectory, measured across the first decades of industrialization, was unambiguously downward for the people who bore the cost. If a historian had drawn a line through those decades, the line would have pointed toward catastrophe — toward a permanent underclass of disposable laborers serving an ownership class that had captured the full productivity gain of the new technology.
That is not what happened. What happened was the labor movement — decades of organizing, striking, legislating, institution-building that reversed the trajectory so thoroughly that by the mid-twentieth century, the factory worker in a developed economy had a standard of living, a set of legal protections, and a degree of economic security that would have been unimaginable to the Luddite of 1812. The reversal was not automatic. It was not the natural consequence of technological progress. It was the product of sustained, contested, often violent political struggle by people who refused to accept that the initial trajectory was permanent.
The AI transition is in its early decades. The initial trajectory — measured by the Berkeley researchers' finding that AI intensifies work rather than reducing it, by the trillion-dollar wipeout in SaaS valuations, by the senior engineers calculating whether to flee to the woods — points in a direction that is, for many, alarming. Productivity gains are being captured disproportionately by the people who control the infrastructure. The workers who use the tools are working more, not less. The gap between those who can direct AI and those who are directed by it is widening.
If a historian drew a line through these early data points, the line would point toward concentration — toward a world in which the extraordinary productivity gains of AI flow to a narrow ownership class while the people who once performed the work that AI now handles are displaced, retrained, or simply left behind. This is the catastrophist's line. It is supported by the early evidence. And it is, by Solnit's historical analysis, exactly as reliable as the line that would have been drawn through the first decades of industrialization — which is to say, not reliable at all. Because the line does not account for the human response. The line assumes that the initial trajectory is the permanent trajectory. And history demonstrates, with a regularity that approaches law, that it is not.
The reversal, when it comes, does not come automatically. This is the crucial point that separates Solnit's historical analysis from optimism. The optimist says: things got better after industrialization, therefore things will get better after AI. This is the progress narrative in its purest form, and it is useless, because it provides no mechanism. It does not explain how things got better. It does not identify the specific human choices, institutional innovations, and political struggles that produced the improvement. And therefore it cannot tell you what to do now.
Solnit's analysis provides the mechanism. Things got better after industrialization because specific people built specific institutions — labor unions, labor laws, the eight-hour day, the weekend, child labor prohibitions, workplace safety regulations — that redirected the gains of the new technology toward broadly distributed benefit. The institutions did not build themselves. They were built by people who showed up, who organized, who fought, who lost repeatedly before they won, who could not see the outcome and acted anyway because the uncertainty of the outcome was preferable to the certainty of inaction.
The AI moment demands the same kind of institution-building, and the early evidence suggests that the institution-building is lagging dangerously behind the technology. Segal identifies this gap in *The Orange Pill*: "I have watched corporate AI governance frameworks arrive eighteen months after the tools they were meant to govern had already reshaped the workforce." The EU AI Act, the American executive orders, the emerging frameworks in Singapore and Brazil and Japan are real structures. They address the supply side — what AI companies may build and how they must disclose it. The demand side — what citizens, workers, students, and parents need to navigate this moment — remains almost entirely unaddressed.
Solnit's historical framework reveals why the demand-side gap is the dangerous one. The supply-side regulations determine what technologies exist. The demand-side institutions determine who benefits from them. In the history of industrialization, the supply side was never the constraint. The machines were built regardless of opposition. The demand-side institutions — the labor laws, the educational systems, the cultural norms that protected human time and distributed the gains — were what determined whether the machines served broadly or narrowly. And those institutions took decades to build, decades during which the people who bore the cost of the transition lived without the protections that would eventually arrive.
The lesson is not that things will get better. The lesson is that things can get better, but only through specific, sustained, often unpopular institutional effort by people who cannot guarantee the outcome and who act anyway.
Solnit's refusal to guarantee the outcome is not a hedging strategy. It is the core of her analysis. History does not guarantee anything. The labor movement could have failed. It nearly did, multiple times. The civil rights movement could have failed. It nearly did, and in many respects it remains unfinished. The institutions that protect human flourishing in the face of powerful technology are not the natural products of progress. They are the hard-won, perpetually contested achievements of people who refused to accept the default arrangement and who built alternatives without knowing whether the alternatives would hold.
The AI transition could produce the democratization of capability that Segal describes — a world in which the developer in Lagos has the same creative leverage as the engineer at Google, in which the imagination-to-artifact ratio approaches zero for everyone, in which the expansion of who gets to build reshapes the landscape of possibility. It could also produce a new feudalism — Solnit's word, drawn from her LRB essay — in which the owners of the AI infrastructure capture the productivity gains while everyone else competes for the remaining scraps of human-only work in a market that is shrinking by the quarter. Both outcomes are possible. Neither is determined. The difference between them will be made by the quality of the institutions that are built in the next decade, and the quality of those institutions will be determined by who shows up to build them.
History is not reassuring. It is instructive. It says: the outcome depends on what you do, and the time to do it is now, and there is no guarantee that what you do will work. Solnit has never offered more than this. She has also never offered less. The discipline of hope, in her rendering, is precisely the willingness to act on the instruction without the reassurance — to build the institution, tend the norm, fight the fight, without the comfort of knowing that the arc is bending in your favor.
The Luddites broke machines because they could not see what would grow in the space the machines opened. The labor organizers built unions because they could, even though they could not see whether the unions would hold. Both responses were available. Only one of them built something durable. The AI moment offers the same choice. The question is not which way history is going. History is not going anywhere. The question is what you will build in the space that has opened, and whether you will build it with the understanding that the building itself is the point — not because it guarantees a good outcome, but because it makes a good outcome possible.
In 2005, Solnit published *A Field Guide to Getting Lost*, a book whose title functions as both description and prescription. The book is about disorientation — the experience of not knowing where you are, what comes next, or whether the path you are on leads anywhere at all. And it argues, with the quiet insistence that characterizes Solnit's best work, that this disorientation is not a failure state. It is a creative condition. The person who always knows where she is going cannot discover anything she has not already imagined. The person who is lost — genuinely, uncomfortably lost — is in the only position from which genuine discovery is possible.
The argument has an immediate and unsettling application to the AI moment. The dominant response to radical uncertainty is the demand for certainty — for the expert who knows, the model that predicts, the roadmap that resolves. The entire apparatus of strategic planning, market analysis, and technology forecasting is designed to convert the unknown into the known, to replace the discomfort of not knowing with the comfort of a plan. AI tools amplify this apparatus by orders of magnitude. Feed the model enough data and it will predict demand, optimize supply chains, identify opportunities, generate strategies. The darkness recedes. The map fills in. The feeling of lostness is replaced by the feeling of control.
Solnit would observe that this replacement comes at a cost that the planning apparatus cannot measure, because the cost is the loss of the very condition that makes genuine innovation possible. The things that matter most — the breakthrough that no one anticipated, the product that creates a market rather than serving one, the question that reframes the problem so thoroughly that the old answers become irrelevant — emerge not from certainty but from its absence. They emerge from the willingness to be lost.
This is not mysticism. It is a pattern that recurs throughout the history of creative and scientific breakthroughs. Darwin did not set out to discover evolution. He set out to catalog specimens. The theory emerged from the disorientation of encountering data that did not fit his existing framework — birds that were similar but not identical, in ways that no existing classification could explain. The disorientation was the precondition for the insight. If Darwin had arrived in the Galápagos with a theory already formed, he would have seen only what the theory predicted. It was because he arrived without a theory — lost, in the epistemological sense — that he could see what was actually there.
Einstein's thought experiment — what would it look like to ride alongside a beam of light? — was not the product of a research program. It was the product of a teenager's willingness to inhabit a question that had no answer, to sit with the not-knowing long enough for the question itself to reshape his understanding of physics. The willingness to not know was not an obstacle to the discovery. It was the discovery's precondition.
Segal describes the same phenomenon in the language of building. The most valuable moments in his collaboration with Claude, he writes, are not the moments when the tool provides an answer he was looking for. They are the moments when the tool makes a connection he had not anticipated — when it links two ideas from different domains in a way that reframes both. These moments emerge from the collision of his half-formed questions with Claude's associative capacity, and they cannot be planned or predicted. They can only be encountered, by a person willing to describe a problem without already knowing its solution, willing to be lost in the space between the question and whatever follows it.
Solnit's framework suggests that the AI tools' most valuable function is not the one the industry celebrates. The industry celebrates efficiency — the speed with which AI converts questions into answers, problems into solutions, intentions into artifacts. The efficiency is real and extraordinary. But efficiency is the conversion of the known into the produced. It operates within existing frameworks. It optimizes within existing boundaries. It does not create new frameworks or redraw boundaries, because creating and redrawing require the willingness to abandon the existing map, and efficiency is the map's most devoted servant.
The AI tools' most valuable function, in Solnit's framework, is the one that is hardest to measure and easiest to overlook: their capacity to facilitate productive disorientation. When a builder describes a problem to Claude and Claude responds with a connection drawn from a domain the builder has never studied, the builder is momentarily lost. The familiar terrain of her expertise has been disrupted by an unexpected input. She does not yet know whether the connection is valid or spurious. She is in the gap between the old understanding and whatever might replace it. This gap is uncomfortable. It is also the space in which genuine learning occurs.
The danger — and Solnit's framework names this danger with precision — is that the tools' efficiency will crowd out the tools' capacity for productive disorientation. The same system that can surprise you with an unexpected connection can also, if used unreflectively, confirm every assumption you bring to it. The recommendation engine that serves you more of what you already like. The language model that produces fluent text in the register you have already established. The coding assistant that implements your design without questioning whether the design is worth implementing. Each of these functions is efficient. Each eliminates the discomfort of not knowing. And each, by eliminating that discomfort, reduces the probability of the genuine surprise that only discomfort can produce.
Solnit writes about walking as a practice of productive disorientation — the deliberate choice to move at a pace that allows the mind to wander, to encounter the unexpected, to get lost in the specific way that precedes discovery. The practice is physical, but the principle is cognitive. The mind that moves at the speed of a screen — instant query, instant response, instant next query — is a mind that never gets lost, because the tool is always there to provide the next answer before the question has fully formed. The mind that moves at the speed of a walk — slow enough to notice, slow enough to be surprised, slow enough to sit with the discomfort of not knowing — is a mind that can encounter what it did not expect, precisely because it was not moving fast enough to outrun the unexpected.
The implications for the AI moment are specific and practical. The Berkeley researchers documented a phenomenon they called "task seepage" — the tendency for AI-accelerated work to colonize previously protected pauses, filling every gap with another prompt, another query, another optimization. These pauses were not empty. They were the cognitive equivalent of the walker's wandering — time in which the mind processed, integrated, made the unexpected connections that only arise when the conscious attention is not directed toward a specific task. When the pauses disappear, the wandering disappears with them. The mind becomes efficient and incurious, productive and shallow, fast and lost in the worst sense — lost not in the fertile disorientation that precedes discovery but in the barren disorientation of a consciousness that has forgotten how to be still.
Solnit would not frame this as a technology problem. She would frame it as a practice problem. The walker who walks quickly misses the details that the slow walker notices. The builder who prompts compulsively misses the insights that the reflective builder encounters. The technology does not force the pace. The technology makes a faster pace possible, and the internalized imperative to optimize — what Han calls auto-exploitation, what the Berkeley researchers measured as work intensification — converts the possibility into compulsion. The practice that Solnit prescribes is not the refusal of the technology. It is the deliberate cultivation of the conditions under which the technology's most valuable function — productive disorientation — can operate.
This means, concretely, building pauses into the workflow. Not rest breaks, which are recovery from work. Creative pauses, which are a different kind of work — the kind that happens when the conscious mind releases its grip and allows the associative, the unexpected, the genuinely new to surface. It means asking questions of AI tools that you do not already know the answer to, rather than using the tools to confirm what you already believe. It means allowing yourself to be surprised by the tool's output, rather than evaluating every response against a predetermined standard of correctness. It means, in Solnit's language, allowing yourself to get lost — deliberately, regularly, as a practice rather than an accident.
The twelve-year-old who asks "What am I for?" is lost. She does not know the answer. She cannot predict where the question will lead. She is in the gap between the old understanding — that her value lies in the skills she can perform — and the new understanding that has not yet formed. Solnit's framework insists that this gap is not a problem to be solved. It is a condition to be inhabited. The answer to the question will not arrive through analysis. It will arrive through the willingness to sit with the not-knowing long enough for something genuine to emerge.
The power of not knowing is the power to be changed by what you encounter. The person who knows cannot be changed, because the knowledge functions as armor against the unexpected. The person who does not know is vulnerable — open to the input that rewrites the framework, the connection that redraws the map, the encounter that transforms the question itself.
AI can provide the encounter. Only the human can be transformed by it. And the transformation requires the willingness to be lost first.
In June 2025, Solnit posted an essay on Facebook about the Los Angeles protests — a piece of political writing consistent with decades of similar work. Facebook's AI content moderation system flagged and removed the post, then suspended her account. The algorithm, trained to detect policy violations at scale, could not distinguish between a writer documenting political upheaval and a writer inciting it. The system operated as designed: efficiently, confidently, and without judgment. A human moderator, had one existed in the loop, might have recognized Solnit's name, read the essay's context, understood the difference between documentation and incitement. The algorithm possessed no such capacity. It processed the text, identified patterns consistent with its training data's definition of violation, and acted.
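The structure of that failure can be made concrete. What follows is a deliberately toy sketch, not a description of Facebook's actual system, whose internals are not public; the patterns, the threshold, and the labels are all invented for illustration. Its point is architectural: every decision reduces to a pattern score, and no step in the pipeline can ask who is writing or what the words mean.

```python
# A toy moderation pipeline. Nothing here resembles Facebook's real
# system (which is not public); the patterns and threshold below are
# invented. The structural point is what is absent: no author context,
# no reading for intent, no human judgment anywhere in the loop.

INCITEMENT_PATTERNS = ["take the streets", "burn", "rise up"]  # hypothetical
REMOVAL_THRESHOLD = 2  # hypothetical

def violation_score(text: str) -> int:
    """Stand-in for any classifier that maps text to a violation score
    by pattern: it counts matches; it does not read for meaning."""
    lowered = text.lower()
    return sum(1 for pattern in INCITEMENT_PATTERNS if pattern in lowered)

def moderate(post: str) -> str:
    """Remove or publish. A writer documenting a protest and a bot
    inciting one can produce the same score, and the pipeline has no
    branch on which to tell them apart."""
    if violation_score(post) >= REMOVAL_THRESHOLD:
        return "removed"
    return "published"
```

The essay documenting the protests and the post inciting them land on the same side of the threshold, because the threshold is all there is.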
Solnit blamed Facebook's "inane algorithms that often delete posts." The word "inane" is precise. Not malicious. Not incompetent. Inane — lacking sense, lacking the contextual intelligence that distinguishes meaning from pattern. The AI system that silenced one of America's most prominent public intellectuals was not hostile to her. It was indifferent to her, in the specific way that a system optimized for pattern-matching at scale is indifferent to the meaning of the patterns it matches. The irony — that a writer who had spent years critiquing Silicon Valley's indifference to human context was silenced by exactly that indifference — was not lost on her audience.
The episode is small. Nobody died. Solnit's account was restored. But the episode is diagnostic in the way that a blood test is diagnostic — not because the result is itself the disease, but because it reveals the condition of the organism. The condition it revealed is one that Solnit has been writing about for years: the substitution of algorithmic processing for human judgment, the replacement of contextual understanding with pattern-matching, the treatment of human communication as a classification problem to be solved rather than a social practice to be navigated.
This is the texture of dark times. Not the dramatic darkness of catastrophe — not the earthquake, not the flood, not the fascist march — but the ordinary, accumulating darkness of systems that process without understanding, that optimize without caring, that operate at a scale and speed that make human oversight structurally impossible. The darkness is not in any single decision. It is in the aggregate — in the millions of micro-decisions made every second by systems that cannot distinguish between Rebecca Solnit writing about protest and a bot inciting violence, between a teacher explaining a concept and a student plagiarizing an assignment, between a doctor reasoning through a diagnosis and an AI hallucinating a confident but fabricated medical history.
Solnit's concept of dark times — developed across *Hope in the Dark* and its 2016 update — is not a description of hopelessness. It is a description of the conditions under which hope becomes most necessary and most difficult. Dark times are not times without light. They are times when the light is hard to see. The light exists — in the teacher redesigning her curriculum, in the executive choosing to grow rather than cut her team, in the developer in Lagos building something that should not have been possible, in the open-source communities sharing tools and knowledge across borders. The light exists. But it is scattered, local, often invisible to the media and the markets and the metrics that dominate the discourse.
The temptation in dark times is despair. Solnit draws a sharp line between despair and mourning, and the distinction is load-bearing for the AI moment. Mourning acknowledges loss. The senior architect who spent twenty-five years building an embodied intuition about codebases and now watches that intuition lose market value is mourning. The calligrapher watching the printing press arrive is mourning. The elegists Segal describes in *The Orange Pill* — the quietest voices in the discourse, the ones who can name what is disappearing but cannot name what is arriving to take its place — are mourning. Mourning is honest. It honors what was real and what was lost. It does not pretend the cost is not a cost.
Despair is different. Despair does not merely acknowledge loss. It generalizes loss into permanence. It converts the observation "something valuable has been lost" into the conclusion "nothing valuable can be gained." It forecloses the future on the basis of the present. And in foreclosing the future, it eliminates the space in which action is meaningful — the space of genuine uncertainty where what you do next might matter.
The AI discourse is saturated with despair disguised as realism. The engineer who announces that "it's over" and moves to the woods to lower his cost of living is not merely making a financial calculation. He is making an existential one — the judgment that the future has been determined, that his participation cannot change it, that the only rational response is retreat. Solnit would recognize this as a familiar posture. She has written about it in the context of environmental politics, where the scale of the crisis produces a despair that masquerades as clear-eyed assessment. The climate realist who says "it's too late" is making the same move as the engineer who says "it's over" — converting the uncertainty of the future into the certainty of defeat, and thereby relieving herself of the burden of acting in conditions where the outcome is unknown.
The refusal to despair is not the refusal to see what is happening. Solnit is not a Pollyanna. Her essays on Silicon Valley are scathing in their documentation of the damage — the displacement of communities, the erosion of public space, the conversion of social activities into data extraction opportunities, the particular cruelty of deploying driverless cars in a city where the fire chief has explicitly warned that they endanger lives. She sees the damage. She names it. She does not minimize it.
What she refuses is the inference from damage to defeat. The damage is real. The defeat is not yet determined. The gap between them is the space in which hope operates — not the hope that things will get better on their own, but the hope that things might get better if enough people show up and build the institutions and fight the fights that the moment demands.
In the AI context, the refusal to despair takes a specific form: the insistence that the current arrangement is not the only possible arrangement. The current arrangement, in which the gains of AI accrue disproportionately to the owners of the infrastructure while the costs are distributed among the people who use it, is not a law of nature. It is a political arrangement, produced by specific institutional choices, and it can be changed by different institutional choices. Solnit has argued that search engines, social media, and now AI could have been managed as public commons for the collective good. The fact that they were not is a political failure, not a technological inevitability. The distinction matters because political failures can be corrected, while technological inevitabilities cannot.
The despair of the engineer who moves to the woods treats the political arrangement as a technological inevitability. The hope that Solnit prescribes treats it as a political arrangement that can be contested, reformed, and reconstructed. Both responses acknowledge the same facts. They differ in their interpretation of the facts' finality.
Solnit's historical research supports the hopeful interpretation. The first decades of every major technological transition — industrialization, electrification, the internet — produced arrangements that looked permanent and proved contingent. The factory system of 1830 looked like the permanent future of labor. It was not. The unregulated internet of 2005 looked like the permanent future of communication. It was not. In each case, the initial arrangement was contested by people who refused to accept it as final, and the contest produced institutional changes that redirected the technology's gains toward broader distribution.
This is the pattern that Segal identifies in *The Orange Pill* and that Solnit's framework illuminates from a different angle. The threshold has been crossed. The exhilaration has been felt. The resistance is underway. The question is whether the adaptation — the institutional construction that determines who benefits and who bears the cost — will be adequate to the power being channeled. Solnit's contribution to this question is not a policy prescription. It is an emotional and philosophical foundation: the insistence that the adaptation is possible, that the people who feel most powerless in the face of the transition are not in fact powerless, and that their participation — in governance, in education, in community organizing, in the daily practice of care and attention and refusal to accept the default arrangement — is what will determine whether the dark times produce a dawn or a deeper darkness.
The suspension that silenced Solnit lasted only days. The account was reinstated. The essay was reposted. The damage was small and temporary. But the pattern it represents — the substitution of algorithmic pattern-matching for human judgment, operating at a scale that makes individual correction impossible — is the condition that makes dark times dark. Not because the pattern is malicious. Because it is indifferent. And indifference at scale, Solnit understands, is more dangerous than malice at any scale, because malice can be identified and opposed while indifference simply operates, efficiently and ceaselessly, without a target for resistance.
The refusal to despair is the refusal to accept indifference as the final word. The insistence that human judgment — contextual, situated, imperfect, and irreplaceable — must remain in the loop, not as a luxury but as a structural necessity. The recognition that the dark is not empty. It is full of people building, caring, tending, asking the questions that the algorithms cannot ask because the algorithms do not know what it means to care about the answer.
On December 1, 1955, Rosa Parks refused to give up her seat on a Montgomery bus. The act was small — a seated woman declining to stand. The consequences were enormous — a thirteen-month boycott that cost the Montgomery bus system sixty-five percent of its revenue, a Supreme Court ruling that segregated buses were unconstitutional, a movement that reshaped the legal and moral landscape of a nation. The disproportion between the act and its consequences is so extreme that it resists ordinary causal explanation. A person sat down. A civilization changed.
Solnit has written about this disproportion more carefully than almost anyone, and her analysis reveals a mechanism that the AI discourse has barely begun to consider. Parks did not cause the civil rights movement. The movement was already underway — had been underway for years, in organizing meetings, in legal strategies, in the slow accumulation of grievances and capabilities that precede any visible political action. What Parks did was demonstrate a possibility. She proved that a Black woman in Montgomery, Alabama, could refuse the command to stand and that the refusal could be sustained — by the woman herself, by the community that supported her, by the legal infrastructure that defended her. The demonstration of possibility changed the calculus for everyone who witnessed it. If she could refuse, others could refuse. If others could refuse, the system could be challenged. If the system could be challenged, the system could be changed.
The mechanism is not causation in the linear sense. It is what Solnit calls the "demonstration of possibility" — the moment when an act proves that an alternative exists, and the existence of the alternative changes what everyone considers possible. Before Parks sat down, the possibility of refusing was theoretical. After she sat down, it was empirical. The transition from theoretical to empirical is the transition that changes history, and it happens not through grand strategies but through specific, embodied, often small acts by people who cannot predict the consequences of what they are doing.
Applied to the AI transition, this mechanism reveals that the most consequential changes are likely not the ones currently dominating the discourse. The headline changes — the trillion-dollar wipeout in SaaS valuations, the adoption curves that cross fifty million users in two months, the productivity multipliers that rewrite corporate arithmetic — are important. They are also the visible surface of a transformation whose most significant features are happening beneath the surface, in acts so small they do not register as events.
Consider the engineer in Trivandrum whom Segal describes — the woman who had spent eight years on backend systems and had never written a line of frontend code. In two days, using Claude Code, she built a complete user-facing feature. Not a prototype. A deployed feature. The act, measured by the standards of the technology industry, is unremarkable. Features are deployed every day. What makes it consequential is what it demonstrated — not to the industry, which barely noticed, but to the engineer herself and to everyone who witnessed it. The demonstration was: the boundary between backend and frontend, which had organized her professional identity for eight years, was contingent. It was an artifact of the translation cost that AI had just eliminated. She could do work she had never attempted, in a domain she had never entered, and the result was not amateurish but functional. The boundary was not real. It was a constraint imposed by the previous technological regime, and it had just been removed.
This demonstration changes the calculus. Not immediately. Not dramatically. But the next engineer who hears about it — the one who has been constrained to a narrow specialization for a decade and has assumed the constraint was permanent — recalculates. If she could cross the boundary, maybe I can cross the boundary. And the engineer after that. And the one after that. The accumulation of these individual recalculations, each triggered by a small act of demonstrated possibility, is how the landscape of professional identity is actually reshaped — not by policy announcements or corporate strategies but by the slow, distributed, often invisible process of people discovering that they can do things they thought they could not.
Solnit's history of activism is filled with examples of this mechanism operating at different scales and in different domains. A programmer releases code under an open-source license. The act is small — a file uploaded to a server. The consequence is the open-source movement, which reshaped the economics of software development more thoroughly than any antitrust action or regulatory intervention. A teacher asks a student a question that reframes the student's understanding of a subject. The act is invisible — it happens in a classroom, it is not recorded, it does not appear in any metric. The consequence unfolds across decades, as the student carries the reframing into her own work, her own teaching, her own conversations. The disproportion between act and consequence is, in each case, so extreme that it makes the act look insignificant and the consequence look miraculous. Neither impression is accurate. The act is significant because it demonstrates a possibility. The consequence is not miraculous because it is the product of a mechanism — the cascade of recalculated possibilities that a single demonstration triggers.
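The disproportion in that last sentence can be made concrete with a toy model. The sketch below follows Granovetter's classic threshold models of collective behavior; the thresholds are invented, and the model is deliberately crude. It shows the mechanism exactly: each person acts once enough others have acted first, so a population without a first mover stays frozen, while an otherwise identical population containing one person willing to act without precedent tips completely.

```python
# A toy threshold cascade, after Granovetter's threshold models of
# collective behavior. Each person acts once they have seen at least
# their threshold's worth of prior actors. The numbers are invented.

def cascade(thresholds: list[int]) -> int:
    """Return how many people ultimately act, iterating to a fixed
    point: each round, everyone whose threshold is met joins in."""
    acted = 0
    while True:
        willing = sum(1 for t in thresholds if t <= acted)
        if willing == acted:
            return acted
        acted = willing

# Two communities, identical except for one person's willingness
# to act without any prior demonstration:
print(cascade([1, 1, 2, 3, 4]))  # 0: no first mover, no cascade
print(cascade([0, 1, 2, 3, 4]))  # 5: one demonstration tips everyone
```

The first mover contributes one act; the structure contributes the rest.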
The AI moment is producing demonstrations of possibility at an unprecedented rate, and most of them are invisible. The solo founder who builds a revenue-generating product over a weekend without writing a line of code by hand. The student in Dhaka who accesses the same coding leverage as an engineer at Google. The parent who uses AI tools to build a website for a community organization that could not have afforded a developer. Each of these acts demonstrates that someone who was previously excluded from the building process — by lack of capital, by lack of training, by lack of institutional access — can now participate. The demonstration changes the calculus for everyone who encounters it.
But Solnit would insist on a qualification that the triumphalist narrative tends to elide. The demonstration of possibility is not the same as the achievement of justice. Parks's refusal demonstrated that resistance was possible. It did not, by itself, achieve desegregation. The achievement required thirteen months of organized boycott, years of legal struggle, decades of institutional reform, and a political contest that is, by many measures, still ongoing. The demonstration opened the space. The achievement required sustained, organized, institutional effort within that space.
The same is true of the AI transition. The demonstrations of possibility are real and important. The developer in Lagos who builds a product that competes with products from well-funded Silicon Valley teams has demonstrated something significant about the distribution of creative capability. But the demonstration does not, by itself, change the structural conditions that determine who captures the value of that capability. The developer still operates within an economic system that rewards infrastructure ownership more than creative contribution. The platforms still extract value from the creators who use them. The distribution of gains still follows the pattern that Solnit identifies as the default: concentration among the owners, costs distributed among the users.
The question is not whether the demonstrations of possibility will continue. They will, because the tools are available, the cost is low, and the human appetite for building is enormous. The question is whether the demonstrations will accumulate into institutional change — whether the slow cascade of individual recalculations will produce the organized, sustained, political effort required to ensure that the expanded capability translates into expanded opportunity.
Solnit's framework suggests that this institutional change will not happen on its own. It will happen because specific people, in specific communities, make the choice to organize — to convert the individual demonstrations of possibility into collective demands for structural change. The labor movement did not emerge spontaneously from the factory floor. It emerged because specific organizers, in specific factories, made the choice to convert individual grievances into collective action. The civil rights movement did not emerge spontaneously from Rosa Parks's refusal. It emerged because specific communities, with specific organizational infrastructure already in place, made the choice to convert a single demonstration into a sustained campaign.
The AI transition needs its organizers. Not protesters — the technology is not the enemy, and the machine-breaking of the Luddites teaches the futility of opposing the tool rather than shaping the institutional framework that governs its use. Organizers — people who convert individual experiences of capability expansion into collective demands for institutional structures that ensure the expansion benefits broadly rather than narrowly. People who build not just products but institutions. Not just code but norms. Not just features but frameworks.
These acts of institutional construction are, by Solnit's standards, small acts. A governance framework for AI deployment in a single school district. A cooperative model for AI-assisted creative work that ensures contributors share in the value they create. A municipal policy that requires algorithmic transparency. A community practice of mentoring that preserves the depth-building friction that AI's efficiency threatens to eliminate. Each of these acts is local, imperfect, and insufficient on its own. Each demonstrates a possibility. And the accumulation of demonstrated possibilities is how the landscape changes — not all at once, not according to a plan, but through the slow, distributed, often invisible process of people building alternatives and proving that the alternatives work.
Parks sat down. The calculus changed. The world, eventually, imperfectly, incompletely, changed with it. The mechanism is available. The demonstrations are multiplying. The question is whether the individuals who are discovering, one by one, that they can build things they thought they could not will find each other, organize, and build the institutions that convert individual possibility into collective change. Solnit's history says the odds are uncertain. Her philosophy says the uncertainty is exactly why the effort matters.
There is a garden in Berlin where a philosopher tends roses without a smartphone. There is a room in Trivandrum where an engineer discovers she can build interfaces she was never trained to build. Between the garden and the room lies the entire territory of the AI moment — and the most important question is not which location is correct but whether it is possible to stand in both at once.
Solnit has spent her career navigating exactly this kind of territory. Her work holds an unusual dual commitment that most public intellectuals abandon early: she mourns what is lost and she insists on what is possible, simultaneously, without allowing either commitment to cancel the other. This dual commitment is not a rhetorical strategy. It is a philosophical position — the position that a culture capable of only one response to radical change, either celebration or grief, is a culture that has lost the capacity for honest perception. Honest perception requires holding both.
The elegist and the agent appear throughout Solnit's writing, sometimes as distinct figures, sometimes as warring impulses within a single person. The elegist sees what is being destroyed and insists the destruction be named. The agent sees what might be built and insists the building begin. In Solnit's rendering, neither is complete without the other. The agent who builds without mourning does not understand the cost of what she is building on. The elegist who mourns without building has converted her clear perception into paralysis.
Byung-Chul Han, whose critique of the smooth society occupies three chapters of *The Orange Pill*, is the AI moment's most articulate elegist. His garden is not metaphorical. His refusal of digital tools is not performative. He has built a life organized around friction, slowness, and the kind of depth that only emerges through sustained resistance to the pressure of optimization. His diagnosis of what the smooth aesthetic destroys — the embodied knowledge deposited through struggle, the capacity for genuine attention, the understanding that can only be earned through the experience of difficulty — is precise enough to survive any counter-argument that merely celebrates what the smooth aesthetic produces.
Solnit shares Han's diagnostic precision. Her essays on Silicon Valley are as unflinching as his philosophical critiques. She has documented, with the specificity of a reporter and the moral clarity of a poet, the communities displaced, the public spaces eroded, the social life converted into data extraction, the autonomous systems deployed in a city whose fire chief warned they endanger lives. She does not minimize the cost. She does not soften it with qualifications about long-term gains or historical parallels. She names the cost directly, in the voice of the people who bear it.
But Solnit diverges from Han at the precise point where diagnosis meets prescription. Han's prescription is refusal — tend the garden, resist the smooth, withdraw from the systems that corrode depth. The prescription is coherent. It is also, Solnit's framework suggests, a luxury available only to those who can afford it. The philosopher with a tenured position in Berlin can choose to refuse a smartphone. The developer in Lagos cannot. The parent working two jobs cannot choose to opt out of the systems that organize her children's education. The teacher in an underfunded school cannot choose to ignore the tools that her students are already using, tools that are reshaping the cognitive environment she is responsible for navigating.
Han's garden is real. The roses grow. The attention deepens. But the garden has a wall, and outside the wall the river continues to flow, and the people outside the wall do not have the option of tending roses instead of navigating the current. For them, the question is not whether to engage with the technology. The question is how to engage with it in ways that preserve what Han correctly identifies as valuable — depth, attention, the embodied knowledge that comes through struggle — while also participating in the expanded capability that the technology genuinely offers.
This is where Solnit's dual commitment becomes operationally essential. The elegist in her sees what Han sees: the genuine loss of friction, the erosion of the pauses where thought develops, the colonization of attention by the imperative to optimize. The agent in her sees what Segal describes: the engineer who discovers new capabilities, the developer who builds something that should not have been possible, the expansion of who gets to participate in the creative process. Holding both perceptions simultaneously is not a compromise. It is the only honest position available.
The discipline of simultaneity — mourning and building at the same time — has practical implications that go beyond the emotional register. A culture that only celebrates the expansion does not build the institutional structures needed to protect what the expansion threatens. The Berkeley researchers documented the threat: task seepage, work intensification, the erosion of cognitive rest. Without the elegist's perception, these threats remain invisible — buried under the productivity metrics that measure output without measuring cost. The triumphalist culture cannot see what it is spending, because seeing the cost would complicate the celebration, and the celebration is what the market rewards.
Conversely, a culture that only mourns the loss does not build the institutional structures needed to ensure that the expansion benefits broadly. The Luddites' grief was real. The craft they lost was genuinely valuable. But the grief, translated into machine-breaking rather than institution-building, produced not the preservation of what was valuable but the criminalization of the people who valued it. The elegiac culture cannot see what is arriving, because seeing the arrival would complicate the mourning, and mourning is what preserves the moral authority of the critic.
Solnit's position escapes both traps by refusing to let either perception override the other. The loss is real. Name it. Honor it. Do not minimize it. The possibility is also real. Build toward it. Organize for it. Do not abandon it because the cost of the transition is high. The discipline is not balance — a word that implies splitting the difference, giving equal weight to two positions and arriving at a tepid middle. The discipline is intensity on both sides simultaneously — mourning with full force and building with full commitment, refusing to let the force of either reduce the other.
Segal arrives at a version of this position through his own route. His chapter on Han is not a dismissal. He takes the diagnosis seriously enough to feel its weight before mounting the counter-argument. His chapter on flow does not dismiss the Berkeley data. He holds the measurement of intensification alongside the measurement of satisfaction and does not resolve the tension. His closing chapters insist that the reader must both understand the loss and participate in the construction.
But Solnit provides something that *The Orange Pill* reaches for without fully achieving: a vocabulary for the emotional register of holding both. The silent middle — the parents, the teachers, the professionals caught between exhilaration and terror — are living the dual commitment every day, often without language for what they are experiencing. They feel the thrill of expanded capability and the grief of eroding depth in the same hour, sometimes in the same minute. The discourse, which rewards clean narratives, gives them no place to stand. The triumphalists claim them if they celebrate. The catastrophists claim them if they mourn. Neither camp has a place for the person who does both.
Solnit's body of work is an extended grant of permission to stand in both places at once. Not as a compromise. Not as confusion. As the discipline that the moment demands. The discipline of the person who tends the garden with one hand and builds the dam with the other, who does not pretend that either hand can rest, who understands that the garden and the dam are both necessary and that the necessity of each does not diminish the necessity of the other.
The elegist's dignity is real. Han earned it. The loss he names is the loss that anyone who has worked deeply in a discipline can feel in the specific ache of watching that discipline's hard-won knowledge become cheap. The builder's obligation is equally real. Segal earned it. The expanded capability he describes is the capability that allows people who were previously excluded to participate in the creative process for the first time. To insist on one at the expense of the other is to see half the landscape and mistake it for the whole.
Solnit's dual commitment is the refusal to mistake half the landscape for the whole. It is the insistence that honest perception requires holding both — the garden and the room, the roses and the code, the depth that friction builds and the breadth that its removal enables — without allowing either to eclipse the other. The AI moment does not need more elegists. It has enough. It does not need more agents. It has plenty. What it needs, and what Solnit's framework provides, is the discipline of being both — of mourning fully and building fully, in the same hour, with the same hands, for the same uncertain future.
Solnit tells a story in *Hope in the Dark* about the fall of the Berlin Wall. Not the event itself — everyone knows the event — but the months and years that preceded it, when the Wall's permanence was an article of faith shared equally by those who celebrated it and those who despised it. The Wall was a fact. It had stood for twenty-eight years. It was defended by guards, fortified by concrete, embedded in the geopolitical architecture of the Cold War. No serious analyst in 1988 predicted that the Wall would fall in 1989. Not because the analysts were stupid. Because the Wall's permanence was so deeply embedded in the structure of what everyone assumed that it was invisible — not a feature of the landscape but the landscape itself.
And then, in a single evening, it fell. Not because of a strategic plan. Not because of a military campaign. Because a confused press conference, a bureaucratic miscommunication, and a crowd that showed up at the border produced a cascade that no one had designed and no one could stop. The permanent thing was not permanent. The landscape was not the landscape. The assumption that had organized an entire civilization's understanding of what was possible turned out to be contingent — an artifact of a particular arrangement of forces that, when one force shifted, dissolved overnight.
Solnit uses this story not to argue that good things happen unexpectedly, though they do. She uses it to argue something more radical: that the most important changes are, by definition, the ones that the existing framework cannot predict. The framework exists to organize the known. The significant change is the thing that lies outside the known — the possibility that the framework cannot contain because the framework was not built to contain it. If the change could have been predicted, it would have been assimilated into the framework. It is precisely because it could not be predicted that it has the power to reshape the framework itself.
Applied to the AI moment, this argument produces a single, devastating implication: the most important consequences of artificial intelligence are the ones that nobody — not the accelerationists, not the catastrophists, not the builders at the frontier, not the regulators attempting to govern them — has anticipated. The consequences that dominate the current discourse — job displacement, productivity enhancement, creative disruption, educational transformation — are the consequences that the existing framework can contain. They are alarming or exciting or both, but they are legible. They fit within categories that already exist. They can be debated using vocabulary that already exists. They can be governed using institutions that already exist.
The consequences that will actually determine the trajectory of the AI transition are the ones that do not yet fit within any existing category. They are the Berlin Wall moments — the cascades that no one designed, the possibilities that no framework contained, the changes that arrive not as the culmination of a trend but as a rupture in the fabric of what everyone assumed.
Solnit cannot tell us what those consequences will be. That is the point. Neither can Segal. Neither can anyone. The future is not a destination toward which humanity is traveling along a route that can be mapped in advance. It is a space of possibilities — some visible, most not — that is shaped by every choice, every institution, every norm, every small act of demonstrated possibility that the present contributes to it. The space is genuinely open. The future genuinely depends on what happens next.
This is the final and most demanding implication of Solnit's framework. The undetermined future is not a failure of prediction. It is the actual structure of reality. The desire for certainty — for the model that predicts, the expert who knows, the roadmap that resolves — is the desire to close the space of possibility, to convert the genuinely open into the comfortably determined. And the closing of that space is itself a political act, because the people who claim to know the future are the people who shape it — not because their predictions are accurate but because their certainty displaces the participation of everyone else. If the experts know, there is nothing for the non-expert to contribute. If the accelerationist's narrative is correct, there is nothing for the skeptic to build. If the catastrophist's narrative is correct, there is nothing for anyone to do. Certainty, in every form, produces passivity in everyone who is not certain.
Solnit's insistence on uncertainty is, paradoxically, an insistence on agency. The future is undetermined. This means the choices made by real people in real institutions in real time will help determine it. This means your choices will help determine it. Not because you are powerful. Not because your individual contribution is large relative to the forces in play. But because the aggregate of millions of individual contributions is what constitutes the forces in play — because there is no force of history separate from the accumulated choices of the people who make history, and you are one of them whether you acknowledge it or not.
The choice before the reader of this book — and of *The Orange Pill*, and of every text that attempts to reckon honestly with the AI moment — is not between AI and no-AI. That choice was made years ago, by decisions and investments and institutional commitments that cannot be unwound. The choice is about the character of the AI future. Whether the expanded capability that these tools provide will flow broadly, reaching the developer in Lagos and the teacher in the rural classroom and the parent at the kitchen table, or whether it will concentrate, following the familiar pattern that Solnit has documented in every previous technological transition, among the people who control the infrastructure and capture the gains.
The outcome is not determined. This is not optimism. This is the epistemological claim that underlies Solnit's entire body of work: the future is genuinely open, genuinely shaped by human choice, genuinely dependent on who shows up and what they build. The Berlin Wall fell because a crowd showed up at a border. The civil rights movement transformed a nation because communities organized around a refusal to accept the existing arrangement. The labor movement created the institutional structures that distributed the gains of industrialization because specific people, in specific places, made the choice to build alternatives to the default.
Each of these outcomes was uncertain at the time of action. None of the people who participated could guarantee the result. They acted not because they knew the outcome but because they knew the outcome was undetermined, and that the undetermined future was the only kind of future in which their action could matter.
Solnit, in her 2024 *London Review of Books* essay, wrote about San Francisco as a city "fully annexed" by the tech firms of Silicon Valley, returning to "a kind of feudalism." The word feudalism is not casual. It describes a specific arrangement of power: a small class that controls the essential infrastructure and a large class that depends on it, with the dependence enforced not by chains but by the absence of alternatives. The AI transition could produce this arrangement — could extend the platform economy's logic into every domain of knowledge work, converting the expanded capability of AI into a new form of dependence in which everyone can build but only the infrastructure owners capture the value.
Or it could produce something different. The open-source AI communities sharing tools across borders. The cooperative models that distribute value among contributors. The governance frameworks that ensure algorithmic transparency and democratic accountability. The educational systems that teach questioning rather than answering, judgment rather than execution, care rather than optimization. These alternatives exist. They are small. They are fragile. They are, by the standards of the market, insignificant compared to the scale of the forces arrayed against them.
But the Berlin Wall was permanent until the evening it was not. The franchise was restricted until the decade it expanded. The eight-hour day was unthinkable until the generation it arrived. The pattern is not that good outcomes are inevitable. The pattern is that good outcomes are possible, that they are produced by human action under conditions of genuine uncertainty, and that the people who produce them are the people who show up without guarantees.
The sunrise at the top of *The Orange Pill*'s tower is not a promise. It is a possibility that depends on who climbs. Solnit's life work provides the philosophical foundation for that climb: the recognition that the darkness is real, the loss is real, the cost is real — and that the undetermined future is the only kind of future worth working for, because it is the only kind of future in which working matters.
The future is not written. The outcome is not determined. The uncertainty is not a threat to be managed. It is the space in which everything meaningful happens. The choice — to build, to tend, to care, to show up — is yours. Not because the choice guarantees a good outcome. Because the choice is the only thing that makes a good outcome possible.
Three threads from Solnit's body of work have been circling my thinking since I began this project, and I cannot shake any of them.
The first: "Driverless cars are often called autonomous vehicles — but driving isn't an autonomous activity." The second: the observation that her books were scraped into AI training datasets without consent, and that the same systems trained on her words later silenced her through an algorithm that could not distinguish her writing from incitement. The third, from Hope in the Dark: the recognition that the outcome is uncertain, and that the uncertainty is not a problem to be solved but the very condition that makes your participation meaningful.
I think about these together because together they describe the complete landscape of this moment. The first names the error: the assumption that human social activities are information-processing tasks that can be automated without losing the social dimension that constitutes their actual value. The second names the injury: the extraction of creative labor by the same systems that then exercise indifferent power over the people they extracted from. The third names the response: not optimism, not despair, but the discipline of acting in conditions where the outcome depends on what you do and you cannot know in advance whether what you do will matter.
In *The Orange Pill*, I described the orange pill moment as falling and flying at the same time. Solnit gave that experience its philosophical name. It is hope — not the comfortable hope of the person who expects a good outcome, but the demanding hope of the person who recognizes that the outcome is genuinely uncertain and chooses to engage anyway. The hope that makes you show up at the border when the Wall might or might not fall. The hope that makes you sit down on the bus when you do not know whether anyone will stand with you.
I have been the person who built systems without asking who they displaced. Solnit's framework does not let me forget that, and it should not. But her framework also does not let me stop building, because stopping is not neutrality. Stopping is the surrender of agency to the people who did not stop — and those people may not share my concerns about who bears the cost. The uncomfortable truth of this moment is that engagement and complicity are not opposites. They coexist, in every builder, every morning, at every keyboard.
The teacher redesigning her curriculum. The executive choosing to grow rather than cut. The parent creating spaces for boredom in a child's saturated attention. These are small acts. Solnit taught me that small acts are the only kind that actually change history — not because they are sufficient, but because they demonstrate that an alternative exists, and the demonstration changes the calculus for everyone who encounters it.
The future is not written. That is not reassurance. It is a summons.
-- Edo Segal
The dominant AI debate offers two positions: triumphant acceleration or resigned despair. Rebecca Solnit's life work reveals that both are forms of the same passivity — the surrender of agency to a narrative that claims the future is already decided. Through the lens of *The Orange Pill*, this volume explores Solnit's radical insistence that uncertainty is not a threat to be managed but the very condition that makes human choice meaningful. When the ground shifts, do you watch — or do you build?
Solnit's decades of writing on activism, disaster communities, and the politics of technology provide the framework the AI moment desperately lacks: a way to mourn what is genuinely lost while building toward what is genuinely possible, without allowing either commitment to cancel the other. Her distinction between hope and optimism becomes the emotional foundation for navigating a transition that no one can predict and no one can afford to ignore.
This is not a book about waiting for the future to arrive. It is a book about showing up to shape it — with full awareness of the cost, full commitment to the work, and no guarantee of the outcome.
-- Rebecca Solnit, *Hope in the Dark*

A reading-companion catalog of the 20 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Rebecca Solnit — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →