Ursula Franklin — On AI
Contents
Cover
Foreword
About
Chapter 1: Technology as Practice, Not Artifact
Chapter 2: Holistic Promise, Prescriptive Reality
Chapter 3: The Growth Model and the Production Model of Knowledge Work
Chapter 4: Reciprocity and the Extractive Practice
Chapter 5: The Prescriptive Turn
Chapter 6: Structural Silence, Compliance, and the Narrowing of Dissent
Chapter 7: The Real World of Tuesday Afternoon
Chapter 8: Earthkeeping in the Cognitive Domain
Chapter 9: What the Real World of AI Technology Requires
Epilogue
Back Cover

Ursula Franklin

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Ursula Franklin. It is an attempt by Opus 4.6 to simulate Ursula Franklin's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question nobody asked me in Trivandrum was whether my engineers understood what they shipped.

I measured output. I celebrated the twenty-fold multiplier. I watched people build things in days that used to take months, and I called it liberation. I described it that way in The Orange Pill because that is how it felt — the imagination-to-artifact ratio collapsing, the translation barrier dissolving, capability flooding into spaces where friction had kept it dammed.

Then I encountered Ursula Franklin, and she asked a question that stopped me cold. Not "What can the tool do?" but "What does the tool do to the person using it?"

Different question entirely.

Franklin was a physicist and metallurgist who spent decades studying crystal structures — the way the same atoms, arranged differently, produce materials with completely different properties. The same steel, cooled slowly, is soft and workable; quenched, it is hard and brittle. Same atoms. Different arrangement. Different practice. She applied that insight to technology with devastating precision. Technology is not the device, she argued. Technology is the practice — the entire system of relationships between the worker, the work, and the institution within which the work occurs. Change the practice, and you change everything, regardless of whether the device looks the same.

This distinction is the sharpest analytical tool I have found for understanding what AI is actually doing to knowledge work. The artifact — Claude Code, the natural language interface, the model itself — is extraordinary. I have said so at length and I stand by it. But the practice that has formed around the artifact is something else entirely. The intensification documented by the Berkeley researchers. The task seepage into lunch breaks and elevator rides. The slow, invisible shift from directing the tool to ratifying its suggestions. These are not features of the artifact. They are features of the practice. And the practice is where the consequences live.

Franklin gave me vocabulary I did not have. She helped me see the difference between the production model, which measures what my team ships, and the growth model, which measures what my team learns in the process of shipping it. She showed me that my engineer who lost ten minutes of formative struggle inside four hours of eliminated tedium had not simply gained efficiency. She had lost the mechanism through which judgment renews itself.

This book is not comfortable reading for a builder. It asks questions the next sprint planning meeting would prefer to skip. That is exactly why it matters. The house is being built right now, and Franklin insists the inhabitants should have a voice in the floor plan.

— Edo Segal · Opus 4.6

About Ursula Franklin

1921–2016

Ursula Franklin (1921–2016) was a German-born Canadian physicist, metallurgist, and philosopher of technology. Born in Munich, she survived a Nazi forced-labor camp during World War II before emigrating to Canada, where she joined the University of Toronto and became one of the country's most respected public intellectuals. Her landmark 1989 CBC Massey Lectures, published as The Real World of Technology, introduced the influential distinction between holistic and prescriptive technologies and argued that technology is best understood not as a collection of artifacts but as a practice — a system that reorganizes social relationships, distributes power, and shapes the conditions for human development.

A committed pacifist and feminist, Franklin was instrumental in research that helped bring about the Partial Nuclear Test Ban Treaty by using isotope analysis to trace radioactive fallout in children's teeth. She was named a Companion of the Order of Canada and received the Governor General's Award in Commemoration of the Persons Case.

Her concepts of prescriptive technology, the production model versus the growth model, reciprocity as a criterion for evaluating technological practice, and earthkeeping as a stewardship ethic have influenced scholars across science and technology studies, including AI ethics researchers such as Meredith Whittaker, who credits Franklin as a foundational influence. Franklin's work insists that the governance of technology is a democratic responsibility, not a technical one, and that the inhabitants of any technological system must have a voice in its design.

Chapter 1: Technology as Practice, Not Artifact

In 1989, a physicist named Ursula Franklin stood before a Canadian audience and made an argument that would take thirty-five years to become urgent. Technology, she said, is not a collection of devices. It is not the sum of gadgets on a shelf or programs on a screen. Technology is a practice — a way of doing things, a system of relationships between the worker, the work, and the institution within which the work occurs. The distinction sounds simple. It is, in fact, the single most important analytical tool available for understanding what artificial intelligence is doing to the world of cognitive work.

The distinction matters because the public conversation about AI has been conducted almost entirely in the language of artifacts. Claude Code is an artifact. GPT-4 is an artifact. The natural language interface is an artifact. These artifacts possess remarkable capabilities, and the capabilities are real, measurable, and in many cases genuinely astonishing. A person can describe what she wants in plain English and receive working software in return. An engineer who spent a career confined to backend systems can build user-facing features in two days. A single individual can operate at the scale of a team. These are claims about the artifact, about what the tool can do, and the evidence supporting them is compelling.

But capability is only one dimension of a technology's impact, and it is the dimension that is most visible, most easily measured, and most frequently celebrated. Franklin spent her career examining a different dimension — less visible, less easily measured, and almost never celebrated. The dimension of practice: how does the technology reorganize the work itself? Who controls the process? Who gains skill and who loses it? What forms of knowledge are rewarded and what forms become invisible? What happens to the relationships between workers when the tool enters the workshop?

These questions cannot be answered by counting lines of code generated per hour or features shipped per sprint. They can only be answered by watching what actually happens when people use these tools in the real conditions of their work — in real organizations, under real pressures of time and money and reputation, with the internalized imperative to produce that characterizes contemporary knowledge work.

Franklin was not a Luddite. She was a metallurgist, a physicist, a woman who had spent decades studying the crystal structures of metals and who understood, with the precision of someone trained in materials science, that the properties of any system are determined not by its components alone but by the relationships between them. The same steel, its atoms settled into one crystal structure by slow cooling, is soft and malleable. Quenched, its atoms locked into another structure, it is hard and brittle. The difference is not in the atoms. It is in the structure — in the practice of their arrangement.

Technology operates on the same principle. The same tool, deployed within different practices, produces dramatically different social consequences. This insight, obvious once stated, is systematically ignored in the discourse about AI, where the artifact's capability is treated as though it determines the practice's outcome. It does not. The practice is shaped by institutions, incentive structures, cultural norms, and power relationships that are independent of the artifact's design.

Consider the evidence that already exists. In the summer of 2025, researchers from UC Berkeley embedded themselves in a two-hundred-person technology company for eight months to study what happened when generative AI tools entered a functioning organization. Their findings, published in the Harvard Business Review, describe the practice rather than the artifact.

Workers who adopted AI tools worked faster, took on more tasks, and expanded into areas that had previously belonged to other team members. Boundaries between roles blurred. Designers started writing code. Delegation decreased. These are production metrics, and by production metrics the tools succeeded spectacularly. But the researchers also documented something the production metrics could not capture. Work seeped into pauses. Workers were prompting during lunch breaks, squeezing requests into the minutes between meetings, filling gaps that had previously served — informally, invisibly — as moments of cognitive rest. Multitasking became the norm, and it fractured attention. The researchers documented a pervasive sense of always juggling, even as the work felt productive.

These findings describe not the artifact but the practice. Claude Code is a tool that generates software from natural language descriptions. That is the artifact. The practice — the technology of AI-augmented work as it is actually lived — is a pattern of intensification, boundary dissolution, attentional fragmentation, and the colonization of rest by production. The artifact promises liberation from tedious implementation work. The practice, as documented by careful empirical research, delivers a different kind of captivity.

Franklin would have recognized this pattern instantly. She had seen it before, in every prescriptive technology she studied. The power loom promised liberation from the drudgery of hand-weaving. The assembly line promised liberation from the inefficiency of craft production. In each case, the artifact delivered on its promise of capability. And in each case, the practice reorganized the work in ways that served the logic of production at the expense of the worker's autonomy, understanding, and capacity for independent judgment.

The point is not that AI tools are bad. The point is that understanding the artifact and understanding the practice are different forms of understanding, and confusing them is the most consequential error in the public conversation about AI. The artifact tells you what the tool can do. The practice tells you what the tool does to the person using it. A society that evaluates AI only by the first measure and ignores the second is a society making decisions about its own house while examining only the bricks and ignoring the floor plan.

Franklin's metaphor was precise: technology is the house that we all live in. Not a tool we pick up and put down, but an architecture that determines what rooms are available, what activities are possible, what social arrangements are supported or prevented by the structure itself. The metaphor carries a political implication that Franklin intended. A house can be designed by its inhabitants, or it can be designed for them by others whose interests may not align with theirs. In a democracy, the design of the house should be a collective decision. The current house of AI technology is being designed by technology companies whose incentive structures reward engagement, adoption rates, and revenue growth. The inhabitants — the workers, students, parents, and citizens who live inside the practice — have had almost no voice in the design.

This is not a conspiracy. It is the normal operation of prescriptive technology within a market economy. The companies that build the tools are rewarded for capability. The organizations that deploy the tools are rewarded for productivity. The workers who use the tools are rewarded for output. Nobody in this chain of incentives is rewarded for asking Franklin's questions — for examining the practice rather than the artifact, for measuring what happens to the worker's understanding alongside what happens to the worker's throughput. The questions go unasked not because they are suppressed but because the incentive structure makes them invisible.

Franklin argued throughout her career that the governance of technology is a democratic responsibility, not a technical one. The public does not need to understand the mathematics of transformer architectures or the engineering of inference optimization to participate in the governance of AI. The public needs to understand what the technology does to the practice of work, to the relationships between workers, to the conditions for human development. These are things citizens can understand because they are things citizens experience.

The worker who feels the creeping inability to concentrate, who notices that her capacity for sustained focus has eroded since she began using AI tools for every task, who senses that something has changed in the quality of her engagement with her work but cannot name what — she is experiencing the practice. She does not need a PhD in computer science to describe what is happening to her. She needs a framework that takes her experience seriously as evidence about the technology's impact, rather than dismissing it as anecdotal or irrelevant next to the productivity metrics.

Franklin's framework provides exactly this. It insists that the worker's experience of the practice is not a secondary consideration, to be weighed against the artifact's capability and found wanting. It is primary evidence about the technology's actual social consequences. A technology that increases output while degrading the worker's capacity for independent judgment has not simply succeeded. It has succeeded at one thing while failing at another, and the failure matters as much as the success.

The conversation about AI has been dominated by what the technology companies call their "mission" — to build artificial general intelligence, to democratize access to capability, to accelerate human progress. These are missions about the artifact. Franklin's contribution is to insist on a parallel conversation about the practice — a conversation that asks not what the tool can accomplish but what the tool does to the social fabric within which it operates. The two conversations are not opposed. They are complementary. But only one of them is being conducted at scale, with institutional support, with billions of dollars of investment, with the attention of governments and media and the public. The other — the conversation about practice — is being conducted in scattered academic papers, in the quiet observations of workers who notice something has changed but lack the vocabulary to name it, and in the work of thinkers who, like Franklin, insist that technology is too important to be left to technologists.

When Franklin said "there is no technology for justice — there is only justice," she was making a claim that applies directly to AI. No amount of algorithmic sophistication will produce just outcomes if the practice within which the algorithm operates is unjust. No amount of capability expansion will produce human flourishing if the practice of capability expansion systematically degrades the conditions for human development. The technology can open doors. The practice determines who walks through them, into what rooms, and under what conditions.

The remaining chapters of this analysis will examine AI technology through the lens Franklin developed — not as a collection of artifacts to be celebrated or feared, but as a practice to be understood, governed, and shaped by the democratic participation of the people who live inside it. The analysis will ask Franklin's questions: Who controls the process? Who benefits? What forms of knowledge are valued and what forms are silenced? Does the technology strengthen or weaken democratic participation? Does it support reciprocity or demand compliance? Does it serve the development of the person or merely the production of the output?

These are not comfortable questions. They are not the questions that keynote addresses or product launches or quarterly earnings calls are designed to answer. They are the questions that a physicist who studied crystal structures asked about every technology she encountered, because she understood that the properties of a system depend not on its components but on the relationships between them.

The components of AI are extraordinary. The relationships between the humans who use it, the institutions that deploy it, and the societies that govern it are still being formed. Those relationships — the practice, not the artifact — will determine whether AI technology becomes a house worth living in.

---

Chapter 2: Holistic Promise, Prescriptive Reality

The most influential distinction in Ursula Franklin's work is the one between holistic and prescriptive technologies. It is not a technical distinction. It is a political one. It describes different relationships between the worker and the work, different distributions of power and autonomy, different structures of knowledge and skill. And it is the distinction that AI technology is currently in the process of dissolving — not by transcending it, but by making it invisible.

In holistic technology, the practitioner controls the entire process from beginning to end. The potter selects the clay, prepares it, centers it on the wheel, shapes it with her hands, decides when to stop, when to thin the wall, when the form has achieved the quality she seeks. Her skill, her judgment, her aesthetic sense are engaged at every stage. The vessel that emerges is the product of her whole engagement with the process, and no two vessels are identical because no two moments of engagement are identical.

In prescriptive technology, the process is divided into steps, and the steps are designed by someone other than the person who performs them. The factory worker pours slip into a mold, or trims excess material from a cast form, or applies glaze according to a specification determined elsewhere. She does not control the process. She executes her assigned step. The product that emerges is the product not of any individual's whole engagement but of the prescribed sequence, designed by engineers, enforced by management, optimized for consistency and volume.

Franklin argued that this distinction carries consequences far beyond the workshop. Prescriptive technologies produce compliance — they train workers to follow procedures rather than exercise judgment. A society organized around prescriptive technology is, structurally, a society in which compliance is rewarded and autonomy is restricted. The dominance of prescriptive technologies, Franklin warned, discourages critical thinking and promotes what she called "a culture of compliance" — a culture of accepting orthodoxy as normal, of doing what the system asks without questioning whether the system is asking the right thing.

AI has been presented — with genuine evidence and genuine enthusiasm — as a technology that reverses the prescriptive turn. The engineer who uses a natural language interface to build an entire feature from conception to deployment, controlling the process end to end, looks like a craftsperson. The designer who implements complete products without handing off to specialists looks like a potter at her wheel. The individual who operates at the scale of a team, who no longer depends on the division of labor that prescriptive technology requires, appears to have recovered the holistic practice that industrialization destroyed.

The evidence for this holistic reading is real. An engineer in Trivandrum who had spent eight years confined to backend systems built a complete user-facing feature in two days — not a prototype, but a deployable product. She controlled the process from beginning to end. She made the judgments. She determined the outcome. The division of labor that had previously separated her work from the frontend developer's, the designer's, the QA engineer's, had been collapsed by the tool.

But here is the analytical move that Franklin's framework demands: look past the appearance and examine the actual distribution of control within the process. The appearance is holistic. The question is whether the reality is holistic, or whether the practice conceals a prescriptive structure operating under a new name.

When the engineer describes what she wants, the AI responds with an implementation. She reviews the implementation. If it meets her expectations, she accepts it. If not, she adjusts her description and the AI responds again. The process looks like a conversation between collaborators. It looks like the reciprocal exchange between a craftsperson and her material — the clay pushes back, the potter adjusts.

But there is a crucial asymmetry. The potter understands the clay. She knows its properties, its tendencies, its limits. Her adjustments are informed by a deep, embodied knowledge of the material she is working with. The knowledge lives in her hands. In many cases — and the frequency of these cases is the crux of the argument — the engineer does not possess equivalent knowledge of the implementation the AI has provided. She can evaluate the output functionally: does it work? Does it meet the specification? But she cannot always evaluate the process by which the output was generated. She does not know why the AI chose this architecture rather than that one, this design pattern rather than the alternatives, this approach to error handling rather than others that might be more resilient.

This asymmetry is the signature of prescriptive technology dressed in holistic clothing. The worker retains the appearance of control — she initiates the process, she evaluates the output, she makes the final decision — but the substantive work, the generation of the actual implementation, is performed by a process she does not control, does not fully understand, and cannot modify at the level of its operation. She controls the specification. The machine controls the execution. And over time, as the machine's executions shape her expectations, constrain her design choices, and define the space of what she considers possible, the machine's influence extends backward from execution into specification itself.

This is not a hypothetical concern. It is observable in the practice of AI-augmented work as it is currently lived. The phenomenon has been described, with admirable honesty, by practitioners themselves. One account describes the discipline of rejecting AI output when the prose sounds better than the thinking behind it — when the surface quality of the output exceeds the depth of the underlying idea. The author catches himself almost keeping a passage that is eloquent and well-structured but that he cannot verify he actually believes. The prose had outrun the thinking. He deletes the passage and spends two hours writing by hand until he finds the version that is genuinely his.

This discipline — the practice of maintaining holistic control by refusing to accept output that has outrun the worker's understanding — is precisely the practice of resisting the prescriptive turn. It is essential, and it is admirable. But it depends on the individual practitioner's vigilance, and individual vigilance is a fragile defense against a structural pressure that operates continuously and rewards compliance.

The structural pressure comes from the incentive system within which the work occurs. The worker who rejects the AI's suggestion and spends two hours finding her own version is, by every production metric that organizations use to evaluate work, less efficient than the worker who accepts the suggestion and moves on. Her output is smaller. Her throughput is lower. Her speed is reduced. In an environment that measures output, that rewards speed, that promotes the worker who ships fastest, the discipline of holistic control is a career liability. It is the professional equivalent of the potter who insists on throwing each vessel by hand in a market that has discovered slip casting.

Franklin would have recognized this dynamic as the classic mechanism through which prescriptive technology consolidates its dominance. The technology does not force compliance. It makes compliance the path of least resistance. The worker who complies is rewarded. The worker who resists is penalized — not explicitly, not through formal sanctions, but through the slow, relentless operation of an incentive system that values output over understanding.

The prescriptive dimension of AI extends beyond individual interactions to the structural level. When multiple workers use the same AI tools, trained on the same data, optimized for the same patterns of response, their outputs converge. Different workers bring different questions, different contexts, different evaluative judgments. But the AI's contribution pushes toward a central tendency — a mean of style, structure, and approach that reflects the model's training distribution rather than any individual's distinctive judgment.

This convergence is the conformity that prescriptive technology produces at scale. The assembly line produced standard products because the process was standardized. AI produces convergent cognitive outputs because the cognitive contribution of the machine is standardized. The individual worker's contribution provides variation around a mean. The machine's contribution defines the mean. And as the machine's contribution grows relative to the individual's — as more of the substantive work is delegated to the tool — the variation narrows and the mean becomes more dominant.
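
The arithmetic behind this narrowing is simple enough to sketch. The toy model below is an illustration of the argument, not a measurement of any real tool: each worker's output blends her own judgment with the shared tool's central tendency, and the spread across workers shrinks in direct proportion to the individual's share of the work. The weights, distributions, and numbers are assumptions chosen for clarity.

```python
import random
import statistics

def blended_output(individual_judgment: float, model_mean: float, alpha: float) -> float:
    """Toy model: a worker's output mixes her own judgment (weight alpha)
    with the standardized contribution of a shared tool (weight 1 - alpha)."""
    return alpha * individual_judgment + (1 - alpha) * model_mean

MODEL_MEAN = 0.0   # the central tendency the shared tool pushes toward
WORKERS = 1000

random.seed(42)
judgments = [random.gauss(0.0, 1.0) for _ in range(WORKERS)]  # diverse individual views

for alpha in (1.0, 0.5, 0.2):  # shrinking individual share as delegation grows
    outputs = [blended_output(j, MODEL_MEAN, alpha) for j in judgments]
    print(f"individual share {alpha:.1f} -> spread across workers: "
          f"{statistics.stdev(outputs):.2f}")
```

Run it and the spread falls from roughly 1.0 to 0.5 to 0.2. The workers have not become less capable; the field of their outputs has simply converged on the tool's mean.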

In creative and intellectual work, this narrowing has consequences for the capacity of a culture to produce genuinely new ideas. Innovation requires variation — the outlier, the deviation, the approach that does not fit the established pattern. A practice that systematically narrows the range of cognitive variation is a practice that, over time, reduces the ecosystem's capacity for surprise. Not because the practitioners lack capability, but because the tool's standardized contribution has homogenized the field in which capability operates.

Franklin's framework does not predict that AI must be prescriptive. It predicts that AI will tend toward prescriptive practice unless deliberate structures are built to maintain holistic control. The tendency is structural, not inevitable. It can be resisted — but only through the kind of institutional commitment to holistic practice that the production model does not reward and the market does not demand.

The holistic promise of AI is real. A person who controls an entire creative or productive process through natural language, who is freed from the division of labor that industrial technology imposed, who can build what she imagines without depending on a chain of specialists — that person is, in principle, practicing holistic technology. But the realization of this promise depends on conditions that the current deployment of AI does not provide: genuine understanding of the tool's outputs, institutional support for the discipline of evaluation, protection of the cognitive space for independent judgment, and resistance to the convergence that standardized tools produce.

Without these conditions, the holistic promise resolves into the prescriptive reality that Franklin identified in every powerful technology she studied. The worker retains the appearance of control. The machine holds the substance. And the distinction between the two becomes increasingly difficult to see — because the machine's outputs are increasingly good, increasingly plausible, increasingly polished, and the effort required to distinguish between accepting them with informed judgment and accepting them with uninformed compliance is increasingly difficult to justify in an economy that pays for speed.

The potter knows when the wall is thin enough. She knows because she has felt a thousand walls thin under her hands, and the knowledge lives in her body. The question for AI-augmented cognitive work is whether the knowledge that makes judgment possible — the embodied, experiential understanding built through struggle and failure and patient accumulation — will survive the transition to a practice in which the struggle has been outsourced to a machine that does not struggle.

---

Chapter 3: The Growth Model and the Production Model of Knowledge Work

Every organization, every institution, every society makes a choice between two models of work. The choice is rarely explicit. It is embedded in incentive structures, in evaluation criteria, in the language used to describe success and failure. But the choice is real, and its consequences determine whether a technology serves human development or merely extracts human output.

Ursula Franklin called them the production model and the growth model. In the production model, work is organized to maximize output. The worker is a means to an end, and the end is the product. Efficiency is the governing value. The measure of success is volume, speed, margin. The worker's development is relevant only insofar as it contributes to output. Training is an investment in future productivity. Rest is a maintenance requirement for the productive apparatus. The production model asks: How much? How fast? At what cost?

In the growth model, work is organized to develop the worker. The process of doing the work is itself the primary product, because through the process the worker grows in understanding, skill, and judgment. The growth model asks: What did the worker learn? How did her capacity develop? Is she more capable now than before? Output is a consequence of growth, not its purpose. The apprentice who spends a year learning to plane a board is not wasting time that could be spent producing furniture. She is developing the judgment that will allow her to produce furniture worth making.

These two models are not mutually exclusive. Every real workplace contains elements of both. The best organizations serve both simultaneously — producing meaningful output while developing the people who produce it. But the models pull in different directions, and when the pressure mounts, when the quarterly numbers come due, when the competitive landscape tightens, the production model almost always prevails. This was true in Franklin's time. It is emphatically true now, in the deployment of AI tools across every sector of knowledge work.

The evidence is already available. The Berkeley study documented what happens when AI tools enter a working organization measured primarily by production metrics. Workers produced more, faster, across a wider scope. By every production measure, the tools succeeded. But the researchers also documented intensification, boundary dissolution, the colonization of cognitive rest by additional tasks. They documented what they called "task seepage" — work flowing into the gaps between work, filling the pauses that had served, invisibly, as the recovery periods that sustain attention over a workday. These gaps were not idle time. They were growth-model resources — moments of unstructured cognition during which the mind wanders, processes unresolved problems, makes unexpected connections. The production model classified them as waste. The AI filled them with additional output. The growth model lost its substrate.

The distinction between the two models cuts through the most celebrated claims about AI's impact on work with uncomfortable precision. Consider the twenty-fold productivity multiplier. Measured by the production model, this is an unqualified triumph. The same worker, using the same number of hours, produces twenty times the output. The capability expansion is real, measurable, and in many cases astonishing to the people who experience it.

Measured by the growth model, the same twenty-fold multiplier demands different questions. What happened to the worker's understanding? The implementation work that the tool replaced was not merely output. It was developmental experience — the specific, patient, friction-rich process through which understanding accumulates. A particular case illustrates this with painful clarity. An engineer who, before AI, spent roughly four hours a day on configuration and dependency management — tedious, mechanical work she did not miss — also encountered, within those four hours, perhaps ten minutes of unexpected difficulty. Something failed in an unfamiliar way. A connection between systems revealed itself through breakage. The failure forced understanding — not the kind of understanding you can read about in documentation, but the embodied, experiential knowledge that deposits itself in the practitioner's judgment through years of encounter with the unexpected.

When AI assumed the four hours of mechanical work, it also consumed the ten minutes of formative difficulty. The engineer noticed the loss only months later, when she found herself making architectural decisions with less confidence than she had previously felt and could not explain why. The production model accounted for the four hours saved. The growth model accounted for the ten minutes lost. Neither metric captured the full picture alone. Together, they reveal a trade that is being made across the entire landscape of AI-augmented work: the trade of developmental experience for productive output.

Franklin would recognize this trade as characteristic of every prescriptive technology she studied. The assembly line traded the craftsperson's holistic understanding for the factory's volume of output. The trade increased production and decreased the worker's developmental opportunity. The same trade is now being made in cognitive work, at a scale and speed that Franklin could not have anticipated but that her framework predicts with troubling accuracy.

The trade is not symmetrical. The ten minutes of formative struggle cannot be restored by adding ten minutes of deliberate practice to the end of the workday. Formative struggle is contextual — it occurs when the unexpected interrupts the expected, when the system behaves in ways the practitioner did not predict, when the failure forces a reckoning with a gap in understanding. You cannot schedule this. You cannot design a training module that replicates it. It emerges from the friction of real work on real problems, and when the friction is removed, the emergence stops.

A society that systematically prioritizes the production model over the growth model in the deployment of AI tools is a society consuming its cognitive capital without replenishing it. The senior practitioners who possess deep judgment — who can feel when a system is wrong before they can articulate why, who carry decades of accumulated understanding in their architectural instincts — are a finite resource. They built their understanding through the same productive struggle that AI now eliminates. When they retire, their understanding retires with them. If the next generation of practitioners has been trained within the production model — operating tools they do not fully understand, accepting outputs they cannot independently evaluate — the knowledge base of every affected profession narrows with each generational transition.

This is not a future concern. It is a present dynamic. Senior practitioners across software engineering, law, medicine, and design report the same observation: the junior colleagues who have trained with AI assistance produce competent output but demonstrate less capacity for independent diagnosis when the tools fail or the situation departs from the patterns the tools were trained on. The competence is real. The independence is diminished. The production model counts the competence. The growth model counts the independence. And the divergence between the two counts widens with each cohort trained under the new practice.

Franklin's framework does not argue that the production model is wrong and the growth model is right. It argues that both must be applied to any assessment of technology's impact, and that the current assessment of AI has been conducted almost entirely within the production model's terms. Output, efficiency, speed, scale — these are the metrics that dominate the conversation. Development, understanding, judgment, the capacity for independent thought — these are the metrics that are nearly absent from the institutional evaluation of AI's effects.

The consequences of this imbalance extend beyond individual workers to the organizations and societies that depend on them. An organization staffed by productive operators who do not understand the systems they operate is a fragile organization. It depends on the tool for its capability. If the tool produces errors the operators cannot detect — and the smoothness of AI output makes errors particularly difficult to detect, because the output is formatted to look correct even when its content is wrong — the organization has no defense. It has traded the slow accumulation of understanding for the fast acquisition of output, and when the output fails, there is no understanding to fall back on.

The practical implications are immediate and specific. Organizations deploying AI tools face a choice that most have not recognized as a choice because the production model's logic makes it appear inevitable. The choice is between capturing the full productivity gain as output — more features, faster delivery, fewer workers — and investing a portion of the gain in the growth model: protecting time for unaugmented work, building evaluation skills that the tools do not require but that resilience demands, maintaining the developmental pathways through which judgment accumulates.

Franklin described the growth model not as a luxury but as infrastructure. The apprentice's year of learning to plane a board is not wasted time. It is the foundation on which every subsequent judgment rests. The ten minutes of formative struggle buried inside four hours of tedious configuration work was not waste. It was the mechanism through which architectural intuition was built. The growth model's resources look like waste only through the production model's lens. Through the growth model's lens, they are the most important work being done — because they are the work that produces the worker capable of directing the output.

An AI tool that eliminates four hours of tedium and ten minutes of formative difficulty has, from the production model's perspective, saved four hours and ten minutes. From the growth model's perspective, it has saved four hours and consumed the most valuable ten minutes in the workday. The two models produce different accounts of the same event. The difference between them is the difference between an organization that understands what it is losing and one that does not.

The production model asks: How much more can we produce? The growth model asks: How much more can we understand? Both questions matter. The present deployment of AI tools across the landscape of knowledge work is answering the first question at increasing volume while the second question goes nearly unasked. Franklin's framework insists that the second question be heard — not as a substitute for the first, but as its necessary complement. A house built entirely of production rooms is a factory. Human beings do not flourish in factories. They endure them.

---

Chapter 4: Reciprocity and the Extractive Practice

Reciprocity is the structural condition for the sustainability of any practice. Ursula Franklin returned to this principle throughout her career, and she derived from it a test that could be applied to any technology: does this practice give back some measure of what it takes? A technology that takes without returning is extractive. It depletes the resource it depends on. The extraction can be highly productive in the short term. It is always unsustainable in the long term. The soil gives out. The practice collapses. And the collapse reveals, too late, that the thing being extracted was not an obstacle to production but its foundation.

Franklin identified reciprocity as one of seven criteria for evaluating any technological practice. Does it promote justice? Does it restore reciprocity? Does it confer divisible or indivisible benefits? Does it favor people over machines? Does it minimize disaster rather than maximize gain? Does it favor conservation over waste? Does it favor the reversible over the irreversible? These questions were formulated decades before anyone had heard of a large language model. They apply to AI with a precision that borders on the prophetic — not because Franklin foresaw AI, but because she understood the structural dynamics that operate in every powerful technology regardless of its specific form.

The reciprocity question is the most revealing when applied to the practice of AI-augmented cognitive work. Consider what the exchange looks like. The user gives attention, cognitive engagement, the specificity of her questions, the quality of her domain knowledge, and — critically — the creative input that distinguishes a productive prompt from a meaningless one. The AI gives capability, speed, output, the capacity to produce at a scale that was previously impossible. Both parties give and receive. The exchange appears reciprocal.

But reciprocity requires that both parties sustain their capacity to continue the exchange. The farmer who takes grain from the soil and returns organic matter is in a reciprocal relationship. The harvest continues because the soil is renewed. The farmer who takes grain without returning nutrients is in an extractive relationship. The harvest continues for a time — the soil's accumulated fertility provides a buffer — but each cycle depletes the foundation on which future harvests depend.

The user of AI tools gives something she cannot get back from the interaction: the developmental experience that comes from struggling with implementation. The depth built through difficulty. The understanding earned through friction. The judgment deposited, layer by layer, through years of encountering problems that resist easy solution. The AI gives something it can provide indefinitely: output, at whatever volume and speed the user requests. The exchange is asymmetric in a way that the surface reciprocity conceals. One party gives a finite, irreplaceable resource. The other gives an infinite, renewable one.

The asymmetry is not absolute. There are genuinely reciprocal modes of AI use — exchanges that build the user's capacity even as they augment it. A practitioner who uses AI to explore unfamiliar domains, who interrogates the tool's suggestions against her own judgment, who treats the interaction as a Socratic dialogue rather than a production pipeline, may develop understanding through the exchange. These moments exist, and they are valuable. But they occur within a larger practice that is structured for extraction, not reciprocity. The institutional incentives — the metrics that reward output, the evaluation criteria that measure throughput, the competitive pressures that punish deliberation — push systematically toward the extractive mode and away from the reciprocal one.

The design of the tools themselves reinforces the extractive pattern. AI systems are optimized for user engagement and output quality, not for user development. No mainstream AI tool currently pauses to ask the user whether she understands the output she has accepted. No tool tracks whether the user's independent capability is growing or declining over time. No tool distinguishes between a user who accepts output after rigorous evaluation and a user who accepts output without examination — both interactions look identical from the system's perspective, and both contribute equally to the engagement metrics that the tool's designers optimize for.

Franklin would identify this as a design choice, not a technical inevitability. A reciprocal AI tool would be designed differently. It would expose its reasoning at critical junctures — not in the simplified, post-hoc way that current explanation features operate, but in a way that genuinely invited the user to evaluate the logic against her own understanding. It would occasionally withhold its output, presenting instead a scaffold that required the user to complete the final step independently, building the muscle of judgment that the convenience of full automation allows to atrophy. It would measure the user's growing independence as a success metric alongside the user's growing productivity.
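
What such a design might look like can be sketched in a few lines. Everything in the sketch is hypothetical: no mainstream tool exposes an interface like this, and every name and parameter below is invented for illustration. The point is only that the features this criterion calls for are ordinary engineering decisions, not technical impossibilities.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ReciprocalAssistant:
    """Hypothetical sketch of a reciprocity-oriented tool: it sometimes
    withholds the finished answer, exposes its reasoning at the point of
    decision, and tracks the user's independence as a success metric."""
    scaffold_rate: float = 0.2          # fraction of requests answered with a scaffold
    independence_log: list = field(default_factory=list)

    def respond(self, request: str) -> dict:
        reasoning = self._reasoning_for(request)  # exposed for evaluation, not post hoc
        if random.random() < self.scaffold_rate:
            # Withhold the final step: the user completes the work herself,
            # exercising the judgment that full automation lets atrophy.
            return {"kind": "scaffold", "reasoning": reasoning,
                    "outline": self._outline_for(request)}
        return {"kind": "full_output", "reasoning": reasoning,
                "result": self._solve(request)}

    def record_outcome(self, completed_without_help: bool) -> None:
        # Growth-model metric, logged alongside the usual throughput numbers.
        self.independence_log.append(completed_without_help)

    def independence_trend(self) -> float:
        recent = self.independence_log[-50:]
        return sum(recent) / len(recent) if recent else 0.0

    # Stand-ins for the actual model call.
    def _reasoning_for(self, request: str) -> str:
        return f"why this approach was chosen for: {request}"

    def _outline_for(self, request: str) -> str:
        return f"steps left for the user to complete: {request}"

    def _solve(self, request: str) -> str:
        return f"finished artifact for: {request}"
```

The scaffold rate is the friction made explicit: a deliberate, tunable cost to immediate output, paid to keep the user's evaluative capacity in use.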

These features would reduce efficiency. They would slow the production of output. They would introduce friction into a process designed to be frictionless. And that is precisely the point. Reciprocity requires friction. The friction is not an obstacle to the exchange. It is the mechanism through which the exchange sustains both parties. The farmer's effort to return nutrients to the soil is friction. It slows the harvest cycle. It costs time and labor. And it is the practice that ensures there will be a harvest next year.

The analogy to soil depletion is not metaphorical. It describes a structural isomorphism between ecological extraction and cognitive extraction. In both cases, the short-term abundance conceals the long-term depletion. The crops keep coming while the soil thins. The output keeps flowing while the understanding narrows. The relationship between abundance and depletion is invisible until the foundation gives out — until the engineer cannot debug without the tool, until the lawyer cannot construct an argument from first principles, until the writer cannot find her own voice beneath the tool's polished approximation of one.

Franklin's reciprocity criterion also illuminates a dimension of AI deployment that is often discussed in terms of access and democratization but rarely in terms of sustainability. The expansion of who gets to build — the developer in Lagos, the non-technical founder with an idea, the designer who implements complete products — is a genuine and morally significant development. It represents a redistribution of productive capability that Franklin's framework values highly.

But the sustainability of this redistribution depends on reciprocity. If the new builders develop genuine understanding through their use of the tools — if they build judgment alongside output, if the practice deposits developmental experience alongside productive results — then the democratization is sustainable. The new builders become genuine practitioners. Their capacity grows. The redistribution deepens.

If, however, the new builders are engaged in extractive practice — producing output without developing understanding, accepting results without building the capacity to evaluate them independently — then the democratization is fragile. It depends entirely on the continued availability and reliability of the tool. Remove the tool, and the capability vanishes. The builder who cannot build without Claude is not a builder. She is an operator of a tool she does not understand, and her productive capability exists only as a function of the tool's availability.

This is not a judgment on the intelligence or ambition of any individual builder. It is a structural observation about the practice within which the building occurs. The same individual, engaged in a reciprocal practice, would develop genuine understanding. Engaged in an extractive practice, she develops dependency. The difference is not in the person. It is in the practice.

Meredith Whittaker, the AI ethics researcher who co-founded the AI Now Institute and credits Franklin as a foundational influence, contacted Franklin in December 2015 to ask what could be done about surveillance technologies that were growing more powerful and more pervasive. Franklin's answer was characteristically precise: "There is no technology for justice. There is only justice." The principle applies to reciprocity with equal force. There is no algorithm for reciprocity. Reciprocity is a practice — a commitment to designing interactions that sustain both parties, embedded in institutions that reward sustainability alongside productivity, protected by governance structures that recognize extraction as a cost even when it appears as a gain.

Franklin's seven-point technology checklist — justice, reciprocity, divisible benefits, people over machines, minimizing disaster, conservation, reversibility — functions as a diagnostic instrument for exactly this kind of analysis. Applied to the current practice of AI-augmented cognitive work, the checklist produces a sobering assessment. The benefits are real but concentrated among those who already possess the judgment to use the tools well — those whose cognitive capital was accumulated before the tools arrived. The practice does not restore reciprocity; it extracts developmental experience in exchange for productive output. The strategy maximizes gain rather than minimizing disaster — it optimizes for the best case (the skilled practitioner who uses the tool wisely) rather than protecting against the worst case (the novice who develops dependency rather than capability). The practice favors the irreversible over the reversible — once judgment has atrophied, rebuilding it requires the same years of struggle that the tool was designed to eliminate.

The checklist is not a verdict. It is a diagnostic. It identifies the specific dimensions along which the current practice falls short of sustainability, and it points toward the specific interventions that would move the practice toward reciprocity. Design tools that develop the user alongside augmenting her. Build institutional structures that protect developmental time alongside productive time. Create evaluation criteria that measure understanding alongside output. Govern the deployment of AI tools with attention to the long-term sustainability of the cognitive ecosystem, not merely the short-term abundance of its output.

These interventions cost something. They cost efficiency, speed, the clean upward curve of the productivity metric. They cost the quarterly gain that the production model prizes above all else. They are, in the language of the market, suboptimal.

But the farmer who returns nutrients to the soil is also, in the language of the immediate harvest, suboptimal. She could extract more this season by skipping the investment. The investment pays off not this quarter but next year, and the year after, and the decade after that. The reciprocity is the foundation on which future productivity depends.

The practice of AI-augmented cognitive work is currently extractive. It need not remain so. The extraction is a function of design choices, institutional incentives, and governance structures — all of which are human decisions, subject to democratic deliberation, amenable to change. The question is whether the change will come before the cognitive soil gives out, or after.

---

Chapter 5: The Prescriptive Turn

There is a moment in the adoption of any powerful tool when the relationship between the user and the tool reverses. The user began by directing the tool. The tool ends by directing the user. The reversal does not announce itself. It arrives as a series of small accommodations, each one reasonable, each one efficient, each one making the next accommodation easier and the previous independence harder to recover.

Ursula Franklin described this dynamic as the central social consequence of prescriptive technology. The assembly line did not force the worker to abandon independent judgment in a single dramatic moment. It made independent judgment unnecessary for the task at hand, then unnecessary for the next task, then progressively more costly to exercise as the surrounding practice reorganized itself around the assumption that judgment had been delegated to the process designers. The worker's compliance was not coerced. It was produced — by convenience, by institutional reward, by the slow atrophy of capacities that were no longer exercised.

The prescriptive turn in AI-augmented cognitive work follows the same structural logic, operating in a domain where the consequences are more far-reaching. When a factory prescribes physical motions, the worker's mind remains free. She can think her own thoughts while her hands follow the sequence. When a technology prescribes cognitive output — when it provides the analysis, the architecture, the argument, the code — the worker's mind is the territory being occupied. The prescription has moved from the hands to the head, and the occupation is so comfortable that most workers do not recognize it as occupation at all.

The mechanism is specific and observable. It begins with a suggestion. The AI proposes an approach — an implementation strategy, a code architecture, a structural framework for an argument. The worker, busy and under deadline, evaluates the suggestion against her goals. It meets them. She accepts. The acceptance is rational. The suggestion was sound. Time was saved. Nothing was lost.

But a precedent has been set, and precedents compound. The next suggestion arrives in a context already shaped by the first acceptance. The worker's expectations have been calibrated. Her sense of what constitutes a reasonable starting point has been adjusted. She waits for the suggestion. She plans around it. She allocates her cognitive resources on the assumption that the tool will provide the approach and she will evaluate the result.

The shift is gradual, but its cumulative effect is structural. The worker is no longer the person who determines the approach and uses the tool to execute it. She is the person who evaluates the approach the tool provides and decides whether to accept it. The direction of the creative process has reversed. The worker was the driver; now she is the quality inspector on a line she did not design.

Franklin would have identified the critical feature of this reversal: it is invisible from the outside. A worker directing a tool and a worker being directed by a tool produce the same observable behavior — a person at a screen, typing, reviewing, accepting or modifying output. The difference is entirely internal: the locus of initiation, the source of the creative impulse, the origin of the structural decisions that shape the final product. No manager can see this difference on a dashboard. No performance metric captures it. No quarterly review measures whether the worker's judgment initiated the work or merely ratified it.

The invisibility is what makes the prescriptive turn so difficult to resist. Physical prescription was visible — you could see the assembly line, hear the factory whistle, observe the worker performing repetitive motions. Procedural prescription was visible — you could read the workflow document, follow the approval chain, watch the worker filling out the mandated forms. Cognitive prescription is invisible because cognition itself is invisible. The worker who accepts AI-generated output without fully understanding it looks identical to the worker who accepts it after thorough independent evaluation. Both click the same button. Both produce the same deliverable. Both appear, to every external observer, to be exercising judgment.

The consequences of the prescriptive turn manifest not in any single decision but in the cumulative effect on the worker's cognitive capacity. The judgment that is not exercised atrophies. The capacity to imagine alternatives to what the tool provides diminishes — not because the alternatives have ceased to be valid, but because the cognitive pathway that would generate them has been abandoned in favor of the faster, smoother, prescribed pathway. The worker's range of creative possibility narrows to the range that the tool's outputs define.

This narrowing is self-reinforcing. As the worker's independent generative capacity diminishes, her dependence on the tool's suggestions increases. As her dependence increases, the cost of resisting any particular suggestion rises — because resistance now requires the exercise of a capacity that has weakened through disuse. The prescriptive turn, once underway, accelerates itself. Each acceptance makes the next acceptance easier and the next act of independent judgment harder.

Franklin's analysis of prescriptive technology emphasized that the compliance produced by such technologies extends beyond the workplace. A person trained in compliance at work does not shed that training at the factory gate. The habits of deference, of accepting prescribed procedures without questioning their premises, of treating the designed process as the natural order of things — these habits infiltrate the worker's engagement with every institution she encounters. The citizen who has been trained to comply at work is less likely to question the procedures of government, the claims of authority, the prescriptions of any system that presents itself with confidence.

The same extension applies to cognitive prescription. The knowledge worker who has been trained, through years of AI-augmented practice, to accept algorithmically generated output as the default starting point for her thinking does not confine this habit to her professional work. The habit of deferring to a confident, well-formatted, plausible-sounding source extends to her engagement with news, with political claims, with the arguments of anyone who presents information with the surface characteristics of competence. The prescriptive turn in cognitive work is, potentially, a prescriptive turn in citizenship.

There exists a practice that resists the prescriptive turn — the discipline of rejecting output that has outrun the worker's understanding, of insisting on independent verification, of spending the additional hours required to develop a genuinely independent position before accepting the tool's suggestion. This discipline has been described as catching the moment when the prose sounds better than the thinking behind it, when the surface quality of the AI's output exceeds the depth of the idea it presents. The practitioner who maintains this discipline is practicing holistic technology within a prescriptive environment. She is the potter who insists on understanding the clay, even when a machine could shape it faster.

But individual discipline is structurally insufficient as a response to the prescriptive turn. The discipline depends on the practitioner possessing independent knowledge against which to evaluate the tool's output. It depends on her having the time and institutional support to exercise that evaluation. It depends on the organization valuing understanding alongside output. And it depends on the practitioner maintaining the motivation to exercise a capacity that the surrounding practice systematically discourages.

Each of these dependencies is under pressure. The independent knowledge that senior practitioners possess was built through years of pre-AI practice — through the same productive struggle that the tools now eliminate. Junior practitioners who trained with AI assistance from the beginning have less independent knowledge against which to evaluate the tool's suggestions. The time for careful evaluation is compressed by the same productivity expectations that the tools were deployed to meet — the twenty-fold multiplier creates the expectation of twenty-fold output, not twenty-fold deliberation. The institutional support for understanding over output is rare in organizations governed by production-model metrics. And the motivation to maintain a difficult discipline weakens over time when every instance of compliance is rewarded and every instance of independent evaluation is invisible.

The prescriptive turn operates through what Franklin called the social mortgage of prescriptive technology — the long-term costs that the short-term efficiency conceals. The mortgage payments come due not when the technology is working but when it fails. When the AI produces a plausible error that the worker cannot detect because her independent evaluative capacity has atrophied. When the organization faces a novel situation that the tool's training data does not cover and the workers cannot improvise because they have been trained in compliance rather than judgment. When the accumulated cognitive capital of independent understanding has been drawn down through years of delegation, and there is nothing left in the account.

The social mortgage of AI's prescriptive turn is being accumulated now, in every interaction where a worker accepts output without understanding, in every organization that measures throughput without measuring comprehension, in every educational institution that teaches students to prompt effectively without teaching them to evaluate independently. The mortgage is invisible in the current accounting because the current accounting measures only the production model's metrics. The growth model's metrics — understanding, judgment, independent capability — are not on the balance sheet. They will appear there only when the mortgage comes due, and by then the cost will be far higher than it would have been if the accounting had been honest from the beginning.

Franklin argued that the viability of technology, like democracy, depends on the practice of justice and on the enforcement of limits to power. The prescriptive turn in AI-augmented cognitive work is an expansion of power — the power of the tool's designers to shape the cognitive practice of millions of workers, through the default suggestions and structural choices embedded in the tool's operation. This power is not malicious. It is not conspiratorial. It is the normal operation of prescriptive technology within a market economy. But it is power, and Franklin's principle insists that it be subject to limits — limits established through democratic deliberation, institutional governance, and the informed participation of the workers whose cognitive practice the technology is reshaping.

The prescriptive turn is not a technical problem amenable to a technical solution. It is a practice problem amenable to a practice solution: the deliberate construction of workflows, institutions, and evaluation criteria that maintain the conditions for independent judgment within AI-augmented environments. The construction is costly. It is inconvenient. It slows output. And it is the only thing standing between the current practice of AI-augmented work and the culture of cognitive compliance that Franklin warned about thirty-five years before the tools that would produce it had been invented.

---

Chapter 6: Structural Silence, Compliance, and the Narrowing of Dissent

Every powerful technology creates silence. Not the absence of sound — structural silence, the systematic rendering of certain voices, perspectives, and forms of knowledge inaudible within the practice the technology organizes. The silencing is not censorship. It is not deliberate suppression. It is the structural consequence of the technology's dominant values: what the technology rewards becomes louder, and what it does not reward becomes quieter, until the unrewarded voices are drowned out not by opposition but by irrelevance.

Ursula Franklin identified structural silence as one of the defining social consequences of prescriptive technology. The printing press silenced oral tradition — not by forbidding it, but by making it unnecessary for the transmission of complex knowledge. The factory silenced craft knowledge — not by prohibiting craft, but by reorganizing production so that holistic understanding of the work process was no longer required of any individual worker. In each case, the silenced knowledge was real, valuable, and irreplaceable. And in each case, the silencing was invisible to the people celebrating the new technology's achievements, because the achievements were measured in the technology's own terms — volume of books printed, volume of goods produced — and the silenced knowledge had no metric.

AI creates at least three distinct forms of structural silence in the practice of cognitive work. Each operates through a different mechanism. Together, they produce a narrowing of the cognitive environment that Franklin's framework identifies as the signature social cost of prescriptive technology at scale.

The first is the silencing of slowness. AI-augmented practice rewards speed. The tool provides immediate responses. The workflow is organized around rapid iteration. The institutional metrics measure throughput. Within this practice, the cognitive activities that require slowness — sustained contemplation, the patient accumulation of understanding through re-reading and re-thinking, the deliberate withholding of judgment until a problem has been examined from multiple angles — lose their place. They are not prohibited. They are crowded out by a practice that does not create space for them and does not reward their exercise.

The experienced practitioner who needs time to think before responding, who insists on sitting with a problem before accepting a solution, who knows from decades of experience that the first plausible answer is often not the best one — this practitioner's way of working is structurally silenced by a practice optimized for immediacy. Her knowledge is not less valid. Her approach is not less effective in the long run. But the practice does not have a long run. It has sprints, iterations, deployments, quarterly cycles. And within these temporal structures, slowness is not a virtue. It is a cost.

The second form of structural silence is the silencing of dissent through plausibility. This is the most novel form of silencing that AI produces, and it has no direct precedent in previous technologies. AI generates output that is formatted to look correct — grammatically polished, structurally coherent, presented with the surface characteristics of competent professional work. This plausibility creates a burden of proof that falls on the dissenter rather than the output. The worker who questions an AI-generated analysis must demonstrate not merely that the analysis is wrong but why the analysis, which meets every visible criterion of quality, should be questioned at all.

Previous technologies produced outputs that could be evaluated against independent standards. The assembly line produced physical products that could be measured, weighed, tested. The printing press produced texts that could be read and evaluated by anyone with the relevant knowledge. AI produces cognitive outputs — analyses, arguments, code, designs — whose quality cannot be fully evaluated without the same depth of domain knowledge that the tool is supposed to augment or replace. The evaluation of the output requires precisely the expertise that the tool's adoption has made less necessary to develop.

This creates a feedback loop of increasing compliance. The tool produces plausible output. The worker lacks the independent expertise to evaluate it critically. She accepts it. Her acceptance further reduces the opportunities for developing independent expertise. The next output is equally plausible. Her capacity to evaluate it has declined slightly. She accepts again. The cycle deepens, and at each turn, the space for legitimate dissent narrows — not because dissent is punished but because the dissenter's basis for questioning has eroded.

One documented case illustrates this mechanism with precision. An AI tool produced a passage connecting two intellectual traditions in a way that was rhetorically elegant and structurally convincing. The connection was wrong — wrong in a way that would be obvious to anyone who had read the original sources carefully. But the wrongness was concealed by the quality of the prose. The surface was so smooth that the fracture beneath it was invisible without specific, independently acquired knowledge. The practitioner caught the error because he happened to possess that knowledge. A practitioner without it — and the proportion of practitioners without it grows as the tool reduces the incentive to acquire it — would have accepted the error as insight.

This is plausibility silencing: the mechanism by which AI output, through its surface quality, raises the cost of dissent above the threshold that most workers are willing to pay. The dissenter must not only suspect an error but prove it, and proving it requires exactly the kind of deep, independently developed understanding that the practice of AI-augmented work systematically underproduces.

The third form of structural silence is the silencing of process in favor of product. When the technology rewards speed and output, the process by which the output is produced becomes invisible. The drafts, the revisions, the failed attempts, the dead ends, the moments of uncertainty that are integral to genuine creative and intellectual work — all of these disappear behind the speed and completeness of the AI's output. The worker who shows her work, who shares her drafts, who makes her process visible, is operating within a practice that the technology's dominant values do not support.

This silencing of process has specific consequences for learning and mentorship. In a practice where the process is visible, the novice learns by observation — watching how the experienced practitioner approaches a problem, where she hesitates, what she tries, how she recovers from failure. The visible process is itself a form of knowledge transmission. In a practice where the process is invisible — where the AI produces the output and the human evaluates the result — there is nothing to observe. The novice sees the input and the output. She does not see the cognitive work that connects them, because the cognitive work has been performed by the machine.

Franklin argued that prescriptive technologies produce compliance as their defining social product — not as a side effect but as a structural output as real as any physical product. The compliance extends beyond the immediate task to shape the worker's general orientation toward authority and procedure. A person trained in compliance through years of prescriptive practice develops what Franklin called a disposition toward orthodoxy — a tendency to accept established procedures, to defer to confident sources, to treat the designed process as natural rather than contingent.

AI-augmented cognitive work produces cognitive compliance through the same structural mechanism. The worker who accepts AI output as a default starting point is being trained in cognitive compliance — the habit of treating algorithmically generated analysis as the natural baseline for her own thinking. The training is not coercive. It is not even intentional. It is the structural consequence of a practice that rewards acceptance and penalizes questioning, that makes compliance effortless and dissent effortful, that produces plausible output at a pace that overwhelms the capacity for independent evaluation.

The convergence that standardized tools produce compounds the compliance into conformity. When millions of knowledge workers use the same AI systems, trained on the same data, optimized for the same patterns, their outputs converge toward a mean defined by the model's training distribution rather than by any individual's distinctive judgment. Individual variation persists — different workers bring different questions, different contexts, different evaluative standards. But the AI's contribution pushes every output toward a common center. The variation narrows. The mean becomes more dominant. The cognitive ecosystem loses biodiversity.

In creative and intellectual work, this loss of biodiversity has consequences that extend beyond any individual output to the culture's capacity for genuine novelty. Innovation depends on variation — on the outlier, the unexpected approach, the idea that does not fit the established pattern. A practice that narrows the range of cognitive variation is a practice that reduces the ecosystem's capacity for the surprise on which cultural and intellectual renewal depends. The narrowing is not dramatic. It is statistical — a gradual shift in the distribution of outputs toward the center, a quiet thinning of the tails where the most original work has always lived.
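The statistical character of this narrowing can be illustrated in a minimal sketch. The model is mine, not drawn from any study: each worker's output is treated as a blend of an individual judgment and a single shared model output, with the blend weight standing in for reliance on the tool.

```python
# An illustrative sketch of the "thinning tails" claim: when every worker's
# output is blended with the same shared model output, the population spread
# of the results shrinks. The blend weight is an assumed parameter.
import random
import statistics

rng = random.Random(42)

workers = [rng.gauss(0.0, 1.0) for _ in range(10_000)]  # individual judgments
model_output = 0.0                                       # the shared center
blend = 0.6                                              # assumed reliance on the tool

augmented = [blend * model_output + (1 - blend) * w for w in workers]

print(f"unassisted spread: {statistics.stdev(workers):.3f}")    # ~1.0
print(f"augmented spread:  {statistics.stdev(augmented):.3f}")  # ~0.4
```

Raise the assumed blend weight and the spread contracts further toward the model's center; the tails are the first to go.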

Franklin's response to structural silence was not nostalgia for the silenced voices but insistence on the democratic structures that would protect them. The oral traditions that the printing press silenced were not recoverable, but the conditions for new forms of diverse knowledge production were — through libraries, through public education, through institutional support for the slow, uncertain, exploratory forms of inquiry that the production model dismisses as waste. The craft knowledge that the factory silenced was not fully recoverable, but the conditions for new forms of skilled, holistic practice were — through apprenticeships, through professional standards, through the institutional structures that maintained space for mastery alongside mass production.

The structural silence that AI produces in cognitive work requires analogous structures: institutional support for slowness, for dissent, for the visible process of messy thinking, for the cognitive biodiversity that standardized tools erode. These structures are costly. They operate against the production model's logic. They slow output, complicate workflows, reduce the clean metrics of throughput and efficiency. They are also, by Franklin's analysis, the minimum requirement for a cognitive practice that sustains the conditions for its own renewal rather than consuming them.

The silence grows quietly. The speed increases. The outputs converge. The space for questioning narrows. And in the narrowing, the practice loses something it cannot recover through any increase in output: the capacity to produce the thought that no one expected, that no model predicted, that no optimization would have selected for.

---

Chapter 7: The Real World of Tuesday Afternoon

The real world of technology is not the world of the product demonstration, the conference keynote, or the quarterly earnings presentation. It is the world of Tuesday afternoon — the world in which the tool is used by an actual person, in an actual organization, under actual constraints of time and budget and attention, with the actual fears and ambitions and limitations that characterize human work. The demonstration world is where the benefits are displayed. The real world is where the costs are paid. Ursula Franklin insisted throughout her career that any honest analysis of technology must begin in the real world, because the real world is where the consequences of technology are experienced by the people who have the least power to refuse them.

The real world of AI-augmented cognitive work in 2026 looks like this.

A software engineer — call her Priya — sits at her desk at 2:14 on a Tuesday afternoon. She has been working with AI coding tools for eight months. She is good at her job. She was good at it before the tools arrived, and she is measurably more productive now. Her throughput has roughly tripled. She ships features in days that used to take weeks. Her manager has noticed. Her last performance review was the best she has received.

At 2:14, the AI generates a database query optimization that she did not request. She had described a performance problem; the AI responded with a complete restructuring of the query logic. The restructuring is elegant. It would probably work. She is ninety percent sure it would work. She is not one hundred percent sure, because the optimization involves an interaction between the caching layer and the database index that she has not fully traced through. Tracing it would take forty-five minutes. Accepting the optimization and running the test suite would take four minutes.

She accepts. The tests pass. She moves on.

This is the prescriptive turn in its most ordinary form. Not a dramatic capitulation, not a visible surrender of professional judgment, but a small, rational, entirely defensible decision to accept a plausible output rather than invest the time to understand it fully. Priya is not being careless. She is being efficient. She is operating within the logic of a practice that rewards throughput and does not measure understanding. She is doing exactly what the incentive structure asks her to do.

But something is accumulating. Each acceptance that bypasses full understanding leaves a small gap in her mental model of the system she is building. The gaps are individually insignificant. Collectively, over months, they produce a condition that no performance metric captures: Priya's relationship to her own codebase is becoming less intimate. She knows what the code does. She is less certain about why it does it that way, about what would break if conditions changed, about the architectural assumptions embedded in the optimizations she accepted without tracing.

She would not describe this as a problem. If asked, she would say the tools have made her more effective, which is true. She would say she can build things she could not build before, which is also true. The gap between what she knows and what she has accepted on the tool's authority is invisible to her, because the gap manifests not as error but as reduced confidence — a slight, diffuse uncertainty about her own system that she attributes to the system's growing complexity rather than to her diminishing comprehension of it.

Priya's experience is not unusual. It is the modal experience of AI-augmented knowledge work — not the dramatic transformation described in demonstrations, but the quiet, incremental, practically invisible shift in the relationship between the worker and her understanding of the work. The shift does not appear in any dataset. It does not register on any dashboard. It exists only in the texture of Priya's Tuesday afternoons, in the accumulating micro-decisions to accept rather than understand, to ship rather than comprehend, to move on rather than sit with the difficulty that would build the knowledge she is slowly losing access to.

The real world also includes Priya's junior colleague, hired six months ago, who has never worked without AI tools. He is productive from his first week. He ships features. He meets deadlines. His code works. His manager is pleased. But his relationship to the codebase is qualitatively different from Priya's. Priya at least knows what she does not know — she can feel the gaps in her understanding because she remembers what full understanding felt like. Her junior colleague has no such baseline. His mental model of the system was built entirely through the mediated experience of AI-augmented work. He has never traced a query optimization by hand. He has never spent an afternoon debugging a race condition through sheer persistence, accumulating the kind of embodied understanding that only comes from sustained, direct engagement with resistant material.

He is competent. He is not, in Franklin's terms, a practitioner. He is an operator — a person who produces correct output through a process he does not fully control or comprehend. The distinction is invisible in normal operations. It becomes visible only under stress: when the tool produces an error he cannot diagnose because he has never built the diagnostic capacity that manual debugging develops, or when the system behaves in a way that the tool's training data did not anticipate and he has no independent framework for reasoning about the unexpected.

The real world includes the manager who oversees both Priya and her junior colleague. The manager measures output. He measures on-time delivery, defect rates, feature velocity. By these measures, both workers are performing well. The junior colleague is performing slightly better, in fact, because he has no pre-AI habits to unlearn and no residual attachment to manual processes that slow him down. The manager does not measure understanding, because understanding is not on his dashboard. He does not measure the team's capacity for independent problem-solving, because that capacity is not tested until the tool fails. He does not measure the accumulation or depletion of cognitive capital, because no instrument he possesses can detect it.

This is the gap that Franklin identified as the most dangerous feature of prescriptive technology: the gap between what is measured and what matters. The production model measures what it values — output, speed, efficiency. The growth model values what it cannot easily measure — understanding, judgment, the capacity for independent thought. In the gap between the measured and the unmeasured, the prescriptive turn proceeds unchecked, because the instruments that would detect it do not exist within the dominant evaluation framework.

The real world also includes the workers who were not selected for the training program, who did not receive the organizational support, who are navigating the transition without guidance. It includes the mid-career professional who suspects her skills are being replaced but cannot articulate the difference between augmentation and replacement because both produce the same observable behavior — a worker at a screen, producing output with a tool. It includes the freelancer who has adopted AI tools to remain competitive and who now produces three times the volume at one-third the price, and who works longer hours than before because the tool that was supposed to free her time has instead created the expectation of tripled output.

And the real world includes the organizations that adopted AI tools not because they understood the implications but because their competitors did. The adoption was not a deliberate choice about practice. It was a competitive response — the same dynamic that drove the adoption of the assembly line, the corporate workflow, and every other prescriptive technology before it. The market punishes the organization that produces less, regardless of the quality of the worker's experience or the sustainability of the practice. The logic of competition drove the adoption. The consequences of the adoption are being experienced by workers whose input into the decision was limited to being told about the new tools available to them.

Franklin insisted that the real world of technology is political. Not partisan — political in the deeper sense: concerned with the distribution of power, the governance of shared resources, the question of who decides and who is decided for. The deployment of AI tools in organizations is a political act. It redistributes power — from workers to the tool's designers, from individual judgment to algorithmic process, from the practitioner's holistic understanding to the system's prescribed output. The redistribution is not announced as political. It is announced as efficiency improvement, productivity enhancement, competitive necessity. But the redistribution of power is real regardless of the language used to describe it.

The worker at 2:14 on a Tuesday afternoon is living inside a political arrangement she did not choose and cannot easily exit. The arrangement was made by the companies that built the tools, the organization that deployed them, the competitive dynamics that made deployment feel mandatory, and the evaluation framework that measures output without measuring understanding. Her experience within this arrangement — the gradual shift from practitioner to operator, the slow depletion of independent judgment, the invisible accumulation of gaps in understanding — is the real-world consequence of decisions made by people who will never see her dashboard, never read her code, never know whether she traced the query optimization or accepted it on trust.

Franklin's insistence on the real world was not sentimentality. It was methodological rigor. The real world is where the theory meets the practice, where the demonstration's promise encounters the institution's constraints, where the artifact's capability collides with the human's limitations. Any analysis of AI technology that remains in the demonstration world — that speaks only of capability expansion, productivity multipliers, and the collapse of barriers between imagination and artifact — is an analysis that has not yet begun the work that Franklin considered essential: the examination of what the technology does to the people who use it, in the conditions under which they actually use it, measured not by the metrics that the technology's advocates prefer but by the criteria that the people inside the practice would choose if they were given the opportunity to choose.

They are rarely given the opportunity. That, too, is a feature of the real world.

---

Chapter 8: Earthkeeping in the Cognitive Domain

Ursula Franklin was, among many other things, a gardener — not as a metaphor but as a practice. She tended soil. She understood, with the specificity of a scientist who had spent decades studying the structural properties of materials, that the properties of any growing system depend on the conditions maintained for growth. A seed is not sufficient. The seed requires soil, water, light, time, and the absence of conditions that would prevent its development. The gardener's work is not to make the plant grow — the plant grows on its own, given conditions — but to maintain the conditions that allow growth to occur.

Franklin extended this understanding into an ethic she called earthkeeping: a commitment to maintaining and restoring the conditions that support life, rather than extracting value from those conditions until they are depleted. Earthkeeping is not environmentalism in the narrow sense, though it encompasses environmental concern. It is a broader orientation toward stewardship — the recognition that human beings inhabit systems they did not create, that depend on conditions they did not establish, and that require maintenance they are obligated to provide.

The concept of earthkeeping finds its most urgent contemporary application in the cognitive domain. The attention of a developing mind — its capacity for sustained focus, for curiosity, for the kind of deep engagement that produces understanding rather than mere familiarity — is a resource as finite and as depletable as topsoil. It can be enriched through practices that build depth, tolerance for difficulty, and the capacity for sustained engagement with resistant material. Or it can be depleted through practices that fragment focus, reward speed over comprehension, and eliminate the specific cognitive states in which understanding develops.

The most important of these cognitive states is one that contemporary culture has been trained to regard as a problem to be solved rather than a resource to be protected. The state is boredom.

Boredom, in the neuroscientific literature, is not the absence of stimulation. It is a specific condition of the brain in which the default mode network is active — in which the mind wanders, makes unexpected connections, processes unresolved experiences, and generates the kind of associative, undirected ideation that is the raw material of creativity, insight, and the integration of disparate knowledge into coherent understanding. Boredom is the fallow season of the cognitive cycle. It is the period in which the mind is not producing but is consolidating, reorganizing, preparing the ground for the next cycle of productive engagement.

AI eliminates boredom. Not deliberately — no AI tool was designed with the explicit goal of eliminating boredom — but structurally. The tool is always available. It is always responsive. It fills any gap in the cognitive landscape with output, suggestion, engagement. The moments of unstructured mental time that occurred naturally throughout the workday — the minutes between tasks, the pause before the next meeting, the walk between the desk and the coffee machine during which the mind wandered and occasionally produced the insight that no directed effort could have generated — these moments are now available for productive use. The tool is there. The gap is there. The internalized imperative to produce converts the gap into another cycle of output.

The Berkeley researchers documented this pattern with empirical specificity. They called it task seepage — the tendency for AI-accelerated work to colonize previously protected cognitive spaces. Workers prompted during lunch breaks. They squeezed requests into the minutes between meetings. They filled what the researchers identified as informal recovery periods with additional AI-mediated work. The colonization was not mandated. No manager instructed workers to prompt during lunch. The colonization was produced by the convergence of the tool's availability, the worker's internalized productivity imperative, and the absence of any institutional structure that recognized those minutes as serving a cognitive function.

Franklin's earthkeeping ethic provides the framework for understanding what was lost. Those minutes were not idle time. They were cognitive soil — the unstructured periods during which the mind performs the maintenance operations that sustain its capacity for directed work. Eliminate the fallow period, and the directed work continues for a time, fueled by accumulated cognitive reserves. But the reserves are not being replenished. The soil is being planted in every season. The yield increases in the short term. The depletion is invisible until the capacity for sustained, deep, directed engagement begins to erode — until the worker notices that she cannot concentrate as she once could, that her insights are less frequent, that her creative output has become more competent and less surprising.

The earthkeeping challenge extends beyond the individual worker to the cognitive environment of the developing child. Franklin argued that technology shapes the conditions within which children learn to think — that the cognitive environment is not merely a context for development but a determinant of it. A child who grows up in an environment saturated with immediate, confident, polished answers to every question develops a different cognitive architecture than a child who grows up in an environment where questions linger, where uncertainty is tolerated, where the gap between the question and the answer is a space for the child's own thinking to develop.

The gap matters. It is the space in which the child learns to generate her own hypotheses, to tolerate not knowing, to develop the intellectual courage that comes from sitting with a difficult question long enough to discover that she can make progress on it independently. AI fills this gap with an answer. The answer may be correct. The answer may be helpful. But the answer has occupied the space where the child's independent cognitive development would have occurred, and the occupation has stolen something that no correct answer can replace: the experience of productive struggle, of not knowing and then figuring out, of developing confidence in one's own capacity to think.

Franklin's seven-point checklist — the diagnostic instrument she developed for evaluating any technological practice — illuminates specific dimensions of the earthkeeping challenge. Does the practice favor conservation over waste? The current practice of AI-augmented cognitive work consumes cognitive resources — attention, boredom, the capacity for sustained independent thought — without conserving them. It treats these resources as inexhaustible rather than finite, as obstacles to be eliminated rather than soil to be maintained. Does the practice favor the reversible over the irreversible? The atrophy of cognitive capacities through disuse is not easily reversed. The judgment that took years of struggle to build cannot be restored by a training module. The capacity for sustained attention that was eroded by years of fragmented, AI-mediated interaction does not regenerate spontaneously when the tool is removed.

Earthkeeping in the cognitive domain requires treating attention as a commons — a shared resource that benefits everyone when it is maintained and damages everyone when it is depleted. The commons framework is precise. The tragedy of the commons, as the ecologist Garrett Hardin described it, occurs when individual actors, each pursuing rational self-interest, collectively deplete a shared resource that would be sustained if they cooperated. Each herder adds one more animal to the common pasture. Each addition is individually rational. The aggregate effect is the destruction of the pasture.

The cognitive commons is being depleted by the same dynamic. Each technology company that designs for maximum engagement is adding one more animal to the pasture. Each organization that adopts AI tools without protecting cognitive rest is adding another. Each individual worker who fills her break with another prompt is adding another still. Each decision is individually rational — the engagement metric goes up, the productivity metric goes up, the output metric goes up. The aggregate effect is the depletion of the cognitive soil on which all of these metrics ultimately depend.
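Hardin's arithmetic is simple enough to state in a few lines. The sketch below is illustrative only, with invented numbers for the stock, the regeneration rate, and the extraction; it shows how a resource that renews in proportion to what remains collapses once aggregate extraction outruns renewal.

```python
# A minimal commons-depletion sketch in the spirit of Hardin's parable.
# The stock renews in proportion to what remains; extraction does not.
# All numbers are invented for illustration.

def commons(actors: int, extraction_per_actor: float,
            stock: float = 100.0, regen_rate: float = 0.05,
            seasons: int = 50) -> float:
    for _ in range(seasons):
        stock += regen_rate * stock             # renewal: proportional to the remaining stock
        stock -= actors * extraction_per_actor  # extraction: indifferent to the remaining stock
        stock = max(stock, 0.0)
    return stock

print(f"few actors:  {commons(actors=3, extraction_per_actor=1.0):.1f}")   # the pasture persists
print(f"many actors: {commons(actors=10, extraction_per_actor=1.0):.1f}")  # the pasture collapses
```

Below the threshold, the pasture sustains every herder indefinitely. Above it, the same individually modest extraction drives the stock to nothing within a few seasons.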

The governance of a commons requires collective action — structures that limit individual extraction for the sake of collective sustainability. In the cognitive domain, this means institutional structures that protect the conditions for cognitive development: mandatory offline periods, structured time for unaugmented work, evaluation criteria that measure the depth of understanding alongside the volume of output, and educational practices that cultivate the tolerance for difficulty and uncertainty that AI-saturated environments systematically erode.

It also means technology design that incorporates the earthkeeping ethic — tools that recognize the cognitive commons as a resource to be maintained rather than a territory to be colonized. A tool designed with earthkeeping in mind would not fill every gap with output. It would recognize that some gaps are productive — that the pause between tasks, the moment of boredom, the space where the mind wanders, serves a function that the tool's intervention destroys. Such a tool would measure not only the user's productivity but the user's cognitive sustainability — tracking patterns of engagement that suggest depletion and flagging them not as failures of productivity but as signals that the soil needs rest.
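One mechanical fragment of such a tool can be sketched. The fragment below is a hypothetical illustration rather than a description of any existing product; the threshold, the function names, and the framing are all assumptions.

```python
# A hypothetical fragment of an "earthkeeping-aware" interface. Nothing here
# is a feature of any existing product: it watches the gaps between prompts
# and flags sessions whose longest fallow period falls below an assumed floor.
from datetime import datetime, timedelta

FALLOW_FLOOR = timedelta(minutes=10)  # assumed threshold for a restorative gap

def longest_gap(prompt_times: list[datetime]) -> timedelta:
    gaps = (later - earlier for earlier, later in zip(prompt_times, prompt_times[1:]))
    return max(gaps, default=timedelta(0))

def needs_rest(prompt_times: list[datetime]) -> bool:
    """Flag a session that never pauses long enough for the mind to go fallow."""
    return longest_gap(prompt_times) < FALLOW_FLOOR

# A session of twenty prompts, three minutes apart, with no fallow gap at all:
session = [datetime(2026, 3, 3, 14, 0) + timedelta(minutes=3 * i) for i in range(20)]
print(needs_rest(session))  # True: the tool filled every gap
```

A real tool would need far more than this, but the sketch marks the design stance: the gap is measured in order to be protected, not in order to be filled.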

These are design choices. They are possible. They are not being made, because the incentive structure that governs AI tool design does not reward cognitive sustainability. It rewards engagement. And the distance between maximizing engagement and maintaining cognitive sustainability is the distance between extractive farming and earthkeeping — between a practice that depletes the soil for short-term yield and a practice that maintains the soil for long-term flourishing.

Franklin understood that earthkeeping is not a sentimental preference. It is a structural requirement for the sustainability of any system that depends on renewable resources. The cognitive resources on which knowledge work depends — attention, judgment, the capacity for independent thought, the creativity that emerges from unstructured mental time — are renewable, but only if the conditions for their renewal are maintained. Eliminate those conditions, and the resources deplete. The depletion is slow, invisible, and irreversible by the time it becomes apparent.

When Franklin told Meredith Whittaker that there is no technology for justice, only justice, she was articulating the earthkeeping principle in its most compressed form. There is no technology for cognitive sustainability. There is only the practice of maintaining the conditions that sustain cognition — the patience, the slowness, the tolerance for boredom, the institutional structures that protect the soil against the pressure of extraction. These conditions are not compatible with the current deployment of AI tools, which is organized around the maximization of output without regard for the sustainability of the cognitive ecosystem that produces the output.

The earthkeeper does not refuse to plant. She refuses to plant without regard for the soil. She understands that the harvest depends on the soil's health, and the soil's health depends on practices that the harvest's logic does not demand — fallowing, rotation, the return of organic matter, the patient maintenance of conditions that the crop itself does not create. The cognitive earthkeeper does not refuse to use AI tools. She refuses to use them without regard for the cognitive soil. She insists on the practices that sustain the capacity for thought — the pauses, the boredom, the unmediated engagement with difficulty — even when these practices slow the output and complicate the metric.

The soil does not protest its own depletion. It simply produces less, and less, and then nothing. The cognitive commons does not protest its depletion either. It simply delivers workers who are more productive and less capable, more efficient and less resilient, faster and shallower, until the practice that depleted them encounters a challenge that requires depth, and finds that the depth is no longer there.

---

Chapter 9: What the Real World of AI Technology Requires

Ursula Franklin spent the final decades of her career arguing that the governance of technology is not a technical problem. It is a democratic problem — a question about who decides how the tools that shape daily life are designed, deployed, and maintained. The argument was not abstract. It was grounded in her observation that every powerful technology in history required institutional structures to redirect its capabilities toward broadly distributed benefit, and that those structures were never produced by the technology itself. They were produced by citizens who understood the technology well enough to participate in its governance and who insisted, against the preferences of the technology's builders, that the inhabitants of the house should have a voice in its design.

The power loom required labor laws. The assembly line required the eight-hour day and workplace safety regulations. Electricity required building codes. The automobile required traffic laws, environmental protections, and zoning regulations. In every case, the technology expanded capability, and the expansion required governance — not to prevent the expansion but to ensure that the costs of the transition were not borne entirely by the people with the least power to refuse them. The governance was never adequate at the moment the technology arrived. It was always catching up, responding to harms already inflicted, building structures after the flood. The people in the gap between the technology's arrival and the governance's response paid the price of the delay.

The gap is wider now than at any previous technological transition, because the speed of AI's capability expansion outpaces the speed of institutional response by an order of magnitude that has no historical precedent. The power loom took decades to reshape the textile industry. The assembly line took years to reshape manufacturing. AI is reshaping the practice of cognitive work in months. And the institutional responses — the EU AI Act, the American executive orders, the emerging frameworks in Singapore, Brazil, Japan — address the supply side of the technology: what AI companies may build, what disclosures they must make, what risk assessments they must conduct. The demand side — what citizens, workers, students, and parents need to navigate the transition — remains almost entirely unaddressed.

Franklin's framework specifies what the demand side requires. The requirements are not utopian. They are the same requirements that every previous powerful technology imposed on the societies that adopted it, translated into the specific conditions of AI-augmented cognitive work.

The first requirement is the protection of the growth model alongside the production model in every institution that deploys AI tools. This means building into organizations what the Berkeley researchers called "AI Practice" — structured time for unaugmented work, evaluation criteria that measure understanding alongside output, protected space for the slow, friction-rich engagement through which judgment develops. It means rewarding the worker who questions the AI's suggestion alongside the worker who accepts it efficiently. It means recognizing that the ten minutes of formative struggle buried inside four hours of tedious configuration were not waste but infrastructure — the mechanism through which the organization's cognitive capital was renewed.

The practical form is specific: organizations should designate regular periods during which AI tools are set aside and workers engage directly with the material of their work — debugging by hand, drafting without assistance, reasoning through problems without algorithmic support. These periods are not nostalgia exercises. They are maintenance — the equivalent of the farmer returning nutrients to the soil. The periods should be evaluated not by what they produce but by what they preserve: the independent judgment that makes the AI-augmented work meaningful rather than merely voluminous.

The second requirement is the democratization of control over technological practice, not merely technological capability. The democratization of capability — the developer in Lagos who can build what only teams could build before, the non-technical founder who can prototype over a weekend — is real and morally significant. Franklin's framework values it highly. But capability without control is the defining characteristic of prescriptive technology. The factory worker had the capability to produce goods at unprecedented speed. She did not have control over the practice within which she produced them — the hours, the conditions, the pace, the distribution of the value she created.

The same asymmetry operates in AI-augmented work. The tools provide extraordinary capability. The practice within which the tools are used — the incentive structures, the evaluation criteria, the competitive pressures that determine how the capability is deployed — is designed by the technology companies and the organizations that adopt their products. The workers experience the capability. They do not govern the practice. The democratization of capability without the democratization of practice reproduces the fundamental structure of prescriptive technology: distributed execution under concentrated control.

What would the democratization of practice look like? It would look like workers having institutional voice in decisions about which AI tools are adopted, how they are integrated into workflows, and what metrics are used to evaluate their impact. It would look like professional associations developing standards of practice for AI-augmented work — standards that specify not just what the tools can do but what the humans using them must be able to do independently. It would look like collective structures, updated for the cognitive domain, through which workers negotiate the conditions of AI-augmented practice with the organizations that employ them.

Franklin, asked by Meredith Whittaker what to do about increasingly powerful technologies of surveillance and control, replied: "There is no technology for justice. There is only justice." The reply contains the principle that governs every requirement in this analysis. No AI tool, however sophisticated, will produce just outcomes within an unjust practice. No amount of capability expansion will produce human flourishing within a practice that systematically degrades the conditions for human development. The tools can open doors. The practice determines who walks through them, into what rooms, under what conditions, and with what support.

The third requirement is the governance of the cognitive commons. Franklin's earthkeeping ethic, applied to the cognitive domain, demands that attention, boredom, and the capacity for sustained independent thought be treated as shared resources requiring collective governance. The tragedy of the cognitive commons — each actor rationally maximizing engagement while collectively depleting the cognitive soil — cannot be resolved by individual discipline alone. It requires institutional structures that limit extraction for the sake of sustainability.

For technology companies, this means accepting design constraints that sacrifice engagement for cognitive sustainability — tools that create space for reflection rather than filling every gap with output, interfaces that monitor patterns of use and flag depletion rather than rewarding acceleration, systems that measure the user's growing independence alongside the user's growing productivity. These are engineering problems, and they are solvable. They are not solved because the current incentive structure does not demand their solution. Public governance — democratic regulation of the cognitive commons — could create the demand.

For educational institutions, the governance of the cognitive commons means redesigning pedagogy for an environment in which answers are abundant and the scarce resource is the capacity to evaluate them. The teacher's role returns to its oldest form: not the transmission of information, which the machine now handles with greater efficiency, but the cultivation of judgment — the capacity to ask, to question, to evaluate, to determine whether the plausible is also true. This means teaching students when not to use AI tools as deliberately as teaching them how to use them. It means creating assignments that cannot be completed by prompting — assignments that require the student to demonstrate not what she can produce but what she can understand, not what the tool can generate but what she can evaluate.

For parents — the last earthkeepers of the cognitive environment their children inhabit — the governance of the cognitive commons means creating conditions in which the developing mind encounters productive difficulty rather than perpetual assistance. Spaces for boredom. Time without tools. Conversations that move slowly enough for thought to develop. The modeling of intellectual practices that the AI-saturated environment does not reward: sitting with a question, tolerating not knowing, developing an independent position before consulting any external source.

The fourth requirement follows from Franklin's insistence that the viability of technology depends on the enforcement of limits to power. The companies that build AI tools exercise power over the cognitive practice of hundreds of millions of workers. This power operates through the default settings, the optimization targets, the design choices embedded in the tools — choices that shape how people think, what they produce, and what cognitive capacities they develop or lose. This power is not malicious. It is the normal operation of prescriptive technology within a market economy. But it is power, and unaccountable power is incompatible with democratic governance regardless of the intentions of those who hold it.

The limits that Franklin's framework demands are specific: transparency about how the tools' outputs are generated, accountability for the long-term cognitive consequences of tool design, and democratic participation in the decisions about how these tools are deployed within public institutions — schools, hospitals, government agencies, and the other contexts where the cognitive practice directly affects the public good.

These requirements cost something. They cost efficiency. They cost the clean upward curve of the productivity metric. They cost the speed that the market rewards and that the competitive landscape demands. Franklin would have acknowledged this cost without apology. The eight-hour day cost efficiency. Workplace safety regulations cost output. Environmental protections cost production speed. In every case, the cost was real, and in every case, the cost was justified by the principle that the expansion of technological capability must be governed by the conditions for human flourishing, not the other way around.

The house of AI technology is being built now. Its floor plan is being determined by the decisions made in this period — by technology companies, by organizations, by governments, by educators, by parents, and by citizens who either participate in the design or accept the rooms that are prepared for them. Franklin's contribution to this moment is the insistence that the inhabitants have a right to participate in the design, that the design should serve the development of the people who live inside the house rather than merely the productivity of the system that employs them, and that the governance of the house is a democratic responsibility that cannot be delegated to its builders without consequences that the inhabitants will bear long after the builders have moved on to the next project.

---

Epilogue

The reply I keep coming back to is ten words long. Ursula Franklin said it to Meredith Whittaker in December of 2015, and it has the quality of something that was not composed but excavated — pulled from bedrock. "There is no technology for justice. There is only justice."

I wrote an entire book about AI as an amplifier. I argued that the signal matters more than the tool — that the question "Are you worth amplifying?" is the central question of this technological moment. I still believe that. But Franklin's ten words sit underneath my argument and trouble it in ways I did not anticipate when I first encountered them.

Because an amplifier, by definition, does not evaluate what it amplifies. I said this myself — feed it carelessness, you get carelessness at scale. But I treated this as a feature of the tool, a design property to be managed. Franklin treats it as a feature of the practice, and the distinction changes everything. The tool is what it is. The practice — the institutions, the incentive structures, the cultural norms, the relationships between the worker and the work — determines whether the amplification produces flourishing or depletion. There is no technology for flourishing. There is only flourishing, practiced deliberately, maintained daily, governed collectively.

What unsettles me most is her concept of the growth model and the production model, because I know which one I have been operating within. The twenty-fold productivity multiplier I celebrated in Trivandrum was a production-model measurement. I measured output. I did not measure understanding. I measured what my team could ship. I did not measure what my team was learning — or failing to learn — in the process of shipping it. Franklin does not let me off the hook by calling this an oversight. Her framework identifies it as the structural default of every prescriptive technology: the production model dominates because the production model is what the incentive structure rewards, and the growth model recedes because the growth model's metrics are invisible to the instruments the production model provides.

I described my engineer in Trivandrum who lost ten minutes of formative struggle inside four hours of eliminated tedium. Franklin made me see that those ten minutes were not a side effect. They were the mechanism. The mechanism through which judgment is built, through which the practitioner becomes more than an operator, through which the cognitive capital of an organization is renewed. I saved four hours and consumed the renewal process. The production model called it efficiency. Franklin calls it extraction.

I am not ready to tend a garden in Berlin. I am still in the river, still building. But the way I build has changed since I spent time inside this framework. I have started asking my team not just what they shipped but what they understand about what they shipped. I have started protecting time for work without AI assistance — not because the assistance is harmful but because the struggle it replaces is developmental, and development requires the friction that assistance eliminates. I have started measuring something I never measured before: whether my people are growing more independent or more dependent with each quarter of AI-augmented work. The metric is imperfect. It is also the most important metric I have ever tried to track.

Franklin's framework is not comfortable for a builder. It asks questions that the building process would prefer to defer — questions about who bears the cost of the transition, about whether the practice serves the people inside it or merely the production metrics that justify it, about whether the house being built is one its inhabitants would choose if they were given the choice. These are not questions that the next sprint planning meeting is designed to answer. They are questions that the planning meeting should not be allowed to avoid.

There is no technology for justice. There is no technology for understanding. There is no technology for the slow, patient, friction-rich process through which a human being becomes capable of directing powerful tools wisely rather than being directed by them. There is only the practice — the daily, institutional, collective commitment to maintaining the conditions under which these things can grow.

The soil does not protest its own depletion. It simply produces less, and then nothing. Franklin spent her career warning us about the soil. The least I can do is listen.

Edo Segal

AI didn't replace the worker.
It replaced the struggle that made her a worker.

Everyone measures what AI can produce. Ursula Franklin measured what technology does to the person producing it. A physicist who studied how identical atoms arranged differently create soft iron or brittle steel, Franklin applied that same rigor to the systems humans build — and discovered that the practice matters more than the artifact. Her distinction between holistic and prescriptive technology, between the growth model and the production model, reveals what the productivity metrics hide: that the friction AI eliminates was never just an obstacle. It was the mechanism through which judgment, understanding, and independent capability were built. This book applies Franklin's framework to the AI revolution with uncomfortable precision — asking not whether the tools work, but whether the practice sustains the people inside it.

“Technology has built the house in which we all live. The house is continually being extended and rebuilt.”
— Ursula Franklin