Mary Parker Follett — On AI
Contents
Cover
Foreword
About
Chapter 1: Power-Over and Power-With
Chapter 2: The Law of the Situation
Chapter 3: Integration, Not Compromise
Chapter 4: Circular Response and the Team as Living System
Chapter 5: Constructive Conflict
Chapter 6: The Illusion of Final Authority
Chapter 7: The Giving of Orders and the Invisible Leader
Chapter 8: Experience, Coordination, and the Team as the Unit of Intelligence
Chapter 9: Creative Experience in the AI Workplace
Chapter 10: The Integrative Organization
Epilogue
Back Cover
Cover

Mary Parker Follett

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Mary Parker Follett. It is an attempt by Opus 4.6 to simulate Mary Parker Follett's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The decision that kept me up was not about technology. It was about people.

Trivandrum, February 2026. Twenty engineers, each suddenly capable of doing the work of an entire team. The math was sitting right there on the table, clean and brutal. Five people could do the work of a hundred. The Believer's path — lean, fast, immediately profitable. Every investor I know would have taken it without blinking.

I kept the team. I grew the team. And for weeks afterward, I could not fully explain why — not in language that would survive a boardroom challenge. I knew in my body it was right. I could feel that the team's collective intelligence was worth more than the headcount savings. But feeling is not an argument. Instinct does not survive a quarterly review.

Then I encountered Mary Parker Follett, a management theorist who died in 1933, more than a decade before the first electronic computers, and she handed me the structural argument I had been missing.

Follett drew a line between two kinds of power. Power-over: the familiar kind, hierarchical, coercive, flowing downward through commands. And power-with: developmental, co-active, the kind that arises when people create together rather than comply together. The critical insight was that power-with does not merely redistribute a fixed quantity of power more fairly. It increases the total amount of power available. The team that collaborates generates capability that no aggregation of individuals can replicate, no matter how augmented those individuals are.

That distinction cracked something open for me. Every boardroom conversation about AI I had witnessed was operating inside a power-over framework without knowing it. How many seats can we cut? Which tasks can we automate? Who becomes redundant? These are power-over questions. They treat capability as a fixed pie and ask how to slice it more efficiently.

Follett's questions are different. How do we grow the total intelligence of this organization? What kind of experience does this work provide to the people doing it? Are we developing judgment or merely accelerating execution?

These are not soft questions. They are the hardest strategic questions an organization faces in the age of AI, because the answers determine whether the amplifier amplifies intelligence or dysfunction. The tools are identical either way. The signal is ours to choose.

Follett died ninety years before the machines learned our language. She anticipated, with uncanny precision, the organizational crisis those machines would create — and she built a framework for navigating it that the contemporary management literature has yet to match.

This book is another lens for the tower we are climbing together.

Edo Segal · Opus 4.6

About Mary Parker Follett

1868–1933

Mary Parker Follett (1868–1933) was an American political theorist, social worker, and management philosopher whose ideas about organizational life were decades ahead of their time. Born in Quincy, Massachusetts, she studied at the Society for the Collegiate Instruction of Women (later Radcliffe College) and Newnham College, Cambridge. Her early work focused on democratic governance and community organization, producing The New State (1918) and Creative Experience (1924). She then turned her attention to business management, delivering a series of lectures to industrialists in the 1920s and 1930s that introduced concepts including power-with versus power-over, the law of the situation, constructive conflict, circular response, and integration as an alternative to compromise. Largely overlooked in the United States after her death, her work was preserved and championed in Britain and Japan before being rediscovered by management scholars in the late twentieth century. She is now recognized as one of the most original thinkers in the history of organizational theory, often called the "mother of modern management," whose insights into distributed intelligence, participatory authority, and developmental leadership anticipated concerns that the fields of organizational behavior and complexity science would not formalize for another half-century.

Chapter 1: Power-Over and Power-With

The fundamental question of organizational life has never been the question most organizational theorists believed it to be. It has never been the question of efficiency, nor the question of structure, nor even the question of leadership in the conventional sense of that much-abused term. The fundamental question of organizational life is the question of power: what kind of power an organization cultivates, what kind it rewards, what kind it suppresses, and what kind it mistakes for its opposite.

Mary Parker Follett understood this with a clarity that eluded most of her contemporaries and continues to elude most of ours. Writing in the 1920s and 1930s, decades before the fields of organizational behavior and management science had crystallized into their modern forms, Follett drew the most important distinction in organizational theory: the distinction between power-over and power-with. The distinction is not terminological. It is structural, developmental, and — in the deepest sense — political. It determines not only how an organization operates but what kind of human beings the organization produces, what capacities it develops in its members and what capacities it atrophies, what forms of intelligence it cultivates and what forms it renders invisible.

Power-over is the form of power most people mean when they use the word. Coercive power, hierarchical power, the power of the person who can compel obedience through authority, through the control of resources, through the capacity to reward compliance and punish deviation. Power-over operates through command. Its characteristic instrument is the order, and the order flows downward through the organizational hierarchy with a momentum that the lower levels can resist only at the cost of their position within it. The person who issues the order possesses the power. The person who receives it possesses the obligation. The relationship is asymmetric by design.

Power-with is something categorically different. Developmental power. Co-active power. The power that arises not from the capacity to compel but from the capacity to create together. In Follett's formulation, power-with does not merely redistribute the existing quantum of power more equitably among the members of the organization. It increases the total amount of power available. This is the point most readers of Follett miss, because it violates the zero-sum assumption that structures most thinking about power: the assumption that there is a fixed quantity in any system, and that the question is merely how to divide it. Follett rejected this assumption with a directness that startled her audiences then and should startle us now. "Power is not a pre-existing thing which can be handed out to someone, or wrenched from someone," she argued. Power is a capacity, and the capacity grows when the members of the organization are engaged in genuine co-creation rather than command-and-compliance.

The relevance of this distinction to the moment described in The Orange Pill is not incidental. It is structural, and the structure illuminates features of the AI transition that the prevailing discourse has failed to see because it lacks the conceptual vocabulary to name them.

That book describes two fundamentally different approaches to the deployment of AI tools within organizations. The first treats AI as a replacement for human workers: a cheaper, faster, more reliable substitute for the labor that previously constituted the organization's productive capacity. In Follett's terms, this is an exercise of power-over at unprecedented scale. The decision to replace workers with machines is made at the top of the organizational hierarchy, imposed upon the lower levels, and experienced by those lower levels as the elimination of their contribution. The workers do not participate in the decision. They are its objects, not its subjects. The power flows downward, as it always does in power-over systems, and the workers at the bottom are swept away by a current they did not create, did not choose, and cannot redirect.

The second approach treats AI as an amplifier of human capability. Every member of the team is equipped with AI tools and supported in learning to use them — not as passive instruments that execute predetermined tasks but as collaborative partners that expand the range of what each team member can attempt, imagine, and achieve. An engineer who had spent years confined to backend systems begins building user interfaces. A designer who had never written code begins implementing features end to end. The boundaries that had seemed structural — the walls between specializations that the organizational chart presented as natural and permanent — turn out to be artifacts of translation cost. When the cost drops, the walls dissolve, and what remains is the capacity of the human being, amplified rather than replaced.

This is power-with. When The Orange Pill describes twenty engineers in Trivandrum each becoming capable of doing the work of an entire team, the description is not of the elimination of the team. It is of the amplification of each member's contribution to the point where the distinction between individual capability and team capability begins to dissolve. The power has not been concentrated in management's hands while being removed from the workers'. The power has grown, because the members of the team are operating in a mode that generates capability rather than merely executing commands. They are not following orders more efficiently. They are creating more broadly, reaching across domains, exercising judgment about what to build and how to build it at a level that the previous organizational structure could not support.

Consider how the choice between these two forms of power plays out in practice. Organization A deploys AI as a cost-reduction mechanism. It identifies the tasks AI can perform at lower cost than human workers, automates those tasks, and reduces headcount. The power to make these decisions resides at the executive level. The workers whose tasks are automated have no voice in the process. The organization becomes more efficient in the narrow sense that it produces the same output at lower cost. But the intelligence of the organization — the collective capacity of its members to generate insight, detect problems the data does not reveal, exercise judgment that emerges only from deep engagement with the work — diminishes with every departure.

Organization B deploys AI as an amplification mechanism. It equips every member with AI tools and invests in the training and support structures necessary for those tools to be used effectively. It transforms positions rather than eliminating them. The engineer who previously spent eighty percent of her time on implementation now spends eighty percent on architecture, strategy, and the question of what should be built. The organizational hierarchy does not disappear, but the distribution of contribution within it shifts dramatically. Every member operates at a higher level, and the total intelligence of the organization increases rather than diminishes.

The difference is not marginal. It is the difference between an organization that degrades itself through efficiency and one that develops itself through amplification. Follett would have insisted that the choice is not a management decision in the ordinary sense. It is a decision about the kind of power the organization will cultivate, and the consequences extend far beyond the quarterly earnings report. They extend to the human beings who constitute the organization — to the capacities those human beings develop or fail to develop, to the quality of the intelligence the organization brings to bear on the problems it exists to solve.

This conversation recurs in boardrooms everywhere. The twenty-fold productivity number is on the table. If five people can do the work of a hundred, why not just have five? The arithmetic is clean and seductive. Follett's framework reveals what the arithmetic conceals: that the intelligence of the team is not the sum of individual productivities. It is a product of the interactions between team members — the mutual adjustments, the shared contexts, the accumulated trust that allows them to take risks, challenge assumptions, and generate insights that no individual, however capable, could produce alone. Eliminate the team and you eliminate the interactions. Eliminate the interactions and you eliminate the intelligence. The arithmetic gets the inputs right and the function wrong.

Follett called this co-active power. Co-active power is not the sum of individual powers. It is a distinct phenomenon that emerges from the interaction of individual contributions in a context of mutual respect, shared purpose, and genuine engagement. The team operating through co-active power is more than the sum of its parts — not as a cliché but as a verifiable organizational fact. The insights that emerge from genuine teamwork, the solutions that no individual member conceived but that the group generates through mutual adjustment, are products of co-active power. They cannot be reproduced by any arrangement that eliminates the interactions from which they arise.

The AI discourse has largely failed to engage with this dimension. The prevailing framework treats the human-AI relationship as dyadic: a single human interacting with a single AI tool. But the more transformative phenomenon is not the human-AI dyad but the team-AI ecology: the complex system of interactions among multiple human beings, each amplified by AI tools, operating within an organizational context that either supports or undermines the co-active power of the group.

"That is always our problem," Follett wrote, "not how to get control of people, but how all together we can get control of a situation." The sentence is almost a century old. It reads as though it were written yesterday, in response to a technology that its author could not have imagined but whose organizational consequences her framework anticipated with uncanny precision.

The question that the AI age poses to organizational leaders is not "Should I use AI to replace workers or to augment them?" That question operates at the surface. The deeper question is: What kind of power will my organization cultivate? If the answer is power-over, then AI will concentrate capability in fewer hands, eliminate contributions deemed replaceable, and optimize for efficiency at the cost of collective intelligence. If the answer is power-with, then AI will amplify every member's contribution, dissolve artificial boundaries between domains, and cultivate the co-active power that is the highest form of organizational intelligence.

An organization that exercises power-over degrades the human beings within it, regardless of whether the instrument is a hierarchical command structure, a system of economic incentives, or an AI deployment strategy. An organization that cultivates power-with develops the human beings within it, regardless of the specific tools it employs, because the essential feature of power-with is not the tool but the relationship.

The technology will not make this choice for us. The amplifier carries whatever signal it is given. Feed it power-over, and it produces power-over at unprecedented scale. Feed it power-with, and it produces power-with at unprecedented scale. The question is not what the amplifier can do. The question is what signal we choose to feed it.

---

Chapter 2: The Law of the Situation

In a series of lectures that electrified and bewildered the business audiences of the 1920s, Follett proposed that orders in an organization should derive not from personal authority but from what she called the law of the situation. A manager who says "Do this because I am your superior" is exercising personal authority. A manager who says "The situation requires this, and here is how I know" is invoking the law of the situation. The difference is not one of style or tone. It is a structural difference in the source and legitimacy of organizational action, and its consequences for the quality of organizational intelligence are profound.

"My solution is to depersonalize the giving of orders," Follett wrote, "to unite all concerned in a study of the situation, to discover the law of the situation, and obey that." The formulation is radical in its implications. Under the law of the situation, "the employee can issue it to the employer, as well as employer to employee." Authority flows from the situation, not the person. The manager who articulates the situational requirement is a messenger, not a commander.

Personal authority derives its force from hierarchical position. The order is legitimate because the person issuing it has been granted the institutional right to issue orders. The content may be wise or foolish, responsive to the actual requirements of the work or utterly disconnected from them. None of this matters to the order's legitimacy within a personal authority framework. What matters is who issued it.

In a personal authority framework, the quality of organizational decision-making is bounded by the quality of the person at the top. The organization can be no wiser than its senior leader, because the senior leader's judgment is the source of every directive. Information may flow upward through reports and briefings, but the decision flows downward through commands, filtered through a single mind with all its biases, limitations, and blind spots. The organization does not think. It obeys.

In a law-of-the-situation framework, the quality of decision-making is bounded by the organization's collective capacity to read the situation. Every member who possesses relevant knowledge becomes a participant in the decision-making process. The engineer who understands the technical constraint. The customer service representative who has heard the complaint the executives have not yet registered. The junior developer who sees the architectural flaw the senior architect's expertise has rendered invisible. These voices are not consulted as a courtesy. They are structurally necessary, because the situation cannot be read accurately without them.

Now consider what happens when AI enters this framework.

AI tools embody the law of the situation more completely than any human manager can. A well-functioning AI system has no ego to defend, no political position to protect, no career anxiety distorting its judgment, no history of interpersonal conflict coloring its assessment. It responds to the requirements of the task as it understands them. When an engineer describes a problem and the AI responds with an implementation, the AI is not commanding the engineer. It is reading the situation — the engineer's intention to build a feature — and responding with the technical means the situation requires. The authority derives from the work, not from the organizational chart.

But Follett would have immediately identified the limitation this observation conceals. The law of the situation requires judgment about what the situation is. And this judgment — this interpretive act — is precisely what AI tools cannot provide from their own resources.

The AI reads the situation as described by the human. If the description is incomplete, if it omits crucial context, if it reflects biases the human has not examined, then the AI's response, however technically competent, will be a response to the wrong situation. It will solve the problem as stated, but the problem as stated may not be the problem that actually exists. The AI does not know what the customer actually needs, as opposed to what the customer says she needs. It does not know the organization's true strategic position, as opposed to what the data appears to show. It does not know the ethical implications of a design decision, as opposed to what the technical specification permits.

Here is the central paradox of the law of the situation in the AI age: the tool that most perfectly embodies the principle of responding to situational requirements is also the tool that most urgently requires human judgment about what those requirements are.

Crucially, Follett insisted that the law of the situation "is not depersonalizing... I think it really is a matter of repersonalizing. We, persons, have relations with each other, but we should find them in and through the whole situation. We cannot have any sound relations with each other as long as we take them out of that setting which gives them their meaning and value." The distinction between depersonalizing and repersonalizing is critical for AI. Algorithmic systems that claim to "depersonalize" decisions — to remove human bias, human politics, human ego from the process — may actually strip away the human context that gives decisions their meaning. Follett's law of the situation does not remove the person from the decision. It embeds the person more deeply in the situation, requiring that everyone involved study the situation together rather than deferring to hierarchy.

An organization that deploys AI tools within a personal authority framework will use the tools to amplify the decisions of the person at the top. The AI becomes an instrument of hierarchical power: executing the senior leader's vision with unprecedented speed and scope, but never challenging that vision, never bringing independent situational knowledge to bear upon it, never serving as a corrective to the leader's inevitable limitations. The AI becomes a faster, more capable servant that remains a servant, and the quality of its service is bounded by the quality of the commands it receives.

An organization that deploys AI within a law-of-the-situation framework will use the tools differently. The AI becomes a shared resource for reading the situation, available to every member, from the newest hire to the most senior executive. Each brings her own situational knowledge, and the AI amplifies each perspective by connecting it to patterns and possibilities the individual could not have seen. The junior developer uses AI to explore architectural alternatives the senior architect has not considered. The customer service representative uses AI to analyze patterns in complaints the product team has missed. The authority that drives organizational action derives from the collective reading rather than from any individual's hierarchical position.

The difference maps precisely onto Follett's distinction between power-over and power-with. The organization that concentrates power becomes faster but brittler: it can execute decisions at unprecedented speed, but the decisions are no better than the single mind that makes them, and the distributed intelligence that would have caught errors, surfaced objections, and identified alternatives has been bypassed. The organization that distributes intelligence becomes more adaptive, more resilient, more capable of responding to the unexpected, because the collective capacity to read the situation is brought to bear upon every decision.

There is a further dimension that Follett's original formulation anticipates. She argued that when orders derive from the situation rather than from personal authority, the relationship between manager and worker is transformed. Both become students of the situation, both contributors to the collective reading. The hierarchical relationship is softened — though not eliminated — by their shared orientation toward the work.

The most productive human-AI collaborations exhibit precisely this quality. The Orange Pill describes the experience of working with Claude not as a master-servant relationship but as a partnership in which both participants contribute to reading the situation. The human contributes intention, judgment, contextual knowledge, moral sensitivity. The AI contributes pattern recognition, cross-domain connectivity, the capacity to hold complex structures in working memory. Neither commands the other. Both are oriented toward the requirements of the work.

But this operates at the level of the dyad. The more important level is the team — multiple human beings, each amplified by AI, engaged in collective reading of the situation. The dyad is the building block. The team is the structure. And the structure either supports the law of the situation or undermines it, depending on whether the organizational culture encourages genuine collective reading or concentrates decisional authority in the hands of the few.

The practical test is straightforward. When a decision must be made, ask: Is it being driven by personal authority or by the collective reading of the situation? Is the AI being used to amplify a single decision-maker's judgment, or to enrich the organization's collective capacity to read what the situation demands?

The answers will determine the quality of the organization's intelligence in the AI age. Follett's law of the situation is not a management technique. It is the foundation of organizational epistemology. The AI moment does not change this foundation. It reveals, with new clarity, the consequences of building on it well or building on it poorly.

---

Chapter 3: Integration, Not Compromise

The most important word in Follett's vocabulary is integration, and the most important thing about the word is what it is not. It is not compromise.

Compromise operates within a zero-sum framework. Two parties have conflicting interests. Each gives up something in order to reach an agreement. The result is a middle ground that neither fully wants but both can tolerate. The compromise is stable — neither party has sufficient reason to abandon it — but it is also deadening, because neither party's genuine needs have been met. The creative energy that each brought to the conflict is suppressed rather than harnessed. The compromise contains the conflict without resolving it.

Integration operates within a fundamentally different framework. Integration does not divide the existing pie more equitably. It makes a new pie. It does this by reconceiving the problem at a higher level of abstraction, by discovering that the apparent conflict between two positions conceals a deeper compatibility that neither position, stated in its original terms, could reveal. The two parties do not give up what they want. They discover that what they actually want — as opposed to what they initially demanded — is compatible, and the solution that emerges satisfies both more fully than either original demand could have.

Follett's most famous illustration is the window in the library. Two people are reading in a room. One wants the window open. The other wants it closed. Compromise dictates opening the window halfway — a solution that leaves both partially uncomfortable and addresses neither party's actual need. Integration requires asking why each wants what they want. The first wants fresh air. The second does not want the draft on her neck. The integrative solution: open a window in the adjoining room. Both get what they actually need.

The simplicity of this example conceals the profundity of the principle. Most conflicts, when analyzed at the level of underlying need rather than surface demand, contain the potential for solutions that are better for all parties than any compromise could be. The difficulty is not that integrative solutions are impossible. The difficulty is that they require the participants to move beyond stated positions, to examine their own needs with rigor, and to engage with the other party's needs with a generosity that adversarial framing does not support.

The AI transition is being framed as a compromise. On one side stand the advocates of full automation: AI should replace human workers wherever it can do so more cheaply. On the other side stand the defenders of human work: certain tasks should be reserved for humans regardless of whether machines can perform them. The compromise most organizations are attempting is partial automation — some tasks assigned to AI, others retained by humans, the division determined by cost analysis, technical capability, and political negotiation.

This is precisely the resolution Follett would have rejected. It addresses the surface conflict between automation and human employment without engaging with the underlying needs that drive both positions. The advocates of automation do not actually want to eliminate human workers. They want to reduce cost and increase output. The defenders of human work do not actually want to prevent the adoption of useful tools. They want to preserve the conditions under which human beings can develop their capacities, exercise judgment, and experience the satisfaction of contributing meaningfully to a shared enterprise.

Both underlying needs are legitimate. Both are important. And the compromise that divides tasks between humans and machines addresses neither, because it treats the relationship as inherently adversarial — a contest for territory in which every task assigned to the machine is a task lost by the human.

The integrative solution is the one that The Orange Pill describes without naming it as such: the reconception of the human-machine relationship as amplification rather than substitution. The machine does not take tasks from the human. It removes the friction that prevented the human from operating at the level where her contribution is most valuable. The engineer is promoted from implementer to architect. The designer is promoted from mockup creator to end-to-end feature builder. The work the AI performs — syntax, boilerplate, the mechanical translation of intention into code — is not work the human loses. It is work the human is freed from.

This is integration. Both the organization's need for increased productivity and the individual's need for meaningful work are satisfied — not by dividing the work but by reconceiving it at a higher level. The apparent conflict between efficiency and human dignity dissolves, because the conflict was an artifact of surface-level framing.

Follett would push the analysis further. She would identify at least three conditions that must be met for integration to succeed in the AI context.

The first: the organization must genuinely invest in developing the higher-order capacities that the amplified role demands. It is not enough to remove implementation work and declare the engineer promoted. The engineer must be supported in developing architectural judgment, product sense, and strategic vision. This investment is costly, and it is precisely the investment most organizations are unwilling to make, because the short-term calculus of automation — replacing human labor with cheaper machine labor — produces more immediate returns.

The second: organizational culture must support the mutual adjustment integration requires. Integration does not happen automatically. It happens through dialogue, experimentation, and iterative refinement in which the members of the organization discover the practices that allow human and machine to operate in genuine collaboration. This dialogue requires trust, and trust requires time — precisely what the accelerated pace of the AI transition seems to deny.

The third: the organization must be willing to relax the hierarchical assumptions the compromise solution preserves. In a compromise framework, management decides which tasks go to AI and which stay with humans. In an integrative framework, the team collectively discovers the configuration that develops both human and machine capabilities. This requires distributing decisional authority in ways most traditional organizations are structurally incapable of supporting.

The most revealing moment in The Orange Pill belongs to the senior engineer who spent his first two days oscillating between excitement and terror. The excitement came from expanded capability. The terror came from recognizing that the implementation work consuming eighty percent of his career could now be handled by a tool, and that the remaining twenty percent — the judgment, the architectural instinct, the taste — was all that remained. By Friday, he had arrived at the integrative insight: the remaining twenty percent was everything. The tool had not made him redundant. It had stripped away the labor that had been masking what he was actually good at.

This is the phenomenology of integration. It begins in conflict — the tension between old identity and new possibility. It passes through terror — the recognition that the position one has been defending is not the position one actually needs. And it arrives at a reconception that satisfies the underlying need more fully than the original position ever could. The engineer did not compromise by giving up half his tasks. He integrated by discovering that his real contribution lay at a level the AI amplified rather than threatened.

"The only good solution to social conflict," Follett wrote, "is not compromise, not conquest, but integration." The AI transition need not be a conquest of human work by machine capability, nor a compromise that divides the territory between them. It can be an integration that reconceives the relationship — if the organizational conditions support the creative reconception that integration demands.

---

Chapter 4: Circular Response and the Team as Living System

Follett introduced a concept the organizational sciences of her era were not equipped to appreciate and that the organizational sciences of the present era have yet to fully absorb. She called it circular response, and it describes the fundamental mechanism through which genuine collaboration operates.

Linear thinking about organizational interaction goes like this: A speaks. B listens. B responds. A receives the response. The interaction is a sequence of discrete events, each caused by the preceding one. The model is mechanical — stimulus and response, input and output, cause and effect.

Follett rejected this model with a thoroughness that bordered on philosophical reconstruction. The interaction between human beings in an organizational setting is not a sequence of discrete events but a continuous process of mutual modification. When A speaks, A is not delivering a fixed message to a passive receiver. A is initiating a process that immediately changes both participants. B's response is not a reaction to A's statement. It is a response to the situation A's statement has created — and that situation includes not only the content of the statement but B's perception of A's intention, B's emotional state, the history of the relationship, and the organizational context. Moreover, A is changed by the act of speaking, because the articulation of a thought alters the thought itself, and A is further changed by observing B's response, which modifies A's understanding of what she said and what she meant by saying it.

The interaction is circular, not linear. Each participant simultaneously affects and is affected by the other. There is no discrete moment at which influence flows in one direction only. The interaction is a living process, and its product — the decision, the insight, the solution — is not attributable to either participant individually. It is the product of the circular response itself.

This concept has implications for the AI moment that Follett could not have anticipated but that her framework is uniquely equipped to illuminate.

The Orange Pill describes the experience of working with Claude in terms Follett would have recognized immediately. The author poses a problem and receives back not a mere answer but an interpretation — a reading of his intention that reflects back dimensions of his own thinking he had not perceived. He is changed by the AI's response. His understanding of his own problem is modified by the AI's articulation of it. He feeds that modified understanding back into the next exchange, which produces a further modification, and so on in a spiral of mutual adjustment that generates insights attributable to neither participant alone.

The most vivid instance is the emergence of the ascending friction concept. The author had been struggling with the tension between Byung-Chul Han's critique of smoothness and his own experience of AI's generative power. He knew both perspectives contained truth. He could not find the framework that held both. He described the impasse to Claude. Claude returned with an analogy from laparoscopic surgery: the removal of one kind of friction — the tactile friction of open surgery — did not eliminate difficulty but relocated it to a higher cognitive level. The surgeon who lost the feel of tissue gained the capacity to perform operations open surgery could never attempt. The friction ascended.

Neither participant owns this insight. The human brought the problem — the specific tension, the felt sense that both positions were right in a way the standard discourse could not accommodate. The AI brought the analogy — the connection to surgical technique, drawn from its vast training data, that neither the human nor any other single mind would likely have found. The insight emerged from the circular response between them, from the process of mutual modification in which each was changed by the other's contribution in ways neither could have predicted.

The question Follett would have pressed: What kind of system is this? Is it tool-use, where the human is the agent and the AI the instrument? Or is it a living system, where both participants are active contributors to a process that transforms them both?

Follett's answer would have been clear. If the interaction exhibits the characteristics of circular response — if each participant is genuinely modified by the other's contribution, if the product cannot be attributed to either alone, if the process generates outcomes neither predicted nor could have produced in isolation — then the system is, in the relevant sense, alive. Not biologically alive. Not conscious. But alive in the organizational sense: a dynamic process that generates emergent properties through continuous mutual adjustment.

This matters because the organizational implications of working with a living system are categorically different from working with a tool. A tool is designed, used, and maintained. Its properties are fixed by its design. A living system is participated in, adapted to, and co-evolved with. Its properties are emergent, and the participant's task is not merely to use the system effectively but to contribute to the process of mutual adjustment through which its properties develop.

But circular response, when it functions poorly, produces co-active delusion rather than co-active intelligence. The same process of mutual modification that generates genuine insight can generate shared error — a recursion in which each participant reinforces the other's biases, confirms the other's assumptions, and the spiral tightens toward an increasingly polished and internally coherent error that neither can detect because both have been drawn into the same distortion.

The Orange Pill describes precisely this danger. Claude produced a passage connecting Csikszentmihalyi's flow state to a concept attributed to Gilles Deleuze. The passage was elegant, internally coherent, rhetorically compelling. The author read it, liked it, and moved on. Only the next morning did he check and discover that the reference was wrong in a way obvious to anyone who had actually read Deleuze.

This is circular response producing co-active error. The AI generated a plausible output. The human's response — acceptance — was fed back into the system as confirmation. Had the error not been caught by the human's independent knowledge, it would have remained: a polished falsehood the circular response had confirmed rather than corrected.

Follett's framework provides the diagnostic. Circular response generates intelligence only when the participants maintain the capacity for independent evaluation alongside their engagement in the collaborative process. The musician lost in the ensemble, merged so completely with the group that she can no longer hear her own instrument, is not in productive circular response. She is in fusion, and fusion produces noise, not music. Productive circular response requires the paradox of simultaneous engagement and independence: the willingness to be changed by the other's contribution alongside the capacity to evaluate whether the change is genuine learning or co-active error.

This paradox is the deepest challenge of the AI moment. The human-AI collaboration is most productive when circular response is functioning well — each participant genuinely modifying the other, emergent outputs surpassing what either could produce alone. But the collaboration is most dangerous when the circular response overwhelms the human's capacity for independent judgment — when the polish of the AI's output seduces the human into accepting plausibility as truth, when the spiral of mutual reinforcement tightens into a closed loop from which no corrective signal can escape.

The organizational implication is that AI deployment must include structures protecting the capacity for independent evaluation within collaborative engagement. These structures are not technical. They are cultural. They include cultivating critical judgment as an organizational value, creating spaces where AI outputs are subjected to adversarial scrutiny the collaborative process itself cannot provide, and developing metacognitive practices through which individuals learn to recognize the difference between genuine insight and co-active error.

There is a further dimension that extends beyond the dyad to the team proper. The dyadic collaboration between human and AI is the building block, but the building is the team of multiple human beings, each engaged in their own dyadic collaboration with AI, interacting with each other in the continuous mutual adjustment that generates collective intelligence. The intelligence of this system is not the aggregate of individual human-AI dyads. It is the product of the interactions between them — the way one member's AI-amplified insight triggers a modification in another's understanding, which triggers a further insight, which ripples through the team and produces understanding no individual dyad could have reached.

The team is a living system. The AI has entered the system. The system is being transformed. Whether the transformation produces intelligence or delusion depends entirely on the quality of the circular response the organization cultivates — and on the discipline of maintaining independent judgment within the collaborative process that Follett's framework, nearly a century before the machines arrived, identified as the condition for productive interaction between minds of any kind.

Chapter 5: Constructive Conflict

Follett's most counterintuitive contribution to organizational theory was her insistence that conflict is not a pathology to be eliminated but a resource to be used. The claim scandalized the efficiency-minded industrialists of her era, who regarded conflict as friction — as waste, as the organizational equivalent of a machine grinding against its own components. The metaphor was mechanical, and the prescription followed mechanically: reduce friction, align the parts, eliminate sources of resistance, and the organizational machine will run smoothly.

Follett rejected both the metaphor and the prescription. Conflict, she argued, is the appearance of difference. When two members of an organization disagree, the disagreement signals that they possess different knowledge, different perspectives, different understandings of the situation. The disagreement is not a malfunction. It is information. And the organization that treats it as information rather than malfunction gains access to a form of intelligence that harmonious organizations systematically destroy.

She distinguished three modes of dealing with conflict. The first is domination: one side wins, the other loses. The conflict is resolved in the sense that one perspective prevails, but the resolution is unstable — the defeated perspective has not been addressed, only suppressed, and it will resurface in different forms until the underlying need it represents is genuinely met. The second is compromise: both sides give up something, arriving at a middle position neither fully endorses. More stable than domination but equally deadening, because the creative energy both perspectives contained is lost in the splitting of differences. The third is integration: the creative reconception described in Chapter 3, where the problem is reframed at a higher level and both parties' underlying needs are met through a solution neither had initially conceived.

"As conflict — difference — is here in the world," Follett wrote, "as we cannot avoid it, we should, I think, use it. Instead of condemning it, we should set it to work for us." The formulation is precise. She does not say conflict is pleasant, or that disagreement should be sought for its own sake. She says it is here — an irreducible feature of organizational life — and that the question is not whether to have it but what to do with it.

The AI transition is generating conflicts of precisely the kind Follett's framework was designed to address. The most consequential is the conflict between depth and breadth.

The Orange Pill describes a senior software architect who told the author he felt like a master calligrapher watching the printing press arrive. Twenty-five years of building systems. The capacity to feel a codebase the way a doctor feels a pulse — not through analysis but through embodied intuition deposited layer by layer through thousands of hours of patient work. He did not dispute that AI was more efficient. He said that something beautiful was being lost, and that the people celebrating the gain were not equipped to see the loss, because the loss was not quantifiable.

On the other side: the engineer who had never written frontend code and who, within two days of working with Claude, built a complete user-facing feature. The designer who had never touched backend code and who, within two weeks, was implementing features end to end. These people experienced the dissolution of boundaries that had seemed structural, and the dissolution released creative energy the previous configuration could not access.

The prevailing discourse treats this as a zero-sum contest. Either depth wins or breadth wins. Either the calligrapher is right or the printer is right. Either the old skills matter or they do not.

This framing is precisely the kind of domination-or-compromise response Follett would have rejected. It forces a choice between perspectives that are both correct, and the forced choice impoverishes the organizational intelligence that would result from their integration.

The integrative approach begins by asking what each perspective genuinely needs. The senior architect does not need AI to disappear. He needs his twenty-five years of embodied knowledge to remain valuable — to retain its connection to meaningful work, to continue serving as the foundation for judgment that only deep experience produces. The junior engineer does not need the senior architect rendered obsolete. She needs access to domains that the friction of specialization previously made inaccessible, and the expanded scope of contribution that AI makes possible.

The ascending friction thesis from The Orange Pill is itself an integrative resolution of this conflict. The senior architect's deep knowledge does not become irrelevant. It becomes the judgment layer operating at the higher floor — the floor to which the friction has ascended. The junior engineer's expanded breadth does not replace the senior architect's depth. It provides the material upon which the architect's judgment operates: the broader range of implementation possibilities from which the architect selects the one that best serves the situation.

Both perspectives get what they genuinely need. The architect's expertise retains its value, exercised at a higher level where the stakes are greater and the judgment harder. The engineer's expanded capability is developed, directed by the architect's judgment toward outcomes neither breadth alone nor depth alone could have produced. The conflict is resolved not by the victory of one side or the compromise of both, but by the integration that discovers how both contributions can be fully deployed.

But this integration does not emerge automatically. It requires the organizational conditions that support constructive conflict: willingness to surface genuine needs behind stated positions, trust that allows parties to examine their own assumptions without feeling threatened, and the creative capacity to envision solutions the original framing could not accommodate. These conditions are cultural, not technical. No AI tool, however sophisticated, can produce them.

There is a second conflict the AI moment produces that Follett's framework illuminates with particular force: the conflict between speed and reflection. Byung-Chul Han argues that the removal of friction from human experience produces hollow productivity — always busy, never accomplishing anything that carries weight. The Orange Pill does not dismiss this critique. It takes it seriously, recognizes the truth it contains, and then mounts a counter-argument that does not deny the truth but contextualizes it. The friction has not disappeared. It has relocated.

This engagement is constructive conflict in action. Two perspectives, each containing genuine insight, each blind to dimensions the other reveals, producing through their creative collision an understanding neither could have achieved alone. The conflict between Han's critique and the builder's experience is not resolved by choosing a side. It is resolved by integration — discovering how both are true simultaneously. The discovery is possible only because the conflict was engaged rather than avoided.

The organizational imperative: organizations navigating the AI transition must actively cultivate conditions under which productive disagreement can occur. The senior architect mourning embodied expertise is not a Luddite to be dismissed. He is a source of organizational intelligence the integrative process requires. The critic warning about hollow AI-assisted productivity is not an obstacle to progress. She is a corrective to the bias toward speed that the tool's design embeds in the user's behavior.

AI tools can either support or undermine these conditions. On the supporting side, they can help each party in a conflict understand the other's perspective more fully. The engineer who disagrees with the designer's approach can use AI to explore the design implications of alternative technical approaches, developing richer appreciation of the constraints shaping the designer's thinking.

On the undermining side, AI tools can enable conflict avoidance by providing a seemingly authoritative third-party resolution that short-circuits the integrative process. When the team turns to the AI to settle a disagreement, the AI produces a response that splits the difference — smooth, confident, articulated in the language of comprehensive analysis. This is compromise dressed as integration, and it is particularly seductive because the AI's response sounds like wisdom.

Genuine integration requires the discomfort of staying in the conflict long enough for underlying needs to surface. It requires patience to resist premature resolution and courage to sit with tension until creative reconception becomes possible. The AI tool, by offering the appearance of resolution on demand, threatens to short-circuit the very process through which genuine integration occurs.

The organizational response must be to establish norms protecting the space for constructive conflict against the pressure of AI-assisted resolution. When team members disagree, the first response should not be asking the AI to adjudicate. The first response should be engaging with the disagreement directly — surfacing underlying needs, exploring what the conflict reveals. The AI can be consulted as a resource for exploring dimensions of the disagreement, modeling consequences of different approaches, surfacing information participants might have missed. But the integrative process itself must remain human, conducted through the direct engagement Follett considered essential.

The organization that eliminates conflict eliminates intelligence. The organization that cultivates constructive conflict cultivates the most powerful form of intelligence available: the intelligence that emerges from the creative collision of perspectives, disciplined by the integrative process, directed toward solutions no single perspective could have conceived.

Follett understood that organizational intelligence does not come from agreement. It comes from difference — engaged, respected, and integrated. The AI age makes this understanding not merely relevant but urgent, because the tools that make agreement easy are the same tools that make the suppression of productive disagreement nearly invisible.

---

Chapter 6: The Illusion of Final Authority

There is an assumption embedded so deeply in organizational thinking that most theorists do not recognize it as an assumption. It is the assumption that somewhere in every organization there exists a final authority — a point at which the chain of decision terminates in a person or body whose judgment is ultimate, whose word is final, whose position atop the hierarchy confers upon them the right and the capacity to resolve any question.

Follett spent much of her intellectual career arguing that this assumption is an illusion. The illusion is not harmless. It distorts the actual functioning of organizations, degrades the quality of decision-making, and prevents organizations from accessing their most valuable resource: the distributed intelligence of their members.

The illusion operates through a confusion between two different things: the authority to decide and the capacity to decide well. The CEO has the authority to decide. Her position confers the institutional right to make determinations binding upon the organization. But the capacity to decide well — to respond accurately to the situation's requirements, to integrate diverse perspectives, to anticipate consequences across multiple dimensions — does not reside in any single person, regardless of position. It resides in the collective intelligence of the organization, in the process through which diverse forms of knowledge are pooled, compared, challenged, and integrated into a reading more accurate than any individual reading could be.

Follett did not argue that hierarchy is unnecessary or that organizations should be governed by committee. She argued that the hierarchical structure of authority should not be confused with the distributed structure of intelligence. The CEO makes the final call. But the quality of that call depends entirely on the process through which collective intelligence is brought to bear upon the question. The CEO who makes decisions in isolation, who treats her position as a license to substitute personal judgment for collective inquiry, is exercising the illusion of final authority. She has the power but not the intelligence.

The AI moment has made this illusion simultaneously more tempting and more dangerous.

More tempting because AI tools give the individual decision-maker the appearance of comprehensive knowledge. A CEO with AI access can process more information, model more scenarios, and generate more options than any CEO in history. The apparent scope of her knowledge expands dramatically, and the expansion reinforces the illusion that she possesses the capacity to decide well on her own. The AI becomes the ultimate staff — the adviser who never sleeps, never forgets, never challenges the boss's ego, and who produces, on demand, a polished analysis of any question.

But the comprehensiveness is an illusion within an illusion. The AI's analysis is bounded by its training data, the framing of the question it has been asked, and the biases embedded in both. It does not know what the customer service representative knows about complaints not yet escalated. It does not know what the junior developer knows about the architectural flaw the senior architect's expertise has rendered invisible. It does not know what the worker on the floor knows about the gap between process as designed and process as practiced. These are forms of situated knowledge — accumulated through specific experiences, accessible only to those who occupy specific positions — and no AI system can substitute for them.

More dangerous because the speed at which AI-assisted decisions can be implemented means systematic error propagates faster than ever. A CEO who makes a wrong decision in a traditional organization experiences consequences gradually, as the decision works through the system and encounters implementation friction — the pushback of informed subordinates, the corrective feedback of the market. There is time for error to be detected and corrected. A CEO who makes a wrong decision in an AI-augmented organization experiences consequences at machine speed. The decision is implemented, scaled, and propagated before corrective mechanisms can engage.

Follett's alternative to the illusion of final authority was cumulative responsibility: authority distributed across the organization in proportion to the knowledge each member possesses about the aspects of the situation for which they are responsible. The engineer has authority over engineering dimensions. The designer over design dimensions. The customer service representative over customer-experience dimensions. No single person has final authority, because no single person possesses final knowledge. The decision emerges from the integration of multiple authoritative perspectives, each rooted in situated knowledge the others cannot provide.

This is not governance by committee, which Follett regarded as dysfunctional — merely distributing the illusion of final authority among a group. Committee governance is still bounded by the members' collective limitations, and the political dynamics of committee deliberation — compromises, logrolling, suppression of dissent — often produce decisions worse than those of a well-informed individual.

Follett's model is the expert team, in which each member's authority derives from knowledge of the situation rather than position in the hierarchy, and the decision is produced through the integrative process: creative reconception satisfying the genuine needs of all perspectives.

The Orange Pill describes vector pods — small groups of three or four people whose job is to decide what should be built. They talk to users, analyze markets, debate strategy, produce specifications that AI tools execute. These pods are organizational instantiations of Follett's model: authority derives from situated knowledge, decisions emerge through integration. The AI tools amplify each member's capacity. The engineer models business implications. The designer analyzes technical constraints. The strategist prototypes features. Boundaries dissolve, and the integrative process is enriched by each member's expanded capacity to engage with the full complexity of the situation.

But the pods work only if the organizational culture supports genuine distribution of authority. If their recommendations are routinely overridden by a senior executive who believes position confers superior judgment, the pod is not a decision-making unit. It is a consulting group, and its members will quickly learn their contributions are decorative. The circular response that generates genuine integration will degrade into command-and-compliance, and the organization will have invested in the apparatus of distributed intelligence while retaining the substance of concentrated authority.

The illusion of final authority is not a failure of character. It is a failure of organizational design — and a failure that AI amplifies rather than corrects. Organizations that concentrate authority will use AI to reinforce that concentration. Organizations that distribute intelligence will use AI to amplify that distribution. The technology does not choose. The organizational design does.

The historical evidence is extensive. The railroad companies of the nineteenth century concentrated authority in executives who made decisions from headquarters while systematically excluding the situated knowledge of conductors, engineers, and station agents. Rapid expansion followed by spectacular crashes — decisions at headquarters systematically disconnected from operational reality. Toyota's production system distributed authority to workers on the line: the worker who noticed a defect could stop the entire line. American manufacturers regarded this as insane until Japanese quality eroded their market share.

The AI moment demands the same recognition at a higher level. The situated knowledge of the organization's members, amplified by AI tools, is the most valuable resource the organization possesses. The design that captures this resource distributes authority. The design that concentrates authority in a single decision-maker, however AI-augmented, wastes it.

"With scientific management," Follett wrote, "the managers are as much under orders as the workers, for both obey the law of the situation." In the AI age, this means the most intelligent organizations will be those where the CEO's AI-augmented judgment is one input among many — essential, respected, but never mistaken for the whole.

---

Chapter 7: The Giving of Orders and the Invisible Leader

Follett's 1925 paper "The Giving of Orders" remains one of the most penetrating analyses of organizational authority ever written. Its central argument: the way an order is given determines the quality of the response it receives, and the most effective orders are those that do not feel like orders at all.

This is not a claim about management style. It is a structural claim about the relationship between the form of a directive and the intelligence of the response. An order issued as a command — deriving authority from the position of the person issuing it rather than from the requirements of the situation — produces compliance. The worker complies because the alternative is sanction, and the compliance is mechanical: executing the letter without engaging the worker's intelligence, judgment, or creative capacity. The order gets done. But it gets done badly, because the worker's knowledge of specific conditions, her understanding of the thousand factors the order-giver could not anticipate, her capacity to adapt the general directive to the particular situation — none of these resources are engaged. They are suppressed by the form of the directive, which communicates: your judgment is not required.

An order deriving from the situation produces something categorically different. It produces engagement. The worker is not merely executing a directive. She is participating in the response to a situation that she and the order-giver are both trying to address. Her intelligence is engaged, her judgment invited, her knowledge treated as a resource rather than an irrelevancy. The result is work not only executed but adapted, not only completed but improved.

The application to human-AI interaction is direct. The person who uses AI by issuing commands — "Write me a report on X," "Generate code that does Y" — is operating in the mode Follett identified as least productive. The machine complies. The output is competent. But the interaction is linear, and linear interaction produces linear results. The prompter's intelligence is not engaged in the process. It is supplanted by the machine's capability.

The person who uses AI by posing problems — describing situations, sharing confusion, articulating half-formed intuitions — is operating in the mode Follett would have advocated. She is not commanding but inviting. She is not specifying the output but describing the need. The AI's response is not compliance but contribution: an interpretation that reflects back dimensions of the human's thinking she had not perceived. The circular response described in Chapter 4 is activated only when the interaction is structured as collaboration rather than command.

The practical implications extend to the organization. In a command-based model, the leader specifies what the AI should produce, team members execute the leader's prompts, and output is reviewed for compliance with the original vision. The team members are intermediaries between the leader's vision and the AI's execution. Their own judgment, situational knowledge, and creative capacity are not engaged.

In a situation-based model, the leader describes the situation the team faces: the problem to be solved, the constraints, the values the solution must serve. Team members, each equipped with AI tools, engage from their own perspectives, bringing their situated knowledge to bear. The AI amplifies each perspective, connecting it to patterns and possibilities the individual could not have found alone. The leader's role is not to command but to integrate: receiving diverse contributions, identifying connections, facilitating the process through which an integrative solution emerges.

This is Follett's concept of depersonalizing the order — or rather, as she insisted, repersonalizing it. The authority derives not from the leader but from the situation. Team members are not executing commands. They are contributing to a collective reading. The AI is not implementing the leader's vision. It is participating in the process through which the team's collective vision emerges.

The distinction between these two modes determines whether AI amplifies the organization's intelligence or merely accelerates its execution. The leader who commands produces an organization that executes fast and thinks shallow. The leader who invites produces an organization that thinks deep and acts on intelligence rather than instruction.

There is a developmental dimension Follett identified that the AI moment intensifies. The form of the order affects not only the quality of the immediate response but the trajectory of the person who receives it. A worker who receives commands develops the capacity for compliance. Over time, her judgment atrophies — the organizational environment communicates that her judgment is not required. She becomes precisely the kind of worker the command system assumes she already is.

A worker who receives situational challenges develops the capacity for judgment. Her ability to read situations, adapt principles to conditions, exercise creative problem-solving grows through practice. The situational system produces the worker it requires.

AI amplifies both trajectories. The organization that gives commands produces workers who progressively lose the capacity for independent judgment — the very capacity AI cannot provide: evaluating outputs, detecting concealed errors, exercising taste and vision. The organization that gives situational challenges produces workers who develop these capacities with each engagement, each amplified by AI tools that expand the range of exploration.

This brings us to Follett's most radical contribution to leadership theory: the invisible leader.

"The most successful leader of all," Follett wrote, "is one who sees another picture not yet actualized." And: "Leader and followers are both following the invisible leader — the common purpose." The ideal leader's contribution is so integrated into the group's process that the group experiences its achievements as its own. The invisible leader does not lead by commanding attention. She leads by creating conditions under which the team's collective intelligence operates at its highest level. She facilitates without dominating. She connects without controlling. She ensures the integrative process functions well without becoming the center around which it revolves.

The AI moment demands this form of leadership with particular urgency. The visible leader in the AI age stands at the center, making decisions AI tools execute at machine speed. Brilliant, charismatic, decisive. The organization moves at the pace of her imagination. But the organization is brittle. Its intelligence is concentrated in a single node. When her judgment fails, no distributed intelligence catches the error. When she departs, the intelligence departs with her.

The invisible leader produces a different organization. She creates conditions under which every member, amplified by AI tools, contributes to collective intelligence. The engineer's architectural judgment, the designer's aesthetic sense, the strategist's market awareness, the junior developer's fresh perspective — all integrated into a reading richer and more resilient than any individual could produce. The organization's intelligence is distributed, and the distribution makes it resilient.

The invisible leader's primary function in the AI-augmented organization is the design of the evaluative context. Outputs are produced at a pace exceeding any single evaluator's capacity. The visible leader would attempt to evaluate all outputs herself, creating a bottleneck. The invisible leader designs the norms, processes, and cultural expectations through which the team evaluates collectively. The engineer presents AI-generated architecture to the designer, who evaluates from the perspective of user experience. The designer presents AI-generated interface to the strategist, who evaluates from market fit. Each evaluation is enriched by situated knowledge. The collective evaluation is more rigorous than any individual assessment.

Equally essential is modeling the discipline of independent evaluation. The leader who demonstrates — by practice, not pronouncement — the willingness to reject polished AI outputs when the argument beneath them is hollow teaches the team the evaluative discipline the AI age demands. She does not announce that she is demonstrating critical evaluation. She simply does it, consistently, and the team absorbs the norm through observation.

There is a final dimension: managing the team's relationship to the technology itself. The invisible leader creates moments of deliberate disconnection — structured pauses where the team engages without AI, using only their own intelligence and each other's perspectives. Not anti-technology but pro-human — maintaining the capacity for independent thought that the tools, by their efficiency, tend to displace. These pauses produce no metrics, contribute nothing to the quarterly report. But they maintain the resource no AI tool can provide: the human capacity for judgment that constitutes the irreducible core of the team's intelligence.

Follett's invisible leader has never been more needed. The tools are powerful, the pace rapid, the pressure to command intense. The leader who can resist that pressure — who creates conditions for collective intelligence rather than imposing individual brilliance — will build organizations that are not merely productive but genuinely intelligent. The best leadership is the leadership you cannot see, because it has become the water in which the organization swims.

---

Chapter 8: Experience, Coordination, and the Team as the Unit of Intelligence

Follett insisted that experience, not instruction, is the primary teacher. People do not learn to manage by reading about management. They learn by managing — encountering situations, making decisions, observing consequences, adjusting. The cycle of action, observation, and adjustment is the mechanism through which genuine capability develops. Instruction can orient. Only experience can teach.

This principle has been under pressure since the first management textbook was published, but the AI moment applies pressure of a qualitatively different kind. When Claude can generate working code from a natural-language description, the developer who uses it to bypass the experience of writing, testing, failing, and adjusting has gained a working product but lost the developmental encounter with difficulty. The experience was the learning. Remove the experience and you remove the learning, regardless of how good the output is.

But Follett's principle is more subtle than the simple equation of friction with learning. Not all experience is developmental. Follett distinguished between experience that produces genuine growth — the reflective engagement with consequences that builds judgment — and experience that produces mere habituation: the repetition of tasks without the reflective engagement that transforms repetition into understanding. The assembly-line worker who performs the same motion ten thousand times has accumulated repetitions, not experience in Follett's sense. She has not been changed by the work. The work has not developed her capacity.

This distinction reframes the debate about AI and the loss of formative struggle. Much of what AI removes from knowledge work was not formative in Follett's sense. It was habituation — the repetitive mechanical labor that consumed bandwidth without developing judgment. The developer who spent four hours a day on dependency management and configuration files was not accumulating developmental experience during those four hours. She was accumulating repetitions. The ten minutes within those four hours when something unexpected forced her to understand a connection between systems — those were developmental. The rest was plumbing.

The question is whether AI-augmented work can redirect attention from habituation to genuine experience — from the repetitive to the reflective, from the mechanical to the judgmental. The Orange Pill suggests it can. The engineer freed from implementation plumbing invests attention in architectural consequences, user experience, systemic implications of design choices. She is engaging with higher-order consequences, developing higher-order judgment, building understanding the friction of implementation had prevented her from reaching.

But the redirection is not automatic. It requires organizational support — time, patience, the tolerance for the slower pace at which genuine learning occurs. The pressure for production is constant, and AI tools make production so easy that the temptation to substitute production for learning is overwhelming. The leader who asks "What did you build today?" is measuring production. The leader who asks "What did you learn today?" is measuring development. Both questions are legitimate. In the AI age, the balance must shift toward the developmental question, because production without development is consumption of the resource AI cannot replenish: collective judgment.

Follett identified coordination as the fundamental activity of organization — more fundamental than planning, more fundamental than controlling. An organization that coordinates well can survive poor planning, because coordination — the continuous mutual adjustment of parts to each other and to the whole — corrects poor plans in real time. An organization that plans well but coordinates poorly will execute its plans into failure, because the gap between assumptions and reality widens at every implementation point.

AI has transformed the coordination challenge in ways that create a specific and newly possible failure mode. When every member of a team can operate across domains, each producing AI-assisted outputs independently, those outputs may appear polished and competent in isolation but fail to cohere when assembled. Call this the coherence illusion: the phenomenon in which AI-augmented individuals produce work that appears internally consistent but has not been genuinely coordinated with the work of other team members.

The coherence illusion is particularly dangerous because it is invisible until assembly. Each piece looks good on its own. The integration failure appears only when the pieces must function as a whole, and by that point, retrofitting the coordination that should have occurred at the beginning costs many times more than building it in from the start.

Follett's four principles of coordination directly address this failure. First, coordination must be achieved through direct contact between responsible people rather than through intermediaries. The nuances of situation, the subtle signals of misalignment — these forms of information do not survive hierarchical compression. Second, coordination must begin early, before parts have hardened into shapes resistant to adjustment. Third, coordination must be reciprocal — all parts adjusting to all other parts simultaneously. Fourth, coordination is continuous, not a one-time achievement.

The AI-augmented organization needs more coordination infrastructure, not less — even though AI tools reduce the need for coordination at the implementation level. The handoffs between frontend and backend, between design and engineering, can be reduced by tools that let individuals operate across domains. But coordination at the strategic level — alignment of purpose, shared understanding, continuous mutual adjustment — must intensify precisely because individual contributions are larger, more ambitious, and more cross-domain than before.

This brings the argument to its destination: the team as the fundamental unit of organizational intelligence.

Individual intelligence is real, measurable, and bounded. Every mind operates within a specific set of assumptions, experiences a specific angle of vision, possesses a specific range of knowledge. The mind is situated, shaped by history and position, and the shape that makes it brilliant in one dimension makes it blind in others.

Team intelligence is categorically different. It is not the sum of individual intelligences but a product of interactions between them — circular responses, constructive conflicts, integrative processes through which diverse perspectives combine into a reading more complete than any individual could produce. Team intelligence is emergent. The insight no member conceived, the solution no member could have generated, the understanding no individual perspective could have supported — these are products of team intelligence. They cannot be reproduced by any arrangement that eliminates the interactions from which they arise.

The economic argument for replacing teams with individual human-AI dyads is seductive. A single person with AI can produce the output of a team of five. The mathematics is straightforward, the quarterly impact immediate. But the mathematics measures only the dimension of output while ignoring the dimension of intelligence. The output of five individuals working independently with AI is the sum of five outputs. The intelligence of a team of five working together with AI is qualitatively different — an emergent property of interactions that produces insights, catches errors, generates solutions, and exercises collective judgment no aggregation of individual judgments can replicate.

The organization that replaces teams with individuals will save money in the short term and lose intelligence in the long term. The intelligence lost will manifest in decisions not caught, innovations not generated, market shifts not anticipated, quality degradation not detected until consequences become catastrophic.

Follett would have noted that Herbert Simon — who later became a co-founder of the artificial intelligence field itself — built his theory of organizational decision-making on foundations she helped establish. The tradition that gave rise to AI's theoretical origins is the same tradition that insists intelligence is distributed, situated, and irreducible to any single node, however computationally powerful. There is an irony in using AI to concentrate the very intelligence that AI's own intellectual lineage recognized as fundamentally distributed.

The practical test is simple. Take a complex decision involving multiple dimensions, stakeholders, and possible consequences. Present it to a single individual with the best AI tools available. Then present it to a team of five, each with the same tools, operating within trust, constructive conflict, and integrative process. The team's decision will be more nuanced, more comprehensive, more anticipatory of unintended consequences — not because the individual members are more intelligent, but because the interactions between them generate intelligence no individual, however augmented, can produce alone.

The team is not a luxury. It is the unit of intelligence. The organization that dissolves its teams in pursuit of AI-assisted efficiency will discover it has traded the most valuable form of intelligence it possesses for a faster, cheaper, and fundamentally less intelligent alternative. And by the time the discovery is made, the trust that held the team together — the hardest organizational resource to build and the easiest to destroy — will be gone.

---

Chapter 9: Creative Experience in the AI Workplace

Follett's 1924 book Creative Experience was not, despite its title, a book about creativity in the way the contemporary innovation literature uses that word. It was not about brainstorming techniques or design thinking workshops or the cultivation of individual genius. It was about something more fundamental: the conditions under which human beings, working together on genuine problems, produce outcomes that transform both the problem and the people working on it. Creative experience, in Follett's formulation, is the experience of participating in the generation of something genuinely new — something that did not exist before the interaction and that could not have been predicted from the properties of the participants considered in isolation.

The concept sits at the intersection of every other idea Follett developed. Power-with is the political condition that makes creative experience possible; without genuine shared power, participants cannot contribute fully, and the interaction degenerates into command-and-compliance. The law of the situation is the epistemological condition; the group must be oriented toward the genuine requirements of the work rather than toward the preferences of the person with the most authority. Integration is the process through which creative experience unfolds; the reconception of conflicting positions into a higher-order solution is itself the creative act. Circular response is the mechanism; the continuous mutual modification of participants through interaction is what generates the emergent insight. Constructive conflict provides the raw material; without genuine difference, there is nothing to integrate, and without integration, there is no creation.

Creative experience is what it feels like from the inside when all of these conditions are met simultaneously. And the question the AI moment poses with new urgency is whether the introduction of AI tools into the organizational process enhances or diminishes the conditions under which creative experience occurs.

The evidence from The Orange Pill suggests both — and the distinction between the two outcomes maps precisely onto the organizational choices the previous chapters have described.

When the Trivandrum engineers discovered capabilities they did not know they possessed — the backend specialist building interfaces, the designer implementing features end to end — they were having creative experience. The discovery was genuine. The capabilities were not merely transferred from the AI to the human; they emerged from the interaction between the human's existing knowledge and the AI's capacity to bridge the gap between that knowledge and domains the human had not previously accessed. The engineers were changed by the process. Their understanding of their own capacities expanded. Their sense of what was possible shifted. The experience was developmental in the deepest sense: it produced not just new outputs but new people, people with broader vision and richer judgment than they had possessed before.

But The Orange Pill also describes a contrasting experience — the author catching himself at three in the morning, writing not because the work demanded it but because he could not stop. The exhilaration had drained away hours earlier. What remained was compulsion — the grinding momentum of a person who had confused productivity with aliveness. This is not creative experience. It is its pathological mirror image: productive exhaustion, the state in which the tools generate outputs without the human generating meaning.

Follett would have diagnosed the difference with characteristic precision. Creative experience requires the engagement of the whole person — not merely the intellect executing prompts but the aesthetic sensibility evaluating whether the output is genuinely good, the moral sense asking whether the product serves someone beyond the builder, the embodied intuition that registers rightness or wrongness before conceptual analysis can articulate why. When these faculties are engaged, the work is creative experience regardless of whether AI tools are involved. When they are disengaged — when the human has become a relay station between the AI's suggestions and the AI's execution, reviewing outputs without genuinely evaluating them — the work is production without experience, and the human who performs it is depleted rather than developed.

The organizational conditions that determine which outcome prevails are the conditions this book has described. Power-with engages the whole person because it treats the person as a contributor rather than an executor. The law of the situation engages genuine problem-solving because it requires the group to read the situation rather than merely follow instructions. Integration engages the creative faculty because it demands the reconception of conflicting positions into novel solutions. Constructive conflict engages the critical faculty because it requires the courage to disagree and the discipline to transform disagreement into insight. Invisible leadership engages autonomy because it creates conditions for self-direction rather than imposing external control.

Remove any of these conditions and the creative quality of the experience degrades. Power-over reduces the worker to an executor; the law of personal authority reduces problem-solving to compliance; compromise reduces integration to the splitting of differences; conflict suppression reduces the critical faculty to conformity; visible leadership reduces the team to an audience for the leader's performance. In each case, the AI tools continue to function. The outputs continue to appear. But the experience has ceased to be creative in Follett's sense, and the human beings engaged in it have ceased to develop.

This is why the organizational choices described in the preceding chapters are not merely strategic. They are developmental. An organization that exercises power-with, that derives authority from the situation, that integrates rather than compromises, that cultivates constructive conflict, that practices invisible leadership — this organization produces creative experience as a structural feature of its operation. And creative experience is the mechanism through which organizational intelligence is built. Not because creative experience is pleasant, though it often is. Because creative experience is the only process through which human beings develop the judgment, the evaluative rigor, and the capacity for genuine innovation that the AI age demands.

The concern that AI eliminates the productive struggle through which understanding develops is legitimate — but only when the organizational context fails to redirect struggle to a higher level. When the context supports the redirection, what emerges is not the absence of struggle but a different kind of struggle: harder, more consequential, more genuinely creative. The struggle to determine what should be built rather than how to build it. The struggle to integrate conflicting perspectives rather than to implement a predetermined solution. The struggle to evaluate whether an AI-generated output is genuinely good rather than merely competent. These are more demanding forms of engagement than the implementation struggles they replace, and they produce more developmental experience — if the organizational context supports them.

Follett would have observed that this conditional — if the organizational context supports them — is the crux of the matter. The AI tools do not determine whether the experience is creative. The organizational context does. And the organizational context is the product of choices that leaders, teams, and institutions make about the kind of work they value, the kind of contribution they reward, and the kind of development they invest in.

A society in which work is experienced as creative engagement is a society in which human beings develop. A society in which work is experienced as sophisticated compliance — polished outputs without genuine thought, impressive production without developmental experience — is a society in which human beings atrophy. The tools are identical in both cases. The difference is the organizational context, and the organizational context is ours to shape.

Creative experience is not a luxury to be preserved through nostalgia for pre-AI working conditions. It is the mechanism through which organizations build the collective intelligence that complexity demands. Follett understood this a century before the tools arrived. The tools do not change what creative experience requires. They change what it costs to get it wrong.

---

Chapter 10: The Integrative Organization

Every argument in this book converges on a single question: What kind of organization is worthy of the tools we now possess?

The question is Follett's, though she posed it in different terms. In the 1920s and 1930s, the tools in question were the assembly line, the division of labor, the hierarchical management structures that Frederick Taylor's disciples had designed to maximize factory efficiency. Those tools were powerful. They transformed material conditions at a scale previous centuries could not have imagined. And they posed a question the management theorists of Follett's era were only beginning to formulate: Was the organization these tools made possible the best that could be built, or merely the most efficient? Was efficiency the right metric, or was there something more important that the efficiency metric concealed?

Follett's answer was that efficiency was necessary but not sufficient, and that what the efficiency metric concealed was the developmental capacity of the organization — its ability to grow the capabilities of its members, generate knowledge through the integrative process, and respond to unpredictable challenges with creative intelligence no predetermined plan could substitute for. The organization maximizing efficiency at the cost of developmental capacity was consuming its own future: achieving short-term productivity by depleting the long-term resource of collective intelligence.

The AI moment poses the same question at a different scale. A single individual with AI tools can produce output that would have required twenty people a year ago. A team equipped with AI tools and operating within a well-designed context can produce output that would have required a department. The expansion is real, measurable, and accelerating.

And the question remains: Is the organization that maximizes this capability the best that can be built?

The integrative organization — the organization that Follett's century of thinking describes and that the AI moment makes newly urgent — is defined not by its technology but by its commitments. It is committed to power-with rather than power-over: amplifying the capabilities of every member rather than concentrating capability at the top. It derives authority from the law of the situation rather than from the illusion of final authority: grounding decisions in the collective reading of what the work requires rather than in any individual's hierarchical position. It pursues integration rather than compromise: reconceiving conflicts at a higher level rather than splitting differences. It cultivates constructive conflict: treating disagreement as raw material for intelligence rather than as a symptom of dysfunction. It practices invisible leadership: creating conditions for collective intelligence rather than concentrating brilliance in a single charismatic node. It values creative experience: measuring success by the development of its members' capacities alongside the quality of its outputs. And it recognizes the team as the unit of intelligence: understanding that the emergent properties of genuine collaboration cannot be replicated by any aggregation of individually augmented workers.

These commitments are not aspirational. They are structural requirements, derived from the analysis of how organizational intelligence is generated and sustained. An organization that meets them will produce the intelligence that complexity demands. An organization that fails to meet them will produce impressive outputs that conceal a progressive degradation of its collective capacity to think, adapt, and create.

The danger is clear enough. AI tools deployed within power-over frameworks — command-and-compliance hierarchies, headcount-elimination strategies that treat workers as costs to be minimized — will accelerate dysfunction. These frameworks will use AI to produce organizations that are faster, cheaper, and progressively less intelligent. The possibility is equally clear. AI tools deployed within integrative frameworks — situation-based authority structures, coordination-rich cultures, team-centered designs — will amplify intelligence. These frameworks will produce organizations not merely more productive but more capable, more adaptive, more developmental for the human beings who constitute them.

There is a genuine tension here that an honest accounting must not smooth away. Follett's framework assumes that organizations have time — for integration, for constructive conflict, for the patient mutual adjustment through which collective intelligence emerges. The AI moment, as described throughout The Orange Pill, is characterized by speed that may be structurally incompatible with Follettian patience. The thirty-day sprint to CES, the two-month adoption curve of ChatGPT, the quarterly pressure that converts productivity gains into headcount reduction before the integrative alternative has time to demonstrate its value — these temporal pressures are real, and they push organizations toward the power-over deployment that is faster to implement and easier to justify in the language of financial return.

Follett would not have denied this tension. She would have insisted that the tension itself is a conflict requiring integration, not a choice requiring capitulation. The underlying need of speed — responsiveness to markets, competitive positioning, the legitimate pressure to deliver value quickly — does not require the elimination of the integrative process. It requires the redesign of the integrative process to operate at the pace the moment demands. The vector pods described in The Orange Pill are one such redesign: small, fast, integrative units that make decisions through collective reading of the situation rather than through hierarchical command, operating at a pace that the traditional committee process could never support.

The redesign is possible. It is also demanding. It requires leaders who are willing to invest in the infrastructure of integration — the trust, the norms of constructive conflict, the evaluative practices, the coordination mechanisms — even when the short-term calculus favors the simpler alternative of centralized command. It requires organizations that measure intelligence alongside productivity, development alongside output. It requires a concept of organizational success that includes the quality of the experience the organization provides to its members — not as a morale benefit but as the mechanism through which the organization builds the collective judgment that constitutes its most valuable and least replaceable asset.

Follett understood something about organizations that the efficiency-maximizing tradition systematically obscures: that organizations are not machines for producing outputs. They are communities for developing human beings. The outputs matter. But the outputs are a consequence of the development, not its substitute. The organization that develops its members' capacities — through power-with, through integration, through constructive conflict, through creative experience — produces better outputs as a structural consequence of that development. The organization that optimizes outputs at the cost of development will discover, as every optimization that ignores its own preconditions eventually discovers, that the thing it has optimized away was the thing that made the optimization possible.

"It is right now because established mindsets and institutions are breaking down, creating an opening for a new disruptive vision," scholars of Follett's work have recently written. "We now find ourselves on the cusp of another large-scale societal inflection." The inflection they identify is the AI transition. The disruptive vision they invoke is Follett's — not because Follett anticipated AI, but because her analysis of how organizations generate intelligence is more precise, more structural, and more practically useful than anything the contemporary management literature has produced.

Harsh early reviews of Follett's The New State in 1918 called the author "Orange" — a blend of "Red" and "Yellow," meaning both dangerously radical and insufficiently bold. A century later, the accusation reads differently. The organizational vision Follett articulated was not too radical. It was too early. The tools that could instantiate it at scale — tools that dissolve domain boundaries, amplify individual contribution, enable collective reading of complex situations — did not yet exist.

They exist now.

The tools are in our hands. The framework is before us. And the question is whether we will build organizations worthy of both — organizations that use the most powerful tools in human history to develop the most powerful form of intelligence available: the collective intelligence of human beings working together, with genuine power, on genuine problems, producing genuine creative experience.

That is the integrative organization. That is what Follett knew, nearly a century before the machines arrived. The machines are here. The choice that remains is ours.

---

Epilogue

The question nobody asks in boardrooms is the one that matters most.

I have sat through hundreds of them over the past year. The AI strategy meeting. The organizational transformation session. The quarterly review where someone puts the twenty-fold productivity number on a slide and the room goes quiet with the particular silence of people calculating headcount. The questions that fill that silence are always the same. How fast can we deploy? What is the cost reduction? How many seats can we eliminate?

These are reasonable questions. They are also the wrong questions, or rather, they are the right questions inside the wrong framework — and Follett's work showed me the framework I had been missing.

I knew, when I made the decision to keep and grow the team in Trivandrum rather than cutting it, that I was doing the right thing. What I did not have was the language to explain why it was right in terms that went beyond instinct or ethics. I could say it felt wrong to eliminate the team. I could say the team's intelligence was worth more than the headcount savings. But I could not articulate the structural argument — the reason the team's intelligence was irreplaceable, the mechanism through which that intelligence was generated, the specific organizational conditions that made the amplification of the team categorically different from the replacement of it.

Follett gave me that language.

Power-with is not generosity. It is strategy. The team whose members are amplified rather than replaced generates a form of intelligence — emergent, integrative, self-correcting — that no configuration of individual human-AI dyads can replicate. I had seen this. Follett explained why I had seen it, and the explanation was more rigorous than anything in the contemporary management literature.

The law of the situation is not a platitude about data-driven decisions. It is a structural principle about where intelligence resides — in the collective reading of the situation by everyone who possesses relevant knowledge, not in the judgment of the person who happens to sit at the top of the chart. The AI tools make this principle more achievable than ever, because they let every team member engage with dimensions of the situation that specialization previously blocked. But the tools only deliver on the principle if the organizational culture distributes authority rather than concentrating it. I had been building that culture by instinct. Follett showed me the architecture.

Integration is not compromise. The sentence seems obvious until you watch an organization try to navigate the AI transition and realize that almost every "strategy" being adopted is a compromise — some tasks for the humans, some for the machines, the territory divided by negotiation rather than reconceived through creativity. Follett's insistence that conflicts can be resolved by discovering what both parties actually need, rather than by splitting what both parties initially demand, is the single most practical insight I have encountered for designing the AI-augmented workplace. The engineer does not need to keep writing boilerplate. She needs her expertise to remain valuable. The organization does not need to eliminate the engineer. It needs to reduce cost and increase output. Both needs are met — not by dividing the work but by reconceiving it at a higher level. This is what happened in Trivandrum, and I did not have the word for it until I read Follett.

The thing that unsettles me most is her analysis of constructive conflict. I have watched teams reach for the AI to settle disagreements — asking Claude to adjudicate between two approaches, accepting the smooth, confident response as resolution. The response splits the difference. It sounds like wisdom. It is compromise dressed as integration, and it short-circuits the very process through which the team's best thinking emerges. The disagreement was the intelligence. The friction between perspectives was the raw material for an insight neither perspective could have reached alone. And the AI, by resolving the conflict prematurely, destroyed the resource the conflict contained.

I do not think Follett would have opposed AI. Her intellectual tradition — the tradition of distributed intelligence, situated knowledge, emergent organizational capacity — is the tradition that produced AI. Herbert Simon, who helped found artificial intelligence as a field, built on foundations Follett helped establish. The irony of using AI to concentrate the very intelligence that AI's own intellectual lineage recognized as fundamentally distributed would not have been lost on her.

But she would have insisted, with the quiet confidence of someone who had spent decades studying how organizations actually work, that the tools do not determine the outcome. The organizational choices do. Power-over or power-with. Personal authority or the law of the situation. Compromise or integration. Conflict suppression or constructive conflict. Visible leadership or invisible leadership. Production or creative experience. Individual optimization or team intelligence.

These are the choices that determine whether the amplifier amplifies intelligence or dysfunction. The tools are the same in either case. The signal is ours.

What I keep coming back to — what sits with me at the end of this particular climb through the tower — is her phrase creative experience. Not creativity as a buzzword. Experience as the mechanism through which human beings develop. The idea that the point of organizational life is not merely to produce things but to produce people — people with broader vision, richer judgment, deeper capacity than they possessed before the work began. That this production of people is not a side effect of good management but its primary purpose, and that the outputs the organization generates are a consequence of the development, not its substitute.

If there is a single sentence I would carry from Follett's work into every boardroom conversation about AI strategy, it is this: The most successful leader of all is one who sees another picture not yet actualized. Not the picture the dashboard shows. Not the picture the quarterly forecast projects. The picture that does not yet exist — the organization that could be built if the tools were used to develop the people rather than to replace them. The picture in which every member of the team is amplified, every perspective is integrated, every conflict is used, and the intelligence that emerges is greater than anything any individual, however augmented, could produce alone.

That picture is not yet actualized. But the tools to actualize it are in our hands.

Edo Segal

In 1925, Mary Parker Follett told a room full of industrialists that orders should come from the work, not the boss -- and that the most powerful organizations are the ones that grow capability rather than concentrate it. She was ignored for decades. The AI revolution has made her impossible to ignore.

This book applies Follett's framework -- power-with over power-over, integration over compromise, the team as the irreducible unit of intelligence -- to the most consequential organizational question of our time. When a single person with AI tools can produce the output of twenty, the temptation to eliminate the other nineteen is overwhelming. Follett's century-old analysis reveals what that arithmetic conceals: the emergent intelligence of genuine collaboration, which no configuration of individuals, however augmented, can replicate.

The tools are the most powerful in human history. Follett's work asks whether our organizations are worthy of them.

Mary Parker Follett
“Power is not a pre-existing thing which can be handed out to someone, or wrenched from someone.”
— Mary Parker Follett
WIKI COMPANION

Mary Parker Follett — On AI

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Mary Parker Follett — On AI uses as stepping stones for thinking through the AI revolution.