By Edo Segal
I have spent the past year watching organizations tear themselves apart.
Not through dramatic failures. Not through competitive pressures or strategic blunders. Through the quiet dissolution of cooperation. Teams of brilliant people, each amplified by AI tools that would have seemed magical just months ago, producing less value together than they could produce alone. The tools work perfectly. The humans have forgotten how to work with each other.
This is why I keep returning to Chester Barnard, a telephone executive who died in 1961, long before the digital age, whose 1938 book on organizational leadership has become the most relevant management text for the age of AI. Not because he predicted the technology. Because he understood something about human cooperation that the technology has made impossible to ignore.
Barnard's central insight was radical in 1938 and remains radical today: organizations are not machines to be optimized. They are cooperative systems that exist only as long as people choose to participate in them. When AI amplifies individual capability to the point where any person can do what a team used to do, that choice becomes more conscious, more voluntary, and more fragile than ever before.
I wrote The Orange Pill about the moment humans found themselves in intellectual partnership with thinking machines. This book is about what comes next. How do we maintain the cooperative structures that give that partnership direction? How do we lead when everyone has superpowers but no one remembers why they need each other?
Barnard answers these questions with a precision that cuts through the noise of our current moment. He shows us that authority flows upward from the people who choose to accept it, not downward from the people who think they possess it. That communication is about meaning, not information. That the executive's primary function is moral: creating and maintaining the conditions under which people want to cooperate.
Every page of this book spoke to challenges I see daily. The engineer who can build anything but doesn't know what's worth building. The manager whose authority evaporated when her team gained direct access to the same AI tools she uses. The organization that celebrates productivity metrics while its best people quietly leave, taking their judgment elsewhere.
Barnard saw this pattern before. The telephone system he managed required the willing cooperation of thousands of people across vast distances to maintain infrastructure the public had come to consider essential. The system could not function through command alone. It required people to choose, continuously, to contribute their best effort to a shared purpose. When they stopped choosing to cooperate, the system failed, no matter how sophisticated the technology.
We are building the most powerful tools in human history. Whether they serve us or diminish us depends entirely on our capacity to cooperate wisely in their deployment. That capacity is not a technical problem. It is a cooperative problem. And Chester Barnard, writing from his experience managing one of the most complex cooperative enterprises of his era, provides the clearest framework I have found for solving it.
The AI revolution is not a technology story. It is a cooperation story. This book is the manual for that story, written by someone who understood cooperation better than anyone before or since.
-- Edo Segal & Opus 4.6
Chester Irving Barnard (1886–1961) was an American business executive and organizational theorist who spent most of his career at American Telephone and Telegraph, culminating in twenty-one years as president of the New Jersey Bell Telephone Company. Despite leaving Harvard before completing his degree, Barnard became one of the most influential management thinkers of the twentieth century through his groundbreaking 1938 work "The Functions of the Executive." His central insight—that organizations exist only through the willing cooperation of their participants—challenged the mechanistic management theories of his era and introduced concepts like the "acceptance theory of authority" and the economy of organizational incentives. After his corporate career, Barnard served as president of the Rockefeller Foundation (1948–1952) and chairman of the National Science Foundation (1952–1954). His framework, developed from practical experience managing complex cooperative systems, remains the most sophisticated analysis of organizational leadership ever written and has gained renewed relevance in the age of artificial intelligence, where the maintenance of cooperative structures has become the defining challenge of executive leadership.
Chester Irving Barnard spent twenty-one years as president of the New Jersey Bell Telephone Company. He was not an academic. He did not hold a doctorate. He left Harvard before completing his degree, had earlier paid his way through Mount Hermon Preparatory School by tuning pianos, entered American Telephone and Telegraph as a statistician at twenty-three, and spent the next four decades inside the machinery of one of the largest and most complex organizations the world had ever produced. When he sat down in the 1930s to write The Functions of the Executive, published in 1938, he was not theorizing from a library carrel. He was distilling decades of practical experience into a framework that would challenge nearly every assumption the management world held dear.
The management science of Barnard's era treated organizations as machines. Frederick Taylor's scientific management had established the prevailing orthodoxy: organizations are systems of tasks, workers are units of labor, and the manager's job is to optimize the machine for maximum efficiency. Authority flows downward from the top of the hierarchy. Workers execute. Managers command. The organization is a mechanism, and the executive is its engineer. Max Weber's theory of bureaucracy reinforced the point from a different direction: rational-legal authority, formal rules, clear hierarchies, impersonal procedures. The manager's job is to administer the system. The worker's job is to comply.
Barnard rejected this entirely, and the rejection was informed by his direct experience managing one of the most complex organizations of his era. Organizations, he insisted, are not machines. They are cooperative systems. They exist because individuals, each with their own motives, limitations, and capacities, choose to combine their efforts toward a shared purpose. The word "choose" is essential. Cooperation is not compelled. It is offered, and it must be continuously earned. The moment the organization ceases to provide sufficient reason for individuals to cooperate, the cooperative system dissolves -- not through dramatic collapse, but through the quiet withdrawal of effort, attention, and commitment that constitutes the slow death of any organization.
This was a radical claim in the 1930s, and it was informed by the peculiar conditions of Barnard's own experience. The telephone system he managed was one of the most intricate cooperative enterprises in human history. Thousands of operators, technicians, engineers, and administrators had to coordinate their efforts across vast distances, in real time, to maintain a communication network that the public had come to depend on as essential infrastructure. The system could not function through command alone. It required the willing cooperation of every participant, from the executive suite to the repair crews working in winter storms on telephone poles across New Jersey. Barnard saw, in the daily operations of this system, something that the management theorists in their offices could not see: that cooperation is fragile, that it must be cultivated rather than commanded, and that the executive's primary function is not to issue orders but to create the conditions under which people choose to cooperate.
This was not merely an observation about telephone companies. It was, Barnard argued, a universal truth about organized human activity. Every organization, from the smallest team to the largest corporation, from the military to the church to the university, exists because people cooperate. And people cooperate because the organization offers them something -- material compensation, meaningful work, social belonging, a sense of purpose -- that exceeds the burdens the organization imposes. When the balance shifts -- when the burdens exceed the inducements -- cooperation erodes, and the organization, however impressive its formal structure, begins to fail.
Barnard called the range of directives that workers will accept without careful deliberation the "zone of indifference." Within this zone, orders are followed more or less automatically, not because the worker has evaluated each one and found it worthy, but because the overall balance of inducements and contributions makes routine compliance the path of least resistance. The zone is wider when the organization provides generous inducements -- material compensation, meaningful work, social belonging, a sense of purpose. The zone narrows when inducements are inadequate, when purpose is unclear, or when the executive's behavior undermines the trust that lubricates every cooperative interaction.
Barnard also insisted on a distinction between formal and informal organization. The formal organization is the official structure -- the org chart, the job descriptions, the policies and procedures. The informal organization is everything else: the actual patterns of communication, the trust networks, the shared norms, the social relationships, the tacit knowledge that flows through hallway conversations and shared experience. Barnard argued that the informal organization is not a deviation from the formal structure. It is a necessary complement, performing functions -- communication, norm-setting, social integration, protection -- that the formal organization cannot perform. The executive who ignores or suppresses the informal organization cripples the cooperative system, because the informal organization is where the life of the organization actually resides.
These ideas -- cooperation as the foundation, acceptance as the basis of authority, inducements as the currency of organizational life, the interplay of formal and informal structures -- constitute a framework of remarkable power and precision. Barnard developed them not as abstract propositions but as practical insights drawn from the daily work of running one of the most complex organizations of his era. The framework was tested in the real world before it was written down, and the testing gave it a practical depth that purely academic theories of organization characteristically lack. The ideas were ahead of their time in 1938. They are even more relevant in 2026, because the AI revolution has disrupted every variable in Barnard's cooperative equation simultaneously, and the executives who are navigating the disruption most successfully are the ones who, knowingly or not, are applying Barnard's principles.
The AI revolution did not arrive as a management challenge to be solved with better processes or updated org charts. It arrived as a cooperation challenge, the most profound cooperation challenge since the industrial revolution forced millions of formerly independent craftspeople into factory organizations whose cooperative demands were entirely unlike anything they had previously experienced. The industrial revolution required people to cooperate in physical proximity, on fixed schedules, under hierarchical authority. The AI revolution requires people to cooperate across dissolved role boundaries, with amplified individual capability, in fluid organizational structures whose shape changes faster than the formal organization can track. The cooperative challenge is different in form but identical in kind: how do we maintain the conditions under which people choose to combine their efforts toward shared purposes when the conditions of combination have been fundamentally transformed? The fundamental question the AI revolution posed to every organization was not "How do we use these tools?" but "How do we maintain the cooperative system when the tools have changed every variable in the cooperative equation?"
Consider what changed. The AI tools -- Claude Code and its successors -- reduced the gap between what a person could imagine and what that person could build to nearly zero for a significant class of work. A single engineer could now produce what had previously required a team of twenty. A non-technical founder could prototype a working product over a weekend. The barriers between disciplines dissolved: backend engineers built user interfaces, designers implemented complete features, individual contributors operated as one-person product teams.
Every element of Barnard's cooperative system was disrupted simultaneously. The inducements the organization offered -- compensation, career advancement, professional recognition -- had been calibrated to a world where execution was scarce and therefore valuable. When execution became abundant, the old inducement structure lost its coherence. The authority of the executive, which had rested in part on informational and capability asymmetries between leader and led, was undermined when every worker gained access to the same amplifying tools the executive used. The zone of indifference, sustained by the worker's dependence on the organization for the resources to do meaningful work, contracted as individual capability expanded. The formal organizational structures -- designed to coordinate sequential handoffs between specialists -- became obsolete when any individual could work across domains. And the purpose of the organization was thrown into question when any individual could pursue purpose independently.
Barnard died in 1961, thirty-five years before the commercial internet and sixty-four years before the AI threshold. He never saw a computer that could hold a conversation, never imagined a tool that could write software from plain-language description. Yet his framework provides the most precise diagnostic instrument available for understanding what the AI revolution demands of organizational leaders.
This is because Barnard asked the right question. Not "How do we make the organization more efficient?" -- that was Taylor's question, and it remains the default question of most management thinking. But "How do we maintain the conditions under which people choose to cooperate?" That question does not change when the tools change. It becomes more urgent, more difficult, and more consequential, but it remains the question. The AI revolution did not introduce a new problem. It amplified an old one -- the oldest problem in organizational life -- to a scale where ignoring it is no longer possible.
In the age of AI amplification, the maintenance of the cooperative system has become the only executive function that matters. Everything else -- technical implementation, strategic planning, operational execution -- can be assisted, augmented, or performed by the tools. The cooperative system cannot, because cooperation is a human decision, made by human beings, for human reasons, and no tool, however capable, can make that decision on their behalf.
Barnard's framework also illuminates a dimension of the AI revolution that purely technical analyses consistently miss: the emotional infrastructure of cooperation. Every cooperative system depends on what Barnard described as the feeling of being part of something larger than oneself, a feeling that cannot be manufactured through incentive design or communicated through mission statements alone. It arises from the daily experience of contributing to a shared purpose alongside people one trusts. When AI restructures the patterns of contribution, dissolving the team-based workflows through which participants developed this feeling, the emotional infrastructure of cooperation is disrupted alongside the productive infrastructure. The executive who attends only to the productive dimension, who celebrates the efficiency gains while ignoring the emotional dislocation, will find the cooperative system eroding even as the organization's output increases. The paradox is characteristic of the AI moment: an organization can become more productive and less cooperative simultaneously, and the loss of cooperation will eventually undermine the productivity that the tools made possible.
The telephone executive who understood this in 1938 would recognize the challenge facing every organization in 2026. The technology is unrecognizably different. The cooperative problem is the same. And the quality of the executive's response to that problem will determine whether the organization survives, flourishes, or dissolves into a collection of talented individuals who have lost their reason to work together.
Chester Barnard's most subversive contribution to management theory was his insistence that authority does not flow downward from the top of the organizational hierarchy. Authority, in Barnard's framework, flows upward from the people who choose to accept it. A directive is authoritative only when four conditions are met simultaneously: the recipient understands the communication, believes it is consistent with the organization's purpose, believes it is compatible with her personal interests, and is able to comply. If any of these conditions fails, the directive is not authoritative. It is merely a statement made by someone with a title.
This acceptance theory of authority was uncomfortable in the 1930s, when the prevailing management science assumed that authority was a property of the organizational position, not a grant from the subordinate. It is an even more uncomfortable insight in the age of AI, because the tools have given every worker in the organization the capacity to independently evaluate whether each of Barnard's four conditions is met -- and to act on that evaluation with a speed and capability that renders the old compliance model functionally obsolete.
Before AI, the executive's authority rested substantially on information asymmetry. The executive knew things the worker did not: the strategic context, the competitive landscape, the financial constraints, the priorities that justified the directive. The worker who received a directive she did not fully understand could assume -- often correctly -- that the executive had information justifying the request. This assumption supported compliance even in the absence of full understanding, and the zone of indifference was wide enough to accommodate considerable opacity in organizational communication.
AI has compressed this information asymmetry dramatically. The worker who receives a directive she does not understand can now use AI tools to research the strategic context, analyze the competitive landscape, model the financial constraints, and evaluate the priorities that supposedly justify the request. She can do this in hours, not weeks. She can do it without asking the executive for access to information the executive may prefer to keep asymmetric. The foundation of positional authority has been eroded by the same tools that amplified the worker's productive capability.
The erosion does not mean authority has disappeared. It means that authority now rests more heavily than ever on Barnard's two remaining conditions: consistency with organizational purpose, and compatibility with the worker's personal interests. The executive who cannot articulate a purpose that the worker recognizes as genuine, or who issues directives perceived as misaligned with the worker's interests and values, will find the zone of indifference contracted to a narrow band of routine administrative compliance. The worker will follow the rules about expense reports and meeting schedules. She will not contribute her judgment, her creativity, or her discretionary effort -- the contributions that, in the AI age, constitute the only contributions that cannot be produced by the tools themselves.
The contraction of the zone of indifference is perhaps the most significant organizational consequence of AI amplification, and Barnard's framework diagnoses it with precision. When workers depended on the organization for the capability to do meaningful work, the zone was wide. The organization provided the infrastructure -- the teams, the tools, the institutional knowledge, the platform -- without which the individual could accomplish little. This dependence expanded the zone of indifference because the worker's need for the organization's resources outweighed the cost of complying with directives she might not have chosen on her own.
AI has fundamentally reduced this dependence. The worker who possesses judgment and has access to AI tools can accomplish meaningful work outside the organization. She can build products independently, pursue projects that interest her, create value without the organizational infrastructure that once made the organization indispensable. The organization is no longer the only venue for purposeful work, and when it is not, the worker's willingness to accept directives without evaluation diminishes. She evaluates more carefully. She accepts less automatically. The zone contracts.
This dynamic plays out in observable ways across every organization that has adopted AI tools. The most talented workers -- the ones whose judgment is most valuable -- are precisely the ones whose zone of indifference contracts most dramatically, because they are the ones with the most attractive alternatives. They can work independently. They can join smaller organizations that offer more purpose and more autonomy. They can start their own ventures, because the tools have reduced the execution barrier to the point where a person with good judgment and access to AI can build a viable product. The organization that fails to earn the acceptance of these workers -- that relies on positional authority, on compensation alone, on the inertia of employment -- will lose them, quietly and permanently.
For the executive, the contraction of the zone of indifference is not a problem to be solved but a reality to be navigated. The solution is not to reassert positional authority through stronger incentives or stricter controls. Surveillance tools exist, and the temptation to use them is real. But surveillance produces compliance, not cooperation, and compliance is exactly what the AI age has made insufficient. The work that matters -- the judgment work, the creative work, the morally grounded decision-making that no tool can perform -- requires genuine cooperation, and genuine cooperation requires genuine acceptance, and genuine acceptance requires that the executive earn it.
This is what Barnard meant by the acceptance theory of authority, and it is what makes his framework the most relevant management theory for the AI age. The executive who leads through demonstrated competence -- whose decisions are sound, whose purposes are clear, whose moral commitments are consistent -- earns authority that is strengthened, not weakened, by the worker's capacity to evaluate independently. The more the worker examines the executive's judgment, the more she finds it worthy of trust, and the more trust she extends. The zone of indifference expands not through dependence but through demonstrated merit.
Consider what this means in practice. The executive who shares her reasoning openly -- who explains not just what she has decided but why, what alternatives she considered and rejected, what values she prioritized and what tradeoffs she accepted -- invites the kind of evaluation that either confirms her judgment or reveals its gaps. If her judgment is confirmed, her authority is strengthened. If her reasoning has gaps, she benefits from the correction. In either case, the cooperative relationship is deepened by transparency rather than undermined by it.
The executive who conceals her reasoning -- who issues directives without explanation, who relies on positional authority to foreclose questions, who treats the worker's capacity for independent evaluation as a threat rather than a resource -- will find that the worker's evaluation proceeds anyway, in the absence of the executive's input, and the conclusions reached in private are rarely as charitable as the conclusions that might have been reached through open dialogue.
Barnard anticipated this dynamic, though he could not have anticipated the technology that would accelerate it. He observed that the most effective executives were not the ones with the strongest hierarchical authority but the ones whose personal qualities inspired genuine acceptance. Their directives were followed not because the workers had no choice but because the workers had evaluated the directives and found them worthy. The acceptance was informed, not blind. And informed acceptance, Barnard argued, produces more durable cooperation than blind compliance ever can.
The AI age has universalized this dynamic. Every executive now leads a workforce capable of informed acceptance -- or informed rejection. The executive's task is no longer merely to issue directives that fall within the zone of indifference. It is to be the kind of leader whose directives are accepted because they deserve acceptance: because they are wise, because they serve a genuine purpose, because they are issued by someone whose judgment has been tested repeatedly and found trustworthy.
This is a higher standard of leadership than any previous generation of executives has been held to. It is also a more honest standard, because it makes explicit what was always true but often obscured by information asymmetry and positional power: that authority is a relationship, not a possession. It is granted by the led, not seized by the leader. And it is maintained only as long as the leader's behavior justifies the grant.
Barnard would have recognized the AI-amplified worker not as a new kind of challenge but as the fullest expression of the challenge he had been writing about all along. The acceptance theory of authority was always true. The technology has simply made it impossible to ignore.
The acceptance theory also has implications for how organizations should think about the distribution of AI tools themselves. The executive who distributes AI access as a privilege -- granting more powerful tools to favored employees while restricting others -- is using the tools to reinforce positional authority rather than to enhance the cooperative system. The executive who distributes AI access broadly, ensuring that every participant has the tools needed to evaluate directives, contribute judgment, and work at the frontier of their capability, is building the kind of informed workforce that Barnard argued was both a challenge to manage and an asset beyond price. The informed worker is harder to lead than the ignorant worker, but the cooperation of the informed worker, when it is earned, is genuine rather than extracted, durable rather than fragile, and productive rather than merely compliant.
The geopolitical dimension of this shift is equally significant. In nations where hierarchical authority is culturally embedded and rarely questioned, the AI tools are creating a workforce whose capabilities exceed the organizational structures designed to contain them. The tension between amplified individual capability and traditional organizational authority is not confined to Silicon Valley boardrooms. It is a global phenomenon, and the organizations that resolve it successfully -- that find ways to channel amplified capability through cooperative structures that earn rather than demand participation -- will define the organizational models of the coming century.
Chester Barnard drew a distinction that remains among the most important in organizational theory: the distinction between formal organization and informal organization. Formal organization is the official structure -- the roles, the reporting relationships, the designated responsibilities, the policies and procedures that appear on organizational charts and in employee handbooks. Informal organization is everything else: the actual patterns of communication, the trust networks, the unwritten norms, the social relationships, the tacit knowledge that flows through conversations, shared meals, and the accumulated experience of working together over time.
Barnard did not treat informal organization as a deviation from the formal structure or a problem to be corrected. This was a departure from the management orthodoxy of his era, which viewed any organizational activity not captured in the formal structure as inefficiency, insubordination, or distraction. Barnard argued the opposite: informal organization is a necessary complement to formal organization, performing functions that formal organization cannot perform. The informal organization establishes norms of behavior that no policy manual can legislate with sufficient nuance. It creates communication channels that bypass the formal bottlenecks through which information would otherwise crawl at bureaucratic speed. It develops the trust relationships that enable cooperation to occur even when formal incentives are inadequate. And it provides the social conditions -- belonging, recognition, shared identity, the sense of being part of something -- that make organizational life tolerable and sometimes fulfilling.
The formal organization provides the skeleton. The informal organization provides the flesh, the blood, and the nervous system. Neither can function without the other. A skeleton without flesh is technically complete but incapable of movement. A body without a skeleton is energetic but structurally unsound, capable of momentary action but not of sustained, directed coordination.
Barnard developed this insight by observing the organizations he managed. At New Jersey Bell, the formal organization specified who reported to whom, which departments handled which functions, and how information was supposed to flow through the hierarchy. But the actual work -- the coordination that kept the telephone system running -- depended on informal relationships that the formal structure did not capture. The technician who knew the repair supervisor personally called him directly instead of routing the request through formal channels. The operator who had developed a friendship with someone in engineering shared information about equipment problems before the formal reporting process delivered the same information days later. The informal organization made the formal organization work, and the executives who understood this invested in the informal relationships rather than trying to suppress them.
Barnard also recognized that the informal organization served a protective function that was essential to the cooperative system's survival. When formal directives were poorly conceived or harmful to participants' interests, the informal organization provided the channel through which resistance was organized and communicated. This resistance was not insubordination. It was a feedback mechanism, a signal to the formal leadership that its directives had fallen outside the zone of indifference and required revision. The informal organization's protective function was particularly important during periods of rapid change, when formal directives were likely to be poorly calibrated to conditions that the formal leadership did not yet fully understand. In such periods, the informal organization's resistance to poorly conceived directives functioned as a stabilizing force, slowing the pace of change to the speed at which the cooperative system could absorb it without fracturing.
The AI moment has produced the most dramatic divergence between formal and informal organization in the history of management. Understanding this divergence through Barnard's framework reveals both the opportunity and the danger of the current moment.
When AI tools reduced the gap between intention and execution to nearly zero for a wide range of work, the formal organizational structures that had been designed over decades to coordinate sequential handoffs between specialists became instantaneously obsolete. The formal structure still designated backend engineers, frontend developers, designers, project managers, quality assurance specialists. But the actual work was no longer organized along these lines. Backend engineers were building user interfaces. Designers were implementing complete features end to end. Individual contributors were operating as one-person product teams, producing in days what had previously required cross-functional coordination over weeks.
The formal titles became decorative rather than descriptive. The formal handoffs became unnecessary. The formal coordination mechanisms -- sprint planning meetings, requirements reviews, design-to-development transitions -- continued to exist in the calendar but ceased to correspond to the actual flow of work. The informal organization had reorganized itself around the new reality of AI-amplified individual capability, and the formal organization was a lagging indicator of a transformation that had already occurred beneath it.
Barnard would have recognized this pattern instantly. He observed it in every organization he studied. The formal structure is always, to some degree, out of phase with the informal reality. What distinguishes the AI moment is the speed and magnitude of the divergence. Previous technology transitions created gradual divergences that organizations could address through periodic restructuring -- a reorganization every few years, updated job descriptions, a revised compensation framework. The AI transition created a divergence so rapid and so comprehensive that the formal organization became effectively fictional within weeks.
This divergence creates four specific challenges that Barnard's framework identifies with precision.
The first is the communication challenge. In the AI age, the most critical organizational knowledge is tacit knowledge about how to work with the tools. Which approaches yield useful results. What kinds of judgment the tool cannot supply. Where the tool's outputs should be trusted and where they should be verified. How to frame a problem in language that produces useful responses rather than generic ones. This knowledge is not transmitted through formal channels -- training programs, documentation, process manuals. It is transmitted through the informal network: through conversations between colleagues who are experimenting with the tools, through shared examples that demonstrate what works and what does not, through the osmotic process by which practical knowledge spreads through a community of practitioners who trust each other enough to share their failures alongside their successes.
The executive who invests in the conditions that make this informal communication flourish -- physical proximity or its genuine virtual equivalent, shared experiences, collaborative projects that bring people together across formal boundaries, social events that build the trust relationships through which tacit knowledge flows -- is investing in the infrastructure that enables the organization to learn. These investments appear on a spreadsheet as overhead. They are infrastructure of the most essential kind.
The second is the normative challenge. The tools create new categories of behavior that formal policy has not yet addressed and cannot address with sufficient nuance. When is AI-generated output appropriate without modification? When does AI-assisted work cross the line into AI-dependent work? What constitutes genuine contribution when the tool produces most of the artifact? How much credit should a person take for work the AI performed? These questions arise in infinite variety and require contextual sensitivity that no written policy can provide. They are answered by informal norms, developed through the accumulated judgments of the community about what constitutes good work in the new environment.
The executive's role in the normative process is not to dictate norms but to model them. Barnard was clear that informal norms are shaped more by the behavior of leaders than by their pronouncements. The executive who uses AI thoughtfully, who openly acknowledges what the tool contributes and what her own judgment adds, who treats the tool as an amplifier rather than a substitute for thinking, establishes norms through example that no formal mandate could establish. The executive who uses AI carelessly or who conceals the tool's role in her work establishes different norms, equally through example, and the organization's culture follows the leader's practice, not her policy.
The third is the protective challenge. The informal organization has always served a protective function, establishing norms about pace, effort, and the boundary between work and life that prevent the formal organization from exploiting its participants to the point where cooperation breaks down. In the AI age, this protective function takes on new urgency. The tools create the possibility of work intensity that is difficult for individuals to regulate on their own. The productive engagement that AI-augmented work provides -- the immediate feedback, the visible results, the exhilaration of amplified capability -- can become compulsive, and the formal organization may be tempted to exploit this compulsion for maximum output.
The executive who understands the protective function recognizes that the informal organization's resistance to unsustainable intensity is not a problem to be overcome. It is a diagnostic signal. It is the organization's immune system responding to a genuine threat: the threat that productivity will consume the well-being of the people who produce it. The executive who suppresses this resistance achieves short-term gains at the cost of long-term viability -- a trade that Barnard would have recognized as the classic moral failure of organizational leadership.
The fourth is the recognition challenge. The divergence between formal and informal organization creates a gap between who contributes the most value and who receives recognition and reward. The people who are contributing most through the informal organization -- the backend engineer who builds complete user-facing features, the designer who implements systems end to end, the individual contributor who has become a one-person product team -- are being evaluated against formal criteria that do not capture their actual contributions. The formal metrics measure the job they were hired for, not the job they are actually doing. Their most valuable contribution -- the judgment to identify a need and the capability to address it across domain boundaries -- is invisible to the formal evaluation system.
The executive's task is to close this gap: to restructure the formal organization to reflect the informal reality. This means redesigning job descriptions, compensation structures, and career ladders to accommodate cross-domain contribution. It means creating evaluation systems that measure judgment and impact, not just execution within a narrow specialty. It means recognizing that the formal structure must follow the informal reality, not the other way around.
Barnard would have cautioned that this restructuring must preserve the essential functions of formal organization: clarity about expectations, fairness in evaluation, predictability in career progression. The goal is not to replace formal structure with informal anarchy. The goal is alignment -- ensuring that the map corresponds to the territory, so that the organization's recognition systems reflect the actual patterns of contribution that its members have developed in response to the new reality.
Chester Barnard devoted considerable attention to what he called the economy of incentives -- the complete system of rewards, material and non-material, that organizations use to secure the cooperation of their participants. Barnard understood what many of his contemporaries did not: that money alone is insufficient to maintain a cooperative system. People do not work only for compensation. They work for the satisfaction of exercising their capabilities, for the sense of contributing to something larger than themselves, for social connection and belonging, for the recognition of their peers, for the opportunity to grow and develop, for the sense that their work has meaning beyond its economic return. The economy of incentives is not a payroll system. It is the total exchange between the organization and the individual, encompassing every material and non-material benefit the organization provides, and the cooperative system survives only as long as the individual perceives the total exchange as favorable.
Barnard classified incentives into specific inducements -- targeted at particular individuals, including material compensation, opportunities for personal distinction, and desirable physical conditions of work -- and general inducements -- available to all participants, including the social compatibility of the working group, shared identity, the sense of enlarged participation in the organization's purpose, and communal feeling. Barnard's insight was that the general inducements are often more powerful than the specific ones. A person will accept lower compensation to work with people she respects, on problems she finds meaningful, in an organization whose purpose she believes in. A person will leave a well-compensated position that provides no meaning, no community, and no sense of purpose, because the non-material dimensions of the exchange have become intolerably deficient.
The AI amplification of individual capability has disrupted both categories of incentive in ways that threaten the cooperative system unless the executive recognizes and addresses the disruption with Barnard's level of systematic attention.
Begin with material compensation. In the pre-AI organization, compensation was tied, however imperfectly, to individual output. The relationship was rough but legible: produce more, earn more. The person who delivered higher output commanded higher compensation, and the logic of the exchange was broadly understood, even when its specifics were debated.
When AI amplifies individual output by an order of magnitude, this correspondence collapses. When one AI-augmented engineer produces the output of an entire pre-AI team, how should she be compensated? Traditional output-based compensation would require multiplying her salary many times over -- unsustainable for any organization. Time-based compensation would pay her the same as a colleague producing a fraction of her output -- unacceptable to any talented person with alternatives. Neither the old model nor its obvious modifications can accommodate the new reality.
The underlying problem is that the tool has mediated the relationship between individual capability and individual output in a way that makes traditional performance measurement meaningless. Two engineers using the same tools may produce vastly different outcomes, not because one works harder or possesses more technical skill, but because one exercises better judgment about what to build and how to direct the tool. The difference is in the judgment -- and judgment is precisely the dimension of contribution that traditional compensation systems are least equipped to measure.
Barnard would argue that this situation demands a fundamental reconception of the economy of incentives. On the material side, compensation must shift from output-based to judgment-based models. This means compensating people not for what they produce but for the quality of the decisions they make about what to produce. Organizations must develop methods for evaluating decision quality -- not just decision outcomes, which may be affected by luck and circumstance, but decision processes, which reflect the actual exercise of judgment. Some professional fields have long experience with this kind of evaluation. Medicine evaluates diagnostic reasoning independently of patient outcomes. Law evaluates legal analysis independently of case results. Aviation evaluates pilot decision-making through systematic debrief processes that distinguish good judgment from good fortune. Their methods, adapted for the organizational context, offer a path toward compensation systems that reward the contribution that actually matters.
But Barnard would insist that material compensation, however well designed, remains insufficient to maintain the cooperative system. The more important shift is in the economy of non-material incentives. The AI moment has disrupted all four of the non-material inducements that Barnard identified as essential, and each disruption requires careful attention.
The satisfaction of meaningful work. AI amplification has the paradoxical effect of making work both more and less meaningful simultaneously. More meaningful, because the tool removes tedious execution and frees the worker to focus on the judgment work she finds genuinely engaging. The senior engineer who discovered that what remained of his work -- judgment, architectural instinct, taste -- was the part that mattered most found his work more meaningful than it had ever been before AI. The tool had stripped away the mechanical overlay and revealed the creative core.
But the same amplification can drain meaning from work for those whose sense of purpose was bound up in the struggle of execution itself. The craftsperson who found meaning in hours of patient iteration, in the intimate knowledge of the codebase, in the embodied understanding that came from building by hand -- these sources of meaning were eliminated along with the labor that produced them. The economy of incentives must accommodate both responses: providing opportunities for judgment-driven workers to exercise judgment at the highest level, while creating spaces where craft is valued for its own sake, where the process of building is honored alongside the product.
The sense of personal growth. AI amplification accelerates growth in breadth -- the worker acquires facility with new domains, new technologies, new modes of expression at unprecedented speed. But it may stunt growth in depth -- the slow, patient accumulation of expertise that comes from struggling with a problem over months and years, from making mistakes that teach lessons no documentation can convey, from developing the embodied understanding that only repetition and failure can produce. The organization that rewards only breadth loses the specialists whose deep expertise remains essential for the problems that breadth alone cannot solve. The organization that rewards only depth fails to attract the versatile contributors that the AI age demands. The economy of incentives must value both, providing inducements for depth alongside inducements for breadth, recognizing that the organization needs both kinds of growth to maintain its judgment capability.
The social conditions of the workplace. AI amplification is inherently isolating. The tool that enables one person to do what a team previously did reduces the social density of work. Fewer people are needed for any given project, which means fewer interactions, fewer collaborative moments, fewer of the casual exchanges that constitute the social fabric of organizational life. The engagement that AI-augmented work provides is between human and machine, not between human and human, and however productive that engagement may be, it does not meet the social needs that Barnard identified as among the most powerful inducements organizations can offer.
The executive function must include deliberate investment in the social conditions that amplification erodes. Team rituals, shared learning experiences, collaborative projects where the purpose is the collaboration itself -- these are not overhead. They are essential investments in the non-material economy that holds the cooperative system together. The organization that provides only tools and compensation, without community and belonging, will find its most talented people drifting away in search of the social sustenance their work no longer provides.
The feeling of contributing to purpose. This is, in Barnard's view, the most powerful of all inducements, and the AI age makes it simultaneously more important and more difficult to provide. More important, because when material and social inducements are disrupted by amplification, purpose is the remaining anchor connecting the individual to the organization. More difficult, because purpose requires clarity about what the organization is for, and AI removes the constraints that previously defined the organization's scope. When the limitations dissolve and any worker can build anything, the organization must articulate a purpose that goes beyond capability to touch on meaning. The organization that can compellingly answer the question "What is worth doing?" will attract and retain the judgment workers it needs. The organization that offers only compensation will discover that compensation alone does not inspire the kind of commitment that the cooperative system requires.
The economy of incentives in the AI age is, at its core, an economy of meaning. Material compensation remains necessary -- no one cooperates for meaning alone when the rent is due and the children need food. But the cooperative system is maintained by meaning, purpose, belonging, and growth -- the non-material inducements that no tool can provide and no algorithm can optimize. The executive who understands this will build an economy of incentives that is worthy of the people she asks to cooperate. The executive who does not will find the cooperative system dissolving, one quiet departure at a time, as the most capable people take their amplified capability elsewhere in search of the meaning their organization failed to provide.
Barnard understood this nearly a century ago. He watched workers at New Jersey Bell accept lower wages during the Depression because they believed in the purpose of maintaining the telephone network, because their colleagues had become their community, because the work gave them a sense of contribution that no amount of compensation could replace. The economy of incentives was always, in its deepest structure, an economy of meaning. The AI age has merely stripped away the material scaffolding that obscured this truth, revealing the non-material foundation that was always bearing the weight.
The executive who sees this clearly -- who invests in meaning, in purpose, in community, in the conditions for growth -- builds a cooperative system that can withstand the disruptions of AI amplification. The executive who sees only the material dimension -- who responds to AI disruption by adjusting compensation formulas and revising performance metrics -- addresses the symptoms while ignoring the disease. The disease is a deficit of meaning. The cure is a purpose worth cooperating toward, communicated by a leader worth trusting, in a community worth belonging to. Everything else is arithmetic.
Barnard's analysis of incentives also illuminates the challenge of intergenerational cooperation within organizations navigating the AI transition. Senior professionals whose incentive expectations were formed in the pre-AI era bring different assumptions about the relationship between effort and reward than junior professionals who entered the workforce with AI tools already available. The senior architect who spent decades mastering a craft that AI has partially automated expects recognition for depth of experience. The junior developer who achieves in weeks what previously took years expects recognition for speed and adaptability. Both expectations are legitimate, and the executive who privileges either at the expense of the other fractures the cooperative system along generational lines. The economy of incentives must accommodate both, finding ways to honor accumulated wisdom and youthful fluency simultaneously, creating an organizational culture in which the generations complement rather than compete with each other.
The challenge extends to the incentive structures surrounding AI adoption itself. The participant who embraces the tools enthusiastically may resist any organizational constraint on their use, viewing governance as an obstacle to their amplified productivity. The participant who resists the tools may perceive the organization's adoption pressure as a threat to their established value. The executive must design incentive structures that encourage adoption without punishing resistance, that reward the wise use of the tools without creating a culture in which tool use becomes a substitute for the human judgment that gives the tools their direction. The balance is delicate, and the executive who achieves it demonstrates the kind of moral leadership that Barnard identified as the executive's highest function.
Communication, in Chester Barnard's framework, is not merely one function among many. It is the nervous system of the organization, the medium through which every other function is performed. Without communication, there is no coordination. Without coordination, there is no cooperation. Without cooperation, there is no organization. Barnard insisted that the executive's first function is the maintenance of the system of communication -- ensuring that information flows to where it is needed, in the form it is needed, at the time it is needed. Every other executive function depends on this one, because every other function is executed through communication.
Barnard's account of organizational communication was sophisticated enough to encompass what modern theorists would call both the explicit and tacit dimensions of information transfer. He understood that the system of communication includes not only the formal channels through which explicit information travels -- memoranda, reports, directives, meeting minutes -- but also the informal channels through which tacit information travels: hallway conversations, social relationships, the accumulated shared understanding that enables people to interpret formal communications correctly. A directive that says "improve quality" means very different things depending on the shared context of sender and receiver, and that shared context is maintained through the informal dimension of the communication system -- through the history of interactions, the trust relationships, the shared vocabulary, and the mutual understanding that accumulate over time between people who work together closely.
The AI revolution has transformed organizational communication at three distinct levels, and understanding each through Barnard's framework reveals both the extraordinary gains and the dangerous losses of the transformation.
At the first level, AI has dissolved what might be called translation barriers -- the layers of interpretation that information must pass through as it moves from executive vision to worker implementation. In the pre-AI organization, the executive's vision was translated from natural language into specification documents, from specifications into engineering requirements, from requirements into code, from code into testable artifacts, and from artifacts into deployable products. Each translation introduced distortion. Each layer was performed by a different person or team, each with their own understanding, priorities, and cognitive framework. By the time the original vision reached implementation, it had been transformed -- not through malice or incompetence, but through the inherent limitations of sequential translation across different modes of understanding.
AI collapsed these translation layers into a single conversational exchange. An executive could describe a component in plain language and receive a working implementation within hours. The fidelity of this exchange exceeded anything the sequential translation chain could achieve, because the compression eliminated the compounding distortion of multiple translation steps. This represents, in Barnard's terms, the most significant improvement to the informational dimension of organizational communication in the history of management.
But the dissolution of translation barriers introduces a problem that Barnard's framework identifies with precision. The translation layers that were eliminated served a function beyond mere information transmission. They served an interrogative function: each translation step required the translator to ask questions about the original vision, to challenge assumptions, to request clarification, to identify ambiguities. The specification writer asked the executive, "What do you mean by 'intuitive'?" The engineer asked the specification writer, "What do you mean by 'responsive'?" Each question forced a refinement of the vision, and the cumulative effect was a progressive clarification that improved the vision as it was translated.
When the translation layers are removed, the interrogative function is removed with them. The AI does not ask "What do you mean?" the way a human translator does -- with genuine confusion, with the specific gaps in understanding that only a person embedded in the organization's context would have, with the constructive pushback that comes from someone who cares about the outcome and who will be affected by the result. The AI optimizes for completion rather than for understanding. The executive who relies on AI-mediated communication without supplementing it with deliberate interrogation structures will find her vision implemented faithfully but not challenged. The artifacts will look like what she described, but they may not be what she actually needed, because the process of description is different from the process of clarification, and it is the process of clarification that produces the highest-quality thinking.
This creates the need for what might be called judgment reviews -- structured conversations whose purpose is not to evaluate output but to evaluate the reasoning behind the output. In a judgment review, the executive presents the problem she identified, the alternatives she considered, the values she prioritized, the tradeoffs she accepted. The team's role is to challenge that thinking -- to ask the questions the AI did not ask, to identify the assumptions the AI did not challenge, to provide the constructive pushback that the old translation process used to provide incidentally but that now must be provided deliberately and systematically.
At the second level, AI has introduced a fundamental division between the informational and meaning dimensions of communication. AI tools are remarkably proficient at transmitting information -- gathering data, identifying trends, constructing comparisons, generating projections. They are remarkably poor at transmitting meaning -- why these data matter to this particular audience, what they signify for the specific people reading them, how they connect to the organization's history and values, what emotional response the communicator hopes they will provoke. The information is abundant and machine-generated. The meaning must be supplied by the executive, through her own voice and the informal channels that the AI cannot access.
Barnard would have argued that meaning-deficient communication is worse than no communication at all, because it creates the illusion of understanding without the substance. The team that receives a comprehensive AI-generated strategic analysis believes it understands the strategy, because the analysis is thorough and the presentation is polished. But understanding includes emotional resonance, a sense of urgency, connection to purpose, the feeling of alignment that motivates action. These are transmitted through the executive's own voice, her own emphasis, her own visible conviction -- through the personal qualities that distinguish communication from data transfer.
At the third level, AI has disrupted what communication theorists call the metacommunicative dimension -- signals about how to interpret signals. When the executive communicates through AI-generated artifacts, the team receives communications that are syntactically indistinguishable from human-crafted communications but lack the metacommunicative markers that enable calibration. Is this message urgent? Is the executive confident in this analysis? Is this direction firm or exploratory? How much latitude does the team have to push back? These signals are normally conveyed through the executive's own word choices, her characteristic patterns of emphasis, her known tendency to understate or overstate, the personal note that says "I know this is a lot, but I believe in this direction."
When these signals are absent -- when the communication is polished, comprehensive, and devoid of personal markers -- the receiver must guess about how to interpret the message. This ambiguity is corrosive, because it erodes the shared context that enables the communication system to function. The executive who understands this will maintain a practice of personal communication that supplements AI-assisted communications. She will write her own notes, in her own voice, with her own imperfections, when the message requires metacommunicative clarity. She will use the AI for information and her own voice for meaning. She will ensure that the team always has access to the human signal beneath the machine signal.
Barnard's first function of the executive -- the maintenance of the system of communication -- has not been simplified by AI. It has been restructured. The informational dimension has been enhanced beyond anything previous generations could have imagined. The meaning dimension and the metacommunicative dimension have been placed at risk. The executive who maintains all three -- who uses AI for information, her own voice for meaning, and deliberate practice for metacommunicative clarity -- is the executive who maintains the full system of communication that the cooperative system requires.
There is a further dimension of communication that Barnard's framework illuminates: the relationship between communication and organizational learning. Barnard understood that the communication system is not just a channel for transmitting decisions. It is the medium through which the organization learns -- through which information about the environment, about the consequences of past decisions, about the needs and concerns of participants, flows back to the people who make decisions. This feedback dimension of communication is essential because it enables the organization to adapt, to correct course, to learn from its mistakes and build on its successes.
AI has accelerated the feedback loop for certain kinds of information -- usage data, performance metrics, market signals -- while potentially degrading it for other kinds: the informal feedback about how decisions affect people's lives, their sense of meaning, their willingness to cooperate. The executive who monitors the quantitative feedback channels assiduously while neglecting the qualitative ones will make decisions that are data-informed and humanly uninformed, that optimize metrics while eroding the cooperative system that gives the metrics their significance.
The maintenance of the full communication system -- formal and informal, quantitative and qualitative, informational and meaningful, metacommunicatively rich -- is the executive's first and most demanding function. It has never been more important, or more difficult, than it is right now. Communication is not a commodity that AI can produce. It is a relationship that the executive must maintain. The tools have given us better information channels than any organization in history has possessed. Whether those channels carry meaning or merely data depends entirely on the human being who manages them.
Barnard's insight about communication extends to a phenomenon that the AI age has made particularly acute: the relationship between communication volume and communication quality. In the pre-AI organization, the volume of formal communication was naturally constrained by the time and effort required to produce it. Writing a memo required thinking about what to say. Preparing a presentation required organizing one's thoughts. The friction of production served as a quality filter, ensuring that most formal communication had been considered before it was transmitted. When AI reduces the production friction to near zero, the volume of formal communication explodes while the average quality declines. The organization is flooded with AI-generated reports, summaries, analyses, and recommendations that are technically competent and often substantively empty -- artifacts of production rather than expressions of thought. The executive's task in this environment is not to produce more communication but to maintain the standards of communication quality that the cooperative system requires, ensuring that the organization's communication channels carry genuine thought rather than the simulation of thought that the tools make so effortlessly available.
This quality-filtering function is itself a form of moral leadership, because it requires the executive to value substance over appearance, to reward the hard-won insight over the fluently generated summary, and to model the kind of communication -- honest, considered, willing to acknowledge uncertainty -- that the cooperative system depends upon. The executive who fills the communication channels with AI-generated artifacts is not communicating. She is performing communication, and her participants, capable of distinguishing performance from substance, will respond accordingly.
Chester Barnard introduced a concept that he called the strategic factor -- the element in any situation whose control is decisive for achieving the desired end. In every situation, Barnard argued, one factor is strategic and all others are complementary. The strategic factor is the limiting element, the bottleneck, the constraint that determines whether the purpose can be achieved. When the strategic factor is identified and controlled, the purpose is achievable. When it is misidentified, the purpose fails regardless of how well the complementary factors are managed.
The concept is deceptively simple and profoundly important, because it requires the executive to perform a continuous act of diagnosis: scanning the organizational environment, identifying which factor is currently strategic, and redirecting effort toward controlling that factor. The strategic factor is not fixed. It shifts as conditions change. The factor that was strategic yesterday may be complementary today, and the factor that was complementary yesterday may become strategic tomorrow. The executive who identifies the shift and responds maintains the organization's effectiveness. The executive who fails to identify it -- who continues to manage the factor that was strategic in the previous configuration -- wastes the organization's resources on a problem that is no longer the problem.
The AI revolution represents the most dramatic shift in the strategic factor in the history of organized human activity, and the executives who fail to recognize it are committing the classic strategic-factor error on a civilizational scale.
For most of the history of organized human activity, the strategic factor was execution capability -- the ability to convert intention into artifact, vision into reality, plan into product. The medieval monarch whose strategic factor was the capacity to raise and equip an army. The industrial magnate whose strategic factor was the capacity to build and operate a factory. The technology executive whose strategic factor was the capacity to recruit, retain, and coordinate the engineers who could write the code that implemented the product vision. In each case, the limiting element was the same: the gap between what was imagined and what could be built. The gap was enormous, and the organization existed primarily to close it. The organization that closed the gap most efficiently dominated its competitors.
When AI tools reduced this gap to the time it takes to have a conversation, execution ceased to be the strategic factor. It became a complementary factor -- still necessary, still important, but no longer limiting. The organization that continues to optimize for execution capability in the AI age is committing the strategic-factor error in its purest form: investing heavily in a factor that is no longer the constraint, while neglecting the factor that has become the actual constraint.
What has become the strategic factor is judgment. The ability to determine what should be built, for whom, and why. The ability to distinguish between what is possible and what is valuable. The ability to evaluate alternatives, assess risks, anticipate consequences, and make decisions that serve the organization's purpose when the space of possible actions has expanded from a manageable list to an effectively infinite field.
This shift changes the fundamental nature of the executive's diagnostic task. When execution was the strategic factor, the diagnosis was relatively straightforward: do we have the capability to build what we have decided to build? The assessment could be conducted by examining technical resources, personnel, budget, and timeline. Complex in practice but simple in principle: capability against requirement, resources against plan.
When judgment is the strategic factor, the diagnosis becomes far more difficult, because judgment is harder to assess than capability. How does one determine whether the organization has sufficient judgment to navigate an effectively infinite possibility space? How does one measure the quality of decisions that have not yet been made? How does one evaluate the capacity for discernment that will be required when the organization faces choices it cannot yet anticipate?
Barnard would argue that assessing judgment capability requires a fundamentally different kind of organizational intelligence than assessing execution capability. Execution capability can be measured through proxies: technical certifications, years of experience, portfolio of completed projects. Judgment capability cannot be reliably measured through proxies, because the proxies -- seniority, track record, credentials -- are lagging indicators that reflect the quality of past judgments under past conditions, not the quality of future judgments under conditions that the AI transition has made radically different. The senior executive who made excellent strategic decisions in the pre-AI environment may make poor decisions in the AI environment, because the evaluative framework she developed over decades no longer matches the landscape she is evaluating.
The organizational structures emerging in response to this shift reflect the new strategic factor. Small groups whose purpose is to decide what should be built rather than to build it -- what some organizations call vector pods, others call strategy cells or judgment teams -- are organizational structures designed around the strategic factor of judgment. They do not coordinate execution, because execution does not require coordination when each individual has the tools to execute independently. They coordinate judgment, ensuring that decisions about what to build are informed by multiple perspectives, challenged by diverse viewpoints, and aligned with organizational purpose.
The composition of these judgment teams matters enormously, and Barnard's framework explains why. The team is effective to the extent that it brings together people whose judgment is complementary -- people who see different dimensions of the problem, who bring different evaluative frameworks, who have different kinds of experience that inform different kinds of insight. The engineer who understands what is technically possible. The ethicist who understands what is morally permissible. The designer who understands what is aesthetically compelling. The customer advocate who understands what is practically needed. None of these perspectives alone is sufficient for the judgment the AI age requires. Together, they constitute the judgment capability that has become the strategic factor.
The shift in the strategic factor also changes the nature of competition between organizations. When execution was strategic, organizations competed on capability: who could build faster, more reliably, at greater scale. When judgment is strategic, organizations compete on wisdom: who can make better decisions about what to build, who can anticipate consequences more accurately, who can align decisions with values more consistently. This favors different kinds of organizations. The organization that wins the execution competition is the one with the most resources and the most efficient processes. The organization that wins the judgment competition is the one with the best people -- not the most technically skilled, but the wisest in the evaluative sense. And wisdom cannot be purchased on the open market, because it is embedded in the specific context of the individual's experience, the organization's purpose, and the accumulated judgment that comes from making consequential decisions over time.
The shift from execution to judgment as the strategic factor also changes the nature of organizational failure. When execution was strategic, organizations failed because they could not build what they intended to build -- they lacked the technical capability, the personnel, or the resources to close the gap between vision and product. When judgment is strategic, organizations fail because they build the wrong things with extraordinary efficiency -- they close the gap between vision and product instantly but discover that the vision was flawed, the product was unnecessary, or the market it was designed to serve did not exist. The failure mode has shifted from inability to execute to inability to evaluate, from too little capability to too little wisdom, and this shift requires a corresponding change in how organizations diagnose and prevent failure. The post-mortem that asks "Why couldn't we build it?" must be replaced by the post-mortem that asks "Why did we build this instead of that?" -- a question that probes the quality of judgment rather than the adequacy of execution.
The scarcity of judgment is the defining scarcity of the AI age. The executive who designs the organization to cultivate, evaluate, and retain judgment capability is the executive who controls the strategic factor. The executive who continues to optimize for execution capability -- who hires for technical skill rather than evaluative wisdom, who measures output rather than decision quality, who rewards speed rather than discernment -- is optimizing for a factor that is no longer strategic. Barnard would have recognized this as the most consequential diagnostic error an executive can make.
The implications extend beyond organizational design to the fundamental questions of education, career development, and human value that the AI moment has forced upon every society. If judgment is the strategic factor, then the educational systems that produce judgment are the strategic institutions, and the societies that invest most wisely in the cultivation of judgment -- in teaching people not what to think but how to evaluate, not what to build but what is worth building, not how to comply but how to exercise moral discernment in situations where the right answer is not obvious -- will be the societies that thrive in the AI age.
Barnard spent his later career leading institutions devoted to precisely this kind of cultivation. As president of the Rockefeller Foundation from 1948 to 1952 and chairman of the National Science Foundation from 1952 to 1954, he invested in the development of human capability at the highest level. He understood that the strategic factor in any civilization is the quality of its people's judgment, and that the institutions devoted to cultivating judgment -- universities, research foundations, mentorship traditions, professional communities of practice -- are the most important institutions a society possesses. In the AI age, this understanding has become not merely important but urgent, because the tools have made execution abundant and judgment scarce, and the societies and organizations that recognize this shift first will lead the next century.
The strategic factor has shifted. The old constraint is abundant. The new constraint is scarce. Everything depends on whether the executive -- and the society she operates within -- recognizes this shift and responds to it with the seriousness it demands.
Chester Barnard's most controversial and most important contribution to organizational theory was his insistence that the executive's most fundamental function is moral. Not moral in the narrow sense of ethical compliance -- following regulations, avoiding scandal, satisfying auditors. Moral in the deeper sense of creating and maintaining the conditions under which cooperation is possible, desirable, and self-sustaining. The moral factor, in Barnard's framework, is the executive's capacity to inspire belief in the organization's purpose, to maintain the social conditions that make cooperation attractive, to embody the values the organization professes, and to resolve the conflicts between competing claims -- between organizational needs and individual needs, between short-term pressures and long-term purposes, between efficiency and humanity -- in ways that preserve the integrity of the cooperative system.
This insistence was a departure from the prevailing management science of the 1930s, which treated organizations as mechanical systems and executives as engineers of efficiency. Barnard argued that the mechanical model was not merely incomplete but dangerous, because it encouraged executives to treat people as instruments and cooperation as a problem of incentive design rather than moral leadership. The executive who treats cooperation as a mechanical problem -- who believes the right combination of carrots and sticks will produce the desired behavior -- may achieve short-term compliance but will never achieve the genuine cooperation that sustains an organization through difficulty, uncertainty, and transformation.
The AI age has made Barnard's moral argument not merely important but urgent, because the tools amplify moral as well as productive capacity. The amplifier does not judge. It carries whatever signal is fed into it with equal fidelity. The executive who amplifies carelessness produces carelessness at scale. The executive who amplifies genuine care produces care at scale. The moral factor is the amplifier's setting -- the variable that determines whether amplification serves or damages the organizational community and the broader society that the organization affects.
Consider the specific moral dilemmas that AI amplification creates.
The first is the conflict between speed and care. In the pre-AI organization, the tradeoff between moving fast and being careful was moderated by execution constraints. The work itself took time, and the time provided natural checkpoints for reflection, review, and correction. You could not move so fast that you outran your capacity for quality control, because the execution itself imposed a tempo that gave judgment room to operate. When AI removes the execution constraint, the tradeoff becomes acute. The organization can now build so fast that it outruns its capacity for judgment, producing artifacts at a rate that exceeds its ability to evaluate whether the artifacts are worthy of production. The moral challenge is to impose the discipline that execution constraints previously imposed naturally -- to choose to slow down, to choose to reflect, to choose care over speed even when the tool makes speed essentially free. This is a moral choice because it requires the executive to sacrifice measurable output in favor of unmeasurable quality, and the temptation to optimize for the measurable is one of the oldest and most destructive temptations in organizational life.
The second is the conflict between individual capability and collective well-being. The AI tools amplify individual capability to the point where one person can produce what a team previously did. This amplification is exhilarating for the individual and potentially devastating for the team. The executive faces the moral challenge of distributing the benefits of amplification across the organization rather than concentrating them in the hands of the most capable individuals. The executive who allows the most talented AI-augmented workers to absorb the work of entire teams -- celebrating their productivity while ignoring the displacement of their colleagues -- is making a moral choice, even if she does not recognize it as such. The consequences of that choice ripple through the cooperative system, eroding the trust and the sense of fairness that cooperation requires.
The third is the conflict between organizational success and social responsibility. The organization that deploys AI to maximum productive effect may achieve extraordinary gains while contributing to the displacement of workers, the concentration of economic power, and the erosion of social structures that depend on widespread employment. The executive's moral function includes consideration of these broader consequences, even when they do not appear on the balance sheet and are not demanded by investors or shareholders.
Barnard would have argued that the resolution of these moral conflicts is the executive's most important and most difficult work, because there are no algorithmic solutions to moral dilemmas. The executive cannot ask an AI to resolve the conflict between speed and care, because the resolution depends on values that must be held by a person who can feel the weight of the consequences, who can be held accountable for the choices, and who can embody the values the organization professes. Moral judgment is the one dimension of leadership that cannot be amplified, automated, or outsourced.
Barnard explored what he called moral complexity -- the situation in which competing moral claims cannot all be satisfied simultaneously, and the executive must choose which claims to honor and which to sacrifice. The AI age intensifies moral complexity because the amplification of capability creates new tensions between values that previously coexisted without conflict. The capacity to build anything creates the moral obligation to decide what should not be built. The capacity to produce at unprecedented speed creates the moral obligation to determine what pace is sustainable. The capacity to amplify any signal creates the moral obligation to ensure the signal is worth amplifying.
Barnard would have insisted that the moral factor is not separable from the other dimensions of executive leadership. It is the quality of mind the executive brings to every function: to communication, to securing essential services, to formulating purpose. The executive who communicates with moral clarity -- who tells the truth, acknowledges uncertainty, respects the intelligence of her audience -- builds a communication system that is trustworthy. The executive who secures essential services with moral awareness -- who treats contributors as whole people rather than units of production -- builds a workforce genuinely committed rather than merely compliant. The executive who formulates purpose with moral depth -- who asks not just "What can we build?" but "What should we build, and for whom, and at what cost to what we value?" -- builds an organization whose success is worth celebrating.
The moral factor in AI-era leadership is, in the end, the executive herself. Not her strategy, not her intelligence, not her technical sophistication, but her character. The quality of her values, the consistency of her integrity, the depth of her care for the people she leads and the world she affects. The tools amplify whatever signal they are given. The executive's character is the signal.
Barnard understood this with a clarity that has been underappreciated for nearly a century. He understood it not as an abstract principle but as a practical reality observed over decades of executive practice. He watched executives with brilliant strategy and weak character produce organizations that achieved short-term success and long-term failure. He watched executives with modest strategic vision but deep moral commitment produce organizations that endured through crises that destroyed their more strategically sophisticated competitors. He concluded that character is not a supplement to competence. It is the foundation upon which competence builds.
The AI age has not changed this truth. It has amplified it -- amplified the consequences of moral leadership and moral failure alike -- to a scale where the distinction between the two determines outcomes not just for organizations but for the societies that depend on them. The executive who leads with moral clarity in the AI age is not merely a competent manager. She is a steward of amplified power, and the quality of her stewardship determines whether that power builds or destroys, whether it serves or exploits, whether it expands human capability or concentrates it, whether it creates an ecosystem of flourishing or a landscape of extraction.
The moral dimension of leadership also extends to the executive's relationship with the broader society in which the organization operates. Barnard argued that organizations are not self-contained systems but subsystems within larger social systems, and that the executive's moral responsibility includes consideration of the organization's effects on the communities, institutions, and populations that it touches. In the AI age, this responsibility is amplified alongside everything else. An organization deploying AI at scale is not merely changing its own operations. It is shaping the labor markets, the professional norms, the educational expectations, and the economic structures of the society around it. The executive who deploys AI to eliminate three hundred jobs has made a decision whose consequences extend far beyond the organization's boundaries, affecting families, communities, and institutions that depend on the employment the organization provided. Barnard would have insisted that this broader impact is not a secondary consideration to be addressed after the organizational decision is made. It is part of the decision itself, part of the moral calculus the executive is obligated to perform.
The moral factor was always the executive's most important function. The AI age has merely made it impossible to pretend otherwise.
Chester Barnard defined organizational purpose as the coordinating and unifying principle without which an organization ceases to be an organization and becomes merely a collection of individuals. Purpose is what transforms a group of people with separate interests into a cooperative system with shared direction. Without purpose, there is nothing to cooperate toward. Without something to cooperate toward, there is no cooperation. And without cooperation, there is no organization. Purpose is not one function among many. It is the function that gives all other functions their meaning.
In the pre-AI organization, purpose was often defined in terms of production. The purpose of a software company was to produce software. The purpose of a manufacturing firm was to produce goods. These production-oriented definitions were adequate because the organization's identity was shaped by what it produced, and the production itself was constrained by limited capability. The constraint served a double function: it limited what the organization could attempt, and it thereby defined what the organization was. The statement "we build backend systems" was not just a description of capability. It was a statement of identity, of scope, of organizational boundary. It told the organization's members who they were, what was expected of them, and how their contributions connected to a larger whole.
When output becomes abundant -- when AI tools make it possible for any organization to produce virtually anything -- production-oriented definitions of purpose collapse. If you can build anything, then "we build software" is not a purpose. It is a tautology. Every organization with access to the tools can build software. The production that once defined the organization is now available to everyone, and a purpose that is available to everyone defines no one.
The shift from "What can we do?" to "What is worth doing?" is the most consequential transformation in organizational identity since the industrial revolution, and Barnard's framework provides the most precise language for understanding it.
A judgment-oriented purpose does not define the organization by what it produces. It defines the organization by what it values, what it believes the world needs, and what it is uniquely positioned to contribute. The purpose becomes normative rather than descriptive: not "we produce X" but "we believe X matters, and we direct our amplified capability toward making X real."
This is harder to formulate, because it requires the executive to engage with questions that production-oriented purpose allowed her to avoid entirely. Questions about value: what is genuinely valuable in a world where production is abundant? Questions about need: what does the world actually need, as opposed to what can be sold? Questions about identity: who are we, if not the people who make this particular thing? And perhaps the most important question, the one that marks the moral boundary of organizational purpose: what will we refuse to build, even though we can build it?
In a world of scarce execution capability, the question of what not to build was answered by the constraints themselves. You did not build things you could not build. The limitation was external, imposed by capability, and the executive did not need to exercise moral judgment about what to refrain from because the restraint was forced upon her. When execution constraints are removed, the decision about what not to build becomes a decision of pure judgment -- a moral decision, in Barnard's sense. The organization must choose, freely and deliberately, to refrain from building things it could build, because the building would not serve its purpose, would not align with its values, or would not contribute to the world in a way that justifies the expenditure of its amplified capability.
This requires specificity of purpose that most organizations do not currently possess. The purpose must be specific enough to guide judgment in novel situations. A purpose that says "we make the world better" provides no guidance, because anything can be framed as improvement. A purpose that says "we reduce the friction between people with ideas and the tools they need to realize those ideas" provides specific guidance: this opportunity aligns, because it reduces friction for a new category of people; this opportunity diverges, because it creates dependency rather than empowerment.
But formulation alone is not sufficient. The communication of purpose is equally essential, and in the AI age it must be more deliberate and more pervasive than in any previous organizational configuration.
In the pre-AI organization, purpose was communicated partly through the structure of work itself. The backend engineer who spent her days writing backend code absorbed the organization's implicit purpose through the practice of her work. When that structure dissolves -- when the backend engineer builds frontend features, when the designer implements complete systems, when role boundaries blur and the shape of work becomes fluid -- the implicit communication of purpose through the work itself disappears with it. Purpose must now be communicated explicitly, repeatedly, and through every channel the executive can reach.
The executive must become the embodiment of the organization's purpose -- not in the superficial sense of repeating a mission statement at every meeting, but in the deeper sense of making every decision, every resource allocation, every recognition a visible expression of the purpose. The organization's members, working autonomously with AI-amplified capability, need to understand the purpose well enough to make judgment calls the executive cannot directly supervise. This understanding comes not from reading a purpose statement but from observing, over time, how the executive's decisions embody the purpose in concrete, specific situations.
Barnard described this as the executive's most demanding responsibility: the constant, tireless communication of purpose through every dimension of organizational life, so that participants understand not just what the purpose is but what it means in practice, in the specific and varied situations that their work presents. In the AI age, when each participant has the capability to act autonomously at unprecedented scale, this responsibility is amplified along with everything else. The consequence of purpose poorly communicated is not confusion. It is the autonomous production, at AI-amplified scale, of artifacts that are technically competent but purposively misaligned -- that consume the organization's resources without advancing its mission, that build without building toward anything.
When output is abundant, purpose is the only scarce resource. The executive who formulates it with clarity, communicates it with consistency, and embodies it with integrity transforms abundance from a hazard into an advantage. The organization's amplified capability is directed toward ends genuinely worth pursuing, in a world where the question of what is worth pursuing has become the only question that ultimately matters.
There is a final dimension of purpose in the AI age that Barnard's framework illuminates with particular force. Barnard argued that organizational purpose is not merely instrumental -- not merely a means to coordinate action. It is constitutive -- it defines what the organization is, what its members become by participating in it, and what its existence contributes to the world. A purpose that says "we maximize shareholder value" does not merely direct action toward a goal. It constitutes the organization as a value-extraction entity and its members as value-extraction agents. A purpose that says "we expand human capability by building tools that empower rather than exploit" constitutes the organization as a human-capability entity and its members as agents of empowerment.
In the AI age, when the organization's amplified capability gives it the power to shape the world at unprecedented scale, the constitutive dimension of purpose becomes morally consequential in ways that previous generations of executives never had to confront. The purpose does not merely direct what the organization builds. It determines what the organization is building the world into. And the executive who formulates purpose at this level of moral seriousness -- who asks not just "What are we building?" but "What kind of world are we building?" -- is the executive whose organization contributes to human flourishing rather than diminishing it.
The formulation of purpose in the AI age also requires the executive to confront a temporal dimension that previous organizational contexts did not present with the same urgency. When production was slow, purpose could evolve gradually, adapting to changing circumstances through an iterative process that matched the pace of organizational change. When production is instantaneous, purpose must be formulated with sufficient depth and specificity to guide rapid decision-making in real time. The executive cannot pause the AI-amplified production cycle every time a novel situation requires a judgment call about purpose alignment. The purpose must be internalized deeply enough by every participant that it functions as an automatic filter, separating the worthy from the unworthy without requiring executive review. This internalization is achieved not through training programs or purpose workshops but through the sustained, visible embodiment of purpose in the executive's own decisions, creating a pattern that participants absorb through observation and imitation.
Barnard would have recognized this as the executive's highest calling: the formulation and maintenance of a purpose that is worthy of the capability it directs, the people it coordinates, and the world it shapes. In the AI age, that calling has never been more important, because the capability has never been greater, the people have never been more autonomous, and the world has never been more susceptible to the consequences of organizational purpose pursued at scale.
Chester Barnard understood that organizations exist in a state of dynamic equilibrium -- a constant, shifting balance between the forces that hold the organization together and the forces that threaten to pull it apart. The forces of cohesion include shared purpose, mutual trust, adequate inducements, effective communication, and competent leadership. The forces of disintegration include inadequate inducements, misaligned purposes, eroded trust, communication failures, and the shifting interests of participants who continuously evaluate whether the organization serves their needs better than the available alternatives.
The equilibrium is never static. It shifts as conditions change, and the executive's most important practical function is the management of this equilibrium -- the constant adjustment of the organizational system to maintain the balance between cohesion and disintegration. When the equilibrium holds, the organization functions. When it shifts modestly, the executive adjusts. When it shifts dramatically, the organization enters a period of disequilibrium that threatens its survival and demands transformative leadership.
Barnard's understanding of equilibrium was informed by his experience managing New Jersey Bell through the Great Depression, when the telephone company faced simultaneous pressures from declining revenues, reduced staffing, and increased public demand for essential communication services. The equilibrium that the organization maintained through that crisis was not a static balance but a dynamic achievement, requiring continuous adjustment of inducements, purposes, and cooperative structures to match conditions that shifted month by month. The Depression-era experience taught Barnard that organizational equilibrium is always provisional, always under pressure, and always dependent on the executive's willingness to make adjustments that are painful in the short term but essential for the cooperative system's survival. The AI transition presents a comparable challenge at an accelerated pace, demanding the same kind of continuous adjustment but on a timeline measured in weeks rather than years.
The arrival of AI-amplified productivity has produced the most dramatic organizational disequilibrium in living memory. The disequilibrium operates across every dimension of organizational life simultaneously, creating a crisis that is not singular but compound -- multiple dimensions of the cooperative system disrupted at once, each disruption amplifying the others, each requiring attention that the executive can barely spare because every other dimension is demanding attention at the same time.
The first dimension is the contribution crisis. Participants in the cooperative system defined their value through their contributions, and those contributions were largely contributions of execution: writing code, designing interfaces, managing projects, testing products, coordinating handoffs, translating requirements. When AI amplified individual capability by an order of magnitude, the execution contributions that had defined each participant's organizational identity were simultaneously devalued. The code that the engineer spent a week writing could now be produced in hours. The interface the designer spent a month crafting could now be generated in days. The project plan the manager spent weeks developing could now be sketched in an afternoon.
This devaluation is not merely economic. It is existential. The participants whose contributions have been devalued are not merely worth less to the organization. They feel worth less to themselves, because the identity they constructed around their execution capability has been undermined by a tool that renders that capability abundant. The experienced architect who felt like a craftsperson watching the printing press arrive was experiencing the contribution crisis in its most acute form: the recognition that the contribution he had spent decades developing was no longer the contribution the world needed from him.
The executive's task during the contribution crisis is twofold and difficult. First, she must help participants recognize that execution was never the whole of their value -- that the judgment, the taste, the architectural instinct, the contextual understanding that informed their execution was always the more valuable part, and that the tool has revealed rather than eliminated this value. Second, she must restructure the organization's contribution framework so that the newly revealed value -- judgment rather than execution -- is formally recognized, measured, and rewarded. Neither task is easy. The first requires a kind of organizational therapy, creating space for grief alongside discovery. The second requires organizational engineering, designing new systems from principles rather than precedent.
The second dimension is the inducement crisis. The inducements the organization offered were calibrated to the pre-AI configuration. Compensation was tied to output. Career advancement was tied to the accumulation of technical skill. Professional recognition was tied to the visibility of individual contributions in the context of team-based work. Meaningful work was tied to the struggle of execution -- the satisfaction of solving hard problems through sustained effort and hard-won expertise.
Each of these inducements is disrupted by AI amplification. Compensation must be recalibrated to reflect the new basis of contribution -- judgment rather than output volume. Career advancement must be redefined to reward the cultivation of evaluative wisdom rather than the accumulation of specialist skill. Professional recognition must acknowledge the less visible contributions -- the decision not to build something, the insight that redirected effort, the judgment call that averted a problem -- alongside the more visible but less uniquely human production of artifacts. And meaningful work must be relocated from the struggle of execution to the exercise of judgment -- the satisfaction of directing capability well, of deciding what deserves to exist, replacing the satisfaction of building it by hand.
The third dimension is the structural crisis. The organization's roles, teams, reporting relationships, and decision-making processes were designed for the pre-AI configuration of work. When that configuration changes as dramatically as AI implies, the structure becomes misaligned with actual patterns of contribution. The formal structure assumed specialized roles, sequential handoffs, and hierarchical decision-making. The actual work involves cross-domain contribution, parallel execution, and distributed judgment. The gap creates friction, frustration, and the pervasive sense that the organization is working against its own members. Barnard would have recognized this gap as a specific instance of the broader tension between formal and informal organization that he had identified in 1938: the formal structure represents how the organization thinks it works, while the informal patterns of actual contribution represent how it actually works, and the greater the gap between the two, the more energy the organization wastes on maintaining a fiction that impedes rather than supports the cooperative effort.
The executive's task is to redesign structure to match actual contribution patterns. This means dissolving specialist silos that no longer describe how work is done. It means creating cross-functional judgment teams. It means implementing structures that accommodate the fluid, cross-domain work that AI tools have enabled. And it means doing so while maintaining the stability and predictability that formal structure provides, ensuring that people know where they stand and how their contributions will be evaluated.
The fourth dimension is the temporal crisis. The speed at which AI-augmented work proceeds creates mismatches between the pace of production and the pace of evaluation. In the pre-AI organization, the pace of execution imposed natural checkpoints for reflection. A feature that took two weeks to build provided two weeks for the team to consider whether it was the right feature. When execution accelerates dramatically, the evaluation tempo falls behind. The organization produces faster than it can assess whether the production serves its purpose, building faster than it can reflect on whether the building is directed toward the right ends.
The executive's task during the temporal crisis is to impose rhythm on the accelerated cycle -- to create deliberate pauses, structured reflection, and evaluation gates that ensure judgment keeps pace with output. This requires accepting reduced throughput in exchange for increased alignment: producing less but producing better, building more slowly but building the right things.
Barnard would have recognized this as perhaps the most difficult executive task, because it requires constraining the very capability that AI provides -- telling the organization that it should not build everything it can build, that speed is a means and not an end, that optimal output requires the evaluative dimension that only human judgment can provide.
The period of disequilibrium is dangerous. It is also, Barnard would have noted, the period of greatest potential. When the old equilibrium is shattered and the new one has not yet been established, the executive has the opportunity to build a new cooperative system that is better adapted to the new conditions -- one that is organized around judgment rather than execution, around purpose rather than production, around the cooperative structures that the AI tools cannot provide or replace. The executive who seizes this opportunity -- who manages the disequilibrium with patience, moral seriousness, and a clear vision of the cooperative system she is building toward -- will lead an organization that is not merely surviving the AI transition but thriving through it.
The oscillation between excitement and terror -- the felt experience of organizational disequilibrium -- is the executive's diagnostic signal. The excitement indicates that the new capability is real and valuable. The terror indicates that the old equilibrium has been broken and the new one has not yet been established. The executive who feels both -- who holds the excitement and the terror simultaneously, without collapsing into either triumphalism or despair -- is the executive who has the emotional and moral range to navigate the disequilibrium.
Barnard's framework offers one further insight about disequilibrium that is essential for the present moment. He observed that the temptation during periods of disruption is to optimize for the most measurable dimension -- output -- at the expense of dimensions that matter more but resist measurement: purpose alignment, contributor well-being, cooperative health, and organizational integrity. This temptation is amplified in the AI age, because the tools make output more visible, more impressive, and more addictive than ever before. The executive who succumbs to this temptation -- who celebrates output volume while neglecting the cooperative system that gives output its direction and its meaning -- will produce an organization that is impressively productive and fundamentally hollow.
The executive who resists -- who accepts that the unmeasurable dimensions of organizational health are more important than the measurable dimensions of organizational output -- is the executive who builds an organization capable of sustaining cooperation through the most turbulent period of technological change in living memory. The capability is not the challenge. Capability is abundant. The challenge is maintaining the cooperative system through which capability becomes purposeful, meaningful, and morally sound. And that challenge is, as Barnard understood better than anyone, the defining challenge of executive leadership in every era -- amplified, in the AI age, to a scale where the consequences of success or failure extend far beyond the boundaries of any single organization.
Chester Barnard's entire theory of organization can be restated in a single proposition: the structures that sustain organized human activity are not technical structures, not strategic structures, not hierarchical structures. They are cooperative structures. Shared purpose, mutual trust, aligned incentives, moral leadership, effective communication -- these are the materials from which every functioning organization is built, and they hold against the pressures of change only as long as someone maintains them.
This proposition was always true. The AI age has made it inescapable.
When AI amplified individual capability to the point where a single person could produce what had previously required a team, it stripped away every organizational structure whose justification was the coordination of execution. The specialist silos that organized work by domain were justified when domain expertise was scarce and cross-domain translation was expensive. When AI made it possible for any competent person to work across domains, the specialist silos lost their justification. The hierarchical chains of command that organized decision-making were justified when information was expensive to gather and analysis was expensive to perform. When AI made information gathering and analysis nearly free, the hierarchical chains lost much of their justification. The sequential handoffs between specialists were justified when each specialist could contribute only within her domain. When AI enabled each individual to contribute across domains, the handoffs became unnecessary.
What remained, when the execution-coordinating structures were stripped away, was the cooperative core. The shared purpose that gives the organization direction. The trust that enables people to work together without constant surveillance. The aligned incentives that make cooperation more attractive than individualism. The moral leadership that ensures amplified capability serves rather than damages the world. The communication that transmits not just information but meaning, not just data but direction.
These cooperative elements are not the soft periphery of organizational life. They are the hard center. They are the only organizational structures that cannot be replaced, automated, or rendered obsolete by the tools. Everything else -- the technical infrastructure, the strategic frameworks, the operational procedures -- can be assisted, automated, or eventually replaced by AI. The cooperative core cannot, because cooperation is a human decision, made by human beings, for human reasons, sustained by human relationships, and no tool, however capable, can make that decision, build those relationships, or sustain them over time.
The cooperative system functions as a structure that redirects capability toward purpose. Without it, amplified capability flows in whatever direction offers least resistance -- toward whatever is easiest to build, whatever produces the most immediate gratification, whatever the tool's optimization function happens to favor. With the cooperative system in place, capability is directed toward the specific conditions the organization is trying to create: products that serve genuine needs, work that develops human potential, contributions that make the world marginally better rather than marginally worse.
The executive's role is to build and maintain this cooperative structure. Not once, as a project with a completion date, but continuously, as an ongoing practice that requires daily attention, daily judgment, daily moral engagement with the forces that constantly test the structure's integrity.
The forces are real and persistent. The cooperative structure is tested by inadequate inducements -- when the economy of incentives falls out of alignment and participants begin to question whether the exchange is still favorable. It is tested by unclear purpose -- when the organization cannot articulate why its amplified capability should be directed here rather than there. It is tested by eroded trust -- when the executive's behavior diverges from her professed values, or when participants exploit organizational resources for personal advantage. It is tested by communication failures -- when meaning is lost in the flood of AI-generated information, when the executive's voice is drowned out by polished but purposeless artifacts.
Each of these tests requires the executive's attention, and the attention must be immediate, because cooperative structures weaken silently. An inducement misalignment that is not corrected drives talented participants to seek alternatives. A purpose that becomes vague loses its capacity to coordinate judgment. A trust violation that is not addressed weakens the cooperative fabric. A communication system that transmits data without meaning produces the illusion of alignment without the substance.
The maintenance is not dramatic. It is daily. It is the executive who tells the truth in a meeting where a comfortable lie would have been easier. The executive who publicly credits a junior colleague's judgment when the temptation was to take the credit herself. The executive who chooses to slow production when the tools make speed free, because slowing down serves the purpose better than speeding up. The executive who listens to the informal organization's resistance to unsustainable intensity and recognizes it as a diagnostic signal rather than a management problem to be overcome.
Barnard would have noted that the cooperative structure's failure is never sudden. It is always incremental. The executive does not wake up one morning to find cooperation gone. She wakes up to find it slightly weakened, to find the participants slightly less engaged, the creative energy slightly lower, the willingness to subordinate individual capability to shared purpose slightly diminished. And the weakening, if not addressed, continues -- each small erosion making the next one easier, each unaddressed test making the structure slightly less able to withstand the next test -- until the cooperative system dissolves and the organization becomes what Barnard feared most: a collection of individuals who happen to occupy the same space, pursuing separate purposes, cooperating with no one.
The AI age has produced two common responses to the organizational challenge, and Barnard would have found both inadequate. The triumphalist response celebrates amplified individual capability without attending to the cooperative structures that give capability direction. It envisions a future of autonomous individuals, each amplified by AI, each pursuing their own vision. Barnard would have observed that this is not an organization. It is a marketplace. And a marketplace, however productive, cannot sustain the purposeful, coordinated, morally grounded effort that the world's hardest problems require.
The nostalgic response mourns the loss of the pre-AI organizational forms and wishes the disruption would stop. Barnard would have observed that the old cooperative structures worked less well than nostalgia suggests, and that the attempt to restore them is futile because the conditions that supported them no longer exist.
The Barnardian response is neither triumphalist nor nostalgic. It is constructive, and it draws on the deepest insights of a thinker who understood that organizational life is always a matter of building and rebuilding, of constructing cooperative structures adequate to the conditions of the moment and then reconstructing them when the conditions change. Accept the capability the tools provide. Study the conditions the capability creates. Build cooperative structures adequate to the new conditions -- structures of purpose, trust, aligned incentives, moral leadership, and meaning-rich communication. Maintain those structures against the constant pressure of forces that are indifferent to whether the structures hold or fall.
Cooperation is not a management technique. It is the structure. It is the only structure that holds. And the executive who understands this -- who builds and maintains the cooperative system with the patience, the care, and the moral seriousness that this extraordinary moment demands -- is the executive who will lead an organization worthy of the tools it possesses.
Barnard understood this nearly a century ago. He understood it from running a telephone company, from directing a state relief system during the Great Depression, from leading the Rockefeller Foundation and chairing the National Science Foundation. He understood it from the daily, unglamorous, morally demanding work of maintaining the conditions under which people choose to work together. His framework has never been more relevant than it is right now, because the tools have never been more powerful, the capability has never been more abundant, and the cooperative challenge has never been more consequential.
The executive function in the age of artificial intelligence is not a new function. It is the oldest function in organizational life, purified to its essence by a technological revolution that has stripped away everything inessential and left only the core: the maintenance of the cooperative system, through moral leadership, in the service of human purpose.
There is a final dimension of cooperation that Barnard's framework illuminates with particular force in the AI context: the relationship between cooperation and meaning. Barnard observed that people do not cooperate solely for material inducements. They cooperate because cooperation provides something that individual action cannot: the sense of belonging to something larger than oneself, the feeling that one's contributions matter to a purpose that extends beyond one's individual interests, the experience of being recognized and valued by others who share the same commitment. These meaning-generating properties of cooperation are not secondary benefits. They are primary motivations, often more powerful than material compensation in sustaining long-term commitment to organizational effort.
The AI tools threaten these meaning-generating properties precisely because they amplify individual capability to the point where cooperation may appear unnecessary. The individual who can build a complete product alone may conclude that the friction of cooperation -- the meetings, the compromises, the accommodations of others' perspectives -- is an inefficiency to be eliminated rather than a source of meaning to be preserved. Barnard would have warned that this conclusion, however rational it appears in the short term, is ultimately self-defeating, because the meaning that cooperation provides cannot be replicated by individual achievement, and the loss of that meaning produces a kind of existential impoverishment that no amount of amplified capability can offset. The executive who maintains the cooperative system in the AI age is not merely maintaining organizational effectiveness. She is maintaining one of the primary sources of human meaning in an era that threatens to reduce every human activity to a question of productive efficiency.
The cooperative structures that hold are the structures that serve this deeper purpose: not merely coordinating action but creating the conditions under which people experience their work as meaningful, their contributions as valued, and their participation as part of something genuinely worth sustaining. Barnard understood this nearly a century ago, and the understanding has never been more consequential than it is today.
There was a man who ran a telephone company for twenty-one years, and he wrote a book in 1938 that almost nobody reads anymore, and that book contains more wisdom about what we are living through right now than most of the commentary that fills our feeds and our conferences and our anxious dinner-table conversations.
I did not expect that. When I set out on the Orange Pill journey -- this project of visiting the great thinkers and asking how their patterns of thought might illuminate the AI moment -- I did not expect to find one of the most powerful lenses in a telephone executive who never saw a computer and who died before the internet was imagined.
But that is what happened. Chester Barnard saw something that most management thinkers missed then and still miss now. He saw that organizations are not machines. They are not hierarchies. They are not strategic entities optimized for shareholder value. They are groups of human beings who have chosen to cooperate, and who will continue to cooperate only as long as the cooperation is worth it to them. The moment it stops being worth it -- the moment the inducements fall short of the contributions, the moment the purpose grows vague, the moment trust erodes between the leader and the led -- the cooperation dissolves, quietly, incrementally, and irreversibly.
That insight hit me differently after spending months inside the AI revolution. I have watched organizations come apart not because their technology failed but because their cooperation failed. I have watched teams of extraordinarily capable people -- each amplified by tools that would have seemed magical five years ago -- produce less value together than they could have produced alone, because the cooperative structures that were supposed to give their capability direction had been hollowed out by speed, by distraction, by the intoxicating abundance of output that the tools made possible.
And I have watched other organizations -- the ones led by people who understood, perhaps instinctively, what Barnard articulated systematically -- flourish in ways that took my breath away. Not because they had better tools. Everyone has the tools now. But because they had something the tools cannot provide: a shared purpose that their people believed in, trust between leader and led that made genuine cooperation possible, and an executive who understood that her job was not to command but to maintain -- to maintain the conditions under which people choose to bring their best judgment, their honest effort, their moral seriousness to work that matters.
Barnard called it the functions of the executive. I call it the hardest and most important work a human being can do in the age of AI.
Because here is what I keep learning, and what I suspect I will spend the rest of my life learning: the tools will keep getting better. The capability will keep expanding. The river of intelligence I described in The Orange Pill -- that force flowing from atoms to algorithms to whatever comes next -- will keep accelerating. None of us can stop it. None of us should want to stop it.
But capability without cooperation is a flood. It destroys more than it creates. It amplifies more than it aligns. It produces more than it can evaluate, builds more than it can justify, ships more than it can take responsibility for.
Cooperation is what turns the flood into something we can live with. It is the structure that redirects capability toward purpose, the dam that creates the pool behind which an ecosystem can flourish. It is, when you strip everything else away, the only thing that holds.
I think about this when I watch my children navigate their world -- a world being reshaped by these tools at a speed that makes my own experience of technological change seem glacial. They will need the tools. Of course they will. But more than the tools, they will need the capacity to cooperate -- to build trust, to share purpose, to subordinate their individual capability to something larger than themselves when something larger requires it.
Barnard knew this. He knew it from decades of running organizations through crises and transformations. He knew it from the daily, unglamorous, morally demanding work of keeping people working together when it would have been easier for each of them to walk away.
The work has not changed. The stakes have. And the quality of the executive's response to the cooperative challenge will determine whether the extraordinary capability that AI provides is channeled toward human flourishing or dissipated in the absence of the cooperative structures that give capability its direction and its meaning.
-- Edo Segal
Chester Barnard saw that organizations are not machines but cooperative systems -- that authority flows not from the top down but from the willingness of individuals to accept it. AI has disrupted every assumption about cooperation that Barnard identified. When the machine can execute without willingness, without morale, without the fragile consent that makes human organizations function, the question is what happens to the organization itself. Barnard's patterns of thought reveal what no technology analysis can see -- that efficiency without cooperation is not management. It is something else entirely.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Chester Barnard — On AI uses as stepping stones for thinking through the AI revolution.