By Edo Segal
The nine minutes kept haunting me.
I had written an entire book about the AI revolution — about the river of intelligence, the twenty-fold productivity multiplier, the collapse of the imagination-to-artifact ratio. I had celebrated the acceleration. I had warned about the costs. I had built frameworks for navigating the flood.
And then I read Mintzberg's number, and something cracked.
Nine minutes. That is the average duration of a managerial activity. Half of everything a manager does lasts less than nine minutes. Not because managers are undisciplined. Because the role sits at the intersection of every demand the organization generates, and the demands arrive faster than any human can process them.
I knew this in my body before I knew it in my mind. During the Trivandrum sprint I describe in *The Orange Pill*, I watched my own calendar shatter into fragments so small they stopped resembling decisions and started resembling reflexes. Claude had made my team twenty times more productive. Every one of those productivity gains landed on my desk as something requiring evaluation. The prototype was done — now judge it. The analysis was ready — now decide. The options were generated — now choose.
The machine produced faster than I could think. And thinking was the one thing the machine could not do for me.
Mintzberg spent fifty years watching managers the way a naturalist watches animals in the wild. Not surveying them. Not asking them to self-report. Watching. Timing. Cataloging. What he found demolished the fantasy that managers are reflective strategists who spend their mornings on vision and their afternoons on planning. They are conductors of chaos. They work in fragments. They prefer talk to text, action to reflection, the immediate to the important. And no tool in history has ever slowed the fragments down, because every tool that increases capacity is met by a system that generates demands to fill the new capacity.
That pattern — which I am calling Mintzberg's Law in this volume — is the single most important structural insight for anyone trying to lead an organization through the AI transition. The machine does not liberate the manager from the torrent. It accelerates the torrent. And the remedy is not personal discipline. It is organizational structure. Dams built into the system itself, not maintained through heroic effort by the person the system is drowning.
This book is not a summary of Mintzberg. It is a collision between his five decades of empirical observation and the most powerful acceleration of organizational production in history. The collision produces truths that neither discourse — management theory or AI enthusiasm — can reach alone.
The fragments are getting shorter. What you build inside them is the only question that matters.
— Edo Segal × Opus 4.6
Henry Mintzberg (born 1939) is a Canadian academic, management theorist, and Cleghorn Professor of Management Studies at McGill University in Montreal. Trained as a mechanical engineer at McGill and holding a doctorate from the MIT Sloan School of Management, Mintzberg built his career on the direct observation of managerial work, beginning with his landmark 1973 book *The Nature of Managerial Work*, which revealed that managers operate not as reflective planners but through brief, fragmented, action-oriented episodes. His subsequent works — including *The Structuring of Organizations* (1979), *Mintzberg on Management* (1989), *The Rise and Fall of Strategic Planning* (1994), and *Managers Not MBAs* (2004) — developed an integrated body of theory encompassing organizational configurations, emergent strategy, the craft of managing, and a sustained critique of MBA education's overemphasis on analysis at the expense of practice. He coined the concept of "communityship" as an alternative to heroic leadership and authored *Rebalancing Society* (2015), arguing for the restoration of the plural sector alongside public and private institutions. The recipient of numerous honors and among the most cited scholars in management history, Mintzberg remains an active writer and teacher whose empirical insistence on describing what managers actually do — rather than what theory says they should do — has shaped organizational thinking for over half a century.
In 1968, a doctoral student at the MIT Sloan School of Management did something that management researchers almost never did. He followed five chief executives through their working days. He did not survey them. He did not ask them to fill out time-use diaries, those instruments of self-flattering retrospection in which every manager remembers spending more time on strategy than they actually did. He watched them. He timed their activities. He recorded what they did in intervals of minutes, not hours, with the dogged empiricism of a naturalist cataloging the behavior of a species that had never been properly observed in its habitat.
What Henry Mintzberg found demolished the edifice of management theory that had been accumulating, with the confident regularity of quarterly reports, since Henri Fayol first proposed in 1916 that management consisted of five neat functions: planning, organizing, commanding, coordinating, and controlling. The textbooks had elaborated these functions into an entire theology of rational administration. The manager, according to this theology, was a reflective strategist who spent the morning analyzing market data, the afternoon formulating long-range plans, and the evening reviewing organizational performance against carefully calibrated benchmarks. The manager's office was a cathedral of purposeful thought.
Mintzberg's five executives did not inhabit this cathedral. They inhabited something closer to a trading floor during a market crash. The average duration of a managerial activity was nine minutes. Half of all activities lasted less than nine minutes. Only one in ten lasted more than an hour. The work was not sequential but simultaneous, not reflective but reactive, not written but verbal. Managers spent the overwhelming majority of their time talking — on the phone, in scheduled meetings, in unscheduled meetings, in hallway encounters that lasted ninety seconds and resolved more than any memo could. They did not retreat to their offices to think. They were summoned from their offices to act. When they did sit at their desks, the average uninterrupted stretch lasted — this number deserves its own sentence — eight minutes.
The portrait that emerged from *The Nature of Managerial Work*, published in 1973, was not flattering in the way that management theory had trained people to expect flattery. It was honest in the way that field biology is honest. The manager was not a strategist but a conductor — not of a symphony, where every note is written in advance, but of an improvised ensemble where the musicians keep changing, the score keeps shifting, and the audience keeps shouting requests. The work was characterized by brevity, variety, fragmentation, and a strong preference for action over reflection. Managers did not plan and then act. They acted, and the pattern of their actions sometimes, retrospectively, resembled a plan.
This finding was replicated. Not once, not in a single culture or industry, but across decades and continents. Mintzberg and his students and his intellectual descendants observed managers in hospitals, schools, government agencies, technology companies, and manufacturing firms. The details varied. The structure did not. Everywhere, the same brevity. The same fragmentation. The same torrential pace. The same preference for live communication over written analysis. The same inability to spend sustained time on any single activity before the next demand arrived.
The implications were devastating for the rational model of management, and they remain devastating — perhaps more so — in the age of artificial intelligence.
The conventional wisdom about AI and management runs something like this: AI will handle the routine, freeing the manager for the strategic. The machine processes the data, drafts the reports, analyzes the trends, and the liberated manager ascends to a higher cognitive plane where she can finally do the work that the textbooks always said she should be doing — the planning, the visioning, the long-range thinking that fragmentation had previously made impossible. The promise is that AI will resolve the gap between the managerial ideal and the managerial reality. The manager will finally become the reflective strategist she was supposed to be.
Mintzberg's research suggests the opposite.
The gap between the ideal and the reality was never caused by a shortage of analytical tools. Managers in the 1970s were not fragmented because they lacked computing power. They were fragmented because the managerial role sits at the intersection of multiple constituencies, multiple demands, and multiple time horizons, and each constituency generates a flow of requests that the manager must attend to or risk losing the relationships, the information, and the influence on which her effectiveness depends. The fragmentation is not a bug in the managerial role. It is the role. The manager exists precisely to absorb the complexity that would otherwise overwhelm any purely structural system of coordination.
AI does not reduce the number of constituencies. It does not reduce the number of demands. It does not simplify the time horizons. What it does — and what Mintzberg's framework predicts with uncomfortable clarity — is accelerate every dimension of the torrent.
Consider the manager of a technology company in 2026. Before AI, she received perhaps forty emails a day, attended five meetings, fielded a dozen Slack messages, and reviewed two or three documents that her team had produced. Each of these demanded attention. Each was an interruption, or a scheduled block that would itself be interrupted. The fragmentation was already intense.
Now add AI to this picture. Her direct reports are using Claude Code to generate prototypes in hours rather than weeks. Each prototype requires her evaluation. Her marketing team is using AI to produce campaign variations at a pace that turns what used to be a monthly review into a daily decision point. Her finance team is generating scenario analyses that used to take weeks but now arrive before lunch, each with the implicit question: have you reviewed this? Her HR platform is surfacing AI-generated insights about team dynamics that she is expected to act on. The AI tools that were supposed to free her time have instead generated a secondary torrent of outputs, each requiring the exercise of precisely the judgment that only she, as the person who holds the organizational context in her head, can provide.
The production has been automated. The evaluation has been multiplied. And evaluation — the act of assessing whether a machine-generated output is accurate, relevant, aligned with organizational values, and worth acting upon — is in many ways more cognitively demanding than the production it replaced. Production gives you a warm-up. You engage with the material, you work through the logic, you build understanding as you build the artifact. Evaluation asks you to exercise judgment on something you did not build, whose logic you must reconstruct, whose assumptions you must surface, and whose errors you must catch without the benefit of having been present for the process that produced them.
This is the specific cognitive burden of the AI-augmented manager, and Mintzberg's framework explains why it is not a temporary adjustment but a structural feature of the new landscape. The manager's day was already nine-minute fragments. AI has not lengthened the fragments. It has shortened them, because the machine produces outputs faster than any team of humans could, and each output is a fragment demanding attention. The average duration of a managerial activity in the AI-augmented organization is not an empirical finding anyone has yet published. But the direction is clear from the structural logic: more outputs, arriving faster, each requiring evaluation. The fragments get smaller. The torrent accelerates.
Mintzberg observed something else about his five executives that bears directly on the AI question. They showed a marked preference for what he called "current, specific, and ad hoc" information over "historical, aggregated, and routine" information. They wanted the gossip, the anecdote, the phone call from the field representative who had just visited the customer, the hallway whisper about the competitor's latest move. They wanted soft data — the impressions, the feelings, the rumors that never made it into formal reports but that carried the signal of what was actually happening in the organization and its environment.
This preference was not irrational. It was adaptive. The formal reports were already outdated by the time they arrived. The aggregated data concealed the specific signals that mattered. The gossip carried information that the formal systems filtered out — the politics, the morale, the emerging problems that had not yet become data points.
AI excels at processing formal, aggregated, historical data. It excels at pattern recognition across large datasets. It excels at precisely the kind of information processing that Mintzberg's managers consistently deprioritized in favor of current, specific, soft data. The machine's strength is the manager's secondary concern. The manager's primary concern — the real-time, context-rich, politically sensitive, emotionally textured information that flows through conversations and relationships — is the machine's weakness.
This misalignment is structural. It is not a temporary limitation that better AI will resolve. It reflects the fundamental difference between what machines process and what managers need. Machines process data. Managers navigate relationships. Data is an input to managerial work, sometimes an important one. But the work itself — the nine-minute fragments, the hallway conversations, the phone calls that cannot wait, the meetings where what matters most is what is not said — lives in a dimension that data cannot fully capture and algorithms cannot fully address.
None of this means that AI is irrelevant to managerial work. It means that the relevance operates differently than the efficiency narrative suggests. AI does not make management simpler. It makes the simple parts of management faster, which makes the complex parts of management more salient, more demanding, and more time-compressed. The manager who used to spend three hours assembling data for a decision now receives the data in three minutes — and must make the decision in the remaining two hours and fifty-seven minutes that have been "freed." But those freed minutes do not sit empty. They fill instantly with the next fragment, the next demand, the next output requiring evaluation.
The filling is not accidental. It is what Mintzberg's research predicts. The organizational system generates demands at a rate that exceeds any individual's capacity to process them. When capacity increases — through faster tools, better information, more efficient communication — the system responds by generating more demands, not by allowing the manager to rest. The equilibrium point is always overload. The tool changes the throughput. It does not change the equilibrium.
This is why the question "What do managers actually do?" matters more now than when Mintzberg first asked it in the 1960s. Every promise of AI-augmented management depends on an implicit model of what management is. If management is the rational processing of information in service of strategic decisions, then AI transforms it fundamentally, because the machine processes information orders of magnitude faster than any human. If management is the navigation of a complex, fragmented, relationship-intensive, politically charged environment in which information is only one input among many, then AI is a powerful addition to the manager's toolkit — and a powerful accelerator of the demands the manager must navigate.
Mintzberg's research, replicated across half a century, settles the question. Management is the second thing. The nine-minute fragments are not going away. They are getting shorter. And the manager who believes that AI will resolve her fragmentation is making the same error that the textbooks made in the 1960s — mistaking an idealized model of what management should be for an accurate account of what management actually is.
The technology changes. The torrent does not. It simply finds new channels, runs faster, and demands more from the person standing at its center trying to keep the organization from being swept away.
Every communication technology in the history of management was introduced with the same promise. It would save time. It would reduce the burden on the manager. It would streamline the flow of information so that decisions could be made more efficiently and the manager could focus on what mattered most.
The letter was supposed to replace the need for face-to-face meetings across distances. The telephone was supposed to replace the letter, making communication instantaneous. The fax machine was supposed to replace the postal delay for documents. Email was supposed to replace the phone call with something less intrusive, something the recipient could address at her convenience. Instant messaging was supposed to replace email with something faster, more casual, less burdened by the formalities that made email feel like work. Video conferencing was supposed to replace the business trip.
Each technology delivered exactly what it promised at the level of the individual task. The telephone was faster than the letter. Email was less intrusive than the phone. Slack was more casual than email. Each tool made the specific communication it was designed for more efficient.
And each tool, without exception, increased the total volume of communication the manager had to process.
This is not a coincidence. It is a structural pattern, and it operates with the regularity of a physical law. Henry Mintzberg's research across multiple decades documented its mechanism with the granularity of field observation: when you make communication cheaper and faster, people communicate more. They communicate more because the cost of initiating a communication has dropped, so communications that would not have been worth the cost of a letter or a phone call are now worth the cost of a Slack message. The threshold drops. The volume rises. And the manager, sitting at the nexus of the organization's information flows, absorbs the increase.
The pattern deserves a name, because unnamed patterns operate invisibly, and invisible patterns cannot be managed. Call it Mintzberg's Law: the number of interruptions a manager faces grows with the number of tools available for generating communication, and it grows faster than the tool count itself.
The growth is multiplicative, not additive, because each new tool does not merely add its own communication channel. It interacts with every existing channel, creating cross-channel traffic: the email that references the Slack thread, the meeting that was scheduled to discuss the email, the follow-up Slack message about what was decided in the meeting. The channels multiply each other. The manager does not experience them as separate streams but as a single, undifferentiated torrent in which every channel feeds every other channel and the total volume exceeds the sum of the parts.
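The multiplicative claim can be made concrete with simple arithmetic. A toy model, not an empirical formula from Mintzberg's research: treat each tool as one direct channel, and treat every *pair* of tools as a potential source of cross-channel traffic (the email about the Slack thread, the meeting about the email). The function name and the pairwise assumption are illustrative choices, nothing more.

```python
# Toy model of Mintzberg's Law (illustrative assumption, not field data):
# n tools yield n direct channels plus one cross-channel interaction
# for every unordered pair of tools.

def channel_load(n_tools: int) -> int:
    """Direct channels plus pairwise cross-channel interactions."""
    direct = n_tools
    cross = n_tools * (n_tools - 1) // 2  # unordered pairs of tools
    return direct + cross

for n in range(1, 7):
    print(f"{n} tools -> {channel_load(n)} channels of demand")
```

Under this sketch, going from three tools (Mintzberg's 1968 baseline of meetings, phone, and mail) to six roughly triples the channel count, from 6 to 21: each addition costs more than the one before it, which is the sense in which the law outruns the tool count.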
Consider the evidence. Mintzberg's original studies in the late 1960s documented managers using three primary communication channels: face-to-face meetings (scheduled and unscheduled), telephone calls, and written correspondence. The total communication load was already overwhelming — the nine-minute average, the constant interruptions, the inability to sustain any activity for more than a brief stretch. That was the baseline, established with the communication technology of 1968.
By the 1990s, email had been added. Studies of managerial work in that period showed that the number of communications per day had increased dramatically. Managers who had previously received perhaps ten to twenty pieces of written correspondence daily were now receiving fifty, seventy, a hundred emails, each arriving with the expectation of timely response that the postal service had never imposed. The telephone had not been displaced. It had been supplemented. The face-to-face meeting had not been eliminated. It had been augmented by the email thread that preceded it and the email thread that followed it. The load had grown, and the fragments had shortened.
By the 2010s, Slack, Microsoft Teams, and their competitors had been added. The communication load increased again. A 2019 study found that the average knowledge worker checked email or messaging platforms once every six minutes. The interruptions were not only more frequent but more ambient — the persistent notification, the unread message count, the red dot that signaled attention was owed somewhere, constantly, to something. Managers reported spending increasing proportions of their days in communication-related activities and decreasing proportions in what they described as "real work," a distinction that itself reveals the problem: if communication is not real work, then what is the manager's job?
Now, in 2026, AI has entered the communication ecosystem. And AI does not merely add another channel. It transforms the nature of what flows through every existing channel.
The transformation operates through three mechanisms, each of which accelerates the torrent that Mintzberg's Law describes.
The first mechanism is output generation. AI tools produce outputs — drafts, analyses, prototypes, summaries, recommendations — at a pace that no human team can match. Each output is a communication. It arrives in someone's workflow with the implicit demand: review this, approve this, redirect this, or respond to it. A team of five engineers using Claude Code might generate in a day the volume of code that previously took a week, and that code requires architectural review, product evaluation, integration testing — all of which flow upward to the manager. The production was accelerated. The evaluation was not. The asymmetry between the speed of production and the speed of evaluation is the specific bottleneck that AI creates in the managerial role. And the bottleneck is the manager herself.
The second mechanism is channel proliferation. AI does not just produce outputs within existing channels. It creates new channels entirely. The AI assistant that summarizes meetings generates a new document that must be reviewed for accuracy. The AI system that monitors team sentiment generates alerts that must be assessed. The AI tool that drafts customer communications generates versions that must be approved. Each capability is a new channel. Each channel generates traffic. The total communication load increases not additively but multiplicatively, because each new channel interacts with every existing one.
The third mechanism is expectation inflation. When AI makes responses faster, the organizational expectation of response time compresses. If a market analysis that used to take a week now takes an hour, the manager is expected to act on it within the day, not within the month. The strategic decision that used to wait for next quarter's planning cycle now demands attention this week, because the data is available now, and the competitor who also has AI is acting on equivalent data at equivalent speed. The pace of the organization adjusts upward to match the pace of the tool, and the manager, the human being at the center of this accelerating system, must match the pace or be perceived as the bottleneck.
Mintzberg observed in his original studies that managers showed a "preference for action." They chose to act rather than to reflect. They chose the phone call over the report, the meeting over the memo, the immediate response over the delayed analysis. This preference was not a character flaw. It was adaptive behavior in an environment where the cost of delay was often greater than the cost of imperfect action, and where the information available for any given decision was never complete and never would be.
AI intensifies this preference by making action easier and reflection harder. Easier, because the tool can generate the first draft of any action — the response, the plan, the recommendation — in seconds. Harder, because the time that reflection requires is precisely the time that the accelerating torrent of AI-generated outputs is consuming. The manager who pauses to think is the manager whose inbox fills, whose Slack unreads compound, whose team's AI-generated outputs stack up awaiting review. The cost of reflection, measured in accumulating demands, has increased. The cost of action, measured in effort and time, has decreased. The adaptive response, predicted by Mintzberg's research from the 1960s and 1970s, is more action and less reflection.
This is Mintzberg's Law operating at a new scale. The same structural logic that made the telephone increase communication load beyond what the letter permitted now makes AI increase communication load beyond what Slack permitted. The tool is different. The mechanism is identical. And the mechanism is not about the tool. It is about the system — the organizational system that generates demands at a rate proportional to its capacity to process them, and that absorbs every efficiency gain by generating new demands to fill the expanded capacity.
The history of organizational response to communication overload follows its own pattern: each overload is addressed by a tool that promises to manage the overload and that, in managing it, creates the conditions for the next overload. Email overload was addressed by email management tools — filters, folders, priority inboxes — that made email manageable and thereby encouraged more email. Slack overload was addressed by channel management, notification settings, and status signals that made Slack manageable and thereby encouraged more Slack traffic. AI overload will be addressed by AI management tools — AI assistants that filter and prioritize and summarize the outputs of other AI systems — that will make AI manageable and thereby generate a new layer of outputs requiring human evaluation.
The recursive pattern is not incidental. It is the structural consequence of a system in which the equilibrium point is overload. The organization does not have a natural resting state in which communication is sufficient and the manager has slack. The organization has a natural resting state in which every available channel is fully utilized, every available tool is generating outputs at capacity, and the manager is operating at the limit of her cognitive bandwidth. Tools raise the limit. The system fills to the new limit. The equilibrium is restored at a higher throughput and an equivalent level of overload.
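The equilibrium argument can be sketched as a toy simulation. The assumptions here are mine, not Mintzberg's: each new tool multiplies capacity by some gain, and the organization is assumed to refill demand to a fixed fraction above whatever capacity exists. The function name and both parameters (`gain`, `refill`) are hypothetical illustration values.

```python
# Toy simulation of the overload equilibrium (illustrative assumptions,
# not data): demand refills to a fixed fraction above capacity, so each
# tool-driven capacity gain raises throughput without ever lowering the
# overload ratio.

def simulate_overload(capacity: float, tool_upgrades: int,
                      gain: float = 1.5, refill: float = 1.1):
    """Return (capacity, overload_ratio) after each new tool."""
    history = []
    for _ in range(tool_upgrades):
        capacity *= gain            # a new tool raises throughput
        demand = capacity * refill  # the system refills past the limit
        history.append((capacity, demand / capacity))
    return history

for cap, ratio in simulate_overload(100.0, 3):
    print(f"capacity={cap:.1f}  overload ratio={ratio:.2f}")
```

The point of the sketch is that the overload ratio is invariant: capacity climbs with every upgrade, but because demand is generated in proportion to capacity, the manager's experienced overload never improves. Only changing the refill behavior itself, which is to say the structure, changes the equilibrium.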
Mintzberg never stated the law in these terms. He did not need to. His research documented the phenomenon with the precision of observation rather than the abstraction of formulation. But the pattern is visible in every study he conducted and every study his intellectual descendants have replicated since. The manager's day is nine-minute fragments, the manager's preference is for action over reflection, and the total communication load absorbs every efficiency gain the system provides.
The implications for AI are uncomfortable. The technology that promises to resolve the information overload is instead the most powerful accelerator of information overload in the history of management. Not because the technology is poorly designed, but because the organizational system into which the technology is deployed has a structural tendency to absorb capacity gains through demand generation. The problem is not the tool. The problem is the equilibrium. And changing the equilibrium requires changing the structure — the organizational design, the cultural norms, the expectations about response time and communication volume that collectively determine how much torrent the system generates.
Mintzberg spent his career arguing that structural problems require structural solutions. A manager cannot solve a structural problem through personal discipline, any more than a beaver can hold back a river by standing in it. The structure must change. The organizational design must protect the manager's capacity for the work that only humans can do — the judgment, the craft, the presence — by limiting the rate at which the system generates demands on that capacity. Without structural change, Mintzberg's Law operates unchecked. The tools get better. The torrent gets faster. The manager gets more efficient at processing individual fragments and more overwhelmed by their total volume.
The law has no natural limit. It will operate until the structure limits it, or until the manager breaks.
In the vocabulary of management theory, three words compete for the soul of the profession: science, profession, and craft. Henry Mintzberg spent decades arguing that the right word is the third one, and that the confusion between the three has produced a managerial class that is analytically sophisticated and practically impoverished.
The distinction matters. A science is systematic. It proceeds through hypothesis, experiment, and replication. Its knowledge is explicit, transferable, and independent of the person who holds it. The scientific knowledge that water boils at one hundred degrees Celsius at sea level is true regardless of who performs the experiment. A profession, in the strict sociological sense, is defined by a codified body of knowledge, standardized training, and certification that guarantees a baseline of competence. A doctor or a lawyer possesses professional knowledge — knowledge that is learned through a prescribed curriculum, tested through standardized examinations, and applied according to established protocols. Craft is neither. A craft is learned through practice. Its knowledge is tacit — it lives in the hands, in the judgment, in the feel for the material that comes only from having worked with it, repeatedly, over time, under conditions that no textbook can fully specify.
A potter at her wheel does not consult a manual. She adjusts the pressure of her hands based on what the clay is telling her — its moisture, its density, its willingness to hold a form. This knowledge cannot be fully articulated. It can be demonstrated, modeled, discussed. It cannot be reduced to rules. If it could be reduced to rules, a machine could do it. And in fact, machines can produce pots — but the machine-made pot and the handmade pot are different objects, not because one is better than the other in some absolute sense, but because they embody different kinds of knowledge and different relationships between the maker and the made.
Mintzberg argued that managing is pottery, not engineering. The manager works with material — people, processes, politics, culture — that has its own properties, its own resistances, its own tendency to behave in ways that no model fully predicts. The effective manager adjusts constantly, reads the situation in real time, applies judgment that has been built through thousands of prior adjustments, and produces outcomes that are shaped by the interaction between intention and circumstance rather than by the implementation of a plan.
This is not an argument against analytical tools. Mintzberg was careful to distinguish his position from anti-intellectualism. He described the managerial role as a blend of three components: art (vision, creative insight), craft (practical learning from experience), and science (systematic analysis based on evidence). The problem, as he saw it, was not that science and analysis were present in management but that they had become dominant — that business schools, consulting firms, and organizational cultures had elevated the analytical component to the point where it crowded out the craft and the art, producing managers who could analyze brilliantly and manage poorly.
Artificial intelligence represents the apotheosis of the analytical component. The machine analyzes with a speed, scale, and consistency that no human can match. If management were a science, AI would be the ultimate management tool — perhaps the ultimate manager. Feed it the data, specify the objective, and the machine optimizes. The analytical component of management has been, for practical purposes, solved.
But management is not a science. And the analytical component, now that it has been solved, reveals by subtraction the components that remain unsolved and unsolvable by the machine. Those components are craft and art. They are the tacit knowledge, the situational judgment, the feel for organizational dynamics that comes only from having managed — from having made decisions that went wrong and learned from the wreckage, from having read a room correctly when the data said one thing and the politics said another, from having developed over years the capacity to sense when an organization is ready for change and when it will resist.
This knowledge cannot be transferred to a machine because it cannot be fully articulated by the person who holds it. The experienced manager who says "something feels off about this restructuring plan" is not being vague. She is applying a pattern-recognition capability that has been trained by hundreds of prior encounters with organizational change, each of which deposited a layer of understanding too fine-grained to be described in words but real enough to produce a reliable signal when the pattern recurs.
Mintzberg called this "the craft of managing," and he meant the phrase literally. The craft is developed through an apprenticeship — not a formal apprenticeship with a contract and a certificate, but the informal apprenticeship of working alongside experienced managers, absorbing their approach through observation and imitation, and gradually building one's own repertoire of responses to situations that never repeat exactly but rhyme enough for experienced judgment to apply.
AI disrupts this apprenticeship at the precise point where it matters most.
The apprenticeship depends on friction. The junior manager who must wrestle with a budget by hand — entering the numbers, confronting the tradeoffs, feeling the resistance of scarce resources against competing demands — builds understanding of what a budget is and what it does in an organization. The act of wrestling is the learning. The struggle is the curriculum. When AI generates the budget, the output may be superior. The numbers may be more accurate. The tradeoffs may be more clearly articulated. But the junior manager has not wrestled. She has reviewed. And reviewing is not the same cognitive operation as producing, because production forces you to confront the material at the level of the grain — to make decisions about each line, to feel the weight of each tradeoff — while review allows you to skim the surface of decisions that someone (or something) else has made.
The parallel to medical education is instructive. For decades, medical educators debated the role of rote memorization in training doctors. One school of thought argued that memorization was pointless — that doctors should learn to look things up rather than carrying medical knowledge in their heads. The opposing school argued that the act of memorization, the struggle to internalize vast bodies of information, built a cognitive architecture that enabled rapid pattern recognition in clinical settings. The doctor who had memorized the symptoms of a rare disease did not need to look it up when a patient presented with those symptoms. The recognition was instant, integrated, contextual. The knowledge lived in the doctor's perceptual apparatus, not in a reference manual.
AI represents the ultimate reference manual. The machine can look up anything, instantly, with a recall no human memory can match. The argument that managers do not need to build craft because the machine can provide answers on demand follows the same logic as the argument that doctors do not need to memorize because they can look things up. And the flaw in the argument is the same: the knowledge that matters most in practice is the knowledge that has been internalized — absorbed through experience, integrated into perception, available without conscious retrieval — because the situations in which it is needed do not pause for a database query.
The manager in a crisis does not have time to ask Claude for a scenario analysis. She has seconds to read the room, assess the threat, choose a response, and communicate it with the confidence that the organization needs to see in that moment. Her effectiveness depends entirely on the craft she has built — the accumulated residue of every crisis she has navigated, every misjudgment she has made and corrected, every pattern she has recognized and filed away in the non-verbal, non-articulable register of experienced judgment.
Mintzberg's argument about craft also extends to what might be called organizational feel — the capacity to sense the state of an organization the way a sailor senses the state of the sea. The experienced manager knows when morale is eroding before any survey detects it. She reads the hallway conversations, the body language in meetings, the pattern of who speaks and who does not, the subtle shift in energy that precedes a resignation or a rebellion. This reading is not data processing. It is perception, trained by years of attending to the same kinds of signals in the same kinds of environments.
AI cannot replicate this perception, not because the technology is immature but because the perception depends on being present — physically, emotionally, relationally — in the organizational environment. The machine can analyze sentiment in written communications. It can identify patterns in survey data. It can flag anomalies in behavioral metrics. All of these are useful inputs. None of them replaces the experience of walking through an office and feeling, in your bones, that something is wrong.
The most dangerous implication of AI for the craft of managing is not that the machine will replace the craft but that the machine will prevent the craft from developing. If the junior manager never wrestles with the budget, she never builds the financial intuition that would have developed through the struggle. If the junior manager never drafts the strategy herself — never confronts the blank page, never experiences the specific frustration of trying to articulate what the organization should become — she never builds the strategic intuition that comes from having done the thinking, not just reviewed someone else's thinking.
The craft of managing, like every craft, is built through doing. It cannot be built through reviewing the outputs of a machine that did the doing for you. This is not a romantic attachment to manual labor. It is a structural observation about how tacit knowledge is acquired: through the accumulation of experiences that are effortful, specific, and often frustrating — experiences that AI is designed to eliminate.
The managerial profession faces a choice. It can embrace AI as a tool that handles the analytical work while the manager focuses on craft — on the presence, the judgment, the relationship-building that the machine cannot provide. Or it can allow AI to colonize the developmental experiences that produce craft, resulting in a generation of managers who are analytically augmented and practically hollow.
Mintzberg would recognize this choice. He spent his career arguing that the management profession had already made a version of it — had already chosen analysis over craft, science over practice, the MBA over the apprenticeship. AI intensifies the stakes of that choice by making the analytical option overwhelmingly attractive and the craft option harder to justify in the language of efficiency that organizations speak. The budget that AI generates in five minutes is more analytically rigorous than the budget the junior manager would have wrestled with for two days. On every measurable dimension, the AI output wins.
But measurable dimensions do not capture what the wrestling builds. And what the wrestling builds is the manager who, five years from now, can walk into a boardroom and feel, before the first slide is presented, that the strategy being proposed will not survive contact with the organization — and who can explain why in terms that the room can hear, absorb, and act upon. That capacity is craft. It is the most valuable thing a manager possesses. And it is the thing that AI, by its nature, cannot build and by its convenience threatens to prevent.
Henry Mintzberg did not only time the fragments. He categorized them. What emerged from his structured observation of managerial work was a taxonomy of ten roles that managers perform, organized into three clusters: interpersonal, informational, and decisional. The taxonomy was not a prescription — Mintzberg did not say managers should perform these roles — but a description of what managers actually did when the researcher was watching.
The interpersonal cluster contained three roles. The figurehead: the manager as symbol, performing ceremonial duties, signing documents, hosting visitors, representing the organization at events where what mattered was not what was said but who was there saying it. The leader: the manager as motivator, developer, and director of the people who report to her, shaping the culture through hiring, training, encouraging, and occasionally disciplining. The liaison: the manager as network-builder, maintaining a web of relationships outside the vertical chain of command — with peers in other departments, contacts in other organizations, counterparts in the industry — that provided access to information and influence unavailable through formal channels.
The informational cluster contained three more. The monitor: the manager as scanner of the environment, constantly receiving information from every available source — formal reports, gossip, industry contacts, media, the hallway conversation overheard on the way to the elevator. The disseminator: the manager as transmitter of information within the organization, deciding what information gets passed along, to whom, in what form, with what emphasis. The spokesperson: the manager as transmitter of information outside the organization, representing the unit to its external constituencies — the board, the press, the public.
The decisional cluster contained four. The entrepreneur: the manager as initiator of change, scanning the environment for opportunities and launching projects to exploit them. The disturbance handler: the manager as firefighter, responding to crises and conflicts that arrive unbidden and demand immediate attention. The resource allocator: the manager as distributor of the organization's scarce resources — money, time, people, equipment, attention — deciding who gets what and thereby determining what gets done. The negotiator: the manager as participant in significant negotiations with other organizations or constituencies, committing organizational resources in real time.
Ten roles. Three clusters. One person. And the critical insight: these roles were not performed sequentially. They were performed simultaneously, often in the same conversation. A single meeting might require the manager to be a figurehead (opening with ceremonial remarks), a leader (making a staffing decision), a monitor (absorbing new market data), a disseminator (sharing that data with the team), an entrepreneur (proposing a new initiative based on the data), and a resource allocator (deciding which team members would work on the initiative) — all within thirty minutes. The roles were not compartments in the manager's day. They were threads in a fabric, woven together so tightly that pulling on one disturbed all the others.
Artificial intelligence does not perform all ten roles with equal facility. It excels at some, struggles with others, and is structurally incapable of a few. The way AI intersects with Mintzberg's taxonomy reveals, with the precision of a diagnostic instrument, where the human manager becomes more necessary rather than less.
Begin with the informational roles, because they are where AI's impact is most direct and most straightforwardly positive.
The monitor role — scanning the environment for information — is the role AI was born to perform. The machine can monitor more sources, at greater speed, with more consistency, than any human manager. It can process hundreds of industry reports, thousands of social media signals, millions of data points, and extract the patterns that a human scanner would need months to surface. The manager who spent two hours a day reading industry news, reviewing competitive intelligence, and scanning internal reports can now receive a synthesized briefing in minutes that covers more ground with greater accuracy.
The disseminator role — transmitting information within the organization — is similarly transformed. AI can draft the internal communication, prepare the briefing document, summarize the meeting notes, and distribute the relevant information to the relevant people at the relevant time. The labor of information distribution, which consumed significant managerial bandwidth in Mintzberg's original studies, can be substantially automated.
The spokesperson role — representing the organization externally — is partially affected. AI can draft the press release, prepare the investor presentation, and generate talking points. But the act of speaking for the organization — standing before a camera, sitting across a table from a journalist or a regulator, choosing in real time what to say and what to withhold — remains irreducibly human. The spokesperson role requires not just information but judgment about what information serves the organization's interests in a specific context, and the credibility that comes from being a person who can be held accountable for what she says.
The informational roles, then, are being automated in their mechanical dimensions while their judgmental dimensions intensify. The manager spends less time gathering information and more time deciding which information matters, what it means, and who needs to know it. The shift is from acquisition to interpretation, from volume to signal.
Now consider the interpersonal roles, where the picture is entirely different.
The figurehead role requires a body. It requires physical presence at the ceremony, the handshake at the reception, the tour of the facility for the visiting delegation. The figurehead function is symbolic, and symbols require embodiment. A company that sends an AI-generated message to commemorate a milestone has not performed the figurehead function. It has abdicated it. The employee who receives a congratulatory note from an AI knows it was generated by a machine, and the knowledge empties the gesture of the meaning it was supposed to carry. Meaning in organizational life, as in all human life, is transmitted through presence. The figurehead role is resistant to AI not because AI cannot generate appropriate words but because the role is not about words. It is about showing up.
The leader role is more complex. AI can assist with many functions that fall under leadership — scheduling development conversations, generating performance feedback drafts, identifying team members who may need support based on behavioral signals. But the core of the leader role, as Mintzberg observed it, is relational. It is the way the manager's presence affects the energy in a room. The way her attention, directed toward a particular employee at a particular moment, communicates something that no email or AI-generated message can communicate: I see you. Your work matters. You belong here. This relational dimension of leadership is built on accumulated trust — on the history of interactions between the manager and each team member, on the manager's demonstrated willingness to listen, to be wrong, to protect, to challenge. That history cannot be outsourced.
The liaison role — maintaining a network of external relationships — is similarly resistant. The value of a network is not the information it contains but the trust it embodies. The industry contact who shares confidential intelligence with the manager does so because of a relationship built over years of mutual exchange, reciprocal favors, and demonstrated reliability. AI can map the network, suggest contacts to develop, even draft the outreach message. It cannot build the trust on which the network's value depends.
The interpersonal cluster, taken as a whole, is the least affected by AI and the most important for what management becomes in an AI-augmented organization. As the informational roles are partially automated, the manager's time shifts — or should shift — toward the interpersonal work that only a human presence can perform. The manager of the future is less a processor of information and more a builder of relationships, trust, and meaning.
The decisional roles present the most complicated picture.
The entrepreneur role — scanning for opportunities and initiating change — is amplified by AI. The machine can identify market gaps, generate product concepts, model business scenarios, and prototype solutions at a speed that makes entrepreneurial exploration far more accessible. But the final decision — the judgment about which opportunity to pursue, which change to initiate, which bet to place with the organization's scarce resources — remains a human decision, because it depends on the organizational context, the political dynamics, the cultural readiness, and the risk appetite that no machine can fully model. AI expands the menu of options. The manager still orders from the menu.
The disturbance handler role is fascinating in its resistance to automation. Disturbances — crises, conflicts, unexpected threats — are by definition situations that the existing systems and procedures were not designed to handle. If they were expected, they would be routine. The disturbance handler earns her value precisely in the situations where the playbook does not apply, where the data is incomplete, where the emotions are running high, and where the organization needs a human being to stand in the chaos and make a call. AI can assist — it can rapidly surface relevant precedents, model potential outcomes, draft communications — but the crisis itself demands the physical, emotional, interpersonal presence of a person who is willing to absorb the uncertainty and act.
The resource allocator role is the most subtle. AI can optimize resource allocation according to any specified objective function. This is, in fact, one of the earliest and most successful applications of computational methods to management. But the specification of the objective function — the decision about what the organization is optimizing for — is a human decision that reflects values, priorities, and tradeoffs that cannot be reduced to mathematics without losing something essential. When the manager decides to allocate resources to a project with uncertain returns because she believes in the team, or because the project serves a strategic purpose that the financial model does not capture, or because the organization needs a win and this is the project most likely to produce one — those decisions are craft, not calculation.
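The division of labor in the resource allocator role can be made concrete with a toy sketch. Everything here is invented for illustration (the project names, costs, and weights correspond to no real planning tool): the point is that the allocation step is mechanical once an objective function is written down, and that writing it down is the human value judgment.

```python
# Toy illustration: the optimizer is mechanical; the objective is a choice.
# All project data and weights below are invented placeholders.

def allocate(projects, budget, objective):
    """Greedily fund projects ranked by the supplied objective function."""
    funded = []
    for p in sorted(projects, key=objective, reverse=True):
        if p["cost"] <= budget:
            funded.append(p["name"])
            budget -= p["cost"]
    return funded

projects = [
    {"name": "cost-cutting", "cost": 40, "roi": 0.30, "strategic": 0.1},
    {"name": "new-market",   "cost": 60, "roi": 0.10, "strategic": 0.9},
    {"name": "team-morale",  "cost": 20, "roi": 0.05, "strategic": 0.6},
]

# Objective A: pure financial return.
print(allocate(projects, budget=80, objective=lambda p: p["roi"]))
# -> ['cost-cutting', 'team-morale']

# Objective B: blend return with strategic value. A different, equally
# "optimal" allocation falls out of a different human value judgment.
print(allocate(projects, budget=80,
               objective=lambda p: 0.3 * p["roi"] + 0.7 * p["strategic"]))
# -> ['new-market', 'team-morale']
```

The machine executes either objective flawlessly. Choosing between them is exactly the judgment the paragraph above describes: craft, not calculation.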
The negotiator role remains stubbornly human. Negotiation is a real-time, relationship-dependent, context-sensitive activity in which reading the other party — their interests, their constraints, their emotional state, their willingness to move — is at least as important as the analytical preparation. AI can prepare the negotiation brief. It cannot sit across the table and sense that the counterpart is about to walk away, or that a concession on a secondary issue would unlock the deal, or that the stated position conceals an unstated need that, if addressed, would change the entire dynamic.
The ten roles, viewed through the lens of AI, reveal a reweighting. The informational roles lose weight as the machine absorbs their mechanical dimensions. The interpersonal roles gain weight as the qualities they require — presence, trust, embodiment, relational authenticity — become scarcer and more valuable relative to the analytical capabilities that AI provides in abundance. The decisional roles are complexly affected: the information supporting decisions improves dramatically, but the volume of decisions multiplies, and the most consequential decisions continue to depend on the kind of contextual, political, and emotional judgment that resists formalization.
What emerges is a portrait of the post-AI manager that looks nothing like the rational analyst of the textbooks and everything like the figure Mintzberg observed fifty years ago — fragmented, verbal, action-oriented, relationship-dependent — but with the relative emphasis shifted decisively toward the interpersonal. The manager of the future is less information processor, more presence. Less analyst, more reader of rooms. Less producer of outputs, more builder of the conditions under which others produce. Whether the management profession — trained for decades in the analytical paradigm, rewarded for decades by the analytical paradigm, selected for decades by the analytical paradigm — is prepared for this shift is the open question that the rest of this book will address.
The most seductive idea in the current discourse about artificial intelligence and management is that the manager's job is to build structures — dams, in the metaphor that has gained currency — that direct the flow of AI-generated capability toward productive ends. The manager studies the torrent. She identifies leverage points. She places structures that redirect the current, protect her team from the flood, and create the still water in which reflection, creativity, and judgment can develop. The metaphor is compelling because it gives the manager agency. She is not a passive recipient of technological change. She is an architect of the conditions under which change becomes productive rather than destructive.
Henry Mintzberg's research does not reject this idea. It stress-tests it against the actual texture of managerial life, and the stress test reveals a paradox that the metaphor, in its elegance, conceals.
The paradox is this: the act of building the dam is itself subject to the same torrent of fragmentation that the dam is meant to contain.
Return to the empirical baseline. The manager's day consists of nine-minute fragments. Half of all activities last less than nine minutes. The work is reactive, verbal, interrupted. The manager does not choose her agenda. The agenda chooses her — or rather, the agenda is an emergent property of the demands flowing into her day from every constituency the organization touches. Subordinates need decisions. Superiors need information. Peers need coordination. Clients need attention. The system generates demands at a rate that consistently exceeds the manager's capacity to process them, and the equilibrium point, as the previous chapter established, is always overload.
Now ask: when does the manager build the dam?
The dam-building work is, by definition, strategic. It requires stepping back from the operational torrent to see the pattern of the flow. It requires identifying where the current is strongest and where a structure would have the most effect. It requires designing the structure — deciding what norms to establish, what processes to protect, what expectations to set, what boundaries to enforce. And it requires implementation, which in organizational life means communication, persuasion, negotiation, and the sustained exercise of authority over time.
Every one of these activities requires sustained attention. Every one of them takes longer than nine minutes. Every one of them will be interrupted by the very torrent it is designed to address.
Mintzberg documented this dynamic decades before AI made it acute. He observed that managers consistently reported wanting to spend more time on strategic, long-term, reflective work. They consistently failed to do so. The failure was not a failure of discipline or intelligence. It was structural. The operational demands were immediate, concrete, and attached to real people — the direct report standing in the doorway, the client on the phone, the email from the CEO requiring a response before noon. The strategic work was important but not urgent, abstract rather than concrete, and attached to no specific person demanding attention at this specific moment. In the competition for the manager's limited attention, the immediate always won. Not because the manager chose poorly, but because the organizational system rewarded immediate responsiveness and penalized strategic withdrawal.
AI amplifies both sides of this competition. On the strategic side, AI gives the manager unprecedented capability for structural design. She can model organizational workflows, simulate the effects of process changes, generate policy frameworks, and draft communication plans with a speed and sophistication that would have required a team of consultants a generation ago. The capability for dam-building has never been greater.
On the operational side, AI accelerates the torrent. More outputs require evaluation. More channels generate interruptions. More possibilities demand decisions. The machine that was supposed to free time for strategy has instead filled the freed time with the secondary demands that the machine's own outputs create. The manager's inbox, as demonstrated in the previous chapters, does not lighten. It densifies.
The paradox operates at a deeper level than simple time competition. It operates at the level of organizational expectations. When AI makes the manager more efficient at processing operational demands, the organization does not respond by saying "wonderful, now she has time for strategy." The organization responds by increasing the volume of operational demands to match the new capacity. This is not conspiracy. It is systems dynamics. The organization is a system that generates work at a rate proportional to the capacity available to absorb it. When the capacity increases, the system equilibrates by generating more work. The manager who processes forty emails an hour instead of twenty does not finish her inbox in half the time. She receives eighty emails an hour, because the people sending emails have adjusted their behavior to the new throughput.
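The equilibration can be sketched as a toy feedback loop. This is an assumption-laden illustration rather than an empirical model: senders are assumed to drift their volume toward whatever throughput the manager demonstrates, plus a margin, which is enough to reproduce the pattern in which added capacity never shortens the day.

```python
# Toy feedback model (an assumption, not data): demand adjusts toward
# observed throughput, so inflow settles slightly above capacity no
# matter how large capacity becomes.

def steady_state_inflow(capacity, rounds=50, adjustment=0.5):
    """Simulate senders adjusting their volume toward observed throughput."""
    inflow = capacity / 2                      # start comfortably below capacity
    for _ in range(rounds):
        processed = min(inflow, capacity)      # the manager's fixed ceiling
        # Senders ratchet toward what gets absorbed, plus a 10% margin.
        inflow += adjustment * (processed * 1.1 - inflow)
    return inflow

print(steady_state_inflow(capacity=20))  # settles near 22: above capacity
print(steady_state_inflow(capacity=40))  # settles near 44: doubled capacity, doubled demand
```

Doubling the manager's processing capacity doubles the steady-state inflow; the overload margin is preserved, which is the email arithmetic of the paragraph above.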
This dynamic was observable before AI. Mintzberg documented it in the 1970s, noting that managers' days were packed regardless of the efficiency tools available to them. The executives he observed in 1968 had fewer communication tools than any modern intern, yet their days were just as fragmented, just as overloaded, just as dominated by immediate demands at the expense of strategic reflection. The overload was not a function of the tools. It was a function of the role — a role defined by its position at the intersection of every organizational flow.
What makes the AI era different is the magnitude of the asymmetry between the machine's production speed and the manager's evaluation speed. Previous tools accelerated communication. AI accelerates production. The manager is no longer merely receiving more messages. She is receiving more artifacts — prototypes, analyses, strategic options, creative concepts, reorganization plans — each of which requires a qualitatively different and more demanding kind of attention than a message. Evaluating a message takes seconds. Evaluating a prototype takes judgment. The distinction matters because judgment, unlike information processing, does not scale with better tools. It scales with experience, and experience accumulates at a fixed rate regardless of how fast the machine produces.
The asymmetry creates what might be called an evaluation bottleneck — a point in the organizational system where the flow of machine-generated outputs converges on the human capacity for judgment and stalls. The bottleneck is always the manager, because the manager is the person who holds the organizational context, the strategic intent, and the authority to approve or redirect. She cannot delegate this judgment without delegating the authority, and she cannot delegate the authority without ceasing to manage. The evaluation bottleneck is not a temporary inefficiency. It is the structural consequence of introducing a production technology that operates at machine speed into an organizational system that is governed at human speed.
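The arithmetic of the bottleneck is simple queue arithmetic. In this minimal sketch (the rates are illustrative, not measured), whenever the machine's production rate exceeds the human evaluation rate, the backlog does not plateau: it grows without bound.

```python
# Minimal queue sketch of the evaluation bottleneck. Rates are invented
# for illustration: artifacts arrive at machine speed and are judged at
# human speed.

def backlog_after(hours, production_per_hour, evaluation_per_hour):
    """Track the unevaluated backlog hour by hour."""
    backlog = 0
    history = []
    for _ in range(hours):
        backlog += production_per_hour                 # machine-speed inflow
        backlog -= min(backlog, evaluation_per_hour)   # human-speed outflow
        history.append(backlog)
    return history

# Production 12 artifacts/hour against a judgment capacity of 5/hour:
print(backlog_after(8, 12, 5))   # grows by 7 every hour, without bound
# Production below capacity clears completely:
print(backlog_after(8, 4, 5))    # stays at zero
```

No improvement in evaluation tooling changes the shape of the first curve so long as production outpaces judgment; only lowering the inflow or raising the judgment rate does, and the text argues the latter is fixed by experience.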
The dam builder's paradox, then, is not merely that the manager lacks time to build dams. It is that the conditions that make dams necessary are the same conditions that make dam-building impossible. The torrent that requires structural containment is the torrent that prevents the manager from stepping back long enough to design the containment. The more AI accelerates the flow, the more urgently the organization needs structural intervention, and the less likely the manager is to have the sustained attention required to provide it.
This is not a counsel of despair. It is a structural diagnosis that points toward a structural remedy. If the individual manager cannot build the dam because the torrent prevents it, then the dam must be built into the organizational structure itself — not as a personal practice that the manager maintains through heroic discipline, but as a feature of the organizational design that operates regardless of any individual manager's capacity.
Mintzberg's work on organizational structure provides the vocabulary for this remedy. He described five basic coordination mechanisms: mutual adjustment (informal communication), direct supervision (one person directing others), standardization of work processes (specifying how work should be done), standardization of outputs (specifying what results should be achieved), and standardization of skills (specifying what training is required). Each mechanism coordinates human activity at a different level of organizational complexity, and each produces a different set of strengths and pathologies.
The structural dam against AI-generated overload requires, in Mintzberg's terms, the standardization of work processes around the interface between human judgment and machine output. Not the standardization of the judgment itself — that would defeat the purpose — but the standardization of the process by which machine outputs reach the manager. How many outputs per day does the manager evaluate? In what form are they presented? At what stage of completion must they be before they are submitted for evaluation? What criteria determine whether a machine output requires human review at all?
These are not questions of personal productivity. They are questions of organizational design. They determine the rate at which the system generates demands on the manager's judgment capacity, and they can be set structurally rather than left to the individual manager's discretion. An organization that does not set these parameters structurally leaves the rate of demand generation to the machine's production capacity, which is effectively unlimited. The result is the paradox: the manager drowns in the outputs of the tool that was supposed to help her swim.
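What it means to set these parameters structurally can be sketched as a routing rule. Every threshold below is an invented placeholder, not a recommendation: the point is that the rule is organizational design, operating the same way regardless of which manager sits behind it.

```python
# Sketch of the structural dam: invented thresholds standing in for
# whatever an organization would actually choose.

DAILY_REVIEW_BUDGET = 10     # max machine outputs reaching the manager per day
MIN_COMPLETION = 0.8         # required stage of completion before submission
AUTO_APPROVE_BELOW = 1_000   # spend threshold under which no review is needed

def route(output, reviewed_today):
    """Decide structurally where a machine output goes."""
    if output["spend"] < AUTO_APPROVE_BELOW:
        return "auto-approve"          # judgment is not demanded at all
    if output["completion"] < MIN_COMPLETION:
        return "return-to-team"        # not finished enough to consume judgment
    if reviewed_today >= DAILY_REVIEW_BUDGET:
        return "queue-for-tomorrow"    # the dam: rate-limit the inflow
    return "manager-review"

print(route({"spend": 500,  "completion": 0.3}, reviewed_today=0))   # auto-approve
print(route({"spend": 5000, "completion": 0.5}, reviewed_today=0))   # return-to-team
print(route({"spend": 5000, "completion": 0.9}, reviewed_today=10))  # queue-for-tomorrow
print(route({"spend": 5000, "completion": 0.9}, reviewed_today=3))   # manager-review
```

The judgment itself is untouched; only the rate and readiness of what reaches it are standardized, which is the distinction the paragraph above draws.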
The organizations that navigate this paradox successfully will be the ones that recognize a counterintuitive principle: in an age of unlimited machine production, the most valuable organizational resource is not the machine's output but the manager's judgment, and the most important organizational design decision is how to protect that judgment from the flood of outputs that the machine can generate. The dam must be built not by the manager but around the manager — not as a personal discipline but as an institutional structure that limits the rate at which demands reach the human layer.
This is the structural equivalent of what the labor movement achieved a century ago when it built the eight-hour day. The factory could run twenty-four hours. The workers could not. The solution was not to train workers to work longer but to design the system so that it demanded less than their maximum capacity, preserving the human resource for sustained output rather than burning it out in pursuit of machine-speed throughput.
The AI-augmented organization needs its equivalent of the eight-hour day — not a literal time limit, but a structural limit on the rate at which the system demands human judgment. Without that limit, the paradox governs. The river rises faster than the dam can be built, because the builder and the river occupy the same space, and the builder's hands are already full of water.
Mintzberg would recognize this argument as an extension of his career-long insistence that structural problems require structural solutions. The manager who believes she can solve a structural overload through personal productivity techniques — better time management, sharper prioritization, more disciplined use of AI tools — is making the error that management theory has always made: attributing to the individual a problem that belongs to the system. The system generates the overload. Only the system can limit it. And designing the system to limit it is not the manager's job alone. It is the job of the organization's architects — the people who design the structures within which management happens.
The paradox does not resolve. The tension between the need for structural containment and the difficulty of creating it under conditions of operational overload is permanent. But naming the paradox is the first step toward addressing it, and Mintzberg's empirical tradition — the insistence on describing what actually happens rather than what should happen — provides the diagnostic precision that the remedy requires. The manager cannot build the dam alone. The organization must build it around her. And the design of that organizational structure — the new coordination mechanisms required by the age of machine-speed production — is the most important management question of the present moment.
The most widely cited article in the history of the Harvard Business Review on the subject of strategy is not a piece about competitive advantage, core competencies, or the five forces that shape industry structure. It is Henry Mintzberg's 1987 article "Crafting Strategy," and its central claim is one that the strategic planning industry spent the subsequent decades trying to ignore: strategy is not a plan. It is a pattern.
The distinction is not semantic. It is structural, and it carries consequences that become acute in an age when artificial intelligence can generate plans at a speed and scale that no human strategist can match.
A plan is deliberate. It is formulated before action, by people who analyze the environment, assess the organization's capabilities, and design a course of action intended to achieve specified objectives. The plan exists as a document, a presentation, a set of goals and milestones that the organization is supposed to implement. Strategic planning, as it developed in the twentieth century, was the institutionalization of this approach — the formal, periodic, analytically rigorous process of producing plans that would guide organizational action.
Mintzberg's research showed that this is not how strategy actually forms.
What he observed, across organizations of every type and size, was that the strategies that actually guided organizational behavior were frequently not the strategies that had been planned. Plans were produced. Some of them were implemented. Many were not. The strategies that endured — the patterns of action that gave the organization its distinctive direction — emerged from the accumulated decisions of people throughout the organization, decisions made under pressure, in response to unexpected events, based on incomplete information, and shaped by the specific circumstances of the moment. The pattern became visible only in retrospect, when someone looked back and said: "Ah, that is what we were doing."
Mintzberg called this emergent strategy, and he distinguished it from deliberate strategy — the strategy that was planned and then implemented as planned. His research suggested that most real organizational strategy was a blend of the two: partly deliberate (some elements of the plan survived implementation) and partly emergent (other elements arose from the ongoing stream of decisions that no plan had anticipated). The ratio varied across organizations and over time, but the emergent component was always present and usually dominant.
The metaphor he used was the potter at her wheel. The potter begins with a general intention. She means to make a bowl, roughly this size, roughly this shape. But the final form emerges through the interaction between her intention and the clay — its moisture, its texture, its willingness to hold the form she is pressing upon it. The bowl that results is neither purely deliberate (the potter did not know its exact final form in advance) nor purely accidental (the potter's intention and skill shaped every moment of its creation). It is crafted — the product of a conversation between intention and circumstance.
This understanding of strategy has specific implications for the AI era, implications that follow from the structural properties of the technology rather than from speculation about its potential.
AI excels at deliberate strategy. The machine can analyze market data, model competitive scenarios, assess organizational capabilities, and generate strategic plans with a thoroughness and speed that make the strategic planning departments of the twentieth century look like monks copying manuscripts by candlelight. A CEO who asks Claude for a three-year strategic plan will receive, in minutes, a document that incorporates industry analysis, competitive positioning, financial projections, risk assessment, and implementation milestones — a document that would have taken McKinsey three months and seven figures to produce.
The document may be excellent. The analysis may be rigorous. The plan may be coherent. And none of this matters as much as it appears to, because the plan is the easy part. The plan was always the easy part. The hard part was never formulating the strategy. The hard part was discovering whether the strategy was right — a discovery that could only be made through implementation, through the messy, unpredictable, politically charged process of doing things, learning from the results, and adjusting.
Emergence requires doing. It requires the organization to act, to test its assumptions against reality, to encounter the unexpected, and to adapt. The pattern forms through a sequence of actions and reactions that cannot be compressed into a planning exercise, however sophisticated, because the essential ingredient is contact with reality — with customers who do not behave as the model predicted, with competitors who respond in ways the analysis did not anticipate, with internal capabilities that are stronger or weaker than the assessment suggested, with market conditions that shift between the time the plan was produced and the time it was implemented.
AI cannot provide this contact. It can simulate it. It can model scenarios, run sensitivity analyses, stress-test assumptions. But simulation is not experience. The model of the customer is not the customer. The model of the competitor is not the competitor. And the distance between the model and the reality is where strategy actually lives — in the gap between what was planned and what happened, in the adjustments that the organization makes when reality refuses to conform to the plan.
This gap has always existed. Mintzberg spent decades documenting it. What AI changes is not the gap but the temptation to ignore it. When the plan is produced in three minutes rather than three months, when it is analytically rigorous and beautifully formatted, when it arrives with the cognitive authority of a system that has processed more data than any human team could review in a year, the temptation to trust the plan is overwhelming. The plan looks right. It sounds right. It covers every contingency that the model could anticipate. The manager who questions it looks like a Luddite — obstructing progress with irrational doubt, clinging to intuition when the data is clear.
But the data is never clear in the way that a model suggests. Mintzberg's most fundamental insight about strategy was that the analytical clarity of a plan is inversely related to its contact with reality. The more rigorous the analysis, the more it operates on the data that was available at the time of analysis — data that is, by definition, historical. The plan tells you where the world was. It does not tell you where the world is going, because the future is shaped by the actions of other agents who are also adapting, also responding, also failing to follow their own plans. The future is emergent. And a plan, however rigorous, cannot capture emergence.
Mintzberg's alternative to the strategic plan was strategic learning — the organizational capacity to act, learn, and adapt in real time. Strategic learning requires structures that enable experimentation: small bets that test assumptions cheaply, feedback loops that transmit the results quickly, decision-making authority distributed close enough to the front line that adjustments can be made without waiting for the plan to be officially revised. Strategic learning is messy. It is politically charged, because the results of experiments create winners and losers within the organization. It is inefficient by the standards of analytical planning, because much of what is tried does not work.
It is also how strategy actually forms.
AI can support strategic learning if it is used as a tool for rapid experimentation rather than as a substitute for it. The machine can prototype quickly, test assumptions against available data, identify the experiments most likely to produce useful information, and synthesize the results of experiments into patterns that the organization can recognize and act upon. Used this way, AI accelerates the emergence of strategy rather than replacing it with plans.
But this use of AI requires a managerial orientation that is relatively rare in practice: the willingness to act before the analysis is complete, to learn from failure, to recognize patterns in messy data, and to adjust course based on what the organization is discovering through its actions rather than what the plan says it should be doing. This orientation is craft. It is the strategic equivalent of the potter feeling the clay — sensing the resistance, adjusting the pressure, allowing the form to emerge from the interaction rather than imposing a predetermined shape.
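The rhythm of strategic learning described above — small bets, fast feedback, keep what reality confirms, drop what it refutes — can be sketched as a loop. This is an illustrative thought-experiment, not an implementation; the function names are invented, and `run_experiment` stands in for the one thing no model can supply, contact with reality:

```python
def strategic_learning_loop(assumptions, run_experiment, budget):
    """Illustrative sketch of Mintzberg-style strategic learning:
    the strategy is the pattern that survives contact with reality,
    visible only in retrospect."""
    pattern = []  # the emergent strategy, accumulated from what worked
    for assumption in assumptions:
        if budget <= 0:
            break                              # small bets stop when the budget is spent
        result = run_experiment(assumption)    # a cheap test against reality
        budget -= result["cost"]
        if result["supported"]:
            pattern.append(assumption)         # keep what reality confirms
        # failed assumptions are dropped, not defended
    return pattern
```

Note what the loop cannot do: generate the `run_experiment` step. AI can accelerate every other line — proposing the assumptions, synthesizing the results into a pattern — but the encounter itself must happen in the world.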
The organizations most likely to develop this orientation are the ones Mintzberg described as adhocracies — loosely structured, project-based, relying on the mutual adjustment of skilled professionals rather than on formal plans and standardized procedures. The adhocracy tolerates ambiguity. It expects failure. It trusts the judgment of the people closest to the problem. And it treats strategy as something that is discovered through action rather than designed through analysis.
The organizations least likely to develop this orientation are the ones that have invested most heavily in strategic planning — the machine bureaucracies and the professional bureaucracies where the plan is sacred, the deviation from the plan is pathological, and the suggestion that the plan might be wrong is career-threatening. These organizations will adopt AI as a planning tool, produce plans of extraordinary analytical sophistication, and mistake the sophistication of the plan for the quality of the strategy.
The distinction matters now more than it did when Mintzberg first drew it, because the machine has made planning so easy and so impressive that the difference between a plan and a strategy is harder to see. A three-year strategic plan generated by AI in an afternoon has the appearance of strategic thinking without the substance of strategic learning. It is smooth where real strategy is rough. It is complete where real strategy is partial. It is confident where real strategy is provisional. And its smoothness, completeness, and confidence are precisely the qualities that make it dangerous — not because the analysis is wrong, but because the analysis provides a false sense of certainty about a future that is, by nature, uncertain.
Mintzberg would recognize this danger. He spent a career arguing against the fallacy of formal planning, against the belief that analysis could substitute for learning, against the institutional tendency to mistake the plan for the thing the plan was supposed to represent. AI, in the domain of strategy, is the most powerful instantiation of this fallacy that has ever existed — a machine that can produce plans so convincing that the organization forgets it still needs to learn.
The strategy is not the plan. The strategy is the pattern that emerges when the organization encounters reality, learns from the encounter, and adapts. AI can accelerate every element of this process except the encounter itself. And the encounter — the messy, surprising, humbling contact with a world that refuses to behave as the model predicted — is where strategy lives.
Henry Mintzberg proposed that organizations do not vary along a single spectrum from simple to complex. They cluster into distinct configurations, each representing a different solution to the fundamental problem of coordination — the problem of ensuring that the activities of many individuals serve a common purpose. The configurations are not arbitrary types imposed by the theorist. They emerge from the interaction of the organization's structural elements: the operating core (the people who do the basic work), the strategic apex (the senior managers), the middle line (the managers between the apex and the core), the technostructure (the analysts who design systems and standards), and the support staff (the people who provide indirect services).
Each configuration is defined by the coordination mechanism that dominates. The mechanism determines how the organization holds itself together, how it ensures that the work of one person meshes with the work of another, and how it responds when the environment shifts.
Mintzberg identified five primary configurations, and each faces AI differently — not as a single force applied uniformly across all organizations, but as a specific challenge to the coordination mechanism on which each configuration depends.
The entrepreneurial organization is the simplest. It is held together by direct supervision: one person at the top directs everyone else. The structure is flat, the hierarchy minimal, the formalization almost nonexistent. Startups, small businesses, and turnaround operations typically exhibit this configuration. The entrepreneur sees the environment, decides what to do, and tells people to do it. The coordination mechanism is the founder's brain.
AI amplifies this configuration dramatically. The entrepreneur who can now use Claude Code to prototype, analyze markets, generate financial models, and draft communications has been given an amplification of personal capability that Mintzberg's framework would recognize as an extension of direct supervision through technological leverage. The solo founder with AI is the entrepreneurial organization taken to its logical extreme — one person directing not other people but machines, with the coordination mechanism remaining direct supervision, now exercised over artificial agents rather than human subordinates.
The risk, which Mintzberg's framework identifies clearly, is the same risk that the entrepreneurial organization always faces: the vulnerability of dependence on a single person. The entrepreneur's judgment is the organization's strategy, the organization's culture, the organization's quality control, and the organization's error-correction mechanism all at once. When AI amplifies this individual, it amplifies both the capability and the single point of failure. The brilliant decision reaches further. The blind spot also reaches further. And the organization has no structural mechanism for catching the error, because the coordination mechanism — direct supervision — runs in only one direction.
The machine bureaucracy is coordinated through standardization of work processes. The analysts in the technostructure design the processes, the workers in the operating core execute them, and the middle line ensures compliance. This is the factory, the call center, the fast-food chain, the government agency that processes applications according to defined procedures. Efficiency is the dominant value. Predictability is the goal. Deviation from the standard is pathological.
AI does not merely improve the machine bureaucracy. It threatens to subsume it. The machine bureaucracy exists because human coordination of repetitive work is imperfect — workers deviate, supervisors overlook, standards drift. AI can standardize more perfectly than any human system. It can monitor compliance in real time, adjust processes dynamically, and execute standard procedures with zero deviation. The logical endpoint of AI applied to the machine bureaucracy is the elimination of the human operating core entirely — a machine bureaucracy without the bureaucracy, staffed by machines that do not deviate, do not resist, and do not organize unions.
This is the configuration where AI's impact on employment is most direct and most severe. The machine bureaucracy employed millions of people in standardized roles — roles defined by the process rather than by the person performing it. When the process can be executed by AI, the role disappears. Not gradually, not after a transition period, but as soon as the AI system proves more reliable than the human worker at executing the standard procedure.
The professional bureaucracy is coordinated through standardization of skills. The professionals — doctors, lawyers, professors, accountants — are trained to a standard, certified to a standard, and then given autonomy to apply their standardized skills to specific cases. The organization coordinates by ensuring that each professional possesses the same skill set, then trusting each professional to exercise judgment within the domain that the skill set defines.
AI challenges this configuration at both of its structural pillars.
The first pillar — the standardized skill — becomes less distinctive when the machine can perform the standard procedures. The lawyer who spent three years learning to draft a contract finds that AI can draft competent contracts. The accountant who spent years mastering the tax code finds that AI can prepare returns with greater accuracy. The radiologist who spent a decade learning to read images finds that AI matches diagnostic accuracy. In each case, the professional's standardized skill, the foundation on which the professional bureaucracy rests, is replicated by a system that does not require years of training, does not demand a salary, and does not take vacations.
The second pillar — professional autonomy — is challenged from a different direction. When AI can evaluate the professional's decisions against the standard with greater consistency than any human supervisor, the autonomy that defined the professional role contracts. The doctor whose treatment decisions are reviewed by an AI system that flags deviations from evidence-based protocols is less autonomous than the doctor whose decisions were reviewed, sporadically and subjectively, by a department head. The autonomy was justified by the professional's expertise. When the machine possesses equivalent or superior expertise in the standard domain, the justification weakens.
The professional bureaucracy is not destroyed by AI, but it is transformed in a direction that Mintzberg would recognize as a shift toward the machine bureaucracy — more standardization, less autonomy, more oversight, less professional judgment. Whether this shift produces better outcomes for the clients of professional organizations — patients, students, litigants — is an empirical question that depends on the domain. In some domains, more standardization and oversight may reduce errors and improve quality. In others, it may eliminate the professional judgment that allowed adaptation to unique cases, producing a system that handles the average case brilliantly and the unusual case disastrously.
The diversified organization is coordinated through standardization of outputs. The headquarters sets financial targets and performance standards; the divisions pursue those targets through whatever means they choose. This is the multidivisional corporation — General Electric under Jack Welch, Berkshire Hathaway under Warren Buffett — where the center manages by the numbers and the divisions manage by whatever works.
AI transforms the diversified organization by collapsing the information asymmetry between headquarters and divisions. The central office that previously relied on quarterly financial reports to monitor divisional performance can now monitor in real time — tracking not just financial outputs but operational metrics, customer signals, competitive movements, and employee behavior with a granularity that was previously impossible. The temptation, and the structural tendency, is toward centralization: the headquarters that can see everything will try to control everything, because the information that justified divisional autonomy — the argument that the division knows its business better than the center — is undermined by a machine that gives the center equivalent or superior information.
Mintzberg observed that the diversified organization's effectiveness depended on a delicate balance between central control and divisional autonomy. Too much control from the center and the divisions lose the responsiveness that justified their existence. Too much autonomy and the corporation loses the coherence that justified the diversification. AI pushes this balance decisively toward the center, because information is the currency of central control and AI generates information in unlimited quantities. Whether this centralization improves or degrades the organization's performance depends entirely on whether the center's information advantage translates into better decisions — and Mintzberg's research suggests that information advantage alone is a poor predictor of decision quality, because decisions in organizations depend on context, relationships, and judgment that no dataset can fully capture.
The innovative organization — the adhocracy — is coordinated through mutual adjustment. Skilled professionals communicate informally, in real time, adapting their activities to each other as the work progresses. There are no standard procedures, because the work is novel. There are no standard skills, because the work crosses disciplinary boundaries. There is no standardization of outputs, because the outputs are not known in advance. The coordination happens through the ongoing conversation between people working on problems that have never been solved before.
The adhocracy is, of Mintzberg's five configurations, the one most resistant to AI displacement — and, paradoxically, the one most enhanced by AI assistance. The resistance comes from the nature of the coordination mechanism. Mutual adjustment is interpersonal, context-sensitive, real-time, and dependent on the trust that builds between people who have worked together under uncertain conditions. The AI system that can standardize a process or evaluate a professional's decision cannot replicate the conversation between two engineers at a whiteboard, the argument between a designer and a product manager about what the user actually needs, or the moment when someone in the room says something that changes everyone else's understanding of the problem.
The enhancement comes from the reduction in implementation friction. When the adhocracy's members can prototype their ideas in hours rather than weeks, test their assumptions against data in minutes rather than months, and communicate their insights through working artifacts rather than abstract descriptions, the mutual adjustment accelerates. The conversation becomes richer because the participants can show each other what they mean rather than merely telling each other. The cycle of proposal, experiment, feedback, and adaptation — the fundamental rhythm of innovative work — tightens.
The adhocracy, enhanced by AI and coordinated through mutual adjustment, is the organizational configuration best suited to the environment that AI creates — an environment of rapid change, novel problems, and unprecedented capability that requires human judgment to direct. The future organization, across industries, will increasingly resemble the adhocracy, not because every organization will become a creative agency or a research lab, but because the coordination mechanisms that defined the other configurations — direct supervision, standardization of processes, standardization of skills, standardization of outputs — are the mechanisms that AI can replicate or improve upon. Mutual adjustment is the mechanism that remains irreducibly human.
Mintzberg's configurations were not meant to be permanent categories. They were snapshots of organizational tendencies, ideal types that real organizations approximate and blend. But the direction of the drift, in an environment where AI handles standardization and the irreducible human contribution is adaptive, contextual, interpersonal coordination, points clearly toward the adhocracy as the configuration of the future. The organizations that recognize this early — and begin designing for mutual adjustment rather than standardization — will be the ones that thrive in the environment that AI is creating. The ones that double down on standardization, hoping that AI will perfect the machine bureaucracy or the professional bureaucracy, will discover that perfecting a configuration whose coordination mechanism can be automated is a strategy for obsolescence.
The cult of the heroic leader is one of the most persistent and most damaging myths in the history of management. Henry Mintzberg has argued against it for decades — in articles, in books, in blog posts written with the exasperated precision of someone who has watched the same error repeated across half a century of organizational life. The error is the belief that organizations succeed or fail because of the qualities of the person at the top.
The belief is not merely incorrect. It is structurally dangerous, because it concentrates attention, resources, and accountability on a single individual while obscuring the collective processes that actually produce organizational outcomes. The heroic leader gets the credit when things go well and the blame when they go poorly, and in both cases the attribution is wrong — not because the leader is irrelevant, but because the leader's contribution is inseparable from the contributions of hundreds or thousands of other people whose work, judgment, and commitment are the actual substance of the organization's performance.
Mintzberg proposed an alternative to leadership. He called it communityship — a word he coined deliberately, with the awareness that it was awkward in the mouth, because the awkwardness itself made a point. Leadership rolls off the tongue. It sounds natural, inevitable, the obvious thing that organizations need. Communityship does not roll off the tongue. It requires explanation. And the need for explanation is the need for a fundamentally different way of thinking about how organizations work.
Community, in Mintzberg's usage, is not a warm feeling or a corporate value statement or a team-building exercise. It is a structural condition. An organization is a community when its members feel a sense of belonging, when they contribute not because they are incentivized or coerced but because they care about the collective enterprise, when the quality of the work reflects the quality of the relationships between the people doing it, and when the norms of behavior arise from shared commitment rather than from hierarchical authority.
Community is built slowly. It is built through shared experience — through the accumulation of moments in which people work together, struggle together, succeed and fail together, and develop the mutual respect and trust that comes from having navigated difficulty without losing regard for one another. It cannot be decreed. It cannot be engineered. It cannot be produced by a management consulting framework or a corporate mission statement. It arises, when it arises, from the thousands of small interactions that constitute organizational life — the meeting where someone's contribution was acknowledged, the crisis where someone covered for a colleague, the decision where the leader chose fairness over expediency.
AI poses a specific and underexamined threat to community. The threat does not come from the machine itself but from the organizational logic that the machine enables.
Consider the trajectory. Before AI, building a software product required a team. Not because the founder lacked ideas, but because the implementation of those ideas required specialized skills distributed across multiple people — frontend, backend, design, testing, deployment, operations. The team existed because the work required it. And the team, simply by existing, created the conditions for community. People who work together develop relationships. Relationships develop trust. Trust enables the kind of communication — honest, direct, vulnerable — that produces better work than any collection of atomized individuals could produce on their own.
When AI enables the solo builder — the individual who can prototype, test, and deploy an entire product through conversation with a machine — the structural necessity for the team diminishes. The economics are compelling. One person with Claude Code can produce what five people produced before. The arithmetic of headcount reduction is immediate and obvious. The arithmetic of community loss is neither.
What is lost when the team shrinks from ten to two is not just eight salaries. It is the relational density that made the organization a community rather than a contract. It is the diversity of perspective that arose naturally from having ten different people with ten different histories and ten different cognitive styles working on the same problem. It is the mutual development that occurred when senior members mentored junior members, when junior members challenged senior members, when the friction between different approaches produced solutions that no individual would have reached alone. It is the resilience that came from having multiple people who understood the system, so that when one person left — or burned out, or was sick, or had a bad week — the organization could absorb the absence without collapsing.
The solo builder amplified by AI is extraordinarily productive. The solo builder amplified by AI is also extraordinarily fragile. The organization has no redundancy. The knowledge is concentrated in a single point of failure. The quality control is the builder's own judgment, which is subject to the same blind spots and fatigue and biases that afflict every human judgment that operates without the corrective influence of other perspectives.
Mintzberg would not argue against the solo builder's right to exist. He would argue that the solo builder is not an organization, and that the attempt to build organizations on the model of the solo builder — lean teams, minimal headcount, maximum AI augmentation — will produce structures that are efficient and brittle. Efficient because the machine handles the routine. Brittle because the community that absorbs shocks, transmits culture, develops people, and holds itself together when the strategy fails has been optimized away in pursuit of productivity metrics that do not measure what they should.
The cult of individual heroism, which the leadership myth always promoted, finds its most extreme expression in the AI-amplified builder who ships a product in a weekend, generates revenue without employees, and is celebrated in the discourse as the model for the future of work. The celebration is understandable. The capability is real. The efficiency is unprecedented.
But building is not sustaining. Shipping a product is not maintaining it. Generating revenue is not building the relationships with customers, employees, and communities that make the revenue durable. The distinction between building and sustaining maps precisely onto the distinction between leadership and communityship. Leadership builds. Communityship sustains. And the sustaining — the patient, unglamorous, relationship-intensive work of maintaining a community over time — is the work that AI cannot do and that the heroic-builder narrative systematically undervalues.
The challenge for organizations in the AI era is to maintain community under conditions that make community structurally unnecessary in the short term. The machine handles the work that used to require a team. The team was the substrate of community. Remove the team and the community dissolves — not immediately, not visibly, but gradually, as the relational density thins and the shared experience that builds trust is replaced by the transactional efficiency of human-machine interaction.
Mintzberg wrote on his blog that technologies tend to undermine community and encourage individualism. The observation was made about digital communication technologies generally, but it applies with intensified force to AI. Digital communication substituted screen-mediated interaction for face-to-face presence — a substitution that preserved information exchange while eroding the relational texture that face-to-face interaction provides. AI goes further: it substitutes human-machine interaction for human-human interaction, not just in communication but in the productive work itself. The engineer who used to develop her skills through collaboration with other engineers now develops her capabilities through conversation with a machine. The machine is more responsive, more available, more patient, and more knowledgeable than any colleague. It is also, in every dimension that matters for community, absent.
The machine does not care about the engineer's development as a person. It does not notice when she is tired. It does not push back when she is making a decision that her colleagues would recognize as uncharacteristic. It does not provide the social accountability that comes from working within a community of people who know you, who have expectations of you, who will notice if you cut corners or give up on something you should have pursued.
The manager who builds community in the AI era is building the one thing the machine cannot build and the one thing the organization most needs. She is building the substrate of resilience, the network of relationships that will hold the organization together when the AI outputs fail, when the strategy proves wrong, when the market shifts in ways that no model predicted. She is building the environment in which people develop — not just skills, which the machine can teach, but judgment, which can only be developed through the friction of working with other people who see things differently.
Community is the organizational manifestation of what Mintzberg has always valued most: the recognition that organizations are human systems, not mechanical ones, and that the qualities that make them effective — trust, commitment, shared purpose, mutual accountability — are qualities of relationships, not of individuals and not of machines. AI amplifies individual capability. Community amplifies collective wisdom. And the argument of Mintzberg's career — made before AI and made more urgent by it — is that collective wisdom is what organizations actually need.
The manager who understands this will not maximize AI adoption. She will optimize it — deploying the machine where it enhances collective capability and restraining it where it erodes the relational substrate that collective capability depends upon. She will maintain the team even when the arithmetic says the team is unnecessary, because the arithmetic measures productivity and she is measuring something the arithmetic cannot see: the quality of the community in which the productivity occurs.
That measurement — invisible, uncapturable by metrics, evident only to those who know what a healthy organizational community feels like — is the manager's craft. And it is the craft that matters most now, when the machine can do everything else.
In 1908, Harvard University established the first Master of Business Administration program with a class of eighty students. The curriculum would come to be built around the case method that the law school had pioneered, and the premise was elegant: management could be taught the way law was taught, through the systematic analysis of real situations reduced to written cases that students would dissect, debate, and resolve in the classroom. The student who had analyzed enough cases would develop the analytical reflexes needed to manage in the real world.
By the close of the twentieth century, the MBA had become the dominant credential in management. More than a hundred thousand MBA degrees were awarded annually in the United States alone. The curriculum had expanded from its origins in accounting and finance to encompass marketing, operations, strategy, organizational behavior, and — in progressive programs — leadership, ethics, and entrepreneurship. The case method remained the pedagogical spine. The analytical orientation remained the intellectual core. And the implicit promise remained the same: that the practice of management could be learned through the analysis of management situations, conducted at a remove from the situations themselves.
Henry Mintzberg spent a significant portion of his career arguing that this promise was false — not approximately false, not false in its details while correct in its essentials, but fundamentally, structurally false in a way that had consequences for every organization that hired an MBA and expected a manager.
The argument was laid out most fully in his 2004 book *Managers Not MBAs*, but it had been building for decades before that. The core of the argument was a distinction between analysis and synthesis, between knowing about management and knowing how to manage, between the cognitive skills that a classroom develops and the practical skills that only experience provides.
Analysis decomposes. It takes a complex situation and breaks it into components that can be examined individually — the financial component, the marketing component, the competitive component, the organizational component. The case method is an analytical exercise: here is a situation, decompose it, identify the problem, propose a solution. The student who excels at case analysis is the student who can break things apart most quickly and most cleverly.
Management requires synthesis. It requires taking the components that analysis has identified and putting them back together into a coherent course of action that accounts for their interactions, their contradictions, and their dependencies on context that the case does not and cannot capture. The manager in the field does not face a case. She faces a situation that is unfolding in real time, that involves real people with real emotions and real political interests, that changes as she acts upon it, and that does not pause while she identifies the relevant framework.
The gap between analysis and synthesis is the gap between the MBA classroom and the managerial reality. Mintzberg argued that the gap was not merely a limitation of the MBA but a systematic distortion — that the MBA produced a particular kind of manager, analytically confident and practically naive, who approached every organizational situation as a problem to be solved through the application of the right framework rather than as a condition to be navigated through judgment, experience, and the capacity to read human situations.
The distortion had consequences. Mintzberg traced a line from MBA-educated managers to a pattern of organizational decisions characterized by excessive abstraction, insufficient attention to implementation, and a preference for dramatic strategic moves over the patient, incremental, relationship-intensive work of building organizational capability. The MBA-educated manager, in his diagnosis, was more likely to restructure than to develop, more likely to acquire than to build, more likely to cut costs than to create value — because these were the moves that case analysis endorsed and that the analytical mindset favored.
Artificial intelligence makes this critique simultaneously more urgent and more actionable.
More urgent, because AI is the ultimate analytical engine. Every analytical skill that the MBA was designed to develop — financial modeling, market analysis, competitive assessment, scenario planning, case decomposition — can now be performed by a machine with greater speed, greater thoroughness, and greater consistency than any human analyst. The MBA's analytical core, the very thing that justified two years of tuition and opportunity cost, has been commoditized.
This is not a marginal observation. It strikes at the economic foundation of management education. If the analytical skills that the MBA develops can be performed by a machine that costs a hundred dollars a month, the value proposition of a degree that costs a hundred and fifty thousand dollars and two years of foregone income collapses — not because the skills are unimportant, but because they are no longer scarce. The premium on analysis evaporates when analysis is abundant. What remains scarce, and therefore what remains premium, is the synthesis, the judgment, the interpersonal skill, and the contextual sensitivity that the MBA was never designed to teach.
Mintzberg designed an alternative. The International Masters Program in Practicing Management, which he created with Jonathan Gosling at Lancaster University, was built on a simple premise that the MBA had always violated: you cannot create a manager in a classroom. You can only develop a manager who is already managing.
The IMPM admitted only practicing managers — people who were already in managerial roles, already navigating the fragmentation, already exercising judgment under conditions of uncertainty. The curriculum was organized not around functional disciplines (finance, marketing, operations) but around managerial mindsets: the reflective mindset, the analytical mindset, the worldly mindset, the collaborative mindset, and the action mindset. Participants spent periods in the classroom and periods back in their organizations, and the two were explicitly connected: the classroom provided frameworks for reflecting on the experience of managing, and the experience of managing provided the material on which the reflection operated.
The critical pedagogical move was reflection on experience rather than analysis of cases. The case is someone else's experience, reduced to a written document, stripped of the context that made it complex, and presented as a puzzle to be solved. Reflection on one's own experience is a fundamentally different cognitive operation — it requires the manager to examine her own decisions, her own assumptions, her own emotional responses, and her own blind spots, with the honesty and vulnerability that case analysis never demands.
The distinction maps precisely onto what AI makes necessary in management development. If the analytical skills can be outsourced to the machine, the development that matters is the development of the capacities that cannot be outsourced: the capacity for self-awareness, for reading the emotional dynamics of a group, for knowing when the data is sufficient and when more data is a form of procrastination, for making decisions that cannot be derived from analysis because they involve values, priorities, and tradeoffs that are irreducibly human.
These capacities are developed through experience, but not through experience alone. Experience without reflection produces routine, not learning. The manager who has managed for twenty years without reflecting on her practice has one year of experience repeated twenty times. Reflection is the mechanism by which experience becomes craft — by which the specific situation is connected to the general pattern, and the general pattern is tested against the next specific situation.
AI can support this reflective process in ways that were not available when Mintzberg designed the IMPM. A manager who uses an AI tool to record and analyze her decisions over time — not to optimize them algorithmically, but to surface the patterns in her own judgment — has a mirror that no previous generation of managers possessed. The machine can show her: you tend to defer difficult personnel decisions. You tend to overweight financial data and underweight relational signals. You tend to intervene in operational details when the strategic picture is unclear. These patterns, surfaced by the machine and reflected upon by the human, could accelerate the development of the self-awareness on which managerial craft depends.
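The kind of pattern-surfacing this paragraph describes can be sketched in a few lines of code. Everything here is hypothetical — the journal schema, the category and signal names, and the `surface_patterns` helper are illustrations of what such a mirror might aggregate, not the API of any real tool:

```python
# A hypothetical decision-journal analyzer: aggregates a manager's logged
# decisions to surface the patterns the text describes (where she defers,
# which kinds of evidence she overweights). Schema and names are invented.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Decision:
    category: str                # e.g. "personnel", "budget", "product"
    days_deferred: int           # days between the issue surfacing and the call
    signals: list = field(default_factory=list)  # evidence types consulted

def surface_patterns(journal):
    """Return (average deferral per category, count of each signal consulted)."""
    deferral = defaultdict(list)
    signal_counts = defaultdict(int)
    for d in journal:
        deferral[d.category].append(d.days_deferred)
        for s in d.signals:
            signal_counts[s] += 1
    avg_deferral = {c: sum(days) / len(days) for c, days in deferral.items()}
    return avg_deferral, dict(signal_counts)

journal = [
    Decision("personnel", 21, ["financial"]),
    Decision("personnel", 14, ["financial", "relational"]),
    Decision("budget", 2, ["financial"]),
    Decision("product", 3, ["financial"]),
]
avg, signals = surface_patterns(journal)
print(avg)      # personnel decisions deferred far longer than budget or product
print(signals)  # financial signals consulted far more often than relational ones
```

The point of the sketch is the division of labor it implies: the machine does the tallying, and the tallies mean nothing until the manager reflects on them — which is exactly the pedagogical context the next paragraph argues most organizations fail to provide.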
But the reflective use of AI requires a pedagogical context that most organizations do not provide. The manager who receives an AI-generated analysis of her decision patterns needs a framework for interpreting it — a theory of managerial practice that gives meaning to the patterns, a community of fellow managers who are engaged in similar reflection, and a facilitator who can guide the reflective process without reducing it to another analytical exercise.
The failure of management development in the AI era will not be a failure of technology. It will be a failure of pedagogy — a failure to recognize that the capacities AI makes most valuable are the capacities that existing management education is least equipped to develop. The MBA taught analysis because analysis was scarce and valuable. AI has made analysis abundant and cheap. The educational institutions that continue to teach analysis as their primary offering are selling a commodity at a luxury price. The institutions that pivot toward the development of craft — toward reflection, synthesis, judgment, and interpersonal skill — will be teaching the things that remain scarce and are growing more valuable by the month.
Mintzberg's critique of the MBA was always, at bottom, a critique of the separation of thinking from doing. The MBA separated them by design: the student thought about management in the classroom, then went out and did management in the organization, and the two activities were treated as sequential rather than simultaneous. Mintzberg argued that this separation produced managers who could think brilliantly about management and manage poorly — because the thinking and the doing are not separate activities but aspects of a single, integrated practice.
AI threatens to deepen this separation. The manager who uses AI to generate analyses, plans, and recommendations can think about management at one remove from the doing — reviewing the machine's output rather than engaging directly with the organizational reality that the output represents. The separation between the analysis and the situation it analyzes, which was already problematic in the case method, becomes a chasm when the analysis is generated by a machine that has never entered the building, never met the employees, never felt the specific organizational tension that makes this situation different from every other situation the data might suggest it resembles.
The management education that the moment requires would do the opposite. It would insist on the integration of thinking and doing. It would use AI as a tool for reflection — a mirror for the manager's own practice — rather than as a substitute for engagement with organizational reality. It would develop the craft of managing by immersing the manager in the practice of managing and providing the frameworks, the community, and the mentoring that turn practice into learning.
Mintzberg saw this decades before AI made it urgent. The urgency now is that the analytical alternative has become so powerful, so seductive, and so cheap that the craft alternative must be explicitly chosen and deliberately designed. Left to its own devices, the educational system will follow the path of least resistance — more analysis, more AI-assisted case studies, more algorithmic optimization of the curriculum — and produce a generation of managers who are more analytically sophisticated and more practically hollow than any generation before them. The path of craft requires institutional will, pedagogical imagination, and the willingness to invest in a form of development whose returns are real but unmeasurable. Mintzberg built the model. The question is whether organizations and educational institutions will adopt it before the analytical default produces a managerial crisis that makes the adoption involuntary.
Henry Mintzberg published *Rebalancing Society* in 2015, and the book's argument — that society had tilted dangerously toward the private sector at the expense of the public sector and the plural sector (the sector of communities, cooperatives, associations, and social movements) — was received as a political argument. It was political. But it was also, at its foundation, a structural argument about the conditions under which complex systems remain healthy, and this structural argument applies with uncomfortable precision to the question of how organizations should respond to artificial intelligence.
Rebalancing is not balance. The word itself carries a dynamic connotation that "balance" does not. Balance suggests a static equilibrium — equal weights on equal sides, a condition that, once achieved, maintains itself. Rebalancing suggests a system that has tilted and requires active correction, a correction that is ongoing because the forces that produced the tilt continue to operate. The effort does not end. The tilt recurs. The rebalancing must be continuous.
Mintzberg argued that the tilt in society was toward one sector's dominance. The structural equivalent in organizational life, the tilt that AI accelerates, is toward one component of the managerial role at the expense of the others.
Recall the components. Mintzberg described management as a blend of art, craft, and science. Art is vision — the capacity to see what does not yet exist and to inspire others to see it. Craft is practice — the tacit knowledge built through experience, the judgment that comes from having done the work, the feel for organizational dynamics that no textbook can convey. Science is analysis — the systematic processing of evidence, the application of formal methods, the rigorous decomposition of problems into components that can be individually understood and collectively optimized.
The tilt, long before AI, was toward science. The MBA emphasized analysis. The consulting industry sold analytical frameworks. The management literature rewarded the person who could model the problem over the person who could navigate the situation. The rise of big data and business analytics in the 2010s intensified the tilt further: the organization that could not demonstrate data-driven decision-making was seen as backward, regardless of the quality of its judgment or the health of its culture.
AI pushes this tilt to its logical extreme. The machine is science — pure, fast, scalable science. It can analyze any dataset, model any scenario, optimize any objective function, and produce results that are more rigorous, more comprehensive, and more internally consistent than any human analyst could produce. The scientific component of management has been, for practical purposes, perfected.
The perfection of one component does not improve the whole. It distorts it. When science becomes effortless and its outputs become abundant, the organizational system naturally tilts further toward science — because science produces measurable results, because measurable results justify budgets, and because budgets are allocated by people who have been trained in the analytical paradigm and who therefore value what the analytical paradigm measures. The cycle is self-reinforcing. More science produces more data. More data produces more justification for more science. The art and the craft, which do not produce measurable outputs and cannot justify themselves in the language of data-driven decision-making, are gradually marginalized.
The rebalancing that the moment requires is a deliberate tilt away from the component that AI has perfected and toward the components that AI has left untouched. More craft. Less science. More practice. Less analysis. More presence. Less processing. More emergence. Less deliberation. More communityship. Less heroic leadership. More mutual adjustment. Less standardization.
This is not a Luddite argument. It does not require the rejection of AI or the abandonment of analytical methods. It requires the recognition that the machine's capabilities define, by complement, the human capabilities that matter most — and that those complementary capabilities are precisely the ones that the management profession has most consistently undervalued, undertaught, and underrewarded.
The rebalancing must operate at three levels: the level of the individual manager, the level of the organization, and the level of the institutions that shape how management is understood and practiced.
At the level of the individual manager, the rebalancing means a shift in self-understanding. The manager who sees herself primarily as an analyst — a processor of information, a solver of problems, a maker of data-driven decisions — is seeing herself as the machine's competitor, and she will lose that competition. The manager who sees herself as a practitioner — a builder of relationships, a navigator of complexity, a craftsperson whose medium is the organization and whose tools are judgment, presence, and the capacity to hold contradictions without resolving them prematurely — is seeing herself as the machine's complement, and her value will increase as the machine's capabilities grow.
This shift in self-understanding is difficult because it requires the abandonment of the metrics that have defined managerial success for decades. The analyst is measured by the quality of her analyses, and these are measurable. The craftsperson is measured by the quality of her organizational community, and this is not measurable — or rather, it is measurable only by the people inside the community, who can feel the difference between a healthy culture and a toxic one but cannot reduce that feeling to a number that would satisfy a board of directors.
The rebalancing at the individual level therefore requires a kind of professional courage — the willingness to prioritize unmeasurable outcomes over measurable ones, to invest time in relationships that do not appear on any dashboard, to be present in ways that cannot be documented in a performance review. This courage is rare, because the incentive systems that govern managerial careers reward the measurable and punish the invisible.
At the level of the organization, the rebalancing means structural design that protects the conditions under which craft can develop and community can form. The structural principles follow from the arguments developed throughout this book: protect the manager's judgment capacity from the flood of AI-generated outputs (Chapter 5). Design coordination mechanisms that rely on mutual adjustment rather than standardization (Chapter 7). Maintain teams even when the productivity arithmetic suggests they are unnecessary, because the community the team embodies is the substrate of organizational resilience (Chapter 8). Create developmental pathways that build craft through reflection on experience rather than through analytical training (Chapter 9).
Each of these structural choices has a cost. Protecting judgment capacity means limiting the rate at which AI outputs reach the manager, which means some outputs will not be reviewed and some opportunities will be missed. Relying on mutual adjustment means accepting the inefficiency of human coordination, with its misunderstandings, its politics, and its uneven pace. Maintaining teams means carrying headcount that the productivity model does not justify. Investing in craft-based development means accepting that the returns are real but unmeasurable.
These costs are real, and organizations that bear them will be at a short-term competitive disadvantage relative to organizations that do not. The organization that maximizes AI adoption, minimizes headcount, eliminates unproductive meetings, and measures everything that can be measured will look better on every quarterly metric. It will also be more fragile, more dependent on individual judgment that has not been tested against other perspectives, more vulnerable to the blind spots that community corrects and isolation amplifies.
The rebalancing is therefore a bet on the medium term against the short term — a structural choice to accept lower quarterly performance in exchange for greater organizational health, deeper talent development, and the resilience that comes from community rather than efficiency. It is the same bet that Mintzberg has advocated for throughout his career: the bet on the organization as a human community rather than as a machine for maximizing shareholder value.
At the level of institutions — the business schools, the consulting firms, the professional associations, the regulatory bodies that shape the practice of management — the rebalancing means a fundamental reorientation of what management education teaches, what management consulting advises, and what management research measures. The institutions that continue to emphasize analysis, optimization, and data-driven decision-making are training managers for a competition they have already lost to the machine. The institutions that pivot toward craft, judgment, community, and the irreducibly human dimensions of organizational life are training managers for the role that AI has made more important than it has ever been.
This institutional reorientation is the hardest of the three levels, because institutions change slowly and because the people who govern them were trained in the paradigm that needs changing. The business school professor who built her career on analytical methods is not inclined to teach craft. The consulting firm that sells analytical frameworks is not inclined to advise on community-building. The research journal that publishes quantitative studies is not inclined to reward qualitative observations of organizational life.
Mintzberg fought this institutional inertia for decades. The IMPM was his alternative to the MBA. His books were his alternative to the consulting frameworks. His blog posts were his alternative to the research journals. In each case, he was arguing for the same thing: a rebalancing of the managerial enterprise away from the analytical component that had become dominant and toward the craft and art that had been marginalized.
AI vindicates this argument by making it urgent. The analytical component has been perfected by a machine. The craft and art remain untouched, and their value — relative to the analytical component that no longer differentiates the human manager from the machine — has increased to the point where they are no longer supplementary to management but constitutive of it. The manager who can analyze but cannot navigate is a machine with legs. The manager who can navigate but cannot analyze has a machine in her pocket. The second manager is the one who will lead the organizations of the next generation, if the institutions that develop managers can be rebalanced in time to produce her.
Mintzberg would not frame this as an optimistic conclusion. He would frame it as a structural observation. The machine has perfected one component. The remaining components now define the role. Whether the profession reorganizes itself around this reality or continues to invest in the component the machine has already claimed is a choice. The choice is being made now, in every business school curriculum committee, in every consulting firm's service portfolio, in every organization's management development program. The rebalancing is not inevitable. It is necessary. And the distance between necessary and inevitable is the space where institutional will either rises to the moment or fails it.
The torrent does not stop. The fragments do not lengthen. The evaluation bottleneck does not resolve. But the manager who understands that her value lies in the craft, in the community, in the capacity to be present in the situations that matter — that manager has something the machine will never have: the accumulated wisdom of having been a human being in an organization full of human beings, navigating a world that no algorithm has learned to model, because the world keeps refusing to hold still long enough to be modeled.
That refusal is not a flaw. It is the condition of all living systems. And the manager's job, the job that no machine will take, is to work within that condition — with the patience of a craftsperson, the vision of an artist, and the humility of someone who knows that the plan is never the strategy, the map is never the territory, and the community is always more than the sum of its measured parts.
The meeting that rewired my understanding of management lasted nine minutes.
I did not time it. Mintzberg would have. He would have had a stopwatch and a taxonomy and a little notebook in which each activity was logged by category and duration. He was the person who discovered that managerial work happens in nine-minute fragments, and that discovery — empirical, stubborn, resistant to the narrative of strategic leadership that the business schools were selling — is the one I keep returning to.
Nine minutes. That is what the research showed. Half of all managerial activities last less than nine minutes. The meeting I am thinking about was one of those fragments. It happened during the Trivandrum sprint I describe in *The Orange Pill*, when twenty engineers were learning to build with Claude Code and I was learning that the twenty-fold productivity multiplier I was celebrating came with a cost I had not anticipated.
The cost was not burnout, though burnout was part of it. The cost was that every hour I freed through AI was instantly colonized by a new demand. The prototype was done — now evaluate it. The analysis was ready — now decide. The options were generated — now choose. My inbox did not lighten. It densified. The machine produced faster than I could judge, and the gap between production speed and judgment speed was the bottleneck, and the bottleneck was me.
I had read Mintzberg years before I encountered AI at this intensity. I thought I understood his point about fragmentation. I did not. You cannot understand fragmentation by reading about it. You understand it when the fragments come faster than you can process them and you realize that no tool will slow them down because every tool accelerates them.
That is Mintzberg's Law, and I lived inside it before I had words for it.
What changed my thinking was not his diagnosis of the problem but his insistence on the remedy. The remedy is not individual. It is structural. The manager cannot solve a structural overload through personal discipline, any more than a beaver can hold back a river by standing in it. The dam must be organizational. The protection must be designed into the system, not maintained through heroic effort by the person the system is overwhelming.
I think about this when I sit in rooms where AI governance is discussed. The frameworks are always individual: prompt engineering best practices, personal productivity systems, mindfulness techniques for the AI-augmented worker. These are useful. They are also insufficient, in the way that teaching a factory worker meditation is insufficient when the factory operates sixteen-hour shifts. The structure produces the overload. Only the structure can contain it.
Mintzberg spent his career watching managers and refusing to tell them what they wanted to hear. He did not say management is a science that can be optimized. He said it is a craft that must be practiced. He did not say leadership is the answer. He said communityship is — the slow, patient, unglamorous work of building relationships that hold an organization together when the strategy fails. He did not say strategy is a plan. He said it is a pattern, discovered through doing, visible only in retrospect.
Every one of these positions is harder to hold in the age of AI than it was when he first articulated them. The machine makes science easy, analysis abundant, planning effortless, and individual heroism spectacularly productive. The machine makes craft look slow, community look expensive, emergence look chaotic, and presence look like a luxury.
Mintzberg would say: the things that look like luxuries are the necessities. The things that look like necessities are the commodities. The machine handles the commodities. The manager handles the necessities. And the necessity that no machine handles — the human work of navigating a world that refuses to hold still long enough to be modeled — is the work that my engineers, my children, and I will be doing for the rest of our lives.
Nine minutes. That is how long you get before the next fragment arrives. The question is what you build in those nine minutes that the machine cannot.
AI was supposed to free managers to think. Mintzberg's fifty years of field research predict the opposite: every tool that increases capacity is met by a system that generates demands to fill it. The fragments get shorter. The torrent accelerates. The bottleneck is always the human being at the center.
This book collides Henry Mintzberg's empirical tradition — the direct observation of what managers actually do, not what textbooks claim they should do — with the most powerful production technology in history. What emerges is a structural diagnosis that no amount of prompt engineering or personal productivity hacks can address: when the machine perfects analysis, the manager's value migrates to craft, presence, and the community that no algorithm can build.
The organizations that thrive will not be those that maximize AI adoption. They will be those that protect the human judgment AI cannot replace — and design the structures to make that protection systemic, not heroic.
— Henry Mintzberg

A reading-companion catalog of the 27 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Henry Mintzberg — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →