Mary Parker Follett argued against an assumption embedded so deeply in organizational thinking that most theorists do not recognize it as an assumption: that somewhere in every organization there exists a final authority, a person or body whose judgment is ultimate. The assumption operates, she argued, through a confusion between the authority to decide and the capacity to decide well. The CEO has the authority; the capacity resides in the collective intelligence of the organization, in the process through which diverse forms of knowledge are pooled and integrated. The AI moment has made this illusion of final authority simultaneously more tempting and more dangerous.
The illusion is more tempting in the AI age because AI tools give the individual decision-maker the appearance of comprehensive knowledge. A CEO with AI access can process more information, model more scenarios, and generate more options than any CEO in history. The apparent scope of her knowledge expands dramatically, and the expansion reinforces the illusion that she possesses the capacity to decide well on her own. But the comprehensiveness is an illusion within an illusion — the AI's analysis is bounded by its training data, the framing of the question, and the biases embedded in both. It does not know what the customer service representative knows about complaints not yet escalated, or what the junior developer knows about architectural flaws senior expertise has rendered invisible.
The illusion is more dangerous because AI-assisted decisions can be implemented at machine speed. A CEO who makes a wrong decision in a traditional organization experiences consequences gradually, as the decision works through implementation friction — the pushback of informed subordinates, the corrective feedback of the market. There is time for error to be detected and corrected. A CEO who makes a wrong decision in an AI-augmented organization finds it fully propagated before corrective mechanisms can engage; the friction that once bought time for detection is gone.
Follett's alternative was cumulative responsibility: authority distributed across the organization in proportion to the knowledge each member possesses about the aspects of the situation for which they are responsible. This is not governance by committee, which Follett regarded as dysfunctional: a committee merely distributes the illusion without dissolving it. Follett's model is the expert team, in which each member's authority derives from situated knowledge rather than hierarchical position, and decisions emerge through the integrative process.
The vector pods described in The Orange Pill are organizational instantiations of Follett's model — small groups whose job is to decide what should be built, with authority deriving from situated knowledge and decisions emerging through integration. But the pods work only if the organizational culture supports genuine distribution of authority. If their recommendations are routinely overridden by a senior executive who believes position confers superior judgment, the pod is not a decision-making unit but a consulting group whose contributions are decorative.
Follett developed the critique from observation of early twentieth-century American corporations, where the illusion of final authority was explicit and theorized. The scientific management movement had made the concentration of decision-making in management a matter of principle, and Follett's counter-argument was that the concentration was producing measurably worse outcomes than distributed reading of the situation would produce.
Authority to decide differs from capacity to decide well. Position confers the first; distributed knowledge is required for the second.
AI amplifies the illusion. Comprehensive-seeming analysis reinforces the decision-maker's confidence without providing the situated knowledge she lacks.
Machine-speed consequences outrun correction. AI-augmented decisions propagate faster than the feedback mechanisms that would detect error.
Cumulative responsibility is the alternative. Authority distributed in proportion to situated knowledge, integrated through collective process.
Vector pods work only with cultural support. Formal structures for distributed authority fail when hierarchical override remains the practiced norm.