Mary Parker Follett's precise term for the form of power that arises from the interaction of individual contributions in a context of mutual respect, shared purpose, and genuine engagement. Co-active power is not the sum of individual powers. It is a distinct phenomenon that emerges from the interactions between team members. The team operating through co-active power is more than the sum of its parts — not as a cliché but as a verifiable organizational fact. The insights that emerge from genuine teamwork, the solutions that no individual member conceived but that the group generates through mutual adjustment, are products of co-active power. They cannot be reproduced by any arrangement that eliminates the interactions from which they arise. The AI discourse has largely failed to engage with this dimension.
There is a parallel reading grounded in what AI systems actually optimize for. The training architecture of large language models reflects—and therefore amplifies—the pattern of individual expertise articulated through text. The corpus is vast but the interaction model is fundamentally dyadic: prompt and completion, question and answer, problem and solution. When we layer AI tools onto team dynamics, we are not amplifying co-active power but introducing a mechanism optimized for individual contribution at scale.
The team's emergent intelligence operates through mechanisms AI cannot directly observe or reproduce: the pause before disagreement, the calibration of trust through repeated low-stakes failures, the shared referent that allows compressed communication. These are not text; they are not in the training data. What AI amplifies is the articulable—the insight that can be written, the solution that can be specified, the knowledge that translates to tokens. The Trivandrum team's cross-domain insights and mutual calibration emerge from interactions largely invisible to the AI systems each member deploys. The actual work pattern may be individuals consulting AI tools in parallel, then reconvening with articulated outputs—the amplification happening in the dyadic layer, the emergence still depending entirely on unmediated human interaction. The economic pressure is not wrong about what AI makes possible; it is precisely right that the bottleneck has shifted from individual capability to coordination cost, making the team overhead newly visible against dramatically higher individual throughput.
The prevailing AI framework treats the human-AI relationship as dyadic: a single human interacting with a single AI tool. But the more transformative phenomenon is not the human-AI dyad but the team-AI ecology — the complex system of interactions among multiple human beings, each amplified by AI tools, operating within an organizational context that either supports or undermines co-active power. The economic argument for replacing teams with individual human-AI dyads measures only the dimension of output while ignoring the dimension of intelligence.
The output of five individuals working independently with AI is the sum of five outputs. The intelligence of a team of five working together with AI is qualitatively different — an emergent property of interactions that produces insights, catches errors, generates solutions, and exercises collective judgment no aggregation of individual judgments can replicate. The boardroom arithmetic that would convert twenty-fold productivity into fifteen redundancies is doing the math right on inputs and wrong on the function — calculating capability as if it were additive when it is in fact emergent.
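The additive-versus-emergent distinction can be sketched as a toy model. Everything here is an illustrative assumption, not a formula from the text: the function names, the `interaction_gain` parameter, and the choice to model emergence as a bonus per pairwise interaction are all hypothetical devices for making the contrast concrete.

```python
# Toy model (illustrative assumption): contrast an additive account of
# team capability with a superadditive one that credits each pairwise
# interaction among members.

def additive_output(individual_outputs):
    """Boardroom arithmetic: team capability as a plain sum."""
    return sum(individual_outputs)

def emergent_output(individual_outputs, interaction_gain):
    """Superadditive sketch: the individual sum plus a gain for each
    pair of members who can mutually adjust."""
    n = len(individual_outputs)
    pairs = n * (n - 1) // 2
    return sum(individual_outputs) + interaction_gain * pairs

amplified = [20.0] * 5  # five members, each 20x amplified by AI

# Replacing the team with one amplified individual keeps the 20x
# individual gain but zeroes out every interaction term.
solo = emergent_output([20.0], interaction_gain=0.5)     # no pairs
kept = emergent_output(amplified, interaction_gain=0.5)  # 10 pairs
print(solo, kept, additive_output(amplified))
```

On any superadditive function of this shape, cutting the team optimizes the sum while silently deleting the interaction terms, which is the sense in which the boardroom arithmetic is right on inputs and wrong on the function.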
The Trivandrum team described in The Orange Pill is the paradigmatic instance. Twenty engineers each amplified by AI tools did not produce twenty times an individual engineer's output. They produced a form of collective intelligence — cross-domain insights, mutual calibration of quality standards, accumulated trust that allowed risk-taking and honest challenge — that no configuration of isolated individuals could replicate. The decision to keep and grow the team rather than cut it reflected a recognition of co-active power that the headcount-reduction frame could not accommodate.
Co-active power atrophies when the conditions that generate it are removed. Trust requires time. Shared context requires sustained interaction. Mutual adjustment requires the space for disagreement. The organization that replaces teams with individuals working in parallel has not merely reduced labor cost — it has destroyed the mechanism through which emergent intelligence was being produced, and the destruction is invisible on the cost side of the balance sheet until its consequences — missed insights, uncaught errors — surface downstream.
The term emerged from Follett's analysis of community organizing in Boston, where she observed that neighborhood groups generated solutions to problems that no individual member could have conceived. The concept migrated into her industrial work when she began asking why some factory teams produced quality dramatically exceeding the apparent capability of their members.
- Not additive but emergent. Co-active power is qualitatively distinct from the aggregation of individual powers.
- Product of interactions. The insights and solutions emerge from mutual adjustment among members, not from any individual.
- AI dyad thinking misses it. The human-AI dyad is the building block; the team-AI ecology is where co-active power operates.
- Requires conditions to persist. Trust, shared context, and mutual adjustment are preconditions, and they atrophy under replacement logic.
- Invisible on balance sheets. The destruction of co-active power appears only as missed insights and uncaught errors, not as line items.
The weighting depends on which organizational layer you're examining. At the level of articulated knowledge work—documentation, code generation, analysis that translates to text—the dyadic amplification is real and substantial (80% of the productivity story). The individual engineer equipped with AI genuinely operates at higher capability for tasks that decompose to individual execution. The contrarian view correctly identifies that AI's substrate is optimized for this layer.
But co-active power operates primarily in a different domain: the mutual adjustment that generates insights no individual conceived, the error-catching that emerges from diverse perspectives on shared work, the trust-based challenge that prevents groupthink. This layer is where the original framing holds (90% right)—AI does not directly amplify these interactions because they are not primarily mediated through the text artifacts AI processes. The Trivandrum team's collective intelligence lives in the space between AI-amplified individual contributions.
The synthesis is recognizing that effective team-AI ecology requires deliberate architecture. The team that defaulted to parallel individual-AI work would lose emergent capability while gaining individual throughput—a net loss for complex problem domains requiring genuine synthesis. The team that maintains the interaction structures generating co-active power while strategically deploying AI for individual capability creates genuine multiplicative effect. The economic pressure is not wrong but premature—optimizing for the visible (individual throughput) before understanding what the invisible mechanisms (emergence) actually require to persist and scale.