In traditional social networks, gatekeeping power is distributed and organic. The bridging individual controls information flow across a single structural hole but cannot reshape the entire network topology. The AI transition concentrates gatekeeping power to a degree without precedent. The companies that build large language models are not merely participants; they are the architects of the infrastructure through which an expanding share of cross-domain connection occurs. They determine training data, response behavior, and access terms. This is infrastructure control rather than bridging, and the distinction matters enormously for understanding who benefits from the AI transition and who bears its costs.
There is a parallel reading that begins not with the companies controlling AI infrastructure, but with the hidden human labor sustaining it. Behind every frontier model lies an army of data labelers in Kenya, content moderators in the Philippines, and reinforcement learning trainers in Venezuela: workers earning a few dollars an hour to teach these systems what counts as harmful, truthful, or appropriate. The gatekeeping power Edo identifies is real, but it rests on a substrate of globalized piecework that makes the concentration possible. Without this labor arbitrage, the economics of training would collapse.
The deeper irony is that these workers, who literally shape what the models can and cannot say, have less access to the resulting capabilities than almost anyone. They clean the training data but cannot afford inference costs. They teach the models to recognize exploitation while experiencing it. The structural hole here is not between knowledge domains but between those who build the infrastructure and those whose labor makes it possible. The companies Edo names as gatekeepers perform a prior act of gatekeeping of their own, on which the whole operation depends: determining which global populations supply labor cheap enough to make the entire enterprise viable. The concentration of power in AI companies is inseparable from the concentration of precarity in the Global South. The real gatekeeping is not about who accesses models but about who gets classified as 'human in the loop' versus 'human using the tool.' That classification, made through wage differentials and geographic arbitrage, determines whether AI represents opportunity or extraction.
The training corpus is not the entire landscape of human thought. It is a specific, historically contingent sample — over-representing English-language academic publications, digitized Western documents, and cultural traditions whose institutions produced the text streams now feeding AI systems. Oral traditions, indigenous knowledge, and non-digitized cultural production are largely absent. The corpus reflects the biases of institutions whose output was indexable.
Granovetter's embeddedness framework insists that economic action — including knowledge production — is never disembedded from social relations. The AI tool appears to offer disembedded knowledge, but the decisions about training data, model weights, access pricing, and output shaping are social decisions made by specific institutions reflecting specific priorities. The apparent neutrality is itself a social achievement.
The economics of access create structural stratification. Frontier models are available at price points that exclude large portions of the global population. The developer at a well-funded Silicon Valley startup accesses capabilities the Lagos developer cannot — not because of differences in talent but because inference costs exceed what peripheral economic contexts can sustain. The democratization of bridging capital is real but stratified.
A 2026 PNAS paper for which Granovetter served as editorial board member, 'Perceiving AI as labor-replacing reduces democratic legitimacy and political engagement,' documented the political consequence. Across thirty-eight European countries and more than thirty-seven thousand respondents, perceiving AI as replacing rather than augmenting labor was associated with reduced satisfaction with democracy and reduced engagement with technology policy. Those most affected withdrew from the very governance processes shaping the outcome.
The structural analysis of gatekeeping power derives from Granovetter's embeddedness framework, extended by subsequent work on platform economics, corporate power, and algorithmic governance. Kate Crawford's Atlas of AI and Shoshana Zuboff's The Age of Surveillance Capitalism provide empirical extensions.
The concentration of AI infrastructure in a small number of companies (Anthropic, OpenAI, Google, and their peers) creates gatekeeping conditions unlike anything in the history of social networks. Earlier communication infrastructures were eventually regulated as common carriers: fully in the case of the telegraph and telephone, intermittently and contentiously in the case of the internet. AI models are subject to no comparable regime.
Infrastructure control, not bridging. AI companies do not connect clusters as a structural-hole bridge would — they architect the conditions under which all cross-cluster connection occurs.
Training data is socially embedded. The apparent universality of model outputs conceals specific decisions about whose documents, languages, and traditions are included.
Access stratification is structural. Pricing determines who can connect to the expanded knowledge landscape, producing a digital divide within the AI era that echoes prior technological transitions.
Invisible absences compound. What the model cannot say — the connections it cannot bridge because the relevant knowledge was excluded from training — is invisible to users and therefore structurally undetectable.
Political marginalization follows. The populations most displaced by AI are least positioned to influence its governance — a feedback loop the PNAS study empirically documented.
Whether governance frameworks adequate to AI infrastructure concentration exist is contested. Common-carrier regulation, public-option development, data trusts, and open-source alternatives have all been proposed; whether any will scale sufficiently to counterbalance the concentration of frontier capability remains an open question.
The question of where gatekeeping power actually resides depends entirely on which layer of the AI stack we examine. At the model architecture level, Edo's analysis is essentially correct (95%) — a handful of companies do control the fundamental infrastructure determining what connections are possible. But at the data labeling and content moderation layer, the contrarian view dominates (80%) — the entire enterprise depends on globally distributed human labor that remains invisible in most discussions of AI power. The workers teaching models to distinguish toxic from benign content are themselves the most fundamental gatekeepers, even as they lack access to what they help create.
When we ask about political consequences, both views prove partially right (60/40 favoring Edo). The PNAS study he cites captures real democratic disengagement, but the contrarian perspective reveals a prior disengagement — the populations providing AI's human substrate were never included in democratic deliberation about these systems. They experience AI not as labor-replacing but as labor-intensifying, creating new forms of piecework rather than eliminating work entirely. The political marginalization runs deeper than even Edo suggests.
The synthesis this topic needs recognizes AI gatekeeping as a stack, not a single layer. At the top, companies like Anthropic and OpenAI exercise the infrastructural control Edo describes. At the bottom, dispersed workers in the Global South exercise a different kind of gatekeeping, determining through their labor what the models learn to recognize and reject. Between these layers lies the stratified access Edo maps. Understanding AI's gatekeeping power requires holding all three layers simultaneously: the concentration at the top, the dispersion at the bottom, and the graduated exclusion between them. The infrastructure is both more concentrated and more dependent than either view alone captures.