Preferential attachment is the mechanism that produces the hub-and-spoke structure of many real-world networks. When new nodes join a growing network, they preferentially connect to nodes that are already well-connected, producing a power-law degree distribution in which a few hubs accumulate disproportionate centrality while the vast majority of nodes remain sparsely connected. The mechanism was formalized by Albert-László Barabási and Réka Albert in 1999 but had been recognized in various forms long before: Robert K. Merton's Matthew effect in science, Herbert Simon's work on the distribution of word frequencies, and Vilfredo Pareto's observations on wealth distribution all pointed at the same dynamic. In the AI economy, preferential attachment explains why capability expansion produces hub concentration rather than uniform distribution: each expansion of capability compounds the advantages of the best-positioned users, so individual hubs may be displaced but the concentrated structure persists.
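The growth rule is simple enough to sketch in a few lines. The following is an illustrative simulation of the Barabási-Albert model (node counts and the attachment parameter `m` are arbitrary demonstration values, not taken from the text): each arriving node attaches `m` edges to existing nodes with probability proportional to their current degree, and hubs emerge from the growth arithmetic alone.

```python
import random
from collections import Counter

def barabasi_albert(n_nodes, m=2, seed=0):
    """Grow a network by preferential attachment: each new node attaches
    m edges to existing nodes chosen with probability proportional to
    their current degree."""
    rng = random.Random(seed)
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Repeated-endpoint list: a node of degree k appears k times, so
    # uniform sampling from this list is degree-proportional sampling.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

edges = barabasi_albert(2000)
degree = Counter(v for e in edges for v in e)
top = sorted(degree.values(), reverse=True)
print(f"largest hub degree: {top[0]}, median degree: {top[len(top) // 2]}")
```

Running this shows the signature of the mechanism: the oldest, best-connected nodes pull far ahead of the median node, even though every node joined under identical rules.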
The mechanism of preferential attachment is not a value-neutral observation; it has structural consequences for how networks distribute their benefits. Power-law distributions are extremely unequal: for the exponents observed in real networks, the top one percent of nodes can capture a share of value that dwarfs the bottom ninety percent combined. This is not inequality in the ordinary statistical sense of wide deviation from a mean; it is structural inequality built into the mathematics of network formation. Attempts to democratize networks without addressing the underlying mechanism reproduce the same inequality in a different configuration.
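The top-versus-bottom comparison can be checked with a back-of-envelope computation. The sketch below uses a rank-size power law in which the r-th largest node has weight r^(-alpha); the exponent alpha = 1.0 (the classic Zipf case) and the network size are illustrative assumptions, not empirical estimates for any real network.

```python
# Rank-size view of a power law: the r-th largest node has weight r^(-alpha).
# alpha and n are illustrative assumptions, not measurements.
n = 100_000
alpha = 1.0
weights = [r ** -alpha for r in range(1, n + 1)]
total = sum(weights)
top1 = sum(weights[: n // 100])      # weight held by the top 1% of nodes
bottom90 = sum(weights[n // 10:])    # weight held by the bottom 90%
print(f"top 1% share:     {top1 / total:.1%}")
print(f"bottom 90% share: {bottom90 / total:.1%}")
```

For exponents near 1, the top one percent holds well over half the total weight while the bottom ninety percent holds under a fifth, which is the sense in which the inequality is built into the distribution's shape rather than into any node's behavior.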
For the AI transition, the preferential-attachment analysis suggests that tool access alone is insufficient to produce meaningful democratization. Each new user of AI tools enters a network whose existing distribution of capability (shaped by prior connections, accumulated expertise, and institutional position) determines whether her tool access translates into proportional productive output. Without intervention in the attachment mechanism itself, the pattern will reproduce itself: new hubs will form while old ones consolidate, and the long tail of users will remain structurally disadvantaged.
The policy implications run through every Castells-derived analysis of AI democratization. To counteract preferential attachment, governance must actively redistribute: through public investment in peripheral infrastructure, through regulatory frameworks that require hub-hosts to subsidize non-hub participation, and through institutional arrangements (common carriage, antitrust, universal service) that the network society has been slow to develop. Absent such intervention, the democratization narrative functions as ideology rather than description, obscuring the distributional pattern the technology actually produces.
The mechanism was formalized in Barabási and Albert's 1999 paper in Science, but recognized in various forms across economics (Pareto), sociology (Merton), linguistics (Zipf), and physics long before.
The mechanism is mathematical, not cultural. Preferential attachment produces hub structure through the arithmetic of growth rather than through deliberate concentration.
Power laws are structurally unequal. A small fraction of nodes captures a disproportionate share of network value as a direct consequence of the distribution's heavy tail.
Tool access does not overcome attachment. Without intervention in the mechanism itself, capability expansion reproduces inequality in new form.
Countering the mechanism requires redistribution. Active governance — public investment, regulatory requirements, universal service — is necessary to disrupt the default pattern.